2.29.0 |
Quant |
Dynamic tensor ops have been enabled in OpValidator
TopK (Activation type: All)
ElementWiseNeuron (Activation type: INT8, INT16)
BatchNorm, LayerNorm (Activation type: INT8)
Convert (Activation type: INT16)
RmsNorm (Activation type: INT16)
ElementWiseBinary Equal (Activation type: FP16)
ReduceSum (Activation type: FP16, FP32)
|
2.28.0 |
Quant |
Axes (Activation type: All)
Prelu (Activation type: All)
NonZero (Activation type: INT16, FP16)
Tile (Activation type: FP16, FP32)
RmsNorm (Activation type: INT16, INT8)
Conv2d (Activation type: INT8)
ElementWiseNeuron Relu (Activation type: INT8, INT16)
ExtractGlimpse (Activation type: INT16)
Convert (Activation type: INT16)
ElementWiseBinary (Activation type: FP16)
ElementWiseUnary Sqrt (Activation type: FP16)
ReduceMax, ReduceMin, ReduceMean (Activation type: FP16)
ElementWiseNeuron SoftPlus (Activation type: FP16)
ReduceMin (Activation type: INT32)
|
2.27.0 |
Quant |
ElementWiseNeuron Sigmoid (Activation type: All)
Pack (Activation type: All)
CumulativeSum (Activation type: INT8)
HardSigmoid (Activation type: INT8, INT16)
Sigmoid, Softmax (Activation type: INT16)
ReduceMean (Activation type: INT16)
ReduceSum (Activation type: FP16, FP32)
|
2.26.0 |
Quant |
|
2.25.0 |
Quant |
Gather
GatherElements
ElementWiseMultiply, ElementWiseAdd, ElementWiseSub, ElementWisePow
Added rank-5 (5D) support for in[0], in[1], and out[0]
Added QNN_DATATYPE_SFIXED_POINT_16 datatype support for ElementWiseAdd,
ElementWiseMultiply
ElementWiseRsqrt
TopK
Relu
Conv2d, Matmul, FullyConnected
Prelu
ResizeBilinear
PoolMax2d
ExtractPatches
|
2.24.0 |
Quant |
|
2.23.0 |
Quant |
Cast
Transpose
Relu
Gelu
ElementWiseUnary
|
2.22.0 |
Quant |
ElementWiseUnary
Conv3d, TransposeConv3d
Reshape
ElementWiseAdd, ElementWiseAnd, ElementWiseDivide, ElementWiseNotEqual,
ElementWiseMaximum, ElementWiseMinimum, ElementWiseMultiply, ElementWisePower,
ElementWiseSquaredDifference, ElementWiseSubtract, ElementWiseEqual, ElementWiseGreater,
ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual, ElementWiseBinary,
ElementWiseFloorDiv
ElementWiseAbs
TopK
Transpose
Added constraint for in[0] of Transpose 4D: QNN_DATATYPE_UFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16,
QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 are supported
Added constraint for in[0] of Transpose 5D: QNN_DATATYPE_UFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16,
QNN_DATATYPE_FLOAT_32 are supported
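The rank-dependent Transpose constraints above can be sketched as a simple lookup. This is an illustrative model only, not the actual OpValidator source; the table and function names are invented, while the datatype names are the QNN_DATATYPE_* identifiers quoted above.

```python
# Hypothetical sketch: Transpose in[0] datatype constraints, keyed by rank.
# The supported sets below are transcribed from the 2.22.0 notes; the
# structure and names are illustrative, not the real OpValidator code.
TRANSPOSE_IN0_SUPPORTED = {
    4: {
        "QNN_DATATYPE_UFIXED_POINT_8", "QNN_DATATYPE_SFIXED_POINT_8",
        "QNN_DATATYPE_UFIXED_POINT_16", "QNN_DATATYPE_SFIXED_POINT_16",
        "QNN_DATATYPE_INT_32", "QNN_DATATYPE_UINT_32",
    },
    5: {
        "QNN_DATATYPE_UFIXED_POINT_8", "QNN_DATATYPE_SFIXED_POINT_8",
        "QNN_DATATYPE_UFIXED_POINT_16", "QNN_DATATYPE_FLOAT_16",
        "QNN_DATATYPE_FLOAT_32",
    },
}

def transpose_in0_ok(rank: int, dtype: str) -> bool:
    """Return True if dtype is listed as supported for Transpose in[0] at this rank."""
    return dtype in TRANSPOSE_IN0_SUPPORTED.get(rank, set())
```

Note that the two sets differ: INT_32/UINT_32 appear only in the 4D set, while FLOAT_16/FLOAT_32 appear only in the 5D set.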
|
2.21.0 |
Quant |
Dequantize
ElementWiseBinary
|
2.20.0 |
Quant |
ElementWiseXor, CreateSparse, GetSparseIndices, GetSparseValues, SparseToDense,
ElementWiseNeuron
ElementWiseSin, ElementWiseCos
Conv3d
Resize
Convert
|
2.19.0 |
Quant |
Gather
GatherElements
BatchNorm, LayerNorm
Convert
|
2.18.0 |
Quant |
Conv2d, DepthWiseConv2d, TransposeConv2d, FullyConnected, MatMul
GridSample
Lstm
Added rest input for in[24]
Added input rank constraint of 2 for in[0]
Added a note that the time_major parameter is not applicable to 2D inputs
|
2.17.0 |
Quant |
Quantize, Dequantize
EltwiseMul
|
2.16.0 |
Quant |
|
2.15.0 |
Quant |
Split, ReduceMax, ReduceMean, ReduceMin, Convert, ElementWiseAbs
ElementWiseBinary support added
Convert
Tile
BatchNorm
Conv2d, DepthWiseConv2d, TransposeConv2d
ScatterNd
|
2.14.0 |
Quant |
|
2.13.0 |
Quant |
Convert
GroupNorm support added
ElementWiseRsqrt
|
2.12.0 |
Quant |
ElementWiseUnary support added
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
Added QNN_DATATYPE_SFIXED_POINT_16 datatype support for in[0]
Added constraint for in[1]: a QNN_DATATYPE_SFIXED_POINT_16 weight must be paired with a
QNN_DATATYPE_UFIXED_POINT_16 activation and must be symmetrically quantized
RoiAlign
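The 2.12.0 weight/activation pairing constraint above can be expressed as a small predicate. This is a hedged sketch with invented names, not real OpValidator code; it assumes symmetric quantization is equivalent to a zero point of 0, which is how symmetric fixed-point encodings are commonly defined.

```python
# Hypothetical sketch of the conv-family (Conv2d/DepthWiseConv2d/FullyConnected/
# MatMul/TransposeConv2d) constraint: a SFIXED_POINT_16 weight (in[1]) is only
# valid with a UFIXED_POINT_16 activation (in[0]) and a symmetric weight
# encoding (assumed here to mean zero point == 0).
def sfixed16_weight_ok(act_dtype: str, weight_dtype: str,
                       weight_zero_point: int) -> bool:
    if weight_dtype != "QNN_DATATYPE_SFIXED_POINT_16":
        return True  # constraint only governs 16-bit signed fixed-point weights
    return (act_dtype == "QNN_DATATYPE_UFIXED_POINT_16"
            and weight_zero_point == 0)
```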
|
2.11.0 |
Quant |
ElementWiseAsin, ExtractPatches, RoiAlign
Resize
NonMaxSuppression
DetectionOutput, MultiClassNms
|
2.10.0 |
Quant |
ElementWiseSin, ElementWiseCos, NonMaxSuppression
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
TopK
Transpose
FullyConnected, MatMul
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
Conv2d, DepthWiseConv2d, TransposeConv2d
|
2.9.0 |
Quant |
L2Norm, LogSoftmax
ElementWiseEqual, ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess
ElementWiseLessEqual, ElementWiseNotEqual
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
MatMul
ElementWiseAdd, ElementWiseDivide, ElementWiseMaximum, ElementWiseMinimum,
ElementWiseMultiply, ElementWiseSquaredDifference, ElementWiseSubtract, ElementWiseEqual,
ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual,
ElementWiseNotEqual, ElementWiseSelect, Gather, GatherNd, MatMul, ScatterNd
ElementWiseSelect, ScatterNd
ElementWisePower, ExpandDims
OneHot
ReluMinMax
|
2.8.0 |
Quant |
Cast, ElementWiseLog
ElementWiseGreaterEqual, ElementWiseLessEqual, ElementWiseNotEqual
ReduceSum, TopK
ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual,
ElementWiseNotEqual, ElementWisePower, ElementWiseSelect, GatherNd, Softmax
ElementWiseSelect
Added QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
datatype support for in[1]
|
2.7.0 |
Quant |
Argmax
Transpose
DepthWiseConv2d
ElementWiseAdd, ElementWiseDivide, ElementWiseExp, ElementWiseMaximum,
ElementWiseMinimum, ElementWiseMultiply, ElementWiseSquaredDifference,
ElementWiseSubtract, MatMul, Pad, Relu, Sigmoid, Transpose
ElementWiseExp, ElementWiseFloor, ReduceMax, ReduceMin, ScatterNd, LayerNorm
ElementWiseEqual, ElementWiseLess, ElementWiseSelect
ElementWiseGreater
Reshape
ScatterNd
Softmax
InstanceNorm
Gather, GatherNd
GatherNd
FullyConnected, MatMul
|
2.6.0 |
Quant |
|
2.4.0 |
Quant |
Resize support added
Gather
|
2.3.0 |
Quant |
|
2.2.0 |
Quant |
|
2.1.0 |
Quant |
Concat, ElementWiseAdd, ElementWiseDivide, ElementWiseMaximum, ElementWiseMinimum,
ElementWiseMultiply, ElementWiseSubtract, ElementWiseSquaredDifference, MatMul,
Sigmoid, StridedSlice
Gather
|