2.39.0 |
Quant |
Logit enabled on all activation types.
ScatterElements (Activation type: INT8, INT16)
DepthWiseConv2d (Activation type: INT8, INT16)
ElementWiseSqrt enabled on QNN_DATATYPE_SFIXED_POINT_16
Convert (Activation type: INT16)
Dequantize enabled on QNN_DATATYPE_FLOAT_16 for out[0]
Quantize (Activation type: INT16, INT8)
Enabled on QNN_DATATYPE_FLOAT_16 to QNN_DATATYPE_UFIXED_POINT_16
Enabled on QNN_DATATYPE_FLOAT_16 to QNN_DATATYPE_SFIXED_POINT_16
Enabled on QNN_DATATYPE_FLOAT_16 to QNN_DATATYPE_UFIXED_POINT_8
Enabled on QNN_DATATYPE_FLOAT_16 to QNN_DATATYPE_SFIXED_POINT_8
ElementWiseSelect, ElementWiseBinary, ReduceMin, MatMul (Activation type: INT16)
Stft enabled on QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32
|
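The Quantize entries above (QNN_DATATYPE_FLOAT_16 to the 8/16-bit fixed-point types) correspond to the standard affine quantization scheme. A minimal sketch of that mapping follows; the helper names are illustrative only and are not part of the QNN API, and the scale/offset values are arbitrary examples.

```python
# Sketch of FP16 -> fixed-point conversion as affine quantization:
#   q = clamp(round(x / scale) + offset, qmin, qmax)
# UFIXED/SFIXED_POINT_8/16 differ only in bit width and signedness.
import numpy as np

def quantize(x, scale, offset, bits, signed):
    # Range of the target fixed-point type.
    qmin = -(1 << (bits - 1)) if signed else 0
    qmax = (1 << (bits - 1)) - 1 if signed else (1 << bits) - 1
    q = np.round(x / scale) + offset
    return np.clip(q, qmin, qmax).astype(np.int32)

def dequantize(q, scale, offset):
    # Inverse mapping back to floating point.
    return (q.astype(np.float32) - offset) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float16)
# Example: FLOAT_16 -> SFIXED_POINT_8 with scale 1/64, zero offset.
q = quantize(x.astype(np.float32), 1 / 64, 0, bits=8, signed=True)
```

The same `quantize` call with `bits=16` and `signed=False` models the UFIXED_POINT_16 case.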
2.38.0 |
Quant |
Adjusted constraint messages from "support/not support" to "accept/reject".
UnPack enabled on QNN_DATATYPE_SFIXED_POINT_16
ElementWiseBinary (Activation type: INT8)
GatherElements, Cast, Pad added rank 5d support on all activation types
Cast enabled on QNN_DATATYPE_SFIXED_POINT_16 to QNN_DATATYPE_UFIXED_POINT_16
Quantize enabled on QNN_DATATYPE_FLOAT_32
ElementWiseAbs enabled on QNN_DATATYPE_INT_32
ElementWiseUnary enabled on QNN_DATATYPE_INT_32
ElementWiseMaximum enabled on QNN_DATATYPE_INT_32
ElementWiseMinimum enabled on QNN_DATATYPE_INT_32
RandomUniformLike enabled on QNN_DATATYPE_UINT_32 for in[0],
QNN_DATATYPE_FLOAT_32 for in[1]
|
2.37.0 |
Quant |
Gather (Activation type: INT8)
Tile (Activation type: INT8, INT16)
Cast (Activation type: INT8)
ChannelShuffle (Activation type: INT8, INT16)
StridedSlice enabled on QNN_DATATYPE_UINT_8
TopK (Activation type: INT8)
Nv12ToRgb enabled on QNN_DATATYPE_UFIXED_POINT_8
Split enabled on QNN_DATATYPE_SFIXED_POINT_16
IsNan enabled on FP16
|
2.36.0 |
Quant |
Pad (Activation type: INT16)
Gather (Activation type: FP16)
Gather (Activation type: INT8)
RmsNorm enabled on QNN_DATATYPE_SFIXED_POINT_16
Gru enabled on QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32
Buffer enabled on FP16
|
2.35.0 |
Quant |
Enable enabled on QNN_DATATYPE_UINT_8
Tile (Activation type: INT8, INT16)
ElementWiseAsin enabled on QNN_DATATYPE_SFIXED_POINT_16
Conv2d (Activation type: INT16)
LayerNorm (Activation type: FP16)
IsInf enabled on QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32
|
2.34.0 |
Quant |
BatchToSpace, SpaceToBatch (Activation type: INT8, INT16)
LayerNorm (Activation type: INT8, INT16)
Tile (Activation type: INT8, INT16)
Conv2d (Activation type: INT16)
ElementWiseNeuron (Activation type: FP16)
Sigmoid (Activation type: FP16)
|
2.33.0 |
Quant |
ElementWiseRsqrt, ElementWiseUnary, ElementWiseNeuron, Gelu (Activation type: INT16)
ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual,
ElementWiseNotEqual, ElementWiseBinary (Activation type: FP16)
Concat enabled on QNN_DATATYPE_BOOL_8
|
2.32.0 |
Quant |
Reshape (Activation type: INT8, INT16)
ElementWiseSubtract enabled on QNN_DATATYPE_SFIXED_POINT_16
ReduceSum enabled on QNN_DATATYPE_SFIXED_POINT_16
ScatterElements (Activation type: INT8)
Gather (Activation type: INT8)
Softmax enabled on QNN_DATATYPE_SFIXED_POINT_16
MatMul enabled on QNN_DATATYPE_SFIXED_POINT_16
StridedSlice enabled on QNN_DATATYPE_SFIXED_POINT_16
FullyConnected enabled on QNN_DATATYPE_SFIXED_POINT_16
Conv2d (Activation type: INT16)
Split enabled on QNN_DATATYPE_BOOL_8
|
2.31.0 |
Quant |
Tanh (Activation type: INT16)
Conv2d (Activation type: INT16)
FullyConnected (Activation type: INT16)
MatMul (Activation type: INT16)
|
2.30.0 |
Quant |
Conv2d (Activation type: INT8)
Conv2d (Activation type: INT16)
RmsNorm (Activation type: INT16)
DepthWiseConv2d (Activation type: INT16)
|
2.29.0 |
Quant |
Dynamic tensor ops have been enabled in OpValidator
TopK (Activation type: All)
ElementWiseNeuron (Activation type: INT8, INT16)
BatchNorm, LayerNorm (Activation type: INT8)
Convert (Activation type: INT16)
RmsNorm (Activation type: INT16)
ElementWiseBinary Equal (Activation type: FP16)
ReduceSum (Activation type: FP16, FP32)
|
2.28.0 |
Quant |
Axes (Activation type: All)
Prelu (Activation type: All)
NonZero (Activation type: INT16, FP16)
Tile (Activation type: FP16, FP32)
RmsNorm (Activation type: INT16, INT8)
Conv2d (Activation type: INT8)
ElementWiseNeuron Relu (Activation type: INT8, INT16)
ExtractGlimpse (Activation type: INT16)
Convert (Activation type: INT16)
ElementWiseBinary (Activation type: FP16)
ElementWiseUnary Sqrt (Activation type: FP16)
ReduceMax, ReduceMin, ReduceMean (Activation type: FP16)
ElementWiseNeuron SoftPlus (Activation type: FP16)
ReduceMin (Activation type: INT32)
|
2.27.0 |
Quant |
ElementWiseNeuron, Sigmoid (Activation type: All)
Pack (Activation type: All)
CumulativeSum (Activation type: INT8)
HardSigmoid (Activation type: INT8, INT16)
Sigmoid, Softmax (Activation type: INT16)
ReduceMean (Activation type: INT16)
ReduceSum (Activation type: FP16, FP32)
|
2.26.0 |
Quant |
|
2.25.0 |
Quant |
Gather
GatherElements
ElementWiseMultiply, ElementWiseAdd, ElementWiseSubtract, ElementWisePow
Added rank 5d support for in[0], in[1] and out[0]
Added QNN_DATATYPE_SFIXED_POINT_16 datatype support for ElementWiseAdd,
ElementWiseMultiply
ElementWiseRsqrt
TopK
Relu
Conv2d, MatMul, FullyConnected
Prelu
ResizeBilinear
PoolMax2d
ExtractPatches
|
2.24.0 |
Quant |
|
2.23.0 |
Quant |
Cast
Transpose
Relu
Gelu
ElementWiseUnary
|
2.22.0 |
Quant |
ElementWiseUnary
Conv3d, TransposeConv3d
Reshape
ElementWiseAdd, ElementWiseAnd, ElementWiseDivide, ElementWiseNotEqual,
ElementWiseMaximum, ElementWiseMinimum, ElementWiseMultiply, ElementWisePower,
ElementWiseSquaredDifference, ElementWiseSubtract, ElementWiseEqual, ElementWiseGreater,
ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual, ElementWiseBinary,
ElementWiseFloorDiv
ElementWiseAbs
TopK
Transpose
Added constraint for in[0] of Transpose 4D: QNN_DATATYPE_UFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16,
QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 are supported
Added constraint for in[0] of Transpose 5D: QNN_DATATYPE_UFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16,
QNN_DATATYPE_FLOAT_32 are supported
|
2.21.0 |
Quant |
Dequantize
ElementWiseBinary
|
2.20.0 |
Quant |
ElementWiseXor, CreateSparse, GetSparseIndices, GetSparseValues, SparseToDense,
ElementWiseNeuron
ElementWiseSin, ElementWiseCos
Conv3d
Resize
Convert
|
2.19.0 |
Quant |
Gather
GatherElements
BatchNorm, LayerNorm
Convert
|
2.18.0 |
Quant |
Conv2d, DepthWiseConv2d, TransposeConv2d, FullyConnected, MatMul
GridSample
Lstm
Added rest input for in[24]
Added input rank constraint of 2 for in[0]
Added description of 2d input not applicable for time_major parameter
|
2.17.0 |
Quant |
Quantize, Dequantize
EltwiseMul
|
2.16.0 |
Quant |
|
2.15.0 |
Quant |
Split, ReduceMax, ReduceMean, ReduceMin, Convert, ElementWiseAbs
ElementWiseBinary support added
Convert
Tile
BatchNorm
Conv2d, DepthWiseConv2d, TransposeConv2d
ScatterNd
|
2.14.0 |
Quant |
|
2.13.0 |
Quant |
Convert
GroupNorm support added
ElementWiseRsqrt
|
2.12.0 |
Quant |
ElementWiseUnary support added
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
Added QNN_DATATYPE_SFIXED_POINT_16 datatype support for in[0]
Constraint added for in[1]: QNN_DATATYPE_SFIXED_POINT_16 Weight must have
QNN_DATATYPE_UFIXED_POINT_16 Activation and must be symmetric quantized
RoiAlign
|
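The 2.12.0 weight constraint above pairs QNN_DATATYPE_SFIXED_POINT_16 weights with QNN_DATATYPE_UFIXED_POINT_16 activations and requires the weights to be symmetrically quantized, i.e. with a zero offset so the quantized range is centered on zero. A minimal sketch of symmetric 16-bit weight quantization, assuming the usual max-abs scale choice; the helper name is illustrative and not a QNN API call.

```python
# Symmetric quantization: offset is fixed at 0, so only a scale is
# stored, and q = round(w / scale) lands in [-32768, 32767].
import numpy as np

def quantize_symmetric_int16(w):
    # Scale chosen so the largest |w| maps to the positive int16 limit.
    scale = np.abs(w).max() / 32767.0
    q = np.clip(np.round(w / scale), -32768, 32767).astype(np.int16)
    return q, scale

w = np.array([-0.5, 0.25, 0.5], dtype=np.float32)
qw, s = quantize_symmetric_int16(w)
```

Because the offset is zero, dequantization is just `qw * s`, which is what makes symmetric weights cheap to combine with affine-quantized activations in the convolution inner loop.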
2.11.0 |
Quant |
ElementWiseAsin, ExtractPatches, RoiAlign
Resize
NonMaxSuppression
DetectionOutput, MultiClassNms
|
2.10.0 |
Quant |
ElementWiseSin, ElementWiseCos, NonMaxSuppression
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
TopK
Transpose
FullyConnected, MatMul
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
Conv2d, DepthWiseConv2d, TransposeConv2d
|
2.9.0 |
Quant |
L2Norm, LogSoftmax
ElementWiseEqual, ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess
ElementWiseLessEqual, ElementWiseNotEqual
Conv2d, DepthWiseConv2d, FullyConnected, MatMul, TransposeConv2d
MatMul
ElementWiseAdd, ElementWiseDivide, ElementWiseMaximum, ElementWiseMinimum,
ElementWiseMultiply, ElementWiseSquaredDifference, ElementWiseSubtract, ElementWiseEqual,
ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual,
ElementWiseNotEqual, ElementWiseSelect, Gather, GatherNd, MatMul, ScatterNd
ElementWiseSelect, ScatterNd
ElementWisePower, ExpandDims
OneHot
ReluMinMax
|
2.8.0 |
Quant |
Cast, ElementWiseLog
ElementWiseGreaterEqual, ElementWiseLessEqual, ElementWiseNotEqual
ReduceSum, TopK
ElementWiseGreater, ElementWiseGreaterEqual, ElementWiseLess, ElementWiseLessEqual,
ElementWiseNotEqual, ElementWisePower, ElementWiseSelect, GatherNd, Softmax
ElementWiseSelect
Added QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8,
QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
datatype support for in[1]
|
2.7.0 |
Quant |
Argmax
Transpose
DepthWiseConv2d
ElementWiseAdd, ElementWiseDivide, ElementWiseExp, ElementWiseMaximum,
ElementWiseMinimum, ElementWiseMultiply, ElementWiseSquaredDifference,
ElementWiseSubtract, MatMul, Pad, Relu, Sigmoid, Transpose
ElementWiseExp, ElementWiseFloor, ReduceMax, ReduceMin, ScatterNd, LayerNorm
ElementWiseEqual, ElementWiseLess, ElementWiseSelect
ElementWiseGreater
Reshape
ScatterNd
Softmax
InstanceNorm
Gather, GatherNd
GatherNd
FullyConnected, MatMul
|
2.6.0 |
Quant |
|
2.4.0 |
Quant |
Resize support added
Gather
|
2.3.0 |
Quant |
|
2.2.0 |
Quant |
|
2.1.0 |
Quant |
Concat, ElementWiseAdd, ElementWiseDivide, ElementWiseMaximum, ElementWiseMinimum,
ElementWiseMultiply, ElementWiseSubtract, ElementWiseSquaredDifference, MatMul,
Sigmoid, StridedSlice
Gather
|