GPU Backend Op Definition Supplement

This supplement lists, per op, the tensor datatypes, constraints, and parameter support for the GPU backend. "Configuration: All" means the entry applies to every backend configuration.

ArgbToRgb

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Argmax

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank in [1,4]
  out[0]:
    • Shape: Rank > 0

Argmin

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank in [1,4]
  out[0]:
    • Shape: Rank > 0

Batchnorm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Rank in [1,4]

BatchToSpace

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  crops (see the sketch below):
    • Value: crops[0] and crops[1] should be equal.
    • Value: crops[2] and crops[3] should be equal.
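
In practice the two bullets above say the cropping must be symmetric per spatial dimension. A minimal check in Python, assuming crops is given flattened as [top, bottom, left, right] (the [2, 2] crops tensor read in row-major order); the helper name is illustrative, not part of the QNN API:

    def gpu_batch_to_space_crops_ok(crops):
        # GPU backend rule: cropping must be symmetric per spatial dimension.
        top, bottom, left, right = crops
        return top == bottom and left == right

    assert gpu_batch_to_space_crops_ok([1, 1, 2, 2])      # symmetric: supported
    assert not gpu_batch_to_space_crops_ok([1, 0, 2, 2])  # asymmetric: rejected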

Cast

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

ChannelShuffle

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank in [3,4]

Support (Configuration: All)
  • Param axis only supports default value

Concat

Datatypes (Configuration: All)
  in[0..m]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0..m]:
    • Shape: Rank in [1,5]
  axis (see the sketch below):
    • Value: If any in[0..m] dimension > 16384 and N = 4, then axis > 0
    • Value: If N = 5, then axis > 0
    • Value: If N = 5 and axis = 1, only batch = 1 supported
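
A sketch of the axis rules above as a Python predicate, assuming N denotes the rank of the (equal-rank) inputs and that axis is already non-negative; the helper is illustrative, not part of the QNN API:

    def gpu_concat_axis_ok(input_shapes, axis):
        n = len(input_shapes[0])  # N: rank of the inputs
        if not 1 <= n <= 5:
            return False
        # N = 4 with any dimension over 16384 forces axis > 0.
        if n == 4 and axis == 0 and any(d > 16384 for s in input_shapes for d in s):
            return False
        if n == 5:
            if axis == 0:
                return False
            # N = 5 with axis = 1 additionally requires batch = 1.
            if axis == 1 and any(s[0] != 1 for s in input_shapes):
                return False
        return True

    assert gpu_concat_axis_ok([[1, 8, 8, 4], [1, 8, 8, 4]], axis=3)
    assert not gpu_concat_axis_ok([[2, 3, 4, 4, 4], [2, 3, 4, 4, 4]], axis=1)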

Conv2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Support (Configuration: All)
  • Param reuse_sparse_indices only supports default value

CumulativeSum

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  axis:
    • Value: Must be in range [0, rank(in[0]) - 1].
    • Datatype: QNN_DATATYPE_UINT_32

DepthToSpace

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  block_size:
    • Value: block_size[0] and block_size[1] should be equal.

DepthWiseConv2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Dequantize

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_4
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Only supports ScaleOffset based quantization encoding type

DetectionOutput

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32
  in[1]: QNN_DATATYPE_FLOAT_32
  in[2]: QNN_DATATYPE_FLOAT_32

Constraints (Configuration: All)
  in[0], in[1]:
    • Shape: Only batch = 1 supported
    • Shape: num_anchors less than or equal to 2048 in Fast mode
    • Shape: num_classes less than 256
    • Shape: num_boxes should be equal to num_anchors

Support (Configuration: All)
  • Param share_location only supports default value
  • Param nms_eta only supports default value

ElementWiseAbs

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

ElementWiseAdd

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Datatype: INT_32 supported only in broadcast
    • Shape: Only supports Rank >= 1
  in[1]:
    • Datatype: INT_32 supported only in broadcast
    • Shape: Only supports Rank >= 1
  out[0]:
    • Datatype: INT_32 supported only in broadcast
    • Shape: Only supports Rank >= 1

ElementWiseAnd

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8
  in[1]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8
  out[0]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseBinary

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All; see the datatype sketch below)
  in[0]:
    • Datatype: Numerical supports: FLOAT_32, FLOAT_16, INT_32. Comparison supports: FLOAT_32, FLOAT_16, INT_32, BOOL_8. Logical supports: UINT_8, BOOL_8
    • Shape: Only supports Rank >= 1
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static
  in[1]:
    • Datatype: Numerical supports: FLOAT_32, FLOAT_16, INT_32. Comparison supports: FLOAT_32, FLOAT_16, INT_32, BOOL_8. Logical supports: UINT_8, BOOL_8
    • Shape: Only supports Rank >= 1
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static
  out[0]:
    • Datatype: Numerical supports: FLOAT_32, FLOAT_16, INT_32. Comparison supports: UINT_8, BOOL_8. Logical supports: UINT_8, BOOL_8
    • Shape: Only supports Rank >= 1
  operation:
    • Value: FMOD, MOD, and XOR not supported
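
The per-class datatype rules above reduce to a small lookup. A minimal sketch in Python; the class names and helper are illustrative groupings, not part of the QNN API:

    NUMERICAL_IN = {"FLOAT_32", "FLOAT_16", "INT_32"}
    COMPARISON_IN = {"FLOAT_32", "FLOAT_16", "INT_32", "BOOL_8"}
    LOGICAL_IN = {"UINT_8", "BOOL_8"}

    def gpu_binary_dtypes_ok(op_class, in_dtype, out_dtype):
        # Comparison ops produce UINT_8/BOOL_8 outputs; numerical ops keep
        # numerical outputs; logical ops stay within UINT_8/BOOL_8.
        if op_class == "numerical":
            return in_dtype in NUMERICAL_IN and out_dtype in NUMERICAL_IN
        if op_class == "comparison":
            return in_dtype in COMPARISON_IN and out_dtype in {"UINT_8", "BOOL_8"}
        if op_class == "logical":
            return in_dtype in LOGICAL_IN and out_dtype in LOGICAL_IN
        return False

    assert gpu_binary_dtypes_ok("comparison", "FLOAT_32", "BOOL_8")
    assert not gpu_binary_dtypes_ok("logical", "FLOAT_32", "BOOL_8")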

ElementWiseCos

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseDivide

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseEqual

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseExp

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseFloor

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseFloorDiv

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

ElementWiseGreater

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseGreaterEqual

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseLess

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseLessEqual

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseLog

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseMaximum

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseMinimum

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseMultiply

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseNeg

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

ElementWiseNeuron

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseNot

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

ElementWiseNotEqual

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseOr

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8
  in[1]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8
  out[0]: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWisePower

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseRound

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseRsqrt

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseSelect

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UFIXED_POINT_4
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UFIXED_POINT_4
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[1]:
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static
  in[2]:
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static

ElementWiseSin

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseSoftplus

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseSquaredDifference

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseSquareRoot

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ElementWiseSubtract

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
  in[1]:
    • Shape: Only supports Rank >= 1
  out[0]:
    • Shape: Only supports Rank >= 1

ElementWiseUnary

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Datatype: BOOL_8 and UINT_8 data types only supported when operation param is NOT
  out[0]:
    • Datatype: BOOL_8 and UINT_8 data types only supported when operation param is NOT
  operation:
    • Value: ASIN, ATAN, RECIPROCAL, and SIGN not supported

Elu

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ExpandDims

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

FullyConnected

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  out[0]:
    • Datatype: out[0] must be same Datatype as in[0]

Support (Configuration: All)
  • Param keep_dims only supports default value

Gather

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static
  in[1]:
    • Shape: k must be 1, 2 or 3
    • If datatype is QNN_DATATYPE_UFIXED_POINT_4, then input must be static

GatherNd

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All; see the sketch below)
  in[1]:
    • Shape: k must be 1, 2 or 3, and Shape(in[1])[k-1] must be greater than 0 and less than or equal to (n - batch_dims)
    • Value: Indices cannot be out of range of in[0]
  batch_dims:
    • Value: batch_dims must be less than min(n, k)
    • Shape: Shape(in[0])[:batch_dims] must equal Shape(in[1])[:batch_dims]
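
Taken together, the rules above pin down the legal shapes. A minimal checker in Python, writing n = rank(in[0]) and k = Shape(in[1])[-1] (the length of each index tuple); the helper is illustrative, not part of the QNN API:

    def gpu_gather_nd_ok(data_shape, indices_shape, batch_dims):
        n, k = len(data_shape), indices_shape[-1]
        if k not in (1, 2, 3):
            return False
        if not 0 < k <= n - batch_dims:
            return False
        if batch_dims >= min(n, k):
            return False
        # Leading batch_dims dims of data and indices must match.
        return list(data_shape[:batch_dims]) == list(indices_shape[:batch_dims])

    assert gpu_gather_nd_ok([10, 4], [5, 2], batch_dims=0)
    assert not gpu_gather_nd_ok([10, 4], [5, 3], batch_dims=0)  # k exceeds n - batch_dims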

Gelu

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

GridSample

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Input rank must be 4
  in[1]:
    • Shape: Input rank must be 4
    • Datatype: Same Datatype as in[0]
  out[0]:
    • Shape: Output rank must be 4
    • Datatype: Same Datatype as in[0]

GroupNorm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Supports rank in [3,4]

HardSwish

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

HeatMapMaxKeyPoint

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Only square heat maps supported

InstanceNorm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Supports rank in [3,4]
  mode:
    • Value: Only supports mode MU_SIGMA
  region:
    • Value: Only supports region ACROSS_SPATIAL

Support (Configuration: All)
  • Param normalize_variance only supports default value

L2Norm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  axes:
    • Unsupported

L2Pool2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

LayerNorm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All; see the sketch below)
  in[1]:
    • Shape: Only supports input of Rank = 1
    • Shape: Dimension must be equal to norm axis dimension of in[0]
  in[2]:
    • Shape: Only supports input of Rank = 1
    • Shape: Dimension must be equal to norm axis dimension of in[0]
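
For example, normalizing the last axis of a [1, 128, 768] tensor requires in[1] and in[2] to have shape [768]. A minimal sketch in Python, assuming a single normalization axis; the helper is illustrative, not part of the QNN API:

    def gpu_layernorm_weights_ok(input_shape, axis, gamma_shape, beta_shape):
        # gamma (in[1]) and beta (in[2]) must be rank-1 and sized to the
        # dimension of in[0] being normalized.
        norm_dim = input_shape[axis]
        return list(gamma_shape) == [norm_dim] and list(beta_shape) == [norm_dim]

    assert gpu_layernorm_weights_ok([1, 128, 768], axis=2,
                                    gamma_shape=[768], beta_shape=[768])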

LogSoftmax

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  axis:
    • Value: Only supports a value of N-1

Lrn

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Lstm

Datatypes (Configuration: All)
  in[0..17]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[21..23]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[24]: QNN_DATATYPE_BOOL_8
  out[0..2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Input rank must be 2.
  in[1..9], in[12..17], in[21..23]:
    • Input must be static
  in[18]:
    • Cell-to-input weights not supported
  in[19]:
    • Cell-to-forget weights not supported
  in[20]:
    • Cell-to-output weights not supported
  in[24]:
    • reset only supports the default value
  input_gate_qscale, forget_gate_qscale, cell_gate_qscale, output_gate_qscale, hidden_state_offset, hidden_state_qscale:
    • Value: Only supports the default value 0.0f
  time_major:
    • Not applicable for 2D input; ignored

Support (Configuration: All)
  • Param direction only supports default value

MatMul

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Dynamic Shape: Dynamic dims not supported.
  in[1]:
    • Shape: Batch dimensions (the outermost N-2), if not equal to those of in[0], must be 1 or absent
    • Dynamic Shape: Dynamic dims not supported.
  in[2]:
    • Dynamic Shape: Dynamic dims not supported.
  out[0]:
    • Shape: Rank must be equal to max(rank(in[0]), rank(in[1])), i.e. after batch broadcasting
    • Shape: Batch dimensions are broadcast from in[0] and in[1] (see the sketch below)
    • Dynamic Shape: Dynamic dims not supported.
  transpose_in0:
    • Value: Must be false
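
The out[0] rules above amount to right-aligned batch broadcasting. A minimal sketch of the expected output shape in Python; it raises on shapes the constraints above reject, and the helper is illustrative, not part of the QNN API:

    def gpu_matmul_out_shape(a_shape, b_shape):
        m, k1 = a_shape[-2], a_shape[-1]
        k2, n = b_shape[-2], b_shape[-1]
        assert k1 == k2, "inner dimensions must match"
        batch_a, batch_b = a_shape[:-2], b_shape[:-2]
        out_batch = []
        for i in range(max(len(batch_a), len(batch_b))):
            da = batch_a[-1 - i] if i < len(batch_a) else 1
            db = batch_b[-1 - i] if i < len(batch_b) else 1
            # in[1] batch dims must equal in[0]'s, be 1, or be absent.
            assert da == db or db == 1, "in[1] batch dim must match in[0] or be 1"
            out_batch.append(da)
        # Rank is max(rank(in[0]), rank(in[1])) after batch broadcasting.
        return list(reversed(out_batch)) + [m, n]

    assert gpu_matmul_out_shape([4, 1, 8, 16], [16, 32]) == [4, 1, 8, 32]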

Nv12ToRgb

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32
  out[0]: QNN_DATATYPE_FLOAT_32

Nv21ToRgb

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32
  out[0]: QNN_DATATYPE_FLOAT_32

Pack

Datatypes (Configuration: All)
  in[0..m]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0..m]:
    • Shape: N less than or equal to 3
  axis (see the sketch below):
    • Value: In range [0, rank(in[0])-1] for m == 1
    • Value: In range [max(0, rank(in[0])-2), rank(in[0])-1] for m >= 2
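
A sketch of the axis-range rule in Python; m is the number of inputs, and the helper is illustrative, not part of the QNN API:

    def gpu_pack_axis_ok(rank_in0, m, axis):
        if m == 1:
            return 0 <= axis <= rank_in0 - 1
        # With two or more inputs only the two innermost positions are legal.
        return max(0, rank_in0 - 2) <= axis <= rank_in0 - 1

    assert gpu_pack_axis_ok(rank_in0=3, m=1, axis=0)
    assert not gpu_pack_axis_ok(rank_in0=3, m=3, axis=0)  # only axes 1 and 2 allowed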

Pad

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  scheme (see the sketch below):
    • Value: MIRROR_SYMMETRIC can only be used when rank(in[0]) is 4
    • Value: MIRROR_SYMMETRIC can only be used with H or W dimensions
    • Value: MIRROR_REFLECT can only be used when rank(in[0]) is 4
    • Value: EDGE can only be used when rank(in[0]) is less than or equal to 4
  pad_constant_value:
    • Datatype: QNN_DATATYPE_FLOAT_32
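
A minimal sketch of the scheme rules in Python; `pads_hw_only` stands in for the "H or W dimensions only" condition on MIRROR_SYMMETRIC, and the helper is illustrative, not part of the QNN API:

    def gpu_pad_scheme_ok(scheme, input_rank, pads_hw_only=True):
        if scheme == "MIRROR_SYMMETRIC":
            return input_rank == 4 and pads_hw_only
        if scheme == "MIRROR_REFLECT":
            return input_rank == 4
        if scheme == "EDGE":
            return input_rank <= 4
        return scheme == "CONSTANT"  # no rank restriction listed above

    assert gpu_pad_scheme_ok("EDGE", 3)
    assert not gpu_pad_scheme_ok("MIRROR_REFLECT", 5)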

PoolAvg2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Support (Configuration: All)
  • Param rounding_mode only supports default value

PoolMax2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Support (Configuration: All)
  • Param rounding_mode only supports default value

Prelu

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N >= 1
  in[1]:
    • Shape: M >= 1

ReduceMax

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N less than or equal to 4
  axes:
    • Value: Reduction across batch supported only when rank(axes) == 1

ReduceMean

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N less than or equal to 4
  axes:
    • Value: Reduction across batch supported only when rank(axes) == 1

ReduceMin

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N less than or equal to 4
  axes:
    • Value: Reduction across batch supported only when rank(axes) == 1

ReduceProd

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N less than or equal to 4
  axes:
    • Value: Reduction across batch supported only when rank(axes) == 1

ReduceSum

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: N less than or equal to 4
  axes:
    • Value: Reduction across batch supported only when rank(axes) == 1

Relu

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Relu1

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Relu6

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

ReluMinMax

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Reshape

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports Rank >= 1
    • Dynamic Shape: Dynamic dims not supported.
  in[1]:
    • This input is not supported.
  out[0]:
    • Shape: Only supports Rank >= 1
    • Dynamic Shape: Dynamic dims not supported.

Resize

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  out[0]:
    • Shape: dims out[0][1] and out[0][2] must be > 1

ResizeBilinear

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  out[0]:
    • Shape: dims out[0][1] and out[0][2] must be > 1

Support (Configuration: All)
  • Param antialias only supports default value

ResizeNearestNeighbor

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  out[0]:
    • Shape: dims out[0][1] and out[0][2] must be > 1

RmsNorm

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Only supports input of Rank in range [1, 4]
  in[1]:
    • Shape: Only supports input of Rank = 1
    • Shape: Dimension must be equal to norm axis dimension of in[0]
  in[2]:
    • Shape: Only supports input of Rank = 1
    • Shape: Dimension must be equal to norm axis dimension of in[0]
  axes:
    • Value: Should be in range [0, Rank-1]

RoiAlign

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32
  in[1]: QNN_DATATYPE_FLOAT_32
  in[2]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32
  out[0]: QNN_DATATYPE_FLOAT_32

Support (Configuration: All)
  • Param aligned only supports default value
  • Param allow_invalid_roi only supports default value

ScatterElements

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_BOOL_8
  in[1]: QNN_DATATYPE_INT_32
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_BOOL_8
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_BOOL_8

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank in [1,4]
  axis:
    • Value: If N=4, then axis > 0.

Sigmoid

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Softmax

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Support (Configuration: All)
  • Param axis only supports default value

SpaceToBatch

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  pad_amount:
    • Value: pad_amount[0] and pad_amount[1] should be equal.
    • Value: pad_amount[2] and pad_amount[3] should be equal.

SpaceToDepth

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  block_size:
    • Value: block_size[0] and block_size[1] should be equal.

Support (Configuration: All)
  • Param mode only supports default value

Split

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]…out[m]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  axis:
    • Value: Should be greater than N-4 for N > 4

Squeeze

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

StridedSlice

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank must be 5 or less
  ranges:
    • Value: No support for slicing across batch dimensions > 4; the slice ranges specified must align

Support (Configuration: All)
  • Param new_axes_mask only supports default value

Tanh

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Tile

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  multiples (see the sketch below):
    • Value: For N > 3, multiples[0…N-4] should be 1
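
In other words, for rank N > 3 only the three innermost dimensions may actually be tiled. A minimal check in Python; the helper is illustrative, not part of the QNN API:

    def gpu_tile_multiples_ok(multiples):
        # For N > 3, multiples[0..N-4] (the outermost N-3 entries) must be 1.
        n = len(multiples)
        return n <= 3 or all(m == 1 for m in multiples[: n - 3])

    assert gpu_tile_multiples_ok([1, 2, 2, 3])      # only non-batch dims tiled
    assert not gpu_tile_multiples_ok([2, 1, 1, 3])  # tiles the outermost dim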

TopK

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[1]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32

Support (Configuration: All)
  • Param largest only supports default value

Transpose

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_INT_32

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank must be 2, 3, 4, or 5

TransposeConv2d

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Support (Configuration: All)
  • Param group only supports default value

UnPack

Datatypes (Configuration: All)
  in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16
  out[0]…out[m]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16

Constraints (Configuration: All)
  in[0]:
    • Shape: Rank must be 2, 3, or 4