CPU Backend Op Definition Supplement

ArgbToRgb

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Argmax

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    axis: QNN_DATATYPE_UINT_32
    keep_dims: QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8
    out[0]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32
    axis: QNN_DATATYPE_UINT_32
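The axis and keep_dims parameters follow the usual reduction semantics. A minimal pure-Python sketch for a rank-2 input (an illustration only, not part of the backend definition):

```python
def argmax2d(rows, axis, keep_dims=False):
    """Index of the maximum along `axis` of a 2-D list; `keep_dims`
    retains the reduced axis with size 1, mirroring the op's parameter."""
    if axis == 0:
        out = [max(range(len(rows)), key=lambda i: rows[i][j])
               for j in range(len(rows[0]))]
        return [out] if keep_dims else out
    out = [max(range(len(r)), key=lambda j: r[j]) for r in rows]
    return [[v] for v in out] if keep_dims else out
```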

Argmin

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8
    out[0]: QNN_DATATYPE_INT_32

AxisAlignedBboxTransform

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    weights: QNN_DATATYPE_FLOAT_32

Support

  fp32
    • Param weights only supports default value.

Batchnorm

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

BatchPermutation

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32
    out[0]: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • in[0]: Rank must be greater than 0.
    • in[1]: Rank must be 1.
    • in[1]: Values must be unique and in the range 0 to batch-1.

BatchToSpace

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

BboxTransform

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32

BoxWithNmsLimit

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32

Buffer

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_FLOAT_32
    buffer_size: QNN_DATATYPE_UINT_32
    buffer_dim: QNN_DATATYPE_UINT_32
    stride: QNN_DATATYPE_UINT_32
    mode: QNN_DATATYPE_UINT_32

Cast

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_BOOL_8, QNN_DATATYPE_INT_64, QNN_DATATYPE_FLOAT_16
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_BOOL_8, QNN_DATATYPE_INT_64, QNN_DATATYPE_FLOAT_16
  int8
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8, QNN_DATATYPE_FLOAT_16
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UINT_8, QNN_DATATYPE_FLOAT_16

Constraints

  int8
    • in[0]: QNN_DATATYPE_UINT_32 values must be in the range 0..INT32_MAX.
    • in[0]: Dynamic Shape: Dynamic dims not supported.
    • out[0]: Dynamic Shape: Dynamic dims not supported.
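The range constraint above can be validated ahead of time. A hypothetical helper in plain Python (not a QNN API):

```python
INT32_MAX = 2**31 - 1

def uint32_castable(values):
    """Check the documented int8-configuration Cast constraint:
    QNN_DATATYPE_UINT_32 values must lie in 0..INT32_MAX."""
    return all(0 <= v <= INT32_MAX for v in values)
```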

ChannelShuffle

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Col2Im

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

CollectRpnProposals

Datatypes

  fp32
    in[0..9]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

CombinedNms

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32
    out[2]: QNN_DATATYPE_UINT_32
    out[3]: QNN_DATATYPE_UINT_32
    max_boxes_per_class: QNN_DATATYPE_UINT_32
    max_total_boxes: QNN_DATATYPE_UINT_32
    iou_threshold: QNN_DATATYPE_FLOAT_32
    score_threshold: QNN_DATATYPE_FLOAT_32
    pad_per_class: QNN_DATATYPE_BOOL_8
    clip_boxes: QNN_DATATYPE_BOOL_8

Concat

Datatypes

  fp32
    in[0..m]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_BOOL_8
  int8
    in[0..m]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0..m]: Shape: Max supported input rank is 5.
  int8
    • in[0..m]: Shape: Max supported input rank is 5.
    • out[0]: Shape: Max supported output rank is 5.

ConstantOfShape

Datatypes

  fp32
    in[0]: QNN_DATATYPE_UINT_32
    out[0]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8
    value: QNN_DATATYPE_INT_32, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_64

Conv1d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

Conv2d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Conv3d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    group: QNN_DATATYPE_UINT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8
    group: QNN_DATATYPE_UINT_32

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Input rank must be 5.
    • out[0]: Shape: Output rank must be 5.

Convert

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8

Correlation1D

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

CreateSparse

Datatypes

  fp32
    in[0]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.

CropAndResize

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

CumulativeSum

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8
    axis: QNN_DATATYPE_UINT_32
    exclusive: QNN_DATATYPE_BOOL_8
    reverse: QNN_DATATYPE_BOOL_8
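The exclusive and reverse flags combine in the conventional way. This sketch assumes the TensorFlow-style semantics of the two flags; it is an illustration, not the backend implementation:

```python
def cumulative_sum(xs, exclusive=False, reverse=False):
    """Running sum with the op's boolean flags: `exclusive` shifts the
    sum so element i excludes xs[i]; `reverse` accumulates from the end."""
    seq = list(reversed(xs)) if reverse else list(xs)
    out, total = [], 0
    for v in seq:
        if exclusive:
            out.append(total)
        total += v
        if not exclusive:
            out.append(total)
    return list(reversed(out)) if reverse else out
```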

DepthToSpace

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

DepthWiseConv1d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

DepthWiseConv2d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Dequantize

Datatypes

  fp32
    in[0]: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_32, QNN_DATATYPE_UFIXED_POINT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • in[0]: AXIS_SCALE_OFFSET encoding does not support the QNN_DATATYPE_UFIXED_POINT_32 datatype.
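Conceptually, Dequantize maps fixed-point codes back to float. A sketch assuming the common affine scheme float ~= scale * (q - zero_point); the exact offset sign convention is defined by the QNN quantization encodings, not by this table:

```python
def dequantize_u8(q_values, scale, zero_point):
    # Hypothetical affine dequantization for QNN_DATATYPE_UFIXED_POINT_8
    # codes: float ~= scale * (q - zero_point). Illustration only; the
    # backend's actual offset convention comes from the QNN encodings.
    return [scale * (q - zero_point) for q in q_values]
```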

DetectionOutput

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[1]: QNN_DATATYPE_FLOAT_32
    out[2]: QNN_DATATYPE_INT_32
    out[3]: QNN_DATATYPE_UINT_32

DistributeFpnProposals

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[1]: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseAbs

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseAdd

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.
  int8
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.

ElementWiseAnd

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseAsin

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

ElementWiseAtan

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseBinary

Datatypes

  fp32
    operation: QNN_DATATYPE_UINT_32
  int8
    operation: QNN_DATATYPE_UINT_32

Constraints

  fp32
    • in[0]: Data type validation depends on the selected operation.
    • in[1]: Data type validation depends on the selected operation.
    • out[0]: Data type validation depends on the selected operation.
  int8
    • in[0]: Data type validation depends on the selected operation.
    • in[1]: Data type validation depends on the selected operation.
    • out[0]: Data type validation depends on the selected operation.

ElementWiseCeil

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseCos

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

ElementWiseDivide

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.
  int8
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.

ElementWiseEqual

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseExp

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseFloor

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseFloorDiv

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

ElementWiseFmod

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
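Fmod and floor-style division/modulo differ only in sign convention: fmod conventionally follows C's fmod (the result takes the dividend's sign), while a floor-style mod follows the divisor. A quick illustration with Python's standard library, offered as background rather than as part of the backend definition:

```python
import math

# C-style fmod: the result carries the sign of the dividend.
assert math.fmod(-7.0, 3.0) == -1.0
assert math.fmod(7.0, -3.0) == 1.0

# Floor-style modulo (Python's %): the result carries the sign of the divisor.
assert -7.0 % 3.0 == 2.0
```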

ElementWiseGreater

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_BOOL_8

Constraints

  int8
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.

ElementWiseGreaterEqual

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  int8
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.

ElementWiseLess

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.

ElementWiseLessEqual

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.
    • in[1]: Shape: Max supported input rank is 5.

ElementWiseLog

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseMaximum

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseMinimum

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseMod

Datatypes

  fp32
    in[0]: QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_16
    in[1]: QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_16
    out[0]: QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_16

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseMultiply

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.
  int8
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.

ElementWiseNeg

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseNeuron

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseNot

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseNotEqual

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseOr

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_BOOL_8

ElementWisePower

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseRound

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseRsqrt

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseSelect

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[2]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_INT_32
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_INT_32

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ElementWiseSign

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64

ElementWiseSin

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseSoftplus

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    beta: QNN_DATATYPE_FLOAT_32
    threshold: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • beta: Value: beta must be > 0.
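The beta and threshold parameters correspond to the generalized softplus y = log(1 + exp(beta * x)) / beta, with threshold commonly used as a numerical-stability cutoff above which y is approximately x. This sketch assumes the PyTorch-style convention; it is an illustration, not the backend's exact definition:

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    assert beta > 0  # documented constraint: beta must be > 0
    if beta * x > threshold:
        return x  # linear regime: log1p(exp(z)) ~= z for large z
    return math.log1p(math.exp(beta * x)) / beta
```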

ElementWiseSquaredDifference

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ElementWiseSquareRoot

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Dynamic Shape: Dynamic dims not supported.
    • out[0]: Dynamic Shape: Dynamic dims not supported.
  int8
    • in[0]: Dynamic Shape: Dynamic dims not supported.
    • out[0]: Dynamic Shape: Dynamic dims not supported.

ElementWiseSubtract

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.
  int8
    • in[0]: Shape: Max supported input rank is 6.
    • in[1]: Shape: Max supported input rank is 6.

ElementWiseUnary

Datatypes

  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

Constraints

  fp32
    • in[0]: Data type validation depends on the selected operation.
    • out[0]: Data type validation depends on the selected operation.

ElementWiseXor

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_BOOL_8

Elu

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

ExpandDims

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

ExtractGlimpse

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

ExtractPatches

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

FullyConnected

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • out[0]: Datatype: Must have the same data type as in[0].
  int8
    • out[0]: Datatype: Must have the same data type as in[0].

Gather

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8
    in[1]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

GatherElements

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64
    in[1]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64

Constraints

  fp32
    • in[0]: Shape: Supports inputs of up to rank 6.

GatherNd

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64
    in[1]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

Gelu

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

GenerateProposals

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    in[3]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32
    bbox_xform_clip: QNN_DATATYPE_BOOL_8

Support

  fp32
    • Param bbox_xform_clip only supports default value.

GetSparseIndices

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32

GetSparseValues

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

GridSample

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

GroupNorm

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    epsilon: QNN_DATATYPE_FLOAT_32
    group: QNN_DATATYPE_UINT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8
    epsilon: QNN_DATATYPE_FLOAT_32
    group: QNN_DATATYPE_UINT_32

Gru

Datatypes

  fp32
    in[0..13]: QNN_DATATYPE_FLOAT_32
    in[14]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_FLOAT_32

HardSwish

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

HeatMapMaxKeyPoint

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_INT_32
    out[0]: QNN_DATATYPE_FLOAT_32
    out[1]: QNN_DATATYPE_INT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8

If

Datatypes

  fp32
    in[0]: QNN_DATATYPE_BOOL_8

Im2Col

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

ImageProjectionTransform

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

InstanceNorm

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Rank must be 4.
    • mode: Only the default mode QNN_OP_INSTANCE_NORM_MODE_MU_SIGMA is supported.
    • region: Only the default region QNN_OP_INSTANCE_NORM_REGION_ACROSS_SPATIAL is supported.
  int8
    • mode: Only the default mode QNN_OP_INSTANCE_NORM_MODE_MU_SIGMA is supported.
    • region: Only the default region QNN_OP_INSTANCE_NORM_REGION_ACROSS_SPATIAL is supported.

Support

  fp32
    • Param mode only supports default value.
    • Param region only supports default value.

IsInf

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_BOOL_8
    detect_negative: QNN_DATATYPE_BOOL_8
    detect_positive: QNN_DATATYPE_BOOL_8
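The two boolean parameters gate which sign of infinity is reported, matching the usual IsInf convention (as in ONNX). A pure-Python sketch, offered as an illustration rather than the backend implementation:

```python
import math

def is_inf(x, detect_negative=True, detect_positive=True):
    """Flag infinities, optionally restricted to one sign via the
    detect_negative / detect_positive parameters."""
    if x == math.inf:
        return detect_positive
    if x == -math.inf:
        return detect_negative
    return False
```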

IsNan

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_BOOL_8

L2Norm

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

L2Pool2d

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

LayerNorm

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    in[2]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    in[1]: QNN_DATATYPE_UFIXED_POINT_8
    in[2]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Constraints

  fp32
    • in[0]: Shape: Max supported input rank is 5.
  int8
    • in[0]: Shape: Max supported input rank is 5.

Logit

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    in[1]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

LogSoftmax

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32

Constraints

  fp32
    • beta: Value: Only beta = 1 is supported.
    • axis: Value: Only the last axis (N-1) is supported.

Lrn

Datatypes

  fp32
    in[0]: QNN_DATATYPE_FLOAT_32
    out[0]: QNN_DATATYPE_FLOAT_32
  int8
    in[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8

Lstm

Datatypes

  fp32
    in[0..23]: QNN_DATATYPE_FLOAT_32
    in[24]: QNN_DATATYPE_BOOL_8
    out[0..7]: QNN_DATATYPE_FLOAT_32
    hidden_state_offset: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32
  int8
    in[0..6]: QNN_DATATYPE_UFIXED_POINT_8
    in[7..9]: QNN_DATATYPE_SFIXED_POINT_32
    in[10]: QNN_DATATYPE_UFIXED_POINT_8
    in[11..15]: QNN_DATATYPE_SFIXED_POINT_16
    in[16..17]: QNN_DATATYPE_UFIXED_POINT_8
    in[18..20]: QNN_DATATYPE_SFIXED_POINT_16
    in[21]: QNN_DATATYPE_SFIXED_POINT_32
    in[22]: QNN_DATATYPE_UFIXED_POINT_8
    in[23]: QNN_DATATYPE_SFIXED_POINT_32
    in[24]: QNN_DATATYPE_BOOL_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8
    out[1]: QNN_DATATYPE_SFIXED_POINT_16
    out[2]: QNN_DATATYPE_UFIXED_POINT_8
    out[3..7]: QNN_DATATYPE_FLOAT_32
    hidden_state_offset: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32

Constraints

  fp32
    • input_gate_qscale: Only the default value 0.0f is supported.
    • forget_gate_qscale: Only the default value 0.0f is supported.
    • cell_gate_qscale: Only the default value 0.0f is supported.
    • output_gate_qscale: Only the default value 0.0f is supported.
    • hidden_state_offset: Only the default value 0.0f is supported.
    • hidden_state_qscale: Only the default value 0.0f is supported.
  int8
    • input_gate_qscale: Only the default value 0.0f is supported.
    • forget_gate_qscale: Only the default value 0.0f is supported.
    • cell_gate_qscale: Only the default value 0.0f is supported.
    • output_gate_qscale: Only the default value 0.0f is supported.
    • hidden_state_offset: Only the default value 0.0f is supported.
    • hidden_state_qscale: Only the default value 0.0f is supported.

Support

  fp32
    • Param input_gate_qscale only supports default value.
    • Param forget_gate_qscale only supports default value.
    • Param cell_gate_qscale only supports default value.
    • Param output_gate_qscale only supports default value.
    • Param hidden_state_offset only supports default value.
    • Param hidden_state_qscale only supports default value.

MaskedSoftmax

Datatypes

| Configuration | in[0] | in[1] | out[0] | mode |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |
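As a point of reference for the fp32 configuration above, masked softmax can be sketched as an ordinary numerically stable softmax applied after combining the logits (in[0]) with the mask (in[1]); treating the mask as additive is an assumption about this op's semantics, and the `mode` parameter is ignored in this sketch.

```python
import numpy as np

def masked_softmax(x, mask):
    # Assumed semantics: add the mask (large negative values at masked
    # positions) to the logits, then take a numerically stable softmax
    # over the last axis.
    z = x + mask
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0]])
mask = np.array([[0.0, 0.0, -1e9]])  # mask out the last position
p = masked_softmax(x, mask)  # probability at the masked slot is ~0
```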

MatMul

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| fp32 | Shape: Max supported input rank is 5. | Shape: Max supported input rank is 5. | |
| int8 | Shape: Max supported input rank is 5. | Shape: Max supported input rank is 5. | Shape: Max supported input rank is 5. |
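Numerically, MatMul is a batched matrix product over the last two axes. A minimal fp32 sketch follows; the `transpose_in0`/`transpose_in1` flag names mirror common MatMul attributes and are assumptions, not this backend's confirmed parameter list.

```python
import numpy as np

def matmul(a, b, transpose_in0=False, transpose_in1=False):
    # Batched matrix multiply over the last two axes, with optional
    # pre-transposition of either operand (assumed parameter names).
    if transpose_in0:
        a = np.swapaxes(a, -1, -2)
    if transpose_in1:
        b = np.swapaxes(b, -1, -2)
    return np.matmul(a, b)

c = matmul(np.ones((2, 3)), np.ones((3, 4)))  # each entry sums 3 ones
```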

Moments

Datatypes

| Configuration | in[0] | out[0] | out[1] |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
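Moments produces two outputs, which the table maps to out[0] and out[1]: the mean and the variance of the input over the reduction axes. A minimal sketch:

```python
import numpy as np

def moments(x, axes, keep_dims=False):
    # out[0] is the mean and out[1] the variance, reduced over `axes`.
    mean = x.mean(axis=axes, keepdims=keep_dims)
    var = x.var(axis=axes, keepdims=keep_dims)
    return mean, var

x = np.array([[1.0, 2.0], [3.0, 4.0]])
mean, var = moments(x, axes=(0, 1))
```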

MultiClassNms

Datatypes

Configuration

in[0]

in[1]

out[0]

out[1]

in[2]

out[2]

out[3]

out[4]

fp32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

int8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

NonMaxSuppression

Datatypes

| Configuration | in[0] | in[1] | out[0] | out[1] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
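For reference, the greedy suppression loop this op family implements can be sketched as follows; the [y1, x1, y2, x2] corner layout and the 0.5 default IoU threshold are illustrative assumptions, not the backend's exact conventions.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [y1, x1, y2, x2] boxes.
    y1, x1 = np.maximum(a[:2], b[:2])
    y2, x2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: repeatedly keep the highest-scoring remaining box
    # and drop boxes that overlap it beyond the threshold.
    order = list(np.argsort(-scores))
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
keep = nms(boxes, np.array([0.9, 0.8, 0.7]))  # the overlapping box is dropped
```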

NonZero

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_INT_8, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 |

Nv12ToRgb

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Nv21ToRgb

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

OneHot

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 |
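A minimal sketch of the expansion OneHot performs, turning integer indices (in[0]) into a one-hot tensor (out[0]) along a new last axis; the on/off value defaults are assumptions.

```python
import numpy as np

def one_hot(indices, depth, on_value=1.0, off_value=0.0):
    # Fill with off_value, then place on_value at each index position
    # along the appended last axis.
    out = np.full(indices.shape + (depth,), off_value, dtype=np.float32)
    np.put_along_axis(out, np.expand_dims(indices, -1), on_value, axis=-1)
    return out

v = one_hot(np.array([0, 2, 1]), depth=3)
```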

Pack

Datatypes

Configuration

in[0..m]

out[0]

in[0]

fp32

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_8

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_8

int8

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

| Configuration | in[0] |
|---|---|
| int8 | Shape: Max supported input rank is 3. |

Pad

Datatypes

| Configuration | in[0] | out[0] | pad_constant_value |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32, QNN_DATATYPE_FLOAT_32 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 6. |
| int8 | Shape: Max supported input rank is 6. |
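Pad's constant mode can be sketched with NumPy; `pad_amount` as a per-axis (before, after) list mirrors the op's usual parameter naming and is an assumption.

```python
import numpy as np

def pad_constant(x, pad_amount, pad_constant_value=0.0):
    # pad_amount is a (rank, 2) list of (before, after) pads per axis;
    # new elements are filled with pad_constant_value.
    return np.pad(x, pad_amount, mode="constant",
                  constant_values=pad_constant_value)

y = pad_constant(np.ones((2, 2), dtype=np.float32), [(1, 1), (0, 2)], 9.0)
```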

PoolAvg2d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Support

Configuration

fp32

  • Param rounding_mode only supports default value
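A reference sketch of 2-D average pooling on NHWC input with no padding; the NHWC layout is an assumption, and since `rounding_mode` only supports its default, the output extents here use plain floor division.

```python
import numpy as np

def pool_avg_2d(x, filter_size, stride):
    # Slide a filter window over H and W and average each window.
    n, h, w, c = x.shape
    fh, fw = filter_size
    sh, sw = stride
    oh, ow = (h - fh) // sh + 1, (w - fw) // sw + 1
    out = np.zeros((n, oh, ow, c), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            win = x[:, i * sh:i * sh + fh, j * sw:j * sw + fw, :]
            out[:, i, j, :] = win.mean(axis=(1, 2))
    return out

y = pool_avg_2d(np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1),
                (2, 2), (2, 2))
```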

PoolAvg3d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Input rank must be 5. |

Support

Configuration

fp32

  • Param rounding_mode only supports default value

PoolMax2d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Support

Configuration

fp32

  • Param rounding_mode only supports default value

PoolMax3d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

Support

Configuration

fp32

  • Param rounding_mode only supports default value

int8

  • Param rounding_mode only supports default value

Prelu

Datatypes

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |
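Prelu applies a negative-slope coefficient tensor (in[1], broadcastable against in[0]) wherever the input is negative, and passes positive values through unchanged:

```python
import numpy as np

def prelu(x, alpha):
    # alpha is the per-channel (broadcastable) slope for negative inputs.
    return np.where(x >= 0, x, alpha * x)

y = prelu(np.array([-2.0, -1.0, 0.0, 3.0]), alpha=0.1)
```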

Quantize

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16 |
| int8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | out[0] |
|---|---|
| fp32 | BW_SCALE_OFFSET not supported. |
| int8 | BW_SCALE_OFFSET not supported. |
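Quantize converts float input to a fixed-point output datatype. One common affine convention is sketched below for QNN_DATATYPE_UFIXED_POINT_8; the exact scale/offset convention (including the sign of the offset) is backend-defined, so treat this as an assumption.

```python
import numpy as np

def quantize_u8(x, scale, zero_point):
    # Assumed affine convention: real = scale * (q - zero_point),
    # with q clamped to the UFIXED_POINT_8 range [0, 255].
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

q = quantize_u8(np.array([-2.0, 0.0, 2.0]), scale=0.01, zero_point=128)
```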

ReduceMax

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

ReduceMean

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

ReduceMin

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

ReduceProd

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

ReduceSum

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

ReduceSumSquare

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 |

Relu

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Relu1

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Relu6

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

ReluMinMax

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Reshape

Datatypes

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UINT_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 6. |
| int8 | Shape: Max supported input rank is 6. |

Resize

Datatypes

| Configuration | in[0] | out[0] | exclude_outside | cubic_coeff |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32 |

Constraints

| Configuration | in[0] | exclude_outside |
|---|---|---|
| fp32 | Shape: Max supported input rank is 5. | Value: exclude_outside must be false |
| int8 | Shape: Max supported input rank is 5. | Value: exclude_outside must be false |

ResizeBilinear

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

ResizeNearestNeighbor

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| int8 | Shape: Max supported input rank is 5. |

RmsNorm

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 8. |
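RMS normalization divides the input by the root-mean-square over the normalization axis; treating in[1] as a scale (gamma) and in[2] as a shift (beta) is an assumption about this op's input layout.

```python
import numpy as np

def rms_norm(x, gamma, beta=0.0, epsilon=1e-6, axis=-1):
    # Normalize by sqrt(mean(x^2) + eps), then apply the assumed
    # scale (gamma) and shift (beta) tensors.
    rms = np.sqrt(np.mean(np.square(x), axis=axis, keepdims=True) + epsilon)
    return x / rms * gamma + beta

y = rms_norm(np.array([3.0, 4.0]), gamma=1.0)  # output has unit RMS
```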

RoiAlign

Datatypes

| Configuration | in[0] | in[1] | out[0] | aligned | allow_invalid_roi |
|---|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |

Support

Configuration

fp32

  • Param aligned only supports default value

  • Param allow_invalid_roi only supports default value

RoiPooling

Datatypes

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

ScatterElements

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 4. |
| int8 | Shape: Max supported input rank is 4. |

ScatterNd

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |
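ScatterNd writes the update values (in[2]) into a copy of the data tensor (in[0]) at the index tuples given by in[1], whose last dimension addresses the leading dimensions of the data. A minimal sketch:

```python
import numpy as np

def scatter_nd(data, indices, updates):
    # Each row of `indices` is a partial index into `data`; the matching
    # slice of `updates` overwrites that location in a copy of `data`.
    out = data.copy()
    k = indices.shape[-1]
    for idx, upd in zip(indices.reshape(-1, k),
                        updates.reshape(-1, *data.shape[k:])):
        out[tuple(idx)] = upd
    return out

out = scatter_nd(np.zeros(5), np.array([[1], [3]]), np.array([9.0, 7.0]))
```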

Shape

Datatypes

| Configuration | in[0] | out[0] | start | end |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |

Sigmoid

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

Softmax

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] | out[0] | axis |
|---|---|---|---|
| fp32 | Shape: Max supported input rank is 5. | Shape: Max supported input rank is 5. | If rank == 5, only the default axis (N-1) is supported |
| int8 | Shape: Max supported input rank is 4. | Shape: Max supported input rank is 4. | If rank == 5, only the default axis (N-1) is supported |
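For reference, the standard numerically stable softmax over a chosen axis; per the constraints above, rank-5 fp32 inputs must use the default last axis (N-1).

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the per-slice max before exponentiating for stability.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

p = softmax(np.array([[1.0, 1.0, 1.0]]))  # uniform logits give uniform output
```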

SpaceToBatch

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

SpaceToDepth

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | block_size | mode |
|---|---|---|
| fp32 | Value: block_size[0] and block_size[1] should be equal. | Only supports modes QNN_OP_SPACE_TO_DEPTH_MODE_DCR and QNN_OP_SPACE_TO_DEPTH_MODE_CRD |
| int8 | Value: block_size[0] and block_size[1] should be equal. | Only supports modes QNN_OP_SPACE_TO_DEPTH_MODE_DCR and QNN_OP_SPACE_TO_DEPTH_MODE_CRD |

Support

Configuration

fp32

  • Param mode only supports default value
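The DCR and CRD modes differ only in how the folded block elements are ordered along the output channel axis. An NHWC sketch follows; the layout and the (block, block, C) vs (C, block, block) flattening orders are assumptions mirroring the common DCR/CRD convention.

```python
import numpy as np

def space_to_depth(x, block_size, mode="DCR"):
    # Fold each block_size x block_size spatial tile into channels.
    n, h, w, c = x.shape
    b = block_size
    x = x.reshape(n, h // b, b, w // b, b, c)
    if mode == "DCR":
        x = x.transpose(0, 1, 3, 2, 4, 5)  # flatten as (b, b, c)
    else:  # CRD
        x = x.transpose(0, 1, 3, 5, 2, 4)  # flatten as (c, b, b)
    return x.reshape(n, h // b, w // b, b * b * c)

y = space_to_depth(np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1), 2)
```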

SparseToDense

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

Split

Datatypes

Configuration

in[0]

out[0..m]

out[0]

fp32

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8

int8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

Squeeze

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UINT_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

Stft

Datatypes

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

StridedSlice

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_64, QNN_DATATYPE_UINT_8, QNN_DATATYPE_INT_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 6. |
| int8 | Shape: Max supported input rank is 6. |

Tanh

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Tile

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

TopK

Datatypes

| Configuration | in[0] | out[0] | largest |
|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_BOOL_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 |

Support

Configuration

fp32

  • Param largest only supports default value
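TopK returns the k largest (or smallest) values and their indices along the last axis. Since `largest` only supports its default, the sketch defaults to largest=True; treating that as the default is an assumption.

```python
import numpy as np

def top_k(x, k, largest=True):
    # Sort indices by descending (or ascending) value and keep the
    # first k along the last axis.
    order = np.argsort(-x if largest else x, axis=-1)[..., :k]
    return np.take_along_axis(x, order, axis=-1), order

values, indices = top_k(np.array([1.0, 5.0, 3.0]), k=2)
```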

Transpose

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32, QNN_DATATYPE_BOOL_8 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] | out[0] |
|---|---|---|
| fp32 | Shape: Max supported input rank is 6. | |
| int8 | Shape: Max supported input rank is 6. | Shape: Max supported input rank is 5. |

TransposeConv2d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32, QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 |

TransposeConv3d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
|---|---|---|---|---|
| fp32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| int8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

| Configuration | in[0] |
|---|---|
| fp32 | Shape: Max supported input rank is 5. |
| int8 | Shape: Max supported input rank is 5. |

UnPack

Datatypes

Configuration

in[0]

out[0..m]

out[0]

fp32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

int8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8