HTA Backend Op Definition Supplement

Argmax

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_INT_32

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
      • shape(input)[axis] must be less than INT32_MAX
    out[0]:
      • Shape: Rank <= 4
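As a non-normative illustration, the Argmax constraints above can be expressed as a small validity check. The helper name and list-based shape convention are assumptions for this sketch, not part of the QNN API:

```python
# Hypothetical helper; not part of the QNN API.
INT32_MAX = 2**31 - 1

def argmax_config_ok(shape, axis):
    """Check the HTA Argmax constraints: input rank at most 4 and the
    size of the reduced axis strictly below INT32_MAX."""
    return len(shape) <= 4 and shape[axis] < INT32_MAX
```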

Concat

Datatypes

  Configuration: All
    in[0..m]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]:   QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0..m]:
      • Shape: Rank <= 4
      • All input dimensions must match, except along the concatenation dimension (channel only). If any input's channel count is not aligned to 32, the total number of channels cannot exceed 4096.
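The channel-alignment rule above can be sketched as a check over the per-input channel counts. This is an illustrative helper under the stated reading of the constraint, not a QNN API function:

```python
# Hypothetical helper; not part of the QNN API.
def concat_channels_ok(channel_counts):
    """Check the HTA Concat channel constraint: when every input's
    channel count is a multiple of 32 there is no extra limit, but if
    any input is unaligned, the summed channels must not exceed 4096."""
    if all(c % 32 == 0 for c in channel_counts):
        return True
    return sum(channel_counts) <= 4096
```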

Conv2d

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8
    in[2]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    reuse_sparse_indices: QNN_DATATYPE_BOOL_8

Support

  Configuration: All
    • Param reuse_sparse_indices only supports its default value

DepthWiseConv2d

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8
    in[2]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

ElementWiseAdd

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Supports Rank in range [1, 4]
    in[1]:
      • Shape: Supports Rank in range [1, 4]
    out[0]:
      • Shape: Supports Rank in range [1, 4]

ElementWiseBinary

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Supports Rank in range [1, 4]
    in[1]:
      • Shape: Supports Rank in range [1, 4]
    out[0]:
      • Shape: Supports Rank in range [1, 4]
    operation:
      • Supported operations: ElementWiseAdd, ElementWiseMultiply, ElementWiseSubtract
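The restricted `operation` set above can be modeled as a dispatch table that rejects anything outside the three supported values. The function and table names here are illustrative, not QNN API symbols:

```python
import operator

# Hypothetical dispatch table; the keys mirror the three operation
# values the HTA backend accepts for ElementWiseBinary.
SUPPORTED_BINARY_OPS = {
    "ElementWiseAdd": operator.add,
    "ElementWiseMultiply": operator.mul,
    "ElementWiseSubtract": operator.sub,
}

def apply_binary_op(name, a, b):
    """Apply one of the supported elementwise operations, rejecting
    anything outside the HTA-supported set."""
    if name not in SUPPORTED_BINARY_OPS:
        raise ValueError(f"operation {name!r} is not supported on HTA")
    return SUPPORTED_BINARY_OPS[name](a, b)
```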

ElementWiseMultiply

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Supports Rank in range [1, 4]
    in[1]:
      • Shape: Supports Rank in range [1, 4]
    out[0]:
      • Shape: Supports Rank in range [1, 4]

ElementWiseNeuron

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
      • Dynamic Shape: Dynamic dims not supported.
    out[0]:
      • Shape: Rank <= 4
      • Dynamic Shape: Dynamic dims not supported.
    operation:
      • Supported operations: HardSwish, Relu, ReluMinMax, Sigmoid, Tanh

ElementWiseSubtract

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Supports Rank in range [1, 4]
    in[1]:
      • Shape: Supports Rank in range [1, 4]
    out[0]:
      • Shape: Supports Rank in range [1, 4]

FullyConnected

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8
    in[2]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
      • The X dimension cannot exceed 32 for 8-bit tensors, or 16 for 16-bit tensors. The Y dimension cannot exceed 1024.
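The FullyConnected input limits can be sketched as a single check. This is a hypothetical helper: it assumes X is the innermost dimension and Y the next one, which may not match the backend's actual layout convention:

```python
# Hypothetical helper; assumes X = innermost dimension, Y = the one
# before it. Not part of the QNN API.
def fc_input_ok(shape, bit_width):
    """Check the HTA FullyConnected in[0] limits: rank <= 4,
    X <= 32 (8-bit) or 16 (16-bit), and Y <= 1024."""
    if not 1 <= len(shape) <= 4:
        return False
    x = shape[-1]
    y = shape[-2] if len(shape) >= 2 else 1
    x_limit = 32 if bit_width == 8 else 16
    return x <= x_limit and y <= 1024
```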

HardSwish

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4

PoolAvg2d

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Rank <= 4
    count_pad_for_edges:
      • Does not support PADDING_TYPE_VALID with explicit padding when count_pad_for_edges is zero
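The unsupported padding combination above can be captured in a small predicate. The names here are illustrative placeholders, not QNN API symbols:

```python
# Hypothetical helper; names are illustrative, not QNN API symbols.
def avg_pool_padding_ok(padding_type, pad_amounts, count_pad_for_edges):
    """Reject the unsupported PoolAvg2d combination: PADDING_TYPE_VALID
    with explicit (nonzero) padding while count_pad_for_edges is zero."""
    explicit = any(p != 0 for p in pad_amounts)
    if padding_type == "PADDING_TYPE_VALID" and explicit and count_pad_for_edges == 0:
        return False
    return True
```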

PoolMax2d

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Rank <= 4

Prelu

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

ReduceMean

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Rank <= 4

Relu

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4

Relu6

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4

ReluMinMax

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Same as in[0]

Reshape

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Supports Rank in range [1, 4]
      • Dynamic Shape: Dynamic dims not supported.
    in[1]:
      • This input is not supported.
    out[0]:
      • Shape: Same as in[0]
      • Dynamic Shape: Dynamic dims not supported.

ResizeBilinear

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    antialias: QNN_DATATYPE_BOOL_8

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Rank <= 4

Support

  Configuration: All
    • Param antialias only supports its default value

ResizeNearestNeighbor

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
    out[0]:
      • Shape: Rank <= 4

Sigmoid

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
      • Dynamic Shape: Dynamic dims not supported.
    out[0]:
      • Dynamic Shape: Dynamic dims not supported.

Softmax

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4
      • Dynamic Shape: Dynamic dims not supported.
    out[0]:
      • Dynamic Shape: Dynamic dims not supported.

Support

  Configuration: All
    • Param axis only supports its default value

Tanh

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[0]:
      • Shape: Rank <= 4

TransposeConv2d

Datatypes

  Configuration: All
    in[0]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    in[1]:  QNN_DATATYPE_UFIXED_POINT_8
    in[2]:  QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_32
    out[0]: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16

Constraints

  Configuration: All
    in[1]:
      • Only supports QuantizationEncodingType HTA_TENSOR_QUANTIZATION_ENCODING_TYPE_SCALE_OFFSET
    out[0]:
      • Only supports encodingDefinition QNN_DEFINITION_DEFINED