HTP Backend Op Definition Supplement

Argmax

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32

Constraints

Configuration | in[0] | out[0]
All | Shape: Max supported input rank is 4. | Shape: Max supported output rank is 4.

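Argmax (and the analogous Argmin, below) reduces one axis of the input to indices. As the tables above show, indices come back as 32-bit integers and input rank is capped at 4. A minimal NumPy sketch of these semantics (`argmax_ref` is an illustrative helper, not a QNN API):

```python
import numpy as np

def argmax_ref(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Reference semantics for HTP Argmax: 32-bit integer indices,
    input rank capped at 4. Argmin is identical with np.argmin."""
    if x.ndim > 4:
        raise ValueError("HTP Argmax supports a max input rank of 4")
    return np.argmax(x, axis=axis).astype(np.int32)

x = np.array([[0.1, 0.9, 0.4],
              [0.7, 0.2, 0.5]], dtype=np.float32)
idx = argmax_ref(x, axis=1)
```
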
Argmin

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32

Constraints

Configuration | in[0] | out[0]
All | Shape: Max supported input rank is 4. | Shape: Max supported output rank is 4.

Batchnorm

Datatypes

Configuration | in[0] | in[1] | in[2] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration | in[0] | out[0]
All | Shape: Supports input of rank between 1 and 4. | Shape: Max supported output rank is 4.

Quantization

Configuration | in[1] | in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric
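
The Batchnorm datatype rows above pair a higher-precision activation with lower-precision per-channel weight (in[1]) and bias (in[2]) tensors. Functionally the op applies a per-channel scale and shift; a float sketch, assuming the folded form y = x * w + b with w and b broadcast along the trailing channel axis (an assumption; check the op spec for the exact formulation):

```python
import numpy as np

def batchnorm_ref(x, weight, bias):
    """Sketch of a folded batchnorm: per-channel scale and shift applied
    along the last (channel) axis. Mean/variance are assumed already
    folded into `weight` and `bias`."""
    if not (1 <= x.ndim <= 4):
        raise ValueError("HTP Batchnorm supports input ranks 1 through 4")
    return x * weight + bias  # broadcasts over the trailing channel axis

x = np.ones((1, 2, 2, 3), dtype=np.float32)
w = np.array([2.0, 0.5, 1.0], dtype=np.float32)
b = np.array([0.0, 1.0, -1.0], dtype=np.float32)
y = batchnorm_ref(x, w, b)
```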

BatchToSpace

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8

Quantization

Configuration | out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True
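
BatchToSpace moves spatial blocks from the batch dimension back into height and width; the tables above only constrain element types and encodings. The rearrangement itself can be sketched in NumPy (NHWC layout assumed, crop handling omitted; `batch_to_space_ref` is an illustrative helper, not a QNN API):

```python
import numpy as np

def batch_to_space_ref(x, block_h, block_w):
    """Sketch of BatchToSpace on NHWC data: blocks of the batch
    dimension are interleaved back into height and width."""
    n, h, w, c = x.shape
    out_n = n // (block_h * block_w)
    y = x.reshape(block_h, block_w, out_n, h, w, c)
    y = y.transpose(2, 3, 0, 4, 1, 5)  # out_n, h, block_h, w, block_w, c
    return y.reshape(out_n, h * block_h, w * block_w, c)

x = np.arange(16, dtype=np.float32).reshape(4, 1, 2, 2)
y = batch_to_space_ref(x, 2, 2)
```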

Buffer

Datatypes

Configuration | in[0] | in[1] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_16

Constraints

Configuration | mode
All | Value: Only supported modes are 1 and 2.

Cast

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_8
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_BOOL_8
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_UINT_8
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_FLOAT_16
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_BOOL_8
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_FLOAT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_BOOL_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_8
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UFIXED_POINT_8
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_BOOL_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_64
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_BOOL_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_64
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32
OTHERS | QNN_DATATYPE_INT_64 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_64 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_8
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_BOOL_8
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_FLOAT_16
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_16

Constraints

Configuration | in[0] | out[0]
All | - | Shape: Max supported output rank is 5.
FP16 | Shape: Max supported input rank is 5. | -
INT16 | QNN_DATATYPE_UINT_32 values must be in the range 0..INT32_MAX. Shape: Max supported input rank is 5. | -
INT8 | QNN_DATATYPE_UINT_32 values must be in the range 0..INT32_MAX. Shape: Max supported input rank is 5. | -
INT8 | Shape: Max supported input rank is 4. | -
OTHERS | QNN_DATATYPE_UINT_32 values must be in the range 0..INT32_MAX. Shape: Max supported input rank is 5. | -

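A NumPy sketch of the element conversion and of the uint32 range constraint from the table above (`cast_ref` is an illustrative helper, not a QNN API; the backend itself rejects out-of-range graphs):

```python
import numpy as np

def cast_ref(x, dst_dtype):
    """Sketch of Cast semantics: plain element-type conversion, with the
    INT16/INT8/OTHERS constraint that uint32 values stay within
    0..INT32_MAX checked as an illustration."""
    if x.dtype == np.uint32 and x.size and int(x.max()) > np.iinfo(np.int32).max:
        raise ValueError("uint32 inputs must be in the range 0..INT32_MAX")
    return x.astype(dst_dtype)

x = np.array([1, 2, 3], dtype=np.uint32)
y = cast_ref(x, np.float32)
```
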
ChannelShuffle

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration | in[0] | out[0]
All | Shape: Max supported input rank is 4. | Shape: Max supported output rank is 4.

Quantization

Configuration | out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True
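
ChannelShuffle only rearranges values across channel groups, which is consistent with the Math Invariant flag on the output encodings above. The usual reshape-transpose-reshape formulation, sketched in NumPy (NHWC layout assumed; `channel_shuffle_ref` is an illustrative helper):

```python
import numpy as np

def channel_shuffle_ref(x, groups):
    """Sketch of ChannelShuffle on NHWC data: split channels into
    `groups`, swap the group axes, and flatten back."""
    *lead, c = x.shape
    if c % groups:
        raise ValueError("channel count must be divisible by groups")
    y = x.reshape(*lead, groups, c // groups)
    y = np.swapaxes(y, -1, -2)
    return y.reshape(*lead, c)

x = np.arange(6, dtype=np.float32).reshape(1, 1, 1, 6)
y = channel_shuffle_ref(x, groups=2)  # channels 0..5 -> 0,3,1,4,2,5
```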

Concat

Datatypes

Configuration | in[0..m] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8

Constraints

Configuration | in[0..m] | out[0]
All | Shape: Max supported input rank is 5. | Shape: Max supported output rank is 5.
INT16 | Updateable quantization is supported. Dynamic Shape: Supported only on the width dimension. | Updateable quantization is supported. Dynamic Shape: Supported only on the width dimension.
INT8 | Dynamic Shape: Supported only on the width dimension. Updateable quantization is supported. | Dynamic Shape: Supported only on the width dimension. Updateable quantization is supported.

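Concat joins a variable number of inputs in[0..m] along one axis, with rank capped at 5 per the constraints above. A NumPy sketch (`concat_ref` is an illustrative helper):

```python
import numpy as np

def concat_ref(inputs, axis):
    """Sketch of Concat: inputs share dtype and rank (max rank 5) and
    match in shape except along `axis`."""
    if any(t.ndim > 5 for t in inputs):
        raise ValueError("HTP Concat supports a max rank of 5")
    return np.concatenate(inputs, axis=axis)

a = np.zeros((1, 2, 3), dtype=np.float32)
b = np.ones((1, 2, 2), dtype=np.float32)
y = concat_ref([a, b], axis=2)
```
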
Conv2d

Datatypes

Configuration | in[0] | in[1] | in[2] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration | in[0] | in[1] | in[2] | out[0]

FP16

  • Updateable tensor is supported.

  • Updateable tensor is supported.

  • Updateable tensor is supported.

  • Updateable tensor is supported.

FP16

  • QNN_DATATYPE_SFIXED_POINT_8 data is supported if the quantization encoding is QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET, or QNN_QUANTIZATION_ENCODING_BLOCK. In that case the weight tensor must be static and its height and width must be [1, 1].

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding it must all have static inputs.

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. The weight tensor must be static, or the chain of ops preceding it must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires a minimum arch of v69.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • For the channel scaling case, bias quantization encodings are expected to be scale = 0, offset = 0.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized. For the channel scaling case, bias quantization encodings are expected to be scale = 0, offset = 0.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT16

  • QNN_QUANTIZATION_ENCODING_VECTOR is supported with expansion on the channel axis only, and the vector-quantized weights are expected to be signed and symmetrically quantized. The weight tensor must be static, or the chain of ops preceding it must all have static inputs. QNN_QUANTIZATION_ENCODING_VECTOR requires a minimum arch of v69. The vector dimension should be 2, the vector stride should be 2, and the index bitwidth should be 6. Additionally, the filter dimensions should be divisible by the VQ block size, and the block size should be a multiple of 32.

INT16

  • Dynamic Shape: Supported only on the width dimension.

  • Dynamic Shape: Supported only on the width dimension.

INT16

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported on the channel axis only, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding it must all have static inputs. 16-bit weights must be symmetrically quantized. 16-bit activations with 16-bit weights require a minimum arch of v73.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0] when in[1] uses 16 bit precision.

INT8

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding it must all have static inputs.

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. The weight tensor must be static, or the chain of ops preceding it must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires a minimum arch of v69.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • For the channel scaling case, bias quantization encodings are expected to be scale = 0, offset = 0.

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT8

  • Dynamic Shape: Supported only on the width dimension.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension.
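
The Quantization table that follows lists per-channel weight encodings for in[1] (AXIS_SCALE_OFFSET / BW_AXIS_SCALE_OFFSET on axis 3, symmetric). A sketch of symmetric per-channel quantization over axis 3, assuming an [H, W, Cin, Cout] weight layout so that axis 3 is the output-channel axis (the layout and the helper name are illustrative assumptions):

```python
import numpy as np

def quantize_weights_per_channel(w, axis=3, bits=8):
    """Sketch of symmetric per-channel weight quantization: one scale
    per slice along `axis`, offset fixed at 0 (symmetric encoding)."""
    qmax = 2 ** (bits - 1) - 1
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    absmax = np.abs(w).max(axis=reduce_axes, keepdims=True)
    scale = np.where(absmax > 0, absmax / qmax, 1.0)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, np.squeeze(scale)

w = np.random.default_rng(0).normal(size=(3, 3, 4, 8)).astype(np.float32)
q, scales = quantize_weights_per_channel(w)
```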

Quantization

Configuration | in[1] | in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

Conv3d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
| --- | --- | --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

dilation

All

  • Shape: Input rank must be 5.

  • Shape: Weight rank must be 5.

  • Shape: Output rank must be 5.

  • Value: Dilation is only supported when the stride in the height and width dimensions is 1.

  • Value: Only a dilation of 1 is supported in the depth dimension.

INT16

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET is supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized.

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized.

INT8

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET is supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized.

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized.
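The Conv3d dilation constraints above can be expressed as a small validity check. A sketch, assuming [depth, height, width] ordering for the stride and dilation arrays (the ordering is an assumption of this sketch, not stated by the table):

```c
#include <assert.h>
#include <stdbool.h>

/* Conv3d dilation rules from the constraints above:
 *  - dilation in the depth dimension must always be 1;
 *  - dilation (other than 1) in height/width is only supported when the
 *    height and width strides are both 1.
 * Arrays are ordered [depth, height, width]. */
static bool conv3d_dilation_ok(const int stride[3], const int dilation[3]) {
    if (dilation[0] != 1)
        return false; /* depth dilation must be 1 */
    bool dilated = (dilation[1] != 1) || (dilation[2] != 1);
    if (dilated && (stride[1] != 1 || stride[2] != 1))
        return false; /* dilation requires unit stride in height and width */
    return true;
}
```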

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

Convert

Datatypes

| Configuration | in[0] | out[0] |
| --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UFIXED_POINT_8 |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_BOOL_8 |
| INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UINT_8 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_SFIXED_POINT_16 |

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width and channel dimensions.

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

CreateSparse

Datatypes

| Configuration | in[0] | in[1] | out[0] |
| --- | --- | --- | --- |
| INT8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

Configuration

out[0]

All

  • Shape: Max supported sparse output rank is 5.

CumulativeSum

Datatypes

| Configuration | in[0] | out[0] | axis | exclusive | reverse |
| --- | --- | --- | --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 | | |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 | | |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |

Constraints

Configuration

in[0]

out[0]

axis

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Value: Must be in the range [0, N - 1], where N is the rank of the input tensor.

INT16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 4.
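The exclusive and reverse parameters in the tables above control whether each element includes itself in the running sum and the direction of accumulation. A sketch of the per-axis inner loop for a 1-D buffer (axis handling for higher ranks is omitted):

```c
#include <assert.h>

/* Cumulative sum over a 1-D buffer, honoring the exclusive and reverse
 * parameters: exclusive makes each output the sum of the elements strictly
 * before it, reverse accumulates from the end of the buffer. */
static void cumsum_1d(const int *in, int *out, int n,
                      int exclusive, int reverse) {
    int acc = 0;
    for (int i = 0; i < n; ++i) {
        int idx = reverse ? n - 1 - i : i;
        if (exclusive) {
            out[idx] = acc; /* element excludes itself */
            acc += in[idx];
        } else {
            acc += in[idx];
            out[idx] = acc;
        }
    }
}
```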

DepthToSpace

Datatypes

| Configuration | in[0] | out[0] |
| --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

DepthWiseConv2d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
| --- | --- | --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

dilation

FP16

  • Value: Dilation is only supported when stride = [1,1].

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. A 16-bit weight requires a 16-bit activation and must be symmetrically quantized; 16-bit activations with 16-bit weights require a minimum arch of V73.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized. For the per-channel scaling case, the bias quantization encoding is expected to have scale = 0 and offset = 0.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

  • Value: Dilation is only supported when stride = [1,1], or with a dilation of [2,2] when the stride is [2,2] and the kernel size is [3,3].

INT16

  • Updateable quantization is supported.

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT8

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized. For the per-channel scaling case, the bias quantization encoding is expected to have scale = 0 and offset = 0.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

  • Value: Dilation is only supported when stride = [1,1], or with a dilation of [2,2] when the stride is [2,2] and the kernel size is [3,3].

INT8

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. A 16-bit weight requires a 16-bit activation and must be symmetrically quantized; 16-bit activations with 16-bit weights require a minimum arch of V73.
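The DepthWiseConv2d dilation rule above (dilation only with stride [1,1], or the special [2,2]-dilation case) can be sketched as a predicate. Arrays are assumed to be ordered [height, width]; the helper name is illustrative, not part of the QNN API:

```c
#include <assert.h>
#include <stdbool.h>

/* DepthWiseConv2d dilation rule from the constraints above: dilation is
 * supported when stride = [1,1], or as the single special case of
 * dilation [2,2] with stride [2,2] and kernel [3,3]. */
static bool dwconv2d_dilation_ok(const int stride[2], const int dilation[2],
                                 const int kernel[2]) {
    bool dilated = (dilation[0] != 1) || (dilation[1] != 1);
    if (!dilated)
        return true; /* no dilation: always fine */
    if (stride[0] == 1 && stride[1] == 1)
        return true; /* unit stride: any dilation */
    return dilation[0] == 2 && dilation[1] == 2 &&
           stride[0] == 2 && stride[1] == 2 &&
           kernel[0] == 3 && kernel[1] == 3;
}
```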

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8; Encoding: AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_UFIXED_POINT_8; Encoding: BW_AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

INT8

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_UFIXED_POINT_8; Encoding: AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_UFIXED_POINT_8; Encoding: BW_AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: AXIS_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: BW_AXIS_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: BW_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: SCALE_OFFSET; Symmetry: Symmetric

INT8

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_8; Encoding: AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_8; Encoding: BW_AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

INT8

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_8; Encoding: AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_8; Encoding: BW_AXIS_SCALE_OFFSET; Axis: 3; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: AXIS_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: BW_AXIS_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: BW_SCALE_OFFSET; Symmetry: Symmetric

  • Quantization Parameters Config: Datatype: QNN_DATATYPE_SFIXED_POINT_32; Encoding: SCALE_OFFSET; Symmetry: Symmetric
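
The AXIS_SCALE_OFFSET configurations above describe per-channel symmetric quantization along axis 3 (one scale per output channel, zero offset). As a rough illustration of what such parameters mean, here is a minimal NumPy sketch; the function names and the HWIO weight layout are illustrative assumptions, not part of the QNN API.

```python
import numpy as np

def symmetric_axis_scales(weights, axis=3, bw=8):
    # Symmetric encoding: the offset is 0, and each channel's scale maps
    # the per-channel max |w| onto the signed bw-bit range (bw comes from
    # the BW_* encodings; plain AXIS_SCALE_OFFSET implies the tensor width).
    qmax = 2 ** (bw - 1) - 1                      # e.g. 127 for bw=8
    reduce_axes = tuple(i for i in range(weights.ndim) if i != axis)
    absmax = np.abs(weights).max(axis=reduce_axes)
    return absmax / qmax

def quantize_symmetric(weights, scales, axis=3, bw=8):
    qmax = 2 ** (bw - 1) - 1
    shape = [1] * weights.ndim
    shape[axis] = -1                              # broadcast scales along axis
    q = np.round(weights / scales.reshape(shape))
    return np.clip(q, -qmax - 1, qmax).astype(np.int32)

w = np.random.default_rng(0).normal(size=(3, 3, 8, 16))  # axis 3 = output channels
s = symmetric_axis_scales(w, axis=3, bw=8)
qw = quantize_symmetric(w, s, axis=3, bw=8)
```

The per-channel scale vector has one entry per slice along axis 3, which is why the configs above pin Axis to 3.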

Dequantize

Datatypes

Configuration | in[0] | out[0]
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_16
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

out[0]

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.
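
Dequantize maps fixed-point codes back to real values. A minimal sketch using the common affine convention real ≈ scale · (q − zero_point); the exact offset sign convention is defined by the QNN headers, so treat this as illustrative.

```python
import numpy as np

def dequantize(q, scale, zero_point, out_dtype=np.float32):
    # Affine dequantization: real ≈ scale * (q - zero_point).
    # q may be uint8/int8/uint16/int16, matching the table above;
    # output is FLOAT_16 or FLOAT_32 per the chosen configuration.
    return (q.astype(np.int64) - zero_point).astype(out_dtype) * out_dtype(scale)

q = np.array([0, 128, 255], dtype=np.uint8)
x = dequantize(q, scale=0.5, zero_point=128)
```

With scale 0.5 and zero point 128, code 128 maps to 0.0 and the uint8 range covers roughly [-64, 63.5].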

DetectionOutput

Datatypes

Configuration | in[0] | in[1] | in[2] | out[0] | out[1] | out[2] | out[3]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
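
The four-output layout in the table (boxes in the input's encoding, FLOAT_32 scores, INT_32 class ids, and a UINT_32 valid-detection count) is typical of SSD-style post-processing. A toy sketch of score-threshold filtering with fixed-size padded outputs follows; the function name and parameters are illustrative, and real DetectionOutput also performs box decoding and non-maximum suppression, which are omitted here.

```python
import numpy as np

def detection_filter(boxes, scores, threshold=0.5, max_det=10):
    # boxes: (N, 4), scores: (N, num_classes).
    # Keep detections whose best class score clears the threshold,
    # padded to max_det, mimicking the four outputs in the table.
    cls = scores.argmax(axis=-1).astype(np.int32)   # out[2]: class ids
    best = scores.max(axis=-1)
    keep = np.flatnonzero(best >= threshold)[:max_det]
    n = keep.size
    out_boxes = np.zeros((max_det, 4), dtype=boxes.dtype)   # out[0]
    out_scores = np.zeros(max_det, dtype=np.float32)        # out[1]
    out_cls = np.zeros(max_det, dtype=np.int32)             # out[2]
    out_boxes[:n] = boxes[keep]
    out_scores[:n] = best[keep]
    out_cls[:n] = cls[keep]
    return out_boxes, out_scores, out_cls, np.uint32(n)     # out[3]: count

s = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.3, 0.5],
              [0.1, 0.8, 0.1]], dtype=np.float32)
b = np.arange(12, dtype=np.float32).reshape(3, 4)
ob, osc, oc, n = detection_filter(b, s, threshold=0.6, max_det=4)
```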

ElementWiseAbs

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.
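
For the asymmetric UFIXED_POINT_8 configuration, Abs cannot simply negate the stored codes: it must act on the real values implied by the encoding. A sketch under the common affine convention (scale and zero point are illustrative):

```python
import numpy as np

def quantized_abs_u8(q, scale, zero_point):
    # Dequantize, take |x| in real space, then requantize back into the
    # same (scale, zero_point) encoding, saturating to the uint8 range.
    x = (q.astype(np.int32) - zero_point) * scale
    y = np.abs(x)
    out = np.round(y / scale) + zero_point
    return np.clip(out, 0, 255).astype(np.uint8)

q = np.array([0, 100, 128, 200], dtype=np.uint8)
r = quantized_abs_u8(q, scale=0.1, zero_point=128)
```

Note that code 0 (real value −12.8) saturates to 255 on the way back, since +12.8 lies just outside the representable range of this encoding.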

ElementWiseAdd

Datatypes

Configuration | in[0] | in[1] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • Shape: Supports input of Rank between 1 and 5.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • Shape: Supports output of Rank between 1 and 5.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • Shape: Supports input of Rank between 1 and 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • Shape: Supports output of Rank between 1 and 5.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.
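
Quantized addition cannot be done directly on the stored codes when the two inputs (or the output) use different scales: the values are added in real space and then requantized into the output encoding. A minimal NumPy sketch with illustrative scale/offset values, also showing the rank-1-to-5 broadcasting the constraints allow:

```python
import numpy as np

def quantized_add_u8(qa, qb, sa, za, sb, zb, so, zo):
    # Dequantize both operands, add in real space, requantize to the
    # output's (so, zo) encoding. NumPy broadcasting stands in for the
    # backend's shape broadcasting between rank-1..5 tensors.
    x = (qa.astype(np.int32) - za) * sa
    y = (qb.astype(np.int32) - zb) * sb
    out = np.round((x + y) / so) + zo
    return np.clip(out, 0, 255).astype(np.uint8)

a = np.array([[10, 200]], dtype=np.uint8)   # rank 2
b = np.array([50], dtype=np.uint8)          # rank 1, broadcast
r = quantized_add_u8(a, b, sa=0.05, za=0, sb=0.05, zb=0, so=0.1, zo=0)
```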

ElementWiseAnd

Datatypes

Configuration | in[0] | in[1] | out[0]
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

INT8

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.

OTHERS

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.
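
Logical AND treats elements as truth values; in the BOOL_8 configuration the result is stored as 0/1 in a byte. A small sketch of that behavior (the nonzero-means-true convention for the quantized configurations is an assumption here, not stated by the table):

```python
import numpy as np

def elementwise_and(a, b):
    # Any nonzero element is treated as true; the output is 0/1 stored
    # in uint8, matching a BOOL_8-style result.
    return np.logical_and(a, b).astype(np.uint8)

a = np.array([0, 1, 7, 0], dtype=np.uint8)
b = np.array([1, 1, 0, 0], dtype=np.uint8)
r = elementwise_and(a, b)
```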

ElementWiseAsin

Datatypes

Configuration | in[0] | out[0]
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

INT16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT8

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.
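
Asin is offered only in quantized configurations here. Quantized unary functions like this are commonly lowered to a lookup table over all input codes; a NumPy sketch for the 8-bit case follows (the LUT lowering and the chosen scales/zero points are illustrative assumptions, and the input is clipped to asin's [-1, 1] domain):

```python
import numpy as np

def asin_u8_lut(scale_in, zp_in, scale_out, zp_out):
    # Build a 256-entry table: dequantize every possible input code,
    # apply arcsin, requantize into the output encoding.
    codes = np.arange(256)
    x = np.clip((codes - zp_in) * scale_in, -1.0, 1.0)  # asin domain
    y = np.arcsin(x)
    return np.clip(np.round(y / scale_out) + zp_out, 0, 255).astype(np.uint8)

lut = asin_u8_lut(scale_in=1 / 128, zp_in=128,
                  scale_out=np.pi / 2 / 127, zp_out=128)
q = np.array([0, 128, 255], dtype=np.uint8)
r = lut[q]   # applying the op is a single gather
```

Since arcsin is monotonic, the resulting table is monotonically non-decreasing, which makes it cheap to validate.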

ElementWiseAtan

Datatypes

Configuration | in[0] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.
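
A quick sketch of the FP16 configuration over a rank-4 tensor (the maximum rank allowed above). Computing in float32 and storing activations in float16 is a common pattern for FP16 pipelines, but it is an assumption here, not something the table specifies:

```python
import numpy as np

# Elementwise atan over a rank-4 float16 tensor; the intermediate math
# runs in float32 for accuracy, and the result is stored back as float16.
x = np.linspace(-4.0, 4.0, 16, dtype=np.float16).reshape(2, 2, 2, 2)
y = np.arctan(x.astype(np.float32)).astype(np.float16)
```

Unlike Asin, atan is defined for all real inputs, so no domain clipping is needed; outputs always lie in (-π/2, π/2).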

ElementWiseBinary

Datatypes

Configuration | in[0] | in[1] | out[0] | operation
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
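
The `operation` parameter (a UINT_32 scalar) selects which binary op the node performs, which is why comparison configurations pair numeric inputs with a BOOL_8 output. A toy dispatcher sketch; the enum values below are illustrative, not the real QNN constants:

```python
import numpy as np

# Hypothetical op enum -> NumPy implementation. Comparison ops produce a
# boolean result stored as uint8, mirroring the BOOL_8 output rows above.
OPS = {
    0: np.add,
    1: np.multiply,
    2: np.greater,
}

def elementwise_binary(op, a, b):
    out = OPS[op](a, b)           # NumPy broadcasting stands in for rank-1..5
    if out.dtype == np.bool_:
        out = out.astype(np.uint8)  # BOOL_8 storage
    return out

a = np.array([1.0, 2.0, 3.0], dtype=np.float32)
b = np.array([2.0], dtype=np.float32)   # broadcast
s = elementwise_binary(0, a, b)         # add
g = elementwise_binary(2, a, b)         # greater -> uint8 0/1
```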

Constraints

Configuration

in[0]

in[1]

out[0]

operation

FP16

  • Shape: Supported input rank is between 1 and 4 for all operations, except Multiply, Divide, Equal, NotEqual, Less, LessEqual, Greater, and GreaterEqual, which support rank between 1 and 5.

  • Supported datatypes: all supported operations (ADD, DIVIDE, EQUAL, FLOORDIV, GREATER, GREATEREQUAL, LESS, LESSEQUAL, MAXIMUM, MINIMUM, MULTIPLY, NOTEQUAL, POWER, SQUAREDDIFFERENCE, SUBTRACT) accept QNN_DATATYPE_FLOAT_16 and QNN_DATATYPE_FLOAT_32; AND, FMOD, MOD, OR, and XOR are not supported.

  • Shape: Supported input rank is between 1 and 4 for all operations, except Multiply, Divide, Equal, NotEqual, Less, LessEqual, Greater, and GreaterEqual, which support rank between 1 and 5.

  • Supported datatypes: all supported operations (ADD, DIVIDE, EQUAL, FLOORDIV, GREATER, GREATEREQUAL, LESS, LESSEQUAL, MAXIMUM, MINIMUM, MULTIPLY, NOTEQUAL, POWER, SQUAREDDIFFERENCE, SUBTRACT) accept QNN_DATATYPE_FLOAT_16 and QNN_DATATYPE_FLOAT_32; AND, FMOD, MOD, OR, and XOR are not supported.

  • Shape: Supported output rank is between 1 and 4 for all operations, except Multiply, Divide, Equal, NotEqual, Less, LessEqual, Greater, and GreaterEqual, which support rank between 1 and 5.

  • Supported datatypes: EQUAL, GREATER, GREATEREQUAL, LESS, LESSEQUAL, and NOTEQUAL produce QNN_DATATYPE_BOOL_8; the remaining supported operations (ADD, DIVIDE, FLOORDIV, MAXIMUM, MINIMUM, MULTIPLY, POWER, SQUAREDDIFFERENCE, SUBTRACT) produce QNN_DATATYPE_FLOAT_16 or QNN_DATATYPE_FLOAT_32; AND, FMOD, MOD, OR, and XOR are not supported.

FP16

  • Shape: Supported input rank is between 1 and 4 for all operations, except Multiply, Divide, Add, Power, and Subtract, which support rank between 1 and 5.

  • Shape: Supported input rank is between 1 and 4 for all operations, except Multiply, Divide, Add, Power, and Subtract, which support rank between 1 and 5.

  • Shape: Supported output rank is between 1 and 4 for all operations, except Multiply, Divide, Add, Power, and Subtract, which support rank between 1 and 5.

INT16

  • Shape: Supports input of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support input rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • Shape: Supports input of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support input rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_INT_32, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • Shape: Supports output of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support output rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0] when in[0], in[1], and out[0] do not all share the same type.

  • QNN_DEFINITION_IMPL_GENERATED supported oper types: ADD

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT16

  • Dynamic Shape: Supported only on the width and channel dimensions of ADD, DIVIDE, MULTIPLY, POWER, and SUBTRACT.

  • Dynamic Shape: Supported only on the width and channel dimensions of ADD, DIVIDE, MULTIPLY, POWER, and SUBTRACT.

  • Dynamic Shape: Supported only on the width and channel dimensions of ADD, DIVIDE, MULTIPLY, POWER, and SUBTRACT.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0] when in[0], in[1], and out[0] all share the same type.

INT8

  • Shape: Supports input of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support input rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • Shape: Supports input of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support input rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_INT_32, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • Shape: Supports output of rank between 1 and 5, except for AND, FLOORDIV, and OR, which support output rank between 1 and 4.

  • Supported datatypes:
    • ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32
    • EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    • FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • FMOD: Not supported
    • GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8
    • LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • MOD: Not supported
    • MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32
    • NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8
    • POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16
    • SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16
    • SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32
    • XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0] when in[0], in[1], and out[0] all share the same type.

INT8

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: ADD: QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0] when in[0], in[1], and out[0] do not all share the same type.

OTHERS

  • Shape: Supports input of Rank between 1 and 5. Except for AND, FLOORDIV, OR which support input rank between 1 and 4.

  • Supported datatypes: ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8. DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. FMOD: Not supported. GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MOD: Not supported. MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8.

  • Shape: Supports input of Rank between 1 and 5. Except for AND, FLOORDIV, OR which support input rank between 1 and 4.

  • Supported datatypes: ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8. DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. FMOD: Not supported. GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MOD: Not supported. MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32. OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. POWER: QNN_DATATYPE_INT_32, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8.

  • Shape: Supports output of Rank between 1 and 5. Except for AND, FLOORDIV, OR which support output rank between 1 and 4.

  • Supported datatypes: ADD: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. AND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8. DIVIDE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. EQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8. FLOORDIV: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. FMOD: Not supported. GREATER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. GREATEREQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. LESS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_INT_32, QNN_DATATYPE_BOOL_8. LESSEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. MAXIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MINIMUM: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. MOD: Not supported. MULTIPLY: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_UINT_32, QNN_DATATYPE_INT_32. NOTEQUAL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. OR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_BOOL_8. POWER: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQUAREDDIFFERENCE: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. SUBTRACT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. XOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_BOOL_8.
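The per-op datatype lists above are easiest to consume programmatically. The sketch below (a hypothetical helper, not part of the QNN API) encodes a few rows of the matrix as a lookup table and pre-validates an operator/datatype pair before node creation; the operator and datatype names are taken verbatim from the tables above.

```python
# Hedged sketch: encode a slice of the ElementWiseBinary datatype support
# matrix and validate an (oper_type, datatype) pair against it.
SUPPORTED = {
    # Rows copied from the quantized-configuration table above.
    "ADD": {"QNN_DATATYPE_UFIXED_POINT_8", "QNN_DATATYPE_SFIXED_POINT_8",
            "QNN_DATATYPE_UFIXED_POINT_16", "QNN_DATATYPE_INT_32"},
    "AND": {"QNN_DATATYPE_UFIXED_POINT_8", "QNN_DATATYPE_BOOL_8"},
    "FMOD": set(),  # listed as "Not supported"
}

def is_supported(oper_type: str, dtype: str) -> bool:
    """True when the op/datatype pair appears in the encoded matrix."""
    return dtype in SUPPORTED.get(oper_type, set())

assert is_supported("ADD", "QNN_DATATYPE_INT_32")
assert not is_supported("FMOD", "QNN_DATATYPE_INT_32")
```

Checking against such a table at graph-construction time surfaces unsupported configurations before backend finalization rejects them.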

ElementWiseCeil

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseCos

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseDivide

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseEqual

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseExp

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

ElementWiseFloor

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseFloorDiv

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.

FP16

  • Shape: Supports input of Rank between 1 and 4.

INT16

  • Shape: Supports input of Rank between 1 and 4.

INT8

  • Shape: Supports input of Rank between 1 and 4.
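ElementWiseFloorDiv rounds the quotient toward negative infinity, which differs from truncating division when the operands have opposite signs. A plain-Python illustration of that distinction (illustrative only, not backend code):

```python
import math

def floor_div(a: float, b: float) -> int:
    # Floor division: round the true quotient toward negative infinity.
    return math.floor(a / b)

def trunc_div(a: float, b: float) -> int:
    # Truncating division rounds toward zero instead.
    return math.trunc(a / b)

assert floor_div(7, 2) == 3 and trunc_div(7, 2) == 3  # identical for positives
assert floor_div(-7, 2) == -4                         # floor(-3.5) -> -4
assert trunc_div(-7, 2) == -3                         # trunc(-3.5) -> -3
```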

ElementWiseGreater

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseGreaterEqual

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseLess

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseLessEqual

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseLog

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseMaximum

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseMinimum

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseMultiply

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Supports output of Rank between 1 and 5.

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

ElementWiseNeg

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseNeuron

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4. Except for RELU and SIGMOID, which support max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. GELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. HARDSIGMOID: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. HARDSWISH: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RELUMINMAX: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGMOID: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SOFTPLUS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. TANH: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.

  • Shape: Max supported output rank is 4. Except for RELU and SIGMOID, which support max output rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. GELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. HARDSIGMOID: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. HARDSWISH: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RELU: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RELUMINMAX: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGMOID: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SOFTPLUS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. TANH: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.

INT16

  • Shape: Max supported input rank is 4. Except for HARDSIGMOID, RELU, RELUMINMAX, SIGMOID which support max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension of SIGMOID.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: Relu, ReluMinMax, HardSwish: QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0]. For Relu and ReluMinMax, QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF must be false.

  • Shape: Max supported input rank is 4. Except for HARDSIGMOID, RELU, RELUMINMAX, SIGMOID which support max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension of SIGMOID.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: Relu, ReluMinMax, HardSwish: QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

INT8

  • Shape: Max supported input rank is 4. Except for HARDSIGMOID, RELU, RELUMINMAX, SIGMOID which support max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: Relu, ReluMinMax, HardSwish: QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0]. For Relu and ReluMinMax, QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF must be false.

  • Shape: Max supported input rank is 4. Except for HARDSIGMOID, RELU, RELUMINMAX, SIGMOID which support max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.

  • QNN_DEFINITION_IMPL_GENERATED declared oper types: Relu, ReluMinMax, HardSwish. For these, QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

OTHERS

  • Shape: Max supported input rank is 4, except for RELU, RELUMINMAX, and SIGMOID, which support a max input rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.

  • Shape: Max supported output rank is 4, except for RELU, RELUMINMAX, and SIGMOID, which support a max output rank of 5.

  • Supported datatypes: ELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. GELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. HARDSWISH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELU: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RELUMINMAX: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_INT_32. SIGMOID: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. SOFTPLUS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16. TANH: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_UFIXED_POINT_16.
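The per-operation datatype lists above lend themselves to a simple pre-check before graph finalization. The sketch below is illustrative only: the helper name and dictionary are hypothetical (not part of the QNN API), and the dictionary shows just a subset of the activations tabulated above.

```python
# Hypothetical pre-check: maps a few Eltwise activation operations to the
# quantized input datatypes listed above, so a graph builder can reject an
# unsupported (op, dtype) pair before offloading to the HTP backend.
SUPPORTED_ACTIVATION_DTYPES = {
    "GELU": {"QNN_DATATYPE_UFIXED_POINT_8",
             "QNN_DATATYPE_SFIXED_POINT_8",
             "QNN_DATATYPE_UFIXED_POINT_16"},
    "HARDSIGMOID": {"QNN_DATATYPE_UFIXED_POINT_8",
                    "QNN_DATATYPE_UFIXED_POINT_16"},
    "RELU": {"QNN_DATATYPE_UFIXED_POINT_8",
             "QNN_DATATYPE_SFIXED_POINT_8",
             "QNN_DATATYPE_SFIXED_POINT_16",
             "QNN_DATATYPE_UFIXED_POINT_16"},
}

def activation_supported(op: str, dtype: str) -> bool:
    """Return True if the table above lists `dtype` for `op`."""
    return dtype in SUPPORTED_ACTIVATION_DTYPES.get(op, set())
```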

ElementWiseNot

Datatypes

Configuration

in[0]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseNotEqual

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseOr

Datatypes

Configuration

in[0]

in[1]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

INT8

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

OTHERS

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWisePower

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseRound

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseRsqrt

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseSelect

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.
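ElementWiseSelect is a ternary op: in[0] is the condition, and the output takes the corresponding element of in[1] where the condition is true and of in[2] otherwise. A minimal reference sketch of that semantics (illustrative only; on HTP the condition may also arrive as a quantized or INT_32 tensor, per the datatype table above, with non-zero values treated as true):

```python
# Reference semantics of an element-wise select over flat sequences
# (illustrative sketch, not the HTP implementation).
def elementwise_select(cond, a, b):
    """For each position, pick a[i] when cond[i] is truthy, else b[i]."""
    return [x if c else y for c, x, y in zip(cond, a, b)]
```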

ElementWiseSign

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

INT16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT8

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseSin

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseSquaredDifference

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports input of Rank between 1 and 4.

  • Shape: Supports output of Rank between 1 and 4.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseSquareRoot

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ElementWiseSubtract

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT16

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

INT8

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

OTHERS

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports input of Rank between 1 and 5.

  • Shape: Supports output of Rank between 1 and 5.

ElementWiseUnary

Datatypes

Configuration

in[0]

out[0]

operation

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Ops that support input rank 4: Atan, Ceil, Cos, Exp, Floor, Log, Neg, Not, Round, Sign, Sin. Ops that support input rank 5: Abs, Rsqrt, Sqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. ASIN: Not supported. ATAN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. CEIL: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. COS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. EXP: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. FLOOR: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. LOG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NEG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NOT: QNN_DATATYPE_BOOL_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RSQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.

  • Shape: Ops that support output rank 4: Atan, Ceil, Cos, Exp, Floor, Log, Neg, Not, Round, Sign, Sin. Ops that support output rank 5: Abs, Rsqrt, Sqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. ASIN: Not supported. ATAN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. CEIL: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. COS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. EXP: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. FLOOR: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. LOG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NEG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NOT: QNN_DATATYPE_BOOL_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RSQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.

INT16

  • Shape: Ops that support input rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support input rank 5: Abs, Exp, Rsqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. ASIN: QNN_DATATYPE_UFIXED_POINT_8. ATAN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. CEIL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. COS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. EXP: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. FLOOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. LOG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NEG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NOT: QNN_DATATYPE_UFIXED_POINT_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RSQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. SIGN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension of SQRT.

  • Shape: Ops that support output rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support output rank 5: Abs, Exp, Rsqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. ASIN: QNN_DATATYPE_UFIXED_POINT_8. ATAN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. CEIL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. COS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. EXP: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. FLOOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. LOG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NEG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NOT: QNN_DATATYPE_UFIXED_POINT_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RSQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16. SIGN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension of SQRT.

INT16

  • Shape: Ops that support input rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support input rank 5: Abs, Exp, Rsqrt(except QNN_DATATYPE_SFIXED_POINT_16).

  • Shape: Ops that support output rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support output rank 5: Abs, Exp, Rsqrt(except QNN_DATATYPE_SFIXED_POINT_16).

INT8

  • Shape: Ops that support input rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support input rank 5: Abs, Exp, Rsqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. ASIN: QNN_DATATYPE_UFIXED_POINT_8. ATAN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. CEIL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. COS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. EXP: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. FLOOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. LOG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NEG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NOT: QNN_DATATYPE_UFIXED_POINT_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RSQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIGN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16.

  • Updateable quantization is supported.

  • Shape: Ops that support output rank 4: Asin, Atan, Ceil, Cos, Floor, Log, Neg, Not, Round, Sign, Sin, Sqrt. Ops that support output rank 5: Abs, Exp, Rsqrt.

  • Supported datatypes: ABS: QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. ASIN: QNN_DATATYPE_UFIXED_POINT_8. ATAN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. CEIL: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. COS: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. EXP: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. FLOOR: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. LOG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NEG: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. NOT: QNN_DATATYPE_UFIXED_POINT_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. RSQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIGN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SIN: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16. SQRT: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16.

  • Updateable quantization is supported.

OTHERS

  • Shape: Ops that support input rank 4: Atan, Ceil, Cos, Exp, Floor, Log, Neg, Not, Round, Rsqrt, Sign, Sin, Sqrt. Ops that support input rank 5: Abs.

  • Supported datatypes: ABS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32. ASIN: Not supported. ATAN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. CEIL: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. COS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. EXP: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. FLOOR: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. LOG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NEG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NOT: QNN_DATATYPE_BOOL_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RSQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.

  • Shape: Ops that support output rank 4: Atan, Ceil, Cos, Exp, Floor, Log, Neg, Not, Round, Rsqrt, Sign, Sin, Sqrt. Ops that support output rank 5: Abs.

  • Supported datatypes: ABS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32, QNN_DATATYPE_INT_32. ASIN: Not supported. ATAN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. CEIL: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. COS: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. EXP: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. FLOOR: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. LOG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NEG: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. NOT: QNN_DATATYPE_BOOL_8. RECIPROCAL: Not supported. ROUND: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. RSQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIGN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SIN: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32. SQRT: QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32.
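The FP16 constraints above split ElementWiseUnary operations into two rank tiers. A small, hypothetical helper (the name and set are illustrative, not part of the QNN API) makes the rule explicit:

```python
# Illustrative rank check for the FP16 configuration of ElementWiseUnary:
# per the constraints above, Abs, Rsqrt, and Sqrt accept inputs up to
# rank 5, while the other supported operations top out at rank 4.
RANK5_OPS_FP16 = {"ABS", "RSQRT", "SQRT"}

def max_input_rank_fp16(op: str) -> int:
    """Return the max supported input rank for `op` in the FP16 config."""
    return 5 if op.upper() in RANK5_OPS_FP16 else 4
```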

ElementWiseXor

Datatypes

Configuration

in[0]

in[1]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

in[1]

out[0]

INT8

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

OTHERS

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

Elu

Datatypes

Configuration

in[0]

out[0]

alpha

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.
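For reference, the Elu op computes f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise; per the datatype table above, `alpha` is always supplied as QNN_DATATYPE_FLOAT_32 regardless of the activation datatype. A float reference sketch (illustrative only, not the quantized HTP kernel):

```python
import math

# Scalar ELU reference: identity for positive inputs, exponential decay
# toward -alpha for negative inputs.
def elu(x: float, alpha: float = 1.0) -> float:
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```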

ExpandDims

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 5.

INT16

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Dynamic Shape: Supported only on the height, width, and channel dimensions.
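ExpandDims inserts a size-1 axis, raising the rank by one; that is why a rank-4 input (the max supported) produces a rank-5 output in the constraints above. The shape arithmetic can be sketched as:

```python
# Shape arithmetic for ExpandDims (illustrative sketch): inserting a
# size-1 dimension at `axis` increases the rank by exactly one.
def expand_dims_shape(shape, axis):
    out = list(shape)
    out.insert(axis, 1)
    return out
```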

ExtractGlimpse

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

ExtractPatches

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Input rank must be 4.

  • Shape: Output rank must be 4.

FullyConnected

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

Configuration | in[0] | in[1] | in[2] | out[0]
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

keep_dims

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Only supports the default value 0.

FP16

  • QNN_DATATYPE_SFIXED_POINT_8 data is supported if quantization encoding is QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET or QNN_QUANTIZATION_ENCODING_BLOCK. The weight tensor must be static in this case.

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • UFIXED_POINT_8 weights must be 8-bit, 4-bit, or 2-bit and must have rank 2. Given a 2-D weight of dimensions [m, n], QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only on axis 'm', and the values are expected to be signed and symmetrically quantized. The tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the tensor must be static, or the chain of ops preceding the tensor must all have static inputs.

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. Either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires minimum arch v69.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT16

  • UFIXED_POINT_8 weights must be 8-bit, 4-bit, or 2-bit and must have rank 2. Given a 2-D weight of dimensions [m, n], QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only on axis 'm', and the values are expected to be signed and symmetrically quantized. The tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the tensor must be static, or the chain of ops preceding the tensor must all have static inputs. 16-bit weights must be paired with 16-bit activations and must be symmetrically quantized. 16-bit activations with 16-bit weights require minimum arch v73.

INT8

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • UFIXED_POINT_8 weights must be 8-bit or 4-bit and must have rank 2. Given a 2-D weight of dimensions [m, n], QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only on axis 'm', and the values are expected to be signed and symmetrically quantized. The tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the tensor must be static, or the chain of ops preceding the tensor must all have static inputs.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. Either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires minimum arch v69.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].
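The per-axis symmetric weight quantization the constraints above describe can be sketched in NumPy. This is a hypothetical helper, not part of the QNN API: for a 2-D weight of dimensions [m, n], AXIS_SCALE_OFFSET with symmetric values stores one scale per row along axis 'm', with every offset fixed at zero.

```python
import numpy as np

def quantize_per_axis_symmetric(w, bits=8):
    """Sketch of symmetric per-axis quantization of a 2-D weight [m, n]:
    one scale per row (axis 0), offsets fixed at 0. Illustrates what
    QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET with signed, symmetric
    values implies; this is NOT a QNN SDK function."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8-bit signed
    scales = np.abs(w).max(axis=1) / qmax      # one scale per row m
    scales = np.where(scales == 0, 1.0, scales)
    q = np.clip(np.round(w / scales[:, None]), -qmax - 1, qmax)
    return q.astype(np.int8), scales

w = np.array([[0.4, -1.0],
              [2.0,  0.25]])
q, s = quantize_per_axis_symmetric(w)
# Dequantization is q * s[:, None], which approximates w.
```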

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 0

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

Gather

Datatypes

Configuration | in[0] | in[1] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16
INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported index rank is 5.

  • Value: Only supports index values in range [0, in[0].dim[axis] - 1].

FP16

  • Shape: Max supported output rank is 5.

FP16

  • Updateable quantization is supported.

INT16

  • Shape: Max supported output rank is 5.

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the channel dimension.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension.

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

INT8

  • Updateable quantization is supported.

OTHERS

  • Input quantization must be equal to output quantization.

  • Shape: Max supported output rank is 5.
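As a reference for the semantics these constraints apply to: Gather selects whole slices of in[0] along a given axis at the positions listed in in[1]. The Value constraint above means every index must already lie in [0, in[0].dim[axis] - 1]; negative or out-of-range indices are not wrapped. A NumPy sketch (not the QNN API):

```python
import numpy as np

def gather(data, indices, axis=0):
    """Reference semantics of Gather: pick slices of `data` along
    `axis` at positions `indices`. Indices must be in
    [0, data.shape[axis] - 1] -- no negative-index wrapping."""
    indices = np.asarray(indices)
    assert indices.min() >= 0 and indices.max() < data.shape[axis]
    return np.take(data, indices, axis=axis)

data = np.arange(12).reshape(3, 4)
out = gather(data, [2, 0], axis=0)   # rows 2 and 0, in that order
```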

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

OTHERS

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

OTHERS

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

OTHERS

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_INT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

OTHERS

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_INT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

GatherElements

Datatypes

Configuration | in[0] | in[1] | out[0] | axis
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

in[1]

out[0]

axis

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported index rank is 5.

  • Shape: Gather on axis 0 is not supported when rank is 5.

FP16

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported output rank is 5.
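For reference, GatherElements differs from Gather: the output has the same rank as the index tensor, and each output element is read at the same coordinates as the index element except along `axis`, where the stored index value is used. In NumPy this is `take_along_axis`; a sketch (not the QNN API):

```python
import numpy as np

def gather_elements(data, indices, axis):
    """Reference semantics of GatherElements: one element per index
    position, replacing the coordinate along `axis` with the stored
    index. Equivalent to NumPy's take_along_axis."""
    return np.take_along_axis(data, np.asarray(indices), axis=axis)

data = np.array([[1, 2],
                 [3, 4]])
idx = np.array([[0, 0],
                [1, 0]])
out = gather_elements(data, idx, axis=1)
```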

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

GatherNd

Datatypes

Configuration | in[0] | in[1] | out[0]
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_16
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32
FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8
INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32
OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported index rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported index rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported index rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 5.

  • Shape: Max supported index rank is 5.

  • Shape: Max supported output rank is 5.
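For reference, GatherNd treats the last dimension of in[1] as a coordinate tuple indexing the leading dimensions of in[0]; the remaining trailing dimensions of in[0] are copied through. A NumPy sketch (not the QNN API):

```python
import numpy as np

def gather_nd(data, indices):
    """Reference semantics of GatherNd: the last axis of `indices`
    holds a coordinate tuple into the leading axes of `data`; the
    remaining trailing axes of `data` are copied through."""
    indices = np.asarray(indices)
    # Split the coordinate tuple into one index array per leading axis.
    return data[tuple(np.moveaxis(indices, -1, 0))]

data = np.arange(8).reshape(2, 2, 2)
out = gather_nd(data, [[0, 1], [1, 0]])   # two (i, j) coordinate tuples
```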

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

Gelu

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.
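
The table above lists the supported datatype configurations but not the operator's math. As an informal float reference only (the fixed-point configurations compute an approximation of this curve in quantized arithmetic), one common definition of Gelu is 0.5 * x * (1 + erf(x / sqrt(2))):

```python
import math

def gelu(x: float) -> float:
    # Reference float Gelu: 0.5 * x * (1 + erf(x / sqrt(2))).
    # Illustration only; not the HTP backend implementation.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```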

GetSparseIndices

Datatypes

Configuration

in[0]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

INT8

  • Shape: Max supported sparse input rank is 5.

GetSparseValues

Datatypes

Configuration

in[0]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

INT8

  • Shape: Max supported sparse input rank is 5.

GridSample

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

GroupNorm

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.
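
As an informal float reference for the GroupNorm configurations above (in[1] = gamma, in[2] = beta), the sketch below normalizes each group of channels jointly over its elements, then applies the per-channel scale and shift. It is an illustration of the common GroupNorm definition, not the HTP implementation:

```python
import math

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    # x: [channels][elements] nested list; channels are split into
    # num_groups contiguous groups, each normalized with its own mean/var.
    c = len(x)
    per = c // num_groups
    out = [None] * c
    for g in range(num_groups):
        chans = range(g * per, (g + 1) * per)
        vals = [v for ch in chans for v in x[ch]]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        inv = 1.0 / math.sqrt(var + eps)
        for ch in chans:
            out[ch] = [gamma[ch] * (v - mu) * inv + beta[ch] for v in x[ch]]
    return out
```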

Gru

Datatypes

Configuration

in[0]

in[1]

in[2]

in[3]

in[4]

in[5]

in[6]

in[7]

in[8]

in[9]

in[10]

in[11]

in[12]

in[13]

in[14]

out[0]

out[1]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

FP16

  • Shape: Input rank must be 3.

HardSwish

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0].

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

INT8

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0].

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].
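
For reference, HardSwish is commonly defined as x * ReLU6(x + 3) / 6. The float sketch below is an illustration of that definition, not the backend's quantized kernel:

```python
def hard_swish(x: float) -> float:
    # x * ReLU6(x + 3) / 6, where ReLU6(v) = min(max(v, 0), 6).
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0
```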

InstanceNorm

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

mode

region

FP16

  • Shape: InstanceNorm supports inputs of rank 4. If input rank < 4, support is only possible if the outermost dimension is of size 1.

INT16

  • Shape: InstanceNorm supports inputs of rank 4. If input rank < 4, support is only possible if the outermost dimension is of size 1.

  • Shape: Max supported output rank is 4.

  • Value: Only supports mode MU_SIGMA

  • Value: Only supports region ACROSS_SPATIAL

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT8

  • Shape: InstanceNorm supports inputs of rank 4. If input rank < 4, support is only possible if the outermost dimension is of size 1.

  • Shape: Max supported output rank is 4.

  • Value: Only supports mode MU_SIGMA

  • Value: Only supports region ACROSS_SPATIAL
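
The constraints above restrict InstanceNorm to mode MU_SIGMA and region ACROSS_SPATIAL: each channel of each batch is normalized by the mean and variance of its own spatial (Height x Width) slice. An informal float sketch for one channel slice (gamma/beta are that channel's in[1]/in[2] values):

```python
import math

def instance_norm_channel(x, gamma, beta, eps=1e-5):
    # x: flat list of one channel's H*W values.
    # MU_SIGMA mode, ACROSS_SPATIAL region: normalize by this slice's
    # own mean and variance. Illustration only.
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    inv = 1.0 / math.sqrt(var + eps)
    return [gamma * (v - mu) * inv + beta for v in x]
```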

IsInf

Datatypes

Configuration

in[0]

out[0]

detect_negative

detect_positive

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8
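
The detect_negative and detect_positive parameters in the table above gate which sign of infinity is reported. A minimal float sketch of that behavior:

```python
import math

def is_inf(x: float, detect_negative: bool = True,
           detect_positive: bool = True) -> bool:
    # Reports +inf only when detect_positive is set, -inf only when
    # detect_negative is set; finite values and NaN report False.
    if math.isinf(x):
        return detect_positive if x > 0 else detect_negative
    return False
```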

IsNan

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

L2Norm

Datatypes

Configuration

in[0]

out[0]

axis

axes

epsilon

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.
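
For reference, one common float definition of L2Norm divides each slice along the reduction axis by its L2 norm, with epsilon guarding against division by zero. The sketch below covers a single 1-D slice and is an illustration only:

```python
import math

def l2_norm(x, epsilon=1e-12):
    # x / max(||x||_2, epsilon) for one slice along the reduction axis.
    norm = math.sqrt(sum(v * v for v in x))
    return [v / max(norm, epsilon) for v in x]
```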

LayerNorm

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

axes

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Value: Only supports normalization on final dimension or last three dimensions of 4D inputs

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

INT16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT16

  • 16-bit input and 16-bit gamma require minimum arch V73.

INT8

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric
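
The All-configuration constraint above limits LayerNorm to normalization over the final dimension (or the last three dimensions of 4D inputs). An informal float sketch of the final-dimension case, with in[1] as gamma and in[2] as beta:

```python
import math

def layer_norm_last_dim(x, gamma, beta, eps=1e-5):
    # x: [rows][cols] nested list; each row is normalized over the
    # final dimension, then scaled by gamma and shifted by beta.
    out = []
    for row in x:
        mu = sum(row) / len(row)
        var = sum((v - mu) ** 2 for v in row) / len(row)
        inv = 1.0 / math.sqrt(var + eps)
        out.append([gamma[i] * (row[i] - mu) * inv + beta[i]
                    for i in range(len(row))])
    return out
```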

Logit

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported epsilon rank is 1.

  • Shape: Max supported output rank is 4.
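
For reference, Logit is commonly defined as log(x / (1 - x)), with the in[1] epsilon clamping x into [epsilon, 1 - epsilon] so the result stays finite at the endpoints. A float sketch of that definition, for illustration only:

```python
import math

def logit(x: float, epsilon: float = 1e-6) -> float:
    # Clamp into [epsilon, 1 - epsilon], then log(x / (1 - x)).
    x = min(max(x, epsilon), 1.0 - epsilon)
    return math.log(x / (1.0 - x))
```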

LogSoftmax

Datatypes

Configuration

in[0]

out[0]

beta

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

out[0]

axis

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Value: Only the default axis N-1 is supported.
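
For reference, LogSoftmax over the final axis with the beta parameter from the table above is log_softmax_i = beta*x_i - log(sum_j exp(beta*x_j)). The sketch below computes it in the numerically stable max-subtracted form, as an illustration only:

```python
import math

def log_softmax(x, beta=1.0):
    # beta * x_i - logsumexp(beta * x), with the max subtracted
    # before exponentiating for numerical stability.
    scaled = [beta * v for v in x]
    m = max(scaled)
    lse = m + math.log(sum(math.exp(v - m) for v in scaled))
    return [v - lse for v in scaled]
```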

Lrn

Datatypes

Configuration

in[0]

out[0]

alpha

beta

bias

radius

region

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

out[0]

region

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

FP16

  • Value: Only supports input dimensions where Height x Width is less than 1024 for within-channel cases.
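
As an informal reference for the across-channel case, one common LRN formulation divides each value by (bias + alpha * windowed sum of squares) ** beta, where the window spans the channels within radius of the current channel. The sketch below covers one pixel's channel vector and is an illustration only (some LRN variants additionally divide alpha by the window size):

```python
def lrn_across_channels(x, radius, alpha, beta, bias):
    # x: one pixel's values across channels.
    # out[c] = x[c] / (bias + alpha * sum_{k near c} x[k]^2) ** beta
    n = len(x)
    out = []
    for c in range(n):
        lo, hi = max(0, c - radius), min(n, c + radius + 1)
        s = sum(x[k] * x[k] for k in range(lo, hi))
        out.append(x[c] / (bias + alpha * s) ** beta)
    return out
```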

Lstm

Datatypes

Configuration

in[0]

in[1]

in[2]

in[3]

in[4]

in[5]

in[6]

in[7]

in[8]

in[9]

in[10]

in[11]

in[12]

in[13]

in[14]

in[15]

in[16]

in[17]

in[18]

in[19]

in[20]

in[21]

in[22]

in[23]

in[24]

out[0]

out[1]

out[2]

hidden_state_offset

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[12]

in[13]

in[14]

in[15]

direction

time_major

All

  • Shape: Max supported input rank is 3.

FP16

  • LayerNorm is not currently supported.

  • LayerNorm is not currently supported.

  • LayerNorm is not currently supported.

  • LayerNorm is not currently supported.

INT8

  • Value: Only supports forward for direction.

  • Not applicable for 2D inputs and is ignored.

MatMul

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

FP16

  • QNN_DATATYPE_SFIXED_POINT_8 data is supported if quantization encoding is QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET or QNN_QUANTIZATION_ENCODING_BLOCK. The weight tensor must be static in this case.

INT16

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Given a tensor of shape […, m, n], QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along axis ‘m’ or ‘n’. The quantization axis is determined by the parameter QNN_OP_MAT_MUL_PARAM_TRANSPOSE_IN1: it is ‘m’ if the param is true, else ‘n’. Values are expected to be signed and symmetrically quantized. The tensor must be static if using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the tensor must be static, or the chain of ops preceding the tensor must all have static inputs.

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. Either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires minimum arch v69.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width and channel dimensions.

INT16

  • 16-bit weights must be symmetrically quantized. 16-bit activations with 16-bit weights require minimum arch V73.

INT8

  • Given a tensor of shape […, m, n], QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only along axis ‘m’ or ‘n’. The quantization axis is determined by the parameter QNN_OP_MAT_MUL_PARAM_TRANSPOSE_IN1: it is ‘m’ if the param is true, else ‘n’. Values are expected to be signed and symmetrically quantized. The tensor must be static if using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the tensor must be static, or the chain of ops preceding the tensor must all have static inputs.

  • QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION is supported with expansion on the channel axis only, and the block-quantized weights are expected to be signed and symmetrically quantized. QNN_OP_MAT_MUL_PARAM_TRANSPOSE_IN1 must be false with QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION. Either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. QNN_QUANTIZATION_ENCODING_BLOCKWISE_EXPANSION requires minimum arch v69.
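For intuition, the per-axis symmetric quantization these encodings describe can be sketched in plain Python. This is an illustration only, not the backend's implementation; the helper name and the choice of the last axis (‘n’) as the quantization axis are assumptions.

```python
def quantize_per_axis_symmetric(weights, bits=8):
    """Sketch of symmetric per-axis quantization of a 2-D weight matrix,
    quantizing along the last axis ('n'): one scale per column, zero
    offset (symmetric), values mapped to a signed integer range."""
    qmax = 2 ** (bits - 1) - 1  # 127 for 8-bit signed
    n = len(weights[0])
    # One scale per column: the max |value| in that column maps to qmax.
    scales = [max(abs(row[j]) for row in weights) / qmax or 1.0
              for j in range(n)]
    # Round each value against its column's scale and saturate.
    q = [[max(-qmax - 1, min(qmax, round(v / scales[j])))
          for j, v in enumerate(row)]
         for row in weights]
    return q, scales

w = [[0.6, -2.0], [1.0, 3.0]]
q, s = quantize_per_axis_symmetric(w)
# Each column's largest-magnitude value lands exactly on +/-127.
```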

Quantization

Configuration

in[1]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

MultiClassNms

Datatypes

| Configuration | in[0] | in[1] | in[2..m] | out[0] | out[1] | out[2] | out[3] | out[4..M] |
|---|---|---|---|---|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 |

NonMaxSuppression

Datatypes

| Configuration | in[0] | in[1] | out[0] | out[1] |
|---|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UINT_32 |

NonZero

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_INT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_UINT_32 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_INT_32 |

Constraints

Configuration

in[0]

All

  • Shape: Max supported input rank is 4.

FP16

  • Shape: Max supported input rank is 4.

INT16

  • Shape: Max supported input rank is 4.

INT8

  • Shape: Max supported input rank is 4.

OTHERS

  • Shape: Max supported input rank is 4.

Nv12ToRgb

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |

Constraints

Configuration

output_order

All

  • Only the RGB output order is supported.

OneHot

Datatypes

| Configuration | in[0] | out[0] | on_value | off_value |
|---|---|---|---|---|
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 2.

  • Shape: Max supported output rank is 3.
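The rank relationship above (e.g. a rank-2 index tensor producing a rank-3 output) is the usual one-hot expansion. A minimal plain-Python sketch, assuming the new depth axis is appended last:

```python
def one_hot(indices, depth, on_value=1, off_value=0):
    """Expand a 2-D index tensor [batch, n] into a 3-D one-hot tensor
    [batch, n, depth]: output rank = input rank + 1."""
    return [[[on_value if k == idx else off_value for k in range(depth)]
             for idx in row]
            for row in indices]

out = one_hot([[0, 2], [1, 1]], depth=3)
# Each index becomes a length-3 vector with on_value at its position.
```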

Pack

Datatypes

| Configuration | in[0..m] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0..m]

out[0]

All

  • Shape: Supports Rank in range [1, 4].

  • Shape: Supports Rank = rank(in[0]) + 1.

Pad

Datatypes

| Configuration | in[0] | out[0] | pad_constant_value |
|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_INT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_FLOAT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_INT_32 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 |

Constraints

Configuration

in[0]

out[0]

scheme

pad_constant_value

FP16

  • Shape: Max supported input rank is 5 for CONSTANT scheme. Max supported input rank is 4 for schemes MIRROR_REFLECT, MIRROR_SYMMETRIC, EDGE.

  • Shape: Max supported output rank is 5 for CONSTANT scheme. Max supported output rank is 4 for schemes MIRROR_REFLECT, MIRROR_SYMMETRIC, EDGE.

  • Value: Only support schemes CONSTANT, MIRROR_REFLECT, MIRROR_SYMMETRIC, EDGE

INT16

  • Updateable quantization is supported.

  • Shape: Max supported input rank is 5.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

  • pad_constant_value is quantizable, using the same quantization parameters as in[0]. Results are stored in 32-bit signed integer form.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

  • pad_constant_value is quantizable, using the same quantization parameters as in[0]. Results are stored in 32-bit signed integer form.
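As an illustration of the pad_constant_value bullet, the float constant can be mapped through in[0]'s scale and offset and kept as a 32-bit signed integer. This is a sketch only, assuming the convention real ≈ (quantized + offset) × scale; the exact rounding the backend uses is not specified here.

```python
def quantize_pad_constant(value, scale, offset):
    """Map a float pad constant into in[0]'s quantized domain, keeping
    the result in 32-bit signed integer form (no clamp to 8/16 bits,
    since the constraint says results are stored as int32)."""
    q = round(value / scale) - offset
    assert -2**31 <= q < 2**31  # must fit in a signed 32-bit integer
    return q

q = quantize_pad_constant(0.0, scale=0.25, offset=-128)
# With offset -128, a real value of 0.0 maps to quantized 128.
```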

PoolAvg2d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

PoolAvg3d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Input rank must be 5.

  • Shape: Output rank must be 5.

PoolMax2d

Datatypes

| Configuration | in[0] | out[0] | rounding_mode |
|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UINT_32 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UINT_32 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UINT_32 |

Prelu

Datatypes

| Configuration | in[0] | in[1] | out[0] |
|---|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0].

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • If QNN_DEFINITION_IMPL_GENERATED is used for in[0], the shape of in[1] must have all dims equal to 1 except for the dim at RANK(in[1]) - 1.

  • QNN_DEFINITION_IMPL_GENERATED is not supported for out[0].

INT8

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0].

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • If QNN_DEFINITION_IMPL_GENERATED is used for in[0], the shape of in[1] must have all dims equal to 1 except for the dim at RANK(in[1]) - 1.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

Quantize

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

out[0]

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.
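All of the Quantize configurations above reduce to the same affine float-to-fixed-point mapping. A minimal sketch, assuming the convention real ≈ (quantized + offset) × scale and saturation to the target integer range:

```python
def quantize(x, scale, offset, bits=8, signed=False):
    """Affine quantization of one float value, saturating to the
    8- or 16-bit target range (UFIXED/SFIXED_POINT_8/16)."""
    lo = -(2 ** (bits - 1)) if signed else 0
    hi = (2 ** (bits - 1)) - 1 if signed else (2 ** bits) - 1
    q = round(x / scale) - offset
    return max(lo, min(hi, q))  # saturate out-of-range values

q = quantize(0.5, scale=0.25, offset=0)  # maps into the UFIXED_POINT_8 range
```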

RandomUniformLike

Datatypes

| Configuration | in[0] | in[1] | out[0] | low | high |
|---|---|---|---|---|---|
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

Constraints

Configuration

in[0]

low

high

OTHERS

  • Shape: in[0] must be a 1-D tensor.

  • Lower boundary of the output values

  • Upper boundary of the output values

ReduceMax

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

out[0]

axes

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

  • Value: When input rank is 5, only single axis is supported.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

ReduceMean

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

  • Updateable quantization is supported.

INT8

  • Shape: Max supported input rank is 5.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

  • Updateable quantization is supported.

ReduceMin

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 |

Constraints

Configuration

in[0]

out[0]

axes

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

  • Value: When input rank is 5, only single axis is supported.

INT16

  • Shape: Max supported input rank is 5.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

  • Updateable quantization is supported.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

ReduceSum

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 |

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

Relu

Datatypes

| Configuration | in[0] | out[0] |
|---|---|---|
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0] and QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF must be false.

  • Shape: Max supported output rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

INT8

  • Shape: Max supported input rank is 5.

  • Updateable quantization is supported.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0] and QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF must be false.

  • Shape: Max supported output rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

Relu1

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

Relu6

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

ReluMinMax

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT16

  • Shape: Max supported input rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0] when QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF is false.

  • Shape: Max supported output rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

INT8

  • Shape: Max supported input rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for in[0] when QNN_HTP_GRAPH_CONFIG_OPTION_FOLD_RELU_ACTIVATION_INTO_CONV_OFF is false.

  • Shape: Max supported output rank is 5.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for out[0].

OTHERS

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

Reshape

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

in[1]

out[0]

All

  • Shape: Supports Rank in range [1, 5].

  • Shape: Supports Rank in range [1, 5].

INT16

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only when the reshaped dimensions are grouped together.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only when the reshaped dimensions are grouped together.

INT8

  • Dynamic Shape: Supported only when the reshaped dimensions are grouped together.

  • Dynamic Shape: Supported only when the reshaped dimensions are grouped together.

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

Resize

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

nearest_mode

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

  • Only QNN_OP_RESIZE_NEAREST_MODE_FLOOR is supported, except when QNN_OP_RESIZE_PARAM_TRANSFORMATION_MODE is set to QNN_OP_RESIZE_TRANSFORMATION_MODE_ALIGN_CORNERS, in which case only QNN_OP_RESIZE_NEAREST_MODE_ROUND_PREFER_CEIL is supported. If QNN_OP_RESIZE_PARAM_NEAREST_MODE is not set, HTP applies these rules implicitly.

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.
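One way to read the nearest_mode rule above: with the default transformation mode, the source coordinate is truncated toward negative infinity (FLOOR), while under ALIGN_CORNERS it is rounded with ties going up (ROUND_PREFER_CEIL). The sketch below is illustrative only; the coordinate-transformation formulas (half-pixel vs. align-corners) are assumptions based on common resize conventions, not taken from this document.

```python
import math

def nearest_src_index(dst, in_size, out_size, align_corners):
    """Map an output index to a nearest-neighbor source index.
    The transforms are assumed, common conventions, not HTP-verified."""
    if align_corners:
        # align-corners transform: endpoints map to endpoints
        src = dst * (in_size - 1) / (out_size - 1) if out_size > 1 else 0.0
        # ROUND_PREFER_CEIL: round to nearest, ties round up
        idx = math.floor(src + 0.5)
    else:
        # half-pixel style transform (assumed default)
        src = (dst + 0.5) * in_size / out_size - 0.5
        # FLOOR nearest mode: truncate toward negative infinity
        idx = math.floor(src)
    return max(0, min(in_size - 1, idx))
```

For example, upscaling 4 samples to 8 maps output index 3 to source index 1 under both rules, but the two modes diverge near tie points.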

ResizeBilinear

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

ResizeNearestNeighbor

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

RmsNorm

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

axes

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Value: Only supports normalization along the channel dimension.

INT16

  • Updateable quantization is supported.

  • Signed 16-bit gamma must use symmetric quantization.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT16

  • QNN_DATATYPE_SFIXED_POINT_32 must use symmetric quantization.

INT16

  • Dynamic Shape: Supported only on the width dimension.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension.

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization is supported.

INT8

  • QNN_DATATYPE_SFIXED_POINT_32 must use symmetric quantization.
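For reference, the channel-wise normalization restricted above can be sketched in float as below. This is the generic RMSNorm formula, not HTP's exact implementation; treating in[1] as gamma follows the gamma constraint above, while the epsilon value is an assumption and the role of in[2] is not specified in this table.

```python
import math

def rms_norm_channel(x, gamma, eps=1e-6):
    """Float sketch of RMSNorm over the channel (last) dimension.
    x and gamma are equal-length lists; eps is an assumed small constant."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * g for v, g in zip(x, gamma)]
```

With gamma set to all ones and eps = 0, the output vector has a root-mean-square of exactly 1.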

RoiAlign

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

aligned

allow_invalid_roi

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8
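The aligned parameter above controls pixel-center alignment when ROI coordinates are scaled into feature-map space. A minimal sketch, assuming the widely used ROIAlign convention (a half-pixel offset is subtracted when aligned is true); the spatial_scale name is an assumption for illustration, not from this table.

```python
def scale_roi_coord(coord, spatial_scale, aligned):
    """Map an ROI coordinate from input-image space to feature-map space.
    Assumes the common ROIAlign convention: aligned=True shifts by half a
    pixel so sampling is centered on pixel centers."""
    offset = 0.5 if aligned else 0.0
    return coord * spatial_scale - offset
```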

ScatterElements

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

axis

reduction

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_16

QNN_DATATYPE_UINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_16

QNN_DATATYPE_UINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

axis

reduction

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

  • Value: Must be in range [0, N - 1], where N is the rank of the input tensor.

FP16

  • For QNN_DATATYPE_FLOAT_16, only the NONE (0) reduction is supported. Indices must not contain duplicate values when the reduction is NONE.

INT16

  • Values: NONE = 0, ADD = 1, MUL = 2

INT16

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization must be the same for in[0], in[2], and out[0].

INT8

  • Values: NONE = 0, ADD = 1, MUL = 2, MAX = 3

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

  • Updateable quantization must be the same for in[0], in[2], and out[0].
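The reduction values enumerated above (NONE = 0, ADD = 1, MUL = 2, MAX = 3) can be illustrated with a 1-D pure-Python sketch. This describes generic scatter-elements semantics for illustration, not HTP's implementation.

```python
def scatter_elements_1d(data, indices, updates, reduction="NONE"):
    """1-D illustration of ScatterElements reduction semantics.
    NONE replaces, ADD accumulates, MUL multiplies, MAX keeps the
    larger value at each scattered position."""
    out = list(data)
    for idx, upd in zip(indices, updates):
        if reduction == "NONE":
            out[idx] = upd  # duplicate indices are disallowed with NONE
        elif reduction == "ADD":
            out[idx] += upd
        elif reduction == "MUL":
            out[idx] *= upd
        elif reduction == "MAX":
            out[idx] = max(out[idx], upd)
    return out
```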

ScatterNd

Datatypes

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_UINT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_INT_32

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_UINT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

FP16

  • Shape: Max supported first input rank is 5.

  • Shape: Max supported second input rank is 6.

  • Shape: Max supported third input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported first input rank is 5.

  • Shape: Max supported second input rank is 6.

  • Shape: Max supported third input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported first input rank is 5.

  • Shape: Max supported second input rank is 6.

  • Shape: Max supported third input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported first input rank is 5.

  • Shape: Max supported second input rank is 6.

  • Shape: Max supported third input rank is 5.

  • Shape: Max supported output rank is 5.

Sigmoid

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 5.

FP16

  • Shape: Max supported input rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Updateable quantization is supported.

  • Shape: Max supported output rank is 5.

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Updateable quantization is supported.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Scale: 1/65536.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Scale: 1/65536.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Scale: 1/65536.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Scale: 1/65536.0

    • Offset: 0

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0
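The fixed output encodings above pin Sigmoid's quantized output to its mathematical range. Assuming the common QNN-style dequantization convention real = (q + offset) * scale (an assumption of this sketch, not stated in this table), QNN_DATATYPE_UFIXED_POINT_16 with scale 1/65536 and offset 0 spans [0, 1), and QNN_DATATYPE_SFIXED_POINT_16 with scale 1/32768 and offset 0 spans [-1, 1):

```python
def dequant_range(qmin, qmax, scale, offset):
    """Real-valued range of a quantized encoding, assuming the common
    convention real = (q + offset) * scale."""
    return ((qmin + offset) * scale, (qmax + offset) * scale)

# UFIXED_POINT_16: q in [0, 65535], scale 1/65536, offset 0 -> [0, ~1)
# SFIXED_POINT_16: q in [-32768, 32767], scale 1/32768, offset 0 -> [-1, ~1)
```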

Softmax

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_FLOAT_32

Constraints

Configuration

in[0]

out[0]

axis

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

  • Value: Only the default value N - 1 is supported, where N is the rank of the input tensor.

INT16

  • Dynamic Shape: Supported only on the height, width, and channel dimensions.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the height, width, and channel dimensions.

  • Updateable quantization is supported.

INT16

  • Dynamic Shape: Supported only on the width and channel dimensions.

  • Dynamic Shape: Supported only on the width and channel dimensions.

SpaceToBatch

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

SpaceToDepth

Datatypes

Configuration

in[0]

out[0]

mode

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UINT_32

SparseToDense

Datatypes

Configuration

in[0]

out[0]

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

INT8

  • Shape: Max supported sparse input rank is 5.

  • Shape: Max supported output rank is 5.

Split

Datatypes

Configuration

in[0]

out[0..m]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_SFIXED_POINT_32

QNN_DATATYPE_SFIXED_POINT_32

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

out[0..m]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

Squeeze

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_8

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Dynamic Shape: Supported only on the height and width dimensions.

  • Dynamic Shape: Supported only on the height and width dimensions.

Stft

Datatypes

Configuration

in[0]

in[1]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

StridedSlice

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

INT8

QNN_DATATYPE_UINT_8

QNN_DATATYPE_UINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension.

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the width dimension.

INT8

  • Updateable quantization is supported.

  • Updateable quantization is supported.

Tanh

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT16

QNN_DATATYPE_SFIXED_POINT_16

QNN_DATATYPE_SFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

FP16

  • Shape: Max supported output rank is 4.

INT16

  • Shape: Max supported output rank is 4.

INT8

  • Shape: Max supported output rank is 4.

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: -32768

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: -32768

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: -32768

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: -32768

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Scale: 1/32768.0

    • Offset: 0
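The two fixed Tanh output encodings above describe the same real interval [-1, 1): QNN_DATATYPE_UFIXED_POINT_16 (q in [0, 65535]) with scale 1/32768 and offset -32768, and QNN_DATATYPE_SFIXED_POINT_16 (q in [-32768, 32767]) with scale 1/32768 and offset 0. The check below assumes the common dequantization convention real = (q + offset) * scale, which is an assumption of this sketch, not stated in this table.

```python
def dequant(q, scale, offset):
    # Assumed common convention: real = (q + offset) * scale
    return (q + offset) * scale

# Endpoints of both Tanh encodings listed above:
ufixed = [dequant(q, 1.0 / 32768.0, -32768) for q in (0, 65535)]
sfixed = [dequant(q, 1.0 / 32768.0, 0) for q in (-32768, 32767)]
```

Both lists come out to the same endpoints, confirming the two encodings cover an identical real range for Tanh's output.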

Tile

Datatypes

Configuration

in[0]

out[0]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_BOOL_8

QNN_DATATYPE_BOOL_8

Constraints

Configuration

in[0]

out[0]

FP16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT8

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

OTHERS

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

TopK

Datatypes

Configuration

in[0]

out[0]

out[1]

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_FLOAT_16

QNN_DATATYPE_INT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_UINT_32

FP16

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_FLOAT_32

QNN_DATATYPE_INT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UINT_32

INT16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_UFIXED_POINT_16

QNN_DATATYPE_INT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_UFIXED_POINT_8

QNN_DATATYPE_INT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_UINT_32

INT8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_SFIXED_POINT_8

QNN_DATATYPE_INT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_UINT_32

OTHERS

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

QNN_DATATYPE_INT_32

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 4.

  • Shape: Max supported output rank is 4.

INT8

  • Input quantization must be equal to output quantization.

Quantization

Configuration

out[0]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

Transpose

Datatypes

| Configuration | in[0] | out[0] |
| --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| OTHERS | QNN_DATATYPE_BOOL_8 | QNN_DATATYPE_BOOL_8 |
| OTHERS | QNN_DATATYPE_UINT_32 | QNN_DATATYPE_UINT_32 |
| OTHERS | QNN_DATATYPE_INT_32 | QNN_DATATYPE_INT_32 |
| OTHERS | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |

Constraints

Configuration

in[0]

out[0]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 5.

INT16

  • Supported datatypes: Transpose 4D: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32. Transpose 5D: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the height, width, and channel dimensions.

INT8

  • Supported datatypes: Transpose 4D: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32. Transpose 5D: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32

  • Updateable quantization is supported.

  • Dynamic Shape: Supported only on the height, width, and channel dimensions.

OTHERS

  • Supported datatypes: Transpose 4D: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32. Transpose 5D: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32

OTHERS

  • Supported datatypes: Transpose 4D: QNN_DATATYPE_BOOL_8, QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_SFIXED_POINT_16, QNN_DATATYPE_INT_32, QNN_DATATYPE_UINT_32. Transpose 5D: QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8, QNN_DATATYPE_UFIXED_POINT_16, QNN_DATATYPE_FLOAT_16, QNN_DATATYPE_FLOAT_32
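The rank-5 constraint above can be modeled outside the SDK. The following is a minimal NumPy sketch (illustrative only, not QNN API code) of a 5D Transpose on one of the supported 5D datatypes; since Transpose only moves data, values and hence quantization encodings pass through unchanged:

```python
import numpy as np

# Rank-5 input in one of the supported 5D datatypes (UFIXED_POINT_8
# modeled here as uint8). Max supported rank for Transpose is 5.
x = np.arange(720).reshape(2, 3, 4, 5, 6).astype(np.uint8)
perm = (0, 2, 3, 4, 1)  # example permutation moving axis 1 to the end

# Transpose permutes dimensions per a static permutation; values are
# copied untouched, so input and output share quantization encodings.
y = np.transpose(x, perm)

assert y.shape == (2, 4, 5, 6, 3)
# Applying the inverse permutation recovers the original tensor.
assert np.array_equal(np.transpose(y, np.argsort(perm)), x)
```

The permutation `perm` and shapes are arbitrary examples; any rank-5 permutation follows the same pattern.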

TransposeConv2d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
| --- | --- | --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8, QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

INT16

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized. For the channel-scaling case, bias quantization encodings are expected to have scale = 0 and offset = 0.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT16

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. 16-bit weights must be symmetrically quantized. 16-bit activations combined with 16-bit weights require a minimum arch of V73.

INT16

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. 16-bit weights require 16-bit activations and must be symmetrically quantized. 16-bit activations combined with 16-bit weights require a minimum arch of V73.

INT8

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[0].

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[1].

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized. For the channel-scaling case, bias quantization encodings are expected to have scale = 0 and offset = 0.

  • QNN_DEFINITION_IMPL_GENERATED use is rejected for in[2].

  • QNN_DEFINITION_IMPL_GENERATED use is accepted for out[0].

INT8

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET and QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET are supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized. The weight tensor must be static when using QNN_QUANTIZATION_ENCODING_BW_AXIS_SCALE_OFFSET. When using QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET, either the weight tensor must be static, or the chain of ops preceding the weight tensor must all have static inputs. 16-bit weights require 16-bit activations and must be symmetrically quantized. 16-bit activations combined with 16-bit weights require a minimum arch of V73.
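The recurring requirement that weights be "signed and symmetrically quantized" with per-channel encodings (AXIS_SCALE_OFFSET on the channel axis, which the Quantization section lists as Axis: 3) can be sketched with NumPy. This is an illustrative model of symmetric per-channel quantization, not SDK code; the max-abs/127 scale choice is an assumption, and the HWIO weight layout is only an example consistent with Axis: 3:

```python
import numpy as np

def quantize_per_channel_symmetric(w, axis=3, bits=8):
    """Symmetric per-channel int8 quantization: the offset is fixed at 0,
    and each channel slice gets scale = max(|w|) / 127 (assumed policy)."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    scale = np.max(np.abs(w), axis=reduce_axes, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)        # guard all-zero channels
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, np.squeeze(scale)

rng = np.random.default_rng(0)
# Example HWIO weight tensor: the channel (output-feature) axis is index 3.
w = rng.standard_normal((3, 3, 16, 32)).astype(np.float32)
q, scale = quantize_per_channel_symmetric(w)

assert q.dtype == np.int8 and scale.shape == (32,)
# Symmetric means zero maps exactly to 0 (offset is 0 by construction).
assert quantize_per_channel_symmetric(np.zeros_like(w))[0].max() == 0
```

Signed symmetric encodings keep real zero exactly representable, which is why the backend insists on them for weights in the per-channel case.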

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype:
      • QNN_DATATYPE_UFIXED_POINT_8

      • QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype:
      • QNN_DATATYPE_UFIXED_POINT_8

      • QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Axis: 3

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

TransposeConv3d

Datatypes

| Configuration | in[0] | in[1] | in[2] | out[0] |
| --- | --- | --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_32 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

in[1]

in[2]

out[0]

dilation

All

  • Shape: Input rank must be 5.

  • Shape: Weight rank must be 5.

  • Shape: Output rank must be 5.

  • Value: Dilation is supported only if the stride in both height and width is 1.

  • Value: Only a dilation of 1 is supported in the depth dimension.

INT16

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET is supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized.

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized.

INT8

  • QNN_QUANTIZATION_ENCODING_AXIS_SCALE_OFFSET is supported only with the channel axis, and the weights are expected to be signed and symmetrically quantized.

  • QNN_DATATYPE_SFIXED_POINT_32 must be symmetrically quantized.
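The shape and dilation rules in the constraints table above reduce to a small predicate. The sketch below restates them as plain Python (names and the `(stride, dilation)` tuple layout are illustrative, not QNN API):

```python
def transpose_conv3d_supported(input_rank, weight_rank, output_rank,
                               stride_hw, dilation_dhw):
    """Return True if a TransposeConv3d config satisfies the shape and
    dilation constraints: all ranks must be 5, depth dilation must be 1,
    and H/W dilation > 1 requires unit stride in height and width."""
    if not (input_rank == weight_rank == output_rank == 5):
        return False
    d_dil, h_dil, w_dil = dilation_dhw
    if d_dil != 1:                        # only dilation 1 in depth
        return False
    dilated = h_dil != 1 or w_dil != 1
    if dilated and stride_hw != (1, 1):   # dilation needs stride 1 in H/W
        return False
    return True

assert transpose_conv3d_supported(5, 5, 5, (1, 1), (1, 2, 2))      # OK
assert not transpose_conv3d_supported(5, 5, 5, (2, 2), (1, 2, 2))  # strided + dilated
assert not transpose_conv3d_supported(5, 5, 5, (1, 1), (2, 1, 1))  # depth dilation
assert not transpose_conv3d_supported(4, 5, 5, (1, 1), (1, 1, 1))  # bad rank
```

Such a predicate is the kind of check a converter front end might run before offloading the op to this backend.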

Quantization

Configuration

in[1]

in[2]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Axis: 4

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: BW_SCALE_OFFSET

    • Symmetry: Symmetric

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_32

    • Encoding: SCALE_OFFSET

    • Symmetry: Symmetric

UnPack

Datatypes

| Configuration | in[0] | out[0..m] |
| --- | --- | --- |
| FP16 | QNN_DATATYPE_FLOAT_16 | QNN_DATATYPE_FLOAT_16 |
| FP16 | QNN_DATATYPE_FLOAT_32 | QNN_DATATYPE_FLOAT_32 |
| INT16 | QNN_DATATYPE_UFIXED_POINT_16 | QNN_DATATYPE_UFIXED_POINT_16 |
| INT16 | QNN_DATATYPE_SFIXED_POINT_16 | QNN_DATATYPE_SFIXED_POINT_16 |
| INT8 | QNN_DATATYPE_UFIXED_POINT_8 | QNN_DATATYPE_UFIXED_POINT_8 |
| INT8 | QNN_DATATYPE_SFIXED_POINT_8 | QNN_DATATYPE_SFIXED_POINT_8 |

Constraints

Configuration

in[0]

out[0..m]

All

  • Shape: Max supported input rank is 5.

  • Shape: Max supported output rank is 4.

INT16

  • Input quantization must be equal to output quantization.

INT8

  • Input quantization must be equal to output quantization.
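UnPack splits its input along one axis into outputs of rank one lower, which is why a rank-5 input yields rank-4 outputs and why input and output quantization must match: values are only copied, never rescaled. A minimal NumPy model of this behavior (illustrative, not SDK code):

```python
import numpy as np

# Rank-5 input (max supported); unpack along axis 1 into rank-4 outputs.
x = np.arange(720).reshape(2, 3, 4, 5, 6).astype(np.uint8)
axis = 1
outs = [np.squeeze(s, axis=axis) for s in np.split(x, x.shape[axis], axis=axis)]

assert len(outs) == 3                               # one output per slice
assert all(o.shape == (2, 4, 5, 6) for o in outs)   # rank 5 -> rank 4
assert np.array_equal(outs[1], x[:, 1])             # values copied verbatim
```

Because each output is a verbatim slice of the input, reusing the input's scale/offset encoding on every output (the constraint above) is the only consistent choice.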

Quantization

Configuration

out[0..m]

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT16

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_16

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_UFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True

INT8

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_AXIS_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: BW_SCALE_OFFSET

    • Math Invariant: True

  • Quantization Parameters Config:
    • Datatype: QNN_DATATYPE_SFIXED_POINT_8

    • Encoding: SCALE_OFFSET

    • Math Invariant: True