Supported Network Layers
Qualcomm® Neural Processing SDK supports the network layer types listed in the table below.
See Limitations for details on the limitations and constraints for the supported runtimes and individual layer types.
All supported layers in the GPU runtime are valid for both GPU modes: GPU_FLOAT32_16_HYBRID and GPU_FLOAT16. In GPU_FLOAT32_16_HYBRID mode, data storage is done in half float and computation is done in full float. In GPU_FLOAT16 mode, both data storage and computation are done in half float.
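The precision trade-off between the two GPU modes can be illustrated with a small NumPy sketch (this is not SNPE API code, only a numerical model of the behavior described above: rounding tensors to half float for storage, with the accumulation done in either full or half float).

```python
import numpy as np

def matmul_hybrid(a, b):
    # Models GPU_FLOAT32_16_HYBRID: inputs rounded to half float (storage),
    # but the matrix multiply accumulates in full float.
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32)

def matmul_fp16(a, b):
    # Models GPU_FLOAT16: storage AND computation in half float.
    return a.astype(np.float16) @ b.astype(np.float16)

rng = np.random.default_rng(0)
a = rng.random((64, 64), dtype=np.float32)
b = rng.random((64, 64), dtype=np.float32)

ref = a @ b  # full-float reference
err_hybrid = np.abs(matmul_hybrid(a, b) - ref).max()
err_fp16 = np.abs(matmul_fp16(a, b).astype(np.float32) - ref).max()
```

In the hybrid mode, error comes only from rounding the inputs to half float; in full fp16 mode, the accumulation itself is also done in half precision, so its error is typically larger. This is why the hybrid mode exists as a middle ground between memory footprint and accuracy.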
A list of supported ONNX operations can be found at ONNX Operator Support.
Converters Equivalent
COMMAND_LINE : indicates the Op is supported through command-line parameters provided during conversion and not as part of a source framework model. See the Source Framework’s converter help for more details.
INFERRED: indicates the Source Framework does not have a concrete definition for the Op. However, the converter pattern-matches a sequence of Ops to map them to the listed QNN Op.
— : indicates there is no corresponding Source Framework Op, or the corresponding Op is not yet supported.
Runtime Support
YES: Runtime has an implementation for Op.
NO: Runtime does not have an implementation for Op.
Note: The AIP Runtime supports all layers supported by the DSP runtime, since layers not supported by HTA run on HVX.