ONNX Model Conversion
Machine learning frameworks use their own formats for storing neural network models. Qualcomm® Neural Processing SDK supports these models by converting them to a framework-neutral deep learning container (DLC) format. The DLC file is used by the Qualcomm® Neural Processing SDK runtime to execute the neural network. Qualcomm® Neural Processing SDK includes a tool, “snpe-onnx-to-dlc”, for converting models serialized in the ONNX format to DLC.
Converting Models from ONNX to DLC
The snpe-onnx-to-dlc tool converts a serialized ONNX model to an equivalent DLC representation.
With the ONNX alexnet model obtained by following the instructions in https://github.com/onnx/models/blob/main/validated/vision/classification/alexnet/README.md, the following command will produce a DLC representation of alexnet:
snpe-onnx-to-dlc --input_network models/bvlc_alexnet/bvlc_alexnet/model.onnx \
                 --output_path bvlc_alexnet.dlc
Note:
Information about the ops, versions, and parameters Qualcomm® Neural Processing SDK supports can be found at Supported ONNX Ops.
Neither snpe-onnx-to-dlc nor the Qualcomm® Neural Processing SDK runtime supports symbolic tensor shape variables. See Network Resizing for information on resizing Qualcomm® Neural Processing SDK networks at initialization.
In general, Qualcomm® Neural Processing SDK determines the data types for tensors and operations based upon the needs of the runtime and builder parameters. Data types specified by the ONNX model will usually be ignored.
If the model contains ONNX functions, the converter always inlines the function nodes.