snpe-onnx-to-dlc
snpe-onnx-to-dlc converts a serialized ONNX model into a DLC file. Current ONNX conversion supports up to ONNX Opset 22.
usage: snpe-onnx-to-dlc [--out_node OUT_NAMES] [--input_type INPUT_NAME INPUT_TYPE]
[--input_dtype INPUT_NAME INPUT_DTYPE] [--input_encoding [...]]
[--input_layout INPUT_NAME INPUT_LAYOUT] [--custom_io CUSTOM_IO]
[--preserve_io [PRESERVE_IO ...]]
[--dump_qairt_io_config_yaml [DUMP_QAIRT_IO_CONFIG_YAML]]
[--enable_framework_trace] [--dry_run [DRY_RUN]] [-d INPUT_NAME INPUT_DIM]
[-n] [-b BATCH] [-s SYMBOL_NAME VALUE]
[--dump_custom_io_config_template DUMP_CUSTOM_IO_CONFIG_TEMPLATE]
[--quantization_overrides QUANTIZATION_OVERRIDES] [--keep_quant_nodes]
[--disable_batchnorm_folding] [--expand_lstm_op_structure]
[--keep_disconnected_nodes] [--preserve_onnx_output_order]
[--apply_masked_softmax {compressed,uncompressed}]
[--packed_masked_softmax_inputs PACKED_MASKED_SOFTMAX_INPUTS [PACKED_MASKED_SOFTMAX_INPUTS ...]]
[--packed_max_seq PACKED_MAX_SEQ] --input_network INPUT_NETWORK [-h]
[--debug [DEBUG]] [-o OUTPUT_PATH] [--copyright_file COPYRIGHT_FILE]
[--float_bitwidth FLOAT_BITWIDTH] [--float_bw FLOAT_BW]
[--float_bias_bitwidth FLOAT_BIAS_BITWIDTH] [--float_bias_bw FLOAT_BIAS_BW]
[--model_version MODEL_VERSION]
[--validation_target RUNTIME_TARGET PROCESSOR_TARGET] [--strict]
[--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
[--op_package_lib OP_PACKAGE_LIB]
[--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB]
[-p PACKAGE_NAME | --op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
Script to convert ONNX model into DLC
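For example, a minimal conversion of a hypothetical model.onnx (all file names here are illustrative) might look like:
    snpe-onnx-to-dlc --input_network model.onnx --output_path model.dlc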
required arguments:
--input_network INPUT_NETWORK, -i INPUT_NETWORK
Path to the source framework model.
optional arguments:
--out_node OUT_NAMES, --out_name OUT_NAMES
Names of the graph's output tensors. Multiple output names should be
provided separately like:
--out_name out_1 --out_name out_2
--input_type INPUT_NAME INPUT_TYPE, -t INPUT_NAME INPUT_TYPE
Type of data expected by each input op/layer. Type for each input is
|default| if not specified. For example: "data" image. Note that the quotes
should always be included in order to handle special characters, spaces, etc.
For multiple inputs specify multiple --input_type on the command line.
Eg:
--input_type "data1" image --input_type "data2" opaque
These options are used by the DSP runtime, and the following descriptions
state how the input will be handled for each option.
Image:
Input is float in the range 0-255; the input's mean is 0.0f and its max is
255.0f. The floats are cast to uint8 values and passed to the DSP.
Default:
Pass the input as floats to the DSP directly; the DSP will quantize it.
Opaque:
Assumes the input is float because the consumer layer (i.e. the next layer)
requires it as float; therefore it won't be quantized.
Choices supported:
image
default
opaque
--input_dtype INPUT_NAME INPUT_DTYPE
The names and datatype of the network input layers specified in the format
[input_name datatype], for example:
'data' 'float32'
Default is float32 if not specified.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dtype on the command line like:
--input_dtype 'data1' 'float32' --input_dtype 'data2' 'float32'
--input_encoding INPUT_ENCODING [INPUT_ENCODING ...], -e INPUT_ENCODING [INPUT_ENCODING ...]
Usage: --input_encoding "INPUT_NAME" INPUT_ENCODING_IN
[INPUT_ENCODING_OUT]
Input encoding of the network inputs. Default is bgr.
e.g.
--input_encoding "data" rgba
Quotes must wrap the input node name to handle special characters,
spaces, etc. To specify encodings for multiple inputs, invoke
--input_encoding for each one.
e.g.
--input_encoding "data1" rgba --input_encoding "data2" other
Optionally, an output encoding may be specified for an input node by
providing a second encoding. The default output encoding is bgr.
e.g.
--input_encoding "data3" rgba rgb
Input encoding types:
image color encodings: bgr, rgb, nv21, nv12, ...
time_series: for inputs of rnn models;
other: encoding not listed above or unknown.
Supported encodings:
bgr
rgb
rgba
argb32
nv21
nv12
time_series
other
--input_layout INPUT_NAME INPUT_LAYOUT, -l INPUT_NAME INPUT_LAYOUT
Layout of each input tensor. If not specified, it will use the default
based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, HWIO, OIHW, NFC, NCF, NTF, TNF, NF, NC, F,
NONTRIVIAL
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T =
Time
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
NONTRIVIAL for everything else. For multiple inputs specify multiple
--input_layout on the command line.
Eg:
--input_layout "data1" NCHW --input_layout "data2" NCHW
--custom_io CUSTOM_IO
Use this option to specify a yaml file for custom IO.
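For example (file names are illustrative), a starting template can be generated with --dump_custom_io_config_template, edited by hand, and then passed back via --custom_io:
    # Dump an editable template describing the model's inputs and outputs
    snpe-onnx-to-dlc -i model.onnx --dump_custom_io_config_template custom_io.yaml
    # Convert with the edited custom IO configuration applied
    snpe-onnx-to-dlc -i model.onnx --custom_io custom_io.yaml -o model.dlc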
--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]
Use this option to preserve IO layout and datatype. The different ways of
using this option are as follows:
--preserve_io layout <space separated list of names of inputs and
outputs of the graph>
--preserve_io datatype <space separated list of names of inputs and
outputs of the graph>
In this case, the user should also specify the string 'layout' or 'datatype'
in the command to indicate which property the converter needs to preserve.
e.g.
--preserve_io layout input1 input2 output1
--preserve_io datatype input1 input2 output1
Optionally, the user may choose to preserve the layout and/or datatype for
all the inputs and outputs of the graph.
This can be done in the following two ways:
--preserve_io layout
--preserve_io datatype
Additionally, the user may choose to preserve both layout and datatypes for
all IO tensors by just passing the option as follows:
--preserve_io
Note: Only one of the above usages is allowed at a time.
Note: --custom_io gets higher precedence than --preserve_io.
--dump_qairt_io_config_yaml [DUMP_QAIRT_IO_CONFIG_YAML]
Use this option to dump a yaml file which contains the equivalent I/O
configuration of the QAIRT Converter along with the QAIRT Converter command;
the file can be passed to the QAIRT Converter using the option --io_config.
--enable_framework_trace
Use this option to enable the converter to trace op/tensor change
information.
Currently framework op trace is supported only for ONNX converter.
--dry_run [DRY_RUN] Evaluates the model without actually converting any ops, and returns
unsupported ops/attributes as well as unused inputs and/or outputs if any.
Leave empty or specify "info" to see the dry run as a table, or specify
"debug" to show more detailed messages.
-d INPUT_NAME INPUT_DIM, --input_dim INPUT_NAME INPUT_DIM
The names and dimensions of all the input buffers to the network, specified in
the format [input_name comma-separated-dimensions],
for example: 'data' 1,224,224,3.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For scalar inputs, use a single dimension `0` to indicate that the input is a scalar value.
For multiple inputs specify multiple --input_dim on the command line like:
--input_dim 'data1' 1,224,224,3 --input_dim 'data2' 0
NOTE: This feature works only with Onnx 1.6.0 and above
-n, --no_simplification
Do not attempt to simplify the model automatically. This may prevent some
models from properly converting
when sequences of unsupported static operations are present.
-b BATCH, --batch BATCH
The batch dimension override. This will take the first dimension of all
inputs and treat it as a batch dim, overriding it with the value provided
here. For example:
--batch 6
will result in a shape change from [1,3,224,224] to [6,3,224,224].
If there are inputs without batch dim this should not be used and each input
should be overridden independently using -d option for input dimension
overrides.
-s SYMBOL_NAME VALUE, --define_symbol SYMBOL_NAME VALUE
This option allows overriding specific input dimension symbols. For instance
you might see input shapes specified with variables such as :
data: [1,3,height,width]
To override these simply pass the option as:
--define_symbol height 224 --define_symbol width 448
which results in dimensions that look like:
data: [1,3,224,448]
--dump_custom_io_config_template DUMP_CUSTOM_IO_CONFIG_TEMPLATE
Dumps the yaml template for Custom I/O configuration. This file can be edited
as per the custom requirements and passed using the option --custom_io. Use
this option to specify a yaml file to which the custom IO config template is
dumped.
--disable_batchnorm_folding
--expand_lstm_op_structure
Enables optimization that breaks the LSTM op to equivalent math ops
--keep_disconnected_nodes
Disable Optimization that removes Ops not connected to the main graph.
This optimization uses output names provided over commandline OR
inputs/outputs extracted from the Source model to determine the main graph
--preserve_onnx_output_order
Preserve the ONNX output order in the converted graph. Note: This may
slightly impact performance.
-h, --help show this help message and exit
--debug [DEBUG] Run the converter in debug mode.
-o OUTPUT_PATH, --output_path OUTPUT_PATH
Path where the converted output model should be saved. If not specified, the
converted model will be written to a file with the same name as the input model.
--copyright_file COPYRIGHT_FILE
Path to copyright file. If provided, the content of the file will be added
to the output model.
--float_bitwidth FLOAT_BITWIDTH
Use the --float_bitwidth option to convert the graph to the specified float
bitwidth, either 32 (default) or 16.
--float_bw FLOAT_BW Note: --float_bw is deprecated, use --float_bitwidth.
--float_bias_bitwidth FLOAT_BIAS_BITWIDTH
Use the --float_bias_bitwidth option to select the bitwidth to use for the
float bias tensor.
--float_bias_bw FLOAT_BIAS_BW
Note: --float_bias_bw is deprecated, use --float_bias_bitwidth.
--model_version MODEL_VERSION
User-defined ASCII string to identify the model; only the first 64 bytes will
be stored.
--validation_target RUNTIME_TARGET PROCESSOR_TARGET
Note: This option is deprecated.
A combination of processor and runtime target against which the model will be
validated.
Choices for RUNTIME_TARGET:
{cpu, gpu, dsp}.
Choices for PROCESSOR_TARGET:
{snapdragon_801, snapdragon_820, snapdragon_835}.
If not specified, will validate model against {snapdragon_820,
snapdragon_835} across all runtime targets.
--strict Note: This option is deprecated.
If specified, will validate in strict mode whereby model will not be
produced if it violates constraints of the specified validation target. If
not specified, will validate model in permissive mode against the specified
validation target.
--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -udo CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to the UDO configs (space separated, if multiple)
Custom Op Package Options:
--op_package_lib OP_PACKAGE_LIB, -opl OP_PACKAGE_LIB
Use this argument to pass an op package library for quantization. Must be in
the form <op_package_lib_path:interfaceProviderName> and be separated by a
comma for multiple package libs
--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB, -cpl CONVERTER_OP_PACKAGE_LIB
Absolute path to converter op package library compiled by the OpPackage
generator. Must be separated by a comma for multiple package libraries.
Note: Order of converter op package libraries must follow the order of xmls.
Ex1: --converter_op_package_lib absolute_path_to/libExample.so
Ex2: -cpl absolute_path_to/libExample1.so,absolute_path_to/libExample2.so
-p PACKAGE_NAME, --package_name PACKAGE_NAME
A global package name to be used for each node in the Model.cpp file.
Defaults to Qnn header defined package name
--op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -opc CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to a Qnn Op Package XML configuration file that contains user defined
custom operations.
Quantizer Options:
--quantization_overrides QUANTIZATION_OVERRIDES
Use this option to specify a json file with parameters to use for
quantization. These will override any quantization data carried from
conversion (eg TF fake quantization) or calculated during the normal
quantization process. Format defined as per AIMET specification.
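A minimal sketch of such an overrides file, assuming the AIMET 0.x encodings schema (tensor names and all numeric values are illustrative):
    {
      "version": "0.6.1",
      "activation_encodings": {
        "conv1_out": [
          {"bitwidth": 8, "dtype": "int", "min": -1.0, "max": 1.0,
           "scale": 0.0078, "offset": -128, "is_symmetric": "False"}
        ]
      },
      "param_encodings": {
        "conv1.weight": [
          {"bitwidth": 8, "dtype": "int", "min": -0.5, "max": 0.5,
           "scale": 0.0039, "offset": -128, "is_symmetric": "True"}
        ]
      }
    }
It would then be passed as, e.g., snpe-onnx-to-dlc -i model.onnx --quantization_overrides overrides.json.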
--keep_quant_nodes Use this option to keep activation quantization nodes in the graph rather
than stripping them.
Masked Softmax Optimization Options:
--apply_masked_softmax {compressed,uncompressed}
This flag enables the pass that creates a MaskedSoftmax Op and
rewrites the graph to include this Op. MaskedSoftmax Op may not
be supported by all the QNN backends. Please check the
supplemental backend XML for the targeted backend.
This argument takes a string parameter input that selects
the mode of MaskedSoftmax Op.
'compressed' value rewrites the graph with the compressed version of
MaskedSoftmax Op.
'uncompressed' value rewrites the graph with the uncompressed version of
MaskedSoftmax Op.
--packed_masked_softmax_inputs PACKED_MASKED_SOFTMAX_INPUTS [PACKED_MASKED_SOFTMAX_INPUTS ...]
Specify the input ids tensor name that will be packed into a single
inference.
This is applicable only for Compressed MaskedSoftmax Op.
This will create a new input to the graph named 'position_ids',
with the same shape as the input named in this flag.
At runtime, this input shall be provided with the token
locations of the individual sequences, which will be passed
internally to the positional embedding layer.
E.g. if 2 sequences of lengths 20 and 30 are packed together
in a single batch of 64 tokens, then this new input 'position_ids' should have
value [0, 1, ..., 19, 0, 1, ..., 29, 0, 0, 0, ..., 0]
Usage: --packed_masked_softmax_inputs input_ids
A packed model enables the user to pack multiple sequences into a single
inference batch.
--packed_max_seq PACKED_MAX_SEQ
Number of sequences packed in the single input ids and
single attention mask inputs. Applicable only for
Compressed MaskedSoftmax Op.
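A sketch of a combined invocation for a packed, compressed MaskedSoftmax conversion (the model name and the input_ids tensor name are illustrative):
    snpe-onnx-to-dlc -i bert.onnx \
        --apply_masked_softmax compressed \
        --packed_masked_softmax_inputs input_ids \
        --packed_max_seq 2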
Note: Only one of: {'op_package_config', 'package_name'} can be specified
For more information, see ONNX Model Conversion
input_layout argument:
Used with TF2Onnx or Keras2Onnx models when the input layout is NHWC. The ONNX converter assumes that 4D inputs to the model are used by CNNs and are in NCHW format. For Keras2Onnx or TF2Onnx models, where the NHWC input is most likely followed by a transpose to NCHW, the converter will fail to convert and optimize the model without this argument.
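For example, a hypothetical TF2Onnx export whose 4D input named "data" actually carries NHWC data could be converted as:
    snpe-onnx-to-dlc -i tf2onnx_model.onnx --input_layout "data" NHWC -o model.dlc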
snpe-pytorch-to-dlc
snpe-pytorch-to-dlc converts a serialized PyTorch model into a DLC file.
usage: snpe-pytorch-to-dlc -d INPUT_NAME INPUT_DIM [--out_node OUT_NAMES]
[--input_type INPUT_NAME INPUT_TYPE]
[--input_dtype INPUT_NAME INPUT_DTYPE] [--input_encoding ...]
[--input_layout INPUT_NAME INPUT_LAYOUT] [--custom_io CUSTOM_IO]
[--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]] [--dump_relay DUMP_RELAY]
[--quantization_overrides QUANTIZATION_OVERRIDES] [--keep_quant_nodes]
[--disable_batchnorm_folding] [--expand_lstm_op_structure]
[--keep_disconnected_nodes]
--input_network INPUT_NETWORK [-h] [--debug [DEBUG]] [-o OUTPUT_PATH]
[--copyright_file COPYRIGHT_FILE] [--float_bitwidth FLOAT_BITWIDTH]
[--float_bw FLOAT_BW] [--float_bias_bw FLOAT_BIAS_BW]
[--model_version MODEL_VERSION]
[--validation_target RUNTIME_TARGET PROCESSOR_TARGET] [--strict]
[--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
[--op_package_lib OP_PACKAGE_LIB]
[--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB]
[-p PACKAGE_NAME | --op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
Script to convert PyTorch model into DLC
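For example, a minimal conversion of a hypothetical serialized PyTorch model (file and input names are illustrative) might look like:
    snpe-pytorch-to-dlc -i model.pt -d 'data' 1,3,224,224 -o model.dlc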
required arguments:
-d INPUT_NAME INPUT_DIM, --input_dim INPUT_NAME INPUT_DIM
The names and dimensions of the network input layers specified in the format
[input_name comma-separated-dimensions], for example:
'data' 1,3,224,224
Note that the quotes should always be included in order to handle special
characters, spaces, etc. For multiple inputs specify multiple --input_dim on the command line like:
--input_dim 'data1' 1,3,224,224 --input_dim 'data2' 1,50,100,3
--input_network INPUT_NETWORK, -i INPUT_NETWORK
Path to the source framework model.
optional arguments:
--out_node OUT_NAMES, --out_name OUT_NAMES
Names of the graph's output tensors. Multiple output names should be
provided separately like:
--out_name out_1 --out_name out_2
--input_type INPUT_NAME INPUT_TYPE, -t INPUT_NAME INPUT_TYPE
Type of data expected by each input op/layer. Type for each input is
|default| if not specified. For example: "data" image. Note that the quotes
should always be included in order to handle special characters, spaces, etc.
For multiple inputs specify multiple --input_type on the command line.
Eg:
--input_type "data1" image --input_type "data2" opaque
These options are used by the DSP runtime, and the following descriptions
state how the input will be handled for each option.
Image:
Input is float in the range 0-255; the input's mean is 0.0f and its max is
255.0f. The floats are cast to uint8 values and passed to the DSP.
Default:
Pass the input as floats to the DSP directly; the DSP will quantize it.
Opaque:
Assumes the input is float because the consumer layer (i.e. the next layer)
requires it as float; therefore it won't be quantized.
Choices supported:
image
default
opaque
--input_dtype INPUT_NAME INPUT_DTYPE
The names and datatype of the network input layers specified in the format
[input_name datatype], for example:
'data' 'float32'
Default is float32 if not specified.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dtype on the command line like:
--input_dtype 'data1' 'float32' --input_dtype 'data2' 'float32'
--input_encoding INPUT_ENCODING [INPUT_ENCODING ...], -e INPUT_ENCODING [INPUT_ENCODING ...]
Usage: --input_encoding "INPUT_NAME" INPUT_ENCODING_IN
[INPUT_ENCODING_OUT]
Input encoding of the network inputs. Default is bgr.
e.g.
--input_encoding "data" rgba
Quotes must wrap the input node name to handle special characters,
spaces, etc. To specify encodings for multiple inputs, invoke
--input_encoding for each one.
e.g.
--input_encoding "data1" rgba --input_encoding "data2" other
Optionally, an output encoding may be specified for an input node by
providing a second encoding. The default output encoding is bgr.
e.g.
--input_encoding "data3" rgba rgb
Input encoding types:
image color encodings: bgr, rgb, nv21, nv12, ...
time_series: for inputs of rnn models;
other: encoding not listed above or unknown.
Supported encodings:
bgr
rgb
rgba
argb32
nv21
nv12
time_series
other
--input_layout INPUT_NAME INPUT_LAYOUT, -l INPUT_NAME INPUT_LAYOUT
Layout of each input tensor. If not specified, it will use the default
based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, NFC, NCF, NTF, TNF, NF, NC, F, NONTRIVIAL
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T = Time
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
NONTRIVIAL for everything else. For multiple inputs specify multiple
--input_layout on the command line.
Eg:
--input_layout "data1" NCHW --input_layout "data2" NCHW
Note: This flag does not set the layout of the input tensor in the converted DLC.
--custom_io CUSTOM_IO
Use this option to specify a yaml file for custom IO.
--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]
Use this option to preserve IO layout and datatype. The different ways of
using this option are as follows:
--preserve_io layout <space separated list of names of inputs and
outputs of the graph>
--preserve_io datatype <space separated list of names of inputs and
outputs of the graph>
In this case, the user should also specify the string 'layout' or 'datatype'
in the command to indicate which property the converter needs to preserve.
e.g.
--preserve_io layout input1 input2 output1
--preserve_io datatype input1 input2 output1
Optionally, the user may choose to preserve the layout and/or datatype for
all the inputs and outputs of the graph.
This can be done in the following two ways:
--preserve_io layout
--preserve_io datatype
Additionally, the user may choose to preserve both layout and datatypes for
all IO tensors by just passing the option as follows:
--preserve_io
Note: Only one of the above usages is allowed at a time.
Note: --custom_io gets higher precedence than --preserve_io.
--dump_relay DUMP_RELAY
Dump Relay ASM and Params at the path provided with the argument
Usage: --dump_relay <path_to_dump>
--disable_batchnorm_folding
--expand_lstm_op_structure
Enables optimization that breaks the LSTM op to equivalent math ops
--keep_disconnected_nodes
Disable Optimization that removes Ops not connected to the main graph.
This optimization uses output names provided over commandline OR
inputs/outputs extracted from the Source model to determine the main graph
-h, --help show this help message and exit
-o OUTPUT_PATH, --output_path OUTPUT_PATH
Path where the converted output model should be saved. If not specified, the
converted model will be written to a file with the same name as the input model.
--copyright_file COPYRIGHT_FILE
Path to copyright file. If provided, the content of the file will be added
to the output model.
--float_bitwidth FLOAT_BITWIDTH
Use the --float_bitwidth option to select the bitwidth to use when using
float for parameters (weights/bias) and activations, for all ops or for
specific ops selected via encodings, either 32 (default) or 16.
--float_bw FLOAT_BW
Note: --float_bw is deprecated, use --float_bitwidth.
--float_bias_bw FLOAT_BIAS_BW
Use the --float_bias_bw option to select the bitwidth to use for the float
bias tensor
--model_version MODEL_VERSION
User-defined ASCII string to identify the model; only the first 64 bytes will
be stored.
--validation_target RUNTIME_TARGET PROCESSOR_TARGET
A combination of processor and runtime target against which the model will be
validated.
Choices for RUNTIME_TARGET:
{cpu, gpu, dsp}.
Choices for PROCESSOR_TARGET:
{snapdragon_801, snapdragon_820, snapdragon_835}.
If not specified, will validate model against {snapdragon_820,
snapdragon_835} across all runtime targets.
--strict If specified, will validate in strict mode whereby model will not be
produced if it violates constraints of the specified validation target. If
not specified, will validate model in permissive mode against the specified
validation target.
--udo_config_paths UDO_CONFIG_PATHS [UDO_CONFIG_PATHS ...], -udo UDO_CONFIG_PATHS
[UDO_CONFIG_PATHS ...]
Path to the UDO configs (space separated, if multiple)
Quantizer Options:
--quantization_overrides QUANTIZATION_OVERRIDES
Use this option to specify a json file with parameters to use for
quantization. These will override any quantization data carried from
conversion (eg TF fake quantization) or calculated during the normal
quantization process. Format defined as per AIMET specification.
--keep_quant_nodes Use this option to keep activation quantization nodes in the graph rather
than stripping them.
Custom Op Package Options:
--op_package_lib OP_PACKAGE_LIB, -opl OP_PACKAGE_LIB
Use this argument to pass an op package library for quantization. Must be in
the form <op_package_lib_path:interfaceProviderName> and be separated by a
comma for multiple package libs
--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB, -cpl CONVERTER_OP_PACKAGE_LIB
Path to converter op package library compiled by the OpPackage generator.
-p PACKAGE_NAME, --package_name PACKAGE_NAME
A global package name to be used for each node in the Model.cpp file.
Defaults to Qnn header defined package name
--op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -opc CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to a Qnn Op Package XML configuration file that contains user defined
custom operations.
Note: Only one of: {'package_name', 'op_package_config'} can be specified
For more information, see PyTorch Model Conversion
snpe-tensorflow-to-dlc
snpe-tensorflow-to-dlc converts a TensorFlow model into a DLC file.
usage: snpe-tensorflow-to-dlc -d INPUT_NAME INPUT_DIM --out_node OUT_NAMES
[--input_type INPUT_NAME INPUT_TYPE]
[--input_dtype INPUT_NAME INPUT_DTYPE] [--input_encoding ...]
[--input_layout INPUT_NAME INPUT_LAYOUT] [--custom_io CUSTOM_IO]
[--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]]
[--show_unconsumed_nodes] [--saved_model_tag SAVED_MODEL_TAG]
[--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY]
[--quantization_overrides QUANTIZATION_OVERRIDES] [--keep_quant_nodes]
[--disable_batchnorm_folding] [--expand_lstm_op_structure]
[--keep_disconnected_nodes]
--input_network INPUT_NETWORK [-h] [--debug [DEBUG]] [-o OUTPUT_PATH]
[--copyright_file COPYRIGHT_FILE] [--float_bitwidth FLOAT_BITWIDTH]
[--float_bw FLOAT_BW] [--float_bias_bw FLOAT_BIAS_BW]
[--model_version MODEL_VERSION]
[--validation_target RUNTIME_TARGET PROCESSOR_TARGET] [--strict]
[--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
[--op_package_lib OP_PACKAGE_LIB]
[--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB]
[-p PACKAGE_NAME | --op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
Script to convert TF model into DLC.
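For example, a minimal conversion of a hypothetical frozen graph (file, input, and output names are illustrative) might look like:
    snpe-tensorflow-to-dlc -i frozen_graph.pb -d 'input' 1,224,224,3 \
        --out_node 'softmax' -o model.dlc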
required arguments:
-d INPUT_NAME INPUT_DIM, --input_dim INPUT_NAME INPUT_DIM
The names and dimensions of the network input layers specified in the format
[input_name comma-separated-dimensions], for example:
'data' 1,224,224,3
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dim on the command line like:
--input_dim 'data1' 1,224,224,3 --input_dim 'data2' 1,50,100,3
--out_node OUT_NAMES, --out_name OUT_NAMES
Names of the graph's output tensors. Multiple output names should be
provided separately like:
--out_name out_1 --out_name out_2
--input_network INPUT_NETWORK, -i INPUT_NETWORK
Path to the source framework model.
optional arguments:
--input_type INPUT_NAME INPUT_TYPE, -t INPUT_NAME INPUT_TYPE
Type of data expected by each input op/layer. Type for each input is
|default| if not specified. For example: "data" image. Note that the quotes
should always be included in order to handle special characters, spaces, etc.
For multiple inputs specify multiple --input_type on the command line.
Eg:
--input_type "data1" image --input_type "data2" opaque
These options are used by the DSP runtime, and the following descriptions
state how the input will be handled for each option.
Image:
Input is float in the range 0-255; the input's mean is 0.0f and its max is
255.0f. The floats are cast to uint8 values and passed to the DSP.
Default:
Pass the input as floats to the DSP directly; the DSP will quantize it.
Opaque:
Assumes the input is float because the consumer layer (i.e. the next layer)
requires it as float; therefore it won't be quantized.
Choices supported:
image
default
opaque
--input_dtype INPUT_NAME INPUT_DTYPE
The names and datatype of the network input layers specified in the format
[input_name datatype], for example:
'data' 'float32'
Default is float32 if not specified.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dtype on the command line like:
--input_dtype 'data1' 'float32' --input_dtype 'data2' 'float32'
--input_encoding INPUT_ENCODING [INPUT_ENCODING ...], -e INPUT_ENCODING [INPUT_ENCODING ...]
Usage: --input_encoding "INPUT_NAME" INPUT_ENCODING_IN
[INPUT_ENCODING_OUT]
Input encoding of the network inputs. Default is bgr.
e.g.
--input_encoding "data" rgba
Quotes must wrap the input node name to handle special characters,
spaces, etc. To specify encodings for multiple inputs, invoke
--input_encoding for each one.
e.g.
--input_encoding "data1" rgba --input_encoding "data2" other
Optionally, an output encoding may be specified for an input node by
providing a second encoding. The default output encoding is bgr.
e.g.
--input_encoding "data3" rgba rgb
Input encoding types:
image color encodings: bgr, rgb, nv21, nv12, ...
time_series: for inputs of rnn models;
other: encoding not listed above or unknown.
Supported encodings:
bgr
rgb
rgba
argb32
nv21
nv12
time_series
other
--input_layout INPUT_NAME INPUT_LAYOUT, -l INPUT_NAME INPUT_LAYOUT
Layout of each input tensor. If not specified, it will use the default
based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, NFC, NCF, NTF, TNF, NF, NC, F, NONTRIVIAL
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T = Time
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
NONTRIVIAL for everything else. For multiple inputs specify multiple
--input_layout on the command line.
Eg:
--input_layout "data1" NCHW --input_layout "data2" NCHW
Note: This flag does not set the layout of the input tensor in the converted DLC.
--custom_io CUSTOM_IO
Use this option to specify a yaml file for custom IO.
--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]
Use this option to preserve IO layout and datatype. The different ways of
using this option are as follows:
--preserve_io layout <space separated list of names of inputs and
outputs of the graph>
--preserve_io datatype <space separated list of names of inputs and
outputs of the graph>
In this case, the user should also specify the string 'layout' or 'datatype'
in the command to indicate which property the converter needs to preserve.
e.g.
--preserve_io layout input1 input2 output1
--preserve_io datatype input1 input2 output1
Optionally, the user may choose to preserve the layout and/or datatype for
all the inputs and outputs of the graph.
This can be done in the following two ways:
--preserve_io layout
--preserve_io datatype
Additionally, the user may choose to preserve both layout and datatypes for
all IO tensors by just passing the option as follows:
--preserve_io
Note: Only one of the above usages is allowed at a time.
Note: --custom_io gets higher precedence than --preserve_io.
--show_unconsumed_nodes
Displays a list of unconsumed nodes, if any are found. Nodes which are
unconsumed do not violate the structural fidelity of the generated graph.
--saved_model_tag SAVED_MODEL_TAG
Specify the tag to select a MetaGraph from the SavedModel. ex:
--saved_model_tag serve. Default value will be 'serve' when it is not
assigned.
--saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY
Specify the signature key to select the inputs and outputs of the model. ex:
--saved_model_signature_key serving_default. Default value will be
'serving_default' when it is not assigned.
--disable_batchnorm_folding
--expand_lstm_op_structure
Enables optimization that breaks the LSTM op to equivalent math ops
--keep_disconnected_nodes
Disable Optimization that removes Ops not connected to the main graph.
This optimization uses output names provided over commandline OR
inputs/outputs extracted from the Source model to determine the main graph
-h, --help show this help message and exit
--debug [DEBUG] Run the converter in debug mode.
-o OUTPUT_PATH, --output_path OUTPUT_PATH
Path where the converted output model should be saved. If not specified, the
converted model will be written to a file with the same name as the input model.
--copyright_file COPYRIGHT_FILE
Path to copyright file. If provided, the content of the file will be added
to the output model.
--float_bitwidth FLOAT_BITWIDTH
Use the --float_bitwidth option to select the bitwidth to use when using
float for parameters (weights/bias) and activations, for all ops or for
specific ops selected via encodings, either 32 (default) or 16.
--float_bw FLOAT_BW
Note: --float_bw is deprecated, use --float_bitwidth.
--float_bias_bw FLOAT_BIAS_BW
Use the --float_bias_bw option to select the bitwidth to use for the float
bias tensor
--model_version MODEL_VERSION
User-defined ASCII string to identify the model; only the first 64 bytes will
be stored.
--validation_target RUNTIME_TARGET PROCESSOR_TARGET
A combination of processor and runtime target against which the model will be
validated.
Choices for RUNTIME_TARGET:
{cpu, gpu, dsp}.
Choices for PROCESSOR_TARGET:
{snapdragon_801, snapdragon_820, snapdragon_835}.
If not specified, will validate model against {snapdragon_820,
snapdragon_835} across all runtime targets.
--strict If specified, will validate in strict mode whereby model will not be
produced if it violates constraints of the specified validation target. If
not specified, will validate model in permissive mode against the specified
validation target.
--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -udo CUSTOM_OP_CONFIG_PATHS
[CUSTOM_OP_CONFIG_PATHS ...]
Path to the UDO configs (space separated, if multiple)
Quantizer Options:
--quantization_overrides QUANTIZATION_OVERRIDES
Use this option to specify a json file with parameters to use for
quantization. These will override any quantization data carried from
conversion (eg TF fake quantization) or calculated during the normal
quantization process. Format defined as per AIMET specification.
--keep_quant_nodes Use this option to keep activation quantization nodes in the graph rather
than stripping them.
Custom Op Package Options:
--op_package_lib OP_PACKAGE_LIB, -opl OP_PACKAGE_LIB
Use this argument to pass an op package library for quantization. Must be in
the form <op_package_lib_path:interfaceProviderName> and be separated by a
comma for multiple package libs
--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB, -cpl CONVERTER_OP_PACKAGE_LIB
Path to converter op package library compiled by the OpPackage generator.
-p PACKAGE_NAME, --package_name PACKAGE_NAME
A global package name to be used for each node in the Model.cpp file.
Defaults to Qnn header defined package name
--op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -opc CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to a Qnn Op Package XML configuration file that contains user defined
custom operations.
Note: Only one of: {'package_name', 'op_package_config'} can be specified
input_network argument:
The converter supports a single frozen graph .pb file, a path to a pair of graph meta and checkpoint files, or the path to a SavedModel directory (TF 2.x).
If you are using the TensorFlow Saver to save your graph during training, 3 files will be generated as described below:
<model-name>.meta
<model-name>
checkpoint
The converter --input_network option specifies the path to the graph meta file. The converter will also use the checkpoint file to read the graph node parameters during conversion. The checkpoint file must have the same name without the .meta suffix.
This argument is required.
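For example, given hypothetical files mymodel.meta, mymodel, and checkpoint produced by the TensorFlow Saver in the same directory:
    # Pass the .meta file; the converter locates the checkpoint file 'mymodel'
    # by dropping the .meta suffix
    snpe-tensorflow-to-dlc -i mymodel.meta -d 'input' 1,224,224,3 --out_node 'output'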
input_dim argument:
Specifies the input dimensions of the graph’s input node(s)
The converter requires a node name along with dimensions as input from which it will create an input layer by using the node output tensor dimensions. When defining a graph, there is typically a placeholder name used as input during training in the graph. The placeholder tensor name is the name you must use as the argument. It is also possible to use other types of nodes as input, however the node used as input will not be used as part of a layer other than the input layer.
Multiple Inputs
Networks with multiple inputs must provide --input_dim INPUT_NAME INPUT_DIM, one for each input node.
This argument is required.
out_node argument:
The name of the last node in your TensorFlow graph which will represent the output layer of your network.
Multiple Outputs
Networks with multiple outputs must provide several --out_node arguments, one for each output node.
output_path argument:
Specifies the output DLC file name.
This argument is optional. If not provided the converter will create a DLC file with the same name as the graph file name, with a .dlc file extension.
saved_model_tag:
For Tensorflow 2.x networks, this option allows a MetaGraph to be selected from the SavedModel specified by input_network.
This argument is optional and defaults to “serve”.
saved_model_signature:
For Tensorflow 2.x networks, this option specifies the signature key for selecting inputs and outputs of a Tensorflow 2.x SavedModel.
This argument is optional and defaults to “serving_default”.
SavedModel is the default model format in TensorFlow 2 and is now supported by the Qualcomm® Neural Processing SDK TensorFlow converter.
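For example, a hypothetical TensorFlow 2.x SavedModel directory could be converted as follows (the tag and signature key shown are the documented defaults):
    snpe-tensorflow-to-dlc -i ./saved_model_dir -d 'input' 1,224,224,3 \
        --out_node 'output' --saved_model_tag serve \
        --saved_model_signature_key serving_default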
snpe-tflite-to-dlc
snpe-tflite-to-dlc converts a TFLite model into a DLC file.
usage: snpe-tflite-to-dlc [-d INPUT_NAME INPUT_DIM] [--signature_name SIGNATURE_NAME]
[--out_node OUT_NAMES] [--input_type INPUT_NAME INPUT_TYPE]
[--input_dtype INPUT_NAME INPUT_DTYPE] [--input_encoding ...]
[--input_layout INPUT_NAME INPUT_LAYOUT] [--custom_io CUSTOM_IO]
[--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]] [--dump_relay DUMP_RELAY]
[--quantization_overrides QUANTIZATION_OVERRIDES]
[--keep_quant_nodes] [--disable_batchnorm_folding]
[--expand_lstm_op_structure]
[--keep_disconnected_nodes] --input_network INPUT_NETWORK [-h]
[--debug [DEBUG]] [-o OUTPUT_PATH] [--copyright_file COPYRIGHT_FILE]
[--float_bitwidth FLOAT_BITWIDTH] [--float_bw FLOAT_BW]
[--float_bias_bw FLOAT_BIAS_BW] [--model_version MODEL_VERSION]
[--validation_target RUNTIME_TARGET PROCESSOR_TARGET] [--strict]
[--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
[--op_package_lib OP_PACKAGE_LIB]
[--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB]
[-p PACKAGE_NAME | --op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
Script to convert TFLite model into DLC
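For example, a minimal conversion of a hypothetical model.tflite (file and input names are illustrative) might look like:
    snpe-tflite-to-dlc -i model.tflite -d 'input' 1,224,224,3 -o model.dlc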
required arguments:
--input_network INPUT_NETWORK, -i INPUT_NETWORK
Path to the source framework model.
optional arguments:
-d INPUT_NAME INPUT_DIM, --input_dim INPUT_NAME INPUT_DIM
The names and dimensions of the network input layers specified in the format
[input_name comma-separated-dimensions], for example:
'data' 1,224,224,3
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dim on the command line like:
--input_dim 'data1' 1,224,224,3 --input_dim 'data2' 1,50,100,3
--signature_name SIGNATURE_NAME, -sn SIGNATURE_NAME
Specifies a specific subgraph signature to convert.
--out_node OUT_NAMES, --out_name OUT_NAMES
Names of the graph's output tensors. Multiple output names should be
provided separately like:
--out_name out_1 --out_name out_2
--input_type INPUT_NAME INPUT_TYPE, -t INPUT_NAME INPUT_TYPE
Type of data expected by each input op/layer. Type for each input is
|default| if not specified. For example: "data" image. Note that the quotes
should always be included in order to handle special characters, spaces, etc.
For multiple inputs specify multiple --input_type on the command line.
Eg:
--input_type "data1" image --input_type "data2" opaque
These options are used by the DSP runtime, and the following descriptions
state how the input will be handled for each option.
Image:
Input is float in the range 0-255; the input's mean is 0.0f and its max is
255.0f. The floats are cast to uint8 values and passed to the DSP.
Default:
Pass the input as floats to the DSP directly; the DSP will quantize it.
Opaque:
Assumes the input is float because the consumer layer (i.e. the next layer)
requires it as float; therefore it won't be quantized.
Choices supported:
image
default
opaque
--input_dtype INPUT_NAME INPUT_DTYPE
The names and datatype of the network input layers specified in the format
[input_name datatype], for example:
'data' 'float32'
Default is float32 if not specified.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --input_dtype on the command line like:
--input_dtype 'data1' 'float32' --input_dtype 'data2' 'float32'
--input_encoding INPUT_ENCODING [INPUT_ENCODING ...], -e INPUT_ENCODING [INPUT_ENCODING ...]
Usage: --input_encoding "INPUT_NAME" INPUT_ENCODING_IN
[INPUT_ENCODING_OUT]
Input encoding of the network inputs. Default is bgr.
e.g.
--input_encoding "data" rgba
Quotes must wrap the input node name to handle special characters,
spaces, etc. To specify encodings for multiple inputs, invoke
--input_encoding for each one.
e.g.
--input_encoding "data1" rgba --input_encoding "data2" other
Optionally, an output encoding may be specified for an input node by
providing a second encoding. The default output encoding is bgr.
e.g.
--input_encoding "data3" rgba rgb
Input encoding types:
image color encodings: bgr, rgb, nv21, nv12, ...
time_series: for inputs of rnn models;
other: encoding not listed above or unknown.
Supported encodings:
bgr
rgb
rgba
argb32
nv21
nv12
time_series
other
--input_layout INPUT_NAME INPUT_LAYOUT, -l INPUT_NAME INPUT_LAYOUT
Layout of each input tensor. If not specified, it will use the default
based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, NFC, NCF, NTF, TNF, NF, NC, F, NONTRIVIAL
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T = Time
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
NONTRIVIAL for everything else. For multiple inputs specify multiple
--input_layout on the command line.
Eg:
--input_layout "data1" NCHW --input_layout "data2" NCHW
Note: This flag does not set the layout of the input tensor in the converted DLC.
--custom_io CUSTOM_IO
Use this option to specify a yaml file for custom IO.
--preserve_io [PRESERVE_IO [PRESERVE_IO ...]]
Use this option to preserve IO layout and datatype. The different ways of
using this option are as follows:
--preserve_io layout <space separated list of names of inputs and
outputs of the graph>
--preserve_io datatype <space separated list of names of inputs and
outputs of the graph>
In this case, the user should also specify the string 'layout' or 'datatype'
in the command to indicate which property the converter needs to preserve.
e.g.
--preserve_io layout input1 input2 output1
--preserve_io datatype input1 input2 output1
Optionally, the user may choose to preserve the layout and/or datatype for
all the inputs and outputs of the graph.
This can be done in the following two ways:
--preserve_io layout
--preserve_io datatype
Additionally, the user may choose to preserve both layout and datatypes for
all IO tensors by just passing the option as follows:
--preserve_io
Note: Only one of the above usages is allowed at a time.
Note: --custom_io gets higher precedence than --preserve_io.
--dump_relay DUMP_RELAY
Dump Relay ASM and Params at the path provided with the argument
Usage: --dump_relay <path_to_dump>
--disable_batchnorm_folding
--expand_lstm_op_structure
Enables optimization that breaks the LSTM op to equivalent math ops
--keep_disconnected_nodes
Disable Optimization that removes Ops not connected to the main graph.
This optimization uses output names provided over commandline OR
inputs/outputs extracted from the Source model to determine the main graph
-h, --help show this help message and exit
--debug [DEBUG] Run the converter in debug mode.
-o OUTPUT_PATH, --output_path OUTPUT_PATH
Path where the converted output model should be saved. If not specified, the
converted model will be written to a file with the same name as the input model.
--copyright_file COPYRIGHT_FILE
Path to copyright file. If provided, the content of the file will be added
to the output model.
--float_bitwidth FLOAT_BITWIDTH
Use the --float_bitwidth option to select the bitwidth to use when using
float for parameters (weights/bias) and activations, for all ops or for
specific ops selected via encodings, either 32 (default) or 16.
--float_bw FLOAT_BW
Note: --float_bw is deprecated, use --float_bitwidth.
--float_bias_bw FLOAT_BIAS_BW
Use the --float_bias_bw option to select the bitwidth to use for the float
bias tensor
--model_version MODEL_VERSION
User-defined ASCII string to identify the model; only the first 64 bytes will
be stored.
--validation_target RUNTIME_TARGET PROCESSOR_TARGET
A combination of processor and runtime target against which the model will be
validated.
Choices for RUNTIME_TARGET:
{cpu, gpu, dsp}.
Choices for PROCESSOR_TARGET:
{snapdragon_801, snapdragon_820, snapdragon_835}.
If not specified, will validate model against {snapdragon_820,
snapdragon_835} across all runtime targets.
--strict If specified, will validate in strict mode whereby model will not
be produced if it violates constraints of the specified validation target. If
not specified, will validate model in permissive mode against the specified
validation target.
--udo_config_paths CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -udo CUSTOM_OP_CONFIG_PATHS
[CUSTOM_OP_CONFIG_PATHS ...]
Path to the UDO configs (space separated, if multiple)
Quantizer Options:
--quantization_overrides QUANTIZATION_OVERRIDES
Use this option to specify a json file with parameters to use for
quantization. These will override any quantization data carried from
conversion (eg TF fake quantization) or calculated during the normal
quantization process. Format defined as per AIMET specification.
--keep_quant_nodes Use this option to keep activation quantization nodes in the graph rather
than stripping them.
Custom Op Package Options:
--op_package_lib OP_PACKAGE_LIB, -opl OP_PACKAGE_LIB
Use this argument to pass an op package library for quantization. Must be in
the form <op_package_lib_path:interfaceProviderName> and be separated by a
comma for multiple package libs
--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB, -cpl CONVERTER_OP_PACKAGE_LIB
Path to converter op package library compiled by the OpPackage generator.
-p PACKAGE_NAME, --package_name PACKAGE_NAME
A global package name to be used for each node in the Model.cpp file.
Defaults to Qnn header defined package name
--op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -opc CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to a Qnn Op Package XML configuration file that contains user defined
custom operations.
Note: Only one of: {'package_name', 'op_package_config'} can be specified
input_network argument:
The converter supports a single .tflite file.
The converter --input_network option specifies the path to the .tflite file.
This argument is required.
input_dim argument:
Specifies the input dimensions of the graph’s input node(s)
The converter requires a node name along with dimensions as input from which it will create an input layer by using the node output tensor dimensions. When defining a graph, there is typically a placeholder name used as input during training in the graph. The placeholder tensor name is the name you must use as the argument. It is also possible to use other types of nodes as input, however the node used as input will not be used as part of a layer other than the input layer.
Multiple Inputs
Networks with multiple inputs must provide --input_dim INPUT_NAME INPUT_DIM, one for each input node.
This argument is optional.
output_path argument:
Specifies the output DLC file name.
This argument is optional. If not provided the converter will create a DLC file with the same name as the tflite file name, with a .dlc file extension.
qairt-converter
The qairt-converter tool converts a model from one of the Onnx/TensorFlow/TFLite/PyTorch frameworks to a DLC file representing the QNN graph format, which can enable inference on Qualcomm AI IP/HW. The converter auto-detects the framework based on the source model extension. Current ONNX conversion supports up to ONNX Opset 22.
Basic command line usage looks like:
usage: qairt-converter [--source_model_input_shape INPUT_NAME INPUT_DIM]
[--out_tensor_node OUT_NAMES]
[--source_model_input_datatype INPUT_NAME INPUT_DTYPE]
[--source_model_input_layout INPUT_NAME INPUT_LAYOUT]
[--desired_input_layout INPUT_NAME DESIRED_INPUT_LAYOUT]
[--source_model_output_layout OUTPUT_NAME OUTPUT_LAYOUT]
[--desired_output_layout OUTPUT_NAME DESIRED_OUTPUT_LAYOUT]
[--desired_input_color_encoding [ ...]]
[--preserve_io_datatype [PRESERVE_IO_DATATYPE ...]]
[--dump_config_template DUMP_IO_CONFIG_TEMPLATE] [--config IO_CONFIG]
[--dry_run [DRY_RUN]] [--enable_framework_trace] [--remove_unused_inputs]
[--gguf_config GGUF_CONFIG] [--quantizer_log QUANTIZER_LOG]
[--quantizer_log_level {LogLevel.NONE,LogLevel.TRACE,LogLevel.INFO}]
[--quantization_overrides QUANTIZATION_OVERRIDES]
[--lora_weight_list LORA_WEIGHT_LIST]
[--quant_updatable_mode {none,adapter_only,all}] [--onnx_skip_simplification]
[--onnx_override_batch BATCH] [--onnx_define_symbol SYMBOL_NAME VALUE]
[--onnx_validate_models] [--onnx_summary]
[--onnx_perform_sequence_construct_optimizer] [--tf_summary]
[--tf_override_batch BATCH] [--tf_disable_optimization]
[--tf_show_unconsumed_nodes] [--tf_saved_model_tag SAVED_MODEL_TAG]
[--tf_saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY]
[--tf_validate_models] [--tflite_signature_name SIGNATURE_NAME]
[--dump_exported_onnx] --input_network INPUT_NETWORK [--debug [DEBUG]]
[--output_path OUTPUT_PATH] [--copyright_file COPYRIGHT_FILE]
[--float_bitwidth FLOAT_BITWIDTH] [--float_bias_bitwidth FLOAT_BIAS_BITWIDTH]
[--set_model_version MODEL_VERSION] [--export_format EXPORT_FORMAT]
[--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB]
[--package_name PACKAGE_NAME | --op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]]
[--target_backend BACKEND] [--target_soc_model SOC_MODEL] [-h]
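For example, since the framework is auto-detected from the file extension, a minimal invocation for a hypothetical ONNX model might look like:
    qairt-converter -i model.onnx -o model.dlc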
required arguments:
--input_network INPUT_NETWORK, -i INPUT_NETWORK
Path to the source framework model.
optional arguments:
--source_model_input_shape INPUT_NAME INPUT_DIM, -s INPUT_NAME INPUT_DIM
The names and dimensions of all the input buffers to the network, specified in
the format [input_name comma-separated-dimensions],
for example: --source_model_input_shape 'data' 1,224,224,3.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For scalar inputs, use a single dimension `0` to indicate that the input is
a scalar value. This representation is supported for ONNX models only.
For multiple inputs specify multiple --source_model_input_shape on the commandline like:
--source_model_input_shape 'data1' 1,224,224,3 --source_model_input_shape 'data2' 0
NOTE: Required for TensorFlow and PyTorch. Optional for Onnx and Tflite
In case of Onnx, this feature works only with Onnx 1.6.0 and above
--out_tensor_node OUT_NAMES, --out_tensor_name OUT_NAMES
Names of the graph's output tensors. Multiple output names should be
provided separately like:
--out_tensor_name out_1 --out_tensor_name out_2
NOTE: Required for TensorFlow. Optional for Onnx, Tflite and PyTorch
--source_model_input_datatype INPUT_NAME INPUT_DTYPE
The names and datatype of the network input layers specified in the format
[input_name datatype], for example:
'data' 'float32'
Default is float32 if not specified.
Note that the quotes should always be included in order to handle special
characters, spaces, etc.
For multiple inputs specify multiple --source_model_input_datatype on the
command line like:
--source_model_input_datatype 'data1' 'float32'
--source_model_input_datatype 'data2' 'float32'
--source_model_input_layout INPUT_NAME INPUT_LAYOUT
Layout of each input tensor. If not specified, it will use the default based
on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, HWIO, OIHW, NFC, NCF, NTF, TNF, NF, NC, F
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature,
T = Time, I = Input, O = Output
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
HWIO/OIHW used for Weights of Conv Ops
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
For multiple inputs specify multiple --source_model_input_layout on the
command line.
Eg:
--source_model_input_layout "data1" NCHW --source_model_input_layout
"data2" NCHW
--desired_input_layout INPUT_NAME DESIRED_INPUT_LAYOUT
Desired Layout of each input tensor. If not specified, it will use the
default based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, HWIO, OIHW, NFC, NCF, NTF, TNF, NF, NC, F
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature,
T = Time, I = Input, O = Output
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
HWIO/OIHW used for Weights of Conv Ops
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
For multiple inputs specify multiple --desired_input_layout on the command
line.
Eg:
--desired_input_layout "data1" NCHW --desired_input_layout "data2" NCHW
--source_model_output_layout OUTPUT_NAME OUTPUT_LAYOUT
Layout of each output tensor. If not specified, it will use the default
based on the Source Framework, shape of input and input encoding.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, HWIO, OIHW, NFC, NCF, NTF, TNF, NF, NC, F
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T =
Time
NDHWC/NCDHW used for 5d inputs
NHWC/NCHW used for 4d image-like inputs
NFC/NCF used for inputs to Conv1D or other 1D ops
NTF/TNF used for inputs with time steps like the ones used for LSTM op
NF used for 2D inputs, like the inputs to Dense/FullyConnected layers
NC used for 2D inputs with 1 for batch and other for Channels (rarely used)
F used for 1D inputs, e.g. Bias tensor
For multiple inputs specify multiple --source_model_output_layout on the
command line.
Eg:
--source_model_output_layout "data1" NCHW --source_model_output_layout
"data2" NCHW
--desired_output_layout OUTPUT_NAME DESIRED_OUTPUT_LAYOUT
Desired Layout of each output tensor. If not specified, it will use the
default based on the Source Framework.
Accepted values are-
NCDHW, NDHWC, NCHW, NHWC, HWIO, OIHW, NFC, NCF, NTF, TNF, NF, NC, F
N = Batch, C = Channels, D = Depth, H = Height, W = Width, F = Feature, T =
Time
NDHWC/NCDHW used for 5d outputs
NHWC/NCHW used for 4d image-like outputs
NFC/NCF used for outputs to Conv1D or other 1D ops
NTF/TNF used for outputs with time steps like the ones used for LSTM op
NF used for 2D outputs, like the outputs to Dense/FullyConnected layers
NC used for 2D outputs with 1 for batch and other for Channels (rarely used)
F used for 1D outputs, e.g. Bias tensor
For multiple outputs specify multiple --desired_output_layout on the command
line.
Eg:
--desired_output_layout "data1" NCHW --desired_output_layout "data2"
NCHW
--desired_input_color_encoding [ ...], -e [ ...]
Usage: --input_color_encoding "INPUT_NAME" INPUT_ENCODING_IN
[INPUT_ENCODING_OUT]
Input encoding of the network inputs. Default is bgr.
e.g.
--input_color_encoding "data" rgba
Quotes must wrap the input node name to handle special characters,
spaces, etc. To specify encodings for multiple inputs, invoke
--input_color_encoding for each one.
e.g.
--input_color_encoding "data1" rgba --input_color_encoding "data2" other
Optionally, an output encoding may be specified for an input node by
providing a second encoding. The default output encoding is bgr.
e.g.
--input_color_encoding "data3" rgba rgb
Input encoding types:
image color encodings: bgr, rgb, nv21, nv12, ...
time_series: for inputs of rnn models;
other: encoding not listed above or unknown.
Supported encodings:
bgr
rgb
rgba
argb32
nv21
nv12
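For example, a hedged sketch (file and tensor names are hypothetical)
requesting an nv12 color encoding for a single input:
    snpe-onnx-to-dlc -i model.onnx --desired_input_color_encoding "data" nv12 -o model.dlc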
--preserve_io_datatype [PRESERVE_IO_DATATYPE ...]
Use this option to preserve IO datatype. The different ways of using this
option are as follows:
--preserve_io_datatype <space separated list of names of inputs and
outputs of the graph>
e.g.
--preserve_io_datatype input1 input2 output1
Alternatively, passing the option with no arguments preserves the datatype for
all the inputs and outputs of the graph:
--preserve_io_datatype
Note: --config takes precedence over --preserve_io_datatype.
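As an illustrative sketch (file names are hypothetical), preserving the source
datatypes of all graph inputs and outputs:
    snpe-onnx-to-dlc -i model.onnx --preserve_io_datatype -o model.dlc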
--dump_config_template DUMP_IO_CONFIG_TEMPLATE
Dumps the yaml template for I/O configuration to the specified file. The
template can be edited as per custom requirements and then passed back to the
converter using the --config option.
--config IO_CONFIG Use this option to specify a yaml file for input and output options.
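A typical round trip, shown as a hedged sketch (file names are hypothetical):
dump the template, edit it to suit, then pass it back:
    snpe-onnx-to-dlc -i model.onnx --dump_config_template io_config.yaml
    # edit io_config.yaml as needed, then:
    snpe-onnx-to-dlc -i model.onnx --config io_config.yaml -o model.dlc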
--dry_run [DRY_RUN] Evaluates the model without actually converting any ops, and returns
unsupported ops/attributes as well as unused inputs and/or outputs if any.
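e.g., a minimal sketch (model name is hypothetical):
    snpe-onnx-to-dlc -i model.onnx --dry_run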
--enable_framework_trace
Use this option to enable the converter to trace the op/tensor change
information.
Currently framework op trace is supported only for the ONNX converter.
--remove_unused_inputs
Use this option to remove the disconnected graph input nodes after the
conversion
--gguf_config GGUF_CONFIG
This is an optional argument that can be used when the input network is a GGUF
file. It specifies the path to the config file for building the GenAI model
(the config.json file generated when saving the huggingface model).
--quantizer_log QUANTIZER_LOG
Valid for use with v2.0.0 JSON schema for quantization overrides or when
--use_quantize_v2 is provided. Enable logging in the quantizer, logging to
the file <QUANTIZER_LOG>.
E.g., --quantizer_log my_model_name.csv will produce the file
my_model_name.csv. See --quantizer_log_level.
--quantizer_log_level {LogLevel.NONE,LogLevel.TRACE,LogLevel.INFO}
Sets the logging level in the quantizer. See --quantizer_log.
INFO: Emits a file in the CSV format. Requires --quantizer_log
<file_name.csv> to be set. Warnings and errors are emitted to the console.
TRACE: Emits a file in the TXT format. Requires --quantizer_log
<file_name.txt> to be set. Warnings and errors are emitted to the console.
NONE: Default value. No file is emitted. Warnings and errors are emitted to
the console.
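As an illustrative sketch (file names are hypothetical, overrides.json is
assumed to use the v2.0.0 overrides schema, and the level spelling follows the
choices listed above):
    snpe-onnx-to-dlc -i model.onnx -q overrides.json \
        --quantizer_log my_model_name.csv --quantizer_log_level LogLevel.INFO \
        -o model.dlc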
--debug [DEBUG] Run the converter in debug mode.
--output_path OUTPUT_PATH, -o OUTPUT_PATH
Path where the converted output model should be saved. If not specified, the
converted model will be written to a file with the same name as the input
model.
--copyright_file COPYRIGHT_FILE
Path to copyright file. If provided, the content of the file will be added
to the output model.
--float_bitwidth FLOAT_BITWIDTH
Use the --float_bitwidth option to convert the graph to the specified float
bitwidth, either 32 (default) or 16.
--float_bias_bitwidth FLOAT_BIAS_BITWIDTH
Use the --float_bias_bitwidth option to select the bitwidth to use for the
float bias tensor, either 32 or 16 (default '0' if not provided).
--set_model_version MODEL_VERSION
User-defined ASCII string to identify the model; only the first 64 bytes will
be stored.
--export_format EXPORT_FORMAT
DLC_DEFAULT (default)
- Produce a Float graph given a Float Source graph
- Produce a Quant graph given a Source graph with provided Encodings
DLC_STRIP_QUANT
- Produce a Float graph, discarding Quant data
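For example, a hedged sketch (file names are hypothetical) producing a float
graph with quantization data stripped:
    snpe-onnx-to-dlc -i model.onnx --export_format DLC_STRIP_QUANT -o model_float.dlc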
-h, --help show this help message and exit
Custom Op Package Options:
--converter_op_package_lib CONVERTER_OP_PACKAGE_LIB, -cpl CONVERTER_OP_PACKAGE_LIB
Absolute path to converter op package library compiled by the OpPackage
generator. Paths must be comma-separated when multiple package libraries are
given.
Note: The order of converter op package libraries must follow the order of the
XML configs.
Ex1: --converter_op_package_lib absolute_path_to/libExample.so
Ex2: -cpl absolute_path_to/libExample1.so,absolute_path_to/libExample2.so
--package_name PACKAGE_NAME, -p PACKAGE_NAME
A global package name to be used for each node in the Model.cpp file.
Defaults to the package name defined in the Qnn header.
--op_package_config CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...], -opc CUSTOM_OP_CONFIG_PATHS [CUSTOM_OP_CONFIG_PATHS ...]
Path to a Qnn Op Package XML configuration file that contains user-defined
custom operations.
Quantizer Options:
--quantization_overrides QUANTIZATION_OVERRIDES, -q QUANTIZATION_OVERRIDES
Use this option to specify a json file with parameters to use for
quantization. These will override any quantization data carried from
conversion (e.g. TF fake quantization) or calculated during the normal
quantization process. Format defined as per the AIMET specification.
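As an illustration only (tensor names and values are hypothetical, and this is
a minimal sketch of the AIMET-style schema rather than a complete
specification), an overrides file generally maps activation and parameter
tensor names to encoding entries:
    {
      "activation_encodings": {
        "conv1_out": [{"bitwidth": 8, "min": -0.5, "max": 0.5}]
      },
      "param_encodings": {
        "conv1.weight": [{"bitwidth": 8, "min": -1.0, "max": 1.0}]
      }
    }
Such a file would then be passed as --quantization_overrides overrides.json.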
LoRA Converter Options:
--lora_weight_list LORA_WEIGHT_LIST
Path to a file specifying a list of tensor names that should be updateable.
--quant_updatable_mode {none,adapter_only,all}
Specify whether/for which tensors the quantization encodings change across
use-cases. In none mode, no quantization encodings are updatable. In
adapter_only mode, only the quantization encodings for the lora/adapter branch
(Conv->Mul->Conv) change across use-cases; the base branch quantization
encodings remain the same. In all mode, all quantization encodings are
updatable.
Onnx Converter Options:
--onnx_skip_simplification, -oss
Do not attempt to simplify the model automatically. This may prevent some
models from properly converting when sequences of unsupported static
operations are present.
--onnx_override_batch BATCH
The batch dimension override. This will take the first dimension of all
inputs and treat it as a batch dim, overriding it with the value provided
here. For example:
--onnx_override_batch 6
will result in a shape change from [1,3,224,224] to [6,3,224,224].
If there are inputs without a batch dim, this option should not be used;
instead, each input should be overridden independently using the -d option for
input dimension overrides.
--onnx_define_symbol SYMBOL_NAME VALUE
This option allows overriding specific input dimension symbols. For instance,
you might see input shapes specified with variables such as:
data: [1,3,height,width]
To override these simply pass the option as:
--onnx_define_symbol height 224 --onnx_define_symbol width 448
which results in dimensions that look like:
data: [1,3,224,448]
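Put together as a hedged full-command sketch (file name is hypothetical):
    snpe-onnx-to-dlc -i model.onnx --onnx_define_symbol height 224 \
        --onnx_define_symbol width 448 -o model.dlc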
--onnx_validate_models
Validate the original ONNX model against the optimized ONNX model.
Constant inputs with all values set to 1 will be generated and used by both
models, and their outputs are checked against each other.
The average % error and the 90th percentile of the output differences will be
calculated.
Note: Usage of this flag will incur extra time due to inference of the
models.
--onnx_summary Summarize the original onnx model and optimized onnx model.
Summary will print model information such as the number of parameters, the
number of operators and their counts, and input/output tensor names, shapes,
and dtypes.
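For example, a hedged sketch (file name is hypothetical) validating and
summarizing in one invocation:
    snpe-onnx-to-dlc -i model.onnx --onnx_validate_models --onnx_summary -o model.dlc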
--onnx_perform_sequence_construct_optimizer
This option enables an optimization on the SequenceConstruct op.
When a SequenceConstruct op is one of the outputs of the graph, the optimizer
removes the SequenceConstruct op and promotes its inputs to graph outputs,
replacing the original SequenceConstruct output.
TensorFlow Converter Options:
--tf_summary Summarize the original TF model and the optimized TF model.
Summary will print model information such as the number of parameters, the
number of operators and their counts, and input/output tensor names, shapes,
and dtypes.
--tf_override_batch BATCH
The batch dimension override. This will take the first dimension of all
inputs and treat it as a batch dim, overriding it with the value provided
here. For example:
--tf_override_batch 6
will result in a shape change from [1,224,224,3] to [6,224,224,3].
If there are inputs without a batch dim, this option should not be used;
instead, each input should be overridden independently using the -d option for
input dimension overrides.
--tf_disable_optimization
Do not attempt to optimize the model automatically.
--tf_show_unconsumed_nodes
Displays a list of unconsumed nodes, if any are found. Nodes which are
unconsumed do not violate the structural fidelity of the generated graph.
--tf_saved_model_tag SAVED_MODEL_TAG
Specify the tag to select a MetaGraph from the SavedModel. ex:
--tf_saved_model_tag serve. Default value will be 'serve' when it is not
assigned.
--tf_saved_model_signature_key SAVED_MODEL_SIGNATURE_KEY
Specify the signature key to select the input and output of the model. ex:
--tf_saved_model_signature_key serving_default. Default value will be
'serving_default' when it is not assigned.
--tf_validate_models Validate the original TF model against the optimized TF model.
Constant inputs with all values set to 1 will be generated and used by both
models, and their outputs are checked against each other.
The average % error and the 90th percentile of the output differences will be
calculated.
Note: Usage of this flag will incur extra time due to inference of the
models.
Tflite Converter Options:
--tflite_signature_name SIGNATURE_NAME
Use this option to select a specific subgraph signature to convert.
PyTorch Converter Options:
--dump_exported_onnx Dump the ONNX model exported from the input TorchScript model.
Backend Options:
--target_backend BACKEND
Use this option to specify the backend on which the model needs to run.
Providing this option will generate a graph optimized for the given backend
and this graph may not run on other backends. The default backend is HTP.
Supported backends are CPU, GPU, DSP, HTP, HTA, LPAI.
--target_soc_model SOC_MODEL
Use this option to specify the SOC on which the model needs to run.
This can be found in the SOC info of the device and it starts with strings
such as SDM, SM, QCS, IPQ, SA, QC, SC, SXR, SSG, STP, QRB, or AIC.
NOTE: --target_backend option must be provided to use --target_soc_model
option.
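As an illustrative sketch (the file names and SOC model value are hypothetical
and should match the actual target device):
    snpe-onnx-to-dlc -i model.onnx --target_backend HTP --target_soc_model SM8650 -o model.dlc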
Note: Only one of {'package_name', 'op_package_config'} can be specified