Tutorial - Use IR Backend by Using the Qualcomm® AI Engine Direct Delegate¶
The Qualcomm® AI Engine Direct Delegate provides an IR backend that lets users generate a DLC from a model. This tutorial demonstrates how to use the IR backend with the Qualcomm® AI Engine Direct Delegate, walking through how to use qtld-net-run to generate a DLC at a specified path.
Prerequisites¶
The following prerequisites must be met before starting this tutorial:
Finish the tutorial qtld-net-run
How to Generate DLC by Using qtld-net-run¶
Specify the backend and ir_dlc_path options:
$ adb shell 'export LD_LIBRARY_PATH=/data/local/tmp/qnn_delegate/:$LD_LIBRARY_PATH &&
export ADSP_LIBRARY_PATH="/data/local/tmp/qnn_delegate/" &&
cd /data/local/tmp/qnn_delegate/inception_v3_quant/ &&
/data/local/tmp/qnn_delegate/qtld-net-run \
--model inception_v3_quant.tflite \
--input target_raw_list.txt \
--output output \
--backend ir \
--ir_dlc_path /data/local/tmp/qnn_delegate/inception_v3_quant.dlc'
The output should look similar to the following:
TFLite model: [inception_v3_quant.tflite]
Input list file: [target_raw_list.txt]
Total number of inferences: [1]
Using QNN Backend: [ir]
IR DLC Path: [/data/local/tmp/qnn_delegate/inception_v3_quant.dlc]
Loaded model successfully.
=== Pre-invoke Interpreter State ===
Line 945: Allocated 1 input tensor(s)
Line 955: Allocated 1 output tensor(s)
=== Invoking Interpreter ===
The DLC should now exist at the specified path.
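To confirm the result, you can check that the file was written on the device and copy it back to the host. This is a sketch using standard adb commands; the paths match those used in the command above:

```shell
# Verify the DLC was written on the device
adb shell ls -l /data/local/tmp/qnn_delegate/inception_v3_quant.dlc

# Copy the DLC to the host for inspection or later use
adb pull /data/local/tmp/qnn_delegate/inception_v3_quant.dlc .
```

These commands require a connected device; if `adb pull` succeeds, the DLC is available in the current host directory.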