CNN to QNN for Linux Host on Windows Target¶
Note
This is Part 2 of the CNN to QNN tutorial for Windows host machines. If you have not completed Part 1, please do so here.
Step 3: Model Build on Windows Host¶
Once the CNN model has been converted into QNN format, the next step is to build it so it can run on the target device’s operating system with qnn-model-lib-generator.
Based on the operating system and architecture of your target device, choose one of the following build instructions.
Warning
For cases where the “host machine” and “target device” are the same (e.g. you want to build and run model inferences on the same Windows on Snapdragon device), you will need to adapt the steps to handle files locally instead of transferring them to a remote device.
Note
Please continue to use the same terminal on your host machine from Part 1.
Create a directory on your host machine where your newly built files will live by running:
mkdir -p /tmp/qnn_tmp
Navigate to the new directory:
cd /tmp/qnn_tmp
Copy the QNN .cpp and .bin model files to /tmp/qnn_tmp/:
cp "$QNN_SDK_ROOT/examples/Models/InceptionV3/model/Inception_v3.cpp" "$QNN_SDK_ROOT/examples/Models/InceptionV3/model/Inception_v3.bin" /tmp/qnn_tmp/
Choose the most relevant supported target architecture from the following list:
- For x86_64 Windows targets: windows-x86_64
- For Arm64 Windows targets: windows-aarch64
- For Snapdragon devices, choose windows-aarch64
On your host machine, set QNN_TARGET_ARCH to your device’s target architecture:
export QNN_TARGET_ARCH="your-target-architecture-from-above"
For example:
export QNN_TARGET_ARCH="windows-x86_64"
Run the following command on your host machine to generate the model library:
python3 "$QNN_SDK_ROOT/bin/x86_64-linux-clang/qnn-model-lib-generator" \ -c "./Inception_v3.cpp" \ -b "./Inception_v3.bin" \ -o "model_libs" \ -t "$QNN_TARGET_ARCH"
- -c: the path to the .cpp QNN model file.
- -b: the path to the .bin QNN model file. (-b is optional, but at runtime the .cpp file can fail if it needs the .bin file, so it is recommended.)
- -o: the path to the output folder.
- -t: the architecture to build for.
Run ls /tmp/qnn_tmp/model_libs/${QNN_TARGET_ARCH} and verify that the output file Inception_v3.dll is inside.
- You will use the Inception_v3.dll file on the target device to execute inferences.
- The output .dll file is located in the model_libs directory, in a subfolder named for the target architecture. For example: model_libs/x64/Inception_v3.dll or model_libs/aarch64/Inception_v3.dll.
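If you are unsure which subfolder the generator used, a quick check from the host can locate the built library (a minimal sketch, assuming the build output location above):
# Locate the built model library regardless of the architecture subfolder name.
find /tmp/qnn_tmp/model_libs -name "Inception_v3.dll"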
Step 4: Use the Built Model on Specific Processors¶
Now that you have an executable version of your model, the next step is to transfer the built model and all necessary files to the target processor, then to run inferences on it.
Install all necessary dependencies from Setup.
Follow the below SSH setup instructions.
Follow the instructions for each specific processor you want to run your model on.
Sub-Step 1: If you haven’t already, ensure that you follow the processor-specific Setup instructions for your host machine here.
Sub-Step 2: Set up SSH on the target device.
Here we use OpenSSH to copy files with scp and to run scripts on the target device via ssh. If that does not work for your target device, feel free to use any other method of transferring the files (e.g. USB or mstsc).
Ensure that both the host device and the target device are on the same network for this setup; otherwise, OpenSSH requires port forwarding to connect.
On the target device, install OpenSSH on Windows:
Open an Admin PowerShell terminal.
Run the following command to install OpenSSH Server:
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Once installed, start the ssh server on your target device by running:
Start-Service sshd
# Optional: The command below causes the OpenSSH server to start on device startup.
Set-Service -Name sshd -StartupType 'Automatic'
You can verify that the ssh server is live by running:
Get-Service -Name sshd
Note
You can turn off the OpenSSH Server service by running Stop-Service sshd on your target device.
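If the host later cannot reach the target on port 22, the Windows firewall may be blocking inbound SSH. A minimal sketch of opening the port in Admin PowerShell, following Microsoft’s standard OpenSSH Server setup (the rule name is illustrative):
# Allow inbound TCP connections on port 22 for the OpenSSH server.
New-NetFirewallRule -Name 'sshd' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22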
On your target device, run ipconfig to get the IP address of your target Windows device.
On your Linux host machine, set a console variable for your target device’s IPv4 address from above (replacing 127.0.0.1 below):
export TARGET_IP="127.0.0.1"
Also set the username you would like to sign in with on your Windows target device (you can find it by looking at the path to a user folder like Documents):
export TARGET_USER="yourusername"
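Before transferring files, you can sanity-check both variables with a quick remote command from the Linux host (assumes the OpenSSH server from this sub-step is running):
# Should prompt for the target user's password, then print the message.
ssh "${TARGET_USER}@${TARGET_IP}" "echo connection OK"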
Sub-Step 3: Follow the steps below for whichever processor you would like to run your model on.
CPU¶
Transferring over all relevant files¶
On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination directory for transferred files.
On the host device, use scp to transfer QnnCpu.dll from your Linux host machine to C:\qnn_test_package on the target Windows device:
scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnCpu.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Use scp to transfer the example built model.
- Update the x64 folder below to the proper folder for your built model. The folder name depends on the architecture you built for.
scp "/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer the input data, input list, labels file, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:
scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:
scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
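To confirm that everything arrived, you can list the package directory remotely from the Linux host (a hedged check; dir runs in the target’s default shell):
ssh "${TARGET_USER}@${TARGET_IP}" "dir C:\qnn_test_package"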
Running inferences on the target device processor¶
Open a PowerShell instance on the target Windows device.
- Alternatively, you can ssh in from your Linux host machine by running the following command.
- These console variables were set in the above instructions for “Transferring over all relevant files”.
ssh "${TARGET_USER}@${TARGET_IP}"
Note
You will have to log in with that username’s credentials on your target device.
Navigate to the directory containing the test files:
cd C:\qnn_test_package
Run the following command on the target device to execute an inference:
.\qnn-net-run.exe `
    --model ".\Inception_v3.dll" `
    --input_list ".\target_raw_list.txt" `
    --backend ".\QnnCpu.dll" `
    --output_dir ".\output"
Run the following script on the target device to view the classification results:
Note
You can alternatively copy the output folder back to your Linux host machine with scp and run the following script there, to avoid having to install Python on your target device.
py -3 ".\show_inceptionv3_classifications.py" `
    -i ".\cropped\raw_list.txt" `
    -o "output" `
    -l ".\imagenet_slim_labels.txt"
Verify that the classification results in output match the following:
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass
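If you prefer the copy-back route from the Note above, a minimal sketch run from the Linux host (assumes the console variables set earlier):
# Pull the inference outputs back to the host for inspection.
scp -r "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package/output" /tmp/qnn_tmp/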
GPU¶
Transferring over all relevant files¶
On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination directory for transferred files.
Use scp to transfer QnnGpu.dll from your Linux host machine to C:\qnn_test_package on the target Windows device:
scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnGpu.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Use scp to transfer the example built model.
- Update the x64 folder below to the proper folder for your built model. The folder name depends on the architecture you built for.
scp "/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer the input data, input list, labels file, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:
scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:
scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Running inferences on the target device processor¶
Open a PowerShell instance on the target Windows device.
- Alternatively, you can ssh in from your Linux host machine by running the following command.
- These console variables were set in the above instructions for “Transferring over all relevant files”.
ssh "${TARGET_USER}@${TARGET_IP}"
Note
You will have to log in with that username’s credentials on your target device.
Navigate to the directory containing the test files:
cd C:\qnn_test_package
Run the following command on the target device to execute an inference:
.\qnn-net-run.exe `
    --model ".\Inception_v3.dll" `
    --input_list ".\target_raw_list.txt" `
    --backend ".\QnnGpu.dll" `
    --output_dir ".\output"
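qnn-net-run should populate the output directory with one result folder per input; a quick hedged check on the target device:
dir .\output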
Run the following script on the target device to view the classification results:
Note
You can alternatively copy the output folder back to your Linux host machine with scp and run the following script there, to avoid having to install Python on your target device.
py -3 ".\show_inceptionv3_classifications.py" `
    -i ".\cropped\raw_list.txt" `
    -o "output" `
    -l ".\imagenet_slim_labels.txt"
Verify that the classification results in output match the following:
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass
DSP¶
Warning
DSP processors require quantized models instead of full precision models. If you do not have a quantized model, please follow Step 2 of the CNN to QNN tutorial to build one.
Transferring over all relevant files¶
On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination directory for transferred files.
Determine your target device’s Snapdragon architecture by looking up your chipset in the Supported Snapdragon Devices table.
Update the “XX” value below and run the commands to set DSP_VERSION and DSP_ARCH to match the version number found in the above table.
- Only the two digits at the end should change, and both variables should carry the same version. Ex. For “V68”, the proper value of DSP_ARCH would be hexagon-v68.
export DSP_VERSION="XX"
export DSP_ARCH="hexagon-v${DSP_VERSION}"
Use scp to transfer QnnDsp.dll as well as other necessary executables from your Linux host machine to C:\qnn_test_package on the target Windows device:
scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnDsp.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnDspV${DSP_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Check the Backend table to see if there are any other processor-specific executables needed for your target processor (DSP) and your target device’s architecture ($QNN_TARGET_ARCH).
- Use similar scp syntax to transfer any additional .dll files listed below your selected target architecture in this table. (There may be none!)
Warning
Ensure you scp the hexagon-v## files (in addition to the other architecture files!)
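For example, the DSP backend typically also needs the Skel library matching your Hexagon version; a minimal sketch, assuming the SDK ships it under the unsigned folder for your DSP_ARCH:
scp "$QNN_SDK_ROOT/lib/${DSP_ARCH}/unsigned/libQnnDspV${DSP_VERSION}Skel.so" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"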
Use scp to transfer the example built model.
- Update the x64 folder below to the proper folder for your built model. The folder name depends on the architecture you built for.
scp "/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer the input data, input list, labels file, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:
scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:
scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Running inferences on the target device processor¶
Open a PowerShell instance on the target Windows device.
- Alternatively, you can ssh in from your Linux host machine by running the following command.
- These console variables were set in the above instructions for “Transferring over all relevant files”.
ssh "${TARGET_USER}@${TARGET_IP}"
Note
You will have to log in with that username’s credentials on your target device.
Navigate to the directory containing the test files:
cd C:\qnn_test_package
Run the following command on the target device to execute an inference:
.\qnn-net-run.exe `
    --model ".\Inception_v3.dll" `
    --input_list ".\target_raw_list.txt" `
    --backend ".\QnnDsp.dll" `
    --output_dir ".\output"
Run the following script on the target device to view the classification results:
Note
You can alternatively copy the output folder back to your Linux host machine with scp and run the following script there, to avoid having to install Python on your target device.
py -3 ".\show_inceptionv3_classifications.py" `
    -i ".\cropped\raw_list.txt" `
    -o "output" `
    -l ".\imagenet_slim_labels.txt"
Verify that the classification results in output match the following:
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass
HTP¶
Warning
HTP processors require quantized models instead of floating point models. If you do not have a quantized model, please follow Step 2 of the CNN to QNN tutorial to build one.
Additional HTP Required Setup¶
Running the model on a target device’s HTP requires the generation of a serialized context.
On the Linux Host:
Navigate to the directory where you built the model in the previous steps:
cd /tmp/qnn_tmp
Users can set custom options and different performance modes for the HTP backend through a backend config. Please refer to QNN HTP Backend Extensions for the various options available in the config.
Refer to the example below for creating a backend config file for the QCS6490/QCM6490 target with mandatory options passed in:
Update the following information based on your device’s htp_arch:
{
    "graphs": [
        {
            "graph_names": ["Inception_v3"],
            "vtcm_mb": 2
        }
    ],
    "devices": [
        {
            "htp_arch": "v68"
        }
    ]
}
The backend extensions config, with the minimum required parameters, is a second JSON file that points at the shared extensions library (.dll) and at the backend config file you wrote above:
{
    "backend_extensions": {
        "shared_library_path": "path_to_shared_library",
        "config_file_path": "path_to_config_file"
    }
}
To generate the context, update <path to JSON of backend extensions> below with the config you wrote above, then run the command:
"$QNN_SDK_ROOT/bin/${QNN_TARGET_ARCH}/qnn-context-binary-generator" \
    --backend "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnHtp.dll" \
    --model "${QNN_SDK_ROOT}/examples/Models/InceptionV3/model_libs/${QNN_TARGET_ARCH}/Inception_v3.dll" \
    --binary_file "Inception_v3.serialized" \
    --config_file <path to JSON of backend extensions>
This creates the serialized context under your current working directory at:
- /tmp/qnn_tmp/output/Inception_v3.serialized.bin
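A quick hedged check from the host that the context binary exists (assumes the default output location above):
ls -lh /tmp/qnn_tmp/output/Inception_v3.serialized.bin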
Transferring over all relevant files¶
On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination directory for transferred files.
Determine your target device’s Snapdragon architecture by looking up your chipset in the Supported Snapdragon Devices table.
Update the “XX” value below and run the commands to set HTP_VERSION to match the version number found in the above table.
- Only the two digits at the end should change, and both variables should carry the same version. Ex. For “V68” in the table, the proper value for HTP_VERSION would be 68 and HTP_ARCH would be hexagon-v68. (You can use 68 as the default here to try it out.)
export HTP_VERSION="XX"
export HTP_ARCH="hexagon-v${HTP_VERSION}"
Use scp to transfer QnnHtp.dll and the other required HTP libraries from your Linux host machine to C:\qnn_test_package on the target Windows device:
scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtp.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtpPrepare.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtpV${HTP_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "$QNN_SDK_ROOT/lib/${HTP_ARCH}/unsigned/*" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Check the Backend table to see if there are any other processor-specific executables needed for your target processor (HTP) and your target device’s architecture ($QNN_TARGET_ARCH).
- Use similar scp syntax to transfer any additional .dll files listed below your selected target architecture in this table. (Usually the above install covers them all!)
Use scp to transfer the example built model.
- Update the x64 folder below to the proper folder for your built model. The folder name depends on the architecture you built for.
scp "/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer the input data, input list, labels file, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:
scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:
scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
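The serialized context generated in the previous step must also be on the target device before running inferences. A minimal transfer sketch, assuming the output path from the context-generation step:
scp "/tmp/qnn_tmp/output/Inception_v3.serialized.bin" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"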
Running inferences on the target device processor¶
Open a PowerShell instance on the target Windows device.
- Alternatively, you can ssh in from your Linux host machine by running the following command.
- These console variables were set in the above instructions for “Transferring over all relevant files”.
ssh "${TARGET_USER}@${TARGET_IP}"
Note
You will have to log in with that username’s credentials on your target device.
Navigate to the directory containing the test files:
cd C:\qnn_test_package
Update the environment on the device by running (PowerShell syntax, since this runs on the Windows target):
$env:LD_LIBRARY_PATH = "C:/qnn_test_package"
$env:ADSP_LIBRARY_PATH = "C:/qnn_test_package"
Run the following command on the target device to execute an inference:
.\qnn-net-run.exe `
    --retrieve_context ".\Inception_v3.serialized.bin" `
    --input_list ".\target_raw_list.txt" `
    --backend ".\QnnHtp.dll" `
    --output_dir ".\output"
Run the following script on the target device to view the classification results:
Note
You can alternatively copy the output folder back to your Linux host machine with scp and run the following script there, to avoid having to install Python on your target device.
py -3 ".\show_inceptionv3_classifications.py" `
    -i ".\cropped\raw_list.txt" `
    -o "output" `
    -l ".\imagenet_slim_labels.txt"
Verify that the classification results in output match the following:
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup
- ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass
LPAI¶
Warning
LPAI is not yet supported on the Windows target platform.