SNPE Tutorial for Windows Target Device from Linux Host¶
Note
This is the second section of the Building and Executing a Model tutorial. If you have not completed that, please do so first by clicking the above link.
Note
Please use the same terminal on your host machine as you did in the previous section, as it contains environment variables we use throughout these steps.
Part 6: Transfer Files to Your Target Device¶
Now that you have a .dlc version of your model, the next step is to transfer the built model and all necessary files to the target device, then run inferences on it.
Install all necessary dependencies from Setup.
Follow the below SSH setup instructions.
Follow the instructions for each specific processor you want to run your model on.
Warning
For cases where the “host machine” and “target device” are the same (ex. you want to build and run model inferences on your Snapdragon for Windows device), you can skip the SSH instructions and instead adapt the steps to handle the files locally.
Sub-Step 1: If you haven’t already, ensure that you follow the processor-specific Setup instructions¶
Sub-Step 2: Set up SSH on the target device¶
Warning
Here we use OpenSSH to copy files with scp later on and run scripts on the target device via ssh. If that does not work for your target device, feel free to use any other method of transferring the files over (ex: USB or mstsc).
If you are using the same device as both your host machine and target device, you can use cp to copy files and skip this whole section on ssh.
Ensure that both the host device and the target device are on the same network for this setup.
Otherwise, OpenSSH requires port forwarding to connect.
On your target device, install OpenSSH.
Open an Admin PowerShell session.
Run the following command to install OpenSSH Server:
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Once installed, start the ssh server on your target device by running:
Start-Service sshd
You can verify that the ssh server is live by running:
Get-Service -Name sshd
Note
You can turn off the OpenSSH Server service by running Stop-Service sshd on your target device.
(Optional) Set the ssh server to start on device startup (so even if your device restarts, you can connect to it):
Set-Service -Name sshd -StartupType 'Automatic'
On your target device, get its IP address by running:
ipconfig
On your host machine, set a console variable for your target device's ipv4 address from above (replacing 127.0.0.1 below):
export TARGET_IP="127.0.0.1"
Also, set the username you would like to sign into on your target device (you can find it by looking at the path to a user folder like Documents):
export TARGET_USER="yourusername"
Sub-Step 3: Transferring Relevant Files¶
There are many files we need to transfer over. Here is a brief summary of each file and what it does. Afterwards, we will provide commands for transferring each file.
Files that need to be on the target device:
- Our model file (ex. inception_v3_model.dlc) - The built .dlc file containing our model.
- Input data (ex. notice_sign.raw) - Each file here will be used with snpe-net-run to do inferences using our model. The paths to these files will be specified by the input_list.txt.
- input_list.txt - A list of paths to the input data above, one path per line.
- SNPE.dll - This contains the primary backend logic to interpret the .dlc file on your target device.
- snpe-net-run.exe - This example application pulls together your .dlc file, input data, and the SNPE backend to run inferences using your model.
  - For practical applications, you will need to implement your own application using the SNPE API, as snpe-net-run is just for testing purposes (it is relatively slow, and not tailored to your use case). See this tutorial for more details on how to build an application that uses your model on the target device.
  - For development purposes, you will need SNPE.lib to help with linking; see building your application for Windows for more details.
- msvcp140.dll and vcruntime140.dll - These libraries are usually provided on Windows devices, so we just have to make sure they are accessible on the target device. They are provided by Visual Studio to give full runtime support to snpe-net-run.exe.
- Additional runtimes based on your use case. (Ex. SnpeHtpV73Stub.dll)
- (HTP Only) The cached model file (ex. inception_v3_model_cached.dlc) - This is generated at the end of Part 5 via snpe-dlc-graph-prepare in order to speed up inferences on HTP devices.
These files work together to allow your model to run on the target device (producing output data for each input file).
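For reference, here is what an input list like target_raw_list.txt might look like for the example inputs used later in this tutorial (one path per line, relative to the folder where you run snpe-net-run; your paths may differ):

```
cropped/chairs.raw
cropped/notice_sign.raw
cropped/plastic_cup.raw
cropped/trash_bin.raw
```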
Steps to Transfer Files¶
Warning
Throughout these steps, we will be switching between the host machine (where you have SNPE installed) and the target device (where your model will be run). Pay attention to the bolded directions indicating which terminal to run commands in!
Decide what folder you want to use for your destination folder on the target device.
For Windows target devices, we recommend you use C:\ProgramData\SNPE\Temp\snpeexample.
On the target device, open a terminal or connect over ssh.
Set a variable for your DESTINATION on the target device by running:
$env:DESTINATION = "C:\ProgramData\SNPE\Temp\snpeexample"
Write-Output "DESTINATION is set to $env:DESTINATION"
Make the destination folder(s) on the target device for transferred files by running the following:
New-Item -ItemType Directory -Path $env:DESTINATION -Force
On the host machine, set an environment variable for the destination folder you chose earlier.
export DESTINATION="C:\ProgramData\SNPE\Temp\snpeexample"
Warning
Ensure that the directory you created on your target device matches the DESTINATION you set on your host machine!
Run the following on your target device to see which architecture and OS you have:
$env:PROCESSOR_ARCHITECTURE
Get-CimInstance Win32_OperatingSystem
You should see an output like:
AMD64

SystemDirectory     Organization BuildNumber RegisteredUser     SerialNumber            Version
---------------     ------------ ----------- --------------     ------------            -------
C:\Windows\system32              12345       username@gmail.com 12321-12321-12321-AABBC 10.0.22631
Based on your target device's architecture and OS, choose the proper folder:

Operating System      | Architecture         | Folder Name
----------------------|----------------------|---------------------
Windows               | x86_64 (aka "AMD64") | x86_64-windows-msvc
SnapDragon on Windows | arm64x               | arm64x-windows-msvc
Windows               | ARM64                | aarch64-windows-msvc
Note
Pay special attention to the architecture. SNPE’s architecture has an “x” on the end because it supports cross-architecture apps that run natively on ARM64 and use emulation to load x64 dependencies.
On your host machine, run the following command with the corresponding folder from above to set your TARGET_DEVICE_ARCH, for example:
export TARGET_DEVICE_ARCH="x86_64-windows-msvc"
Note
This is similar to what we did earlier when setting HOST_MACHINE_ARCH for our host machine's details. TARGET_DEVICE_ARCH helps ensure we're moving executables and libraries that are built to work with the target device's architecture / OS / tool stack.
Use scp to transfer the example built model from your host machine to your target device:
scp "${SNPE_ROOT}/examples/Models/InceptionV3/data/inception_v3_model.dlc" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Note
You will need to sign in when using SSH for each scp request.
Transfer the input data and scripts from the SNPE examples folder into the DESTINATION folder on the target device using scp:
scp -r "${SNPE_ROOT}/examples/Models/InceptionV3/data/cropped" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
scp "${SNPE_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
scp "${SNPE_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Transfer the input list, which contains the paths of all files that should be run through the model for our test:
scp "${SNPE_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Note
The target_raw_list.txt is generated for our example model via the initialization script in $SNPE_ROOT/examples/Models/InceptionV3/.
Transfer the primary runtime SNPE.dll:
scp "$SNPE_ROOT/lib/${TARGET_DEVICE_ARCH}/SNPE.dll" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Transfer snpe-net-run.exe to the target device:
scp "$SNPE_ROOT/bin/${TARGET_DEVICE_ARCH}/snpe-net-run.exe" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Transfer the example interpreter script, which is used to turn the direct outputs of snpe-net-run into an easy-to-read terminal output. For your own applications, you may want to write your own interpreter:
scp "${SNPE_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
Transfer any additional dependencies based on your use case. It's worth looking at the .dll files within these folders to see if they are relevant for your application. They are not needed for the Inception_v3 model.
To see all libs which are for SNPE or shared between QNN and SNPE (excluding QNN-specific files), run:
ls ${SNPE_ROOT}/lib/${TARGET_DEVICE_ARCH}/ | grep -v Qnn
If any are relevant, write an scp command to transfer the file over to "${TARGET_USER}@${TARGET_IP}:${DESTINATION}".
On your target device, run the following command to find the paths to msvcp140.dll and vcruntime140.dll:
# NOTE:
# All current TARGET_DEVICE_ARCH values (x86_64-windows-msvc, arm64x-windows-msvc, aarch64-windows-msvc)
# target 64-bit architectures. These DLLs are located in C:\Windows\System32.
# If you ever target a 32-bit architecture (like x86-windows-msvc),
# you should instead search in: C:\Windows\SysWOW64
$dllSearchDir = "C:\Windows\System32"

# Construct full paths to the DLLs
$msvcpPath = Join-Path $dllSearchDir "msvcp140.dll"
$vcruntimePath = Join-Path $dllSearchDir "vcruntime140.dll"

# Check and set msvcp140.dll
if (Test-Path $msvcpPath) {
    $env:PATH_TO_MSVCP = $msvcpPath
    Write-Output "Set PATH_TO_MSVCP to '$msvcpPath'"
} else {
    Write-Output "Failed to set PATH_TO_MSVCP because we were unable to find msvcp140.dll. Please manually set `$env:PATH_TO_MSVCP."
}

# Check and set vcruntime140.dll
if (Test-Path $vcruntimePath) {
    $env:PATH_TO_VCRUNTIME = $vcruntimePath
    Write-Output "Set PATH_TO_VCRUNTIME to '$vcruntimePath'"
} else {
    Write-Output "Failed to set PATH_TO_VCRUNTIME because we were unable to find vcruntime140.dll. Please manually set `$env:PATH_TO_VCRUNTIME."
}
Warning
If the above command fails to set the paths, you will need to manually look for msvcp140.dll and vcruntime140.dll. They are installed by Visual Studio. Try searching in C:\Windows\System32, C:\Windows\SysWOW64, and the build files for executables you want to run on the target device. Otherwise, you may need to search online for how to get those .dll files.
Run the following to add the folders containing msvcp140.dll and vcruntime140.dll to your PATH:
[Environment]::SetEnvironmentVariable("PATH", [Environment]::GetEnvironmentVariable("PATH", "User") + ";" + (Split-Path $env:PATH_TO_MSVCP) + ";" + (Split-Path $env:PATH_TO_VCRUNTIME), "User")
Additional files to transfer for DSP / HTP / AIP¶
Warning
This is only required if you plan to use a DSP, HTP, or AIP processor. Otherwise skip to Part 7 below.
Determine your target device’s SnapDragon architecture by looking your chipset up in the chipset table and finding the “DSP Hexagon Arch”.
Ex. "SD 8 Gen 3 (SM8650)" → V75
On your host machine, update the "X" values below and run the commands to set HEXAGON_ARCH to match the version number found in the above table:
export HEXAGON_VERSION="XX"
export HEXAGON_ARCH="hexagon-v${HEXAGON_VERSION}"
(DSP Only) Use scp to transfer DSP-specific runtimes as well as other necessary executables from your host machine to the DESTINATION folder on the target Windows device:
scp "$SNPE_ROOT/lib/${TARGET_DEVICE_ARCH}/SnpeDspV${HEXAGON_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
(HTP Only) If you are planning on using an HTP backend, copy over the HtpPrepare library and the HTP stub:
scp "$SNPE_ROOT/lib/${TARGET_DEVICE_ARCH}/SnpeHtpPrepare.dll" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
scp "$SNPE_ROOT/lib/${TARGET_DEVICE_ARCH}/SnpeHtpV${HEXAGON_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:${DESTINATION}"
See if any of the other files within the ${HEXAGON_ARCH} folder are relevant (this command ignores files which are QNN-specific):
ls ${SNPE_ROOT}/lib/${HEXAGON_ARCH}/unsigned/ | grep -v Qnn
If there are any relevant files, write an scp command similar to the above to transfer them over to "${TARGET_USER}@${TARGET_IP}:${DESTINATION}".
Part 7: Executing Your Model With snpe-net-run¶
At this point, we have moved over all the necessary files to use our model to execute inferences on the target device and verify the outcome.
Setting Environment Variables¶
Open a terminal on your target device.
Note
You can alternatively connect to your target device using ssh by running:
ssh "${TARGET_USER}@${TARGET_IP}"
On your target device, update the PATH to point to where the executable (bin) files are located:
$env:PATH = "$env:PATH;$env:DESTINATION"
Write-Output "Set PATH to '$($env:PATH)'"
(DSP Only) Set the ADSP_LIBRARY_PATH to point to where libraries (lib) needed by the DSP are located by running:
$env:ADSP_LIBRARY_PATH = "$($env:DESTINATION)"
Write-Output "Set ADSP_LIBRARY_PATH to '$($env:ADSP_LIBRARY_PATH)'"
ADSP_LIBRARY_PATH indicates which folders the DSP on the target device should load libraries from, and operates similarly to LD_LIBRARY_PATH (although it is delimited by semicolons instead of colons).
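To make the delimiter difference concrete, here is a small Python sketch (the folder paths are placeholders) showing how the same kind of folder list would be joined for each variable:

```python
# ADSP_LIBRARY_PATH (Windows) is semicolon-delimited,
# while LD_LIBRARY_PATH (Linux) is colon-delimited.
windows_folders = [r"C:\ProgramData\SNPE\Temp\snpeexample", r"C:\Windows\System32"]
adsp_library_path = ";".join(windows_folders)

linux_folders = ["/opt/snpe/lib", "/usr/lib"]
ld_library_path = ":".join(linux_folders)

print(adsp_library_path)  # C:\ProgramData\SNPE\Temp\snpeexample;C:\Windows\System32
print(ld_library_path)    # /opt/snpe/lib:/usr/lib
```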
Executing¶
From the target device, navigate to the directory containing the test files:
cd "$env:DESTINATION"
Run the following to confirm that the files have been transferred and that you are in the proper folder:
ls
You should see the files you transferred, like so:
Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----         5/5/2025   4:49 PM                cropped
-a----         5/5/2025   4:49 PM          10479 imagenet_slim_labels.txt
-a----         5/5/2025   4:49 PM       24312743 inception_v3_model.dlc
-a----         5/5/2025   4:51 PM       48592238 inception_v3_model_cached.dlc
-a----         5/5/2025   4:50 PM         575056 msvcp140.dll
-a----         5/5/2025   4:50 PM           3381 show_inceptionv3_classifications.py
-a----         5/5/2025   4:50 PM         786944 snpe-net-run.exe
-a----         5/5/2025   4:50 PM       11036160 SNPE.dll
-a----         5/5/2025   4:49 PM             91 target_raw_list.txt
-a----         5/5/2025   4:51 PM         119888 vcruntime140.dll
Run the following command on the target device to execute an inference:
.\snpe-net-run.exe `
    --container ".\inception_v3_model.dlc" `
    --input_list ".\target_raw_list.txt" `
    --output_dir ".\output"
Note
When calling your application (like snpe-net-run.exe) you can decide which processors to use dynamically. In this case, if you wanted to specify a backend, you could pass in --use_cpu, --use_gpu, --use_dsp, or --use_aip, as long as you transferred the appropriate backends to this device as part of Part 6: Transfer Files. See the reference docs for more details!
Verify that you see an output like this:
-------------------------------------------------------------------------------
Model String: N/A
SNPE v2.33.0.250327124043_117917
-------------------------------------------------------------------------------
Processing graph : inception_v3_model
Processing DNN input(s): cropped\chairs.raw
Processing DNN input(s): cropped\notice_sign.raw
Processing DNN input(s): cropped\plastic_cup.raw
Processing DNN input(s): cropped\trash_bin.raw
Successfully executed graph inception_v3_model
On the target device, interpret the results by running:
pip install numpy
python3 show_inceptionv3_classifications.py -i target_raw_list.txt -o output -l imagenet_slim_labels.txt
Note
If you don't want to install Python on your target device, you can also scp the output folder back to your host machine. If you want to do that:
On your host machine, run this command to retrieve the output folder:
scp -r "${TARGET_USER}@${TARGET_IP}:${DESTINATION}/output" .
Then run the above commands on your host machine instead of your target device. Replace python3 with python if you are in a virtual environment (venv).
Warning
If show_inceptionv3_classifications.py does not work because the script expects a different folder structure, you can also try running this script (which also depends on Python), but handles the proper output structure:
Write-Host ""
Write-Host "Classification results"
$idx = 0
Get-Content ".\target_raw_list.txt" | ForEach-Object {
    $inputFile = $_.Trim()
    $rawFile = ".\output\Result_$idx\InceptionV3\Predictions\Reshape_1_0.raw"
    Write-Host "Checking: $rawFile"
    if (-Not (Test-Path $rawFile)) {
        "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, 0.0, 0, "missing_file"
    } else {
        try {
            $pythonOutput = python -c "import numpy as np; a = np.fromfile(r'$rawFile', dtype=np.float32); print(f'{np.max(a)} {np.argmax(a)}')" 2>$null
        } catch {
            $pythonOutput = ""
        }
        if ([string]::IsNullOrWhiteSpace($pythonOutput)) {
            "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, 0.0, 0, "parse_error"
        } else {
            $split = $pythonOutput.Split()
            $maxVal = [float]$split[0]
            $maxIdx = [int]$split[1]
            $label = Get-Content ".\imagenet_slim_labels.txt" | Select-Object -Index $maxIdx
            "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, $maxVal, $maxIdx, $label
        }
    }
    $idx++
}
You should see an output that looks like this:
Classification results
cropped/notice_sign.raw 0.130224 459 brass
cropped/trash_bin.raw   0.719755 413 ashcan
cropped/plastic_cup.raw 0.989595 648 measuring cup
cropped/chairs.raw      0.380808 832 studio couch
Note
If you are using a different model, you will likely want to create your own interpretation script similar to the above to turn the raw output tensors into a human-readable output.
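As a starting point for such a script, here is a minimal, stdlib-only sketch (top1 is a hypothetical helper, not part of the SDK). It assumes the model writes a single flat float32 tensor per input, as the InceptionV3 example does, and that line N of the labels file corresponds to output index N:

```python
import array

def top1(raw_path, labels_path):
    """Return (score, index, label) for the highest-scoring class in a
    raw float32 output tensor written by snpe-net-run."""
    scores = array.array("f")  # float32, matching the .raw byte layout
    with open(raw_path, "rb") as f:
        scores.frombytes(f.read())
    best = max(range(len(scores)), key=scores.__getitem__)
    with open(labels_path) as f:
        labels = [line.strip() for line in f]
    return scores[best], best, labels[best]

# Example usage (paths follow the output layout shown earlier; the Result_N
# folder number matches the input's line number in the input list):
#   score, idx, label = top1(
#       "output/Result_0/InceptionV3/Predictions/Reshape_1_0.raw",
#       "imagenet_slim_labels.txt",
#   )
```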
In Summary¶
You have installed SNPE and its dependencies, built your model into a .dlc, transferred it onto the target device, and used snpe-net-run to execute inferences using the processors you chose!
When applying this to your own model, be sure to consider the key variables which may change how you use the SNPE tools along the way:
What model do you want to use?
How will you download it?
How will you get the input data?
How will you format the input data to feed it into your model?
Which framework is the model using? (Ex. ONNX, PyTorch, TensorFlow, etc.)
What is the OS and architecture of your host machine?
What is the OS and architecture of your target device?
Which processor(s) do you want to use for your AI models?
With those answers, you can adapt this tutorial to work with a model of your choice on your host machine, with any supported target device.
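For example, on the input-formatting question above: snpe-net-run reads each .raw input as the flattened float32 bytes of the model's input tensor, with no header. A minimal sketch of producing such a file (write_raw is a hypothetical helper; the 299x299x3 shape is what this tutorial's InceptionV3 example uses, and your model's shape will differ):

```python
import array

def write_raw(values, path):
    """Write a flattened float32 tensor as a headerless .raw file,
    the format snpe-net-run expects for each path in input_list.txt."""
    with open(path, "wb") as f:
        array.array("f", values).tofile(f)

# A (blank) 299x299x3 float32 input tensor for the InceptionV3 example:
write_raw([0.0] * (299 * 299 * 3), "blank_input.raw")
```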
From here, the most common next steps are to:
Use this guide to build and execute your own model instead of the example model.
Create an application which uses the model on the target device (replacing snpe-net-run). See this tutorial for more details (the SNPE API supports C++, C, or Java).
Optimize your model using tools like snpe-bench: snpe-bench documentation.
If you have any questions, consider reaching out on Qualcomm’s Developer Discord here.