SNPE Tutorial for Windows Target Device from Windows Host

Note

This is the second section of this tutorial. If you have not completed the first section (steps 1-5), please do so here.

Note

Please use the same PowerShell session on your host machine as in the previous section, as it contains environment variables we use throughout these steps.

Part 6: Transfer Files to Your Target Device

Now that you have a .dlc version of your model, the next step is to transfer the built model and all necessary files to the target processor, then to run inferences on it.

  1. Install all necessary dependencies from Setup.

  2. Follow the below SSH setup instructions.

  3. Follow the instructions for each specific processor you want to run your model on.

Warning

For cases where the “host machine” and “target device” are the same (ex. you want to build and run model inferences on your Snapdragon for Windows device), you can skip the SSH instructions and instead adapt the steps to handle the files locally.

Sub-Step 1: If you haven’t already, ensure that you follow the processor-specific Setup instructions.

Sub-Step 2: Set up SSH on the target device.

Warning

Here we use OpenSSH to copy files with scp later on and run scripts on the target device via ssh. If that does not work for your target device, feel free to use any other method of transferring the files over (ex: USB or mstsc).

  1. Ensure that both the host device and the target device are on the same network for this setup.

    • Otherwise, OpenSSH requires port-forwarding to connect.

  2. On your target device, install OpenSSH by following these steps:

    1. Open an Admin PowerShell session.

    2. Run the following command to install OpenSSH Server:

      Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
      
    3. Verify that OpenSSH Server was installed by running:

      Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Server*'
      
  3. Once installed, start the ssh server on your target device by running:

    Start-Service sshd
    
  4. You can verify that the ssh server is live by running:

    Get-Service -Name sshd
    

    Note

    You can turn off the OpenSSH Server service by running Stop-Service sshd on your target device.

  5. (Optional) Set the ssh server to start on device startup (so even if your device restarts, you can connect to it):

    Set-Service -Name sshd -StartupType 'Automatic'
    
  6. On your target device, get its IP address by running:

    ipconfig
    
  7. On your host machine, set an environment variable for your target device’s IPv4 address from above (replacing 127.0.0.1 below).

    $env:TARGET_IP = "127.0.0.1"
    Write-Output "TARGET_IP is set to $env:TARGET_IP"
    
  8. Also, set the username you will use to sign in to your target device. This is the username that appears near the beginning of file paths (ex. UserName in C:\Users\UserName\Documents on your target device):

    $env:TARGET_USER = "ReplaceThisUserName"
    Write-Output "TARGET_USER is set to $env:TARGET_USER"
    

Sub-Step 3: Transferring Relevant Files

To run your AI model on another device (your target device), you need to copy over some important files. Here’s a simple breakdown of what each file is for and why it’s needed:

  1. inception_v3_model.dlc - This is the built model file (.dlc) that will be used for inferencing.

  2. Images like notice_sign.raw - These are the input files that our model will interpret on the target device. For this example, we will move over the whole cropped folder of prepared images.

  3. input_list.txt - A list of relative paths to input data above, one path per line from $DESTINATION (where we will run snpe-net-run.exe from) to the individual input files.

  4. SNPE.dll - This contains the primary backend logic to interpret the .dlc file on your target device.

  5. snpe-net-run.exe - This example application pulls together your .dlc file, input data, and the SNPE backend to run inferences using your model.

    1. For practical applications, you will need to implement your own application using the SNPE API, as snpe-net-run is just for testing purposes (it is relatively slow, and not tailored to your use case). See this tutorial for more details on how to build an application that uses your model on the target device.

    2. For developing purposes, you will need SNPE.lib to help link things, see building your application for Windows for more details.

  6. msvcp140.dll and vcruntime140.dll - These Visual C++ runtime libraries (distributed with Visual Studio) provide runtime support to snpe-net-run.exe.

  7. Additional runtimes based on your use case. (Ex. SnpeHtpV73Stub.dll)

    1. (HTP Only) The cached model file (ex. inception_v3_model_cached.dlc) - This is generated at the end of Part 5 via snpe-dlc-graph-prepare in order to speed up inferences on HTP devices.

These files work together to allow your model to run on the target device (producing output data for each input file).
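To make the relationship between the input list and the input files concrete, here is a short Python sketch (a hypothetical helper, not part of the SNPE SDK) that builds input_list.txt-style contents from a set of prepared .raw files, using paths relative to the folder you will run snpe-net-run.exe from:

```python
# Hypothetical helper: build the contents of an input list file from a set
# of prepared .raw input files. Each line is a path relative to the folder
# snpe-net-run.exe runs from (e.g. "cropped/notice_sign.raw").
def make_input_list(raw_files, subfolder="cropped"):
    lines = [f"{subfolder}/{name}" for name in sorted(raw_files)]
    return "\n".join(lines) + "\n"
```

For the example images in this tutorial, `make_input_list(["notice_sign.raw", "chairs.raw"])` would yield one relative path per line, which is the format snpe-net-run.exe expects.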

Steps to Transfer Files

Warning

Throughout these steps, we will be switching between the host machine (where you have SNPE installed) and the target device (where your model will be run). Pay attention to the bolded directions indicating which device to execute commands in.

If you are using the same device as both your host machine and target device, you can use cp to copy files instead of scp.

  1. Decide what folder you want to use for your destination folder on the target device.

    • For Windows target devices, we recommend you use C:\ProgramData\SNPE\Temp\snpeexample.

  2. On the target device, open a PowerShell session or connect from the host machine via ssh.

  3. Set a variable for your DESTINATION on the target device by running:

    $env:DESTINATION = "C:\ProgramData\SNPE\Temp\snpeexample"
    Write-Output "DESTINATION is set to $env:DESTINATION"
    
  4. Make the destination folder(s) on the target device for transferred files by running the following:

    New-Item -ItemType Directory -Path $env:DESTINATION -Force
    

    You should expect an output similar to:

    Directory: C:\ProgramData\SNPE\Temp
    
    Mode                 LastWriteTime         Length Name
    ----                 -------------         ------ ----
    d-----          5/6/2025  10:38 AM                snpeexample
    
  5. On the host machine this time, set an environment variable for the destination folder you chose earlier.

    $env:DESTINATION = "C:\ProgramData\SNPE\Temp\snpeexample"
    Write-Output "DESTINATION is set to $env:DESTINATION"
    

    Warning

    Ensure that the directory you created on your target device matches the DESTINATION you set on your host machine.

  6. On your target device, run the following to see which architecture and OS version you have:

    $env:PROCESSOR_ARCHITECTURE
    Get-CimInstance Win32_OperatingSystem
    

    You should see an output like:

    AMD64
    
    SystemDirectory     Organization BuildNumber RegisteredUser     SerialNumber            Version
    ---------------     ------------ ----------- --------------     ------------            -------
    C:\Windows\system32              12345       username@gmail.com 12321-12321-12321-AABBC 10.0.22631
    
  7. Based on your target device’s architecture, OS, and compiler toolchain, choose the proper folder:

    Operating System         Architecture           Folder Name
    Windows                  x86_64 (aka "AMD64")   x86_64-windows-msvc
    Snapdragon on Windows    arm64x                 arm64x-windows-msvc
    Windows                  ARM64                  aarch64-windows-msvc

    Note

    Pay special attention to the architecture. The arm64x folder name ends in an “x” because it supports cross-architecture (ARM64X) apps that run natively on ARM64 and use emulation to load x64 dependencies.

  8. On your host machine, run the following command with the corresponding folder from above to set your TARGET_DEVICE_ARCH, for example:

    $env:TARGET_DEVICE_ARCH = "x86_64-windows-msvc"
    

    Note

    This is similar to what we did earlier when setting HOST_MACHINE_ARCH for our host machine’s details. TARGET_DEVICE_ARCH helps ensure we’re moving executables and libraries that are built to work with the target device’s architecture / OS / tool stack.

  9. On your host machine, use scp to transfer the example built model from your host machine to your target device.

    Note

    The rest of these scp commands will be run on your host machine.

    scp "$($env:SNPE_ROOT)/examples/Models/InceptionV3/dlc/inception_v3_model.dlc" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    

    Note

    You will need to sign in when using SSH for each scp request.

  10. Transfer the input data and labels from the SNPE examples folder into $env:DESTINATION on the target device using scp in a similar way:

    scp -r "$($env:SNPE_ROOT)/examples/Models/InceptionV3/data/cropped" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    scp "$($env:SNPE_ROOT)/examples/Models/InceptionV3/data/imagenet_slim_labels.txt" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  11. Transfer the input list with file paths for all files that should be inferenced for our test.

    scp "$($env:SNPE_ROOT)/examples/Models/InceptionV3/data/target_raw_list.txt" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    

    Note

    The target_raw_list.txt file is generated for our example model by the initialization script in $SNPE_ROOT/examples/Models/InceptionV3/.

  12. Transfer the primary runtime SNPE.dll.

    scp "$($env:SNPE_ROOT)/lib/$($env:TARGET_DEVICE_ARCH)/SNPE.dll" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  13. Transfer snpe-net-run.exe to the target device:

    scp "$($env:SNPE_ROOT)/bin/$($env:TARGET_DEVICE_ARCH)/snpe-net-run.exe" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  14. Transfer the example interpreter script which is used to turn the direct outputs of snpe-net-run.exe into an easy-to-read terminal output. This is specifically written for InceptionV3, so for your applications, you will likely want to write your own interpreter:

    scp "$($env:SNPE_ROOT)/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  15. Run this longer PowerShell script, which finds the msvcp140.dll and vcruntime140.dll paths and stores them in environment variables:

Note

These .dll files are part of Visual Studio’s C++ Runtime, which is needed by many Windows executable files like snpe-net-run.exe.

# NOTE:
# All current TARGET_DEVICE_ARCH values (x86_64-windows-msvc, arm64x-windows-msvc, aarch64-windows-msvc)
# target 64-bit architectures. These DLLs are located in C:\Windows\System32.
# If you ever target a 32-bit architecture (like x86-windows-msvc),
# you should instead search in: C:\Windows\SysWOW64

$dllSearchDir = "C:\Windows\System32"

# Construct full paths to the DLLs
$msvcpPath = Join-Path $dllSearchDir "msvcp140.dll"
$vcruntimePath = Join-Path $dllSearchDir "vcruntime140.dll"

# Check and set msvcp140.dll
if (Test-Path $msvcpPath) {
   $env:PATH_TO_MSVCP = $msvcpPath
   Write-Output "Set PATH_TO_MSVCP to '$msvcpPath'"
} else {
   Write-Output "❌ Failed to set PATH_TO_MSVCP because we were unable to find msvcp140.dll. Please manually set \$env:PATH_TO_MSVCP."
}

# Check and set vcruntime140.dll
if (Test-Path $vcruntimePath) {
   $env:PATH_TO_VCRUNTIME = $vcruntimePath
   Write-Output "Set PATH_TO_VCRUNTIME to '$vcruntimePath'"
} else {
   Write-Output "❌ Failed to set PATH_TO_VCRUNTIME because we were unable to find vcruntime140.dll. Please manually set \$env:PATH_TO_VCRUNTIME."
}

Warning

If the above script fails to set the paths, you will need to manually look for msvcp140.dll and vcruntime140.dll (they are installed as part of the Visual C++ runtime that ships with Visual Studio). Try searching in C:\Windows\System32, C:\Windows\SysWOW64, and the build files for executables you want to run on the target device. Otherwise, you may need to search online for how to obtain those .dll files.

  16. Transfer the Visual Studio .dll files (msvcp140.dll and vcruntime140.dll) by running:

    scp "$($env:PATH_TO_MSVCP)" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    scp "$($env:PATH_TO_VCRUNTIME)" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"

  17. (HTP Only) Transfer the cached .dlc created at the end of Part 5 via snpe-dlc-graph-prepare:

    scp "$($env:SNPE_ROOT)/examples/Models/InceptionV3/dlc/inception_v3_model_cached.dlc" `
        "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
  18. Transfer any additional dependencies based on your use case. It’s worth looking at the .dll files within these folders to see if they are relevant for your application. For this tutorial’s example (Inception V3), we have already transferred all the files that we need.

  • See all libs which are for SNPE or shared between QNN and SNPE (excluding QNN-specific files): Get-ChildItem "$($env:SNPE_ROOT)\lib\$($env:TARGET_DEVICE_ARCH)\" | Where-Object { $_.Name -notmatch 'Qnn' }

  • If they’re relevant, write an scp command to transfer the files over to "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)".

  • Genie files are specifically for LLMs. See the Genie product for more details if you are using an LLM.

  • SNPE.lib is not needed on the target device. It is used on your host machine during development of the application that uses your AI model (similar to how snpe-net-run uses your model on the target device).
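The architecture-to-folder mapping used when choosing TARGET_DEVICE_ARCH in step 7 above can be sketched as a small lookup. This Python helper is hypothetical (not part of the SNPE SDK); it simply mirrors the three table rows shown earlier:

```python
# Mapping from (operating system, reported architecture) to the SNPE
# library/binary folder name, mirroring the table in step 7 above.
FOLDER_BY_TARGET = {
    ("Windows", "AMD64"): "x86_64-windows-msvc",
    ("Snapdragon on Windows", "arm64x"): "arm64x-windows-msvc",
    ("Windows", "ARM64"): "aarch64-windows-msvc",
}

def target_device_arch(os_name, arch):
    """Return the SNPE folder name for a supported OS/architecture pair."""
    try:
        return FOLDER_BY_TARGET[(os_name, arch)]
    except KeyError:
        raise ValueError(f"unsupported target: {os_name}/{arch}")
```

An unsupported combination raises an error rather than guessing, since transferring binaries built for the wrong architecture fails silently at best.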

Additional files to transfer for DSP / HTP / AIP

Warning

This is only required if you plan to use a DSP, HTP, or AIP processor. Otherwise, skip to Part 7 below.

  1. Determine your target device’s Snapdragon architecture by looking your chipset up in the chipset table and finding the “DSP Hexagon Arch”.

    • Ex. “SD 8 Gen 3 (SM8650)” → V75

  2. Update the “XX” value below to set the proper HEXAGON_VERSION (HEXAGON_ARCH is derived from it in the next step). Ex. for “V68”, the proper value for HEXAGON_VERSION would be 68.

    $env:HEXAGON_VERSION = "XX"
    
  3. Run the following to set the HEXAGON_ARCH based on the previously set HEXAGON_VERSION:

    $env:HEXAGON_ARCH = "hexagon-v$($env:HEXAGON_VERSION)"
    Write-Output "Set HEXAGON_VERSION to '$($env:HEXAGON_VERSION)'"
    Write-Output "Set HEXAGON_ARCH to '$($env:HEXAGON_ARCH)'"
    
  4. (DSP Only) Use scp to transfer DSP-specific runtimes as well as other necessary executables from your host machine to the DESTINATION folder on the target Windows device.

    scp "$($env:SNPE_ROOT)\lib\$($env:TARGET_DEVICE_ARCH)\SnpeDspV$($env:HEXAGON_VERSION)Stub.dll" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  5. (HTP Only) If you are planning on using an HTP backend, copy over the HtpPrepare library:

    scp "$($env:SNPE_ROOT)\lib\$($env:TARGET_DEVICE_ARCH)\SnpeHtpV$($env:HEXAGON_VERSION)Stub.dll" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    scp "$($env:SNPE_ROOT)\lib\$($env:TARGET_DEVICE_ARCH)\SnpeHtpPrepare.dll" "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)"
    
  6. See if any of the other files within the $env:HEXAGON_ARCH folder are relevant (the command below ignores files which are QNN-specific). If they contain “snpe” or the name of your Hexagon version in their name, they are relevant:

    Get-ChildItem "$($env:SNPE_ROOT)\lib\$($env:HEXAGON_ARCH)\unsigned" | Where-Object { $_.Name -notmatch 'Qnn' }
    
  7. (Optional) If there are any relevant files, write an scp command similar to those above to transfer them to "$($env:TARGET_USER)@$($env:TARGET_IP):$($env:DESTINATION)".
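The stub library names transferred above follow a simple pattern built from the backend and the Hexagon version. As a sketch (the helper function itself is hypothetical):

```python
# The DSP/HTP stub libraries are named after the backend and Hexagon
# version, e.g. SnpeHtpV73Stub.dll for HTP on Hexagon v73.
def stub_library_name(backend, hexagon_version):
    """Compose a stub .dll name; backend is 'Dsp' or 'Htp'."""
    assert backend in ("Dsp", "Htp"), "backend must be 'Dsp' or 'Htp'"
    return f"Snpe{backend}V{hexagon_version}Stub.dll"
```

This is the same composition the scp commands above perform inline with `SnpeDspV$($env:HEXAGON_VERSION)Stub.dll` and `SnpeHtpV$($env:HEXAGON_VERSION)Stub.dll`.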

Part 7: Executing Your Model With snpe-net-run

At this point, we have moved over all the necessary files to use our model to execute inferences on the target device and verify the outcome.

Setting Environment Variables

  1. Open a terminal on your target device.

    Note

    You can alternatively connect to your target device from your host machine using ssh by running:

    ssh "$($env:TARGET_USER)@$($env:TARGET_IP)"
    
  2. On your target device, update the PATH to point to where the executable (bin) files are located:

    $env:PATH = "$env:PATH;$env:DESTINATION"
    Write-Output "Set PATH to '$($env:PATH)'"
    
  3. (DSP Only) Set the ADSP_LIBRARY_PATH to point to where libraries (lib) needed by the DSP are located by running:

    $env:ADSP_LIBRARY_PATH = "$($env:DESTINATION)"
    Write-Output "Set ADSP_LIBRARY_PATH to '$($env:ADSP_LIBRARY_PATH)'"
    
    • ADSP_LIBRARY_PATH indicates which folders should be loaded into the DSP on the target device, and operates similarly to LD_LIBRARY_PATH (although it is delimited by semicolons instead of colons).
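To illustrate the delimiter difference between the two variables, here is a small Python example (the path values are illustrative, not required):

```python
# ADSP_LIBRARY_PATH (Windows) is semicolon-delimited, while LD_LIBRARY_PATH
# (Linux) is colon-delimited. Example values below are illustrative only.
adsp_library_path = r"C:\ProgramData\SNPE\Temp\snpeexample;C:\Windows\System32"
ld_library_path = "/opt/snpe/lib:/usr/lib"

adsp_dirs = adsp_library_path.split(";")   # two Windows folders
ld_dirs = ld_library_path.split(":")       # two Linux folders
```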

Executing

  1. From the target device, navigate to the directory containing the test files:

    cd "$env:DESTINATION"
    
  2. Run the following to confirm that the files have been transferred and that you are in the proper folder:

    ls
    

    You should see the files you transferred, like so:

    Mode                 LastWriteTime         Length Name
    ----                 -------------         ------ ----
    d-----          5/5/2025   4:49 PM                cropped
    -a----          5/5/2025   4:49 PM          10479 imagenet_slim_labels.txt
    -a----          5/5/2025   4:49 PM       24312743 inception_v3_model.dlc
    -a----          5/5/2025   4:51 PM       48592238 inception_v3_model_cached.dlc
    -a----          5/5/2025   4:50 PM         575056 msvcp140.dll
    -a----          5/5/2025   4:50 PM           3381 show_inceptionv3_classifications.py
    -a----          5/5/2025   4:50 PM         786944 snpe-net-run.exe
    -a----          5/5/2025   4:50 PM       11036160 SNPE.dll
    -a----          5/5/2025   4:49 PM             91 target_raw_list.txt
    -a----          5/5/2025   4:51 PM         119888 vcruntime140.dll
    
  3. Run the following command on the target device to execute an inference:

    & ".\snpe-net-run.exe" `
      --container ".\inception_v3_model.dlc" `
      --input_list ".\target_raw_list.txt" `
      --output_dir ".\output"
    

    Warning

    If you see no output after running the above, try running the below version of the command (directly calling the executable is required on some machines):

    snpe-net-run.exe `
      --container ".\inception_v3_model.dlc" `
      --input_list ".\target_raw_list.txt" `
      --output_dir ".\output"
    

    Note

    When calling your application (like snpe-net-run.exe), you can decide dynamically which processors to use. In this case, if you wanted to specify a backend you could pass in --use_cpu, --use_gpu, --use_dsp, or --use_aip, as long as you transferred the appropriate backend libraries to this device as part of Part 6: Transferring Files. See the reference docs for more details!

  4. Verify that you see an output like this:

    -------------------------------------------------------------------------------
    Model String: N/A
    SNPE v2.33.0.250327124043_117917
    -------------------------------------------------------------------------------
    
    Processing graph : inception_v3_model
    Processing DNN input(s):
    cropped\chairs.raw
    Processing DNN input(s):
    cropped\notice_sign.raw
    Processing DNN input(s):
    cropped\plastic_cup.raw
    Processing DNN input(s):
    cropped\trash_bin.raw
    Successfully executed graph inception_v3_model
    
  5. (Optional) If you don’t want to install Python on your target device, you can scp the output folder back to your host machine instead:

    1. Run this command on your host machine to copy back the output folder:

      scp -r "$($env:TARGET_USER)@$($env:TARGET_IP):`"$($env:DESTINATION)/output`"" .
      
    2. Run the below commands on your host machine instead of your target device.

      • Replace python3 with python if you are in a virtual environment (venv).

  6. On the target device, interpret the results by running:

    Warning

    If show_inceptionv3_classifications.py does not work because it expects a different folder structure, you can try the alternate script below instead, which also depends on Python but handles the actual output structure.

    pip install numpy
    python3 show_inceptionv3_classifications.py -i target_raw_list.txt -o output -l imagenet_slim_labels.txt
    

    Alternate script for interpreting the results:

    Write-Host ""
    Write-Host "Classification results"
    
    $idx = 0
    Get-Content ".\target_raw_list.txt" | ForEach-Object {
        $inputFile = $_.Trim()
        $rawFile = ".\output\Result_$idx\InceptionV3\Predictions\Reshape_1_0.raw"  # output tensor file for this input
    
        # Optional: show what file is being checked
        Write-Host "Checking: $rawFile"
    
        if (-Not (Test-Path $rawFile)) {
            "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, 0.0, 0, "missing_file"
        }
        else {
            try {
                $pythonOutput = python -c "import numpy as np; a = np.fromfile(r'$rawFile', dtype=np.float32); print(f'{np.max(a)} {np.argmax(a)}')" 2>$null
            } catch {
                $pythonOutput = ""
            }
    
            if ([string]::IsNullOrWhiteSpace($pythonOutput)) {
                "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, 0.0, 0, "parse_error"
            }
            else {
                $split = $pythonOutput.Split()
                $maxVal = [float]$split[0]
                $maxIdx = [int]$split[1]
    
                $label = Get-Content ".\imagenet_slim_labels.txt" | Select-Object -Index $maxIdx
    
                "{0,-22} {1,8:F6} {2,3} {3}" -f $inputFile, $maxVal, $maxIdx, $label
            }
        }
    
        $idx++
    }
    
  7. You should see an output that looks like this:

    Classification results
    cropped/notice_sign.raw 0.130224 459 brass
    cropped/trash_bin.raw  0.719755 413 ashcan
    cropped/plastic_cup.raw 0.989595 648 measuring cup
    cropped/chairs.raw     0.380808 832 studio couch
    

    Note

    If you are using a different model, you will likely want to create your own interpretation script similar to the above to turn the raw output tensors into a human-readable output.
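A minimal pure-stdlib version of such an interpretation step might look like this. It is a sketch, assuming the output tensor is a flat array of 32-bit floats (as the raw files produced above are read by the numpy one-liner in the alternate script):

```python
# Sketch: interpret one raw output tensor from snpe-net-run. The bytes are
# read as a flat float32 array; the highest-scoring class index is mapped
# to its label (as imagenet_slim_labels.txt is used above).
import array

def top1(raw_bytes, labels):
    """Return (score, index, label) for the highest-scoring class."""
    scores = array.array("f")          # 32-bit floats
    scores.frombytes(raw_bytes)
    idx = max(range(len(scores)), key=scores.__getitem__)
    return scores[idx], idx, labels[idx]
```

In practice you would call `top1` on the contents of each `Result_N\...\*.raw` file and the lines of your label file, which is what both interpreter scripts above do per input.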

With that, you’ve successfully gone through each part to build and execute your AI model on your target device!

In Summary

You have installed SNPE and its dependencies, built your model into a .dlc, transferred it onto the target device, and used snpe-net-run to execute inferences using the processors you chose!

When applying this to your own model, be sure to consider the key variables which may change how you use the SNPE tools along the way:

  1. What model do you want to use?

    1. How will you download it?

    2. How will you get the input data?

    3. How will you format the input data to feed it into your model?

    4. Do you want to quantize it? (Required for HTP / DSP target device processors)

  2. Which framework is your model using? (Ex. ONNX, PyTorch, TensorFlow, etc.)

  3. What is the OS and architecture of your host machine? (Ex. Linux x86_64)

  4. What is the OS and architecture of your target device? (Ex. Ubuntu AArch64)

  5. Which processor(s) do you want to use for your AI models? (CPU / GPU / DSP / HTP / AIP)

With those answers, you can adapt this tutorial to work with a model of your choice on your host machine, with any supported target device.

From here, the most common next steps are to:

  1. Use this guide to build and execute your own model instead of the example model.

  2. Create an application which uses the model on the target device (replacing snpe-net-run). See this tutorial for more details (The SNPE API supports C++, C, or Java).

  3. Optimize your model using tools like snpe-bench.

If you have any questions, consider reaching out on Qualcomm’s Developer Discord here!