CNN to QNN for Windows Host on Windows Target

Note

This is Part 2 of the CNN to QNN tutorial for Windows host machines. If you have not completed Part 1, please do so here.

Step 3: Build your QNN model for target device architecture

Once the CNN model has been converted into QNN format, the next step is to build it so it can run on the target device’s operating system with qnn-model-lib-generator.

Warning

For cases where the “host machine” and “target device” are the same (e.g., you want to build and run model inferences on your Snapdragon for Windows device), you will need to adapt the steps to handle files locally instead of transferring them to a remote device.

Note

Please continue to use the same terminal you were using on your host machine from part 1.

  1. Ensure you have cmake installed on your machine by running cmake --version.

    Note

    If cmake is not installed, run & "${QNN_SDK_ROOT}/bin/check-windows-dependency.ps1" to download the proper dependencies.

  2. Run mkdir C:\tmp\qnn_tmp to make the folder where your newly built files will live.

  3. Run cd C:\tmp\qnn_tmp to navigate to the new folder.

  4. Run the following command to copy over the QNN model files:

    Copy-Item -Path "${env:QNN_SDK_ROOT}\examples\Models\InceptionV3\model\Inception_v3.cpp","${env:QNN_SDK_ROOT}\examples\Models\InceptionV3\model\Inception_v3.bin" -Destination "c:\tmp\qnn_tmp"
    
  5. Choose the most relevant supported target architecture from the following list:

    • For an x86_64 Windows target: windows-x86_64

    • For an Arm64 Windows target: windows-aarch64

    • For Snapdragon devices, choose windows-aarch64

  6. On your host machine, set the target architecture of your target device by setting QNN_TARGET_ARCH to your device’s target architecture:

    $QNN_TARGET_ARCH="your-target-architecture-from-above"
    

    For example:

    $QNN_TARGET_ARCH="windows-x86_64"
    
  7. Run the following command on your host machine to generate the model library:

    python "${QNN_SDK_ROOT}\bin\x86_64-windows-msvc\qnn-model-lib-generator" `
        -c ".\Inception_v3.cpp" `
        -b ".\Inception_v3.bin" `
        -o "model_libs" `
        -t "$QNN_TARGET_ARCH"
    
    • -c - This indicates the path to the .cpp QNN model file.

    • -b - This indicates the path to the .bin QNN model file. (-b is optional, but the generated library can fail at runtime if it needs the .bin file, so including it is recommended.)

    • -o - The path to the output folder.

    • -t - This indicates which architecture to build for (windows-x86_64 or windows-aarch64).

    Warning

    If the build fails due to a missing build dependency such as cmake or clang-cl, run & "${QNN_SDK_ROOT}/bin/check-windows-dependency.ps1" to install all build dependencies.

    You can also use & "${QNN_SDK_ROOT}/bin/envcheck.ps1" -a to help debug which dependencies are missing.

  8. Run ls C:\tmp\qnn_tmp\model_libs\${QNN_TARGET_ARCH} and verify that the output file Inception_v3.dll is inside.

    • You will use the Inception_v3.dll file on the target device to execute inferences.

    • The output .dll file will be located in the model_libs directory, in a subfolder named after the target architecture. For example: model_libs/x64/Inception_v3.dll or model_libs/aarch64/Inception_v3.dll.
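As a sanity check before shelling out, the step-7 invocation can be assembled programmatically. A minimal Python sketch (the helper name and example paths are illustrative, not part of the SDK; the flags mirror the command above):

```python
from pathlib import Path

def model_lib_generator_cmd(sdk_root, cpp, target_arch,
                            bin_file=None, out_dir="model_libs"):
    """Assemble the qnn-model-lib-generator argument list from step 7."""
    tool = Path(sdk_root) / "bin" / "x86_64-windows-msvc" / "qnn-model-lib-generator"
    cmd = ["python", str(tool), "-c", cpp, "-o", out_dir, "-t", target_arch]
    if bin_file:  # -b is optional but recommended (see the flag notes above)
        cmd += ["-b", bin_file]
    return cmd

cmd = model_lib_generator_cmd("C:/qnn_sdk", "./Inception_v3.cpp",
                              "windows-x86_64", "./Inception_v3.bin")
```

Passing the list to subprocess.run avoids quoting issues with paths that contain spaces.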

Step 4: Use the built model on specific processors

Now that you have an executable version of your model, the next step is to transfer the built model and all necessary files to the target device and then run inferences on the target processor.

  1. Install all necessary dependencies from Setup.

  2. Follow the SSH setup instructions below.

  3. Follow the instructions for each specific processor you want to run your model on.

Sub-Step 1: If you haven’t already, ensure that you follow the processor-specific Setup instructions for your host machine here.

Sub-Step 2: Set up SSH on the Target Device

Warning

Here we use OpenSSH to copy files with scp later on and run scripts on the target device via ssh. If that does not work for your target device, feel free to use any other method of transferring the files over (e.g., USB or mstsc for a visual connection).

  1. Ensure that both the host device and the target device are on the same network for this setup.
    1. Otherwise, OpenSSH requires port-forwarding to connect.

  2. On the target device, install OpenSSH on Windows.
    1. Open an Admin PowerShell terminal.

    2. Run the following command to install OpenSSH Server:

    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    
  3. Once installed, start the ssh server on your target device by running:

    Start-Service sshd
    # Optional: The command below causes the OpenSSH server to start on device startup.
    Set-Service -Name sshd -StartupType 'Automatic'
    
  4. You can verify that the ssh server is live by running:

    Get-Service -Name sshd
    

    You can turn off the OpenSSH Server service by running Stop-Service sshd on your target device.

  5. On your target device, run ipconfig to get the IP address of your target Windows device.

  6. From your host machine, set a console variable for your target device’s IPv4 address from above (replacing 127.0.0.1 below):

    $TARGET_IP="127.0.0.1"
    
  7. Also set the username you would like to sign into on your Windows target device (you can find it by looking at the path to a user folder like Documents):

    $TARGET_USER="yourusername"
    
  8. On your host machine, install OpenSSH Client by:
    1. Opening a PowerShell as an administrator.

    2. Installing by running Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

    3. Verifying the installation by running Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Client*'

    From this point on, you should be able to ssh from PowerShell. You may need to open another PowerShell window to do so.
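Before trying scp, you can confirm that the target's SSH port is reachable from the host. A minimal Python sketch (a plain TCP probe, not part of the QNN SDK):

```python
import socket

def ssh_port_open(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for your $TARGET_IP, check that sshd is running on the target and that both machines are on the same network.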

Sub-Step 3: Follow the steps below for whichever processor you would like to run your model on.

CPU

Transferring over all relevant files

  1. On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination folder for transferred files.

  2. On the host device, use scp to transfer QnnCpu.dll from your host machine to C:\qnn_test_package on the target Windows device.

    scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnCpu.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  3. Use scp to transfer the example built model.

    • Update the x64 folder below to the proper folder for your built model; the folder name depends on your host machine’s architecture.

    scp "C:/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  4. Transfer the input data, input list, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:

    scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  5. Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:

    scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
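The transfers above follow one pattern, so they can be generated from a manifest. A hedged Python sketch (the helper is illustrative, not part of the SDK; the source list mirrors the scp commands above, with placeholder user/IP):

```python
def cpu_scp_commands(sdk_root, target_user, target_ip,
                     target_arch="windows-aarch64",
                     model_dll="C:/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll"):
    """Build the seven scp command strings used in the transfer steps above."""
    dest = f'"{target_user}@{target_ip}:C:/qnn_test_package"'
    ex = f"{sdk_root}/examples/Models/InceptionV3"
    sources = [
        ("", f"{sdk_root}/lib/{target_arch}/QnnCpu.dll"),
        ("", model_dll),
        ("-r", f"{ex}/data/cropped"),          # directory, hence -r
        ("", f"{ex}/data/target_raw_list.txt"),
        ("", f"{ex}/data/imagenet_slim_labels"),
        ("", f"{ex}/scripts/show_inceptionv3_classifications.py"),
        ("", f"{sdk_root}/bin/{target_arch}/qnn-net-run.exe"),
    ]
    return [" ".join(p for p in ("scp", flag, f'"{src}"', dest) if p)
            for flag, src in sources]

cmds = cpu_scp_commands("C:/qnn_sdk", "user", "192.0.2.5")
```

Printing the returned list gives commands you can paste into PowerShell one at a time.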

Doing inferences on the target device processor

  1. Open a PowerShell instance on the target Windows device.

    • Alternatively, you can run the following command from your host machine to ssh into your target device.

    • These console variables were set in the above instructions for “Transferring over all relevant files”.

    ssh "${TARGET_USER}@${TARGET_IP}"
    

    Note

    You will have to log in with your target device’s login for that username.

  2. Navigate to the directory containing the test files:

    cd C:\qnn_test_package
    
  3. Run the following command on the target device to execute an inference:

    .\qnn-net-run.exe `
       --model ".\Inception_v3.dll" `
       --input_list ".\target_raw_list.txt" `
       --backend ".\QnnCpu.dll" `
       --output_dir ".\output"
    
  4. Run the following script on the target device to view the classification results:

    Note

    You can alternatively copy the output folder back to your host machine with scp and run the following script there to avoid having to install python on your target device.

    python ".\show_inceptionv3_classifications.py" `
           -i ".\cropped\raw_list.txt" `
           -o "output" `
           -l ".\imagenet_slim_labels.txt"
    
  5. Verify that the classification results in output match the following:

    1. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan

    2. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch

    3. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup

    4. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass
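Each expected line has the shape <path> <score> <class index> <label>, so the check in step 5 can be automated. An illustrative Python parser (not part of the SDK):

```python
def parse_result(line):
    """Split a '<path> <score> <class index> <label>' result line into fields."""
    path, score, index, *label = line.split()
    return path, float(score), int(index), " ".join(label)

fields = parse_result(
    "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw"
    " 0.253906 832 studio couch")
```

Comparing the parsed score and label per image is more robust than a byte-for-byte diff of the script output.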

GPU

Transferring over all relevant files

  1. On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination folder for transferred files.

  2. Use scp to transfer QnnGpu.dll from your host machine to C:\qnn_test_package on the target Windows device.

    scp "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnGpu.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  3. Use scp to transfer the example built model.

    1. Update the x64 folder below to the proper folder for your built model; the folder name depends on your host machine’s architecture.

    scp "C:/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  4. Transfer the input data, input list, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:

    scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  5. Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:

    scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    

Doing inferences on the target device processor

  1. Open a PowerShell instance on the target Windows device.

    1. Alternatively, you can run the following command from your host machine to ssh into your target device.

    2. These console variables were set in the above instructions for “Transferring over all relevant files”.

    ssh "${TARGET_USER}@${TARGET_IP}"
    

    Note

    You will have to log in with your target device’s login for that username.

  2. Navigate to the directory containing the test files:

    cd C:\qnn_test_package
    
  3. Run the following command on the target device to execute an inference:

    .\qnn-net-run.exe `
       --model ".\Inception_v3.dll" `
       --input_list ".\target_raw_list.txt" `
       --backend ".\QnnGpu.dll" `
       --output_dir ".\output"
    
  4. Run the following script on the target device to view the classification results:

    Note

    You can alternatively copy the output folder back to your host machine with scp and run the following script there to avoid having to install python on your target device.

    python ".\show_inceptionv3_classifications.py" `
       -i ".\cropped\raw_list.txt" `
       -o "output" `
       -l ".\imagenet_slim_labels.txt"
    
  5. Verify that the classification results in output match the following:

    1. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan

    2. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch

    3. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup

    4. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass

DSP

Warning

DSP processors require quantized models instead of floating point models. If you do not have a quantized model, please follow Step 2 of the CNN to QNN tutorial to build one.

Transferring over all relevant files

  1. On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination folder for transferred files.

  2. Determine your target device’s Snapdragon architecture by looking your chipset up in the Supported Snapdragon Devices table.

  3. Update the “X” values below and run the commands to set DSP_ARCH to match the version number found in the above table.

    1. Only the two digits at the end should change, and both variables should use the same version. For example, for “V68”, DSP_VERSION would be 68 and DSP_ARCH would be hexagon-v68.

    $DSP_VERSION="XX"
    $DSP_ARCH="hexagon-v${DSP_VERSION}"
    
  4. Use scp to transfer QnnDsp.dll as well as other necessary executables from your host machine to C:\qnn_test_package on the target Windows device.

    scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnDsp.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnDspV${DSP_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  5. Check the Backend table to see if there are any other processor-specific libraries needed for your target processor (DSP) and your target device’s architecture ($QNN_TARGET_ARCH).

    1. Use the scp syntax above to transfer any additional .dll files listed under your selected target architecture in that table. (There may be none!)

    Warning

    Ensure you also scp the hexagon-v## libraries (in addition to the other architecture files).

  6. Use scp to transfer the example built model.

    1. Update the x64 folder below to the proper folder for your built model; the folder name depends on your host machine’s architecture.

    scp "C:/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  7. Transfer the input data, input list, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:

    scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  8. Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:

    scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
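DSP_VERSION and DSP_ARCH set in step 3 must carry the same two-digit version, which is easy to get wrong by hand. A small Python sketch that validates the version string and derives the architecture name (the hexagon-vXX naming follows the convention above; the helper itself is illustrative):

```python
import re

def hexagon_arch(dsp_version):
    """Validate a two-digit Hexagon version string and derive the arch name."""
    if not re.fullmatch(r"\d{2}", dsp_version):
        raise ValueError(f"expected two digits (e.g. '68'), got {dsp_version!r}")
    return f"hexagon-v{dsp_version}"
```

Deriving the architecture string from a single version value guarantees the stub library name and the hexagon folder never drift apart.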

Doing inferences on the target device processor

  1. Open a PowerShell instance on the target Windows device.

    1. Alternatively, you can run the following command from your host machine to ssh into your target device.

    2. These console variables were set in the above instructions for “Transferring over all relevant files”.

    ssh "${TARGET_USER}@${TARGET_IP}"
    

    Note

    You will have to log in with your target device’s login for that username.

  2. Navigate to the directory containing the test files:

    cd C:\qnn_test_package
    
  3. Run the following command on the target device to execute an inference:

    .\qnn-net-run.exe `
       --model ".\Inception_v3.dll" `
       --input_list ".\target_raw_list.txt" `
       --backend ".\QnnDsp.dll" `
       --output_dir ".\output"
    
  4. Run the following script on the target device to view the classification results:

    Note

    You can alternatively copy the output folder back to your host machine with scp and run the following script there to avoid having to install python on your target device.

    python ".\show_inceptionv3_classifications.py" `
       -i ".\cropped\raw_list.txt" `
       -o "output" `
       -l ".\imagenet_slim_labels.txt"
    
  5. Verify that the classification results in output match the following:

    1. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan

    2. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch

    3. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup

    4. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass

HTP

Warning

HTP processors require quantized models instead of floating point models. If you do not have a quantized model, please follow Step 2 of the CNN to QNN tutorial to build one.

Additional HTP Required Setup

Running the model on a target device’s HTP requires the generation of a serialized context.

On the host:

  1. Navigate to the directory where you built the model in the previous steps:

    cd /tmp/qnn_tmp
    
  2. You can set custom options and performance modes for the HTP backend through the backend config. Refer to QNN HTP Backend Extensions for the options available in the config.

  3. Refer to the example below for creating a backend config file for the QCS6490/QCM6490 target with the mandatory options passed in:

    1. Update the htp_arch value below based on your device.

    {
        "graphs": [
            {
                "graph_names": [
                    "Inception_v3"
                ],
                "vtcm_mb": 2
            }
        ],
        "devices": [
            {
                "htp_arch": "v68"
            }
        ]
    }
    
  4. The backend extensions config, which points the tools at the config file above, is specified through JSON with the minimum required parameters as shown below. shared_library_path is the path to the shared extensions library (.dll), and config_file_path is the path to the backend config file you wrote above.

    {
        "backend_extensions": {
            "shared_library_path": "path_to_shared_library",
            "config_file_path": "path_to_config_file"
        }
    }
    
  5. To generate the context, update <path to JSON of backend extensions> below with the config you wrote above, then run the command in Windows PowerShell:

    & "${QNN_SDK_ROOT}/bin/${QNN_TARGET_ARCH}/qnn-context-binary-generator.exe" `
        --backend "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnHtp.dll" `
        --model "${QNN_SDK_ROOT}/examples/Models/InceptionV3/model_libs/${QNN_TARGET_ARCH}/libInception_v3.dll" `
        --binary_file "Inception_v3.serialized" `
        --config_file <path to JSON of backend extensions>
    
  6. This creates the serialized context at:

    • ${QNN_SDK_ROOT}/examples/Models/InceptionV3/output/Inception_v3.serialized.bin
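Because strict JSON forbids comments, it can be convenient to generate the backend_extensions wrapper file programmatically rather than editing it by hand. A hedged Python sketch (the function name and output path are illustrative):

```python
import json
import os
import tempfile

def write_backend_extensions(shared_lib, backend_conf, out_path):
    """Write the backend_extensions wrapper JSON (comment-free, strict JSON)."""
    cfg = {"backend_extensions": {
        "shared_library_path": shared_lib,  # path to the extensions .dll
        "config_file_path": backend_conf,   # path to the graphs/devices config
    }}
    with open(out_path, "w") as f:
        json.dump(cfg, f, indent=4)
    return cfg

out = os.path.join(tempfile.gettempdir(), "backend_ext.json")
cfg = write_backend_extensions("path_to_shared_library", "path_to_config_file", out)
```

The resulting file is what you pass to --config_file in the qnn-context-binary-generator command above.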

Transferring over all relevant files

  1. On the target device, open a terminal and run mkdir C:\qnn_test_package to make a destination folder for transferred files.

  2. Determine your target device’s Snapdragon architecture by looking your chipset up in the Supported Snapdragon Devices table.

  3. Update the “X” values below and run the commands to set HTP_VERSION to match the version number found in the above table.

    1. Only the two digits at the end should change, and both variables should use the same version. For example, for “V68” in the table, the proper value for HTP_VERSION would be 68 and HTP_ARCH would be hexagon-v68. (You can use 68 as the default here to try it out.)

    $HTP_VERSION="XX"
    $HTP_ARCH="hexagon-v${HTP_VERSION}"
    
  4. Use scp to transfer QnnHtp.dll and its companion libraries from your host machine to C:\qnn_test_package on the target Windows device.

    scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtp.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtpPrepare.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "$QNN_SDK_ROOT/lib/${QNN_TARGET_ARCH}/QnnHtpV${HTP_VERSION}Stub.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "$QNN_SDK_ROOT/lib/${HTP_ARCH}/unsigned/*" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  5. Check the Backend table to see if there are any other processor-specific libraries needed for your target processor (HTP) and your target device’s architecture ($QNN_TARGET_ARCH).

    1. Use similar syntax above for scp to transfer any additional .dll files listed below your selected target architecture in this table. (Usually the above install covers them all!)

  6. Use scp to transfer the example built model.

    1. Update the x64 folder below to the proper folder for your built model; the folder name depends on your host machine’s architecture.

    scp "C:/tmp/qnn_tmp/model_libs/x64/Inception_v3.dll" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  7. Transfer the input data, input list, and script from the QNN SDK examples folder into C:\qnn_test_package on the target device using scp in a similar way:

    scp -r "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/imagenet_slim_labels.txt"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    scp "${QNN_SDK_ROOT}/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications.py"  "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    
  8. Transfer qnn-net-run.exe from $QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe to C:\qnn_test_package on the target device:

    scp "$QNN_SDK_ROOT/bin/$QNN_TARGET_ARCH/qnn-net-run.exe" "${TARGET_USER}@${TARGET_IP}:C:/qnn_test_package"
    

Doing inferences on the target device processor

  1. Open a PowerShell instance on the target Windows device.

    1. Alternatively, you can run the following command from your host machine to ssh into your target device.

    2. These console variables were set in the above instructions for “Transferring all relevant files”.

    ssh "${TARGET_USER}@${TARGET_IP}"
    

    Note

    You will have to log in with your target device’s login for that username.

  2. Navigate to the directory containing the test files:

    cd C:\qnn_test_package
    
  3. Update the environment on the device by running:

    $env:LD_LIBRARY_PATH="C:/qnn_test_package"
    $env:ADSP_LIBRARY_PATH="C:/qnn_test_package"
    
  4. Run the following command on the target device to execute an inference:

    .\qnn-net-run.exe `
       --retrieve_context ".\Inception_v3.serialized.bin" `
       --input_list ".\target_raw_list.txt" `
       --backend ".\QnnHtp.dll" `
       --output_dir ".\output"
    
  5. Run the following script on the target device to view the classification results:

    Note

    You can alternatively copy the output folder back to your host machine with scp and run the following script there to avoid having to install python on your target device.

    python ".\show_inceptionv3_classifications.py" `
         -i ".\cropped\raw_list.txt" `
         -o "output" `
         -l ".\imagenet_slim_labels.txt"
    
  6. Verify that the classification results in output match the following:

    1. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw 0.777344 413 ashcan

    2. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw 0.253906 832 studio couch

    3. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw 0.980469 648 measuring cup

    4. ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw 0.167969 459 brass

Running the built model

  1. Connect to the Windows target device and create a folder for the model files and input data (target specific):

    mstsc -v <your device IP>
    New-Item -Path "C:/qnn_test_package" -ItemType Directory
    
  2. Look up your target device’s Snapdragon architecture in this Supported Snapdragon Devices table and set $HTP_ARCH to hexagon-vXX where XX is the version of your Hexagon Architecture. For example:

    $HTP_ARCH = "hexagon-v68"
    
  3. Copy QnnHtp.dll and your built model (Inception_v3.serialized.bin) to your target device:

    Copy-Item -Path "${QNN_SDK_ROOT}/lib/${HTP_ARCH}/unsigned/*" -Destination "C:/qnn_test_package"
    Copy-Item -Path "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnHtp.dll" -Destination "C:/qnn_test_package"
    Copy-Item -Path "${QNN_SDK_ROOT}/examples/Models/InceptionV3/output/Inception_v3.serialized.bin" -Destination "C:/qnn_test_package"
    
  4. Copy the specific version of your $HTP_ARCH stub by replacing QnnHtpV68Stub.dll with your version (e.g., QnnHtpV69Stub.dll for v69):

    Copy-Item -Path "${QNN_SDK_ROOT}/lib/${QNN_TARGET_ARCH}/QnnHtpV68Stub.dll" -Destination "C:/qnn_test_package"
    
  5. Copy the input data and input lists to your target device:

    Copy-Item -Path "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped" -Destination "C:/qnn_test_package" -Recurse
    Copy-Item -Path "${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/target_raw_list.txt" -Destination "C:/qnn_test_package"
    
  6. Copy the qnn-net-run.exe tool which will actually execute the inferences:

    Copy-Item -Path "${QNN_SDK_ROOT}/bin/${QNN_TARGET_ARCH}/qnn-net-run.exe" -Destination "C:/qnn_test_package"
    
  7. Set up the environment on your target device by running:

    $env:LD_LIBRARY_PATH = "C:/qnn_test_package"
    $env:ADSP_LIBRARY_PATH = "C:/qnn_test_package"
    
  8. Use qnn-net-run in the target device shell to execute the inference on the example inputs:

    ./qnn-net-run.exe --backend QnnHtp.dll --input_list target_raw_list.txt --retrieve_context Inception_v3.serialized.bin --output_dir ./output
    
  9. Copy the results back to the Windows host machine:

    Copy-Item -Path "C:/qnn_test_package/output" -Destination "C:/tmp/qnn_tmp" -Recurse
    
  10. Open “Developer PowerShell for VS 2022”.

  11. Run cd C:/tmp/qnn_tmp.

  12. Run the following command to output a readable view of the inference data:

    py -3 ./show_inceptionv3_classifications.py -i ./cropped/raw_list.txt -o output -l ./imagenet_slim_labels.txt

  13. Verify that the classification results in output match the following:

    Path                                                                      Score      Label

    ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/trash_bin.raw    0.777344   413 ashcan

    ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/chairs.raw       0.253906   832 studio couch

    ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/plastic_cup.raw  0.980469   648 measuring cup

    ${QNN_SDK_ROOT}/examples/Models/InceptionV3/data/cropped/notice_sign.raw  0.167969   459 brass

LPAI

Warning

The LPAI backend on the x86 Windows platform can be used for offline model preparation and hardware simulation. Direct execution of the serialized model by the QNN SDK is supported only on the Linux target platform.

Preparing LPAI Configuration Files for Model Preparation

EXAMPLE of config.json file:

{
   "backend_extensions": {
      "shared_library_path": "${QNN_SDK_ROOT}/lib/x86_64-windows-msvc/QnnLpaiNetRunExtensions.dll",
      "config_file_path": "./lpaiParams.conf"
   }
}

EXAMPLE of lpaiParams.conf file that includes only preparation parameters:

{
   "lpai_backend": {
      "target_env": "adsp",
      "enable_hw_ver": "v5"
   },
   "lpai_graph": {
      "prepare": {
         "enable_batchnorm_fold": true,
         "exclude_io": false
      }
   }
}

To configure lpaiParams.conf, consider using the following optional settings:

lpai_backend
   "target_env"              "arm/adsp/x86/tensilica, default adsp"
   "enable_hw_ver"           "v4,v5 default v5"
lpai_graph
   prepare
      "enable_batchnorm_fold"   "true/false,     default false"
      "exclude_io"              "true/false,     default false"
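The option listing above is effectively a set of defaults that lpaiParams.conf overrides. A Python sketch of that overlay (the defaults dict mirrors the listing above; the merge helper itself is illustrative, not part of the SDK):

```python
# Defaults taken from the option listing above.
LPAI_DEFAULTS = {
    "lpai_backend": {"target_env": "adsp", "enable_hw_ver": "v5"},
    "lpai_graph": {"prepare": {"enable_batchnorm_fold": False,
                               "exclude_io": False}},
}

def merge(defaults, overrides):
    """Recursively overlay override values onto a defaults dict."""
    out = dict(defaults)
    for key, val in overrides.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], val)
        else:
            out[key] = val
    return out

# Override only the batchnorm-fold flag, as in the example conf above.
conf = merge(LPAI_DEFAULTS, {"lpai_graph": {"prepare": {"enable_batchnorm_fold": True}}})
```

Serializing the merged dict with json.dump produces a lpaiParams.conf with every option explicit, which is easier to diff between runs.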

Using the above config.json and lpaiParams.conf you can use qnn-context-binary-generator to build the LPAI offline model.

Where file names are mentioned below, provide the relative or absolute path to that file.

cd ${QNN_SDK_ROOT}/examples/QNN/converter/models
& "${QNN_SDK_ROOT}/bin/x86_64-windows-msvc/qnn-context-binary-generator.exe" `
              --backend "${QNN_SDK_ROOT}/lib/x86_64-windows-msvc/QnnLpai.dll" `
              --model "${QNN_SDK_ROOT}/examples/Models/InceptionV3/model_libs/x86_64-windows-msvc/<QnnModel.dll>" `
              --config_file <config.json> `
              --log_level verbose `
              --binary_file <lpai_graph_serialized>

Note

  • Use the generated lpai_graph_serialized.bin file (in QNN format) to execute directly with the QNN SDK on a Linux target.

Running LPAI Emulation Backend on Windows x86

While the LPAI backend on x86_64 Windows is designed for offline model generation as described above, it can also run in HW simulation mode, where it internally executes the prepare and execute steps together.

EXAMPLE of config.json file for Simulator:

{
   "backend_extensions": {
      "shared_library_path": "${QNN_SDK_ROOT}/lib/x86_64-windows-msvc/QnnLpaiNetRunExtensions.dll",
      "config_file_path": "./lpaiParams.conf"
   }
}

EXAMPLE of lpaiParams.conf file that includes preparation and execution parameters:

{
   "lpai_backend": {
      "target_env": "x86",
      "enable_hw_ver": "v5"
   },
   "lpai_graph": {
      "prepare": {
         "enable_batchnorm_fold": false,
         "exclude_io": false
      },
      "execute": {
         "fps": 1,
         "ftrt_ratio": 10,
         "client_type": "real_time",
         "affinity": "soft",
         "core_selection": 0
      }
   }
}

To configure lpaiParams.conf, consider using the following optional settings:

lpai_backend
   "target_env"              "arm/adsp/x86/tensilica, default adsp"
   "enable_hw_ver"           "v4,v5 default v5"
lpai_graph
   prepare
      "enable_batchnorm_fold"   "true/false,     default false"
      "exclude_io"              "true/false,     default false"
   execute
      "fps"                     "Specify the fps rate number, used for clock voting, default 1"
      "ftrt_ratio"              "Specify the ftrt_ratio number, default 10"
      "client_type"             "real_time/non_real_time, default real_time"
      "affinity"                "soft/hard, default soft"
      "core_selection"          "Specify the core number, default 0"

Using the above config.json and lpaiParams.conf you can use qnn-net-run to directly execute LPAI backend in simulation mode.

With the appropriate libraries compiled, qnn-net-run is used with the following:

Note

If full paths are not given to qnn-net-run, all libraries must be added to LD_LIBRARY_PATH and be discoverable by the system library loader.

cd ${QNN_SDK_ROOT}/examples/QNN/converter/models
& "${QNN_SDK_ROOT}/bin/x86_64-windows-msvc/qnn-net-run.exe" `
              --backend "${QNN_SDK_ROOT}/lib/x86_64-windows-msvc/QnnLpai.dll" `
              --model "${QNN_SDK_ROOT}/examples/QNN/example_libs/x86_64-windows-msvc/libqnn_model_8bit_quantized.dll" `
              --input_list "${QNN_SDK_ROOT}/examples/QNN/converter/models/input_list_float.txt" `
              --config_file <config.json>

Outputs from the run will be located at the default ./output directory.
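The files under output are raw tensor dumps. Assuming float32 output, which matches the float input list used here (an assumption; check your model's output datatype), a minimal reader sketch:

```python
import os
import struct
import tempfile

def read_float32_raw(path):
    """Read a .raw tensor file as a flat list of little-endian float32 values."""
    with open(path, "rb") as f:
        data = f.read()
    return list(struct.unpack(f"<{len(data) // 4}f", data))

# Round-trip demo on a temporary file (values chosen to be exact in float32):
vals = [0.25, 1.5, -3.0]
tmp = os.path.join(tempfile.gettempdir(), "demo_output.raw")
with open(tmp, "wb") as f:
    f.write(struct.pack(f"<{len(vals)}f", *vals))
out = read_float32_raw(tmp)
```

For quantized output tensors, substitute the appropriate integer format character and apply the tensor's scale/offset before interpreting the values.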