Linux Setup

Instructions for Linux Host (Ubuntu, WSL, and other distros)

Follow these instructions to install the Qualcomm Neural Processing SDK (commonly referred to as SNPE) and all necessary dependencies.

SNPE allows you to convert an AI model (e.g., a .pt model from PyTorch) into instructions that can be run on a target device’s various processing units (CPU, GPU, DSP).

Note

This guide is for a host machine running Linux. If you’re using a Windows machine, please follow the instructions for the Windows setup.

Note

SNPE is verified with the Ubuntu 22.04 LTS (Jammy) Linux host operating system and the Windows Subsystem for Linux (WSL2) environment, version 1.1.3.0 (currently limited to Linux-host-runnable artifacts such as the converters, model generation, and run tools).

There are several parts to this guide to install SNPE and all its corresponding dependencies:

  • Part 1: Install the SNPE SDK.

  • Part 2: Install SNPE dependencies.

  • Part 3: Install model frameworks to interpret your AI model files (e.g., PyTorch).

  • Part 4: Install Target Device OS-Specific Toolchain Code.

  • Part 5: Install Dependencies for Target Hardware Processors.

Some of the installation commands in these steps require admin (sudo) access.

Note

To learn about any acronyms that are new to you, see the glossary.

Note

This guide contains many recommendations about specific versions of code to use that have been verified to work. Other versions of those dependencies may or may not work, so use them at your own risk.

Part 1: Install the Qualcomm Neural Processing Engine (aka “SNPE”)

  1. Go to the SNPE product page.

  2. Click “Get Software” to download the SNPE SDK (this downloads the QAIRT SDK, which contains SNPE).

    1. The zipped file should have a name like v2.32.0.250228.

    2. Note: the SNPE SDK is ~2.5 GB when unzipped.

  3. Unzip the downloaded SDK into the folder where you want the SDK to live.

Set up your environment

  1. Open a terminal.

    Note

    Please keep this terminal open, as it will contain environment variables you’ll need in later steps and tutorials.

  2. Navigate inside the unzipped SNPE SDK to qairt/<SNPE_VERSION>/bin.

    1. Replace <SNPE_VERSION> with the name of the folder directly under qairt.

      • It should look almost identical to the zipped folder name (ex. 2.32.0.250228).

      • Example: cd /path/to/unzipped/sdk/qairt/2.32.0.250228/bin

  3. To set important environment variables like QAIRT_SDK_ROOT run:

    source ./envsetup.sh
    

    Warning

    The envsetup.sh script only updates environment variables for the current terminal session.

  4. To install the core Linux build tools, run:

    chmod +x "${QAIRT_SDK_ROOT}/bin/check-linux-dependency.sh"
    sudo "${QAIRT_SDK_ROOT}/bin/check-linux-dependency.sh"
    
    • If you see “Done!!”, then the installation has completed successfully.

    • If you do not see that message, re-run the command; it may need a second pass after it installs key packages (like build-essential, cmake, etc.).

    Note

    As the script is running, you may have to confirm additional downloads by pressing “Enter”. The script may take several minutes to complete.

  5. Run the following command to verify that you have installed the required toolchain successfully:

    "${QAIRT_SDK_ROOT}/bin/envcheck" -c
    
    • You should see something like this:

      Checking Clang Environment
      --------------------------------------------------------------
      [INFO] Found clang++ at /usr/bin/clang++
      --------------------------------------------------------------
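
The environment checks in the steps above boil down to confirming that a few variables are exported in the current shell. A minimal sketch along these lines can make that explicit (check_var is a hypothetical helper name, not part of the SDK):

```shell
# Hypothetical helper, not part of the SDK: succeed only when the named
# environment variable is set and non-empty.
check_var() {
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then
    echo "$1 is set to $val"
  else
    echo "$1 is NOT set" >&2
    return 1
  fi
}

# After sourcing envsetup.sh you would expect this to succeed:
# check_var QAIRT_SDK_ROOT
```

Because envsetup.sh only affects the current session, running a check like this is a quick way to confirm you are in the right terminal before later steps.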
      

Part 2: Install SNPE SDK dependencies

  1. Check whether you have Python 3.10 installed by running:

    compgen -c python | grep -E '^python[0-9\.]*$' | sort -V | uniq
    
    • You should see python3.10 if you have it installed.

    • If you do not have Python 3.10, you can install it by running:

      sudo apt-get update && sudo apt-get install python3.10 python3-distutils libpython3.10
      
    • Verify the installation worked by running:

      python3 --version
      
      • You should see a Python 3.10.x version reported.

  2. Run the following to check if you have python3.10-venv installed:

    dpkg -l | grep python3.10-venv
    
  3. If you do not have python3.10-venv installed, you can install it by running:

    sudo apt install python3.10-venv
    
  4. Create and activate a new virtual environment:

    cd ${QAIRT_SDK_ROOT}/bin
    python3 -m venv venv --without-pip
    source venv/bin/activate
    

    Warning

    If your environment is in WSL, your Python virtual environment root must be under $HOME.

    Warning

    We have to use the flag --without-pip on Debian/Ubuntu systems to avoid a crash since venv will call ensurepip at the system level (which is disabled on these distros). Once we activate our venv, we can safely run ensurepip to create a local pip for installing packages.

  5. Install pip3 in your virtual environment (see warning above for why):

    python3 -m ensurepip --upgrade
    
  6. Verify pip3 is installed by running:

    which pip3
    
    1. You should see a path inside your virtual environment (e.g., ${QAIRT_SDK_ROOT}/bin/venv/bin/pip3).

  7. Update all dependencies by running:

    python "${QAIRT_SDK_ROOT}/bin/check-python-dependency"
    

    Warning

    If you run into an error with a specific version of a package, run pip install PACKAGE_NAME to get a more up-to-date version of that package, then re-run the script above.
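
Steps 4 through 6 above can be condensed into one sequence. This sketch creates the venv in a temporary directory purely for illustration; the guide itself puts it under ${QAIRT_SDK_ROOT}/bin:

```shell
# Sketch of steps 4-6: create a venv without pip, activate it, then
# bootstrap pip inside it (see the --without-pip warning above).
VENV_DIR="$(mktemp -d)/venv"    # illustration only; the guide uses
                                # ${QAIRT_SDK_ROOT}/bin/venv
python3 -m venv "$VENV_DIR" --without-pip
. "$VENV_DIR/bin/activate"
python3 -m ensurepip --upgrade  # installs pip local to the venv
command -v pip3                 # should resolve inside $VENV_DIR
```

The key ordering point is that ensurepip runs only after activation, so the pip it creates belongs to the venv rather than the system interpreter.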

Part 3: Install Model Frameworks

SNPE supports the following model frameworks.

  1. Install the packages from the below table that are relevant for the AI model files you want to use.

    Note

    You can install a package by running pip3 install package==version. e.g., pip3 install torch==2.4.0

    Warning

    You do not need to install all packages here.

  • PyTorch (package: torch, version: 2.4.0). PyTorch is used for building and training deep learning models with a focus on flexibility and speed. Used with .pt files. If pip install does not work, install it by downloading the proper binary from PyTorch previous versions. Requires ONNX 1.17.0.

  • TensorFlow (package: tensorflow, version: 2.10.1). TensorFlow is used for building and training machine learning models, particularly deep learning models. Used with .pb files. ⚠️ The envcheck script may incorrectly report that this package is not installed on Ubuntu. If you would like to use tensorflow 1.15.1, you must install Python 3.6.8 by re-running the Python installation steps above with that version.

  • TFLite (package: tflite, version: 2.18.0). TFLite is used for running TensorFlow models on mobile and edge devices with optimized performance. Used with .tflite files.

  • ONNX (package: onnx, version: 1.17.0). ONNX stands for Open Neural Network Exchange. It is used for defining and exchanging deep learning models between different frameworks. Used with .onnx files.

  • ONNX Runtime (package: onnxruntime, version: 1.22.0 on Ubuntu 22.04, 1.19.2 on Ubuntu 20.04). ONNX Runtime is used for running ONNX models with high performance across various hardware platforms. Used with .onnx files.

Note

You can verify your installation by running python -c "import <model>" (replacing <model> with the import you downloaded above). If the script executes without any errors, the package is installed correctly.
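
The verification note above can be wrapped into a small helper so you can check every framework you installed in one pass (check_import is a hypothetical name, not an SDK tool):

```shell
# Hypothetical helper: report whether a Python module can be imported
# in the current (virtual) environment.
check_import() {
  if python3 -c "import $1" 2>/dev/null; then
    echo "$1: OK"
  else
    echo "$1: missing"
  fi
}

# Example usage for the frameworks you chose to install:
# check_import torch
# check_import onnx
```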

Part 4: Install Target Device OS-Specific Toolchain Code

Depending on the target device’s operating system, there may be additional installation requirements. These dependencies are mostly used for cross-compilation.

You only need to read the section that corresponds to your target device OS. SNPE supports versions of Android, Linux, OE Linux, and Windows target devices.

Note

For Windows target devices, no additional installation is needed; you can skip to Part 5.

Working with an Android Target Device

For working with Android devices, you will need to install a corresponding Android NDK (Native Development Kit). You can install the recommended version (Android NDK r26c) by following these steps:

  1. Download the Android NDK: Android NDK r26c

  2. Unzip the file.

    Warning

    If your environment is in WSL, the Android NDK must be unzipped under $HOME with the WSL unzip command.

  3. Copy the path to your unzipped Android NDK root (the folder should be named android-ndk-r26c).

  4. Replace <path-to-your-android-ndk-root-folder> with the path to the unzipped android-ndk-r26c folder, then run:

    echo 'export ANDROID_NDK_ROOT="<path-to-your-android-ndk-root-folder>"' >> ~/.bashrc
    
  5. Add the location of the unzipped file to your PATH by running:

    echo 'export PATH="${ANDROID_NDK_ROOT}:${PATH}"' >> ~/.bashrc
    source ~/.bashrc
    
  6. Verify your environment variables by running:

    ${QAIRT_SDK_ROOT}/bin/envcheck -n
    
  7. Install Java 8 by running:

    sudo apt-get install openjdk-8-jdk
    

    Note

    Java 8 has been verified with SNPE, other versions may not work as expected. Java is essential if you want to use SNPE with an Android application.
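
Before writing ANDROID_NDK_ROOT into ~/.bashrc in steps 4 and 5, it can help to sanity-check the path first. This sketch assumes the standard NDK layout, where the ndk-build wrapper script sits at the root of the unzipped folder; ndk_root_ok is a hypothetical name:

```shell
# Hypothetical check: an unzipped NDK root should be a directory that
# contains the ndk-build wrapper script.
ndk_root_ok() {
  [ -d "$1" ] && [ -f "$1/ndk-build" ]
}

# Example (adjust the path to wherever you unzipped android-ndk-r26c):
# ndk_root_ok "$HOME/android-ndk-r26c" && echo "NDK path looks valid"
```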

Working with a Linux Target Device

  1. Verify if you have clang++ by running:

    ${QAIRT_SDK_ROOT}/bin/envcheck -c
    

    You should see something like:

    Checking Clang Environment
    --------------------------------------------------------------
    [INFO] Found clang++ at /usr/bin/clang++
    --------------------------------------------------------------
    
  2. If you don’t, you will likely need to use clang++14 to build models for Linux target devices using SNPE (later versions may work but have not been verified).

    Note

    For more information, see C++ Support in Clang.

    1. To install clang++14, run:

      sudo apt update
      sudo apt install -y wget gpg lsb-release
      wget https://apt.llvm.org/llvm.sh
      chmod +x llvm.sh
      sudo ./llvm.sh 14
      
    2. To verify if you have clang++14, run:

      /usr/bin/clang++ --version
      

      You should see something like:

      Ubuntu clang version 14.0.6
      Target: x86_64-pc-linux-gnu
      Thread model: posix
      InstalledDir: /usr/bin
      
  3. If make is not already installed on your system, install it by running:

    sudo apt-get install make
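
To confirm which clang++ major version a given binary provides (the guide recommends version 14), a small version-parsing sketch can help; clang_major is a hypothetical name:

```shell
# Hypothetical helper: extract the major version from a clang-style
# "--version" banner (e.g. "Ubuntu clang version 14.0.6" -> 14).
clang_major() {
  "$1" --version 2>/dev/null \
    | sed -n 's/.*clang version \([0-9][0-9]*\)\..*/\1/p' \
    | head -n 1
}

# Example:
# [ "$(clang_major /usr/bin/clang++)" = "14" ] && echo "clang++ 14 found"
```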
    

OE Linux

Some Linux target devices, such as those based on Yocto Kirkstone, require cross-compilation using a GCC-based toolchain. The Qualcomm Neural Processing SDK has been verified to work with GCC 11.2 for these systems.

If you do not already have the required toolchain installed or available in your system PATH, follow these steps to download and install the cross-compiler using Qualcomm’s eSDK:

  1. Download the SDK package:

    wget https://artifacts.codelinaro.org/artifactory/qli-ci/flashable-binaries/qimpsdk/qcm6490/x86/qcom-6.6.28-QLI.1.1-Ver.1.1_qim-product-sdk-1.1.3.zip
    
  2. Unzip the downloaded archive:

    unzip qcom-6.6.28-QLI.1.1-Ver.1.1_qim-product-sdk-1.1.3.zip
    
  3. Update permissions:

    umask a+rx
    
  4. Navigate into the installation script folder:

    cd ./target/qcm6490/sdk
    
  5. Run the toolchain installer script:

    sh qcom-wayland-x86_64-qcom-multimedia-image-armv8-2a-qcm6490-toolchain-ext-1.0.sh
    

    You can install the packages it asks you to install with a command like this:

    sudo apt update
    sudo apt install <your packages separated by spaces>
    

    Note

    When it asks you where to install the toolchain, press Enter twice to accept the default paths.

  6. When the installation completes successfully, you should see the following text:

    done
    SDK has been successfully set up and is ready to be used.
    Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
     $ . /home/usr/qcom-wayland_sdk/environment-setup-armv8-2a-qcom-linux
    
  7. Copy the path to qcom-wayland_sdk from the installation output.

    1. Ex. /home/usr/qcom-wayland_sdk/

    2. Do NOT copy the last piece of the path environment-setup-armv8-2a-qcom-linux.

  8. Set ESDK_ROOT to the path at the bottom of your installation output (like above) by running:

    export ESDK_ROOT="<path-to-qcom-wayland_sdk>"
    
  9. Set your eSDK root and source the toolchain environment:

    cd $ESDK_ROOT
    source environment-setup-armv8-2a-qcom-linux
    

    Warning

    If you see an error like ...environment-setup-armv8-2a-qcom-linux: Not a directory, go back to Step 7 and reset your ESDK_ROOT to just the path to qcom-wayland_sdk NOT the path to the environment-setup... script.

This sets up the correct cross-compilation environment to build and deploy models for Yocto-based Linux targets.
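
The warning in step 9 can be turned into an explicit check before sourcing. Here esdk_root_ok is a hypothetical name, and the script filename comes from the installation output shown above:

```shell
# Hypothetical guard: ESDK_ROOT must be the qcom-wayland_sdk directory
# itself, which contains the environment-setup script as a file.
esdk_root_ok() {
  [ -d "$1" ] && [ -f "$1/environment-setup-armv8-2a-qcom-linux" ]
}

# Example:
# esdk_root_ok "$ESDK_ROOT" || echo "ESDK_ROOT should point at qcom-wayland_sdk"
```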

Part 5: Install Dependencies for Target Hardware Processors

Some of the target processors (CPU, GPU, or DSP) require additional dependencies to build models.

CPU (Central Processing Unit)

The x86_64 targets are built using clang-14. If you’re using this kind of target, please install clang++14.

  1. To install clang++14, follow the “Working with a Linux Target Device” steps above.

  2. ARM CPU targets are built using the Android NDK (see the Working with an Android Target Device section above).

GPU (Graphics Processing Unit)

The GPU backend kernels are written in OpenCL. GPU operations are implemented against the OpenCL headers and require OpenCL version 1.2 or later.

DSP

Compiling for DSP hardware requires the use of the Qualcomm Hexagon SDK and Hexagon SDK Tools which you can install by following these steps:

  1. If you do not already have Qualcomm Package Manager 3 (QPM3) installed, install it by following the steps below:

    1. Create an account or sign in with your Qualcomm ID

    2. Go to the Qualcomm Package Manager 3 download page. You must be signed in.

    Warning

    You may have to click the link again after logging in to have it load properly. If you just created an account, this may take up to 10 minutes to fully register your new account.

    3. Download the Linux version of Qualcomm Package Manager 3 (QPM3).

      1. The downloaded file will have a name similar to: QualcommPackageManager3.3.0.121.7.Linux-x86.deb.

    4. Install the .deb (replacing the path with the path to the downloaded QPM3 file):

      sudo dpkg -i ./<path-to-your-downloaded-.deb>
      
  2. Start the Qualcomm Package Manager 3 App (from the activities section).

  3. Log in with your Qualcomm ID within the QPM 3 app when prompted.

  4. Once logged in to QPM3, click on “Tools.”

  5. Look up which “Hexagon Architecture” your chip supports in the supported Snapdragon devices table (it should look like “V00”).

  6. Use the Hexagon Architecture you just looked up to determine which version of the “Hexagon SDK” you need from this table:

    • V75: Hexagon SDK 5.4.0

    • V73: Hexagon SDK 5.4.0

    • V69: Hexagon SDK 4.3.0

    • V68: Hexagon SDK 4.2.0

    • V66: Hexagon SDK 4.1.0

  7. Search for “Hexagon SDK” in QPM3.

  8. Download the proper version for your target device based on the table above.

    1. You will have to accept terms and conditions to use the downloaded files.

    2. You will likely want to leave the files in the default location.

    3. The download should pre-select which add-ons are relevant. You may download the “Full NDK” as an alternative to the “Minimal NDK” depending on your use case.

  9. Write down the path to the Hexagon SDK, you will need it later.

  10. In QPM3, go back to the search bar and search for “Hexagon Toolchain”.

  11. Based on your Hexagon Architecture, choose the corresponding version of the Hexagon Toolchain to click into:

Note

Hexagon SDK tools versions 8.7.03/8.6.02/8.5.03/8.4.09/8.4.06 are not currently pre-packaged into Hexagon SDK versions 5.4.0/4.3.0/4.2.0/4.1.0, respectively. They need to be downloaded separately and placed at $HEXAGON_SDK_ROOT/tools/HEXAGON_Tools/.

  12. Download the corresponding version of the “Hexagon Toolchain”.

  13. Ensure you have installed clang++ so you can compile code for DSP hardware (see the “Working with a Linux Target Device” section above).

  14. Follow the rest of the instructions at $HEXAGON_SDK_ROOT/docs/readme.html (where $HEXAGON_SDK_ROOT is the location of the Hexagon SDK installation) to complete the installation.
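
The architecture-to-version table in step 6 can be expressed as a small lookup helper (hexagon_sdk_version is a hypothetical name; the versions are taken from the table above):

```shell
# Hypothetical lookup mirroring the architecture/SDK-version table in step 6.
hexagon_sdk_version() {
  case "$1" in
    V75|V73) echo "5.4.0" ;;
    V69)     echo "4.3.0" ;;
    V68)     echo "4.2.0" ;;
    V66)     echo "4.1.0" ;;
    *)       echo "unknown architecture: $1" >&2; return 1 ;;
  esac
}

# Example:
# hexagon_sdk_version V73   # prints 5.4.0
```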

Next Steps

Now that you’ve finished setting up SNPE and its dependencies, you can start using SNPE. Follow the “Building and Executing a Model” tutorial to learn how to convert your AI models, build them for your target device, and generate inferences on the processing cores (CPU, GPU, or DSP) of your choice.