Linux Setup¶
Instructions for Linux Host (Ubuntu, WSL, and other distros)¶
Follow these instructions to install the Qualcomm Neural Processing SDK (commonly referred to as SNPE) and all necessary dependencies.
SNPE allows you to convert an AI model (e.g., a .pt model from PyTorch) into instructions that can be run on a target device’s various processing units (CPU, GPU, DSP).
Note
This guide is for a host machine running Linux. If you’re using a Windows machine, please follow the instructions for the Windows setup.
Note
SNPE is verified with the Ubuntu 22.04 LTS (Jammy) Linux host operating system and the Windows Subsystem for Linux (WSL2) environment, version 1.1.3.0 (currently limited to Linux host runnable artifacts such as converters, model generation and run tools).
There are several parts to this guide to install SNPE and all its corresponding dependencies:
Part 1: Install the SNPE SDK.
Part 2: Install SNPE dependencies.
Part 3: Install model frameworks to interpret your AI model files (e.g., PyTorch).
Part 4: Install Target Device OS-Specific Toolchain Code.
Part 5: Install Dependencies for Target Hardware Processors.
These steps will require admin access in order to run some of the installation commands.
Note
To learn about acronyms which are new to you, see the glossary.
Note
This guide contains many recommendations about specific versions of code to use that have been verified to work. Other versions of those dependencies may or may not work, so use them at your own risk.
Part 1: Install the Qualcomm Neural Processing Engine (aka “SNPE”)¶
Go to the SNPE product page.
Click “Get Software” to download the SNPE SDK (this will install the QAIRT SDK which contains SNPE inside).
The zipped file should have a name like v2.32.0.250228.
Note: the SNPE SDK is ~2.5 GB when unzipped.
Unzip the downloaded SDK into the folder where you want the SDK to live.
Set up your environment¶
Open a terminal.
Note
Please keep this terminal open, as it will contain environment variables you’ll need in later steps and tutorials.
Navigate inside the unzipped SNPE SDK to /qairt/<SNPE_VERSION>/bin.
Replace <SNPE_VERSION> with the name of the folder directly under qairt. It should look almost identical to the zipped folder name (e.g., 2.32.0.250228).
Example:
cd /path/to/unzipped/sdk/qairt/2.32.0.250228/bin
To set important environment variables like QAIRT_SDK_ROOT, run:
source ./envsetup.sh
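If you would rather not keep a single terminal open for the later steps, one option (a sketch, assuming the example SDK path from the previous step) is to source envsetup.sh from your shell profile so every new shell picks up the variables:

```shell
# Sketch: source envsetup.sh automatically in every new shell.
# The SDK path below is an example; substitute your own unzip location.
echo 'source /path/to/unzipped/sdk/qairt/2.32.0.250228/bin/envsetup.sh' >> ~/.bashrc
```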
Warning
The envsetup.sh script only updates environment variables for the current terminal session.
To install the core Linux build tools, run:
chmod +x "${QAIRT_SDK_ROOT}/bin/check-linux-dependency.sh"
sudo "${QAIRT_SDK_ROOT}/bin/check-linux-dependency.sh"
If you see “Done!!”, then the installation has completed successfully.
If you do not see that message, you may need to re-run the command after it installs key packages (like build-essential, cmake, etc.).
Note
As the script is running, you may have to confirm additional downloads by pressing “Enter”. The script may take several minutes to complete.
Run the following command to verify that you have installed the required toolchain successfully:
"${QAIRT_SDK_ROOT}/bin/envcheck" -c
You should see something like this:
Checking Clang Environment
--------------------------------------------------------------
[INFO] Found clang++ at /usr/bin/clang++
--------------------------------------------------------------
Part 2: Install SNPE SDK dependencies¶
Check whether you have Python 3.10 installed by running:
compgen -c python | grep -E '^python[0-9\.]*$' | sort -V | uniq
You should see python3.10 if you have it installed.
If you do not have Python 3.10, you can install it by running:
sudo apt-get update && sudo apt-get install python3.10 python3-distutils libpython3.10
Verify the installation worked by running:
python3 --version
You should see Python 3.10 in the output.
Run the following to check if you have python3.10-venv installed:
dpkg -l | grep python3.10-venv
If you do not have python3.10-venv installed, you can install it by running:
sudo apt install python3.10-venv
Create and activate a new virtual environment:
cd ${QAIRT_SDK_ROOT}/bin
python3 -m venv venv --without-pip
source venv/bin/activate
Warning
If your environment is in WSL, <PYTHON3.10_VENV_ROOT> must be under $HOME.
Warning
We have to use the --without-pip flag on Debian/Ubuntu systems to avoid a crash, since venv would call ensurepip at the system level (which is disabled on these distros). Once we activate our venv, we can safely run ensurepip to create a local pip for installing packages.
Install pip3 in your virtual environment (see warning above for why):
python3 -m ensurepip --upgrade
Verify pip3 is installed by running:
which pip3
You should see a path inside your virtual environment (e.g., one ending in /venv/bin/pip3).
Update all dependencies by running:
python "${QAIRT_SDK_ROOT}/bin/check-python-dependency"
Warning
If you run into an error with a specific version of a package, run pip install PACKAGE_NAME to get a more up-to-date version of the package, then re-run the script above.
Part 3: Install Model Frameworks¶
SNPE supports the following model frameworks.
Install the packages from the below table that are relevant for the AI model files you want to use.
Note
You can install a package by running pip3 install package==version, e.g., pip3 install torch==2.4.0.
Warning
You do not need to install all packages here.
| Framework | Package | Version | Description |
|---|---|---|---|
| PyTorch | torch | 2.4.0 | PyTorch is used for building and training deep learning models with a focus on flexibility and speed. Used with the snpe-pytorch-to-dlc converter. |
| TensorFlow | tensorflow | 2.10.1 | TensorFlow is used for building and training machine learning models, particularly deep learning models. Used with the snpe-tensorflow-to-dlc converter. |
| TFLite | tflite | 2.18.0 | TFLite is used for running TensorFlow models on mobile and edge devices with optimized performance. Used with the snpe-tflite-to-dlc converter. |
| ONNX | onnx | 1.17.0 | ONNX stands for Open Neural Network Exchange. It is used for defining and exchanging deep learning models between different frameworks. Used with the snpe-onnx-to-dlc converter. |
| ONNX Runtime | onnxruntime | 1.22.0 (Ubuntu 22.04), 1.19.2 (Ubuntu 20.04) | ONNX Runtime is used for running ONNX models with high performance across various hardware platforms. |
Note
You can verify your installation by running python -c "import <model>" (replacing <model> with the import you downloaded above).
If the script executes without any errors, the package is installed correctly.
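The per-framework check above can be wrapped in a small helper (a sketch; the framework list is an example, so trim it to the packages you actually installed):

```shell
# Sketch: report which model frameworks import cleanly in this venv.
check_import() {
  python3 -c "import $1" 2>/dev/null && echo "$1: OK" || echo "$1: MISSING"
}

# Example list; keep only the frameworks you installed.
for mod in torch tensorflow onnx onnxruntime; do
  check_import "$mod"
done
```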
Part 4: Install Target Device OS-Specific Toolchain Code¶
Depending on the target device’s operating system there may be additional installation requirements. These dependencies are mostly used for cross-compilation.
You only need to read the section that corresponds to your target device OS. SNPE supports versions of Android, Linux, OE Linux, and Windows target devices.
Note
For Windows target devices, you do not need to do any additional installations; you can skip to Part 5.
Working with an Android Target Device¶
For working with Android devices, you will need to install a corresponding Android NDK (Native Developer Kit). You can install the recommended version (Android NDK version r26c) by following these steps:
Download the Android NDK: Android NDK r26c
Unzip the file.
Warning
If your environment is in WSL, the Android NDK must be unzipped under $HOME with the WSL unzip command.
Copy the path to your unzipped Android NDK root (the folder should be named android-ndk-r26c).
Replace <path-to-your-android-ndk-root-folder> with the path to the unzipped android-ndk-r26c folder, then run:
echo 'export ANDROID_NDK_ROOT="<path-to-your-android-ndk-root-folder>"' >> ~/.bashrc
Add the location of the unzipped file to your PATH by running:
echo 'export PATH="${ANDROID_NDK_ROOT}:${PATH}"' >> ~/.bashrc
source ~/.bashrc
Verify your environment variables by running:
${QAIRT_SDK_ROOT}/bin/envcheck -n
Install Java 8 by running:
sudo apt-get install openjdk-8-jdk
Note
Java 8 has been verified with SNPE, other versions may not work as expected. Java is essential if you want to use SNPE with an Android application.
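To confirm that Java 8 is the active runtime, a quick check is the following (the update-alternatives command is standard Debian/Ubuntu tooling for switching between installed Java versions):

```shell
# Sketch: show the active Java version; expect a "1.8.0_..." line for Java 8.
java -version 2>&1 | head -n 1

# If a different Java is active, select Java 8 interactively:
# sudo update-alternatives --config java
```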
Working with a Linux Target Device¶
Verify that you have clang++ by running:
${QAIRT_SDK_ROOT}/bin/envcheck -c
You should see something like:
Checking Clang Environment
--------------------------------------------------------------
[INFO] Found clang++ at /usr/bin/clang++
--------------------------------------------------------------
If you don’t, you will likely need to use clang++ 14 to build models for Linux target devices using SNPE (later versions may work but have not been verified).
Note
For more information, see C++ Support in Clang.
To install clang++ 14, run:
sudo apt update
sudo apt install -y wget gpg lsb-release
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 14
To verify that you have clang++ 14, run:
/usr/bin/clang++ --version
You should see something like:
Ubuntu clang version 14.0.6
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
If make is not already installed on your system, install it by running:
sudo apt-get install make
OE Linux¶
Some Linux target devices, such as those based on Yocto Kirkstone, require cross-compilation using a GCC-based toolchain. The Qualcomm Neural Processing SDK has been verified to work with GCC 11.2 for these systems.
If you do not already have the required toolchain installed or available in your system PATH, follow these steps to download and install the cross-compiler using Qualcomm’s eSDK:
Download the SDK package:
wget https://artifacts.codelinaro.org/artifactory/qli-ci/flashable-binaries/qimpsdk/qcm6490/x86/qcom-6.6.28-QLI.1.1-Ver.1.1_qim-product-sdk-1.1.3.zip
Unzip the downloaded archive:
unzip qcom-6.6.28-QLI.1.1-Ver.1.1_qim-product-sdk-1.1.3.zip
Update permissions:
umask a+rx
Navigate into the installation script folder:
cd ./target/qcm6490/sdk
Run the toolchain installer script:
sh qcom-wayland-x86_64-qcom-multimedia-image-armv8-2a-qcm6490-toolchain-ext-1.0.sh
If the script asks you to install missing packages, you can do so with:
sudo apt update
sudo apt install <your packages separated by spaces>
Note
When it asks you where to install the toolchain, press Enter twice to accept the default paths.
When the installation completes successfully, you should see the following text:
done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script
e.g. $ . /home/usr/qcom-wayland_sdk/environment-setup-armv8-2a-qcom-linux
Copy the path to qcom-wayland_sdk from the installation output (e.g., /home/usr/qcom-wayland_sdk/).
Do NOT copy the last piece of the path (environment-setup-armv8-2a-qcom-linux).
Set ESDK_ROOT to the path at the bottom of your installation output (like above) by running:
export ESDK_ROOT="<path-to-qcom-wayland_sdk>"
Change into your eSDK root and source the toolchain environment:
cd $ESDK_ROOT
source environment-setup-armv8-2a-qcom-linux
Warning
If you see an error like ...environment-setup-armv8-2a-qcom-linux: Not a directory, go back to Step 7 and reset your ESDK_ROOT to just the path to qcom-wayland_sdk, NOT the path to the environment-setup... script.
This sets up the correct cross-compilation environment to build and deploy models for Yocto-based Linux targets.
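Since the toolchain environment must be re-sourced in every new shell, you may find a small helper convenient (a sketch; the ESDK_ROOT path is an example placeholder, so substitute your own installation path):

```shell
# Sketch: persist ESDK_ROOT and define a helper to re-enter the eSDK env.
export ESDK_ROOT="$HOME/qcom-wayland_sdk"   # example path; use your own

esdk_env() {
  # Sources the Yocto cross-compilation environment into the current shell.
  . "${ESDK_ROOT}/environment-setup-armv8-2a-qcom-linux"
}

# Usage in a new shell session: esdk_env
```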
Part 5: Install Dependencies for Target Hardware Processors¶
Some of the target processors (CPU, GPU, or DSP) require additional dependencies to build models.
CPU (Central Processing Unit)¶
The x86_64 targets are built using clang-14. If you’re using this kind of target, please install clang++ 14.
To install clang++ 14, follow the “Working with a Linux Target Device” steps above.
ARM CPU targets are built using the Android NDK (see the “Working with an Android Target Device” section above).
GPU (Graphics Processing Unit)¶
The GPU backend kernels are written in OpenCL. GPU operations must be implemented against OpenCL headers, with a minimum version of OpenCL 1.2.
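If you want to confirm which OpenCL version your device exposes, one quick check (a sketch, assuming the widely available clinfo utility, which may need to be installed first) is:

```shell
# Sketch: report OpenCL platform and device versions.
# Install the utility first if needed: sudo apt install clinfo
clinfo | grep -i "version"
```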
DSP¶
Compiling for DSP hardware requires the use of the Qualcomm Hexagon SDK and Hexagon SDK Tools which you can install by following these steps:
If you do not already have Qualcomm Package Manager 3 (QPM3) installed, install it by following the steps below:
Go to the Qualcomm Package Manager 3 download page. You must be signed in.
Warning
You may have to click the link again after logging in to have it load properly. If you just created an account, this may take up to 10 minutes to fully register your new account.
Download the Linux version of Qualcomm Package Manager 3 (QPM3).
The downloaded file will have a name similar to:
QualcommPackageManager3.3.0.121.7.Linux-x86.deb.
Install the .deb (replacing the path with the path to the downloaded QPM3 file):
sudo dpkg -i ./<path-to-your-downloaded-.deb>
Start the Qualcomm Package Manager 3 App (from the activities section).
Log in with your Qualcomm ID within the QPM 3 app when prompted.
Once logged in to QPM3, click on “Tools.”
Look up which “Hexagon Architecture” your chip supports in the supported Snapdragon devices table (it should look like “V00”).
Use the Hexagon Architecture you just looked up to determine which version of the “Hexagon SDK” you need from this table:
| Hexagon Architecture | Hexagon SDK Version |
|---|---|
| V75 | 5.4.0 |
| V73 | 5.4.0 |
| V69 | 4.3.0 |
| V68 | 4.2.0 |
| V66 | 4.1.0 |
Search for “Hexagon SDK” in QPM3.
Download the proper version for your target device based on the table above.
You will have to accept terms and conditions to use the downloaded files.
You will likely want to leave the files in the default location.
The download should pre-select which add-ons are relevant. You may download the “Full NDK” as an alternative to the “Minimal NDK” depending on your use case.
Write down the path to the Hexagon SDK, you will need it later.
In QPM3, go back to the search bar and search for “Hexagon Toolchain”.
Based on your Hexagon Architecture, choose the corresponding version of the Hexagon Toolchain to click into:
Note
Hexagon SDK tools versions 8.7.03/8.6.02/8.5.03/8.4.09/8.4.06 are not currently pre-packaged into Hexagon SDK versions 5.4.0/4.3.0/4.2.0/4.1.0 respectively. The tools need to be downloaded separately and placed at $HEXAGON_SDK_ROOT/tools/HEXAGON_Tools/.
Download the corresponding version of the “Hexagon Toolchain”.
Ensure you have installed clang++ so you can compile code for DSP hardware (see the “Working with a Linux Target Device” section above).
Follow the rest of the instructions at $HEXAGON_SDK_ROOT/docs/readme.html (where $HEXAGON_SDK_ROOT is the location of the Hexagon SDK installation) to complete the installation.
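The separate-download step in the note above can be sketched as follows (the SDK path and tools version are examples only; use the locations from your own QPM3 installation):

```shell
# Sketch: place a separately downloaded Hexagon Tools release under the SDK.
export HEXAGON_SDK_ROOT="$HOME/Qualcomm/Hexagon_SDK/5.4.0"   # example path

# Create the expected tools directory if it does not exist yet.
mkdir -p "${HEXAGON_SDK_ROOT}/tools/HEXAGON_Tools"

# Move the unpacked toolchain (example version) into place:
# mv ~/Downloads/8.7.03 "${HEXAGON_SDK_ROOT}/tools/HEXAGON_Tools/"
```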
Next Steps¶
Now that you’ve finished setting up SNPE and its dependencies, you can start using it. Follow the “Building and Executing a Model” tutorial to learn how to convert your AI models, build them for your target device, and run inference on the processing cores you choose.