Introduction¶
Welcome to the Qualcomm® AI Engine Direct software development kit (also known as the QNN SDK) documentation!
The QNN SDK helps you convert models (e.g., from PyTorch) into files that run on various device processors
(CPU, GPU, DSP, HTP, or LPAI) across multiple operating systems.
Where to go¶
Note
As you read through these pages, if you encounter any unfamiliar acronyms, please check the glossary.
To get started using the QNN SDK, follow the comprehensive Tutorial: Converting and executing a CNN model with QNN, which walks through the entire process, from installing the SDK to executing inferences on the specific hardware you are interested in.
To install the QNN SDK, follow the Setup guide, which helps you download the SDK and all necessary dependencies (this is also covered in the tutorial above).
To understand the many features of the QNN SDK, read through the Overview page, which explains where the QNN SDK fits into the AI development stack.
For more advanced tutorials, see the Tutorials page.
To benchmark your model’s performance on target hardware, see the Benchmarking page.
Reference Material¶
Compiling from CNN to QNN via Converters - Depending on the framework your AI model comes from, these tools convert it into the QNN SDK format, which can then run across multiple processor types.
Tools - This page lists the various scripts that help convert AI models, and the architectures each one supports.
Quantization - This explains how quantization works with the QNN SDK.
Reference docs for processor “backend” code - These files are used when executing models on specific architectures.
Op packages guides and reference material - Writing your own op packages allows you to customize how built models are interpreted during inference.
The existing op packages are documented in the Operations page.
Legal notice - This outlines the legal guidelines and information related to the QNN SDK.
