Tutorials Setup

Tutorial Resources

The tutorials require additional resources which are not included in the default Qualcomm® Neural Processing SDK package. These assets need to be downloaded before running the tutorials.

Getting Inception v3

In this tutorial, the Inception v3 TensorFlow model file and sample image files are prepared for the TensorFlow classification tutorial. The script requires a directory path to the Inception v3 assets (zip file). The script can also optionally download the Inception v3 archive.

The Inception v3 assets are listed below:

inception_v3_2016_08_28_frozen.pb.tar.gz - https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz

Note that the assets are large and can take some time to download. Running "python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -h" will show the usage description.

usage: $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py [-h] -a ASSETS_DIR [-d] [-r RUNTIME] [-u] [-l [HTP_SOC]]

Prepares the InceptionV3 assets for tutorial examples.

required arguments:
  -a ASSETS_DIR, --assets_dir ASSETS_DIR
                        directory containing the InceptionV3 assets

optional arguments:
  -d, --download        Download InceptionV3 assets to InceptionV3 example
                        directory
  -r RUNTIME, --runtime RUNTIME
                        Choose a runtime to set up tutorial for. Choices: cpu,
                        gpu, dsp, aip, all. 'all' option is only supported
                        with --udo flag
  -u, --udo             Generate and compile a user-defined operation package
                        to be used with InceptionV3. Softmax is simulated as
                        a UDO for this script.
  -l [HTP_SOC], --htp_soc [HTP_SOC]
                        Specify SOC target for generating HTP Offline Cache.
                        For example: "--htp_soc sm8450" for Snapdragon 8 Gen 1,
                        default value is sm8750.

Before using the script, please set the environment variable TENSORFLOW_HOME to point to the path where TensorFlow package is installed. The script uses TensorFlow utilities like optimize_for_inference.py which are present in the TensorFlow installation directory.

$ export TENSORFLOW_HOME=<Python-Libs-Installation-Directory>/python3/site-packages/tensorflow/core
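If you are unsure where TensorFlow is installed, a small Python sketch can locate the package and set TENSORFLOW_HOME to match the export above. This is an illustrative helper, not part of the SDK; the `core` subdirectory is taken from the export above, and the exact layout may vary between TensorFlow versions.

```python
import importlib.util
import os

# Locate the installed tensorflow package, if any (illustrative helper;
# path layout may differ between TensorFlow versions).
spec = importlib.util.find_spec("tensorflow")
if spec is not None and spec.origin:
    tf_dir = os.path.dirname(spec.origin)
    # Mirror the export above: TENSORFLOW_HOME points at .../tensorflow/core
    os.environ["TENSORFLOW_HOME"] = os.path.join(tf_dir, "core")
    print(os.environ["TENSORFLOW_HOME"])
else:
    print("TensorFlow is not installed in this Python environment")
```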

Download the model and prepare the assets

The assets directory is intended to contain the network model assets. If the assets have already been downloaded, set ASSETS_DIR to that directory; otherwise, choose a target directory in which to store the assets and add the --download option so the script downloads the model files.
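The decision above can be sketched in a few lines of Python. The tarball name is taken from the asset list earlier in this page; the helper itself is illustrative and not part of the SDK.

```python
from pathlib import Path

# Tarball name from the Inception v3 asset list above.
TARBALL = "inception_v3_2016_08_28_frozen.pb.tar.gz"

def setup_flags(assets_dir):
    """Build the setup script's flags: add -d only if the asset is missing."""
    flags = ["-a", str(assets_dir)]
    if not (Path(assets_dir) / TARBALL).exists():
        flags.append("-d")  # asset not present: ask the script to download it
    return flags

print(setup_flags(Path.home() / "tmpdir"))
```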

Choice of target runtime

Depending on the chosen runtime, the script may perform additional optimization steps specific to a hardware target. Users can choose to generate the final DLC to run on the CPU, GPU, DSP, or AIP (HTA) target at runtime. The 'runtime' argument is optional and defaults to 'cpu' when not explicitly specified. Here are some sample commands for different circumstances, using ~/tmpdir as the assets directory.

Run the script to download the model and set up to run on CPU:

python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -d

Run the script to download the model and set up to run on DSP:

python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -d -r dsp

Run the script on a model already downloaded to ~/tmpdir to set up to run on AIP (HTA):

python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -r aip

Choice of SoC target

Based on the SoC target, the script adds the enable_htp and htp_socs arguments to snpe-dlc-quantize when the target runtime is 'dsp' or 'all'. The 'htp_soc' argument is optional; if no value is given, 'sm8750' is used as the default. Here are some sample commands for running setup_inceptionv3_snpe.py:

Run the script to download the model and set up to run on DSP with

  1. SoC target as sm8750:

    python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -d -r dsp -l
    
  2. SoC target as sm8450:

    python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py -a ~/tmpdir -d -r dsp -l sm8450
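The SoC-dependent behavior described above can be sketched as follows. The enable_htp and htp_socs argument names and the sm8750 default come from the text; the surrounding snpe-dlc-quantize options are illustrative placeholders, not the script's actual command line.

```python
# Hedged sketch of how the setup script extends the snpe-dlc-quantize
# command line for HTP targets, per the description above.
def quantize_command(runtime, htp_soc=None):
    cmd = ["snpe-dlc-quantize", "--input_dlc", "inception_v3.dlc"]  # placeholder options
    if runtime in ("dsp", "all"):
        # HTP arguments are only added for the 'dsp' and 'all' runtimes
        cmd += ["--enable_htp", "--htp_socs", htp_soc or "sm8750"]  # sm8750 is the default SoC
    return cmd

print(quantize_command("dsp"))
print(quantize_command("dsp", "sm8450"))
print(quantize_command("cpu"))
```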
    

After the script completes, the prepared Inception v3 assets are copied to the $SNPE_ROOT/examples/Models/InceptionV3 directory, along with sample raw images and converted Qualcomm® Neural Processing SDK DLC files with additional optimizations as applicable.

Note: for information on running Inception v3 with UDO and the use of the --udo option, see the UDO Tutorial.

Getting VGG

In this tutorial, the VGG ONNX model file and sample image files are prepared for the ONNX classification tutorial. The script requires a directory path to the VGG assets. The script can also optionally download the VGG assets.

The VGG assets are listed below:

vgg16.onnx - https://s3.amazonaws.com/onnx-model-zoo/vgg/vgg16/vgg16.onnx
synset.txt - https://s3.amazonaws.com/onnx-model-zoo/synset.txt
kitten.jpg - https://s3.amazonaws.com/model-server/inputs/kitten.jpg

Note that the assets are large and can take some time to download. Running "python3 $SNPE_ROOT/examples/Models/VGG/scripts/setup_VGG.py -h" will show the usage description.

usage: $SNPE_ROOT/examples/Models/VGG/scripts/setup_VGG.py [-h] -a ASSETS_DIR [-d]

      Prepares the VGG assets for tutorial examples.

      required arguments:
        -a ASSETS_DIR, --assets_dir ASSETS_DIR
                              directory containing the VGG assets

      optional arguments:
        -d, --download        Download VGG assets to VGG example directory

Download the model and prepare the assets

The assets directory is intended to contain the network model assets. If the assets have already been downloaded, set ASSETS_DIR to that directory; otherwise, choose a target directory in which to store the assets and add the --download option so the script downloads the model files.
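A minimal sketch of that check, using the asset file names from the list above (the helper itself is hypothetical, and the on-disk model filename is assumed to match the URL's vgg16.onnx):

```python
from pathlib import Path

# Asset file names from the VGG asset list above.
VGG_ASSETS = ("vgg16.onnx", "synset.txt", "kitten.jpg")

def missing_vgg_assets(assets_dir):
    """Return the VGG asset files not yet present in assets_dir."""
    d = Path(assets_dir)
    return [name for name in VGG_ASSETS if not (d / name).exists()]

# If anything is missing, run setup_VGG.py with -d; otherwise -a alone suffices.
print(missing_vgg_assets(Path.home() / "tmpdir"))
```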

After the script completes, the prepared VGG assets are copied to the $SNPE_ROOT/examples/Models/VGG directory, along with sample raw images and converted Qualcomm® Neural Processing SDK DLC files with additional optimizations as applicable.

Note: for information on running VGG with UDO and the use of the --udo option, see the UDO Tutorial With Weights.