Running the Inception v3 Model
Overview
The example C++ application in this tutorial is called snpe-net-run. It is a command line executable that executes a neural network using Qualcomm® Neural Processing SDK APIs.
The required arguments to snpe-net-run are:
A neural network model in the DLC file format
An input list file with paths to the input data.
Optional arguments to snpe-net-run are:
Choice of GPU, DSP or AIP runtimes (default is CPU)
Output directory (default is ./output)
Show help description
snpe-net-run creates and populates an output directory with the results of executing the neural network on the input data.
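The input list named above is a plain text file with one input tensor path per line. A minimal sketch of generating one (the file names are placeholders that mirror the sample data used later in this tutorial; the .raw files here are empty stand-ins just to show the format):

```python
# Hypothetical sketch: build an input list file for snpe-net-run.
# Each line of the list is the path to one preprocessed .raw input tensor.
import os

os.makedirs("data/cropped", exist_ok=True)
for name in ("notice_sign.raw", "plastic_cup.raw"):
    # placeholder files; real ones come from the preprocessing scripts
    open(os.path.join("data/cropped", name), "wb").close()

with open("raw_list.txt", "w") as f:
    for fn in sorted(os.listdir("data/cropped")):
        if fn.endswith(".raw"):
            f.write(os.path.join("data/cropped", fn) + "\n")  # one path per line
```

The resulting raw_list.txt can then be passed to snpe-net-run via --input_list.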
The Qualcomm® Neural Processing SDK provides Linux and Android binaries of snpe-net-run under
$SNPE_ROOT/bin/x86_64-linux-clang
$SNPE_ROOT/bin/aarch64-android
$SNPE_ROOT/bin/aarch64-oe-linux-gcc8.2
$SNPE_ROOT/bin/aarch64-oe-linux-gcc9.3
$SNPE_ROOT/bin/aarch64-ubuntu-gcc9.4
$SNPE_ROOT/bin/aarch64-oe-linux-gcc11.2
Prerequisites
The Qualcomm® Neural Processing SDK has been set up following the Qualcomm® Neural Processing SDK Setup chapter.
The Tutorials Setup has been completed.
TensorFlow is installed (see TensorFlow Setup)
Introduction
The Inception v3 ImageNet classification model is trained to classify images with 1000 labels.
The examples below show the steps required to execute a pretrained, optimized, and optionally quantized Inception v3 model with snpe-net-run to classify a set of sample images. An optimized and quantized model is used in this example to showcase the DSP and AIP runtimes, which execute quantized 8-bit neural network models.
The DLC for the model used in this tutorial was generated and optimized using the TensorFlow optimizer tool, during the Getting Inception v3 portion of the Tutorials Setup, by the script $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py. Additionally, if a fixed-point runtime such as DSP or AIP was selected when running the setup script, the model was quantized by snpe-dlc-quantize.
Learn more about quantized models.
Run on Linux Host
Go to the base location for the model and run snpe-net-run
cd $SNPE_ROOT/examples/Models/InceptionV3
snpe-net-run --container dlc/inception_v3_quantized.dlc --input_list data/cropped/raw_list.txt
After snpe-net-run completes, the results are populated in the $SNPE_ROOT/examples/Models/InceptionV3/output directory. There should be one or more .log files and several Result_X directories, each containing an InceptionV3/Predictions/Reshape_1:0.raw file.
One of the inputs is data/cropped/notice_sign.raw and it was created from data/cropped/notice_sign.jpg which looks like the following:
With this input, snpe-net-run created the output file $SNPE_ROOT/examples/Models/InceptionV3/output/Result_0/InceptionV3/Predictions/Reshape_1:0.raw. It holds the output tensor data of 1000 probabilities for the 1000 categories. The element with the highest value represents the top classification. A python script to interpret the classification results is provided and can be used as follows:
python3 $SNPE_ROOT/examples/Models/InceptionV3/scripts/show_inceptionv3_classifications_snpe.py -i data/cropped/raw_list.txt \
-o output/ \
-l data/imagenet_slim_labels.txt
The output should look like the following, showing classification results for all the images.
Classification results
<input_files_dir>/notice_sign.raw 0.170401 459 brass
<input_files_dir>/plastic_cup.raw 0.977711 648 measuring cup
<input_files_dir>/chairs.raw 0.299139 832 studio couch
<input_files_dir>/trash_bin.raw 0.747274 413 ashcan
Note: The <input_files_dir> above maps to a path such as $SNPE_ROOT/examples/Models/InceptionV3/data/cropped/
The output shows the image was classified as “brass” (index 459 of the labels) with a probability of 0.170401. The rest of the output can be examined to see the model’s classification on other images.
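The script above essentially loads each Reshape_1:0.raw file and takes the argmax over its 1000 float values. A minimal stand-alone sketch of that step (the helper name is hypothetical, not part of the SDK):

```python
# Hypothetical helper: read a raw float32 output tensor produced by
# snpe-net-run and report the top classification.
import struct

def top_classification(raw_path, labels):
    """Return (index, probability, label) of the highest-scoring class."""
    with open(raw_path, "rb") as f:
        data = f.read()
    n = len(data) // 4
    # snpe-net-run writes the tensor as little-endian float32 values
    probs = struct.unpack("<%df" % n, data)
    idx = max(range(n), key=probs.__getitem__)
    return idx, probs[idx], labels[idx]
```

For the tutorial data, raw_path would be an output/Result_X/InceptionV3/Predictions/Reshape_1:0.raw file and labels would be the lines of data/imagenet_slim_labels.txt.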
Binary data input
Note that the Inception v3 image classification model does not accept jpg files as input. The model expects its input tensor to have dimensions 299x299x3, supplied as a float array. The scripts/setup_inceptionv3_snpe.py script performs the jpg-to-binary conversion by calling scripts/create_inceptionv3_raws.py. These scripts are an example of how jpg images can be preprocessed to generate input for the Inception v3 model.
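A minimal sketch of the normalization step of that kind of preprocessing, assuming the common Inception v3 convention of subtracting a mean of 128 and dividing by 128 (these values are assumptions; consult create_inceptionv3_raws.py for the exact ones it uses):

```python
# Sketch of the normalization performed when converting decoded jpg
# pixels to a .raw input tensor. mean/divisor values are assumptions.
import struct

def pixels_to_raw(pixels_u8, mean=128.0, divisor=128.0):
    """Normalize flat uint8 pixel values (299*299*3 of them, HWC order)
    to float32 in roughly [-1, 1] and pack them little-endian."""
    floats = [(p - mean) / divisor for p in pixels_u8]
    return struct.pack("<%df" % len(floats), *floats)
```

Decoding and resizing the jpg to 299x299 would be done first with an image library; the bytes returned here are what gets written to the .raw file.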
Run on Target Platform (Android/LE/Ubuntu)
Select target architecture
The Qualcomm® Neural Processing SDK provides binaries for different target platforms. Android binaries are compiled with clang using the libc++ STL implementation. Below are examples for the aarch64-android toolchain (Android platforms) and the aarch64-oe-linux-gcc11.2 toolchain (LE platforms). Toolchains for other platforms can be selected by setting SNPE_TARGET_ARCH in the same way.
# For Android targets: architecture: arm64-v8a - compiler: clang - STL: libc++
export SNPE_TARGET_ARCH=aarch64-android
# Example for LE targets
export SNPE_TARGET_ARCH=aarch64-oe-linux-gcc11.2
For simplicity, this tutorial sets the target architecture to aarch64-android. The commands that follow are split between those run on the host and those run on the target.
Push libraries and binaries to target
Push Qualcomm® Neural Processing SDK libraries and the prebuilt snpe-net-run executable to /data/local/tmp/snpeexample on the Android target. Set SNPE_TARGET_DSPARCH to the DSP architecture of the target Android device.
export SNPE_TARGET_ARCH=aarch64-android
export SNPE_TARGET_DSPARCH=hexagon-v73
adb shell "mkdir -p /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin"
adb shell "mkdir -p /data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib"
adb shell "mkdir -p /data/local/tmp/snpeexample/dsp/lib"
adb push $SNPE_ROOT/lib/$SNPE_TARGET_ARCH/*.so \
/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
adb push $SNPE_ROOT/lib/$SNPE_TARGET_DSPARCH/unsigned/*.so \
/data/local/tmp/snpeexample/dsp/lib
adb push $SNPE_ROOT/bin/$SNPE_TARGET_ARCH/snpe-net-run \
/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
Set up environment variables
In adb shell, set up the library path, the PATH variable, and the target architecture, then run the executable with the -h argument to see its usage description.
adb shell
export SNPE_TARGET_ARCH=aarch64-android
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
snpe-net-run -h
exit
Push model data to Android target
To execute the Inception v3 classification model on an Android target, follow these steps:
cd $SNPE_ROOT/examples/Models/InceptionV3
mkdir data/rawfiles && cp data/cropped/*.raw data/rawfiles/
adb shell "mkdir -p /data/local/tmp/inception_v3"
adb push data/rawfiles /data/local/tmp/inception_v3/cropped
adb push data/target_raw_list.txt /data/local/tmp/inception_v3
adb push dlc/inception_v3_quantized.dlc /data/local/tmp/inception_v3
rm -rf data/rawfiles
Note: It may take some time to push the Inception v3 DLC file to the target.
Running on Android using CPU Runtime
The Android C++ executable is run with the following commands:
adb shell
export SNPE_TARGET_ARCH=aarch64-android
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
cd /data/local/tmp/inception_v3
snpe-net-run --container inception_v3_quantized.dlc --input_list target_raw_list.txt
exit
The executable will create the results folder: /data/local/tmp/inception_v3/output. To pull the output:
adb pull /data/local/tmp/inception_v3/output output_android
Check the classification results by running the following python script:
python3 scripts/show_inceptionv3_classifications_snpe.py -i data/target_raw_list.txt \
-o output_android/ \
-l data/imagenet_slim_labels.txt
The output should look like the following, showing classification results for all the images.
Classification results
cropped/notice_sign.raw 0.170409 459 brass
cropped/plastic_cup.raw 0.977708 648 measuring cup
cropped/chairs.raw 0.299145 832 studio couch
cropped/trash_bin.raw 0.747256 413 ashcan
Running on Android using DSP Runtime
adb shell
export SNPE_TARGET_ARCH=aarch64-android
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
export ADSP_LIBRARY_PATH="/data/local/tmp/snpeexample/dsp/lib;/system/lib/rfsa/adsp;/system/vendor/lib/rfsa/adsp;/dsp"
cd /data/local/tmp/inception_v3
snpe-net-run --container inception_v3_quantized.dlc --input_list target_raw_list.txt --use_dsp
exit
Pull the output into an output_android_dsp directory.
adb pull /data/local/tmp/inception_v3/output output_android_dsp
Check the classification results by running the following python script:
python3 scripts/show_inceptionv3_classifications_snpe.py -i data/target_raw_list.txt \
-o output_android_dsp/ \
-l data/imagenet_slim_labels.txt
The output should look like the following, showing classification results for all the images.
Classification results
cropped/notice_sign.raw 0.175781 459 brass
cropped/plastic_cup.raw 0.976562 648 measuring cup
cropped/chairs.raw 0.285156 832 studio couch
cropped/trash_bin.raw 0.773438 413 ashcan
Classification results are identical to the run with the CPU runtime, but the probabilities associated with the output labels differ slightly because the DSP runtime executes the quantized model with 8-bit fixed-point arithmetic.
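One way to quantify those differences is to compare the CPU and DSP output tensors element by element after pulling both result directories. The helper below is a hypothetical sketch, not an SDK tool:

```python
# Hypothetical helper: compare two Reshape_1:0.raw output tensors
# (e.g. output_android/Result_0/... vs output_android_dsp/Result_0/...)
# and report the largest per-element probability difference.
import struct

def max_abs_diff(path_a, path_b):
    def load(p):
        with open(p, "rb") as f:
            data = f.read()
        # snpe-net-run writes little-endian float32 values
        return struct.unpack("<%df" % (len(data) // 4), data)
    a, b = load(path_a), load(path_b)
    return max(abs(x - y) for x, y in zip(a, b))
```

For a quantized 8-bit model the differences are typically small but nonzero, matching the slightly different probabilities shown above.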
Running on Android using AIP Runtime
The AIP runtime allows you to run the Inception v3 model on the HTA. Running the model with the AIP runtime requires passing the --runtime argument as "aip" to the script $SNPE_ROOT/examples/Models/InceptionV3/scripts/setup_inceptionv3_snpe.py, so that the HTA-specific metadata required by the AIP runtime is packed into the DLC. Refer to Getting Inception v3 for more details.
Other than that, the settings for the AIP runtime are similar to those for the DSP runtime. Note that the extra environment variable ADSP_LIBRARY_PATH must be set to use the DSP (see DSP Runtime Environment for details). Try running on an Android target with the --use_aip option as follows:
adb shell
export SNPE_TARGET_ARCH=aarch64-android
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/lib
export PATH=$PATH:/data/local/tmp/snpeexample/$SNPE_TARGET_ARCH/bin
export ADSP_LIBRARY_PATH="/data/local/tmp/snpeexample/dsp/lib;/system/lib/rfsa/adsp;/system/vendor/lib/rfsa/adsp;/dsp"
cd /data/local/tmp/inception_v3
snpe-net-run --container inception_v3_quantized.dlc --input_list target_raw_list.txt --use_aip
exit
Pull the output into an output_android_aip directory.
adb pull /data/local/tmp/inception_v3/output output_android_aip
Check the classification results by running the following python script:
python3 scripts/show_inceptionv3_classifications_snpe.py -i data/target_raw_list.txt \
-o output_android_aip/ \
-l data/imagenet_slim_labels.txt
The output should look like the following, showing classification results for all the images.
Classification results
cropped/notice_sign.raw 0.175781 459 brass
cropped/plastic_cup.raw 0.976562 648 measuring cup
cropped/chairs.raw 0.285156 832 studio couch
cropped/trash_bin.raw 0.773438 413 ashcan
Classification results are identical to the run with the CPU runtime, but the probabilities associated with the output labels differ slightly because the AIP runtime executes the quantized model with fixed-point arithmetic.