ARM64X Tutorial¶
Introduction¶
The Windows SDK provides ARM64X binaries, which are compatible with x86-64, ARM64, and ARM64EC applications on Windows on ARM devices. ARM64X is only supported on the SC8380XP (Snapdragon 8cx Gen 4).
End-to-end HTP inference with ARM64EC qnn-net-run and offline prepared graph¶
Copy the following files to the SC8380XP target device:
$QNN_SDK_ROOT/bin/arm64x-windows-msvc/qnn-net-run.exe
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtp.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpV73Stub.dll
$QNN_SDK_ROOT/lib/hexagon-v73/unsigned/libQnnHtpV73Skel.so
Inception_v3_quantized.serialized.bin (See Converting and executing a CNN model with QNN for more details)
Run the following inference command on the target device:
$ .\qnn-net-run.exe --input_list .\input_list.txt --retrieve_context .\Inception_v3_quantized.serialized.bin --backend QnnHtp.dll
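The offline-context flow above can be sketched as a small helper that checks the deployed files and assembles the qnn-net-run command line. This is a minimal illustration only: the `missing_files` and `build_offline_cmd` helpers are hypothetical and not part of the QNN SDK; the file names and flags are taken verbatim from the steps above.

```python
from pathlib import Path

# Files the offline-prepare flow expects next to qnn-net-run.exe
# (names taken from the copy list above).
REQUIRED = [
    "qnn-net-run.exe",
    "QnnHtp.dll",
    "QnnHtpV73Stub.dll",
    "libQnnHtpV73Skel.so",
    "Inception_v3_quantized.serialized.bin",
]

def missing_files(workdir="."):
    """Return the required files that are not present in workdir."""
    d = Path(workdir)
    return [f for f in REQUIRED if not (d / f).is_file()]

def build_offline_cmd(context_bin="Inception_v3_quantized.serialized.bin"):
    """Assemble the qnn-net-run command for an offline-prepared context."""
    return [
        r".\qnn-net-run.exe",
        "--input_list", r".\input_list.txt",
        "--retrieve_context", rf".\{context_bin}",
        "--backend", "QnnHtp.dll",
    ]
```

Running `missing_files()` in the working directory before launching the command is a quick way to catch a forgotten DLL or skel library, which otherwise surfaces only as a backend load failure at runtime.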
End-to-end HTP inference with ARM64EC qnn-net-run and online prepared graph¶
Copy the following files to the SC8380XP target device:
$QNN_SDK_ROOT/bin/arm64x-windows-msvc/qnn-net-run.exe
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtp.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpV73Stub.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpPrepare.dll
$QNN_SDK_ROOT/lib/hexagon-v73/unsigned/libQnnHtpV73Skel.so
Inception_v3_quantized.dll (See Converting and executing a CNN model with QNN for more details, and use -t windows-x86_64 when running the qnn-model-lib-generator)
Run the following inference command on the target device:
$ .\qnn-net-run.exe --input_list .\input_list.txt --model .\Inception_v3_quantized.dll --backend QnnHtp.dll
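Compared with the offline flow, online preparation differs in two ways: QnnHtpPrepare.dll must be deployed alongside the backend, and the model library is passed with --model instead of --retrieve_context. A hedged sketch of the command assembly (the `build_online_cmd` helper is illustrative, not part of the QNN SDK):

```python
def build_online_cmd(model_dll="Inception_v3_quantized.dll"):
    """Assemble the qnn-net-run command for online graph preparation.

    Note: QnnHtpPrepare.dll is loaded by the backend at runtime and
    therefore only needs to be present in the same directory; it is
    not named on the command line.
    """
    return [
        r".\qnn-net-run.exe",
        "--input_list", r".\input_list.txt",
        "--model", rf".\{model_dll}",
        "--backend", "QnnHtp.dll",
    ]
```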
End-to-end HTP inference with ARM64 qnn-net-run and offline prepared graph¶
Copy the following files to the SC8380XP target device:
$QNN_SDK_ROOT/bin/aarch64-windows-msvc/qnn-net-run.exe
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtp.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpV73Stub.dll
$QNN_SDK_ROOT/lib/hexagon-v73/unsigned/libQnnHtpV73Skel.so
Inception_v3_quantized.serialized.bin (See Converting and executing a CNN model with QNN for more details)
Run the following inference command on the target device:
$ .\qnn-net-run.exe --input_list .\input_list.txt --retrieve_context .\Inception_v3_quantized.serialized.bin --backend QnnHtp.dll
End-to-end HTP inference with ARM64 qnn-net-run and online prepared graph¶
Copy the following files to the SC8380XP target device:
$QNN_SDK_ROOT/bin/aarch64-windows-msvc/qnn-net-run.exe
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtp.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpV73Stub.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpPrepare.dll
$QNN_SDK_ROOT/lib/hexagon-v73/unsigned/libQnnHtpV73Skel.so
Inception_v3_quantized.dll (See Converting and executing a CNN model with QNN for more details, and use -t windows-aarch64 when running the qnn-model-lib-generator)
Run the following inference command on the target device:
$ .\qnn-net-run.exe --input_list .\input_list.txt --model .\Inception_v3_quantized.dll --backend QnnHtp.dll
End-to-end HTP inference with x64 qnn-sample-app and offline prepared graph¶
Copy the following files to the SC8380XP target device:
$QNN_SDK_ROOT/examples/QNN/SampleApp/build/src/Release/qnn-sample-app.exe (See Sample App Tutorial: Create and build a sample C++ application for more details)
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtp.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnHtpV73Stub.dll
$QNN_SDK_ROOT/lib/arm64x-windows-msvc/QnnSystem.dll
$QNN_SDK_ROOT/lib/hexagon-v73/unsigned/libQnnHtpV73Skel.so
Inception_v3_quantized.serialized.bin (See Converting and executing a CNN model with QNN for more details)
Run the following inference command on the target device:
$ .\qnn-sample-app.exe --system_library .\QnnSystem.dll --input_list .\input_list.txt --retrieve_context .\Inception_v3_quantized.serialized.bin --backend QnnHtp.dll
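Across the scenarios above only a few elements vary: the executable (arm64x vs aarch64 qnn-net-run, or qnn-sample-app), the graph source (--retrieve_context for an offline-prepared context vs --model for online preparation), and, for qnn-sample-app, the extra --system_library flag. A sketch of the sample-app variant (the `build_sample_app_cmd` helper name is illustrative; flags and file names follow the command shown above):

```python
def build_sample_app_cmd(context_bin="Inception_v3_quantized.serialized.bin"):
    """Assemble the qnn-sample-app command for an offline-prepared context.

    Unlike qnn-net-run in the earlier sections, qnn-sample-app also
    needs QnnSystem.dll, passed explicitly via --system_library.
    """
    return [
        r".\qnn-sample-app.exe",
        "--system_library", r".\QnnSystem.dll",
        "--input_list", r".\input_list.txt",
        "--retrieve_context", rf".\{context_bin}",
        "--backend", "QnnHtp.dll",
    ]
```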