Preparing LPAI Param Config File¶
Prepare a JSON file with the appropriate parameters to generate a model for the target hardware.
EXAMPLE of lpaiParams.conf file for v6 hardware:
{
    "lpai_backend": {
        "target_env": "x86",
        "enable_hw_ver": "v6"
    }
}
Compile LPAI Graph on x86 Linux OS¶
EXAMPLE of config.json file:
{
    "backend_extensions": {
        "shared_library_path": "${QNN_SDK_ROOT}/lib/x86_64-linux-clang/libQnnLpaiNetRunExtensions.so",
        "config_file_path": "./lpaiParams.conf"
    }
}
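The two files above can be created and sanity-checked before invoking the tools. The sketch below mirrors the example contents exactly; the use of python3 -m json.tool for validation is an assumption (any JSON validator works), and ${QNN_SDK_ROOT} is kept literal in config.json as shown in the example:

```shell
# Write the LPAI parameter file shown above (x86 target, v6 hardware).
cat > lpaiParams.conf << 'EOF'
{
    "lpai_backend": {
        "target_env": "x86",
        "enable_hw_ver": "v6"
    }
}
EOF

# Write the backend-extension config that points at the parameter file.
# The quoted heredoc keeps ${QNN_SDK_ROOT} literal, matching the example.
cat > config.json << 'EOF'
{
    "backend_extensions": {
        "shared_library_path": "${QNN_SDK_ROOT}/lib/x86_64-linux-clang/libQnnLpaiNetRunExtensions.so",
        "config_file_path": "./lpaiParams.conf"
    }
}
EOF

# Confirm both files are well-formed JSON before passing them to the tools.
python3 -m json.tool lpaiParams.conf > /dev/null && echo "lpaiParams.conf OK"
python3 -m json.tool config.json > /dev/null && echo "config.json OK"
```

Catching a malformed configuration here is cheaper than debugging a failed backend-extension load later.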
Use the context binary generator to generate an offline LPAI model.
The qnn-context-binary-generator utility is backend-agnostic, meaning it can use only generic QNN APIs. The backend extension feature allows backend-specific APIs, such as custom configurations, to be used. More documentation on the context binary generator can be found under qnn-context-binary-generator. Please note that the scope of QNN backend extensions is limited to qnn-context-binary-generator and qnn-net-run.
LPAI Backend Extensions serve as an interface to offer custom options to the LPAI Backend.
To enable specific hardware versions, provide the extension shared library
libQnnLpaiNetRunExtensions.so and, if required, a configuration file.
To use backend extension-related parameters with qnn-net-run and qnn-context-binary-generator, use the --config_file argument and provide the path to the JSON file.
$ cd ${QNN_SDK_ROOT}/examples/QNN/converter/models
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${QNN_SDK_ROOT}/lib/x86_64-linux-clang
$ ${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-context-binary-generator \
--backend <path_to_x86_library>/libQnnLpai.so \
--model <qnn_x86_model_name.so> \
--log_level verbose \
--binary_file <qnn_model_name.bin> \
--config_file <path to JSON of backend extensions>
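The same --config_file mechanism applies when executing the generated context binary with qnn-net-run. The fragment below is a sketch rather than a verified invocation; the placeholder paths and the input list file are assumptions that depend on your model:

```shell
$ ${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-net-run \
    --backend <path_to_x86_library>/libQnnLpai.so \
    --retrieve_context <qnn_model_name.bin> \
    --input_list <path_to_input_list.txt> \
    --config_file <path to JSON of backend extensions>
```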
For details on the LPAI JSON configuration options, refer to the `QNN LPAI Backend Configuration Guide`_.