Class SNPE

Class Documentation

class SNPE

Public Functions

DlSystem::Optional<DlSystem::StringList> getInputTensorNames() const noexcept

Gets the names of input tensors to the network.

To support multiple input scenarios, where multiple tensors are passed through execute() in a TensorMap, each tensor needs to be uniquely named. The names of tensors can be retrieved through this function.

In the case of a single input, one name will be returned.

Returns

An Optional StringList of input tensor names.

Note

Because the returned value is an Optional list, it must evaluate to boolean true before being dereferenced.
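The Optional check described in the note can be sketched as follows (a sketch only: it assumes `snpe` is a valid SNPE instance obtained from SNPEBuilder and that StringList exposes `size()`/`at()`):

```cpp
// Sketch: list the network's input tensor names, verifying the Optional
// before dereferencing it. `snpe` is assumed to be a std::unique_ptr<SNPE>
// returned by SNPEBuilder.
const auto inputNamesOpt = snpe->getInputTensorNames();
if (!inputNamesOpt) {
    std::cerr << "getInputTensorNames() returned an empty Optional\n";
    return;
}
const DlSystem::StringList &names = *inputNamesOpt;
for (size_t i = 0; i < names.size(); ++i) {
    std::cout << "input[" << i << "] = " << names.at(i) << '\n';
}
```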

DlSystem::Optional<DlSystem::StringList> getOutputTensorNames() const noexcept

Gets the names of output tensors of the network.

Returns

An Optional list of output tensor names.

DlSystem::StringList getOutputTensorNamesByLayerName(const char *name) const noexcept

Gets the names of the output tensors for a given layer name.

Parameters

name[in] Layer name

Returns

Output tensor names.

bool execute(const DlSystem::TensorMap &input, DlSystem::TensorMap &output) noexcept

Processes the input data and returns the output.

Parameters
  • input[in] A map of tensors that contains the input data for each input. The tensor names must match the names retrieved through getInputTensorNames().

  • output[inout] An empty map of tensors that will contain the output data of potentially multiple layers (the key in the map is the layer name) upon return.

Note

The output TensorMap has to be empty. To forward propagate and get results in user-supplied tensors, use executeWithSuppliedOutputTensors.
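A minimal call sequence for this overload might look like the following (a sketch only: the tensor name "input:0" and the `inputTensor` object are hypothetical, and the real name must come from getInputTensorNames()):

```cpp
// Sketch: ITensor-based execution. `inputTensor` is assumed to be a
// populated std::unique_ptr<DlSystem::ITensor> created elsewhere.
DlSystem::TensorMap inputMap;
inputMap.add("input:0", inputTensor.get());  // hypothetical tensor name

DlSystem::TensorMap outputMap;               // must be empty before execute()
if (!snpe->execute(inputMap, outputMap)) {
    // handle execution failure
}
// On success, outputMap holds one tensor per output layer, keyed by layer name.
```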

bool execute(const DlSystem::ITensor *input, DlSystem::TensorMap &output) noexcept

Processes the input data and returns the output.

Parameters
  • input[in] A single tensor that contains the input data.

  • output[inout] An empty map of tensors that will contain the output data of potentially multiple layers (the key in the map is the layer name) upon return.

Note

The output TensorMap has to be empty.

bool execute(const DlSystem::UserBufferMap &input, const DlSystem::UserBufferMap &output) noexcept

Processes the input data and returns the output, using user-supplied buffers.

The caller must guarantee that the buffers stored in the UserBuffers remain valid for the duration of execute(). For more detail on buffer ownership and lifetime requirements, please refer to the zdl::DlSystem::UserBuffer documentation.

Parameters
  • input[in] A map of UserBuffers that contains the input data for each input. The UserBuffer names must match the names retrieved through getInputTensorNames().

  • output[inout] A map of UserBuffers that will hold the output data of potentially multiple layers (the key in the map is the UserBuffer name).

Note

input and output UserBuffer maps must be fully pre-populated, with dimensions matching what the network expects. For example, if there are 5 output UserBuffers, they all have to be present in the map.
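In outline, user-buffer execution looks like this (a sketch: `inputMap` and `outputMap` are assumed to be UserBufferMap objects the caller has already populated, e.g. via the user-buffer factory, which is outside the scope of this page):

```cpp
// Sketch: zero-copy execution with caller-owned buffers. Both maps must be
// fully pre-populated: one IUserBuffer per input and per output, with
// dimensions matching the network.
if (!snpe->execute(inputMap, outputMap)) {
    // handle execution failure
}
// Results are written directly into the caller's output buffers, which must
// remain valid until execute() returns.
```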

bool registerIonBuffers(const DlSystem::UserMemoryMap &ionBufferMap) noexcept

Register Client ION Buffers.

Parameters

ionBufferMap[in] A UserMemoryMap of virtual addresses.

Note

To be deprecated; please use the new API registerMemoryMappedBuffers().

Note

The UserBuffer type passed for registration must match the data type of the tensor in the DLC. For regular UserBuffers, SNPE performs an online data conversion (quantization, dequantization, etc.). This is not possible for ION buffers, so a mismatch can lead to issues during execution or accuracy degradation.

bool deregisterIonBuffers(const DlSystem::StringList &ionBufferNames) noexcept

Deregister Client ION Buffers.

Parameters

ionBufferNames[in] A StringList of ION buffer names.

Note

To be deprecated; please use the new API deregisterMemoryMappedBuffers().

bool registerMemoryMappedBuffers(const DlSystem::UserMemoryMap &memoryMappedBufferMap) noexcept

Register Client Memory-Mapped Buffers (e.g., ION buffers on Android).

Parameters

memoryMappedBufferMap[in] A UserMemoryMap of virtual addresses.

Note

The UserBuffer type passed for registration must match the data type of the tensor in the DLC. For regular UserBuffers, SNPE performs an online data conversion (quantization, dequantization, etc.). This is not possible for memory-mapped buffers, so a mismatch can lead to issues during execution or accuracy degradation.

bool deregisterMemoryMappedBuffers(const DlSystem::StringList &bufferNames) noexcept

Deregister Client Memory-Mapped Buffers (e.g., ION buffers on Android).

Parameters

bufferNames[in] A StringList of memory-mapped buffer names.

std::string getModelVersion() const

Returns the version string embedded at model conversion time.

Returns

Model version string, which is a free-form string supplied at the time of the conversion

DlSystem::Optional<DlSystem::TensorShape> getInputDimensions() const noexcept

Returns the dimensions of the input data to the model in the form of TensorShape. The dimensions in TensorShape correspond to what the tensor dimensions would need to be for an input tensor to the model.

Returns

An Optional instance of TensorShape that maintains dimensions, matching the tensor dimensions for input to the model, where the last entry is the fastest varying dimension, etc.

Note

This function only makes sense for networks that have a fixed input size. For networks in which the input size varies with each call of execute(), this function should not be used.

Note

Because the returned type is an Optional instance, it must be verified as a boolean true value before being dereferenced.
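Reading the fixed input shape might look like this (a sketch; `rank()` and `operator[]` are assumed from the TensorShape API):

```cpp
// Sketch: print the model's fixed input dimensions; the last index is the
// fastest-varying dimension.
const auto dimsOpt = snpe->getInputDimensions();
if (dimsOpt) {
    const DlSystem::TensorShape &shape = *dimsOpt;
    for (size_t i = 0; i < shape.rank(); ++i) {
        std::cout << shape[i] << (i + 1 < shape.rank() ? " x " : "\n");
    }
}
```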

DlSystem::Optional<DlSystem::TensorShape> getInputDimensions(const char *name) const noexcept

Returns the dimensions of the input data to the model in the form of TensorShape. The dimensions in TensorShape correspond to what the tensor dimensions would need to be for an input tensor to the model.


Parameters

name[in] input name.

Returns

An Optional TensorShape that maintains dimensions matching the tensor dimensions for input to the model, where the last entry is the fastest-varying dimension.

Note

This function only makes sense for networks that have a fixed input size. For networks in which the input size varies with each call of execute(), this function should not be used.

DlSystem::Optional<DlSystem::StringList> getOutputLayerNames() const noexcept

Gets the output layer(s) for the network.

Note that the output layers returned by this function may be different than those specified when the network was created via the zdl::SNPE::SNPEBuilder. For example, if the network was created in debug mode with no explicit output layers specified, this will contain all layers.

Returns

A List of output layer names.

Note

Because the returned value is an Optional StringList, it must evaluate to boolean true before being dereferenced.

DlSystem::Optional<DlSystem::IBufferAttributes*> getInputOutputBufferAttributes(const char *name) const noexcept

Returns attributes of buffers used to feed input tensors and receive results from output tensors.

Parameters

name[in] Tensor name.

Returns

BufferAttributes of the named input/output tensor.

DlSystem::Optional<DlSystem::StringList> getInputTensorNamesForNetwork(const char *networkName) const noexcept

Gets the names of input tensors to the network.

To support multiple input scenarios, where multiple tensors are passed through execute() in a TensorMap, each tensor needs to be uniquely named. The names of tensors can be retrieved through this function.

In the case of a single input, one name will be returned.

Parameters

networkName[in] Network name.

Returns

An Optional StringList of input tensor names.

DlSystem::Optional<DlSystem::StringList> getOutputTensorNamesForNetwork(const char *networkName) const noexcept

Gets the names of output tensors of the network.

Parameters

networkName[in] Network name.

Returns

An Optional StringList of output tensor names.

Note

The networkName is specified in snpe-dlc-info and defaults to the name of the first graph in the DLC.
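For a multi-graph DLC, the per-network variants can be combined with getNetworkNames() roughly as follows (a sketch; Optional checks on the per-network results are abbreviated):

```cpp
// Sketch: enumerate the graphs in the DLC and query tensor names per graph.
const auto netNamesOpt = snpe->getNetworkNames();
if (netNamesOpt) {
    const DlSystem::StringList &nets = *netNamesOpt;
    for (size_t i = 0; i < nets.size(); ++i) {
        const char *net = nets.at(i);
        const auto inNames  = snpe->getInputTensorNamesForNetwork(net);
        const auto outNames = snpe->getOutputTensorNamesForNetwork(net);
        // ... verify each Optional, then inspect the name lists ...
    }
}
```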

DlSystem::StringList getOutputTensorNamesByLayerNameForNetwork(const char *networkName, const char *name) const noexcept

Gets the names of the output tensors for a given layer name.

Parameters
  • networkName[in] Network name.

  • name[in] Layer name

Returns

Output tensor names.

bool execute(const char *networkName, const DlSystem::TensorMap &input, DlSystem::TensorMap &output) noexcept

Processes the input data and returns the output.

Parameters
  • networkName[in] Network name.

  • input[in] A map of tensors that contains the input data for each input. The tensor names must match the names retrieved through getInputTensorNames().

  • output[inout] An empty map of tensors that will contain the output data of potentially multiple layers (the key in the map is the layer name) upon return.

Returns

true upon successful execution.

Note

The output TensorMap has to be empty. To forward propagate and get results in user-supplied buffers, use the execute() overload that accepts UserBufferMap arguments.

bool execute(const char *networkName, const DlSystem::ITensor *input, DlSystem::TensorMap &output) noexcept

Processes the input data and returns the output.

Parameters
  • networkName[in] Network name.

  • input[in] A single tensor that contains the input data.

  • output[inout] An empty map of tensors that will contain the output data of potentially multiple layers (the key in the map is the layer name) upon return

Returns

true upon successful execution.

Note

The output TensorMap has to be empty. To forward propagate and get results in user-supplied buffers, use the execute() overload that accepts UserBufferMap arguments.

bool execute(const char *networkName, const DlSystem::UserBufferMap &input, const DlSystem::UserBufferMap &output) noexcept

Processes the input data and returns the output, using user-supplied buffers.

The caller must guarantee that the buffers stored in the UserBuffers remain valid for the duration of execute(). For more detail on buffer ownership and lifetime requirements, please refer to the zdl::DlSystem::UserBuffer documentation.

Parameters
  • networkName[in] Network name.

  • input[in] A map of UserBuffers that contains the input data for each input. The UserBuffer names must match the names retrieved through getInputTensorNames().

  • output[inout] A map of UserBuffers that will hold the output data of potentially multiple layers (the key in the map is the UserBuffer name).

Returns

true upon successful execution.

Note

input and output UserBuffer maps must be fully pre-populated, with dimensions matching what the network expects. For example, if there are 5 output UserBuffers, they all have to be present in the map.

bool registerMemoryMappedBuffersForNetwork(const char *networkName, const DlSystem::UserMemoryMap &memoryMappedBufferMap) noexcept

Register Client Memory-Mapped Buffers (e.g., ION buffers on Android).

Parameters
  • networkName[in] Network name.

  • memoryMappedBufferMap[in] A UserMemoryMap of virtual addresses.

Returns

true upon successful memory-mapped buffer registration.

Note

The UserBuffer type passed for registration must match the data type of the tensor in the DLC. For regular UserBuffers, SNPE performs an online data conversion (quantization, dequantization, etc.). This is not possible for memory-mapped buffers, so a mismatch can lead to issues during execution or accuracy degradation.

bool deregisterMemoryMappedBuffersForNetwork(const char *networkName, const DlSystem::StringList &bufferNames) noexcept

Deregister Client Memory-Mapped Buffers (e.g., ION buffers on Android).

Parameters
  • networkName[in] Network name.

  • bufferNames[in] A StringList of memory mapped buffer names

Returns

true upon successful memory-mapped buffer deregistration.

DlSystem::Optional<DlSystem::TensorShape> getInputDimensionsForNetwork(const char *networkName) const noexcept

Returns the dimensions of the input data to the model in the form of TensorShape. The dimensions in TensorShape correspond to what the tensor dimensions would need to be for an input tensor to the model.


Parameters

networkName[in] Network name.

Returns

An Optional TensorShape that maintains dimensions matching the tensor dimensions for input to the model, where the last entry is the fastest-varying dimension.

Note

This function only makes sense for networks that have a fixed input size. For networks in which the input size varies with each call of execute(), this function should not be used.

DlSystem::Optional<DlSystem::TensorShape> getInputDimensionsForNetwork(const char *networkName, const char *name) const noexcept

Returns the dimensions of the input data to the model in the form of TensorShape. The dimensions in TensorShape correspond to what the tensor dimensions would need to be for an input tensor to the model.


Parameters
  • networkName[in] Network name.

  • name[in] Input name.

Returns

An Optional TensorShape that maintains dimensions matching the tensor dimensions for input to the model, where the last entry is the fastest-varying dimension.

Note

This function only makes sense for networks that have a fixed input size. For networks in which the input size varies with each call of execute(), this function should not be used.

DlSystem::Optional<DlSystem::StringList> getOutputLayerNamesForNetwork(const char *networkName) const noexcept

Gets the output layer(s) for the network.

Parameters

networkName[in] Network name.

Returns

A StringList of output layer names.

Note

The output layers returned by this function may be different than those specified when the network was created via the SNPEBuilder. For example, if the network was created in debug mode with no explicit output layers specified, this will contain all layers.

DlSystem::Optional<DlSystem::IBufferAttributes*> getInputOutputBufferAttributesForNetwork(const char *networkName, const char *name) const noexcept

Returns attributes of buffers used to feed input tensors and receive results from output tensors.

Parameters
  • networkName[in] Network name.

  • name[in] Tensor name.

Returns

BufferAttributes of the named input/output tensor.

DlSystem::Optional<DiagLog::IDiagLog*> getDiagLogInterface() noexcept

Get the diagnostic logging interface.

Note

Because the returned type is an Optional instance, it must evaluate to boolean true before being dereferenced.
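Typical usage might look like this (a sketch; `getOptions()`, `setOptions()`, and `start()` are assumed from the IDiagLog interface, and the log directory is a hypothetical path):

```cpp
// Sketch: enable diagnostic logging if the interface is available.
auto diagLogOpt = snpe->getDiagLogInterface();
if (diagLogOpt) {
    DiagLog::IDiagLog *diagLog = *diagLogOpt;
    auto options = diagLog->getOptions();
    options.LogFileDirectory = "/data/local/tmp/snpe-diag";  // hypothetical path
    diagLog->setOptions(options);
    diagLog->start();
}
```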

DlSystem::Optional<DlSystem::StringList> getNetworkNames() const noexcept

Returns a StringList of network names managed by this SNPE instance.

Returns

An Optional StringList of network names.

bool setPerformanceProfile(DlSystem::PerformanceProfile_t perfProfile) noexcept

Sets performance profile.

Parameters

perfProfile[in] Performance profile level.

Returns

true upon successful setting of performance profile

bool setCustomPerfProfile(DlSystem::SNPEPerfProfile perfProfile) noexcept

Sets custom performance profile.

Parameters

perfProfile[in] Custom performance profile level.

Returns

true upon successful setting of custom performance profile

bool setExecutionPriorityHint(DlSystem::ExecutionPriorityHint_t priority)

Sets a preference for execution priority, allowing users to set the priority of the graph. Setting this option overwrites the previous priority. The SNPE runtime is free to use this information to coordinate between different workloads that may or may not extend beyond SNPE.

Parameters

priority[in] The target execution priority.

Returns

true upon successful setting of the execution priority.

Note

On the Android platform, performance is determined by the priority level. In contrast, on Windows, the operating system can adjust the priority level, which means that performance cannot be guaranteed.
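Together, setPerformanceProfile() and setExecutionPriorityHint() might be used as follows (a sketch; the enum values BURST and NORMAL are assumptions and may differ by SDK version):

```cpp
// Sketch: request an aggressive performance profile plus a priority hint.
bool ok = snpe->setPerformanceProfile(DlSystem::PerformanceProfile_t::BURST);
ok = ok && snpe->setExecutionPriorityHint(DlSystem::ExecutionPriorityHint_t::NORMAL);
if (!ok) {
    // one of the settings was rejected by the runtime
}
```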