This chapter provides information on the Inference Engine plugins that enable inferencing of deep learning models on the supported VPU devices:
These plugins have the following known layer limitations:

* 'ScaleShift' layer is supported for zero value of the 'broadcast' attribute only.
* 'CTCGreedyDecoder' layer works with the 'ctc_merge_repeated' attribute equal to 1.
* 'DetectionOutput' layer works with zero values of the 'interpolate_orientation' and 'num_orient_classes' parameters only.
* 'MVN' layer uses a fixed value for the 'eps' parameter (1e-9).
* 'Normalize' layer uses a fixed value for the 'eps' parameter (1e-9) and is supported for zero value of 'across_spatial' only.
* 'Pad' layer works only with 4D tensors.
The VPU plugins support the configuration parameters listed below. The parameters are passed as std::map<std::string, std::string> on network loading.
| Parameter Name | Parameter Values | Default | Description |
| --- | --- | --- | --- |
| KEY_VPU_HW_STAGES_OPTIMIZATION | YES/NO | YES | Turn on HW stages usage. Applicable for Intel Movidius Myriad X and Intel Vision Accelerator Design devices only. |
| KEY_VPU_COMPUTE_LAYOUT | VPU_AUTO, VPU_NCHW, VPU_NHWC | VPU_AUTO | Specify internal input and output layouts for network layers. |
| KEY_VPU_PRINT_RECEIVE_TENSOR_TIME | YES/NO | NO | Add device-side time spent waiting for input to PerformanceCounts. See the Data Transfer Pipelining section for details. |
| KEY_VPU_IGNORE_IR_STATISTIC | YES/NO | NO | The VPU plugin can use statistics present in the IR to try to improve calculation precision. Enable this option if you do not want the statistics to be used. |
| KEY_VPU_CUSTOM_LAYERS | path to XML file | empty string | This option allows passing an XML file with custom layer bindings. If a layer is present in such a file, it is used during inference even if the layer is natively supported. |
The MYRIAD plugin tries to pipeline data transfers to/from the device with computations: while one infer request is being executed, the data for the next infer request can be uploaded to the device in parallel. The same applies to downloading results.
The KEY_VPU_PRINT_RECEIVE_TENSOR_TIME configuration parameter can be used to check the efficiency of the current pipelining. A new record in the performance counters shows the time the device spent waiting for input before starting the inference. In a perfect pipeline this time should be close to zero, meaning the data was already transferred when the new inference started.
You get the following message when running inference with the VPU plugin: "[VPU] Cannot convert layer <layer_name> due to unsupported layer type <layer_type>"
This means that your topology has a layer that is unsupported by your target VPU plugin. To resolve this issue, you can implement the custom layer for the target device using the Inference Engine Kernels Extensibility mechanism. Alternatively, to quickly get a working prototype, you can use the heterogeneous scenario with the default fallback policy (see the HETERO Plugin section): use the HETERO plugin with a fallback device that supports this layer, for example CPU:
HETERO:MYRIAD,CPU. For a list of VPU-supported layers, see the Supported Layers section of the Supported Devices topic.
NOTE: Using the heterogeneous scenario with VPU may cause accuracy issues on the VPU side. You can use the Collect Statistics Tool to collect statistics and save them in the IR. These statistics can then be used by the VPU plugin to restore accuracy.