This section provides a high-level description of the process of integrating the Inference Engine into your application. Refer to the Hello Classification Sample sources for an example of using the Inference Engine in an application.
NOTE: In the 2019 R2 release, the new Inference Engine Core API was introduced. This guide has been updated to reflect the new API approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases. Refer to the Migration from Inference Engine Plugin API to Core API guide to update your application.
The libinference_engine.so library implements loading and parsing of a model Intermediate Representation (IR) and triggers inference using a specified device. The core library provides classes such as InferenceEngine::Core and InferenceEngine::Blob, and the C++ Inference Engine API wraps its capabilities in classes such as InferenceEngine::CNNNetwork, InferenceEngine::ExecutableNetwork, and InferenceEngine::InferRequest.
The integration process includes the following steps:
1) Create an Inference Engine Core object to manage the available devices and read network objects:
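A minimal sketch of this step (the include path reflects the public Inference Engine C++ API; the surrounding main() is just scaffolding):

```cpp
#include <inference_engine.hpp>

using namespace InferenceEngine;

int main() {
    // One Core object is enough for the whole application: it enumerates the
    // available devices and reads/loads networks in the steps below.
    Core core;
    // ... steps 2-8 of this guide go here ...
    return 0;
}
```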
2) Read a model IR created by the Model Optimizer (.xml is a supported format):
Or read the model from the ONNX format (.onnx and .prototxt are supported formats):
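Both formats are read through the same call; the file names below are placeholders:

```cpp
// Read an IR produced by the Model Optimizer. The weights file (model.bin)
// is located automatically next to the .xml file.
CNNNetwork network = core.ReadNetwork("model.xml");

// Or read a model directly from the ONNX format.
CNNNetwork onnx_network = core.ReadNetwork("model.onnx");
```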
3) Configure input and output. Optionally, set the number format (precision) and memory layout for inputs and outputs. Refer to the Supported Configurations chapter to choose the relevant configuration.
You can also allow input of any size. To do this, mark each input as resizable by setting a desired resize algorithm (for example, RESIZE_BILINEAR) inside the appropriate input info.
Basic color format conversions are supported as well. By default, the Inference Engine assumes that the input color format is BGR and color format conversions are disabled. The Inference Engine can convert inputs in the RGB, RGBX, BGRX, and NV12 color formats to BGR, where X is a channel that is ignored during inference. To enable the conversions, set a desired color format (for example, RGB) for each input inside the appropriate input info.
If you want to run inference for multiple images at once, you can use the built-in batch pre-processing functionality.
NOTE: Batch pre-processing is not supported if the input color format is set to ColorFormat::NV12.
You can use the following code snippet to configure input and output:
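A sketch of such a configuration, continuing inside the same function; the U8/NCHW input, FP32/NC output, RGB color format, and bilinear resize are illustrative choices, not requirements:

```cpp
// Take information about all topology inputs and outputs
InputsDataMap input_info = network.getInputsInfo();
OutputsDataMap output_info = network.getOutputsInfo();

for (auto &item : input_info) {
    InputInfo::Ptr input_data = item.second;
    input_data->setPrecision(Precision::U8);     // number format of the input
    input_data->setLayout(Layout::NCHW);         // memory layout of the input
    // Mark the input as resizable and enable an RGB -> BGR conversion
    input_data->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
    input_data->getPreProcess().setColorFormat(ColorFormat::RGB);
}

for (auto &item : output_info) {
    DataPtr output_data = item.second;
    output_data->setPrecision(Precision::FP32);
    output_data->setLayout(Layout::NC);
}
```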
NOTE: NV12 input color format pre-processing differs from the other color conversions. In the NV12 case, the Inference Engine expects two separate image planes (Y and UV). You must use a specific InferenceEngine::NV12Blob object instead of the default blob object and set this blob to the Inference Engine infer request using InferenceEngine::InferRequest::SetBlob(). Refer to the Hello NV12 Input Classification C++ Sample for more details.
If you skip this step, the default values are set: the color format is ColorFormat::RAW, meaning that the input does not need color conversions, and the layout is chosen from the number of input dimensions:

| Number of dimensions | 5 | 4 | 3 | 2 | 1 |
| --- | --- | --- | --- | --- | --- |
| Layout | NCDHW | NCHW | CHW | NC | C |
4) Load the model to the device using InferenceEngine::Core::LoadNetwork():

This call creates an executable network from a network object. The executable network is associated with a single hardware device. It is possible to create as many networks as needed and to use them simultaneously (up to the limitation of the hardware resources). The third parameter is a configuration for the plugin: a map of pairs (parameter name, parameter value). Refer to the Supported Devices page for details about the configuration parameters supported by each device.
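For example (the "CPU" device name and the empty configuration map are illustrative):

```cpp
// The third argument is a map of plugin configuration parameters
// (parameter name -> parameter value); it can also be omitted entirely.
ExecutableNetwork executable_network = core.LoadNetwork(network, "CPU", {});
```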
5) Create an infer request:
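Continuing the sketch:

```cpp
// Each infer request owns its input and output blobs and can run independently
// of other requests created from the same executable network.
InferRequest infer_request = executable_network.CreateInferRequest();
```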
6) Prepare input. You can use one of the following options to prepare input:
- Get blobs allocated by an infer request using InferenceEngine::InferRequest::GetBlob() and feed an image and the input data to the blobs. In this case, input data must be aligned (resized manually) with a given blob size and have a correct color format.
- For a cascade of networks, get the output blob from the first request using InferenceEngine::InferRequest::GetBlob() and set it as input for the second request using InferenceEngine::InferRequest::SetBlob().
- For a region of interest (ROI) of another blob, create a wrapping blob using InferenceEngine::make_shared_blob() by passing the original blob and the ROI, and set it for an infer request using InferenceEngine::InferRequest::SetBlob().
- Allocate input blobs of the appropriate types and sizes, feed the image and input data to them, and call InferenceEngine::InferRequest::SetBlob() to set these blobs for an infer request (see the sketch below).
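A sketch of the GetBlob() and SetBlob() approaches; the input name lookup, the 1x3x224x224 shape, and the U8/NCHW precision and layout are assumptions for illustration:

```cpp
std::string input_name = network.getInputsInfo().begin()->first;

// Option: fill the blob already allocated by the infer request.
Blob::Ptr input_blob = infer_request.GetBlob(input_name);
unsigned char *blob_data = input_blob->buffer().as<unsigned char *>();
// ... copy the image into blob_data using the precision/layout configured in step 3 ...

// Option: allocate your own buffer, wrap it in a blob, and set it on the request.
std::vector<unsigned char> image(1 * 3 * 224 * 224);   // your image data, U8, NCHW order
TensorDesc desc(Precision::U8, {1, 3, 224, 224}, Layout::NCHW);
Blob::Ptr user_blob = make_shared_blob<unsigned char>(desc, image.data());
infer_request.SetBlob(input_name, user_blob);
```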
The SetBlob() method compares the precision and layout of an input blob with the ones defined in step 3 and throws an exception if they do not match. It also compares the size of the input blob with the input size of the read network. But if the input was configured as resizable, you can set an input blob of any size (for example, any ROI blob). Input resize is invoked automatically using the resize algorithm configured in step 3. Similarly to resize, color format conversions allow the color format of an input blob to differ from the color format of the read network; the conversion is invoked automatically using the color format configured in step 3.

The GetBlob() logic is the same for pre-processable and non-pre-processable input. Even if it is called for an input configured as resizable or as having a specific color format, the blob allocated by the infer request is returned, and its size and color format are already consistent with the corresponding values of the read network. No pre-processing happens for this blob. If you call GetBlob() after SetBlob(), you will get the blob you set in SetBlob().
7) Do inference by calling the InferenceEngine::InferRequest::StartAsync and InferenceEngine::InferRequest::Wait methods for an asynchronous request, or by calling the InferenceEngine::InferRequest::Infer method for a synchronous request:
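For example:

```cpp
// Asynchronous: start inference, do other work on this thread if needed,
// then block until the result becomes available.
infer_request.StartAsync();
infer_request.Wait(IInferRequest::WaitMode::RESULT_READY);

// Synchronous: blocks the calling thread until inference completes.
infer_request.Infer();
```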
StartAsync returns immediately and starts inference without blocking the main thread, while Infer blocks the main thread and returns when inference is completed. Call Wait to wait for the result of an asynchronous request to become available.
There are three ways to use it:
- InferenceEngine::IInferRequest::WaitMode::RESULT_READY - waits until the inference result becomes available
- InferenceEngine::IInferRequest::WaitMode::STATUS_ONLY - immediately returns the request status; it does not block or interrupt the current thread
- A maximum duration in milliseconds to block for; the method blocks until the specified timeout elapses or the result becomes available, whichever comes first
Both requests are thread-safe: they can be called from different threads without fear of corruption or failures.
Multiple requests for a single ExecutableNetwork are executed sequentially, one by one, in FIFO order. While a request is ongoing, all its methods except InferenceEngine::InferRequest::Wait throw an exception.
8) Go over the output blobs and process the results. Note that casting a blob via std::dynamic_pointer_cast is not the recommended way; it is better to access the data via the buffer() and as() methods as follows:
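A sketch of output processing, assuming the FP32 output precision configured in step 3:

```cpp
for (auto &item : output_info) {
    const std::string &output_name = item.first;
    Blob::Ptr output = infer_request.GetBlob(output_name);
    {
        // cbuffer() locks the blob memory; the pointer returned by as<>() stays
        // valid for the lifetime of the locker object.
        auto const mem_locker = output->cbuffer();
        const float *output_buffer = mem_locker.as<const float *>();
        // output_buffer[i] accesses the i-th value of this output blob
    }
}
```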
For details about building your application, refer to the CMake files for the sample applications. All sample source code is located in the <INSTALL_DIR>/openvino/inference_engine/samples directory, where <INSTALL_DIR> is the OpenVINO™ installation directory.
In your project/CMakeLists.txt, OpenCV integration is needed mostly for pre-processing input data, and nGraph for more complex applications using the nGraph API.
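A minimal CMakeLists.txt sketch along these lines; the project name, source file, and the variable names exported by the InferenceEngine, OpenCV, and ngraph CMake packages are assumptions based on typical OpenVINO sample projects:

```cmake
cmake_minimum_required(VERSION 3.10)
project(ie_app)

# These packages are located via the environment configured by the OpenVINO setup scripts
find_package(InferenceEngine REQUIRED)
find_package(OpenCV REQUIRED)
find_package(ngraph REQUIRED)

add_executable(${PROJECT_NAME} src/main.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE
    ${InferenceEngine_LIBRARIES}
    ${OpenCV_LIBS}
    ${NGRAPH_LIBRARIES})
```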
NOTE: Make sure the Set the Environment Variables step in the OpenVINO Installation document is applied to your terminal; otherwise, the InferenceEngine_DIR and OpenCV_DIR variables won't be configured properly to pass the find_package() calls.
NOTE: Before running, make sure you completed the Set the Environment Variables section in the OpenVINO Installation document so that the application can find the libraries.
To run compiled applications on Microsoft* Windows* OS, make sure that the Microsoft* Visual C++ 2017 Redistributable and Intel® C++ Compiler 2017 Redistributable packages are installed and that the <INSTALL_DIR>/bin/intel64/Release/*.dll files are placed in the application folder or accessible via the %PATH% environment variable.
| Deprecation Begins | June 1, 2020 |
| --- | --- |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.
Therefore, the ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are advised to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.