Deploying deep learning networks from the training environment to embedded platforms for inference is a complex task that introduces a number of technical challenges that must be addressed.
The process assumes that you have a network model trained using one of the supported frameworks. The typical workflow for deploying a trained deep learning model is:

1. Run the Model Optimizer to convert the trained model into an Intermediate Representation (IR) of the network.
2. Test the model in IR form using the Inference Engine in the target environment.
3. Integrate the Inference Engine into your application and deploy it on the target device.
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and automatically adjusts deep learning models for optimal execution on end-point target devices.
Model Optimizer supports multiple deep learning frameworks and formats.
When running the Model Optimizer, you do not need to consider which target device you plan to use: the same Model Optimizer output can be used on all supported targets.
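For illustration, here is a minimal sketch of invoking the Model Optimizer from a script, assuming the `mo.py` entry point from the toolkit's `model_optimizer` directory and a trained TensorFlow model `frozen_model.pb` (both paths are placeholders):

```python
import subprocess

# Convert a trained model into an Intermediate Representation (IR).
# mo.py writes a .xml (topology) and a .bin (weights) file to --output_dir.
# No target device is specified: the same IR runs on all supported targets.
subprocess.run(
    [
        "python3", "mo.py",
        "--input_model", "frozen_model.pb",  # placeholder trained model
        "--output_dir", "./ir",
    ],
    check=True,
)
```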
The Model Optimizer workflow can be described as follows: configure the tool for the framework used to train the model, then run it on the trained model to produce the Intermediate Representation.
For the list of supported models, refer to the framework- or format-specific documentation pages.
The Intermediate Representation (IR) of a deep learning model plays an important role in connecting the OpenVINO™ toolkit components. The IR is a pair of files:
.xml: The topology file - an XML file that describes the network topology
.bin: The trained data file - a binary file that contains the weights and biases
Intermediate Representation (IR) files can be read, loaded, and inferred with the Inference Engine, whose API offers a unified interface across a number of supported Intel® platforms. The IR is also consumed, modified, and written by the Post-Training Optimization Tool, which provides quantization capabilities.
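As a sketch of this flow, the snippet below reads an IR pair and runs inference with the Inference Engine Python API (file names, the device choice, and the zero-filled input are placeholders; the `input_info` property is assumed to be available, as in 2020.2-and-later releases):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read the IR pair: topology (.xml) and weights (.bin).
net = ie.read_network(model="model.xml", weights="model.bin")

# The same network can be loaded on any supported device ("CPU", "GPU", ...).
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed a placeholder input matching the network's first input shape.
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
input_data = np.zeros(input_shape, dtype=np.float32)

result = exec_net.infer({input_name: input_data})
```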
Refer to the dedicated description of Intermediate Representation and Operation Sets for further details.
The OpenVINO toolkit is powered by nGraph capabilities for its graph construction API, graph transformation engine, and Reshape. At run time, an nGraph Function is used as the intermediate representation of a model underneath the CNNNetwork API. The conventional CNNNetwork representation is still available for backward compatibility when certain conventional API methods are used. Refer to the Overview of nGraph Flow for details of how nGraph is integrated into the Inference Engine and co-exists with the conventional representation.
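As a hedged sketch of that co-existence, the snippet below reads a CNNNetwork from IR and obtains the nGraph function underneath it via the `ngraph` Python package (file names are placeholders, and `ng.function_from_cnn` is assumed to be available, as in 2020.x releases):

```python
import ngraph as ng
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Get the nGraph function that backs the CNNNetwork at run time.
func = ng.function_from_cnn(net)

# Walk the graph in topological order and print each operation.
for op in func.get_ordered_ops():
    print(op.get_friendly_name(), "->", op.get_type_name())
```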
| Deprecation Begins | June 1, 2020     |
| ------------------ | ---------------- |
| Removal Date       | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.
Therefore, the ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users should migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.
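As a minimal migration sketch, the snippet below runs an ONNX model through ONNX Runtime with the OpenVINO Execution Provider selected (the model path is a placeholder, and the ONNX Runtime build is assumed to include the OpenVINO EP):

```python
import numpy as np
import onnxruntime as ort

# Select the OpenVINO Execution Provider instead of the deprecated nGraph one.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],
)

# Placeholder input: adapt the name and shape to your model.
# Dynamic dimensions (reported as non-integers) are set to 1 here.
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})
```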
Inference Engine is a runtime that delivers a unified API to integrate inference with your application logic.
The Inference Engine supports inference of multiple image classification networks, including the AlexNet, GoogLeNet, VGG, and ResNet families; fully convolutional networks such as FCN8, used for image segmentation; and object detection networks such as Faster R-CNN.
For the full list of supported hardware, refer to the Supported Devices section.
For Intel® Distribution of OpenVINO™ toolkit, the Inference Engine package contains headers, runtime libraries, and sample console applications demonstrating how you can use the Inference Engine in your applications.
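To illustrate the unified API across devices, here is a small sketch that enumerates the available devices and loads the same IR on each (file names are placeholders; the device list depends on the plugins installed on the machine):

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# The same network and the same API work across all supported devices.
for device in ie.available_devices:  # e.g. ["CPU", "GPU", "MYRIAD"]
    exec_net = ie.load_network(network=net, device_name=device)
    print("Loaded network on", device)
```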
For complete information about compiler optimizations, see our Optimization Notice.