Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The typical workflow for deploying a trained deep learning model is: train the model in a supported framework, convert it with the Model Optimizer, and run inference with the Inference Engine.
Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model:
- `.xml` - Describes the network topology
- `.bin` - Contains the weights and biases binary data
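A minimal sketch of reading and running such an IR pair with the Inference Engine Python API (assuming OpenVINO™ 2020.2 or later; `ir/model.xml` and `ir/model.bin` are placeholder paths):

```python
import numpy as np
from openvino.inference_engine import IECore

# Read the IR pair produced by the Model Optimizer, e.g. generated with:
#   python mo.py --input_model model.onnx --output_dir ir/
# The paths below are placeholders for your own IR files.
ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")

# Compile the network for a target device ("CPU" here) and run inference.
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
dummy = np.zeros(input_shape, dtype=np.float32)

result = exec_net.infer(inputs={input_name: dummy})
print({name: out.shape for name, out in result.items()})
```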
| Milestone | Date |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.
Therefore, the ONNX RT Execution Provider for nGraph is deprecated starting June 1, 2020, and will be completely removed on December 1, 2020. Users are advised to migrate to the ONNX RT Execution Provider for the OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.
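A migration sketch, assuming an ONNX Runtime build compiled with the OpenVINO™ Execution Provider and a placeholder `model.onnx` (the `providers` constructor argument is available in recent ONNX Runtime releases):

```python
import numpy as np
import onnxruntime as ort

# List the execution providers available in this onnxruntime build;
# "OpenVINOExecutionProvider" appears only in builds compiled with OpenVINO support.
print(ort.get_available_providers())

# Prefer the OpenVINO EP and fall back to the default CPU provider
# for any node the OpenVINO EP does not support.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's first input (dynamic dims -> 1).
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {input_meta.name: dummy})
```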
Notable Model Optimizer changes in this release include:

- Use the `--disable_weights_compression` Model Optimizer command-line parameter to get an expanded version of the model with uncompressed weights.
- Fusing of the sub-graph with the `Erf` operation into the `GeLU` operation (see the sketch after this list).
- Fusing of sequential `Concat` operations to a single `Concat` operation.
- Operations including `ReorgYolo` became a part of the new `opset2` operation set and are generated with `version="opset2"`. Before this fix, these operations were generated with `version="opset1"` by mistake; they were not a part of the `opset1` specification, which was fixed accordingly.
- Support for `MeanVarianceNormalization` if normalization is performed over spatial dimensions.
- Support for `Reshape` with input shape values equal to -2, -3, and -4.
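For background on the `Erf` fusion above: GeLU is defined through the Gaussian error function as GELU(x) = 0.5 · x · (1 + erf(x/√2)), so a sub-graph that computes this expression element-wise is equivalent to a single GeLU node. A minimal Python sketch of the identity (illustrative only, not the Model Optimizer's code):

```python
import math

def gelu(x):
    # Exact GELU definition in terms of the Gaussian error function:
    #   GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# Spot-check a few values; a framework graph that computes this
# expression via an explicit Erf node can be collapsed into one
# GeLU operation without changing the result.
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"gelu({x:+.1f}) = {gelu(x):+.6f}")
```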
NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit