Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
The Model Optimizer process assumes you have a network model trained using a supported deep learning framework. The typical workflow for deploying a trained deep learning model is: train a model in a supported framework, convert it with the Model Optimizer to an Intermediate Representation, and run inference on the target device with the Inference Engine.
The Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model (see the sketch after the list):
- `.xml` - Describes the network topology
- `.bin` - Contains the weights and biases binary data
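As a minimal sketch of this workflow (the model file `model.onnx`, the output directory, and the device name are placeholders; `mo.py` with `--input_model` and `--output_dir`, and the `IECore` Python API, are the standard OpenVINO entry points), conversion and loading can look like this:

```python
import subprocess
from openvino.inference_engine import IECore

# Step 1: convert the trained model to IR with the Model Optimizer.
# "model.onnx" and the output directory are placeholders; mo.py is
# located in the Model Optimizer installation directory.
subprocess.run(
    ["python3", "mo.py",
     "--input_model", "model.onnx",
     "--output_dir", "ir"],
    check=True,
)

# Step 2: the conversion produced ir/model.xml (topology) and
# ir/model.bin (weights and biases); read and load them with the
# Inference Engine API.
ie = IECore()
net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```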
You can use the `--input` command line parameter to specify shapes and values for freezing arbitrary nodes (not only model inputs).
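As a hedged illustration (the model file and node names are hypothetical, and the exact value-freezing syntax can differ between Model Optimizer releases), a shape can be fixed and a node frozen to a constant in a single `--input` argument:

```python
import subprocess

# Fix the shape of input "x" and freeze node "y" to the constant 1.5.
# "model.onnx", "x", and "y" are hypothetical; the "node->value" form
# follows the Model Optimizer freezing pattern, but check the syntax
# for your release.
subprocess.run(
    ["python3", "mo.py",
     "--input_model", "model.onnx",
     "--input", "x[1 10],y->1.5",
     "--output_dir", "ir"],
    check=True,
)
```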
Note that certain topology-specific layers (like DetectionOutput used in SSD*) and several general-purpose layers (like Squeeze and Unsqueeze) are now delivered in source code, which assumes that the [extensions library](./inference-engine/src/extension/README.md) is compiled and loaded. The extensions are also required for inference of the pre-trained models. Refer to the complete list of layers that require the extensions library.
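A minimal sketch of loading the extensions before reading a network, assuming the Python Inference Engine API; the library file name is a placeholder that depends on your platform and build:

```python
from openvino.inference_engine import IECore

ie = IECore()
# "libcpu_extension.so" is a placeholder: substitute the name of the
# compiled extensions library for your platform and build.
ie.add_extension("libcpu_extension.so", "CPU")

# Layers such as DetectionOutput, Squeeze, and Unsqueeze now resolve
# against the loaded extensions when the network is read and loaded.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```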
NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit