Model Optimizer Developer Guide

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer process assumes that you have a network model trained using a supported deep learning framework. The diagram below illustrates the typical workflow for deploying a trained deep learning model:

[Figure: workflow_steps.png — typical workflow for deploying a trained deep learning model]
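For concreteness, the conversion step in this workflow could look like the following sketch, which drives Model Optimizer from Python. The input model file name and the output directory are assumptions for illustration; `--input_model` and `--output_dir` are standard Model Optimizer options, and the command is assumed to run from the directory that contains `mo.py`.

```python
# Minimal sketch: invoking Model Optimizer on a trained model to produce an IR.
import subprocess

subprocess.run(
    [
        "python3", "mo.py",           # run from the model_optimizer directory
        "--input_model", "model.pb",  # trained model from a supported framework
        "--output_dir", "ir",         # where model.xml / model.bin will be written
    ],
    check=True,
)
```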

Model Optimizer produces an Intermediate Representation (IR) of the network, which can be read, loaded, and inferred with the Inference Engine. The Inference Engine API offers a unified API across a number of supported Intel® platforms. The Intermediate Representation is a pair of files describing the model: an .xml file that describes the network topology, and a .bin file that contains the weights and biases binary data.
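As a minimal sketch (not an official sample), the resulting IR pair could be read and inferred with the Inference Engine Python API roughly as follows. The file names, the dummy input, and the "CPU" device choice are assumptions, and the exact class and method names depend on the OpenVINO release in use.

```python
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()

# The IR produced by Model Optimizer is a pair of files:
# model.xml (topology) and model.bin (weights and biases).
net = IENetwork(model="ir/model.xml", weights="ir/model.bin")

input_blob = next(iter(net.inputs))    # name of the first input layer
output_blob = next(iter(net.outputs))  # name of the first output layer

# Load the network onto a supported target device.
exec_net = ie.load_network(network=net, device_name="CPU")

# Dummy input with the declared input shape, just to demonstrate the call.
shape = net.inputs[input_blob].shape
result = exec_net.infer(inputs={input_blob: np.zeros(shape, dtype=np.float32)})
print(result[output_blob].shape)
```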

What's New in the Model Optimizer in this Release?

Note that certain topology-specific layers (such as DetectionOutput used in SSD*) and several general-purpose layers (such as Squeeze and Unsqueeze) are now delivered in source code, which means that the extensions library must be compiled and loaded. The extensions library is also required to run inference with the pre-trained models. Refer to the complete list of layers that require the extensions library.
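As a hedged illustration, registering a compiled extensions library with the Inference Engine before loading such a network could look like the sketch below. The library path is an assumption; the actual file name and location depend on how and where the extensions were built.

```python
from openvino.inference_engine import IECore

ie = IECore()

# Register the compiled CPU extensions library so that layers implemented
# there (for example, DetectionOutput) can be resolved on the CPU device.
ie.add_extension(extension_path="/path/to/libcpu_extension.so", device_name="CPU")

# ...then read the IR and call ie.load_network(...) as usual.
```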

NOTE: Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.

Table of Contents

Typical Next Step: Introduction to Intel® Deep Learning Deployment Toolkit