Model Optimizer Developer Guide

Introduction

Model Optimizer is a cross-platform command-line tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on endpoint target devices.

The Model Optimizer process assumes you have a network model trained using one of the supported deep learning frameworks: Caffe*, TensorFlow*, Kaldi*, or MXNet*; alternatively, the model can be converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the Inference Engine.

NOTE: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.

The typical workflow for deploying a trained deep learning model is: train the model in one of the supported frameworks, convert it to an IR with Model Optimizer, and run inference on the IR with the Inference Engine.

The IR is a pair of files describing the model, consumed together at inference time (see the sketch after this list):

  • .xml - Describes the network topology
  • .bin - Contains the weights and biases binary data
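As a minimal sketch of how the IR pair is used downstream, the Python lines below load an IR with the Inference Engine API and prepare it for execution. The file names model.xml and model.bin and the CPU target are illustrative assumptions, and the exact API may vary between OpenVINO releases.

from openvino.inference_engine import IECore  # Inference Engine Python API

ie = IECore()
# Both IR files are loaded together: topology from the .xml, weights from the .bin
net = ie.read_network(model="model.xml", weights="model.bin")  # illustrative file names
# Compile the network for a target device; "CPU" is an assumption here
exec_net = ie.load_network(network=net, device_name="CPU")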

Below is a simple command running Model Optimizer to generate an IR for the input model:

python3 mo.py --input_model INPUT_MODEL
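As a more concrete illustration, the command below converts a hypothetical TensorFlow frozen graph to an IR with FP16 weights. The file name frozen_graph.pb, the input shape, and the output directory are illustrative assumptions; --input_shape, --data_type, and --output_dir are standard Model Optimizer parameters.

python3 mo.py --input_model frozen_graph.pb --input_shape [1,224,224,3] --data_type FP16 --output_dir ./ir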

To learn about all Model Optimizer parameters and conversion techniques, see the Converting a Model to IR page.

TIP: You can get a quick start with Model Optimizer inside the OpenVINO™ Deep Learning Workbench (DL Workbench). DL Workbench is the OpenVINO™ toolkit UI that enables you to import a model, analyze its performance and accuracy, visualize the outputs, and optimize and prepare the model for deployment on various Intel® platforms.

Videos

  • Model Optimizer Concept. Duration: 3:56
  • Model Optimizer Basic Operation. Duration: 2:57
  • Choosing the Right Precision. Duration: 4:18