Converting a TensorFlow* Model

A summary of the steps for optimizing and deploying a model that was trained with the TensorFlow* framework:

  1. Configure the Model Optimizer for TensorFlow* (TensorFlow was used to train your model).
  2. Freeze the TensorFlow model if your model is not already frozen, or skip this step and use the instructions to convert a non-frozen model.
  3. Convert a TensorFlow* model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and biases values.
  4. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via provided sample applications.
  5. Integrate the Inference Engine in your application to deploy the model in the target environment.

Supported Topologies

Supported Non-Frozen Topologies with Links to the Associated Slim Model Classification Download Files

Detailed information on how to convert models from the TensorFlow*-Slim Image Classification Model Library is available in the Converting TensorFlow*-Slim Image Classification Model Library Models chapter. The table below contains a list of supported TensorFlow*-Slim Image Classification Model Library models and the required mean/scale values (an example command follows the table). The mean values are specified as if the input image is read in BGR channel order, as the Inference Engine classification sample does.

| Model Name | Slim Model Checkpoint File | --mean_values | --scale |
|------------|----------------------------|---------------|---------|
| Inception v1 | inception_v1_2016_08_28.tar.gz | [127.5,127.5,127.5] | 127.5 |
| Inception v2 | inception_v2_2016_08_28.tar.gz | [127.5,127.5,127.5] | 127.5 |
| Inception v3 | inception_v3_2016_08_28.tar.gz | [127.5,127.5,127.5] | 127.5 |
| Inception v4 | inception_v4_2016_09_09.tar.gz | [127.5,127.5,127.5] | 127.5 |
| Inception ResNet v2 | inception_resnet_v2_2016_08_30.tar.gz | [127.5,127.5,127.5] | 127.5 |
| MobileNet v1 128 | mobilenet_v1_0.25_128.tgz | [127.5,127.5,127.5] | 127.5 |
| MobileNet v1 160 | mobilenet_v1_0.5_160.tgz | [127.5,127.5,127.5] | 127.5 |
| MobileNet v1 224 | mobilenet_v1_1.0_224.tgz | [127.5,127.5,127.5] | 127.5 |
| NasNet Large | nasnet-a_large_04_10_2017.tar.gz | [127.5,127.5,127.5] | 127.5 |
| NasNet Mobile | nasnet-a_mobile_04_10_2017.tar.gz | [127.5,127.5,127.5] | 127.5 |
| ResidualNet-50 v1 | resnet_v1_50_2016_08_28.tar.gz | [103.94,116.78,123.68] | 1 |
| ResidualNet-50 v2 | resnet_v2_50_2017_04_14.tar.gz | [103.94,116.78,123.68] | 1 |
| ResidualNet-101 v1 | resnet_v1_101_2016_08_28.tar.gz | [103.94,116.78,123.68] | 1 |
| ResidualNet-101 v2 | resnet_v2_101_2017_04_14.tar.gz | [103.94,116.78,123.68] | 1 |
| ResidualNet-152 v1 | resnet_v1_152_2016_08_28.tar.gz | [103.94,116.78,123.68] | 1 |
| ResidualNet-152 v2 | resnet_v2_152_2017_04_14.tar.gz | [103.94,116.78,123.68] | 1 |
| VGG-16 | vgg_16_2016_08_28.tar.gz | [103.94,116.78,123.68] | 1 |
| VGG-19 | vgg_19_2016_08_28.tar.gz | [103.94,116.78,123.68] | 1 |
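
For example, the mean/scale values from the table are passed to the Model Optimizer as shown below. This is a minimal sketch for Inception v1 in which the inference graph and checkpoint file names are placeholders; the full procedure for Slim models, including how to obtain the inference graph, is described in the dedicated chapter:

  python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT> --mean_values [127.5,127.5,127.5] --scale 127.5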

Supported Frozen Topologies from TensorFlow Object Detection Models Zoo

Detailed information on how to convert models from the Object Detection Models Zoo is available in the Converting TensorFlow Object Detection API Models chapter. The table below lists the supported models from the Object Detection Models Zoo; an example conversion command follows the table.

| Model Name | TensorFlow Object Detection API Models (Frozen) |
|------------|--------------------------------------------------|
| SSD MobileNet V1 COCO* | ssd_mobilenet_v1_coco_2018_01_28.tar.gz |
| SSD MobileNet V1 0.75 Depth COCO | ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03.tar.gz |
| SSD MobileNet V1 PPN COCO | ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03.tar.gz |
| SSD MobileNet V1 FPN COCO | ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz |
| SSD ResNet50 FPN COCO | ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz |
| SSD MobileNet V2 COCO | ssd_mobilenet_v2_coco_2018_03_29.tar.gz |
| SSD Lite MobileNet V2 COCO | ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz |
| SSD Inception V2 COCO | ssd_inception_v2_coco_2018_01_28.tar.gz |
| RFCN ResNet 101 COCO | rfcn_resnet101_coco_2018_01_28.tar.gz |
| Faster R-CNN Inception V2 COCO | faster_rcnn_inception_v2_coco_2018_01_28.tar.gz |
| Faster R-CNN ResNet 50 COCO | faster_rcnn_resnet50_coco_2018_01_28.tar.gz |
| Faster R-CNN ResNet 50 Low Proposals COCO | faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz |
| Faster R-CNN ResNet 101 COCO | faster_rcnn_resnet101_coco_2018_01_28.tar.gz |
| Faster R-CNN ResNet 101 Low Proposals COCO | faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz |
| Faster R-CNN Inception ResNet V2 COCO | faster_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz |
| Faster R-CNN Inception ResNet V2 Low Proposals COCO | faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz |
| Faster R-CNN NasNet COCO | faster_rcnn_nas_coco_2018_01_28.tar.gz |
| Faster R-CNN NasNet Low Proposals COCO | faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz |
| Mask R-CNN Inception ResNet V2 COCO | mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28.tar.gz |
| Mask R-CNN Inception V2 COCO | mask_rcnn_inception_v2_coco_2018_01_28.tar.gz |
| Mask R-CNN ResNet 101 COCO | mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz |
| Mask R-CNN ResNet 50 COCO | mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz |
| Faster R-CNN ResNet 101 Kitti* | faster_rcnn_resnet101_kitti_2018_01_28.tar.gz |
| Faster R-CNN Inception ResNet V2 Open Images* | faster_rcnn_inception_resnet_v2_atrous_oid_2018_01_28.tar.gz |
| Faster R-CNN Inception ResNet V2 Low Proposals Open Images* | faster_rcnn_inception_resnet_v2_atrous_lowproposals_oid_2018_01_28.tar.gz |
| Faster R-CNN ResNet 101 AVA v2.1* | faster_rcnn_resnet101_ava_v2.1_2018_04_30.tar.gz |
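
The archives in this table ship a frozen graph (frozen_inference_graph.pb) together with a pipeline.config file. A minimal conversion sketch using the parameters documented later in this chapter follows; the Converting TensorFlow Object Detection API Models chapter describes the full set of required options:

  python3 mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels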

Supported Frozen Quantized Topologies

The following topologies hosted on the TensorFlow* Lite site are supported. The frozen model file (.pb file) should be fed to the Model Optimizer.

| Model Name | Frozen Model File |
|------------|-------------------|
| Mobilenet V1 0.25 128 | mobilenet_v1_0.25_128_quant.tgz |
| Mobilenet V1 0.25 160 | mobilenet_v1_0.25_160_quant.tgz |
| Mobilenet V1 0.25 192 | mobilenet_v1_0.25_192_quant.tgz |
| Mobilenet V1 0.25 224 | mobilenet_v1_0.25_224_quant.tgz |
| Mobilenet V1 0.50 128 | mobilenet_v1_0.5_128_quant.tgz |
| Mobilenet V1 0.50 160 | mobilenet_v1_0.5_160_quant.tgz |
| Mobilenet V1 0.50 192 | mobilenet_v1_0.5_192_quant.tgz |
| Mobilenet V1 0.50 224 | mobilenet_v1_0.5_224_quant.tgz |
| Mobilenet V1 0.75 128 | mobilenet_v1_0.75_128_quant.tgz |
| Mobilenet V1 0.75 160 | mobilenet_v1_0.75_160_quant.tgz |
| Mobilenet V1 0.75 192 | mobilenet_v1_0.75_192_quant.tgz |
| Mobilenet V1 0.75 224 | mobilenet_v1_0.75_224_quant.tgz |
| Mobilenet V1 1.0 128 | mobilenet_v1_1.0_128_quant.tgz |
| Mobilenet V1 1.0 160 | mobilenet_v1_1.0_160_quant.tgz |
| Mobilenet V1 1.0 192 | mobilenet_v1_1.0_192_quant.tgz |
| Mobilenet V1 1.0 224 | mobilenet_v1_1.0_224_quant.tgz |
| Mobilenet V2 1.0 224 | mobilenet_v2_1.0_224_quant.tgz |
| Inception V1 | inception_v1_224_quant_20181026.tgz |
| Inception V2 | inception_v2_224_quant_20181026.tgz |
| Inception V3 | inception_v3_quant.tgz |
| Inception V4 | inception_v4_299_quant_20181026.tgz |

To convert some of the models from the list above, it is necessary to specify the following Model Optimizer command-line parameters: --input input --input_shape [1,HEIGHT,WIDTH,3], where HEIGHT and WIDTH are the height and width of the input images for which the model was trained.
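For example, a Mobilenet V1 1.0 224 model might be converted as follows. The name of the frozen .pb file inside the archive is an assumption here; check the extracted archive for the actual file name:

  python3 mo_tf.py --input_model mobilenet_v1_1.0_224_quant_frozen.pb --input input --input_shape [1,224,224,3]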

Other Supported Topologies

| Model Name | Repository |
|------------|------------|
| ResNext | Repo |
| DenseNet | Repo |
| CRNN | Repo |
| NCF | Repo |
| lm_1b | Repo |
| DeepSpeech | Repo |
| A3C | Repo |
| VDCNN | Repo |
| Unet | Repo |
| Keras-TCN | Repo |

Loading Non-Frozen Models to the Model Optimizer

There are three ways to store non-frozen TensorFlow models and load them to the Model Optimizer:

  1. Checkpoint:

    In this case, a model consists of two files:

    • inference_graph.pb or inference_graph.pbtxt
    • checkpoint_file.ckpt

    If you do not have an inference graph file, refer to Freezing Custom Models in Python.

    To convert such a TensorFlow model:

    1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
    2. Run the mo_tf.py script with the path to the checkpoint file to convert a model:
    • If the input model is in .pb format:
      python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pb --input_checkpoint <INPUT_CHECKPOINT>
    • If the input model is in .pbtxt format:
      python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pbtxt --input_checkpoint <INPUT_CHECKPOINT> --input_model_is_text
  2. MetaGraph:

    In this case, a model consists of three or four files stored in the same directory:

    • model_name.meta
    • model_name.index
    • model_name.data-00000-of-00001 (digit part may vary)
    • checkpoint (optional)

    To convert such a TensorFlow model:

    1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
    2. Run the mo_tf.py script with a path to the MetaGraph .meta file to convert a model:
      python3 mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta
  3. SavedModel:

    In this case, a model consists of a special directory with a .pb file and several subfolders: variables, assets, and assets.extra. For more information about the SavedModel directory, refer to the README file in the TensorFlow repository.

    To convert such a TensorFlow model:

    1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
    2. Run the mo_tf.py script with a path to the SavedModel directory to convert a model:
      python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
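
For reference, the files consumed by the three flows above are produced by standard TensorFlow* 1.x APIs. A minimal sketch with a toy graph follows; all names are illustrative:

  import tensorflow as tf

  with tf.Session() as sess:
      x = tf.placeholder(tf.float32, [1, 3], name="input")
      w = tf.Variable(tf.ones([3, 2]), name="weights")
      y = tf.matmul(x, w, name="output")
      sess.run(tf.global_variables_initializer())
      # Checkpoint/MetaGraph case: writes model_name.meta, model_name.index,
      # and model_name.data-* files next to an optional 'checkpoint' file
      tf.train.Saver().save(sess, './model_name')
      # SavedModel case: writes a directory with saved_model.pb and a variables subfolder
      tf.saved_model.simple_save(sess, './saved_model',
                                 inputs={'input': x}, outputs={'output': y})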

Freezing Custom Models in Python*

When a network is defined in Python* code, you have to create an inference graph file. Graphs are usually built in a form that allows model training, which means that all trainable parameters are represented as variables in the graph. To use such a graph with the Model Optimizer, the graph should be frozen first. The graph is frozen and dumped to a file with the following code:

import tensorflow as tf
from tensorflow.python.framework import graph_io

# Replace all variables reachable from the listed output nodes with constants
frozen = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["name_of_the_output_node"])
# Serialize the frozen GraphDef to a binary protobuf file
graph_io.write_graph(frozen, './', 'inference_graph.pb', as_text=False)

Where:

  • sess is the instance of the TensorFlow* Session object where the network topology is defined.
  • ["name_of_the_output_node"] is the list of output node names in the graph; the frozen graph will include only those nodes of the original graph that are directly or indirectly used to compute the given output nodes.
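
The resulting inference_graph.pb is a frozen model and can be converted directly, as described in the next section:

  python3 mo_tf.py --input_model inference_graph.pb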

Convert a TensorFlow* Model

To convert a TensorFlow model:

  1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
  2. Run the mo_tf.py script with the path to the input model .pb file to convert a model:
    python3 mo_tf.py --input_model <INPUT_MODEL>.pb

Two groups of parameters are available to convert your model: framework-agnostic parameters, which are used to convert a model trained with any supported framework and are described in Converting a Model Using General Conversion Parameters, and TensorFlow*-specific parameters, which are described below.

NOTE: The color channel order (RGB or BGR) of the input data should match the channel order of the model training dataset. If they differ, perform the RGB<->BGR conversion by specifying the command-line parameter --reverse_input_channels. Otherwise, inference results may be incorrect. For more information about the parameter, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
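
For example, if the model was trained on RGB images but the deployed input pipeline supplies BGR images (or vice versa), add the flag to the basic conversion command; the model file name is a placeholder:

  python3 mo_tf.py --input_model <INPUT_MODEL>.pb --reverse_input_channels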

Using TensorFlow*-Specific Conversion Parameters

The following list provides the TensorFlow*-specific parameters.

TensorFlow*-specific parameters:
  --input_model_is_text
                        TensorFlow*: treat the input model file as a text
                        protobuf format. If not specified, the Model Optimizer
                        treats it as a binary file by default.
  --input_checkpoint INPUT_CHECKPOINT
                        TensorFlow*: variables file to load.
  --input_meta_graph INPUT_META_GRAPH
                        TensorFlow*: a file with a meta-graph of the model
                        before freezing.
  --saved_model_dir SAVED_MODEL_DIR
                        TensorFlow*: directory with a non-frozen SavedModel.
  --saved_model_tags SAVED_MODEL_TAGS
                        Group of tag(s) of the MetaGraphDef to load, in string
                        format, separated by ','. If a tag-set contains
                        multiple tags, all of them must be passed in.
  --tensorflow_custom_operations_config_update TENSORFLOW_CUSTOM_OPERATIONS_CONFIG_UPDATE
                        TensorFlow*: update the configuration file with node
                        name patterns containing input/output node information.
  --tensorflow_object_detection_api_pipeline_config TENSORFLOW_OBJECT_DETECTION_API_PIPELINE_CONFIG
                        TensorFlow*: path to the pipeline configuration file
                        used to generate a model created with the Object
                        Detection API.
  --tensorboard_logdir TENSORBOARD_LOGDIR
                        TensorFlow*: dump the input graph to a given directory
                        that should be used with TensorBoard.
  --tensorflow_custom_layer_libraries TENSORFLOW_CUSTOM_LAYER_LIBRARIES
                        TensorFlow*: comma-separated list of shared libraries
                        with TensorFlow* custom operations implementation.
  --disable_nhwc_to_nchw
                        Disables the default translation from NHWC to NCHW.

NOTE: Models produced with TensorFlow* usually have shapes that are not fully defined (some dimensions contain -1). In this case, it is necessary to pass an explicit shape for the input using the --input_shape command-line parameter, or -b to override just the batch dimension. If the shape is fully defined, there is no need to specify either the -b or the --input_shape option.
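
For example, either a full input shape or just the batch dimension can be supplied; the model file name is a placeholder:

  python3 mo_tf.py --input_model <INPUT_MODEL>.pb --input_shape [1,224,224,3]
  python3 mo_tf.py --input_model <INPUT_MODEL>.pb -b 1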

Command-Line Interface (CLI) Examples Using TensorFlow*-Specific Parameters
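
The following commands illustrate typical uses of the TensorFlow*-specific parameters described above; all file names are placeholders:

  • Launching the Model Optimizer for a model with a text protobuf inference graph:
    python3 mo_tf.py --input_model <INFERENCE_GRAPH>.pbtxt --input_model_is_text
  • Launching the Model Optimizer for a model stored as a MetaGraph:
    python3 mo_tf.py --input_meta_graph <INPUT_META_GRAPH>.meta
  • Launching the Model Optimizer for a SavedModel directory while dumping the input graph for TensorBoard:
    python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY> --tensorboard_logdir <LOG_DIR>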

Custom Layer Definition

Internally, when you run the Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not included in the list of known layers. If your topology contains any layers that are not in this list of known layers, the Model Optimizer classifies them as custom.

See Custom Layers in the Model Optimizer for more information about working with custom layers.

Supported TensorFlow* Layers

Refer to Supported Framework Layers for the list of supported standard layers.

Frequently Asked Questions (FAQ)

The Model Optimizer provides explanatory messages if it is unable to run to completion due to issues like typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the Model Optimizer FAQ. The FAQ has instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.

Summary

In this document, you learned:

  • Basic information about how the Model Optimizer works with TensorFlow* models
  • Which TensorFlow* models are supported
  • How to freeze a TensorFlow* model
  • How to convert a trained TensorFlow* model using the Model Optimizer with both framework-agnostic and TensorFlow*-specific command-line options