Converting a Model to Intermediate Representation (IR)

Use the mo.py script from the <INSTALL_DIR>/deployment_tools/model_optimizer directory to run the Model Optimizer and convert the model to the Intermediate Representation (IR). The simplest way to convert a model is to run the script with a path to the input model file:

python3 mo.py --input_model INPUT_MODEL

NOTE: Some models require using additional arguments to specify conversion parameters, such as --scale, --scale_values, --mean_values, --mean_file. To learn about when you need to use these parameters, refer to Converting a Model Using General Conversion Parameters.
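As an illustration, a conversion that also normalizes inputs might look like the following. The model path and the mean/scale values here are hypothetical placeholders, not defaults; the correct values depend on how your model was trained.

```shell
# Hypothetical example: convert a Caffe model while subtracting
# per-channel mean values and rescaling the input.
# The path and numeric values are illustrative only.
python3 mo.py --input_model /user/models/model.caffemodel \
    --mean_values [123.68,116.78,103.94] \
    --scale 255
```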

The script is a universal entry point that can deduce the framework that produced the input model from the model file's standard extension:

  • .caffemodel - Caffe* models
  • .pb - TensorFlow* models
  • .params - MXNet* models
  • .onnx - ONNX* models
  • .nnet - Kaldi* models
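The extension-based deduction above can be sketched as follows. This is a minimal illustration of the mapping, not the Model Optimizer's actual implementation:

```python
import os

# Standard model-file extensions mapped to framework names,
# mirroring the list above.
EXT_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path):
    """Return the framework name deduced from the file extension,
    or None when the extension is non-standard (in which case the
    --framework option must be passed explicitly)."""
    ext = os.path.splitext(model_path)[1].lower()
    return EXT_TO_FRAMEWORK.get(ext)

print(deduce_framework("/user/models/model.pb"))  # tf
```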

If the model files do not have standard extensions, you can use the --framework {tf,caffe,kaldi,onnx,mxnet} option to specify the framework type explicitly.

For example, the following commands are equivalent:

python3 mo.py --input_model /user/models/model.pb
python3 mo.py --framework tf --input_model /user/models/model.pb

To adjust the conversion process, you can use the general parameters described in Converting a Model Using General Conversion Parameters, as well as framework-specific parameters.

See Also