A summary of the steps for optimizing and deploying a model that was trained with Caffe*:
NOTE: It is necessary to specify mean and scale values for most of the Caffe* models to convert them with the Model Optimizer. The exact values should be determined separately for each model. For example, for Caffe* models trained on ImageNet, the mean values usually are `123.68, 116.779, 103.939` for blue, green and red channels respectively. The scale value is usually `127.5`. Refer to Framework-agnostic parameters for information on how to specify mean and scale values.
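As a sketch, assuming the Model Optimizer flags `--mean_values` and `--scale` and an illustrative model file name, a conversion command for such an ImageNet-trained model might look like this:

```shell
# Subtract the usual per-channel ImageNet means and divide by the usual scale.
# The model file name is illustrative; values are quoted so the shell does not
# treat the brackets as a glob pattern.
python3 mo.py --input_model squeezenet.caffemodel \
    --mean_values "[123.68,116.779,103.939]" \
    --scale 127.5
```

The values are baked into the Intermediate Representation, so the same normalization does not need to be repeated at inference time.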
To convert a Caffe* model:

1. Go to the `<INSTALL_DIR>/deployment_tools/model_optimizer` directory.
2. Use the `mo.py` script to simply convert a model, specifying the path to the input model `.caffemodel` file.

Two groups of parameters are available to convert your model:
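The two steps above can be sketched as follows, keeping the document's own `<INSTALL_DIR>` and `<INPUT_MODEL>` placeholders:

```shell
# Step 1: go to the Model Optimizer directory.
cd <INSTALL_DIR>/deployment_tools/model_optimizer

# Step 2: convert the model by pointing mo.py at the .caffemodel file.
python3 mo.py --input_model <INPUT_MODEL>.caffemodel
```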
The following list provides the Caffe*-specific parameters:

- Path to the `.prototxt` file. This is needed when the name of the Caffe* model and the `.prototxt` file are different, or when they are placed in different directories. Otherwise, it is enough to provide only the path to the input model `.caffemodel` file.
- Path to the `CustomLayersMapping` file. This is the legacy method of quickly enabling model conversion if your model has custom layers. It requires system Caffe* on the computer. To read more about this, see Legacy Mode for Caffe* Custom Layers.

Optional parameters without default values that are not specified by the user in the `.prototxt` file are removed from the Intermediate Representation, and nested parameters are flattened.

This example shows a multi-input model with input layers `data` and `rois`. For `data`, set the shape to `1,3,227,227`. For `rois`, set the shape to `1,6,1,1`:
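A sketch of such a multi-input conversion, assuming the Model Optimizer flags `--input` and `--input_shape` and an illustrative model file name:

```shell
# Name both inputs and give each its shape, in the same order; the shape list
# is quoted so the shell does not interpret the parentheses.
python3 mo.py --input_model <INPUT_MODEL>.caffemodel \
    --input data,rois \
    --input_shape "(1,3,227,227),(1,6,1,1)"
```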
Internally, when you run the Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Custom layers are layers that are not in this list. If your topology contains any such layers, the Model Optimizer classifies them as custom.
Refer to Supported Framework Layers for the list of supported standard layers.
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other problems. Each message describes the potential cause of the problem and gives a link to the Model Optimizer FAQ, which has instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.
In this document, you learned: