NOTE: The SSD models in the table must be converted to deploy mode. For details, see the Conversion Instructions in the GitHub MXNet-SSD repository.
|Model Name|Model File|
|---|---|
|VGG-16|Repo, Symbol, Params|
|VGG-19|Repo, Symbol, Params|
|ResNet-152 v1|Repo, Symbol, Params|
|SqueezeNet_v1.1|Repo, Symbol, Params|
|Inception BN|Repo, Symbol, Params|
|CaffeNet|Repo, Symbol, Params|
|DenseNet-121|Repo, Symbol, Params|
|DenseNet-161|Repo, Symbol, Params|
|DenseNet-169|Repo, Symbol, Params|
|DenseNet-201|Repo, Symbol, Params|
|MobileNet|Repo, Symbol, Params|
|SSD-ResNet-50|Repo, Symbol + Params|
|SSD-VGG-16-300|Repo, Symbol + Params|
|SSD-Inception v3|Repo, Symbol + Params|
|FCN8 (Semantic Segmentation)|Repo, Symbol, Params|
|MTCNN part 1 (Face Detection)|Repo, Symbol, Params|
|MTCNN part 2 (Face Detection)|Repo, Symbol, Params|
|MTCNN part 3 (Face Detection)|Repo, Symbol, Params|
|MTCNN part 4 (Face Detection)|Repo, Symbol, Params|
|Lightened_moon|Repo, Symbol, Params|
Other supported topologies
To convert an MXNet* model saved as `model-file-0000.params`, run the Model Optimizer launch script `mo.py`, specifying a path to the input model file:
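A minimal invocation might look as follows, assuming the Model Optimizer is installed and `mo.py` is reachable from the current directory; the model file name is a placeholder:

```shell
# Convert an MXNet model to OpenVINO IR; by default the .xml and .bin
# files are written to the current working directory.
python3 mo.py --input_model model-file-0000.params
```

Additional common parameters such as `--input_shape` or `--output_dir` can be appended to the same command line.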
Two groups of parameters are available to convert your model: framework-agnostic parameters, which are common to all supported frameworks, and MXNet*-specific parameters.
The following list provides the MXNet*-specific parameters.
NOTE: By default, the Model Optimizer does not use the MXNet loader. The loader transforms a topology into a format compatible with the latest version of MXNet, and it is required for models trained with older versions of MXNet. If your model was trained with an MXNet version lower than 1.0.0, specify the
--legacy_mxnet_model key to enable the MXNet loader. However, the loader does not support models with custom layers. In this case, you must manually recompile MXNet with the custom layers and install it in your environment.
Internally, when you run the Model Optimizer, it loads the model, goes through the topology, and tries to find each layer type in a list of known layers. Any layer in your topology that is not in this list is classified as custom.
Refer to Supported Framework Layers for the list of supported standard layers.
The Model Optimizer provides explanatory messages if it is unable to run to completion due to typographical errors, incorrectly used options, or other problems. Each message describes the potential cause of the problem and gives a link to the Model Optimizer FAQ, which has instructions on how to resolve most issues, along with links to relevant sections of the Model Optimizer Developer Guide.
In this document, you learned: