ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks and choose the combination of tools that works best for them. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit* and other tools are developing ONNX support.
| Model Name | Path to Public Models master branch |
|---|---|
The listed models are built with operation set version 8. Models upgraded to higher operation set versions may not be supported.
Starting from the R4 release, the OpenVINO™ toolkit officially supports public PyTorch* models (from the torchvision 0.2.1 and pretrainedmodels 0.7.4 packages) via ONNX conversion. The list of supported topologies is presented below:
| Package Name | Supported Models |
|---|---|
| Torchvision Models | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
| Pretrained Models | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |
| ESPNet Models | |
Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion. The list of supported topologies is presented below:
| Model Name | Path to Model Code |
|---|---|
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model, use the `mo.py` script to simply convert a model, specifying the path to the input `.onnx` file.
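A minimal invocation sketch (`<INPUT_MODEL>` is a placeholder for your model file name, and the script is assumed to be run from the Model Optimizer directory):

```shell
python3 mo.py --input_model <INPUT_MODEL>.onnx
```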
There are no ONNX*-specific parameters, so only framework-agnostic parameters are available to convert your model.
Refer to Supported Framework Layers for the list of supported standard layers.