ONNX* is a representation format for deep learning models. ONNX allows AI developers to easily transfer models between different frameworks, which helps them choose the best combination of tools for their tasks. Today, PyTorch*, Caffe2*, Apache MXNet*, Microsoft Cognitive Toolkit*, and other tools are developing ONNX support.
| Model Name | Path to Public Models master branch |
Listed models are built with operation set version 8, except the GPT-2 model, which uses version 10. Models upgraded to higher operation set versions may not be supported.
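If you are not sure which operation set version a downloaded model uses, you can inspect it with the onnx Python package before conversion. The snippet below is a minimal sketch; model.onnx is a placeholder path and the onnx package is assumed to be installed.

```sh
# Print the operation set (opset) version of an ONNX model.
# 'model.onnx' is a placeholder path; requires the onnx Python package.
python3 -c "import onnx; m = onnx.load('model.onnx'); print(m.opset_import[0].version)"
```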
Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion. The list of supported topologies downloadable from PaddleHub is presented below:
| Model Name | Command to download the model from PaddleHub |
NOTE: To convert a model downloaded from PaddleHub, use the paddle2onnx converter.
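For example, a conversion command might look like the following. This is a hedged sketch: the flag names follow recent paddle2onnx releases and may differ in older versions, and the paths are placeholders.

```sh
# Convert a serialized PaddlePaddle inference model to ONNX.
# Flag names follow recent paddle2onnx releases; paths are placeholders.
paddle2onnx --model_dir ./inference_model \
            --save_file model.onnx \
            --opset_version 10
```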
The list of supported topologies from the models v1.5 package:
NOTE: To convert these topologies one should first serialize the model by calling
The Model Optimizer process assumes you have an ONNX model that was directly downloaded from a public repository or converted from any framework that supports exporting to the ONNX format.
To convert an ONNX* model:
Use the mo.py script to simply convert a model, specifying the path to the input model .onnx file and an output directory where you have write permissions.
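For example, a minimal invocation looks like this, where <INPUT_MODEL> and <OUTPUT_MODEL_DIR> are placeholders and --input_model and --output_dir are standard Model Optimizer options:

```sh
python3 mo.py --input_model <INPUT_MODEL>.onnx --output_dir <OUTPUT_MODEL_DIR>
```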
There are no ONNX*-specific parameters; only framework-agnostic parameters are available to convert your model.
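For instance, common framework-agnostic options such as --input_shape, --data_type, and --model_name can be added to the basic command above. The sketch below uses placeholder values.

```sh
# Example with framework-agnostic Model Optimizer options; values are placeholders.
python3 mo.py --input_model <INPUT_MODEL>.onnx \
              --input_shape [1,3,224,224] \
              --data_type FP16 \
              --model_name converted_model \
              --output_dir <OUTPUT_MODEL_DIR>
```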
Refer to Supported Framework Layers for the list of supported standard layers.