In the instructions below, the Post-training Optimization Tool directory
<INSTALL_DIR>/deployment_tools/tools/post_training_optimization_toolkit is referred to as <POT_DIR>.
<INSTALL_DIR> is the directory where the Intel® Distribution of OpenVINO™ toolkit is installed.
NOTE: The installation directory is different in the case of a PyPI installation and does not contain examples of configuration files.
The tool is designed to work with a configuration file where all the parameters required for the optimization are specified. These parameters are organized as a dictionary and stored in a JSON file. The JSON file may contain comments, which are supported by the
jstyleson Python* package. Logically, all parameters are divided into three groups: model parameters, engine parameters, and compression parameters.
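As a sketch of the overall layout, the three groups can be combined in one file as follows. The top-level section names ("model", "engine", "compression") and all values are placeholders assumed here for illustration, not taken from a real model:

```json
/* Comments like this are allowed thanks to jstyleson */
{
    "model": {
        // model parameters: name and paths to the IR files
    },
    "engine": {
        // engine parameters: inference and validation settings
    },
    "compression": {
        // compression parameters: optimization algorithms and their options
    }
}
```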
The model parameters section contains only three parameters:
- "model_name" - string parameter that defines the model name
- "model" - string parameter that defines the path to the input model topology (.xml)
- "weights" - string parameter that defines the path to the input model weights (.bin)
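The three parameters above might look as follows in the configuration file. The paths and the model name are placeholders, and the enclosing "model" section name is an assumption about the file layout:

```json
{
    "model": {
        "model_name": "mobilenet_v2",                   // arbitrary placeholder name
        "model": "<PATH_TO_MODEL>/mobilenet_v2.xml",    // input model topology
        "weights": "<PATH_TO_MODEL>/mobilenet_v2.bin"   // input model weights
    }
}
```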
The engine parameters section specifies the engine that is used for model inference and validation (if supported). The main parameter is
"type", which can take two possible options:
"accuracy_checker" (default) and
"simplified":
- "simplified" mode runs the DefaultQuantization algorithm to get a fully quantized model using a subset of images. It does not use the Accuracy Checker tool or annotations. To measure accuracy, you should implement your own validation pipeline with the OpenVINO API. For an example, see the mobilenetV2_tf_int8_simple_mode.json file from the <POT_DIR>/configs/examples folder.
- "accuracy_checker" mode requires the "config" parameter containing a path to the Accuracy Checker configuration file.
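As a sketch, an engine section might be configured like this. The "engine" section name and the configuration file path are assumptions for illustration:

```json
{
    "engine": {
        // "type": "simplified",        // alternative: simplified mode, no annotation needed
        "type": "accuracy_checker",
        "config": "<PATH_TO_CONFIG>/accuracy_checker.yml"   // Accuracy Checker configuration file
    }
}
```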
The compression parameters section defines optimization algorithms and their parameters. For more details about the parameters of a specific optimization algorithm, refer to the corresponding documentation.
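A compression section for the DefaultQuantization algorithm mentioned above might look like the following sketch. The "compression" section layout and the parameter names ("preset", "stat_subset_size") are assumptions about a typical quantization setup, not a definitive reference:

```json
{
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",   // algorithm referenced in the engine section
                "params": {
                    "preset": "performance",     // assumed preset name
                    "stat_subset_size": 300      // assumed: number of images used to collect statistics
                }
            }
        ]
    }
}
```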
For a quick start, many examples of configuration files are provided in the
<POT_DIR>/configs/examples folder. There you can find ready-to-use configurations for models from various domains: Computer Vision (Image Classification, Object Detection, Segmentation), Natural Language Processing, and Recommendation Systems. Configuration files are mainly provided for models that require non-default configuration settings to get accurate results. For details on how to run the Post-Training Optimization Tool with a sample configuration file, see the instructions.