This document describes the most important insights about model optimization using the Post-training Optimization Toolkit (POT). Post-training optimization is usually the fastest way to get a low-precision model, because it does not require fine-tuning, and thus there is no need for a training dataset, a training pipeline, or powerful training hardware. In some cases it may lead to an unsatisfactory accuracy drop, especially when quantizing the whole model. However, it can still be helpful for a fast performance evaluation to understand the possible speed-up from low precision. Before going into details, we suggest reading the POT documentation.
NOTE: It is recommended to use the target hardware for quantization; if that is not possible, the best option is a CPU with the same instruction set, or a VNNI-capable CPU if the target device is a GPU or VPU. The Deep Learning Inference Engine uses highly optimized low-level operations that fully utilize the available instruction set. Because of this, some issues that appear on one type of hardware, such as saturation of intermediate integer results, may not appear on hardware with a more advanced instruction set. This means that a model produced and evaluated on non-target hardware may show different accuracy and latency once it is moved to the target hardware.
The POT has many knobs that can be used to get an accurate quantized model. However, as a starting point we suggest using the DefaultQuantization algorithm with default settings. In many cases it leads to satisfactory accuracy together with a performance speed-up. A fragment of the configuration file (`config/default_quantization_template.json` in the POT directory) with default settings is shown below:
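The template itself is not reproduced here; the fragment below is a minimal sketch of how the `compression` section of such a configuration typically looks (the `model` and `engine` sections are omitted, and exact defaults may differ between POT versions):

```json
"compression": {
    "target_device": "ANY",
    "algorithms": [
        {
            "name": "DefaultQuantization",
            "params": {
                "preset": "performance",
                "stat_subset_size": 300
            }
        }
    ]
}
```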
The DefaultQuantization algorithm implies two different usage scenarios (engines):
- evaluation through the Accuracy Checker tool, with the dataset and accuracy metrics configured for the model;
- the so-called Simplified mode, where the model is calibrated on a plain set of images without any accuracy validation.
NOTE: The first scenario potentially gives more accurate results and can handle more models, while the second is much easier to use because it does not require integration of custom data readers into the Accuracy Checker tool. To validate results obtained in Simplified mode, the user has to rely on their own scripts and tools.
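For reference, a Simplified-mode engine definition is a short section of the same configuration file; the sketch below assumes the `type`/`data_source` field names and uses a placeholder path to a folder with calibration images:

```json
"engine": {
    "type": "simplified",
    "data_source": "<PATH_TO_CALIBRATION_IMAGES>"
}
```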
If substantial accuracy degradation is observed after applying the DefaultQuantization algorithm, there are two alternatives: tuning the hyperparameters of the DefaultQuantization algorithm, or using the AccuracyAwareQuantization algorithm. Both are described below.
The DefaultQuantization algorithm provides multiple hyperparameters that can be used to improve accuracy of the fully-quantized model. Below is a list of best practices that can be applied to improve accuracy without a substantial performance penalty with respect to the default settings:
- `preset`, which can be changed from `performance` (default) to `mixed`. It enables asymmetric quantization of activations and can be helpful for networks with non-ReLU activation functions, e.g. YOLO, EfficientNet, etc.
- `use_fast_bias`. Setting this option to `false` enables a different bias correction method which is, in general, more accurate and is applied after model quantization as a part of the DefaultQuantization algorithm.
NOTE: Changing this option can substantially increase quantization time in the POT tool.
- `range_estimator`. It defines how the minimum and maximum of the quantization range are calculated for weights and activations. For example, a `range_estimator` override for activations can improve the accuracy of Faster R-CNN-based networks (see the sketch after this list). You can find the possible options and their description in the `config/default_quantization_spec.json` file in the POT directory.
- `stat_subset_size`. It controls the size of the calibration dataset used by POT to collect statistics for the initialization of quantization parameters. This dataset should contain a sufficient number of representative samples, so varying this parameter may affect accuracy (generally, the more samples the better). However, we empirically found that 300 samples are sufficient to get representative statistics in most cases.
- `ignored_scope`. It allows excluding some layers from the quantization process, i.e. their inputs will not be quantized. It may be helpful for patterns that are known in advance to drop accuracy when executed in low precision. For example, the `DetectionOutput` layer of an SSD model expressed as a subgraph should not be quantized to preserve the accuracy of Object Detection models (see the sketch after this list). One possible source for the ignored scope is the AccuracyAware algorithm, which can revert layers back to the original precision (see details below).
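To illustrate the last two items, the sketch below shows how a `range_estimator` override for activations (the Faster R-CNN example mentioned above) and an ignored scope might look inside the algorithm parameters. The exact nesting and field names are assumptions; check `config/default_quantization_spec.json` for the options supported by your POT version:

```json
"algorithms": [
    {
        "name": "DefaultQuantization",
        "params": {
            "preset": "performance",
            "stat_subset_size": 300,
            "activations": {
                "range_estimator": {
                    "max": {
                        "aggregator": "max",
                        "type": "abs_max"
                    }
                }
            },
            "ignored": {
                "scope": [
                    "<NODE_NAME>"
                ]
            }
        }
    }
]
```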
If the steps above do not lead to an accurate quantized model, you may use the so-called AccuracyAwareQuantization algorithm, which produces mixed-precision models. The idea behind it is to revert quantized layers back to floating-point precision, based on their contribution to the accuracy drop, until the desired accuracy degradation with respect to the full-precision model is satisfied.
A fragment of the configuration file with default settings is shown below. Since AccuracyAwareQuantization calls DefaultQuantization as its first step, all the parameters of the latter are also valid and can be applied in the accuracy-aware scenario.
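The actual template should be taken from the POT directory; the fragment below is only a sketch of the algorithm section, where `maximal_drop` (the maximum allowed accuracy degradation) is an assumed parameter name:

```json
"compression": {
    "algorithms": [
        {
            "name": "AccuracyAwareQuantization",
            "params": {
                "preset": "performance",
                "stat_subset_size": 300,
                "maximal_drop": 0.01
            }
        }
    ]
}
```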
NOTE: In the general case, the possible speed-up after applying the AccuracyAwareQuantization algorithm is smaller than after DefaultQuantization, where the model gets fully quantized.
If you do not achieve the desired accuracy and performance after applying the AccuracyAwareQuantization algorithm, or you need an accurate fully-quantized model, we recommend either layer-wise hyperparameter tuning with TPE or Quantization-Aware Training in one of the supported frameworks.
As the last step in post-training optimization, you may try layer-wise hyperparameter tuning with TPE (Tree-structured Parzen Estimator), a hyperparameter optimizer that searches through the available configurations trying to find an optimal one. For post-training optimization, TPE assigns multiple configuration options to choose from for every layer and, by evaluating different sets of parameters, builds a probabilistic model of their impact on accuracy and latency, which it uses to iteratively find an optimal set.
You can run TPE with any combination of parameters in `tuning_scope`, but it is recommended to use one of the two configurations described below. If TPE trials stop before completion for some reason, such as a hardware failure or a power shutdown, you can rerun them starting from the last trial by switching to `warm_start`, as long as the logs from the previous execution are available.
NOTE: TPE requires many iterations to converge to an optimal solution, and it is recommended to run it for at least 200 iterations. Every iteration requires evaluation of a generated model, which means accuracy measurements on a dataset and latency measurements using benchmark, so this process may take from 24 hours up to a few days to complete, depending on the model. To run this configuration on multiple machines and reduce the execution time, see Multi-node.
To run TPE with range estimator tuning, use the following configuration:
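The full TPE setup lives in the `optimizer` section of the POT configuration; the exact fields vary between releases, so the sketch below only illustrates the idea of restricting tuning to the range estimator and zeroing the latency term, with all field names given as assumptions:

```json
"optimizer": {
    "name": "Tpe",
    "params": {
        "max_trials": 200,
        "trials_load_method": "cold_start",
        "accuracy_weight": 1.0,
        "latency_weight": 0.0,
        "tuning_scope": ["range_estimator"],
        "estimator_tuning_scope": ["preset", "outlier_prob"],
        "outlier_prob_choices": [1e-4, 1e-5]
    }
}
```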
This configuration searches for an optimal preset for `range_estimator` and an optimal outlier probability for quantiles for every layer. Because this configuration only changes the final values provided to FakeQuantize layers, the parameter changes do not impact inference latency, so we set `latency_weight` to 0 to prevent jitter in benchmark results from negatively affecting model evaluation. Experiments show that this configuration can give much better accuracy than just changing the `range_estimator` configuration globally.
To run TPE with layer tuning, use the following configuration:
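Again, only the optimizer fragment is sketched here, under the same naming assumptions as above; the key difference is that the tuning scope targets whole layers and latency is kept in the objective:

```json
"optimizer": {
    "name": "Tpe",
    "params": {
        "max_trials": 200,
        "trials_load_method": "cold_start",
        "accuracy_weight": 1.0,
        "latency_weight": 1.0,
        "tuning_scope": ["layer"]
    }
}
```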
This configuration is similar to AccuracyAwareQuantization because it also tries to revert quantized layers back to floating-point precision, but it uses a different algorithm to choose the layers, which can lead to better results.