Install prerequisites first.
Accuracy Checker uses Python 3; install it before proceeding.
Python setuptools and the Python package manager (pip) install packages into the system directory by default. Installation of Accuracy Checker has been tested only inside a virtual environment.
To use a virtual environment, you should install it first:
Before starting work inside the virtual environment, it should be activated:
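For example, with `<directory_for_environment>` standing for the directory the environment was created in:

```shell
source <directory_for_environment>/bin/activate
```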
The virtual environment can be deactivated using the following command:
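```shell
deactivate
```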
The next step is installing backend frameworks for Accuracy Checker.
To evaluate certain models, the corresponding frameworks have to be installed. Accuracy Checker supports the following frameworks:
You can use any of them, or several at a time.
If all prerequisites are installed, you are ready to install Accuracy Checker:
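A sketch, assuming you are in the root of the Accuracy Checker source tree:

```shell
python3 setup.py install
```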
You may test your installation and get familiar with Accuracy Checker by running the sample.
Once you have installed Accuracy Checker, you can evaluate your configurations with:
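For example (all paths are placeholders):

```shell
accuracy_check -c path/to/configuration_file -m /path/to/models -s /path/to/source/data -a /path/to/annotation
```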
All relative paths in config files will be prefixed with the values specified on the command line:
- `-c, --config`: path to the configuration file.
- `-m, --models`: specifies the directory in which the models and weights declared in the config file will be searched for.
- `-s, --source`: specifies the directory in which input images will be searched for.
- `-a, --annotations`: specifies the directory in which annotation and meta files will be searched for.
You may refer to `-h, --help` for the full list of command line options. Some optional arguments are:
- `-r, --root`: prefix for all relative paths.
- `-d, --definitions`: path to the global configuration file.
- `-e, --extensions`: directory with Inference Engine extensions.
- `-b, --bitstreams`: directory with bitstreams (for Inference Engine with the FPGA plugin).
- `-M, --converted_models`: directory to store Model Optimizer converted models (used for the DLSDK launcher only).
- `--tf, --target_framework`: framework for inference.
- `--td, --target_devices`: devices for inference. You can specify several devices using space as a delimiter.
You can also replace some command line arguments with environment variables for path prefixing. The following variables are supported:
- `DATA_DIR`: equivalent of `-s, --source`.
- `MODELS_DIR`: equivalent of `-m, --models`.
- `EXTENSIONS`: equivalent of `-e, --extensions`.
- `ANNOTATIONS_DIR`: equivalent of `-a, --annotations`.
- `BITSTREAMS_DIR`: equivalent of `-b, --bitstreams`.
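For example (paths are placeholders):

```shell
export DATA_DIR=/path/to/source/data
export MODELS_DIR=/path/to/models
export ANNOTATIONS_DIR=/path/to/annotation
accuracy_check -c path/to/configuration_file
```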
There is a config file which declares the validation process. Every validated model has to have its entry in the `models` list, with a distinct `name` and the other properties described below.
There is also a definitions file, which declares global options shared across all models. The config file has priority over the definitions file.
Optionally, you can use a global configuration. This can be useful for avoiding duplication if you have several models which should be run on the same dataset. An example of a global definitions file can be found here. Global definitions will be merged with the evaluation config at runtime by dataset name. Parameters of the global configuration can be overwritten by the local config (e.g. if the definitions specify a resize with destination size 224 and the local config uses a resize with size 227, the value from the config, 227, will be used as the resize parameter). You can use the `global_definitions` field to specify the path to global definitions directly in the model config, or via command line arguments (`-d, --definitions`).
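A minimal sketch, assuming `global_definitions` is a top-level field of the model config (the path and model name are placeholders):

```yaml
global_definitions: path/to/definitions.yml
models:
  - name: model_name
```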
A launcher is a description of how your model should be executed. Each launcher configuration starts with setting the `framework` name. Currently `caffe`, `dlsdk`, `mxnet`, `tf`, `tf_lite`, `opencv`, and `onnx_runtime` are supported. Launcher descriptions can differ between frameworks. Please view:
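As an illustration, a minimal `dlsdk` launcher entry might look like the following; the field names follow common Accuracy Checker examples, the paths are placeholders, and the exact set of options depends on the chosen framework:

```yaml
launchers:
  - framework: dlsdk
    model: path/to/model.xml
    weights: path/to/model.bin
    adapter: classification
```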
A dataset entry describes the data on which the model should be evaluated, all required preprocessing and postprocessing/filtering steps, and the metrics that will be used for evaluation.
If your dataset is a well-known competition problem (COCO, Pascal VOC, ...) and/or can potentially be reused for other models, it is reasonable to declare it in a global configuration file (definitions file). This way, your local configuration file can provide only the `name`, and all required steps will be picked up from the global one. To pass the path to this global configuration, use the `--definitions` argument of the CLI.
Each dataset must have:
- `name`: unique identifier of your model/topology.
- `data_source`: path to the directory where input data is stored.
- `metrics`: list of metrics that should be computed.
- `preprocessing`: list of preprocessing steps applied to input data. If you want the calculated metrics to match the reported ones, you must reproduce the preprocessing from the canonical paper of your topology or ask the topology author about the required steps.
- `postprocessing`: list of postprocessing steps.
- `reader`: approach for data reading. The default reader is `opencv_imread`.
It must also contain annotation-related data. You can convert the annotation in-place using:
- `annotation_conversion`: parameters for annotation conversion,
or use an existing annotation file and dataset meta:
- `annotation`: path to the annotation file. You must first convert the annotation to the representation of the dataset problem; you may choose one of the converters from annotation-converters if a converter for your dataset already exists, or write your own.
- `dataset_meta`: path to the metadata file (generated by the converter). More detailed information about annotation conversion can be found in the Annotation Conversion Guide.
Example of a dataset definition:
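A hedged sketch of such an entry; the dataset name, file names, and parameter values below are illustrative placeholders, not required values:

```yaml
- name: dataset_name
  data_source: images_folder
  annotation: annotation.pickle
  dataset_meta: meta.json
  preprocessing:
    - type: resize
      size: 227
  metrics:
    - type: accuracy
      top_k: 1
```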
Each entry of `preprocessing`, `metrics`, and `postprocessing` must have a `type` field; the other options are specific to the type. If you do not provide any other option, it will be picked from the definitions file.
You may find the following instructions useful:
You may optionally provide a `reference` field for a metric if you want the calculated metric tested against a specific value (i.e. one reported in the canonical paper).
Some metrics support providing vector results (e.g. mAP is able to return the average precision for each detection class). You can change the view mode for metric results using the `presentation` field of the metric.
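For example, assuming the metric supports vector output, a `presentation` value such as `print_vector` (as used in Accuracy Checker examples) switches to per-class output; the metric type below is illustrative:

```yaml
metrics:
  - type: map
    presentation: print_vector
```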
A typical workflow for testing a new model includes:
The standard Accuracy Checker validation pipeline is: Annotation Reading -> Data Reading -> Preprocessing -> Inference -> Postprocessing -> Metrics. In some cases this can be unsuitable (e.g. if you have a sequence of models). You can customize the validation pipeline using your own evaluator. More details about custom evaluations can be found in the related section.