* `device` - specifies which device will be used for inference. Supported: `CPU`, `GPU`, `FPGA`, `MYRIAD`, `HDDL`, Heterogeneous plugin as `HETERO:target_device,fallback_device` and Multi device plugin as `MULTI:target_device1,target_device2`. If you have several MYRIAD devices in your machine, you are able to provide a specific device id in the following way: `MYRIAD.<DEVICE_ID>` (e.g. `MYRIAD.1.2-ma2480`). It is possible to specify one or more devices via the `-td, --target_devices` command line argument. The target device will be selected from the command line (if several devices are provided, evaluations will be run one by one on each specified device).
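For illustration, a minimal sketch of how `device` may be set in a launcher entry (the device names and id below are placeholders, not recommendations):

```yml
launchers:
  - framework: dlsdk
    device: CPU                   # single device
    # device: HETERO:FPGA,CPU     # Heterogeneous plugin with CPU fallback
    # device: MULTI:GPU,CPU       # Multi device plugin
    # device: MYRIAD.1.2-ma2480   # specific MYRIAD device id
```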
* `model` - path to the xml file with the model for your topology or a compiled executable network.
* `weights` - path to the bin file with weights for your topology (optional; the argument can be omitted if the bin file is stored in the same directory as the model xml or if you use a compiled blob).
**Note:** You can generate an executable blob using `compile_tool`. Before evaluating an executable blob, please make sure that the selected device supports it.
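A minimal sketch of the `model`/`weights` pair (paths and file names are illustrative):

```yml
launchers:
  - framework: dlsdk
    device: CPU
    model: models/alexnet.xml
    weights: models/alexnet.bin   # may be omitted if stored next to the xml
    # or a compiled executable network instead of the xml/bin pair:
    # model: models/alexnet.blob
```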
The launcher may optionally accept model parameters in the source framework format, which will be converted to Inference Engine IR using Model Optimizer. If you want to use Model Optimizer for model conversion, please refer to the Model Optimizer Developer Guide. You can provide (a conversion sketch follows this list):

* `caffe_model` and `caffe_weights` for Caffe model and weights (*.prototxt and *.caffemodel).
* `tf_model` for TensorFlow model (*.pb, *.pb.frozen, *.pbtxt).
* `tf_meta` for TensorFlow MetaGraph (*.meta).
* `mxnet_weights` for MXNet params (*.params).
* `onnx_model` for ONNX model (*.onnx). You are also able to pass your ONNX model directly using the `model` option if you do not need the Model Optimizer conversion step.
* `kaldi_model` for Kaldi model (*.nnet).
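For example, a sketch of a launcher entry that triggers Model Optimizer conversion from a TensorFlow model (the path is illustrative):

```yml
launchers:
  - framework: dlsdk
    device: CPU
    tf_model: models/model.pb   # will be converted to IR before evaluation
    adapter: classification
```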
In case you want to specify additional parameters for model conversion (`data_type`, `input_shape` and so on), you can use `mo_params` for arguments with values and `mo_flags` for positional arguments like `legacy_mxnet_model`. The full list of supported parameters can be found in the Model Optimizer Developer Guide.
The model will be converted before every evaluation. You can provide `converted_model_dir` for saving the converted model in a specific folder; otherwise, converted models will be saved in the path provided via the `-C` command line argument or in the source model directory.
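A sketch of conversion parameters, assuming the Model Optimizer options `data_type`, `input_shape` and the `reverse_input_channels` flag (values are illustrative):

```yml
mo_params:
  data_type: FP16
  input_shape: "[1, 3, 224, 224]"
mo_flags:
  - reverse_input_channels
converted_model_dir: converted_models
```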
* `adapter` - approach for converting raw network output to the representation required by the dataset problem; some adapters can be framework-specific. You can find detailed instructions on how to use adapters here.
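For example, a plain adapter specification (assuming the `classification` adapter; adapters that take parameters are given as a dictionary with a `type` field):

```yml
adapter: classification
# adapters with parameters use the dictionary form, e.g.:
# adapter:
#   type: ssd
```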
The launcher determines which batch size will be used from the model intermediate representation (IR). If you want to use batching for inference, please provide a model with the required batch size or convert it using the specific parameter in `mo_params`.
* `allow_reshape_input` - parameter which allows reshaping the input layer to the data shape (default value is `False`).
Additionally, you can provide device-specific parameters:

* `cpu_extensions` (path to an extension file with custom layers for CPU). You can also use the special key `AUTO` for automatic search of the cpu extensions library in the directory provided as a command line argument (option `-e, --extensions`).
* `gpu_extensions` (path to an extension *.xml file with OpenCL kernel descriptions for GPU).
* `bitstream` for running on FPGA.
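A sketch combining the device-specific options above (file names are illustrative):

```yml
launchers:
  - framework: dlsdk
    device: CPU
    cpu_extensions: AUTO              # search the extensions directory automatically
  - framework: dlsdk
    device: GPU
    gpu_extensions: custom_kernels.xml
  - framework: dlsdk
    device: HETERO:FPGA,CPU
    bitstream: my_bitstream.aocx
```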
Device config contains device-specific options which should be set for Inference Engine. To set device-specific flags, you are able to use the `--device_config` command line option. The device config should be represented as a YML file with a dictionary of one of two types: either the keys are configuration parameter names and the values are their values (the options are applied to the running device), or the keys are device names and the values are dictionaries of configuration parameters for that device.
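A sketch of the two accepted layouts, assuming the Inference Engine CPU plugin key `CPU_THREADS_NUM`:

```yml
# type 1: keys are configuration parameters, values are their values
# (applied to the currently running device)
CPU_THREADS_NUM: "4"
```

```yml
# type 2: keys are device names, values are per-device parameter dictionaries
CPU:
  CPU_THREADS_NUM: "4"
```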
Each supported device has its own set of supported configuration parameters, which can be found on the device page in the Inference Engine developer guide.
**Note:** Since OpenVINO 2020.4, on platforms with native bfloat16 support, models will be executed in this precision by default. To disable this behaviour, you need to use `device_config` with the following configuration:
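```yml
# a sketch, assuming the Inference Engine CPU plugin key ENFORCE_BF16
ENFORCE_BF16: "NO"
```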
A device config example can be found here.
Beside that, you can launch the model in `async_mode`: enable this option and optionally provide the number of infer requests (`num_requests`) which will be used in the evaluation process. By default, if `num_requests` is not provided or the value `AUTO` is used, the number of requests will be assigned automatically for the specific device. For multi-device configurations, async mode is always used. You can provide the number of requests for each device as part of the device specification: `MULTI:device_1(num_req_1),device_2(num_req_2)`, or in the `num_requests` config section (in this case a comma-separated list of integers can be used, or a single value if the number of requests is equal for all devices). See the sketch after the note below.
**Note:** not all models support async execution; in cases when the evaluation cannot be run in async mode, the inference will be switched to sync.
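A sketch of the async options (request counts are illustrative):

```yml
launchers:
  - framework: dlsdk
    device: CPU
    async_mode: True
    num_requests: 4
    # for multi-device runs, per-device request counts can be given either as
    # device: MULTI:GPU(4),CPU(2)
    # or as a comma-separated list:
    # num_requests: "4, 2"
```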
In case your model has several inputs, you should provide a list of input layers in the launcher config section using the key `inputs`. Each input description should contain the following info:

* `name` - input layer name in the network.
* `type` - type of input values; it has an impact on the filling policy. Available options:
  * `CONST_INPUT` - input will be filled using a constant provided in the config. It also requires providing `value`.
  * `IMAGE_INFO` - specific key for setting information about the input shape to a layer (used in Faster RCNN based topologies). You do not need to provide `value`, because it will be calculated at runtime. The format of the value is `Nx[H, W, S]`, where `N` is batch size, `H` - original image height, `W` - original image width, `S` - scale of the original image (default 1).
  * `ORIG_IMAGE_INFO` - specific key for setting information about the original image size before preprocessing.
  * `INPUT` - network input for the main data stream (e.g. images). If you have several data inputs, you should provide a regular expression for the identifier as `value` to specify which data should be provided to a specific input.
  * `LSTM_INPUT` - input which should be filled with the hidden state from the previous iteration. The hidden state layer name should be provided via the `value` parameter.

Optionally you can determine the `shape` of the input (actually it is not used; the DLSDK launcher uses the info given by the network), `layout` in case your model was trained with a non-standard data layout (for DLSDK the default layout is `NCHW`), and `precision` for setting the precision of input data. Supported precisions: `FP32` - float, `FP16` - half-precision float, `U8` - unsigned char, `U16` - unsigned short int, `I8` - signed char, `I16` - short int, `I64` - long int.
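A sketch of a multi-input description (layer names and the regular expression are illustrative):

```yml
inputs:
  - name: im_info
    type: IMAGE_INFO
  - name: im_data
    type: INPUT
    value: ".*image.*"   # regular expression selecting data for this input
    layout: NCHW
    precision: FP32
```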
OpenVINO™ launcher config example:
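```yml
# paths and file names are illustrative
launchers:
  - framework: dlsdk
    device: HETERO:FPGA,CPU
    caffe_model: path_to_model/alexnet.prototxt
    caffe_weights: path_to_model/alexnet.caffemodel
    adapter: classification
    mo_params:
      batch: 4
    mo_flags:
      - reverse_input_channels
    cpu_extensions: cpu_extensions_avx512.so
```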