On startup, the sample application reads command-line parameters and loads a network and an image into the Inference Engine plugin. When inference is complete, the application creates an output image and writes the data to the standard output stream.
NOTE: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
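For example, a reconversion with channel reversal might look like the following (a sketch; the Model Optimizer script path and model file are placeholders for your installation):

```sh
# Paths are placeholders; adjust for your Model Optimizer installation and model.
python3 mo.py --input_model <path_to_model>/alexnet.caffemodel --reverse_input_channels --output_dir <output_dir>
```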
Run the application with the `-h` option to see the usage message.
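For example, assuming the built sample binary is named `classification_sample_async` (substitute the name of the sample you built):

```sh
./classification_sample_async -h
```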
Running the application with an empty list of options yields the same usage message.
NOTE: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
For example, to perform inference of an AlexNet model (previously converted to the Inference Engine format) on CPU, use a command along the following lines (the binary name and paths are placeholders):
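```sh
# A sketch: the binary name is an assumption; substitute your model and image paths.
./classification_sample_async -m <path_to_model>/alexnet_fp32.xml -i <path_to_image>/cat.bmp -d CPU
```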
By default, the application outputs the top-10 inference results. Add the `-nt` option to the previous command to modify the number of top output results. For example, to get the top-5 results on GPU, run a command along the following lines:
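```sh
# A sketch: the binary name is an assumption; -nt 5 limits output to the top 5 results.
./classification_sample_async -m <path_to_model>/alexnet_fp32.xml -i <path_to_image>/cat.bmp -nt 5 -d GPU
```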