The following Inference Engine Python API is used in the application:
| Feature | API | Description |
|:---|:---|:---|
| Basic Infer Flow | `IECore`, `IECore.read_network`, `IECore.load_network` | Common API to do inference |
| Synchronous Infer | `ExecutableNetwork.infer` | Do synchronous inference |
| Network Operations | `IENetwork.input_info`, `IENetwork.outputs`, `InputInfoPtr.precision`, `DataPtr.precision`, `InputInfoPtr.input_data.shape` | Manage the network: configure input and output blobs |
| Options | Values |
|:---|:---|
| Validated Models | alexnet, googlenet-v1 |
| Model Format | Inference Engine Intermediate Representation (.xml + .bin), ONNX (.onnx) |
| Other language realization | C++, C |
At startup, the sample application reads command-line parameters, prepares input data, loads the specified model and image to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step to the standard output stream.
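The flow above maps directly onto the API from the table. Below is a minimal sketch of those steps, assuming the 2021.x `openvino.inference_engine` Python API; the file names (`model.xml`, `model.bin`, `image.jpg`), the use of OpenCV for image loading, and the `CPU` device choice are illustrative assumptions, not the sample's exact code.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read the network from IR (.xml + .bin); a single .onnx path also works here.
net = ie.read_network(model='model.xml', weights='model.bin')
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Resize the image to the input blob shape (NCHW) and reorder HWC -> CHW.
n, c, h, w = net.input_info[input_name].input_data.shape
image = cv2.imread('image.jpg')  # OpenCV loads images in BGR order by default
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...]

# Load the network to the device and run synchronous inference.
exec_net = ie.load_network(network=net, device_name='CPU')
result = exec_net.infer(inputs={input_name: blob})
probs = result[output_name].squeeze()
```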
For an explicit description of each sample step, see the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
Run the application with the `-h` option to see the usage message:
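A minimal invocation sketch (the script name `hello_classification.py` is an assumption; substitute the actual sample file name from your OpenVINO installation):

```sh
python hello_classification.py -h
```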
To run the sample, you need to specify a model and an image:
- By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified (see the conversion sketch after this list). For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
- Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
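A sketch of the conversion step mentioned above, assuming a Caffe-format alexnet model and that `mo.py` from the Model Optimizer is on the path (both are illustrative assumptions):

```sh
# Produces alexnet.xml + alexnet.bin; --reverse_input_channels is only
# needed for models trained with RGB input order.
python mo.py --input_model alexnet.caffemodel --reverse_input_channels
```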
For example, you can run inference of an image using the `alexnet` model on a `GPU`:
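An example command (the script name and the `-m`/`-i`/`-d` options are assumptions based on common sample conventions; check the usage message above for the exact interface):

```sh
python hello_classification.py -m alexnet.xml -i banana.jpg -d GPU
```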
The sample application logs each step to the standard output stream and outputs the top-10 inference results.
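The top-10 step itself is a simple sort over the output scores. A minimal sketch, assuming `probs` is the squeezed output blob from the inference sketch above:

```python
import numpy as np

# Placeholder for result[output_name].squeeze() from the inference sketch.
probs = np.random.rand(1000)

top10 = np.argsort(probs)[-10:][::-1]  # indices of the 10 highest-scoring classes
for class_id in top10:
    print(f'classid {class_id}: probability {probs[class_id]:.6f}')
```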