This sample demonstrates how to execute inference of image classification networks like AlexNet and GoogLeNet using the Synchronous Inference Request API, the input auto-resize feature, and support for UNICODE paths.
The Hello Classification C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:
| Feature | Description |
| :--- | :--- |
| Basic Infer Flow | Common API to do inference: configure input and output blobs, load the model, create an infer request |
| Synchronous Infer | Do synchronous inference |
| Network Operations | Manage the network |
| Blob Operations | Work with the memory container for storing inputs and outputs of the network and the weights and biases of the layers |
| Input auto-resize | Set an image of the original size as input for a network with a different input size. Resize and layout conversions are performed automatically by the corresponding plugin just before inference |
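As a hedged illustration of how these pieces fit together, the sketch below strings the flow from the table into one minimal program. It is not the sample's actual source: the model path, device name, image dimensions, and the zero-filled pixel buffer are placeholders, and error handling is omitted.

```cpp
#include <inference_engine.hpp>

#include <vector>

using namespace InferenceEngine;

int main() {
    Core core;

    // Basic Infer Flow: read the model (IR *.xml + *.bin, or ONNX).
    CNNNetwork network = core.ReadNetwork("model.xml");  // placeholder path

    // Input auto-resize: mark the input so the plugin resizes the image and
    // converts the layout automatically just before inference.
    InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
    input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
    input_info->setLayout(Layout::NHWC);
    input_info->setPrecision(Precision::U8);

    // Load the network to a device and create a synchronous infer request.
    ExecutableNetwork executable_network = core.LoadNetwork(network, "CPU");
    InferRequest infer_request = executable_network.CreateInferRequest();

    // Blob Operations: wrap raw BGR pixel data in a blob. The dimensions and
    // the zero-filled buffer stand in for a real decoded image.
    size_t height = 480, width = 640;
    std::vector<uint8_t> bgr_pixels(height * width * 3, 0);
    Blob::Ptr input_blob = make_shared_blob<uint8_t>(
        TensorDesc(Precision::U8, {1, 3, height, width}, Layout::NHWC),
        bgr_pixels.data());
    infer_request.SetBlob(network.getInputsInfo().begin()->first, input_blob);

    // Synchronous Infer: blocks until the results are ready.
    infer_request.Infer();

    // Read back the output blob holding the classification scores.
    Blob::Ptr output = infer_request.GetBlob(network.getOutputsInfo().begin()->first);
    MemoryBlob::CPtr moutput = as<MemoryBlob>(output);
    auto holder = moutput->rmap();
    const float* scores = holder.as<const float*>();
    (void)scores;  // see the top-10 sketch at the end of this document
    return 0;
}
```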
| Options | Values |
| :--- | :--- |
| Validated Models | alexnet, googlenet-v1 |
| Model Format | Inference Engine Intermediate Representation (*.xml + *.bin), ONNX (*.onnx) |
| Validated images | The sample uses OpenCV* to read input image (*.bmp, *.png) |
| Other language realization | C, Python |
Upon start-up, the sample application reads command-line parameters and loads the specified network and an image to the Inference Engine plugin. The sample then creates a synchronous inference request object. When inference is done, the application outputs data to the standard output stream.
You can see an explicit description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
To build the sample, use the instructions available in the Build the Sample Applications section of the Inference Engine Samples guide.
To run the sample, you need to specify a model and an image (a usage sketch follows the notes below):
- By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
- Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
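A sketch of the usage pattern, assuming the positional order model, image, device (the order used by the example below):

```
hello_classification <path_to_model> <path_to_image> <device_name>
```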
For example, you can perform inference of an `alexnet` model on a GPU:
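A representative command, with placeholder paths for the converted model and the test image:

```
./hello_classification <path_to_model>/alexnet.xml <path_to_image>/cat.bmp GPU
```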
The application outputs the top-10 inference results.
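As a hedged illustration of how such a top-10 listing can be produced (not the sample's exact code), the class indices can be partially sorted by descending score; `scores` and `num_classes` are placeholders for the mapped output data and its length:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Print the ten best-scoring class indices; assumes num_classes >= 10.
void print_top10(const float* scores, size_t num_classes) {
    std::vector<size_t> indices(num_classes);
    std::iota(indices.begin(), indices.end(), 0);  // 0, 1, ..., num_classes-1
    std::partial_sort(indices.begin(), indices.begin() + 10, indices.end(),
                      [scores](size_t a, size_t b) { return scores[a] > scores[b]; });
    for (size_t i = 0; i < 10; ++i)
        std::printf("classid %zu  probability %.7f\n", indices[i], scores[indices[i]]);
}
```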