# Hello Classification C++ Sample

This sample demonstrates how to run inference of image classification networks, such as AlexNet and GoogLeNet, using the Synchronous Inference Request API, the input auto-resize feature, and support for Unicode paths.

The Hello Classification C++ sample application demonstrates how to use the following Inference Engine C++ API in applications:

| Feature | API | Description |
| :--- | :--- | :--- |
| Basic Infer Flow | `InferenceEngine::Core::ReadNetwork`, `InferenceEngine::Core::LoadNetwork`, `InferenceEngine::ExecutableNetwork::CreateInferRequest`, `InferenceEngine::InferRequest::SetBlob`, `InferenceEngine::InferRequest::GetBlob` | Common API to do inference: read and load a model, configure input and output blobs, create an infer request |
| Synchronous Infer | `InferenceEngine::InferRequest::Infer` | Do synchronous inference |
| Network Operations | `InferenceEngine::CNNNetwork::getInputsInfo`, `InferenceEngine::CNNNetwork::getOutputsInfo`, `InferenceEngine::InputInfo::setPrecision` | Manage the network |
| Blob Operations | `InferenceEngine::Blob::getTensorDesc`, `InferenceEngine::TensorDesc::getDims`, `InferenceEngine::TensorDesc::getPrecision`, `InferenceEngine::as`, `InferenceEngine::MemoryBlob::wmap`, `InferenceEngine::MemoryBlob::rmap`, `InferenceEngine::Blob::size` | Work with the memory container that stores network inputs and outputs, and layer weights and biases |
| Input auto-resize | `InferenceEngine::PreProcessInfo::setResizeAlgorithm`, `InferenceEngine::InputInfo::setLayout` | Set an image of the original size as input to a network with a different input size. Resize and layout conversions are performed automatically by the corresponding plugin just before inference |
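
Taken together, these calls form a short end-to-end flow. The following is a minimal sketch of that flow, not the sample's exact source: the model path, device name, and image size are placeholder assumptions, a zero-filled buffer stands in for the image the real sample reads with OpenCV, and error handling is omitted.

```cpp
#include <inference_engine.hpp>

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Placeholder model path and device; the real sample takes these from argv.
    const std::string model_path = "alexnet.xml";
    const std::string device_name = "CPU";

    InferenceEngine::Core core;

    // Read the network from IR (*.xml + *.bin) or ONNX (*.onnx).
    InferenceEngine::CNNNetwork network = core.ReadNetwork(model_path);

    // Configure the input: U8 precision, NHWC layout, and automatic resize,
    // so an image of its original size can be fed directly.
    const std::string input_name = network.getInputsInfo().begin()->first;
    InferenceEngine::InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
    input_info->setPrecision(InferenceEngine::Precision::U8);
    input_info->setLayout(InferenceEngine::Layout::NHWC);
    input_info->getPreProcess().setResizeAlgorithm(InferenceEngine::RESIZE_BILINEAR);

    // Configure the output precision.
    const std::string output_name = network.getOutputsInfo().begin()->first;
    network.getOutputsInfo().begin()->second->setPrecision(InferenceEngine::Precision::FP32);

    // Load the network to the device and create an infer request.
    InferenceEngine::ExecutableNetwork executable_network = core.LoadNetwork(network, device_name);
    InferenceEngine::InferRequest infer_request = executable_network.CreateInferRequest();

    // Wrap image data in a blob and set it as the input. A zero-filled buffer
    // of an assumed size stands in for pixels read with OpenCV.
    const size_t width = 640, height = 480;
    std::vector<uint8_t> image_data(3 * width * height, 0);
    InferenceEngine::TensorDesc input_desc(InferenceEngine::Precision::U8,
                                           {1, 3, height, width},  // dims are always given in NCHW order
                                           InferenceEngine::Layout::NHWC);
    infer_request.SetBlob(input_name,
                          InferenceEngine::make_shared_blob<uint8_t>(input_desc, image_data.data()));

    // Run synchronous inference.
    infer_request.Infer();

    // Read the scores through a read-only map of the output MemoryBlob.
    InferenceEngine::Blob::Ptr output_blob = infer_request.GetBlob(output_name);
    auto memory_blob = InferenceEngine::as<InferenceEngine::MemoryBlob>(output_blob);
    auto mapped = memory_blob->rmap();
    const float* scores = mapped.as<const float*>();
    std::cout << "class 0 score: " << scores[0] << ", classes: " << output_blob->size() << "\n";
    return 0;
}
```
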
| Options | Values |
| :--- | :--- |
| Model Format | Inference Engine Intermediate Representation (`*.xml` + `*.bin`), ONNX (`*.onnx`) |
| Validated images | The sample uses OpenCV* to read input images (`*.bmp`, `*.png`) |
| Supported devices | All |
| Other language realization | C, Python |

## How It Works

Upon start-up, the sample application reads command-line parameters and loads the specified network and an image to the Inference Engine plugin. The sample then creates a synchronous inference request object. When inference is done, the application outputs data to the standard output stream.

You can find a detailed description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.

## Building

To build the sample, use the instructions available in the Build the Sample Applications section of the Inference Engine Samples guide.

## Running

To run the sample, you need to specify a model and an image:
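
The invocation pattern is shown below; the angle-bracket arguments are placeholders, matching the concrete example later in this section:

```
<path_to_sample>/hello_classification <path_to_model> <path_to_image> <device_name>
```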

NOTES:

- By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified (see the sketch after these notes). For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
- Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
- The sample accepts models in ONNX format (`*.onnx`) that do not require preprocessing.
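
As an illustration of the first note above, a hypothetical Model Optimizer invocation with reversed channels could look like the following; the script path and model file are placeholders, while `--input_model` and `--reverse_input_channels` are real Model Optimizer arguments:

```
python <path_to_mo>/mo.py --input_model <path_to_model>/alexnet.caffemodel --reverse_input_channels
```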

### Example

1. Download a pre-trained model, for example with the Open Model Zoo downloader script (`python <path_to_omz_tools>/downloader.py --name alexnet`).
2. If the model is not in the Inference Engine IR or ONNX format, convert it using the model converter script:

   ```
   python <path_to_omz_tools>/converter.py --name alexnet
   ```

3. Perform inference of `car.bmp` using the `alexnet` model on a `GPU`, for example:

   ```
   <path_to_sample>/hello_classification <path_to_model>/alexnet.xml <path_to_image>/car.bmp GPU
   ```

## Sample Output

The application outputs the top-10 inference results.

```
Top 10 results:

Image C:\images\car.bmp

classid probability
------- -----------
656     0.6664789
654     0.1129405
581     0.0684867
874     0.0333845
436     0.0261321
817     0.0167310
675     0.0109796
511     0.0105919
569     0.0081782
717     0.0063356
```
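
The top-N selection shown above can be computed with a partial sort over class indices. The following is a small self-contained sketch; `print_top_n` and the dummy scores are illustrative, not part of the sample's source:

```cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Print the top_n highest-scoring class ids, mimicking the sample's output table.
static void print_top_n(const float* scores, size_t num_classes, size_t top_n) {
    std::vector<size_t> ids(num_classes);
    std::iota(ids.begin(), ids.end(), 0);  // 0, 1, ..., num_classes - 1
    std::partial_sort(ids.begin(), ids.begin() + top_n, ids.end(),
                      [scores](size_t a, size_t b) { return scores[a] > scores[b]; });
    std::printf("classid probability\n------- -----------\n");
    for (size_t i = 0; i < top_n; ++i)
        std::printf("%-7zu %.7f\n", ids[i], scores[ids[i]]);
}

int main() {
    const std::vector<float> scores = {0.05f, 0.70f, 0.10f, 0.15f};  // dummy scores
    print_top_n(scores.data(), scores.size(), 3);
    return 0;
}
```
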
This sample is an API example; for any performance measurements, use the dedicated benchmark_app tool.