This sample demonstrates how to execute a synchronous inference using the nGraph function feature to create a network. The network uses weights from the LeNet classification network, which is known to work well on digit classification tasks. The sample supports only single-channel ubyte images as input.
You do not need an XML file to create the network: the `ngraph::Function` API allows you to create a network on the fly from source code.
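A minimal sketch of the idea, assuming the classic Inference Engine and nGraph headers; the real sample builds the full LeNet topology, while here a single operation is enough to show how a `CNNNetwork` can be created directly from source code:

```cpp
#include <inference_engine.hpp>
#include <ngraph/ngraph.hpp>

#include <memory>

InferenceEngine::CNNNetwork buildNetworkFromCode() {
    using namespace ngraph;
    // LeNet-style input: 1 image, 1 channel, 28 x 28 pixels
    auto input  = std::make_shared<op::Parameter>(element::f32, Shape{1, 1, 28, 28});
    // Illustrative single operation; the sample chains the whole LeNet topology here
    auto relu   = std::make_shared<op::v0::Relu>(input);
    auto result = std::make_shared<op::Result>(relu);
    // The function is created from source code, no XML file is involved
    auto function = std::make_shared<Function>(ResultVector{result}, ParameterVector{input}, "lenet");
    return InferenceEngine::CNNNetwork(function);
}
```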
The nGraph Function Creation C++ Sample demonstrates how to use the following Inference Engine API features in your applications:

| Feature | Description |
| :--- | :--- |
| Inference Engine Version | Get the Inference Engine API version |
| Available Devices | Get version information of the devices available for inference |
| Network Operations | Manage the network and operate with its batch size; set the batch size using the input image count |
| nGraph Functions | Construct an nGraph function |
Basic Inference Engine API is covered by the Hello Classification C++ sample.
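The following is a hedged sketch (not the sample's exact code) of how these API groups are typically exercised with the classic Inference Engine API:

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

void printApiInfo(InferenceEngine::CNNNetwork& network, size_t imageCount) {
    // Inference Engine version
    const InferenceEngine::Version* ieVersion = InferenceEngine::GetInferenceEngineVersion();
    std::cout << "Inference Engine build: " << ieVersion->buildNumber << std::endl;

    // Available devices and their version information
    InferenceEngine::Core core;
    for (const std::string& device : core.GetAvailableDevices()) {
        for (const auto& versionEntry : core.GetVersions(device)) {
            std::cout << device << ": " << versionEntry.second.description << std::endl;
        }
    }

    // Network operations: set the batch size from the number of input images
    network.setBatchSize(imageCount);
    std::cout << "Batch size: " << network.getBatchSize() << std::endl;
}
```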
| Options | Values |
| :--- | :--- |
| Model Format | Network weights file (*.bin) |
| Validated images | single-channel ubyte images |
| Other language realization | Python |
At startup, the sample application reads command-line parameters, prepares input data, creates a network using the nGraph function feature and the passed weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step in a standard output stream. You can place a `.labels` file near the model to get pretty output.
You can find an explicit description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
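A condensed sketch of that flow, assuming the classic synchronous Inference Engine API; reading the ubyte image into the input blob and the output post-processing are omitted:

```cpp
#include <inference_engine.hpp>

#include <string>

void runSyncInference(InferenceEngine::CNNNetwork network) {
    InferenceEngine::Core core;

    // Load the network created from the ngraph function to the plugin ("CPU" here)
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest request = executable.CreateInferRequest();

    // The ubyte image data would be copied into this input blob before inference
    const std::string inputName = network.getInputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr input = request.GetBlob(inputName);

    // Synchronous inference: the call blocks until the result is ready
    request.Infer();

    const std::string outputName = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = request.GetBlob(outputName);
    // ... process the output blob, e.g. print the top-10 classes
}
```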
To build the sample, use the instructions available in the Build the Sample Applications section of the Inference Engine Samples guide.
To run the sample, you need to specify the model weights (`lenet.bin`, an FP32 weights file) and a ubyte image.

The `lenet.bin` FP32 weights file was generated by the Model Optimizer tool from the public LeNet model with the `--input_shape [64,1,28,28]` parameter specified.
The original model is available in the Caffe* repository on GitHub*.
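A minimal sketch, assuming standard C++ file I/O, of how a raw FP32 weights file such as `lenet.bin` can be read and a slice of it wrapped into an nGraph constant; the layer shape and offset below are illustrative only:

```cpp
#include <ngraph/ngraph.hpp>

#include <fstream>
#include <string>
#include <vector>

// Read the whole FP32 weights file into memory.
std::vector<float> readWeights(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    const std::streamsize bytes = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<float> data(static_cast<size_t>(bytes) / sizeof(float));
    file.read(reinterpret_cast<char*>(data.data()), bytes);
    return data;
}

// Wrap the first 20 * 1 * 5 * 5 values as the weights of a 5x5 convolution with
// 20 output channels (a LeNet-like shape; real offsets depend on the topology).
std::shared_ptr<ngraph::op::Constant> firstConvWeights(const std::vector<float>& weights) {
    const size_t count = 20 * 1 * 5 * 5;
    return std::make_shared<ngraph::op::Constant>(
        ngraph::element::f32, ngraph::Shape{20, 1, 5, 5},
        std::vector<float>(weights.begin(), weights.begin() + count));
}
```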
Running the application with the `-h` option prints the usage message. Running it with an empty list of options prints the same usage message together with an error message.
You can perform inference of an image using the pre-trained model on a GPU by passing the weights file and the image to the sample and specifying GPU as the target device.
The sample application logs each step in a standard output stream and outputs top-10 inference results.
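For reference, a small sketch (an assumption, not the sample's exact code) of how the top-10 class indices could be selected from the raw output scores:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Return the indices of the n highest scores (n = 10 for the LeNet digit classes).
std::vector<size_t> topResults(const std::vector<float>& scores, size_t n = 10) {
    n = std::min(n, scores.size());
    std::vector<size_t> indices(scores.size());
    std::iota(indices.begin(), indices.end(), 0);  // 0, 1, ..., scores.size() - 1
    std::partial_sort(indices.begin(), indices.begin() + n, indices.end(),
                      [&scores](size_t a, size_t b) { return scores[a] > scores[b]; });
    indices.resize(n);
    return indices;
}
```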
| Deprecation Begins | June 1, 2020 |
| :--- | :--- |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.
Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are recommended to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.