nGraph Function Creation Python* Sample

This sample demonstrates how to run inference using the nGraph function feature to build a network that uses weights from the LeNet classification network, which is known to work well on digit classification tasks. Because the model is constructed in source code on the fly, no XML file is needed.

In addition to regular grayscale images with a digit, the sample also supports single-channel ubyte images as input.

The following Inference Engine Python API is used in the application:

• Network Operations: IENetwork, IENetwork.batch_size. Used to manage the network.
• nGraph Functions: ngraph.impl.Function, ngraph.parameter, ngraph.constant, ngraph.convolution, ngraph.add, ngraph.max_pool, ngraph.reshape, ngraph.matmul, ngraph.relu, ngraph.softmax, ngraph.result, ngraph.impl.Function.to_capsule. Used to describe a network with the nGraph Python API.

Basic Inference Engine API is covered by Hello Classification Python* Sample.
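For illustration, here is a minimal sketch of how a network can be described with these calls and wrapped into an IENetwork. It builds a single fully connected layer with softmax instead of the sample's full LeNet topology, and it uses random weights rather than values read from a .bin file; the import paths follow the OpenVINO 2020/2021 Python packages (an assumption, so check your release):

import numpy as np
import ngraph
from openvino.inference_engine import IENetwork

# Network input: a batch of 64 single-channel 28x28 images, as in the sample
param = ngraph.parameter([64, 1, 28, 28], np.float32, name="data")
flat = ngraph.reshape(param, [64, 784], special_zero=False)

# One dense layer; the real sample would take these values from the weights file
weights = ngraph.constant(np.random.rand(784, 10).astype(np.float32))
bias = ngraph.constant(np.zeros((1, 10), dtype=np.float32))
fc = ngraph.add(ngraph.matmul(flat, weights, transpose_a=False, transpose_b=False), bias)
probs = ngraph.softmax(ngraph.relu(fc), axis=1)

# Wrap the graph into an nGraph Function, then into an IENetwork via a capsule
function = ngraph.impl.Function([ngraph.result(probs)], [param], "lenet_like")
net = IENetwork(ngraph.impl.Function.to_capsule(function))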

• Validated Models: LeNet
• Model Format: Network weights file (*.bin)
• Validated Images: The sample uses OpenCV* to read an input grayscale image (*.bmp, *.png) or a single-channel ubyte image
• Supported Devices: All
• Other Language Realization: C++

How It Works

At startup, the sample application reads command-line parameters, prepares input data, creates a network using the nGraph function feature and the provided weights file, loads the network and image(s) to the Inference Engine plugin, performs synchronous inference, and processes output data, logging each step to the standard output stream.

An explicit description of each sample step can be found in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
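The following condensed sketch shows that flow, assuming a hypothetical create_ngraph_network() helper that builds the IENetwork as outlined above (the input_info accessor is from the 2021-style API; older releases use net.inputs):

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()                                 # create the Inference Engine
net = create_ngraph_network("lenet.bin")      # hypothetical helper that builds the IENetwork
net.batch_size = 1                            # the graph was created with batch 64; shrink it to one image
input_blob = next(iter(net.input_info))       # older releases: next(iter(net.inputs))
out_blob = next(iter(net.outputs))

exec_net = ie.load_network(network=net, device_name="CPU")   # load the model to the plugin

image = cv2.imread("3.png", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (28, 28)).reshape(1, 1, 28, 28).astype(np.float32)

res = exec_net.infer(inputs={input_blob: image})             # synchronous inference
print("Top class:", int(np.argmax(res[out_blob][0])))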


Run the application with the -h option to see the usage message:

python <path_to_sample>/ngraph_function_creation_sample.py -h

Usage message:

usage: ngraph_function_creation_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                          [-d DEVICE] [--labels LABELS]
                                          [-nt NUMBER_TOP]

-h, --help            Show this help message and exit.
-m MODEL, --model MODEL
                      Required. Path to a file with network weights.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
                      Required. Path to an image file.
-d DEVICE, --device DEVICE
                      Optional. Specify the target device to infer on; CPU,
                      GPU, MYRIAD, HDDL or HETERO: is acceptable. The sample
                      will look for a suitable plugin for the device
                      specified. Default value is CPU.
--labels LABELS       Optional. Path to a labels mapping file.
-nt NUMBER_TOP, --number_top NUMBER_TOP
                      Optional. Number of top results.
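These options map onto a standard argparse definition. A sketch of how such a parser can be declared (a reconstruction for illustration, not the sample's verbatim code; the default of 10 for -nt matches the output below):

from argparse import ArgumentParser

parser = ArgumentParser(description="nGraph function creation sample")
parser.add_argument("-m", "--model", required=True, type=str,
                    help="Required. Path to a file with network weights.")
parser.add_argument("-i", "--input", required=True, type=str, nargs="+",
                    help="Required. Path to an image file.")
parser.add_argument("-d", "--device", default="CPU", type=str,
                    help="Optional. Target device: CPU, GPU, MYRIAD, HDDL or HETERO.")
parser.add_argument("--labels", default=None, type=str,
                    help="Optional. Path to a labels mapping file.")
parser.add_argument("-nt", "--number_top", default=10, type=int,
                    help="Optional. Number of top results.")
args = parser.parse_args()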

To run the sample, you need to specify model weights and an image:


  • This sample supports models with FP32 weights only.
  • The lenet.bin weights file was generated by the Model Optimizer tool from the public LeNet model with the --input_shape [64,1,28,28] parameter specified. A sketch of how such a flat weights file can be consumed follows this list.
  • The original model is available in the Caffe* repository on GitHub*.
  • Input images are automatically inverted to white-over-black when needed, for better predictions; the preprocessing sketch after the Sample Output section illustrates this.
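The .bin file is a flat buffer of FP32 values. A minimal sketch of how such a buffer can be cut into nGraph constants, assuming the values are stored in the order in which the graph consumes them (the real sample derives sizes from the LeNet layer shapes):

import numpy as np
import ngraph

weights = np.fromfile("lenet.bin", dtype=np.float32)   # flat FP32 buffer produced by the Model Optimizer
offset = 0

def next_constant(shape):
    # Cut the next prod(shape) values out of the buffer and wrap them in a constant
    global offset
    size = int(np.prod(shape))
    data = weights[offset:offset + size].reshape(shape)
    offset += size
    return ngraph.constant(data, dtype=np.float32)

# For example, the first LeNet convolution: 20 filters of shape 1x5x5
conv1_kernel = next_constant([20, 1, 5, 5])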

For example, you can run inference on 3.png using the pre-trained model on a GPU:

python <path_to_sample>/ngraph_function_creation_sample.py -m <path_to_sample>/lenet.bin -i <path_to_image>/3.png -d GPU

Sample Output

The sample application logs each step to the standard output stream and outputs the top-10 inference results.

[ INFO ] Creating Inference Engine
[ INFO ] Loading the network using ngraph function with weights from c:\openvino\deployment_tools\inference_engine\samples\python\ngraph_function_creation_sample\lenet.bin
[ INFO ] Configuring input and output blobs
[ INFO ] Loading the model to the plugin
[ WARNING ] Image c:\images\3.png is inverted to white over black
[ WARNING ] Image c:\images\3.png is resized from (351, 353) to (28, 28)
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: c:\images\3.png
[ INFO ] Top 10 results:
[ INFO ] classid probability
[ INFO ] -------------------
[ INFO ] 3 1.0000000
[ INFO ] 9 0.0000000
[ INFO ] 8 0.0000000
[ INFO ] 7 0.0000000
[ INFO ] 6 0.0000000
[ INFO ] 5 0.0000000
[ INFO ] 4 0.0000000
[ INFO ] 2 0.0000000
[ INFO ] 1 0.0000000
[ INFO ] 0 0.0000000
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
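The two warnings in the log come from input preprocessing. A sketch of equivalent logic with OpenCV (the mean-intensity threshold is an assumption; the sample's actual heuristic may differ):

import cv2

image = cv2.imread("3.png", cv2.IMREAD_GRAYSCALE)

# LeNet was trained on white digits over a black background; invert light-background images
if image.mean() > 127:
    image = 255 - image

# LeNet expects a 28x28 input
if image.shape != (28, 28):
    image = cv2.resize(image, (28, 28))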

Deprecation Notice

Deprecation Begins: June 1, 2020
Removal Date: December 1, 2020

Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.

Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are advised to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.

See Also