The OpenVINO™ toolkit optimizes and runs Deep Learning Neural Network models on Intel® hardware. This guide helps you get started with the OpenVINO™ toolkit you installed on Windows* OS.
In this guide, you will:
The toolkit consists of three primary components:
In addition, demo scripts, code samples and demo applications are provided to help you get up and running with the toolkit:
This guide assumes you completed all Intel® Distribution of OpenVINO™ toolkit installation and configuration steps. If you have not yet installed and configured the toolkit, see Install Intel® Distribution of OpenVINO™ toolkit for Windows*.
By default, the installation directory is `C:\Program Files (x86)\Intel\openvino_<version>`, referred to as `<INSTALL_DIR>`. If you installed the Intel® Distribution of OpenVINO™ toolkit to a directory other than the default, replace `C:\Program Files (x86)\Intel` with the directory in which you installed the software. For simplicity, a shortcut to the latest installation is also created: `C:\Program Files (x86)\Intel\openvino_2021`.
The primary tools for deploying your models and applications are installed to the `<INSTALL_DIR>\deployment_tools` directory. The `deployment_tools` directory structure is as follows:
Directory | Description |
---|---|
demo\ | Demo scripts. Demonstrate pipelines for inference scenarios, automatically perform steps and print detailed output to the console. For more information, see the Use OpenVINO: Demo Scripts section. |
inference_engine\ | Inference Engine directory. Contains Inference Engine API binaries and source files, samples and extensions source files, and resources like hardware drivers. |
bin\ | Inference Engine binaries. |
external\ | Third-party dependencies and drivers. |
include\ | Inference Engine header files. For API documentation, see the Inference Engine API Reference. |
lib\ | Inference Engine static libraries. |
samples\ | Inference Engine samples. Contains source code for C++ and Python* samples and build scripts. See the Inference Engine Samples Overview. |
share\ | CMake configuration files for linking with Inference Engine. |
src\ | Source files for CPU extensions. |
~intel_models\ | Symbolic link to the intel_models subfolder of the open_model_zoo folder. |
model_optimizer\ | Model Optimizer directory. Contains configuration scripts, scripts to run the Model Optimizer and other files. See the Model Optimizer Developer Guide. |
ngraph\ | nGraph directory. Includes the nGraph header and library files. |
open_model_zoo\ | Open Model Zoo directory. Includes the Model Downloader tool to download pre-trained OpenVINO and public models, OpenVINO models documentation, demo applications and the Accuracy Checker tool to evaluate model accuracy. |
demos\ | Demo applications for inference scenarios. Also includes documentation and build scripts. |
intel_models\ | Pre-trained OpenVINO models and associated documentation. See the Overview of OpenVINO™ Toolkit Pre-Trained Models. |
models | Intel's trained and public models that can be obtained with Model Downloader. |
tools\ | Model Downloader and Accuracy Checker tools. |
tools\ | Contains a symbolic link to the Model Downloader folder and auxiliary tools to work with your models: Calibration tool, Benchmark and Collect Statistics tools. |
The simplified OpenVINO™ workflow is:

1. Get a trained model for your inference task.
2. Run the trained model through the Model Optimizer to convert it to the Intermediate Representation (IR), a pair of `.xml` and `.bin` files that are used as the input for Inference Engine.
3. Use the Inference Engine to run inference on the IR and see the results in your application, code sample, or demo.

The demo scripts in `<INSTALL_DIR>\deployment_tools\demo` give you a starting point to learn the OpenVINO workflow. These scripts automatically perform the workflow steps to demonstrate running inference pipelines for different scenarios: they download trained models, convert them to the IR where needed, build the sample applications, run inference, and print detailed output to the console.
REQUIRED: You must have Internet access to run the demo scripts. If your Internet access is through a proxy server, make sure the operating system environment proxy information is configured.
The demo scripts can run inference on any supported target device. Although the default inference device is CPU, you can use the `-d` parameter to change the inference device. The general command to run the scripts looks as follows:
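A minimal sketch of that command, run from the `<INSTALL_DIR>\deployment_tools\demo` directory (`<script_name>` stands for one of the scripts described below):

```bat
rem <script_name> is one of the demo scripts, for example demo_squeezenet_download_convert_run.bat
rem -d selects the target device; CPU is used when -d is omitted
.\<script_name> -d [CPU, GPU, MYRIAD, HDDL]
```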
Before running the demo applications on Intel® Processor Graphics or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, you must complete additional hardware configuration steps. For details, see the following sections in the installation instructions:
The following paragraphs describe each demo script.
The `demo_squeezenet_download_convert_run` script illustrates the image classification pipeline.

The script:

1. Downloads a SqueezeNet model with the Model Downloader.
2. Runs the Model Optimizer to convert the model to the IR.
3. Builds the Image Classification code sample.
4. Runs the compiled sample with the `car.png` image located in the `demo` directory.
Example of running the Image Classification demo script:
To run the script to perform inference on a CPU:
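For example, run the script from the `<INSTALL_DIR>\deployment_tools\demo` directory (CPU is the default device):

```bat
.\demo_squeezenet_download_convert_run.bat
```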
When the script completes, you see the label and confidence for the top-10 categories:
The `demo_security_barrier_camera` script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.

The script:

1. Downloads three pre-trained models, already converted to the IR format.
2. Builds the Security Barrier Camera Demo application.
3. Runs the application with the downloaded models and the `car_1.bmp` image from the `demo` directory to show an inference pipeline.

This application:

1. Identifies an object as a vehicle.
2. Uses the vehicle identification as input to the second model, which identifies specific vehicle attributes, including the license plate.
3. Uses the license plate as input to the third model, which recognizes specific characters in the license plate.
Example of running the Pipeline demo script:
To run the script performing inference on Intel® Processor Graphics:
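For example, run the script from the `<INSTALL_DIR>\deployment_tools\demo` directory:

```bat
.\demo_security_barrier_camera.bat -d GPU
```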
When the verification script completes, you see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
The `demo_benchmark_app` script illustrates how to use the Benchmark Application to estimate deep learning inference performance on supported devices.

The script:

1. Downloads a SqueezeNet model.
2. Runs the Model Optimizer to convert the model to the IR.
3. Builds the Benchmark Application.
4. Runs the application with the `car.png` image located in the `demo` directory.
Example of running the Benchmark demo script:
To run the script that performs inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
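For example, run the script from the `<INSTALL_DIR>\deployment_tools\demo` directory (`HDDL` is the device name for Intel® Vision Accelerator Design with Intel® Movidius™ VPUs):

```bat
.\demo_benchmark_app.bat -d HDDL
```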
When the verification script completes, you see the performance counters, resulting latency, and throughput values displayed on the screen.
This section guides you through a simplified workflow for the Intel® Distribution of OpenVINO™ toolkit using code samples and demo applications.
You will perform the following steps:
Each demo and code sample is a separate application, but they share the same behavior and components.
Inputs you need to specify when using a code sample or demo application:
To perform sample inference, run the Image Classification code sample and Security Barrier Camera demo application that are automatically compiled when you run the Image Classification and Inference Pipeline demo scripts. The binary files are in the `C:\Users\<USER_ID>\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release` and `C:\Users\<USER_ID>\Intel\OpenVINO\inference_engine_demos_build\intel64\Release` directories, respectively.
You can also build all available sample code and demo applications from the source files delivered with the OpenVINO™ toolkit. To learn how to do this, see the instructions in the Inference Engine Code Samples Overview and Demo Applications Overview sections.
You must have a model that is specific to your inference task. Example model types are:
Options to find a model suitable for the OpenVINO™ toolkit are:
This guide uses the Model Downloader to get pre-trained models. You can use one of the following options to find a model:

* List the models available in the Model Downloader.
* Use `grep` to list models that have a specific name pattern.

Use the Model Downloader to download the models to a models directory. This guide uses `<models_dir>` as the models directory and `<model_name>` as the model name:
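A representative command, assuming the `deployment_tools` layout described above (the downloader can also be reached through the `tools\model_downloader` link):

```bat
rem Download <model_name> into <models_dir>
python "<INSTALL_DIR>\deployment_tools\open_model_zoo\tools\downloader\downloader.py" --name <model_name> --output_dir <models_dir>
```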
Download the following models if you want to run the Image Classification Sample and Security Barrier Camera Demo application:
Model Name | Code Sample or Demo App |
---|---|
squeezenet1.1 | Image Classification Sample |
vehicle-license-plate-detection-barrier-0106 | Security Barrier Camera Demo application |
vehicle-attributes-recognition-barrier-0039 | Security Barrier Camera Demo application |
license-plate-recognition-barrier-0001 | Security Barrier Camera Demo application |
Example of downloading the SqueezeNet Caffe* model:
To download the SqueezeNet 1.1 Caffe* model to the `C:\Users\<USER_ID>\Documents\models` folder:
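A representative command (downloader path as assumed above):

```bat
python "<INSTALL_DIR>\deployment_tools\open_model_zoo\tools\downloader\downloader.py" --name squeezenet1.1 --output_dir C:\Users\<USER_ID>\Documents\models
```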
Your screen looks similar to this after the download:
Example of downloading models for the Security Barrier Camera Demo application:
To download all three pre-trained models in FP16 precision to the `C:\Users\<USER_ID>\Documents\models` folder:
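A representative command (the `--precisions` option limits the download to FP16; the models can also be downloaded one at a time):

```bat
python "<INSTALL_DIR>\deployment_tools\open_model_zoo\tools\downloader\downloader.py" --name vehicle-license-plate-detection-barrier-0106,vehicle-attributes-recognition-barrier-0039,license-plate-recognition-barrier-0001 --precisions FP16 --output_dir C:\Users\<USER_ID>\Documents\models
```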
Your screen looks similar to this after the download:
In this step, your trained models are ready to run through the Model Optimizer to convert them to the Intermediate Representation (IR) format. This is required before using the Inference Engine with the model.
Models in the Intermediate Representation format always include a pair of `.xml` and `.bin` files. Make sure you have both files so the Inference Engine can find them:

* `model_name.xml`
* `model_name.bin`
This guide uses the public SqueezeNet 1.1 Caffe* model to run the Image Classification Sample. See the example in the Download Models section to learn how to download this model.
The `squeezenet1.1` model is downloaded in the Caffe* format. You must use the Model Optimizer to convert the model to the IR. The `vehicle-license-plate-detection-barrier-0106`, `vehicle-attributes-recognition-barrier-0039`, and `license-plate-recognition-barrier-0001` models are downloaded in the IR format, so you do not need to use the Model Optimizer to convert them.
1. Create an `<ir_dir>` directory to contain the model's IR.
2. The Inference Engine can perform inference on models of different precision formats, such as `FP32`, `FP16`, and `INT8`. To prepare an IR with a specific precision, run the Model Optimizer with the appropriate `--data_type` option.
3. Run the Model Optimizer script to convert the model, as shown in the sketch after this list.
4. The produced IR files are in the `<ir_dir>` directory.
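A representative Model Optimizer invocation (a sketch: `mo.py` with the `--input_model`, `--data_type`, and `--output_dir` options; the exact model file name depends on the source framework):

```bat
cd "<INSTALL_DIR>\deployment_tools\model_optimizer"
python mo.py --input_model <models_dir>\<model_file> --data_type FP16 --output_dir <ir_dir>
```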
Example of converting the SqueezeNet Caffe* model:
The following command converts the public SqueezeNet 1.1 Caffe* model to the FP16 IR and saves it to the `C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir` output directory:
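A representative command (the `.caffemodel` file name follows the Model Downloader's default layout, which may differ in your setup):

```bat
python "<INSTALL_DIR>\deployment_tools\model_optimizer\mo.py" --input_model C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\squeezenet1.1.caffemodel --data_type FP16 --output_dir C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir
```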
After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir` directory.
Copy the `squeezenet1.1.labels` file from `<INSTALL_DIR>\deployment_tools\demo\` to `<ir_dir>`. This file contains the classes that ImageNet uses, so the inference results show class names instead of class numbers:
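For example, a plain file copy (adjust `<ir_dir>` to your output directory):

```bat
copy "<INSTALL_DIR>\deployment_tools\demo\squeezenet1.1.labels" <ir_dir>
```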
Many sources are available from which you can download video media to use with the code samples and demo applications. Possibilities include:
As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes two sample images that you can use for running code samples and demo applications:

* `<INSTALL_DIR>\deployment_tools\demo\car.png`
* `<INSTALL_DIR>\deployment_tools\demo\car_1.bmp`
NOTE: The Image Classification code sample is automatically compiled when you run the Image Classification demo script. If you want to compile it manually, see the Build the Sample Applications on Microsoft Windows* OS section in Inference Engine Code Samples Overview.
To run the Image Classification code sample with an input image on the IR:
Examples of running the Image Classification code sample on different devices:
The following commands run the Image Classification Code Sample using the `car.png` file from the `<INSTALL_DIR>\deployment_tools\demo` directory as an input image, the IR of your model from `C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir`, and on different hardware devices:
CPU:
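A representative command, run from the samples build directory listed above after setting up the environment with `<INSTALL_DIR>\bin\setupvars.bat`:

```bat
.\classification_sample_async.exe -i "<INSTALL_DIR>\deployment_tools\demo\car.png" -m "C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir\squeezenet1.1.xml" -d CPU
```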
GPU:
NOTE: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps. For details, see the Steps for Intel® Processor Graphics (GPU) section in the installation instructions.
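A representative GPU command (same inputs, with `-d GPU`):

```bat
.\classification_sample_async.exe -i "<INSTALL_DIR>\deployment_tools\demo\car.png" -m "C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir\squeezenet1.1.xml" -d GPU
```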
MYRIAD:
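A representative MYRIAD command (the MYRIAD plugin works with FP16 models, such as the IR prepared earlier):

```bat
.\classification_sample_async.exe -i "<INSTALL_DIR>\deployment_tools\demo\car.png" -m "C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir\squeezenet1.1.xml" -d MYRIAD
```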
When the Sample Application completes, you see the label and confidence for the top-10 categories on the display. Below is a sample output with inference results on CPU:
NOTE: The Security Barrier Camera Demo Application is automatically compiled when you run the Inference Pipeline demo scripts. If you want to build it manually, see the instructions in the Demo Applications Overview section.
To run the Security Barrier Camera Demo Application using an input image on the prepared IRs:
Examples of running the Security Barrier Camera demo application on different devices:
CPU:
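A representative command, run from the demos build directory listed above (`<models_dir>` is where the three IRs were downloaded; the paths assume the Model Downloader's default `intel\<model_name>\FP16` layout):

```bat
.\security_barrier_camera_demo.exe -i "<INSTALL_DIR>\deployment_tools\demo\car_1.bmp" ^
  -m "<models_dir>\intel\vehicle-license-plate-detection-barrier-0106\FP16\vehicle-license-plate-detection-barrier-0106.xml" ^
  -m_va "<models_dir>\intel\vehicle-attributes-recognition-barrier-0039\FP16\vehicle-attributes-recognition-barrier-0039.xml" ^
  -m_lpr "<models_dir>\intel\license-plate-recognition-barrier-0001\FP16\license-plate-recognition-barrier-0001.xml" ^
  -d CPU
```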
GPU:
NOTE: Running inference on Intel® Processor Graphics (GPU) requires additional hardware configuration steps. For details, see the Steps for Intel® Processor Graphics (GPU) section in the installation instructions.
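A representative GPU command (same models and input, targeting the GPU for the detection model; the attribute and license-plate models can be assigned their own devices with `-d_va` and `-d_lpr`):

```bat
.\security_barrier_camera_demo.exe -i "<INSTALL_DIR>\deployment_tools\demo\car_1.bmp" ^
  -m "<models_dir>\intel\vehicle-license-plate-detection-barrier-0106\FP16\vehicle-license-plate-detection-barrier-0106.xml" ^
  -m_va "<models_dir>\intel\vehicle-attributes-recognition-barrier-0039\FP16\vehicle-attributes-recognition-barrier-0039.xml" ^
  -m_lpr "<models_dir>\intel\license-plate-recognition-barrier-0001\FP16\license-plate-recognition-barrier-0001.xml" ^
  -d GPU
```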
MYRIAD:
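A representative MYRIAD command (the FP16 IRs downloaded earlier match the MYRIAD plugin's requirements):

```bat
.\security_barrier_camera_demo.exe -i "<INSTALL_DIR>\deployment_tools\demo\car_1.bmp" ^
  -m "<models_dir>\intel\vehicle-license-plate-detection-barrier-0106\FP16\vehicle-license-plate-detection-barrier-0106.xml" ^
  -m_va "<models_dir>\intel\vehicle-attributes-recognition-barrier-0039\FP16\vehicle-attributes-recognition-barrier-0039.xml" ^
  -m_lpr "<models_dir>\intel\license-plate-recognition-barrier-0001\FP16\license-plate-recognition-barrier-0001.xml" ^
  -d MYRIAD
```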
Below you can find basic guidelines for executing the OpenVINO™ workflow using the code samples and demo applications. The compiled binaries are located in:

* Code samples: `C:\Users\<USER_ID>\Documents\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release`
* Demo applications: `C:\Users\<USER_ID>\Documents\Intel\OpenVINO\inference_engine_demos_build\intel64\Release`
Template to call sample code or a demo application:
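A generic form of the template (every field is a placeholder):

```bat
<path_to_app> -i <path_to_media> -m <path_to_model> -d <target_device>
```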
With the sample information specified, the command might look like this:
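For instance, a hypothetical command that fills in the template with the Image Classification sample and the files prepared earlier in this guide:

```bat
.\classification_sample_async.exe -i "<INSTALL_DIR>\deployment_tools\demo\car.png" -m "C:\Users\<USER_ID>\Documents\models\public\squeezenet1.1\ir\squeezenet1.1.xml" -d CPU
```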
Some demo applications let you use multiple models for different purposes. In these cases, the output of the first model is usually used as the input for later models.
For example, an SSD detects a variety of objects in a frame, then age, gender, head pose, emotion recognition and similar models target the objects classified by the SSD to perform their functions.
In these cases, the use pattern in the last part of the template above is usually:
-m_<acronym> … -d_<acronym> …
For head pose:
-m_hp <headpose model> -d_hp <headpose hardware target>
Example of an Entire Command (object_detection + head pose):
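A hypothetical command following this pattern (the application and model paths are placeholders, not a specific demo shipped with the toolkit):

```bat
<path_to_app> -i <path_to_media> -m <path_to_detection_model>.xml -d CPU ^
  -m_hp <path_to_headpose_model>.xml -d_hp GPU
```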
Example of an Entire Command (object_detection + head pose + age-gender):
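Extending the same hypothetical command with an age-gender model, assuming the demo exposes `-m_ag`/`-d_ag` options analogous to the head-pose ones:

```bat
<path_to_app> -i <path_to_media> -m <path_to_detection_model>.xml -d CPU ^
  -m_hp <path_to_headpose_model>.xml -d_hp GPU ^
  -m_ag <path_to_age_gender_model>.xml -d_ag CPU
```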
You can see all of a sample application's parameters by adding the `-h` or `--help` option at the command line.
Use these resources to learn more about the OpenVINO™ toolkit: