The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).
The following components of the Intel® Distribution of OpenVINO™ toolkit for Linux* are included with the installation and installed by default:
| Component | Description |
|---|---|
| Model Optimizer | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*. |
| Inference Engine | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
| Drivers and runtimes for OpenCL™ version 2.1 | Enables OpenCL on the GPU/CPU for Intel® processors |
| Intel® Media SDK | Offers access to hardware accelerated video codecs and frame processing |
| OpenCV | OpenCV* community version compiled for Intel® hardware |
| OpenVX* version 1.1 | Intel's implementation of OpenVX* 1.1 optimized for running on Intel® hardware (CPU, GPU, IPU) |
| Sample Applications | A set of simple console applications demonstrating how to use the Inference Engine in your applications |
| Demos | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases |
| Additional Tools | A set of tools to work with your models |
| Documentation for Pre-Trained Models | Documentation for the pre-trained models available in the Open Model Zoo repo |
The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.
This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ toolkit. Links are provided for each type of compatible hardware, including downloads, initialization, and configuration steps. The following steps will be covered:
Download the Intel® Distribution of OpenVINO™ toolkit package file from Intel® Distribution of OpenVINO™ toolkit for Linux*. Select the Intel® Distribution of OpenVINO™ toolkit for Linux package from the dropdown menu.
NOTE: The Intel® Media SDK component is always installed in the `/opt/intel/mediasdk` directory regardless of the OpenVINO installation path chosen.
The first core components are installed. Continue to the next section to install additional dependencies.
NOTE: If you installed the Intel® Distribution of OpenVINO™ toolkit to a non-default installation directory, replace `/opt/intel` with the directory in which you installed the software.
These dependencies are required for:
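A typical way to install these external dependencies, assuming the default install path of `/opt/intel/openvino`, is to run the dependency script shipped with the toolkit (the script name and location may differ between releases):

```sh
# Install external software dependencies (assumes the default install path;
# the script name may vary in older releases)
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh
```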
You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables:
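For a default installation, the environment script is typically invoked as follows (adjust the path if you installed to a different directory):

```sh
# Temporarily set the OpenVINO environment variables for the current shell
source /opt/intel/openvino/bin/setupvars.sh
```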
Optional: The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:
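One common approach, assuming the default install path, is to append the setup command to your `.bashrc` so it runs in every new shell:

```sh
# Make the environment setup permanent by adding it to ~/.bashrc
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
```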
If the environment was set successfully, you will see the following confirmation in your terminal: `[setupvars.sh] OpenVINO environment initialized.`
The environment variables are set. Continue to the next section to configure the Model Optimizer.
The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX* and Kaldi*.
The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide.
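For illustration only, once the Model Optimizer is configured (see the steps below), a minimal conversion producing the `.xml`/`.bin` pair might look like this; the model path and output directory are placeholders, and the `mo.py` location assumes the default install path:

```sh
# Convert a trained Caffe* model to an Intermediate Representation (IR)
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model /path/to/model.caffemodel \
    --output_dir ~/ir_models
```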
You can choose to either configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
NOTE: Since the TensorFlow framework is not officially supported on CentOS*, the Model Optimizer for TensorFlow cannot be configured and run on those systems.
IMPORTANT: Internet access is required to execute the following steps successfully. If you access the Internet only through a proxy server, make sure the proxy is configured in your OS environment.
Option 1: Configure all supported frameworks at the same time
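Assuming the default install path, the combined prerequisites script is typically run from the Model Optimizer `install_prerequisites` directory:

```sh
# Configure the Model Optimizer for all supported frameworks at once
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
```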
Option 2: Configure each framework separately
Configure individual frameworks separately ONLY if you did not select Option 1 above.
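For example, the per-framework scripts in the same `install_prerequisites` directory can be run individually (script names assume a recent release):

```sh
# Configure the Model Optimizer for one framework at a time
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_caffe.sh   # Caffe*
sudo ./install_prerequisites_tf.sh      # TensorFlow*
sudo ./install_prerequisites_mxnet.sh   # MXNet*
sudo ./install_prerequisites_onnx.sh    # ONNX*
sudo ./install_prerequisites_kaldi.sh   # Kaldi*
```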
You are ready to compile the samples by running the verification scripts.
IMPORTANT: This section is required. In addition to confirming your installation was successful, demo scripts perform other steps, such as setting up your computer to use the Inference Engine samples.
To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:
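Assuming the default install path, the Image Classification verification script is typically run from the demo directory; it downloads a SqueezeNet model, converts it to an IR with the Model Optimizer, and builds and runs the Image Classification sample:

```sh
# Run the Image Classification verification script on the CPU
cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh
```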
The script uses the `car.png` image located in the demo directory. When the verification script completes, you will see the label and confidence for the top-10 categories:
Run the Inference Pipeline verification script:
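Assuming the same demo directory as the previous step, a typical invocation is:

```sh
# Run the Inference Pipeline (Security Barrier Camera) verification script on the CPU
cd /opt/intel/openvino/deployment_tools/demo
./demo_security_barrier_camera.sh
```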
This script downloads three pre-trained model IRs, builds the Security Barrier Camera Demo application, and runs it with the downloaded models and the `car_1.bmp` image from the demo directory to show an inference pipeline. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.
First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.
When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:
To learn more about the verification scripts, see the `README.txt` file in the demo directory.
For a description of the Intel Distribution of OpenVINO™ pre-trained object detection and object recognition models, see Overview of OpenVINO™ Toolkit Pre-Trained Models.
You have completed all required installation, configuration, and build steps in this guide to use your CPU to work with your trained models. To use other hardware, see:
The steps in this section are required only if you want to enable the toolkit components to use processor graphics (GPU) on your system.
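The GPU configuration step normally installs the Intel® Graphics Compute Runtime for OpenCL™ driver; a typical sequence, assuming the default install path and script name (both may vary by release), is:

```sh
# Install the OpenCL™ GPU driver components
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_NEO_OCL_driver.sh
```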
The driver installation script may suggest installing additional or newer packages; ignore those suggestions and continue.
These steps are only required if you want to perform inference on Intel® Movidius™ NCS powered by the Intel® Movidius™ Myriad™ 2 VPU or Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the Get Started page for Intel® Neural Compute Stick 2:
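Configuration for these devices normally amounts to adding your user to the `users` group and installing the USB udev rules shipped with the toolkit. A typical sequence is sketched below; the rules file name and location are assumptions and may differ by release:

```sh
# Add the current user to the users group (log out and back in afterwards)
sudo usermod -a -G users "$(whoami)"

# Install the udev rules for the Myriad™ USB devices and reload them
sudo cp /opt/intel/openvino/install_dependencies/97-myriad-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
```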
To install and configure your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, see the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs Configuration Guide.
NOTE: After installing your Intel® Movidius™ VPU, you will return to this guide to complete OpenVINO™ installation.
After configuration is done, you are ready to run the verification scripts with the HDDL Plugin for your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
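Assuming the demo directory from the earlier verification step, the verification scripts accept a `-d` option to select the target device; for the HDDL plugin a typical invocation is:

```sh
# Run the verification scripts on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh -d HDDL
./demo_security_barrier_camera.sh -d HDDL
```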
In this section you will run the Image Classification sample application with a SqueezeNet 1.1 Caffe* model on three types of Intel® hardware: CPU, GPU, and VPU.
IMPORTANT: This section requires that you have completed Run the Verification Scripts to Verify Installation. That verification script builds the Image Classification sample application and downloads the required SqueezeNet Caffe* model.
Setting up a neural network is the first step in running the sample.
If you are running inference on hardware other than VPU-based devices, you already have the required FP32 neural network model converted to an optimized Intermediate Representation (IR). Follow the steps in the Run the Sample Application section to run the sample.
If you want to run inference on a VPU device (Intel® Movidius™ Neural Compute Stick, Intel® Neural Compute Stick 2, or Intel® Vision Accelerator Design with Intel® Movidius™ VPUs), you will need an FP16 version of the model, which you will set up in this section.
To convert the FP32 model to a FP16 IR suitable for VPU-based hardware accelerators, follow the steps below:
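A typical conversion, assuming the SqueezeNet model downloaded by the verification script lives under `~/openvino_models` (the exact path may differ on your system) and the default install path, looks like this:

```sh
# Create a directory for the FP16 IR
mkdir -p ~/squeezenet1.1_FP16

# Convert the FP32 Caffe* model to an FP16 IR for VPU-based accelerators
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model ~/openvino_models/models/public/squeezenet1.1/squeezenet1.1.caffemodel \
    --data_type FP16 \
    --output_dir ~/squeezenet1.1_FP16
```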
The `squeezenet1.1.labels` file contains the classes that ImageNet uses. This file is included so that the inference results show text instead of classification numbers. Copy `squeezenet1.1.labels` to your optimized model location:
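For example, assuming the FP32 IR and labels file produced by the verification script are under `~/openvino_models/ir` (adjust the source path to match your system):

```sh
# Copy the labels file next to the FP16 IR so results are shown as class names
cp ~/openvino_models/ir/FP32/public/squeezenet1.1/squeezenet1.1.labels ~/squeezenet1.1_FP16/
```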
Now your neural network setup is complete and you're ready to run the sample application.
In this section you will run the Image Classification sample application, which was built automatically when you ran the Image Classification verification script. To run the sample application:
Use the `car.png` file from the demo directory as an input image, the IR of your FP16 model, and a plugin for the hardware device on which to perform inference, as shown in the example below.
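Putting it together, a typical run on an Intel® Neural Compute Stick 2 might look like the following; the sample build directory is where the verification script placed the compiled samples, and all paths are assumptions that may differ on your system:

```sh
# Run the Image Classification sample with the FP16 IR on a MYRIAD (VPU) device
cd ~/inference_engine_samples_build/intel64/Release
./classification_sample_async \
    -i /opt/intel/openvino/deployment_tools/demo/car.png \
    -m ~/squeezenet1.1_FP16/squeezenet1.1.xml \
    -d MYRIAD
```

To target different hardware, change the `-d` value to `CPU`, `GPU`, or `HDDL`.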
NOTE: Running the sample application on hardware other than CPU requires performing additional hardware configuration steps.
NOTE: Running inference on Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2 with the MYRIAD plugin requires performing additional hardware configuration steps.
NOTE: Running inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs with the HDDL plugin requires performing additional hardware configuration steps.
For information on Sample Applications, see the Inference Engine Samples Overview.
Congratulations, you have finished the installation of the Intel® Distribution of OpenVINO™ toolkit for Linux*. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.
To learn more about converting models, go to: