- These steps apply to Ubuntu*, CentOS*, and Yocto*.
- If you are using Intel® Distribution of OpenVINO™ toolkit on Windows* OS, see the Installation Guide for Windows*.
- CentOS and Yocto installations will require some modifications that are not covered in this guide.
- An internet connection is required to follow the steps in this guide.
- Intel® System Studio is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to Get Started with Intel® System Studio.
OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep learning inference, deployed from edge to cloud.
The Intel® Distribution of OpenVINO™ toolkit for Linux*:
Included with the installation and installed by default:
| Component | Description |
|-----------|-------------|
| Model Optimizer | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*. |
| Inference Engine | This is the engine that runs the deep learning model. It includes a set of libraries for easy integration of inference into your applications. |
| Intel® Media SDK | Offers access to hardware-accelerated video codecs and frame processing. |
| OpenCV | OpenCV* community version compiled for Intel® hardware. |
| Inference Engine Code Samples | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. |
| Demo Applications | A set of simple console applications that provide robust application templates to help you implement specific deep learning scenarios. |
| Additional Tools | A set of tools for working with your models, including the Accuracy Checker utility, the Post-Training Optimization Tool, the Model Downloader, and others. |
| Documentation for Pre-Trained Models | Documentation for the pre-trained models available in the Open Model Zoo repo. |
| Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see DL Streamer Samples, API Reference, Elements, and Tutorial. |
Can Be Installed Optionally
Deep Learning Workbench (DL Workbench) is a platform built upon OpenVINO™ that provides a web-based graphical environment for optimizing, fine-tuning, analyzing, visualizing, and comparing the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components.
Proceed to an easy installation from Docker to get started.
NOTE: With the OpenVINO™ 2020.4 release, the Intel® Movidius™ Neural Compute Stick is no longer supported.
This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ toolkit. Links are provided for each type of compatible hardware, including downloads, initialization, and configuration steps. The following steps will be covered:
Download the Intel® Distribution of OpenVINO™ toolkit package file from Intel® Distribution of OpenVINO™ toolkit for Linux*. Select the Intel® Distribution of OpenVINO™ toolkit for Linux package from the dropdown menu.
Change directories to where you downloaded the Intel Distribution of OpenVINO toolkit for Linux* package file.
If you downloaded the package file to the current user's Downloads directory, change to that directory. By default, the file is saved as l_openvino_toolkit_p_<version>.tgz.
Unpack the .tgz file:
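For example, assuming the default download location and file name:

```sh
cd ~/Downloads/
tar -xvzf l_openvino_toolkit_p_<version>.tgz
```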
The files are unpacked to the l_openvino_toolkit_p_<version> directory. Go to the l_openvino_toolkit_p_<version> directory:
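Assuming the default directory name from the previous step:

```sh
cd l_openvino_toolkit_p_<version>
```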
If you have a previous version of the Intel Distribution of OpenVINO toolkit installed, rename or delete these two directories:
Choose your installation option and run the related script as root to use either a GUI installation wizard or a command-line interface (CLI).
Screenshots are provided for the GUI, but not for the CLI. The following information also applies to the CLI, where you will be presented with the same choices and tasks.
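In a default package, the two options are typically run as follows from the unpacked directory:

```sh
# GUI installation wizard
sudo ./install_GUI.sh

# Command-line installation
sudo ./install.sh
```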
You can select which OpenVINO components will be installed by modifying the COMPONENTS parameter in the silent.cfg file. For example, you can restrict the installation to only the CPU runtime for the Inference Engine by setting the COMPONENTS parameter in silent.cfg accordingly. To get a full list of available components for installation, run the ./install.sh --list_components command from the unpacked OpenVINO™ toolkit package.
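For example, a minimal silent.cfg change could look like this; the component identifier below is illustrative and varies by release, so copy the exact name from the --list_components output:

```sh
# silent.cfg -- install only the Inference Engine CPU runtime.
# The identifier is an example; get the real name from ./install.sh --list_components.
COMPONENTS=intel-openvino-ie-rt-cpu__x86_64
```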
If you select the default options, the Installation summary GUI screen looks like this:
By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as <INSTALL_DIR>: /home/<USER>/intel/openvino_<version>/. For simplicity, a symbolic link to the latest installation is also created: /home/<USER>/intel/openvino/.
NOTE: If there is an OpenVINO™ toolkit version previously installed on your system, the installer will use the same destination directory for subsequent installations. If you want to install a newer version to a different directory, you need to uninstall the previously installed versions first.
NOTE: The Intel® Media SDK component is always installed in the /opt/intel/mediasdk directory regardless of the OpenVINO installation path chosen.
The first core components are installed. Continue to the next section to install additional dependencies.
NOTE: If you installed the Intel® Distribution of OpenVINO™ to a non-default install directory, replace /opt/intel with the directory in which you installed the software.
These dependencies are required for:
- the Intel-optimized build of the OpenCV* library
- the Deep Learning Inference Engine
- the Deep Learning Model Optimizer tools
Run a script to download and install the external software dependencies:
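Assuming the default installation path (replace /opt/intel if you installed elsewhere):

```sh
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh
```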
The dependencies are installed. Continue to the next section to set your environment variables.
You must update several environment variables before you can compile and run OpenVINO™ applications. Run the following script to temporarily set your environment variables:
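Assuming the default installation path:

```sh
source /opt/intel/openvino/bin/setupvars.sh
```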
Optional: The OpenVINO environment variables are removed when you close the shell. As an option, you can permanently set the environment variables as follows:
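One way to do this, assuming a bash shell and the default installation path, is to append the command to your ~/.bashrc so it runs in every new shell:

```sh
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
```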
When the environment is set correctly, sourcing the script (or opening a new terminal, if you made the change permanent) prints: [setupvars.sh] OpenVINO environment initialized.
The environment variables are set. Continue to the next section to configure the Model Optimizer.
The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX*, and Kaldi*.
The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:
.xml: Describes the network topology
.bin: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide.
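As a quick illustration of the workflow (run it after completing the configuration below), converting a Caffe model could look like this; the model file name and output directory are placeholders for your own paths:

```sh
# Convert a trained Caffe model to IR (.xml + .bin). Paths are illustrative.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model model.caffemodel \
    --output_dir ./ir_output
```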
You can choose to either configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
NOTE: Since the TensorFlow framework is not officially supported on CentOS*, the Model Optimizer for TensorFlow cannot be configured and run on those systems.
IMPORTANT: Internet access is required to execute the following steps successfully. If you can access the Internet only through a proxy server, make sure it is configured in your OS environment.
Option 1: Configure all supported frameworks at the same time
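Assuming the default installation path, run the combined prerequisites script:

```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
```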
Option 2: Configure each framework separately
Configure individual frameworks separately ONLY if you did not select Option 1 above.
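From the same install_prerequisites directory, run the script for the framework you need, for example Caffe:

```sh
cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_caffe.sh
# Per-framework scripts also exist for the other frameworks, e.g.
# install_prerequisites_tf.sh, install_prerequisites_mxnet.sh,
# install_prerequisites_onnx.sh, install_prerequisites_kaldi.sh
```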
The Model Optimizer is configured for one or more frameworks.
You have completed all required installation, configuration, and build steps in this guide to use your CPU to work with your trained models.
To enable inference on other hardware, see:
Or proceed to the Get Started guide to run code samples and demo applications.
The steps in this section are required only if you want to enable the toolkit components to use processor graphics (GPU) on your system.
Install the Intel® Graphics Compute Runtime for OpenCL™ driver components required to use the GPU plugin and write custom layers for Intel® Integrated Graphics. The drivers are not included in the package. To install them, make sure you have an internet connection and run the installation script:
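Assuming the default installation path:

```sh
cd /opt/intel/openvino/install_dependencies
sudo -E ./install_NEO_OCL_driver.sh
```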
The script compares the driver version already on the system with the current version. If the installed version is equal to or higher than the current version, the script does not install a new driver. If the installed version is lower, the script uninstalls it and installs the current version with your permission. Note that newer hardware requires a newer driver version, namely 20.35 instead of 19.41. If the script fails to uninstall the driver, uninstall it manually. During the script execution, you may see a suggestion in the command line output (for example, to add the OpenCL user to the video group). Ignore this suggestion and continue.
You can also find the most recent version of the driver, installation procedure and other information in the https://github.com/intel/compute-runtime/ repository.
You've completed all required configuration steps to perform inference on processor graphics. Proceed to the Get Started guide to run code samples and demo applications.
These steps are only required if you want to perform inference on an Intel® Movidius™ NCS powered by the Intel® Movidius™ Myriad™ 2 VPU or an Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the Get Started page for Intel® Neural Compute Stick 2.
Add the current Linux user to the users group:
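For example:

```sh
sudo usermod -a -G users "$(whoami)"
```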
Log out and log in for it to take effect.
NOTE: You may need to reboot your machine for this to take effect.
You've completed all required configuration steps to perform inference on an Intel® Neural Compute Stick 2. Proceed to the Get Started guide to run code samples and demo applications.
To install and configure your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, see the Intel® Vision Accelerator Design with Intel® Movidius™ VPUs Configuration Guide.
NOTE: After installing your Intel® Movidius™ VPU, you will return to this guide to complete the Intel® Distribution of OpenVINO™ installation.
After configuration is done, you are ready to run the verification scripts with the HDDL Plugin for your Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
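For example, assuming the default installation path, you can run the demo script on the HDDL device:

```sh
cd /opt/intel/openvino/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh -d HDDL
```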
You've completed all required configuration steps to perform inference on Intel® Vision Accelerator Design with Intel® Movidius™ VPUs. Proceed to the Get Started guide to run code samples and demo applications.
Now you are ready to get started. To continue, see the following pages:
Choose one of the options provided below to uninstall the Intel® Distribution of OpenVINO™ Toolkit from your system.
PRC developers might encounter pip installation issues during the OpenVINO™ installation. To resolve them, you may use one of the following options at your discretion:
- Add a download source with the -i parameter in the pip command, for example: pip install <package> -i <mirror-url>. Add the --trusted-host parameter if the mirror URL uses http instead of https.
- Modify or create the ~/.pip/pip.conf file to change the default download source with the content below:
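For example, a minimal ~/.pip/pip.conf pointing at a mirror (the mirror URL is illustrative; use whichever PyPI mirror you prefer):

```
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
```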
To learn more about converting models, go to: