Install Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support

NOTES:

Introduction

The Intel® Distribution of OpenVINO™ 2019 R1 toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ 2019 R1 toolkit includes the Intel® Deep Learning Deployment Toolkit (Intel® DLDT).

The Intel® Distribution of OpenVINO™ 2019 R1 toolkit for Linux* with FPGA Support:

Included with the Installation and installed by default:

• Model Optimizer: Imports, converts, and optimizes models trained in popular frameworks into a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.
• Inference Engine: The engine that runs the deep learning model. It includes a set of libraries for easy inference integration into your applications.
• Drivers and runtimes for OpenCL™ version 2.1: Enable OpenCL on the GPU/CPU for Intel® processors.
• Intel® Media SDK: Offers access to hardware-accelerated video codecs and frame processing.
• Pre-compiled FPGA bitstream samples: Pre-compiled bitstream samples for the Intel® Arria® 10 GX FPGA Development Kit, Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA, and Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA.
• Intel® FPGA SDK for OpenCL™ software technology: The Intel® FPGA RTE for OpenCL™ provides utilities, host runtime libraries, drivers, and RTE-specific libraries and files.
• OpenCV version 3.4.2: OpenCV* community version compiled for Intel® hardware. Includes PVL libraries for computer vision.
• OpenVX* version 1.1: Intel's implementation of OpenVX* 1.1, optimized for running on Intel® hardware (CPU, GPU, IPU).
• Demos and Sample Applications: A set of simple console applications demonstrating how to use the Inference Engine in your applications.

Development and Target Platform

The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.

Hardware

Processor Notes:

Operating Systems:

Overview

This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ 2019 R1 toolkit with FPGA Support. Links are provided for each type of compatible hardware, including downloads, initialization, and configuration steps. The following steps are covered:

  1. Install the Intel® Distribution of OpenVINO™ Toolkit
  2. Install External Software Dependencies
  3. Configure the Model Optimizer
  4. Run the Verification Scripts to Verify Installation and Compile Samples
  5. Install your compatible hardware from the list of supported hardware
    After installing your compatible hardware, you will return to this guide to complete OpenVINO™ installation.
  6. Complete Accelerator Setup
  7. Run a Sample Application
  8. Use the Face Detection Tutorial

Install the Intel® Distribution of OpenVINO™ 2019 R1 Toolkit Core Components

Download the Intel® Distribution of OpenVINO™ 2019 R1 toolkit package file from Intel® Distribution of OpenVINO™ toolkit for Linux* with FPGA Support. Select the Intel® Distribution of OpenVINO™ toolkit for Linux with FPGA Support package from the dropdown menu.

  1. Open a command prompt terminal window.
  2. Change directories to where you downloaded the Intel Distribution of OpenVINO toolkit for Linux* with FPGA Support package file.
    If you downloaded the package file to the current user's Downloads directory:
    cd ~/Downloads/
    By default, the file is saved as l_openvino_toolkit_fpga_p_<version>.tgz.
  3. Unpack the .tgz file:
    tar -xvzf l_openvino_toolkit_fpga_p_<version>.tgz
    The files are unpacked to the l_openvino_toolkit_fpga_p_<version> directory.
  4. Go to the l_openvino_toolkit_fpga_p_<version> directory:
    cd l_openvino_toolkit_fpga_p_<version>
    If you have a previous version of the Intel Distribution of OpenVINO toolkit installed, rename or delete these two directories:
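    A minimal sketch of renaming a previous installation's directories, assuming they are the default sample-build and model directories referenced later in this guide (~/inference_engine_samples and ~/openvino_models):
    mv ~/inference_engine_samples ~/inference_engine_samples_old
    mv ~/openvino_models ~/openvino_models_old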

Installation Notes:

  1. Choose your installation option:
    • Option 1: GUI Installation Wizard:
      sudo ./install_GUI.sh
    • Option 2: Command-Line Instructions:
      sudo ./install.sh
  2. Follow the instructions on your screen. Watch for informational messages such as the following in case you must complete additional steps:
    install-linux-fpga-01.png
  3. If you select the default options, the Installation summary GUI screen looks like this:
    install-linux-fpga-02.png
    • Optional: You can choose Customize and select only the bitstreams for your card. This reduces the download size by several gigabytes.
    • The three bitstreams listed at the bottom of the customization screen are highlighted below. Choose the one for your FPGA:
      install-linux-fpga-04.png
    • When installed as root, the default installation directory for the Intel Distribution of OpenVINO 2019 R1 toolkit is /opt/intel/openvino_fpga_2019.<version>/.
      For simplicity, a symbolic link to the latest installation is also created: /opt/intel/openvino/. (A quick check of this link is sketched after this list.)
  4. A Complete screen indicates that the core components have been installed:
    install-linux-fpga-05.png
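To confirm where the core components were placed, you can list the symbolic link described in step 3 (a minimal check, assuming the default root installation):

ls -l /opt/intel/openvino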

The core components are installed. Continue to the next section to install additional dependencies.

Install External Software Dependencies

These dependencies are required for:

  1. Change to the install_dependencies directory:
    cd /opt/intel/openvino/install_dependencies
  2. Run a script to download and install the external software dependencies:
    sudo -E ./install_openvino_dependencies.sh

The dependencies are installed. Continue to the next section to configure the Model Optimizer.

Configure the Model Optimizer

The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX* and Kaldi*.

The Model Optimizer is a key component of the Intel Distribution of OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model: an .xml file describing the network topology and a .bin file containing the weights and biases binary data.

For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide.

Model Optimizer Configuration Steps

IMPORTANT: Internet access is required to complete the following steps successfully. If you can access the Internet only through a proxy server, make sure the proxy is configured in your environment.
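If you need to set a proxy for these steps, a minimal sketch of configuring it in the shell environment (the host and port below are placeholders, not values from this guide):

export http_proxy=http://<proxy_host>:<proxy_port>
export https_proxy=http://<proxy_host>:<proxy_port>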

You can choose to either configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.

NOTE: If you installed the Intel® Distribution of OpenVINO™ to the non-default install directory, replace /opt/intel with the directory in which you installed the software.

Option 1: Configure all supported frameworks at the same time

  1. Go to the Model Optimizer prerequisites directory:
    cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
  2. Run the script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi*, and ONNX:
    sudo ./install_prerequisites.sh

Option 2: Configure each framework separately

Configure individual frameworks separately ONLY if you did not select Option 1 above.

  1. Go to the Model Optimizer prerequisites directory:
    cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
  2. Run the script for your model framework. You can run more than one script; for example, for Caffe*:
    sudo ./install_prerequisites_caffe.sh
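    The prerequisites directory also contains one script per framework; the names below assume the 2019 R1 layout, so verify them against your installation before running:
    sudo ./install_prerequisites_tf.sh
    sudo ./install_prerequisites_mxnet.sh
    sudo ./install_prerequisites_onnx.sh
    sudo ./install_prerequisites_kaldi.sh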

You are ready to compile the samples by running the verification scripts.

Run the Verification Scripts to Verify Installation and Compile Samples

To verify the installation and compile two samples, run the verification applications provided with the product on the CPU:

  1. Go to the Inference Engine demo directory:
    cd /opt/intel/openvino/deployment_tools/demo
  2. Run the Image Classification verification script:
    ./demo_squeezenet_download_convert_run.sh
    This verification script downloads a SqueezeNet model and uses the Model Optimizer to convert it to .bin and .xml Intermediate Representation (IR) files. The Inference Engine requires this model conversion so it can use the IR as input and achieve optimum performance on Intel hardware.
    This verification script builds the Image Classification Sample application and runs it with the car.png image in the demo directory. When the verification script completes, you will have the label and confidence for the top-10 categories:
    squeezenet_results.png
  3. Run the Inference Pipeline verification script:

    ./demo_security_barrier_camera.sh

    This verification script builds the Security Barrier Camera Demo application included in the package.

    This verification script uses the car_1.bmp image in the demo directory to show an inference pipeline using three of the pre-trained models. The verification script uses vehicle recognition in which vehicle attributes build on each other to narrow in on a specific attribute.

    First, an object is identified as a vehicle. This identification is used as input to the next model, which identifies specific vehicle attributes, including the license plate. Finally, the attributes identified as the license plate are used as input to the third model, which recognizes specific characters in the license plate.

    When the verification script completes, you will see an image that displays the resulting frame with detections rendered as bounding boxes, and text:

    security-barrier-results.png
  4. Close the image viewer window to complete the verification script.

To learn about the verification scripts, see the README.txt file in /opt/intel/openvino/deployment_tools/demo.

For a description of the Intel Distribution of OpenVINO™ pre-trained object detection and object recognition models, see Overview of OpenVINO™ Toolkit Pre-Trained Models.

You have completed all required installation, configuration, and build steps in this guide to work with your trained models on the CPU. To use other hardware, see Install and Configure Your Compatible Hardware below.

Install and Configure Your Compatible Hardware

Install your compatible hardware from the list of supported components below.

NOTE: Once you've completed your hardware installation, you'll return to this guide to finish installation and configuration of the Intel® Distribution of OpenVINO™ 2019 R1 toolkit.

Links to install and configure compatible hardware

Complete Accelerator Setup

Now that you've completed installation and configuration of your compatible hardware, you're ready to move to accelerator setup and run a sample application.

The files in fpga_support_files.tgz are required to ensure your accelerator card and OpenVINO™ work correctly, and they provide support for all compatible hardware accelerators.

  1. Download fpga_support_files.tgz from the Intel Registration Center. Right-click the file and save it instead of letting your browser extract it automatically.
  2. Go to the directory where you downloaded fpga_support_files.tgz.
  3. Unpack the .tgz file:
    tar -xvzf fpga_support_files.tgz
    A directory named fpga_support_files is created.
  4. Go to the fpga_support_files directory:
    cd fpga_support_files
  5. Switch to superuser:
    sudo su
  6. Use the setup_env.sh script from fpga_support_files.tgz to set your environment variables:
    source setup_env.sh
  7. Run the install_openvino_fpga_dependencies.sh script, which enables OpenCL support on Ubuntu and recent kernels:
    ./install_openvino_fpga_dependencies.sh
    When asked, select the option for your FPGA card, Intel® GPU, or Intel® Movidius™ Neural Compute Stick; the appropriate dependencies are then installed.
  8. Install OpenCL devices. Enter Y when prompted to install:
    aocl install
  9. Reboot the machine:
    reboot
  10. Use the setup_env.sh script from fpga_support_files.tgz to set your environment variables:
    source /home/<user>/Downloads/fpga_support_files/setup_env.sh

    NOTE: If you reboot for any reason or open a new terminal window, run the above command again to source your environment variables. (An optional convenience for this is sketched after this list.)

  11. Run aocl diagnose:
    aocl diagnose
    Your screen displays DIAGNOSTIC_PASSED.
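If you prefer not to source setup_env.sh manually in each new terminal, one option (a convenience not covered by the original steps) is to append the command to your shell startup file:

echo "source /home/<user>/Downloads/fpga_support_files/setup_env.sh" >> ~/.bashrc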

If you have the Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2, plug it in now.

You have completed the accelerator installation and configuration. You are ready to run the Classification Sample, which you compiled by running the Image Classification verification script in the Run the Verification Scripts to Verify Installation and Compile Samples section.

Run a Sample Application

IMPORTANT: This section requires that you have completed Run the Verification Scripts to Verify Installation and Compile Samples.

Setting up a neural network is the first step in running a sample.

NOTE: If you are running inference only on a CPU, you already have the required FP32 neural network model. If you want to run inference on any hardware other than the CPU, you’ll need an FP16 version of the model, which you will set up in the following section.

Set Up a Neural Network Model

In this section, you will create an FP16 model suitable for hardware accelerators. For more information, see the information about FPGA plugins in the Inference Engine Developer Guide.

  1. Make a directory for the FP16 SqueezeNet Model:
    mkdir /home/<user>/squeezenet1.1_FP16
  2. Go to /home/<user>/squeezenet1.1_FP16:
    cd /home/<user>/squeezenet1.1_FP16
  3. Use the Model Optimizer to convert the SqueezeNet Caffe* model into an FP16 optimized Intermediate Representation (IR); a quick check of the output files follows this list:
    python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/<user>/openvino_models/models/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir .
  4. The squeezenet1.1.labels file contains the classes that ImageNet uses. This file is included so that the inference results show text instead of classification numbers. Copy squeezenet1.1.labels to your optimized model location:
    cp /home/<user>/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.labels .
  5. Copy a sample image to the release directory. You will use this with your optimized model:
    sudo cp /opt/intel/openvino/deployment_tools/demo/car.png ~/inference_engine_samples/intel64/Release
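Before moving on, you can confirm that the conversion in step 3 produced the IR pair described earlier; a minimal check, assuming the default output naming (the base name of the input model):

ls /home/<user>/squeezenet1.1_FP16/squeezenet1.1.xml /home/<user>/squeezenet1.1_FP16/squeezenet1.1.bin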

Once your neural network setup is complete, you're ready to run a sample application.

Run the Sample Application

  1. Go to the samples directory:
    cd /home/<user>/inference_engine_samples/intel64/Release
  2. Use the Inference Engine to run a sample application on the CPU or GPU:
    • On the CPU:
      ./classification_sample -i car.png -m ~/openvino_models/ir/FP32/classification/squeezenet/1.1/caffe/squeezenet1.1.xml
    • On the GPU:
      ./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d GPU
  3. To run inference using both your FPGA and CPU, add the -d option with HETERO: followed by your target devices:
    ./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU
  4. To run the sample application on your target accelerator:
    • Intel® Arria® 10 GX FPGA Development Kit:
      aocl program acl0 /opt/intel/openvino/bitstreams/a10_devkit_bitstreams/5-0_A10DK_FP11_SqueezeNet.aocx
    • Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA:
      aocl program acl0 /opt/intel/openvino/bitstreams/a10_dcp_bitstreams/5-0_RC_FP11_SqueezeNet.aocx
    • Intel® Vision Accelerator Design with an Intel® Arria 10 FPGA (Mustang-F100-A10):
      aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_bitstreams/5-0_PL1_FP11_SqueezeNet.aocx
    • Intel® Vision Accelerator Design with Intel® Movidius™ VPUs:
      ./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HDDL
    • Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2:
      ./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d MYRIAD

NOTE: The CPU throughput is measured in Frames Per Second (FPS). This tells you how quickly inference runs on the hardware.

Throughput on the accelerator may show a lower FPS due to initialization time. To account for that, use the -ni option to increase the number of iterations. This reduces the impact of initialization and gives a more accurate sense of how fast inference can run on the accelerator:

./classification_sample -i car.png -m ~/squeezenet1.1_FP16/squeezenet1.1.xml -d HETERO:FPGA,CPU -ni 100

Congratulations, you have finished the Intel® Distribution of OpenVINO™ 2019 R1 toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, the Hello World tutorial and other resources are provided below.

Hello World Face Detection Tutorial

Refer to the OpenVINO™ with FPGA Hello World Face Detection Exercise.

Additional Resources

To learn more about converting models, go to: