NOTE: For previous versions, see Configuration Guide for OpenVINO 2020.3, Configuration Guide for OpenVINO 2020.2, Configuration Guide for OpenVINO 2019R1/2019R2/2019R3, Configuration Guide for OpenVINO 2018R5.
The following describes the setup of the Intel® Distribution of OpenVINO™ toolkit on CentOS* 7.4 or Ubuntu* 16.04 with kernel 4.15, based on a completely fresh install of the OS with developer tools included. Official Intel® documentation for the install process can be found in the following locations; reading it is highly recommended, especially for new users. This document serves as a guide and, where necessary, adds additional detail.
Intel® Acceleration Stack for FPGAs Quick Start Guide
OpenCL™ on Intel® PAC Quick Start Guide
Installing the Intel® Distribution of OpenVINO™ toolkit for Linux*
(Optional): Install NTFS support for transferring large installers if already downloaded on another machine.
sudo yum -y install epel-release
sudo yum -y install ntfs-3g
Install Intel® PAC and the Intel® Programmable Acceleration Card Stack
- Download version 1.2.1 of the Acceleration Stack for Runtime from the Intel FPGA Acceleration Hub. This downloads as a10_gx_pac_ias_1_2_1_pv_rte.tar.gz; note where it is saved, as it is extracted in a later step.
- Create a new directory to install to:
mkdir -p ~/tools/intelrtestack
- Untar the archive and launch the installer (the setup script is inside the extracted directory):
tar xf a10_gx_pac_ias_1_2_1_pv_rte.tar.gz
./a10_gx_pac_ias_1_2_1_pv_rte/setup.sh
- Select Y to install OPAE and accept the license. When asked, specify
/home/&lt;user&gt;/tools/intelrtestack as the absolute install path. During the installation there should be a message stating that the directory already exists, as it was created by the first command above; select Y to install to this directory. If this message does not appear, there was likely a typo when entering the install location.
- Tools are installed to the following directories:
- OpenCL™ Run-time Environment: ~/tools/intelrtestack/opencl_rte/aclrte-linux64
- Intel® Acceleration Stack for FPGAs: ~/tools/intelrtestack
- Check the version of the FPGA Interface Manager (FIM) firmware on the PAC board:
sudo fpgainfo fme
- If the reported FIM version (Pr Interface Id) is not
38d782e3-b612-5343-b934-2433e348ac4c, follow the instructions in Appendix A: Updating the FIM and BMC Firmware of the Intel® Acceleration Stack for FPGAs Quick Start Guide to update the FIM and BMC.
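The comparison above can be scripted. A minimal sketch, assuming the usual `Pr Interface Id : <uuid>` output format; the sample line below is illustrative, not captured from hardware (on a real board, capture it with `sudo fpgainfo fme`):

```shell
#!/bin/sh
# Sketch: compare the reported Pr Interface Id against the expected FIM version.
# sample_line is a stand-in for: sudo fpgainfo fme | grep 'Pr Interface Id'
expected="38d782e3-b612-5343-b934-2433e348ac4c"
sample_line="Pr Interface Id                  : 38d782e3-b612-5343-b934-2433e348ac4c"
# Split on the colon separator and keep the UUID field.
reported=$(printf '%s\n' "$sample_line" | awk -F': *' '{print $2}')
if [ "$reported" = "$expected" ]; then
    echo "FIM is current"
else
    echo "FIM update required: see Appendix A of the Quick Start Guide"
fi
```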
- Run the built-in self-test to verify operation of the Acceleration Stack and the Intel® PAC in a non-virtualized environment.
sudo sh -c "echo 20 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
sudo fpgabist $OPAE_PLATFORM_ROOT/hw/samples/nlb_mode_3/bin/nlb_mode_3.gbs
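The first command reserves huge pages for the test's pinned-memory buffers: each entry under hugepages-2048kB is a 2 MB page, so nr_hugepages=20 reserves 40 MB. The arithmetic:

```shell
# nr_hugepages=20 in the hugepages-2048kB pool reserves twenty 2 MB pages.
pages=20
page_kb=2048
total_kb=$((pages * page_kb))
echo "${total_kb} kB = $((total_kb / 1024)) MB reserved"
```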
Verify the Intel® Acceleration Stack for FPGAs OpenCL™ BSP
- Remove any FCD files left over from previous hardware installations in the FCD directory:
sudo rm -rf *.fcd
- If you are using CentOS, install the lsb_release utility:
sudo yum install redhat-lsb-core
- Create an initialization script
~/init_openvino.sh that can be sourced upon opening a new terminal or after rebooting. It should source the Acceleration Stack environment script installed above as well as set up the OpenCL™ environment.
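The script body is not reproduced in this guide. A minimal sketch, assuming the default ~/tools/intelrtestack install location from the steps above; verify the exact file and variable names against your installation:

```shell
# ~/init_openvino.sh -- sketch only; paths assume the default
# ~/tools/intelrtestack install location used earlier in this guide.
source "$HOME/tools/intelrtestack/init_env.sh"                      # Acceleration Stack environment
export INTELFPGAOCLSDKROOT="$HOME/tools/intelrtestack/opencl_rte/aclrte-linux64"
export AOCL_BOARD_PACKAGE_ROOT="$OPAE_PLATFORM_ROOT/opencl/opencl_bsp"
source "$INTELFPGAOCLSDKROOT/init_opencl.sh"                        # OpenCL(TM) environment
```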
- Source the script:
source ~/init_openvino.sh
- Some of the settings made in the child scripts need a reboot to take effect. Reboot the machine and source the script again. Note that this script should be sourced each time a new terminal is opened for use with the Intel® Acceleration Stack for FPGAs and Intel® Distribution of OpenVINO™ toolkit.
- Install the OpenCL™ driver:
sudo -E ./tools/intelrtestack/opencl_rte/aclrte-linux64/bin/aocl install
Select Y when asked to install the BSP. Note that the following warning can be safely ignored:
WARNING: install not implemented. Please refer to DCP Quick Start User Guide.
- Program the Intel® PAC board with a pre-compiled
.aocx file (an OpenCL™-based FPGA bitstream):
aocl program acl0 hello_world.aocx
- Build and run the Hello World application:
sudo tar xf exm_opencl_hello_world_x64_linux.tgz
sudo chmod -R a+w hello_world
cd hello_world
make
cp ../hello_world.aocx ./bin
cd bin
./host
Add Intel® Distribution of OpenVINO™ toolkit with FPGA Support to Environment Variables
- To run the Intel® Distribution of OpenVINO™ toolkit, append four more commands to the
~/init_openvino.sh script, keeping its previous content in place. Among them is an alias for the Model Optimizer:
alias mo="python3.6 $IE_INSTALL/model_optimizer/mo.py"
For Ubuntu systems, use python3.5 instead of python3.6 in the alias.
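For reference, a sketch of the four additions, assuming the default /opt/intel/openvino install location. IE_INSTALL is a convenience variable used by this guide rather than one set by the toolkit, and the samples build path is an assumption based on a default build:

```shell
# Appended to ~/init_openvino.sh -- sketch; /opt/intel/openvino and the
# samples build path are assumptions based on a default toolkit install.
source /opt/intel/openvino/bin/setupvars.sh
export IE_INSTALL="/opt/intel/openvino/deployment_tools"
export PATH="$PATH:$HOME/inference_engine_samples_build/intel64/Release"
alias mo="python3.6 $IE_INSTALL/model_optimizer/mo.py"
```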
- Source the script:
source ~/init_openvino.sh
Program a Bitstream
The bitstream you program should correspond to the topology you want to deploy. In this section, you program a SqueezeNet bitstream and deploy the classification sample with a SqueezeNet model.
IMPORTANT: Only use bitstreams from the installed version of the Intel® Distribution of OpenVINO™ toolkit. Bitstreams from older versions of the Intel® Distribution of OpenVINO™ toolkit are incompatible with later versions. For example, you cannot use the
2020-3_RC_FP16_AlexNet_GoogleNet_Generic bitstream when the installed Intel® Distribution of OpenVINO™ toolkit supports the 2020-4 bitstreams.
The Intel® Distribution of OpenVINO™ toolkit package includes a separate bitstream folder for each FPGA card type. For the Intel® Programmable Acceleration Card with Intel® Arria® 10 FPGA GX, the pre-trained bitstreams are in the
/opt/intel/openvino/bitstreams/a10_dcp_bitstreams directory. This example uses a low-precision SqueezeNet bitstream with the classification sample.
Program the bitstream for the Intel® Programmable Acceleration Card with Intel® Arria® 10 FPGA GX:
aocl program acl0 /opt/intel/openvino/bitstreams/a10_dcp_bitstreams/2020-4_RC_FP11_InceptionV1_ResNet_SqueezeNet_TinyYolo_YoloV3.aocx
Use the Intel® Distribution of OpenVINO™ toolkit
- Run inference with the Intel® Distribution of OpenVINO™ toolkit, independently of the demo scripts, using the SqueezeNet model that was downloaded by the scripts. For convenience, copy the necessary files to a local directory. If the workstation has been rebooted or a new terminal has been opened, source the init script first.
cp ~/openvino_models/models/public/squeezenet1.1/squeezenet1.1.* .
cp ~/openvino_models/ir/public/squeezenet1.1/FP16/squeezenet1.1.labels .
- Note that the
squeezenet1.1.labels file contains the ImageNet class names and is included so that the inference results show text rather than class numbers. Convert the model with the Model Optimizer. The command below uses the mo alias defined in the init script above, which does not appear in other documentation.
mo --input_model squeezenet1.1.caffemodel
- Now run inference on the CPU using one of the built-in Inference Engine samples:
classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png
- Add the
-d option to run on the FPGA:
classification_sample_async -m squeezenet1.1.xml -i $IE_INSTALL/demo/car.png -d HETERO:FPGA,CPU
Congratulations! You have completed the Intel® Distribution of OpenVINO™ toolkit installation for FPGA. To learn more about how the Intel® Distribution of OpenVINO™ toolkit works, see the Hello World tutorial and the other resources provided below.
Hello World Face Detection Tutorial
Use the Intel® Distribution of OpenVINO™ toolkit with FPGA Hello World Face Detection Exercise to learn more about how the software and hardware work together.
Intel® Distribution of OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
Intel® Distribution of OpenVINO™ toolkit documentation: https://docs.openvinotoolkit.org
Inference Engine FPGA plugin documentation: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_FPGA.html