NOTE: The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.
Introduction
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit.
System Requirements
Target Operating Systems
- Ubuntu* 16.04 long-term support (LTS), 64-bit
- CentOS* 7.4, 64-bit
Host Operating Systems
- Linux with a GPU driver installed and a Linux kernel supported by that driver
Building Docker* Image for CPU
- The kernel reports the same information to all containers as to a native application, for example, CPU and memory information.
- All instruction sets available to the host process (for example, AVX2 and AVX512) are also available to processes in a container. There are no restrictions.
- Docker* does not use virtualization or emulation. A process in Docker* is a regular Linux process that is isolated from the outside world at the kernel level, so the performance penalty is small.
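As a quick sanity check of the points above, you can inspect the CPU information the kernel reports. The commands below are a sketch for a Linux host; /proc/cpuinfo presents the same view to a native process and to a containerized one:

```shell
# The kernel exposes identical CPU information to a native process and to a
# containerized process: /proc/cpuinfo is the same view in both cases.
grep -c ^processor /proc/cpuinfo          # number of logical CPUs
# Check whether the AVX2 instruction set is exposed (absent on older hardware)
grep -q -w avx2 /proc/cpuinfo && echo "AVX2 available" || echo "no AVX2"
```

Running the same commands through `docker run --rm ubuntu:16.04 sh -c "..."` should print identical output, because the container shares the host kernel.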
Example of a Dockerfile:
FROM ubuntu:16.04
ENV http_proxy $HTTP_PROXY
ENV https_proxy $HTTP_PROXY
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/13231/l_openvino_toolkit_p_2018.0.000.tgz
ARG INSTALL_DIR=/opt/intel/computer_vision_sdk
ARG TEMP_DIR=/tmp/openvino_installer
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
cpio \
sudo \
lsb-release && \
rm -rf /var/lib/apt/lists/*
RUN mkdir -p $TEMP_DIR && cd $TEMP_DIR && \
wget -c $DOWNLOAD_LINK && \
tar xf l_openvino_toolkit*.tgz && \
cd l_openvino_toolkit* && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
rm -rf $TEMP_DIR
RUN $INSTALL_DIR/install_dependencies/install_cv_sdk_dependencies.sh
# build Inference Engine samples
RUN mkdir $INSTALL_DIR/deployment_tools/inference_engine/samples/build && cd $INSTALL_DIR/deployment_tools/inference_engine/samples/build && \
/bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh && cmake .. && make -j1"
NOTE: Replace the direct link in the DOWNLOAD_LINK variable with a link to the latest version of the Intel® Distribution of OpenVINO™ toolkit package. You can copy the link from the Intel® Distribution of OpenVINO™ toolkit download page https://software.seek.intel.com/openvino-toolkit after registration: right-click the Offline Installer button for Linux on the download page in your browser and select Copy link address.
- Build a Docker* image with the following command:
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
- Run a Docker* container with the following command:
docker run -it <image_name>
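Because the Dockerfile above builds the Inference Engine samples, one way to verify the image without an interactive session is to list the resulting sample binaries. This is a sketch; the path matches the INSTALL_DIR used in the Dockerfile, and the intel64/Release subdirectory is the usual CMake output layout for the samples:

```shell
# List the sample binaries built during the image build (path is an assumption
# based on the INSTALL_DIR and the default samples build layout)
docker run --rm <image_name> \
    ls /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build/intel64/Release
```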
Building Docker* Image for GPU
Prerequisites:
- The GPU is not available in a container by default; you must attach it to the container.
- The kernel driver must be installed on the host.
- The Intel® OpenCL™ runtime package must be included in the container.
- In the container, the user must be in the video group.
Before building a Docker* image for GPU, add the following commands to the Dockerfile example for CPU above:
COPY intel-opencl*.deb /opt/gfx/
RUN cd /opt/gfx && \
dpkg -i intel-opencl*.deb && \
ldconfig && \
rm -rf /opt/gfx
RUN useradd -G video -ms /bin/bash user
USER user
To build a Docker* image for GPU:
- Copy the Intel® OpenCL™ driver for Ubuntu* (intel-opencl*.deb) from <OPENVINO_INSTALL_DIR>/install_dependencies to the folder with the Dockerfile.
- Run the following command to build a Docker* image:
docker build . -t <image_name> \
--build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
--build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
- Run a Docker* container. To make GPU available in the container, you can use one of the two options:
- Option 1 (recommended). Attach the GPU to the container using the --device /dev/dri option and run the container:
docker run -it --device /dev/dri <image_name>
- Option 2. Run the container in privileged mode with the --privileged option. This is not recommended due to security implications:
docker run -it --privileged <image_name>
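Whichever option you choose, you can confirm from inside the container that the GPU was actually attached by checking for the DRI device nodes. A minimal sketch, assuming the container was started with Option 1:

```shell
# The /dev/dri nodes (card0, renderD128, ...) should be visible in the container
docker run --rm --device /dev/dri <image_name> ls -l /dev/dri
```

If the directory is empty or missing, the device was not passed through and OpenCL™ will fall back to (or fail on) CPU-only execution.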
Building Docker* Image for Intel® Movidius™ Neural Compute Stick
Known limitations:
- The Intel® Movidius™ Neural Compute Stick changes its VendorID and DeviceID during execution, so each time it appears to the host system as a brand-new device. This means it cannot be mounted in the usual way.
- UDEV events are not forwarded to the container by default, so the container does not know about device reconnection.
- Only one device per host is supported.
Possible solutions for Intel Movidius Neural Compute Stick:
- Solution #1: Run the container in privileged mode, use the host network configuration, and mount all devices into the container:
docker run --privileged -v /dev:/dev --network=host <image_name>
Notes:
- It is not secure.
- It conflicts with Kubernetes* and other tools that use orchestration and private networks.
- Solution #2: Remove the UDEV dependency by rebuilding libusb without UDEV support in the Docker* image:
RUN cd /tmp/ && \
wget https://github.com/libusb/libusb/archive/v1.0.22.zip && \
unzip v1.0.22.zip && cd libusb-1.0.22 && \
./bootstrap.sh && \
./configure --disable-udev --enable-shared && \
make -j4 && make install && \
rm -rf /tmp/*
- Run the Docker* image in privileged mode:
docker run --privileged -v /dev:/dev <image_name>
Notes:
- It is not secure.
- It does not conflict with Kubernetes*.
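To confirm that the rebuilt libusb is the one the container actually resolves, you can inspect the dynamic linker cache inside the image. This is a sketch; the path assumes the default `make install` prefix of /usr/local used by the build commands above:

```shell
# The rebuilt library should appear under /usr/local/lib in the linker cache
docker run --rm <image_name> sh -c "ldconfig -p | grep libusb"
```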
Building Docker* Image for FPGA
The FPGA card is not available in a container by default, but it can be mounted there if the following prerequisites are met:
- FPGA device is up and ready to run inference.
- FPGA bitstreams were pushed to the device over PCIe.
To build a Docker* image for FPGA:
- Set additional environment variables in the Dockerfile:
ENV CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
ENV DLA_AOCX=/opt/intel/computer_vision_sdk/a10_devkit_bitstreams/2-0-1_RC_FP11_Generic.aocx
ENV PATH=/opt/altera/aocl-pro-rte/aclrte-linux64/bin:$PATH
- Install the following UDEV rule:
cat <<EOF > fpga.rules
KERNEL=="acla10_ref*",GROUP="users",MODE="0660"
EOF
sudo cp fpga.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
Make sure that the container user is added to the "users" group with the same GID as on the host.
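One way to satisfy the GID requirement is in the Dockerfile itself: align the GID of the "users" group with the host value and create the container user in that group. This is a sketch; the default GID of 100 is only a common convention, so replace it with the actual value reported on the host by `getent group users`:

```shell
# Dockerfile fragment (sketch): match the host's "users" GID, then create
# a non-root user that belongs to the group
ARG USERS_GID=100
RUN groupmod -g ${USERS_GID} users && \
    useradd -G users -ms /bin/bash user
USER user
```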
- Run the Docker* container for FPGA with the following options:
docker run --rm -it \
--mount type=bind,source=/opt/intel/intelFPGA_pro,destination=/opt/intel/intelFPGA_pro \
--mount type=bind,source=/opt/altera,destination=/opt/altera \
--mount type=bind,source=/etc/OpenCL/vendors,destination=/etc/OpenCL/vendors \
--mount type=bind,source=/opt/Intel/OpenCL/Boards,destination=/opt/Intel/OpenCL/Boards \
--device /dev/acla10_ref0:/dev/acla10_ref0 \
<image_name>
Additional Resources
OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit
OpenVINO™ toolkit documentation: https://software.intel.com/en-us/openvino-toolkit/documentation/featured
Intel® Neural Compute Stick 2 Get Started: https://software.intel.com/en-us/neural-compute-stick/get-started