Create Docker* Images with Intel® Distribution of OpenVINO™ toolkit for Linux* OS

NOTE: The Intel® Distribution of OpenVINO™ toolkit was formerly known as the Intel® Computer Vision SDK.

Introduction

The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance. The Intel® Distribution of OpenVINO™ toolkit includes the Intel® Deep Learning Deployment Toolkit.

System Requirements

Target Operating Systems

Host Operating Systems

Building Docker* Image for CPU

Example of a Dockerfile:

FROM ubuntu:16.04
# Propagate proxy settings passed in as build arguments
ENV http_proxy $HTTP_PROXY
ENV https_proxy $HTTPS_PROXY
ARG DOWNLOAD_LINK=http://registrationcenter-download.intel.com/akdlm/irc_nas/13231/l_openvino_toolkit_p_2018.0.000.tgz
ARG INSTALL_DIR=/opt/intel/computer_vision_sdk
ARG TEMP_DIR=/tmp/openvino_installer
# Install the packages the OpenVINO installer needs
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget \
    cpio \
    sudo \
    lsb-release && \
    rm -rf /var/lib/apt/lists/*
# Download the package, accept the EULA in the silent config, and install
RUN mkdir -p $TEMP_DIR && cd $TEMP_DIR && \
    wget -c $DOWNLOAD_LINK && \
    tar xf l_openvino_toolkit*.tgz && \
    cd l_openvino_toolkit* && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh -s silent.cfg && \
    rm -rf $TEMP_DIR
# Install external dependencies of the toolkit
RUN $INSTALL_DIR/install_dependencies/install_cv_sdk_dependencies.sh
# Build the Inference Engine samples
RUN mkdir $INSTALL_DIR/deployment_tools/inference_engine/samples/build && \
    cd $INSTALL_DIR/deployment_tools/inference_engine/samples/build && \
    /bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh && cmake .. && make -j1"

NOTE: Replace the direct link in the DOWNLOAD_LINK variable with a link to the latest version of the Intel® Distribution of OpenVINO™ toolkit package. You can copy the link from the Intel® Distribution of OpenVINO™ toolkit download page https://software.seek.intel.com/openvino-toolkit after registration. On the Linux download page, right-click the Offline Installer button in your browser and select Copy link address.
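Because DOWNLOAD_LINK is declared with ARG, you can also override it at build time instead of editing the Dockerfile. A minimal sketch, where <copied_link_address> is a placeholder for the link you copied:

    docker build . -t <image_name> \
    --build-arg DOWNLOAD_LINK=<copied_link_address>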

  1. Build a Docker* image with the following command:
    docker build . -t <image_name> \
    --build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
    --build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
  2. Run a Docker* container with the following command:
    docker run -it <image_name>
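To verify the toolkit inside the container, you can source the environment script and list the built samples. This is a quick sanity check rather than a step from the installation guide, and it assumes the samples were built into the intel64/Release subdirectory:

    docker run -it <image_name> /bin/bash -c \
    "source /opt/intel/computer_vision_sdk/bin/setupvars.sh && \
    ls /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/samples/build/intel64/Release"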

Building Docker* Image for GPU

Prerequisites:

Before building a Docker* image for GPU, append the following commands to the end of the Dockerfile example for CPU above:

COPY intel-opencl*.deb /opt/gfx/
RUN cd /opt/gfx && \
dpkg -i intel-opencl*.deb && \
ldconfig && \
rm -rf /opt/gfx
RUN useradd -G video -ms /bin/bash user
USER user

To build a Docker* image for GPU:

  1. Copy the Intel® OpenCL™ driver for Ubuntu* (intel-opencl*.deb) from <OPENVINO_INSTALL_DIR>/install_dependencies to the folder with the Dockerfile.
  2. Run the following command to build a Docker* image:
    docker build . -t <image_name> \
    --build-arg HTTP_PROXY=<http://your_proxy_server.com:port> \
    --build-arg HTTPS_PROXY=<https://your_proxy_server.com:port>
  3. Run a Docker* container. To make the GPU available in the container, use one of the following two options (a quick device visibility check is shown after this list):
    • Option 1 (recommended). Attach the GPU to the container using the --device /dev/dri option and run the container:
      docker run -it --device /dev/dri <image_name>
    • Option 2. Run the container in privileged mode with the --privileged option. This is not recommended due to security implications:
      docker run -it --privileged <image_name>
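Whichever option you choose, you can confirm that the GPU device nodes are visible from inside the container before running inference. This check is an addition to the steps above:

    docker run --device /dev/dri -it <image_name> ls -l /dev/dri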

Building Docker* Image for Intel® Movidius™ Neural Compute Stick

Known limitations:

Possible solutions for the Intel® Movidius™ Neural Compute Stick:

Notes:

  • It is not secure
  • Conflicts with Kubernetes* and other tools that use orchestration and private networks

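The Intel® Movidius™ Neural Compute Stick re-enumerates as a new USB device (its VendorID and DeviceID change) while a network is loading, so it cannot be attached to a container with a fixed --device mapping. A minimal sketch of the commonly used workaround, which is subject to the security notes above, is to run the container in privileged mode with the host /dev tree mounted:

    docker run --privileged -v /dev:/dev -it <image_name>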

Building Docker* Image for FPGA

An FPGA card is not available in a container by default, but it can be mounted there when the following prerequisites are met:

To build a Docker* image for FPGA:

  1. Set additional environment variables in the Dockerfile:
    ENV CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
    ENV DLA_AOCX=/opt/intel/computer_vision_sdk/a10_devkit_bitstreams/2-0-1_RC_FP11_Generic.aocx
    ENV PATH=/opt/altera/aocl-pro-rte/aclrte-linux64/bin:$PATH
  2. Install the following udev rule:
    cat <<EOF > fpga.rules
    KERNEL=="acla10_ref*",GROUP="users",MODE="0660"
    EOF
    sudo cp fpga.rules /etc/udev/rules.d/
    sudo udevadm control --reload-rules
    sudo udevadm trigger
    sudo ldconfig
    Make sure that the container user is added to the "users" group with the same GID as on the host (see the sketch after this list).
  3. Run the Docker* container for FPGA with the following options:
    docker run --rm -it \
    --mount type=bind,source=/opt/intel/intelFPGA_pro,destination=/opt/intel/intelFPGA_pro \
    --mount type=bind,source=/opt/altera,destination=/opt/altera \
    --mount type=bind,source=/etc/OpenCL/vendors,destination=/etc/OpenCL/vendors \
    --mount type=bind,source=/opt/Intel/OpenCL/Boards,destination=/opt/Intel/OpenCL/Boards \
    --device /dev/acla10_ref0:/dev/acla10_ref0 \
    <image_name>
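One way to give the container user the required "users" group membership (see step 2) without rebuilding the image is Docker's --group-add option. A sketch, assuming the host group is named users; the bind mounts from step 3 are omitted here for brevity:

    fpga_gid=$(getent group users | cut -d: -f3)
    docker run --rm -it --group-add "$fpga_gid" \
    --device /dev/acla10_ref0:/dev/acla10_ref0 \
    <image_name>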

Additional Resources

OpenVINO™ toolkit home page: https://software.intel.com/en-us/openvino-toolkit

OpenVINO™ toolkit documentation: https://software.intel.com/en-us/openvino-toolkit/documentation/featured

Intel® Neural Compute Stick 2 Get Started: https://software.intel.com/en-us/neural-compute-stick/get-started