NOTES:
- The Intel® Distribution of OpenVINO™ is supported on macOS* 10.15.x versions.
- An internet connection is required to follow the steps in this guide. If you can access the Internet only through a proxy server, make sure the proxy is configured in your OS environment.
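For example, a typical way to make a proxy visible to shell sessions (the host and port below are placeholders, not values from this guide):

```sh
# Export proxy settings so command-line tools can reach the Internet
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
```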
The Intel® Distribution of OpenVINO™ toolkit quickly deploys applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNN), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance.
The Intel® Distribution of OpenVINO™ toolkit for macOS* includes the Inference Engine, OpenCV* libraries, and the Model Optimizer tool to deploy applications for accelerated inference on Intel® CPUs and the Intel® Neural Compute Stick 2.
Included with the Installation
The following components are installed by default:
Component | Description |
---|---|
Model Optimizer | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. Popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*. |
Inference Engine | This is the engine that runs a deep learning model. It includes a set of libraries for easy inference integration into your applications. |
OpenCV* | OpenCV* community version compiled for Intel® hardware |
Sample Applications | A set of simple console applications demonstrating how to use the Inference Engine in your applications. |
Demos | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases. |
Additional Tools | A set of tools to work with your models, including the Accuracy Checker utility, the Post-Training Optimization Tool, the Model Downloader, and others. |
Documentation for Pre-Trained Models | Documentation for the pre-trained models available in the Open Model Zoo repository |
Could Be Optionally Installed
Deep Learning Workbench (DL Workbench) is a platform built upon OpenVINO™ that provides a web-based graphical environment for optimizing, fine-tuning, analyzing, visualizing, and comparing the performance of deep learning models on various Intel® architecture configurations. In the DL Workbench, you can use most of the OpenVINO™ toolkit components.
Proceed to an easy installation from Docker to get started.
The development and target platforms have the same requirements, but you can select different components during the installation, based on your intended use.
Hardware
NOTE: The current version of the Intel® Distribution of OpenVINO™ toolkit for macOS* supports inference on Intel® CPUs and the Intel® Neural Compute Stick 2 only.
Software Requirements
Operating Systems
- macOS* 10.15.x
This guide provides step-by-step instructions on how to install the Intel® Distribution of OpenVINO™ 2020.1 toolkit for macOS*.
The following steps will be covered:
1. Install the Intel® Distribution of OpenVINO™ toolkit core components
2. Set the environment variables and (optionally) update your .bash_profile
3. Configure the Model Optimizer
4. (Optional) Set up inference on the Intel® Neural Compute Stick 2

If you have a previous version of the Intel® Distribution of OpenVINO™ toolkit installed, rename or delete these two directories:
- /home/<user>/inference_engine_samples
- /home/<user>/openvino_models
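For example, a minimal way to rename the old directories from the Terminal (paths assume the default locations; substitute your account name for <user>):

```sh
# Keep the old directories as backups rather than deleting them
mv /home/<user>/inference_engine_samples /home/<user>/inference_engine_samples.bak
mv /home/<user>/openvino_models /home/<user>/openvino_models.bak
```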
Download the latest version of the Intel® Distribution of OpenVINO™ toolkit for macOS*, then return to this guide to proceed with the installation.
Install the OpenVINO toolkit core components:
1. Go to your Downloads directory. By default, the disk image file is saved as m_openvino_toolkit_p_<version>.dmg.
2. Double-click the m_openvino_toolkit_p_<version>.dmg file to mount it. The disk image is mounted to /Volumes/m_openvino_toolkit_p_<version> and automatically opened in a separate window.
3. Run m_openvino_toolkit_p_<version>.app to start the installation wizard.
On the User Selection screen, choose a user account for the installation.
The default installation directory path depends on the privileges you choose for the installation.
The Installation summary screen shows you the default set of components to install.
By default, the Intel® Distribution of OpenVINO™ is installed to the following directory, referred to as <INSTALL_DIR>:
- For root or administrator users: /opt/intel/openvino_<version>/
- For regular users: /home/<user>/intel/openvino_<version>/

For simplicity, a symbolic link to the latest installation is also created: /home/<user>/intel/openvino_2021/.
NOTE: If an OpenVINO™ toolkit version was previously installed on your system, the installer uses the same destination directory for subsequent installations. If you want to install a newer version to a different directory, uninstall the previously installed versions first.
Click Next to save the installation options and show the Installation summary screen.
You need to update several environment variables before you can compile and run OpenVINO™ applications. Open the macOS Terminal* or your preferred command-line shell and run the following script to temporarily set your environment variables:
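With the default installation directory, the command is typically the following (adjust the path if you installed the toolkit elsewhere):

```sh
# Set the OpenVINO environment variables for the current shell session
source /opt/intel/openvino_2021/bin/setupvars.sh
```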
Optional: The OpenVINO environment variables are removed when you close the shell. You can permanently set the environment variables as follows:
1. Open the .bash_profile file in the current user home directory.
2. Add a line that sources the setupvars.sh script to the end of the file.
3. Save and close the file: press the Esc key, type :wq, and press the Enter key.
4. To verify the change, open a new terminal window. You will see [setupvars.sh] OpenVINO environment initialized.

The environment variables are set. Continue to the next section to configure the Model Optimizer.
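For example, assuming the default installation path, you can append the line from the Terminal in one step:

```sh
# Make the OpenVINO environment persistent across shell sessions
echo 'source /opt/intel/openvino_2021/bin/setupvars.sh' >> ~/.bash_profile
```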
The Model Optimizer is a Python*-based command-line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX*, and Kaldi*.
The Model Optimizer is a key component of the OpenVINO toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The IR is a pair of files that describe the whole model:
- .xml: Describes the network topology
- .bin: Contains the weights and biases binary data

The Inference Engine reads, loads, and infers the IR files, using a common API on the CPU hardware.
For more information about the Model Optimizer, see the Model Optimizer Developer Guide.
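Once the Model Optimizer is configured (see below), a conversion looks roughly like the following sketch; the model file and output directory are hypothetical placeholders, and the default installation path is assumed:

```sh
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
# Convert a frozen TensorFlow model into an IR pair (.xml + .bin)
python3 mo.py --input_model ~/models/my_model.pb --output_dir ~/models/ir
```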
You can choose to either configure the Model Optimizer for all supported frameworks at once, OR for one framework at a time. Choose the option that best suits your needs. If you see error messages, verify that you installed all dependencies listed under Software Requirements at the top of this guide.
NOTE: If you installed OpenVINO to a non-default installation directory, replace /opt/intel/ with the directory where you installed the software.
Option 1: Configure the Model Optimizer for all supported frameworks at the same time:
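Assuming the default installation path, Option 1 typically uses the combined prerequisites script (adjust the path for your installation):

```sh
# Install dependencies for all supported frameworks at once
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
```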
Option 2: Configure the Model Optimizer for each framework separately:
Configure individual frameworks separately ONLY if you did not select Option 1 above.
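For Option 2, per-framework scripts live in the same install_prerequisites directory (default path assumed); for example, for TensorFlow:

```sh
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_tf.sh
# Similar per-framework scripts include, e.g.:
# ./install_prerequisites_caffe.sh, ./install_prerequisites_mxnet.sh,
# ./install_prerequisites_onnx.sh, ./install_prerequisites_kaldi.sh
```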
The Model Optimizer is configured for one or more frameworks.
You have completed all required installation, configuration and build steps in this guide to use your CPU to work with your trained models.
To enable inference on Intel® Neural Compute Stick 2, see the Steps for Intel® Neural Compute Stick 2.
Or proceed to the Get Started page to run code samples and demo applications.
These steps are only required if you want to perform inference on Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X VPU. See also the Get Started page for Intel® Neural Compute Stick 2.
To perform inference on the Intel® Neural Compute Stick 2, the libusb library is required. You can build it from the source code or install it using your preferred macOS package manager: Homebrew*, MacPorts*, or another.
For example, to install the libusb library using Homebrew*, use the following command:
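```sh
# Install libusb with Homebrew (assumes Homebrew is already set up)
brew install libusb
```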
You've completed all required configuration steps to perform inference on your Intel® Neural Compute Stick 2. Proceed to the Get Started page to run code samples and demo applications.
Now you are ready to get started. To continue, see the following pages:
Follow the steps below to uninstall the Intel® Distribution of OpenVINO™ Toolkit from your system:
1. In <INSTALL_DIR>, locate and open openvino_toolkit_uninstaller.app.
2. Follow the uninstallation wizard instructions.

For more information about the demo applications, see the README.txt file in /opt/intel/openvino_2021/deployment_tools/demo/.