This version of the Deep Learning Workbench (DL Workbench) is a feature preview release. This documentation and the corresponding functionality within the application are subject to change and may contain errata or inaccuracies. Install, host, and use this application at your own risk.
DL Workbench is a web-based graphical environment that enables you to visualize, fine-tune, and compare performance of deep learning models on various Intel® architecture configurations, such as CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
The intuitive web-based interface of the DL Workbench enables you to easily use various OpenVINO™ toolkit components:
DL Workbench can be installed and run locally, or used in the Intel® DevCloud for the Edge:
- Running the DL Workbench on your local system enables you to profile your neural network on your own hardware configuration, as well as connect to targets in your local network and profile on them remotely. You have access to an extended feature list including accuracy measurements and Winograd algorithmic tuning. You also do not compete for resources with other Intel® DevCloud for the Edge users. As a result, your experiments are conducted faster.
To get started, follow the Installation Guide. DL Workbench uses authentication tokens to access the application. A token is generated automatically and displayed in the console output when you run the container for the first time.
- Running the DL Workbench in the Intel® DevCloud for the Edge enables you to profile your neural network on various Intel® hardware configurations hosted in the cloud environment without any hardware setup at your end and integrate the optimized model in the friendly environment of JupyterLab*. You can also choose this option if you just want to get familiar with the DL Workbench and explore its features.
To get started, follow the instructions in Run DL Workbench in the Intel® DevCloud for the Edge.
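For a local installation, starting the DL Workbench container typically looks like the sketch below. The image tag and published port are assumptions based on common OpenVINO container conventions; consult the Installation Guide for the exact command for your release. The authentication token appears in the console output on first start.

```shell
# Pull and start the DL Workbench container, exposing its web UI on
# localhost only. Image name, tag, and port are placeholders -- verify
# them against the Installation Guide for your release.
docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest

# Watch the console output on the first run: the authentication token
# needed to access the web interface is printed there.
```

After the container starts, open the printed URL (including the token) in a browser on the same machine.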
To start a new configuration, select Get Started on the home page:
Create a new configuration on the Configurations page. A configuration includes:
- Pretrained model
- Validation dataset to run inference on
- Target device
Once you import and configure a model and dataset, you can experiment with the model to identify its performance characteristics and the parameters that deliver maximum performance on Intel® hardware:
- Calibrate the model to INT8 precision
- Find the best combination of inference parameters: the number of parallel streams and the batch size
- Analyze inference results and compare them across different configurations
- Implement an optimal configuration into your application
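Implementing the optimal configuration in an application might look like the following sketch, which uses the OpenVINO Inference Engine Python API. The model paths, the stream count, and the batch size are placeholders: substitute the values that DL Workbench reports as optimal for your hardware.

```python
# Sketch: applying stream/batch settings found in DL Workbench to an
# application built on the OpenVINO Inference Engine Python API.
# "model.xml"/"model.bin" and the numeric values are placeholders.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
net.batch_size = 4  # batch size found optimal in DL Workbench (placeholder)

# Set the number of CPU throughput streams and create one inference
# request per stream so requests can execute in parallel.
exec_net = ie.load_network(
    network=net,
    device_name="CPU",
    config={"CPU_THROUGHPUT_STREAMS": "4"},  # placeholder stream count
    num_requests=4,
)
```

Each inference request in `exec_net.requests` can then be started asynchronously, letting the configured streams keep all cores busy.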
Core Use Cases
DL Workbench supports several advanced profiling scenarios:
Table of Contents