DL Workbench combines OpenVINO™ tools to assist you with the most commonly used tasks: importing a model, analyzing its performance and accuracy, visualizing the outputs, and optimizing and preparing the model for deployment, all in a matter of minutes. DL Workbench takes you through the full OpenVINO™ workflow, providing the opportunity to learn about various toolkit components.
DL Workbench enables you to get detailed performance assessment, explore inference configurations, and obtain an optimized model ready to be deployed on various Intel® configurations, such as client and server CPU, Intel® Processor Graphics (GPU), Intel® Movidius™ Neural Compute Stick 2 (NCS 2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs.
DL Workbench also provides a JupyterLab environment that helps you get started quickly with the OpenVINO™ API and command-line interface (CLI). Follow the full OpenVINO™ workflow created for your model and learn about different toolkit components.
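One quick way to reach the web interface and the bundled JupyterLab environment is to run DL Workbench from its Docker image. A minimal sketch, assuming Docker is installed and the `openvino/workbench` image tag is still current:

```shell
# Pull and start DL Workbench; once the container is up,
# the web UI (with JupyterLab) is served at http://127.0.0.1:5665
docker run -p 127.0.0.1:5665:5665 --name workbench -it openvino/workbench:latest
```

Binding the port to 127.0.0.1 keeps the interface reachable only from the local machine.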
Learn more about the DL Workbench in this introduction video:
DL Workbench helps you achieve your goals at any stage of your deep learning journey.
If you are a beginner in the deep learning field, the DL Workbench provides you with learning opportunities:
If you have enough experience with neural networks, DL Workbench provides you with a convenient web interface to optimize your model and prepare it for production:
The diagram below illustrates the typical DL Workbench workflow. Click to see the full-size image:
Get a quick overview of the workflow in the DL Workbench User Interface:
The intuitive web-based interface of the DL Workbench enables you to easily use various OpenVINO™ toolkit components:
| Component | Description |
|---|---|
| Open Model Zoo | Get access to a collection of high-quality pre-trained deep learning models, both public and Intel-trained, that resolve a variety of tasks. |
| Model Optimizer | Optimize and transform models trained in supported frameworks to the IR format. Supported frameworks include TensorFlow*, Caffe*, Kaldi*, MXNet*, and the ONNX* format. |
| Benchmark Tool | Estimate deep learning model inference performance on supported devices. |
| Accuracy Checker | Evaluate the accuracy of a model by collecting one or several metric values. |
| Post-Training Optimization Tool | Optimize pretrained models by lowering the precision of a model from floating-point precision (FP32 or FP16) to integer precision (INT8), without the need to retrain or fine-tune the model. |
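Outside the web interface, the same components are also available as command-line tools. A hedged sketch of typical invocations, where the model, directory, and configuration file names are placeholders:

```shell
# Model Optimizer: convert a trained model to the OpenVINO IR format
mo --input_model model.pb --output_dir ir/

# Benchmark Tool: estimate inference performance of the IR on CPU
benchmark_app -m ir/model.xml -d CPU -niter 100

# Accuracy Checker: evaluate the model using a dataset/metric configuration
accuracy_check -c accuracy_config.yml

# Post-Training Optimization Tool: quantize the model to INT8
pot -c pot_config.json
```

DL Workbench drives these same tools under the hood, so the web interface and the CLI can be used interchangeably on the same model.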