OpenVINO 2024.0

OpenVINO is an open-source toolkit for optimizing and deploying deep learning models from cloud to edge. It accelerates deep learning inference across various use cases, such as generative AI, video, audio, and language with models from popular frameworks like PyTorch, TensorFlow, ONNX, and more. Convert and optimize models, and deploy across a mix of Intel® hardware and environments, on-premises and on-device, in the browser or in the cloud.

Check out the OpenVINO Cheat Sheet.

  • An open-source toolkit for optimizing and deploying deep learning models.

    Boost your AI deep-learning inference performance!

    Learn more
  • Better OpenVINO integration with PyTorch!

    Use PyTorch models directly, without converting them first (see the first sketch after this list).

    Learn more
  • OpenVINO via PyTorch 2.0 torch.compile()

    Use OpenVINO directly in PyTorch-native applications! (See the torch.compile() sketch after this list.)

    Learn more
  • Do you like Generative AI?

    You will love how it performs with OpenVINO!

    Check out our new notebooks
  • Boost your AI deep-learning inference performance.

    Use Intel's open-source OpenVINO toolkit for optimizing and deploying deep learning models.

    Learn more
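
For example, a live PyTorch module can be handed to OpenVINO with no separate export step. This is a minimal sketch: the toy model, input shape, and device name are placeholders to be replaced with your own.

    import torch
    import openvino as ov

    # A stand-in PyTorch model; any torch.nn.Module works the same way.
    torch_model = torch.nn.Sequential(
        torch.nn.Linear(8, 4),
        torch.nn.ReLU(),
    ).eval()

    # Hand the live PyTorch object straight to OpenVINO; no separate export step is needed.
    ov_model = ov.convert_model(torch_model, example_input=torch.randn(1, 8))

    # Compile for a device and run one inference on dummy data.
    compiled = ov.compile_model(ov_model, "CPU")
    result = compiled(torch.randn(1, 8).numpy())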
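
With PyTorch 2.0 and newer, OpenVINO can also be selected as a torch.compile backend, so existing PyTorch code keeps its native workflow. A minimal sketch, assuming the openvino package is installed alongside PyTorch; the toy model is a placeholder.

    import torch
    import openvino.torch  # importing this module registers the "openvino" torch.compile backend

    # A stand-in model; real PyTorch applications keep their existing code unchanged.
    model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval()

    # Route compilation and execution through OpenVINO.
    optimized_model = torch.compile(model, backend="openvino")
    output = optimized_model(torch.randn(1, 8))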


[OpenVINO diagram]

Places to Begin

Installation

This guide introduces installation and learning materials for the Intel® Distribution of OpenVINO™ toolkit.

Get Started

Performance Benchmarks

See the latest benchmark numbers for OpenVINO and OpenVINO Model Server.

View data

Framework Compatibility

Load TensorFlow, ONNX, and PaddlePaddle models directly, or convert them to the OpenVINO format.

Load your model
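
As a quick sketch of both paths, with placeholder file names standing in for your own model:

    import openvino as ov

    core = ov.Core()

    # Option 1: read a TensorFlow, ONNX, or PaddlePaddle model file directly.
    model = core.read_model("model.onnx")  # placeholder path

    # Option 2: convert once and save in the OpenVINO IR format (.xml + .bin) for faster loading later.
    ov.save_model(ov.convert_model("model.onnx"), "model.xml")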

Easy Deployment

Get started in just a few lines of code.

Run Inference
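
A minimal sketch of that flow, assuming a placeholder model path and an input shape that matches your model:

    import numpy as np
    import openvino as ov

    core = ov.Core()

    # Compile the model for whichever device OpenVINO judges best.
    compiled = core.compile_model("model.xml", "AUTO")  # placeholder path

    # Run inference on dummy data shaped like the model's input.
    input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)
    results = compiled(input_data)
    print(results[compiled.output(0)])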

Serving at Scale

Cloud-ready deployments for microservice applications.

Try it out
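
As an illustrative sketch only: once an OpenVINO Model Server instance is running (for example as a container or a Kubernetes pod), clients can call its TensorFlow-Serving-compatible REST API over HTTP. The host, port, model name, and input shape below are placeholders.

    import numpy as np
    import requests

    # Placeholder endpoint of a running OpenVINO Model Server instance.
    url = "http://localhost:8080/v1/models/my_model:predict"

    # TensorFlow-Serving-style request body carrying one dummy input.
    payload = {"instances": np.zeros((1, 3, 224, 224), dtype=np.float32).tolist()}

    response = requests.post(url, json=payload)
    print(response.json())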

Model Compression

Boost performance with post-training and training-time compression using NNCF.

Optimize now
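
For instance, post-training quantization with NNCF needs only a model and a small calibration set. A minimal sketch, with a placeholder model path and random data standing in for real calibration samples:

    import numpy as np
    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")  # placeholder path

    # A few hundred representative samples; replace the random data with real inputs.
    calibration_data = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
    calibration_dataset = nncf.Dataset(calibration_data, lambda item: item)

    # 8-bit post-training quantization.
    quantized_model = nncf.quantize(model, calibration_dataset)
    ov.save_model(quantized_model, "model_int8.xml")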


Key Features

Local Inference & Model Serving

You can either link directly with OpenVINO Runtime to run inference locally or use OpenVINO Model Server to serve model inference from a separate server or within a Kubernetes environment.

Fast & Scalable Deployment

Write an application once, deploy it anywhere, and get maximum performance from your hardware. Automatic device discovery allows for superior deployment flexibility. OpenVINO Runtime supports Linux, Windows, and macOS and provides Python, C++, and C APIs. Use your preferred language and OS.
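
For example, devices can be discovered at run time and device selection delegated to OpenVINO; the model path below is a placeholder:

    import openvino as ov

    core = ov.Core()

    # Devices OpenVINO discovered on this machine, e.g. ['CPU', 'GPU'].
    print(core.available_devices)

    # Let OpenVINO pick the most suitable device automatically.
    compiled = core.compile_model("model.xml", "AUTO")  # placeholder path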

Lighter Deployment

Designed with minimal external dependencies, OpenVINO reduces the application footprint and simplifies installation and dependency management. Popular package managers enable application dependencies to be installed and upgraded easily. Custom compilation for your specific model(s) further reduces the final binary size.

Enhanced App Start-Up Time

In applications where fast start-up is required, OpenVINO significantly reduces first-inference latency by using the CPU for initial inference and then switching to another device once the model has been compiled and loaded into memory. Compiled models are cached, improving start-up time even further.
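
A small sketch of model caching, with placeholder model path and cache directory; the first compilation populates the cache and later runs start faster:

    import openvino as ov

    core = ov.Core()

    # Cache compiled models on disk so later start-ups skip recompilation.
    core.set_property({"CACHE_DIR": "ov_cache"})  # placeholder directory

    # AUTO serves the first inferences on the CPU while the target device finishes compiling.
    compiled = core.compile_model("model.xml", "AUTO")  # placeholder path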