For the 2019 R2 release, the new Inference Engine Core API is introduced, and this guide has been updated to reflect the new approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases.
This section provides common steps to migrate your application written with the Inference Engine Plugin API (InferenceEngine::InferencePlugin) to the Inference Engine Core API (InferenceEngine::Core).
To learn how to write a new application using the Inference Engine, refer to Integrate the Inference Engine Request API with Your Application and Inference Engine Samples Overview.
The Inference Engine Core class is implemented on top of the existing Inference Engine Plugin API and handles plugins internally. The main responsibility of the InferenceEngine::Core class is to hide plugin specifics and provide a new layer of abstraction that works with devices (enumerated by InferenceEngine::Core::GetAvailableDevices). Almost all methods of this class accept deviceName as an additional parameter that denotes the actual device you are working with. Plugins are listed in the plugins.xml file, which is loaded during construction of an InferenceEngine::Core object.
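For illustration, a minimal sketch of constructing a Core object and listing the available devices; the single inference_engine.hpp header and the device names are typical for a 2019 R2 installation:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    // Constructing Core loads the plugins listed in plugins.xml
    InferenceEngine::Core core;

    // Each returned name (e.g. "CPU", "GPU") can be passed as the
    // deviceName argument of other Core methods
    for (const std::string &device : core.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```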
The common migration process includes the following steps:
1. Replace InferenceEngine::InferencePlugin initialization with InferenceEngine::Core class initialization.
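A before/after sketch, assuming the plugin was previously obtained through InferenceEngine::PluginDispatcher (as in the pre-Core samples) and "CPU" stands in for the target device:

```cpp
// Plugin API (before): one plugin object per device
InferenceEngine::InferencePlugin plugin =
    InferenceEngine::PluginDispatcher().getPluginByDevice("CPU");

// Core API (after): a single Core object serves all devices
InferenceEngine::Core core;
```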
2. Instead of using InferenceEngine::CNNNetReader to read IR, read networks using the Core class.
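A before/after sketch, where "model.xml" and "model.bin" are placeholder IR file names and core is the object created in step 1:

```cpp
// Plugin API (before): CNNNetReader reads topology and weights separately
InferenceEngine::CNNNetReader reader;
reader.ReadNetwork("model.xml");
reader.ReadWeights("model.bin");
InferenceEngine::CNNNetwork oldNetwork = reader.getNetwork();

// Core API (after): a single call; the weights file is found automatically
// when it sits next to the .xml file
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
```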
The Core class also allows reading models in the ONNX format.
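A sketch, with "model.onnx" as a placeholder path:

```cpp
// ONNX models are read directly; no separate weights file is required
InferenceEngine::CNNNetwork onnxNetwork = core.ReadNetwork("model.onnx");
```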
3. Instead of adding CPU extensions to the plugin, add extensions to the CPU device using the Core class.
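A sketch, assuming the extension is built as a shared library; "libcustom_extension.so" is a placeholder path and core is the object from step 1:

```cpp
// Load a custom-layer extension from a shared library
auto extensionPtr =
    InferenceEngine::make_so_pointer<InferenceEngine::IExtension>("libcustom_extension.so");

// The Plugin API equivalent was plugin.AddExtension(extensionPtr);
// with the Core API, the target device is named explicitly:
core.AddExtension(extensionPtr, "CPU");
```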
4. Set device configuration through InferenceEngine::Core::SetConfig. If deviceName is omitted as the last argument, the configuration is set for all Inference Engine devices.
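A sketch using the performance-counters key as an example configuration value; core is the object from step 1:

```cpp
// Enable performance counters for the CPU device only
core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
                 InferenceEngine::PluginConfigParams::YES}}, "CPU");

// With deviceName omitted, the configuration applies to all devices
core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
                 InferenceEngine::PluginConfigParams::YES}});
```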
5. Load the network to a particular device using InferenceEngine::Core::LoadNetwork.
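A before/after sketch, where network is the CNNNetwork from step 2, plugin is the old Plugin API object, and "CPU" is a placeholder device name:

```cpp
// Plugin API (before): the device was chosen when the plugin was created
InferenceEngine::ExecutableNetwork oldExecNetwork = plugin.LoadNetwork(network, {});

// Core API (after): the device name is passed explicitly
InferenceEngine::ExecutableNetwork execNetwork = core.LoadNetwork(network, "CPU");
```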
After you have an instance of InferenceEngine::ExecutableNetwork, the remaining steps are the same as before.