The Inference Engine integrates nGraph to represent a model at run time underneath the conventional CNNNetwork API, which encapsulates an ngraph::Function instance. Besides the representation update, nGraph enables new features, such as building an ngraph::Function at run time, wrapping it into a CNNNetwork, and specializing input shapes with the CNNNetwork::reshape() method.
A complete picture of the existing flow is shown below.
The IR version 10 automatically triggers the nGraph flow inside the Inference Engine. When such an IR is read in an application, the Inference Engine IR reader produces a CNNNetwork that encapsulates the ngraph::Function instance underneath.
Interpretation of the IR version 10 differs from the old IR versions. Besides using a different operation set, the IR version 10 ignores the shapes and data types assigned to the ports in the XML file. Both shapes and types are re-inferred while the model is being loaded into the Inference Engine, using the nGraph shape and type propagation functions that are part of each nGraph operation.
An alternative way to feed the Inference Engine with a model is to create the model at run time. This is achieved by constructing an ngraph::Function using nGraph operation classes and, optionally, user-defined operations. For details, see Add Custom nGraph Operations and the examples. At this stage, the code is completely independent of the rest of the Inference Engine code and can be built separately. After you construct an instance of ngraph::Function, you can use it to create a CNNNetwork by passing the function to the new constructor for this class.