Public Types | Public Member Functions | Protected Member Functions
InferenceEngine::ICNNNetwork Interface Reference [abstract]

This is the main interface to describe the NN topology. More...

#include <ie_icnn_network.hpp>

Inheritance diagram for InferenceEngine::ICNNNetwork: (diagram omitted)
Collaboration diagram for InferenceEngine::ICNNNetwork: (diagram omitted)

Public Types

using Ptr = std::shared_ptr< ICNNNetwork >
 A shared pointer to an ICNNNetwork interface. More...
 
using InputShapes = std::map< std::string, SizeVector >
 Map of pairs: name of corresponding data and its dimension. More...
 

Public Member Functions

virtual std::shared_ptr< ngraph::Function > getFunction () noexcept=0
 Returns nGraph function. More...
 
virtual std::shared_ptr< const ngraph::Function > getFunction () const noexcept=0
 Returns constant nGraph function. More...
 
virtual void getOutputsInfo (OutputsDataMap &out) const noexcept=0
 Gets the network output Data node information. The received info is stored in the given OutputsDataMap object. More...
 
virtual void getInputsInfo (InputsDataMap &inputs) const noexcept=0
 Gets the network input Data node information. The received info is stored in the given InputsDataMap object. More...
 
virtual InputInfo::Ptr getInput (const std::string &inputName) const noexcept=0
 Returns information on the input specified by inputName. More...
 
virtual const std::string & getName () const noexcept=0
 Returns the network name. More...
 
virtual size_t layerCount () const noexcept=0
 Returns the number of layers in the network as an integer value. More...
 
virtual StatusCode addOutput (const std::string &layerName, size_t outputIndex=0, ResponseDesc *resp=nullptr) noexcept=0
 Adds output to the layer. More...
 
virtual StatusCode setBatchSize (size_t size, ResponseDesc *responseDesc) noexcept=0
 Changes the inference batch size. More...
 
virtual size_t getBatchSize () const noexcept=0
 Gets the inference batch size. More...
 
virtual StatusCode reshape (const InputShapes &inputShapes, ResponseDesc *resp) noexcept
 Run shape inference with new input shapes for the network. More...
 
virtual StatusCode serialize (const std::string &xmlPath, const std::string &binPath, ResponseDesc *resp) const noexcept=0
 Serialize network to IR and weights files. More...
 
virtual StatusCode getOVNameForTensor (std::string &ov_name, const std::string &orig_name, ResponseDesc *resp) const noexcept
 Maps a framework tensor name to an OpenVINO name. More...
 

Protected Member Functions

 ~ICNNNetwork ()=default
 Default destructor.
 

Detailed Description

This is the main interface to describe the NN topology.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead

Member Typedef Documentation

◆ InputShapes

using InferenceEngine::ICNNNetwork::InputShapes = std::map<std::string, SizeVector>

Map of pairs: name of corresponding data and its dimension.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
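A minimal, self-contained sketch of how an InputShapes map might be populated. The aliases are redefined locally (InferenceEngine::SizeVector is std::vector<size_t>) so the snippet compiles without OpenVINO headers; the input name "data" and the NCHW shape are hypothetical.

```cpp
#include <map>
#include <string>
#include <vector>

// Local stand-in aliases mirroring the library definitions.
using SizeVector = std::vector<size_t>;
using InputShapes = std::map<std::string, SizeVector>;

// Build an InputShapes map for a hypothetical network with one
// image input in NCHW layout.
InputShapes makeShapes() {
    InputShapes shapes;
    shapes["data"] = SizeVector{1, 3, 224, 224};  // N, C, H, W
    return shapes;
}
```

Such a map is what reshape() consumes: one entry per input, keyed by the data name, with the full dimension vector as the value.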

◆ Ptr

using InferenceEngine::ICNNNetwork::Ptr = std::shared_ptr<ICNNNetwork>

A shared pointer to an ICNNNetwork interface.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead

Member Function Documentation

◆ addOutput()

virtual StatusCode InferenceEngine::ICNNNetwork::addOutput ( const std::string &  layerName,
size_t  outputIndex = 0,
ResponseDesc *  resp = nullptr 
)
pure virtual, noexcept

Adds output to the layer.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Parameters
layerName  Name of the layer
outputIndex  Index of the output
resp  Response message
Returns
Status code of the operation

◆ getBatchSize()

virtual size_t InferenceEngine::ICNNNetwork::getBatchSize ( ) const
pure virtual, noexcept

Gets the inference batch size.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Returns
The size of batch as a size_t value

◆ getFunction() [1/2]

virtual std::shared_ptr<const ngraph::Function> InferenceEngine::ICNNNetwork::getFunction ( ) const
pure virtual, noexcept

Returns constant nGraph function.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Returns
constant nGraph function

◆ getFunction() [2/2]

virtual std::shared_ptr<ngraph::Function> InferenceEngine::ICNNNetwork::getFunction ( )
pure virtual, noexcept

Returns nGraph function.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Returns
nGraph function

◆ getInput()

virtual InputInfo::Ptr InferenceEngine::ICNNNetwork::getInput ( const std::string &  inputName) const
pure virtual, noexcept

Returns information on the input specified by inputName.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Parameters
inputName  Name of the input layer to get info on
Returns
A smart pointer to the input information

◆ getInputsInfo()

virtual void InferenceEngine::ICNNNetwork::getInputsInfo ( InputsDataMap &  inputs) const
pure virtual, noexcept

Gets the network input Data node information. The received info is stored in the given InputsDataMap object.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead

For networks with single and multiple inputs. This method needs to be called to find out the OpenVINO input names so they can be used later when calling InferenceEngine::InferRequest::SetBlob.

If you want to use framework names, use the InferenceEngine::ICNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names.

Parameters
inputs  Reference to the InputsDataMap object

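After getInputsInfo has filled the map, a common first step is to collect the OpenVINO input names for later SetBlob calls. A self-contained sketch using a local stand-in for InputsDataMap (the real type is std::map<std::string, InputInfo::Ptr>); the InputInfo stub here is a placeholder, not the library class:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal stand-in so the snippet compiles without OpenVINO headers.
struct InputInfo { using Ptr = std::shared_ptr<InputInfo>; };
using InputsDataMap = std::map<std::string, InputInfo::Ptr>;

// Collect the input names, as one would after calling
// getInputsInfo(inputs) on a network.
std::vector<std::string> inputNames(const InputsDataMap& inputs) {
    std::vector<std::string> names;
    for (const auto& entry : inputs)
        names.push_back(entry.first);  // map key is the OpenVINO name
    return names;
}
```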
◆ getName()

virtual const std::string& InferenceEngine::ICNNNetwork::getName ( ) const
pure virtual, noexcept

Returns the network name.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Returns
Network name

◆ getOutputsInfo()

virtual void InferenceEngine::ICNNNetwork::getOutputsInfo ( OutputsDataMap &  out) const
pure virtual, noexcept

Gets the network output Data node information. The received info is stored in the given OutputsDataMap object.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead

For networks with single and multiple outputs.

This method needs to be called to find out the OpenVINO output names so they can be used later when calling InferenceEngine::InferRequest::GetBlob or InferenceEngine::InferRequest::SetBlob.

If you want to use framework names, use the InferenceEngine::ICNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names.

Parameters
out  Reference to the OutputsDataMap object

◆ getOVNameForTensor()

virtual StatusCode InferenceEngine::ICNNNetwork::getOVNameForTensor ( std::string &  ov_name,
const std::string &  orig_name,
ResponseDesc *  resp 
) const
inline, virtual, noexcept

Maps a framework tensor name to an OpenVINO name.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Parameters
ov_name  OpenVINO name
orig_name  Framework tensor name
resp  Pointer to the response message that holds a description of an error if any occurred
Returns
Status code of the operation
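The out-parameter-plus-StatusCode convention used here can be sketched as follows. This is a self-contained illustration, not the library implementation: the StatusCode values are local stand-ins mirroring the InferenceEngine enum, and the lookup table is hypothetical (the real method consults the imported model's metadata).

```cpp
#include <map>
#include <string>

// Local stand-in mirroring InferenceEngine::StatusCode values.
enum StatusCode { OK = 0, NOT_FOUND = -5 };

// Hypothetical mapping from framework tensor names to OpenVINO names.
// On success, writes the result into ov_name and returns OK.
StatusCode getOVNameForTensor(std::string& ov_name,
                              const std::string& orig_name,
                              const std::map<std::string, std::string>& table) {
    auto it = table.find(orig_name);
    if (it == table.end())
        return NOT_FOUND;  // unknown framework name
    ov_name = it->second;
    return OK;
}
```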

◆ layerCount()

virtual size_t InferenceEngine::ICNNNetwork::layerCount ( ) const
pure virtual, noexcept

Returns the number of layers in the network as an integer value.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Returns
The number of layers as an integer value

◆ reshape()

virtual StatusCode InferenceEngine::ICNNNetwork::reshape ( const InputShapes &  inputShapes,
ResponseDesc *  resp 
)
inline, virtual, noexcept

Run shape inference with new input shapes for the network.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Parameters
inputShapes  Map of pairs: name of corresponding data and its dimension
resp  Pointer to the response message that holds a description of an error if any occurred
Returns
Status code of the operation
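The input-validation side of reshape can be sketched as pure shape bookkeeping. This is a self-contained stand-in, not the library code: the aliases and StatusCode values are local, and the real method additionally propagates the new shapes through the graph.

```cpp
#include <map>
#include <string>
#include <vector>

using SizeVector = std::vector<size_t>;
using InputShapes = std::map<std::string, SizeVector>;
enum StatusCode { OK = 0, NOT_FOUND = -5 };

// Replace the stored input shapes with the requested ones, failing
// if a requested name does not match any known input.
StatusCode reshape(InputShapes& current, const InputShapes& requested) {
    for (const auto& entry : requested)
        if (current.find(entry.first) == current.end())
            return NOT_FOUND;  // unknown input name
    for (const auto& entry : requested)
        current[entry.first] = entry.second;
    return OK;
}
```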

◆ serialize()

virtual StatusCode InferenceEngine::ICNNNetwork::serialize ( const std::string &  xmlPath,
const std::string &  binPath,
ResponseDesc *  resp 
) const
pure virtual, noexcept

Serialize network to IR and weights files.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Parameters
xmlPath  Path to the output IR file
binPath  Path to the output weights file
resp  Pointer to the response message that holds a description of an error if any occurred
Returns
Status code of the operation

◆ setBatchSize()

virtual StatusCode InferenceEngine::ICNNNetwork::setBatchSize ( size_t  size,
ResponseDesc *  responseDesc 
)
pure virtual, noexcept

Changes the inference batch size.

Deprecated:
Use InferenceEngine::CNNNetwork wrapper instead
Note
This method has several limitations and is not recommended. Instead, set the batch in the input shape and call ICNNNetwork::reshape.
Parameters
size  Size of batch to set
responseDesc  Pointer to the response message that holds a description of an error if any occurred
Returns
Status code of the operation
Note
The current implementation sets the batch size to the first dimension of all layers in the network. Before calling it, make sure that all your layers have the batch in the first dimension; otherwise the method works incorrectly. This limitation is resolved via the shape inference feature, using the InferenceEngine::ICNNNetwork::reshape method. For more details, refer to the Shape Inference section of the documentation.
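The recommended alternative described in the note above — putting the new batch into the first dimension of each input shape and then calling reshape — can be sketched as follows. Self-contained stand-in types again; this assumes the batch really is dimension 0 of every input.

```cpp
#include <map>
#include <string>
#include <vector>

using SizeVector = std::vector<size_t>;
using InputShapes = std::map<std::string, SizeVector>;

// Produce a copy of the input shapes with the batch (dimension 0)
// replaced; the result is what one would pass to reshape().
InputShapes withBatch(InputShapes shapes, size_t batch) {
    for (auto& entry : shapes)
        if (!entry.second.empty())
            entry.second[0] = batch;  // assumes NCHW-style layout
    return shapes;
}
```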

The documentation for this interface was generated from the following file:
ie_icnn_network.hpp