InferenceEngine::CNNNetwork Class Reference

This class contains all the information about the Neural Network and the related binary information. More...

#include <ie_cnn_network.h>

Public Member Functions

 CNNNetwork ()
 A default constructor.
 
 CNNNetwork (std::shared_ptr< ICNNNetwork > network)
 Allows the helper class to manage the lifetime of the network object. More...
 
 CNNNetwork (const std::shared_ptr< ngraph::Function > &network, const std::vector< IExtensionPtr > &exts={})
 A constructor from an ngraph::Function object. This constructor wraps the existing ngraph::Function; if you want to avoid modifying the original Function, create a copy first. More...
 
OutputsDataMap getOutputsInfo () const
 Gets the network output Data node information. The received info is stored in an OutputsDataMap object. More...
 
InputsDataMap getInputsInfo () const
 Gets the network input Data node information. The received info is stored in an InputsDataMap object. More...
 
size_t layerCount () const
 Returns the number of layers in the network as an integer value. More...
 
const std::string & getName () const
 Returns the network name. More...
 
void setBatchSize (const size_t size)
 Changes the inference batch size. More...
 
size_t getBatchSize () const
 Gets the inference batch size. More...
 
 operator ICNNNetwork::Ptr ()
 An overloaded cast operator to get a shared pointer to the current network. More...
 
 operator ICNNNetwork & ()
 An overloaded cast operator to get a reference to the current network. More...
 
 operator const ICNNNetwork & () const
 An overloaded cast operator to get a const reference to the current network. More...
 
std::shared_ptr< ngraph::Function > getFunction ()
 Returns the nGraph function. More...
 
std::shared_ptr< const ngraph::Function > getFunction () const
 Returns the constant nGraph function. More...
 
void addOutput (const std::string &layerName, size_t outputIndex=0)
 Adds output to the layer. More...
 
ICNNNetwork::InputShapes getInputShapes () const
 Helper method to collect all input shapes with the names of the corresponding Data objects. More...
 
void reshape (const ICNNNetwork::InputShapes &inputShapes)
 Run shape inference with new input shapes for the network. More...
 
void serialize (const std::string &xmlPath, const std::string &binPath={}) const
 Serialize network to IR and weights files. More...
 
std::string getOVNameForTensor (const std::string &orig_name) const
 Maps a framework tensor name to the OpenVINO name. More...
 

Detailed Description

This class contains all the information about the Neural Network and the related binary information.

Constructor & Destructor Documentation

◆ CNNNetwork() [1/2]

InferenceEngine::CNNNetwork::CNNNetwork ( std::shared_ptr< ICNNNetwork > network)
explicit

Allows the helper class to manage the lifetime of the network object.

Deprecated:
Don't use this constructor. It will be removed soon.
Parameters
network  Pointer to the network object

◆ CNNNetwork() [2/2]

InferenceEngine::CNNNetwork::CNNNetwork ( const std::shared_ptr< ngraph::Function > &  network,
const std::vector< IExtensionPtr > &  exts = {} 
)
explicit

A constructor from an ngraph::Function object. This constructor wraps the existing ngraph::Function; if you want to avoid modifying the original Function, create a copy first.

Parameters
network  Pointer to the ngraph::Function object
exts  Vector of pointers to IE extension objects
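
As an illustration, a minimal sketch of building a trivial ngraph::Function and wrapping it (the single-ReLU graph and the shapes are invented for the example; assumes OpenVINO 2021.x-era headers are available):

```cpp
#include <ie_cnn_network.h>
#include <ngraph/ngraph.hpp>
#include <ngraph/opsets/opset1.hpp>

// Build a one-op function (out = Relu(in)) and wrap it in a CNNNetwork.
// CNNNetwork wraps the Function in place, so copy it first if the
// original must remain unmodified.
InferenceEngine::CNNNetwork makeNetwork() {
    auto input = std::make_shared<ngraph::opset1::Parameter>(
        ngraph::element::f32, ngraph::Shape{1, 3, 224, 224});
    auto relu = std::make_shared<ngraph::opset1::Relu>(input);
    auto fn = std::make_shared<ngraph::Function>(
        ngraph::NodeVector{relu}, ngraph::ParameterVector{input});
    return InferenceEngine::CNNNetwork(fn);
}
```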

Member Function Documentation

◆ addOutput()

void InferenceEngine::CNNNetwork::addOutput ( const std::string &  layerName,
size_t  outputIndex = 0 
)

Adds output to the layer.

Parameters
layerName  Name of the layer
outputIndex  Index of the output
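
For example (a sketch; `core`, `"model.xml"`, and the layer name `"conv5"` are hypothetical placeholders):

```cpp
InferenceEngine::Core core;
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

// Expose an intermediate layer's first output as an extra network output.
network.addOutput("conv5");  // outputIndex defaults to 0

// The new output now appears in the outputs map alongside the originals.
auto outputs = network.getOutputsInfo();
```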

◆ getBatchSize()

size_t InferenceEngine::CNNNetwork::getBatchSize ( ) const

Gets the inference batch size.

Returns
The size of batch as a size_t value

◆ getFunction() [1/2]

std::shared_ptr<ngraph::Function> InferenceEngine::CNNNetwork::getFunction ( )

Returns the nGraph function.

Returns
A shared pointer to the nGraph function
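
For example, the returned function can be traversed to inspect the network's operations (a sketch; assumes `network` is an initialized CNNNetwork):

```cpp
#include <iostream>

std::shared_ptr<ngraph::Function> fn = network.getFunction();
if (fn) {  // may be null for networks without an nGraph representation
    for (const auto& op : fn->get_ops()) {
        std::cout << op->get_friendly_name() << " : "
                  << op->get_type_name() << std::endl;
    }
}
```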

◆ getFunction() [2/2]

std::shared_ptr<const ngraph::Function> InferenceEngine::CNNNetwork::getFunction ( ) const

Returns the constant nGraph function.

Returns
A shared pointer to the constant nGraph function

◆ getInputShapes()

ICNNNetwork::InputShapes InferenceEngine::CNNNetwork::getInputShapes ( ) const

Helper method to collect all input shapes with the names of the corresponding Data objects.

Returns
Map of pairs: input name and its dimension.

◆ getInputsInfo()

InputsDataMap InferenceEngine::CNNNetwork::getInputsInfo ( ) const

Gets the network input Data node information. The received info is stored in the given InputsDataMap object.

For networks with single and multiple inputs. This method needs to be called to find out the OpenVINO input names so they can be used later when calling InferenceEngine::InferRequest::SetBlob.

If you want to use framework names, you can use the InferenceEngine::CNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names.

Returns
The InferenceEngine::InputsDataMap object.
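
Typical usage (a sketch; assumes `network` was already obtained, e.g. from Core::ReadNetwork):

```cpp
InferenceEngine::InputsDataMap inputs = network.getInputsInfo();
for (auto& item : inputs) {
    const std::string& name = item.first;  // OpenVINO input name
    item.second->setPrecision(InferenceEngine::Precision::U8);
    // `name` is the key to pass later to InferRequest::SetBlob(name, blob)
}
```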

◆ getName()

const std::string& InferenceEngine::CNNNetwork::getName ( ) const

Returns the network name.

Returns
Network name

◆ getOutputsInfo()

OutputsDataMap InferenceEngine::CNNNetwork::getOutputsInfo ( ) const

Gets the network output Data node information. The received info is stored in an OutputsDataMap object.

For networks with single and multiple outputs.

This method needs to be called to find out the OpenVINO output names so they can be used later when calling InferenceEngine::InferRequest::GetBlob or InferenceEngine::InferRequest::SetBlob.

If you want to use framework names, you can use InferenceEngine::CNNNetwork::getOVNameForTensor method to map framework names to OpenVINO names

Returns
the InferenceEngine::OutputsDataMap object
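
A companion sketch to the inputs case (assumes `network` is an initialized CNNNetwork):

```cpp
InferenceEngine::OutputsDataMap outputs = network.getOutputsInfo();
for (auto& item : outputs) {
    item.second->setPrecision(InferenceEngine::Precision::FP32);
    // item.first is the name to pass later to InferRequest::GetBlob(...)
}
```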

◆ getOVNameForTensor()

std::string InferenceEngine::CNNNetwork::getOVNameForTensor ( const std::string &  orig_name) const

Maps a framework tensor name to the corresponding OpenVINO name.

Parameters
orig_name  Framework tensor name
Returns
OpenVINO name
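
For example (the framework tensor name `"data:0"` is a hypothetical placeholder):

```cpp
// Translate a tensor name from the original framework model into the
// name OpenVINO assigned to it.
std::string ovName = network.getOVNameForTensor("data:0");
```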

◆ layerCount()

size_t InferenceEngine::CNNNetwork::layerCount ( ) const

Returns the number of layers in the network as an integer value.

Returns
The number of layers as an integer value

◆ operator const ICNNNetwork &()

InferenceEngine::CNNNetwork::operator const ICNNNetwork & ( ) const

An overloaded cast operator to get a const reference to the current network.

Deprecated:
InferenceEngine::ICNNNetwork interface is deprecated
Returns
A const reference of the current network

◆ operator ICNNNetwork &()

InferenceEngine::CNNNetwork::operator ICNNNetwork & ( )

An overloaded cast operator to get a reference to the current network.

Deprecated:
InferenceEngine::ICNNNetwork interface is deprecated
Returns
A reference to the current network

◆ operator ICNNNetwork::Ptr()

InferenceEngine::CNNNetwork::operator ICNNNetwork::Ptr ( )

An overloaded cast operator to get a shared pointer to the current network.

Deprecated:
InferenceEngine::ICNNNetwork interface is deprecated
Returns
A shared pointer of the current network

◆ reshape()

void InferenceEngine::CNNNetwork::reshape ( const ICNNNetwork::InputShapes &  inputShapes)

Run shape inference with new input shapes for the network.

Parameters
inputShapes  A map of pairs: name of the corresponding data and its dimension.
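
A common use is changing the batch dimension (a sketch; assumes the batch is the first dimension of every input and that `network` is initialized):

```cpp
// Collect the current shapes, edit the batch dimension, and re-run
// shape inference over the whole network.
auto shapes = network.getInputShapes();  // map: input name -> SizeVector
for (auto& item : shapes)
    item.second[0] = 8;                  // assumes dim 0 is the batch
network.reshape(shapes);
```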

◆ serialize()

void InferenceEngine::CNNNetwork::serialize ( const std::string &  xmlPath,
const std::string &  binPath = {} 
) const

Serialize network to IR and weights files.

Parameters
xmlPath  Path to the output IR file.
binPath  Path to the output weights file. The parameter is skipped in case of executable graph info serialization.
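
For example (the output paths are illustrative):

```cpp
// Writes the topology to exported.xml and the weights to exported.bin.
network.serialize("exported.xml", "exported.bin");
```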

◆ setBatchSize()

void InferenceEngine::CNNNetwork::setBatchSize ( const size_t  size)

Changes the inference batch size.

Note
There are several limitations, and using this method is not recommended. Instead, set the batch in the input shape and call InferenceEngine::CNNNetwork::reshape.
Parameters
size  Size of the batch to set
Note
The current implementation of the function sets the batch size to the first dimension of all layers in the network. Before calling it, make sure that all your layers have the batch in the first dimension; otherwise the method works incorrectly. This limitation is resolved via the shape inference feature, using the InferenceEngine::ICNNNetwork::reshape method. To read more, refer to the Shape Inference section in the documentation.
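
A sketch of the legacy call next to the recommended reshape-based approach (assumes the batch is the first dimension of every input):

```cpp
// Legacy path, subject to the limitations noted above:
network.setBatchSize(4);
size_t batch = network.getBatchSize();

// Recommended path: set the batch via the input shapes and reshape.
auto shapes = network.getInputShapes();
for (auto& item : shapes)
    item.second[0] = 4;
network.reshape(shapes);
```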

The documentation for this class was generated from the following file:
ie_cnn_network.h