InferenceEngine::IInferRequestInternal Interface Reference

An internal API of synchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism. More...

#include <ie_iinfer_request_internal.hpp>

Inheritance diagram for InferenceEngine::IInferRequestInternal: inherited by InferenceEngine::AsyncInferRequestThreadSafeDefault.

Public Types

using Ptr = std::shared_ptr< IInferRequestInternal >
 A shared pointer to a IInferRequestInternal interface.
 
using Callback = std::function< void(std::exception_ptr)>
 Alias for callback type.
 

Public Member Functions

 IInferRequestInternal (const InputsDataMap &networkInputs, const OutputsDataMap &networkOutputs)
 Constructs a new instance. More...
 
virtual void Infer ()
 Infers specified input(s) in synchronous mode. More...
 
virtual void InferImpl ()
 The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode. More...
 
virtual void Cancel ()
 Cancel current inference request execution.
 
virtual std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const
 Queries performance measures per layer to get feedback of what is the most time consuming layer. Note: not all plugins may provide meaningful data. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data)
 Set input/output data to infer. More...
 
virtual Blob::Ptr GetBlob (const std::string &name)
 Get input/output data to infer. More...
 
virtual void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)
 Sets pre-process for input data. More...
 
virtual const PreProcessInfo & GetPreProcess (const std::string &name) const
 Gets pre-process for input data. More...
 
virtual void SetBatch (int batch)
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
virtual std::vector< std::shared_ptr< IVariableStateInternal > > QueryState ()
 Queries memory states. More...
 
virtual void StartAsync ()
 Start inference of specified input(s) in asynchronous mode. More...
 
virtual void StartAsyncImpl ()
 The minimal asynchronous inference function to be implemented by plugins. It starts inference of specified input(s) in asynchronous mode. More...
 
virtual StatusCode Wait (int64_t millis_timeout)
 Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first. More...
 
virtual void SetCallback (Callback callback)
 Set callback function which will be called on success or failure of asynchronous request. More...
 
void checkBlob (const Blob::Ptr &blob, const std::string &name, bool isInput, const SizeVector &refDims={}) const
 Check that blob is valid. Throws an exception if it's not. More...
 
virtual void checkBlobs ()
 Check that all of the blobs are valid. Throws an exception if any is not.
 
void setPointerToExecutableNetworkInternal (const std::shared_ptr< IExecutableNetworkInternal > &exeNetwork)
 Sets the pointer to executable network internal. More...
 
void * GetUserData () noexcept
 Gets the pointer to userData. Deprecated: the method will be removed. More...
 
void SetUserData (void *userData) noexcept
 Sets the pointer to userData. Deprecated: the method will be removed. More...
 

Protected Member Functions

 ~IInferRequestInternal ()
 Destroys the object.
 
void execDataPreprocessing (InferenceEngine::BlobMap &preprocessedBlobs, bool serial=false)
 Checks and executes input data pre-processing if needed. More...
 
bool findInputAndOutputBlobByName (const std::string &name, InputInfo::Ptr &foundInput, DataPtr &foundOutput) const
 Helper function to find input or output blob by name. More...
 
bool preProcessingRequired (const InputInfo::Ptr &info, const Blob::Ptr &userBlob, const Blob::Ptr &deviceBlob=nullptr)
 Checks whether pre-processing step is required for a given input. More...
 
void addInputPreProcessingFor (const std::string &name, Blob::Ptr const &from, const Blob::Ptr &to)
 Registers pre-processing for the named input, converting data from the user blob to the device blob.
 

Protected Attributes

InferenceEngine::InputsDataMap _networkInputs
 Holds information about network inputs.
 
InferenceEngine::OutputsDataMap _networkOutputs
 Holds information about network outputs.
 
InferenceEngine::BlobMap _inputs
 A map of user passed blobs for network inputs.
 
InferenceEngine::BlobMap _deviceInputs
 A map of actual network inputs, in plugin specific format.
 
InferenceEngine::BlobMap _outputs
 A map of user passed blobs for network outputs.
 
std::map< std::string, PreProcessDataPtr > _preProcData
 A map of pre-process data per input.
 
int m_curBatch = -1
 Current batch value used in dynamic batching.
 
std::shared_ptr< IExecutableNetworkInternal > _exeNetwork
 A shared pointer to the IExecutableNetworkInternal that created this request. More...
 
Callback _callback
 A callback.
 

Detailed Description

An internal API of synchronous inference request to be implemented by plugin, which is used in InferRequestBase forwarding mechanism.
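
As a usage illustration, a plugin-side request might derive from this interface roughly as follows. This is a minimal sketch: MyPluginInferRequest and runOnDevice are hypothetical names, while the constructor, InferImpl, checkBlobs, and the protected blob maps are the members documented on this page.

    // Hypothetical plugin-side synchronous request (sketch only).
    class MyPluginInferRequest : public InferenceEngine::IInferRequestInternal {
    public:
        MyPluginInferRequest(const InferenceEngine::InputsDataMap& networkInputs,
                             const InferenceEngine::OutputsDataMap& networkOutputs)
            : IInferRequestInternal(networkInputs, networkOutputs) {}

        void InferImpl() override {
            checkBlobs();                          // validate user blobs first
            runOnDevice(_deviceInputs, _outputs);  // placeholder device call
        }

    private:
        void runOnDevice(InferenceEngine::BlobMap& in, InferenceEngine::BlobMap& out);
    };

Infer() itself runs the common code and then dispatches to InferImpl() (see the notes under InferImpl below), so plugins normally override only the *Impl methods.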

Constructor & Destructor Documentation

◆ IInferRequestInternal()

InferenceEngine::IInferRequestInternal::IInferRequestInternal ( const InputsDataMap &  networkInputs,
const OutputsDataMap &  networkOutputs 
)

Constructs a new instance.

Parameters
[in] networkInputs   The network inputs info
[in] networkOutputs  The network outputs data

Member Function Documentation

◆ checkBlob()

void InferenceEngine::IInferRequestInternal::checkBlob ( const Blob::Ptr &  blob,
const std::string &  name,
bool  isInput,
const SizeVector &  refDims = {} 
) const

Check that blob is valid. Throws an exception if it's not.

Parameters
[in] blob     The blob to check
[in] name     The name of the input or output, depending on whether the blob is an input or an output
[in] isInput  Indicates whether the blob is an input
[in] refDims  The reference dims; empty if not specified
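
As a sketch of how this check is typically used (the override shown here is hypothetical), a plugin can validate a user blob before storing it:

    void SetBlob(const std::string& name, const InferenceEngine::Blob::Ptr& data) override {
        // Throws if the blob is invalid or its dimensions don't match the named input.
        checkBlob(data, name, /*isInput=*/true);
        _inputs[name] = data;
    }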

◆ execDataPreprocessing()

void InferenceEngine::IInferRequestInternal::execDataPreprocessing ( InferenceEngine::BlobMap &  preprocessedBlobs,
bool  serial = false 
)
protected

Checks and executes input data pre-processing if needed.

Parameters
preprocessedBlobs  Input blobs to perform pre-processing on
serial             Whether to execute the step serially (in a single thread) rather than with multiple threads

◆ findInputAndOutputBlobByName()

bool InferenceEngine::IInferRequestInternal::findInputAndOutputBlobByName ( const std::string &  name,
InputInfo::Ptr &  foundInput,
DataPtr &  foundOutput 
) const
protected

Helper function to find input or output blob by name.

Parameters
name         A name of input or output blob.
foundInput   A pointer to input information, filled in if found.
foundOutput  A pointer to output DataPtr, filled in if found.
Returns
True if the loaded network has an input with the provided name; false if it has an output with the provided name
Exceptions
[not_found]  Thrown if the loaded network has neither an input nor an output layer with the given name
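
A lookup sketched under the signature above distinguishes inputs from outputs via the return value; 'name' is assumed to come from the caller:

    InferenceEngine::InputInfo::Ptr foundInput;
    InferenceEngine::DataPtr foundOutput;
    // true -> 'name' is a network input; false -> it is a network output.
    // Throws [not_found] if 'name' matches neither.
    const bool isInput = findInputAndOutputBlobByName(name, foundInput, foundOutput);
    auto& blobStorage = isInput ? _inputs : _outputs;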

◆ GetBlob()

virtual Blob::Ptr InferenceEngine::IInferRequestInternal::GetBlob ( const std::string &  name)
virtual

Get input/output data to infer.

Note
Memory allocation doesn't happen
Parameters
name - a name of input or output blob.
Returns
A reference to input or output blob. The type of Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ GetPerformanceCounts()

virtual std::map<std::string, InferenceEngineProfileInfo> InferenceEngine::IInferRequestInternal::GetPerformanceCounts ( ) const
virtual

Queries performance measures per layer to get feedback of what is the most time consuming layer. Note: not all plugins may provide meaningful data.

Returns
- a map of layer names to profiling information for that layer.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.
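
A caller-side sketch of consuming the returned map, assuming InferenceEngineProfileInfo exposes the realTime_uSec field declared in ie_common.h:

    // Requires <iostream>; 'request' is assumed to point to an IInferRequestInternal.
    for (const auto& entry : request->GetPerformanceCounts()) {
        std::cout << entry.first << ": " << entry.second.realTime_uSec << " us\n";
    }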

◆ GetPreProcess()

virtual const PreProcessInfo& InferenceEngine::IInferRequestInternal::GetPreProcess ( const std::string &  name) const
virtual

Gets pre-process for input data.

Parameters
name  Name of input blob.
Returns
A constant reference to the PreProcessInfo structure for the given input.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ GetUserData()

void* InferenceEngine::IInferRequestInternal::GetUserData ( )
noexcept

Gets the pointer to userData. Deprecated: the method will be removed.

Returns
Pointer to user data

◆ Infer()

virtual void InferenceEngine::IInferRequestInternal::Infer ( )
virtual

Infers specified input(s) in synchronous mode.

Note
blocks all methods of InferRequest while the request is ongoing (running or waiting in the queue)

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ InferImpl()

virtual void InferenceEngine::IInferRequestInternal::InferImpl ( )
virtual

The minimal infer function to be implemented by plugins. It infers specified input(s) in synchronous mode.

Note
  • This method is used in IInferRequestInternal::Infer, which runs the common code first and then calls this plugin-dependent implementation.
  • Blocks all methods of InferRequest while the request is ongoing (running or waiting in the queue)

◆ preProcessingRequired()

bool InferenceEngine::IInferRequestInternal::preProcessingRequired ( const InputInfo::Ptr &  info,
const Blob::Ptr &  userBlob,
const Blob::Ptr &  deviceBlob = nullptr 
)
protected

Checks whether pre-processing step is required for a given input.

Parameters
info        InputInfo corresponding to input blob
userBlob    Input Blob object corresponding to the input info
deviceBlob  Blob object in the plugin's desired format
Returns
True if pre-processing is required, false otherwise
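
For illustration, the decision could be wired into a blob-setting path as follows; the control flow is a sketch, and foundInput/userBlob are assumed to come from a preceding findInputAndOutputBlobByName lookup:

    if (preProcessingRequired(foundInput, userBlob)) {
        // Conversion (resize, layout, color format) is needed: register
        // pre-processing from the user blob into the plugin-format blob.
        addInputPreProcessingFor(name, userBlob, _deviceInputs[name]);
    } else {
        // The user blob is already in the plugin's desired format.
        _deviceInputs[name] = userBlob;
    }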

◆ QueryState()

virtual std::vector<std::shared_ptr<IVariableStateInternal> > InferenceEngine::IInferRequestInternal::QueryState ( )
virtual

Queries memory states.

Returns
A vector of memory states

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBatch()

virtual void InferenceEngine::IInferRequestInternal::SetBatch ( int  batch)
virtual

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch - new batch size to be used by all the following inference calls for this request.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBlob() [1/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr &  data 
)
virtual

Set input/output data to infer.

Note
Memory allocation doesn't happen
Parameters
name - a name of input or output blob.
data - a reference to input or output blob. The type of Blob must correspond to the network input precision and size.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetBlob() [2/2]

virtual void InferenceEngine::IInferRequestInternal::SetBlob ( const std::string &  name,
const Blob::Ptr &  data,
const PreProcessInfo &  info 
)
virtual

Sets pre-process for input data.

Parameters
name  Name of input blob.
data  A reference to input or output blob. The type of Blob must correspond to the network input precision and size.
info  Preprocess info for the blob.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ SetCallback()

virtual void InferenceEngine::IInferRequestInternal::SetCallback ( Callback  callback)
virtual

Set callback function which will be called on success or failure of asynchronous request.

Parameters
callback - a function to be called on success or failure of the asynchronous request; per the Callback alias, it receives a std::exception_ptr that is null on success.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.
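
Given the Callback alias above (std::function<void(std::exception_ptr)>), a caller-side sketch looks like this; a null exception_ptr signals success:

    request->SetCallback([](std::exception_ptr ex) {
        if (ex) {
            try {
                std::rethrow_exception(ex);
            } catch (const std::exception& e) {
                std::cerr << "Async inference failed: " << e.what() << '\n';
            }
        } else {
            // Success: output blobs can now be read via GetBlob().
        }
    });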

◆ setPointerToExecutableNetworkInternal()

void InferenceEngine::IInferRequestInternal::setPointerToExecutableNetworkInternal ( const std::shared_ptr< IExecutableNetworkInternal > &  exeNetwork)

Sets the pointer to executable network internal.

Note
Needed to correctly handle ownership between objects.
Parameters
[in]exeNetworkThe executable network

◆ SetUserData()

void InferenceEngine::IInferRequestInternal::SetUserData ( void *  userData )
noexcept

Sets the pointer to userData. Deprecated: the method will be removed.

Parameters
[in] userData  Pointer to user data

◆ StartAsync()

virtual void InferenceEngine::IInferRequestInternal::StartAsync ( )
virtual

Start inference of specified input(s) in asynchronous mode.

Note
The method returns immediately; inference also starts immediately.

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.

◆ StartAsyncImpl()

virtual void InferenceEngine::IInferRequestInternal::StartAsyncImpl ( )
virtual

The minimal asynchronous inference function to be implemented by plugins. It starts inference of specified input(s) in asynchronous mode.

Note
  • This method is used in AsyncInferRequestInternal::StartAsync, which performs the common steps first and then calls this plugin-dependent implementation.
  • It returns immediately; inference also starts immediately.
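
A deliberately naive sketch of such an override, launching the synchronous path on a detached thread so the call returns immediately; a real plugin would use the framework's task executors (see InferenceEngine::AsyncInferRequestThreadSafeDefault) rather than raw threads:

    void StartAsyncImpl() override {
        // Requires <thread>. Detached so that StartAsync() returns at once.
        // Error handling and callback invocation are omitted in this sketch.
        std::thread([this] { InferImpl(); }).detach();
    }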

◆ Wait()

virtual StatusCode InferenceEngine::IInferRequestInternal::Wait ( int64_t  millis_timeout)
virtual

Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.

Parameters
millis_timeout - maximum duration in milliseconds to block for
Note
There are special cases when millis_timeout equals one of the values of the WaitMode enum:
  • STATUS_ONLY - immediately returns the request status (InferRequest::StatusCode); it does not block or interrupt the current thread
  • RESULT_READY - waits until the inference result becomes available
Returns
A status code

Implemented in InferenceEngine::AsyncInferRequestThreadSafeDefault.
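
A caller-side sketch, assuming the WaitMode values mentioned above live in InferenceEngine::IInferRequest::WaitMode:

    request->StartAsync();
    // Non-blocking poll of the current status:
    InferenceEngine::StatusCode st =
        request->Wait(InferenceEngine::IInferRequest::WaitMode::STATUS_ONLY);
    // Block until the inference result is available:
    InferenceEngine::StatusCode rc =
        request->Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);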

Field Documentation

◆ _exeNetwork

std::shared_ptr<IExecutableNetworkInternal> InferenceEngine::IInferRequestInternal::_exeNetwork
protected

A shared pointer to the IExecutableNetworkInternal that created this request.

Note
Needed to correctly handle ownership between objects.

The documentation for this interface was generated from the following file:
ie_iinfer_request_internal.hpp