InferenceEngine::InferRequest Class Reference

This is an interface of asynchronous infer request. More...

#include <ie_infer_request.hpp>

Data Structures

struct  SetCallback< IInferRequest::CompletionCallback >
 
struct  SetCallback< std::function< void(InferRequest, StatusCode)> >
 

Public Types

enum  WaitMode : int64_t { RESULT_READY = -1 , STATUS_ONLY = 0 }
 Enumeration to hold wait mode for IInferRequest. More...
 
using Ptr = std::shared_ptr< InferRequest >
 A smart pointer to the InferRequest object.
 

Public Member Functions

 InferRequest ()=default
 Default constructor.
 
 InferRequest (IInferRequest::Ptr request, std::shared_ptr< details::SharedObjectLoader > splg={})
 Constructs InferRequest from the initialized std::shared_ptr. More...
 
void SetBlob (const std::string &name, const Blob::Ptr &data)
 Sets input/output data to infer. More...
 
Blob::Ptr GetBlob (const std::string &name)
 Gets input/output data for inference. More...
 
void SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)
 Sets blob with a pre-process information. More...
 
const PreProcessInfo & GetPreProcess (const std::string &name) const
 Gets pre-process for input data. More...
 
void Infer ()
 Infers specified input(s) in synchronous mode. More...
 
void Cancel ()
 Cancels inference request.
 
std::map< std::string, InferenceEngineProfileInfo > GetPerformanceCounts () const
 Queries performance measures per layer to get feedback of what is the most time consuming layer. More...
 
void SetInput (const BlobMap &inputs)
 Sets input data to infer. More...
 
void SetOutput (const BlobMap &results)
 Sets data that will contain result of the inference. More...
 
void SetBatch (const int batch)
 Sets new batch size when dynamic batching is enabled in executable network that created this request. More...
 
void StartAsync ()
 Start inference of specified input(s) in asynchronous mode. More...
 
StatusCode Wait (int64_t millis_timeout=RESULT_READY)
 Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first. More...
 
template<typename F >
void SetCompletionCallback (F callbackToSet)
 Sets a callback function that will be called on success or failure of asynchronous request. More...
 
std::vector< VariableState > QueryState ()
 Gets state control interface for given infer request. More...
 
 operator std::shared_ptr< IInferRequest > ()
 IInferRequest pointer to be used directly in CreateInferRequest functions. More...
 
bool operator! () const noexcept
 Checks if current InferRequest object is not initialized. More...
 
 operator bool () const noexcept
 Checks if current InferRequest object is initialized. More...
 
bool operator!= (const InferRequest &) const noexcept
 Compares whether this request wraps the same impl underneath. More...
 
bool operator== (const InferRequest &) const noexcept
 Compares whether this request wraps the same impl underneath. More...
 

Friends

class ExecutableNetwork
 

Detailed Description

This is an interface of asynchronous infer request.

Wraps IInferRequest. It can throw exceptions safely for the application, where they can be properly handled.

Member Enumeration Documentation

◆ WaitMode

Enumeration to hold wait mode for IInferRequest.

Enumerator
RESULT_READY 

Wait until inference result becomes available

STATUS_ONLY 

IInferRequest doesn't block or interrupt current thread and immediately returns inference status

Constructor & Destructor Documentation

◆ InferRequest()

InferenceEngine::InferRequest::InferRequest ( IInferRequest::Ptr  request,
std::shared_ptr< details::SharedObjectLoader >  splg = {} 
)
explicit

Constructs InferRequest from the initialized std::shared_ptr.

Deprecated:
This ctor will be removed in 2022.1
Parameters
request   Initialized shared pointer
splg   Plugin to use. This is required to ensure that InferRequest can work properly even if the plugin object is destroyed.

Member Function Documentation

◆ GetBlob()

Blob::Ptr InferenceEngine::InferRequest::GetBlob ( const std::string &  name)

Gets input/output data for inference.

Note
Memory allocation does not happen
Parameters
name   A name of Blob to get
Returns
A shared pointer to the Blob with the given name. If the blob is not found, an exception is thrown.
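For illustration only, a minimal sketch of reading a float output through GetBlob (the blob name "output" is an assumption, not part of this reference):

```cpp
#include <ie_blob.h>
#include <vector>

// Copy a request's output blob into a std::vector<float>.
// Assumes the request has already been inferred and has an
// FP32 output blob named "output".
std::vector<float> ReadOutput(InferenceEngine::InferRequest& request) {
    InferenceEngine::Blob::Ptr blob = request.GetBlob("output");
    auto mem = InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);
    auto holder = mem->rmap();               // locked, read-only mapping
    const float* data = holder.as<const float*>();
    return std::vector<float>(data, data + blob->size());
}
```

Note that GetBlob returns the blob already held by the request, so no extra memory allocation happens here.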

◆ GetPerformanceCounts()

std::map<std::string, InferenceEngineProfileInfo> InferenceEngine::InferRequest::GetPerformanceCounts ( ) const

Queries performance measures per layer to get feedback of what is the most time consuming layer.

Note
not all plugins provide meaningful data
Returns
Map of layer names to profiling information for that layer
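As a sketch, per-layer timings can be inspected like this (assumes `request` has already run an inference; not all plugins fill these fields):

```cpp
#include <ie_common.h>
#include <iostream>

// Print real execution time of every layer that was actually executed.
void PrintPerfCounts(InferenceEngine::InferRequest& request) {
    auto counts = request.GetPerformanceCounts();
    for (const auto& entry : counts) {
        const auto& info = entry.second;
        if (info.status == InferenceEngine::InferenceEngineProfileInfo::EXECUTED) {
            std::cout << entry.first << ": " << info.realTime_uSec << " us\n";
        }
    }
}
```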

◆ GetPreProcess()

const PreProcessInfo& InferenceEngine::InferRequest::GetPreProcess ( const std::string &  name) const

Gets pre-process for input data.

Parameters
name   Name of input blob.
Returns
constant reference to pre-process info of blob with the given name

◆ Infer()

void InferenceEngine::InferRequest::Infer ( )

Infers specified input(s) in synchronous mode.

Note
blocks all methods of InferRequest while request is ongoing (running or waiting in queue)
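A minimal synchronous-flow sketch (the model path "model.xml" and device "CPU" are assumptions; substitute your own):

```cpp
#include <ie_core.hpp>

int main() {
    InferenceEngine::Core core;
    // Hypothetical model file; any IR model works the same way.
    auto network  = core.ReadNetwork("model.xml");
    auto exec_net = core.LoadNetwork(network, "CPU");
    auto request  = exec_net.CreateInferRequest();

    // Inputs can be filled in place via GetBlob, or replaced via SetBlob.
    // Infer() blocks the calling thread until the result is ready.
    request.Infer();

    auto output = request.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}
```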

◆ operator bool()

InferenceEngine::InferRequest::operator bool ( ) const
explicit noexcept

Checks if current InferRequest object is initialized.

Returns
true if current InferRequest object is initialized, false - otherwise

◆ operator std::shared_ptr< IInferRequest >()

InferenceEngine::InferRequest::operator std::shared_ptr< IInferRequest > ( )

IInferRequest pointer to be used directly in CreateInferRequest functions.

Returns
A shared pointer to IInferRequest interface

◆ operator!()

bool InferenceEngine::InferRequest::operator! ( ) const
noexcept

Checks if current InferRequest object is not initialized.

Returns
true if current InferRequest object is not initialized, false - otherwise

◆ operator!=()

bool InferenceEngine::InferRequest::operator!= ( const InferRequest & ) const
noexcept

Compares whether this request wraps the same impl underneath.

Returns
true if current InferRequest object doesn't wrap the same impl as the operator's arg

◆ operator==()

bool InferenceEngine::InferRequest::operator== ( const InferRequest & ) const
noexcept

Compares whether this request wraps the same impl underneath.

Returns
true if current InferRequest object wraps the same impl as the operator's arg

◆ QueryState()

std::vector<VariableState> InferenceEngine::InferRequest::QueryState ( )

Gets state control interface for given infer request.

State control essential for recurrent networks

Returns
A vector of Memory State objects
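For example, a recurrent network's variable states can be reset before feeding a new input sequence (a sketch; assumes an initialized `request`):

```cpp
// Reset every memory (variable) state of the request to its initial value,
// e.g. between independent input sequences of a recurrent network.
void ResetStates(InferenceEngine::InferRequest& request) {
    for (auto&& state : request.QueryState()) {
        state.Reset();
    }
}
```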

◆ SetBatch()

void InferenceEngine::InferRequest::SetBatch ( const int  batch)

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch   New batch size to be used by all the following inference calls for this request.

◆ SetBlob() [1/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data
)

Sets input/output data to infer.

Note
Memory allocation does not happen
Parameters
name   Name of input or output blob.
data   Reference to input or output blob. The type of a blob must match the network input precision and size.
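A sketch of creating and attaching an input blob (the name "input" and the 1x3x224x224 NCHW shape are assumptions; use your network's actual input description):

```cpp
#include <ie_blob.h>

// Allocate an FP32 NCHW blob and hand it to the request.
// SetBlob itself performs no allocation; the blob must be allocated first.
void SetInputBlob(InferenceEngine::InferRequest& request) {
    InferenceEngine::TensorDesc desc(
        InferenceEngine::Precision::FP32,
        {1, 3, 224, 224},
        InferenceEngine::Layout::NCHW);
    auto blob = InferenceEngine::make_shared_blob<float>(desc);
    blob->allocate();
    // ... fill blob->buffer() with preprocessed input data ...
    request.SetBlob("input", blob);
}
```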

◆ SetBlob() [2/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data,
const PreProcessInfo &  info 
)

Sets blob with a pre-process information.

Note
Returns an error in case if data blob is output
Parameters
name   Name of input blob.
data   A reference to input. The type of Blob must correspond to the network input precision and size.
info   Preprocess info for blob.

◆ SetCompletionCallback()

template<typename F >
void InferenceEngine::InferRequest::SetCompletionCallback ( F  callbackToSet)
inline

Sets a callback function that will be called on success or failure of asynchronous request.

Parameters
callbackToSet   A callback object which will be called when inference finishes.
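A sketch using the std::function<void(InferRequest, StatusCode)> callback form (assumes an initialized `request` with inputs already set):

```cpp
#include <iostream>

// Start an asynchronous inference and report completion via a lambda.
void RunAsync(InferenceEngine::InferRequest request) {
    request.SetCompletionCallback(
        [](InferenceEngine::InferRequest req, InferenceEngine::StatusCode status) {
            if (status == InferenceEngine::StatusCode::OK) {
                std::cout << "inference finished\n";
            }
        });
    request.StartAsync();  // returns immediately; the lambda fires later
}
```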

◆ SetInput()

void InferenceEngine::InferRequest::SetInput ( const BlobMap &  inputs)

Sets input data to infer.

Note
Memory allocation doesn't happen
Parameters
inputs   A reference to a map of input blobs accessed by input names. The type of Blob must correspond to the network input precision and size.

◆ SetOutput()

void InferenceEngine::InferRequest::SetOutput ( const BlobMap &  results)

Sets data that will contain result of the inference.

Note
Memory allocation doesn't happen
Parameters
results   A reference to a map of result blobs accessed by output names. The type of Blob must correspond to the network output precision and size.

◆ StartAsync()

void InferenceEngine::InferRequest::StartAsync ( )

Start inference of specified input(s) in asynchronous mode.

Note
It returns immediately. Inference also starts immediately.

◆ Wait()

StatusCode InferenceEngine::InferRequest::Wait ( int64_t  millis_timeout = RESULT_READY)

Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.

Parameters
millis_timeout   Maximum duration in milliseconds to block for
Note
There are special cases when millis_timeout is equal some value of the WaitMode enum:
  • STATUS_ONLY - immediately returns inference status (IInferRequest::RequestStatus). It does not block or interrupt current thread
  • RESULT_READY - waits until inference result becomes available
Returns
A status code of operation
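The two special WaitMode values can be combined in an asynchronous loop, sketched here (assumes inputs are already set on `request`):

```cpp
// Start inference asynchronously, poll once without blocking,
// then block until the result is available.
void AsyncWait(InferenceEngine::InferRequest& request) {
    request.StartAsync();

    // Non-blocking status check: returns immediately.
    auto status = request.Wait(
        InferenceEngine::InferRequest::WaitMode::STATUS_ONLY);
    if (status == InferenceEngine::StatusCode::RESULT_NOT_READY) {
        // ... do other useful work while inference runs ...
    }

    // Block until the inference result becomes available.
    request.Wait(InferenceEngine::InferRequest::WaitMode::RESULT_READY);
}
```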

The documentation for this class was generated from the following file:

  • ie_infer_request.hpp