InferenceEngine::InferRequest Class Reference

#include <ie_infer_request.hpp>

Public Types

using  Ptr = std::shared_ptr< InferRequest >
  A smart pointer to the InferRequest object.
 

Public Member Functions

  InferRequest ()=default
  Default constructor.
 
  ~InferRequest ()
  Destructor.
 
void  SetBlob (const std::string &name, const Blob::Ptr &data)
  Sets input/output data to infer.
 
Blob::Ptr  GetBlob (const std::string &name)
 
void  SetBlob (const std::string &name, const Blob::Ptr &data, const PreProcessInfo &info)
  Sets pre-process for input data.
 
const PreProcessInfo &  GetPreProcess (const std::string &name) const
  Gets pre-process for input data.
 
void  Infer ()
 
std::map< std::string, InferenceEngineProfileInfo >  GetPerformanceCounts () const
 
void  SetInput (const BlobMap &inputs)
  Sets input data to infer.
 
void  SetOutput (const BlobMap &results)
  Sets data that will contain result of the inference.
 
void  SetBatch (const int batch)
  Sets new batch size when dynamic batching is enabled in the executable network that created this request.
 
  InferRequest (IInferRequest::Ptr request, InferenceEnginePluginPtr plg={})
 
void  StartAsync ()
  Starts inference of specified input(s) in asynchronous mode.
 
StatusCode  Wait (int64_t millis_timeout)
 
template<class T >
void  SetCompletionCallback (const T &callbackToSet)
 
  operator IInferRequest::Ptr & ()
  IInferRequest pointer to be used directly in CreateInferRequest functions.
 
bool  operator! () const noexcept
  Checks if current InferRequest object is not initialized.
 
  operator bool () const noexcept
  Checks if current InferRequest object is initialized.
 

Detailed Description

This is an interface of asynchronous infer request.

Wraps IInferRequest. It can throw exceptions safely for the application, where they are properly handled.
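For orientation, a minimal synchronous flow might look like the sketch below. The model path, device name, and the tensor names "input"/"output" are illustrative assumptions, not part of this reference:

```cpp
#include <inference_engine.hpp>

int main() {
    using namespace InferenceEngine;

    Core core;
    // Placeholders: model path and device name depend on your setup.
    CNNNetwork network = core.ReadNetwork("model.xml");
    ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

    // Each request holds its own set of input/output blobs.
    InferRequest request = executable.CreateInferRequest();

    // "input"/"output" stand in for names obtained from
    // network.getInputsInfo() and network.getOutputsInfo().
    Blob::Ptr input = request.GetBlob("input");
    float *data = input->buffer().as<float *>();
    (void)data; // ... fill the input tensor here ...

    request.Infer();                              // blocks until done
    Blob::Ptr output = request.GetBlob("output"); // read results
    return 0;
}
```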

Constructor & Destructor Documentation

§ InferRequest()

InferenceEngine::InferRequest::InferRequest ( IInferRequest::Ptr  request,
InferenceEnginePluginPtr  plg = {} 
)
inline explicit

Constructs InferRequest from the initialized shared pointer.

Parameters
request Initialized shared pointer
plg Plugin to use

Member Function Documentation

§ GetBlob()

Blob::Ptr InferenceEngine::InferRequest::GetBlob ( const std::string &  name )
inline

Gets input/output data for inference.

Wraps IInferRequest::GetBlob

§ GetPerformanceCounts()

std::map<std::string, InferenceEngineProfileInfo> InferenceEngine::InferRequest::GetPerformanceCounts ( ) const
inline

Queries performance measures per layer to identify the most time-consuming layers.

Wraps IInferRequest::GetPerformanceCounts
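A sketch of reading the counters. It assumes inference has already run on `request`, and that the network was loaded with the performance-counting configuration key enabled:

```cpp
// Per-layer timing report; "request" is an InferRequest after Infer().
std::map<std::string, InferenceEngine::InferenceEngineProfileInfo> counts =
    request.GetPerformanceCounts();
for (const auto &entry : counts) {
    const InferenceEngine::InferenceEngineProfileInfo &info = entry.second;
    // Skip layers that were optimized out or not executed.
    if (info.status == InferenceEngine::InferenceEngineProfileInfo::EXECUTED) {
        std::cout << entry.first << ": " << info.realTime_uSec << " us\n";
    }
}
```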

§ GetPreProcess()

const PreProcessInfo& InferenceEngine::InferRequest::GetPreProcess ( const std::string &  name ) const
inline

Gets pre-process for input data.

Parameters
name Name of input blob.
Returns
constant reference to the pre-process info of the blob with the given name

§ Infer()

void InferenceEngine::InferRequest::Infer ( )
inline

Infers specified input(s) in synchronous mode.

Wraps IInferRequest::Infer

§ operator bool()

InferenceEngine::InferRequest::operator bool ( ) const
inline explicit noexcept

Checks if current InferRequest object is initialized.

Returns
true if the current InferRequest object is initialized, false otherwise

§ operator!()

bool InferenceEngine::InferRequest::operator! ( ) const
inline noexcept

Checks if current InferRequest object is not initialized.

Returns
true if the current InferRequest object is not initialized, false otherwise

§ SetBatch()

void InferenceEngine::InferRequest::SetBatch ( const int  batch )
inline

Sets new batch size when dynamic batching is enabled in executable network that created this request.

Parameters
batch new batch size to be used by all the following inference calls for this request.
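A sketch, assuming the executable network that created `request` was loaded with dynamic batching enabled (e.g. via the `KEY_DYN_BATCH_ENABLED` configuration key) and a maximum batch size baked into the network:

```cpp
// Process fewer items than the network's maximum batch size.
request.SetBatch(4);   // the next inference runs on 4 items only
request.Infer();

request.SetBatch(1);   // subsequent calls process a single item
request.Infer();
```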

§ SetBlob() [1/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data 
)
inline

Sets input/output data to infer.

Note: Memory allocation does not happen.
Parameters
name Name of input or output blob.
data Reference to input or output blob. The type of a blob must match the network input precision and size.
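Since SetBlob does not allocate memory, the caller typically creates and allocates the blob first. A sketch, where the dimensions, precision, and the name "input" are placeholders:

```cpp
using namespace InferenceEngine;

// Describe and allocate an FP32 NCHW tensor; SetBlob will not allocate.
TensorDesc desc(Precision::FP32, {1, 3, 224, 224}, Layout::NCHW);
Blob::Ptr input = make_shared_blob<float>(desc);
input->allocate();

// The name must match an input (or output) name of the network.
request.SetBlob("input", input);
```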

§ SetBlob() [2/2]

void InferenceEngine::InferRequest::SetBlob ( const std::string &  name,
const Blob::Ptr &  data,
const PreProcessInfo &  info 
)
inline

Sets pre-process for input data.

Note: Will return an error if the data blob is an output.
Parameters
name Name of input blob.
data Reference to input blob. The type of the blob must match the network input precision and size.
info Preprocess info for blob.

§ SetCompletionCallback()

template<class T >
void InferenceEngine::InferRequest::SetCompletionCallback ( const T &  callbackToSet )
inline

Sets a callback function that will be called on success or failure of asynchronous request.

Wraps IInferRequest::SetCompletionCallback

Parameters
callbackToSet Lambda callback object which will be called on processing finish.
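A sketch of the asynchronous pattern with a callback. The lambda body is illustrative; depending on the Inference Engine version, the callback may also accept the request and a StatusCode as parameters:

```cpp
#include <atomic>

std::atomic<bool> done{false};

// Invoked once the asynchronous request succeeds or fails.
request.SetCompletionCallback([&done]() {
    done = true;
});

request.StartAsync();   // returns immediately
// ... overlap other work here; poll or wait on "done" ...
```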

§ SetInput()

void InferenceEngine::InferRequest::SetInput ( const BlobMap &  inputs )
inline

Sets input data to infer.

Note: Memory allocation does not happen.
Parameters
inputs Reference to a map of input blobs accessed by input names. The type of Blob must match the network input precision and size.

§ SetOutput()

void InferenceEngine::InferRequest::SetOutput ( const BlobMap &  results )
inline

Sets data that will contain result of the inference.

Note: Memory allocation does not happen.
Parameters
results Reference to a map of result blobs accessed by output names. The type of Blob must match the network output precision and size.

§ StartAsync()

void InferenceEngine::InferRequest::StartAsync ( )
inline

Starts inference of specified input(s) in asynchronous mode.

Note: It returns immediately; inference also starts immediately.

§ Wait()

StatusCode InferenceEngine::InferRequest::Wait ( int64_t  millis_timeout )
inline

Waits for the result to become available. Blocks until specified millis_timeout has elapsed or the result becomes available, whichever comes first.

Wraps IInferRequest::Wait
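Besides a timeout in milliseconds, special `IInferRequest::WaitMode` values may be passed. A sketch of the common blocking pattern; the output name is a placeholder:

```cpp
using namespace InferenceEngine;

request.StartAsync();

// RESULT_READY blocks until the result is available;
// STATUS_ONLY returns the current status without blocking.
StatusCode status = request.Wait(IInferRequest::WaitMode::RESULT_READY);
if (status == StatusCode::OK) {
    Blob::Ptr output = request.GetBlob("output");
}
```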


The documentation for this class was generated from the following file: ie_infer_request.hpp