InferenceEngine Namespace Reference

Inference Engine Plugin API namespace. More...

Namespaces

 details
 A namespace with non-public Inference Engine Plugin API.
 
 PluginConfigInternalParams
 A namespace with internal plugin configuration keys.
 
 PrecisionUtils
 Namespace for precision utilities.
 

Data Structures

class  InferRequest
 
class  BatchedBlob
 
class  Blob
 
class  BlockingDesc
 
class  CNNNetwork
 
class  CompoundBlob
 
class  Core
 
class  Data
 
struct  DataConfig
 
struct  Exception
 
class  ExecutableNetwork
 
class  Extension
 
class  I420Blob
 
interface  IAllocator
 
interface  ICNNNetwork
 
class  IExecutableNetwork
 
class  IExtension
 
class  IInferRequest
 
interface  ILayerExecImpl
 
interface  ILayerImpl
 
struct  InferenceEngineProfileInfo
 
class  InputInfo
 
interface  IVariableState
 
struct  LayerConfig
 
class  LockedMemory
 
class  LockedMemory< const T >
 
class  LockedMemory< void >
 
class  MemoryBlob
 
class  NV12Blob
 
class  Parameter
 
class  Precision
 
struct  PrecisionTrait
 
struct  PreProcessChannel
 
class  PreProcessInfo
 
struct  QueryNetworkResult
 
class  RemoteBlob
 
class  RemoteContext
 
struct  ResponseDesc
 
struct  ROI
 
class  TBlob
 
class  TensorDesc
 
union  UserValue
 
class  VariableState
 
struct  Version
 
class  ExecutableNetworkThreadSafeDefault
 This class provides an optimal thread-safe default implementation. It is recommended as a base class for Executable Network implementations during plugin development. More...
 
class  AsyncInferRequestThreadSafeDefault
 Base class with a default implementation of an asynchronous multi-staged inference request. To customize pipeline stages, a derived class should change the content of the AsyncInferRequestThreadSafeDefault::_pipeline member container, which consists of pairs of tasks and the executors that will run them. The class is recommended as a base class for plugins' asynchronous inference request implementations. More...
 
interface  IExecutableNetworkInternal
 An internal API of an executable network, to be implemented by a plugin. More...
 
interface  IInferRequestInternal
 An internal API of a synchronous inference request, to be implemented by a plugin and used in the InferRequestBase forwarding mechanism. More...
 
interface  IInferencePlugin
 The plugin API to be implemented by a plugin. More...
 
interface  IVariableStateInternal
 Minimal interface for variable state implementation. More...
 
struct  DescriptionBuffer
 A description buffer wrapping StatusCode and ResponseDesc. More...
 
interface  ICore
 Minimal ICore interface that allows a plugin to get information from the Core Inference Engine class. More...
 
class  CPUStreamsExecutor
 CPU Streams executor implementation. The executor splits the CPU into groups of threads (streams) that can be pinned to cores or NUMA nodes. It uses custom threads to pull tasks from a single queue. More...
 
class  ExecutorManager
 This is a global point for getting task executor objects by string id. Unique executors are necessary when multiple asynchronous requests are in flight, to avoid oversubscription. E.g., suppose there are two task executors for the CPU device: one in the FPGA plugin and another in the MKLDNN plugin. Running both of them in parallel leads to suboptimal CPU usage; it is more efficient to run the corresponding tasks one by one via a single executor. More...
 
class  ImmediateExecutor
 Task executor implementation that simply runs tasks in the current thread when the run() method is called. More...
 
interface  IStreamsExecutor
 Interface for Streams Task Executor. This executor groups worker threads into so-called streams. More...
 
interface  ITaskExecutor
 Interface for Task Executor. Inference Engine uses InferenceEngine::ITaskExecutor interface to run all asynchronous internal tasks. Different implementations of task executors can be used for different purposes: More...
 

Typedefs

typedef VariableState MemoryState
 
typedef void * gpu_handle_param
 
typedef std::map< std::string, Blob::Ptr > BlobMap
 
typedef std::vector< size_t > SizeVector
 
typedef std::shared_ptr< Data > DataPtr
 
typedef std::shared_ptr< const Data > CDataPtr
 
typedef std::weak_ptr< Data > DataWeakPtr
 
typedef std::map< std::string, CDataPtr > ConstOutputsDataMap
 
typedef std::map< std::string, DataPtr > OutputsDataMap
 
typedef std::shared_ptr< IExtension > IExtensionPtr
 
typedef IVariableState IMemoryState
 
typedef std::map< std::string, InputInfo::Ptr > InputsDataMap
 
typedef std::map< std::string, InputInfo::CPtr > ConstInputsDataMap
 
typedef std::map< std::string, Parameter > ParamMap
 
using SoExecutableNetworkInternal = details::SOPointer< IExecutableNetworkInternal >
 SOPointer to IExecutableNetworkInternal.
 
using SoIInferRequestInternal = details::SOPointer< IInferRequestInternal >
 SOPointer to IInferRequestInternal.
 
using IMemoryStateInternal = IVariableStateInternal
 For compatibility reasons.
 
using SoIVariableStateInternal = details::SOPointer< IVariableStateInternal >
 SOPointer to IVariableStateInternal.
 
using MemoryStateInternal = IVariableStateInternal
 For compatibility reasons.
 
using ie_fp16 = short
 A type definition for the FP16 data type. Defined as a signed short.
 
using Task = std::function< void()>
 The Inference Engine Task Executor can use any copyable callable that takes no parameters and returns nothing as a task. It is wrapped into an std::function object.
 
template<typename T >
using ThreadLocal = tbb::enumerable_thread_specific< T >
 A wrapper class that keeps an object thread-local. More...
 

Enumerations

enum  LockOp
 
enum  Layout
 
enum  ColorFormat
 
enum  StatusCode
 
enum  MeanVariant
 
enum  ResizeAlgorithm
 

Functions

std::shared_ptr< InferenceEngine::IAllocator > CreateDefaultAllocator () noexcept
 
std::shared_ptr< T > as (const Blob::Ptr &blob) noexcept
 
std::shared_ptr< const T > as (const Blob::CPtr &blob) noexcept
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc)
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, Type *ptr, size_t size=0)
 
InferenceEngine::TBlob< Type >::Ptr make_shared_blob (const TensorDesc &tensorDesc, const std::shared_ptr< InferenceEngine::IAllocator > &alloc)
 
InferenceEngine::TBlob< TypeTo >::Ptr make_shared_blob (const TBlob< TypeTo > &arg)
 
std::shared_ptr< T > make_shared_blob (Args &&... args)
 
Blob::Ptr make_shared_blob (const Blob::Ptr &inputBlob, const ROI &roi)
 
std::ostream & operator<< (std::ostream &out, const Layout &p)
 
std::ostream & operator<< (std::ostream &out, const ColorFormat &fmt)
 
std::shared_ptr< T > make_so_pointer (const std::string &name)
 
void CreateExtensionShared (IExtensionPtr &ext)
 
StatusCode CreateExtension (IExtension *&ext, ResponseDesc *resp) noexcept
 
TensorDesc make_roi_desc (const TensorDesc &origDesc, const ROI &roi, bool useOrigMemDesc)
 
RemoteBlob::Ptr make_shared_blob (const TensorDesc &desc, RemoteContext::Ptr ctx)
 
void LowLatency (InferenceEngine::CNNNetwork &network)
 
void lowLatency2 (InferenceEngine::CNNNetwork &network, bool use_const_initializer=true)
 
std::string fileNameToString (const file_name_t &str)
 
file_name_t stringToFileName (const std::string &str)
 
const Version * GetInferenceEngineVersion () noexcept
 
void blob_copy (Blob::Ptr src, Blob::Ptr dst)
 Copies data, taking into account layout and precision parameters. More...
 
PreProcessInfo copyPreProcess (const PreProcessInfo &from)
 Copies preprocess info. More...
 
template<typename T >
std::map< std::string, std::shared_ptr< const T > > constMapCast (const std::map< std::string, std::shared_ptr< T >> &map)
 Copies the values of a std::string-indexed map and applies a const cast. More...
 
template<typename T >
std::map< std::string, std::shared_ptr< T > > constMapCast (const std::map< std::string, std::shared_ptr< const T >> &map)
 Copies the values of a std::string-indexed map and applies a const cast. More...
 
InputsDataMap copyInfo (const InputsDataMap &networkInputs)
 Copies InputInfo. More...
 
OutputsDataMap copyInfo (const OutputsDataMap &networkOutputs)
 Copies OutputsData. More...
 
std::string getIELibraryPath ()
 Returns a path to Inference Engine library. More...
 
inline ::FileUtils::FilePath getInferenceEngineLibraryPath ()
 
bool checkOpenMpEnvVars (bool includeOMPNumThreads=true)
 Checks whether OpenMP environment variables are defined. More...
 
std::vector< int > getAvailableNUMANodes ()
 Returns available CPU NUMA nodes (on Linux, and on Windows only with TBB; a single node is assumed on all other OSes). More...
 
std::vector< int > getAvailableCoresTypes ()
 Returns available CPU core types (on Linux and Windows, and only with TBB); a single core type is assumed otherwise. More...
 
int getNumberOfCPUCores (bool bigCoresOnly=false)
 Returns the number of physical CPU cores on Linux/Windows (which is considered more performance-friendly for servers); on other OSes it simply relies on the parallel API of choice, which usually uses the logical cores. Call the function with 'false' to get the number of physical cores of all types; call it with 'true' to get the number of physical 'Big' cores. The number of 'Little' cores is 'all' - 'Big'. More...
 
bool with_cpu_x86_sse42 ()
 Checks whether CPU supports SSE 4.2 capability. More...
 
bool with_cpu_x86_avx ()
 Checks whether CPU supports AVX capability. More...
 
bool with_cpu_x86_avx2 ()
 Checks whether CPU supports AVX2 capability. More...
 
bool with_cpu_x86_avx512f ()
 Checks whether CPU supports the AVX-512 Foundation capability. More...
 
bool with_cpu_x86_avx512_core ()
 Checks whether CPU supports the AVX-512 Core capability. More...
 
bool with_cpu_x86_bfloat16 ()
 Checks whether CPU supports BFloat16 capability. More...
 

Variables

 LOCK_FOR_READ
 
 LOCK_FOR_WRITE
 
 ANY
 
 NCHW
 
 NHWC
 
 NCDHW
 
 NDHWC
 
 OIHW
 
 GOIHW
 
 OIDHW
 
 GOIDHW
 
 SCALAR
 
 C
 
 CHW
 
 HWC
 
 HW
 
 NC
 
 CN
 
 BLOCKED
 
 RAW
 
 RGB
 
 BGR
 
 RGBX
 
 BGRX
 
 NV12
 
 I420
 
 MEAN_IMAGE
 
 MEAN_VALUE
 
 NONE
 
static constexpr auto KEY_AUTO_DEVICE_LIST
 
static constexpr auto HDDL_GRAPH_TAG
 
static constexpr auto HDDL_STREAM_ID
 
static constexpr auto HDDL_DEVICE_TAG
 
static constexpr auto HDDL_BIND_DEVICE
 
static constexpr auto HDDL_RUNTIME_PRIORITY
 
static constexpr auto HDDL_USE_SGAD
 
static constexpr auto HDDL_GROUP_DEVICE
 
static constexpr auto MYRIAD_ENABLE_FORCE_RESET
 
static constexpr auto MYRIAD_DDR_TYPE
 
static constexpr auto MYRIAD_DDR_AUTO
 
static constexpr auto MYRIAD_PROTOCOL
 
static constexpr auto MYRIAD_PCIE
 
static constexpr auto MYRIAD_THROUGHPUT_STREAMS
 
static constexpr auto MYRIAD_ENABLE_HW_ACCELERATION
 
static constexpr auto MYRIAD_ENABLE_RECEIVING_TENSOR_TIME
 
static constexpr auto MYRIAD_CUSTOM_LAYERS
 

Detailed Description

Inference Engine Plugin API namespace.

Function Documentation

◆ constMapCast() [1/2]

template<typename T >
std::map<std::string, std::shared_ptr<T> > InferenceEngine::constMapCast ( const std::map< std::string, std::shared_ptr< const T >> &  map)

Copies the values of a std::string-indexed map and applies a const cast.

Parameters
[in]  map  map to copy
Returns
map that contains pointers to values

◆ constMapCast() [2/2]

template<typename T >
std::map<std::string, std::shared_ptr<const T> > InferenceEngine::constMapCast ( const std::map< std::string, std::shared_ptr< T >> &  map)

Copies the values of a std::string-indexed map and applies a const cast.

Parameters
[in]  map  map to copy
Returns
map that contains pointers to constant values

◆ copyInfo() [1/2]

InputsDataMap InferenceEngine::copyInfo ( const InputsDataMap networkInputs)

Copies InputInfo.

Parameters
[in]  networkInputs  The network inputs to copy from
Returns
copy of network inputs

◆ copyInfo() [2/2]

OutputsDataMap InferenceEngine::copyInfo ( const OutputsDataMap networkOutputs)

Copies OutputsData.

Parameters
[in]  networkOutputs  The network outputs to copy from
Returns
copy of network outputs

◆ copyPreProcess()

PreProcessInfo InferenceEngine::copyPreProcess ( const PreProcessInfo from)

Copies preprocess info.

Parameters
[in]  from  PreProcessInfo to copy from
Returns
copy of preprocess info