If the application uses the Inference Engine together with third-party components that depend on Intel OpenMP, the libiomp library may be loaded multiple times, causing OpenMP runtime initialization conflicts. This may happen, for example, if the application uses Intel® Math Kernel Library (Intel® MKL) through the “Single Dynamic Library” (libmkl_rt.so) mechanism and calls Intel MKL after loading the Inference Engine plugin. The conflict is reported as an OpenMP runtime initialization error in the log.
To avoid the conflict, preload the OpenMP runtime using the LD_PRELOAD environment variable. This eliminates multiple loadings of libiomp and makes all components use this specific version of OpenMP. Alternatively, set KMP_DUPLICATE_LIB_OK=TRUE. However, performance degradation or incorrect results may occur in this case.
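Both workarounds can be sketched as shell commands, assuming a Linux shell; the library path and application name below are placeholders, not taken from the original text:

```shell
# Workaround 1: preload one copy of the Intel OpenMP runtime so that every
# component binds to it (path and binary name are placeholders):
#
#   LD_PRELOAD=/opt/intel/lib/intel64/libiomp5.so ./your_application
#
# Workaround 2: tell the runtime to tolerate duplicate loadings; performance
# degradation or incorrect results may occur:
KMP_DUPLICATE_LIB_OK=TRUE
export KMP_DUPLICATE_LIB_OK
```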
With the Python protobuf library version 3.5.1, an incompatibility can occur; the known case is CentOS 7.4. The incompatibility is reported as an error in the log.
A possible workaround is to upgrade the default protobuf compiler (libprotoc 2.5.0) to a newer version, for example, libprotoc 2.6.1.
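One way to check whether the installed compiler is affected; the version strings are illustrative (2.5.0 stands in for the `protoc --version` output on a stock CentOS 7.4), and `sort -V` performs a version-aware comparison:

```shell
have="2.5.0"   # installed libprotoc version (e.g. from: protoc --version)
need="2.6.1"   # known-good version
# sort -V orders version strings numerically; if the oldest of the two is
# not $need, the installed compiler is older and should be upgraded.
if [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" != "$need" ]; then
    echo "libprotoc $have is older than $need; upgrade protobuf"
fi
```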
Refer to the Limitations section of the Dynamic Batching page.
Refer to the Limitations section of the Static Shape Infer page.
As described in the documentation for the new API, you can set an image blob of any size to an infer request using a resizable input; resizing is executed during inference using the configured resize algorithm. However, the resize algorithms are not yet fully optimized, so expect performance degradation if a resizable input is specified and an input blob to be resized is set (that is, SetBlob() is used). The required performance is met for the CPU plugin only, because the enabled OpenMP* runtime provides parallelism.
Another limitation is that the resize algorithms currently support the NCHW layout only. If you set the NHWC layout for an input blob, the data is converted from NHWC to NCHW before resizing and back to NHWC afterwards.
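To illustrate what that round trip involves, here is a minimal, self-contained sketch of the NHWC-to-NCHW reordering as plain index remapping; this is not Inference Engine code, and the function and parameter names are our own:

```cpp
#include <cstddef>
#include <vector>

// Reorder an NHWC-laid-out tensor into NCHW layout. In NHWC, channel values
// of one pixel are adjacent; in NCHW, each channel forms a contiguous plane.
// This copy (done twice: before and after resizing) is the overhead implied
// by the NCHW-only resize limitation.
std::vector<float> nhwcToNchw(const std::vector<float>& src,
                              std::size_t n, std::size_t h,
                              std::size_t w, std::size_t c) {
    std::vector<float> dst(src.size());
    for (std::size_t in = 0; in < n; ++in)
        for (std::size_t ih = 0; ih < h; ++ih)
            for (std::size_t iw = 0; iw < w; ++iw)
                for (std::size_t ic = 0; ic < c; ++ic)
                    // NCHW index = ((n*C + c)*H + h)*W + w
                    // NHWC index = ((n*H + h)*W + w)*C + c
                    dst[((in * c + ic) * h + ih) * w + iw] =
                        src[((in * h + ih) * w + iw) * c + ic];
    return dst;
}
```

For example, a 1x1x2x2 (NHWC) blob {0, 1, 2, 3} becomes {0, 2, 1, 3} in NCHW: the per-pixel channel pairs are split into two channel planes.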