TensorFlow* provides only prebuilt binaries with AVX instructions enabled. When you configure the Model Optimizer by running the install_prerequisites_tf scripts, they download only these AVX-enabled binaries, which are not supported on hardware such as the Intel® Pentium® processor N4200/5, N3350/5, or N3450/5 (formerly known as Apollo Lake).
To run the Model Optimizer on this hardware, compile the TensorFlow binaries from source as described on the TensorFlow website.
Another option is to run the Model Optimizer on hardware that supports AVX to generate an IR, and then perform inference on the hardware without AVX.
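To decide which option applies, you can check whether the current CPU supports AVX. On Linux, the supported instruction-set extensions are listed in the flags field of /proc/cpuinfo; a minimal check looks like this:

```shell
# Check whether this CPU advertises AVX support (Linux only:
# reads the "flags" field of /proc/cpuinfo).
if grep -q avx /proc/cpuinfo; then
    echo "AVX supported"
else
    echo "No AVX"
fi
```

If the check reports "No AVX", either build TensorFlow from source on this machine or generate the IR on an AVX-capable machine and copy it over for inference.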
If the application uses the Inference Engine with third-party components that depend on Intel OpenMP, multiple loadings of the libiomp library may occur and cause OpenMP runtime initialization conflicts. This may happen, for example, if the application uses Intel® Math Kernel Library (Intel® MKL) through the "Single Dynamic Library" (libmkl_rt.so) mechanism and calls Intel MKL after loading the Inference Engine plugin. The error log looks as follows:
Possible workarounds:
Preload the OpenMP runtime using the LD_PRELOAD variable. This eliminates multiple loadings of libiomp and makes all the components use this specific version of OpenMP.
Alternatively, set the environment variable KMP_DUPLICATE_LIB_OK=TRUE. However, performance degradation or incorrect results may occur in this case.
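The two workarounds above can be applied from the shell as sketched below. The library path and application name are placeholders: substitute the actual location of libiomp5.so in your installation and your own executable.

```shell
# Workaround 1: preload the OpenMP runtime so that every component
# resolves OpenMP symbols from this single copy of libiomp5.
# <path_to_libiomp5.so> and <your_application> are placeholders.
LD_PRELOAD=<path_to_libiomp5.so> <your_application>

# Workaround 2: allow duplicate OpenMP runtimes to coexist.
# May cause performance degradation or incorrect results.
KMP_DUPLICATE_LIB_OK=TRUE <your_application>
```

Preloading is the safer of the two, since KMP_DUPLICATE_LIB_OK merely suppresses the runtime's duplicate-library check rather than removing the duplication.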
With the Python protobuf library version 3.5.1, the following incompatibility can occur. A known case is CentOS 7.4.
The error log looks as follows:
A possible workaround is to upgrade the default protobuf compiler (libprotoc 2.5.0) to a newer version, for example libprotoc 2.6.1.
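One way to perform the upgrade is to build the newer protobuf compiler from source, since the CentOS 7 system packages ship the older libprotoc 2.5.0. The steps below are a sketch; the download URL and version are illustrative, so check the protobuf releases page for the release you actually need.

```shell
# Sketch: build and install protoc 2.6.1 from source on CentOS 7.
# The release URL below is illustrative; verify it against the
# protobuf project's releases page before use.
curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v2.6.1/protobuf-2.6.1.tar.gz
tar xzf protobuf-2.6.1.tar.gz
cd protobuf-2.6.1
./configure
make
sudo make install
protoc --version    # expect: libprotoc 2.6.1
```

After installing, rerun the failing Model Optimizer step so that the Python protobuf library is regenerated or re-resolved against the newer compiler.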