The Style Transfer C++ sample application demonstrates how to use the following Inference Engine C++ API features in applications:
| Feature | Description |
|:--- |:--- |
| Inference Engine Version | Get the Inference Engine API version |
| Available Devices | Get version information of the devices available for inference |
| Custom Extension Kernels | Load an extension library and config to the device |
| Network Operations | Manage the network and operate with its batch size; the batch size is set from the number of input images |
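As a rough sketch (not the sample's exact code) of how these features map to the Inference Engine C++ API; the extension library path, config key/value pair, device name, and model path below are placeholders:

```cpp
#include <inference_engine.hpp>
#include <iostream>
#include <memory>

int main() {
    // Inference Engine Version: query the API version
    const InferenceEngine::Version* version = InferenceEngine::GetInferenceEngineVersion();
    std::cout << "Inference Engine: " << version->description << std::endl;

    InferenceEngine::Core core;

    // Available Devices: list the devices visible to the Inference Engine
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << "Device: " << device << std::endl;
    }

    // Custom Extension Kernels: load an extension library and a device config
    // ("libcustom_extension.so", the config pair, and "GPU" are placeholders)
    core.AddExtension(std::make_shared<InferenceEngine::Extension>("libcustom_extension.so"));
    core.SetConfig({{"CONFIG_KEY", "CONFIG_VALUE"}}, "GPU");

    // Network Operations: set the batch size to the number of input images
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    network.setBatchSize(2);  // e.g. two input images were passed on the command line

    return 0;
}
```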
The basic Inference Engine API is covered by the Hello Classification C++ sample.
| Options | Values |
|:--- |:--- |
| Model Format | Inference Engine Intermediate Representation (*.xml + *.bin), ONNX (*.onnx) |
| Validated images | The sample uses OpenCV* to read input images (*.bmp, *.png) |
| Other language realization | Python |
Upon start-up, the sample application reads command-line parameters and loads the specified network and image(s) to the Inference Engine plugin. Then the sample creates a synchronous inference request object. When inference is done, the application creates output image(s), logging each step in a standard output stream.
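A minimal sketch of that flow with the Inference Engine C++ API (the model path, device name, and input handling below are simplified placeholders, not the sample's actual code):

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;

    // Read the network (IR or ONNX) and compile it for a device
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");

    // Create a synchronous inference request
    InferenceEngine::InferRequest request = executable.CreateInferRequest();

    // The sample copies the input image(s) into the input blob here
    InferenceEngine::Blob::Ptr input = request.GetBlob(network.getInputsInfo().begin()->first);

    // Infer() blocks until the result is ready
    request.Infer();

    // The output blob holds the stylized image data used to create the output image(s)
    InferenceEngine::Blob::Ptr output = request.GetBlob(network.getOutputsInfo().begin()->first);
    return 0;
}
```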
You can see the explicit description of each sample step in the Integration Steps section of the "Integrate the Inference Engine with Your Application" guide.
To build the sample, use the instructions available in the Build the Sample Applications section of the Inference Engine Samples guide.
To run the sample, you need to specify a model and an image:
Running the application with the `-h` option yields the following usage message:
Running the application with an empty list of options yields the usage message given above and an error message.
- By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application (see the OpenCV snippet after these notes) or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
- Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
- The sample accepts models in ONNX format (*.onnx) that do not require preprocessing.
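As noted above, one way to do the manual channel rearrangement in the application is with OpenCV, which the sample already uses for reading images. This is an illustrative snippet, not part of the sample's code:

```cpp
#include <opencv2/opencv.hpp>
#include <string>

// OpenCV reads images in BGR order; convert to RGB when the model was trained on RGB input.
cv::Mat readAsRgb(const std::string& path) {
    cv::Mat image = cv::imread(path);               // BGR by default
    cv::cvtColor(image, image, cv::COLOR_BGR2RGB);  // rearrange channels to RGB
    return image;
}
```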
The `fast-neural-style-mosaic-onnx` model does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the Inference Engine IR or ONNX format, you can convert it using the model converter script:
Run inference with the `fast-neural-style-mosaic-onnx` model on a GPU, for example:
The sample application logs each step in a standard output stream and creates an image (`out1.bmp`) or a sequence of images (`out<N>.bmp`), which are redrawn in the style of the style transfer model used by the sample.
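For illustration only (the sample's actual post-processing may differ), an output blob could be turned into a BGR image and saved with OpenCV roughly like this; the NCHW float layout and the 0-255 value range are assumptions:

```cpp
#include <inference_engine.hpp>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <string>

// Convert one NCHW float image from an output blob to a BGR cv::Mat and save it as out<N>.bmp.
void saveOutputImage(const InferenceEngine::Blob::Ptr& blob, size_t imageIndex) {
    auto mblob = InferenceEngine::as<InferenceEngine::MemoryBlob>(blob);
    auto holder = mblob->rmap();
    const float* data = holder.as<const float*>();

    const auto& dims = blob->getTensorDesc().getDims();  // {N, C, H, W}
    const size_t channels = dims[1], height = dims[2], width = dims[3];
    const float* image = data + imageIndex * channels * height * width;

    cv::Mat out(static_cast<int>(height), static_cast<int>(width), CV_8UC3);
    for (size_t h = 0; h < height; ++h) {
        for (size_t w = 0; w < width; ++w) {
            for (size_t c = 0; c < channels; ++c) {
                // Clamp the planar float value to [0, 255] and store it interleaved
                float value = image[c * height * width + h * width + w];
                out.at<cv::Vec3b>(static_cast<int>(h), static_cast<int>(w))[c] =
                    static_cast<unsigned char>(std::min(std::max(value, 0.0f), 255.0f));
            }
        }
    }
    cv::imwrite("out" + std::to_string(imageIndex + 1) + ".bmp", out);
}
```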