This topic demonstrates how to run the Object Detection demo application, which performs inference using object detection networks, such as Faster R-CNN, on Intel® Processors and Intel® HD Graphics.
VGG16-Faster-RCNN is a public CNN that can be easily obtained from GitHub:
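For example, a download sketch (the repository URL and archive name are placeholders, not taken from this document):

```sh
# Substitute the actual GitHub location of the VGG16-Faster-RCNN model files.
wget <model_archive_url> -O faster_rcnn_models.tgz
tar -xzf faster_rcnn_models.tgz
```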
To convert the source model, run the Model Optimizer.
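A representative invocation is sketched below; the Model Optimizer script name and the model and prototxt paths are assumptions for a typical Caffe-based Faster R-CNN download, so adjust them to your installation:

```sh
# Convert the Caffe model to the Inference Engine format (*.xml + *.bin).
python3 mo_caffe.py \
    --input_model <path_to_model>/VGG16_faster_rcnn_final.caffemodel \
    --input_proto <path_to_model>/test.prototxt
```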
For documentation on how to convert Caffe models, refer to Converting a Caffe Model.
Upon start-up, the demo application reads command-line parameters and loads a network and an image to the Inference Engine plugin. When inference is done, the application creates an output image and writes data to the standard output stream.
NOTE: By default, Inference Engine samples and demos expect input with BGR channel order. If you trained your model to work with RGB order, you need to manually rearrange the default channel order in the sample or demo application, or reconvert your model using the Model Optimizer tool with the --reverse_input_channels argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
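For illustration, a reconversion sketch with the channel reversal flag added (the script name, file names, and paths are assumptions, as above):

```sh
# Reconvert the model so the sample's BGR input matches an RGB-trained model.
python3 mo_caffe.py \
    --input_model <path_to_model>/VGG16_faster_rcnn_final.caffemodel \
    --input_proto <path_to_model>/test.prototxt \
    --reverse_input_channels
```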
Running the application with the -h option yields the following usage message:
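For example, assuming the demo binary is named object_detection_demo (the exact name can vary between releases):

```sh
./object_detection_demo -h
```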
Running the application with an empty list of options yields the usage message given above and an error message.
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
You can use the following command to run inference on an image on a CPU using a trained Faster R-CNN network:
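A representative command is sketched below; the binary name and the image and model paths are placeholders to adjust for your setup:

```sh
./object_detection_demo \
    -i <path_to_image>/inputImage.bmp \
    -m <path_to_model>/faster_rcnn.xml \
    -d CPU
```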
The application outputs an image (out_0.bmp) with detected objects enclosed in rectangles. It writes the list of classes of the detected objects, together with the respective confidence values and the coordinates of the rectangles, to the standard output stream.