Async API usage can improve the overall frame rate of the application: rather than waiting for inference to complete, the app can continue working on the host while the accelerator is busy. Specifically, this demo keeps the number of Infer Requests that you have set using the `-nireq` flag. While some of the Infer Requests are processed by the Inference Engine, the others can be filled with new frame data and started asynchronously, or their completed outputs can be taken and displayed.
The technique can be generalized to any available parallel slack, for example, doing inference and simultaneously encoding the resulting (previous) frames, or running further inference, like emotion detection on top of the face detection results. There are important performance caveats, though: tasks that run in parallel should avoid oversubscribing the shared compute resources. For example, if inference is performed on the FPGA and the CPU is essentially idle, then it makes sense to do things on the CPU in parallel. But if inference is performed, say, on the GPU, then there is little gain from encoding the resulting video on the same GPU in parallel, because the device is already busy.
This and other performance implications and tips for the Async API are covered in the Optimization Guide.
Other demo objectives are:
* Visualization of the resulting bounding boxes and text labels (from the `.labels` file) or class number (if no file is provided)
On startup, the application reads command-line parameters and loads a network into the Inference Engine. Upon getting a frame from the OpenCV VideoCapture, it performs inference and displays the results.
The Async API operates with the notion of an "Infer Request" that encapsulates the inputs/outputs and separates scheduling from waiting for the result.
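A minimal sketch of this round-robin flow, assuming the `openvino.inference_engine` Python API (2020.1 or later); file names such as `model.xml` and `input.mp4` are placeholders, and the actual demo adds callbacks and more bookkeeping:

```python
import cv2
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

num_requests = 4  # what the -nireq flag controls
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=num_requests)

n, c, h, w = net.input_info[input_blob].input_data.shape
cap = cv2.VideoCapture("input.mp4")
frames_in_flight = {}
cur_id = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    request = exec_net.requests[cur_id % num_requests]
    if cur_id >= num_requests:
        # This slot already holds a running request: collect its result first.
        request.wait(-1)
        detections = request.output_blobs[output_blob].buffer
        prev_frame = frames_in_flight.pop(cur_id - num_requests)
        # ... draw `detections` on `prev_frame` and display it ...
    # Preprocess the new frame (NCHW layout assumed) and start it asynchronously.
    blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
    request.async_infer({input_blob: blob})
    frames_in_flight[cur_id] = frame
    cur_id += 1
# (Any requests still in flight would be drained after the loop.)
```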
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work
with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
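For instance, a frozen TensorFlow model could be reconverted with the channel order reversed. The sketch below is illustrative only; the input file name is a placeholder, and any other conversion arguments your particular model requires are omitted:

```sh
python3 mo.py --input_model frozen_inference_graph.pb --reverse_input_channels
```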
Running the application with the `-h` option prints the full list of supported options and their descriptions.
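Assuming the demo's entry-point script is named `object_detection_demo.py` (an assumption; use the actual script from the demo directory), the invocation looks like this:

```sh
python3 object_detection_demo.py -h
```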
The number of Infer Requests is specified by the `-nireq` flag. Increasing this number usually improves performance (throughput), since several Infer Requests can then be processed simultaneously if the device supports parallelization. However, a large number of Infer Requests increases latency, because each frame still has to wait its turn before being sent for inference.
For higher FPS, it is recommended that you set `-nireq` to slightly exceed the `-nstreams` value, summed across all devices used.
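For example, a run pinned to two device streams might use three Infer Requests. In the sketch below, the script name and the `-i`, `-m`, and `-d` options are assumptions based on common Open Model Zoo demo conventions; only `-nireq` and `-nstreams` are taken from this document:

```sh
python3 object_detection_demo.py -i input.mp4 -m model.xml -d GPU -nstreams 2 -nireq 3
```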
NOTE: This demo is based on the callback functionality from the Inference Engine Python API.
The selected approach makes the execution in multi-device mode optimal by preventing wait delays caused by differences in device performance. However, the internal organization of the callback mechanism in the Python API leads to an FPS decrease. Keep this in mind and use the C++ version of this demo for performance-critical cases.
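A sketch of how such callbacks are attached, assuming the `openvino.inference_engine` Python API and the `exec_net` object from the earlier sketch; the exact callback signature may differ slightly between releases:

```python
from queue import Queue

completed = Queue()  # request ids whose results are ready to be rendered

def completion_callback(status, request_id):
    # Invoked by the Inference Engine when a request finishes (status 0 = OK).
    # Keep this handler light: hand the id back to the main loop via the queue.
    completed.put((request_id, status))

for req_id, request in enumerate(exec_net.requests):
    request.set_completion_callback(completion_callback, py_data=req_id)
```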
Running the application with an empty list of options yields the usage message given above and an error message. You can use the following command to do inference on GPU with a pre-trained object detection model:
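A sketch of such a command, assuming the demo script is `object_detection_demo.py` and that a converted SSD model is available locally; other required options may vary by demo version, so check the usage message printed with `-h`:

```sh
python3 object_detection_demo.py -d GPU -i <path_to_video>/input.mp4 -m <path_to_model>/ssd.xml
```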
To run the demo, you can use public or pre-trained models. You can download the pre-trained models with the OpenVINO Model Downloader.
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine
format (*.xml + *.bin) using the Model Optimizer tool.
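A hedged sketch of that preparation flow with the Open Model Zoo tools; the model name `ssd_mobilenet_v2_coco` and the scripts' locations are assumptions, so substitute the model you actually need:

```sh
# Download a public model and convert it to the IR format (*.xml + *.bin)
python3 downloader.py --name ssd_mobilenet_v2_coco
python3 converter.py --name ssd_mobilenet_v2_coco
```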
The demo uses OpenCV to display the resulting frame with detections (rendered as bounding boxes and labels, if provided). The demo reports