This demo showcases the Vehicle and License Plate Detection network followed by the Vehicle Attributes Recognition and License Plate Recognition networks applied on top of the detection results. You can use the following set of pre-trained models with the demo:
* `vehicle-license-plate-detection-barrier-0106`, which is a primary detection network to find vehicles and license plates
* `vehicle-attributes-recognition-barrier-0039`, which is executed on top of the results from the first network and reports general vehicle attributes, for example, vehicle type (car/van/bus/truck) and color
* `license-plate-recognition-barrier-0001`, which is executed on top of the results from the first network and reports a string per recognized license plate
For more information about the pre-trained models, refer to the [Open Model Zoo](https://github.com/opencv/open_model_zoo/blob/master/intel_models/index.md) repository on GitHub*.
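If you do not have the models locally, they can be fetched with the Model Downloader that ships with the Open Model Zoo. A minimal sketch follows; the location of `downloader.py` varies between releases, so treat the invocation as an assumption to adapt to your install:

```sh
# Download the three pre-trained models used by the demo
# (run from the Open Model Zoo model downloader directory)
python3 downloader.py --name vehicle-license-plate-detection-barrier-0106
python3 downloader.py --name vehicle-attributes-recognition-barrier-0039
python3 downloader.py --name license-plate-recognition-barrier-0001
```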
Other demo objectives are:
* Video and camera inputs support via OpenCV*
* Visualization of vehicle attributes and license plate information for each detected object
On start-up, the application reads command-line parameters and loads the specified networks. The Vehicle and License Plate Detection network is required; the other two are optional.
Upon getting a frame from the OpenCV VideoCapture, the application performs inference of the Vehicle and License Plate Detection network, then performs two more inferences of the Vehicle Attributes Recognition and License Plate Recognition networks if they were specified on the command line, and displays the results.
NOTE: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
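As a sketch, reconverting a model with reversed channels might look like this; `<your_model>.onnx` is a placeholder for your trained model file, and the `mo.py` location depends on your OpenVINO install:

```sh
# Embed channel reversal into the converted model so a network trained
# on RGB images accepts the demo's BGR frames directly
python3 mo.py --input_model <your_model>.onnx --reverse_input_channels
```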
Running the application with the `-h` option yields the following usage message:
Running the application with an empty list of options yields the usage message given above and an error message.
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
For example, to do inference on a GPU with the OpenVINO toolkit pre-trained models, run a command like the following (the model and video paths below are placeholders; adjust them to your layout):
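```sh
# <path_to_video> and <path_to_model> are placeholders for your local paths
./security_barrier_camera_demo -i <path_to_video>/input_video.mp4 \
    -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml \
    -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml \
    -m_lp <path_to_model>/license-plate-recognition-barrier-0001.xml \
    -d GPU
```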
To do inference for two video inputs using two asynchronous infer requests on an FPGA with the OpenVINO toolkit pre-trained models, run a command like the following (again with placeholder paths; the fallback to CPU is expressed through the heterogeneous `HETERO:FPGA,CPU` device):
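```sh
# Two input videos, each network on HETERO:FPGA,CPU;
# -nireq sets the number of asynchronous infer requests
./security_barrier_camera_demo -i <path_to_video>/input_video_0.mp4 <path_to_video>/input_video_1.mp4 \
    -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml \
    -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml \
    -m_lp <path_to_model>/license-plate-recognition-barrier-0001.xml \
    -d HETERO:FPGA,CPU -d_va HETERO:FPGA,CPU -d_lp HETERO:FPGA,CPU \
    -nireq 2
```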
* `OMP_NUM_THREADS`: Specifies the number of threads to use. For heterogeneous scenarios with FPGA, when several inference requests are used asynchronously, limiting the number of CPU threads with `OMP_NUM_THREADS` helps avoid competition for resources between threads. For the Security Barrier Camera Demo, the recommended value is `OMP_NUM_THREADS=1`.
* `KMP_BLOCKTIME`: Sets the time, in milliseconds, that a thread should wait after completing the execution of a parallel region before sleeping. The default value is 200ms, which is not optimal for the demo. The recommended value is `KMP_BLOCKTIME=1`.
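As a minimal sketch, both variables can be exported in the shell before launching the demo:

```sh
# Limit CPU threads and shorten the post-region wait time,
# as recommended above for heterogeneous FPGA scenarios
export OMP_NUM_THREADS=1
export KMP_BLOCKTIME=1
```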
The demo uses OpenCV to display the resulting frame with detections rendered as bounding boxes and text.