This demo showcases a Vehicle and License Plate Detection network followed by the Vehicle Attributes Recognition and License Plate Recognition networks applied on top of the detection results. The corresponding topologies are shipped with the product:

* `vehicle-license-plate-detection-barrier-0106`, which is a primary detection network that finds vehicles and license plates
* `vehicle-attributes-recognition-barrier-0039`, which is executed on top of the results from the first network and reports general vehicle attributes, for example, vehicle type (car/van/bus/truck) and color
* `license-plate-recognition-barrier-0001`, which is executed on top of the results from the first network and reports a string per recognized license plate

For more details on the topologies, please refer to the descriptions in the `deployment_tools/intel_models` folder of the OpenVINO™ toolkit installation directory.
Other demo objectives are:
On start-up, the application reads the command line parameters and loads the specified networks. The Vehicle and License Plate Detection network is required; the other two are optional.
Upon getting a frame from the OpenCV VideoCapture, the application runs inference with the Vehicle and License Plate Detection network, then runs two more inferences with the Vehicle Attributes Recognition and License Plate Recognition networks if they were specified on the command line, and displays the results.
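Between the primary and secondary networks, each detection has to be converted into a pixel region of interest that can be cropped and fed to the recognition networks. A minimal sketch of that step in plain Python, assuming the detection network outputs normalized `[x_min, y_min, x_max, y_max]` boxes (the box format and function name are illustrative, not the demo's actual API):

```python
def detection_to_roi(box, frame_w, frame_h):
    """Convert a normalized [x_min, y_min, x_max, y_max] detection box
    to integer pixel coordinates, clipped to the frame bounds."""
    x_min = max(0, int(box[0] * frame_w))
    y_min = max(0, int(box[1] * frame_h))
    x_max = min(frame_w, int(box[2] * frame_w))
    y_max = min(frame_h, int(box[3] * frame_h))
    return x_min, y_min, x_max, y_max

# Each vehicle ROI would go to the attributes network and each plate ROI
# to the license plate recognition network.
roi = detection_to_roi([0.25, 0.5, 0.75, 1.1], 1920, 1080)
print(roi)  # (480, 540, 1440, 1080) -- box clipped at the bottom edge
```

Clipping matters because detection networks can emit coordinates slightly outside the frame, which would otherwise produce invalid crops for the secondary inferences.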
Running the application with the -h
option yields the following usage message:
Running the application with an empty list of options yields the usage message given above and an error message.
To run the demo, you can use public models or a set of pre-trained and optimized models delivered with the package:
* `<INSTALL_DIR>/deployment_tools/intel_models/vehicle-license-plate-detection-barrier-0106`
* `<INSTALL_DIR>/deployment_tools/intel_models/vehicle-attributes-recognition-barrier-0039`
* `<INSTALL_DIR>/deployment_tools/intel_models/license-plate-recognition-barrier-0001`
For example, to do inference on a GPU with the OpenVINO toolkit pre-trained models, run the following command:
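As an illustrative sketch of such an invocation (the detection model is passed with `-m`, the optional secondary models with `-m_va` and `-m_lpr`, and the target device with `-d`; verify the exact flags and paths against the `-h` output):

```shell
# Illustrative only -- confirm flag names and model paths with -h
./security_barrier_camera_demo \
    -i <path_to_video>/inputVideo.mp4 \
    -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml \
    -m_va <path_to_model>/vehicle-attributes-recognition-barrier-0039.xml \
    -m_lpr <path_to_model>/license-plate-recognition-barrier-0001.xml \
    -d GPU
```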
To do inference for two video inputs using two asynchronous infer requests on FPGA with the OpenVINO toolkit pre-trained models, run the following command:
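A sketch of the two-input asynchronous case: both videos are passed to the input flag, `-nireq` sets the number of infer requests, and the device is set to the heterogeneous FPGA plugin with a CPU fallback (flag names here are illustrative; confirm them with `-h`):

```shell
# Illustrative only -- confirm flag names with -h
./security_barrier_camera_demo \
    -i <path_to_video>/video1.mp4 <path_to_video>/video2.mp4 \
    -m <path_to_model>/vehicle-license-plate-detection-barrier-0106.xml \
    -nireq 2 \
    -d HETERO:FPGA,CPU
```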
NOTE: Before running the demo with another trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
* `OMP_NUM_THREADS`: Specifies the number of threads to use. For heterogeneous scenarios with FPGA, when several inference requests are used asynchronously, limiting the number of CPU threads with `OMP_NUM_THREADS` helps avoid resource contention between threads. For the Security Barrier Camera Demo, the recommended value is `OMP_NUM_THREADS=1`.
* `KMP_BLOCKTIME`: Sets the time, in milliseconds, that a thread should wait after completing the execution of a parallel region before sleeping. The default value is 200 ms, which is not optimal for the demo. The recommended value is `KMP_BLOCKTIME=1`.
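These variables can be set in the shell before launching the demo, for example:

```shell
# Recommended values from above for heterogeneous FPGA runs
export OMP_NUM_THREADS=1
export KMP_BLOCKTIME=1
```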
The demo uses OpenCV to display the resulting frame with detections rendered as bounding boxes and text.