The demo shows an example of joint usage of several neural networks to detect student actions (sitting, standing, and raising hand for the person-detection-action-recognition-0005 model; sitting, writing, raising hand, standing, turned around, and lie on the desk for the person-detection-action-recognition-0006 model) and to recognize people by their faces in a classroom environment. The demo uses the Async API for the action and face detection networks, which allows parallelizing the execution of face recognition and detection: while face recognition runs on one accelerator, face and action detection can be performed on another. You can use the following set of pre-trained models with the demo:
* `face-detection-adas-0001`, which is a primary detection network for finding faces.
* `landmarks-regression-retail-0009`, which is executed on top of the results from the first network and outputs a vector of facial landmarks for each detected face.
* `face-reidentification-retail-0095`, which is executed on top of the results from the first network and outputs a vector of features for each detected face.
* `person-detection-action-recognition-0005`, which is a detection network for finding persons and simultaneously predicting their current actions (3 actions: sitting, standing, raising hand).
* `person-detection-action-recognition-0006`, which is a detection network for finding persons and simultaneously predicting their current actions (6 actions: sitting, writing, raising hand, standing, turned around, lie on the desk).
* `person-detection-raisinghand-recognition-0001`, which is a detection network for finding students and simultaneously predicting their current actions (in contrast with the previous model, it predicts only whether a student is raising a hand or not).
* `person-detection-action-recognition-teacher-0002`, which is a detection network for finding persons and simultaneously predicting their current actions.
On startup, the application reads command-line parameters and loads four networks to the Inference Engine for execution on different devices depending on the `-m...` options family. Upon getting a frame from the OpenCV VideoCapture, it performs inference with the Face Detection and Action Detection networks. After that, the ROIs obtained by the Face Detector are fed to the Facial Landmarks Regression network. The landmarks are then used to align the faces with an affine transform and feed them to the Face Recognition network. The recognized faces are matched with the detected actions to find the action of each recognized person in every frame.
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
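For reference, a hedged sketch of a Model Optimizer invocation with this flag (the input model name is a placeholder):

```sh
python3 mo.py --input_model my_model.onnx --reverse_input_channels
```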
To recognize faces on a frame, the demo needs a gallery of reference images. Each image should contain a tight crop of a face. You can create the gallery from an arbitrary list of images:

1. Name the images as `id_name0.png, id_name1.png, ...` and put them into a single folder. Each identity can have multiple images.
2. Run the `python3 create_list.py <path_to_folder_with_images>` command, which will create a `faces_gallery.json` file with the list of files and identities, as in the example after this list.
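For instance, assuming a folder `my_gallery` containing `alice0.png`, `alice1.png`, and `bob0.png` (hypothetical names that follow the convention above):

```sh
python3 create_list.py my_gallery
```

This produces a `faces_gallery.json` mapping the listed image files to the identities `alice` and `bob`; pass its path to the demo with the `-fg` option.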
For demo input image or video files, refer to the section Media Files Available for Demos in the Open Model Zoo Demos Overview. The list of models supported by the demo is in the `<omz_dir>/demos/smart_classroom_demo/cpp/models.lst` file. This file can be used as a parameter for the Model Downloader and Converter to download and, if necessary, convert models to the OpenVINO Inference Engine format (*.xml + *.bin).
An example of using the Model Downloader:
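A minimal sketch, assuming the downloader script sits at its usual location under `<omz_dir>/tools/downloader`:

```sh
python3 <omz_dir>/tools/downloader/downloader.py --list models.lst
```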
An example of using the Model Converter:
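Similarly, assuming the converter script lives next to the downloader:

```sh
python3 <omz_dir>/tools/downloader/converter.py --list models.lst
```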
Running the application with the `-h` option yields the usage message.
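The invocation itself (the full option listing is printed by the binary and is not reproduced here):

```sh
./smart_classroom_demo -h
```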
Running the application with an empty list of options yields an error message.
Example of a valid command line to run the application with pre-trained models for recognizing students' actions:
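A sketch of such a command line, assuming the demo's standard `-m_act`, `-m_fd`, `-m_lm`, `-m_reid`, `-fg`, and `-i` options:

```sh
./smart_classroom_demo -m_act <path_to_model>/person-detection-action-recognition-0005.xml \
                       -m_fd <path_to_model>/face-detection-adas-0001.xml \
                       -m_lm <path_to_model>/landmarks-regression-retail-0009.xml \
                       -m_reid <path_to_model>/face-reidentification-retail-0095.xml \
                       -fg <path_to_faces_gallery.json> \
                       -i <path_to_video>
```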
NOTE: To recognize actions of students, use the `person-detection-action-recognition-0005` model for the 3 basic actions and the `person-detection-action-recognition-0006` model for 6 actions. See the model descriptions for more details on the list of recognized actions.
Example of a valid command line to run the application for recognizing actions of a teacher:
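A sketch along the same lines, assuming the `-teacher_id` option selects the teacher's identity in the face gallery:

```sh
./smart_classroom_demo -m_act <path_to_model>/person-detection-action-recognition-teacher-0002.xml \
                       -m_fd <path_to_model>/face-detection-adas-0001.xml \
                       -m_lm <path_to_model>/landmarks-regression-retail-0009.xml \
                       -m_reid <path_to_model>/face-reidentification-retail-0095.xml \
                       -fg <path_to_faces_gallery.json> \
                       -teacher_id <id of the teacher in the gallery> \
                       -i <path_to_video>
```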
NOTE: To recognize actions of a teacher, use the `person-detection-action-recognition-teacher-0002` model.
Example of a valid command line to run the application for recognizing first raised-hand students:
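A sketch, assuming the `-a_top` option sets how many first raised-hand students to report and that the face-recognition models can be omitted in this mode:

```sh
./smart_classroom_demo -m_act <path_to_model>/person-detection-raisinghand-recognition-0001.xml \
                       -a_top <number of first raised-hand students> \
                       -i <path_to_video>
```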
NOTE: To recognize the raising-hand action of students, use the `person-detection-raisinghand-recognition-0001` model.
NOTE: If you provide a single image as an input, the demo processes and renders it quickly, then exits. To continuously visualize inference results on the screen, apply the `-loop` option, which enforces processing a single image in a loop.
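For example, reusing the raised-hand command above with a single image as input (the file name is a placeholder):

```sh
./smart_classroom_demo -m_act <path_to_model>/person-detection-raisinghand-recognition-0001.xml \
                       -i snapshot.png -loop
```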
You can save processed results to a Motion JPEG AVI file or separate JPEG or PNG files using the `-o` option:

* To save processed results in an AVI file, provide the name of the output file with the `avi` extension, for example: `-o output.avi`.
* To save processed results as images, provide the template name of the output image file with the `jpg` or `png` extension, for example: `-o output_%03d.jpg`. The actual file names are constructed from the template at runtime by replacing the regular expression `%03d` with the frame number, resulting in file names like `output_001.jpg`, and so on. To avoid disk space overrun in case of a continuous input stream, like a camera, you can limit the amount of data stored in the output file(s) with the `limit` option. The default value is 1000. To change it, apply the `-limit N` option, where `N` is the number of frames to store. See the combined example after this list.
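A combined sketch, reusing the student-actions command line above and capping the recording at 300 frames (the limit value is arbitrary):

```sh
./smart_classroom_demo -m_act <path_to_model>/person-detection-action-recognition-0005.xml \
                       -m_fd <path_to_model>/face-detection-adas-0001.xml \
                       -m_lm <path_to_model>/landmarks-regression-retail-0009.xml \
                       -m_reid <path_to_model>/face-reidentification-retail-0095.xml \
                       -fg <path_to_faces_gallery.json> \
                       -i <path_to_video> \
                       -o output.avi -limit 300
```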
NOTE: Windows* systems may not have the Motion JPEG codec installed by default. If this is the case, you can download the OpenCV FFMPEG back end using the PowerShell script provided with the OpenVINO™ install package and located at `<INSTALL_DIR>/opencv/ffmpeg-download.ps1`. The script should be run with administrative privileges if OpenVINO™ is installed in a system-protected folder (this is a typical case). Alternatively, you can save results as images.
The demo uses OpenCV to display the resulting frame with labeled actions and faces.