This is the demo application for the Action Recognition algorithm, which classifies actions that are being performed on an input video. The following pre-trained models are delivered with the product:

* `driver-action-recognition-adas-0002-encoder` + `driver-action-recognition-adas-0002-decoder`, which are models for the driver monitoring scenario. They recognize actions such as safe driving, talking on the phone, and others.
* `i3d-rgb-tf`, which is a general-purpose action recognition (400 actions) model trained on the Kinetics-400 dataset.
The demo pipeline consists of several steps, namely `Data`, `Model` and `Render`. Every step implements the `PipelineStep` interface by creating a class derived from the `PipelineStep` base class. See `steps.py` for implementation details; a minimal sketch of a custom step also appears after the list below.
* `DataStep` reads frames from the input video.
* `EncoderStep` preprocesses a frame and feeds it to the encoder model to produce a frame embedding.
* `DecoderStep` feeds embeddings produced by the `EncoderStep` to the decoder model and produces predictions. For models that use `DummyDecoder`, simple averaging of the encoder's outputs over a time window is applied instead.
* `<ModelName>Step` preprocesses frames and produces predictions for single models, such as `i3d-rgb-tf`, that do not use a separate encoder and decoder.
* `RenderStep` renders prediction results.
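
The exact `PipelineStep` contract is defined in `steps.py`. As a rough illustration only, a custom step might look like the sketch below; the `process()` hook and the stubbed base class are assumptions made for readability, not the demo's verified API.

```python
import cv2

class PipelineStep:
    """Stub of the base class for illustration; the real one lives in steps.py."""
    def process(self, item):
        raise NotImplementedError

class GrayscaleStep(PipelineStep):
    """Hypothetical custom step that converts each incoming frame to grayscale."""
    def process(self, frame):
        # The demo's default input is BGR, so COLOR_BGR2GRAY is the matching code.
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```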
Pipeline steps are composed in `AsyncPipeline`. Every step can be run in a separate thread by adding it to the pipeline with the `parallel=True` option. When two consecutive steps occur in separate threads, they communicate via a message queue (for example, to deliver a step result or a stop signal).
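
Conceptually, two steps running in separate threads exchange items roughly as in the following sketch. Everything here is illustrative (the helper name `run_step` and the wiring are not taken from the demo); it only demonstrates the queue-and-stop-signal pattern.

```python
from queue import Queue
from threading import Thread

def run_step(step, in_queue, out_queue):
    """Drain the input queue, process each item, and forward the result.
    A None item acts as the stop signal and is propagated downstream."""
    while True:
        item = in_queue.get()
        if item is None:
            out_queue.put(None)
            break
        out_queue.put(step.process(item))

# Wiring two consecutive steps together; encoder_step and decoder_step stand
# for any objects exposing a process() method:
# q_in, q_mid, q_out = Queue(), Queue(), Queue()
# Thread(target=run_step, args=(encoder_step, q_in, q_mid)).start()
# Thread(target=run_step, args=(decoder_step, q_mid, q_out)).start()
```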
To ensure maximum performance, Inference Engine models are wrapped in `AsyncWrapper`, which uses the Inference Engine async API by scheduling infer requests in cyclical order (inference on every new input is started asynchronously, and the result of the longest-working infer request is returned). You can change the value of `num_requests` in `action_recognition_demo.py` to find an optimal number of parallel working infer requests for your inference accelerators (Compute Sticks and GPUs benefit from a higher number of infer requests).
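
The cyclical scheduling idea can be sketched as follows. This is not the demo's actual `AsyncWrapper`: the class name is invented, the `async_infer()`/`wait()` calls follow the classic Inference Engine Python API, and the result-fetching line is an assumption.

```python
from collections import deque

class CyclicAsyncWrapper:
    """Illustrative wrapper: every call starts inference on a new input and,
    once all requests are in flight, returns the result of the oldest one."""

    def __init__(self, infer_requests):
        self.free = deque(infer_requests)  # idle infer requests
        self.busy = deque()                # in-flight requests, oldest first

    def infer(self, inputs):
        result = None
        if not self.free:                  # pipeline is full: wait on the oldest
            oldest = self.busy.popleft()
            oldest.wait()                  # blocks until that request finishes
            result = {name: blob.buffer    # assumed way to fetch outputs
                      for name, blob in oldest.output_blobs.items()}
            self.free.append(oldest)
        request = self.free.popleft()
        request.async_infer(inputs)        # schedule inference without blocking
        self.busy.append(request)
        return result                      # None until the pipeline warms up
```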
> **NOTE**: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the **When to Reverse Input Channels** section of Converting a Model Using General Conversion Parameters.
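
For instance, a frozen TensorFlow model could be reconverted with the channel order reversed as shown below; the `mo.py` location and the model path are placeholders, while `--input_model` and `--reverse_input_channels` are standard Model Optimizer options.

```sh
python3 <model_optimizer_dir>/mo.py --input_model <path_to_model>/model.pb --reverse_input_channels
```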
Run the application with the `-h` option to see the full usage message:
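
```sh
python3 action_recognition_demo.py -h
```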
Running the application with an empty list of options prints the same usage message together with an error message.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader. The list of models supported by the demo is in the `models.lst` file in the demo's directory.
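
For example, every model from that list can be fetched in one call; the downloader script location varies between OpenVINO releases, so the path below is a placeholder, while `--list` is a standard downloader option.

```sh
python3 <omz_tools_dir>/downloader.py --list models.lst
```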
> **NOTE**: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
For example, to run the demo for the in-cabin driver monitoring scenario, provide paths to the encoder and decoder models, an input video, and a file with label names:
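
The invocation below is representative: the option names (`-m_en`, `-m_de`, `-i`, `-lb`) match common releases of this demo but are not guaranteed for yours, so verify them against the `-h` output; all paths are placeholders.

```sh
python3 action_recognition_demo.py \
    -m_en <path_to_models>/driver-action-recognition-adas-0002-encoder.xml \
    -m_de <path_to_models>/driver-action-recognition-adas-0002-decoder.xml \
    -i <path_to_video>/input_video.mp4 \
    -lb <path_to_labels>/driver_actions.txt
```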
The application uses OpenCV to display the real-time results and current inference performance (in FPS).