This demo shows how to run 3D Human Pose Estimation models using OpenVINO™. For more information about the supported pre-trained models, refer to the model documentation.
NOTE: Only batch size of 1 is supported.
## How It Works
The demo application expects a 3D human pose estimation model in the Intermediate Representation (IR) format.
As input, the demo application can take a video file or a set of images.

The demo workflow is the following: frames are read from the input, each frame is passed through the 3D human pose estimation network, the estimated poses are extracted from the network output, and the results are rendered on screen.
NOTE: By default, Open Model Zoo demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the demo application or reconvert your model using the Model Optimizer tool with the `--reverse_input_channels` argument specified. For more information about the argument, refer to the When to Reverse Input Channels section of Converting a Model Using General Conversion Parameters.
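For example, a model could be reconverted with the channel order reversed like this (a sketch; the input model path is a placeholder, and `mo` is the Model Optimizer entry point installed with the openvino-dev package):

```sh
mo --input_model <path_to_model>/model.onnx --reverse_input_channels
```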
## Running

This demo application requires a native Python extension module to be built before you can run it. Refer to Using Open Model Zoo demos for instructions on how to build it and prepare the environment for running the demo.
Run the application with the `-h` option to see the full usage message.
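For example (assuming the script is named after the demo's directory, as is the Open Model Zoo convention):

```sh
python3 human_pose_estimation_3d_demo.py -h
```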
Running the application with an empty list of options yields the short version of the usage message and an error message.
To run the demo, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO Model Downloader. The list of models supported by the demo is in the `models.lst` file in the demo's directory.
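For example, all models listed in the file can be fetched in one call (a sketch using the `omz_downloader` entry point from the openvino-dev package; older releases ship the same tool as `downloader.py`):

```sh
omz_downloader --list models.lst
```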
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (`*.xml` + `*.bin`) using the Model Optimizer tool.
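Public models that are not distributed in the IR format can be converted with the companion Model Converter tool (a sketch; `omz_converter` is installed alongside `omz_downloader` and wraps the Model Optimizer):

```sh
omz_converter --list models.lst
```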
To run the demo, provide the path to the model in the IR format and to an input video or image(s):
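For example (a sketch; the model and input paths are placeholders, the model name shown is the demo's pre-trained model from the Open Model Zoo, and the optional `-d` option selects the inference device, defaulting to CPU):

```sh
python3 human_pose_estimation_3d_demo.py \
    -m <path_to_model>/human-pose-estimation-3d-0001.xml \
    -i <path_to_video>/input.mp4 \
    -d CPU
```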
The application uses OpenCV to display the detected poses and the current inference performance.