This demo shows how to use neural networks to colorize a video. You can use the following models with the demo:
For more information about the pre-trained models, refer to the model documentation.
How It Works
At start-up, the application reads the command-line parameters and loads the network to the Inference Engine for execution.
Once the program receives an image, it performs the following steps:
- Converts a frame of the input video to the LAB color space.
- Uses the L channel to predict the A and B channels.
- Restores the image by converting it back to the BGR color space.
Running the Demo
Running the application with the -h option yields the following usage message:
usage: colorization_demo.py [-h] -m MODEL [-d DEVICE] -i "<path>" [--no_show]
[-v] [-u UTILIZATION_MONITORS]
-h, --help Help with the script.
-m MODEL, --model MODEL
Required. Path to .xml file with pre-trained model.
-d DEVICE, --device DEVICE
Optional. Specify target device for infer: CPU, GPU,
FPGA, HDDL or MYRIAD. Default: CPU
-i "<path>", --input "<path>"
Required. Input to process.
--no_show Optional. Disable display of results on screen.
-v, --verbose Optional. Enable display of processing logs on screen.
-u UTILIZATION_MONITORS, --utilization_monitors UTILIZATION_MONITORS
Optional. List of monitors to show initially.
To run the demo, you can use public or Intel pre-trained models. To download the pre-trained models, use the OpenVINO™ Model Downloader. The list of models supported by the demo is in the models.lst file in the demo's directory.
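A typical session might look like the following. This is an illustrative sketch only: the model name and the downloader output layout are assumptions and depend on your Open Model Zoo installation, so adjust the paths to match your setup.

```shell
# Download the models listed in the demo's models.lst (paths assumed).
python3 downloader.py --list models.lst -o models

# Run the demo on a video file with one of the downloaded models on CPU.
python3 colorization_demo.py \
    -m models/public/colorization-v2/FP32/colorization-v2.xml \
    -i input_video.mp4 \
    -d CPU
```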
NOTE: Before running the demo with a trained model, make sure the model is converted to the Inference Engine format (*.xml + *.bin) using the Model Optimizer tool.
The demo uses OpenCV to display the colorized frame.