On startup, the demo application reads command-line parameters and loads a network to the Inference Engine. It also reads a user-provided sound file containing a mix of speech and noise and feeds it to the network in small sequential patches. The network output is a corresponding sequence of audio patches with clean speech. The patches are concatenated and saved to the output audio file.
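The patch-wise processing described above can be sketched as follows. This is a minimal illustration, not the demo's actual code: the patch size, the zero-padding strategy, and the `infer` callable are all assumptions standing in for the real model's input shape and inference call.

```python
import numpy as np

# Hypothetical patch size; the real value comes from the model's input shape.
PATCH_SIZE = 2048

def process_in_patches(wave, infer, patch_size=PATCH_SIZE):
    """Split `wave` into fixed-size patches, run `infer` on each patch,
    and concatenate the cleaned patches back into one waveform."""
    # Zero-pad the tail so the length is a multiple of patch_size.
    pad = (-len(wave)) % patch_size
    padded = np.pad(wave, (0, pad))
    patches = padded.reshape(-1, patch_size)
    cleaned = [infer(p) for p in patches]       # one inference call per patch
    return np.concatenate(cleaned)[:len(wave)]  # trim the padding back off

# Usage with a stand-in "model" that passes audio through unchanged:
out = process_in_patches(np.zeros(5000, dtype=np.float32), lambda p: p)
```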
The list of models supported by the demo is in the `<omz_dir>/demos/noise_suppression_demo/python/models.lst` file. This file can be used as a parameter for the Model Downloader and Converter to download and, if necessary, convert models to the OpenVINO Inference Engine format (`*.xml` + `*.bin`).
An example of using the Model Downloader:
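A typical invocation, assuming the Open Model Zoo `omz_downloader` tool is installed and the command is run from the demo directory containing `models.lst`, might look like:

```sh
omz_downloader --list models.lst
```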
An example of using the Model Converter:
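Similarly, assuming the Open Model Zoo `omz_converter` tool is installed, conversion for the same model list might be run as:

```sh
omz_converter --list models.lst
```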
Running the application with the `-h` option yields the usage message.
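For example, assuming the demo's entry point is named `noise_suppression_demo.py` (an assumption based on the demo directory name), the help text can be printed with:

```sh
python3 noise_suppression_demo.py -h
```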
You can use the following command to try the demo (assuming the model from the Open Model Zoo was downloaded with the Model Downloader executed with `--name noise-suppression*`):
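A sketch of such an invocation is shown below. The script name, the `-m`/`-i`/`-o` flag names, and the specific model file are assumptions for illustration; substitute the actual model path produced by the Model Downloader.

```sh
python3 noise_suppression_demo.py \
    -m <path_to_model>/noise-suppression-model.xml \
    -i noisy.wav \
    -o cleaned.wav
```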
The application reads the audio wave from the input file with the given name. The input file must have a 16 kHz sampling rate. The model is also a required demo argument.
The application writes the cleaned wave to the output file.
Even though the demo reports inference performance (by measuring wall-clock time for individual inference calls), this is only a baseline. Use the dedicated Benchmark C++ Sample for any actual performance measurements.