Converting a Kaldi* Model

A summary of the steps for optimizing and deploying a model that was trained with Kaldi*:

  1. Configure the Model Optimizer for Kaldi*.
  2. Convert a Kaldi* Model to produce an optimized Intermediate Representation (IR) of the model based on the trained network topology, weights, and biases values.
  3. Test the model in the Intermediate Representation format using the Inference Engine in the target environment via the provided Inference Engine validation application or the sample applications.
  4. Integrate the Inference Engine in your application to deploy the model in the target environment.

NOTE: The Model Optimizer supports only the nnet1 and nnet2 formats of Kaldi* models. The nnet3 format is not supported.

Supported Topologies

Command Line Parameters

To convert a Kaldi* model:

  1. Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory.
  2. Use the mo.py script to convert the model, specifying the path to the input model .nnet file:
    python3 mo.py --input_model <INPUT_MODEL>.nnet

Two groups of parameters are available to convert your model: the framework-agnostic parameters and the Kaldi*-specific parameters described below.

Using Kaldi*-Specific Conversion Parameters

The following list provides the Kaldi*-specific parameters:

  --counts COUNTS          The full path to the counts file
  --remove_output_softmax  Removes the SoftMax layer that is the output layer

Examples of CLI Commands
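
A minimal sketch of a conversion command that uses both Kaldi*-specific parameters described above; the model and counts file names are illustrative placeholders, not files shipped with the toolkit:

    python3 mo.py --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax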

NOTE: The Model Optimizer can remove the SoftMax layer only if the topology has a single output.

Supported Kaldi* Layers

Refer to Supported Framework Layers for the list of supported standard layers.