This tutorial explains how to convert the Google* Neural Machine Translation (GNMT) model to the Intermediate Representation (IR).
Several public implementations of the TensorFlow* GNMT model are available on GitHub*. This tutorial explains how to convert the GNMT model from the TensorFlow* Neural Machine Translation (NMT) repository to the IR.
Before converting the model, you need to create a patch file for the repository. The patch modifies the framework code by adding a special command-line argument to the framework options that enables inference graph dumping:
NOTE: Use TensorFlow* version 1.13 or lower.
Step 1. Clone the GitHub repository and check out the commit:
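A sketch of the commands, assuming the model comes from the tensorflow/nmt repository; the tutorial pins a specific commit, so substitute its hash for the placeholder:

```sh
git clone https://github.com/tensorflow/nmt.git
cd nmt
git checkout <commit-hash>  # substitute the commit this tutorial pins
```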
Step 2. Get a trained model. You have two options:

* Train the model with the GNMT `wmt16_gnmt_4_layer.json` or `wmt16_gnmt_8_layer.json` configuration file using the NMT framework (a training sketch follows below).
* Use existing trained checkpoints that are compatible with the repository version you checked out.
This tutorial assumes the use of the trained GNMT model from the `wmt16_gnmt_4_layer.json` config, German to English translation.
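If you train the model yourself, the invocation below is a sketch based on the GNMT instructions in the tensorflow/nmt README; all `/tmp/...` paths are placeholders for the WMT16 German-English data produced by the `nmt/scripts/wmt16_en_de.sh` script:

```sh
python -m nmt.nmt \
    --src=de --tgt=en \
    --hparams_path=nmt/standard_hparams/wmt16_gnmt_4_layer.json \
    --out_dir=/tmp/deen_gnmt \
    --vocab_prefix=/tmp/wmt16/vocab.bpe.32000 \
    --train_prefix=/tmp/wmt16/train.tok.clean.bpe.32000 \
    --dev_prefix=/tmp/wmt16/newstest2013.tok.bpe.32000 \
    --test_prefix=/tmp/wmt16/newstest2015.tok.bpe.32000
```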
Step 3. Create an inference graph:
OpenVINO™ assumes that a model is used for inference only. Hence, before converting the model into the IR, you need to transform the training graph into the inference graph. For the GNMT model, the training graph and the inference graph have different decoders: the training graph uses a greedy search decoding algorithm, while the inference graph uses a beam search decoding algorithm.
Apply the `GNMT_inference.patch` patch to the repository. Refer to the Create a Patch File instructions if you do not have it.
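For example, assuming the patch was saved to a directory of your choice:

```sh
git apply /path/to/patch/GNMT_inference.patch
```

Then run the NMT framework to dump the inference model. The command below is a sketch: it assumes the patch added the `--dump_inference_model` option, uses the German-to-English checkpoint, and treats every path as a placeholder:

```sh
python -m nmt.nmt \
    --src=de --tgt=en \
    --ckpt=/path/to/ckpt/translate.ckpt \
    --hparams_path=/path/to/repository/nmt/nmt/standard_hparams/wmt16_gnmt_4_layer.json \
    --vocab_prefix=/path/to/vocab/vocab.bpe.32000 \
    --out_dir="" \
    --dump_inference_model \
    --infer_mode beam_search \
    --inference_input_file=/path/to/input/file \
    --inference_output_file=/path/to/output/file
```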
If you use different checkpoints, use the corresponding values for the `src`, `tgt`, `ckpt`, `hparams_path`, and `vocab_prefix` parameters. The inference checkpoint `inference_GNMT_graph` and the frozen inference graph `frozen_GNMT_inference_graph.pb` will appear in the dump folder (`/path/to/dump/model/` in the examples below).
To generate `vocab.bpe.32000`, execute the `nmt/scripts/wmt16_en_de.sh` script. If you face a size mismatch between the embedding layer of the checkpoint graph and the vocabulary (both src and target), add the following code to the `extend_hparams` function in the `nmt.py` file, after line 508 (after initialization of the `num_embeddings_partitions` parameter).
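A sketch of the addition, assuming the vocabulary is exactly one token larger than the embedding layer (verify against your checkpoint):

```python
# Trim both vocabulary sizes to match the embedding layers
src_vocab_size -= 1
tgt_vocab_size -= 1
```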
Step 4. Convert the model to the IR:
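A sketch of the Model Optimizer command, assuming `batch_size = 1` and `max_sequence_length = 50` (recent releases provide the `mo` entry point, older ones `mo_tf.py`); the frozen constants `[2]` and `[1]` assume the default vocabulary positions of the `eos` and `sos` symbols and, like the paths, may need adjusting to your setup:

```sh
mo \
    --input_model /path/to/dump/model/frozen_GNMT_inference_graph.pb \
    --input "IteratorGetNext:1{i32}[1],IteratorGetNext:0{i32}[1,50],dynamic_seq2seq/hash_table_Lookup_1:0[1]->[2],dynamic_seq2seq/hash_table_Lookup:0[1]->[1]" \
    --output dynamic_seq2seq/decoder/decoder/GatherTree \
    --output_dir /path/to/output/IR/
```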
Input and output cutting with the `--input` and `--output` options is required since OpenVINO™ does not support `IteratorGetNext` and `LookupTableFindV2` operations.
Input cutting:

* The `IteratorGetNext` operation iterates over a dataset. It is cut by output ports: port 0 contains the data tensor with shape `[batch_size, max_sequence_length]`, and port 1 contains the `sequence_length` for every batch with shape `[batch_size]`.
* The `LookupTableFindV2` operations (the `dynamic_seq2seq/hash_table_Lookup_1` and `dynamic_seq2seq/hash_table_Lookup` nodes in the graph) are cut with constant values.
Output cutting:

* The `LookupTableFindV2` operation is cut from the output, and the `dynamic_seq2seq/decoder/decoder/GatherTree` node is treated as a new exit point.
For more information about model cutting, refer to Cutting Off Parts of a Model.
NOTE: This section assumes you have already converted the model to the Intermediate Representation.
Inputs of the model:
* The `IteratorGetNext/placeholder_out_port_0` input with shape `[batch_size, max_sequence_length]` contains `batch_size` decoded input sentences. Every sentence is encoded as indices of its elements in the vocabulary and padded with the index of `eos` (end-of-sentence symbol): if a sentence is shorter than `max_sequence_length`, the remaining elements are filled with the `eos` index.
* The `IteratorGetNext/placeholder_out_port_1` input with shape `[batch_size]` contains sequence lengths for every sentence from the first input. For example, if `max_sequence_length = 50`, `batch_size = 1`, and the sentence has only 30 elements, the input tensor for this port should be `[30]` (see the sketch after this list).
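A minimal sketch of building both inputs with NumPy; the vocabulary indices and the `eos` index are hypothetical and must be looked up in your `vocab.bpe.32000`:

```python
import numpy as np

max_sequence_length = 50
eos_idx = 2                  # hypothetical index of `eos` in the vocabulary
sentence = [41, 7, 95]       # hypothetical indices of a 3-element sentence

# Pad the sentence with `eos` up to max_sequence_length
padded = sentence + [eos_idx] * (max_sequence_length - len(sentence))
input_data_tensor = np.array([padded], dtype=np.int32)   # shape [1, 50]
seq_lengths = np.array([len(sentence)], dtype=np.int32)  # shape [1]
```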
Outputs of the model:
* The `dynamic_seq2seq/decoder/decoder/GatherTree` tensor with shape `[max_sequence_length * 2, batch, beam_size]` contains the `beam_size` best translations for every sentence from the input (also decoded as indices of words in the vocabulary).
NOTE: The shape of this tensor in TensorFlow* can be different: instead of `max_sequence_length * 2`, it can be any value less than that, because OpenVINO™ does not support dynamic output shapes, while TensorFlow* can stop decoding iterations when the `eos` symbol is generated.
NOTE: Before running the example, insert paths to your GNMT `.xml` and `.bin` files into `MODEL_PATH` and `WEIGHTS_PATH`, and fill the `input_data_tensor` and `seq_lengths` tensors according to your input data.
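A minimal inference sketch with the Inference Engine Python API, assuming the IR was produced with `batch_size = 1` and `max_sequence_length = 50`; the paths and the zero-filled input data are placeholders:

```python
import numpy as np
from openvino.inference_engine import IECore

MODEL_PATH = "/path/to/IR/frozen_GNMT_inference_graph.xml"
WEIGHTS_PATH = "/path/to/IR/frozen_GNMT_inference_graph.bin"

# Placeholder inputs: fill with real vocabulary indices (see the sketch above)
input_data_tensor = np.zeros((1, 50), dtype=np.int32)
seq_lengths = np.array([30], dtype=np.int32)

ie = IECore()
net = ie.read_network(model=MODEL_PATH, weights=WEIGHTS_PATH)
exec_net = ie.load_network(network=net, device_name="CPU")

# Feed both cut inputs by name and run inference
result = exec_net.infer({
    "IteratorGetNext/placeholder_out_port_0": input_data_tensor,
    "IteratorGetNext/placeholder_out_port_1": seq_lengths,
})

# [max_sequence_length * 2, batch, beam_size] tensor of vocabulary indices
translations = result["dynamic_seq2seq/decoder/decoder/GatherTree"]
print(translations.shape)
```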
For more information about the Python API, refer to the Inference Engine Python API Overview.