This README describes the Question Answering demo application that uses a Squad-tuned BERT model for inference.
Upon start-up, the demo application reads command-line parameters and loads a network to the Inference Engine. It also fetches data from the user-provided URL to populate the "context" text. The text is then used to search for answers to user-provided questions.
Running the application with the -h option yields the following usage message:
NOTE: Before running the demo with a trained model, make sure to convert the model to the Inference Engine Intermediate Representation format (*.xml + *.bin) using the Model Optimizer tool. When using a pre-trained BERT from the model zoo (see the Model Downloader), the model is already converted to the IR.
The application reads text from the HTML page at the given URL and then answers questions typed in the console. The model and its parameters (inputs and outputs) are also important demo arguments. Notice that since the order of the model inputs matters, the demo script checks that the inputs specified on the command line match the actual network inputs. When the reshape option (-r) is specified, the script also attempts to reshape the network to the length of the context plus the length of the question (both in tokens), if the resulting value is smaller than the original sequence length that the network expects. This option saves inference time and memory footprint. Since some networks are not reshapable (due to limitations of their internal layers), the reshaping might fail, in which case you will need to run the demo without it. Please see the general reshape introduction and limitations.
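The reshape decision described above can be sketched as follows. This is an illustrative snippet only, not the demo's actual code; the function and variable names (target_seq_len, context_tokens, question_tokens) are assumptions made for the example.

```python
# Hypothetical sketch of the reshape length calculation; names are
# illustrative, not taken from the demo script.
def target_seq_len(context_tokens, question_tokens, original_seq_len):
    """Return the sequence length to reshape the network to.

    BERT QA inputs are packed as [CLS] question [SEP] context [SEP],
    so 3 special tokens are added on top of the two token lists.
    Reshaping is only worthwhile when the packed input is shorter
    than the sequence length the network originally expects.
    """
    needed = len(question_tokens) + len(context_tokens) + 3
    return min(needed, original_seq_len)

# A short context fits, so the network can be shrunk:
print(target_seq_len(["tok"] * 100, ["tok"] * 10, 384))  # -> 113
# A long context does not, so the original length is kept:
print(target_seq_len(["tok"] * 500, ["tok"] * 10, 384))  # -> 384
```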
The application outputs found answers to the same console.
Open Model Zoo models feature, for example, BERT-Large trained on SQuAD*. One specific flavor is the so-called "distilled" model (for that reason it comes with "small" in its name, but don't get confused, as it still originates from BERT-Large), which is indeed substantially smaller and faster.
The demo also works fine with official MLPerf* BERT ONNX models fine-tuned on the SQuAD dataset. Unlike the Open Model Zoo models, which come directly as the Intermediate Representation (IR), the MLPerf models must be explicitly converted with the OpenVINO Model Optimizer. Specifically, the example command line (for the int8 model) is as follows:
You can use the following command to try the demo (assuming a model from the Open Model Zoo, downloaded with the Model Downloader executed with "--name bert*"):
The demo will use a wiki page about the Bert character to answer your questions, such as "who is Bert", "how old is Bert", and so on.
Notice that when the original "context" (the text from the URL) together with the question does not fit the model input (usually 384 tokens for BERT-Large, or 128 for BERT-Base), the demo splits the context into overlapping segments. Thus, for long texts, the network is called multiple times, and the results are then sorted by their probabilities.
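The splitting step can be illustrated with a minimal sketch. This is not the demo's actual implementation; the function name and the window/stride parameters are assumptions chosen for the example, where consecutive windows overlap by `window - stride` tokens so no answer span is lost at a chunk boundary.

```python
# Illustrative sliding-window split of a long tokenized context into
# overlapping chunks, each small enough to fit the model input together
# with the question tokens.
def split_context(context_tokens, window, stride):
    """Return overlapping slices of the token list.

    `window` is the number of context tokens per inference call and
    `stride` (< window) is how far each window advances, so consecutive
    windows share `window - stride` tokens.
    """
    chunks = []
    for start in range(0, len(context_tokens), stride):
        chunks.append(context_tokens[start:start + window])
        if start + window >= len(context_tokens):
            break
    return chunks

chunks = split_context(list(range(10)), window=4, stride=3)
print(chunks)  # -> [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

With a long context, the network is then run once per chunk and the candidate answers from all runs are merged and sorted by score, as the paragraph above describes.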
Even though the demo reports inference performance (by measuring wall-clock time for individual inference calls), it is only baseline performance, as certain tricks like batching or throughput mode can be applied. Please use the full-blown Benchmark C++ Sample for any actual performance measurements.
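The per-call wall-clock measurement mentioned above amounts to something like the following sketch (not the demo's actual code; `timed_call` and the stand-in workload are illustrative):

```python
# Minimal sketch of per-call wall-clock timing; a real inference call
# would go where the stand-in function is invoked.
import time

def timed_call(fn, *args):
    """Run fn(*args) once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Stand-in workload instead of an actual inference request:
result, elapsed = timed_call(sum, range(1000))
print(f"call took {elapsed * 1000:.2f} ms")
```

Such per-call latency ignores batching and parallel (throughput-mode) execution, which is why a dedicated benchmarking tool gives more representative numbers.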