To trigger more inference jobs for an existing configuration, go to the Profile tab of the Selected Configuration section. To run a single inference, select Single Inference, specify the number of inferences per stream and per batch, and click Execute:
The process starts, and the Status column of the Inference Results table shows the progress of inference generation. You can filter inferences by the number of streams, batch size, throughput, and latency by clicking the arrows in the corresponding columns.
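The throughput and latency values reported in these columns are related: when several streams run in parallel, throughput grows roughly with the number of streams and the batch size for a given per-request latency. A minimal sketch of this approximate relation (the function name and the sample values are illustrative, not part of the tool):

```python
def approx_throughput_fps(batch_size: int, num_streams: int, avg_latency_ms: float) -> float:
    """Rough throughput estimate, assuming all streams run fully in parallel:
    each stream completes one batch of `batch_size` inferences every
    `avg_latency_ms` milliseconds. Real results depend on hardware contention.
    """
    return batch_size * num_streams / (avg_latency_ms / 1000.0)

# Example: batch 1, 2 parallel streams, 10 ms average latency per request
print(approx_throughput_fps(1, 2, 10.0))  # 200.0 FPS
```

This is only a back-of-the-envelope check for reading the table; the actual measured throughput in the Inference Results table accounts for real device utilization.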
NOTE: For details about inference processes, see the Inference Engine documentation.