View Inference Results

Once an initial inference has been run with a model, sample dataset, and target, you can view performance results on the Configurations Page.


The components described below provide a visual representation of model performance on the selected dataset and help identify potential bottlenecks and areas for improvement:

Model Analyzer

The Model Analyzer generates estimated performance information for neural networks. The tool analyzes the following characteristics:

| Characteristic | Unit of Measurement | Explanation |
|---|---|---|
| Computational Complexity | GFLOPs | The number of floating-point operations required to infer the model. |
| Number of Parameters | Millions | The total number of weights in the model. |
| Minimum Memory Consumption, Maximum Memory Consumption | Millions of units | A unit depends on the precision of the model weights. For example, for an FP32 model, multiply these values by 4 bytes. |
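The conversion from "units" to actual memory can be sketched as follows. This is an illustrative helper, not part of any tool API; the precision-to-byte mapping uses standard weight sizes (FP32 = 4 bytes, FP16 = 2 bytes, INT8 = 1 byte).

```python
# Memory consumption is reported in millions of units, where one unit is
# one model weight. To get bytes, multiply by the weight size for the
# model's precision (FP32 = 4 bytes, as stated above).
BYTES_PER_WEIGHT = {"FP32": 4, "FP16": 2, "INT8": 1}

def memory_bytes(units_millions: float, precision: str) -> float:
    """Convert a consumption value in millions of units to bytes."""
    return units_millions * 1_000_000 * BYTES_PER_WEIGHT[precision]

# Example: a reported value of 25.5 million units for an FP32 model
print(memory_bytes(25.5, "FP32") / 1024**2)  # size in MiB
```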

Model analysis data is collected when the model is imported. All parameters depend on the batch size. Currently, the information is gathered for the default model batch size.

To view the analysis data, click Details next to the name of a model in the table:


The details appear on the right:


See Also