Using Shape Inference

The Inference Engine takes two kinds of model descriptions as input: an Intermediate Representation (IR) and an nGraph::Function object. Both must have fixed input shapes to be successfully loaded into the Inference Engine. To feed input data of a shape that differs from the model input shape, resize the model first.

Resizing the model at the stage of IR generation or [nGraph::Function creation](TODO: link to nGraph Function creation overview) is the recommended approach. OpenVINO™ also provides the following experimental methods for runtime model reshaping:

  1. [ EXPERIMENTAL ] Setting a new input shape with the InferenceEngine::CNNNetwork::reshape method

    The InferenceEngine::CNNNetwork::reshape method updates input shapes and propagates them down to the outputs of the model through all intermediate layers. See the Usage of Reshape Method section below for a complete walkthrough.

    Shape propagation for InferenceEngine::CNNNetwork objects created from nGraph::Function or from version 10 of the IR works through the nGraph shape inference mechanism. InferenceEngine::CNNNetwork objects created from lower IR versions are considered deprecated and may be reshaped incorrectly or give unexpected results.

    To keep the v10 IR resizable by the InferenceEngine::CNNNetwork::reshape method, convert the model with the additional Model Optimizer key --keep_shape_ops.

  2. [ EXPERIMENTAL ] Setting a new batch dimension value with the InferenceEngine::CNNNetwork::setBatchSize method

    The meaning of the model batch may vary depending on the choices you made during model design. The InferenceEngine::CNNNetwork::setBatchSize method deduces the index of the batch dimension relying only on the input rank. It does not work for models with the batch dimension placed at a non-zero index, or for models whose inputs have no batch dimension at all.

    The batch-setting algorithm does not involve the shape inference mechanism: the batch dimension of the input and output shapes of all layers is set to the new batch value without layer validation. This may cause both positive and negative side effects.

    Due to the limitations described above, this method is recommended for simple image processing models only. A minimal usage sketch follows this list.
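
The following sketch shows the batch-setting method in isolation; the model path "model.xml" and the batch value 8 are placeholders:

InferenceEngine::Core core;
// "model.xml" is a placeholder path to a model IR
InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
// Sets the batch dimension of all input and output shapes to 8,
// without shape inference or layer validation
network.setBatchSize(8);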

In practice, some models are not ready to be resized at all. In this case, a new input shape cannot be set with the Model Optimizer or with the InferenceEngine::CNNNetwork::reshape method.

Troubleshooting Resize Errors

Operation semantics may impose restrictions on the input shapes of the operation. A shape collision during shape propagation may be a sign that a new shape does not satisfy these restrictions. Changing the model input shape may result in a shape collision at an intermediate operation.

Examples of such operations:

- Reshape operation with a hard-coded output shape value
- MatMul operation with a Const second input, which cannot be resized by spatial dimensions due to the operation semantics

Model structure and logic should not change significantly after resizing.
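
A failed resize surfaces as an exception thrown by InferenceEngine::CNNNetwork::reshape, so a shape collision can be detected by guarding the call. Below is a minimal sketch, assuming network and input_shapes as prepared in the Usage of Reshape Method section:

try {
    network.reshape(input_shapes); // throws if an operation rejects the new shapes
} catch (const std::exception& error) {
    std::cerr << "Resize failed: " << error.what() << std::endl;
    // Fall back to the original shapes or fix the model at the Model Optimizer stage
}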

Usage of Reshape Method

The primary method of the feature is InferenceEngine::CNNNetwork::reshape. It takes new input shapes and propagates them from input to output through all intermediate layers of the given network. The method accepts InferenceEngine::ICNNNetwork::InputShapes, a map of pairs: the name of the input data and its new dimensions.

The algorithm for resizing a network is as follows:

1) Collect the map of input names and shapes from the Intermediate Representation (IR) using the helper method InferenceEngine::CNNNetwork::getInputShapes

2) Set new input shapes

3) Call reshape

Here is a code example:

#include <inference_engine.hpp>
#include <opencv2/opencv.hpp>

#include <string>
#include <tuple>

using namespace InferenceEngine;

// ------------- 0. Read IR and image ----------------------------------------------
Core core;
CNNNetwork network = core.ReadNetwork("path/to/IR/xml");
cv::Mat image = cv::imread("path/to/image");
const size_t batch_size = 1; // target batch size, used in step 2
// ---------------------------------------------------------------------------------
// ------------- 1. Collect the map of input names and shapes from IR --------------
auto input_shapes = network.getInputShapes();
// ---------------------------------------------------------------------------------
// ------------- 2. Set new input shapes -------------------------------------------
std::string input_name;
SizeVector input_shape;
std::tie(input_name, input_shape) = *input_shapes.begin(); // consider the first input only
input_shape[0] = batch_size; // set batch size to the first input dimension
input_shape[2] = image.rows; // set input height to match the image
input_shape[3] = image.cols; // set input width to match the image
input_shapes[input_name] = input_shape;
// ---------------------------------------------------------------------------------
// ------------- 3. Call reshape ---------------------------------------------------
network.reshape(input_shapes); // assumes an NCHW input of rank 4
// ---------------------------------------------------------------------------------
...
// ------------- 4. Load the model to the device -----------------------------------
std::string device = "CPU";
ExecutableNetwork executable_network = core.LoadNetwork(network, device);
// ---------------------------------------------------------------------------------

The Shape Inference feature is used in the Smart Classroom sample.

Extensibility

The Inference Engine provides a special mechanism that allows adding shape inference support for custom operations. To enable shape inference for custom operations, create a library with custom nGraph operations and load it into the Inference Engine.

Each nGraph operation must implement two methods:

void validate_and_infer_types() override; // infers output shapes and element types
bool visit_attributes(AttributeVisitor& visitor) override; // reads and sets all attributes of the operation
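
Below is a minimal sketch of a custom operation implementing both methods; the operation name CustomOp and its attribute alpha are hypothetical and serve only as an illustration. For an element-wise operation, shape inference simply propagates the input shape to the output:

#include <ngraph/ngraph.hpp>

class CustomOp : public ngraph::op::Op {
public:
    static constexpr ngraph::NodeTypeInfo type_info{"CustomOp", 0};
    const ngraph::NodeTypeInfo& get_type_info() const override { return type_info; }

    CustomOp() = default;
    CustomOp(const ngraph::Output<ngraph::Node>& input, int64_t alpha)
        : Op({input}), m_alpha(alpha) {
        constructor_validate_and_infer_types();
    }

    // Infers the output element type and shape: an element-wise operation
    // propagates the input shape to the output unchanged
    void validate_and_infer_types() override {
        set_output_type(0, get_input_element_type(0), get_input_partial_shape(0));
    }

    // Reads and writes the operation attributes
    bool visit_attributes(ngraph::AttributeVisitor& visitor) override {
        visitor.on_attribute("alpha", m_alpha);
        return true;
    }

    std::shared_ptr<ngraph::Node> clone_with_new_inputs(const ngraph::OutputVector& new_args) const override {
        return std::make_shared<CustomOp>(new_args.at(0), m_alpha);
    }

private:
    int64_t m_alpha = 0;
};

constexpr ngraph::NodeTypeInfo CustomOp::type_info; // required out-of-class definition before C++17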

nGraph provides the operation sets (opsets) mechanism for operation versioning. Different opsets distinguish between different versions of the same operation.

IMPORTANT: In your library, implement the InferenceEngine::IExtension::getOpSets() method that returns opsets with custom operations.
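
A minimal sketch of this method, assuming your InferenceEngine::IExtension implementation is a class named Extension, the hypothetical CustomOp operation from the sketch above, and a hypothetical opset name custom_opset:

#include <ngraph/opsets/opset.hpp>

std::map<std::string, ngraph::OpSet> Extension::getOpSets() {
    std::map<std::string, ngraph::OpSet> opsets;
    ngraph::OpSet opset;
    opset.insert<CustomOp>(); // register the custom operation in the opset
    opsets["custom_opset"] = opset;
    return opsets;
}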

When specifying opset names, follow the rules below:

- Use unique opset names that do not match the built-in ones, such as opset1 or opset2
- Use a custom opset to create a new operation or to extend functionality of an existing operation from another opset

Load your library with custom nGraph operations into the InferenceEngine::Core object using the InferenceEngine::Core::AddExtension() method.
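
For example, assuming the extension is compiled into a shared library whose path here is a placeholder:

InferenceEngine::Core core;
// "libcustom_extension.so" is a placeholder path to your compiled extension library
auto extension = std::make_shared<InferenceEngine::Extension>("libcustom_extension.so");
core.AddExtension(extension);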

Old Extensibility API

As mentioned above, the new approach to shape inference suggests creating a custom nGraph operation that contains a special method for shape inference. However, the old approach with the InferenceEngine::IShapeInferExtension interface still works for already existing custom layers. Custom shape inference functions are registered by calling InferenceEngine::ICNNNetwork::AddExtension with an implemented InferenceEngine::IShapeInferExtension, which is a holder of custom implementations. The holder requires the implementation of two key methods:

- InferenceEngine::IShapeInferExtension::getShapeInferImpl - returns the custom shape inference implementation for the given operation type
- InferenceEngine::IShapeInferExtension::getShapeInferTypes - provides all operation types for which custom implementations exist

The custom shape inference implementation itself is represented by the InferenceEngine::IShapeInferImpl::inferShapes method.

It is not possible to overwrite built-in shape inference functions: the custom operation type must be different from the supported ones. The extensibility mechanism of the Shape Inference feature is demonstrated in the Hello Shape Infer SSD sample.
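
For reference, below is a sketch of a legacy implementation under the deprecated API. The class name CustomShapeInferImpl and its pass-through logic are hypothetical, and the exact inferShapes signature may differ between Inference Engine versions:

class CustomShapeInferImpl : public InferenceEngine::IShapeInferImpl {
public:
    InferenceEngine::StatusCode inferShapes(const std::vector<InferenceEngine::Blob::CPtr>& inBlobs,
                                            const std::map<std::string, std::string>& params,
                                            const std::map<std::string, InferenceEngine::Blob::Ptr>& blobs,
                                            std::vector<InferenceEngine::SizeVector>& outShapes,
                                            InferenceEngine::ResponseDesc* resp) noexcept override {
        // Pass-through shape inference: the single output repeats the first input shape
        outShapes.push_back(inBlobs[0]->getTensorDesc().getDims());
        return InferenceEngine::StatusCode::OK;
    }
};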