Supported Framework Layers

Caffe* Supported Layers and the Mapping to the Intermediate Representation Layers

Standard Caffe* layers:

| Number | Layer Name in Caffe* | Layer Name in the Intermediate Representation |
|---|---|---|
| 1 | Input | Input |
| 2 | GlobalInput | Input |
| 3 | InnerProduct | FullyConnected |
| 4 | Dropout | Ignored, does not appear in IR |
| 5 | Convolution | Convolution |
| 6 | Deconvolution | Deconvolution |
| 7 | Pooling | Pooling |
| 8 | BatchNorm | BatchNormalization |
| 9 | LRN | Norm |
| 10 | Power | Power |
| 11 | ReLU | ReLU |
| 12 | Scale | ScaleShift |
| 13 | Concat | Concat |
| 14 | Eltwise | Eltwise |
| 15 | Flatten | Flatten |
| 16 | Reshape | Reshape |
| 17 | Slice | Slice |
| 18 | Softmax | SoftMax |
| 19 | Permute | Permute |
| 20 | ROIPooling | ROIPooling |
| 21 | Tile | Tile |
| 22 | ShuffleChannel | Reshape + Split + Permute + Concat |
| 23 | Axpy | ScaleShift + Eltwise (see the sketch after this table) |
| 24 | BN | ScaleShift |
| 25 | DetectionOutput | DetectionOutput |
| 26 | StridedSlice | StridedSlice |
| 27 | Bias | Eltwise(operation = sum) |
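
Row 23's Axpy decomposition follows from what the layer computes: Axpy produces a * X + Y, where a is a per-channel scale, so a ScaleShift handles the a * X product and an Eltwise sum adds Y. A minimal numpy sketch of that equivalence (shapes are illustrative):

```python
import numpy as np

# Axpy inputs: a per-channel scale `a`, and NCHW tensors `x` and `y`.
a = np.random.rand(1, 8, 1, 1).astype(np.float32)
x = np.random.rand(2, 8, 4, 4).astype(np.float32)
y = np.random.rand(2, 8, 4, 4).astype(np.float32)

axpy = a * x + y               # what Caffe's Axpy layer computes

scale_shift = a * x            # ScaleShift with shift = 0
eltwise_sum = scale_shift + y  # Eltwise(operation = sum)

assert np.allclose(axpy, eltwise_sum)
```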

MXNet* Supported Symbols and the Mapping to the Intermediate Representation Layers

Standard MXNet* symbols:

| Number | Symbol Name in MXNet* | Layer Name in the Intermediate Representation |
|---|---|---|
| 1 | BatchNorm | BatchNormalization |
| 2 | Crop | Crop |
| 3 | ScaleShift | ScaleShift |
| 4 | Pooling | Pooling |
| 5 | SoftmaxOutput | SoftMax |
| 6 | SoftmaxActivation | SoftMax |
| 7 | null | Ignored, does not appear in IR |
| 8 | Convolution | Convolution |
| 9 | Deconvolution | Deconvolution |
| 10 | Activation (act_type = relu) | ReLU |
| 11 | ReLU | ReLU |
| 12 | LeakyReLU | ReLU (negative_slope = 0.25) |
| 13 | Concat | Concat |
| 14 | elemwise_add | Eltwise(operation = sum) |
| 15 | _Plus | Eltwise(operation = sum) |
| 16 | Flatten | Flatten |
| 17 | Reshape | Reshape |
| 18 | FullyConnected | FullyConnected |
| 19 | UpSampling | Resample |
| 20 | transpose | Permute |
| 21 | LRN | Norm |
| 22 | L2Normalization | Normalize |
| 23 | Dropout | Ignored, does not appear in IR |
| 24 | _copy | Ignored, does not appear in IR |
| 25 | _contrib_MultiBoxPrior | PriorBox |
| 26 | _contrib_MultiBoxDetection | DetectionOutput |
| 27 | broadcast_mul | ScaleShift |
| 28 | sigmoid | sigmoid |
| 29 | Activation (act_type = tanh) | Activation (operation = tanh) |
| 30 | LeakyReLU (act_type = prelu) | PReLU |
| 31 | LeakyReLU (act_type = elu) | Activation (operation = elu) |
| 32 | elemwise_mul | Eltwise (operation = mul) |
| 33 | add_n | Eltwise (operation = sum) |
| 34 | ElementWiseSum | Eltwise (operation = sum) or ScaleShift |
| 35 | _mul_scalar | Power |
| 36 | broadcast_add | Eltwise (operation = sum) |
| 37 | slice_axis | Crop |
| 38 | Custom | Custom Layers in the Model Optimizer |
| 39 | _minus_scalar | Power |
| 40 | Pad | Pad |
| 41 | _contrib_Proposal | Proposal |
| 42 | ROIPooling | ROIPooling |
| 43 | stack | Concat |
| 44 | swapaxis | Permute |
| 45 | zeros | Const |
| 46 | rnn | TensorIterator |
| 47 | rnn_param_concat | Concat |
| 48 | slice_channel | Split |
| 49 | _maximum | Eltwise(operation = max) |
| 50 | _minimum | Power(scale = -1) + Eltwise(operation = max) + Power(scale = -1) |
| 51 | InstanceNorm | scale * (x - mean) / sqrt(variance + epsilon) + B (see the sketch after this table) |
| 52 | Embedding | Gather |
| 53 | DeformableConvolution | DeformableConvolution |
| 54 | DeformablePSROIPooling | PSROIPooling (method = deformable) |
| 55 | Where | Select |
| 56 | exp | Exp |
| 57 | slice_like | Crop |
| 58 | div_scalar | Power(power = -1) + Eltwise(operation = mul) |
| 59 | minus_scalar | Eltwise(operation = sum) + Power(scale = -1) |
| 60 | greater_scalar | Eltwise(operation = Greater) |
| 61 | elemwise_sub | Eltwise(operation = sum) + Power(scale = -1) |
| 62 | expand_dims | Unsqueeze |
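
Row 51 gives the formula the InstanceNorm conversion implements, with mean and variance taken over the spatial dimensions for every (instance, channel) pair. A numpy sketch of that formula (shapes and the epsilon value are illustrative):

```python
import numpy as np

def instance_norm(x, scale, bias, epsilon=1e-5):
    """scale * (x - mean) / sqrt(variance + epsilon) + B, as in row 51.

    x is NCHW; mean and variance are computed over H and W
    independently for every (instance, channel) pair.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)
    variance = x.var(axis=(2, 3), keepdims=True)
    return scale * (x - mean) / np.sqrt(variance + epsilon) + bias

x = np.random.rand(2, 3, 5, 5).astype(np.float32)
scale = np.random.rand(1, 3, 1, 1).astype(np.float32)
bias = np.random.rand(1, 3, 1, 1).astype(np.float32)
out = instance_norm(x, scale, bias)

# With scale = 1 and bias = 0 the per-channel spatial mean is ~0.
normalized = instance_norm(x, 1.0, 0.0)
assert np.allclose(normalized.mean(axis=(2, 3)), 0.0, atol=1e-4)
```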

TensorFlow* Supported Operations and the Mapping to the Intermediate Representation Layers

Some TensorFlow* operations do not map to any Inference Engine layer but are still supported by the Model Optimizer and can be used on the constant propagation path. These operations are labeled 'Constant propagation' in the table; the idea is illustrated below.
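
In other words, when every input of such an operation is already known at conversion time, the Model Optimizer evaluates it and stores the result as a constant in the IR instead of emitting a layer. A toy illustration of the idea, not of Model Optimizer internals, using a Shape -> Prod chain over a statically known input shape:

```python
import numpy as np

# Statically known input shape -> the whole chain folds at conversion time.
input_shape = np.array([1, 3, 224, 224], dtype=np.int64)

shape_out = input_shape        # Shape: no layer emitted, value is known
prod_out = np.prod(shape_out)  # Prod: evaluated during conversion

# Only the folded constant (150528) survives into the IR, e.g. as a
# Reshape attribute; neither Shape nor Prod appears as a layer.
print(int(prod_out))
```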

Standard TensorFlow* operations:

| Number | Operation Name in TensorFlow* | Layer Name in the Intermediate Representation |
|---|---|---|
| 1 | Transpose | Permute |
| 2 | LRN | Norm |
| 3 | Split | Split |
| 4 | SplitV | Split |
| 5 | FusedBatchNorm | ScaleShift (can be fused into Convolution or FullyConnected) |
| 6 | Relu6 | Clamp |
| 7 | DepthwiseConv2dNative | Convolution |
| 8 | ExpandDims | Unsqueeze |
| 9 | Slice | Split |
| 10 | ConcatV2 | Concat |
| 11 | MatMul | FullyConnected |
| 12 | Pack | Reshapes and Concat |
| 13 | StridedSlice | StridedSlice or Split |
| 14 | Prod | Constant propagation |
| 15 | Const | Const |
| 16 | Tile | Tile |
| 17 | Placeholder | Input |
| 18 | Pad | Fused into Convolution or Pooling layers (not supported as a single operation) |
| 19 | Conv2D | Convolution |
| 20 | Conv2DBackpropInput | Deconvolution |
| 21 | Identity | Ignored, does not appear in the IR |
| 22 | Add | Eltwise(operation = sum) or ScaleShift |
| 23 | Mul | Eltwise(operation = mul) |
| 24 | Maximum | Eltwise(operation = max) |
| 25 | Rsqrt | Power(power = -0.5) |
| 26 | Neg | Power(scale = -1) |
| 27 | Sub | Eltwise(operation = sum) + Power(scale = -1) |
| 28 | Relu | ReLU |
| 29 | AvgPool | Pooling (pool_method = avg) |
| 30 | MaxPool | Pooling (pool_method = max) |
| 31 | Mean | Pooling (pool_method = avg) (only sequential reduce dimensions are supported) |
| 32 | RandomUniform | Not supported |
| 33 | BiasAdd | Fused or converted to ScaleShift |
| 34 | Reshape | Reshape |
| 35 | Squeeze | Squeeze |
| 36 | Shape | Constant propagation (or layer generation if the "--keep_shape_ops" command line parameter has been specified) |
| 37 | Softmax | SoftMax |
| 38 | SpaceToBatchND | Supported in a pattern when converted to the Convolution layer dilation attribute; Constant propagation |
| 39 | BatchToSpaceND | Supported in a pattern when converted to the Convolution layer dilation attribute; Constant propagation |
| 40 | StopGradient | Ignored, does not appear in IR |
| 41 | Square | Constant propagation |
| 42 | Sum | Pooling (pool_method = avg) + Eltwise(operation = mul) |
| 43 | Range | Constant propagation |
| 44 | CropAndResize | ROIPooling (if the method is 'bilinear') |
| 45 | ArgMax | ArgMax |
| 46 | DepthToSpace | Reshape + Permute + Reshape (works on CPU only because of 6D tensors) |
| 47 | ExtractImagePatches | ReorgYolo |
| 48 | ResizeBilinear | Interp |
| 49 | ResizeNearestNeighbor | Resample |
| 50 | Unpack | Split + Reshape (removes the dimension being unpacked) if the number of parts equals the size along the given axis |
| 51 | AddN | Several Eltwises |
| 52 | Concat | Concat |
| 53 | Minimum | Power(scale = -1) + Eltwise(operation = max) + Power(scale = -1) (see the sketch after this table) |
| 54 | TopkV2 | TopK |
| 55 | RealDiv | Power(power = -1) and Eltwise(operation = mul) |
| 56 | SquaredDifference | Power(scale = -1) + Eltwise(operation = sum) + Power(power = 2) |
| 57 | Gather | Gather |
| 58 | GatherV2 | Gather |
| 59 | ResourceGather | Gather |
| 60 | Sqrt | Power(power = 0.5) |
| 61 | Square | Power(power = 2) |
| 62 | Pad | Pad |
| 63 | PadV2 | Pad |
| 64 | MirrorPad | Pad |
| 65 | ReverseSequence | ReverseSequence |
| 66 | ZerosLike | Constant propagation |
| 67 | Fill | Broadcast |
| 68 | Cast | Casts to the following data types are removed from the graph: float32, double, int32, int64 |
| 69 | Enter | Supported only when it is fused to the TensorIterator layer |
| 70 | Exit | Supported only when it is fused to the TensorIterator layer |
| 71 | LoopCond | Supported only when it is fused to the TensorIterator layer |
| 72 | Merge | Supported only when it is fused to the TensorIterator layer |
| 73 | NextIteration | Supported only when it is fused to the TensorIterator layer |
| 74 | TensorArrayGatherV3 | Supported only when it is fused to the TensorIterator layer |
| 75 | TensorArrayReadV3 | Supported only when it is fused to the TensorIterator layer |
| 76 | TensorArrayScatterV3 | Supported only when it is fused to the TensorIterator layer |
| 77 | TensorArraySizeV3 | Supported only when it is fused to the TensorIterator layer |
| 78 | TensorArrayV3 | Supported only when it is fused to the TensorIterator layer |
| 79 | TensorArrayWriteV3 | Supported only when it is fused to the TensorIterator layer |
| 80 | Equal | Eltwise(operation = equal) |
| 81 | Exp | Eltwise(operation = exp) |
| 82 | Greater | Eltwise(operation = greater) |
| 83 | GreaterEqual | Eltwise(operation = greater_equal) |
| 84 | Less | Eltwise(operation = less) |
| 85 | LogicalAnd | Eltwise(operation = logical_and) |
| 86 | Min | Constant propagation |
| 87 | Max | Reshape + Pooling (pool_method = max) + Reshape |
| 88 | GatherNd | Supported if it can be replaced with Gather |
| 89 | PlaceholderWithDefault | Const |
| 90 | Rank | Constant propagation |
| 91 | Round | Constant propagation |
| 92 | Sigmoid | Activation(operation = sigmoid) |
| 93 | Size | Constant propagation |
| 94 | Switch | Control flow propagation |
| 95 | Swish | Mul(x, Sigmoid(x)) |
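
Row 53's decomposition of Minimum rests on the identity min(a, b) = -max(-a, -b); the two Power(scale = -1) layers supply the negations around the Eltwise max. A quick numpy check:

```python
import numpy as np

a = np.random.randn(4, 4).astype(np.float32)
b = np.random.randn(4, 4).astype(np.float32)

# Power(scale = -1) -> Eltwise(operation = max) -> Power(scale = -1)
decomposed = -np.maximum(-a, -b)

assert np.allclose(np.minimum(a, b), decomposed)
```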

Kaldi* Supported Layers and the Mapping to the Intermediate Representation Layers

Standard Kaldi* Layers:

| Number | Layer Name in Kaldi* | Layer Name in the Intermediate Representation |
|---|---|---|
| 1 | AddShift | Will be fused or converted to ScaleShift (see the sketch after this table) |
| 2 | AffineComponent | FullyConnected |
| 3 | AffineTransform | FullyConnected |
| 4 | ConvolutionalComponent | Convolution |
| 5 | Convolutional1DComponent | Convolution |
| 6 | FixedAffineComponent | FullyConnected |
| 7 | LstmProjected | Converted to a subgraph of IR layers (diagram) |
| 8 | LstmProjectedStreams | The same as for LstmProjected |
| 9 | MaxPoolingComponent | Pooling (pool_method = max) |
| 10 | NormalizeComponent | ScaleShift |
| 11 | RectifiedLinearComponent | ReLU |
| 12 | ParallelComponent | Converted to a subgraph of IR layers (diagram) |
| 13 | Rescale | Will be fused or converted to ScaleShift |
| 14 | Sigmoid | Activation (operation = sigmoid) |
| 15 | Softmax | SoftMax |
| 16 | SoftmaxComponent | SoftMax |
| 17 | SpliceComponent | Converted to a subgraph of IR layers (diagram) |
| 18 | TanhComponent | Activation (operation = tanh) |
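
Rows 1 and 13 both say "fused or converted to ScaleShift": AddShift and Rescale are the shift-only and scale-only halves of the affine map y = scale * x + shift, so they either become a ScaleShift layer or fold into an adjacent layer's parameters. A numpy sketch of one plausible fusion, into a preceding AffineTransform/FullyConnected (the exact folding shown is an assumption for illustration, not the converter's rule):

```python
import numpy as np

x = np.random.rand(2, 8).astype(np.float32)
W = np.random.rand(8, 16).astype(np.float32)   # AffineTransform weights
b = np.random.rand(16).astype(np.float32)      # AffineTransform bias
scale = np.random.rand(16).astype(np.float32)  # Rescale parameters
shift = np.random.rand(16).astype(np.float32)  # AddShift parameters

# AffineTransform followed by separate Rescale and AddShift components...
unfused = (x @ W + b) * scale + shift

# ...equals one FullyConnected with rescaled weights and a merged bias.
fused = x @ (W * scale) + (b * scale + shift)

assert np.allclose(unfused, fused, atol=1e-5)
```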

ONNX* Supported Operators and the Mapping to the Intermediate Representation Layers

Standard ONNX* operators:

| Number | Operator Name in ONNX* | Layer Type in the Intermediate Representation |
|---|---|---|
| 1 | Add | Eltwise(operation = sum) (added 'axis' support) or ScaleShift |
| 2 | AveragePool | Pooling (pool_method = avg) |
| 3 | BatchNormalization | ScaleShift (can be fused into Convolution or FullyConnected) |
| 4 | Concat | Concat |
| 5 | Constant | Const |
| 6 | Conv | Convolution |
| 7 | ConvTranspose | Deconvolution (added auto_pad and output_shape attributes support) |
| 8 | Div | Eltwise(operation = mul)->Power |
| 9 | Dropout | Ignored, does not appear in IR |
| 10 | Elu | Activation (ELU) |
| 11 | Flatten | Reshape |
| 12 | Gemm | FullyConnected or GEMM depending on inputs |
| 13 | GlobalAveragePool | Pooling (pool_method = avg) |
| 14 | Identity | Ignored, does not appear in IR |
| 15 | LRN | Norm |
| 16 | LeakyRelu | ReLU |
| 17 | MatMul | FullyConnected |
| 18 | MaxPool | Pooling (pool_method = max) |
| 19 | Mul | Eltwise(operation = mul) (added 'axis' support) |
| 20 | Relu | ReLU |
| 21 | Reshape | Reshape |
| 22 | Shape | Constant propagation |
| 23 | Softmax | SoftMax |
| 24 | Squeeze | Squeeze |
| 25 | Sub | Power->Eltwise(operation = sum) |
| 26 | Sum | Eltwise(operation = sum) |
| 27 | Transpose | Permute |
| 28 | Unsqueeze | Reshape |
| 29 | Upsample | Resample |
| 30 | ImageScaler | ScaleShift |
| 31 | Affine | ScaleShift |
| 32 | Reciprocal | Power(power = -1) |
| 33 | Crop | Split |
| 34 | Tanh | Activation (operation = tanh) |
| 35 | Sigmoid | Activation (operation = sigmoid) |
| 36 | Pow | Power |
| 37 | ConvTranspose | Deconvolution |
| 38 | Gather | Gather |
| 39 | ConstantFill | Constant propagation |
| 40 | ReduceMean | Reshape + Pooling(pool_method = avg) + Reshape (only sequential reduce dimensions are supported) |
| 41 | ReduceSum | Reshape + Pooling(pool_method = avg) + Power(scale = reduce_dim_size) + Reshape (only sequential reduce dimensions are supported; see the sketch after this table) |
| 42 | Gather | Gather |
| 43 | Gemm | GEMM |
| 44 | GlobalMaxPool | Pooling (pool_method = max) |
| 45 | Neg | Power(scale = -1) |
| 46 | Pad | Pad |
| 47 | ArgMax | ArgMax |
| 48 | Clip | Clamp |
| 49 | DetectionOutput (Intel experimental) | DetectionOutputONNX |
| 50 | PriorBox (Intel experimental) | PriorBoxONNX |
| 51 | RNN | TensorIterator (with RNNCell in a body) |
| 52 | GRU | TensorIterator (with GRUCell in a body) |
| 53 | LSTM | TensorIterator (with LSTMCell in a body) |
| 54 | FakeQuantize (Intel experimental) | FakeQuantize |
| 55 | Erf | Erf |
| 56 | BatchMatMul | GEMM |
| 57 | SpaceToDepth | Reshape + Permute + Reshape |
| 58 | Fill | Broadcast |
| 59 | Select | Select |
| 60 | OneHot | OneHot |
| 61 | TopK | TopK |
| 62 | GatherTree | GatherTree |
| 63 | LogicalAnd | Eltwise(operation = LogicalAnd) |
| 64 | LogicalOr | Eltwise(operation = LogicalOr) |
| 65 | Equal | Eltwise(operation = Equal) |
| 66 | NotEqual | Eltwise(operation = NotEqual) |
| 67 | Less | Eltwise(operation = Less) |
| 68 | LessEqual | Eltwise(operation = LessEqual) |
| 69 | Greater | Eltwise(operation = Greater) |
| 70 | GreaterEqual | Eltwise(operation = GreaterEqual) |
| 71 | ConstantOfShape | Broadcast |
| 72 | Expand | Broadcast |
| 73 | Not | Activation (operation = not) |
| 74 | ReduceMin | ReduceMin |
| 75 | NonMaxSuppression | NonMaxSuppression |
| 76 | Floor | Activation (operation = floor) |
| 77 | Slice | Split or StridedSlice |
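
Row 41's ReduceSum decomposition works because an average pool over the reduced span, rescaled by the span's size (the Power(scale = reduce_dim_size) step), reproduces the sum: sum = mean * N. A numpy check over a single trailing axis:

```python
import numpy as np

x = np.random.rand(2, 3, 4, 5).astype(np.float32)
reduce_dim_size = x.shape[-1]      # size of the reduced axis

reduce_sum = x.sum(axis=-1, keepdims=True)

# Pooling(pool_method = avg) over the axis, then Power(scale = N):
decomposed = x.mean(axis=-1, keepdims=True) * reduce_dim_size

assert np.allclose(reduce_sum, decomposed, atol=1e-5)
```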