This is a lightweight network for face re-identification. It is based on a modified MobileNet V2 backbone that consists of 3x3 inverted residual blocks with squeeze-and-excitation attention modules. PReLU activations are used instead of the ReLU6 activations of the original MobileNet V2. The backbone is followed by global depthwise pooling and a 1x1 convolution that produce the final embedding vector. The model outputs feature vectors that should be close in cosine distance for images of the same face and far apart for images of different faces.
| Face location requirements | Tight aligned crop |
The LFW metric is the accuracy in a pairwise re-identification test. See the full benchmark description for details.
The model achieves the best results if the input face is frontally oriented and aligned. A face image is aligned if its five keypoints (left eye, right eye, tip of nose, left lip corner, right lip corner) are located at the following points in normalized coordinates [0,1]x[0,1]:
To align the face, use a landmarks regression model: build an affine transformation that maps the regressed keypoints to the reference landmarks, then apply this transformation to the input face image.
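The alignment step above can be sketched in NumPy. This is a minimal illustration, not the project's own code: it estimates a 2x3 affine matrix by least squares from the five regressed keypoints to the reference landmarks (the concrete reference coordinates are assumed to be supplied by the caller), after which the matrix can be applied to the image, for example with OpenCV's `cv2.warpAffine`.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points to dst points.

    src, dst: float arrays of shape (5, 2) with landmark coordinates
    (regressed keypoints and reference landmarks, respectively).
    """
    n = src.shape[0]
    # Homogeneous coordinates: each row is [x, y, 1].
    X = np.hstack([src, np.ones((n, 1))])          # (n, 3)
    # Solve X @ M ≈ dst for M in the least-squares sense.
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    return M.T                                     # (2, 3) affine matrix
```

The resulting matrix can then be passed to `cv2.warpAffine(face_image, M, (128, 128))` to produce the aligned crop.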
Link to performance table
Name: `data`, shape: [1x3x128x128] - An input image in the format [BxCxHxW], where: B - batch size, C - number of channels, H - image height, W - image width.
Expected color order is BGR.
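Given the input description above, preprocessing amounts to taking a 128x128 BGR image and rearranging it into the [BxCxHxW] layout. The sketch below is an assumption about a typical pipeline (resizing is left to the caller); it only changes the memory layout and adds a batch dimension.

```python
import numpy as np

def preprocess(bgr_image):
    """Convert a 128x128x3 BGR image (HWC, e.g. as loaded by OpenCV)
    into a [1, 3, 128, 128] float32 blob in BCHW layout."""
    chw = bgr_image.transpose(2, 0, 1)            # HWC -> CHW
    return chw[np.newaxis].astype(np.float32)     # add batch dim -> BCHW
```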
The net outputs a blob with the shape [1, 256, 1, 1], containing a 256-element floating-point embedding vector. Outputs for different images are comparable in cosine distance.
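Comparing two such output blobs can be sketched as follows. The decision threshold below is an assumed placeholder, not a value from this model's documentation; in practice it should be tuned on a validation set.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two [1, 256, 1, 1] embedding blobs."""
    a = a.reshape(-1)
    b = b.reshape(-1)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb1, emb2, threshold=0.5):
    # threshold=0.5 is an assumed example value; tune it for your data.
    return cosine_similarity(emb1, emb2) > threshold
```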
[*] Other names and brands may be claimed as the property of others.