OpenIMAJ contains a number of tools for face detection, recognition and similarity comparison. In particular, OpenIMAJ implements a fairly standard recognition pipeline. The pipeline consists of four stages:
1. Face Detection
2. Face Alignment
3. Facial Feature Extraction
4. Face Recognition/Classification
Each stage of the pipeline is configurable, and OpenIMAJ contains a number of different options for each stage as well as offering the possibility to easily implement more. The pipeline is designed to allow researchers to focus on a specific area of the pipeline without having to worry about the other components. At the same time, it is fairly easy to modify and evaluate a complete pipeline.
In addition to the parts of the recognition pipeline, OpenIMAJ also includes code for tracking faces in videos and comparing the similarity of faces.
Bear in mind that, as with all computer vision techniques, each stage of the pipeline has the potential to fail because of the variability of real-world images.
The first stage of the pipeline is face detection. Given an image, a face detector attempts to localise all the faces in the image. All OpenIMAJ face detectors are subclasses of FaceDetector, and they all produce subclasses of DetectedFace as their output. Currently, OpenIMAJ implements the following types of face detector (short usage sketches follow the list):
org.openimaj.image.processing.face.detection.SandeepFaceDetector:
A face detector that searches the image for areas of skin-tone that have a height/width ratio similar to the golden ratio. The detector will only find faces that are upright in the image (or upside-down).
org.openimaj.image.processing.face.detection.HaarCascadeDetector:
A face detector based on the classic Viola-Jones classifier-cascade framework. The classifier comes with a number of pre-trained models for frontal and side face views. The classifier is only mildly invariant to rotation, and it won’t detect non-upright faces.
org.openimaj.image.processing.face.detection.keypoints.FKEFaceDetector:
The Frontal Keypoint Enhanced (FKE) Face Detector is not actually a detector in its own right, but rather a wrapper around the HaarCascadeDetector. The FKE provides additional information about a face detection by finding facial keypoints on top of the face. The facial keypoints are located at stable points on the face (sides of the eyes, bottom of the nose, corners of the mouth) and can be used for alignment and feature extraction as described in the next section.
org.openimaj.image.processing.face.detection.CLMFaceDetector:
The Constrained Local Model (CLM) face detector uses an underlying HaarCascadeDetector to perform an initial face detection and then fits a statistical 3D face model to the detected region. The 3D face model can be used to locate facial keypoints within the image and also to determine the pose of the face. The model is a form of parameterised statistical shape model called a “point distribution model”; this means that the 3D model has an associated set of parameters which control elements of its shape (e.g. there are parameters that determine whether the mouth is open or closed, or how big the nose is). During the process of fitting the model to the image, these parameters are estimated automatically and are exposed through the detections (CLMDetectedFaces) returned by the CLMFaceDetector.
org.openimaj.image.processing.face.detection.IdentityFaceDetector:
The identity face detector just returns a single detection for each input image it is given; the detection covers the entire area of the input image. This is useful for working with face datasets that contain pre-extracted and cropped faces.
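To make the detection API concrete, the following minimal sketch runs a HaarCascadeDetector behind the generic FaceDetector interface and prints the bounds of each detection. The image path is a placeholder, and the minimum detection size of 40 pixels is just an illustrative choice.

    import java.io.File;
    import java.util.List;

    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.processing.face.detection.DetectedFace;
    import org.openimaj.image.processing.face.detection.FaceDetector;
    import org.openimaj.image.processing.face.detection.HaarCascadeDetector;

    public class HaarDetectionSketch {
        public static void main(String[] args) throws Exception {
            // Load a greyscale image ("faces.jpg" is a placeholder path)
            FImage image = ImageUtilities.readF(new File("faces.jpg"));

            // Any detector can sit behind the FaceDetector interface; here a
            // Viola-Jones cascade with a minimum face size of 40 pixels
            FaceDetector<DetectedFace, FImage> detector = new HaarCascadeDetector(40);

            // Every DetectedFace carries at least the bounding box of the detection
            List<DetectedFace> faces = detector.detectFaces(image);
            for (DetectedFace face : faces)
                System.out.println(face.getBounds());
        }
    }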
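Because the SandeepFaceDetector looks for skin-tone regions, it operates on colour images rather than greyscale ones. A minimal sketch, assuming the default constructor is appropriate and using a placeholder image path:

    import java.io.File;
    import java.util.List;

    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.MBFImage;
    import org.openimaj.image.processing.face.detection.DetectedFace;
    import org.openimaj.image.processing.face.detection.SandeepFaceDetector;

    public class SkinToneDetectionSketch {
        public static void main(String[] args) throws Exception {
            // The skin-tone heuristic needs colour information, so load a multi-band image
            MBFImage colour = ImageUtilities.readMBF(new File("faces.jpg")); // placeholder path

            SandeepFaceDetector detector = new SandeepFaceDetector();
            List<? extends DetectedFace> faces = detector.detectFaces(colour);

            for (DetectedFace face : faces)
                System.out.println(face.getBounds());
        }
    }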
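The FKEFaceDetector is used in the same way, but its detections additionally carry keypoints. The sketch below assumes that KEDetectedFace exposes them through getKeypoints() and that each FacialKeypoint has public type and position fields; check the Javadoc for the exact accessors. The image path is again a placeholder.

    import java.io.File;
    import java.util.List;

    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.processing.face.detection.keypoints.FKEFaceDetector;
    import org.openimaj.image.processing.face.detection.keypoints.FacialKeypoint;
    import org.openimaj.image.processing.face.detection.keypoints.KEDetectedFace;

    public class KeypointDetectionSketch {
        public static void main(String[] args) throws Exception {
            FImage image = ImageUtilities.readF(new File("faces.jpg")); // placeholder path

            // Wraps a HaarCascadeDetector and adds facial keypoints to each detection
            FKEFaceDetector detector = new FKEFaceDetector();

            List<KEDetectedFace> faces = detector.detectFaces(image);
            for (KEDetectedFace face : faces) {
                for (FacialKeypoint kp : face.getKeypoints()) {
                    // Assumed fields: the keypoint's type (e.g. eye corner) and its position
                    System.out.println(kp.type + " at " + kp.position);
                }
            }
        }
    }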
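For the CLMFaceDetector, the interesting output is the fitted model itself. The sketch below assumes that CLMDetectedFace exposes the pose through getPitch(), getYaw() and getRoll() and the fitted shape parameters through getShapeParameters(); these accessor names should be verified against the Javadoc before use.

    import java.io.File;
    import java.util.List;

    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.processing.face.detection.CLMDetectedFace;
    import org.openimaj.image.processing.face.detection.CLMFaceDetector;

    public class CLMDetectionSketch {
        public static void main(String[] args) throws Exception {
            FImage image = ImageUtilities.readF(new File("faces.jpg")); // placeholder path

            CLMFaceDetector detector = new CLMFaceDetector();
            List<CLMDetectedFace> faces = detector.detectFaces(image);

            for (CLMDetectedFace face : faces) {
                // Assumed accessors: pose angles and point-distribution-model parameters
                System.out.println("pitch=" + face.getPitch()
                        + " yaw=" + face.getYaw()
                        + " roll=" + face.getRoll());
                System.out.println(face.getShapeParameters());
            }
        }
    }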
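Finally, a sketch using the IdentityFaceDetector on a pre-cropped face image. It assumes the detector is parameterised by the image type and that the placeholder file already contains exactly one tightly cropped face.

    import java.io.File;
    import java.util.List;

    import org.openimaj.image.FImage;
    import org.openimaj.image.ImageUtilities;
    import org.openimaj.image.processing.face.detection.DetectedFace;
    import org.openimaj.image.processing.face.detection.IdentityFaceDetector;

    public class PreCroppedFaceSketch {
        public static void main(String[] args) throws Exception {
            // Assumed to already contain exactly one cropped face (placeholder path)
            FImage image = ImageUtilities.readF(new File("cropped-face.jpg"));

            // Reports the whole image as a single detection
            IdentityFaceDetector<FImage> detector = new IdentityFaceDetector<FImage>();
            List<DetectedFace> faces = detector.detectFaces(image);

            System.out.println(faces.size());             // always 1
            System.out.println(faces.get(0).getBounds()); // the full image bounds
        }
    }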