Once you have detected a face (and possibly chosen an aligner for it), you need to extract a feature that you can then use for recognition or similarity comparison. As with the detection and alignment phases, OpenIMAJ contains a number of different implementations of FacialFeatureExtractors, which produce FacialFeatures, together with methods for comparing pairs of FacialFeatures in order to get a similarity measurement. The currently implemented FacialFeatures are listed below; minimal usage sketches follow the list.
CLMPoseFeature: A feature that represents the pose of a face detected with the CLMFaceDetector. The pose consists of the pitch, roll and yaw of the face. The feature can expose itself as a DoubleFV and can be compared using a FaceFVComparator.

CLMPoseShapeFeature: A feature that represents the shape parameters and pose of a face detected with the CLMFaceDetector. The shape vector describes the shape of the face as a small set of variables, and the pose consists of the pitch, roll and yaw of the face. The feature can expose itself as a DoubleFV and can be compared using a FaceFVComparator.

CLMShapeFeature: A feature that represents the shape parameters of a face detected with the CLMFaceDetector. The shape vector describes the shape of the face as a small set of variables. The feature can expose itself as a DoubleFV and can be compared using a FaceFVComparator.

DoGSIFTFeature: A feature built by detecting local interest points on the face using a Difference-of-Gaussian pyramid, and then describing these points using SIFT features. The DoGSIFTFeatureComparator can be used to compare these features.
EigenFaceFeature: A feature built by projecting the pixels of an aligned face into a lower-dimensional space learned through PCA. The feature extractor must be “trained” with a set of example aligned faces before it can be used (see the training sketch after this list). This forms the core of the classic Eigenfaces algorithm. The feature can expose itself as a DoubleFV and can be compared using a FaceFVComparator.
FaceImageFeature: A feature built directly from the pixels of an aligned face. No normalisation is performed. The feature can expose itself as a FloatFV and can be compared using a FaceFVComparator.

FacePatchFeature: A feature built by concatenating the pixels from each of the normalised circular patches around each facial keypoint from an FKEDetectedFace. The feature can expose itself as a FloatFV and can be compared using a FaceFVComparator.
FisherFaceFeature: A feature built by projecting the pixels of an aligned face into a lower-dimensional space learned through Fisher’s Linear Discriminant Analysis. The feature extractor must be “trained” with a set of example aligned faces before it can be used. This forms the core of the classic Fisherfaces algorithm. The feature can expose itself as a DoubleFV and can be compared using a FaceFVComparator.
LocalLBPHistogram: A feature constructed by breaking the image into blocks and computing a histogram of Local Binary Patterns (LBPs) for each block. All the histograms are concatenated to form the final feature. The feature can expose itself as a FloatFV and can be compared using a FaceFVComparator.
LtpDtFeature: A feature built from Local Trinary Patterns (LTPs) within an aligned image. The features are constructed to be compared using a Euclidean Distance Transform with the LtpDtFeatureComparator or ReversedLtpDtFeatureComparator comparators.