Facial variation divides into a number of functional subspaces, together with ensemble-specific variation. An improved method of measuring these is presented, within the space defined by an Appearance Model. Initial estimates of the subspaces (lighting, pose, identity and expression) are obtained by Principal Components Analysis on appropriate groups of faces. An expectation-maximisation algorithm is then applied to the image codings to maximise the probability of coding across these non-orthogonal subspaces. Ensemble-specific variation is removed by measuring the spatial predictability of the eigenvectors and excluding those which are less predictable than the ensemble. These procedures significantly enhance identity recognition on a disjoint test set.
Nicholas Costen, Timothy F. Cootes, Gareth J. Edwards
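The sketch below is a minimal illustration (not the authors' implementation) of the two steps the abstract summarises: estimating each functional subspace by PCA on a group of faces that varies mainly in that factor, and then jointly coding a face across the resulting non-orthogonal bases. The joint least-squares solve is a simplified stand-in for the expectation-maximisation step described in the paper, and all array names and shapes are assumptions for illustration.

```python
import numpy as np


def subspace_by_pca(codings, n_modes):
    """PCA on a group of Appearance Model parameter vectors chosen to vary
    mainly in one factor (e.g. identity, expression, lighting or pose).

    codings : (n_faces, d) array of model parameters.
    Returns the group mean (d,) and the leading eigenvectors as columns (d, n_modes).
    """
    mean = codings.mean(axis=0)
    centred = codings - mean
    # SVD of the centred data gives the principal directions of the group.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes].T


def code_across_subspaces(x, mean, bases):
    """Explain a single coding x as a sum of contributions from several
    non-orthogonal subspaces.

    Stacking the (generally non-orthogonal) bases and solving one joint
    least-squares problem is a simplified surrogate for the paper's
    expectation-maximisation step.
    """
    B = np.hstack(bases)                              # (d, total_modes)
    coeffs, *_ = np.linalg.lstsq(B, x - mean, rcond=None)
    # Split the joint solution back into per-subspace coefficient vectors.
    splits = np.cumsum([b.shape[1] for b in bases])[:-1]
    return np.split(coeffs, splits)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 40                                            # illustrative coding dimension
    identity_group = rng.normal(size=(60, d))         # placeholder codings
    expression_group = rng.normal(size=(60, d))

    mean_id, basis_id = subspace_by_pca(identity_group, n_modes=5)
    mean_ex, basis_ex = subspace_by_pca(expression_group, n_modes=5)

    x = rng.normal(size=d)                            # a new face coding
    id_coeffs, ex_coeffs = code_across_subspaces(
        x, mean_id, [basis_id, basis_ex]
    )
    print(id_coeffs.shape, ex_coeffs.shape)           # (5,) (5,)
```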