In the past decade or so, subspace methods have been widely used in face recognition, generally with considerable success. Subspace approaches, however, generally assume that the training data represent the full spectrum of image variations. Unfortunately, in face recognition applications one usually has an under-represented training set. A known example is that posed by images bearing different expressions; i.e., where the facial expression in the training image and in the testing image diverge. If the goal is to recognize the identity of the person in the picture, facial expressions will be seen as distracters. Subspace methods do not address this problem successfully, because the learned feature space depends on the set of training images available, leading to poor generalization results. In this communication, we show how one can use the deformation of the face (between the training and testing images) to solve the problem defined above. To achieve this, we calculate the facial de...
Aleix M. Martínez, Yongbin Zhang
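As background for the subspace approaches discussed in the abstract, the following is a minimal sketch of a PCA-based subspace projection (the classical "eigenfaces" construction), not the authors' proposed method. All data, dimensions, and variable names here are illustrative assumptions; a real system would use vectorized face images rather than random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 20 flattened "face images" of 64 pixels each.
# In practice each row would be a vectorized face image.
X = rng.normal(size=(20, 64))

# Center the data: subspace methods model variation about the mean face.
mean_face = X.mean(axis=0)
Xc = X - mean_face

# PCA via SVD: the rows of Vt are the principal directions ("eigenfaces").
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep a low-dimensional subspace (here 5 components, chosen arbitrarily).
k = 5
basis = Vt[:k]                                # shape (5, 64)

# Project a new test image into the learned subspace.
test_image = rng.normal(size=64)
coords = basis @ (test_image - mean_face)     # 5-dim feature vector

# Reconstruct from the subspace; the residual measures what the training
# set failed to represent (e.g., an expression absent from training).
reconstruction = mean_face + basis.T @ coords
residual = np.linalg.norm(test_image - reconstruction)
print(coords.shape, residual)
```

The residual illustrates the abstract's point: when the training set under-represents a variation such as facial expression, the learned subspace cannot reconstruct it, and recognition based on the projected features degrades.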