Automatic extraction of facial feature deformations (whether due to identity change or expression) is a challenging task and could form the basis of a facial expression interpretation system. As a starting point, we use Active Appearance Models and the simultaneous inverse compositional algorithm to extract facial deformations, and we propose a modified version that addresses the problem of facial appearance variation in an efficient manner. Accounting for significant variation in facial appearance is a first step toward a realistic facial feature deformation extraction system able to adapt to a new face or to track a face under changing video conditions. Moreover, in order to evaluate fittings, we design an experimental protocol that takes human annotation inaccuracies into account when building the ground truth.
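For reference, the baseline fitting step relied upon here can be sketched by the standard simultaneous inverse compositional objective from the AAM literature (the mean appearance $A_0$, appearance basis images $A_i$, appearance parameters $\boldsymbol{\lambda}$, warp $W(\mathbf{x};\mathbf{p})$ with shape parameters $\mathbf{p}$, and input image $I$ follow the usual Baker--Matthews notation and are not defined in this abstract; the modified version proposed in the paper is not shown):
\begin{equation*}
\min_{\Delta\mathbf{p},\,\Delta\boldsymbol{\lambda}} \;
\sum_{\mathbf{x}} \Big[ A_0\big(W(\mathbf{x};\Delta\mathbf{p})\big)
+ \sum_{i=1}^{m} (\lambda_i + \Delta\lambda_i)\, A_i\big(W(\mathbf{x};\Delta\mathbf{p})\big)
- I\big(W(\mathbf{x};\mathbf{p})\big) \Big]^2 ,
\end{equation*}
solved jointly for $\Delta\mathbf{p}$ and $\Delta\boldsymbol{\lambda}$ at each iteration, followed by the inverse compositional updates $W(\mathbf{x};\mathbf{p}) \leftarrow W(\mathbf{x};\mathbf{p}) \circ W(\mathbf{x};\Delta\mathbf{p})^{-1}$ and $\boldsymbol{\lambda} \leftarrow \boldsymbol{\lambda} + \Delta\boldsymbol{\lambda}$.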