We introduce a procedure for estimating high-level facial animation parameters from a marker-less image sequence in the presence of strong illumination changes. An efficient appearance-based tracker stabilises the face images and estimates the illumination variation. This is achieved with an appearance model composed of two independent linear subspaces, modelling face deformation and illumination changes respectively. The system is very simple to train and is able to re-animate a 3D face model in real time.
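As a rough illustration of the two-subspace appearance model described above, the following sketch (names, basis construction, and fitting method are assumptions, not taken from the paper) represents an image as a mean appearance plus independent linear contributions from a deformation basis and an illumination basis, and recovers both sets of coefficients jointly by least squares:

```python
import numpy as np

# Hypothetical sketch: a stabilised face image is modelled as a mean
# appearance plus contributions from two independent linear subspaces,
# one for deformation (B_def) and one for illumination (B_illum).
# All names and dimensions here are illustrative assumptions.
rng = np.random.default_rng(0)

n_pixels, k_def, k_illum = 1000, 5, 3
mean_app = rng.normal(size=n_pixels)

# Orthonormal bases for each subspace (in a real system these might be
# learned, e.g. by PCA on training images).
B_def, _ = np.linalg.qr(rng.normal(size=(n_pixels, k_def)))
B_illum, _ = np.linalg.qr(rng.normal(size=(n_pixels, k_illum)))

# Synthesise an observed image from known coefficients.
c_def_true = rng.normal(size=k_def)
c_illum_true = rng.normal(size=k_illum)
image = mean_app + B_def @ c_def_true + B_illum @ c_illum_true

# Fit both coefficient sets jointly by least squares; a tracker would
# solve this per frame to separate deformation from illumination.
B = np.hstack([B_def, B_illum])
coeffs, *_ = np.linalg.lstsq(B, image - mean_app, rcond=None)
c_def_est, c_illum_est = coeffs[:k_def], coeffs[k_def:]
```

Because the stacked basis has full column rank and the synthetic image lies exactly in its span, the estimated coefficients match the true ones; with real images the fit would instead minimise the residual appearance error.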