The synthesis of facial expressions with control over intensity and personal style is important for intelligent and affective human-computer interaction, especially in face-to-face interaction between humans and intelligent agents. We present a facial expression animation system that facilitates control of expressiveness and style. We learn a decomposable generative model for the nonlinear deformation of facial expressions by analyzing the mapping space between a low-dimensional embedded representation and high-resolution tracking data. Bilinear analysis of this mapping space yields a compact representation of the nonlinear generative model for facial expressions. The decomposition allows synthesis of new facial expressions through control of geometry and expression style. The generative model provides control of expressiveness with simple parameters while preserving the nonlinear deformation in the expressions, and it allows synthesis of stylized facial geometry. In addition, we can directly extract the MPEG...
Chan-Su Lee, Ahmed M. Elgammal, Dimitris N. Metaxas
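
To illustrate the kind of bilinear analysis described above, the following is a minimal sketch (not the authors' implementation) of an asymmetric style/content factorization applied to stacked mapping coefficients; the array sizes, variable names, and the plain SVD-based factorization are all illustrative assumptions.

```python
# Minimal sketch of a bilinear (style x expression) factorization of
# hypothetical nonlinear-mapping coefficient vectors, assuming one
# coefficient vector per (style, expression) pair.
import numpy as np

rng = np.random.default_rng(0)

n_styles, n_expressions, coeff_dim = 4, 6, 50    # hypothetical sizes
# coeffs[s, e]: flattened mapping coefficients learned for style s
# (a person's expression style) performing expression e.
coeffs = rng.standard_normal((n_styles, n_expressions, coeff_dim))

# Stack style-wise: rows indexed by (style, coefficient), columns by expression.
Y = coeffs.transpose(0, 2, 1).reshape(n_styles * coeff_dim, n_expressions)

# SVD separates style-specific bases from expression (content) vectors.
k = 3                                            # retained bilinear dimension
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
style_bases = (U[:, :k] * s[:k]).reshape(n_styles, coeff_dim, k)
expression_vectors = Vt[:k, :]                   # one k-vector per expression

# Synthesis: combine a chosen style basis with a blend of expression
# vectors (e.g. interpolating between two expressions to vary intensity).
alpha = 0.5
blend = alpha * expression_vectors[:, 0] + (1 - alpha) * expression_vectors[:, 1]
new_coeffs = style_bases[2] @ blend              # coefficients for style 2
print(new_coeffs.shape)                          # (coeff_dim,)
```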