A probabilistic model for generating realistic lip movements from speech

The present work aims to model the correspondence between facial motion and speech. The face and the sound are modelled separately, with phonemes providing the link between the two. We propose a sequential model and evaluate its suitability for generating facial animation from a sequence of phonemes, which we obtain from speech. We evaluate the results both by computing the error between generated sequences and real video, and by a rigorous double-blind test with human subjects. Experiments show that our model compares favourably to existing methods and that the generated sequences are comparable to real video sequences.
Gwenn Englebienne, Tim Cootes, Magnus Rattray
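
The abstract describes a phoneme-driven generative pipeline: phonemes are extracted from speech and a sequential model maps the phoneme sequence to lip motion. The following is a minimal illustrative sketch of such a phoneme-to-lip-parameter pipeline, not the authors' actual model; the phoneme inventory, the parameter dimensionality, the per-phoneme Gaussians and the simple autoregressive smoothing are all assumptions made purely for illustration.

# Minimal illustrative sketch (NOT the paper's model): generate a smooth
# trajectory of lip-shape parameters from a frame-level phoneme sequence.
# The phoneme set, dimensionality and all numeric values are invented.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["sil", "p", "aa", "m"]   # toy phoneme inventory (assumption)
N_SHAPE_PARAMS = 4                   # e.g. a few appearance-model modes (assumption)

# Per-phoneme emission statistics: mean lip-shape parameters and their spread.
means = {p: rng.normal(size=N_SHAPE_PARAMS) for p in PHONEMES}
stds = {p: np.full(N_SHAPE_PARAMS, 0.1) for p in PHONEMES}

def generate_lip_trajectory(phoneme_frames, smooth=0.7):
    """Return one lip-shape parameter vector per video frame.

    phoneme_frames: phoneme label for each frame (e.g. from forced alignment).
    smooth: first-order autoregressive factor standing in for the temporal
            dependencies a trained sequential model would capture.
    """
    traj = np.zeros((len(phoneme_frames), N_SHAPE_PARAMS))
    state = means[phoneme_frames[0]].copy()
    for t, ph in enumerate(phoneme_frames):
        target = rng.normal(means[ph], stds[ph])          # sample for this phoneme
        state = smooth * state + (1.0 - smooth) * target  # blend toward the target
        traj[t] = state
    return traj

if __name__ == "__main__":
    frames = ["sil"] * 5 + ["p"] * 3 + ["aa"] * 8 + ["m"] * 4 + ["sil"] * 5
    print(generate_lip_trajectory(frames).shape)  # (25, 4)

The sketch replaces a learned temporal model with simple exponential smoothing so that consecutive frames vary gradually; evaluating such output against real video (e.g. per-frame error, human judgement) would follow the evaluation strategy the abstract outlines.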
Type: Conference
Year: 2007
Where: NIPS