Creating a Speech Enabled Avatar from a Single Photograph

This paper presents a complete framework for creating a speech-enabled avatar from a single image of a person. Our approach uses a generic facial motion model which represents deformations of a prototype face during speech. We have developed an HMM-based facial animation algorithm which takes into account both lexical stress and coarticulation. This algorithm produces realistic animations of the prototype facial surface from either text or speech. The generic facial motion model can be transformed to a novel face geometry using a set of corresponding points between the prototype face surface and the novel face. Given a face photograph, a small number of manually selected features in the photograph are used to deform the prototype face surface. The deformed surface is then used to animate the face in the photograph. We show several examples of avatars that are driven by text and speech inputs.
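To make the landmark-driven deformation step concrete, the sketch below shows one plausible way to warp a prototype face surface so that a few manually selected prototype landmarks move onto the corresponding features picked in the photograph. It is not the paper's implementation: the thin-plate-spline interpolation, the function name deform_prototype, and the 2D image-plane coordinates are all assumptions made for illustration.

import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_prototype(prototype_vertices, prototype_landmarks, photo_landmarks):
    """Warp the full prototype mesh from a sparse set of correspondences.

    prototype_vertices  : (V, 2) prototype mesh vertices (image-plane coords)
    prototype_landmarks : (K, 2) manually selected points on the prototype face
    photo_landmarks     : (K, 2) corresponding points in the photograph
    returns             : (V, 2) deformed vertex positions
    """
    # Displacement observed at each landmark correspondence.
    displacements = photo_landmarks - prototype_landmarks
    # Thin-plate-spline interpolation of the sparse displacement field
    # (a stand-in for whatever mapping the paper actually uses).
    warp = RBFInterpolator(prototype_landmarks, displacements,
                           kernel="thin_plate_spline")
    # Apply the interpolated displacement to every prototype vertex.
    return prototype_vertices + warp(prototype_vertices)

# Example with hypothetical landmark picks (eye corners, nose tip, mouth corners):
proto_lms = np.array([[120., 140.], [200., 140.], [160., 200.], [130., 250.], [190., 250.]])
photo_lms = np.array([[118., 150.], [205., 148.], [162., 210.], [128., 262.], [194., 260.]])
verts = np.random.rand(500, 2) * 320.0   # stand-in for the prototype mesh vertices
warped = deform_prototype(verts, proto_lms, photo_lms)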
Added: 01 Jun 2010
Updated: 01 Jun 2010
Type: Conference
Year: 2008
Where: VR (IEEE)
Authors: Dmitri Bitouk, Shree K. Nayar