
INTERSPEECH 2010

Active appearance models for photorealistic visual speech synthesis

The perceived quality of a synthetic visual speech signal depends greatly on the smoothness of the rendered visual articulators. This paper explains how concatenative visual speech synthesis systems can apply active appearance models to achieve smooth and natural visual output speech. By modeling the visual speech contained in the system's speech database, the synthesis of the shape and the texture of the talking head can be handled separately. This separation allows the system to balance the articulation strength of the visual articulators against the smoothness of the visual signal in order to optimize the synthesis. To further improve synthesis quality, an automatic database normalization strategy was designed that removes variations from the database that are not related to speech production. A perception experiment verified that this normalization strategy significantly improves the perceived signal quality.
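The separate handling of shape and texture streams described in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's actual system: the parameter dimensions, trajectories, and smoothing windows are all hypothetical, and a simple moving average stands in for whatever smoothing the authors use.

```python
import numpy as np

def smooth_trajectory(params, window):
    """Moving-average smoothing along the time axis of a (frames x dims) array."""
    kernel = np.ones(window) / window
    # Pad at the edges so the output has the same number of frames as the input.
    pad = (window // 2, window - 1 - window // 2)
    padded = np.pad(params, (pad, (0, 0)), mode="edge")
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, padded
    )

# Hypothetical AAM parameter trajectories for a concatenated segment:
# 50 frames, 8 shape parameters and 20 texture parameters.
rng = np.random.default_rng(0)
shape_params = rng.normal(size=(50, 8))
texture_params = rng.normal(size=(50, 20))

# Because shape and texture are modeled separately, each stream can be
# smoothed with its own strength: light smoothing preserves articulation
# in the shape, while heavier smoothing suppresses texture flicker at
# concatenation joins.
shape_smooth = smooth_trajectory(shape_params, window=3)
texture_smooth = smooth_trajectory(texture_params, window=7)
```

The point of the separation is visible in the two window sizes: the trade-off between articulation strength and signal smoothness can be tuned per stream instead of with a single global setting.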
Wesley Mattheyses, Lukas Latacz, Werner Verhelst
Added 18 May 2011
Updated 18 May 2011
Type Conference
Year 2010
Where INTERSPEECH