The present work aims to model the correspondence between facial motion and speech. The face and sound are modelled separately, with phonemes being the link between both. We propo...
The use of visual information from lip movements can improve the accuracy and robustness of a speech recognition system. Accurate extraction of visual features associated with the...
Alan Wee-Chung Liew, Shu Hung Leung, Wing Hong Lau
In this paper, we propose a novel correlation based method for speech-video synchronization (synch) and relationship classification. The method uses the envelope of the speech sig...
Different kinds of articulators, such as the upper and lower lips, jaw, and tongue, are precisely coordinated in speech production. Based on a perturbation study of the production ...
In this paper we present a trace-driven framework capable of building realistic mobility models for the simulation studies of mobile systems. With the goal of realism, this framew...
Jungkeun Yoon, Brian D. Noble, Mingyan Liu, Minkyo...