AMFG
2003
IEEE

Boosted Audio-Visual HMM for Speech Reading

We propose a new approach for combining acoustic and visual measurements to aid in recognizing the lip shapes of a person speaking. Our method relies on computing the maximum likelihoods of (a) an HMM used to model phonemes from the acoustic signal, and (b) an HMM used to model visual feature motions from video. One significant addition in this work is dynamic analysis with features selected by AdaBoost on the basis of their discriminative ability. This form of integration, leading to a boosted HMM, lets AdaBoost find the best features first, and then uses the HMM to exploit the dynamic information inherent in the signal.
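The fusion step the abstract describes (a per-stream HMM likelihood for audio and for video, combined into one score) can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the toy model parameters, the fusion weight `w`, the function names, and the discrete-observation alphabet are all hypothetical, and the AdaBoost feature-selection stage is omitted.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the standard forward algorithm.
    Assumes all probabilities reached along the sequence are nonzero."""
    n = len(start)
    # Initialize: log P(state i, first observation)
    alpha = [math.log(start[i] * emit[i][obs[0]]) for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [
            math.log(sum(math.exp(alpha[i]) * trans[i][j] for i in range(n))
                     * emit[j][obs[t]])
            for j in range(n)
        ]
    # Marginalize over the final state.
    return math.log(sum(math.exp(a) for a in alpha))

def fused_score(audio_obs, visual_obs, audio_hmm, visual_hmm, w=0.5):
    """Weighted combination of the audio and visual log-likelihoods;
    the weight w is a free parameter, not a value from the paper."""
    return (w * forward_log_likelihood(audio_obs, *audio_hmm)
            + (1.0 - w) * forward_log_likelihood(visual_obs, *visual_hmm))

# Toy 2-state models over a binary symbol alphabet (illustrative only):
# each model is (start probs, transition matrix, emission matrix).
hmm_open  = ([0.6, 0.4], [[0.7, 0.3], [0.3, 0.7]],
             [[0.9, 0.1], [0.8, 0.2]])   # mostly emits symbol 0
hmm_round = ([0.5, 0.5], [[0.6, 0.4], [0.4, 0.6]],
             [[0.1, 0.9], [0.2, 0.8]])   # mostly emits symbol 1

audio = [0, 0, 1, 0]
video = [0, 0, 0, 0]
# The "open" model should explain these mostly-0 streams better,
# so classification picks the model with the higher fused score.
score_open  = fused_score(audio, video, hmm_open,  hmm_open)
score_round = fused_score(audio, video, hmm_round, hmm_round)
```

In a full system, AdaBoost would first rank candidate visual features by discriminative ability, and only the selected features would feed the visual HMM.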
Pei Yin, Irfan A. Essa, James M. Rehg