
AIHC 2007, Springer

Audio-Visual Spontaneous Emotion Recognition

Automatic multimodal recognition of spontaneous emotional expressions is a largely unexplored and challenging problem. In this paper, we explore audio-visual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI). Based on the assumption that facial and vocal expressions reflect the same coarse affective state, positive and negative emotion sequences are labeled according to the Facial Action Coding System. Facial texture in the visual channel and prosody in the audio channel are integrated in the framework of an Adaboost multi-stream hidden Markov model (AdaMHMM), in which the Adaboost learning scheme is used to build the component HMM fusion. Our approach is evaluated in spontaneous emotion recognition experiments on the AAI data.
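
The abstract describes the fusion framework only at a high level. As a rough illustration of the general idea of multi-stream HMM fusion (a weighted combination of per-stream, per-class log-likelihoods), the Python sketch below uses fixed stream weights; the stream_loglik placeholder, its toy values, and the example weights are assumptions for illustration only, not the paper's AdaMHMM procedure, in which the stream combination is learned with Adaboost.

# Minimal sketch of weighted multi-stream HMM fusion for binary
# (positive vs. negative) emotion classification. The per-stream
# log-likelihoods would normally come from HMMs trained on
# facial-texture and prosody features; here they are stubbed with
# toy values so the fusion logic runs on its own. The fixed weights
# below stand in for the Adaboost-learned combination described in
# the paper and are an assumption, not the actual AdaMHMM training.

CLASSES = ["positive", "negative"]
STREAMS = ["face", "prosody"]

def stream_loglik(stream, label, sequence):
    """Placeholder for log P(sequence | class-specific HMM of this stream)."""
    toy = {
        ("face", "positive"): -95.0, ("face", "negative"): -102.0,
        ("prosody", "positive"): -101.0, ("prosody", "negative"): -98.0,
    }
    return toy[(stream, label)]

def classify(sequence, weights):
    """Pick the class with the highest weighted sum of per-stream log-likelihoods."""
    scores = {
        label: sum(weights[s] * stream_loglik(s, label, sequence) for s in STREAMS)
        for label in CLASSES
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Hypothetical weights favouring the visual stream.
    print(classify(sequence=None, weights={"face": 0.6, "prosody": 0.4}))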
Type: Conference
Year: 2007
Where: AIHC
Authors: Zhihong Zeng, Yuxiao Hu, Glenn I. Roisman, Zhen Wen, Yun Fu, Thomas S. Huang