Analysis of emotion recognition using facial expressions, speech and multimodal information

The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision-level and feature-level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expressi...
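
The two integration schemes mentioned in the abstract can be illustrated with a short, hypothetical Python sketch. This is not the authors' implementation: the feature arrays, the scikit-learn SVM classifiers, and the product-rule combiner are assumptions made purely for illustration.

import numpy as np
from sklearn.svm import SVC

# labels: one of the four emotion classes studied in the paper
# (sadness, anger, happiness, neutral).

def feature_level_fusion(face_feats, speech_feats, labels):
    """Feature-level integration: concatenate the facial and acoustic
    feature vectors and train a single classifier on the fused vector."""
    fused = np.hstack([face_feats, speech_feats])
    return SVC(probability=True).fit(fused, labels)

def decision_level_fusion(face_feats, speech_feats, labels):
    """Decision-level integration: train one classifier per modality and
    combine their class posteriors at prediction time."""
    face_clf = SVC(probability=True).fit(face_feats, labels)
    speech_clf = SVC(probability=True).fit(speech_feats, labels)

    def predict(face_x, speech_x):
        # Product rule over the per-modality posteriors (one common
        # combiner; averaging or weighted voting are alternatives).
        p = face_clf.predict_proba(face_x) * speech_clf.predict_proba(speech_x)
        return face_clf.classes_[np.argmax(p, axis=1)]

    return predict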
Added 01 Jul 2010
Updated 01 Jul 2010
Type Conference
Year 2004
Where ICMI
Authors Carlos Busso, Zhigang Deng, Serdar Yildirim, Murtaza Bulut, Chul Min Lee, Abe Kazemzadeh, Sungbok Lee, Ulrich Neumann, Shrikanth Narayanan