Multimodal speech and speaker modeling and recognition are widely accepted as vital aspects of state-of-the-art human-machine interaction systems. While correlations between speec...
Mehmet Emre Sargin, Oya Aran, Alexey Karpov, Ferda...
Recently, multimodal and affective technologies have been adopted to support expressive and engaging interaction, bringing up a plethora of new research questions. Among the chall...
Lassi A. Liikkanen, Giulio Jacucci, Eero Huvio, To...
In this paper we present DaFEx (Database of Facial Expressions), a database created to provide a benchmark for evaluating the facial expressivity of Embo...
This paper presents an audio-visual emotion database that can be used as a reference database for testing and evaluating video, audio, or joint audio-visual emotion recognition alg...
O. Martin, Irene Kotsia, Benoit M. Macq, Ioannis P...
In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is computer-generated human correspon...