Sciweavers

67 search results - page 8 / 14
» Visual Acoustic Emotion Recognition
TMM 2010
A 3-D Audio-Visual Corpus of Affective Communication
Communication between humans relies deeply on the capability to express and recognize feelings. For this reason, research on human-machine interaction needs to focus on the re...
Gabriele Fanelli, Jürgen Gall, Harald Romsdor...
CVPR 2004, IEEE
Asymmetrically Boosted HMM for Speech Reading
Speech reading, also known as lip reading, aims to extract visual cues from lip and facial movements to aid speech recognition. The main hurdle for speech reading is th...
Pei Yin, Irfan A. Essa, James M. Rehg
ROMAN 2007, IEEE
Real-time acoustic source localization in noisy environments for human-robot multimodal interaction
Interaction between humans involves a plethora of sensory information, both in the form of explicit communication as well as more subtle unconsciously perceived signals. In ord...
Vlad M. Trifa, Ansgar Koene, Jan Morén, Gor...
ICMCS 2006, IEEE
Acoustically-Driven Talking Face Synthesis using Dynamic Bayesian Networks
Dynamic Bayesian Networks (DBNs) have been widely studied in multi-modal speech recognition applications. Here, we introduce DBNs into an acoustically-driven talking face synthesi...
Jianxia Xue, Jonas Borgstrom, Jintao Jiang, Lynne ...
HCI 2007
Recognition of Affect Conveyed by Text Messaging in Online Communication
In this paper, we address the task of affect recognition from text messaging. In order to sense and interpret emotional information expressed through written language, ru...
Alena Neviarouskaya, Helmut Prendinger, Mitsuru Is...