Sciweavers

19 search results - page 3 / 4
IVA 2010, Springer
Realizing Multimodal Behavior - Closing the Gap between Behavior Planning and Embodied Agent Presentation
Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression...) is challenging. It requires a high degree of animation control...
Michael Kipp, Alexis Heloir, Marc Schröder, P...
LRE 2008
IEMOCAP: interactive emotional dyadic motion capture database
Since emotions are expressed through a combination of verbal and non-verbal channels, a joint analysis of speech and gestures is required to understand expressive human communicati...
Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kaz...
VIP 2003
Face and Body Gesture Recognition for a Vision-Based Multimodal Analyzer
For computers to interact intelligently with human users, they should be able to recognize emotions by analyzing the human’s affective state, physiology, and behavior. I...
Hatice Gunes, Massimo Piccardi, Tony Jan
ICRA 2007, IEEE
Emotional Architecture for the Humanoid Robot Head ROMAN
Humanoid robots used as assistance or educational robots are an important research topic in the field of robotics. In particular, the communication of these robots with a human operator...
Jochen Hirth, Norbert Schmitz, Karsten Berns
AGENTS 2000, Springer
Experimental assessment of the effectiveness of synthetic personae for multi-modal e-retail applications
This paper details the results of an experiment to empirically evaluate the effectiveness and user acceptability of human-like synthetic agents in a multi-modal electronic retail scen...
Helen McBreen, Paul Shade, Mervyn A. Jack, Peter J...