Sciweavers

70 search results - page 7 / 14
Search: Emotional Facial Expression Classification for Multimodal Us...
CHI 2009, ACM
Designing CALLY, a cell-phone robot
This proposal describes the early phase of our design process for developing a robot cell-phone named CALLY, with which we are exploring the roles of facial and gestural expressions o...
Ji-Dong Yim, Christopher D. Shaw
ATAL 2008, Springer
Trackside DEIRA: a dynamic engaging intelligent reporter agent
DEIRA is a virtual agent commenting on virtual horse races in real time. DEIRA analyses the state of the race, acts emotionally and comments about the situation in a believable an...
François L. A. Knoppel, Almer S. Tigelaar, ...
AVI 2008
Exploring emotions and multimodality in digitally augmented puppeteering
Recently, multimodal and affective technologies have been adopted to support expressive and engaging interaction, bringing up a plethora of new research questions. Among the chall...
Lassi A. Liikkanen, Giulio Jacucci, Eero Huvio, To...
ICIP 1999, IEEE
Multimodal Interaction in Collaborative Virtual Environments
Human interfaces for computer graphics systems are now evolving towards a total multi-modal approach. Information gathered using visual, audio and motion capture systems is now b...
Taro Goto, Marc Escher, Christian Zanardi, Nadia M...
IJVR 2007
Towards Sociable Virtual Humans: Multimodal Recognition of Human Input and Behavior
One of the biggest obstacles to constructing effective sociable virtual humans lies in the failure of machines to recognize the desires, feelings and intentions of the human us...
Christian Eckes, Konstantin Biatov, Frank Hül...