Search results for "Multimodal expression in virtual humans"
LDVF
2000
The instructible agent Lokutor
In this paper, we describe Lokutor, a virtual human. Lokutor is a partially autonomous agent inhabiting a 3D virtual environment. The agent can be controlled via natural language ...
Jan-Torsten Milde
AIHC
2007
Springer
Audio-Visual Spontaneous Emotion Recognition
Automatic multimodal recognition of spontaneous emotional expressions is a largely unexplored and challenging problem. In this paper, we explore audio-visual emotion recognition in...
Zhihong Zeng, Yuxiao Hu, Glenn I. Roisman, Zhen We...
ICMI
2009
Springer
Static vs. dynamic modeling of human nonverbal behavior from multiple cues and modalities
Human nonverbal behavior recognition from multiple cues and modalities has attracted a lot of interest in recent years. Despite the interest, many research questions, including th...
Stavros Petridis, Hatice Gunes, Sebastian Kaltwang...
CHI
2006
ACM
Attention meter: a vision-based input toolkit for interaction designers
This paper presents Attention Meter, a vision-based input toolkit for visual artists and designers developing interactive art installations. This toolkit creates a text interpretati...
Chia-Hsun Jackie Lee, Chiun-Yi Ian Jang, Ting-Han ...
ICNC
2010
Springer
Emotional talking agent: System and evaluation
In this paper, we introduce a system that synthesizes emotional audio-visual speech for a 3-D talking agent by adopting the PAD (Pleasure-Arousal-Dominance) emotional model. A ...
Shen Zhang, Jia Jia, Yingjin Xu, Lianhong Cai