Sciweavers

11 search results - page 2 / 3
» Evaluating models of speaker head nods for virtual agents
ACCV
2010
Springer
Social Interactive Human Video Synthesis
In this paper, we propose a computational model for social interaction among three people in a conversation, and demonstrate results using human video motion synthesis. We utilis...
Dumebi Okwechime, Eng-Jon Ong, Andrew Gilbert, Ric...
IVA
2007
Springer
Emotionally Expressive Head and Body Movement During Gaze Shifts
Current state-of-the-art virtual characters fall far short of characters produced by skilled animators. One reason for this is that the physical behaviors of virtual characters...
Brent Lance, Stacy Marsella
IVA
2009
Springer
Should Agents Speak Like, um, Humans? The Use of Conversational Fillers by Virtual Agents
We describe the design and evaluation of an agent that uses the fillers "um" and "uh" in its speech. We describe an empirical study of human-human dialogue, analyzing gaze behavior dur...
Laura M. Pfeifer, Timothy W. Bickmore
ATAL
2008
Springer
Politeness and alignment in dialogues with a virtual guide
Language alignment happens automatically in dialogues between human speakers. The ability to align is expected to increase the believability of virtual dialogue ...
Markus de Jong, Mariët Theune, Dennis Hofs
ATAL
2008
Springer
A model of gaze for the purpose of emotional expression in virtual embodied agents
Currently, state-of-the-art virtual agents lack the ability to display emotion as seen in actual humans, or even in hand-animated characters. One reason for the emotional inexpres...
Brent J. Lance, Stacy Marsella