Sciweavers

247 search results - page 32 / 50
» Multimodal expression in virtual humans
ICMI
2003
Springer
Auditory, graphical and haptic contact cues for a reach, grasp, and place task in an augmented environment
An experiment was conducted to investigate how performance of a reach, grasp and place task was influenced by added auditory and graphical cues. The cues were presented at points ...
Mihaela A. Zahariev, Christine L. MacKenzie
LREC
2010
The GIVE-2 Corpus of Giving Instructions in Virtual Environments
We present the GIVE-2 Corpus, a new corpus of human instruction giving. The corpus was collected by asking one person in each pair of subjects to guide the other person towards co...
Andrew Gargett, Konstantina Garoufi, Alexander Kol...
ATAL
2009
Springer
Using rituals to express cultural differences in synthetic characters
There is an ongoing demand for richer Intelligent Virtual Environments (IVEs) populated with socially intelligent agents. As a result, many agent architectures are taking ...
Samuel Mascarenhas, João Dias, Nuno Afonso,...
IWC
2008
I hate you! Disinhibition with virtual partners
This paper presents a descriptive lexical analysis of spontaneous conversations between users and the 2005 Loebner Prize-winning chatterbot, Jabberwacky. The study was motivated i...
Antonella De Angeli, Sheryl Brahnam
ACMACE
2007
ACM
Gaze-based infotainment agents
We propose an infotainment presentation system that relies on eye gaze as an intuitive and unobtrusive input modality. The system analyzes eye movements in real time to infer user...
Helmut Prendinger, Tobias Eichner, Elisabeth André...