Sciweavers

194 search results - page 17 / 39
Search: Multimodality and Gestures in the Teacher

JVCA 2006
Multimodal expression in virtual humans
This work proposes a real-time virtual human multimodal expression model. Five modalities explore the affordances of the body: deterministic, non-deterministic, gesticulation, faci...
Celso de Melo, Ana Paiva

IUI 2006, ACM
Head gesture recognition in intelligent interfaces: the role of context in improving recognition
Acknowledging an interruption with a nod of the head is a natural and intuitive communication gesture which can be performed without significantly disturbing a primary interface ...
Louis-Philippe Morency, Trevor Darrell

AAAI 2006
The Role of Context in Head Gesture Recognition
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...

ICMI 2005, Springer
A user interface framework for multimodal VR interactions
This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing compon...
Marc Erich Latoschik

ICMI 2005, Springer
Inferring body pose using speech content
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...
Sy Bor Wang, David Demirdjian