Sciweavers

104 search results - page 7 / 21
» Multimodal Interfaces That Process What Comes Naturally
ICMI
2010
Springer
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Most importantly, it is essential to achieve a joint focus of attentio...
Boris Schauerte, Gernot A. Fink
IUI
2000
ACM
Expression constraints in multimodal human-computer interaction
Thanks to recent scientific advances, it is now possible to design multimodal interfaces allowing the use of speech and pointing gestures on a touchscreen. However, present sp...
Sandrine Robbe-Reiter, Noelle Carbonell, Pierre Da...
ICMI
2009
Springer
Salience in the generation of multimodal referring acts
Pointing combined with verbal referring is one of the most paradigmatic human multimodal behaviours. The aim of this paper is foundational: to uncover the central notions that are...
Paul Piwek
SG
2010
Springer
Articulate: A Semi-automated Model for Translating Natural Language Queries into Meaningful Visualizations
While many visualization tools exist that offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the to...
Yiwen Sun, Jason Leigh, Andrew E. Johnson, Sangyoo...
ICMI
2003
Springer
Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The...
Edward C. Kaiser, Alex Olwal, David McGee, Hrvoje ...