ICMI
2003
Springer

Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality

We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive statistics for ranking potential referents. We discuss the means by which the system supports mutual disambiguation of these modalities and information sources, and show through a user study how mutual disambiguation accounts for over 45% of the successful 3D multimodal interpretations. An accompanying video demonstrates the system in action.

Categories: H.5.1 (Multimedia Information Systems): Artificial, augmented, and virtual realities; H.5.2 (User Interfaces): Graphical user interfaces
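The referential agents described above rank candidate referents from a time-stamped history of objects intersecting a sensing volume. The paper does not give the exact statistics used, so the following is a minimal, hypothetical sketch (class and parameter names are assumptions): each intersection event is recorded with a timestamp, and candidates within a recency window are scored with an exponential decay so that recently and persistently intersecting objects rank highest.

```python
import math
import time
from collections import defaultdict

class ReferentialAgent:
    """Hypothetical sketch of a referential agent: a sensing volume that
    keeps a time-stamped history of intersecting objects and ranks them
    as candidate referents. Decay rate and window are assumed parameters,
    not values from the paper."""

    def __init__(self, decay=0.5, window=5.0):
        self.decay = decay      # exponential decay rate (1/s) for older events
        self.window = window    # only events within the last `window` seconds count
        self.history = []       # list of (timestamp, object_id) intersection events

    def record_intersection(self, object_id, timestamp=None):
        """Log that `object_id` intersected this agent's volume."""
        t = timestamp if timestamp is not None else time.time()
        self.history.append((t, object_id))

    def rank_referents(self, now=None):
        """Return (object_id, normalized_score) pairs, best candidate first.

        Each event inside the window contributes exp(-decay * age),
        so recent, repeated intersections dominate the ranking."""
        now = now if now is not None else time.time()
        scores = defaultdict(float)
        for t, obj in self.history:
            age = now - t
            if 0 <= age <= self.window:
                scores[obj] += math.exp(-self.decay * age)
        total = sum(scores.values()) or 1.0
        return sorted(((obj, s / total) for obj, s in scores.items()),
                      key=lambda p: p[1], reverse=True)

# Example: "table" intersected the volume twice and more recently than
# "chair", so it should be the top-ranked referent.
agent = ReferentialAgent(decay=0.5, window=5.0)
agent.record_intersection("chair", timestamp=0.0)
agent.record_intersection("table", timestamp=2.0)
agent.record_intersection("table", timestamp=3.0)
ranked = agent.rank_referents(now=3.0)
```

Normalized scores of this kind could then be fused with the statistical confidences from the gesture and speech agents during mutual disambiguation.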
Type Conference
Year 2003
Where ICMI
Authors Edward C. Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Philip R. Cohen, Steven Feiner