True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design spa...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
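To make the interaction style in the entry above concrete, here is a minimal sketch of pairing each user's spoken command with that same user's most recent touch on the shared surface. Everything here is an assumption for illustration: TabletopInput, its pairing window, and the user identification it presumes are not the authors' implementation.

```python
import time

class TabletopInput:
    """Pairs a user's spoken command with that user's most recent
    touch on the shared surface (hypothetical sketch; speaker and
    toucher identification are assumed to happen upstream)."""

    def __init__(self, window_s: float = 1.5):
        self.window_s = window_s  # max allowed gap between touch and speech
        self.last_touch: dict[str, tuple[float, tuple[int, int]]] = {}

    def touch(self, user: str, xy: tuple[int, int]) -> None:
        self.last_touch[user] = (time.monotonic(), xy)

    def speak(self, user: str, command: str):
        """Fuse the command with the same user's recent touch; other
        users' touches are ignored, which is what lets co-located
        people act simultaneously without interfering."""
        entry = self.last_touch.get(user)
        if entry and time.monotonic() - entry[0] <= self.window_s:
            return (user, command, entry[1])
        return None

surface = TabletopInput()
surface.touch("alice", (300, 140))
surface.touch("bob", (60, 220))
print(surface.speak("alice", "fly here"))  # -> ('alice', 'fly here', (300, 140))
```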
My doctoral research focuses on the usability and usage of new computer technology, such as interactive systems that support the combination of different input media such as voice, ge...
Multimodal interaction enables users to employ different modalities, such as voice, gesture, and typing, to communicate with a computer. This paper presents an analysis of the ...
We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The...
Edward C. Kaiser, Alex Olwal, David McGee, Hrvoje ...
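The entry above hinges on treating each recognizer's output as uncertain. As a rough illustration of that idea, not the system described in the paper, the sketch below ranks joint speech-gesture hypothesis pairs by combined confidence; Hypothesis, compatible, and fuse are invented names, and multiplying scores is just one common late-fusion choice that assumes the modalities are independent.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Hypothesis:
    """One uncertain interpretation produced by a single modality."""
    label: str    # recognized command, or the object a gesture selects
    score: float  # recognizer confidence in [0, 1]

def compatible(command: str, target: str) -> bool:
    """Hypothetical domain constraint: only movable objects can be rotated."""
    movable = {"chair", "lamp"}
    return command != "rotate" or target in movable

def fuse(speech, gesture):
    """Rank joint (speech, gesture) pairs by combined confidence.

    Filtering out incompatible pairs lets one modality correct
    errors in the other (mutual disambiguation)."""
    joint = [
        (s.label, g.label, s.score * g.score)
        for s, g in product(speech, gesture)
        if compatible(s.label, g.label)
    ]
    return sorted(joint, key=lambda t: t[2], reverse=True)

if __name__ == "__main__":
    speech = [Hypothesis("rotate", 0.7), Hypothesis("locate", 0.6)]
    gesture = [Hypothesis("wall", 0.8), Hypothesis("chair", 0.5)]
    # Taking each modality's best hypothesis alone would pair "rotate"
    # with "wall", which the constraint rules out; fusion instead
    # returns ('locate', 'wall') as the top-ranked joint interpretation.
    print(fuse(speech, gesture)[0])
```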
The Catchment Feature Model (CFM) addresses two questions in multimodal interaction: how do we bridge video and audio processing with the realities of human multimodal communicati...
In this paper we report on ongoing experiments with an advanced multimodal system for applications in architectural design. The system supports uninformed users in entering the rel...
Lou Boves, Andre Neumann, Louis Vuurpijl, Louis te...
We present a user experiment on multimodal interaction (speech, hand position and hand shapes) that studies two major relationships: between the level of cognitive load experienced by users and t...
The Open Interface Development Environment (OIDE) was developed as part of the OpenInterface (OI) platform, an open source framework for the rapid development of multimodal intera...
Marilyn Rose McGee-Lennon, Andrew Ramsay, David K....
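As a loose illustration of the component-based composition such a platform supports, the sketch below wires recognizer components into a fusion component via subscribable ports. The names are invented for this example and do not reflect the actual OIDE/OpenInterface API.

```python
from typing import Callable

class Component:
    """A processing unit with named output ports that downstream
    components subscribe to (a generic component-graph sketch)."""
    def __init__(self, name: str):
        self.name = name
        self._subscribers: dict[str, list[Callable]] = {}

    def connect(self, port: str, handler: Callable) -> None:
        self._subscribers.setdefault(port, []).append(handler)

    def emit(self, port: str, payload) -> None:
        for handler in self._subscribers.get(port, []):
            handler(payload)

# Wire a speech recognizer and a gesture tracker into one fusion step.
speech, gesture, fusion = Component("speech"), Component("gesture"), Component("fusion")
state = {}

def on_speech(text):
    state["command"] = text
    maybe_fuse()

def on_gesture(point):
    state["target"] = point
    maybe_fuse()

def maybe_fuse():
    # Emit a fused action once both modalities have reported.
    if "command" in state and "target" in state:
        fusion.emit("action", (state.pop("command"), state.pop("target")))

speech.connect("text", on_speech)
gesture.connect("point", on_gesture)
fusion.connect("action", print)

speech.emit("text", "delete")
gesture.emit("point", (120, 45))  # prints ('delete', (120, 45))
```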
In this paper we present the integration of graph-based visual perception with spoken conversation in human-robot interaction. The proposed architecture has a dialogue manager as the...
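A minimal sketch of the coordinating role such a dialogue manager plays, assuming a toy vision interface: VisionModule, DialogueManager, and their methods are illustrative stand-ins, not the paper's architecture.

```python
class VisionModule:
    """Stand-in for graph-based visual perception: answers queries
    about objects currently in view (hypothetical interface)."""
    def __init__(self, scene: dict[str, str]):
        self.scene = scene  # object name -> location description

    def locate(self, obj: str):
        return self.scene.get(obj)

class DialogueManager:
    """Core coordinator: interprets an utterance, consults the
    perception module, and produces a spoken reply."""
    def __init__(self, vision: VisionModule):
        self.vision = vision

    def handle(self, utterance: str) -> str:
        words = utterance.lower().split()
        if words[:3] == ["where", "is", "the"] and len(words) > 3:
            obj = words[3]
            loc = self.vision.locate(obj)
            return f"The {obj} is {loc}." if loc else f"I do not see a {obj}."
        return "Sorry, I did not understand."

dm = DialogueManager(VisionModule({"cup": "on the table"}))
print(dm.handle("Where is the cup"))   # The cup is on the table.
print(dm.handle("Where is the ball"))  # I do not see a ball.
```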