Visual information has been shown to improve the performance of speech recognition systems in noisy acoustic environments. However, most audio-visual speech recognizers rely on a ...
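Audio-visual recognizers of this kind often combine the two streams at the decision level. As a rough sketch (not this paper's method), assuming per-frame class log-likelihoods from separate audio and visual models, with an illustrative reliability weight:

```python
import numpy as np

def fuse_loglikes(audio_ll: np.ndarray, visual_ll: np.ndarray,
                  audio_weight: float = 0.7) -> np.ndarray:
    """Combine per-frame class log-likelihoods from independent
    audio and visual models by exponential weighting (late fusion).

    audio_ll, visual_ll: arrays of shape (frames, classes).
    audio_weight: stream reliability in [0, 1]; lower it in noise.
    """
    return audio_weight * audio_ll + (1.0 - audio_weight) * visual_ll

# Example: pick the most likely class per frame after fusion.
audio_ll = np.log(np.array([[0.6, 0.4], [0.2, 0.8]]))
visual_ll = np.log(np.array([[0.5, 0.5], [0.1, 0.9]]))
decisions = fuse_loglikes(audio_ll, visual_ll).argmax(axis=1)
```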
We present a generic approach to multimodal fusion which we call context-based multimodal integration. Key to this approach is that every multimodal input event is interpreted and...
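The paper's own formalism is not visible in this snippet; below is a minimal sketch of the underlying idea, that each input event is interpreted against, and then updates, a shared context. All class names, fields, and the deictic rule are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class InputEvent:
    modality: str            # e.g. "speech" or "gesture"
    payload: dict[str, Any]
    timestamp: float

@dataclass
class Context:
    """Shared interpretation context; fields are illustrative."""
    visible_objects: list[str] = field(default_factory=list)
    last_referent: str | None = None

class ContextIntegrator:
    def __init__(self, context: Context):
        self.context = context

    def interpret(self, event: InputEvent) -> dict[str, Any]:
        """Resolve an event against the current context, then
        update the context so later events can build on it."""
        if event.modality == "gesture" and "target" in event.payload:
            self.context.last_referent = event.payload["target"]
        if event.modality == "speech" and event.payload.get("text") == "delete that":
            # Deictic "that" resolves to the most recently gestured referent.
            return {"action": "delete", "object": self.context.last_referent}
        return {"action": "unhandled", "event": event}

ci = ContextIntegrator(Context(visible_objects=["lamp", "chair"]))
ci.interpret(InputEvent("gesture", {"target": "lamp"}, 0.0))
print(ci.interpret(InputEvent("speech", {"text": "delete that"}, 0.4)))
# -> {'action': 'delete', 'object': 'lamp'}
```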
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal in...
Sharon L. Oviatt, Rachel Coulston, Rebecca Lunsfor...
We present a video demonstration of an agent-based test bed application for ongoing research into multi-user, multimodal, computer-assisted meetings. The system tracks a two perso...
Edward C. Kaiser, David Demirdjian, Alexander Grue...
We describe a home lighting robot that uses directional spotlights to create complex lighting scenes. The robot senses its visual environment using a panoramic camera and attempts...
In this paper, we introduce input adaptability as the ability of an application to exploit alternative sets of input devices effectively and offer users a way of adapting input in...
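As one hedged illustration of input adaptability (not the paper's actual design), an application can keep ordered alternative bindings per abstract command and resolve them against whichever devices are currently attached; all names here are assumptions:

```python
class InputMapper:
    """Bind abstract application commands to whichever input
    devices are currently available (names are illustrative)."""

    def __init__(self) -> None:
        # command -> ordered list of (device, binding) alternatives
        self.bindings: dict[str, list[tuple[str, str]]] = {}

    def register(self, command: str, device: str, binding: str) -> None:
        self.bindings.setdefault(command, []).append((device, binding))

    def resolve(self, command: str, available: set[str]) -> tuple[str, str] | None:
        """Pick the first registered binding whose device is present."""
        for device, binding in self.bindings.get(command, []):
            if device in available:
                return device, binding
        return None

mapper = InputMapper()
mapper.register("pan", "mouse", "drag")
mapper.register("pan", "touchscreen", "two-finger-drag")
mapper.register("pan", "keyboard", "arrow-keys")
print(mapper.resolve("pan", available={"keyboard", "touchscreen"}))
# -> ('touchscreen', 'two-finger-drag')
```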
In this paper, we propose a multimodal system for detecting human activity and interaction patterns in a nursing home. Activities of groups of people are first treated as intera...
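The detectors the paper uses are not visible in this snippet; as a minimal sketch of one plausible interaction cue, assuming tracked floor positions from video and per-person voice activity from audio (the inputs and threshold are assumptions, not the paper's pipeline):

```python
import numpy as np

def pairwise_interaction_scores(positions: np.ndarray,
                                speaking: np.ndarray,
                                max_dist: float = 1.5) -> np.ndarray:
    """Flag candidate person-person interactions in one time window.

    positions: (n, 2) tracked floor coordinates in metres.
    speaking:  (n,) booleans from an audio voice-activity detector.
    Returns an (n, n) boolean matrix; True at [i, j] suggests
    persons i and j are close together while at least one speaks.
    """
    n = len(positions)
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    close = (dist < max_dist) & ~np.eye(n, dtype=bool)
    talk = speaking[:, None] | speaking[None, :]
    return close & talk
```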
We are interested in applying and extending existing frameworks that combine output modalities to adapt multimodal content on heterogeneous devices based on user and de...
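As a hedged sketch of device-sensitive modality allocation (the rules and profile fields below are illustrative assumptions, not the framework's actual logic):

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    has_screen: bool
    has_audio: bool
    screen_width: int = 0   # pixels; 0 if no screen

def allocate_modalities(content: dict, device: DeviceProfile,
                        user_prefers_audio: bool = False) -> list[str]:
    """Pick output modalities for one content item given device
    capabilities and a user preference (rules are illustrative)."""
    chosen: list[str] = []
    if device.has_screen and device.screen_width >= 320 and not user_prefers_audio:
        chosen.append("text")
        if content.get("map") and device.screen_width >= 480:
            chosen.append("graphics")
    if device.has_audio and (user_prefers_audio or not chosen):
        chosen.append("speech")
    return chosen

phone = DeviceProfile(has_screen=True, has_audio=True, screen_width=320)
print(allocate_modalities({"text": "Turn left", "map": True}, phone))
# -> ['text']  (screen too narrow for the map; audio not preferred)
```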
Songsak Channarukul, Susan Weber McRoy, Syed S. Al...