We present a multi-camera vision-based eye tracking method to robustly locate and track a user’s eyes as they interact with an application. We propose enhancements to various visi...
Ravikrishna Ruddarraju, Antonio Haro, Kris Nagel, ...
This paper presents a multi-modal approach to locate a speaker in a scene and determine to whom he or she is speaking. We present a simple probabilistic framework that combines mu...
Michael Siracusa, Louis-Philippe Morency, Kevin Wi...
The development of an intelligent user interface that supports multimodal access to multiple applications is a challenging task. In this paper we present a generic multimodal inte...
Norbert Reithinger, Jan Alexandersson, Tilman Beck...
This paper describes techniques that allow users to collaborate on tablet computers, employing distributed sensing to establish a privileged connection between devices...
Gesture recognition is becoming an increasingly common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can...
Tracy L. Westeyn, Helene Brashear, Amin Atrash, Th...