An experiment was conducted to investigate how performance of a reach, grasp and place task was influenced by added auditory and graphical cues. The cues were presented at points ...
Multimodal interfaces are designed with a focus on flexibility, although very few are currently capable of adapting to major sources of user, task, or environmental variation. The...
Gesture recognition is becoming a more common interaction tool in the fields of ubiquitous and wearable computing. Designing a system to perform gesture recognition, however, can...
Tracy L. Westeyn, Helene Brashear, Amin Atrash, Th...
Intensive computations required for sensing and processing perceptual information can impose significant burdens on personal computer systems. We explore several policies for sel...
The development of an intelligent user interface that supports multimodal access to multiple applications is a challenging task. In this paper we present a generic multimodal inte...
Norbert Reithinger, Jan Alexandersson, Tilman Beck...
Kinesthetic feedback is a key mechanism by which people perceive object properties, particularly inertial properties, during their daily tasks. For example, transporting a glass...
Enabling computer systems to recognize facial expressions and infer emotions from them in real time is a challenging research topic. In this paper, we present a real time ap...
This paper presents a multi-modal approach to locate a speaker in a scene and determine to whom he or she is speaking. We present a simple probabilistic framework that combines mu...
Michael Siracusa, Louis-Philippe Morency, Kevin Wi...