Sign language recognition (SLR) plays an important role in human-computer interaction (HCI), especially for convenient communication between the deaf and hearing communities. How to e...
This paper describes the Hapticat, a device we developed to study affect through touch. Though intentionally not highly zoomorphic, the device borrows behaviors from pets and th...
Steve Yohanan, Mavis Chan, Jeremy Hopkins, Haibo S...
Untethered multimodal interfaces are more attractive than tethered ones because they are more natural and expressive for interaction. Such interfaces usually require robust vision...
Analysis of human gaze is a basic way to investigate human attention. Similarly, the view image of a human being contains visual information about what he/she pays attention ...
In this paper, we present a novel approach for tracking a lecturer during the course of his speech. We use features from multiple cameras and microphones and process them in a jo...
Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, Jo...
Visual information overload is a threat to the interpretation of displays presenting large data sets or complex application environments. To combat this problem, researchers have ...
Anthony Tang, Peter McLachlan, Karen Lowe, Chalapa...
Head pose and gesture offer several key conversational grounding cues and are used extensively in face-to-face interaction among people. We investigate how dialog context from an ...
Louis-Philippe Morency, Candace L. Sidner, Christo...
With recent advances in eye tracking technology, eye gaze is gradually gaining acceptance as a pointing modality. Its relatively low accuracy, however, necessitates the use of enlar...
This paper presents Latent Semantic Googling, a variant of Landauer’s Latent Semantic Indexing that uses the Google search engine to judge the semantic closeness of sets of word...