Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To recognize visual feedback efficiently, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human-computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user-interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicates quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach ...
Louis-Philippe Morency, Candace L. Sidner, Christopher Lee, and Trevor Darrell
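To make the idea concrete, the following is a minimal sketch of how a vision-based head-gesture score could be fused with dialog context features in a discriminative classifier. The feature names (vision_score, lexical, prosodic, timing), the toy data, and the choice of a linear SVM are illustrative assumptions, not the paper's exact implementation.

# Illustrative sketch: fusing a vision-based head-gesture score with
# dialog context features in a discriminative classifier.
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-window features (assumed names, for illustration only):
#   vision_score : likelihood of a head nod from the vision-based recognizer
#   lexical      : 1.0 while the agent utters a yes/no question, else 0.0
#   prosodic     : normalized pitch slope at the end of the agent's utterance
#   timing       : seconds elapsed since the agent finished speaking
X = np.array([
    [0.82, 1.0, 0.6, 0.4],   # strong nod evidence, supportive context
    [0.35, 0.0, 0.1, 5.0],   # weak evidence, no contextual support
    [0.55, 1.0, 0.7, 0.3],   # ambiguous evidence rescued by context
    [0.60, 0.0, 0.2, 6.0],   # ambiguous evidence, unsupportive context
])
y = np.array([1, 0, 1, 0])   # 1 = head nod (visual feedback), 0 = none

clf = SVC(kernel="linear")
clf.fit(X, y)

# At run time, the context features bias borderline vision scores
# toward (or away from) a detection.
window = np.array([[0.58, 1.0, 0.65, 0.5]])
print(clf.predict(window), clf.decision_function(window))

The point of the sketch is only that a discriminative model over combined vision and context features can accept gestures whose visual evidence alone is ambiguous, and reject spurious motion when the dialog context makes feedback unlikely.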