Sciweavers

63 search results - page 4 / 13
Gesture, Gaze, and Ground
HRI
2006
ACM
Working with robots and objects: revisiting deictic reference for achieving spatial common ground
Robust joint visual attention is necessary for achieving a common frame of reference between humans and robots interacting multimodally in order to work together on real-world spat...
Andrew G. Brooks, Cynthia Breazeal
HRI
2010
ACM
Recognizing engagement in human-robot interaction
Based on a study of the engagement process between humans, we have developed and implemented an initial computational model for recognizing engagement between a human and a huma...
Charles Rich, Brett Ponsleur, Aaron Holroyd, Canda...
ICMI
2004
Springer
Multimodal transformed social interaction
Understanding human-human interaction is fundamental to the long-term pursuit of powerful and natural multimodal interfaces. Nonverbal communication, including body posture, gestu...
Matthew Turk, Jeremy N. Bailenson, Andrew C. Beall...
WACV
2002
IEEE
Appearance-based Eye Gaze Estimation
We present a method for estimating eye gaze direction, which represents a departure from conventional eye gaze estimation methods, the majority of which are based on tracking spec...
Kar-Han Tan, David J. Kriegman, Narendra Ahuja
ACL
2003
Towards a Model of Face-to-Face Grounding
We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common groun...
Yukiko I. Nakano, Gabe Reinstein, Tom Stocky, Just...