Sciweavers

98 search results - page 3 / 20
» Visual recognition of pointing gestures for human-robot inte...
ECAI
2008
Springer
Salience-driven Contextual Priming of Speech Recognition for Human-Robot Interaction
Abstract. The paper presents an implemented model for priming speech recognition, using contextual information about salient entities. The underlying hypothesis is that, in human-r...
Pierre Lison, Geert-Jan M. Kruijff
ESSLLI
2009
Springer
A Salience-Driven Approach to Speech Recognition for Human-Robot Interaction
We present an implemented model for speech recognition in natural environments which relies on contextual information about salient entities to prime utterance recognition. The hyp...
Pierre Lison
ICMI
2010
Springer
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Most importantly, it is essential to achieve a joint focus of attentio...
Boris Schauerte, Gernot A. Fink
HRI
2010
ACM
Recognizing engagement in human-robot interaction
Based on a study of the engagement process between humans, we have developed and implemented an initial computational model for recognizing engagement between a human and a huma...
Charles Rich, Brett Ponsleur, Aaron Holroyd, Canda...
HRI
2006
ACM
Working with robots and objects: revisiting deictic reference for achieving spatial common ground
Robust joint visual attention is necessary for achieving a common frame of reference between humans and robots interacting multimodally in order to work together on real-world spat...
Andrew G. Brooks, Cynthia Breazeal