Sciweavers

514 search results - page 94 / 103
Search: Cognitively Plausible Models of Human Language Processing
ICMI 2009, Springer
Salience in the generation of multimodal referring acts
Pointing combined with verbal referring is one of the most paradigmatic human multimodal behaviours. The aim of this paper is foundational: to uncover the central notions that are...
Paul Piwek
ICMI 2010, Springer
Focusing computational visual attention in multi-modal human-robot interaction
Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction. Most importantly, it is essential to achieve a joint focus of attention...
Boris Schauerte, Gernot A. Fink
HRI 2006, ACM
Using context and sensory data to learn first and second person pronouns
We present a method of grounded word learning that is powerful enough to learn the meanings of first and second person pronouns. The model uses the understood words in an utteran...
Kevin Gold, Brian Scassellati
JAIR 2008
Gesture Salience as a Hidden Variable for Coreference Resolution and Keyframe Extraction
Gesture is a non-verbal modality that can contribute crucial information to the understanding of natural language. But not all gestures are informative, and non-communicative hand...
Jacob Eisenstein, Regina Barzilay, Randall Davis
INLG 2004, Springer
Resolving Structural Ambiguity in Generated Speech
Ambiguity in the output is a concern for NLG in general. This paper considers the case of structural ambiguity in spoken language generation. We present an algorithm which inserts ...
Chris Mellish