
CONTEXT 2005, Springer

Utilizing Visual Attention for Cross-Modal Coreference Interpretation

In this paper, we describe an exploratory study to develop a model of visual attention that could aid automatic interpretation of exophors in situated dialog. The model is intended to support the reference resolution needs of embodied conversational agents, such as graphical avatars and robotic collaborators. The model tracks the attentional state of one dialog participant, as represented by that participant's visual input stream, taking into account the recency, exposure time, and visual distinctness of each viewed item. The model predicts the correct referent for 52% of the referring expressions produced by speakers in human-human dialog while they were collaborating on a task in a virtual world, an accuracy comparable to reference resolution based on linguistic salience computed over the same data.
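The abstract does not give the scoring function itself; as a rough illustration of how recency, exposure time, and visual distinctness might be combined into a single visual-salience score for exophor resolution, here is a minimal Python sketch. All names, weights, and the exponential decay are assumptions for illustration, not the paper's actual model.

```python
import math
from dataclasses import dataclass

@dataclass
class ViewedItem:
    """One object in the viewer's visual input stream (hypothetical structure)."""
    name: str
    last_seen: float           # time (s) the item was last in view
    exposure: float = 0.0      # cumulative time (s) the item has been in view
    distinctness: float = 0.5  # visual distinctness, normalized to [0, 1]

def visual_salience(item: ViewedItem, now: float,
                    w_recency: float = 0.5,
                    w_exposure: float = 0.3,
                    w_distinct: float = 0.2) -> float:
    """Combine recency, exposure time, and visual distinctness into one score.

    The weights and the exponential recency decay are illustrative assumptions.
    """
    recency = math.exp(-(now - item.last_seen))   # decays once the item leaves view
    exposure = min(item.exposure / 10.0, 1.0)     # saturates after ~10 s in view
    return w_recency * recency + w_exposure * exposure + w_distinct * item.distinctness

def resolve_exophor(candidates: list[ViewedItem], now: float) -> ViewedItem:
    """Pick the visually most salient candidate as the referent of an exophor."""
    return max(candidates, key=lambda item: visual_salience(item, now))
```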
Type: Conference
Year: 2005
Where: CONTEXT
Authors: Donna K. Byron, Thomas Mampilly, Vinay Sharma, Tianfang Xu