Sciweavers

109 search results - page 9 / 22
» Interactive learning of mappings from visual percepts to act...
AI
2005
Springer
Learning to talk about events from narrated video in a construction grammar framework
The current research presents a system that learns to understand object names, spatial relation terms and event descriptions from observing narrated action sequences. The system e...
Peter Ford Dominey, Jean-David Boucher
CVPR
2010
IEEE
Learning a Hierarchy of Discriminative Space-Time Neighborhood Features for Human Action Recognition
Recent work shows how to use local spatio-temporal features to learn models of realistic human actions from video. However, existing methods typically rely on a predefined spatial...
Adriana Kovashka, Kristen Grauman
ISCIS
2009
Springer
Unsupervised learning of affordance relations on a humanoid robot
In this paper, we study how a humanoid robot can learn affordance relations in its environment through its own interactions in an unsupervised way. Specifically, we developed a...
Baris Akgun, Nilgun Dag, Tahir Bilal, Ilkay Atil, ...
CSL
2002
Springer
Learning visually grounded words and syntax for a scene description task
A spoken language generation system has been developed that learns to describe objects in computer-generated visual scenes. The system is trained by a 'show-and-tell' procedu...
Deb K. Roy
AROBOTS
1998
Emergence and Categorization of Coordinated Visual Behavior Through Embodied Interaction
This paper discusses the emergence of sensorimotor coordination for ESCHeR, a 4-DOF redundant foveated robot head, by interaction with its environment. A feedback-error-learning (FEL...
Luc Berthouze, Yasuo Kuniyoshi