Visual transformations in gesture imitation: what you see is what you do

We propose an approach for a robot to imitate the gestures of a human demonstrator. Our framework consists of just two components: a Sensory-Motor Map (SMM) and a View-Point Transformation (VPT). The SMM establishes an association between an arm image and the corresponding joint angles, and is learned by the system during a period of observing its own gestures. The VPT, widely discussed in the psychology of visual perception, transforms the image of the demonstrator’s arm into the so-called ego-centric image, as if the robot were observing its own arm. Different structures for the SMM and VPT are proposed, in accordance with observations of human imitation. The whole system relies on monocular visual information and leads to a parsimonious architecture for learning by imitation. Real-time results are presented and discussed.
Manuel Cabido-Lopes, José Santos-Victor
Type: Conference
Year: 2003
Where: ICRA
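The abstract outlines a two-stage pipeline: the VPT first re-renders the demonstrator's arm into the robot's ego-centric view, and the SMM then maps that view to joint angles. Below is a minimal sketch of that flow, assuming a linear SMM fit by least squares and a planar-homography VPT; `SensoryMotorMap`, `view_point_transform`, `H_vpt`, and the toy data are illustrative stand-ins, not the paper's actual models.

```python
import numpy as np

class SensoryMotorMap:
    """Maps arm-image features to joint angles; fit by least squares
    during a self-observation phase (a stand-in for the paper's SMM)."""

    def fit(self, features, joint_angles):
        # Solve features @ W ~= joint_angles for W.
        self.W, *_ = np.linalg.lstsq(features, joint_angles, rcond=None)
        return self

    def predict(self, features):
        return features @ self.W


def view_point_transform(demo_points, H):
    """Map the demonstrator's arm image points into the robot's
    ego-centric image frame via a planar homography H (VPT stand-in)."""
    pts = np.hstack([demo_points, np.ones((len(demo_points), 1))])
    ego = pts @ H.T
    return ego[:, :2] / ego[:, 2:3]


# Self-observation phase: the robot records its own arm images and the
# joint angles that produced them, then fits the SMM (toy data here).
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))        # e.g. tracked arm keypoints
true_map = rng.normal(size=(6, 2))
joint_angles = features @ true_map          # 2-DOF toy arm
smm = SensoryMotorMap().fit(features, joint_angles)

# Imitation phase: apply the VPT, then query the SMM.
H_vpt = np.eye(3)                           # identity = frontal view
demo_points = rng.normal(size=(3, 2))       # demonstrator's arm keypoints
ego_points = view_point_transform(demo_points, H_vpt)
angles = smm.predict(ego_points.reshape(1, -1))  # -> joint command
```

The ordering mirrors the paper's architecture, observe, transform to the ego-centric frame, then map to motor commands, but the linear regressor and fixed homography are simplifications; the paper learns the SMM from monocular self-observation and discusses several structures for both components.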