Learning visual representations for perception-action systems

We discuss vision as a sensory modality for systems that effect actions in response to perceptions. While the internal representations informed by vision may be arbitrarily complex, we argue that in many cases it is advantageous to link them rather directly to action via learned mappings. These arguments are illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension RLJC also handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pos...
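The adaptive discretization performed by RLVC can be illustrated with a small sketch. The code below is a hypothetical simplification, not the authors' implementation: it assumes binary visual-feature observations, interleaves tabular Q-learning with a greedy choice of one feature test per round, and splits the most aliased perceptual class (the one with the largest TD-error variance) on the feature whose presence or absence best separates its conflicting updates. All names and constants (`rlvc_sketch`, `classify`, the feature count, learning rate) are assumptions made for illustration.

```python
# Hypothetical sketch of RLVC-style adaptive discretization; not the paper's code.
# Observations are binary visual-feature vectors. Perceptual classes are defined by
# the features tested so far; the most aliased class is split on a new feature each round.

import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_ACTIONS, GAMMA, ALPHA = 8, 2, 0.9, 0.1

def classify(obs, tests):
    """Map a binary feature vector to a perceptual class via the selected feature tests."""
    return tuple(int(obs[f]) for f in tests)

def rlvc_sketch(transitions, n_rounds=3):
    tests = []                                    # visual features used to discretize
    Q = {}
    for _ in range(n_rounds):
        Q, residuals, samples = {}, {}, {}
        for obs, action, reward, next_obs in transitions:
            s, s2 = classify(obs, tests), classify(next_obs, tests)
            Q.setdefault(s, np.zeros(N_ACTIONS))
            Q.setdefault(s2, np.zeros(N_ACTIONS))
            td = reward + GAMMA * Q[s2].max() - Q[s][action]   # TD error
            Q[s][action] += ALPHA * td
            residuals.setdefault(s, []).append(td)             # aliasing evidence
            samples.setdefault(s, []).append(obs)
        # Perceptual aliasing shows up as high TD-error variance within one class.
        aliased = max(residuals, key=lambda c: np.var(residuals[c]))
        X = np.array(samples[aliased])
        err = np.abs(np.array(residuals[aliased]))
        # Score each feature by how well its presence/absence separates the errors.
        scores = [abs(err[X[:, f] == 1].mean() - err[X[:, f] == 0].mean())
                  if 0 < X[:, f].sum() < len(X) else 0.0
                  for f in range(N_FEATURES)]
        best = int(np.argmax(scores))
        if best not in tests:
            tests.append(best)                    # refine the discretization
    return tests, Q

# Tiny synthetic demo: reward depends on a hidden visual feature (index 3), so the
# initial single perceptual class is aliased until splitting exposes that feature.
transitions = []
for _ in range(500):
    obs = rng.integers(0, 2, N_FEATURES)
    action = int(rng.integers(0, N_ACTIONS))
    reward = 1.0 if (obs[3] == 1 and action == 1) else 0.0
    transitions.append((obs, action, reward, rng.integers(0, 2, N_FEATURES)))

tests, Q = rlvc_sketch(transitions)
print("features selected for splitting:", tests)
```

In this sketch the discretization is a global partition over the tested features rather than the per-class refinement used by RLVC, which keeps the example short while preserving the core idea of splitting perceptual states to reduce aliasing.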
Type: Journal
Year: 2011
Where: IJRR
Authors: Justus H. Piater, Sébastien Jodogne, Renaud Detry, Dirk Kraft, Norbert Krüger, Oliver Kroemer, Jan Peters