

Interactive learning of mappings from visual percepts to actions

We introduce flexible algorithms that can automatically learn mappings from images to actions by interacting with their environment. They work by introducing an image classifier in front of a Reinforcement Learning algorithm. The classifier partitions the visual space according to the presence or absence of highly informative local descriptors. The image classifier is incrementally refined by selecting new local descriptors when perceptual aliasing is detected. Thus, we reduce the visual input domain down to a size manageable by Reinforcement Learning, permitting us to learn direct percept-to-action mappings. Experimental results on a continuous visual navigation task illustrate the applicability of the framework.
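The abstract describes the pipeline at a high level: an image classifier maps each percept to a discrete state by testing for the presence or absence of selected informative local descriptors, and a standard Reinforcement Learning algorithm (here, tabular Q-learning) operates on that reduced state space. A minimal sketch of that idea follows; all names (`percept_to_state`, the descriptor identifiers, the action set) are hypothetical illustrations, not the paper's actual implementation, and the descriptor-refinement step triggered by perceptual aliasing is only indicated in a comment.

```python
from collections import defaultdict

def percept_to_state(image_descriptors, informative_descriptors):
    """Map an image (given as a set of local-descriptor ids) to a discrete
    state: a tuple of presence/absence bits, one per selected descriptor.
    When perceptual aliasing is detected, the paper's method would extend
    informative_descriptors with a new descriptor to split the aliased state."""
    return tuple(d in image_descriptors for d in informative_descriptors)

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update on the classifier-induced states."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy demo: two informative descriptors, images as sets of descriptor ids.
informative = ["corner_17", "blob_3"]
Q = defaultdict(float)
actions = ["left", "right", "forward"]

s = percept_to_state({"corner_17", "edge_9"}, informative)   # (True, False)
s2 = percept_to_state({"blob_3"}, informative)               # (False, True)
q_update(Q, s, "forward", reward=1.0, next_state=s2, actions=actions)
```

The point of the sketch is the reduction: Q-learning never sees raw pixels, only the small binary state space induced by the descriptor tests, which is what makes direct percept-to-action learning tractable.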
Justus H. Piater, Sébastien Jodogne
Type: Conference
Year: 2005
Where: ICML