Proto-value functions: developmental reinforcement learning

This paper presents a novel framework called proto-reinforcement learning (PRL), based on a mathematical model of a proto-value function: these are task-independent basis functions that form the building blocks of all value functions on a given state space manifold. Proto-value functions are learned not from rewards, but instead from analyzing the topology of the state space. Formally, proto-value functions are Fourier eigenfunctions of the Laplace-Beltrami diffusion operator on the state space manifold. Proto-value functions facilitate structural decomposition of large state spaces, and form geodesically smooth orthonormal basis functions for approximating any value function. The theoretical basis for proto-value functions combines insights from spectral graph theory, harmonic analysis, and Riemannian manifolds. Proto-value functions enable a novel generation of algorithms called representation policy iteration, unifying the learning of representation and behavior.
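
The following is a minimal illustrative sketch, not the paper's code: it takes a 4-connected grid world as the state-space graph, uses the normalized graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator, and returns its smoothest eigenvectors as task-independent basis functions. The grid size and number of basis functions are arbitrary choices for illustration.

import numpy as np

def grid_adjacency(rows, cols):
    """Adjacency matrix of a rows x cols grid world with 4-neighbor connectivity."""
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            s = r * cols + c
            if r + 1 < rows:                      # edge to the state below
                A[s, s + cols] = A[s + cols, s] = 1.0
            if c + 1 < cols:                      # edge to the state to the right
                A[s, s + 1] = A[s + 1, s] = 1.0
    return A

def proto_value_functions(A, k):
    """Eigenvectors of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    with the k smallest eigenvalues, used as smooth orthonormal basis functions."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return eigvecs[:, :k]                         # columns = proto-value functions

if __name__ == "__main__":
    A = grid_adjacency(10, 10)
    Phi = proto_value_functions(A, k=20)          # 20 basis functions, 100 states
    print(Phi.shape)                              # (100, 20)
    # Any value function V on this state space can then be approximated as Phi @ w,
    # e.g. by least squares: w = np.linalg.lstsq(Phi, V, rcond=None)[0]

Because the basis is built purely from the graph's topology, the same Phi can be reused across tasks (reward functions) on this state space, which is the sense in which the representation is task-independent.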
Type: Conference
Year: 2005
Where: ICML
Authors: Sridhar Mahadevan