Tracking value function dynamics to improve reinforcement learning with piecewise linear function approximation

Reinforcement learning algorithms can become unstable when combined with linear function approximation. Algorithms that minimize the mean-square Bellman error are guaranteed to converge, but often do so slowly or are computationally expensive. In this paper, we propose to improve the convergence speed of piecewise linear function approximation by tracking the dynamics of the value function with a Kalman filter using a random-walk model. We cast this as a general framework within which we implement the TD, Q-learning, and MAXQ algorithms across several domains, and report empirical results demonstrating improved learning speed over previous methods.
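The core idea in the abstract, treating the approximator's weights as a hidden state that drifts under a random-walk model and correcting them from bootstrapped TD targets with a Kalman filter, can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes a plain linear approximator rather than the paper's piecewise linear one, and the class name KalmanTD, the noise parameters, and the toy chain task are all illustrative assumptions.

```python
import numpy as np

class KalmanTD:
    """TD(0) with a linear value function whose weights are tracked
    by a Kalman filter under a random-walk model: w_t = w_{t-1} + noise.
    A sketch of the idea in the abstract, not the paper's algorithm."""

    def __init__(self, n_features, gamma=0.9, process_noise=1e-4, obs_noise=1.0):
        self.w = np.zeros(n_features)                 # weight mean (Kalman state)
        self.P = np.eye(n_features)                   # weight covariance
        self.Q = process_noise * np.eye(n_features)   # random-walk process noise
        self.R = obs_noise                            # scalar observation noise
        self.gamma = gamma

    def value(self, phi):
        return float(self.w @ phi)

    def update(self, phi, reward, phi_next, done):
        # Predict: the random-walk model leaves the mean unchanged but
        # inflates the covariance, so the filter keeps adapting while
        # the value function drifts (e.g. as the policy improves).
        self.P = self.P + self.Q

        # Correct: treat the bootstrapped TD target as a noisy scalar
        # observation of phi(s)^T w.
        target = reward + (0.0 if done else self.gamma * self.value(phi_next))
        innovation = target - self.value(phi)         # the TD error
        s_var = phi @ self.P @ phi + self.R           # innovation variance
        gain = (self.P @ phi) / s_var                 # Kalman gain
        self.w = self.w + gain * innovation
        self.P = self.P - np.outer(gain, phi @ self.P)


# Toy usage: policy evaluation on a 5-state chain with one-hot features
# (the tabular case, a trivial instance of linear features).
rng = np.random.default_rng(0)
n = 5
agent = KalmanTD(n_features=n)
eye = np.eye(n)
for _ in range(2000):
    s = int(rng.integers(n - 1))      # sample a non-terminal state
    s_next = s + 1                    # deterministic step toward the goal
    r = 1.0 if s_next == n - 1 else 0.0
    agent.update(eye[s], r, eye[s_next], done=(s_next == n - 1))
print(np.round(agent.w, 2))           # approx [gamma^3, gamma^2, gamma, 1, 0]
```

One property worth noting: because the predict step only inflates the covariance, the Kalman gain never decays to zero, so the filter can keep tracking a non-stationary value function where a decaying learning rate would freeze.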
Type: Conference
Year: 2007
Where: ICML
Authors: Chee Wee Phua, Robert Fitch