A worst-case comparison between temporal difference and residual gradient with linear function approximation

Residual gradient (RG) was proposed as an alternative to TD(0) for policy evaluation when function approximation is used, but little formal analysis comparing the two exists except in very limited cases. This paper employs techniques from online learning of linear functions and provides a worst-case (non-probabilistic) analysis comparing these two types of algorithms when linear function approximation is used. No statistical assumptions are made on the sequence of observations, so the analysis applies to non-Markovian and even adversarial domains as well. In particular, our results suggest that RG may result in smaller temporal differences, while TD(0) is more likely to yield smaller prediction errors. These phenomena can be observed even in two simple, non-adversarial Markov chain examples.
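To make the comparison concrete, below is a minimal sketch of one step of each update under linear function approximation. These are the standard TD(0) and residual-gradient rules from the literature, not code from the paper; the function names and parameters (theta, phi_s, alpha, gamma) are illustrative.

import numpy as np

def td0_update(theta, phi_s, phi_s_next, reward, gamma, alpha):
    # Temporal difference: bootstrap from the next state's estimate,
    # where the value estimate is V(s) = theta . phi(s).
    delta = reward + gamma * (theta @ phi_s_next) - theta @ phi_s
    # TD(0) adjusts theta along the current state's features only
    # (a "semi-gradient" step).
    return theta + alpha * delta * phi_s

def rg_update(theta, phi_s, phi_s_next, reward, gamma, alpha):
    # Residual gradient (Baird, 1995): exact gradient descent on the
    # squared temporal difference 0.5 * delta**2.
    delta = reward + gamma * (theta @ phi_s_next) - theta @ phi_s
    # The gradient of delta w.r.t. theta is gamma*phi_s_next - phi_s,
    # so both states' features enter the update direction.
    return theta + alpha * delta * (phi_s - gamma * phi_s_next)

Both rules compute the same temporal difference delta but move theta in different directions, which is the source of the trade-off described in the abstract: RG directly descends the squared temporal difference, while TD(0)'s semi-gradient step tends toward smaller prediction errors.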
Type: Conference
Year: 2008
Where: ICML
Authors: Lihong Li