
On Average Versus Discounted Reward Temporal-Difference Learning

We provide an analytical comparison between discounted and average reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asymptotic behavior of the two algorithms. We show that as the discount factor approaches 1, the value function produced by discounted TD approaches the differential value function generated by average reward TD. We further argue that if the constant function--which is typically used as one of the basis functions in discounted TD--is appropriately scaled, the transient behaviors of the two algorithms are also similar. Our analysis suggests that the computational advantages of average reward TD that have been observed in some prior empirical work may have been caused by inappropriate basis function scaling rather than fundamental differences in problem formulations or algorithms.
John N. Tsitsiklis, Benjamin Van Roy
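
The comparison described in the abstract hinges on two algorithmic details: the form of the TD error (discounted versus differential) and the scaling of the constant basis function. The following is a minimal sketch of both algorithms with linear function approximation on a small synthetic Markov reward chain; the chain, the features, the step sizes, and the 1/(1 - gamma) scaling of the constant feature are illustrative assumptions for this sketch, not specifics taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Small synthetic Markov reward chain (illustrative only).
n_states = 5
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
r = rng.random(n_states)                   # expected one-stage rewards

# Linear features: two random basis functions plus a constant basis function.
# The constant feature is scaled by 1/(1 - gamma), as the abstract suggests.
gamma = 0.99
phi = np.hstack([rng.random((n_states, 2)),
                 np.ones((n_states, 1)) / (1.0 - gamma)])

def discounted_td0(n_steps=50_000, alpha=0.01):
    """TD(0) for the discounted value function with linear approximation."""
    w = np.zeros(phi.shape[1])
    s = 0
    for _ in range(n_steps):
        s_next = rng.choice(n_states, p=P[s])
        delta = r[s] + gamma * phi[s_next] @ w - phi[s] @ w   # discounted TD error
        w += alpha * delta * phi[s]
        s = s_next
    return phi @ w

def average_reward_td0(n_steps=50_000, alpha=0.01, beta=0.01):
    """TD(0) for the differential value function, tracking the average reward."""
    w = np.zeros(phi.shape[1])
    rho = 0.0                                # running estimate of the average reward
    s = 0
    for _ in range(n_steps):
        s_next = rng.choice(n_states, p=P[s])
        delta = r[s] - rho + phi[s_next] @ w - phi[s] @ w     # differential TD error
        w += alpha * delta * phi[s]
        rho += beta * delta
        s = s_next
    return phi @ w, rho

V_gamma = discounted_td0()
h, rho = average_reward_td0()
# As gamma approaches 1, V_gamma(s) is approximately rho / (1 - gamma) + h(s) up to a
# state-independent offset, so the two estimates should agree after centering.
print(V_gamma - V_gamma.mean())
print(h - h.mean())

In this sketch the only structural differences between the two updates are the gamma factor on the next-state value and the subtraction of the running average-reward estimate rho; with the constant feature scaled by 1/(1 - gamma), the effective step size along that feature is comparable in the two algorithms, which is the scaling effect the abstract attributes the earlier empirical observations to.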
Added 22 Dec 2010
Updated 22 Dec 2010
Type Journal
Year 2002
Where Machine Learning (ML)
Authors John N. Tsitsiklis, Benjamin Van Roy