Sciweavers

77 search results - page 13 / 16
Search results for: Value Function Approximation in Reinforcement Learning Using...
ATAL 2008 (Springer)
Sigma Point Policy Iteration
In reinforcement learning, least-squares temporal difference methods (e.g., LSTD and LSPI) are effective, data-efficient techniques for policy evaluation and control with linear v...
Michael H. Bowling, Alborz Geramifard, David Winga...
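The least-squares temporal difference (LSTD) method named in this abstract fits a linear value function directly from sampled transitions by solving a small linear system. A minimal sketch, assuming a hand-coded feature map `phi` and a list of `(state, reward, next_state)` transitions (both illustrative, not taken from the paper):

```python
import numpy as np

def lstd(transitions, phi, gamma=0.99, reg=1e-6):
    """Least-squares TD: solve A w = b for the weights of V(s) ~= phi(s) @ w.

    transitions: list of (state, reward, next_state) tuples (illustrative format)
    phi:         feature map, state -> np.ndarray of shape (k,)
    """
    k = phi(transitions[0][0]).shape[0]
    A = reg * np.eye(k)              # small ridge term keeps A invertible
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A, b)
```

LSPI wraps an evaluation step of this kind in a policy-improvement loop over state-action features.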
ML 2002 (ACM)
On Average Versus Discounted Reward Temporal-Difference Learning
We provide an analytical comparison between discounted and average reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asympt...
John N. Tsitsiklis, Benjamin Van Roy
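For reference, discounted TD(0) with a linearly parameterized approximation V(s) ≈ phi(s)ᵀw nudges the weights along the TD error after each transition; a minimal sketch, with placeholder step size and discount:

```python
import numpy as np

def td0_step(w, phi_s, phi_s_next, reward, gamma=0.95, alpha=0.01):
    """One discounted TD(0) update for a linear value function V(s) = phi(s) @ w."""
    td_error = reward + gamma * (phi_s_next @ w) - (phi_s @ w)
    return w + alpha * td_error * phi_s

# illustrative usage with random features
w = np.zeros(8)
w = td0_step(w, np.random.rand(8), np.random.rand(8), reward=1.0)
```

The average-reward variant studied in the comparison replaces the gamma-discounted bootstrap with the reward minus a running estimate of the average reward per step.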
CORR 2010 (Springer)
Predictive State Temporal Difference Learning
We propose a new approach to value function approximation which combines linear temporal difference reinforcement learning with subspace identification. In practical applications...
Byron Boots, Geoffrey J. Gordon
ICML 1996 (IEEE)
Learning Evaluation Functions for Large Acyclic Domains
Some of the most successful recent applications of reinforcement learning have used neural networks and the TD algorithm to learn evaluation functions. In this paper, we examine t...
Justin A. Boyan, Andrew W. Moore
JMLR 2010
A Generalized Path Integral Control Approach to Reinforcement Learning
With the goal of generating more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical tec...
Evangelos Theodorou, Jonas Buchli, Stefan Schaal