Sciweavers

2108 search results - page 111 / 422
» Tracking in Reinforcement Learning
ICML 2008, IEEE
An analysis of linear models, linear value-function approximation, and feature selection for reinforcement learning
We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the...
Ronald Parr, Lihong Li, Gavin Taylor, Christopher ...
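As a rough illustration of the equivalence this abstract alludes to, the sketch below (not the authors' code) computes the LSTD fixed point for a small, made-up 4-state chain and compares it with the value of the least-squares linear model built from the same features; the MDP, rewards, and features are all hypothetical.

```python
import numpy as np

# Hypothetical 4-state chain under a fixed policy.
n_states, gamma = 4, 0.9
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])      # P[s, s']
R = np.array([0.0, 0.0, 0.0, 1.0])        # expected immediate reward per state

Phi = np.array([[1.0, 0.0],               # hand-picked 2-d features per state
                [1.0, 1.0],
                [1.0, 2.0],
                [1.0, 3.0]])

# LSTD fixed point: solve Phi^T (Phi - gamma * P Phi) w = Phi^T R
A = Phi.T @ (Phi - gamma * P @ Phi)
b = Phi.T @ R
w_lstd = np.linalg.solve(A, b)

# Linear-model view: least-squares approximations of the dynamics and reward
# in feature space, then the exact value of that approximate linear model.
F, *_ = np.linalg.lstsq(Phi, P @ Phi, rcond=None)   # feature-space dynamics
r, *_ = np.linalg.lstsq(Phi, R, rcond=None)         # feature-space reward
w_model = np.linalg.solve(np.eye(2) - gamma * F, r)

print("LSTD weights:        ", w_lstd)
print("Linear-model weights:", w_model)   # numerically identical to w_lstd
```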
ICML 1996, IEEE
Sensitive Discount Optimality: Unifying Discounted and Average Reward Reinforcement Learning
Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well studied, and the average-reward frame...
Sridhar Mahadevan
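The two criteria this abstract contrasts can be related numerically: for a fixed policy, (1 - gamma) times the discounted value approaches the average reward (gain) as gamma approaches 1. The sketch below checks this on a made-up two-state chain; it is a generic illustration, not the paper's algorithm.

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])           # transition matrix under a fixed policy
r = np.array([1.0, 3.0])             # expected reward per state

def discounted_value(gamma):
    # V_gamma = (I - gamma * P)^{-1} r
    return np.linalg.solve(np.eye(2) - gamma * P, r)

# Average reward (gain) via the stationary distribution of P.
evals, evecs = np.linalg.eig(P.T)
stat = np.real(evecs[:, np.isclose(evals, 1.0)][:, 0])
stat /= stat.sum()
rho = stat @ r

for gamma in (0.9, 0.99, 0.999):
    print(gamma, (1 - gamma) * discounted_value(gamma))
print("gain rho:", rho)              # (1 - gamma) * V_gamma approaches rho
```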
ICES 2003, Springer
Evolving Reinforcement Learning-Like Abilities for Robots
Abstract. In [8] Yamauchi and Beer explored the abilities of continuous-time recurrent neural networks (CTRNNs) to display reinforcement learning-like abilities. The investigated ta...
Jesper Blynel
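For readers unfamiliar with CTRNNs, here is a minimal Euler-integrated CTRNN update in Python; the network size, weights, and time constants are placeholders, not the evolved controllers studied in the paper.

```python
import numpy as np

class CTRNN:
    def __init__(self, n, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.dt = dt
        self.tau = rng.uniform(0.5, 2.0, n)      # per-neuron time constants
        self.W = rng.normal(0.0, 1.0, (n, n))    # recurrent weights
        self.bias = rng.normal(0.0, 1.0, n)
        self.y = np.zeros(n)                     # neuron potentials

    def step(self, external_input):
        act = 1.0 / (1.0 + np.exp(-(self.y + self.bias)))   # sigmoid outputs
        # tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + b_j) + I_i
        dydt = (-self.y + self.W @ act + external_input) / self.tau
        self.y = self.y + self.dt * dydt
        return act

net = CTRNN(n=3)
for t in range(5):
    outputs = net.step(np.array([1.0, 0.0, 0.0]))   # constant input to neuron 0
    print(outputs)
```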
AAAI 2008
Potential-based Shaping in Model-based Reinforcement Learning
Potential-based shaping was designed as a way of introducing background knowledge into model-free reinforcement-learning algorithms. By identifying states that are likely to have ...
John Asmuth, Michael L. Littman, Robert Zinkov
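A minimal sketch of potential-based shaping as usually defined (shaped reward r + gamma * Phi(s') - Phi(s)); the potential function here is an arbitrary placeholder, and the snippet does not reproduce the paper's model-based analysis.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.95, terminal=False):
    """Return r + gamma * Phi(s') - Phi(s); a terminal successor's potential
    is taken as zero so the optimal policy is preserved."""
    phi_next = 0.0 if terminal else potential(s_next)
    return r + gamma * phi_next - potential(s)

# Example potential encoding "closer to goal state 10 is better" (hypothetical).
potential = lambda s: -abs(10 - s)
print(shaped_reward(0.0, s=3, s_next=4, potential=potential))   # > 0: progress
print(shaped_reward(0.0, s=4, s_next=3, potential=potential))   # < 0: regress
```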
NIPS 2007
Online Linear Regression and Its Application to Model-Based Reinforcement Learning
We provide a provably efficient algorithm for learning Markov Decision Processes (MDPs) with continuous state and action spaces in the online setting. Specifically, we take a mo...
Alexander L. Strehl, Michael L. Littman
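To give a flavour of the online-regression component, the sketch below implements a generic recursive least-squares regressor that a model-based agent could use to fit dynamics from state-action features; it omits the confidence-interval machinery the paper's provably efficient algorithm relies on, and all names and data are illustrative.

```python
import numpy as np

class OnlineLinearRegression:
    def __init__(self, dim, reg=1.0):
        self.A_inv = np.eye(dim) / reg     # inverse of regularized Gram matrix
        self.b = np.zeros(dim)

    def update(self, x, y):
        # Sherman-Morrison rank-1 update of (A + x x^T)^{-1}
        Ax = self.A_inv @ x
        self.A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        self.b += y * x

    def predict(self, x):
        return x @ (self.A_inv @ self.b)   # weights w = A^{-1} b

# Fit a noisy linear mapping from 3-d features to a scalar "next-state" target.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])
model = OnlineLinearRegression(dim=3)
for _ in range(500):
    x = rng.normal(size=3)
    y = x @ true_w + 0.01 * rng.normal()
    model.update(x, y)
print(model.predict(np.array([1.0, 1.0, 1.0])))   # close to 1.5
```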