Sciweavers

124 search results - page 5 / 25
» Basis function construction for hierarchical reinforcement l...
IWANN 1999 (Springer)
Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning
To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. In order to make better use of comp...
R. Matthew Kretchmar, Charles W. Anderson
PKDD 2009 (Springer)
Compositional Models for Reinforcement Learning
Innovations such as optimistic exploration, function approximation, and hierarchical decomposition have helped scale reinforcement learning to more complex environments, ...
Nicholas K. Jong, Peter Stone
CG 2006 (Springer)
Feature Construction for Reinforcement Learning in Hearts
Temporal difference (TD) learning has been used to learn strong evaluation functions in a variety of two-player games. TD-Gammon illustrated how the combination of game tree search...
Nathan R. Sturtevant, Adam M. White
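The temporal difference learning mentioned in the abstract above can be illustrated with a minimal tabular TD(0) value update. This is a generic sketch of the TD idea, not the feature-construction method of the paper; the state names and reward are hypothetical.

```python
# Minimal tabular TD(0) sketch: move V(s) toward the bootstrapped
# target r + gamma * V(s'). Generic illustration, not the paper's method.
from collections import defaultdict

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step on the value table V."""
    td_error = r + gamma * V[s_next] - V[s]  # temporal difference error
    V[s] += alpha * td_error                 # step toward the target
    return V

V = defaultdict(float)          # unseen states default to value 0.0
td0_update(V, "s0", 1.0, "s1")  # V["s0"] becomes 0.1 * (1.0 + 0.9*0 - 0) = 0.1
```

In game settings such as Hearts, the same update is typically applied to a parameterized evaluation function rather than a table, which is where feature construction enters.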
ICML 2002 (IEEE)
Hierarchically Optimal Average Reward Reinforcement Learning
Two notions of optimality have been explored in previous work on hierarchical reinforcement learning (HRL): hierarchical optimality, or the optimal policy in the space defined by ...
Mohammad Ghavamzadeh, Sridhar Mahadevan
ATAL 2008 (Springer)
Transfer of task representation in reinforcement learning using policy-based proto-value functions
Reinforcement Learning research has traditionally been devoted to solving single-task problems. Therefore, whenever a new task is faced, learning must restart from scratch. Recently, ...
Eliseo Ferrante, Alessandro Lazaric, Marcello Rest...