Sciweavers

132 search results - page 6 / 27
» Generalization in Reinforcement Learning: Safely Approximati...
IWANN 1999, Springer
Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning
To avoid the curse of dimensionality, function approximators are used in reinforcement learning to learn value functions for individual states. In order to make better use of comp...
R. Matthew Kretchmar, Charles W. Anderson
IAT 2003, IEEE
Asymmetric Multiagent Reinforcement Learning
A gradient-based method for both symmetric and asymmetric multiagent reinforcement learning is introduced in this paper. Symmetric multiagent reinforcement learning addresses the ...
Ville Könönen
ESANN 2007
Replacing eligibility trace for action-value learning with function approximation
The eligibility trace is one of the most used mechanisms to speed up reinforcement learning. Earlier reported experiments seem to indicate that replacing eligibility traces would p...
Kary Främling
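The entry above concerns replacing eligibility traces. A minimal sketch of the standard distinction between accumulating and replacing traces (illustrative only; the function name and parameters here are hypothetical, not from the paper):

```python
import numpy as np

def update_trace(trace, visited, mode="replacing", gamma=0.9, lam=0.8):
    """Decay all traces by gamma*lambda, then bump the visited entry."""
    trace = gamma * lam * trace
    if mode == "accumulating":
        trace[visited] += 1.0   # traces can grow above 1 on revisits
    else:  # "replacing"
        trace[visited] = 1.0    # trace is reset to exactly 1 on revisits
    return trace

# Revisit the same state three times under each scheme.
trace = np.zeros(3)
for _ in range(3):
    trace = update_trace(trace, 0, mode="accumulating")
acc = trace[0]

trace = np.zeros(3)
for _ in range(3):
    trace = update_trace(trace, 0, mode="replacing")
rep = trace[0]

print(acc, rep)  # accumulating exceeds 1; replacing stays capped at 1
```

On frequently revisited states the accumulating trace can grow without bound, which is one motivation often given for the replacing variant.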
CDC 2010, IEEE
Adaptive bases for Q-learning
We consider reinforcement learning, and in particular, the Q-learning algorithm in large state and action spaces. In order to cope with the size of the spaces, a functio...
Dotan Di Castro, Shie Mannor
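The entry above studies Q-learning with an adaptive function-approximation basis. A minimal sketch of the underlying setup, linear Q-learning over a fixed feature basis (the basis here is fixed and all names are illustrative; the paper's contribution is adapting the basis itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 4, 2
w = np.zeros(n_features)        # Q(s, a) is approximated as w . phi(s, a)

def phi(state, action):
    """Fixed radial-like features (illustrative choice, not the paper's)."""
    centers = np.linspace(0.0, 1.0, n_features)
    return np.exp(-((state - centers) ** 2)) * (1.0 if action == 1 else 0.5)

def q(state, action):
    return w @ phi(state, action)

alpha, gamma = 0.1, 0.95
for _ in range(200):            # toy stream of random transitions
    s, a = rng.random(), int(rng.integers(n_actions))
    r, s_next = 1.0 - abs(s - 0.5), rng.random()
    td_target = r + gamma * max(q(s_next, b) for b in range(n_actions))
    td_error = td_target - q(s, a)
    w += alpha * td_error * phi(s, a)   # semi-gradient update on the weights

print(w)
```

With an adaptive basis, phi would carry its own parameters updated alongside w; the fixed version above only shows where the basis enters the update.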
ICRA 2007, IEEE
Value Function Approximation on Non-Linear Manifolds for Robot Motor Control
The least squares approach works efficiently in value function approximation, given appropriate basis functions. Because of its smoothness, the Gaussian kernel is a popular an...
Masashi Sugiyama, Hirotaka Hachiya, Christopher To...
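The entry above combines least squares with Gaussian kernel basis functions for value approximation. A minimal sketch of that building block on a toy 1-D problem (the targets, kernel placement, and width are illustrative assumptions; the paper's non-linear-manifold construction is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
states = rng.uniform(0, 1, size=20)           # sampled 1-D states
values = np.sin(2 * np.pi * states)           # toy target values

centers = np.linspace(0, 1, 5)                # fixed kernel centres
width = 0.2

def features(s):
    """Gaussian kernels centred at fixed points in state space."""
    return np.exp(-((s[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

Phi = features(states)
# Regularised least-squares fit of the kernel weights.
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(len(centers)), Phi.T @ values)

pred = features(states) @ w
print(float(np.mean((pred - values) ** 2)))   # fit error on the samples
```

The smoothness of the Gaussian kernel is what makes the fitted value function smooth in turn, which is the property the abstract highlights.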