Sciweavers

36 search results - page 1 / 8
Search: "Posterior Weighted Reinforcement Learning with State Uncertainty"
NECO 2010
Posterior Weighted Reinforcement Learning with State Uncertainty
Reinforcement learning models generally assume that a stimulus is presented that allows a learner to unambiguously identify the state of nature, and the reward received is drawn f...
Tobias Larsen, David S. Leslie, Edmund J. Collins,...
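
The sketch below illustrates the general idea suggested by this title and abstract snippet, as an assumed reading rather than the authors' algorithm: when the learner only has a posterior distribution over which state it is in, a temporal-difference update can be spread across the candidate states in proportion to that posterior. The function name, belief vectors, and toy sizes are all hypothetical.

    import numpy as np

    def posterior_weighted_q_update(Q, posterior, action, reward,
                                    next_posterior, alpha=0.1, gamma=0.95):
        # Expected greedy value of the next step under the next-state posterior.
        next_value = next_posterior @ Q.max(axis=1)
        # Spread the TD update over candidate states, weighted by the posterior.
        for s, p in enumerate(posterior):
            td_error = reward + gamma * next_value - Q[s, action]
            Q[s, action] += alpha * p * td_error
        return Q

    # Toy usage: 3 hidden states, 2 actions, made-up belief vectors.
    Q = np.zeros((3, 2))
    b_now = np.array([0.7, 0.2, 0.1])
    b_next = np.array([0.1, 0.6, 0.3])
    Q = posterior_weighted_q_update(Q, b_now, action=1, reward=1.0,
                                    next_posterior=b_next)
    print(Q)
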
NIPS 1996
Exploiting Model Uncertainty Estimates for Safe Dynamic Control Learning
Model learning combined with dynamic programming has been shown to be effective for learning control of continuous state dynamic systems. The simplest method assumes the learned mod...
Jeff G. Schneider
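
The snippet below is a loose illustration of one common way to exploit model uncertainty for cautious control, not necessarily the method in this paper: back up values through an ensemble of learned-model predictions and penalize the result by their disagreement. The ensemble samples, value function, and penalty weight are all hypothetical.

    import numpy as np

    def cautious_backup(next_state_samples, value_fn, gamma=0.95, penalty=1.0):
        # next_state_samples: next-state predictions from e.g. a bootstrapped
        # model ensemble; their spread stands in for model uncertainty.
        values = np.array([value_fn(s) for s in next_state_samples])
        # Penalize the mean backed-up value by the models' disagreement.
        return gamma * (values.mean() - penalty * values.std())

    # Toy usage: 1-D state, quadratic value function, made-up ensemble predictions.
    samples = np.random.normal(loc=0.5, scale=0.1, size=10)
    print(cautious_backup(samples, value_fn=lambda s: -s ** 2))
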
AAAI 2012
Kernel-Based Reinforcement Learning on Representative States
Markov decision processes (MDPs) are an established framework for solving sequential decision-making problems under uncertainty. In this work, we propose a new method for batchmod...
Branislav Kveton, Georgios Theocharous
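
Since this abstract leans on the standard MDP framework, a minimal value-iteration sketch is included for reference; it is textbook material, not the kernel-based method proposed in the paper, and the toy transition and reward arrays are made up.

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        # P[a] is an (S x S) transition matrix, R is an (S x A) reward matrix.
        n_states, n_actions = R.shape
        V = np.zeros(n_states)
        while True:
            Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)],
                         axis=1)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)  # optimal values and greedy policy
            V = V_new

    # Toy 2-state, 2-action MDP.
    P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
         np.array([[0.5, 0.5], [0.6, 0.4]])]
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    print(value_iteration(P, R))
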
ICRA 2008 (IEEE)
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...
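
As background for the POMDP setting this abstract describes, here is a minimal discrete Bayes belief-update sketch; the paper itself addresses continuous POMDPs with uncertain model parameters, which this toy example does not attempt to capture. The transition and observation matrices are hypothetical.

    import numpy as np

    def belief_update(belief, T, O, action, observation):
        # Bayes filter: b'(s') is proportional to
        # O[a][s', o] * sum_s T[a][s, s'] * b(s).
        predicted = belief @ T[action]                      # prediction step
        corrected = O[action][:, observation] * predicted   # observation likelihood
        return corrected / corrected.sum()

    # Toy model: 2 states, 1 action, 2 observations.
    T = [np.array([[0.8, 0.2],
                   [0.3, 0.7]])]
    O = [np.array([[0.9, 0.1],
                   [0.2, 0.8]])]
    print(belief_update(np.array([0.5, 0.5]), T, O, action=0, observation=1))
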
PKDD 2009 (Springer)
Considering Unseen States as Impossible in Factored Reinforcement Learning
The Factored Markov Decision Process (FMDP) framework is a standard representation for sequential decision problems under uncertainty where the state is represented as a ...
Olga Kozlova, Olivier Sigaud, Pierre-Henri Wuillem...
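
To make the factored-state idea concrete, the toy sketch below represents a state as a small set of variables whose transitions each depend only on a few parents rather than on the full joint state; the variables, action, and probabilities are invented for illustration and are unrelated to the paper's experiments.

    import random

    # Toy factored state: two variables instead of one flat enumerated state.
    def step(state, action):
        nxt = dict(state)
        # "battery" depends only on its own value and the action.
        if action == "move" and state["battery"] == "high" and random.random() < 0.1:
            nxt["battery"] = "low"
        # "room" depends only on the current room and the action.
        if action == "move":
            nxt["room"] = {"A": "B", "B": "C", "C": "A"}[state["room"]]
        return nxt

    print(step({"battery": "high", "room": "A"}, "move"))
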