Sciweavers search results for "Model-Based Reinforcement Learning with Continuous States an..." (215 results, page 14 of 43)

ICML 2010, IEEE
Constructing States for Reinforcement Learning
POMDPs are the models of choice for reinforcement learning (RL) tasks where the environment cannot be observed directly. In many applications we need to learn the POMDP structure ...
M. M. Hassan Mahmud
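
As background for this entry (standard POMDP material, not something specific to Mahmud's paper): because the state cannot be observed directly, the agent maintains a belief b over states and updates it after taking action a and receiving observation o,

    b'(s') = \frac{O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)}

where T is the transition model, O the observation model, and the denominator normalises b' so that it sums to one.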

ATAL 2008, Springer
Reinforcement learning for DEC-MDPs with changing action sets and partially ordered dependencies
Decentralized Markov decision processes are frequently used to model cooperative multi-agent systems. In this paper, we identify a subclass of general DEC-MDPs that features regul...
Thomas Gabel, Martin A. Riedmiller
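
As a rough illustration of the setting described above, and not the algorithm from Gabel and Riedmiller's paper, the sketch below runs independent tabular Q-learners whose choices are restricted to whatever actions are currently available to each agent. The toy environment and every name in it (ToyDecEnv, available_actions, and so on) are made up for this example.

    import random
    from collections import defaultdict

    class ToyDecEnv:
        """Hypothetical 2-agent environment whose action sets change each step."""
        def __init__(self, n_agents=2, horizon=20):
            self.n_agents, self.horizon = n_agents, horizon
        def reset(self):
            self.t = 0
            return [0] * self.n_agents                  # every agent starts in state 0
        def available_actions(self, i):
            return [0, 1] if self.t % 2 == 0 else [0, 1, 2]   # set depends on the step
        def step(self, actions):
            self.t += 1
            states = [a % 2 for a in actions]           # toy next local states
            rewards = [1.0 if a == max(actions) else 0.0 for a in actions]
            return states, rewards, self.t >= self.horizon

    def choose(q, s, actions, eps=0.1):
        """Epsilon-greedy choice restricted to the currently available actions."""
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(s, a)])

    env = ToyDecEnv()
    qs = [defaultdict(float) for _ in range(env.n_agents)]
    alpha, gamma = 0.1, 0.95
    for _ in range(200):                                # a few training episodes
        states, done = env.reset(), False
        while not done:
            acts = [choose(qs[i], states[i], env.available_actions(i))
                    for i in range(env.n_agents)]
            nxt, rewards, done = env.step(acts)
            for i, q in enumerate(qs):
                future = max((q[(nxt[i], a)] for a in env.available_actions(i)),
                             default=0.0)
                q[(states[i], acts[i])] += alpha * (rewards[i] + gamma * future
                                                    - q[(states[i], acts[i])])
            states = nxt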

IROS 2009, IEEE
Bayesian reinforcement learning in continuous POMDPs with gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...
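
To make the Gaussian-process ingredient concrete, here is a minimal NumPy sketch of GP regression on observed one-step transitions, so next-state predictions come with uncertainty estimates. It is generic textbook GP regression, not the Bayesian RL procedure from the paper, and the kernel length scale and noise level are arbitrary assumptions.

    import numpy as np

    def rbf(A, B, length=0.5):
        """Squared-exponential kernel between the rows of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)

    # Toy transition data: inputs are (state, action) pairs, targets are next states.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(30, 2))
    y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(30)

    noise = 1e-2                                   # assumed observation noise variance
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    coef = np.linalg.solve(L.T, np.linalg.solve(L, y))   # equals K^{-1} y

    def predict(Xq):
        """Posterior mean and variance of the next state at query points Xq."""
        Kq = rbf(Xq, X)
        mean = Kq @ coef
        v = np.linalg.solve(L, Kq.T)
        var = rbf(Xq, Xq).diagonal() - (v ** 2).sum(axis=0)
        return mean, var

    mu, var = predict(np.array([[0.2, -0.5]]))
    print(mu, var)                                 # prediction plus predictive variance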

FLAIRS 2004
State Space Reduction For Hierarchical Reinforcement Learning
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, ...
Mehran Asadi, Manfred Huber
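
The following sketch illustrates the general idea of tolerance-based state aggregation rather than the exact ε-reduction construction from Asadi and Huber: states are grouped into blocks whose immediate rewards and block-transition probabilities agree within a tolerance eps. The toy MDP is invented for the example.

    from itertools import groupby

    # Toy MDP: transitions[s][a] = {next_state: prob}, rewards[s][a] = value.
    transitions = {
        0: {0: {2: 1.0}}, 1: {0: {2: 1.0}},
        2: {0: {3: 0.5, 4: 0.5}}, 3: {0: {3: 1.0}}, 4: {0: {4: 1.0}},
    }
    rewards = {0: {0: 0.0}, 1: {0: 0.0}, 2: {0: 1.0}, 3: {0: 0.0}, 4: {0: 0.0}}
    actions = [0]
    eps = 0.1

    def signature(s, blocks):
        """Rounded reward and block-transition profile used to compare states."""
        sig = []
        for a in actions:
            sig.append(round(rewards[s][a] / eps))
            for block in blocks:
                p = sum(transitions[s][a].get(t, 0.0) for t in block)
                sig.append(round(p / eps))
        return tuple(sig)

    blocks = [sorted(transitions)]          # start with one block containing all states
    while True:
        new = []
        for block in blocks:
            keyed = sorted(block, key=lambda s: signature(s, blocks))
            for _, grp in groupby(keyed, key=lambda s: signature(s, blocks)):
                new.append(sorted(grp))
        if new == blocks:
            break
        blocks = new

    print(blocks)                           # states grouped into approximately equivalent blocks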

PRICAI 2000, Springer
Generating Hierarchical Structure in Reinforcement Learning from State Variables
This paper presents the CQ algorithm which decomposes and solves a Markov Decision Process (MDP) by automatically generating a hierarchy of smaller MDPs using state variables. The ...
Bernhard Hengst
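
As a loose illustration of decomposition driven by state variables, and not the CQ algorithm itself, the sketch below estimates how often each state variable changes under random exploration; ordering variables from fastest- to slowest-changing is one common heuristic for deciding which variable defines the lowest level of a hierarchy. The toy task and every name in the code are assumptions made for this example.

    import random

    def variable_change_frequencies(step, state, n_steps=10000, seed=0):
        """Random-walk estimate of how often each state variable changes per step."""
        random.seed(seed)
        counts = [0] * len(state)
        for _ in range(n_steps):
            nxt = step(state)
            for i, (a, b) in enumerate(zip(state, nxt)):
                counts[i] += a != b
            state = nxt
        return [c / n_steps for c in counts]

    # Hypothetical 2-variable task: variable 0 is a position that changes every step,
    # variable 1 is a "room" index that only changes when the position wraps around.
    def toy_step(state):
        pos, room = state
        pos = (pos + random.choice([-1, 1])) % 5
        if pos == 0:
            room = (room + 1) % 3
        return (pos, room)

    freqs = variable_change_frequencies(toy_step, (0, 0))
    order = sorted(range(len(freqs)), key=lambda i: -freqs[i])
    print(freqs, order)   # the fastest-changing variable would form the lowest level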