Constructing States for Reinforcement Learning

POMDPs are the models of choice for reinforcement learning (RL) tasks where the environment cannot be observed directly. In many applications the POMDP structure and parameters must be learned from experience, which is known to be a difficult problem. In this paper we address this issue by modeling the hidden environment with a novel class of models that are less expressive than POMDPs, but easier to learn and plan with. We call these models deterministic Markov models (DMMs); they are deterministic-probabilistic finite automata from learning theory, extended with actions to the sequential (rather than i.i.d.) setting. Conceptually, we extend the Utile Suffix Memory method of McCallum to handle long-term memory. We describe DMMs, give Bayesian algorithms for learning and planning with them, and present experimental results on standard POMDP benchmark tasks and on tasks designed to illustrate their efficacy.
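The abstract only names the model class, so here is a minimal sketch of what a DMM might look like in code, assuming (as the deterministic-probabilistic finite automaton analogy suggests) that the next state is fully determined by a (state, action, observation) triple while observations are sampled from a per-(state, action) distribution. The class name `DMM`, the `delta` and `obs_dist` fields, and the `step` method are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

class DMM:
    """Illustrative deterministic Markov model (DMM) sketch.

    A deterministic-probabilistic finite automaton extended with
    actions: observations are stochastic, but given the sampled
    observation the next state is fully determined. Hypothetical
    structure, not the paper's implementation.
    """

    def __init__(self):
        # Deterministic transition: (state, action, observation) -> next state.
        self.delta = {}
        # Stochastic emissions: (state, action) -> {observation: probability}.
        self.obs_dist = defaultdict(dict)

    def step(self, state, action):
        """Sample an observation for (state, action), then follow the
        unique transition that observation determines."""
        dist = self.obs_dist[(state, action)]
        observations = list(dist)
        obs = random.choices(observations,
                             weights=[dist[o] for o in observations])[0]
        return obs, self.delta[(state, action, obs)]


# Toy two-state example: action "a" flips the state and emits a noisy signal.
m = DMM()
m.obs_dist[("s0", "a")] = {"left": 0.8, "right": 0.2}
m.obs_dist[("s1", "a")] = {"left": 0.2, "right": 0.8}
for s in ("s0", "s1"):
    for o in ("left", "right"):
        m.delta[(s, "a", o)] = "s1" if s == "s0" else "s0"

obs, next_state = m.step("s0", "a")
```

Note the property this structure buys: because every transition is deterministic given the observation, an agent that knows its starting state can track its current state exactly from the action-observation history, with no belief-state updates. This is presumably what makes DMMs easier to plan with than general POMDPs.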
Type Conference
Year 2010
Where ICML
Authors M. M. Hassan Mahmud