Sciweavers

113 search results - page 9 / 23
» Learning Representation and Control in Continuous Markov Dec...
JMLR
2006
Point-Based Value Iteration for Continuous POMDPs
We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are ...
Josep M. Porta, Nikos A. Vlassis, Matthijs T. J. S...
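
The technique in the title, point-based value iteration, backs up the value function only at a sampled set of belief points instead of over the entire belief simplex; the paper extends such backups to continuous spaces. As a rough illustration only, the sketch below runs point-based backups on a tiny discrete two-state POMDP; the toy model, the three belief points, and the number of sweeps are assumptions made for the sketch, not the paper's continuous-space algorithm.

    import numpy as np

    # Point-based value iteration on a toy *discrete* two-state POMDP.
    # Every quantity below (model, belief set, sweep count) is illustrative.
    S, A, O = 2, 2, 2
    gamma = 0.95
    T = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.5, 0.5]]])   # T[a, s, s']  transition probabilities
    Z = np.array([[[0.8, 0.2], [0.2, 0.8]],
                  [[0.5, 0.5], [0.5, 0.5]]])   # Z[a, s', o]  observation probabilities
    R = np.array([[1.0, 0.0],
                  [0.0, 1.0]])                 # R[a, s]      immediate reward

    beliefs = [np.array([p, 1.0 - p]) for p in (0.1, 0.5, 0.9)]  # sampled belief points
    alphas = [np.zeros(S)]                                       # current alpha-vectors

    for _ in range(50):
        new_alphas = []
        for b in beliefs:
            best_value, best_vec = -np.inf, None
            for a in range(A):
                g_a = R[a].copy()
                for o in range(O):
                    # Best old alpha-vector for taking a and then observing o.
                    candidates = [T[a] @ (Z[a, :, o] * alpha) for alpha in alphas]
                    g_a = g_a + gamma * max(candidates, key=lambda g: float(b @ g))
                if b @ g_a > best_value:             # keep the best action's vector
                    best_value, best_vec = float(b @ g_a), g_a
            new_alphas.append(best_vec)
        alphas = new_alphas

    print([round(float(max(al @ b for al in alphas)), 3) for b in beliefs])  # V at belief points
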
PKDD
2009
Springer
Considering Unseen States as Impossible in Factored Reinforcement Learning
The Factored Markov Decision Process (FMDP) framework is a standard representation for sequential decision problems under uncertainty where the state is represented as a ...
Olga Kozlova, Olivier Sigaud, Pierre-Henri Wuillem...
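
In an FMDP the state is an assignment to several variables and the transition model factors over them, so a learned model must decide what probability to give configurations it has never observed. The counting-based model below is a hypothetical illustration of the idea in the title (unseen means probability zero); it is not the authors' algorithm, and the two binary variables and the action name are made up.

    from collections import defaultdict

    # Toy factored transition model: each next-state variable gets its own
    # conditional distribution, and anything never observed is treated as
    # impossible (probability 0).  Variables and actions are illustrative.
    VARS = ("light_on", "door_open")                 # each state is a dict over these
    counts = defaultdict(lambda: defaultdict(int))   # counts[(var, state, action)][value]

    def observe(state, action, next_state):
        """Record one transition, counting each next-state variable separately."""
        key = tuple(sorted(state.items()))
        for var in VARS:
            counts[(var, key, action)][next_state[var]] += 1

    def prob(state, action, next_state):
        """Factored transition probability; unseen configurations get probability 0."""
        key = tuple(sorted(state.items()))
        p = 1.0
        for var in VARS:
            c = counts[(var, key, action)]
            total = sum(c.values())
            if total == 0 or next_state[var] not in c:
                return 0.0                           # never seen, so treated as impossible
            p *= c[next_state[var]] / total
        return p

    s0 = {"light_on": 0, "door_open": 1}
    s1 = {"light_on": 1, "door_open": 1}
    observe(s0, "toggle", s1)
    print(prob(s0, "toggle", s1))                               # 1.0
    print(prob(s0, "toggle", {"light_on": 0, "door_open": 0}))  # 0.0 (never observed)
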
UAI
2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
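
The modeling difficulty the snippet points to is that a hybrid state mixes discrete and continuous variables. A common way to keep value functions tractable over such states is a linear approximation over basis functions defined on both kinds of variables; the sketch below only illustrates that representation, with entirely hypothetical variables, features, and weights rather than the authors' method.

    import numpy as np

    # Hybrid (discrete + continuous) state with a linear value-function
    # approximation V(s) ~ w . phi(s).  All names and numbers are made up.
    state = {"mode": 1,            # discrete variable
             "temperature": 72.4}  # continuous variable

    def basis(s):
        """Feature vector mixing indicator features and smooth continuous features."""
        t = s["temperature"] / 100.0
        return np.array([
            1.0,                             # bias
            1.0 if s["mode"] == 1 else 0.0,  # indicator on the discrete variable
            t,                               # scaled continuous variable
            t ** 2,                          # simple nonlinear feature
        ])

    w = np.array([0.5, 1.0, -0.2, 0.1])      # weights a planner or learner would fit

    def value(s):
        return float(w @ basis(s))

    print(value(state))
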
JMLR
2006
Policy Gradient in Continuous Time
Policy search is a method for approximately solving an optimal control problem by performing a parametric optimization search in a given class of parameterized policies. In order ...
Rémi Munos
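
The snippet's definition of policy search (a parametric optimization over a class of parameterized policies) can be made concrete with a generic likelihood-ratio (REINFORCE-style) gradient ascent on a one-parameter Gaussian policy. This is a sketch of policy search in general, not the continuous-time gradient estimator the paper develops; the toy one-step reward and all constants are assumptions.

    import numpy as np

    # Likelihood-ratio policy gradient for a Gaussian policy on a toy
    # one-step problem.  Reward, step size, and batch size are illustrative.
    rng = np.random.default_rng(0)
    theta, sigma, lr = 0.0, 0.5, 0.05        # policy mean, fixed std-dev, step size

    def reward(action):
        return -(action - 2.0) ** 2          # hypothetical objective: best action is 2.0

    for step in range(2000):
        actions = theta + sigma * rng.standard_normal(32)   # sample a batch of actions
        returns = reward(actions)
        baseline = returns.mean()                           # simple variance reduction
        # grad of log N(a | theta, sigma^2) w.r.t. theta is (a - theta) / sigma^2
        grad = np.mean((returns - baseline) * (actions - theta) / sigma**2)
        theta += lr * grad                                   # gradient ascent step

    print("learned policy mean:", round(theta, 2))           # approaches 2.0
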
ICML
2008
Reinforcement learning in the presence of rare events
We consider the task of reinforcement learning in an environment in which rare significant events occur independently of the actions selected by the controlling agent. If these ev...
Jordan Frank, Shie Mannor, Doina Precup
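
The snippet is cut off before describing the approach, so the sketch below only illustrates one standard tool for settings dominated by rare events: importance sampling, which simulates the event at an inflated probability and reweights by the likelihood ratio to cut estimator variance. The probabilities and costs are made up, and nothing here should be read as the paper's algorithm.

    import numpy as np

    # Estimating an expected cost dominated by a rare event: naive Monte Carlo
    # versus importance sampling with an inflated event probability.
    rng = np.random.default_rng(1)
    p_true = 1e-4                      # true per-step probability of the rare event
    p_sim = 0.05                       # inflated probability used for simulation
    cost_rare, cost_normal = 1000.0, 1.0
    n = 20_000

    # Naive Monte Carlo under the true dynamics: usually sees almost no events.
    hit = rng.random(n) < p_true
    naive = np.where(hit, cost_rare, cost_normal).mean()

    # Importance sampling: simulate under p_sim, reweight by the likelihood ratio.
    hit = rng.random(n) < p_sim
    w = np.where(hit, p_true / p_sim, (1 - p_true) / (1 - p_sim))
    is_est = (w * np.where(hit, cost_rare, cost_normal)).mean()

    exact = p_true * cost_rare + (1 - p_true) * cost_normal
    print(f"exact={exact:.4f}  naive={naive:.4f}  importance-sampled={is_est:.4f}")
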