Sciweavers

499 search results
» Model Minimization in Markov Decision Processes
ICTAI
1996
IEEE
Incremental Markov-Model Planning
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. T...
Richard Washington
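For background on the "base solution that assumes full observability" mentioned in the abstract, a minimal sketch follows (a toy MDP with made-up numbers, not the paper's incremental algorithm): ignoring partial observability amounts to solving the underlying MDP, here by standard value iteration.

    import numpy as np

    # Value iteration on a hypothetical two-state, two-action MDP; its value
    # function is the kind of fully observable "base solution" a POMDP planner
    # can start from. All numbers are illustrative only.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # P[a, s, s'] for action 0
                  [[0.5, 0.5], [0.4, 0.6]]])   # ... and for action 1
    R = np.array([[1.0, 0.0],                  # R[a, s] immediate rewards
                  [0.0, 2.0]])
    gamma = 0.95

    V = np.zeros(2)
    for _ in range(1000):
        Q = R + gamma * (P @ V)      # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)        # greedy Bellman backup over actions
        if np.abs(V_new - V).max() < 1e-8:
            break
        V = V_new
    print("fully observable base solution V:", V)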
NIPS
2001
Predictive Representations of State
We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded...
Michael L. Littman, Richard S. Sutton, Satinder P....
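To make "action-conditional predictions of future observations" concrete, here is a minimal sketch (a toy POMDP with made-up numbers; the paper's construction uses multi-step tests, while this shows only one-step predictions): the state is represented by the vector of probabilities p(o | do(a)) rather than by a latent belief.

    import numpy as np

    # Hypothetical two-state, two-action, two-observation POMDP.
    T = np.array([[[0.9, 0.1], [0.2, 0.8]],    # T[a, s, s'] transition probabilities
                  [[0.6, 0.4], [0.3, 0.7]]])
    O = np.array([[[0.8, 0.2], [0.3, 0.7]],    # O[a, s', o] observation probabilities
                  [[0.5, 0.5], [0.9, 0.1]]])
    belief = np.array([0.5, 0.5])              # current distribution over hidden states

    # One-step test predictions p(o | do(a)) = sum_{s, s'} b(s) T[a, s, s'] O[a, s', o].
    predictions = np.einsum('s,ast,ato->ao', belief, T, O)
    state_repr = predictions.ravel()           # grounded, prediction-based state vector
    print("predictive state representation:", state_repr)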
AAAI
1997
Structured Solution Methods for Non-Markovian Decision Processes
Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision-theoretic planning problems, are limited by the Markovian assumption: rewards and dynamics...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
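For reference, the Markovian assumption the abstract refers to is the standard one (not specific to this paper): transition dynamics and rewards depend only on the current state and action,

    P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0, a_0) = P(s_{t+1} \mid s_t, a_t),
    \qquad r_t = R(s_t, a_t),

whereas a non-Markovian reward may depend on the entire history, e.g. R(s_0, a_0, \ldots, s_t).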
JMLR
2010
Finite-sample Analysis of Bellman Residual Minimization
We consider the Bellman residual minimization approach for solving discounted Markov decision problems, where we assume that a generative model of the dynamics and rewards is available...
Odalric-Ambrym Maillard, Rémi Munos, Alessa...
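To illustrate the objective being analyzed, here is a minimal sketch (a hypothetical toy MDP and a fixed policy, not the paper's finite-sample analysis): with linear features Phi, the Bellman residual ||Phi theta - (r + gamma P Phi theta)||^2 is minimized by an ordinary least-squares solve.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, gamma = 5, 0.9
    P = rng.dirichlet(np.ones(n_states), size=n_states)   # P[s, s'] under the fixed policy (made up)
    r = rng.uniform(0.0, 1.0, size=n_states)               # expected one-step rewards (made up)
    Phi = np.eye(n_states)                                 # tabular features, for simplicity

    # Bellman residual minimization: argmin_theta || Phi theta - (r + gamma P Phi theta) ||^2
    A = Phi - gamma * P @ Phi
    theta, *_ = np.linalg.lstsq(A, r, rcond=None)

    V = Phi @ theta
    print("Bellman residual at the minimizer:",
          np.linalg.norm(V - (r + gamma * P @ V)))         # ~0 with tabular features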
EMMCVPR
2001
Springer
A Hierarchical Markov Random Field Model for Figure-Ground Segregation
Segregating overlapping objects into depth layers requires integrating local occlusion cues distributed over the entire image into a global percept. We propose to model this...
Stella X. Yu, Tai Sing Lee, Takeo Kanade
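As generic background on this family of models (a plain pairwise MRF energy on a pixel grid, not the hierarchical model the paper proposes), figure-ground labels are typically scored by local data terms plus a smoothness penalty on neighboring pixels:

    import numpy as np

    def mrf_energy(labels, unary, lam=1.0):
        # Pairwise Potts-style MRF energy: sum of local label costs plus
        # lam times the number of disagreeing 4-neighbor pairs.
        h, w = labels.shape
        e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
        e += lam * np.sum(labels[:, 1:] != labels[:, :-1])   # horizontal neighbors
        e += lam * np.sum(labels[1:, :] != labels[:-1, :])   # vertical neighbors
        return e

    # Toy example: 4x4 image with made-up local figure/ground costs.
    rng = np.random.default_rng(0)
    unary = rng.uniform(size=(4, 4, 2))         # unary[i, j, label]
    labels = rng.integers(0, 2, size=(4, 4))    # one candidate figure/ground labeling
    print("energy of this labeling:", mrf_energy(labels, unary))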