Sciweavers

135 search results - page 14 / 27
» Dynamic Workflow Composition using Markov Decision Processes
AAAI
1996
Rewarding Behaviors
Markov decision processes (MDPs) are a very popular tool for decision theoretic planning (DTP), partly because of the well-developed, expressive theory that includes effective solu...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
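For readers unfamiliar with the MDP formalism this abstract refers to, the following is a minimal value-iteration sketch; the two-state toy model, transitions, and rewards are invented for illustration and are not taken from the paper.

# Minimal value-iteration sketch for a toy MDP (illustrative only; the
# states, transitions, and rewards below are invented, not from the paper).
# P[s][a] is a list of (next_state, probability); R[s][a] is the reward.
P = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 0.9), (1, 0.1)]},
}
R = {0: {"stay": 0.0, "go": 1.0}, 1: {"stay": 2.0, "go": 0.0}}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(100):  # repeat the Bellman optimality update until it settles
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
            for a in P[s]
        )
        for s in P
    }
print(V)  # approximate optimal state values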
NIPS
2001
Predictive Representations of State
We show that states of a dynamical system can be usefully represented by multi-step, action-conditional predictions of future observations. State representations that are grounded...
Michael L. Littman, Richard S. Sutton, Satinder P....
AAAI
2004
Dynamic Programming for Partially Observable Stochastic Games
We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable M...
Eric A. Hansen, Daniel S. Bernstein, Shlomo Zilber...
AAAI
2011
Linear Dynamic Programs for Resource Management
Sustainable resource management in many domains presents large continuous stochastic optimization problems, which can often be modeled as Markov decision processes (MDPs). To solv...
Marek Petrik, Shlomo Zilberstein
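As a rough illustration of approximating the value function of a large continuous-state MDP with a linear architecture, here is a small least-squares sketch; the feature map, sampled states, and target values are invented and do not reproduce the paper's formulation.

import numpy as np

# Illustrative least-squares fit of a linear value function V(s) ~= w . phi(s)
# on sampled states of a continuous one-dimensional problem (assumed setup).
def fit_linear_value_function(states, targets):
    # Features: [1, s, s^2] for a scalar state s (chosen for illustration).
    Phi = np.stack([np.ones_like(states), states, states ** 2], axis=1)
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return w

states = np.linspace(0.0, 10.0, 50)    # hypothetical sampled resource levels
targets = -(states - 6.0) ** 2         # invented value estimates at those states
print(fit_linear_value_function(states, targets))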
RSS
2007
The Stochastic Motion Roadmap: A Sampling Framework for Planning with Markov Motion Uncertainty
We present a new motion planning framework that explicitly considers uncertainty in robot motion to maximize the probability of avoiding collisions and successfully reaching a ...
Ron Alterovitz, Thierry Siméon, Kenneth Y. ...
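To illustrate the general idea of evaluating motions under uncertainty (not the paper's Stochastic Motion Roadmap algorithm itself), here is a small Monte Carlo sketch; the Gaussian noise model and circular obstacle are invented for illustration.

import random

# Illustrative Monte Carlo estimate of the probability that a noisy motion step
# ends inside a circular obstacle. Noise model and obstacle are assumptions.
def collision_probability(target, obstacle_center, obstacle_radius,
                          noise_sigma=0.1, samples=2000):
    hits = 0
    for _ in range(samples):
        # Sample the state actually reached under Gaussian actuation noise.
        x = target[0] + random.gauss(0.0, noise_sigma)
        y = target[1] + random.gauss(0.0, noise_sigma)
        if (x - obstacle_center[0]) ** 2 + (y - obstacle_center[1]) ** 2 <= obstacle_radius ** 2:
            hits += 1
    return hits / samples

print(collision_probability((1.0, 1.0), (1.05, 1.0), 0.2))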