Sciweavers

44 search results for "Approximate inference for planning in stochastic relational ..." (page 3 of 9)
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). While otherwise an expressive model, MDPs allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
NIPS 2003
Envelope-based Planning in Relational MDPs
A mobile robot acting in the world is faced with a large amount of sensory data and uncertainty in its action outcomes. Indeed, almost all interesting sequential decision-making d...
Natalia Hernandez-Gardiol, Leslie Pack Kaelbling
NIPS 1998
Approximate Learning of Dynamic Models
Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a complete ...
Xavier Boyen, Daphne Koller
TROB 2010
A Probabilistic Particle-Control Approximation of Chance-Constrained Stochastic Predictive Control
Robotic systems need to be able to plan control actions that are robust to the inherent uncertainty in the real world. This uncertainty arises due to uncertain state estimation,...
Lars Blackmore, Masahiro Ono, Askar Bektassov, Bri...
NIPS 2003
Approximate Policy Iteration with a Policy Language Bias
We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual...
Alan Fern, Sung Wook Yoon, Robert Givan