Sciweavers

26 search results - page 3 / 6
Search query: Solving Decentralized Continuous Markov Decision Problems wi...
IROS 2009 (IEEE)
Bayesian reinforcement learning in continuous POMDPs with Gaussian processes
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical model to handle real-world sequential decision processes but require a known model to be solv...
Patrick Dallaire, Camille Besse, Stéphane R...
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
IJCAI 2003
Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings
The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision proces...
Ranjit Nair, Milind Tambe, Makoto Yokoo, David V. ...
AAAI 1997
Structured Solution Methods for Non-Markovian Decision Processes
Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision-theoretic planning problems, are limited by the Markovian assumption: rewards and dy...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
AAAI 2008
Interaction Structure and Dimensionality Reduction in Decentralized MDPs
Decentralized Markov Decision Processes are a powerful general model of decentralized, cooperative multi-agent problem solving. The high complexity of the general problem leads to...
Martin Allen, Marek Petrik, Shlomo Zilberstein