Sciweavers

Search results for "Q-value functions for decentralized POMDPs": 6 results, page 1 of 2

ATAL 2007 (Springer)
Q-value functions for decentralized POMDPs
Planning in single-agent models like MDPs and POMDPs can be carried out by resorting to Q-value functions: a (near-) optimal Q-value function is computed in a recursive manner by ...
Frans A. Oliehoek, Nikos A. Vlassis
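
The snippet above refers to computing a (near-)optimal Q-value function in a recursive manner. For reference, below is a minimal Python sketch of that recursive Bellman backup for a tabular single-agent MDP; it is an illustration only, not code from the paper, and all names here (q_value_iteration, P, R, gamma) are assumed for the example.

import numpy as np

def q_value_iteration(P, R, gamma=0.95, max_iters=1000, tol=1e-8):
    """Recursive Q-value backup for a tabular MDP.

    P[s, a, s'] : transition probability of reaching s' from s under action a
    R[s, a]     : expected immediate reward for taking action a in state s
    """
    n_states, n_actions, _ = P.shape
    Q = np.zeros((n_states, n_actions))
    for _ in range(max_iters):
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) * max_a' Q(s',a')
        Q_next = R + gamma * (P @ Q.max(axis=1))
        converged = np.max(np.abs(Q_next - Q)) < tol
        Q = Q_next
        if converged:
            break
    return Q

# Example usage on a small random MDP (2 states, 2 actions); a greedy policy
# is then pi(s) = argmax_a Q(s, a).
rng = np.random.default_rng(0)
P = rng.random((2, 2, 2))
P /= P.sum(axis=2, keepdims=True)  # normalize into proper transition distributions
R = rng.random((2, 2))
print(q_value_iteration(P, R))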

JAIR 2008
Optimal and Approximate Q-value Functions for Decentralized POMDPs
Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent ...
Frans A. Oliehoek, Matthijs T. J. Spaan, Nikos A. ...

IJCAI 2003
Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings
The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision proces...
Ranjit Nair, Milind Tambe, Makoto Yokoo, David V. ...

ATAL 2008 (Springer)
Exploiting locality of interaction in factored Dec-POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) constitute an expressive framework for multiagent planning under uncertainty, but solving them is provabl...
Frans A. Oliehoek, Matthijs T. J. Spaan, Shimon Wh...

AIPS 2008
Multiagent Planning Under Uncertainty with Stochastic Communication Delays
We consider the problem of cooperative multiagent planning under uncertainty, formalized as a decentralized partially observable Markov decision process (Dec-POMDP). Unfortunately...
Matthijs T. J. Spaan, Frans A. Oliehoek, Nikos A. ...