ATAL
2004
Springer

Communication for Improving Policy Computation in Distributed POMDPs

Distributed Partially Observable Markov Decision Problems (distributed POMDPs) are emerging as a popular approach for modeling multiagent teamwork, where a group of agents works together to jointly maximize a reward function. Since finding the optimal joint policy for a distributed POMDP has been shown to be NEXP-complete when no assumptions are made about the domain, several locally optimal approaches have emerged as a viable alternative. However, communicative actions have been largely ignored in these locally optimal algorithms, or have been applied only under restrictive domain assumptions. In this paper, we show how communicative acts can be explicitly introduced to find locally optimal joint policies in which agents coordinate better through synchronization achieved via communication. Furthermore, ...
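The locally optimal approaches the abstract refers to iteratively improve one agent's policy while holding the others fixed, stopping when no agent can improve unilaterally. A minimal sketch of that alternating-maximization idea on a toy one-shot coordination problem (the payoff table and function names here are illustrative, not taken from the paper):

```python
# Sketch of locally optimal joint-policy search by alternating best
# response -- the general idea behind locally optimal distributed-POMDP
# solvers. The toy reward table below is an illustrative assumption.

# Each of two agents picks one of two actions; the team gets a joint reward.
ACTIONS = ["a", "b"]
REWARD = {("a", "a"): 5.0, ("a", "b"): 0.0,
          ("b", "a"): 0.0, ("b", "b"): 3.0}

def best_response(fixed_other, agent_index):
    """Action maximizing joint reward while the other agent's choice is fixed."""
    def joint(act):
        pair = (act, fixed_other) if agent_index == 0 else (fixed_other, act)
        return REWARD[pair]
    return max(ACTIONS, key=joint)

def alternating_maximization(start=("b", "b"), max_iters=10):
    """Alternate single-agent improvements until a local optimum is reached."""
    policy = list(start)
    for _ in range(max_iters):
        old = tuple(policy)
        policy[0] = best_response(policy[1], 0)  # improve agent 0
        policy[1] = best_response(policy[0], 1)  # improve agent 1
        if tuple(policy) == old:  # neither agent can improve unilaterally
            break
    return tuple(policy), REWARD[tuple(policy)]
```

Starting from ("b", "b"), the procedure converges to the ("b", "b") equilibrium with reward 3.0 rather than the global optimum ("a", "a") with reward 5.0, which illustrates why such methods are only locally optimal and why added coordination mechanisms such as communication can help.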
Type: Conference
Year: 2004
Where: ATAL
Authors: Ranjit Nair, Milind Tambe, Maayan Roth, Makoto Yokoo