AAMAS 2010, Springer

Coordinated learning in multiagent MDPs with infinite state-space

Abstract: In this paper we address the problem of simultaneous learning and coordination in multiagent Markov decision problems (MMDPs) with infinite state-spaces. We separate this problem into two distinct subproblems: learning and coordination. To tackle the problem of learning, we survey Q-learning with soft-state aggregation (Q-SSA), a well-known method from the reinforcement learning literature [40]. Q-SSA allows the agents in the game to approximate the optimal Q-function, from which the optimal policies can be computed. We establish the convergence of Q-SSA and introduce a new result describing the rate of convergence of this method. In tackling the problem of coordination, we start by pointing out that knowledge of the optimal Q-function is not enough to ensure that all agents adopt a jointly optimal policy. We propose a novel coordination mechanism that, given knowledge of the optimal Q-function for an MMDP, ensures that all agents converge to a jointly optimal policy in e...
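Since the abstract only names the learning method, the following is a minimal sketch of Q-learning with soft-state aggregation as it is usually formulated in the reinforcement learning literature, not the authors' implementation; the names phi, n_clusters and the step-size schedule are illustrative assumptions.

import numpy as np

class QSSA:
    """Sketch of Q-learning with soft-state aggregation (Q-SSA).

    The (possibly infinite) state-space is covered by K soft clusters;
    phi(x) returns the membership probabilities over clusters for state x.
    Q-values are stored per (cluster, action), and the Q-function is
    approximated as Q(x, a) ~ sum_k phi_k(x) * Q[k, a].
    """

    def __init__(self, phi, n_clusters, n_actions, gamma=0.95):
        self.phi = phi                       # x -> probability vector over the K clusters
        self.q = np.zeros((n_clusters, n_actions))
        self.visits = np.zeros((n_clusters, n_actions))
        self.gamma = gamma

    def q_value(self, x):
        # Aggregated estimate: phi-weighted mixture of cluster Q-values.
        return self.phi(x) @ self.q

    def update(self, x, a, r, x_next):
        # Sample one cluster according to the soft membership of x,
        # then apply a standard Q-learning update to that cluster's entry,
        # bootstrapping from the aggregated value of the next state.
        k = np.random.choice(len(self.q), p=self.phi(x))
        target = r + self.gamma * np.max(self.q_value(x_next))
        self.visits[k, a] += 1
        alpha = 1.0 / self.visits[k, a]      # decaying step size (assumption)
        self.q[k, a] += alpha * (target - self.q[k, a])

With suitably decaying step sizes and sufficient exploration, repeatedly calling update() on observed transitions drives the aggregated Q-values toward an approximation of the optimal Q-function, which is the object the paper's convergence and rate-of-convergence results concern.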
Type: Journal
Year: 2010
Where: AAMAS
Authors: Francisco S. Melo, M. Isabel Ribeiro