A graphical multiagent model (GMM) represents a joint distribution over the behavior of a set of agents. One source of knowledge about agents' behavior may come from game-theoretic analysis, as captured by several graphical game representations developed in recent years. GMMs generalize this approach to express arbitrary distributions, based on game descriptions or other sources of knowledge bearing on beliefs about agent behavior. To illustrate the flexibility of GMMs, we exhibit game-derived models that allow probabilistic deviation from equilibrium, as well as models based on heuristic action choice. We investigate three different methods of integrating these models into a single model representing the combined knowledge sources. To evaluate the predictive performance of the combined model, we treat as actual outcome the behavior produced by a reinforcement learning process. We find that combining the two knowledge sources, using any of the methods, provides better predictions than either source alone....
Quang Duong, Michael P. Wellman, Satinder P. Singh
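As a minimal sketch of the kind of factored joint distribution the abstract describes, assuming a standard Markov-network-style factorization over graph neighborhoods (the symbols $\pi_i$, $N_i$, and $Z$ below are illustrative notation, not taken from this abstract):

\[
\Pr(a_1, \dots, a_n) \;=\; \frac{1}{Z} \prod_{i=1}^{n} \pi_i\!\left(a_{N_i}\right),
\qquad
Z \;=\; \sum_{a} \prod_{i=1}^{n} \pi_i\!\left(a_{N_i}\right),
\]

where $a_{N_i}$ denotes the joint action of agent $i$'s neighborhood in the graph, and each potential $\pi_i$ could in principle encode a game-derived belief (e.g., probabilistic deviation from equilibrium) or a heuristic action-choice model, with the combination methods mentioned above merging such sources into a single distribution.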