Agents must form and update mental models about each other in a wide range of domains: team coordination, plan recognition, social simulation, user modeling, games of incomplete information, etc. Existing research typically treats the problem of forming beliefs about other agents as an isolated subproblem, where the modeling agent starts from an initial set of possible models of another agent and then maintains a belief about which of those models applies. This initial set is typically a full specification of the possible agent types. Although such a rich model space gives the modeling agent high accuracy in its beliefs, it also incurs a high cost in maintaining those beliefs. In this paper, we demonstrate that by taking this modeling problem out of isolation and placing it back within the overall decision-making context, the modeling agent can drastically reduce this rich model space without sacrificing any performance. Our approach comprises three methods. The first method...
David V. Pynadath, Stacy Marsella
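To make the baseline setup concrete, the following is a minimal sketch, in Python, of the standard approach the abstract describes: a modeling agent holds a fixed set of candidate models of another agent and maintains a Bayesian belief over which model applies, updating that belief from observed actions. The `Model` class, `update_belief` function, and the toy "helper"/"loner" models are illustrative assumptions, not components of the paper.

```python
# Sketch of belief maintenance over a fixed space of candidate agent models.
# All names here are illustrative assumptions, not identifiers from the paper.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Model:
    """One hypothesized model of the other agent."""
    name: str
    # Probability that the modeled agent takes `action` in `state` under this model.
    policy: Callable[[str, str], float]


def update_belief(belief: Dict[str, float], models: List[Model],
                  state: str, observed_action: str) -> Dict[str, float]:
    """Bayesian update: P(model | action) is proportional to P(action | model) * P(model)."""
    posterior = {}
    for m in models:
        posterior[m.name] = belief[m.name] * m.policy(state, observed_action)
    total = sum(posterior.values())
    if total == 0.0:
        # Observation inconsistent with every candidate model; keep the prior unchanged.
        return dict(belief)
    return {name: p / total for name, p in posterior.items()}


if __name__ == "__main__":
    # Two toy hypotheses about a teammate: "helper" favors assisting, "loner" does not.
    models = [
        Model("helper", lambda s, a: 0.9 if a == "assist" else 0.1),
        Model("loner", lambda s, a: 0.2 if a == "assist" else 0.8),
    ]
    belief = {"helper": 0.5, "loner": 0.5}  # uniform prior over the model space
    for action in ["assist", "assist", "ignore"]:
        belief = update_belief(belief, models, state="task", observed_action=action)
        print(action, belief)
```

Note that each update iterates over every candidate model, so the cost of belief maintenance grows with the richness of the model space; the reduction described in the abstract targets exactly that cost by shrinking the space of models the agent must track.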