This paper presents properties and results of a new framework for sequential decision-making in multiagent settings called interactive partially observable Markov decision processes (I-POMDPs). I-POMDPs are generalizations of POMDPs, a well-known framework for decision-theoretic planning in uncertain domains, to cases in which an agent needs to plan a course of action in an environment populated by other agents.
Piotr J. Gmytrasiewicz, Prashant Doshi
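As a brief sketch of the generalization described above (using notation assumed here rather than stated in the abstract), a POMDP for an agent $i$ is a tuple $\langle S, A_i, T_i, \Omega_i, O_i, R_i \rangle$ of physical states, actions, a transition function, observations, an observation function, and rewards. An I-POMDP of agent $i$ interacting with another agent $j$ augments the physical state space with models of $j$, roughly:
$$ \text{I-POMDP}_i = \langle IS_i, A, T_i, \Omega_i, O_i, R_i \rangle, \qquad IS_i = S \times M_j, \qquad A = A_i \times A_j, $$
where $M_j$ denotes the set of possible models of agent $j$; beliefs are then maintained over these interactive states rather than over physical states alone.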