Agents in a multi-agent system do not act in a vacuum. The outcome of their efforts depends on the environment in which they seek to act, and in particular on the efforts of other agents with whom they share that environment. We review previous efforts to address this problem, including active environments, concurrency modeling, recursive reasoning, and stochastic processes. Then we propose an approach that combines active environments and stochastic processes while addressing their limitations: a swarming agent simulation (which maintains transition probabilities dynamically, avoiding the static assumptions typically required by traditional Markov models), applied concurrently to multiple perspectives (thus partitioning the active environment and addressing its scalability challenges). We demonstrate this method on a simple example.

Categories and Subject Descriptors
I.2.11 [Computing Methodologies]: Distributed Artificial Intelligence
H. Van Dyke Parunak, Robert Bisson, Sven A. Brueckner
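
To make the central idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: swarming "ghost" agents repeatedly sample transitions in a small state graph and deposit digital pheromone on the edges they traverse. Because the pheromone evaporates, normalizing a row of edge strengths yields transition-probability estimates that track a changing environment rather than assuming a static Markov chain. All states, parameters, and class names here are hypothetical.

```python
# Sketch (assumed design, not the paper's code): estimate transition
# probabilities dynamically from swarming samples with evaporating pheromone.
import random
from collections import defaultdict

STATES = ["patrol", "engage", "retreat"]

def true_next_state(state, t):
    """Hidden behavior of the observed agent; it drifts halfway through the run.
    The swarm never reads these tables directly -- it only sees sampled moves."""
    if t < 500:
        table = {"patrol": [0.7, 0.2, 0.1],
                 "engage": [0.3, 0.5, 0.2],
                 "retreat": [0.6, 0.1, 0.3]}
    else:
        table = {"patrol": [0.2, 0.6, 0.2],
                 "engage": [0.1, 0.8, 0.1],
                 "retreat": [0.3, 0.5, 0.2]}
    return random.choices(STATES, weights=table[state])[0]

class PheromoneMap:
    """Edge pheromone with evaporation; normalizing a row gives P(next | state)."""
    def __init__(self, evaporation=0.99, deposit=1.0):
        self.strength = defaultdict(float)
        self.evaporation = evaporation
        self.deposit = deposit

    def reinforce(self, edge):
        self.strength[edge] += self.deposit

    def evaporate(self):
        for edge in self.strength:
            self.strength[edge] *= self.evaporation

    def transition_probs(self, state):
        row = {s: self.strength[(state, s)] for s in STATES}
        total = sum(row.values()) or 1.0
        return {s: v / total for s, v in row.items()}

def run(steps=1000, ghosts=20):
    pheromone = PheromoneMap()
    current = {g: random.choice(STATES) for g in range(ghosts)}
    for t in range(steps):
        for g in range(ghosts):
            nxt = true_next_state(current[g], t)    # observe one sampled move
            pheromone.reinforce((current[g], nxt))  # record what was seen
            current[g] = nxt
        pheromone.evaporate()                       # forget stale behavior
        if t in (499, 999):
            probs = pheromone.transition_probs("patrol")
            print(f"t={t+1}, estimated P(next | patrol):",
                  {s: round(p, 2) for s, p in probs.items()})

if __name__ == "__main__":
    run()
```

Running the sketch prints the estimated row for the "patrol" state before and after the behavior shift; the evaporation rate controls how quickly the estimate adapts, which is the dynamic-maintenance property the abstract contrasts with static Markov models.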