This paper addresses the evolution of control strategies for a collective: a set of entities that collectively strive to maximize a global evaluation function rating the performance of the full system. Directly addressing such problems by maintaining a population of collectives and applying an evolutionary algorithm to that population is appealing, but the search space is prohibitively large in most cases. Instead, we focus on evolving control policies for each member of the collective. The fundamental issue in this approach is how to create an evaluation function for each member of the collective that is both aligned with the global evaluation function and sensitive to the fitness changes of that member, while remaining relatively insensitive to the fitness changes of other members. We show how to construct such evaluation functions in dynamic, noisy, and communication-limited collective environments. On a rover coordination problem, a control policy evolved using aligned and member-sensitive e...
Kagan Tumer, Adrian K. Agogino
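To make the alignment and sensitivity requirements concrete, the following is a minimal sketch of one common way a member-specific evaluation can be constructed: a difference-style evaluation that credits each member with the change in the global evaluation when that member's contribution is removed. The global evaluation used here (counting distinct observed positions) is a hypothetical stand-in, not the paper's rover coordination objective.

```python
# A minimal, hypothetical sketch (not the paper's exact construction) of a
# difference-style member evaluation: each member is scored by how much the
# global evaluation G changes when that member's contribution is removed.
# Such an evaluation stays aligned with G while being far more sensitive to
# the member's own actions than to the actions of other members.

from typing import Callable, Sequence


def global_eval(joint_state: Sequence[float]) -> float:
    """Hypothetical global evaluation G: counts distinct (rounded) positions
    observed by the collective, so redundant observations add nothing."""
    return float(len({round(x, 1) for x in joint_state}))


def member_eval(joint_state: Sequence[float], i: int,
                G: Callable[[Sequence[float]], float] = global_eval) -> float:
    """Difference-style evaluation for member i:
    D_i(z) = G(z) - G(z_{-i}), where z_{-i} drops member i's contribution."""
    without_i = [x for j, x in enumerate(joint_state) if j != i]
    return G(joint_state) - G(without_i)


if __name__ == "__main__":
    # Three members observing positions; members 0 and 1 observe the same spot.
    z = [0.10, 0.10, 0.57]
    print(f"global evaluation G(z) = {global_eval(z):.1f}")
    for i in range(len(z)):
        print(f"member {i}: D_i = {member_eval(z, i):.1f}")
    # Member 2 (unique observation) receives credit; the two redundant members
    # receive none, even though all three contribute to the same joint state.
```

In this toy setting, an action that raises a member's D_i can only raise (or leave unchanged) the global evaluation G, illustrating alignment, while D_i varies mostly with that member's own behavior rather than the rest of the collective, illustrating member sensitivity.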