In multiagent planning, it is often convenient to view a problem as two subproblems: agent local planning and coordination. Agent activities can thus be classified into two categories, local problem-solving activities and coordination activities, with each category addressing the corresponding subproblem. However, recent mathematical models, such as the decentralized Markov decision process (DEC-MDP) and the decentralized partially observable Markov decision process (DEC-POMDP), view the problem as a single decision process and do not distinguish between agent local planning and coordination. In this paper, we present a synergistic representation that brings these two views together, and we show that the two views are equivalent. Under this representation, traditional plan coordination mechanisms can be conveniently modeled and interpreted as approximation methods for solving the decision processes.

General Terms
Algorithms, Theory

Keywords
Multiagent Systems; Coordination, ...
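As context for the models named above, the DEC-POMDP can be sketched in its standard tuple form; the notation below follows the usual conventions in the decentralized-control literature and is not taken from this excerpt:

```latex
% Standard DEC-POMDP formulation (notation assumed from the literature,
% not from this paper).
A DEC-POMDP for $n$ agents is a tuple
\[
  \langle I,\; S,\; \{A_i\},\; P,\; \{\Omega_i\},\; O,\; R \rangle
\]
where $I = \{1,\dots,n\}$ is the set of agents, $S$ is a finite set of
world states, $A_i$ is the action set of agent $i$,
$P(s' \mid s, a_1,\dots,a_n)$ is the joint transition function,
$\Omega_i$ is the observation set of agent $i$,
$O(o_1,\dots,o_n \mid s', a_1,\dots,a_n)$ is the joint observation
function, and $R(s, a_1,\dots,a_n)$ is the shared reward function.
A DEC-MDP is the special case in which the agents' joint observation
uniquely determines the world state.
```

Note that nothing in this formulation separates an agent's local actions from its coordination actions; that distinction is what the representation proposed in the paper reintroduces.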