Multiagent reinforcement learning problems are especially difficult because of their dynamic nature and the size of the joint state space. In this paper, a new benchmark problem is proposed that requires cooperation, competition, and synchronization among agents. The notion of a state attractor is introduced: agents compute their actions based on the proximity of their current state to the nearest state attractor. A genetic algorithm is used to find the state attractors. This representation provides a compact way to define individual or joint policies.
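
The attractor-based action selection can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the grid world, the action set, and all names below are assumptions introduced for illustration only. A policy is parameterized by a small set of attractor states, and the agent greedily selects the action that brings it closest to the nearest attractor.

```python
import numpy as np

# Hypothetical 2-D grid world, used only for illustration: a state is the
# agent's (row, col) position and each action is a unit move.
ACTIONS = {
    "up":    np.array([-1,  0]),
    "down":  np.array([ 1,  0]),
    "left":  np.array([ 0, -1]),
    "right": np.array([ 0,  1]),
    "stay":  np.array([ 0,  0]),
}

def nearest_attractor(state, attractors):
    """Return the state attractor closest to the current state (Euclidean distance)."""
    dists = np.linalg.norm(attractors - state, axis=1)
    return attractors[np.argmin(dists)]

def attractor_policy(state, attractors):
    """Choose the action that moves the agent closest to its nearest attractor."""
    target = nearest_attractor(state, attractors)
    return min(ACTIONS, key=lambda a: np.linalg.norm(state + ACTIONS[a] - target))

# Two attractor states suffice to define a policy over the whole grid,
# which is what makes the representation compact.
attractors = np.array([[0, 0], [5, 5]])
print(attractor_policy(np.array([4, 2]), attractors))  # -> "right"
```

The search for the attractors themselves could look like the sketch below: a simplified evolutionary loop (Gaussian mutation plus truncation selection, omitting crossover) standing in for the paper's genetic algorithm. The fitness function here is a hypothetical stand-in; in the actual setting it would be the return of an episode played with the attractor-based policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(attractors):
    """Stand-in fitness for illustration: rewards attractor sets that cover a
    hypothetical goal state at (5, 5); the real fitness would be episode return."""
    return -np.linalg.norm(attractors - np.array([5, 5]), axis=1).min()

def evolve(pop_size=20, n_attractors=2, generations=50):
    """Minimal evolutionary loop over attractor coordinates."""
    pop = rng.uniform(0, 6, size=(pop_size, n_attractors, 2))
    for _ in range(generations):
        children = pop + rng.normal(0, 0.5, size=pop.shape)  # Gaussian mutation
        union = np.concatenate([pop, children])
        scores = np.array([fitness(ind) for ind in union])
        pop = union[np.argsort(scores)[-pop_size:]]          # truncation selection
    return pop[-1]  # best individual: a set of attractor states

print(evolve())
```

Because each individual is just a short list of attractor coordinates rather than a full state-to-action table, the same encoding extends naturally from individual policies to joint ones, e.g. by concatenating one attractor set per agent into a single chromosome.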