This paper describes an algorithm, called CQ-learning, which learns to adapt the state representation in multi-agent systems in order to coordinate with other agents. We propose a multi-level approach which builds a progressively more advanced representation of the learning problem. The idea is that agents start with a minimal, single-agent state space representation, which is expanded only when necessary. When agents detect conflicts, they automatically expand their state representation to explicitly take the other agents into account. These conflict situations are then analysed in an attempt to find an abstract representation which generalises over the problem states. Our system allows agents to learn effective policies while avoiding the exponential state space growth typical of multi-agent environments. Furthermore, the method we introduce to generalise over conflict states allows knowledge to be transferred to unseen and possibly more complex situations. Our research departs from previous efforts...
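To make the expansion mechanism concrete, the following is a minimal, illustrative sketch in Python. It is not the paper's implementation: the class name `CQAgent`, the reward-drop conflict test, and the `baseline` table are assumptions made for illustration only. The agent acts on its own local state by default, and augments a state with the other agent's observation once a conflict has been detected there.

```python
import random
from collections import defaultdict

class CQAgent:
    """Illustrative sketch of a CQ-learning-style agent (hypothetical API).

    By default the agent learns over its own local states only. A local
    state is 'expanded' to include the other agent's state when a conflict
    is detected there, approximated here as a significant drop in observed
    reward relative to a single-agent baseline.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1,
                 conflict_threshold=0.5):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.conflict_threshold = conflict_threshold
        self.q = defaultdict(float)  # (state_key, action) -> Q-value
        self.expanded = set()        # local states flagged as conflicts
        self.baseline = {}           # expected reward per local state,
                                     # assumed learned in a prior
                                     # single-agent phase

    def _key(self, local_state, other_state):
        # Expanded states are represented jointly; all others stay local,
        # which keeps the state space small until coordination is needed.
        if local_state in self.expanded:
            return ("joint", local_state, other_state)
        return ("local", local_state)

    def select_action(self, local_state, other_state):
        # Epsilon-greedy over whichever representation this state uses.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        key = self._key(local_state, other_state)
        return max(self.actions, key=lambda a: self.q[(key, a)])

    def update(self, local_state, other_state, action, reward,
               next_local, next_other):
        # Conflict detection (simplified): a reward well below the
        # single-agent baseline suggests interference by the other agent.
        base = self.baseline.get(local_state, reward)
        if (local_state not in self.expanded
                and base - reward > self.conflict_threshold):
            self.expanded.add(local_state)
        # Standard Q-learning update on the (possibly expanded) state.
        key = self._key(local_state, other_state)
        next_key = self._key(next_local, next_other)
        best_next = max(self.q[(next_key, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(key, action)]
        self.q[(key, action)] += self.alpha * td_error
```

The abstraction step described above, which generalises over the detected conflict states so that knowledge transfers to unseen situations, is omitted from this sketch; here each conflict state is expanded individually.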