A fundamental difficulty faced by groups of agents that work together is how to efficiently coordinate their efforts. This coordination problem is both ubiquitous and challenging, especially in environments where autonomous agents are motivated by personal goals. Previous AI research on coordination has developed techniques that allow agents either to act efficiently from the outset, based on common built-in knowledge, or to learn to act efficiently when the agents are not autonomous. The research described in this paper builds on those efforts by developing distributed learning techniques that improve coordination among autonomous agents. The techniques presented in this work apply to agents who are heterogeneous, who do not have complete built-in common knowledge, and who cannot coordinate solely by observation. An agent learns from her experiences so that her future behavior more accurately reflects what works (or does not work) in practice. Each agent stores past successes (both planned a...
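To make the experience-storage idea concrete, the sketch below shows one way an agent might keep a local case base of past coordination episodes and bias future choices toward actions that previously succeeded. It is a minimal illustration in Python under assumed names (`Case`, `Agent.record`, `Agent.choose` are hypothetical), not the actual mechanism developed in this paper.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    situation: tuple   # features describing the coordination context
    action: str        # what the agent did in that situation
    succeeded: bool    # whether the joint activity worked out

@dataclass
class Agent:
    name: str
    casebase: list = field(default_factory=list)  # this agent's private experience

    def record(self, situation, action, succeeded):
        """Store an experience so future behavior reflects past outcomes."""
        self.casebase.append(Case(situation, action, succeeded))

    def choose(self, situation, options):
        """Prefer actions that succeeded in similar past situations,
        avoid ones that failed, and otherwise fall back to a default."""
        for case in reversed(self.casebase):  # most recent experience first
            if case.situation == situation and case.action in options:
                if case.succeeded:
                    return case.action
                options = [o for o in options if o != case.action]
        return options[0] if options else None

# Usage: the agent learns which choice coordinates well in a given context.
a = Agent("alice")
a.record(("meet", "noon"), "go_to_lobby", succeeded=False)
a.record(("meet", "noon"), "go_to_cafe", succeeded=True)
print(a.choose(("meet", "noon"), ["go_to_lobby", "go_to_cafe"]))  # go_to_cafe
```

Because each agent consults only her own case base, this style of learning remains fully distributed: no shared memory or central coordinator is assumed.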