Many algorithms, such as Q-learning, successfully address reinforcement learning in single-agent, multi-time-step problems. In addition, there are methods that address reinforcement learning in multi-agent, single-time-step problems. However, unmodified single-agent multi-time-step methods and multi-agent single-time-step methods cannot in general be combined to solve multi-agent, multi-time-step problems, because multi-agent interactions are strongly coupled across time steps. Rewards that promote multi-agent collaboration at a single time step may lead to poor collaboration at future time steps. This paper shows how to avoid this problem.
Kagan Tumer, Adrian K. Agogino
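The abstract cites Q-learning as a canonical single-agent, multi-time-step method. For reference, a minimal tabular Q-learning sketch follows; the toy chain environment, the hyperparameter values, and the function name `q_learning` are illustrative assumptions, not material from the paper.

```python
# A minimal tabular Q-learning sketch for a single-agent, multi-time-step
# problem. The chain MDP and hyperparameters below are assumptions made
# for illustration only.
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn Q(s, a) on a simple chain MDP: action 1 moves right,
    action 0 moves left; reaching the last state yields reward 1."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Standard Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

if __name__ == "__main__":
    for s, row in enumerate(q_learning()):
        print(f"state {s}: {row}")
```

In a multi-agent setting, replacing `r` above with a per-agent reward chosen for single-time-step collaboration is exactly the step the abstract cautions against: the bootstrapped term `max(Q[s_next])` propagates that reward's effects across time steps.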