This paper studies iterative learning control (ILC) in a multi-agent framework. A group of agents simultaneously and repeatedly performs the same task, and each agent improves its performance by using the knowledge gained from previous executions. Assuming similarity between the agents, we investigate whether exchanging information between the agents improves an individual agent's learning performance; that is, does an individual agent benefit from the experience of the other agents? The multi-agent iterative learning problem is viewed as a two-step process: first, estimating the repetitive disturbance of each agent from the given measurements, and second, correcting for it. This setup reduces the previous question to a comparison of an agent's disturbance estimate under (I) independent estimation, where each agent has access only to its own measurements, and (II) joint estimation, where the information of all agents is globally accessible. An upper bound of the performance increa...
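As a rough illustration of the comparison between cases (I) and (II), the sketch below contrasts independent and joint linear minimum-mean-square-error estimates of correlated repetitive disturbances from noisy measurements. It is not the paper's model; the number of agents, the similarity correlation `rho`, and the noise levels are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5            # number of agents (assumed)
sigma_d = 1.0    # prior std of each agent's repetitive disturbance (assumed)
rho = 0.8        # assumed inter-agent similarity: correlation of disturbances
sigma_v = 0.5    # measurement noise std (assumed)

# Prior covariance of the stacked disturbance vector:
# similar agents => correlated disturbance entries.
Sigma_d = sigma_d**2 * (rho * np.ones((N, N)) + (1 - rho) * np.eye(N))
Sigma_v = sigma_v**2 * np.eye(N)

# One trial: true disturbances d and noisy measurements y = d + v.
d = rng.multivariate_normal(np.zeros(N), Sigma_d)
v = rng.normal(0.0, sigma_v, N)
y = d + v

# (I) Independent estimation: each agent uses only its own measurement.
gain_indep = sigma_d**2 / (sigma_d**2 + sigma_v**2)
d_hat_indep = gain_indep * y

# (II) Joint estimation: all measurements are used, exploiting the
# cross-covariance induced by agent similarity.
K_joint = Sigma_d @ np.linalg.inv(Sigma_d + Sigma_v)
d_hat_joint = K_joint @ y

# Estimation-error variances predicted by the linear-Gaussian model.
P_indep = (1 - gain_indep) * sigma_d**2
P_joint = Sigma_d - K_joint @ Sigma_d

print("per-agent error variance, independent:", P_indep)
print("per-agent error variance, joint      :", np.diag(P_joint))
```

In this toy setting the joint estimate never does worse than the independent one, and the gap grows with the assumed similarity `rho`, which mirrors the question the abstract poses about the benefit of sharing information across agents.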