To prevent or alleviate conflicts in multi-agent environments, it is important to distinguish between situations where another agent has misbehaved intentionally and situations wh...
Reward shaping is a well-known technique used to help reinforcement-learning agents converge more quickly to near-optimal behavior. In this paper, we introduce social reward shaping...
Monica Babes, Enrique Munoz de Cote, Michael L. Littman
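Since the snippet above names reward shaping without room for its details, the following is a minimal sketch of the standard potential-based form of the technique, in which the environment reward is augmented by gamma * phi(s') - phi(s). The chain MDP, the potential function, and the hyperparameters are illustrative assumptions; the "social" variant introduced in the cited paper is not reproduced here.

```python
# Minimal sketch of potential-based reward shaping (the standard form of the
# technique named above). The chain MDP, the potential function phi, and all
# hyperparameters are illustrative assumptions; the "social" shaping scheme
# introduced in the cited paper is not reproduced here.
import random

N_STATES = 5                     # states 0..4, terminal goal at state 4
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

def phi(s):
    # Assumed potential: states closer to the goal get higher potential.
    return float(s)

def step(s, a):
    # a = 0 moves left, a = 1 moves right; the environment reward is sparse.
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
for _ in range(200):
    s, done = 0, False
    while not done:
        a = random.choice((0, 1)) if random.random() < EPS \
            else max((0, 1), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Shaped reward r' = r + gamma * phi(s') - phi(s); the terminal
        # potential is taken as 0 so the optimal policy is left unchanged.
        shaped = r + GAMMA * (0.0 if done else phi(s2)) - phi(s)
        target = shaped + (0.0 if done else GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print({s: round(max(Q[(s, 0)], Q[(s, 1)]), 2) for s in range(N_STATES)})
```

Because the shaping term telescopes along any trajectory, the optimal policy of the original problem is preserved while intermediate states receive informative feedback earlier, which is why convergence speeds up.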
Computational Game Theory is a way to study and evaluate behaviors using game-theoretic models, via agent-based computer simulations. One of the best-known examples of this approach...
In many Multi-Agent Systems (MAS), agents (even if self-interested) need to cooperate in order to maximize their own utilities. Most multi-agent learning algorithms focus on...
Jose Enrique Munoz de Cote, Alessandro Lazaric, Marcello Restelli
This paper focuses on the Noisy Iterated Prisoner's Dilemma, a version of the Iterated Prisoner's Dilemma (IPD) in which there is a nonzero probability that a "cooperate"...
Table 1 shows the payoff to player one; by symmetry, the same matrix holds for player two. Player one gains the maximum of 5 points (T = 5) by defecting while player two cooperates. However,...
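To make the payoff description and the noise model above concrete, here is a hedged sketch of a Noisy IPD match. T = 5 comes from the text; R = 3, P = 1, S = 0 are the conventional values and are assumed here, as are the two example strategies and the 1% noise rate.

```python
# Hedged sketch of the payoff matrix and noise model described above.
# T = 5 comes from the text; R = 3, P = 1, S = 0 are the conventional values
# and are assumed here, as are the strategies and the 1% noise rate.
import random

C, D = "C", "D"
# PAYOFF[my_move][opponent_move] -> my score (T > R > P > S, 2R > T + S)
PAYOFF = {C: {C: 3, D: 0},   # R = 3 (mutual cooperation), S = 0 (sucker's payoff)
          D: {C: 5, D: 1}}   # T = 5 (temptation),         P = 1 (mutual defection)

def noisy(move, eps=0.01):
    # With probability eps the intended move is mis-executed (flipped),
    # which is the defining feature of the Noisy IPD described above.
    return (D if move == C else C) if random.random() < eps else move

def tit_for_tat(history):
    # history holds (own executed move, opponent's executed move) pairs.
    return C if not history else history[-1][1]

def always_defect(history):
    return D

def play(strat1, strat2, rounds=1000, eps=0.01):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = noisy(strat1(h1), eps), noisy(strat2(h2), eps)
        s1 += PAYOFF[m1][m2]
        s2 += PAYOFF[m2][m1]
        h1.append((m1, m2))
        h2.append((m2, m1))
    return s1, s2

print(play(tit_for_tat, always_defect))
```

Even this tiny setup shows why the noisy variant matters: a single mis-executed cooperation can push reciprocating strategies such as Tit-for-Tat into prolonged mutual retaliation.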
The Classical Iterated Prisoner's Dilemma (CIPD) is used to study the evolution of cooperation. We show, with a genetic approach, how basic ideas could be used to generate...
Bruno Beaufils, Jean-Paul Delahaye, Philippe Mathieu
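As a rough illustration of how a genetic approach can generate IPD strategies, the sketch below encodes a memory-one strategy as a three-gene genome (first move, reply to an opponent's C, reply to an opponent's D) and evolves a population by round-robin fitness, truncation selection, and point mutation. The encoding, payoffs, and GA parameters are assumptions for illustration only, not the construction used in the cited work.

```python
# Minimal sketch of a genetic approach to generating IPD strategies.
# The memory-one encoding, payoffs, population size, and mutation rate
# are illustrative assumptions, not the cited paper's setup.
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def decode(genome):
    # genome = (first move, reply to opponent's C, reply to opponent's D)
    def strategy(opp_last):
        if opp_last is None:
            return genome[0]
        return genome[1] if opp_last == "C" else genome[2]
    return strategy

def match(g1, g2, rounds=50):
    # Return g1's total score against g2 over a deterministic match.
    s1, a, b = 0, None, None
    p1, p2 = decode(g1), decode(g2)
    for _ in range(rounds):
        m1, m2 = p1(b), p2(a)
        s1 += PAYOFF[(m1, m2)]
        a, b = m1, m2
    return s1

def fitness(g, population):
    # Round-robin score of genome g against the whole population.
    return sum(match(g, other) for other in population)

random.seed(0)
pop = [tuple(random.choice("CD") for _ in range(3)) for _ in range(20)]
for _ in range(30):
    ranked = sorted(pop, key=lambda g: fitness(g, pop), reverse=True)
    survivors = ranked[:10]                          # truncation selection
    children = [tuple(m if random.random() > 0.1     # 10% point mutation
                      else random.choice("CD")
                      for m in random.choice(survivors))
                for _ in range(10)]
    pop = survivors + children

print(sorted(set(pop)))   # genomes remaining after evolution
```

In this encoding, ("C", "C", "D") is Tit-for-Tat and ("D", "D", "D") is Always-Defect; which one dominates depends on the population mix, which is exactly the kind of question such genetic studies examine.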
We investigate the following question: do populations of evolving agents adapt only to their recent environment, or do general adaptive features appear over time? We find statistica...
The iterated prisoner’s dilemma is a widely used computational model of cooperation and conflict. Many studies report emergent cooperation in populations of agents trained to p...
In multi-agent communities, trust is required when agents hold different beliefs or conflicting goals. We present a framework for decomposing agent reputation into competence—...