Reward shaping is a well-known technique applied to help reinforcement-learning agents converge more quickly to near-optimal behavior. In this paper, we introduce social reward shaping, which is reward shaping applied in the multiagent-learning framework. We present preliminary experiments in the iterated Prisoner's Dilemma setting showing that agents that use social reward shaping appropriately can behave more effectively than other classical learning and non-learning strategies. In particular, we show that these agents can both lead (encourage adaptive opponents to cooperate stably) and follow (adopt a best-response strategy when paired with a fixed opponent), where better-known approaches achieve only one of these objectives.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning

General Terms: Algorithms, Performance, Experimentation

Keywords: Reinforcement learning, leader/follower strategies, iterated prisoner's dilemma, game theory, subgame perfect ...
Monica Babes, Enrique Munoz de Cote, Michael L. Li
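The central tool named in the abstract, reward shaping, is commonly realized as potential-based shaping, where the reward is augmented by gamma*phi(s') - phi(s) for some state potential phi, a transformation known to leave optimal policies unchanged. The following minimal sketch checks this invariance by value iteration on the single-agent MDP induced by playing the iterated Prisoner's Dilemma against a fixed tit-for-tat opponent; the payoff numbers, the choice of opponent, and the potential values are illustrative assumptions, not details taken from the paper.

```python
# Sketch of potential-based reward shaping in the iterated Prisoner's
# Dilemma against tit-for-tat. All numeric values are assumptions.

GAMMA = 0.95
ACTIONS = ("C", "D")  # cooperate, defect

# Row player's payoff for (my action, opponent action); standard
# illustrative PD values, not the paper's experimental parameters.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}


def base_reward(state, action):
    # Against tit-for-tat, the state is my previous action, which the
    # opponent replays this round; my action becomes the next state.
    return PAYOFF[(action, state)]


# A hypothetical potential that favors states reached by cooperating.
PHI = {"C": 5.0, "D": 0.0}


def shaped_reward(state, action):
    # Potential-based shaping: F(s, s') = gamma * phi(s') - phi(s),
    # where the next state s' equals the chosen action.
    return base_reward(state, action) + GAMMA * PHI[action] - PHI[state]


def greedy_policy(reward, sweeps=2000):
    # Plain value iteration; deterministic transitions (next state = action).
    v = {s: 0.0 for s in ACTIONS}
    for _ in range(sweeps):
        v = {s: max(reward(s, a) + GAMMA * v[a] for a in ACTIONS)
             for s in ACTIONS}
    return {s: max(ACTIONS, key=lambda a: reward(s, a) + GAMMA * v[a])
            for s in ACTIONS}


if __name__ == "__main__":
    # Shaping changes the reward signal but not the greedy policy:
    # with gamma = 0.95, cooperation is optimal in both states.
    print(greedy_policy(base_reward), greedy_policy(shaped_reward))
```

Against tit-for-tat with a discount factor this high, both the shaped and unshaped problems yield the always-cooperate policy; the shaping term only redistributes value along trajectories, which is what lets a shaped learner converge faster without changing what it ultimately learns.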