Tim Brys, Anna Harutyunyan, Matthew E. Taylor, Ann Nowé

Transfer learning has proven to be a wildly successful approach for speeding up reinforcement learning. Techniques often use low-level information obtained in the source task to achieve successful transfer in the target task. Yet, the most general transfer approach can only assume access to the output of the learning algorithm in the source task, i.e., the learned policy, enabling transfer irrespective of the learning algorithm used in the source task. We advance the state-of-the-art by using a reward shaping approach to policy transfer. One of the advantages of following such an approach is that it firmly grounds policy transfer in an actively developing body of theoretical research on reward shaping. Experiments in Mountain Car, Cart Pole and Mario demonstrate the practical usefulness of the approach.

Categories and Subject Descriptors
I.2.6 [Learning]: Miscellaneous

General Terms
Algorithms, Performance

Keywords
Reinforcement Learning; Transfer Learning; Reward Shaping
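As a rough illustration of how shaping-based policy transfer could be wired up (not the authors' exact formulation), the sketch below runs tabular Q-learning on the target task with a potential-based shaping term whose potential rewards agreement with a source-task policy; the toy environment, the potential function `phi`, and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's exact algorithm): tabular
# Q-learning on the target task with potential-based reward shaping, where
# the potential rewards agreement with a policy learned in a source task.

n_states, n_actions = 20, 3
rng = np.random.default_rng(0)

# Hypothetical source-task policy: a table mapping state -> preferred action.
source_policy = rng.integers(n_actions, size=n_states)

def phi(s, a, scale=1.0):
    """Potential over state-action pairs: high when `a` agrees with the source policy."""
    return scale if a == source_policy[s] else 0.0

def step(s, a):
    """Toy placeholder for the target-task environment (illustrative dynamics only)."""
    s_next = (s + 1) % n_states if a == source_policy[s] else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

s = 0
for _ in range(10_000):
    # Epsilon-greedy action selection in the target task.
    a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    a_next = int(np.argmax(Q[s_next]))  # greedy next action used by the Q-learning target
    # Potential-based shaping term F = gamma * phi(s', a') - phi(s, a):
    # it nudges the agent toward the source policy's choices without
    # replacing the target task's own reward signal.
    F = gamma * phi(s_next, a_next) - phi(s, a)
    Q[s, a] += alpha * (r + F + gamma * Q[s_next, a_next] - Q[s, a])
    s = s_next
```

Because the extra term is potential-based, it only biases learning toward the transferred behaviour rather than overriding the target task's reward.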