This paper describes several ensemble methods that combine multiple reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of the different RL algorithms. We designed and implemented four ensemble methods combining five RL algorithms: Q-learning, Sarsa, Actor-Critic, QV-learning, and ACLA. The four intuitively designed ensemble methods (majority voting, rank voting, Boltzmann multiplication, and Boltzmann addition) combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work, where ensemble methods were used in RL to represent and learn a single value function. We present experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are dynamic or partially observable. The results indicate that the Boltz...
Marco A. Wiering, Hado van Hasselt
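For illustration, here is a minimal Python sketch of how the four ensemble methods described above might combine per-algorithm action preferences into a single policy. The function names, the shared temperature `tau`, and the final normalization step are assumptions for this sketch, not the paper's exact formulation (which, for instance, may apply a separate Boltzmann distribution with its own temperature to the combined preference values).

```python
import numpy as np

def softmax(prefs, tau=1.0):
    """Boltzmann distribution over a vector of preference values."""
    z = np.exp((prefs - prefs.max()) / tau)
    return z / z.sum()

def ensemble_policy(value_rows, method="boltzmann_mult", tau=1.0):
    """Combine per-algorithm action values into one action distribution.

    value_rows: array of shape (n_algorithms, n_actions); each row holds
    one RL algorithm's action values (or actor preferences) for the
    current state.
    """
    # Each algorithm's own Boltzmann policy over its action values.
    policies = np.array([softmax(v, tau) for v in value_rows])
    n_actions = policies.shape[1]
    if method == "majority_voting":
        # One vote per algorithm for its greedy action.
        votes = policies.argmax(axis=1)
        prefs = np.bincount(votes, minlength=n_actions).astype(float)
    elif method == "rank_voting":
        # Rank actions per algorithm (higher probability -> higher rank),
        # then sum the ranks across algorithms.
        ranks = policies.argsort(axis=1).argsort(axis=1)
        prefs = ranks.sum(axis=0).astype(float)
    elif method == "boltzmann_mult":
        # Product of the individual action probabilities.
        prefs = policies.prod(axis=0)
    elif method == "boltzmann_add":
        # Sum of the individual action probabilities.
        prefs = policies.sum(axis=0)
    else:
        raise ValueError(f"unknown ensemble method: {method}")
    return prefs / prefs.sum()
```

A hypothetical usage, with one row of action values per RL algorithm for a single state:

```python
# Rows: e.g. Q-learning, Sarsa, Actor-Critic values for three actions.
q_rows = np.array([[1.0, 0.5, 0.2],
                   [0.8, 0.9, 0.1],
                   [0.6, 0.4, 0.7]])
print(ensemble_policy(q_rows, method="majority_voting"))
```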