David Pardoe, Michael S. Ryoo, Risto Miikkulainen

In neuroevolution, a genetic algorithm is used to evolve a neural network to perform a particular task. The standard approach is to evolve a population over a number of generations and then select the final generation's champion as the end result. However, the population may contain valuable information that the champion alone does not capture, and the standard approach discards all of it. One possible solution is to combine multiple individuals from the final population into an ensemble. This approach has been successful in supervised classification tasks, and in this paper it is extended to evolutionary reinforcement learning in control problems. The method is evaluated on a challenging extension of the classic pole-balancing task, demonstrating that an ensemble can achieve significantly better performance than the champion alone.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning

General Terms: Algorithms
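The core idea of combining final-population individuals can be illustrated with a minimal sketch. The details below are illustrative assumptions, not the paper's actual method: controllers are modeled as simple linear networks with a tanh output, and the ensemble acts by averaging the members' continuous outputs.

```python
import numpy as np

def policy_output(weights, observation):
    # A toy linear controller: maps an observation vector to a scalar
    # action in [-1, 1] via a tanh squashing function.
    return float(np.tanh(weights @ observation))

def ensemble_output(population, observation):
    # Combine several evolved controllers by averaging their outputs,
    # rather than relying on the single champion alone.
    return float(np.mean([policy_output(w, observation) for w in population]))

# Hypothetical final population of evolved weight vectors.
rng = np.random.default_rng(0)
population = [rng.normal(size=4) for _ in range(5)]
obs = np.array([0.1, -0.2, 0.05, 0.0])  # e.g., a pole-balancing state

print(ensemble_output(population, obs))
```

Averaging is only one way to combine members; for discrete actions, a majority vote over the members' chosen actions is a common alternative.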