Despite the range of applications and successes of evolutionary algorithms, expensive fitness computations often form a critical performance bottleneck. A preferred method of reducing this computational overhead is to coevolve rank predictors, which provide a coarse, lightweight fitness approximation that has been shown to drastically increase performance. However, most previous work on rank predictor coevolution has focused solely on improving the predictor heuristics, while strategies for selecting the equally important trainer population are often an afterthought. Four different trainer selection strategies are presented and benchmarked on symbolic regression using hundreds of test problems of varying complexity. Of the four strategies, updating the trainer population with the solution of highest rank variance is found to be significantly superior, yielding a four- to ten-fold reduction in computational effort at similar convergence rates compared with the remaining strategies.
Daniel L. Ly, Hod Lipson
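The winning strategy, selecting the candidate whose rank varies most across the predictor ensemble, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the predictors are modeled as hypothetical callables that score a solution, and a candidate's rank is its position in the population when sorted by each predictor's score.

```python
import statistics

def rank_of(predictor, candidate, population):
    """Rank of `candidate` (0 = best) under one predictor's score ordering."""
    ordered = sorted(population, key=predictor, reverse=True)
    return ordered.index(candidate)

def select_trainer(candidates, predictors, population):
    """Pick the candidate whose rank disagrees most across predictors.

    High rank variance means the predictor ensemble is uncertain about
    this solution, so evaluating its exact fitness is most informative.
    (Illustrative sketch; names and signatures are assumptions.)
    """
    def rank_variance(c):
        ranks = [rank_of(p, c, population) for p in predictors]
        return statistics.pvariance(ranks)
    return max(candidates, key=rank_variance)
```

For example, with two toy predictors that rank a small integer population in opposite orders, the extremes of the population receive the most disputed ranks and one of them is selected as the next trainer.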