Abstract— The Learnable Evolution Model (LEM) was introduced by Michalski in 2000, and involves interleaved bouts of evolution and learning. Here we investigate LEM in (we think) its simplest form, using k-nearest neighbour as the ‘learning’ mechanism. The essence of the hybridisation is that candidate children are filtered, before evaluation, based on predictions from the learning mechanism (which learns based on previous populations). We test the resulting ‘KNNGA’ on the same set of problems that were used in the original LEM paper. We find that KNNGA provides very significant advantages in both solution speed and quality over the unadorned GA. This is in keeping with the original LEM paper’s results, in which the learning mechanism was AQ and the evolution/learning interface was more sophisticated. It is surprising and interesting to see such a beneficial improvement in the GA after such a simple learning-based intervention. Since the only application-specific demand o...
Guleng Sheri, David W. Corne
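The following is a minimal sketch, not the authors' code, of the filtering idea the abstract describes: candidate children are screened by a k-nearest-neighbour classifier trained on previously evaluated individuals, and only those predicted to be 'good' go forward to fitness evaluation. The parameter choices (k = 5, labelling each previous population's above-median individuals as 'good') and all names are illustrative assumptions.

```python
# Sketch of kNN-based child filtering (assumed details: real-valued genomes,
# k = 5, 'good'/'bad' labels from a median split of previous populations).
import random

def knn_predict_good(child, archive, k=5):
    """Predict whether a candidate child is 'good' by majority vote among its
    k nearest previously evaluated individuals (Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(archive, key=lambda rec: dist(rec[0], child))[:k]
    votes = sum(1 for _, label in neighbours if label == 'good')
    return votes > k // 2

def filter_children(children, archive, k=5):
    """Keep only children the kNN model predicts to be 'good'; only these are
    passed on for (potentially expensive) fitness evaluation."""
    return [c for c in children if knn_predict_good(c, archive, k)]

# Usage: 'archive' holds (genome, label) pairs gathered from earlier
# generations; 'children' are newly generated candidates awaiting evaluation.
archive = [([random.random() for _ in range(5)],
            'good' if random.random() > 0.5 else 'bad') for _ in range(50)]
children = [[random.random() for _ in range(5)] for _ in range(20)]
survivors = filter_children(children, archive)
print(len(survivors), "of", len(children), "children pass the kNN filter")
```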