A general problem in model selection is to obtain the right parameters that make a model fit observed data. For a multilayer perceptron (MLP) trained with back-propagation (BP), this means finding an appropriate layer size and initial weights. This paper proposes a method (G-Prop, genetic backpropagation) that attempts to solve that problem by combining a genetic algorithm (GA) and BP to train MLPs with a single hidden layer. The GA selects the initial weights and changes the number of neurons in the hidden layer through the application of specific genetic operators. G-Prop combines the advantages of the global search performed by the GA over the MLP parameter space and the local search of the BP algorithm. The application of the G-Prop algorithm to several real-world and benchmark problems shows that MLPs evolved using G-Prop are smaller and achieve a higher level of generalization than other perceptron training algorithms, such as Quick-Propagation or RPROP, and other evolut...
Pedro A. Castillo Valdivieso, Juan J. Merelo Guervós
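The abstract describes a hybrid scheme: a GA searches globally over hidden-layer size and initial weights, while a short BP run performs local search and provides the fitness signal. The following is a minimal sketch of that idea, not the authors' implementation; all function names, operators, and parameter values (population size, mutation rate, epoch counts) are illustrative assumptions, and plain gradient descent stands in for the QuickProp/RPROP trainers mentioned in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_individual(n_in, n_out, n_hidden):
    """An individual encodes the initial weights of a one-hidden-layer MLP."""
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out))}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_train(ind, X, y, epochs=20, lr=0.1):
    """Local search: a few epochs of plain back-propagation (stand-in for QP/RPROP)."""
    W1, W2 = ind["W1"].copy(), ind["W2"].copy()
    for _ in range(epochs):
        h = sigmoid(X @ W1)                     # hidden activations
        out = sigmoid(h @ W2)                   # network output
        delta_out = (out - y) * out * (1 - out) # output-layer error term
        dW2 = h.T @ delta_out
        dW1 = X.T @ ((delta_out @ W2.T) * h * (1 - h))
        W1 -= lr * dW1
        W2 -= lr * dW2
    return {"W1": W1, "W2": W2}

def fitness(ind, X_tr, y_tr, X_val, y_val):
    """Fitness = validation accuracy after BP, with a small penalty for larger hidden layers."""
    trained = bp_train(ind, X_tr, y_tr)
    pred = sigmoid(sigmoid(X_val @ trained["W1"]) @ trained["W2"])
    acc = np.mean((pred > 0.5) == (y_val > 0.5))
    return acc - 1e-3 * ind["W1"].shape[1]

def mutate_weights(ind, sigma=0.1):
    """Genetic operator: Gaussian perturbation of the initial weights."""
    return {k: v + rng.normal(0, sigma, v.shape) for k, v in ind.items()}

def change_hidden_size(ind):
    """Genetic operator: add or remove one hidden neuron."""
    n_hidden = ind["W1"].shape[1]
    if n_hidden > 2 and rng.random() < 0.5:
        keep = rng.choice(n_hidden, n_hidden - 1, replace=False)
        return {"W1": ind["W1"][:, keep], "W2": ind["W2"][keep, :]}
    new_in = rng.normal(0, 0.5, (ind["W1"].shape[0], 1))
    new_out = rng.normal(0, 0.5, (1, ind["W2"].shape[1]))
    return {"W1": np.hstack([ind["W1"], new_in]),
            "W2": np.vstack([ind["W2"], new_out])}

def g_prop_sketch(X_tr, y_tr, X_val, y_val, pop_size=10, generations=15):
    """GA loop: global search over hidden-layer size and initial weights."""
    n_in, n_out = X_tr.shape[1], y_tr.shape[1]
    pop = [init_individual(n_in, n_out, rng.integers(2, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, X_tr, y_tr, X_val, y_val), reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = [change_hidden_size(mutate_weights(p)) for p in parents]
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, X_tr, y_tr, X_val, y_val))
```

Under these assumptions, the returned individual holds the evolved initial weights and hidden-layer size; a final, longer BP run on those weights would produce the network actually deployed, mirroring the global-then-local division of labour the abstract describes.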