We analyze generalization and learning in XCS with gradient descent. First, we show that the addition of gradient descent to XCS may slow down learning because it indirectly decreases the learning rate. However, in contrast to what was suggested elsewhere, gradient descent has no effect on the achieved generalization. We also show that when gradient descent is combined with roulette wheel selection, which is known to be sensitive to small values of the learning rate, the learning speed can slow down dramatically. Previous results reported no difference in the performance of XCS with gradient descent when roulette wheel selection or tournament selection was used. In contrast, we suggest that gradient descent should always be combined with tournament selection, which is not sensitive to the value of the learning rate. When gradient descent is used in combination with tournament selection, the results show that (i) the slowdown in learning is limited and (ii) the generalization capabilities of XCS are not affected.
Pier Luca Lanzi, Martin V. Butz, David E. Goldberg
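
The mechanism behind the slowdown can be illustrated with a few lines of code. The following Python snippet is a minimal sketch, not the implementation used in the paper; the class and function names are illustrative. It contrasts the standard Widrow-Hoff prediction update in XCS with the gradient-scaled variant: since the fitness ratio F_j / sum(F) over the action set is at most 1, the gradient term can only shrink the effective learning rate below beta.

```python
# Minimal sketch (assumed names, not the authors' code) of why the
# gradient term in XCS acts as an indirect reduction of the learning rate.

from dataclasses import dataclass

@dataclass
class Classifier:
    prediction: float
    fitness: float

def update_predictions(action_set, target, beta=0.2, use_gradient=False):
    """Move each classifier's prediction toward `target` (the payoff P).

    Standard XCS (Widrow-Hoff):   p_j <- p_j + beta * (P - p_j)
    With gradient descent:        p_j <- p_j + beta * (F_j / sum(F)) * (P - p_j)

    Because F_j / sum(F) <= 1, the effective learning rate
    beta * F_j / sum(F) is never larger than beta, which is the
    slowdown discussed in the abstract.
    """
    fitness_sum = sum(cl.fitness for cl in action_set)
    for cl in action_set:
        gradient = cl.fitness / fitness_sum if use_gradient else 1.0
        cl.prediction += beta * gradient * (target - cl.prediction)

# Example: with two equally fit classifiers, the fitness ratio is 0.5,
# so the effective learning rate is halved (0.2 -> 0.1 per update).
action_set = [Classifier(0.0, 0.5), Classifier(0.0, 0.5)]
update_predictions(action_set, target=1.0, beta=0.2, use_gradient=True)
print([round(cl.prediction, 3) for cl in action_set])  # [0.1, 0.1]
```

The smaller effective learning rate also explains the interaction with selection reported above: roulette wheel selection, whose selection pressure depends on absolute fitness values, degrades when updates become small, whereas tournament selection depends only on fitness rank and is therefore insensitive to the scale of the learning rate.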