Abstract - In this paper we develop and analyze Spiking Neural Network (SNN) versions of Resilient Propagation (RProp) and QuickProp, two methods that accelerate training in Artificial Neural Networks (ANNs) by making certain assumptions about the data and the error surface. Modifications are made to both algorithms to adapt them to SNNs. On the standard XOR and Fisher Iris data sets, the QuickProp and RProp versions of SpikeProp are shown to converge to a final error of 0.5 an average of 80% faster than SpikeProp on its own.
Sam McKennoch, Dingding Liu, Linda G. Bushnell