We derive continuous-time batch and online versions of the recently introduced efficient O(N²) training algorithm of Atiya and Parlos [2000] for fully recurrent networks. A mathematical analysis of the respective weight dynamics shows that efficient learning is achieved even though the relative rates of weight change remain constant, due to the way errors are backpropagated. The result is a highly structured network in which an unspecific internal dynamical reservoir can be distinguished from the output layer, which learns faster and changes at much higher rates. We discuss this result with respect to the recently introduced “echo state” and “liquid state” networks, which have a similar structure.
Ulf D. Schiller, Jochen J. Steil
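The structural finding summarized above, a slowly changing internal reservoir combined with a fast-learning output layer, is the same architecture exploited by echo state networks. As a minimal illustrative sketch (not the Atiya–Parlos O(N²) algorithm itself), the following Python/NumPy code trains only the output weights of a fixed random reservoir by closed-form ridge regression; the reservoir size N, spectral radius rho, regularization lam, and the toy sine-prediction task are all assumed values chosen for illustration.

```python
# Minimal echo-state-style sketch (illustrative only, not the
# Atiya-Parlos algorithm): a fixed random "dynamical reservoir"
# plus an output layer trained by ridge regression.
import numpy as np

rng = np.random.default_rng(0)

# Assumed hyperparameters: reservoir size, spectral radius, ridge term.
N, rho, lam = 100, 0.9, 1e-6

# Unspecific internal reservoir: random recurrent weights, rescaled so
# the largest eigenvalue magnitude is rho < 1 ("echo state" condition).
W = rng.standard_normal((N, N))
W *= rho / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.standard_normal(N)

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence, collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)  # reservoir weights stay fixed
        states.append(x.copy())
    return np.array(states)              # shape (T, N)

# Toy task: predict u(t+1) from u(t) for a sine wave.
u = np.sin(0.2 * np.arange(301))
X, y = run_reservoir(u[:-1]), u[1:]

# Only the output weights are learned, quickly and in closed form,
# while the internal reservoir remains untrained.
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)

print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

In this sketch the separation of rates is imposed by construction (the reservoir is frozen), whereas the paper's analysis derives an analogous separation, with output weights changing at much higher rates, from the weight dynamics of the Atiya–Parlos algorithm itself.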