
ESANN
2003

On the weight dynamics of recurrent learning

We derive continuous-time batch and online versions of the recently introduced efficient O(N²) training algorithm of Atiya and Parlos [2000] for fully recurrent networks. A mathematical analysis of the respective weight dynamics shows that efficient learning is achieved even though the relative rates of weight change remain constant, due to the way errors are backpropagated. The result is a highly structured network in which an unspecific internal dynamical reservoir can be distinguished from the output layer, which learns faster and changes at much higher rates. We discuss this result with respect to the recently introduced "echo state" and "liquid state" networks, which have a similar structure.
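The reservoir/output split described in the abstract can be illustrated with a minimal echo-state-style sketch: a fixed random recurrent reservoir whose internal weights are never trained, and a linear readout fitted by least squares, so that only the output layer changes. All parameter choices below (reservoir size, spectral-radius scaling, the sine-prediction task) are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                      # reservoir size, sequence length

# Fixed internal weights, scaled so the spectral radius is below 1
# (the usual "echo state" condition); these are never updated.
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(N, 1))

# Drive the reservoir with a scalar input signal.
u = np.sin(np.arange(T) * 0.1)[:, None]
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train only the readout: predict the next input value from the state.
target = np.roll(u, -1, axis=0)[:-1]
W_out, *_ = np.linalg.lstsq(states[:-1], target, rcond=None)
pred = states[:-1] @ W_out
mse = float(np.mean((pred - target) ** 2))
print(mse)
```

Only `W_out` is learned here; the contrast with algorithms that adapt all recurrent weights, such as the Atiya–Parlos scheme analyzed in the paper, is exactly the structural distinction the abstract draws.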
Ulf D. Schiller, Jochen J. Steil
Added: 31 Oct 2010
Updated: 31 Oct 2010
Type: Conference
Year: 2003
Where: ESANN
Authors: Ulf D. Schiller, Jochen J. Steil