Sciweavers

Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not just implausible cognitively, but disastrous practically. However, it is not easy in connectionist cognitive modelling to avoid highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference in sequential learning of 'static' (i.e. non-temporally ordered) patterns has been proposed recently (Robins 1995, Connection Science, 7: 123-146). [abstract truncated]
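
The "realistic and effective system" cited above builds on pseudo-rehearsal (Robins 1995): new external patterns are learned interleaved with pseudopatterns, i.e. input-output pairs obtained by probing the already-trained network with random inputs, so that old knowledge keeps being refreshed instead of overwritten. The Python below is a minimal, hypothetical sketch of that general principle only; the tiny autoassociator, the sizes, and the names (TinyMLP, pseudopatterns) are illustrative assumptions, not the paper's dual-network architecture for temporal sequences.

# Minimal pseudo-rehearsal sketch (illustrative assumptions, see note above).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyMLP:
    """One-hidden-layer autoassociator trained by plain backpropagation."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        h = sigmoid(x @ self.W1)
        return h, sigmoid(h @ self.W2)

    def train_step(self, x, t, lr=0.5):
        h, y = self.forward(x)
        dy = (y - t) * y * (1.0 - y)            # output delta (squared error)
        dh = (dy @ self.W2.T) * h * (1.0 - h)   # backpropagated hidden delta
        self.W2 -= lr * np.outer(h, dy)
        self.W1 -= lr * np.outer(x, dh)

def pseudopatterns(net, n, n_in):
    """Random binary inputs paired with the network's current responses."""
    xs = rng.integers(0, 2, (n, n_in)).astype(float)
    ts = np.array([net.forward(x)[1] for x in xs])
    return xs, ts

net = TinyMLP(8, 16, 8)
task_a = rng.integers(0, 2, (20, 8)).astype(float)
for _ in range(3000):                            # learn the first pattern set
    x = task_a[rng.integers(len(task_a))]
    net.train_step(x, x)

px, pt = pseudopatterns(net, 200, 8)             # snapshot of old knowledge
task_b = rng.integers(0, 2, (20, 8)).astype(float)
for _ in range(3000):                            # new learning, interleaved
    x = task_b[rng.integers(len(task_b))]
    net.train_step(x, x)                         # new external pattern
    i = rng.integers(len(px))
    net.train_step(px[i], pt[i])                 # pseudopattern refresh

err_a = np.mean([(net.forward(x)[1] - x) ** 2 for x in task_a])
print(f"reconstruction error on the old set: {err_a:.4f}")

In this toy setting, dropping the pseudopattern step from the second loop typically drives the old-set error sharply upward, which is exactly the catastrophic forgetting the abstract describes.
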
Added: 17 Dec 2010
Updated: 17 Dec 2010
Type: Journal
Year: 2004
Where: Connection Science (CONNECTION)
Authors: Bernard Ans, Stephane Rousset, Robert M. French, Serban C. Musca