Training recurrent neural networks is hard. Recently, however, it was discovered that one can simply construct a random recurrent topology and train only a single linear readout layer. State-of-the-art performance can readily be achieved with this setup, called Reservoir Computing. The idea can be broadened further: any high-dimensional, driven dynamical system, operated in the correct dynamic regime, can be used as a temporal ‘kernel’ that makes it possible to solve complex tasks using only linear post-processing techniques. This tutorial gives an overview of current research on the theory, applications and implementations of Reservoir Computing.
Benjamin Schrauwen, David Verstraeten, Jan M. Van Campenhout
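The following is a minimal sketch of the idea the abstract describes, assuming an echo-state-network-style reservoir: a fixed random recurrent network driven by the input, with only a linear readout trained by least squares. It is not the authors' implementation; all names and parameter values (`n_reservoir`, the spectral radius of 0.9, the ridge term, the toy sine task) are illustrative assumptions.

```python
# Sketch of Reservoir Computing: a random, untrained recurrent "reservoir"
# acts as a temporal kernel; only the linear readout is trained.
# All parameter choices below are illustrative, not from the tutorial.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_reservoir = 1, 100
# Random, fixed recurrent weights, rescaled so the spectral radius is
# below 1 -- a common heuristic for the 'correct dynamic regime'.
W = rng.standard_normal((n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))

def run_reservoir(u):
    """Drive the reservoir with the input sequence u, collecting states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task (hypothetical): predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
# Train only the linear readout, via ridge regression for stability.
ridge = 1e-8
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Note how the recurrent weights `W` are never updated: only `W_out` is fitted, which reduces training to a single linear regression over the collected reservoir states.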