Reservoir Computing is a new paradigm for using Recurrent Neural Networks that shows promising results. However, as the recurrent part is created randomly, it typically needs to be large to capture the dynamic features of the data under consideration. Moreover, this random creation still lacks a strong methodology. We propose to study how pruning some of the connections from the reservoir to the readout can help, on the one hand, to increase the generalisation ability, in much the same way as regularisation techniques do, and, on the other hand, to improve the implementability of reservoirs in hardware. Furthermore, we study the sub-reservoir that remains after pruning, which yields important insights into what to expect from a good reservoir.
Xavier Dutoit, Benjamin Schrauwen, Jan M. Van Campenhout
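
To make the setting concrete, below is a minimal sketch of readout pruning in an echo state network, written in Python with NumPy. Everything in it is an illustrative assumption: the reservoir size, weight scaling, the toy delay task, and in particular the greedy backward-selection criterion are hypothetical stand-ins, not the selection method studied in the paper. It only shows the general idea of dropping reservoir-to-readout connections while monitoring validation error.

```python
# Minimal sketch of readout pruning in an echo state network.
# Assumptions (not from the paper): tanh reservoir units, least-squares
# readout, greedy backward selection driven by validation MSE.
import numpy as np

rng = np.random.default_rng(0)
N_RES, N_IN, T = 100, 1, 500  # illustrative sizes

# Random input and recurrent weights; scale W to spectral radius < 1
# so the reservoir has the echo state property.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states x(t+1) = tanh(W x(t) + W_in u(t))."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

# Toy task: predict a delayed copy of a random input signal.
u = rng.uniform(-1, 1, T)
y = np.roll(u, 5)
X = run_reservoir(u)
X_tr, y_tr = X[:300], y[:300]   # training split
X_va, y_va = X[300:], y[300:]   # validation split

def readout_error(active):
    """Fit a linear readout on the active reservoir units only
    and return its validation MSE."""
    w, *_ = np.linalg.lstsq(X_tr[:, active], y_tr, rcond=None)
    return np.mean((X_va[:, active] @ w - y_va) ** 2)

# Greedy backward pruning: repeatedly drop the reservoir-to-readout
# connection whose removal reduces the validation error.
active = list(range(N_RES))
best = readout_error(active)
improved = True
while improved and len(active) > 1:
    improved = False
    for i in list(active):
        trial = [j for j in active if j != i]
        err = readout_error(trial)
        if err < best:
            best, active, improved = err, trial, True
            break  # accept the first improving removal, then rescan

print(f"kept {len(active)}/{N_RES} connections, validation MSE {best:.4f}")
```

Because only the readout is refitted at each step, each candidate removal costs a single least-squares solve; the reservoir itself is never retrained, which is what makes pruning at the readout comparatively cheap and attractive for hardware implementations where each kept connection has a physical cost.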