— A solution to the slow convergence of most learning rules for Recurrent Neural Networks (RNNs) has been proposed under the names Liquid State Machines (LSM) and Echo State Networks (ESN). These methods use an RNN as a reservoir that is not trained; only a simple readout is learned. In this article we build upon previous work, where we used reservoir-based techniques to solve the task of isolated digit recognition. We present a straightforward improvement of our previous LSM-based implementation that outperforms a state-of-the-art Hidden Markov Model (HMM) based recognizer. We also apply the Echo State approach to the problem, which allows us to investigate the impact of several interconnection parameters on the performance of our speech recognizer.
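To make the reservoir idea concrete, the following minimal sketch (not the implementation described in this paper) builds a fixed random reservoir whose recurrent weights are never adapted and trains only a linear readout with ridge regression. The reservoir size, spectral-radius value, and the toy delayed-sine task are illustrative assumptions, not parameters from this work.

```python
# Minimal Echo State Network sketch: fixed random reservoir, trained readout.
# All sizes and constants below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100                         # input and reservoir dimensions
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in)) # random input weights (untrained)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))   # random recurrent weights (untrained)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)      # reservoir dynamics, never trained
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input sine wave delayed by 5 steps.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T)).reshape(-1, 1)
y = np.roll(u, 5, axis=0)

X = run_reservoir(u)
ridge = 1e-6
# Only the linear readout is trained, here via ridge regression.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
y_pred = X @ W_out
```

Because training reduces to a single linear regression on the collected reservoir states, convergence issues of gradient-based RNN training are avoided entirely, which is the core appeal of the reservoir approach summarized above.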