In this work, a hybrid training scheme for the supervised learning of feedforward neural networks is presented. In the proposed method, the weights of the last layer are obtained by linear least squares, while the weights of the previous layers are updated with a standard learning method. The goal of this hybrid approach is to accelerate the convergence of existing learning algorithms. Simulations on two data sets show that, in terms of convergence speed, the proposed method outperforms the Levenberg-Marquardt algorithm.
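To make the scheme concrete, the following Python sketch shows one way such a hybrid loop could look for a single-hidden-layer network: the output-layer weights are recomputed in closed form by linear least squares on the hidden activations, while the hidden-layer weights are updated by plain gradient descent standing in for the "standard learning method". The function name train_hybrid, the network size, the tanh activation, the learning rate, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train_hybrid(X, Y, n_hidden=15, epochs=200, lr=0.1, seed=0):
    """Hybrid training loop for a one-hidden-layer network (illustrative sketch).

    Last layer: weights solved in closed form by linear least squares on the
    hidden activations at every epoch.
    Previous layer: weights updated by plain gradient descent, standing in for
    the "standard learning method" (an assumption, not the paper's choice).
    X: (n_samples, n_inputs), Y: (n_samples, n_outputs).
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # hidden weights
    b1 = np.zeros(n_hidden)                                  # hidden biases
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        Ha = np.hstack([H, np.ones((len(X), 1))])  # append bias column
        # Last-layer weights by linear least squares instead of gradient steps.
        W2, *_ = np.linalg.lstsq(Ha, Y, rcond=None)
        err = Ha @ W2 - Y                          # residual with the new last layer
        # Backpropagate the squared error to the hidden layer (tanh derivative).
        dH = (err @ W2[:-1].T) * (1.0 - H ** 2)
        W1 -= lr * X.T @ dH / len(X)
        b1 -= lr * dH.mean(axis=0)
    return W1, b1, W2

# Hypothetical usage on a synthetic 1-D regression task.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
Y = np.sin(3.0 * X) + 0.05 * np.random.default_rng(1).normal(size=X.shape)
W1, b1, W2 = train_hybrid(X, Y)
```

Because the last layer is linear in its weights, solving it exactly at each epoch removes that part of the error surface from the iterative search, which is the intuition behind the claimed speed-up over purely gradient-based training.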