IJCNN 2006 (IEEE)

On derivation of stagewise second-order backpropagation by invariant imbedding for multi-stage neural-network learning

— We present a simple, intuitive argument based on “invariant imbedding” in the spirit of dynamic programming to derive a stagewise second-order backpropagation (BP) algorithm. The method evaluates the Hessian matrix of a general objective function efficiently by exploiting the multistage structure embedded in a given neural-network model such as a multilayer perceptron (MLP). In consequence, for instance, our stagewise BP can compute the full Hessian matrix “faster” than the standard method that evaluates the Gauss-Newton Hessian matrix alone by rank updates in nonlinear least squares learning. Through our derivation, we also show how the procedure serves to develop advanced learning algorithms;
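As background for the claim above, the sketch below illustrates (it is not the paper's stagewise algorithm) the distinction between the full Hessian and the Gauss-Newton matrix in nonlinear least-squares learning: for a residual vector r(θ), the full Hessian is JᵀJ plus a residual-weighted curvature term, and the Gauss-Newton approximation keeps only JᵀJ. The tiny one-hidden-unit MLP, the data, and all function names here are hypothetical, and derivatives are taken by finite differences purely for illustration.

```python
import numpy as np

def mlp(params, x):
    """Hypothetical one-hidden-unit MLP: y = w2 * tanh(w1*x + b1) + b2."""
    w1, b1, w2, b2 = params
    return w2 * np.tanh(w1 * x + b1) + b2

def residuals(params, xs, ys):
    """Residual vector r_k = f(x_k; params) - y_k."""
    return np.array([mlp(params, x) - y for x, y in zip(xs, ys)])

def loss(params, xs, ys):
    """Least-squares objective E = (1/2) r^T r."""
    r = residuals(params, xs, ys)
    return 0.5 * np.dot(r, r)

def num_hessian(f, p, eps=1e-5):
    """Full Hessian of scalar f at p via central second differences."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shifted(di, dj):
                q = np.array(p, float)
                q[i] += di
                q[j] += dj
                return f(q)
            H[i, j] = (shifted(eps, eps) - shifted(eps, -eps)
                       - shifted(-eps, eps) + shifted(-eps, -eps)) / (4 * eps ** 2)
    return H

def num_jacobian(params, xs, ys, eps=1e-6):
    """Jacobian J of the residual vector w.r.t. the parameters."""
    p = np.array(params, float)
    J = np.zeros((len(xs), len(p)))
    for j in range(len(p)):
        q1, q2 = p.copy(), p.copy()
        q1[j] += eps
        q2[j] -= eps
        J[:, j] = (residuals(q1, xs, ys) - residuals(q2, xs, ys)) / (2 * eps)
    return J

# Toy data and parameters (arbitrary values for illustration).
xs = np.array([0.0, 0.5, 1.0])
ys = np.array([0.1, 0.4, 0.8])
p = np.array([0.3, -0.1, 0.7, 0.05])

H_full = num_hessian(lambda q: loss(q, xs, ys), p)
J = num_jacobian(p, xs, ys)
H_gn = J.T @ J  # Gauss-Newton part; H_full = J^T J + sum_k r_k * Hess(r_k)

# The gap between the two is the residual-weighted curvature term that
# a stagewise second-order BP scheme would account for exactly.
print(np.max(np.abs(H_full - H_gn)))
```

The gap printed at the end is exactly the second-derivative term that rank-update Gauss-Newton schemes omit; the paper's point is that a stagewise recursion can deliver the full matrix at comparable or lower cost.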
Eiji Mizutani, Stuart Dreyfus
Added 11 Jun 2010
Updated 11 Jun 2010
Type Conference