
AMAI 2002, Springer

Minimizing Output Error in Multi-Layer Perceptrons

Abstract: It is well established that a multi-layer perceptron (MLP) with a single hidden layer of N neurons and an activation function bounded by zero at negative infinity and one at infinity can learn N distinct training samples with zero error. Previous work has shown that the input weights and biases of such an MLP can be chosen in an effectively arbitrary manner; however, that work implicitly assumes the training samples are noiseless. We demonstrate that the values of the input weights and biases have a provable effect on the susceptibility of the MLP to noise and can result in increased output error. We show how to compute a quantity called Dilution of Precision (DOP), originally developed for the Global Positioning System, for a given set of input weights and biases, and further show that minimizing DOP also minimizes the susceptibility of the MLP to noise.
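The abstract does not spell out how DOP is formed for an MLP, so the following is a minimal illustrative sketch in Python. It assumes the standard GPS definition, DOP = sqrt(trace((H^T H)^(-1))), applied to the hidden-layer output matrix H built from the input weights and biases; the function names (sigmoid, hidden_matrix, dop, pick_low_dop_weights) are hypothetical and are not taken from the paper. Because prior work allows the input weights and biases to be chosen essentially arbitrarily, the sketch simply draws random candidates and keeps the lowest-DOP set.

```python
import numpy as np

def sigmoid(z):
    # Activation bounded by 0 at negative infinity and 1 at infinity, as in the abstract.
    return 1.0 / (1.0 + np.exp(-z))

def hidden_matrix(X, W, b):
    # Hidden-layer output matrix H: one row per training sample, one column per
    # hidden neuron, for input weights W (features x N) and biases b (N,).
    return sigmoid(X @ W + b)

def dop(H):
    # GPS-style Dilution of Precision: sqrt(trace((H^T H)^{-1})).
    # Illustrative stand-in; the paper's exact DOP formulation is not given in the abstract.
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

def pick_low_dop_weights(X, n_hidden, n_trials=1000, seed=0):
    # Draw random candidate weights/biases and keep the set with the lowest DOP.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_trials):
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        try:
            d = dop(hidden_matrix(X, W, b))
        except np.linalg.LinAlgError:
            continue  # skip candidates whose H^T H is singular
        if best is None or d < best[0]:
            best = (d, W, b)
    return best

# Example: four 2-D training samples, four hidden neurons.
X = np.array([[0.1, 0.9], [0.4, 0.3], [0.7, 0.6], [0.9, 0.2]])
d, W, b = pick_low_dop_weights(X, n_hidden=4)
print(f"lowest DOP found: {d:.3f}")
```

Under these assumptions, a lower DOP indicates a hidden-layer geometry that amplifies sample noise less when the output weights are solved for, which is the intuition the abstract attributes to minimizing DOP.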
Type: Journal article
Year: 2002
Where: Annals of Mathematics and Artificial Intelligence (AMAI)
Authors: Jonathan P. Bernick