Invariance of MLP Training to Input Feature De-correlation

In the neural network literature, input feature de-correlation is often referred to as a pre-processing technique for improving MLP training speed. In this paper, however, we find that de-correlation by the orthogonal Karhunen-Loeve transform (KLT) may not improve training. Through detailed analysis, the effect of input de-correlation is shown to be equivalent to initializing the network with a different weight set. Thus, for a robust training algorithm, the benefit of input de-correlation is negligible. The theoretical results apply to several gradient-based training algorithms, e.g., back-propagation and conjugate gradient. Simulation results confirm our theoretical analyses.
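The equivalence claimed in the abstract follows from the orthogonality of the KLT matrix: if the inputs are transformed by an orthogonal matrix Q and the first-layer weights are transformed accordingly, the net inputs to the hidden units are unchanged, so training starts from the same point. Below is a minimal NumPy sketch of this argument, not code from the paper; the matrix names (Q, W) and the sample/feature sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated synthetic inputs: 500 samples, 8 features (illustrative sizes)
X = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 8))

# Orthogonal KLT matrix: eigenvectors of the input covariance
_, Q = np.linalg.eigh(np.cov(X, rowvar=False))
X_klt = X @ Q                      # de-correlated inputs

W = rng.standard_normal((8, 4))    # some initial first-layer weight set
W_equiv = Q.T @ W                  # the "different weight set" for KLT inputs

# Hidden-layer net inputs are identical: X_klt @ W_equiv = X @ Q @ Q.T @ W = X @ W
assert np.allclose(X @ W, X_klt @ W_equiv)
```

Since Q is orthogonal (Q @ Q.T = I), applying the KLT to the inputs and absorbing Q into the initial weights leaves every forward pass, and hence every gradient step, unchanged; this is the sense in which de-correlation amounts to a different weight initialization.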
Type: Conference
Year: 2004
Where: FLAIRS
Authors: Changhua Yu, Michael T. Manry, Jiang Li