The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, the computational complexity of evaluating the resulting learned function is of the order of the training set size, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure: the first step performs learning with the kernel method, while the second step exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate the empirical kernel map. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.
Omar Arif, Patricio A. Vela
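The two-step procedure can be made concrete with a small sketch. The following Python snippet is illustrative only and assumes NumPy and scikit-learn: an RBF-kernel support vector machine stands in for the kernel method of step one, and step two fits a compact Gaussian radial basis function network to the machine's learned outputs. The number of centers M, the kernel width gamma, and the k-means center selection are assumptions made for the example, not the paper's specific procedure.

    # A minimal sketch of the two-step idea, assuming NumPy and scikit-learn.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # Step 1: learn in the kernel-induced space. An RBF-kernel SVM stands in
    # for the kernel method; executing it costs one kernel evaluation per
    # support vector, so execution cost grows with the training set size.
    svm = SVC(kernel="rbf", gamma=0.1).fit(X, y)
    print("support vectors retained by the SVM:", svm.n_support_.sum())

    # Step 2: compress. Pick M centers (M much smaller than the support set),
    # here via k-means, and fit a Gaussian RBF network by least squares so it
    # reproduces the learned decision values.
    M = 25
    centers = KMeans(n_clusters=M, n_init=10, random_state=0).fit(X).cluster_centers_

    def rbf_features(X, centers, gamma=0.1):
        # phi_j(x) = exp(-gamma * ||x - c_j||^2), plus a constant bias column.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.hstack([np.exp(-gamma * d2), np.ones((len(X), 1))])

    Phi = rbf_features(X, centers)
    targets = svm.decision_function(X)                  # values to approximate
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)   # RBF network weights

    # Execution now costs M basis evaluations instead of one per support vector.
    approx = Phi @ w
    agreement = (np.sign(approx) == np.sign(targets)).mean()
    print(f"label agreement with the full kernel machine: {agreement:.3f}")

The sketch illustrates the trade-off named in the abstract: M network centers replace the full support set, compressing the kernel representation at the cost of a controlled approximation error.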