This paper builds on a new way of determining the regularization trade-off in least squares support vector machines (LS-SVMs) via a mechanism of additive regularization, which has recently been introduced in [6]. This framework enables computational fusion of the training and validation levels and makes it possible to train the model and determine the regularization constants simultaneously by solving a single linear system. In this paper we show that this framework allows one to consider a penalized validation criterion that leads to sparse LS-SVMs. In this case the model, the regularization constants and the sparseness all follow from a single convex quadratic program.

Regularization has a rich history dating back to the theory of ill-posed and ill-conditioned inverse problems [12]. Regularized cost functions have been considered, e.g., in splines, multilayer perceptrons, regularization networks [7], support vector machines (SVMs) and related methods (see e.g. [5]). SVM [13] is a powerful methodology for solving pro...
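As background intuition for the regularization trade-off mentioned above, the following sketch shows classical ridge (Tikhonov) regression, where for a fixed regularization constant the model follows from a single linear system, and the constant itself is conventionally chosen by a separate validation search. This is not the additive-regularization scheme of [6], which instead parametrizes the regularization so that model and constants are found jointly in one convex program; the data and grid of constants here are illustrative assumptions.

```python
import numpy as np

# Toy illustration: ridge regression. For a FIXED constant gamma the
# model weights follow from one linear system:
#   (X^T X + gamma * I) w = X^T y.
# Synthetic data (illustrative only).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X_train = rng.normal(size=(40, 3))
y_train = X_train @ w_true + 0.1 * rng.normal(size=40)
X_val = rng.normal(size=(20, 3))
y_val = X_val @ w_true + 0.1 * rng.normal(size=20)

def ridge(X, y, gamma):
    """Solve (X^T X + gamma I) w = X^T y for the ridge estimate."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ y)

# Conventional two-level model selection: grid search over gamma using
# validation error. The framework discussed above replaces this outer
# search by fusing training and validation into one convex problem.
gammas = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
errors = [np.mean((X_val @ ridge(X_train, y_train, g) - y_val) ** 2)
          for g in gammas]
best_gamma = gammas[int(np.argmin(errors))]
```

The nested structure (an outer search wrapped around an inner linear solve) is exactly the computational burden that fusing the two levels into a single linear system or quadratic program removes.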
Kristiaan Pelckmans, Johan A. K. Suykens, Bart De