
IJCNN 2006 (IEEE)

Sparse Optimization for Second Order Kernel Methods

We present a new optimization procedure that is particularly suited to second-order kernel methods such as Kernel-PCA. Common to these methods is a cost function that is optimized under a positive definite quadratic constraint, which bounds the solution. In Kernel-PCA, for example, the constraint yields unit-length and (in feature space) orthogonal principal components. The cost function is often quadratic as well, which allows the problem to be solved as a generalized eigenvalue problem. However, in contrast to Support Vector Machines, which employ box constraints, quadratic constraints usually do not lead to sparse solutions. Here we give up the structure of the generalized eigenvalue problem in favor of a non-quadratic regularization term added to the cost function, which enforces sparse solutions. To optimize this more complicated cost function, we introduce a modified conjugate gradient descent method. Starting from an admissible point, all iterati...
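The abstract's setup can be illustrated with a minimal sketch (not from the paper). The Python/NumPy snippet below builds a toy RBF kernel, solves standard Kernel-PCA as the generalized eigenvalue problem of maximizing a^T K^2 a subject to a^T K a = 1, and then adds an L1 penalty optimized by plain projected subgradient ascent as a simplified stand-in for the authors' modified conjugate gradient; the kernel width, penalty weight rho, step size, and iteration count are arbitrary assumptions.

import numpy as np
from scipy.linalg import eigh

# Toy data and RBF kernel (illustrative only, not the paper's experiments)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * X.shape[1]))

# Standard Kernel-PCA: maximize a^T K^2 a subject to a^T K a = 1,
# i.e. the generalized eigenvalue problem K^2 a = lambda K a.
evals, evecs = eigh(K @ K, K + 1e-8 * np.eye(len(K)))
alpha_dense = evecs[:, -1]                     # leading (dense) component

# Sparse variant: add an L1 penalty to the quadratic cost and optimize
# with projected subgradient ascent inside the constraint a^T K a <= 1.
# (The paper uses a modified conjugate gradient; this is a simpler stand-in.)
rho = 0.5                                      # sparsity strength (assumed)
alpha = alpha_dense / np.sqrt(alpha_dense @ K @ alpha_dense)
step = 1e-2
for _ in range(2000):
    grad = 2 * K @ (K @ alpha) - rho * np.sign(alpha)   # subgradient of the cost
    alpha = alpha + step * grad
    norm = np.sqrt(alpha @ K @ alpha)
    if norm > 1.0:                             # pull back onto the constraint set
        alpha = alpha / norm

print("constraint a^T K a :", alpha @ K @ alpha)
print("largest |alpha_i|  :", np.sort(np.abs(alpha))[-5:])

The projection step here is a simple radial rescaling onto the constraint boundary; a true conjugate gradient scheme that keeps all iterates admissible, as the abstract describes, would replace the plain gradient step and rescaling.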
Type: Conference
Year: 2006
Where: IJCNN
Authors: Roland Vollgraf, Klaus Obermayer