
ICML
2009

Multiple indefinite kernel learning with mixed norm regularization

We address the problem of learning classifiers using several kernel functions. Contrary to many contributions in the field of learning from different sources of information using kernels, we do not assume here that the kernels used are positive definite. The learning problem we are interested in involves a misclassification loss term and a regularization term expressed by means of a mixed norm. The use of a mixed norm allows us to enforce a sparsity structure, a particular case of which is the Group Lasso. We solve the convex problem by employing proximal minimization algorithms, which can be viewed as refined versions of gradient descent procedures that naturally handle nondifferentiability. A numerical simulation on a UCI dataset shows the modularity of our approach.
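
To give a concrete feel for the proximal-minimization idea mentioned in the abstract, below is a minimal sketch of proximal gradient descent with an l1/l2 (Group Lasso style) mixed-norm penalty. It is only an illustration under assumed choices (a squared loss on plain features, hypothetical helper names prox_group_l1l2 and proximal_gradient, hand-picked step size and groups); it is not the paper's actual algorithm, which works with multiple, possibly indefinite kernels.

# Minimal sketch: proximal gradient with a group-l1/l2 penalty (assumption-laden,
# not the authors' method). The key ingredient is block soft-thresholding,
# the proximal operator of threshold * sum_g ||v_g||_2.
import numpy as np

def prox_group_l1l2(v, groups, threshold):
    # Block soft-thresholding: shrink each group's norm by `threshold`,
    # setting groups with small norm exactly to zero (group sparsity).
    out = np.zeros_like(v)
    for g in groups:                      # each g is an index array for one group
        norm_g = np.linalg.norm(v[g])
        if norm_g > threshold:
            out[g] = (1.0 - threshold / norm_g) * v[g]
    return out

def proximal_gradient(grad_loss, x0, groups, lam, step, n_iter=200):
    # Generic proximal gradient descent on  loss(x) + lam * sum_g ||x_g||_2.
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_group_l1l2(x - step * grad_loss(x), groups, step * lam)
    return x

# Toy usage: group-sparse least squares with two feature groups.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))
b = A[:, :3] @ np.array([1.0, -2.0, 0.5])           # only the first group is active
grad = lambda x: A.T @ (A @ x - b) / len(b)
x_hat = proximal_gradient(grad, np.zeros(6),
                          [np.arange(3), np.arange(3, 6)], lam=0.1, step=0.05)
print(np.round(x_hat, 3))                           # second group shrinks toward zero

The design choice the sketch highlights is the one the abstract alludes to: the nondifferentiable mixed-norm term is never differentiated; it only enters through its proximal operator, so the rest of the iteration looks like ordinary gradient descent on the smooth loss.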
Matthieu Kowalski, Marie Szafranski, Liva Ralaivola
Added 19 Feb 2011
Updated 19 Feb 2011
Type Conference
Year 2009
Where ICML
Authors Matthieu Kowalski, Marie Szafranski, Liva Ralaivola