A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning

We extend the well-known BFGS quasi-Newton method and its memory-limited variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We prove that under some technical conditions, the resulting subBFGS algorithm is globally convergent in objective function value. We apply its memory-limited variant (subLBFGS) to L2-regularized risk minimization with the binary hinge loss. To extend our algorithm to the multiclass and multilabel settings, we develop a new, efficient, exact line search algorithm. We prove its worst-case time complexity bounds, and show that our line search can also be used to extend a recently developed bundle method to the multiclass and multilabel settings. We also apply the direction-finding component of our algorithm to L1-regularized risk minimization with...
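To make two of the ingredients named above concrete, here is a minimal Python sketch (our own illustration, not the authors' code; the names hinge_objective, subgradient, and exact_line_search are hypothetical) for L2-regularized risk minimization with the binary hinge loss: an element of the subdifferential, and an exact line search that exploits the fact that the objective is piecewise quadratic along any search direction. This brute-force version is for clarity only; the paper's exact line search attains better worst-case complexity.

```python
import numpy as np

def hinge_objective(w, X, y, lam):
    """L2-regularized average binary hinge loss:
    J(w) = (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i x_i^T w)."""
    margins = y * (X @ w)
    return 0.5 * lam * (w @ w) + np.mean(np.maximum(0.0, 1.0 - margins))

def subgradient(w, X, y, lam):
    """One element of the subdifferential of J at w: samples with margin
    strictly below 1 contribute -y_i x_i / n; samples exactly at the hinge
    (margin == 1) contribute 0, which is one valid subgradient choice."""
    viol = y * (X @ w) < 1.0
    return lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(y)

def exact_line_search(w, p, X, y, lam):
    """Exactly minimize eta -> J(w + eta * p) over eta >= 0.

    Along the ray, the regularizer is quadratic in eta and each hinge term
    is linear until its margin crosses 1, so J is piecewise quadratic. We
    enumerate the crossing points (breakpoints) and minimize the quadratic
    on each piece."""
    a = y * (X @ w)  # current margins
    b = y * (X @ p)  # how each margin changes per unit step along p
    with np.errstate(divide="ignore", invalid="ignore"):
        etas = (1.0 - a) / b  # step at which sample i crosses margin 1
    bps = np.unique(etas[np.isfinite(etas) & (etas > 0.0)])
    edges = np.concatenate(([0.0], bps, [np.inf]))
    best_eta, best_val = 0.0, hinge_objective(w, X, y, lam)
    for lo, hi in zip(edges[:-1], edges[1:]):
        probe = lo + 1.0 if np.isinf(hi) else 0.5 * (lo + hi)
        active = a + probe * b < 1.0  # samples with nonzero hinge on this piece
        # On this piece: J(eta) = 0.5*lam*||w + eta*p||^2
        #                         + (1/n) * sum_active (1 - a_i - eta*b_i),
        # so J'(eta) = lam*(w.p + eta*p.p) - (1/n)*sum_active b_i.
        grad0 = lam * (w @ p) - b[active].sum() / len(y)
        curv = lam * (p @ p)
        eta = np.clip(-grad0 / curv, lo, hi) if curv > 0.0 else lo
        val = hinge_objective(w + eta * p, X, y, lam)
        if val < best_val:
            best_eta, best_val = eta, val
    return best_eta
```

Within a quasi-Newton iteration of the kind the abstract describes, a direction would be built from such subgradients and curvature information, checked to be a descent direction (the direction-finding component), and the step length chosen by the exact search above in place of an inexact Wolfe search.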
Type: Journal
Year: 2010
Where: JMLR
Authors: Jin Yu, S. V. N. Vishwanathan, Simon Günter, Nicol N. Schraudolph