A majorization-minimization algorithm for (multiple) hyperparameter learning

We present a general Bayesian framework for hyperparameter tuning in L2-regularized supervised learning models. Paradoxically, our algorithm works by first analytically integrating out the hyperparameters from the model. We find a local optimum of the resulting nonconvex optimization problem efficiently using a majorization-minimization (MM) algorithm, in which the nonconvex problem is reduced to a series of convex L2-regularized parameter estimation tasks. The principal appeal of our method is its simplicity: the updates for choosing the L2-regularized subproblems in each step are trivial to implement (or even perform by hand), and each subproblem can be solved efficiently by adapting existing solvers. Empirical results on a variety of supervised learning models show that our algorithm is competitive with both grid search and gradient-based algorithms, but is more efficient and far easier to implement.
Chuan-Sheng Foo, Chuong B. Do, Andrew Y. Ng
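
To make the abstract's loop concrete, here is a minimal sketch in Python/NumPy for a single hyperparameter in ridge regression. The log-type regularizer left after integrating out the hyperparameter, the constant c, and the update lam = c / (||w||^2 + eps) are illustrative assumptions rather than the paper's exact derivation; the point is only that each MM step reduces to a standard convex L2-regularized solve.

import numpy as np

def mm_ridge(X, y, n_iters=50, eps=1e-8):
    # Sketch of an MM loop for single-hyperparameter learning.
    # Assumption (hypothetical, for illustration): integrating out the
    # hyperparameter leaves a regularizer of the form c * log(||w||^2 + eps);
    # the tangent (majorizing) bound on log at the current iterate turns
    # each step into an ordinary ridge regression with effective weight
    # lam = c / (||w_prev||^2 + eps). See the paper for the exact updates.
    n, d = X.shape
    w = np.zeros(d)
    c = d / 2.0                            # illustrative constant tied to the prior
    lam = 1.0
    for _ in range(n_iters):
        lam = c / (w @ w + eps)            # MM hyperparameter update
        A = X.T @ X + lam * np.eye(d)      # convex L2-regularized subproblem
        w = np.linalg.solve(A, X.T @ y)    # closed-form ridge solve
    return w, lam

For other L2-regularized models (e.g. logistic regression), only the inner solve changes; with multiple hyperparameters, one such update would run per regularization group.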
Added: 17 Nov 2009
Updated: 17 Nov 2009
Type: Conference
Year: 2009
Where: ICML