We propose an algorithm for constructing classification models with a mixture of kernels from labeled and unlabeled data. The resulting classifier is a mixture of models, each based on one kernel chosen from a library of kernels. Sparsity-favoring 1-norm regularization is employed to restrict the complexity of the mixture models and to obtain sparse solutions. By extending the column-generation boosting algorithm LPBoost to a more general linear programming formulation, we can efficiently solve mixture-of-kernel problems and automatically select kernel basis functions centered at labeled as well as unlabeled data. The effectiveness of the proposed approach is demonstrated by experimental results on benchmark datasets.
Jinbo Bi, Glenn Fung, Murat Dundar, R. Bharat Rao
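A minimal sketch of the kind of model and training problem the abstract describes; the notation here is assumed for illustration rather than taken from the paper: a kernel library $k_1,\dots,k_P$, training points $x_1,\dots,x_n$ (labeled and unlabeled) serving as basis centers, coefficients $\alpha_i^{j}$ and offset $b$, and a generic loss $L$ charged only on the labeled subset $\mathcal{L}$ with trade-off parameter $C$:
\[
f(x) \;=\; \sum_{j=1}^{P} \sum_{i=1}^{n} \alpha_i^{j}\, k_j(x, x_i) \;+\; b,
\qquad
\min_{\alpha,\, b}\;\; \sum_{j=1}^{P} \sum_{i=1}^{n} \bigl|\alpha_i^{j}\bigr|
\;+\; C \sum_{\ell \in \mathcal{L}} L\bigl(y_\ell,\, f(x_\ell)\bigr).
\]
With a piecewise-linear loss such as the hinge loss, the 1-norm regularized problem becomes a linear program; column generation then adds one kernel basis function (column) at a time, which is the mechanism behind the automatic selection of basis functions centered at labeled and unlabeled data mentioned in the abstract.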