This paper proposes a general boosting framework for combining multiple kernel models in both classification and regression problems. Our approach builds on gradient boosting together with a new regularization scheme, and aims to reduce the cubic training complexity of kernel models. We focus mainly on using the proposed boosting framework to combine kernel ridge regression (KRR) models for regression tasks. Numerical experiments on four large-scale data sets show that boosting multiple small KRR models outperforms training a single large KRR model, both in generalization performance and in computational cost.
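To make the core idea concrete, the following is a minimal sketch of gradient boosting with small KRR base learners under a squared loss, where each round fits a KRR model to the current residuals on a random subsample so that each fit costs roughly O(m^3) for subsample size m rather than O(n^3) for the full data set. This is only an illustration of the general strategy described above, not the paper's exact method; in particular, the new regularization scheme is not reproduced here, and the function name `boost_krr`, the subsampling strategy, the RBF kernel choice, and all hyperparameters (`n_rounds`, `subsample`, `lr`, `alpha`, `gamma`) are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def boost_krr(X, y, n_rounds=20, subsample=500, lr=0.5,
              alpha=1.0, gamma=0.1, seed=0):
    """Illustrative gradient boosting with small KRR base learners
    (squared loss). Each round fits one KRR model to the current
    residuals on a random subsample, so each solve involves a
    subsample-by-subsample kernel matrix instead of the full one."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    residual = y.astype(float).copy()
    models = []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=min(subsample, n), replace=False)
        m = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        m.fit(X[idx], residual[idx])      # fit residuals on a small subsample
        residual -= lr * m.predict(X)     # shrinkage (learning-rate) update
        models.append(m)
    return models

def boost_krr_predict(models, X, lr=0.5):
    """Ensemble prediction: shrunken sum of the base learners."""
    return lr * sum(m.predict(X) for m in models)
```

In this sketch, the learning rate plays the role of a simple regularizer on the ensemble; the paper's dedicated regularization scheme would replace or augment this step.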