Combining machine learning models is a means of improving overall accuracy. Various algorithms have been proposed to create aggregate models from other models; two popular examples for classification are Bagging and AdaBoost. In this paper we examine their adaptation to regression and benchmark them on synthetic and real-world data. Our experiments reveal that different types of AdaBoost algorithms require base models of different complexity. At their best they outperform Bagging, but Bagging achieves a consistent level of success across all base models, making it a robust alternative.
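As a minimal illustration of the comparison studied here (not the paper's exact experimental setup), the following sketch contrasts Bagging and an AdaBoost regression variant across base trees of varying complexity. It assumes scikit-learn's BaggingRegressor and AdaBoostRegressor (the latter implements Drucker's AdaBoost.R2) and the synthetic make_friedman1 dataset; all of these are stand-ins for the algorithms and benchmarks evaluated in the paper.

```python
# Sketch: Bagging vs. AdaBoost.R2 for regression, varying base-model
# complexity via tree depth. Assumes scikit-learn >= 1.2 (for `estimator=`).
from sklearn.datasets import make_friedman1
from sklearn.ensemble import AdaBoostRegressor, BaggingRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression benchmark (a stand-in for the paper's datasets).
X, y = make_friedman1(n_samples=1000, noise=1.0, random_state=0)

for depth in (1, 3, None):  # weak stumps, medium trees, fully grown trees
    base = DecisionTreeRegressor(max_depth=depth)
    for name, ensemble in [
        ("Bagging", BaggingRegressor(estimator=base, n_estimators=50,
                                     random_state=0)),
        ("AdaBoost.R2", AdaBoostRegressor(estimator=base, n_estimators=50,
                                          random_state=0)),
    ]:
        # Default regression scoring in scikit-learn is R^2.
        r2 = cross_val_score(ensemble, X, y, cv=5).mean()
        print(f"{name:12s} max_depth={depth!s:5s} mean R^2 = {r2:.3f}")
```

Under this setup one would typically observe boosting's sensitivity to base-model complexity, while Bagging's scores stay comparatively stable across depths.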