In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different tasks with a joint model, explicitly addressing the specifics of each task with task-specific parameters and the commonalities between them through shared parameters. This enables implicit data sharing and regularization. We evaluate our method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful because the data sets from different countries vary widely in size owing to the cost of editorial judgments. Our experiments validate that learning several tasks jointly can lead to significant improvements in performance with surprising reliability.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval

General Terms: Algorithms
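To make the idea of shared and task-specific parameters concrete, the following sketch shows one natural instantiation for squared loss, not the authors' exact algorithm: each task t is scored by F_t(x) = B(x) + S_t(x), where B is an ensemble of trees boosted on the pooled residuals of all tasks and S_t is an ensemble boosted on task t's own residuals. The function names, the use of scikit-learn regression trees, and hyperparameters such as the learning rate and tree depth are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of multi-task gradient boosting with shared + task-specific
# trees (squared loss). Assumed interface: tasks is a dict mapping a task id
# to a pair (X, y) of numpy arrays.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_multitask_boosting(tasks, n_rounds=100, lr=0.1, max_depth=3):
    """Return (shared_trees, task_trees): the shared ensemble B and one
    task-specific ensemble S_t per task."""
    shared_trees = []
    task_trees = {t: [] for t in tasks}
    preds = {t: np.zeros(len(y)) for t, (X, y) in tasks.items()}

    for _ in range(n_rounds):
        # Shared step: one tree fit on the pooled residuals of all tasks,
        # capturing what the tasks have in common (implicit data sharing).
        X_all = np.vstack([X for X, _ in tasks.values()])
        r_all = np.concatenate([y - preds[t] for t, (X, y) in tasks.items()])
        shared = DecisionTreeRegressor(max_depth=max_depth).fit(X_all, r_all)
        shared_trees.append(shared)
        for t, (X, y) in tasks.items():
            preds[t] += lr * shared.predict(X)

        # Task-specific step: one tree per task fit on that task's own
        # residuals, capturing what is particular to that task.
        for t, (X, y) in tasks.items():
            spec = DecisionTreeRegressor(max_depth=max_depth).fit(X, y - preds[t])
            task_trees[t].append(spec)
            preds[t] += lr * spec.predict(X)

    return shared_trees, task_trees

def predict_task(shared_trees, task_trees, t, X, lr=0.1):
    """Score examples X for task t; lr must match the value used in training."""
    out = np.zeros(len(X))
    for tree in shared_trees:
        out += lr * tree.predict(X)
    for tree in task_trees[t]:
        out += lr * tree.predict(X)
    return out
```

In this sketch the shared step realizes the implicit data sharing mentioned above (every tree in B is fit on all tasks' data, which regularizes small tasks), while the task-specific step lets each task deviate from the common model.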