In this paper we propose a new distributed learning method, the distributed network boosting (DNB) algorithm, for distributed applications. During the learning process, the locally learned hypotheses are exchanged between neighboring sites. Theoretical analysis shows that the DNB algorithm minimizes the cost function through collaborative functional gradient descent in hypothesis space. Comparisons of the DNB algorithm with other distributed learning methods on real data sets of different sizes demonstrate its effectiveness.
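To make the scheme concrete, the following is a minimal sketch of this style of collaborative boosting: each site fits a weak learner on its local data, exchanges the learned hypotheses with its graph neighbors, and takes an exponential-loss functional-gradient step by reweighting its local examples. The `Site` class, the AdaBoost-style weight update, and the ring topology in the usage example are illustrative assumptions, not the paper's exact DNB procedure.

```python
# Illustrative sketch of distributed boosting with hypothesis exchange.
# NOTE: the topology, the weight update, and all names here are
# assumptions for illustration, not the paper's DNB algorithm itself.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class Site:
    """One node in the network, holding a local data partition."""
    def __init__(self, X, y):
        self.X, self.y = X, y                     # local data, labels in {-1, +1}
        self.w = np.full(len(y), 1.0 / len(y))    # local example weights
        self.ensemble = []                        # list of (alpha, hypothesis)

    def train_weak_learner(self):
        """Fit a decision stump on the locally weighted data."""
        h = DecisionTreeClassifier(max_depth=1)
        h.fit(self.X, self.y, sample_weight=self.w)
        return h

    def absorb(self, h):
        """Incorporate a hypothesis (its own or a neighbor's): weight it
        by its error on the local data, then reweight local examples
        (an exponential-loss functional-gradient step)."""
        pred = h.predict(self.X)
        err = np.clip(np.sum(self.w * (pred != self.y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        self.ensemble.append((alpha, h))
        self.w *= np.exp(-alpha * self.y * pred)
        self.w /= self.w.sum()

    def predict(self, X):
        score = sum(a * h.predict(X) for a, h in self.ensemble)
        return np.sign(score)

def run(sites, neighbors, rounds=10):
    for _ in range(rounds):
        learned = [s.train_weak_learner() for s in sites]  # local training
        for i, s in enumerate(sites):
            s.absorb(learned[i])                           # own hypothesis
            for j in neighbors[i]:                         # neighbors' hypotheses
                s.absorb(learned[j])

# Usage on synthetic data split across a 3-site ring:
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
sites = [Site(X[i::3], y[i::3]) for i in range(3)]
run(sites, neighbors={0: [1, 2], 1: [0, 2], 2: [0, 1]})
print("site-0 accuracy:", np.mean(sites[0].predict(X) == y))
```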