Abstract—Learning in parallel or from distributed data is becoming increasingly important. Factors contributing to this trend include the emergence of data sets exceeding RAM sizes and inherently distributed scenarios such as mobile environments. In these cases, too, interpretable models are favored: they facilitate identifying artifacts and understanding the impact of individual variables. In a distributed environment, even if the individual learner on each site is interpretable, the overall model usually is not (as, e.g., in the case of voting schemes). To overcome this problem, we propose an approach for efficiently merging decision trees (each learned independently) into a single decision tree. The method complements existing parallel decision tree algorithms by providing interpretable intermediate models and by tolerating constraints on bandwidth and RAM size. The latter properties are achieved by trading accuracy for adherence to the RAM and communication constraints. Our method and the mentioned trad...
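
To make the merging idea concrete, below is a minimal Python sketch of one plausible core step: treating each tree as the set of axis-parallel boxes induced by its leaves (each box carrying the class counts of the training samples that reached that leaf) and intersecting the two partitions. All names here (Box, intersect, merge_partitions, predict) are illustrative assumptions, not identifiers from the paper, and the sketch omits steps a full method would need, such as reconstructing a single tree from the merged regions and pruning it to meet the RAM and bandwidth budgets.

    from dataclasses import dataclass
    from typing import Dict, List, Optional, Tuple

    # A leaf region of an axis-parallel decision tree: a box in feature
    # space plus the class counts of the samples that reached the leaf.
    @dataclass
    class Box:
        lower: Tuple[float, ...]   # per-dimension lower bounds
        upper: Tuple[float, ...]   # per-dimension upper bounds
        counts: Dict[str, int]     # class label -> sample count

    def intersect(a: Box, b: Box) -> Optional[Box]:
        """Geometric intersection of two boxes; None if they are disjoint.
        Class counts from both leaves are summed in the overlap."""
        lower = tuple(max(la, lb) for la, lb in zip(a.lower, b.lower))
        upper = tuple(min(ua, ub) for ua, ub in zip(a.upper, b.upper))
        if any(l >= u for l, u in zip(lower, upper)):
            return None
        counts = {c: a.counts.get(c, 0) + b.counts.get(c, 0)
                  for c in set(a.counts) | set(b.counts)}
        return Box(lower, upper, counts)

    def merge_partitions(t1: List[Box], t2: List[Box]) -> List[Box]:
        """Merge two trees given as their leaf partitions: every nonempty
        pairwise intersection becomes a region of the merged model."""
        merged = []
        for a in t1:
            for b in t2:
                box = intersect(a, b)
                if box is not None:
                    merged.append(box)
        return merged

    def predict(model: List[Box], x: Tuple[float, ...]) -> str:
        """Classify x by the majority class of the region containing it."""
        for box in model:
            if all(l <= v < u for l, v, u in zip(box.lower, x, box.upper)):
                return max(box.counts, key=box.counts.get)
        raise ValueError("x lies outside the modeled region")

As a toy usage example with two one-dimensional "trees" over [0, 1):

    t1 = [Box((0.0,), (0.5,), {"A": 8}), Box((0.5,), (1.0,), {"B": 6})]
    t2 = [Box((0.0,), (0.3,), {"A": 5}), Box((0.3,), (1.0,), {"B": 9})]
    merged = merge_partitions(t1, t2)
    print(predict(merged, (0.4,)))  # region [0.3, 0.5): A=8 vs. B=9 -> "B"

Note that the merged partition can contain up to |t1| x |t2| regions, which is where an accuracy-versus-RAM trade-off of the kind the abstract mentions would enter: a resource-bounded variant would have to coarsen or prune this partition.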