The meta-learner MLR (Multi-response Linear Regression) has been proposed as a trainable combiner for fusing heterogeneous base-level classifiers. Although it has interesting properties, it has not yet been evaluated extensively. This paper employs learning curves to investigate the performance of MLR on multi-class classification problems relative to other trainable combiners. Several strategies (namely, Reusing, Validation and Stacking) are considered for using the available data to train both the base-level classifiers and the combiner. Experimental results show that, owing to the limited complexity of MLR, it can outperform the other combiners at small sample sizes when the Validation or Stacking strategy is adopted. Therefore, MLR should be a preferred choice among trainable combiners when solving a multi-class task with a small sample size.
Chun-Xia Zhang, Robert P. W. Duin
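To make the setting concrete, the following is a minimal sketch of MLR used as a trainable combiner in a stacking scheme: each base-level classifier contributes its class-probability outputs as meta-features, and one linear regression per class (multi-response) maps these to one-hot class indicators; the predicted class is the one with the largest regression output. The particular base classifiers, the iris dataset, and the scikit-learn components are illustrative assumptions, not the authors' experimental setup; the cross-validated meta-features only approximate the Stacking strategy described in the abstract.

```python
# Illustrative sketch (assumed setup, not the paper's code): MLR as a stacked combiner.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Heterogeneous base-level classifiers (hypothetical choice).
base_learners = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)]

# Meta-features for the combiner: out-of-fold class probabilities from each base classifier,
# approximating the Stacking strategy for reusing the training data.
meta_tr = np.hstack([
    cross_val_predict(clf, X_tr, y_tr, cv=5, method="predict_proba")
    for clf in base_learners
])

# MLR combiner: multi-response linear regression onto one-hot class indicator targets.
n_classes = len(np.unique(y_tr))
targets = np.eye(n_classes)[y_tr]
mlr = LinearRegression().fit(meta_tr, targets)

# At test time the base classifiers (refit on all training data) supply the meta-features;
# the predicted class is the argmax over the combiner's per-class regression outputs.
meta_te = np.hstack([clf.fit(X_tr, y_tr).predict_proba(X_te) for clf in base_learners])
y_pred = mlr.predict(meta_te).argmax(axis=1)
print("combiner accuracy:", (y_pred == y_te).mean())
```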