
MCS 2004, Springer

A Comparison of Ensemble Creation Techniques

We experimentally evaluate bagging and six other randomization-based approaches to creating an ensemble of decision-tree classifiers. Bagging uses randomization to create multiple training sets. Other approaches, such as Randomized C4.5, apply randomization in selecting a test at a given node of a tree. Still others, such as random forests and random subspaces, apply randomization in selecting the attributes used to build the tree. Boosting, as compared here, instead builds classifiers incrementally, focusing on examples misclassified by the existing ensemble. Experiments were performed on 34 publicly available data sets. While each of the other six approaches has some strengths, we find that none is consistently more accurate than standard bagging when tested for statistical significance.
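The bootstrap-and-vote idea behind bagging described above can be sketched in a few lines. This is an illustrative toy (the function names, stump learner, and data are invented for this sketch, not taken from the paper): each ensemble member is trained on a training set drawn with replacement from the original data, and predictions are combined by plurality vote.

```python
import random

def bootstrap_sample(data, rng):
    """Draw a training set of the same size by sampling with replacement."""
    return [rng.choice(data) for _ in range(len(data))]

def majority_vote(classifiers, x):
    """Combine ensemble members' predictions by plurality vote."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

def train_stump(sample):
    """A trivial one-threshold 'classifier' for (feature, label) pairs."""
    # Threshold at the mean feature value of this bootstrap sample.
    t = sum(f for f, _ in sample) / len(sample)
    return lambda x: 1 if x >= t else 0

# Toy data: small feature values labeled 0, large ones labeled 1.
rng = random.Random(0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
ensemble = [train_stump(bootstrap_sample(data, rng)) for _ in range(11)]
print(majority_vote(ensemble, 0.85))  # a large feature value -> class 1
```

The randomization lives entirely in `bootstrap_sample`; the base learner itself is deterministic. The node-level and attribute-level approaches compared in the paper (Randomized C4.5, random forests, random subspaces) instead inject randomness inside the tree-building step.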
Type Conference
Year 2004
Where MCS
Authors Robert E. Banfield, Lawrence O. Hall, Kevin W. Bowyer, Divya Bhadoria, W. Philip Kegelmeyer, Steven Eschrich