Ensemble methods like bagging and boosting, which combine the decisions of multiple hypotheses, are among the strongest existing machine learning methods. The diversity of the memb...
This paper proposes a semi-supervised boosting approach to improve statistical word alignment with limited labeled data and large amounts of unlabeled data. The proposed approach ...
This paper studies boosting algorithms that make a single pass over a set of base classifiers. We first analyze a one-pass algorithm in the setting of boosting with diverse base...
To improve weak classifiers, bagging and boosting can be used. These techniques are based on combining classifiers. Usually, a simple majority vote or a weighted majority vote is...
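As an illustration of the combining step this entry describes, here is a minimal sketch of simple and weighted majority voting over a set of classifiers; the Python formulation and function names are my own, not taken from the paper.

```python
from collections import Counter

def majority_vote(classifiers, x):
    # Simple majority vote: each classifier contributes one vote.
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

def weighted_majority_vote(classifiers, weights, x):
    # Weighted vote: each classifier's vote counts with its weight
    # (in boosting, weights typically reflect training accuracy).
    scores = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Usage with toy stump classifiers:
stumps = [lambda x: 1 if x[0] > 0 else -1,
          lambda x: 1 if x[1] > 0 else -1,
          lambda x: -1]
print(majority_vote(stumps, (0.5, -2.0)))                             # -> -1
print(weighted_majority_vote(stumps, (2.0, 0.5, 0.5), (0.5, -2.0)))   # -> 1
```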
Combining multiple global models (e.g., back-propagation-based neural networks) is an effective technique for improving classification accuracy by reducing variance through manipu...
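A minimal sketch of the data-manipulation idea (bootstrap resampling, as in bagging); the generic `fit` callable standing in for a neural-network trainer is my simplification, not the paper's setup.

```python
import random

def bagging_ensemble(fit, data, n_models, rng=random.Random(0)):
    # Manipulate the training data: each model sees a bootstrap sample
    # (drawn with replacement) of the original training set, so the
    # resulting models differ and their average has lower variance.
    models = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in range(len(data))]
        models.append(fit(sample))
    return models

def predict(models, x):
    # Average the (real-valued) outputs; for class labels,
    # a majority vote would be used instead.
    return sum(m(x) for m in models) / len(models)
```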
Boosted PRIM (Patient Rule Induction Method) is a new algorithm developed for two-class classification problems. PRIM is a variant of tree-based methods ([4], Ch. 9.3), seek...
Pei Wang, Young Kim, Jonathan R. Pollack, Robert T...
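For context, a heavily simplified sketch of PRIM's top-down "peeling" pass, which shrinks a box by repeatedly removing the slice of points whose removal most increases the mean response inside the box; this one-feature-slice formulation follows the standard textbook description ([4]), not code from the paper.

```python
def prim_peel(X, y, alpha=0.1, min_support=10):
    # X: list of feature vectors, y: list of 0/1 responses.
    # Start with a box containing all points; repeatedly peel off the
    # alpha-fraction slice (lowest or highest values of one feature)
    # whose removal maximizes the mean of y inside the box.
    inside = list(range(len(X)))
    d = len(X[0])
    while len(inside) > min_support:
        best_mean, best_remaining = None, None
        k = max(1, int(alpha * len(inside)))
        for j in range(d):
            order = sorted(inside, key=lambda i: X[i][j])
            for remaining in (order[k:], order[:-k]):  # peel low / high end
                m = sum(y[i] for i in remaining) / len(remaining)
                if best_mean is None or m > best_mean:
                    best_mean, best_remaining = m, remaining
        current = sum(y[i] for i in inside) / len(inside)
        if best_mean <= current:   # no peel improves the box mean
            break
        inside = best_remaining
    return inside  # indices of the points in the final box
```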
The foremost goal of superscalar processor design is to increase performance through the exploitation of instruction-level parallelism (ILP). Previous studies have shown that spec...
This paper connects two fundamental ideas from theoretical computer science: hard-core set construction, a type of hardness amplification from computational complexity, and boosti...
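The bridge can be stated informally (my paraphrase of Impagliazzo's hard-core lemma, not the paper's exact statement):

```latex
\textbf{Hard-core lemma (informal, Impagliazzo 1995).}
If $\Pr_x[C(x) \ne f(x)] \ge \delta$ for every circuit $C$ of size $s$,
then there exists a set $H \subseteq \{0,1\}^n$ with $|H| \ge \delta 2^n$
such that, for every circuit $C'$ of somewhat smaller size,
\[
  \Pr_{x \in H}\bigl[C'(x) = f(x)\bigr] \le \tfrac12 + \epsilon .
\]
% Boosting gives a constructive proof: the reweighted distributions a
% booster builds against a failing weak learner concentrate on such a
% hard-core set.
```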
Boosting algorithms like AdaBoost and Arc-GV are iterative strategies for minimizing a constrained objective function, making them equivalent to barrier algorithms. Based on this new understandi...
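As a concrete anchor for this view, the standard facts (in my notation, not the paper's): AdaBoost performs coordinate-wise minimization of the exponential loss, with a closed-form step size.

```latex
% AdaBoost's combined hypothesis after $T$ rounds:
F_T(x) = \sum_{t=1}^{T} \alpha_t\, h_t(x),
% which it builds by greedy coordinate descent on the exponential loss
L(F) = \sum_{i=1}^{n} \exp\bigl(-y_i F(x_i)\bigr),
% where the step size for a weak hypothesis with weighted error
% $\epsilon_t$ has the closed form
\alpha_t = \tfrac12 \ln \frac{1-\epsilon_t}{\epsilon_t}.
```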
Wrappers have recently been used to optimize the parameters of learning algorithms. In this paper we investigate the use of a wrapper for estimating the correct n...
Bernhard Pfahringer, Geoffrey Holmes, Gabi Schmidb...
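A minimal sketch of such a wrapper, using a held-out validation split to pick the number of boosting iterations; the use of scikit-learn's AdaBoostClassifier and staged_predict is my illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def estimate_n_iterations(X, y, max_iter=200, seed=0):
    # Wrapper: train once with the maximum budget, then evaluate the
    # ensemble prefix after each boosting round on a held-out split
    # and return the iteration count with the best validation accuracy.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    clf = AdaBoostClassifier(n_estimators=max_iter, random_state=seed)
    clf.fit(X_tr, y_tr)
    accs = [np.mean(pred == y_val) for pred in clf.staged_predict(X_val)]
    return int(np.argmax(accs)) + 1  # iteration counts are 1-indexed

# Usage:
# best_T = estimate_n_iterations(X, y)
# final = AdaBoostClassifier(n_estimators=best_T).fit(X, y)
```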