AAAI
2004

Online Parallel Boosting

This paper presents a new boosting (arcing) algorithm called POCA, Parallel Online Continuous Arcing. Unlike traditional boosting algorithms (such as Arc-x4 and AdaBoost), which construct ensembles by adding and training weak learners sequentially on a round-by-round basis, POCA trains an entire ensemble continuously and in parallel. Because members of the ensemble are not frozen after an initial learning period (as in traditional boosting), POCA can adapt rapidly to nonstationary environments, and because it does not require explicit scoring of a fixed exemplar set, it can perform online learning of non-repeating data. We present results from experiments with neural-network experts showing that POCA is typically faster and more adaptive than existing boosting algorithms. The results reported for the UCI letter dataset are, to our knowledge, the best published scores to date.
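The idea of training all ensemble members in parallel on each incoming example, with later members emphasizing examples that earlier members misclassify, can be sketched as follows. This is an illustrative reconstruction, not the authors' published POCA implementation: the `Perceptron` weak learner, the `poca_step` function, and the Arc-x4-style weighting rule (`1 + mistakes**4`) are assumptions chosen for the sketch.

```python
import random

class Perceptron:
    """Simple online linear learner standing in for a weak expert
    (the paper uses neural-network experts)."""
    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else -1

    def update(self, x, y, weight):
        # Weighted perceptron update on a single example.
        if self.predict(x) != y:
            for i, xi in enumerate(x):
                self.w[i] += self.lr * weight * y * xi
            self.b += self.lr * weight * y

def poca_step(ensemble, x, y):
    """One online step: every member trains on (x, y) in the same pass,
    so no member is ever frozen.  The example weight seen by member k
    grows with the number of mistakes made by members 1..k-1, in the
    spirit of Arc-x4-style arcing (an assumed weighting rule here)."""
    mistakes = 0
    for expert in ensemble:
        weight = 1.0 + mistakes ** 4
        expert.update(x, y, weight)
        if expert.predict(x) != y:
            mistakes += 1

def ensemble_predict(ensemble, x):
    # Unweighted majority vote over the experts.
    vote = sum(e.predict(x) for e in ensemble)
    return 1 if vote >= 0 else -1
```

A usage sketch: stream `(x, y)` pairs through `poca_step` one at a time; because every expert updates on every example, the ensemble can track a drifting target without the round-by-round retraining that sequential boosting requires.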
Type Conference
Year 2004
Where AAAI
Authors Jesse A. Reichler, Harlan D. Harris, Michael A. Savchenko