Geometric Bounds for Generalization in Boosting

We consider geometric conditions on a labeled data set which guarantee that boosting algorithms work well when linear classifiers are used as weak learners. We start by providing conditions on the error of the weak learner which guarantee that the empirical error of the composite classifier is small. We then focus on the conditions required to ensure that the linear weak learner itself achieves an error smaller than 1/2 − γ, where the advantage parameter γ is strictly positive and independent of the sample size. Such a condition guarantees that the generalization error of the boosted classifier decays to its minimal value at a rate of 1/√m, where m is the sample size. The required conditions, which are based solely on geometric concepts, can be verified for any data set in time O(m²), and may serve as an indication of the effectiveness of linear classifiers as weak learners for a particular data set.
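
The weak-learning condition described above is easy to make concrete in code. The sketch below is a hypothetical illustration, not the paper's method: plain AdaBoost with randomly drawn linear classifiers standing in as weak learners, recording the empirical advantage gamma_t = 1/2 − err_t in each round. All names and parameters here (random_linear_weak_learner, n_candidates, rounds) are assumptions made for illustration.

import numpy as np

def random_linear_weak_learner(X, y, w, n_candidates=50, rng=None):
    # Hypothetical stand-in weak learner: try several random hyperplanes
    # and keep the one with the smallest weighted error under w. The
    # paper's geometric conditions concern when a linear weak learner
    # can always achieve weighted error below 1/2 - gamma.
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_err = None, np.inf
    for _ in range(n_candidates):
        a = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        pred = np.where(X @ a + b >= 0, 1, -1)
        err = w[pred != y].sum()
        # A hyperplane and its flip have weighted errors err and 1 - err;
        # keep whichever side is better.
        if err > 0.5:
            a, b, err = -a, -b, 1.0 - err
        if err < best_err:
            best, best_err = (a, b), err
    return best, best_err

def adaboost_with_linear_learners(X, y, rounds=20):
    # Standard AdaBoost; labels y must be +1/-1. Each round records the
    # empirical advantage gamma_t = 1/2 - err_t; the abstract's 1/sqrt(m)
    # rate requires gamma_t bounded away from 0 independently of m.
    m = len(y)
    w = np.full(m, 1.0 / m)
    ensemble, advantages = [], []
    for _ in range(rounds):
        (a, b), err = random_linear_weak_learner(X, y, w)
        gamma = 0.5 - err
        if gamma <= 0:
            break  # weak-learning condition violated; stop boosting
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = np.where(X @ a + b >= 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        ensemble.append((alpha, a, b))
        advantages.append(gamma)
    return ensemble, advantages

If every round's advantage stays above a fixed γ > 0, the standard AdaBoost bound drives the training error below exp(−2γ²T) after T rounds, which is the mechanism behind the abstract's first claim; the paper's contribution is geometric conditions under which a linear weak learner can always supply such a γ.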
Shie Mannor, Ron Meir
Type Conference
Year 2001
Where COLT (Springer)