
NIPS
2001

A Parallel Mixture of SVMs for Very Large Scale Problems

Support Vector Machines (SVMs) are currently the state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic in the number of examples. Hence, it is hopeless to try to solve real-life problems with more than a few hundred thousand examples using a single SVM. The present paper proposes a new mixture of SVMs that can easily be implemented in parallel and in which each SVM is trained on a small subset of the whole dataset. Experiments on a large benchmark dataset (Forest) as well as on a difficult speech database yielded significant training-time improvements (the time complexity empirically appears to grow locally linearly with the number of examples). In addition, and surprisingly, a significant improvement in generalization was observed on Forest.
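The core idea in the abstract — partition the training set, train one SVM per subset (possibly in parallel), then combine the experts — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses simple linear SVMs trained by hinge-loss subgradient descent and combines the experts by uniform averaging, whereas the paper trains a neural-network gater to weight the experts; all function names here are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200, seed=0):
    # Train one linear SVM (hinge loss + L2 regularization) by
    # stochastic subgradient descent. Labels y must be in {-1, +1}.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # only shrink weights
                w = (1 - lr * lam) * w
    return w, b

def train_mixture(X, y, n_experts=4, seed=0):
    # Split the training set into disjoint subsets and train one SVM
    # per subset; each call is independent, so in a real deployment
    # the experts would be trained in parallel.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    return [train_linear_svm(X[part], y[part], seed=seed)
            for part in np.array_split(idx, n_experts)]

def predict_mixture(experts, X):
    # Combine the experts by averaging their raw outputs and taking
    # the sign. (The paper instead learns a gating network that
    # produces input-dependent weights for the experts.)
    scores = np.mean([X @ w + b for w, b in experts], axis=0)
    return np.sign(scores)
```

Because each expert sees only `n / n_experts` examples, the super-linear cost of SVM training is paid on much smaller problems, which is the source of the reported speed-up.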
Ronan Collobert, Samy Bengio, Yoshua Bengio
Added 31 Oct 2010
Updated 31 Oct 2010
Type Conference
Year 2001
Where NIPS
Authors Ronan Collobert, Samy Bengio, Yoshua Bengio