
NIPS 2004

Parallel Support Vector Machines: The Cascade SVM

We describe an algorithm for support vector machines (SVM) that can be parallelized efficiently and scales to very large problems with hundreds of thousands of training vectors. Instead of analyzing the whole training set in one optimization step, the data are split into subsets and optimized separately with multiple SVMs. The partial results are combined and filtered again in a `Cascade' of SVMs, until the global optimum is reached. The Cascade SVM can be spread over multiple processors with minimal communication overhead and requires far less memory, since the kernel matrices are much smaller than for a regular SVM. Convergence to the global optimum is guaranteed with multiple passes through the Cascade, but already a single pass provides good generalization. A single pass is 5x …
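To make the cascade scheme described in the abstract concrete, here is a minimal sketch in Python using scikit-learn's SVC. It is only an illustration of the idea, not the authors' implementation: the data are split into subsets, each subset is trained separately, and only the resulting support vectors are merged pairwise and retrained layer by layer. The toy data, number of subsets, and kernel settings are arbitrary assumptions.

```python
# Hypothetical sketch of one pass of a cascade of SVMs (not the authors' code).
import numpy as np
from sklearn.svm import SVC

def train_svm(X, y, C=1.0, gamma="scale"):
    """Train a kernel SVM and return only the support vectors it selects."""
    clf = SVC(C=C, kernel="rbf", gamma=gamma)
    clf.fit(X, y)
    return X[clf.support_], y[clf.support_]

def cascade_pass(X, y, n_subsets=4):
    """One pass through the cascade: train SVMs on subsets, then merge
    pairs of support-vector sets and retrain until a single set remains."""
    idx = np.array_split(np.random.permutation(len(y)), n_subsets)
    layer = [(X[i], y[i]) for i in idx]
    while len(layer) > 1:
        # Each layer keeps only the support vectors of every partition ...
        layer = [train_svm(*part) for part in layer]
        # ... and merges adjacent partial results pairwise for the next layer.
        layer = [(np.vstack([a[0], b[0]]), np.concatenate([a[1], b[1]]))
                 for a, b in zip(layer[::2], layer[1::2])]
    return train_svm(*layer[0])  # final SVM of this pass

# Toy example: two Gaussian blobs (assumed data, purely for demonstration).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(500, 2) + 1, rng.randn(500, 2) - 1])
y = np.array([1] * 500 + [-1] * 500)
sv_X, sv_y = cascade_pass(X, y)
print("support vectors after one cascade pass:", len(sv_y))
```

Because each partition sees only a fraction of the data, every kernel matrix in the cascade stays small, which is the memory and parallelization advantage the abstract describes; feeding the final support vectors back to the subsets would give the additional passes needed for guaranteed convergence to the global optimum.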
Added 31 Oct 2010
Updated 31 Oct 2010
Type Conference
Year 2004
Where NIPS
Authors Hans Peter Graf, Eric Cosatto, Léon Bottou, Igor Durdanovic, Vladimir Vapnik