When linear support vector machines (SVMs) are applied to multi-class text categorization in industry, the resulting model is often very large, typically several gigabytes. As a result, the model cannot fit directly in memory and classification is slow. In this paper, we propose a novel method based on vector norms that shrinks the model size significantly without sacrificing classification accuracy. We also propose a cache-efficient implementation of multi-class linear SVMs for the classification phase. Experimental results on the Yahoo-Korea dataset show that the proposed method shrinks the model from 5.2 gigabytes to 260 megabytes, and the cache-efficient implementation achieves a speedup factor of 44.
Jian-xiong Dong, Ching Y. Suen, Adam Krzyzak
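To make the idea of norm-based shrinking concrete, the sketch below prunes features of a multi-class linear SVM weight matrix by the norm of each feature's weight vector across all classes: features with near-zero norm contribute little to any class score and can be dropped. This is a minimal illustration under assumed conventions (a dense `(n_classes, n_features)` matrix `W`, a `keep_ratio` threshold); the function and parameter names are hypothetical, not from the paper.

```python
import numpy as np

def shrink_model(W, keep_ratio=0.05):
    """Prune a multi-class linear SVM weight matrix by feature norm.

    W          : (n_classes, n_features) dense weight matrix.
    keep_ratio : fraction of features to keep (illustrative default).
    Returns the pruned matrix and the indices of surviving features.
    """
    # 2-norm of each feature's weight vector across all classes
    norms = np.linalg.norm(W, axis=0)
    # Number of features to retain (at least one)
    k = max(1, int(keep_ratio * W.shape[1]))
    # Indices of the k features with the largest norms, in ascending order
    keep = np.sort(np.argsort(norms)[-k:])
    return W[:, keep], keep

def classify(W_small, keep, x):
    """Score a document vector x against the shrunk model."""
    return int(np.argmax(W_small @ x[keep]))
```

A document is then classified by projecting its feature vector onto the surviving features and taking the argmax of the class scores; because pruned features had small weights for every class, the predicted labels rarely change.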