Abstract. It has been shown that many kernel methods can be equivalently formulated as minimal enclosing ball (MEB) problems in a suitable feature space. Exploiting this equivalence, efficient algorithms to scale up Support Vector Machines (SVMs) and other kernel methods have been introduced under the name Core Vector Machines (CVMs). In this paper we study a new algorithm to train SVMs, based on an instance of the Frank-Wolfe optimization method recently proposed to approximate the solution of the MEB problem. We show that, when specialized to SVM training, this algorithm can scale better than CVMs at the price of a slightly lower accuracy.
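To fix ideas, the classical Frank-Wolfe-style iteration for approximating an MEB (in the spirit of the Badoiu-Clarkson scheme that underlies this line of work) can be sketched as follows. This is only an illustrative sketch of the generic geometric algorithm in input space, with hypothetical names; it is not the specialized SVM training procedure studied in the paper. At each step the point farthest from the current center plays the role of the Frank-Wolfe vertex, and the center moves toward it with a harmonic step size.

```python
import math

def meb_frank_wolfe(points, eps=0.05):
    """(1+eps)-approximate minimum enclosing ball via a Frank-Wolfe /
    Badoiu-Clarkson style iteration (illustrative sketch only; not the
    specialized SVM variant studied in the paper)."""
    c = list(points[0])                 # start the center at an arbitrary point
    T = int(math.ceil(1.0 / eps ** 2))  # O(1/eps^2) iterations suffice
    for t in range(1, T + 1):
        # farthest point from the current center = Frank-Wolfe direction
        p = max(points, key=lambda q: sum((qi - ci) ** 2
                                          for qi, ci in zip(q, c)))
        # move the center toward it with harmonic step size 1/(t+1)
        c = [ci + (pi - ci) / (t + 1) for ci, pi in zip(c, p)]
    r = max(math.dist(c, q) for q in points)  # radius of the returned ball
    return c, r
```

In the kernelized setting relevant to SVM training, the same iteration operates on the feature-space images of the data, with distances evaluated through the kernel function rather than explicit coordinates.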