A fast online algorithm, OnlineSVMR, for training Ramp-Loss Support Vector Machines (SVMRs) is proposed. It finds the optimal SVMR for t+1 training examples using the SVMR built on the previous t examples. The algorithm maintains the Karush–Kuhn–Tucker conditions on all previously observed examples, which is achieved by SMO-style incremental learning and decremental unlearning under the Concave-Convex Procedure (CCCP) framework. Training can be sped up further by dropping the requirement of optimality: a greedy variant, OnlineASVMR, approximately optimizes the SVMR objective function and is suitable for online active learning. The proposed algorithms were comprehensively evaluated on 9 large benchmark data sets. The results demonstrate that OnlineSVMR (1) has a computational cost similar to that of its offline counterpart, and (2) outperforms IDSVM, its competing hinge-loss online algorithm, in terms of accuracy, model sparsity, and training time. The...
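For context, the ramp loss underlying SVMR is commonly written as a difference of two hinge functions, which is what makes the Concave-Convex Procedure applicable; the notation below (margin z = yf(x), ramp parameter s) is an assumption, since the abstract does not fix it:

$$R_s(z) = H_1(z) - H_s(z), \qquad H_s(z) = \max(0,\, s - z), \quad s < 1.$$

The convex part $H_1$ is the standard hinge loss, while the concave part $-H_s$ caps the penalty on badly misclassified points; this capping is what keeps outliers from becoming support vectors and is the usual source of the model sparsity noted above.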