We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin-dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the dual perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kernel. Ironically, the bound yields better guarantees than are currently available for the support vector solution itself.
Thore Graepel, Ralf Herbrich, Robert C. Williamson
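To make the object of the abstract concrete, here is a minimal sketch of the dual (kernelised) perceptron it refers to. The classifier has the form f(x) = sign(Σᵢ αᵢ yᵢ k(xᵢ, x)), and each αᵢ counts the mistakes made on example i, so the number of nonzero αᵢ is exactly the sparsity the abstract reinterprets Novikoff's mistake bound as controlling. The variable names, the RBF kernel choice, and the epoch cap are illustrative assumptions, not details from the paper.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2); an assumed choice."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def dual_perceptron(X, y, kernel=rbf_kernel, epochs=10):
    """Perceptron in dual form: alpha[i] counts mistakes on example i,
    so the learned classifier's sparsity is the number of nonzero alphas."""
    n = len(X)
    alpha = [0] * n
    for _ in range(epochs):
        mistakes = 0
        for j in range(n):
            s = sum(alpha[i] * y[i] * kernel(X[i], X[j]) for i in range(n))
            if y[j] * s <= 0:      # mistake on example j: bump its dual weight
                alpha[j] += 1
                mistakes += 1
        if mistakes == 0:          # converged; Novikoff's theorem bounds the
            break                  # total mistake count in terms of the margin
    return alpha

def predict(alpha, X, y, x, kernel=rbf_kernel):
    """Evaluate sign(sum_i alpha_i * y_i * k(x_i, x)) at a new point x."""
    s = sum(alpha[i] * y[i] * kernel(X[i], x) for i in range(len(X)))
    return 1 if s > 0 else -1
```

On separable data the loop terminates with `mistakes == 0`, and the support of `alpha` is the mistake set whose size the improved convergence theorem bounds via the SVM margin.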