The problem of learning linear discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers much of this class, including not only Perceptron and Winnow but also many novel algorithms. Our proof introduces a generic measure of progress that appears to capture much of when and how these algorithms converge. Using this measure, we develop a simple, general technique for proving mistake bounds, which we apply to the new algorithms as well as to existing ones. When applied to known algorithms, our technique “automatically” produces close variants of existing proofs (and we generally obtain the known bounds, to within constants), thus showing, in a certain sense, that these seemingly diverse results are fundamentally the same.
Adam J. Grove, Nick Littlestone, Dale Schuurmans
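As a concrete illustration (a sketch, not the paper's own presentation), one standard way to realize a quasi-additive algorithm is to accumulate mistake-driven additive updates in a vector z and predict with weights w = f(z) for some fixed link function f: the identity link recovers the Perceptron, while an exponential link yields a Winnow-style multiplicative update. The code below assumes this link-function formulation; the names quasi_additive_fit, perceptron_link, winnow_link, and the learning rate eta are hypothetical.

```python
import numpy as np

def quasi_additive_fit(X, y, link, epochs=100):
    """Mistake-driven quasi-additive learner (illustrative sketch).

    Maintains an additively updated vector z; the hypothesis weights
    are w = link(z), so the choice of link determines the algorithm.
    X: (n, d) array of instances; y: labels in {-1, +1}.
    """
    z = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x, label in zip(X, y):
            w = link(z)                    # current weight vector
            if label * np.dot(w, x) <= 0:  # mistake-driven: update only on error
                z += label * x             # additive update to z
                mistakes += 1
        if mistakes == 0:                  # no mistakes in a full pass: done
            break
    return link(z)

# Perceptron: identity link, i.e. purely additive weights.
perceptron_link = lambda z: z

# Winnow-style: exponential link, i.e. multiplicative weight updates.
# eta is a hypothetical learning-rate parameter; note the weights stay
# positive, so the balanced (two-weight) variant is needed in general.
eta = 0.5
winnow_link = lambda z: np.exp(eta * z)

# Toy usage on a small linearly separable sample.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, -1.0]])
y = np.array([1, 1, 1, -1])
w_perceptron = quasi_additive_fit(X, y, perceptron_link)
w_winnow = quasi_additive_fit(X, y, winnow_link)
```

Both runs use the identical additive update to z; only the link changes, which is what makes a single convergence proof over the link function plausible for the whole family.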