While recent research on rule learning has focused largely on finding highly accurate hypotheses, we evaluate the degree to which these hypotheses are also simple, that is, small. To this end, we compare well-known rule learners, such as CN2, RIPPER, PART, FOIL, and C5.0 rules, with the benchmark system SL2, which explicitly aims at computing small rule sets with few literals. The results show that it is possible to obtain a level of accuracy similar to that of state-of-the-art rule learners using much smaller rule sets.

Key words: Rule Learning, Simplicity, Stochastic Local Search