The automatic induction of classification rules from examples in the form of a decision tree is an important technique in data mining. One of the problems encountered is the overfitting of rules to training data. In some cases this can lead to an excessively large number of rules, many of which have very little predictive value for unseen data. This paper is concerned with the reduction of overfitting during decision tree generation. It introduces a technique known as J-pruning, based on the J-measure, an information-theoretic means of quantifying the information content of a rule.
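For readers unfamiliar with the J-measure, the sketch below computes it for a rule of the form "If Y = y then X = x", using the standard definition due to Smyth and Goodman: J(X; Y = y) = p(y) · j(X; Y = y), where j is the cross-entropy between the posterior and prior distributions of the conclusion. The function name and signature are illustrative, not taken from the paper.

```python
import math

def j_measure(p_y: float, p_x: float, p_x_given_y: float) -> float:
    """J-measure of a rule 'If Y = y then X = x' (Smyth & Goodman).

    p_y         -- probability that the rule's condition fires, p(y)
    p_x         -- prior probability of the rule's conclusion, p(x)
    p_x_given_y -- probability of the conclusion given the condition, p(x|y)

    Assumes 0 < p_x < 1; the zero-probability terms are taken as 0
    by the usual convention 0 * log(0) = 0.
    """
    def term(p: float, q: float) -> float:
        # Contribution p * log2(p / q), defined as 0 when p == 0.
        return p * math.log2(p / q) if p > 0 else 0.0

    # j(X; Y = y): how much the rule's condition shifts belief in the conclusion.
    j = term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x)
    # Weight by p(y), the probability that the rule applies at all.
    return p_y * j

# Example: a rule that fires 30% of the time and raises the conclusion's
# probability from a prior of 0.5 to 0.9 given the condition.
print(j_measure(p_y=0.3, p_x=0.5, p_x_given_y=0.9))
```

A high J-value thus requires both that the rule fire often enough to matter (large p(y)) and that its condition substantially change the probability of its conclusion, which is what makes the measure a plausible basis for deciding when further specialization of a rule is no longer worthwhile.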