Conventional algorithms for decision tree induction represent instances with an attribute-value scheme, in which each attribute holds a single value. This paper explores the empirical consequences of allowing set-valued attributes, in which an attribute may hold a set of values. This simple representational extension is shown to yield significant gains in both speed and accuracy. To support it, the paper also describes an intuitive and practical form of pre-pruning. When used as a pre-processor for numeric data, this method yields considerably better accuracy. It also improves the accuracy of the second-best classification option, which has valuable ramifications for post-processing.
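The contrast between the two representations can be sketched as follows. This is an illustrative example only, not the paper's induction algorithm; the attribute names, data, and helper functions are assumptions made for clarity.

```python
# Sketch (not the paper's method): a conventional attribute-value test
# versus a set-valued attribute test at a decision tree branch.
# The attribute "color" and both instances are hypothetical.

# Attribute-value representation: each attribute holds one value.
inst_av = {"color": "red"}

# Set-valued representation: an attribute may hold a set of values.
inst_sv = {"color": {"red", "white"}}

def av_test(instance, attr, value):
    """Conventional branch test: exact match on the single value."""
    return instance[attr] == value

def sv_test(instance, attr, value):
    """Set-valued branch test: membership in the attribute's value set."""
    return value in instance[attr]

print(av_test(inst_av, "color", "white"))  # False: single value is "red"
print(sv_test(inst_sv, "color", "white"))  # True: "white" is in the set
```

Under this sketch, a single set-valued instance can satisfy branch tests that would require several separate attribute-value instances, which is one intuition for the speed gains the abstract reports.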