Most rule learning systems posit hard decision boundaries for continuous attributes and point estimates of rule accuracy, with no measure of variance; to a domain expert, these choices can appear arbitrary. Because rule induction algorithms are unstable, the boundaries and point estimates shift under small perturbations of the training data. Moreover, rule induction typically produces a large number of rules that an analyst must filter and interpret. This paper describes a method of combining rules across multiple bootstrap replications of rule induction so as to reduce the total number of rules presented to an analyst, to measure and increase the stability of the rule induction process, and to attach variance estimates to continuous-attribute decision boundaries and to rule accuracy estimates. A measure of similarity between rules is also introduced as the basis for multidimensional scaling to visualize rule similarity. The method was applied to perioperative data and to datasets from the UCI (University of California, Irvine) machine learning repository.
Lemuel R. Waitman, Douglas H. Fisher, Paul H. King
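The bootstrap-and-combine idea behind the abstract can be illustrated with a minimal sketch. This is not the paper's actual system; the one-attribute threshold rule, function names, and synthetic usage are hypothetical. The sketch induces a simple rule of the form "IF x > t THEN positive" on many bootstrap resamples and reports the mean and standard deviation of the learned boundary t and of the rule's training accuracy, giving the kind of variance measure the abstract describes:

```python
import random
import statistics

def learn_threshold_rule(data):
    """Learn 'IF x > t THEN positive' on (x, label) pairs, choosing the
    midpoint threshold t that maximizes training accuracy."""
    xs = sorted({x for x, _ in data})
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    best_t, best_acc = xs[0], -1.0
    for t in candidates:
        acc = sum((x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def bootstrap_rule_variance(data, n_reps=200, seed=0):
    """Re-induce the rule on n_reps bootstrap resamples; return the mean
    and standard deviation of the decision boundary and of the accuracy."""
    rng = random.Random(seed)
    thresholds, accuracies = [], []
    for _ in range(n_reps):
        sample = [rng.choice(data) for _ in data]  # resample with replacement
        t, acc = learn_threshold_rule(sample)
        thresholds.append(t)
        accuracies.append(acc)
    return (statistics.mean(thresholds), statistics.stdev(thresholds),
            statistics.mean(accuracies), statistics.stdev(accuracies))

# Hypothetical usage on synthetic, cleanly separable data:
data = [(i / 10, i / 10 > 5) for i in range(101)]
mean_t, std_t, mean_acc, std_acc = bootstrap_rule_variance(data)
```

Reporting `mean_t ± std_t` instead of a single cut point is one way to soften a hard boundary for a domain expert; on noisier data the spread of `thresholds` would also quantify the instability the abstract refers to.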