This paper addresses the problem of explaining the result produced by a decision tree when it is used to predict the class of new cases. In order to evaluate this result, the end-user relies on some estimate of the error rate and on the trace of the classification. Unfortunately, the trace does not contain the information needed to understand the case at hand. We propose a new method to qualify the result given by a decision tree when the data are continuous-valued. We perform a geometric study of the decision surface (the boundary of the inverse image of the different classes). This analysis yields the list of the tests in the tree that are most sensitive to a change in the input data. Unlike the trace, this list can easily be ordered and pruned so that only the most important tests are presented. We also show how this metric can be used to interact with the end-user.
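To make the idea concrete, here is a minimal sketch, not the paper's exact metric: it ranks the threshold tests along a tree's classification path by how close the input value lies to each split threshold, so the most fragile tests appear first. The node layout, the per-attribute normalization, and all names below are illustrative assumptions.

```python
# Illustrative sketch: order the tests on a decision-tree path by sensitivity,
# measured here as the normalized distance from the attribute value to the
# split threshold (an assumption, not the paper's published metric).

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    feature: Optional[int] = None      # index of the tested attribute
    threshold: Optional[float] = None  # go left if x[feature] <= threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[str] = None        # class label at a leaf


def classify_with_sensitivity(root: Node, x, scale):
    """Return (predicted label, tests ordered from most to least sensitive).

    Sensitivity of a test is taken as |x[feature] - threshold| divided by a
    per-attribute scale (e.g. the attribute's range on the training data).
    """
    tests = []
    node = root
    while node.label is None:
        margin = abs(x[node.feature] - node.threshold) / scale[node.feature]
        tests.append((margin, node.feature, node.threshold))
        node = node.left if x[node.feature] <= node.threshold else node.right
    # Smallest normalized margin first: a small perturbation of that attribute
    # would flip the test outcome, so it is the most fragile step in the trace.
    tests.sort(key=lambda t: t[0])
    return node.label, tests


# Tiny hand-built tree: test x[0] <= 2.0, then x[1] <= 5.0 on the left branch.
tree = Node(feature=0, threshold=2.0,
            left=Node(feature=1, threshold=5.0,
                      left=Node(label="A"), right=Node(label="B")),
            right=Node(label="B"))

label, ranked = classify_with_sensitivity(tree, x=[1.9, 3.0], scale=[10.0, 10.0])
print(label)  # "A"
for margin, feat, thr in ranked:
    print(f"attribute {feat}: |x - {thr}| / range = {margin:.3f}")
```

Pruning the ordered list (e.g. keeping only tests whose normalized margin falls below a chosen cutoff) would then present the end-user with only the most important tests, in the spirit described above.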