A learner's performance depends not only on the representation language and on the algorithm that induces a hypothesis in this language; the way the induced hypothesis is interpreted for the needs of concept recognition is also of interest. A flexible methodology for hypothesis interpretation is offered by the philosophy of a learner's second tier, as originally suggested by Michalski (1987). Here, the potential of this general approach is demonstrated in the framework of numeric decision trees. The second tier improves classification performance, increases the ability to handle context, and facilitates the transfer of a hypothesis between different contexts.