This paper addresses the problem of explaining, to the end-user, the result produced by a classifier used as a decision support system. We consider machine learning classifiers, which assign a class to new cases, but also deterministic classifiers built to solve a particular problem (as in viability or control problems). The end-user relies mainly on global information (such as error rates) to assess the quality of the result given by the system. Even the class membership probability, when available, reflects only a statistical viewpoint; it does not take the context of a particular case into account. When the state space is numerical, we propose to use the decision boundary of the classifier (which always exists, at least implicitly) to describe the situation of a particular case: the distance from a case to the decision boundary measures the robustness of the decision to a change in the input data. Other geometric concepts can present a precise picture of the si...
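To make the distance-to-boundary idea concrete, here is a minimal sketch, assuming a linear classifier with weight vector `w` and bias `b` (hypothetical names; the paper itself does not fix a particular classifier family). For a hyperplane boundary, the robustness margin of a case is the standard point-to-hyperplane distance; for a general classifier with only a predict function, the distance can be estimated numerically, e.g. by bisection along the segment joining the case to a reference point of the opposite class.

```python
import numpy as np

def distance_to_linear_boundary(x, w, b):
    # Signed distance from point x to the hyperplane w.x + b = 0;
    # its magnitude measures how robust the decision is to input changes.
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def estimate_boundary_distance(predict, x, x_other, tol=1e-6):
    # Generic estimate for a black-box classifier: bisect along the
    # segment [x, x_other], where predict(x) != predict(x_other),
    # to locate the decision boundary crossing.
    lo, hi = x, x_other
    label = predict(x)
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2.0
        if predict(mid) == label:
            lo = mid
        else:
            hi = mid
    return np.linalg.norm((lo + hi) / 2.0 - x)

# Example: a 2-D linear classifier (illustrative values)
w, b = np.array([1.0, -2.0]), 0.5
x = np.array([0.3, 0.8])
print(distance_to_linear_boundary(x, w, b))

predict = lambda p: int(np.dot(w, p) + b > 0)
x_other = np.array([2.0, 0.0])  # any point classified differently
print(estimate_boundary_distance(predict, x, x_other))
```

Both calls return the same distance for this linear example; the bisection variant only assumes access to the classifier's predictions, which matches the setting of a decision boundary that "always exists, at least implicitly".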