Existing methods for exploiting flawed domain theories depend on the use of a sufficiently large set of training examples for diagnosing and repairing flaws in the theory. In this paper, we offer a method of theory reinterpretation that makes only marginal use of training examples. The idea is as follows: Often a small number of flaws in a theory can completely destroy the theory's classification accuracy. Yet it is clear that valuable information is available even from such flawed theories. For example, an instance with several independent proofs in a slightly flawed theory is certainly more likely to be correctly classified as positive than an instance with only a single proof. This idea can be generalized to a numerical notion of "degree of provedness" which measures the robustness of proofs or refutations for a given instance. This "degree of provedness" can be easily computed using a "soft" interpretation of the theory. Given a ranking of instances based on the values so obtained...
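
To make the intuition concrete, the following is a minimal illustrative sketch (not the paper's actual formulation) of a "soft" interpretation: Boolean conjunction and disjunction over a theory's AND/OR proof structure are replaced by numerical combination functions, so that an instance supported by several independent proofs receives a higher score than one supported by a single proof. The node representation, the noisy-OR combination, and the leaf confidences `LEAF_TRUE`/`LEAF_FALSE` are all assumptions made for the example.

```python
# Sketch of a "soft" interpretation assigning a numeric "degree of
# provedness" to an instance.  All choices below (node encoding,
# product/noisy-OR combination, leaf confidences) are illustrative
# assumptions, not the method defined in the paper.

LEAF_TRUE = 0.9   # assumed confidence for facts observed in the instance
LEAF_FALSE = 0.1  # assumed residual confidence for unobserved facts


def soft_eval(node, facts):
    """Return a degree of provedness in [0, 1] for a goal node.

    `node` is a ground fact name (leaf) or a tuple ("and", children) /
    ("or", children) describing the theory's proof structure.
    """
    if isinstance(node, str):                     # leaf: observed fact?
        return LEAF_TRUE if node in facts else LEAF_FALSE
    op, children = node
    values = [soft_eval(c, facts) for c in children]
    if op == "and":                               # conjunction: product
        result = 1.0
        for v in values:
            result *= v
        return result
    if op == "or":                                # disjunction: noisy-OR
        result = 1.0
        for v in values:
            result *= (1.0 - v)
        return 1.0 - result
    raise ValueError(f"unknown operator {op!r}")


if __name__ == "__main__":
    # Goal is provable either via facts a,b or via facts c,d.
    theory = ("or", [("and", ["a", "b"]), ("and", ["c", "d"])])
    print(soft_eval(theory, {"a", "b"}))            # one proof:  ~0.81
    print(soft_eval(theory, {"a", "b", "c", "d"}))  # two proofs: ~0.96
```

Under this sketch, an instance with two independent proofs scores higher than one with a single proof, matching the robustness intuition stated above; ranking instances by such scores is then possible even when the theory itself contains flaws.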