It has become good practice to expect authors of new model checking algorithms to provide not only rigorous evidence of the algorithms' correctness, but also evidence of their practical significance. Though the rules for determining what is and what is not a good proof of correctness are clear, no comparable rules are usually enforced for determining the soundness of the data that is used to support the claim of practical significance. We consider here how we can flag the more common types of omission.

Invited paper. Proc. CAV'98, LNCS 1427, pp. 61-70.
Gerard J. Holzmann