It is difficult to determine the cost effectiveness of program analysis tools because we cannot evaluate them in the same environment where the tool will actually be used. Tool evaluations are usually run on mature, stable code that has already passed developer testing, whereas in practice the tools are run on unstable code, and some tools are meant to run immediately after compilation. As a result, evaluation results do not reflect the tool's true contribution, and program analysis tool evaluations remain highly subjective, usually depending on the evaluator's intuition. While we cannot solve this problem entirely, we suggest techniques that make the evaluations more objective. We begin by making enforcement-based customizations of the tool under evaluation. When we evaluate a tool, we use a comparative evaluation technique to make the ROI analysis more objective. We also show how to use coverage models to select several tools when they each find different kinds of defects.
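The abstract does not spell out how the coverage models drive tool selection; one plausible reading is a weighted set-cover formulation, where each tool is characterized by the defect categories it can find and tools are chosen until the coverage model's required categories are all handled. The sketch below illustrates that reading with a greedy heuristic; `select_tools`, the tool names, the defect categories, and the cost figures are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch: choose a set of tools whose combined coverage spans
# the defect categories a coverage model demands, using greedy weighted
# set cover (cheapest cost per newly covered category first).

def select_tools(tool_coverage, required, cost):
    """tool_coverage: tool name -> set of defect categories it finds
    required:      defect categories the coverage model demands
    cost:          tool name -> cost (license fees, triage effort, ...)"""
    uncovered = set(required)
    selected = []
    while uncovered:
        # Pick the unselected tool with the best coverage-per-cost ratio.
        best = max(
            (t for t in tool_coverage if t not in selected),
            key=lambda t: len(tool_coverage[t] & uncovered) / cost[t],
            default=None,
        )
        if best is None or not (tool_coverage[best] & uncovered):
            break  # remaining categories are covered by no available tool
        selected.append(best)
        uncovered -= tool_coverage[best]
    return selected, uncovered


tools = {
    "ToolA": {"null-deref", "buffer-overflow"},
    "ToolB": {"resource-leak", "null-deref"},
    "ToolC": {"concurrency"},
}
picked, missed = select_tools(
    tools,
    required={"null-deref", "buffer-overflow", "resource-leak", "concurrency"},
    cost={"ToolA": 2.0, "ToolB": 1.0, "ToolC": 1.5},
)
print(picked, missed)  # ['ToolB', 'ToolC', 'ToolA'] set()
```

A formulation like this makes the multi-tool selection argument concrete: when tools find disjoint kinds of defects, no single tool dominates, and the coverage model, rather than intuition, determines which combination is worth its cost.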