In the last decade, a great many code clone detection tools have been proposed. Such a large number of tools calls for a quantitative comparison, and there have been several attempts to empirically evaluate and compare many of the state-of-the-art tools. However, a recent study shows that several factors can influence the validity of the results of such comparisons. To overcome the effects of these factors (at least in part), in this student poster paper we outline a mutation-based controlled framework for evaluating clone detection tools that uses edit-based mutation operators modelling cloning actions. While the framework is not yet fully implemented and we do not yet have experimental data, we anticipate that it will provide a useful contribution to the community by giving tool evaluation a more solid and objective foundation.

Categories and Subject Descriptors: D.2.7 [Distribution, Maintenance, and Enhancement]: Restructuring, reverse engineering, and reengineering
Chanchal Kumar Roy, James R. Cordy
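
As a purely illustrative aside (not part of the framework or results described in the paper), the sketch below shows, in Python, one plausible edit-based mutation operator of the kind the abstract refers to: a systematic identifier renaming that models a copy-paste-and-rename cloning action, producing an original/mutant pair against which a clone detector's recall could be checked in a controlled way. All names in the sketch are hypothetical.

import re

def rename_identifiers(fragment, renaming):
    """Return a mutated copy of 'fragment' with identifiers systematically renamed,
    modelling the creation of a Type-2 (parameterized) clone of the original."""
    def substitute(match):
        name = match.group(0)
        return renaming.get(name, name)
    # Word-boundary matching is a crude stand-in for proper lexing/parsing.
    return re.sub(r"\b[A-Za-z_]\w*\b", substitute, fragment)

original = "int total = 0; for (int i = 0; i < n; i++) total += data[i];"
mutant = rename_identifiers(original, {"total": "sum", "i": "k", "data": "values"})

# The (original, mutant) pair is a known clone; injecting the mutant into a
# subject system and checking whether a tool reports the pair yields a
# controlled measurement for this particular cloning action.
print(mutant)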