The problem of planning the Next Best View (NBV) still poses many open questions. The methods and algorithms proposed so far are hard to compare, however, since researchers use their own test objects for planning and reconstruction and compute their own specific quality measures. The reported numbers therefore make statements about different objects under different criteria, so neither the quality of the results nor the performance of the methods is easily comparable. To remedy this lack of a common measure and of comparability, this paper proposes a test object together with a reference benchmark. Together, these allow reconstruction results from different NBV algorithms, achieved with different techniques and various kinds of sensors, to be compared directly.