An algorithm is presented that automatically generates ground-truthed video from a symbolic description of an object and a specification of the movement of a handheld video camera around that object. This provides a method for generating large amounts of training and test data for the development of computer vision algorithms at very low cost. We describe an implementation of this technique for an imaging application in which a cell phone video camera is moved over a paper document. Experimental results demonstrate the similarity of images captured by the real camera to images generated by the proposed technique.
Andrew Lookingbill, Emilio R. Antúnez, Bern
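
To illustrate the kind of pipeline the abstract describes, the following minimal sketch synthesizes frames of a planar paper document as seen from a sequence of camera poses and records the per-frame homography as ground truth. This is not the authors' implementation: the intrinsics, page dimensions, trajectory, and file names are illustrative assumptions, and OpenCV and NumPy are used only for rendering convenience.

```python
# Sketch: ground-truthed video of a planar document under known camera motion.
# All parameter values below are illustrative assumptions, not the paper's.
import numpy as np
import cv2

def pixels_to_plane(page_w_m, page_h_m, img_w, img_h):
    """Map document pixel coordinates to metric coordinates on the z=0 plane."""
    return np.array([[page_w_m / img_w, 0.0, 0.0],
                     [0.0, page_h_m / img_h, 0.0],
                     [0.0, 0.0, 1.0]])

def homography_for_pose(K, R, t, S):
    """Homography from document pixels to image pixels for a plane at z=0."""
    # For points on z=0, only the first two columns of R and t are needed.
    H = K @ np.column_stack((R[:, 0], R[:, 1], t)) @ S
    return H / H[2, 2]

def render_sequence(document_img, K, poses, S, frame_size=(640, 480)):
    """Warp the document through each pose; return frames and ground truth."""
    frames, ground_truth = [], []
    for R, t in poses:
        H = homography_for_pose(K, R, t, S)
        frames.append(cv2.warpPerspective(document_img, H, frame_size))
        ground_truth.append(H)  # ground truth = exact warp used to render
    return frames, ground_truth

if __name__ == "__main__":
    doc = cv2.imread("document.png")              # rendered page (assumed file)
    h, w = doc.shape[:2]
    S = pixels_to_plane(0.21, 0.28, w, h)         # ~letter-size page in meters
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])               # illustrative intrinsics
    # Simple synthetic trajectory: camera panning over the page at ~30 cm.
    poses = [(np.eye(3), np.array([-0.105 + 0.002 * i, -0.14, 0.30]))
             for i in range(30)]
    frames, gt = render_sequence(doc, K, poses, S)
```

In a sketch like this, the ground truth comes for free because each frame is generated directly from the known pose, which is the property the abstract's low-cost data generation argument relies on.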