We present a high-performance reconstruction approach that generates true 3D models from multiple views with known camera parameters. The complete pipeline, from depth map generation through depth image integration to the final 3D model visualization, is performed on programmable graphics processing units (GPUs). The proposed pipeline is suitable for long image sequences and uses a plane-sweep depth estimation procedure, optionally employing robust image similarity functions, to generate a set of depth images. The subsequent volumetric fusion step combines these depth maps into an implicit surface representation of the final model, which can be directly displayed using GPU-based raycasting methods. Depending on the number of input views and the desired resolution of the final model, the computing times range from several seconds to a few minutes. The quality of the obtained models is illustrated with real-world datasets.
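To illustrate the volumetric fusion step described above, the following is a minimal CPU sketch of fusing depth maps into a truncated signed distance field (an implicit surface representation). This is not the authors' GPU implementation: the function name, grid parameterization, truncation scheme, and uniform per-view weighting are illustrative assumptions.

```python
import numpy as np

def fuse_depth_maps(depth_maps, poses, K, grid_shape, voxel_size, origin, trunc):
    """Fuse depth maps into a truncated signed distance field (TSDF).

    Each voxel keeps a running weighted average of the truncated signed
    distance to the observed surface; the final model is the zero level set.
    Illustrative sketch only (the paper performs this fusion on the GPU).
    """
    tsdf = np.zeros(grid_shape, dtype=np.float32)
    weight = np.zeros(grid_shape, dtype=np.float32)

    # World coordinates of every voxel centre, one row per voxel.
    ii, jj, kk = np.meshgrid(*[np.arange(n) for n in grid_shape], indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)

    for depth, (R, t) in zip(depth_maps, poses):
        cam = pts @ R.T + t                       # world -> camera coordinates
        z = cam[:, 2]                             # depth of each voxel centre
        uv = cam @ K.T                            # perspective projection
        u = np.round(uv[:, 0] / z).astype(int)
        v = np.round(uv[:, 1] / z).astype(int)
        h, w = depth.shape
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
        sdf = d - z                               # signed distance along the ray
        # Keep voxels in front of the surface or within the truncation band.
        keep = valid & (d > 0) & (sdf > -trunc)
        phi = np.clip(sdf / trunc, -1.0, 1.0)     # truncated, normalized SDF
        flat = np.flatnonzero(keep)
        idx = np.unravel_index(flat, grid_shape)
        w_old = weight[idx]
        tsdf[idx] = (w_old * tsdf[idx] + phi[flat]) / (w_old + 1.0)
        weight[idx] = w_old + 1.0
    return tsdf, weight
```

With a single fronto-parallel view of a plane at depth 1.0, the fused field changes sign at the corresponding voxel layer, which is where a raycaster would locate the surface.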
Christopher Zach, Mario Sormann, Konrad F. Karner