We present an algorithm for navigating around a performance that was filmed as a “casual” multi-view video collection: real-world footage captured on hand-held cameras by a few audience members. The objective is to enable easy 3D navigation, generating a video-based rendering (VBR) of a performance filmed with widely separated cameras. Casually filmed events are especially challenging because they yield footage with complicated backgrounds and camera motion; such conditions preclude most algorithms that depend on correlation-based stereo or 3D shape-from-silhouettes. Our algorithm builds on concepts developed for exploring photo collections of empty scenes. Interactive performer-specific view interpolation is made possible by innovations in interactive rendering and offline matting, relating to i) modeling the foreground subject as video sprites on billboards, ii) modeling the background geometry with adaptive view-dependent textures...
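As a point of reference for the billboard representation mentioned above, the following is a minimal sketch (not the paper's implementation) of the core geometric step: orienting a textured quad so that it always faces the viewpoint. All names and parameters here are illustrative assumptions; in practice the matted video frame of the performer would be textured onto the returned quad.

```python
import numpy as np

def billboard_corners(center, cam_pos, up, width, height):
    """Corners of a camera-facing quad (billboard) for a video sprite.

    Illustrative sketch: the quad normal points from the subject
    toward the viewer, so the matted video frame textured onto it
    always faces the rendering camera.
    """
    # Normal points from the billboard center toward the camera.
    normal = cam_pos - center
    normal = normal / np.linalg.norm(normal)
    # Build an orthonormal frame (right, b_up, normal) for the quad.
    right = np.cross(up, normal)
    right = right / np.linalg.norm(right)
    b_up = np.cross(normal, right)
    hw, hh = width / 2.0, height / 2.0
    return np.array([
        center - right * hw - b_up * hh,  # bottom-left
        center + right * hw - b_up * hh,  # bottom-right
        center + right * hw + b_up * hh,  # top-right
        center - right * hw + b_up * hh,  # top-left
    ])
```

For example, a camera on the +z axis looking at the origin yields a quad lying in the xy-plane, facing +z.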
Luca Ballan, Gabriel J. Brostow, Jens Puwein, Marc