Modeling dynamic scenes is a challenging problem faced by applications such as digital content generation and motion analysis. Fast single-frame methods obtain only sparse depth samples, while multiple-frame methods often rely on the rigidity of the object to correspond a small number of consecutive shots, decoding the pattern by feature tracking. We present a novel structured-light acquisition method that can obtain dense depth and color samples for moving and deformable surfaces undergoing repetitive motion. Our key observation is that, for repetitive motion, different views of the same motion state under different structured-light patterns can be corresponded by image matching. These images densely encode an effectively “static” scene with time-multiplexed patterns, which we use to reconstruct the time-varying scene. At the same time, color samples are reconstructed by matching images illuminated with white light to those illuminated with structured-light patterns. We demo...
Yi Xu, Daniel G. Aliaga