The fundamental assumption of 3D video generation using depth-image-based rendering is the full availability of range images at video rate. In this work we relax this strict requirement and assume that range images are only sparsely available, i.e. corresponding range images exist for some, but not all, color images of the monoscopic video stream. We propose to synthesize the missing range images between two consecutive available range images. Experiments on real videos have shown very encouraging results. In particular, one 3D video was generated from a 2D video without any sensory 3D data available at all. In a quality evaluation on an autostereoscopic 3D display, the test viewers attested similar 3D video quality to our synthesis technique and to rendering based on ground-truth depth.
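The abstract does not specify how the missing range images are synthesized. As a purely illustrative baseline, the sketch below shows the simplest conceivable approach: per-pixel linear blending of the two bracketing key range images at a normalized temporal position. The function name, shapes, and the use of NumPy are assumptions for illustration and are not taken from the paper, whose actual method is presumably more sophisticated (e.g. guided by the color stream).

```python
# Illustrative sketch only: linear temporal blending of two key range
# images. This is NOT the paper's method, just a minimal baseline.
import numpy as np


def interpolate_range_image(depth_a: np.ndarray,
                            depth_b: np.ndarray,
                            t: float) -> np.ndarray:
    """Synthesize a range image between two key range images.

    depth_a, depth_b: range images for the two bracketing color frames,
                      same resolution.
    t:                normalized temporal position of the missing frame,
                      0.0 = frame of depth_a, 1.0 = frame of depth_b.
    """
    if depth_a.shape != depth_b.shape:
        raise ValueError("key range images must have the same resolution")
    # Per-pixel linear blend; a real system would likely add motion
    # compensation derived from the monoscopic color stream.
    return (1.0 - t) * depth_a + t * depth_b


if __name__ == "__main__":
    # Dummy example: fill the gap between key range images at frames 0 and 5.
    key0 = np.random.rand(480, 640).astype(np.float32) * 5.0
    key5 = np.random.rand(480, 640).astype(np.float32) * 5.0
    synthesized = [interpolate_range_image(key0, key5, k / 5.0)
                   for k in range(1, 5)]
    print(len(synthesized), synthesized[0].shape)
```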