This paper presents a new automatic approach to building a videorama with a shallow depth of field. After foreground/background segmentation, we stitch the static background of the video frames and render the dynamic foreground onto the enlarged background. To this end, we extract depth information from a two-view video stream and show that combining depth cues with color cues improves segmentation. Finally, we use the depth cues to synthesize shallow depth-of-field effects in the final videorama. Our approach stabilizes camera motion as if the video had been captured by a static camera, and it improves visual quality through an increased field of view and shallow depth-of-field effects.
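The depth-guided shallow depth-of-field step can be illustrated with a minimal sketch: blur each pixel in proportion to its distance from a chosen focal plane, compositing from a small stack of pre-blurred layers. This is an assumed, simplified rendering model for illustration only (the function name `synthetic_dof`, the layer count, and the `max_sigma` parameter are our own choices, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth, focal_depth, max_sigma=5.0, n_levels=6):
    """Illustrative depth-of-field synthesis (assumed model, not the
    paper's exact renderer): pixels far from the focal plane are taken
    from progressively more blurred copies of the image."""
    image = image.astype(np.float64)
    # Blur strength grows with |depth - focal_depth|, normalized to [0, 1].
    spread = np.abs(depth - focal_depth)
    spread = spread / (spread.max() + 1e-8)
    # Precompute a stack of increasingly blurred layers (sigma 0 = sharp).
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    layers = [gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
    # Pick, per pixel, the layer matching its normalized defocus amount.
    idx = np.clip((spread * (n_levels - 1)).astype(int), 0, n_levels - 1)
    out = np.empty_like(image)
    for k in range(n_levels):
        mask = idx == k
        out[mask] = layers[k][mask]
    return out
```

A full implementation would typically use a continuous circle-of-confusion model and handle occlusion boundaries between the segmented foreground and the stitched background, but the per-pixel depth-to-blur mapping above captures the basic idea.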