This paper presents a new approach to virtual view synthesis that requires no knowledge of scene geometry. Our approach first generates multiple virtual views at the same position, each using conventional view interpolation under a different assumed depth. These interpolated views suffer from blurring and ghosting artifacts caused by pixel mis-correspondences. Second, the multiple views are integrated into a novel view in which all regions are in focus. This integration can be formulated as solving a set of linear equations relating the multiple views. To solve this set of equations, we present two methods, based on projection onto convex sets (POCS) and on inverse filtering, that effectively combine the focused regions of each view into a single novel view. Experimental results on real images demonstrate the validity of our methods.
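The integration step described above can be sketched as a linear model (the notation here is illustrative and assumed; the paper's own symbols may differ):

```latex
% Each interpolated view g_i is modeled as a depth-dependent
% degradation h_i applied to the unknown all-in-focus view f:
g_i = h_i * f, \qquad i = 1, \dots, N,
% where * denotes convolution and N is the number of assumed depths.

% POCS solves this by iteratively projecting an estimate of f
% onto the convex constraint sets defined by each equation:
C_i = \{\, f : h_i * f = g_i \,\}, \qquad
f^{(k+1)} = P_N \, P_{N-1} \cdots P_1 \, f^{(k)},
% where P_i is the projection onto C_i. Under suitable conditions
% the iteration converges to a point in the intersection of all C_i,
% i.e., a view consistent with every interpolated image.
```

Inverse filtering would instead solve the same linear system directly in the frequency domain, which is faster but more sensitive to noise where the filters' spectra are small.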