This paper addresses the sampling problem in light field rendering (LFR), a fundamental approach to image-based rendering. The quality of LFR depends on a light ray database generated from pre-acquired images, since image synthesis is a process of gathering appropriate light ray data from this database. Interpolation of light ray data is effective for improving quality; it relies on the assumption that scene objects lie on a plane called the "focal plane." Depending on the depth of the focal plane (the distance between the cameras and the focal plane), a focus-like effect appears in the synthesized images. In this paper, we formulate the depth of field in light field rendering to characterize the range of depths over which scene objects are rendered in focus. In our theory, the plenoptic sampling theory is generalized so that it can be compared with other related work. Our representation is intuitive and useful for quantitative analysis of LFR.
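To make the role of the focal plane concrete, the following is a minimal sketch (not the paper's formulation) of shift-and-add light field rendering over a 1-D camera array, assuming pre-rectified grayscale views, a per-camera baseline, and a hypothetical `pixel_pitch` constant converting metric shift to pixels. Points lying on the focal plane align across views and appear sharp, while points at other depths are blended with a residual disparity and appear blurred, which is the focus-like effect described above.

```python
import numpy as np

def refocus(images, baselines, focal_depth, pixel_pitch=1.0):
    """Shift-and-add rendering for a 1-D camera array (illustrative sketch).

    images      : list of HxW grayscale arrays from rectified cameras
    baselines   : per-camera horizontal offset from the reference camera
                  (same units as focal_depth)
    focal_depth : depth of the focal plane; points at this depth align across views
    pixel_pitch : assumed constant converting metric shift to pixels
    """
    height, width = images[0].shape
    out = np.zeros((height, width), dtype=np.float64)
    for img, b in zip(images, baselines):
        # Disparity of a point on the focal plane is proportional to baseline / depth,
        # so shifting each view by this amount aligns focal-plane points.
        shift = int(round(pixel_pitch * b / focal_depth))
        out += np.roll(img, shift, axis=1)  # integer shift keeps the sketch simple
    return out / len(images)
```

Varying `focal_depth` in this sketch moves the plane of alignment, so different depth ranges are rendered in focus; the depth-of-field formulation in this paper quantifies how wide that in-focus range is.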