We present a simple and efficient method for rendering arbitrary views from so-called free-form light fields, which employ a convex free-form camera surface and a set of arbitrarily oriented camera planes. In this way, directionally varying real-world imagery can be displayed without intermediate resampling steps, and yet free-form light fields can be rendered as efficiently as two-plane-parameterized light fields using texture-mapping graphics hardware. As with sphere-based parameterizations, a single free-form light field can represent all possible views of the scene without the need for multiple slabs, and it allows for relatively uniform sampling. Furthermore, we extend the rendering algorithm to account for occlusions in certain input views. We apply our method to synthetic and real-world datasets, with and without additional geometric information, and compare the resulting rendering performance and quality to two-plane-parameterized light field rendering.
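To illustrate the basic lookup behind such a parameterization, the following is a minimal CPU sketch, not the paper's hardware-accelerated implementation: a desired viewing ray is intersected with a triangle of the free-form camera surface, the barycentric weights of the hit point blend the three cameras at the triangle's vertices, and each of those cameras samples its image at a point further along the ray. The names, the dictionary-based camera interface, and the fixed focal offset (standing in for the per-camera image planes and optional depth correction) are all illustrative assumptions.

```python
import numpy as np

def ray_triangle_hit(o, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection; returns (t, u, v) or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = o - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return (t, u, v) if t > eps else None

def sample_free_form_light_field(ray_o, ray_d, surface_tris, cameras, focal_offset=1.0):
    """Return a blended color for one viewing ray, or None if it misses the surface.

    surface_tris : list of index triples into `cameras`, triangulating the
                   convex free-form camera surface (vertices = camera centers)
    cameras      : list of dicts with 'pos' (3,) and 'sample(point3d) -> rgb',
                   where 'sample' projects a 3D point into that camera's
                   arbitrarily oriented image plane and fetches a texel
    """
    for i0, i1, i2 in surface_tris:
        hit = ray_triangle_hit(ray_o, ray_d,
                               cameras[i0]['pos'],
                               cameras[i1]['pos'],
                               cameras[i2]['pos'])
        if hit is None:
            continue
        t, u, v = hit
        weights = np.array([1.0 - u - v, u, v])      # barycentric blending weights
        # Crude stand-in for the image-plane / geometry lookup: a point a fixed
        # distance beyond the camera surface, reprojected into each camera.
        point = ray_o + (t + focal_offset) * ray_d
        colors = [cameras[i]['sample'](point) for i in (i0, i1, i2)]
        return np.average(colors, axis=0, weights=weights)
    return None
```

In this sketch the triangle search is a linear scan and the focal offset is global; the described method instead rasterizes the camera-surface triangles with projective texture mapping and, where available, uses per-view geometry to place the reprojection point, which is what makes the hardware-accelerated rendering and the occlusion handling possible.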