In this paper, we propose a distributed compression approach for multi-view images, in which each camera efficiently encodes its visual information locally, without any collaboration with the other cameras. Such a compression scheme is necessary in camera sensor networks, where each camera has limited power and communication resources and can only transmit data to a central base station. The correlation in the multi-view data acquired by a dense multi-camera system can be very strong and should therefore be exploited at each encoder in order to reduce the amount of data transmitted to the receiver. Our distributed source coding approach is based on a quadtree decomposition method and uses geometrical information about the scene and the positions of the cameras to estimate this multi-view correlation. We assume that the different views can be modelled as 2D piecewise polynomial functions with 1D linear boundaries and show how our approach applies in this context. Ou...
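To illustrate the kind of quadtree decomposition the approach builds on, the sketch below recursively splits a square image block into four quadrants until each leaf is well approximated by a single value. This is a simplified, illustrative version only (not the paper's codec): the paper's model uses piecewise polynomial leaves with 1D linear boundaries, whereas the stopping test here assumes constant leaves; the function name, image, and tolerance are all hypothetical.

```python
# Minimal quadtree decomposition sketch (illustrative, assumes
# constant-valued leaves rather than the paper's polynomial model).

def quadtree(img, x, y, size, tol):
    """Return a flat list of leaves (x, y, size, mean) covering the block."""
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(vals) / len(vals)
    # Stop splitting once the block is nearly constant (or is a single pixel).
    if size == 1 or max(abs(v - mean) for v in vals) <= tol:
        return [(x, y, size, mean)]
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += quadtree(img, x + dx, y + dy, half, tol)
    return leaves

# Toy 4x4 "view": smooth everywhere except one pixel, so the quadtree
# refines only the quadrant containing the discontinuity.
image = [
    [0, 0, 5, 5],
    [0, 0, 5, 9],
    [0, 0, 5, 5],
    [0, 0, 5, 5],
]
leaves = quadtree(image, 0, 0, 4, tol=0.5)
```

In a piecewise polynomial setting, the constant-mean test would be replaced by fitting a low-order polynomial to each block and splitting where the fit error exceeds the tolerance; the tree then adapts its resolution to the object boundaries in the view.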