Abstract. Vision systems for various tasks are increasingly being deployed. Although significant effort has gone into improving the algorithms for such tasks, relatively little work has addressed the determination of optimal sensor configurations. This paper addresses that need. We specifically address, and advance the state of the art in, the analysis of scenarios containing dynamically occurring objects capable of occluding one another. The visibility constraints for such scenarios are analyzed in a multi-camera setting, along with static constraints such as image resolution and field of view, and algorithmic requirements such as stereo reconstruction, face detection, and background appearance. Theoretical analysis that properly integrates these visibility and static constraints leads to a generic framework for sensor planning, which can then be customized for a particular task. Our analysis applies to a variety of applications, especially those involving randomly...