Depth computation is a valuable capability in computer vision. Achieving panoramic perception with traditional perspective cameras requires several images, which typically implies using multiple cameras or a sensor with moving parts; moreover, misalignments can appear when the scene is not static. Omnidirectional cameras offer a much wider field of view (FOV) than perspective cameras, capture a panoramic image at every instant, and alleviate problems caused by occlusions. A practical way to obtain depth in computer vision is to use a structured light system. This paper focuses on combining omnidirectional vision with structured light to obtain panoramic depth information. The resulting sensor consists of a single catadioptric camera and an omnidirectional light projector. This article presents the model and the prototype of this new omnidirectional depth computation sensor, and its accuracy is estimated by means of laboratory experimental setups.