This paper presents an integrated modeling system capable of generating coloured three-dimensional representations of a scene observed from multiple viewpoints. Emphasis is given to the integration of the components and to the algorithms used for acquisition, registration and final surface mapping. First, a sensor operating with structured light is used to acquire 3D and colour data of a scene from multiple views. Second, a frequency-domain registration algorithm computes the transformation between pairs of views directly from the raw measurements, without a priori knowledge of the transformation parameters. Finally, the registered views are merged and refined to create a rich 3D model of the objects. Real-world modeling examples are presented and analyzed to validate the operation of the proposed integrated modeling system.
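To make the frequency-domain registration step concrete, the sketch below shows phase correlation, the classic Fourier-based technique for recovering the displacement between two overlapping views without any initial estimate of the transformation parameters. This is an illustrative example only, not the authors' implementation; the function name, the use of 2D range or intensity images, and the restriction to pure translation are assumptions made for the sake of a minimal, runnable demonstration.

```python
# Minimal phase-correlation sketch (illustrative, not the paper's algorithm).
import numpy as np


def phase_correlation_shift(view_a: np.ndarray, view_b: np.ndarray):
    """Estimate the integer (row, col) displacement of view_b relative to view_a.

    Both inputs are assumed to be 2D arrays of the same shape, e.g. range
    or intensity images derived from the raw measurements of two views.
    """
    fa = np.fft.fft2(view_a)
    fb = np.fft.fft2(view_b)
    # Normalised cross-power spectrum; its inverse FFT peaks at the shift.
    cross_power = np.conj(fa) * fb
    cross_power /= np.abs(cross_power) + 1e-12  # guard against division by zero
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)

    # Convert the peak location to signed shifts (undo FFT wrap-around).
    shape = np.array(view_a.shape)
    shifts = np.array(peak, dtype=float)
    wrap = shifts > shape // 2
    shifts[wrap] -= shape[wrap]
    return tuple(shifts)


if __name__ == "__main__":
    base = np.random.rand(128, 128)
    moved = np.roll(base, shift=(5, -9), axis=(0, 1))
    print(phase_correlation_shift(base, moved))  # approximately (5.0, -9.0)
```

A full frequency-domain registration of 3D views would also have to recover rotation (for example via a Fourier-Mellin or spherical-harmonic formulation), but the translation-only case above captures the key idea: the relative transformation is read off from the phase of the cross-power spectrum rather than from iterative point matching.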