Multimodal interactive systems support multiple interaction techniques, such as the synergistic use of speech, gesture and eye-gaze tracking. The flexibility they offer results in an increased complexity that current software development tools do not address appropriately. In this paper we describe a component-based approach, called ICARE, for specifying and developing multimodal interfaces. Our approach relies on two types of components: (i) elementary components that describe pure modalities and (ii) composition components (Complementarity, Redundancy and Equivalence) that enable the designer to specify combined usage of modalities. The designer graphically assembles the ICARE components, and the code of the multimodal user interface is automatically generated. Although the ICARE platform is not fully developed, we illustrate the applicability of the approach with the implementation of two multimodal systems: MEMO, a GeoNote system, and MID, a multimodal identification interface.
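To make the two component types concrete, the following minimal Java sketch shows how an elementary modality component and a Complementarity composition component could fit together. It assumes a simple event-based composition model; all names (ModalityComponent, SpeechModality, Complementarity, etc.) are illustrative and do not reflect the actual ICARE API, which is not detailed in this abstract.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Elementary component wrapping a pure modality (e.g. speech or gesture). */
interface ModalityComponent {
    void onEvent(Consumer<String> listener);   // deliver raw modality events downstream
}

/** Hypothetical speech modality emitting recognized commands. */
class SpeechModality implements ModalityComponent {
    private Consumer<String> listener = e -> {};
    public void onEvent(Consumer<String> l) { listener = l; }
    void recognize(String utterance) { listener.accept("speech:" + utterance); }
}

/** Hypothetical gesture modality emitting pointed locations. */
class GestureModality implements ModalityComponent {
    private Consumer<String> listener = e -> {};
    public void onEvent(Consumer<String> l) { listener = l; }
    void point(String location) { listener.accept("gesture:" + location); }
}

/** Composition component: Complementarity fuses events from several modalities
 *  into a single task-level event (e.g. a "put that there" command). */
class Complementarity {
    private final List<String> pending = new ArrayList<>();
    private final int arity;
    private final Consumer<String> out;

    Complementarity(int arity, Consumer<String> out) {
        this.arity = arity;
        this.out = out;
    }

    void feed(ModalityComponent m) {
        m.onEvent(e -> {
            pending.add(e);
            if (pending.size() == arity) {          // all complementary pieces received
                out.accept(String.join(" + ", pending));
                pending.clear();
            }
        });
    }
}

public class IcareSketch {
    public static void main(String[] args) {
        SpeechModality speech = new SpeechModality();
        GestureModality gesture = new GestureModality();

        // Assemble the components: speech and gesture are used in a complementary way.
        Complementarity fusion = new Complementarity(2, fused ->
                System.out.println("Fused command: " + fused));
        fusion.feed(speech);
        fusion.feed(gesture);

        speech.recognize("annotate here");
        gesture.point("(x=120, y=45)");
        // prints: Fused command: speech:annotate here + gesture:(x=120, y=45)
    }
}
```

In this sketch the "graphical assembly" step of ICARE corresponds to wiring modality components into a composition component; Redundancy and Equivalence components would follow the same pattern but with different fusion policies.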