Speech recognition has matured in recent years to the point that companies can seriously consider its use. From a developer's perspective, however, we observe that speech input is rarely used in mobile application development, even where it would allow users to work with their devices more flexibly. This stems partly from the fact that programming speech-enabled applications is tedious, owing to insufficient framework and tool support. This paper describes a component-based framework that uniformly supports the development of multimodal applications on heterogeneous devices, ranging from laptop PCs to mobile phones. It focuses in particular on distributed components, each performing a single step of the speech recognition process, to enable speech recognition on any type of device. Moreover, it describes how to develop and integrate different user interfaces for one application (voice-only, graphical-only, and multimodal) in a model-driven development approach, so as to minimize development and maintenance effort.
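
To make the idea of distributed components concrete, the sketch below shows one possible way such a framework could model each speech-recognition step as a separate, relocatable component. The names (SpeechPipelineStage, LocalFeatureExtraction, RemoteDecoding) are hypothetical and not taken from the paper; the point is only that a step such as feature extraction can run on the device while a heavier step such as decoding can be a proxy for a remote recogniser.

```java
import java.util.Arrays;

/** One step of the recognition pipeline, e.g. feature extraction or decoding. */
interface SpeechPipelineStage<I, O> {
    O process(I input) throws Exception;
}

/** Hypothetical feature-extraction stage running locally on the device. */
class LocalFeatureExtraction implements SpeechPipelineStage<short[], double[]> {
    @Override
    public double[] process(short[] pcmSamples) {
        // Placeholder: a real component would compute acoustic features (e.g. MFCCs).
        double energy = 0;
        for (short s : pcmSamples) {
            energy += Math.abs(s);
        }
        return new double[] { energy };
    }
}

/** Hypothetical decoding stage; in a distributed deployment this would be a
 *  proxy that forwards features to a server-side recogniser. */
class RemoteDecoding implements SpeechPipelineStage<double[], String> {
    @Override
    public String process(double[] features) {
        // Stub result standing in for the text returned by the remote recogniser.
        return "recognised text for features " + Arrays.toString(features);
    }
}

public class PipelineDemo {
    public static void main(String[] args) throws Exception {
        SpeechPipelineStage<short[], double[]> frontEnd = new LocalFeatureExtraction();
        SpeechPipelineStage<double[], String> decoder = new RemoteDecoding();

        short[] audio = { 10, -20, 30 };
        // Chain the stages: which ones run locally and which remotely can be
        // decided per device class without changing the application code.
        System.out.println(decoder.process(frontEnd.process(audio)));
    }
}
```

Because each stage only depends on the interface, a resource-constrained phone could keep just the front end on the device and delegate the remaining stages to a server, whereas a laptop could host the full pipeline locally.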