Multimodal interfaces that combine, e.g., natural language and graphics take advantage of both the individual strengths of each communication mode and the fact that several modes can be employed in parallel, e.g., in the text-picture combinations of illustrated documents. An important goal of this research is not simply to merge the verbalization results of a natural language generator with the visualization results of a knowledge-based graphics generator, but to carefully coordinate graphics and text so that they complement each other. We describe the architecture of the knowledge-based presentation system WIP, which guarantees a design process with a large degree of freedom that can be exploited to tailor the presentation to the specific context. In WIP, decisions of the language generator may influence graphics generation, and graphical constraints may in turn force decisions in the language production process. In this paper, we focus on the influence of graphical constrai...