Abstract. We introduce an approach to the multimodal generation of verbal and nonverbal contributions for virtual characters in a multiparty dialogue scenario. The approach addresses issues of turn-taking, synchronizes the different modalities in real time, and supports both fixed utterances and utterances assembled by a full-fledged tree-based text generation algorithm. A first version of the system is implemented as part of the second VirtualHuman demonstrator.