In this paper, we introduce SceneMaker, a toolkit for authoring scenes for adaptive, interactive performances. These performances are based on automatically generated and pre-scripted scenes, which are authored with SceneMaker in a two-step approach: in the first step, the scene flow is defined using cascaded finite state machines; in the second step, the content of each scene is provided, either manually with a simple scripting language or by integrating scenes that are automatically generated at runtime from a domain and dialogue model. Both scene types can be interwoven in our plan-based, distributed platform. The system provides a context memory with access functions that authors can use to make scenes user-adaptive. Using CrossTalk as the target application, we describe our models and languages and illustrate the authoring process. CrossTalk is an interactive installation with animated presentation agents which “live” beyond the a...
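
To make the two-step idea concrete, the following minimal Python sketch is our own illustration, not the actual SceneMaker scene-flow or scripting languages; all names (SceneFlow, SceneNode, ContextMemory) are assumptions. It shows a cascaded finite state machine over scenes whose content is either a pre-scripted utterance or generated at runtime, with a context-memory lookup providing the user-adaptivity mentioned above.

```python
# Illustrative sketch only: not the SceneMaker languages. Scene flow is a
# (possibly nested) finite state machine; scene content is either a
# pre-scripted string or a generator that consults the context memory.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class ContextMemory:
    """Shared memory the author can query to make scenes user-adaptive."""
    data: Dict[str, object] = field(default_factory=dict)

    def get(self, key: str, default=None):
        return self.data.get(key, default)


@dataclass
class SceneNode:
    """One state of the scene flow; may hold a sub-machine (cascading)."""
    name: str
    script: Optional[str] = None                       # pre-scripted content
    generate: Optional[Callable[[ContextMemory], str]] = None  # runtime generation
    sub_machine: Optional["SceneFlow"] = None           # nested state machine
    # Transition chooser: returns the next state name, or None to stop.
    next_state: Callable[[ContextMemory], Optional[str]] = lambda ctx: None


class SceneFlow:
    """A finite state machine over scene nodes."""
    def __init__(self, start: str, nodes: Dict[str, SceneNode]):
        self.start, self.nodes = start, nodes

    def run(self, ctx: ContextMemory):
        state = self.start
        while state is not None:
            node = self.nodes[state]
            if node.sub_machine:                        # descend into cascaded FSM
                node.sub_machine.run(ctx)
            elif node.script is not None:               # play pre-scripted scene
                print(node.script)
            elif node.generate is not None:             # generate scene at runtime
                print(node.generate(ctx))
            state = node.next_state(ctx)


# Usage: greet by name if the context memory knows the user, otherwise generically.
ctx = ContextMemory({"user_name": "Alice"})
flow = SceneFlow("greet", {
    "greet": SceneNode(
        "greet",
        generate=lambda c: f"Hello, {c.get('user_name', 'visitor')}!",
        next_state=lambda c: "goodbye",
    ),
    "goodbye": SceneNode("goodbye", script="Thanks for visiting CrossTalk."),
})
flow.run(ctx)
```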