In this paper we describe how to generate affective dialogs for multiple virtual characters by combining automatically generated and pre-scripted scenes. This is achieved with a single technique for emotion elicitation and computation that takes input either from the human author, in the form of appraisal and dialog act tags, or from a dialog planner, in the form of inferred emotion-eliciting conditions. In either case, the system computes the resulting emotions and their intensities. These emotions inform the selection of pre-scripted scenes and dialog strategies, as well as their surface realization. The approach has been integrated into two fully operational systems, the CrossTalk II installation and the NECA eShowroom.
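To make the shared pipeline concrete (appraisal input, emotion computation, emotion-informed scene selection), the following Python sketch illustrates the general idea under simplified assumptions. The tag set, the appraisal-to-emotion mapping, and all function names here are hypothetical stand-ins; the actual computation used in CrossTalk II and the NECA eShowroom is not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical appraisal tag -> (emotion, base intensity) table, loosely
# OCC-style; the real tag set and computation belong to the system itself.
APPRAISAL_TO_EMOTION = {
    "desirable_event":   ("joy", 0.8),
    "undesirable_event": ("distress", 0.8),
    "praiseworthy_act":  ("pride", 0.6),
    "blameworthy_act":   ("reproach", 0.6),
}

@dataclass
class Emotion:
    name: str
    intensity: float  # assumed to lie in [0, 1]

def compute_emotion(appraisal_tag: str, relevance: float = 1.0) -> Emotion:
    """Map one appraisal tag (authored by hand or inferred by a dialog
    planner) to an emotion whose intensity is scaled by relevance."""
    name, base = APPRAISAL_TO_EMOTION.get(appraisal_tag, ("neutral", 0.0))
    return Emotion(name, min(1.0, base * relevance))

def select_scene(emotion: Emotion, scenes: dict[str, str]) -> str:
    """Pick a pre-scripted scene variant keyed by the dominant emotion,
    falling back to a neutral variant when no match exists."""
    return scenes.get(emotion.name, scenes["neutral"])

if __name__ == "__main__":
    # Same computation regardless of whether the tag was authored or inferred.
    e = compute_emotion("desirable_event", relevance=0.9)
    scenes = {"joy": "scene_enthusiastic_pitch", "neutral": "scene_default_pitch"}
    print(e, "->", select_scene(e, scenes))
```

The point of the sketch is the uniformity: both input paths (human-authored tags and planner-inferred conditions) feed the same emotion computation, whose output then drives scene and strategy selection downstream.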