In this paper we introduce a system that automatically adds different types of non-verbal behavior to a given dialogue script between two virtual embodied agents. It allows us to t...
Werner Breitfuss, Helmut Prendinger, Mitsuru Ishiz...
Abstract. Generating coordinated multimodal behavior for an embodied agent (speech, gesture, facial expression, ...) is challenging. It requires a high degree of animation control...
In this paper, we discuss scripting tools that aim at facilitating the design of web-based interactions with animated characters capable of affective communication. Specifically, ...
Helmut Prendinger, Sylvain Descamps, Mitsuru Ishiz...
In this paper, we consider a way to represent contact center applications as a set of XML documents written in different markups, including VoiceXML and CCXML. Application...
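For context on the markups named in the snippet above, a minimal VoiceXML form dialog looks like the following sketch; the prompt text and form id are illustrative, not taken from the paper, and CCXML (call control) would live in a separate document.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <!-- One form dialog: play a prompt, then exit.
       Call-control logic (transfers, conferencing) would be
       expressed in a separate CCXML document. -->
  <form id="welcome">
    <block>
      <prompt>Welcome to the contact center.</prompt>
    </block>
  </form>
</vxml>
```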
In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchro...