In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is computer-generated human correspon...
In this paper we report on the use of computer-generated affect to control the body and mind of cognitively modeled virtual characters. We use the computational model of affect A...
We present the current state of our development of animated agents applicable to affective dialogue systems. A new set of tools is under development to support the creation of...
In this paper, we present our approach to modelling perceptive 3D virtual characters with emotion and personality. The characters are driven by a dialogue system that consists of...
Arjan Egges, Sumedha Kshirsagar, Xuan Zhang, Nadia...
This paper presents our expressive facial speech synthesis system, Eface, for a social or service robot. Eface aims to enable a robot to deliver information clearly and with empat...