A two-phase procedure, based on biosignal recordings, is applied in an attempt to classify the emotional valence of human-agent interactions. In the first phase, participants...
Remote participants in hybrid meetings often have trouble following what is going on in the (physical) meeting room they are connected to. This paper describes a videoconferenci...
Rieks op den Akker, Dennis Hofs, Hendri Hondorp, H...
This paper presents a study on multimodal conversation analysis of Greek TV interviews. Specifically, we examine the type of facial, hand and body gestures and their respective com...
In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is computer-generated human correspon...
This paper deals with emotional speech detection in home movies. In this study, we focus on infant-directed speech, also called "motherese", which is characterized by highe...
After perceiving multi-modal behaviour from a user or agent, a conversational agent needs to be able to determine what was intended by that behaviour. Contextual variables play an...
Being aware of the gap between technological offerings and user expectations, this paper aims to illustrate the necessity of anthropocentric designs ("user-pulled")...
Virtual worlds are developing rapidly over the internet. They are visited by avatars and staffed with Embodied Conversational Agents (ECAs). An avatar is a representation of a phys...
Articulatory synthesis of speech and singing aims to model the production process of speech and singing as naturally and human-like as possible. The state of the art is described ...
In this overview, we look at embedded clauses that report somebody's attitude or speech. Semantic content in the embedded clause can in some such cases be interpreted from eit...