The quality of static phones (e.g., vowels, fricatives, nasals, laterals) generated by articulatory speech synthesizers has reached a high level in recent years. Our goal is to ex...
Abstract. Human-machine interaction is one of the emerging fields for the coming years. In our daily lives, interaction with others is face to face. Faces are the natur...
Zahid Riaz, Christoph Mayer, Michael Beetz, Bernd ...
In the present study, selected properties of multimodal instructing acts are discussed. Realisations of the instructing acts extracted from a corpus of task-oriented dialogues are ...
A two-phase procedure, based on biosignal recordings, is applied in an attempt to classify the emotion valence content in human-agent interactions. In the first phase, participants...
Remote participants in hybrid meetings often have problems following what is going on in the (physical) meeting room they are connected with. This paper describes a videoconferenci...
Rieks op den Akker, Dennis Hofs, Hendri Hondorp, H...
This paper presents a study on multimodal conversation analysis of Greek TV interviews. Specifically, we examine the types of facial, hand, and body gestures and their respective com...
In our current work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures. The motivation behind this work is computer-generated human correspon...
This paper deals with emotional speech detection in home movies. In this study, we focus on infant-directed speech, also called "motherese", which is characterized by highe...
After perceiving multi-modal behaviour from a user or agent, a conversational agent needs to be able to determine what was intended by that behaviour. Contextual variables play an...
Abstract. Aware of the gap between technological offerings and user expectations, this paper aims to illustrate the necessity of anthropocentric designs ("user-pulled")...