Human nonverbal behavior recognition from multiple cues and modalities has attracted considerable interest in recent years. Despite the interest, many research questions, including th...
Stavros Petridis, Hatice Gunes, Sebastian Kaltwang...
In this paper we present a novel system for driver-vehicle interaction which combines speech recognition with facial expression recognition to increase intention recognition accura...
One of the many skills required to engage properly in a conversation is to know the appropriate use of the rules of engagement. In order to engage properly in a conversation, a virtual...
Social interactions unfold over time, at multiple time scales, and can be observed through multiple sensory modalities. In this paper, we propose a machine learning framework for ...
Ian R. Fasel, Masahiro Shiomi, Philippe-Emmanuel Ch...
We investigate how fantasy, curiosity and challenge contribute to the user experience in multimodal dialogue computer games for preschool children. For this purpose, an on-line mu...
We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and wh...
The influence of multimodal sources of input data on the construction of accurate computational models of user preferences is investigated in this paper. The case study presented...
Robot autonomy is of high relevance for HRI, in particular for interactions of humans and robots in mixed human-robot teams. In this paper, we investigate empirically the extent t...
We report on a new kind of culturally-authentic embodied conversational agent more in line with the ways that culture and ethnicity function in the real world. On the basis of th...