Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such v...
This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI) where the user’s actions and emotions are modele...
A new algorithm is proposed for novel view generation in one-to-one teleconferencing applications. Given the video streams acquired by two cameras placed on either side of a comput...
Antonio Criminisi, Jamie Shotton, Andrew Blake, Ph...
During face-to-face conversation, people naturally integrate speech, gestures and higher level language interpretations to predict the right time to start talking or to give backc...
A novel measure for automatically quantifying the amount of interpersonal influence present in face-to-face conversations is proposed based on the visual-attention patterns of the p...