In this paper we present a meeting state recognizer that combines multi-modal sensor data in a smart room. Our approach is based on the training of a statistical model ...
Abstract. This paper presents a framework for corpus-based multimodal research. Part of this framework is applied in the context of meeting modelling. A generic model for differen...
For the purpose of the Multimodal Meeting Manager (M4) project, an approach based on face and hand tracking is proposed. The technique essentially includes skin color detection, seg...
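The snippet above names skin color detection as the first stage of the face/hand tracker. As an illustrative aside only, a minimal rule-based skin classifier could look like the sketch below; the thresholds are the classic explicit RGB rules often used in the literature, not values taken from the M4 system itself.

```python
import numpy as np

def skin_mask_rgb(img):
    """Classify pixels as skin using simple explicit RGB rules.

    `img` is an (H, W, 3) uint8 array in RGB order. The thresholds
    below are illustrative assumptions (a common rule-based scheme),
    not the ones used in the M4 project.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)          # bright enough channels
        & (r - np.minimum(g, b) > 15)           # red dominates the darkest channel
        & (np.abs(r - g) > 15)                  # red and green well separated
        & (r > g) & (r > b)                     # red is the strongest channel
    )

# A 2x2 test image: one skin-toned pixel, three clearly non-skin pixels.
img = np.array([[[200, 120, 90], [0, 255, 0]],
                [[10, 10, 10], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask_rgb(img))  # → [[ True False] [False False]]
```

In practice such a mask is only a first pass; the segmentation and tracking stages the abstract goes on to mention would refine it with connected components and temporal filtering.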
In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model uses information from lapel microphones,...
Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, and person identification. Recognition and integration of each of these modalit...
Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yu...