Sciweavers

121 search results - page 5 / 25
» Multimodal Meeting Tracker
MLMI
2007
Springer
Meeting State Recognition from Visual and Aural Labels
In this paper we present a meeting state recognizer based on a combination of multi-modal sensor data in a smart room. Our approach is based on the training of a statistical model ...
Jan Curín, Pascal Fleury, Jan Kleindienst, ...
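The abstract above only hints at the statistical model, so here is a minimal, self-contained sketch (not the authors' system) of the general idea: a naive Bayes classifier that maps per-frame categorical aural and visual labels to a meeting state. All label and state names are hypothetical.

# Minimal sketch (not the authors' system): a naive Bayes classifier mapping
# per-frame categorical sensor labels to a meeting state. Names are hypothetical.
from collections import Counter, defaultdict
import math

def train(samples):
    """samples: list of (feature_tuple, state) pairs."""
    state_counts = Counter(state for _, state in samples)
    feat_counts = defaultdict(Counter)          # (state, slot) -> value counts
    for feats, state in samples:
        for slot, value in enumerate(feats):
            feat_counts[(state, slot)][value] += 1
    return state_counts, feat_counts

def predict(model, feats):
    state_counts, feat_counts = model
    total = sum(state_counts.values())
    best, best_lp = None, float("-inf")
    for state, count in state_counts.items():
        lp = math.log(count / total)
        for slot, value in enumerate(feats):
            c = feat_counts[(state, slot)]
            # Add-one smoothing over the values seen for this slot.
            lp += math.log((c[value] + 1) / (sum(c.values()) + len(c) + 1))
        if lp > best_lp:
            best, best_lp = state, lp
    return best

# Hypothetical usage: (aural label, visual label) -> meeting state
data = [(("one_speaker", "person_at_whiteboard"), "presentation"),
        (("overlapping_speech", "people_seated"), "discussion"),
        (("silence", "people_entering"), "break")]
model = train(data)
print(predict(model, ("one_speaker", "person_at_whiteboard")))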
MLMI
2004
Springer
Meeting Modelling in the Context of Multimodal Research
This paper presents a framework for corpus-based multimodal research. Part of this framework is applied in the context of meeting modelling. A generic model for differen...
Dennis Reidsma, Rutger Rienks, Natasa Jovanovic
CGI
2004
IEEE
Participant Activity Detection by Hands and Face Movement Tracking in the Meeting Room
For the purposes of the Multimodal Meeting Manager (M4) project, an approach based on face and hand tracking is proposed. The technique essentially includes skin color detection, seg...
Igor Potucek, Stanislav Sumec
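As a rough illustration of the skin-color detection step mentioned in the abstract (not the M4 project's actual implementation), the following OpenCV sketch thresholds a frame in HSV space and returns candidate face/hand regions; the color bounds are assumed values that would need tuning per camera and lighting.

# Illustrative sketch of skin-color thresholding plus blob extraction,
# not the paper's implementation. Requires OpenCV 4.x and NumPy.
import cv2
import numpy as np

def skin_blobs(frame_bgr, min_area=500):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough skin-tone range in HSV (assumed values, not from the paper).
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    # Clean the mask with morphological opening/closing to suppress noise.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Return bounding boxes of sufficiently large skin-colored regions
    # (candidate face/hand regions for a downstream tracker).
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]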
ICMCS
2005
IEEE
A Multi-Modal Mixed-State Dynamic Bayesian Network for Robust Meeting Event Recognition from Disturbed Data
In this work we present a novel multi-modal mixed-state dynamic Bayesian network (DBN) for robust meeting event classification. The model uses information from lapel microphones,...
Marc Al-Hames, Gerhard Rigoll
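The mixed-state DBN itself is beyond a short sketch, but the underlying idea of decoding a sequence of meeting events from fused audio-visual observations can be illustrated with a much simpler stand-in: Viterbi decoding of a discrete HMM whose per-frame observation is a fused (audio cue, video cue) symbol. All states, symbols, and probabilities below are illustrative assumptions, not values from the paper.

# A far simpler stand-in than the paper's mixed-state DBN: Viterbi decoding of
# a discrete HMM over fused (audio, video) observation symbols per frame.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence for the observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], 1e-9)) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][best_prev] + math.log(trans_p[best_prev][s])
                       + math.log(emit_p[s].get(obs[t], 1e-9)))
            back[t][s] = best_prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical fused observation symbols: (audio cue, video cue) per frame.
states = ["monologue", "discussion"]
start_p = {"monologue": 0.5, "discussion": 0.5}
trans_p = {"monologue": {"monologue": 0.9, "discussion": 0.1},
           "discussion": {"monologue": 0.1, "discussion": 0.9}}
emit_p = {"monologue": {("one_speaker", "one_standing"): 0.8, ("overlap", "all_seated"): 0.2},
          "discussion": {("one_speaker", "one_standing"): 0.2, ("overlap", "all_seated"): 0.8}}
obs = [("one_speaker", "one_standing")] * 3 + [("overlap", "all_seated")] * 4
print(viterbi(obs, states, start_p, trans_p, emit_p))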
ICMCS
2000
IEEE
Towards a Multimodal Meeting Record
Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, and person identification. Recognition and integration of each of these modalit...
Ralph Gross, Michael Bett, Hua Yu, Xiaojin Zhu, Yu...