MLMI 2005, Springer

Multimodal Integration for Meeting Group Action Segmentation and Recognition

We address the problem of segmenting and recognising sequences of multimodal human interactions in meetings. These interactions provide a coarse structure of a meeting and can be used either as input for a meeting browser or as a first step towards a higher-level semantic analysis of the meeting. A common lexicon of multimodal group meeting actions, a shared meeting data set, and a common evaluation procedure enable us to compare the different approaches. We compare three multimodal feature sets and four modelling infrastructures: a higher semantic feature approach, multi-layer HMMs, a multi-stream DBN, and a multi-stream mixed-state DBN for disturbed data.
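As a minimal sketch of the general idea behind HMM-based segmentation of a meeting into group actions (not the paper's multi-layer HMM or DBN models), the following Python/NumPy snippet performs Viterbi decoding of a frame-level feature sequence into action labels under a single-stream Gaussian HMM. The action lexicon, feature dimensionality, and all parameters are illustrative placeholders rather than values from the paper.

```python
# Illustrative sketch only: single-stream Gaussian-emission HMM Viterbi
# decoding of a meeting feature sequence into group-action labels.
# ACTIONS and all parameters are placeholders, not taken from the paper.
import numpy as np

ACTIONS = ["discussion", "monologue", "presentation", "note-taking"]  # placeholder lexicon

def log_gauss(x, mean, var):
    """Diagonal-covariance Gaussian log-likelihood of one feature frame."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def viterbi_segment(X, means, variances, log_trans, log_prior):
    """Return the most likely action label for each frame of X (T x D)."""
    T, S = len(X), len(ACTIONS)
    delta = np.full((T, S), -np.inf)   # best log-score ending in state s at time t
    psi = np.zeros((T, S), dtype=int)  # back-pointers to the best previous state
    for s in range(S):
        delta[0, s] = log_prior[s] + log_gauss(X[0], means[s], variances[s])
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_trans[:, s]
            psi[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[psi[t, s]] + log_gauss(X[t], means[s], variances[s])
    # Backtrack from the best final state to recover the label sequence.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(psi[t, path[-1]])
    return [ACTIONS[s] for s in reversed(path)]
```

Consecutive frames with the same decoded label form one action segment; the paper's actual systems extend this basic scheme with multiple feature streams and layered or mixed-state dynamic Bayesian network structures.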
Type: Conference
Year: 2005
Where: MLMI
Authors: Marc Al-Hames, Alfred Dielmann, Daniel Gatica-Perez, Stephan Reiter, Steve Renals, Gerhard Rigoll, Dong Zhang