Sciweavers

207 search results - page 7 / 42
Search query: Context based multimodal fusion
MM 2005 (ACM)
Graph based multi-modality learning
To better understand the content of multimedia, many research efforts have been made on how to learn from multi-modal features. In this paper, the problem is studied from a graph point ...
Hanghang Tong, Jingrui He, Mingjing Li, Changshui ...
ICMCS 2010 (IEEE)
Exploiting multimodal data fusion in robust speech recognition
This article introduces automatic speech recognition based on Electro-Magnetic Articulography (EMA). Movements of the tongue, lips, and jaw are tracked by an EMA device, which are...
Panikos Heracleous, Pierre Badin, Gérard Ba...
MMM 2009 (Springer)
Evidence Theory-Based Multimodal Emotion Recognition
Automatic recognition of human affective states is still a largely unexplored and challenging topic. Even more issues arise when dealing with variable quality of the inputs or aim...
Marco Paleari, Rachid Benmokhtar, Benoit Huet
DAGM 2003 (Springer)
A Computational Model of Early Auditory-Visual Integration
We introduce a computational model of sensor fusion based on the topographic representations of a "two-microphone and one camera" configuration. Our aim is to perform a robust...
Carsten Schauer, Horst-Michael Gross
ICPR 2008 (IEEE)
Multimodal biometrics fusion using Correlation Filter Bank
In this paper, a novel class-dependence feature analysis method based on the Correlation Filter Bank (CFB) technique for effective multimodal biometrics fusion at the feature level is...
Yan Yan, Yu-Jin Zhang