Sciweavers

207 search results - page 20 / 42
» Context based multimodal fusion
ECCV
2006
Springer
Robust Head Tracking with Particles Based on Multiple Cues Fusion
This paper presents a fully automatic and highly robust head tracking algorithm based on the latest advances in real-time multi-view face detection techniques and multiple cues fus...
Yuan Li, Haizhou Ai, Chang Huang, Shihong Lao
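The entry above describes a particle-based head tracker that fuses several visual cues. A minimal sketch of that general idea follows, assuming hypothetical cue-likelihood functions supplied by the caller; the paper's detection-driven proposals and specific cues are not reproduced here.

```python
import numpy as np

def track_step(particles, weights, observation, cue_likelihoods, motion_std=5.0):
    """One step of a particle filter that fuses several independent cues.

    particles       : (N, 2) array of candidate head positions (x, y)
    weights         : (N,) normalized particle weights from the previous step
    observation     : current frame, passed through to the cue likelihoods
    cue_likelihoods : list of functions f(observation, particles) -> (N,) likelihoods
    """
    n = len(particles)

    # Resample particles according to the previous weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]

    # Propagate with a simple random-walk motion model.
    particles = particles + np.random.normal(scale=motion_std, size=particles.shape)

    # Fuse cues by multiplying their likelihoods (assumes conditional independence).
    new_weights = np.ones(n)
    for cue in cue_likelihoods:
        new_weights *= cue(observation, particles)

    new_weights /= new_weights.sum() + 1e-12
    estimate = (particles * new_weights[:, None]).sum(axis=0)  # weighted-mean state
    return particles, new_weights, estimate
```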
KDD
2012
ACM
Multi-source learning for joint analysis of incomplete multi-modality neuroimaging data
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiativ...
Lei Yuan, Yalin Wang, Paul M. Thompson, Vaibhav A....
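This entry concerns learning from subjects whose modality blocks (e.g., MRI, PET) are only partially observed. The sketch below illustrates the block-wise missingness with a simple per-modality least-squares baseline whose predictions are averaged over whichever modalities each subject has; it is only an illustrative baseline, not the paper's joint multi-source formulation.

```python
import numpy as np

def fit_per_modality(X_blocks, y):
    """Fit one least-squares model per modality, using only the subjects for
    which that modality is observed (rows without NaNs).

    X_blocks : dict name -> (n_subjects, d_m) array, NaN rows = modality missing
    y        : (n_subjects,) target, e.g., a cognitive score
    """
    models = {}
    for name, X in X_blocks.items():
        observed = ~np.isnan(X).any(axis=1)
        Xo = np.hstack([X[observed], np.ones((observed.sum(), 1))])  # add bias column
        w, *_ = np.linalg.lstsq(Xo, y[observed], rcond=None)
        models[name] = w
    return models

def predict(models, X_blocks):
    """Average predictions over whichever modalities each subject actually has."""
    n = next(iter(X_blocks.values())).shape[0]
    preds, counts = np.zeros(n), np.zeros(n)
    for name, X in X_blocks.items():
        observed = ~np.isnan(X).any(axis=1)
        Xo = np.hstack([X[observed], np.ones((observed.sum(), 1))])
        preds[observed] += Xo @ models[name]
        counts[observed] += 1
    return preds / np.maximum(counts, 1)
```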
ICMI
2005
Springer
Multimodal multispeaker probabilistic tracking in meetings
Tracking speakers in multiparty conversations constitutes a fundamental task for automatic meeting analysis. In this paper, we present a probabilistic approach to jointly track th...
Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc ...
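Here the task is to locate and track speakers by combining audio and visual evidence. The sketch below shows one common fusion heuristic, a weighted sum of per-modality log-likelihoods over candidate speaker states, with hypothetical Gaussian terms standing in for microphone-array and face-detection observations; it is not the paper's actual observation model.

```python
import numpy as np

def fused_log_likelihood(candidates, audio_loglik, video_loglik,
                         w_audio=0.5, w_video=0.5):
    """Score candidate speaker states with a weighted combination of
    per-modality log-likelihoods (a common audio-visual fusion heuristic).

    candidates : (N, d) candidate speaker states, e.g., image-plane positions
    audio_loglik, video_loglik : functions mapping candidates -> (N,) log-likelihoods
    """
    return w_audio * audio_loglik(candidates) + w_video * video_loglik(candidates)

def make_gaussian_loglik(center, sigma):
    """Hypothetical per-modality term: an isotropic Gaussian around an estimate."""
    center = np.asarray(center, dtype=float)
    def loglik(candidates):
        d2 = ((candidates - center) ** 2).sum(axis=1)
        return -d2 / (2.0 * sigma ** 2)
    return loglik

# Toy usage: one term from a microphone-array direction estimate,
# one from a face detection, evaluated on randomly sampled states.
candidates = np.random.uniform(0, 640, size=(200, 2))
scores = fused_log_likelihood(candidates,
                              make_gaussian_loglik([320, 240], 40.0),
                              make_gaussian_loglik([300, 250], 25.0))
best = candidates[np.argmax(scores)]  # fused speaker-location estimate
```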
MM
2010
ACM
Multimodal location estimation
In this article we define a multimedia content analysis problem, which we call multimodal location estimation: Given a video/image/audio file, the task is to determine where it wa...
Gerald Friedland, Oriol Vinyals, Trevor Darrell
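Multimodal location estimation combines evidence from several modalities into a single estimate of where a recording was made. A minimal late-fusion sketch follows: per-modality distributions over a shared grid of candidate locations are combined with a naive-Bayes product rule. The modality names and toy numbers are illustrative, not from the article.

```python
import numpy as np

def fuse_location_posteriors(posteriors):
    """Combine per-modality probability distributions over candidate locations
    by summing log-probabilities (equivalent to a naive-Bayes product rule).

    posteriors : list of (n_cells,) arrays, one per modality, each a
                 distribution over the same grid of candidate location cells.
    """
    log_fused = np.zeros_like(posteriors[0], dtype=float)
    for p in posteriors:
        log_fused += np.log(p + 1e-12)
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()

# Toy example with 4 candidate cells: each modality alone favors a different
# cell, but cell 2 is plausible under all of them, so fusion picks it.
visual = np.array([0.45, 0.05, 0.40, 0.10])
audio  = np.array([0.05, 0.45, 0.40, 0.10])
text   = np.array([0.10, 0.10, 0.40, 0.40])
fused  = fuse_location_posteriors([visual, audio, text])
print(fused.argmax())  # -> 2
```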
ICALT
2006
IEEE
Augmented Learning: Context-Aware Mobile Augmented Reality Architecture for Learning
Mobile Augmented Reality System (MARS)-based e-learning environments equip a learner with a mobile wearable see-through display that interacts with training/learning software. MARS...
Jayfus T. Doswell
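The abstract sketches a context-aware architecture in which sensed context selects what the learner sees through the display. Below is a deliberately small illustration of that selection step, with hypothetical context fields and an invented lesson registry; it is not Doswell's actual MARS architecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LearnerContext:
    """Context sensed by a hypothetical MARS-style client (fields are illustrative)."""
    location: str       # e.g., "chemistry_lab"
    skill_level: int    # 1 = beginner .. 3 = advanced

# Hypothetical registry mapping (location, minimum skill level) to an AR lesson overlay.
LESSON_REGISTRY = {
    ("chemistry_lab", 1): "titration_basics_overlay",
    ("chemistry_lab", 2): "reaction_kinetics_overlay",
    ("museum_hall", 1): "exhibit_intro_overlay",
}

def select_lesson(ctx: LearnerContext) -> Optional[str]:
    """Pick the most advanced overlay available for the learner's location and level."""
    candidates = [(level, lesson) for (loc, level), lesson in LESSON_REGISTRY.items()
                  if loc == ctx.location and level <= ctx.skill_level]
    return max(candidates)[1] if candidates else None

print(select_lesson(LearnerContext("chemistry_lab", 2)))  # -> reaction_kinetics_overlay
```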