Sciweavers

207 search results for "Context based multimodal fusion" (page 18 of 42)
MTA 2006
Context-aware design of adaptable multimodal documents
In this paper we present a model and an adaptation architecture for context-aware multimodal documents. A compound virtual document describes the different ways in which multimodal...
Augusto Celentano, Ombretta Gaggi
CVPR 1997, IEEE
Multi-Modal Tracking of Faces for Video Communications
This paper describes a system which uses multiple visual processes to detect and track faces for video compression and transmission. The system is based on an architecture in whic...
James L. Crowley, François Bérard
CVPR 2000, IEEE
Multimodal Speaker Detection Using Error Feedback Dynamic Bayesian Networks
Design and development of novel human-computer interfaces poses a challenging problem: actions and intentions of users have to be inferred from sequences of noisy and ambiguous mu...
Vladimir Pavlovic, James M. Rehg, Ashutosh Garg, T...
AMFG 2005, IEEE
Learning to Fuse 3D+2D Based Face Recognition at Both Feature and Decision Levels
2D intensity images and 3D shape models are both useful for face recognition, but in different ways. While algorithms have long been developed using 2D or 3D data, recently has see...
Stan Z. Li, ChunShui Zhao, Meng Ao, Zhen Lei
HCI 2007
A System for Adaptive Multimodal Interaction in Crisis Environments
In recent years, multimodal interfaces have acquired an important role in human-computer interaction applications. Subsequently, these interfaces have become more and more human-orien...
Dragos Datcu, Zhenke Yang, Léon J. M. Rothk...