Sciweavers

207 search results - page 14 / 42
Search: Context based multimodal fusion
INFFUS 2007
Pixel-based and region-based image fusion schemes using ICA bases
The task of enhancing the perception of a scene by combining information captured by different sensors is usually known as image fusion. The pyramid decomposition and the Dual-Tr...
Nikolaos Mitianoudis, Tania Stathaki
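As an illustration of the pixel-based fusion the abstract defines (combining information captured by different sensors into one image), here is a minimal sketch using plain pixel-wise weighted averaging; this is a generic baseline for illustration only, not the paper's ICA-based or pyramid-decomposition method, and the "visible"/"infrared" inputs are invented toy data.

```python
import numpy as np

def fuse_pixelwise(img_a, img_b, w=0.5):
    """Fuse two aligned same-size grayscale images by pixel-wise weighted averaging."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return w * a + (1.0 - w) * b

# Toy example: two 2x2 "sensor" images of the same scene
visible = np.array([[10, 20], [30, 40]])
infrared = np.array([[90, 80], [70, 60]])
fused = fuse_pixelwise(visible, infrared)  # equal weights -> element-wise mean
```

Region-based schemes differ by first segmenting the images and choosing fusion weights per region rather than per pixel.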
ICMI 2005 (Springer)
Distributed pointing for multimodal collaboration over sketched diagrams
A problem faced by groups that are not co-located but need to collaborate on a common task is the reduced access to the rich multimodal communicative context that they would have ...
Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, D...
UIC 2007 (Springer)
Audio-Visual Fused Online Context Analysis Toward Smart Meeting Room
Context-aware systems incorporate multimodal information to analyze contextual information in users’ environment and provide various proactive services according to dyn...
Peng Dai, Linmi Tao, Guangyou Xu
HCI 2007
Unobtrusive Multimodal Emotion Detection in Adaptive Interfaces: Speech and Facial Expressions
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a com...
Khiet P. Truong, David A. van Leeuwen, Mark A. Nee...
ICMI 2004 (Springer)
Analysis of emotion recognition using facial expressions, speech and multimodal information
The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although ...
Carlos Busso, Zhigang Deng, Serdar Yildirim, Murta...
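Combining facial expressions, speech and other cues as in the entries above is often done by decision-level (late) fusion: each modality's classifier scores are merged into one decision. The sketch below shows a generic weighted-sum late fusion; the class names, scores and weight are invented for illustration and do not come from any of the listed papers.

```python
def late_fusion(face_probs, speech_probs, alpha=0.5):
    """Decision-level fusion: weighted sum of per-class scores from two modalities."""
    return {c: alpha * face_probs[c] + (1 - alpha) * speech_probs[c]
            for c in face_probs}

# Toy per-class scores from a facial-expression and a speech classifier
face = {"happy": 0.7, "angry": 0.2, "neutral": 0.1}
speech = {"happy": 0.4, "angry": 0.5, "neutral": 0.1}
fused = late_fusion(face, speech)
best = max(fused, key=fused.get)  # emotion with the highest fused score
```

Feature-level (early) fusion instead concatenates the modalities' feature vectors before a single classifier; which works better is an empirical question these papers study.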