Sciweavers

37 search results - page 3 / 8
» Multimodal model integration for sentence unit detection
IJCV 2007
Multi-sensory and Multi-modal Fusion for Sentient Computing
This paper presents an approach to multi-sensory and multi-modal fusion in which computer vision information obtained from calibrated cameras is integrated with a large-scale sent...
Christopher Town
ICIP 2007, IEEE
Multi-Modal Particle Filtering Tracking using Appearance, Motion and Audio Likelihoods
We propose a multi-modal object tracking algorithm that combines appearance, motion and audio information in a particle filter. The proposed tracker fuses at the likelihood level ...
Matteo Bregonzio, Murtaza Taj, Andrea Cavallaro
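The likelihood-level fusion described in this abstract can be illustrated with a minimal sketch, not taken from the paper: assuming the appearance, motion, and audio observations are conditionally independent given the object state, each particle's weight is the product of the three per-modality likelihoods before normalisation and resampling. The function names (fuse_likelihoods, resample) and the likelihood callables passed in are hypothetical placeholders.

```python
import numpy as np

def fuse_likelihoods(particles, appearance_lik, motion_lik, audio_lik, obs):
    """Weight each particle by the product of per-modality likelihoods.

    Assumes appearance, motion and audio observations are conditionally
    independent given the particle state (hypothetical interface).
    """
    weights = np.array([
        appearance_lik(p, obs["image"]) *
        motion_lik(p, obs["prev_image"]) *
        audio_lik(p, obs["audio"])
        for p in particles
    ])
    return weights / weights.sum()  # normalise to a proper distribution

def resample(particles, weights, rng=np.random.default_rng()):
    """Multinomial resampling step of a standard particle filter."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]
```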
ECCV 2004, Springer
Audio-Video Integration for Background Modelling
This paper introduces a new concept of surveillance, namely, audio-visual data integration for background modelling. In practice, visual data acquired by a fixed camera can be easily ...
Marco Cristani, Manuele Bicego, Vittorio Murino
ICASSP 2008, IEEE
Visual-aural attention modeling for talk show video highlight detection
In this paper, we propose a video content analysis approach based on visual-aural attention modeling, which can be used to automatically detect the highlights of the popular TV progr...
Yijia Zheng, Guangyu Zhu, Shuqiang Jiang, Qingming...
EMNLP 2007
Probabilistic Coordination Disambiguation in a Fully-Lexicalized Japanese Parser
This paper describes a probabilistic model for coordination disambiguation integrated into syntactic and case structure analysis. Our model probabilistically assesses the parallel...
Daisuke Kawahara, Sadao Kurohashi