Sciweavers

316 search results - page 2 / 64
Related to: The Catchment Feature Model for Multimodal Language Analysis
ICASSP 2011 (IEEE)
Kernel cross-modal factor analysis for multimodal information fusion
This paper presents a novel approach for multimodal information fusion. The proposed method is based on kernel cross-modal factor analysis (KCFA), in which the optimal transformat...
Yongjin Wang, Ling Guan, Anastasios N. Venetsanopo...
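Since this entry names a concrete technique, here is a minimal NumPy sketch of the underlying (linear) cross-modal factor analysis: orthonormal projections for two modalities obtained from the SVD of their cross-covariance. The kernel extension (KCFA) described in the paper is not reproduced, and the audio/visual toy matrices are hypothetical.

```python
# A minimal sketch of (linear) cross-modal factor analysis, assuming the
# standard formulation: find orthonormal projections Wx, Wy that minimise
# ||X Wx - Y Wy||_F, obtained from the SVD of the cross-covariance X^T Y.
# The paper's kernel extension (KCFA) is not reproduced here.
import numpy as np

def cross_modal_factor_analysis(X, Y, n_factors):
    """Project two feature matrices (samples x features) into a shared
    n_factors-dimensional space via the SVD of their cross-covariance."""
    # Centre each modality so the cross-covariance is meaningful.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    Wx, Wy = U[:, :n_factors], Vt.T[:, :n_factors]
    return Xc @ Wx, Yc @ Wy  # coupled low-dimensional representations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    audio = rng.normal(size=(200, 30))    # hypothetical audio features
    visual = rng.normal(size=(200, 40))   # hypothetical visual features
    Zx, Zy = cross_modal_factor_analysis(audio, visual, n_factors=5)
    print(Zx.shape, Zy.shape)             # (200, 5) (200, 5)
```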
ICMLA 2008
Multimodal Music Mood Classification Using Audio and Lyrics
In this paper we present a study on music mood classification using audio and lyrics information. The mood of a song is expressed by means of musical features but a relevant part ...
Cyril Laurier, Jens Grivolla, Perfecto Herrera
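As an illustration of the kind of audio-plus-lyrics fusion this entry describes, below is a minimal scikit-learn sketch of early (feature-level) fusion: TF-IDF lyric features concatenated with numeric audio descriptors and fed to a single classifier. The toy songs, the [energy, tempo] features, and the LinearSVC classifier are illustrative assumptions, not the paper's actual pipeline.

```python
# A minimal sketch of feature-level audio/lyrics fusion for mood
# classification, assuming scikit-learn; the toy data, feature names and the
# SVM classifier are illustrative stand-ins, not the paper's actual setup.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

lyrics = ["blue tears rain alone", "sunshine dance happy love",
          "dark cold night sorrow", "party bright joy smile"]
audio_feats = np.array([[0.2, 90.0], [0.8, 128.0],   # e.g. [energy, tempo]
                        [0.1, 70.0], [0.9, 132.0]])  # hypothetical features
moods = ["sad", "happy", "sad", "happy"]

# Text branch: bag-of-words TF-IDF over the lyrics.
X_text = TfidfVectorizer().fit_transform(lyrics).toarray()

# Audio branch: standardise the numeric descriptors.
X_audio = StandardScaler().fit_transform(audio_feats)

# Early fusion: concatenate both modalities and train one classifier.
X = np.hstack([X_text, X_audio])
clf = LinearSVC().fit(X, moods)
print(clf.predict(X))
```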
COLING 2010
Latent Mixture of Discriminative Experts for Multimodal Prediction Modeling
During face-to-face conversation, people naturally integrate speech, gestures and higher level language interpretations to predict the right time to start talking or to give backc...
Derya Ozkan, Kenji Sagae, Louis-Philippe Morency
ICMCS 2006 (IEEE)
Clustering-Based Analysis of Semantic Concept Models for Video Shots
In this paper we present a clustering-based method for representing semantic concepts on multimodal low-level feature spaces and study the evaluation of the goodness of such model...
Markus Koskela, Alan F. Smeaton
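A minimal sketch of the general clustering-based idea, assuming each semantic concept is represented by k-means centroids in a low-level feature space and new shots are scored by distance to the nearest centroid; the feature dimensionality, cluster count, and toy data are assumptions, not the paper's configuration.

```python
# A minimal sketch of clustering-based concept modelling: represent a concept
# by k-means centroids fitted on the feature vectors of shots annotated with
# that concept, and score new shots by distance to the nearest centroid.
import numpy as np
from sklearn.cluster import KMeans

def build_concept_model(features, n_clusters=3, seed=0):
    """Cluster the feature vectors of shots annotated with a concept."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    km.fit(features)
    return km.cluster_centers_

def concept_score(shot_feature, centroids):
    """Higher score = closer to the concept's nearest cluster centroid."""
    dists = np.linalg.norm(centroids - shot_feature, axis=1)
    return -dists.min()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    outdoor_shots = rng.normal(loc=1.0, size=(50, 16))   # toy feature vectors
    centroids = build_concept_model(outdoor_shots)
    print(concept_score(rng.normal(loc=1.0, size=16), centroids))
    print(concept_score(rng.normal(loc=-1.0, size=16), centroids))
```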
KDD 2012 (ACM)
Multi-source learning for joint analysis of incomplete multi-modality neuroimaging data
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiativ...
Lei Yuan, Yalin Wang, Paul M. Thompson, Vaibhav A....
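Below is a simplified sketch of the block structure that such multi-source methods exploit, assuming samples are grouped by which modalities are available and one classifier is fit per block. The paper's joint, sparsity-based learning across blocks is not reproduced, and the MRI/PET feature sizes and labels are hypothetical.

```python
# A minimal sketch of handling incomplete multi-modality data by partitioning
# samples into blocks that share the same set of available modalities and
# fitting one classifier per block. The paper's joint coupling across blocks
# is not reproduced; this only illustrates the block structure.
import numpy as np
from sklearn.linear_model import LogisticRegression

def modality_pattern(sample):
    """A block key: which modalities (by name) are present for this sample."""
    return tuple(sorted(name for name, feats in sample.items() if feats is not None))

def fit_block_models(samples, labels):
    models = {}
    for pattern in {modality_pattern(s) for s in samples}:
        idx = [i for i, s in enumerate(samples) if modality_pattern(s) == pattern]
        X = np.array([np.concatenate([samples[i][m] for m in pattern]) for i in idx])
        y = np.array([labels[i] for i in idx])
        models[pattern] = LogisticRegression().fit(X, y)
    return models

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    samples, labels = [], []
    for i in range(40):
        mri = rng.normal(size=5)                           # hypothetical MRI features
        pet = rng.normal(size=4) if i % 4 < 2 else None    # PET missing for half
        samples.append({"MRI": mri, "PET": pet})
        labels.append(i % 2)                               # toy binary labels
    models = fit_block_models(samples, labels)
    print({k: type(v).__name__ for k, v in models.items()})
```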