This paper presents a novel approach for multimodal information fusion. The proposed method is based on kernel cross-modal factor analysis (KCFA), in which the optimal transformat...
Yongjin Wang, Ling Guan, Anastasios N. Venetsanopo...
In this paper we present a study on music mood classification using audio and lyrics information. The mood of a song is expressed by means of musical features, but a relevant part ...
During face-to-face conversation, people naturally integrate speech, gestures, and higher-level language interpretations to predict the right time to start talking or to give backc...
In this paper we present a clustering-based method for representing semantic concepts on multimodal low-level feature spaces and study the evaluation of the goodness of such model...
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiativ...
Lei Yuan, Yalin Wang, Paul M. Thompson, Vaibhav A....