Sciweavers

63 search results - page 6 / 13
» Gaze estimation from multimodal Kinect data

TCSV 2011
Concept-Driven Multi-Modality Fusion for Video Search
As human perception naturally gathers information from different sources in multimodal form, learning from multiple modalities has become an effective ...
Xiao-Yong Wei, Yu-Gang Jiang, Chong-Wah Ngo
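
A minimal sketch of the fusion idea behind this line of work: combine per-modality relevance scores with query-dependent weights. This is a generic late-fusion baseline, not the paper's concept-driven method, and all scores and weights below are hypothetical.

    import numpy as np

    def fuse_scores(modality_scores, weights):
        """Weighted sum of per-modality relevance scores (one score per video)."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                            # normalize fusion weights
        return w @ np.vstack(list(modality_scores.values()))

    # Hypothetical scores for 4 candidate videos from 3 modalities.
    scores = {"text":   np.array([0.9, 0.2, 0.4, 0.1]),
              "visual": np.array([0.3, 0.8, 0.5, 0.2]),
              "audio":  np.array([0.1, 0.4, 0.7, 0.3])}
    fused = fuse_scores(scores, weights=[0.5, 0.3, 0.2])  # weights follow dict order
    print(np.argsort(-fused))                      # indices of best videos first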

SDM 2009 (SIAM)
Multi-Modal Hierarchical Dirichlet Process Model for Predicting Image Annotation and Image-Object Label Correspondence
Many real-world applications call for learning predictive relationships from multi-modal data. In particular, in multimedia and web applications, given a dataset of images and th...
Oksana Yakhnenko, Vasant Honavar
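
The general mechanism (shared topics coupling visual words and annotation words) can be toy-sketched with gensim's HDP implementation. Treating each image as one bag mixing both modalities is a simplifying assumption for illustration, not the paper's Multi-Modal HDP; the visual-word tokens (v12, v45, ...) are hypothetical.

    import numpy as np
    from gensim.corpora import Dictionary
    from gensim.models import HdpModel

    # Each "document" mixes visual-word tokens with annotation words,
    # so inferred topics couple the two modalities.
    docs = [["v12", "v45", "v45", "cat", "grass"],
            ["v12", "v99", "cat", "sofa"],
            ["v07", "v45", "dog", "grass"]]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    hdp = HdpModel(corpus, id2word=dictionary)   # topic count inferred, not fixed

    # Predict annotation words for a new image seen only as visual words:
    # infer its topic mixture, then read off probable annotation words.
    new_bow = dictionary.doc2bow(["v12", "v45"])
    topics = hdp.get_topics()                    # (num_topics, vocab) matrix
    word_scores = np.zeros(len(dictionary))
    for t, w in hdp[new_bow]:                    # topic mixture of the new image
        word_scores += w * topics[t]
    anno_ids = [i for i in np.argsort(-word_scores)
                if not dictionary[i].startswith("v")]
    print([dictionary[i] for i in anno_ids[:3]])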

ICMI 2010 (Springer)
Modelling and analyzing multimodal dyadic interactions using social networks
Social network analysis has become a common technique for modeling and quantifying the properties of social interactions. In this paper, we propose an integrated framework to explore th...
Sergio Escalera, Petia Radeva, Jordi Vitrià...
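
A minimal sketch of the social-network view of such interactions, assuming interaction events have already been detected from the audio-visual data: nodes are participants, edge weights count interactions, and centrality measures quantify influence. The interaction counts below are hypothetical.

    import networkx as nx

    # Hypothetical pairwise interaction counts between four participants.
    interactions = [("A", "B", 5), ("A", "C", 2), ("B", "C", 7), ("C", "D", 1)]

    G = nx.Graph()
    G.add_weighted_edges_from(interactions)

    # Simple structural measures over the interaction network.
    degree = nx.degree_centrality(G)
    eigen = nx.eigenvector_centrality(G, weight="weight")
    for node in G:
        print(node, round(degree[node], 2), round(eigen[node], 2))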

KDD 2012 (ACM)
Multi-source learning for joint analysis of incomplete multi-modality neuroimaging data
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiativ...
Lei Yuan, Yalin Wang, Paul M. Thompson, Vaibhav A....
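
One common strategy in this setting, sketched below, is to group subjects by which modalities they have and fit a model per availability pattern rather than imputing entire missing blocks. This is a simplified baseline illustrating the idea, not the paper's exact formulation; the block sizes and labels are synthetic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 60
    # Hypothetical feature blocks: MRI (4 dims), PET (3 dims); NaN = modality absent.
    mri = rng.normal(size=(n, 4))
    pet = rng.normal(size=(n, 3))
    pet[: n // 3] = np.nan                       # first third lacks PET entirely
    y = (mri[:, 0] + np.nan_to_num(pet[:, 0]) > 0).astype(int)

    X = np.hstack([mri, pet])
    has_pet = ~np.isnan(pet).any(axis=1)

    # One classifier per availability pattern, each on its complete sub-matrix.
    models = {}
    for pattern, cols in {True: slice(0, 7), False: slice(0, 4)}.items():
        mask = has_pet == pattern
        models[pattern] = LogisticRegression().fit(X[mask, cols], y[mask])
        print("has_pet =", pattern,
              "train acc:", models[pattern].score(X[mask, cols], y[mask]))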

CVPR 2011 (IEEE)
Learning Effective Human Pose Estimation from Inaccurate Annotation
The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from di...
Sam Johnson, Mark Everingham