This paper presents a Bayesian-network-based multimodal fusion method for robust, real-time face tracking. The Bayesian network integrates a prior of second-order system dynami...
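The abstract is truncated, but a second-order system-dynamics prior for tracking can be illustrated with a standard constant-acceleration state-space model: the next face position is predicted from the current position, velocity, and acceleration. This is a minimal sketch under assumed state layout, frame interval, and parameters, not the paper's actual model.

```python
import numpy as np

dt = 1.0  # frame interval (assumed)

# State vector: [x, y, vx, vy, ax, ay] (an illustrative choice).
# F encodes second-order kinematics: x' = x + vx*dt + 0.5*ax*dt^2, etc.
F = np.eye(6)
F[0, 2] = F[1, 3] = dt
F[0, 4] = F[1, 5] = 0.5 * dt ** 2
F[2, 4] = F[3, 5] = dt

def predict(state):
    """One-step prediction under the second-order dynamics prior."""
    return F @ state

state = np.array([100.0, 50.0, 2.0, -1.0, 0.5, 0.0])
print(predict(state))  # predicted [x, y, vx, vy, ax, ay] at the next frame
```

In a full tracker this prediction would serve as the prior mean for the face location at the next frame, to be fused with the multimodal observations.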
We propose an unsupervised approach to learn associations between continuous-valued attributes from different modalities. These associations are used to construct a multi-modal t...
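One simple way to learn associations between continuous-valued attributes from different modalities, sketched here as an assumption rather than the paper's algorithm, is to correlate attribute pairs over co-occurring observations and keep the strongly correlated pairs as cross-modal associations:

```python
import numpy as np

def associations(a, b, thresh=0.5):
    """Return (i, j) attribute pairs whose |Pearson r| exceeds thresh."""
    a_z = (a - a.mean(0)) / a.std(0)   # z-score each attribute
    b_z = (b - b.mean(0)) / b.std(0)
    r = a_z.T @ b_z / len(a)           # cross-modal correlation matrix
    return [(i, j) for i in range(r.shape[0]) for j in range(r.shape[1])
            if abs(r[i, j]) > thresh]

# Synthetic example: two modalities with one genuinely linked attribute pair.
rng = np.random.default_rng(0)
n = 200
audio = rng.normal(size=(n, 2))        # e.g. pitch, loudness (hypothetical)
visual = rng.normal(size=(n, 2))       # e.g. mouth opening, head tilt
visual[:, 0] = 0.9 * audio[:, 1] + 0.1 * visual[:, 0]

print(associations(audio, visual))     # the surviving cross-modal pairs
```

Unsupervised methods in this spirit require only co-occurring data, no labels; richer variants (e.g. canonical correlation analysis) find correlated linear combinations rather than raw attribute pairs.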
This paper presents a method for automatically annotating and retrieving animal images. Our model is a multi-modality ontology extended from our previous work in the sense that b...
The Post-PC revolution is bringing information access to a wide range of devices beyond the desktop, such as public kiosks, and mobile devices like cellular telephones, PDAs, and ...
Steven J. Ross, Jason L. Hill, Michael Y. Chen, An...
Many real-world applications call for learning predictive relationships from multi-modal data. In particular, in multi-media and web applications, given a dataset of images and th...