KDD 2004 (ACM)

Automatic multimedia cross-modal correlation discovery

Given an image (or video clip, or audio song), how do we automatically assign keywords to it? The general problem is to find correlations across media in a collection of multimedia objects such as video clips, with colors, motion, audio, and/or text scripts. We propose a novel graph-based approach, "MMG", to discover such cross-modal correlations. Our "MMG" method requires no tuning, no clustering, and no user-determined constants; it can be applied to any multimedia collection, as long as we have a similarity function for each medium; and it scales linearly with the database size. We report auto-captioning experiments on the "standard" Corel image database (680 MB), where MMG outperforms domain-specific, fine-tuned methods by up to 10 percentage points in captioning accuracy (a 50% relative improvement).

Categories and Subject Descriptors: H.2.8 [Database Management]: Database Applications - Data Mining
General Terms: Design, Experimentation
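To illustrate the kind of graph-based correlation discovery the abstract describes, here is a minimal, hypothetical sketch: multimedia objects and their attributes (caption words, visually similar images) become nodes of one mixed graph, and a random walk with restarts from a query image scores candidate caption words. The restart-walk scoring, node layout, and toy data are illustrative assumptions, not details taken from this abstract.

```python
import numpy as np

def random_walk_with_restart(adj, start, restart=0.65, iters=100):
    """Steady-state scores of a walk that restarts at `start` with prob `restart`."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # avoid division by zero for isolated nodes
    P = adj / col_sums                     # column-normalized transition matrix
    r = np.zeros(n)
    r[start] = 1.0
    scores = r.copy()
    for _ in range(iters):                 # power iteration to (near) convergence
        scores = (1 - restart) * P @ scores + restart * r
    return scores

# Toy mixed-media graph (hypothetical): image nodes 0-1, word nodes 2-4
# ("sky", "sea", "cat"). Image 0 is captioned {"sky", "sea"}; image 1 is
# uncaptioned but linked to image 0 by a visual-similarity edge.
n = 5
A = np.zeros((n, n))
for i, j in [(0, 2), (0, 3), (0, 1)]:      # image0-sky, image0-sea, image0~image1
    A[i, j] = A[j, i] = 1.0

scores = random_walk_with_restart(A, start=1)  # query: the uncaptioned image 1
best_word = int(np.argmax(scores[2:])) + 2     # highest-scoring word node
```

Here the walk propagates caption evidence through the similarity edge, so "sky" and "sea" score equally and both beat "cat"; a captioner would emit the top-k word nodes. Because only a similarity function per medium is needed to draw edges, the same sketch extends to audio or motion attributes.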
Added: 30 Nov 2009
Updated: 30 Nov 2009
Type: Conference
Year: 2004
Where: KDD
Authors: Jia-Yu Pan, Hyung-Jeong Yang, Christos Faloutsos, Pinar Duygulu