
CVPR 2010, IEEE

Reading Between The Lines: Object Localization Using Implicit Cues from Image Tags

Current uses of tagged images typically exploit only the most explicit information: the link between the nouns named and the objects present somewhere in the image. We propose to leverage "unspoken" cues that rest within an ordered list of image tags so as to improve object localization. We define three novel implicit features from an image's tags: the relative prominence of each object as signified by its order of mention, the scale constraints implied by unnamed objects, and the loose spatial links hinted at by the proximity of names on the list. By learning a conditional density over the localization parameters (position and scale) given these cues, we show how to improve both accuracy and efficiency when detecting the tagged objects. We validate our approach with 25 object categories from the PASCAL VOC and LabelMe datasets, and demonstrate its effectiveness relative to both traditional sliding window search and a visual context baseline.
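
The core mechanism described above, learning a conditional density over window position and scale given tag-derived cues and then using it to prioritize search, can be illustrated with a short sketch. The Python code below is a hypothetical toy example, not the authors' implementation: the vocabulary, the exact feature encoding, the toy training data, and the single-component Gaussian mixture density estimator are all assumptions made for illustration.

# A minimal, hypothetical sketch (not the authors' code) of turning an
# ordered tag list into implicit cues and using a learned density to
# prioritize candidate detection windows.

import numpy as np
from sklearn.mixture import GaussianMixture

VOCAB = ["person", "car", "dog", "sofa", "tvmonitor"]  # assumed toy vocabulary

def implicit_tag_features(tags, target):
    """Encode one image's ordered tag list relative to one target class:
    order of mention (prominence), which other tags are present, and how
    close co-mentioned tags sit to the target in the list (proximity)."""
    rank = tags.index(target) if target in tags else len(tags)
    presence = np.array([1.0 if w in tags else 0.0 for w in VOCAB])
    proximity = np.zeros(len(VOCAB))
    if target in tags:
        for i, w in enumerate(VOCAB):
            if w in tags:
                proximity[i] = 1.0 / (1.0 + abs(tags.index(w) - tags.index(target)))
    return np.concatenate(([rank / max(len(tags), 1)], presence, proximity))

# Toy training data: (ordered tags, target class, normalized x, y, scale).
train = [
    (["person", "dog", "sofa"],    "dog", (0.40, 0.60, 0.25)),
    (["dog", "person"],            "dog", (0.50, 0.50, 0.60)),
    (["sofa", "dog", "tvmonitor"], "dog", (0.70, 0.70, 0.20)),
    (["dog"],                      "dog", (0.50, 0.55, 0.70)),
]

X = np.array([np.concatenate((implicit_tag_features(tags, cls), loc))
              for tags, cls, loc in train])

# Fit a joint density over [tag features, x, y, scale]. For a fixed tag
# feature vector, ranking windows by the joint log-density is equivalent to
# ranking them by the conditional density p(x, y, scale | tag features).
density = GaussianMixture(n_components=1, covariance_type="full",
                          reg_covar=1e-3, random_state=0).fit(X)

def rank_windows(tags, target, candidates):
    """Return candidate (x, y, scale) windows, most promising first."""
    feats = implicit_tag_features(tags, target)
    scores = [density.score_samples(
                  np.concatenate((feats, np.asarray(w)))[None, :])[0]
              for w in candidates]
    order = np.argsort(scores)[::-1]
    return [candidates[i] for i in order]

if __name__ == "__main__":
    windows = [(0.3, 0.6, 0.2), (0.5, 0.5, 0.6), (0.8, 0.2, 0.1)]
    print(rank_windows(["person", "dog"], "dog", windows))

In this sketch, scoring the fixed tag features jointly with each candidate window and sorting by log-density gives the same ordering as the conditional density over (position, scale), so a detector can visit the most promising windows first rather than sweeping the full sliding-window grid.
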
Added 29 Mar 2010
Updated 14 May 2010
Type Conference
Year 2010
Where CVPR
Authors Sung Ju Hwang and Kristen Grauman (University of Texas at Austin)