
MMM 2012, Springer

Combining Image-Level and Segment-Level Models for Automatic Annotation

Abstract. For the task of assigning labels to an image to summarize its contents, many early approaches used segment-level information and tried to determine which parts of an image correspond to which labels. The best-performing methods use global image similarity and nearest-neighbor techniques to transfer labels from training images to test images. However, unlike segment-level methods, global methods cannot localize labels within an image, nor can they exploit training images that are only locally similar to a test image. We propose several ways to combine recent image-level and segment-level techniques to predict both image and segment labels jointly. We cast our experimental study in a unified framework covering both the image-level and segment-level annotation tasks. On three challenging datasets, our joint prediction of image and segment labels outperforms either prediction alone on both tasks, confirming that the two levels offer complementary information.
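The image-level baseline the abstract refers to is, in essence, nearest-neighbor label transfer based on global image similarity. The sketch below is only a minimal illustration of that general idea, not the authors' model: the descriptors, the choice of k, and the inverse-distance voting scheme are all assumptions made for the example.

```python
# Minimal sketch of nearest-neighbor label transfer (illustrative only; the
# feature representation, k, and the voting weights are assumptions, not the
# method proposed in the paper).
import numpy as np

def transfer_labels(train_feats, train_labels, test_feat, k=5):
    """Predict image-level label scores for one test image.

    train_feats  : (N, D) array of global image descriptors
    train_labels : (N, L) binary matrix, 1 if label l is present in image n
    test_feat    : (D,) descriptor of the test image
    """
    # Distance between the test image and every training image.
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    # Indices of the k globally most similar training images.
    nn = np.argsort(dists)[:k]
    # Weight each neighbor's labels by inverse distance and accumulate votes.
    weights = 1.0 / (dists[nn] + 1e-8)
    scores = (weights[:, None] * train_labels[nn]).sum(axis=0)
    return scores / weights.sum()  # normalized per-label scores in [0, 1]

# Toy usage: 4 training images, 3-D descriptors, 2 possible labels.
train_feats = np.array([[0.0, 0.1, 0.2],
                        [0.9, 0.8, 0.7],
                        [0.1, 0.0, 0.3],
                        [0.8, 0.9, 0.6]])
train_labels = np.array([[1, 0],
                         [0, 1],
                         [1, 0],
                         [0, 1]])
print(transfer_labels(train_feats, train_labels,
                      np.array([0.05, 0.05, 0.25]), k=2))
```

As the abstract notes, such a global transfer scheme cannot say where in the image a predicted label occurs; the paper's contribution is to combine this image-level prediction with segment-level models so that labels are predicted jointly at both levels.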
Type Conference
Year 2012
Where MMM
Authors Daniel Küttel, Matthieu Guillaumin, Vittorio Ferrari