SDM 2007, SIAM
Mining Visual and Textual Data for Constructing a Multi-Modal Thesaurus

We propose an unsupervised approach to learning associations between continuous-valued attributes from different modalities. These associations are used to construct a multi-modal thesaurus that could serve as a foundation for inter-modality translation and for hybrid navigation and search algorithms. We focus on extracting associations between visual features and textual keywords. Visual features consist of low-level attributes extracted from image content, such as color, texture, and shape. Textual features consist of keywords that describe the images. We assume that a collection of training images is available and that each image is globally annotated by a few keywords. The objective is to extract representative visual profiles that correspond to frequent homogeneous regions and to associate them with keywords. These profiles would be used to build the multi-modal thesaurus. The proposed approach was trained with a large collection of images, and the constructed...
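The abstract's core idea, pairing clustered visual profiles with co-occurring annotation keywords, can be sketched in a toy form. This is an illustrative simplification, not the paper's algorithm: it uses plain k-means over region feature vectors and a simple keyword vote per cluster; all data, function names, and the choice of k are hypothetical.

```python
# Toy sketch: cluster region-level visual features, then link each cluster
# (a visual "profile") to keywords via co-occurrence with image annotations.
# Illustrative only; not the method proposed in the paper.
from collections import Counter, defaultdict
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on lists of floats; returns a cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Recompute each center as the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                dim = len(members[0])
                centers[c] = [sum(m[d] for m in members) / len(members)
                              for d in range(dim)]
    return labels

def build_thesaurus(regions, k=2):
    """regions: list of (feature_vector, image_keywords) pairs.
    Returns {cluster_id: top keywords}, a toy multi-modal thesaurus."""
    labels = kmeans([feats for feats, _ in regions], k)
    votes = defaultdict(Counter)
    for lab, (_, keywords) in zip(labels, regions):
        votes[lab].update(keywords)
    return {c: [w for w, _ in cnt.most_common(2)] for c, cnt in votes.items()}

# Hypothetical data: "sky" regions are bluish, "grass" regions greenish.
data = [([0.10, 0.90], ["sky"]), ([0.15, 0.85], ["sky", "cloud"]),
        ([0.90, 0.20], ["grass"]), ([0.85, 0.10], ["grass", "field"])]
thesaurus = build_thesaurus(data, k=2)
```

On this toy input the two visual profiles separate cleanly, so each cluster ends up associated with the keywords of the images its regions came from.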
Hichem Frigui, Joshua Caudill
Added: 30 Oct 2010
Updated: 30 Oct 2010
Type: Conference
Year: 2007
Where: SDM
Authors: Hichem Frigui, Joshua Caudill