
MM 2010, ACM

iLike: integrating visual and textual features for vertical search

Content-based image search on the Internet is a challenging problem, mostly due to the semantic gap between low-level visual features and high-level content, as well as the excessive computation brought by the huge volume of images and high-dimensional features. In this paper, we present iLike, a new approach that truly combines textual features from web pages and visual features from image content for better image search in a vertical search engine. We tackle the first problem by capturing the meaning of each text term in the visual feature space and re-weighting visual features according to their significance to the query content. Our experimental results in product search for apparel and accessories demonstrate the effectiveness of iLike and its capability of bridging the semantic gap between visual features and concepts.
Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval--Retrieval models
General Terms: Algorithms, Design
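The abstract describes re-weighting visual feature dimensions by their significance to a query's text terms. Below is a minimal sketch of that general idea, assuming one simple discriminability score (mean separation over within-group spread) for each term; the function names, weighting scheme, and toy data are illustrative assumptions, not the authors' actual iLike algorithm.

```python
# Sketch: per-term visual feature re-weighting for retrieval.
# Assumes term-to-image associations come from surrounding web-page text.
import numpy as np

def term_feature_weights(features, has_term, eps=1e-8):
    """Weight each visual feature dimension by how well it separates images
    whose page text contains the term from those that do not.

    features : (n_images, n_dims) array of visual features
    has_term : boolean array of length n_images
    """
    pos, neg = features[has_term], features[~has_term]
    # Larger mean shift relative to within-group spread -> more significant dimension.
    separation = np.abs(pos.mean(axis=0) - neg.mean(axis=0))
    spread = pos.std(axis=0) + neg.std(axis=0) + eps
    weights = separation / spread
    return weights / (weights.sum() + eps)

def rank_images(query_terms, term_weights, features, query_vector):
    """Rank images by weighted Euclidean distance, with the weights derived
    from the query's text terms (averaged when the query has several terms)."""
    w = np.mean([term_weights[t] for t in query_terms if t in term_weights], axis=0)
    dists = np.sqrt((((features - query_vector) ** 2) * w).sum(axis=1))
    return np.argsort(dists)

# Toy usage with random features and random term assignments.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
term_weights = {
    "striped": term_feature_weights(feats, rng.random(100) < 0.3),
    "leather": term_feature_weights(feats, rng.random(100) < 0.3),
}
ranking = rank_images(["striped"], term_weights, feats, feats[0])
print(ranking[:5])
```

In a real vertical search engine, the features would come from the indexed product images and the term associations from their product descriptions; the point of the sketch is only the re-weighting step, where dimensions that distinguish a term's images get more influence on the distance.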
Added: 06 Dec 2010
Updated: 06 Dec 2010
Type: Conference
Year: 2010
Where: MM
Authors: Yuxin Chen, Nenghai Yu, Bo Luo, Xue-wen Chen