With the exponential growth of Web 2.0 applications, tags have been used extensively to describe image contents on the Web. Due to the noisy and sparse nature of human-generated tags, how to understand and utilize these tags for image retrieval tasks has become an emerging research direction. Since low-level visual features can provide fruitful information, they are employed to improve image retrieval results. However, it is challenging to bridge the semantic gap between image contents and tags. To attack this critical problem, we propose in this paper a unified framework that stems from a two-level data fusion between image contents and tags: (1) a unified graph is built to fuse the visual feature-based image similarity graph with the image-tag bipartite graph; (2) a novel random walk model is then proposed, which utilizes a fusion parameter to balance the influence between image contents and tags. Furthermore, the presented framework not only can naturally inc...
Hao Ma, Jianke Zhu, Michael R. Lyu, Irwin King
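
To illustrate the two-level fusion the abstract describes, below is a minimal Python sketch, not the authors' implementation, of a random walk with restart over a unified graph combining a visual-similarity matrix S among images with an image-tag bipartite matrix B. The fusion parameter alpha, the restart probability, and all function names here are hypothetical stand-ins for the paper's actual formulation.

import numpy as np

def row_normalize(M):
    """Row-normalize a matrix so each row sums to 1 (zero rows stay zero)."""
    sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, sums, out=np.zeros_like(M), where=sums != 0)

def fused_random_walk(S, B, alpha=0.5, restart=0.15, iters=100):
    """Random walk with restart on a unified image+tag graph (illustrative sketch).

    S: (n_img, n_img) visual feature-based image similarity graph.
    B: (n_img, n_tag) image-tag bipartite graph.
    alpha: fusion parameter; alpha -> 1 favors visual content,
           alpha -> 0 favors tag associations.
    """
    n_img, n_tag = B.shape
    # Unified transition matrix over image and tag nodes:
    # image rows split probability mass between visual neighbors (alpha)
    # and associated tags (1 - alpha); tag rows walk back to images.
    P = np.zeros((n_img + n_tag, n_img + n_tag))
    P[:n_img, :n_img] = alpha * row_normalize(S)
    P[:n_img, n_img:] = (1 - alpha) * row_normalize(B)
    P[n_img:, :n_img] = row_normalize(B.T)

    # Restart distribution: here the first image acts as the query seed.
    r = np.zeros(n_img + n_tag)
    r[0] = 1.0
    p = r.copy()
    for _ in range(iters):
        p = (1 - restart) * P.T @ p + restart * r
    return p[:n_img], p[n_img:]  # relevance scores for images and tags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = rng.random((5, 5)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
    B = (rng.random((5, 3)) > 0.5).astype(float)
    img_scores, tag_scores = fused_random_walk(S, B, alpha=0.6)
    print(img_scores, tag_scores)

Under this reading, the stationary scores rank both images and tags by relevance to the seed, so one sketch covers content-based retrieval (image scores) and annotation (tag scores), with alpha trading off the two evidence sources.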