ICMCS 2007 (IEEE)

Efficient Near-Duplicate Keyframe Retrieval with Visual Language Models

Near-duplicate keyframe retrieval is a critical task for video similarity measurement, video threading, and tracking. In this paper, instead of using expensive point-to-point matching on keypoints, we investigate visual language models built on visual keywords to speed up near-duplicate keyframe retrieval. The main idea is to estimate a visual language model over visual keywords for each keyframe and to compare keyframes by the likelihood of their visual language models. Experiments on a subset of the TRECVID-2004 video corpus show that visual language models built on visual keywords deliver promising performance for near-duplicate keyframe retrieval, greatly speeding up retrieval while sacrificing only a little accuracy compared to expensive point-to-point matching.
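To make the retrieval scheme concrete, below is a minimal sketch of likelihood-based keyframe comparison, assuming each keyframe has already been quantized into a bag of visual keywords. The vocabulary size, smoothing constant, and helper names are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter
import math

VOCAB_SIZE = 1000   # assumed size of the visual-keyword vocabulary
ALPHA = 1.0         # Laplace smoothing constant (assumption)

def estimate_language_model(visual_words):
    """Estimate a smoothed unigram model over visual keywords for one keyframe."""
    counts = Counter(visual_words)
    total = len(visual_words) + ALPHA * VOCAB_SIZE
    return {w: (counts.get(w, 0) + ALPHA) / total for w in range(VOCAB_SIZE)}

def log_likelihood(query_words, model):
    """Log-likelihood of a query keyframe's visual keywords under a candidate model."""
    return sum(math.log(model[w]) for w in query_words)

def rank_keyframes(query_words, database):
    """Rank database keyframes (id -> visual-word list) by query likelihood."""
    models = {kid: estimate_language_model(words) for kid, words in database.items()}
    scores = {kid: log_likelihood(query_words, m) for kid, m in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Because each keyframe is reduced to a fixed-length smoothed word distribution, comparison is a single pass over the query's visual keywords rather than a point-to-point keypoint match, which is where the speed-up described in the abstract comes from.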
Added 08 Dec 2010
Updated 08 Dec 2010
Type Conference
Year 2007
Where ICMCS
Authors Xiao Wu, Wanlei Zhao, Chong-Wah Ngo