Overcoming the semantic-feature gap and adapting to context are two main challenges in content-based retrieval. The problem is even more difficult for unstructured videos such as automated recordings of meetings. To address this problem, we propose a model-based approach to meeting retrieval with user-controlled weighting for dynamic similarity comparison. Each video is represented by an HMM, and the similarity between videos is determined by comparing the corresponding models. Users can control the relative importance of temporal and static features by adjusting a weighting parameter, in a way similar to content-based image retrieval. Experimental results demonstrate the feasibility and versatility of this approach.
Dar-Shyang Lee, Jonathan J. Hull, Berna Erol
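The sketch below illustrates one way the abstract's idea could be realized; it is not the paper's exact formulation. It assumes per-frame feature vectors are already extracted for each video, uses hmmlearn's GaussianHMM for the per-video model, a symmetric cross log-likelihood as the model comparison (temporal term), and a comparison of mean feature vectors as the static term. The function names (train_video_hmm, weighted_similarity) and the weight w, which stands in for the user-controlled parameter, are hypothetical.

```python
import numpy as np
from hmmlearn import hmm  # third-party: pip install hmmlearn


def train_video_hmm(features, n_states=4, seed=0):
    """Fit a Gaussian HMM to one video's frame-level feature sequence."""
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
    model.fit(features)
    return model


def temporal_similarity(model_a, feats_a, model_b, feats_b):
    """Symmetric cross log-likelihood between two HMMs, normalized per frame."""
    return 0.5 * (model_a.score(feats_b) / len(feats_b) +
                  model_b.score(feats_a) / len(feats_a))


def static_similarity(feats_a, feats_b):
    """Order-free comparison of per-video mean feature vectors (negative distance)."""
    return -np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0))


def weighted_similarity(feats_a, feats_b, w=0.5, n_states=4):
    """Mix temporal and static similarity with a user-controlled weight w in [0, 1].

    Note: the two terms live on different scales; a real system would normalize
    them to a comparable range before mixing.
    """
    model_a = train_video_hmm(feats_a, n_states)
    model_b = train_video_hmm(feats_b, n_states)
    return (w * temporal_similarity(model_a, feats_a, model_b, feats_b) +
            (1.0 - w) * static_similarity(feats_a, feats_b))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video_a = rng.normal(size=(200, 8))   # placeholder frame features
    video_b = rng.normal(size=(180, 8))
    # w near 1 emphasizes temporal structure; w near 0 emphasizes static content.
    print(weighted_similarity(video_a, video_b, w=0.7))
```

For retrieval, the query video's similarity would be computed against every video in the collection and the results ranked, with the user adjusting w to shift emphasis between temporal and static features.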