Modeling text with topics is currently a popular research area in both Machine Learning and Information Retrieval (IR). Most of this research has focused on automatic methods, although many hand-crafted topic resources are available online. In this paper we investigate retrieval performance with topic models constructed manually from a hand-crafted directory resource. The original query is smoothed with the manually selected topic model, which can also be viewed as an “ideal” user context model. Experiments with these topic models on TREC retrieval tasks show that this type of topic model alone provides little benefit, and the overall performance is not as good as relevance modeling (an automatic query modification model). However, smoothing the query with topic models outperforms relevance models for a subset of the queries, and automatically selecting between these two models on a per-query basis gives better overall results than relevance models. We further demonstr...
Xing Wei, W. Bruce Croft
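For concreteness, the query smoothing referred to in the abstract can be illustrated with a standard language-modeling formulation. The following is a hedged sketch using linear interpolation; the symbols \(\theta_Q\) (original query model), \(\theta_T\) (manually selected topic model), and the mixing weight \(\lambda\) are illustrative assumptions, not necessarily the paper's exact notation:

\[
P(w \mid \hat{\theta}_Q) \;=\; \lambda \, P(w \mid \theta_Q) \;+\; (1 - \lambda) \, P(w \mid \theta_T)
\]

Under this sketch, documents would be ranked against the smoothed query model \(\hat{\theta}_Q\), with \(\lambda\) controlling how strongly the hand-crafted topic model influences the original query.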