TREC
2007

Parsimonious Language Models for a Terabyte of Text

Abstract: The aims of this paper are twofold. Our first aim is to compare results of the earlier Terabyte tracks to the Million Query track. We submitted a number of runs using different document representations (such as full-text, title fields, or incoming anchor texts) to increase pool diversity. The initial results show broad agreement in system rankings over various measures on topic sets judged at both the Terabyte and Million Query tracks, with runs using the full-text index giving superior results on all measures, but also some noteworthy upsets. Our second aim is to explore the use of parsimonious language models for retrieval on terabyte-scale collections. These models are smaller, and thus more efficient than standard language models when used at indexing time, and they may also improve retrieval performance. We have conducted initial experiments using parsimonious models in combination with pseudo-relevance feedback, for both the Terabyte and Million Query track topic sets, and obtaine...
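To illustrate the idea behind parsimonious language models, the sketch below shows the standard EM procedure for estimating one: terms that the background (collection) model already explains well receive little probability mass in the document model, and terms falling below a pruning threshold are dropped, which is what makes the resulting models smaller at indexing time. This is a minimal illustration, not the authors' implementation; the function name, the mixture weight `lam`, and the pruning `threshold` are assumptions chosen for the example.

```python
def parsimonious_lm(doc_tf, corpus_p, lam=0.1, threshold=1e-4, iters=50):
    """EM estimation of a parsimonious document language model.

    doc_tf:    dict term -> term frequency in the document
    corpus_p:  dict term -> background probability P(t|C)
    lam:       mixture weight of the document model (background gets 1 - lam)
    threshold: terms whose probability falls below this are pruned
    """
    total = sum(doc_tf.values())
    # Initialise with the maximum-likelihood document model.
    p_doc = {t: tf / total for t, tf in doc_tf.items()}
    for _ in range(iters):
        # E-step: expected term counts attributable to the document model
        # rather than to the background model.
        e = {}
        for t, tf in doc_tf.items():
            pd = p_doc.get(t, 0.0)
            denom = lam * pd + (1 - lam) * corpus_p.get(t, 0.0)
            e[t] = tf * (lam * pd / denom) if denom > 0 else 0.0
        # M-step: renormalise, pruning terms below the threshold.
        norm = sum(e.values())
        if norm == 0:
            break
        p_doc = {t: v / norm for t, v in e.items() if v / norm >= threshold}
    return p_doc
```

Given a document dominated by a common word like "the", the background model absorbs its counts, so the parsimonious model concentrates its probability mass on the document's distinctive terms.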
Added: 07 Nov 2010
Updated: 07 Nov 2010
Type: Conference
Year: 2007
Where: TREC
Authors: Djoerd Hiemstra, Rongmei Li, Jaap Kamps, Rianne Kaptein