The TREC 2007 question answering (QA) track contained two tasks: the main task, consisting of a series of factoid, list, and “Other” questions organized around a set of targets, ...
500,000 PubMed abstracts. However, fewer than 50 documents are relevant for most queries, so applying scoring to all 500,000 abstracts would introduce considerable noise. In the first step, ...
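The filter-then-score idea described above can be sketched as a two-stage pipeline: a cheap first pass discards abstracts sharing no terms with the query, and only the survivors receive a full relevance score. This is a minimal illustrative sketch, not the paper's actual system; the tf-idf scorer and all function names are assumptions for illustration.

```python
import math
from collections import Counter

def filter_candidates(query_terms, docs):
    """Stage 1: keep only documents containing at least one query term,
    so the scorer never sees the vast majority of irrelevant abstracts."""
    qset = set(query_terms)
    return {doc_id: text for doc_id, text in docs.items()
            if qset & set(text.lower().split())}

def score(query_terms, candidates, n_total):
    """Stage 2: rank the surviving candidates with a simple tf-idf score
    (a stand-in for whatever scoring function the full system uses)."""
    tokenized = {d: t.lower().split() for d, t in candidates.items()}
    df = Counter()
    for terms in tokenized.values():
        for term in set(terms):
            df[term] += 1
    scores = {}
    for doc_id, terms in tokenized.items():
        tf = Counter(terms)
        scores[doc_id] = sum(
            tf[q] * math.log((n_total + 1) / (df[q] + 1))
            for q in query_terms if q in tf)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy corpus standing in for the 500,000 PubMed abstracts.
docs = {
    "a1": "gene expression in yeast cells",
    "a2": "protein folding dynamics",
    "a3": "yeast gene regulation networks",
}
cands = filter_candidates(["yeast", "gene"], docs)
ranking = score(["yeast", "gene"], cands, n_total=len(docs))
```

The payoff is that stage 2's cost scales with the handful of candidates rather than the whole collection.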
This paper provides an overview of experiments carried out at the TREC 2004 Terabyte Track using the Indri search engine. Indri is an efficient, effective distributed search engin...
Donald Metzler, Trevor Strohman, Howard R. Turtle,...
The TREC 2003 web track consisted of both a non-interactive stream and an interactive stream. Both streams worked with the .GOV test collection. The non-interactive stream continu...
Nick Craswell, David Hawking, Ross Wilkinson, Ming...
In this paper, we report our experiments in the TREC 2003 Genomics Track and the Robust Track. A common theme that we explored is the robustness of a basic language modeling retri...
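The "basic language modeling retrieval" mentioned above is commonly instantiated as query-likelihood scoring with Dirichlet smoothing. The sketch below shows that standard baseline; the parameter value and helper names are assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000.0):
    """Score a document by the query's log-likelihood under a unigram
    language model estimated from the document, Dirichlet-smoothed
    against collection statistics (a standard LM retrieval baseline)."""
    d = Counter(doc)
    c = Counter(collection)
    dlen, clen = sum(d.values()), sum(c.values())
    score = 0.0
    for term in query:
        p_bg = c[term] / clen                 # background (collection) probability
        p = (d[term] + mu * p_bg) / (dlen + mu)
        if p == 0.0:                          # term unseen anywhere: skip it
            continue
        score += math.log(p)
    return score

# Toy example: documents and collection as token lists.
collection = ["yeast", "gene", "protein", "cell", "gene"]
doc_match = ["yeast", "gene"]
doc_other = ["protein", "cell"]
s_match = query_likelihood(["gene"], doc_match, collection)
s_other = query_likelihood(["gene"], doc_other, collection)
```

Smoothing gives unseen query terms a nonzero probability from the collection model, which is what makes the baseline robust across query types.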