Sciweavers

Search: Retrieval system evaluation: automatic evaluation versus inc...
17 search results, page 2 of 4

CIKM 2003 (Springer)
Using titles and category names from editor-driven taxonomies for automatic evaluation
Evaluation of IR systems has always been difficult because of the need for manually assessed relevance judgments. The advent of large editor-driven taxonomies on the web opens the...
Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury...

SIGIR 2003 (ACM)
Using manually-built web directories for automatic evaluation of known-item retrieval
Information retrieval system evaluation is complicated by the need for manually assessed relevance judgments. Large manually-built directories on the web open the door to new eval...
Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury...

SIGIR 2003 (ACM)
Automatic ranking of retrieval systems in imperfect environments
The empirical investigation of the effectiveness of information retrieval (IR) systems requires a test collection, a set of query topics, and a set of relevance judgments made by ...
Rabia Nuray, Fazli Can

ECIR 2010 (Springer)
A Case for Automatic System Evaluation
Ranking a set of retrieval systems according to their retrieval effectiveness without relying on relevance judgments was first explored by Soboroff et al. [13]. Over the years, a numb...
Claudia Hauff, Djoerd Hiemstra, Leif Azzopardi, Fr...

SIGIR 2004 (ACM)
Forming test collections with no system pooling
Forming test collection relevance judgments from the pooled output of multiple retrieval systems has become the standard process for creating resources such as the TREC, CLEF, and...
Mark Sanderson, Hideo Joho