Sciweavers

4589 search results for "A new evaluation measure for information retrieval systems" (page 4 of 918)
SIGIR 2005 · ACM
Information retrieval system evaluation: effort, sensitivity, and reliability
The effectiveness of information retrieval systems is measured by comparing performance on a common set of queries and documents. Significance tests are often used to evaluate the...
Mark Sanderson, Justin Zobel
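The snippet above sketches the standard methodology: score competing systems on a shared set of queries and judged documents, then apply a significance test to the per-query differences. As an illustration only (not code from the paper), here is a minimal Python sketch using a paired t-test over invented per-query average-precision scores:

```python
# Minimal sketch: paired significance test over per-query scores for two
# hypothetical systems evaluated on the same 10 queries. Data is invented.
from scipy.stats import ttest_rel

system_a = [0.42, 0.31, 0.55, 0.18, 0.66, 0.47, 0.29, 0.51, 0.38, 0.60]
system_b = [0.39, 0.35, 0.50, 0.22, 0.61, 0.45, 0.33, 0.48, 0.40, 0.58]

t_stat, p_value = ttest_rel(system_a, system_b)
print(f"paired t-test: t={t_stat:.3f}, p={p_value:.3f}")
```

Which test is appropriate for IR score distributions is one of the questions this line of work examines.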
CIKM 2005 · Springer
Using RankBoost to compare retrieval systems
This paper presents a new pooling method for constructing the assessment sets used in the evaluation of retrieval systems. Our proposal is based on RankBoost, a machine learning v...
Huyen-Trang Vu, Patrick Gallinari
SIGIR 2003 · ACM
Using manually-built web directories for automatic evaluation of known-item retrieval
Information retrieval system evaluation is complicated by the need for manually assessed relevance judgments. Large manually-built directories on the web open the door to new eval...
Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury...
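Known-item retrieval has a single correct document per query, so it is commonly scored by the rank of that document, e.g. mean reciprocal rank. The following sketch is purely illustrative, with invented rankings and targets; it does not reproduce the paper's directory-based evaluation:

```python
# Illustrative mean reciprocal rank (MRR) for known-item search: each query
# has one correct document; score 1/rank if retrieved, 0 otherwise.
def mean_reciprocal_rank(rankings, targets):
    total = 0.0
    for qid, ranked_docs in rankings.items():
        target = targets[qid]
        if target in ranked_docs:
            total += 1.0 / (ranked_docs.index(target) + 1)
    return total / len(rankings)

rankings = {"q1": ["d3", "d7", "d1"], "q2": ["d2", "d9", "d4"]}
targets = {"q1": "d7", "q2": "d5"}               # q2's known item is missed
print(mean_reciprocal_rank(rankings, targets))   # (1/2 + 0) / 2 = 0.25
```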
WSDM 2010 · ACM
Measuring the Reusability of Test Collections
While test collection construction is a time-consuming and expensive process, the true cost is amortized by reusing the collection over hundreds or thousands of experiments. Some ...
Ben Carterette, Evgeniy Gabrilovich, Vanja Josifov...
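One standard way to probe reusability in this line of work is a leave-one-run-out simulation: remove the relevant documents that only one run contributed to the judgment pool and re-score that run. The sketch below illustrates the idea with hypothetical data and is not the paper's exact procedure:

```python
# Leave-one-run-out sketch: drop relevant documents contributed to the pool
# only by runA, then re-score runA. A large drop suggests poor reusability
# for systems that did not contribute to the pool. Data is hypothetical.
def precision_at_k(ranked_docs, relevant, k=3):
    return sum(1 for d in ranked_docs[:k] if d in relevant) / k

relevant = {"d1", "d4", "d6"}                     # pooled judgments
contributions = {"runA": {"d1", "d2", "d4"},      # docs each run put in the pool
                 "runB": {"d4", "d5", "d6"}}
run_a_ranking = ["d1", "d4", "d6"]

unique_to_a = (contributions["runA"] - contributions["runB"]) & relevant
reduced = relevant - unique_to_a                  # judgments minus runA's uniques

print(precision_at_k(run_a_ranking, relevant))    # 1.0 with full judgments
print(precision_at_k(run_a_ranking, reduced))     # ~0.67 after removing d1
```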
SIGIR 1998 · ACM
How Reliable Are the Results of Large-Scale Information Retrieval Experiments?
Two stages in the measurement of techniques for information retrieval are the gathering of documents for relevance assessment and the use of the assessments to numerically evaluate effective...
Justin Zobel
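The two stages named in the snippet (collecting documents for assessment, then scoring from the judgments) are commonly realized as depth-k pooling followed by metric computation. The sketch below is a simplified illustration with invented runs and judgments, not the paper's experimental setup:

```python
# Simplified depth-k pooling and evaluation for a single query. Invented data.
runs = {
    "runA": ["d1", "d2", "d3", "d4"],
    "runB": ["d3", "d5", "d1", "d6"],
}
k = 2

# Stage 1: pool the top-k documents of every run for relevance assessment.
pool = {doc for ranking in runs.values() for doc in ranking[:k]}
print(sorted(pool))                    # ['d1', 'd2', 'd3', 'd5'] go to assessors

# Stage 2: score each run against the resulting judgments (precision at k).
judged_relevant = {"d1", "d3"}         # hypothetical assessor decisions

def precision_at_k(ranking, relevant, k):
    return sum(1 for d in ranking[:k] if d in relevant) / k

for name, ranking in runs.items():
    print(name, precision_at_k(ranking, judged_relevant, k))
```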