Sciweavers

1,900 search results for "Crowdsourcing for relevance evaluation"
SIGIR 2012 (ACM)
An uncertainty-aware query selection model for evaluation of IR systems
We propose a mathematical framework for query selection as a mechanism for reducing the cost of constructing information retrieval test collections. In particular, our mathematica...
Mehdi Hosseini, Ingemar J. Cox, Natasa Milic-Frayl...
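
The snippet cuts off before the model's details, but the cost-reduction idea can be sketched: judge only a small subset of queries whose results are most informative for comparing systems. The heuristic below (variance-based greedy selection, and the select_queries name) is a hypothetical illustration, not the paper's actual model:

    import numpy as np

    def select_queries(scores, budget):
        """Hypothetical heuristic: greedily keep the `budget` queries whose
        per-system scores vary most, i.e. the queries that best
        discriminate between the systems being compared."""
        variance = scores.var(axis=1)        # disagreement across systems
        order = np.argsort(variance)[::-1]   # most discriminative first
        return sorted(order[:budget].tolist())

    rng = np.random.default_rng(0)
    scores = rng.random((50, 4))             # 50 queries x 4 systems (e.g. AP)
    print(select_queries(scores, budget=10))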
SIGIR 2005 (ACM)
Automated evaluation of search engine performance via implicit user feedback
Measuring the information retrieval effectiveness of Web search engines can be expensive if human relevance judgments are required to evaluate search results. Using implicit user ...
Himanshu Sharma, Bernard J. Jansen
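
As a rough illustration of evaluating from implicit feedback rather than explicit judgments, one can treat the rank of a user's first click as a relevance proxy. The metric below is a generic sketch under that assumption, not necessarily the paper's method:

    def implicit_mrr(click_logs):
        """Score an engine from click logs alone: the reciprocal rank of the
        first clicked result serves as a relevance proxy, so no human
        judgments are needed."""
        scores = [1.0 / min(ranks) if ranks else 0.0 for ranks in click_logs]
        return sum(scores) / len(scores)

    # One inner list per query session: the (1-based) ranks that were clicked.
    logs = [[1], [3, 5], [], [2]]
    print(f"implicit MRR proxy: {implicit_mrr(logs):.3f}")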
VLDB 2001 (ACM)
Fast Evaluation Techniques for Complex Similarity Queries
Complex similarity queries, i.e., multi-feature multi-object queries, are needed to express the information need of a user against a large multimedia repository. Even if a user in...
Klemens Böhm, Michael Mlivoncic, Hans-Jö...
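
A multi-feature query combines several per-feature distances (e.g., color and texture for images) into a single score. The naive aggregation below is a hypothetical baseline shown only for context; the paper's contribution is making such evaluation fast, which this sketch does not attempt:

    def combined_distance(feature_dists, weights):
        """Aggregate per-feature distances (color, texture, ...) into one
        score for a multi-feature similarity query."""
        return sum(w * d for w, d in zip(weights, feature_dists))

    def top_k(candidates, weights, k):
        """Naive evaluation: score every object, keep the k closest. Fast
        techniques avoid computing every feature distance for every object."""
        scored = sorted((combined_distance(d, weights), obj) for obj, d in candidates)
        return scored[:k]

    candidates = [("img1", [0.2, 0.5]), ("img2", [0.1, 0.9]), ("img3", [0.4, 0.3])]
    print(top_k(candidates, weights=[0.6, 0.4], k=2))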
CORR 2008 (Springer)
An evaluation of Bradfordizing effects
The purpose of this paper is to apply and evaluate the bibliometric method Bradfordizing for information retrieval (IR) experiments. Bradfordizing is used for generating core docu...
Philipp Mayr
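
Bradfordizing itself is straightforward to illustrate: re-rank a result set so that documents from the journals contributing the most hits (the "core" journals) appear first. A minimal sketch with made-up journal names:

    from collections import Counter

    def bradfordize(results):
        """Re-rank hits by the productivity of their source journal within
        this result set: documents from 'core' journals (those contributing
        the most hits) move to the front; ties keep the original order."""
        freq = Counter(doc["journal"] for doc in results)
        return sorted(results, key=lambda d: -freq[d["journal"]])

    hits = [{"title": "A", "journal": "JASIST"},
            {"title": "B", "journal": "IPM"},
            {"title": "C", "journal": "JASIST"}]
    for doc in bradfordize(hits):
        print(doc["title"], doc["journal"])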
HICSS 2003 (IEEE)
A General Method for Statistical Performance Evaluation
In this paper, we propose a general method for statistical performance evaluation. The method incorporates various statistical metrics and automatically selects an appropriate stat...
Longzhuang Li, Yi Shang, Wei Zhang, Hongchi Shi
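
The truncated snippet suggests the method picks a suitable significance test automatically. A common instance of that idea, shown here purely as a hedged illustration (not necessarily the paper's selection logic), is choosing between a paired t-test and a Wilcoxon signed-rank test based on a normality check of the score differences:

    from scipy import stats

    def compare_systems(scores_a, scores_b, alpha=0.05):
        """Pick a paired significance test automatically: use the t-test if
        the score differences pass a Shapiro-Wilk normality check, otherwise
        fall back to the distribution-free Wilcoxon signed-rank test."""
        diffs = [a - b for a, b in zip(scores_a, scores_b)]
        if stats.shapiro(diffs).pvalue > alpha:      # differences look normal
            return "paired t-test", stats.ttest_rel(scores_a, scores_b).pvalue
        return "Wilcoxon signed-rank", stats.wilcoxon(scores_a, scores_b).pvalue

    a = [0.61, 0.55, 0.70, 0.48, 0.66, 0.59, 0.52, 0.63]  # per-query scores
    b = [0.58, 0.50, 0.65, 0.47, 0.60, 0.57, 0.49, 0.61]
    print(compare_systems(a, b))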