The goal of system evaluation in information retrieval has always been to determine which of a set of systems is superior on a given collection. The tool used to determine system ...
In this paper, we propose a novel top-k learning-to-rank framework, which involves a labeling strategy, a ranking model, and an evaluation measure. The motivation comes from the difficul...
Interleaving experiments are an attractive methodology for evaluating retrieval functions through implicit feedback. Designed as a blind, unbiased test for eliciting a preferen...
Yisong Yue, Yue Gao, Olivier Chapelle, Ya Zhang, T...
This paper reports on experiments submitted for the robust task at CLEF 2007. We applied a system previously tested for ad-hoc retrieval. Experiments were focused on the effect of...
Online information services have grown too large for users to navigate without the help of automated tools such as collaborative filtering, which makes recommendations to users ba...