We propose a mathematical framework for query selection as a mechanism for reducing the cost of constructing information retrieval test collections. In particular, our mathematica...
Mehdi Hosseini, Ingemar J. Cox, Natasa Milic-Frayl...
Most test collections for experimental research in information retrieval (such as TREC and CLEF) use binary relevance assessments. This paper introduces a four-point relevance scal...
In this paper we propose a model for relevance feedback. Our model combines evidence from the user's relevance assessments with algorithms describing how words are used within do...
In information retrieval research, comparing retrieval approaches requires test collections consisting of documents, user requests and relevance assessments. Obtaining relevance a...
Benjamin Piwowarski, Andrew Trotman, Mounia Lalmas
This paper investigates the agreement between official TREC relevance judgments and those generated from an interactive IR experiment. Results show that 63% of docu...
Research on cross-language information retrieval (CLIR) has typically been restricted to settings using binary relevance assessments. In this paper, we present evaluation results f...
Raija Lehtokangas, Heikki Keskustalo, Kalervo Järvelin
We investigate possible assessment trends and inconsistencies within the relevance assessments collected for the INEX'02 test collection, in order to provide a critical analysis...
IR research has a strong tradition of laboratory evaluation of systems. Such research is based on test collections, pre-defined test topics, and standard evaluation metrics. While ...
In this paper we report an initial comparison of relevance assessments made as part of the INEX 2006 Interactive Track (itrack’06) to those made for the topic assessmen...
This paper provides an overview of the newly launched Book Search Track at INEX 2007 (BookSearch’07), its participants, tasks, book corpus, test topics and relevance assessments...