Information retrieval evaluation has typically been performed over several dozen queries, each judged to near-completeness. There has been a great deal of recent work on evaluation ...
Ben Carterette, Virgiliu Pavlu, Evangelos Kanoulas...
LETOR is a benchmark collection for research on learning to rank for information retrieval, released by Microsoft Research Asia. In this paper, we describe the details of the LETOR ...
Users enter both short and long queries. The aim of this work is to evaluate techniques that can enable information retrieval (IR) systems to automatically adapt to per...
Relevance feedback, which traditionally uses the terms in the relevant documents to enrich the user's initial query, is an effective method for improving retrieval performance ...
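The classic form of this idea is Rocchio-style query expansion: reweight the original query terms and add terms drawn from the centroid of the known relevant documents. The sketch below is illustrative only, not the method of the snippet above; the function name `rocchio_expand` and the parameter defaults (`alpha`, `beta`, `top_k`) are assumptions chosen for the example.

```python
from collections import Counter

def rocchio_expand(query_terms, relevant_docs, alpha=1.0, beta=0.75, top_k=5):
    """Illustrative sketch of Rocchio-style query expansion.

    Original query terms are weighted by alpha; terms from the
    relevant documents contribute via the beta-weighted centroid
    of their term frequencies. Returns the top_k weighted terms
    as the expanded query.
    """
    weights = Counter({term: alpha for term in query_terms})
    if relevant_docs:
        centroid = Counter()
        for doc in relevant_docs:
            centroid.update(doc.split())  # naive whitespace tokenization
        for term, freq in centroid.items():
            weights[term] += beta * freq / len(relevant_docs)
    return [term for term, _ in weights.most_common(top_k)]
```

A real system would tokenize and normalize properly and typically subtract a term from non-relevant documents as well; the sketch keeps only the positive-feedback half to show the core idea.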
As electronic document repositories grow rapidly, usable interfaces to them gain importance. While sophisticated information retrieval techniques provide...