Optimising the parameters of ranking functions with respect to standard IR rank-dependent cost functions has eluded satisfactory analytical treatment. We build on recent advances in alternative differentiable pairwise cost functions, and show that these techniques can be successfully applied to tuning the parameters of an existing family of IR scoring functions (BM25), in the sense that we cannot do better using sensible search heuristics that directly optimise the rank-based cost function NDCG. We also demonstrate how the size of the training set affects the number of parameters we can hope to tune this way.

Categories and Subject Descriptors
H.3.3 [Information Systems]: Information Storage and Retrieval--information search and retrieval

General Terms
Experimentation

Keywords
evaluation, optimisation, effectiveness measures, ranking, scoring
Michael J. Taylor, Hugo Zaragoza, Nick Craswell, S
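To make the idea in the abstract concrete, the sketch below is a minimal illustration, not the paper's actual method: it tunes BM25's k1 and b by gradient descent on a RankNet-style logistic pairwise cost over synthetic preference pairs, using finite-difference gradients for brevity. The data, learning rate, clamping ranges, and function names are illustrative assumptions.

# Minimal sketch (assumptions noted above): tune BM25's k1 and b by
# gradient descent on a differentiable pairwise cost, rather than
# searching over the rank-based cost (NDCG) directly.
import math
import random

random.seed(0)
AVG_DOC_LEN = 100.0  # illustrative collection average document length

def bm25(tf, doc_len, idf, k1, b):
    """BM25 contribution of a single query term to a document's score."""
    norm = k1 * (1.0 - b + b * doc_len / AVG_DOC_LEN)
    return idf * tf * (k1 + 1.0) / (tf + norm)

def pairwise_cost(params, pairs):
    """RankNet-style logistic cost over (relevant, non-relevant) pairs.

    log(1 + exp(-(s_rel - s_non))) is a smooth surrogate for the
    rank swap that NDCG would penalise."""
    k1, b = params
    cost = 0.0
    for idf, (tf_p, len_p), (tf_n, len_n) in pairs:
        diff = bm25(tf_p, len_p, idf, k1, b) - bm25(tf_n, len_n, idf, k1, b)
        cost += math.log(1.0 + math.exp(-diff))
    return cost / len(pairs)

def numerical_gradient(f, params, eps=1e-4):
    """Central-difference gradient; stands in for analytic derivatives."""
    grad = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        grad.append((f(hi) - f(lo)) / (2.0 * eps))
    return grad

# Synthetic training pairs: each pair shares a query-term idf and holds
# (tf, doc_len) for a relevant and a non-relevant document.
pairs = []
for _ in range(200):
    idf = random.uniform(1.0, 4.0)
    relevant = (random.randint(2, 8), random.uniform(50.0, 150.0))
    non_relevant = (random.randint(0, 3), random.uniform(80.0, 300.0))
    pairs.append((idf, relevant, non_relevant))

params = [1.2, 0.75]  # standard BM25 defaults as the starting point
lr = 0.1
for step in range(200):
    g = numerical_gradient(lambda p: pairwise_cost(p, pairs), params)
    k1 = max(0.01, params[0] - lr * g[0])         # keep k1 positive
    b = min(1.0, max(0.0, params[1] - lr * g[1])) # keep b in [0, 1]
    params = [k1, b]

print("tuned k1=%.3f b=%.3f cost=%.4f"
      % (params[0], params[1], pairwise_cost(params, pairs)))

In a real experiment the tuned parameters would then be evaluated by computing NDCG on held-out queries, which is the comparison the abstract draws against search heuristics that optimise NDCG directly.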