Sciweavers

634 search results - page 46 / 127
Query: On the history of evaluation in IR
IR
2010
LETOR: A benchmark collection for research on learning to rank for information retrieval
LETOR is a benchmark collection for research on learning to rank for information retrieval, released by Microsoft Research Asia. In this paper, we describe the details of LETOR...
Tao Qin, Tie-Yan Liu, Jun Xu, Hang Li
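A minimal sketch of working with LETOR-style data, assuming the SVMlight-like "label qid:N fid:value ... #comment" line layout of the public LETOR releases; ranking by a single raw feature stands in for the trivial baseline that learned rankers are meant to beat.

```python
# Parse LETOR-style lines and rank each query's documents by one feature.
from collections import defaultdict

def parse_letor_line(line: str):
    """Parse one line into (relevance_label, query_id, {feature_id: value})."""
    body = line.split("#", 1)[0].split()   # drop the trailing comment
    label = int(body[0])                   # graded relevance judgment
    qid = body[1].split(":", 1)[1]         # "qid:1" -> "1"
    feats = {int(f): float(v) for f, v in (tok.split(":", 1) for tok in body[2:])}
    return label, qid, feats

def rank_by_feature(lines, feature_id: int):
    """Group documents by query and sort each group on one raw feature,
    the single-feature baseline that learned rankers should beat."""
    per_query = defaultdict(list)
    for line in lines:
        label, qid, feats = parse_letor_line(line)
        per_query[qid].append((feats.get(feature_id, 0.0), label))
    return {qid: sorted(docs, reverse=True) for qid, docs in per_query.items()}

sample = [
    "2 qid:1 1:0.71 2:0.30 #doc=A",
    "0 qid:1 1:0.12 2:0.81 #doc=B",
    "1 qid:1 1:0.45 2:0.10 #doc=C",
]
print(rank_by_feature(sample, feature_id=1))  # labels come out 2, 1, 0
```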
RE
2010
Springer
Assessing traceability of software engineering artifacts
The generation of traceability links or traceability matrices is vital to many software engineering activities. It is also person-power intensive, time-consuming, error-prone...
Senthil Karthikeyan Sundaram, Jane Huffman Hayes, ...
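IR-based traceability recovery typically scores each (high-level, low-level) artifact pair by vector-space similarity. The sketch below illustrates that general idea, not the specific tools assessed in the paper; the artifacts, tokenization, and 0.1 cut-off are all illustrative assumptions.

```python
# Generic vector-space traceability sketch: TF-IDF vectors per artifact,
# cosine similarity per (requirement, code) pair, threshold for candidates.
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists -> list of {term: tf*idf} dicts.
    Smoothed idf keeps terms shared by all documents at nonzero weight."""
    df = Counter()
    for toks in docs:
        df.update(set(toks))
    n = len(docs)
    return [{t: c * math.log(1 + n / df[t]) for t, c in Counter(toks).items()}
            for toks in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

req_tokens = "the system shall encrypt user password".split()
code = {"auth.c": "encrypt password hash store".split(),
        "ui.c": "render login form button".split()}

vecs = tfidf_vectors([req_tokens] + list(code.values()))
req_vec, code_vecs = vecs[0], dict(zip(code, vecs[1:]))

candidate_links = []
for name, vec in code_vecs.items():
    sim = cosine(req_vec, vec)
    if sim > 0.1:                          # tunable candidate-link cut-off
        candidate_links.append(("R1", name, round(sim, 3)))
print(candidate_links)                     # links R1 to auth.c, not ui.c
```

Exact token matching is the weak point of such a sketch; real tools normalize terms (stemming or lemmatization) before vectorizing.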
TSD
2010
Springer
Comparison of Different Lemmatization Approaches through the Means of Information Retrieval Performance
This paper presents a quantitative performance analysis of two different approaches to the lemmatization of Czech text data. The first one is based on manually prepared dictionaries...
Jakub Kanis, Lucie Skorkovská
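A minimal sketch of the dictionary-based approach, with a toy English form-to-lemma table standing in for the manually prepared Czech resources, which are not reproduced here; unknown forms fall back to the lowercased surface form.

```python
# Toy dictionary-based lemmatizer; the English form->lemma table below is
# a stand-in for manually prepared (e.g. Czech) resources.
LEMMA_DICT = {
    "mice": "mouse",
    "ran": "run",
    "running": "run",
    "better": "good",
}

def lemmatize(tokens):
    """Map each token to its dictionary lemma; unknown forms fall back
    to the lowercased surface form."""
    return [LEMMA_DICT.get(t.lower(), t.lower()) for t in tokens]

# Normalizing queries and documents the same way is what makes the query
# term "run" match a document containing "ran".
print(lemmatize("The mice ran".split()))   # ['the', 'mouse', 'run']
```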
IR
2011
Learning to rank for why-question answering
In this paper, we evaluate a number of machine learning techniques for the task of ranking answers to why-questions. We use TF-IDF together with a set of 36 linguistically motivated features...
Suzan Verberne, Hans van Halteren, Daphne Theijsse...
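A hedged sketch of the general setup the abstract describes: candidate answers re-ranked by a weighted combination of a TF-IDF retrieval score and additional features. The feature names and weights below are illustrative assumptions, not the 36 features or the learned models from the paper.

```python
# Illustrative answer re-ranking: weighted linear combination of a TF-IDF
# score and extra (hypothetical) features; weights would normally be
# learned, not hand-set as they are here.
def rerank(candidates, weights):
    """candidates: list of (answer_id, {feature: value}) pairs,
    returned in descending order of the weighted feature sum."""
    def score(feats):
        return sum(weights.get(name, 0.0) * val for name, val in feats.items())
    return sorted(candidates, key=lambda c: score(c[1]), reverse=True)

candidates = [
    ("a1", {"tfidf": 0.42, "has_causal_cue": 1.0, "answer_length": 0.3}),
    ("a2", {"tfidf": 0.55, "has_causal_cue": 0.0, "answer_length": 0.6}),
]
weights = {"tfidf": 1.0, "has_causal_cue": 0.8, "answer_length": -0.1}
print(rerank(candidates, weights))  # a1 wins: the causal cue outweighs tfidf
```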
ECRA
2007
WebTracer: A new web usability evaluation environment using gazing point information
WebTracer is a new usability evaluation environment that supports recording, replaying, and analyzing a user's gaze point and operations while they browse a website. WebTracer ...
Noboru Nakamichi, Makoto Sakai, Kazuyuki Shima, Ji...
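For illustration, a hypothetical sketch of the kind of timestamped record a gaze-tracking usability tool could log for later replay and analysis; the field names and JSON-lines format are assumptions, not WebTracer's actual storage format.

```python
# Hypothetical gaze/operation event log; fields are illustrative only.
import json
import time

def gaze_event(x: int, y: int, url: str, operation: str = "fixation"):
    """One logged sample: screen coordinates, page, event type, timestamp."""
    return {"t": time.time(), "x": x, "y": y, "url": url, "op": operation}

log = [
    gaze_event(412, 180, "https://example.com", "fixation"),
    gaze_event(415, 310, "https://example.com", "click"),
]
# Persisting the session as JSON lines allows later replay and analysis.
print("\n".join(json.dumps(e) for e in log))
```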