Sciweavers

473 search results
» Evaluating interactive information retrieval systems: opport...
CIKM 2006, Springer
Evaluation by comparing result sets in context
Familiar evaluation methodologies for information retrieval (IR) are not well suited to the task of comparing systems in many real settings. These systems and evaluation methods m...
Paul Thomas, David Hawking
ECIR 2009, Springer
Design and Evaluation of a University-Wide Expert Search Engine
We present an account of designing and evaluating a university-wide expert search engine. We performed a system-based evaluation to determine the optimal retrieval settings and an ex...
Ruud Liebregts, Toine Bogers
IPM 2008
Design and evaluation of awareness mechanisms in CiteSeer
Awareness has been extensively studied in human computer interaction (HCI) and computer supported cooperative work (CSCW). The success of many collaborative systems hinges on effe...
Umer Farooq, Craig H. Ganoe, John M. Carroll, Isaa...
SIGIR 2010, ACM
Human performance and retrieval precision revisited
Several studies have found that the Cranfield approach to evaluation can report significant performance differences between retrieval systems for which little to no performance...
Mark D. Smucker, Chandra Prakash Jethani
SIGIR 2010, ACM
Report on the SIGIR 2010 workshop on the simulation of interaction
All search in the real world is inherently interactive. Information retrieval (IR) has a firm tradition of using simulation to evaluate IR systems, as embodied by the Cranfield par...
Leif Azzopardi, Kalervo Järvelin, Jaap Kamps,...