
SIGIR 2008, ACM

Crowdsourcing for relevance evaluation

Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.
Omar Alonso, Daniel E. Rose, Benjamin Stewart
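
The paper itself defines TERC; purely as an illustration of the crowdsourcing idea described in the abstract (not the authors' actual design), the Python sketch below assumes each online user submits one small binary relevance judgment for a (query, document) pair, and the judgments are then combined by simple majority vote. All names and data in the sketch are hypothetical.

    from collections import Counter, defaultdict

    # Hypothetical per-worker judgments: (query, doc_id, worker_id, is_relevant).
    # Each tuple stands for one small evaluation task completed by one online user.
    judgments = [
        ("jaguar speed", "doc-1", "w1", True),
        ("jaguar speed", "doc-1", "w2", True),
        ("jaguar speed", "doc-1", "w3", False),
        ("jaguar speed", "doc-2", "w1", False),
        ("jaguar speed", "doc-2", "w2", False),
    ]

    def aggregate_by_majority(judgments):
        # Collapse many small per-worker tasks into one relevance label
        # per (query, document) pair via simple majority vote.
        votes = defaultdict(list)
        for query, doc_id, _worker, is_relevant in judgments:
            votes[(query, doc_id)].append(is_relevant)
        return {pair: Counter(labels).most_common(1)[0][0]
                for pair, labels in votes.items()}

    if __name__ == "__main__":
        for (query, doc_id), label in sorted(aggregate_by_majority(judgments).items()):
            print(f"{query} / {doc_id}: {'relevant' if label else 'not relevant'}")

In a real crowdsourced evaluation, one would typically also apply quality controls such as qualification tests or worker-agreement weighting; the simple majority vote here is only a stand-in for whatever aggregation the evaluation designer chooses.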
Type: Journal
Year: 2008
Where: SIGIR