Sciweavers

1900 search results for "Crowdsourcing for relevance evaluation" (page 87 of 380)
SIGIR 2004 (ACM)
Forming test collections with no system pooling
Forming test collection relevance judgments from the pooled output of multiple retrieval systems has become the standard process for creating resources such as the TREC, CLEF, and...
Mark Sanderson, Hideo Joho
WSDM 2010 (ACM)
Improving Quality of Training Data for Learning to Rank Using Click-Through Data
In information retrieval, the relevance of documents with respect to queries is usually judged by humans and used in the evaluation and/or learning of ranking functions. Previous work ha...
Jingfang Xu, Chuanliang Chen, Gu Xu, Hang Li, Elbi...
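
The WSDM 2010 entry above concerns deriving relevance labels for learning to rank from click-through logs. A standard heuristic in this line of work, and only an illustrative baseline rather than the method of this particular paper, is Joachims' skip-above rule: a clicked document is preferred over every unclicked document ranked above it. A minimal sketch, using a hypothetical click log:

# Joachims-style "skip-above" heuristic: a clicked document is preferred
# over every unclicked document that was ranked above it (the user is
# assumed to have examined and skipped those).

def preference_pairs(ranking, clicked):
    """Yield (preferred_doc, worse_doc) pairs from one result list.

    ranking: list of doc ids in the order they were shown.
    clicked: set of doc ids the user clicked.
    """
    pairs = []
    for i, doc in enumerate(ranking):
        if doc in clicked:
            pairs.extend(
                (doc, skipped)
                for skipped in ranking[:i]
                if skipped not in clicked
            )
    return pairs

# Example: the user skipped d1 and d2 but clicked d3.
print(preference_pairs(["d1", "d2", "d3", "d4"], {"d3"}))
# [('d3', 'd1'), ('d3', 'd2')]

Such pairs can feed a pairwise learning-to-rank objective; per its title, the paper's focus is on improving the quality of training data derived this way.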
IPM 2007
A retrospective study of a hybrid document-context based retrieval model
This paper describes a retrieval model based on the contexts of query terms in documents (i.e., document contexts). The model is novel because it explicitly takes into...
Ho Chung Wu, Robert W. P. Luk, Kam-Fai Wong, K. L....
ICDCSW 2007 (IEEE)
Context to Make You More Aware
The goal of our work is to help users make more informed choices about what physical activities they undertake. One example is to provide relevant information to help someone choo...
Adrienne H. Andrew, Yaw Anokwa, Karl Koscher, Jona...
SIGIR 2006 (ACM)
Bias and the limits of pooling
Modern retrieval test collections are built through a process called pooling in which only a sample of the entire document set is judged for each topic. The idea behind pooling is...
Chris Buckley, Darrin Dimmick, Ian Soboroff, Ellen...
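
Both SIGIR entries on pooling (2004 and 2006) refer to the same mechanism: each participating retrieval system contributes its top-k ranked documents per topic, the union of those contributions forms the pool, and only pooled documents receive relevance judgments. A minimal sketch, assuming a toy run format and a hypothetical pool depth:

# Pooling sketch (illustrative; the run data and depth are made up).
# Classic TREC ad hoc pools used depth 100.
POOL_DEPTH = 100

def build_pool(runs, k=POOL_DEPTH):
    """Build the set of documents to judge for each topic.

    runs: {system_name: {topic_id: [doc ids, ranked best-first]}}
    Returns {topic_id: set of doc ids in the pool}.
    """
    pool = {}
    for ranked_by_topic in runs.values():
        for topic, ranked in ranked_by_topic.items():
            # Each system contributes only its top-k documents.
            pool.setdefault(topic, set()).update(ranked[:k])
    return pool

# Toy example with two systems and one topic:
runs = {
    "sysA": {"401": ["d3", "d7", "d1"]},
    "sysB": {"401": ["d7", "d9", "d2"]},
}
print(build_pool(runs, k=2))
# pool for topic 401: {'d3', 'd7', 'd9'} (set order may vary)

Documents outside the pool are assumed non-relevant, which is exactly the sampling assumption whose bias the 2006 entry examines.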