Sciweavers

» Easy Tasks Dominate Information Retrieval Evaluation Results
CLEF
2006
Springer
The University of New South Wales at GeoCLEF 2006
This paper describes our participation in the GeoCLEF monolingual English task of the Cross Language Evaluation Forum 2006. The main objective of this study is to evaluate the retr...
You-Heng Hu, Linlin Ge
NAACL
2003
Unsupervised methods for developing taxonomies by combining syntactic and statistical information
This paper describes an unsupervised algorithm for placing unknown words into a taxonomy and evaluates its accuracy on a large and varied sample of words. The algorithm works by fi...
Dominic Widdows
SIGIR
2002
ACM
Document clustering with committees
Document clustering is useful in many information retrieval tasks: document browsing, organization and viewing of retrieval results, generation of Yahoo-like hierarchies of docume...
Patrick Pantel, Dekang Lin
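The abstract above mentions clustering documents for browsing and organizing retrieval results. As a minimal sketch of the general idea (not the paper's Clustering By Committee algorithm), documents can be grouped by cosine similarity of their term-frequency vectors; all names here are illustrative:

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Term-frequency vector as a Counter over lowercase whitespace tokens."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def greedy_cluster(docs, threshold=0.3):
    """Assign each document to the first cluster whose seed document is
    similar enough; otherwise start a new cluster with it as the seed."""
    clusters = []  # list of (seed_vector, [doc indices])
    for i, doc in enumerate(docs):
        vec = tf_vector(doc)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

docs = [
    "information retrieval evaluation results",
    "retrieval evaluation for information tasks",
    "syntax of natural language taxonomies",
]
print(greedy_cluster(docs))  # -> [[0, 1], [2]]
```

The first two documents share three terms, so they land in one cluster; the third shares none with the seed and starts its own.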
LREC
2010
Evaluation of Document Citations in Phase 2 Gale Distillation
The focus of information retrieval evaluations, such as NIST's TREC evaluations (e.g. Voorhees 2003), is on evaluation of the information content of system responses. On the ...
Olga Babko-Malaya, Dan Hunter, Connie Fournelle, J...
WSDM
2012
ACM
Effective query formulation with multiple information sources
Most standard information retrieval models use a single source of information (e.g., the retrieval corpus) for query formulation tasks such as term and phrase weighting and query ...
Michael Bendersky, Donald Metzler, W. Bruce Croft
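The abstract above contrasts single-source query formulation with using multiple information sources for term weighting. A common way to combine sources is linear interpolation of per-term scores; the sketch below is a hypothetical illustration of that idea (the source names, scores, and mixing weights are invented, not taken from the paper):

```python
def combined_term_weights(query_terms, sources, mixing_weights):
    """Linearly interpolate per-term importance scores from several
    sources (e.g., collection statistics, query logs).
    `sources` maps source name -> {term: score};
    `mixing_weights` maps source name -> interpolation weight (sums to 1)."""
    weights = {}
    for term in query_terms:
        weights[term] = sum(
            mixing_weights[name] * scores.get(term, 0.0)
            for name, scores in sources.items()
        )
    return weights

# Hypothetical scores for the query "hybrid car deals"
sources = {
    "corpus":    {"hybrid": 0.4, "car": 0.2, "deals": 0.1},
    "query_log": {"hybrid": 0.6, "car": 0.3, "deals": 0.5},
}
mixing = {"corpus": 0.7, "query_log": 0.3}
print(combined_term_weights(["hybrid", "car", "deals"], sources, mixing))
```

Terms that score well in any source retain weight after mixing, which is the intuition behind drawing evidence from more than the retrieval corpus alone.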