Sciweavers

43 search results - page 6 / 9
Related to: Catriple: Extracting Triples from Wikipedia Categories
KDD 2007 (ACM)
Corroborate and learn facts from the web
The web contains lots of interesting factual information about entities, such as celebrities, movies or products. This paper describes a robust bootstrapping approach to corrobora...
Shubin Zhao, Jonathan Betz
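The bootstrapping approach this abstract names can be illustrated with a minimal sketch: start from a few seed (entity, value) facts, turn the sentences that contain them into textual patterns, then match those patterns against unseen sentences to extract new facts. The corpus, seeds, and pattern format below are assumptions for illustration, not the paper's actual system.

```python
import re

# Toy corpus; a real system would process web-scale text (assumption, not the paper's data).
SENTENCES = [
    "Tom Hanks was born in 1956.",
    "Meryl Streep was born in 1949.",
    "Titanic was directed by James Cameron.",
]

# One seed fact is enough to bootstrap a pattern here.
SEEDS = {("Tom Hanks", "1956")}

def learn_patterns(sentences, seeds):
    """Turn each sentence containing a known (entity, value) pair into a template."""
    patterns = set()
    for s in sentences:
        for entity, value in seeds:
            if entity in s and value in s:
                patterns.add(s.replace(entity, "{e}").replace(value, "{v}"))
    return patterns

def apply_patterns(sentences, patterns):
    """Match the learned templates against sentences to extract new pairs."""
    facts = set()
    for p in patterns:
        regex = re.escape(p).replace(r"\{e\}", "(.+)").replace(r"\{v\}", r"(\d+)")
        for s in sentences:
            m = re.fullmatch(regex, s)
            if m:
                facts.add((m.group(1), m.group(2)))
    return facts

facts = apply_patterns(SENTENCES, learn_patterns(SENTENCES, SEEDS))
print(facts)  # includes the new fact ('Meryl Streep', '1949')
```

Corroboration, as the title suggests, would then keep only facts extracted by several independent patterns or sources; this sketch stops at the extraction step.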
CLEF 2009 (Springer)
Where in the Wikipedia Is That Answer? The XLDB at the GikiCLEF 2009 Task
GikiCLEF focused on evaluating systems' reasoning capabilities to provide correct answers to geographically challenging topics. As we did not have previous experience ...
Nuno Cardoso, David Batista, Francisco J. Ló...
SIGIR 2011 (ACM)
No free lunch: brute force vs. locality-sensitive hashing for cross-lingual pairwise similarity
This work explores the problem of cross-lingual pairwise similarity, where the task is to extract similar pairs of documents across two different languages. Solutions to this pro...
Ferhan Ture, Tamer Elsayed, Jimmy J. Lin
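The locality-sensitive hashing side of the trade-off this abstract names can be sketched with random-hyperplane LSH for cosine similarity: each random hyperplane contributes one sign bit, and the Hamming distance between two bit signatures approximates the angle between the original document vectors. The dimensions, bit count, and vectors below are arbitrary assumptions, not the paper's configuration.

```python
import random

random.seed(42)
DIM, BITS = 8, 16

# Random hyperplanes; each contributes one bit of the signature.
PLANES = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(BITS)]

def signature(vec):
    """Hash a vector to a BITS-bit sketch: one sign bit per hyperplane."""
    return tuple(
        1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
        for plane in PLANES
    )

def hamming(a, b):
    """Number of differing bits; approximates the angle between the vectors."""
    return sum(x != y for x, y in zip(a, b))

doc  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
near = [v + 0.01 for v in doc]   # almost identical document vector
far  = [-v for v in doc]         # opposite direction

print(hamming(signature(doc), signature(near)))  # small (typically 0)
print(hamming(signature(doc), signature(far)))   # every sign bit flips: 16
```

The "no free lunch" point is that comparing short signatures instead of full vectors trades exactness for speed: brute force compares every pair precisely, while LSH only guarantees that similar pairs collide with high probability.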
WWW 2010 (ACM)
Not so creepy crawler: easy crawler generation with standard xml queries
Web crawlers are increasingly used for focused tasks such as the extraction of data from Wikipedia or the analysis of social networks like last.fm. In these cases, pages are far m...
Franziska von dem Bussche, Klara A. Weiand, Benedi...
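The idea of steering a crawler with standard XML queries can be sketched as follows: instead of following every link on a page, the crawler evaluates an XPath-style query and only follows the links it selects. This is a generic illustration using the limited XPath support in Python's standard library, not the paper's query language; the page markup and `infobox` class are invented for the example.

```python
import xml.etree.ElementTree as ET

# A toy XHTML page; a real crawler would fetch this over HTTP.
PAGE = """
<html>
  <body>
    <div class="infobox">
      <a href="/wiki/Data_mining">Data mining</a>
      <a href="/wiki/Web_crawler">Web crawler</a>
    </div>
    <div class="footer">
      <a href="/about">About</a>
    </div>
  </body>
</html>
"""

def links_to_follow(xhtml, query=".//div[@class='infobox']/a"):
    """Select only the links matched by the query, keeping the crawl focused
    on relevant page regions instead of following every href."""
    root = ET.fromstring(xhtml)
    return [a.get("href") for a in root.findall(query)]

print(links_to_follow(PAGE))  # ['/wiki/Data_mining', '/wiki/Web_crawler']
```

Swapping in a different query (e.g. one targeting the footer) redirects the crawl without touching any crawler code, which is the declarative appeal the title alludes to.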
SIGMOD 2009 (ACM)
Do we mean the same?: disambiguation of extracted keyword queries for database search
Users often try to accumulate information on a topic of interest from multiple information sources. In this case a user's informational need might be expressed in terms of an...
Elena Demidova, Irina Oelze, Peter Fankhauser
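The disambiguation problem this abstract names can be sketched minimally: a keyword query is ambiguous because each keyword may bind to several database attributes, so the system enumerates the possible structured interpretations before ranking or executing them. The schema, attribute names, and values below are assumptions for illustration only.

```python
from itertools import product

# Toy schema mapping attributes to the values they contain
# (assumption: invented for illustration, not the paper's data).
SCHEMA = {
    "title": {"jaguar", "casablanca"},
    "brand": {"jaguar", "ford"},
}

def interpretations(keywords):
    """Enumerate every way of binding each keyword to a schema attribute
    that actually contains it; each combination is one candidate structured query."""
    options = [
        [(attr, kw) for attr, values in SCHEMA.items() if kw in values]
        for kw in keywords
    ]
    return [dict(combo) for combo in product(*options)]

print(interpretations(["jaguar"]))
# [{'title': 'jaguar'}, {'brand': 'jaguar'}]  -- the movie vs. the car brand
```

A real system would then rank these interpretations, e.g. by value frequency or user context; the sketch stops at enumeration.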