Sciweavers

215 search results - page 8 / 43
» Open Information Extraction Using Wikipedia
WSDM
2012
ACM
WebSets: extracting sets of entities from the web using unsupervised information extraction
We describe an open-domain information extraction method for extracting concept-instance pairs from an HTML corpus. Most earlier approaches to this problem rely on combining cluste...
Bhavana Bharat Dalvi, William W. Cohen, Jamie Call...
COMAD
2008
Kshitij: A Search and Page Recommendation System for Wikipedia
Semantic information helps in identifying the context of a document. It is interesting to explore how effectively this information can be used in recommending related docume...
Phanikumar Bhamidipati, Kamalakar Karlapalem
NLDB
2007
Springer
Applying Wikipedia's Multilingual Knowledge to Cross-Lingual Question Answering
The application of the multilingual knowledge encoded in Wikipedia to an open-domain Cross-Lingual Question Answering system based on the Inter Lingual Index (ILI) module of Eu...
Sergio Ferrández, Antonio Toral, Ósc...
BMCBI
2008
OpenDMAP: An open source, ontology-driven concept analysis engine, with applications to capturing knowledge regarding protein tr
Background: Information extraction (IE) efforts are widely acknowledged to be important in harnessing the rapid advance of biomedical knowledge, particularly in areas where import...
Lawrence Hunter, Zhiyong Lu, James Firby, William ...
CORR
2008
Springer
Clustering of scientific citations in Wikipedia
The instances of templates in Wikipedia form an interesting data set of structured information. Here I focus on the cite journal template, which is primarily used for citation to art...
Finn Årup Nielsen