Sciweavers

252 search results, page 35 of 51
Search query: Using Wikipedia to bootstrap open information extraction
AAAI 2008
Text Categorization with Knowledge Transfer from Heterogeneous Data Sources
Multi-category classification of short dialogues is a common task performed by humans. When assigning a question to an expert, a customer service operator tries to classify the cu...
Rakesh Gupta, Lev-Arie Ratinov
LREC 2010
Question Answering Biographic Information and Social Network Powered by the Semantic Web
After several years of development, the vision of the Semantic Web is gradually becoming reality. Large data repositories have been created and offer semantic information in a mac...
Peter Adolphs, Xiwen Cheng, Tina Klüwer, Hans...
ELPUB 2006 (ACM)
Automated Building of OAI Compliant Repository from Legacy Collection
In this paper, we report on our experience with the creation of an automated, human-assisted process to extract metadata from documents in a large (>100,000), dynamically growi...
Jianfeng Tang, Kurt Maly, Steven J. Zeil, Mohammad...
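
The paper's extraction pipeline is not reproduced here, but the OAI-PMH side of such a repository can be illustrated with a minimal harvesting sketch in Python. The endpoint URL below is a placeholder rather than the authors' system; only standard OAI-PMH features (the ListRecords verb with the oai_dc metadata prefix) are assumed, and the sketch fetches a single response page without following resumption tokens.

# Minimal OAI-PMH harvesting sketch (endpoint URL is hypothetical, not from the paper).
import xml.etree.ElementTree as ET
import requests

OAI_ENDPOINT = "https://repository.example.org/oai"  # placeholder OAI-PMH base URL
DC_NS = "{http://purl.org/dc/elements/1.1/}"         # Dublin Core element namespace

def list_titles(endpoint: str) -> list[str]:
    """Fetch one page of a ListRecords response and return the Dublin Core titles."""
    resp = requests.get(endpoint, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return [el.text for el in root.iter(f"{DC_NS}title") if el.text]

if __name__ == "__main__":
    for title in list_titles(OAI_ENDPOINT):
        print(title)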
BMCBI 2010
Multivariate meta-analysis of proteomics data from human prostate and colon tumours
Background: There is a vast need to find clinically applicable protein biomarkers as support in cancer diagnosis and tumour classification. In proteomics research, a number of met...
Lina Hultin Rosenberg, Bo Franzén, Gert Aue...
WSDM 2010 (ACM)
Gathering and Ranking Photos of Named Entities with High Precision, High Recall, and Diversity
Knowledge-sharing communities like Wikipedia and automated extraction methods like those of DBpedia enable the construction of large machine-processible knowledge bases with relat...
Bilyana Taneva, Mouna Kacimi, Gerhard Weikum
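
As an illustration of what a machine-processible knowledge base such as DBpedia exposes, the sketch below issues a standard SPARQL query against the public DBpedia endpoint. It is not the paper's photo-gathering or ranking method; the example entity and the selected property are arbitrary choices for demonstration.

# Illustrative SPARQL lookup against DBpedia (not the paper's ranking method).
import requests

ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?abstract WHERE {
  dbr:Albert_Einstein dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "application/sparql-results+json"})
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    # Print the first 200 characters of the English abstract for the entity.
    print(row["abstract"]["value"][:200])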