Sciweavers

166 search results - page 12 / 34
» Semi-supervised Learning from Only Positive and Unlabeled Da...
ALT 2006 (Springer)
Iterative Learning from Positive Data and Negative Counterexamples
A model for learning in the limit is defined where a (so-called iterative) learner gets all positive examples from the target language, tests every new conjecture with a teacher ...
Sanjay Jain, Efim B. Kinber
JMLR 2008
Closed Sets for Labeled Data
Closed sets have been proven successful in the context of compacted data representation for association rule learning. However, their use is mainly descriptive, dealing only with ...
Gemma C. Garriga, Petra Kralj, Nada Lavrac
AUSAI 2008 (Springer)
Learning to Find Relevant Biological Articles without Negative Training Examples
Classifiers are traditionally learned using sets of positive and negative training examples. However, often a classifier is required, but for training only an incomplete set of pos...
Keith Noto, Milton H. Saier Jr., Charles Elkan
SIGIR 2008 (ACM)
Learning from labeled features using generalized expectation criteria
It is difficult to apply machine learning to new domains because we often lack labeled problem instances. In this paper, we provide a solution to this problem that leverages domai...
Gregory Druck, Gideon S. Mann, Andrew McCallum
PAKDD 2005 (Springer)
SETRED: Self-training with Editing
Self-training is a semi-supervised learning algorithm in which a learner repeatedly labels unlabeled examples and retrains itself on the enlarged labeled training set. Since the s...
Ming Li, Zhi-Hua Zhou
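The SETRED abstract above describes the core self-training loop. A minimal sketch of that loop, for illustration only: it uses a nearest-centroid base learner and a distance-margin confidence score, both our own assumptions, and does not implement SETRED's editing step. All function names and parameters here are invented for the example.

```python
import numpy as np

def fit_centroids(X, y):
    # Per-class mean vectors serve as a very simple base classifier.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_conf(centroids, X):
    # Predict the nearest centroid's class; confidence is the margin
    # between the best and second-best class distances (an assumption,
    # not the paper's confidence measure).
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes],
                 axis=1)
    pred = np.array(classes)[d.argmin(axis=1)]
    two = np.sort(d, axis=1)[:, :2]
    conf = two[:, 1] - two[:, 0]
    return pred, conf

def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=5):
    # Repeatedly label the most confident unlabeled points and refit
    # on the enlarged labeled set.
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        cents = fit_centroids(X_lab, y_lab)
        pred, conf = predict_with_conf(cents, pool)
        picked = np.argsort(conf)[::-1][:per_round]  # most confident first
        X_lab = np.vstack([X_lab, pool[picked]])
        y_lab = np.concatenate([y_lab, pred[picked]])
        pool = np.delete(pool, picked, axis=0)
    return fit_centroids(X_lab, y_lab)
```

A fixed per-round quota is only one design choice; a confidence threshold is equally common. SETRED's contribution, per the abstract, is editing out likely-mislabeled additions, which this sketch omits.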