To solve the knowledge bottleneck problem, active learning has been widely used for its ability to automatically select the most informative unlabeled examples for human annotation...
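A minimal sketch of the pool-based uncertainty-sampling loop such active learning methods build on: fit the model on the current labeled set, send the pool example it is least certain about to an annotator, and repeat. The logistic-regression learner, margin criterion, query budget, and synthetic oracle below are illustrative assumptions, not details of the work cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learn(X_lab, y_lab, X_pool, oracle, n_queries=10):
    """Repeatedly query the pool example the current model is least sure about."""
    X_lab, y_lab = list(X_lab), list(y_lab)
    remaining = list(range(len(X_pool)))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(min(n_queries, len(remaining))):
        clf.fit(np.array(X_lab), np.array(y_lab))
        probs = clf.predict_proba(X_pool[remaining])
        top2 = np.sort(probs, axis=1)[:, -2:]
        margins = top2[:, 1] - top2[:, 0]        # small margin = high uncertainty
        pick = remaining.pop(int(np.argmin(margins)))
        X_lab.append(X_pool[pick])               # the chosen example gets annotated
        y_lab.append(oracle(pick))               # `oracle` stands in for the human
    return clf

# Toy usage: the "annotator" is a lookup into hidden ground-truth labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = active_learn(X[:20], y[:20], X[20:], oracle=lambda i: y[20:][i])
```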
Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, Matthe...
Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tra...
We present a description of three different algorithms that use background knowledge to improve text classifiers. One uses the background knowledge as an index into the set of tra...
We investigate the following data mining problem from Computational Chemistry: From a large data set of compounds, find those that bind to a target molecule in as few iterations o...
To classify a large number of unlabeled examples, we combine a limited number of labeled examples with a Markov random walk representation over the unlabeled examples. The random w...
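A rough way to make that idea concrete is a t-step random walk on a neighborhood graph built over all points, scoring each point by how likely a walk starting there is to end at a labeled point of each class. The kNN construction, Gaussian kernel width, and step count below are assumptions chosen for illustration; this is a simplified relative of such graph-based methods, not the cited paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def random_walk_labels(X, y, n_neighbors=5, sigma=0.5, t_steps=8):
    """y holds class ids for labeled points and -1 for unlabeled points."""
    y = np.asarray(y)
    # Symmetric kNN graph with Gaussian edge weights and self-loops.
    W = kneighbors_graph(X, n_neighbors, mode='distance', include_self=False).toarray()
    W = np.where(W > 0, np.exp(-(W ** 2) / (2 * sigma ** 2)), 0.0)
    W = np.maximum(W, W.T)
    np.fill_diagonal(W, 1.0)
    P = W / W.sum(axis=1, keepdims=True)         # one-step transition matrix
    Pt = np.linalg.matrix_power(P, t_steps)      # t-step transition probabilities
    labeled = np.flatnonzero(y != -1)
    classes = np.unique(y[labeled])
    # Probability that a t-step walk from each point ends at a labeled point of class c.
    scores = np.stack([Pt[:, labeled[y[labeled] == c]].sum(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]
```

Called with y set to -1 for the unlabeled points, it returns a predicted class for every point, so the few available labels are effectively propagated through the graph.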
Interactively learning from a small sample of unlabeled examples is an enormously challenging task. Relevance feedback and, more recently, active learning are two standard ...
Charlie K. Dagli, ShyamSundar Rajaram, Thomas S. H...
The problem of classification from positive and unlabeled examples currently attracts much attention. However, when the number of unlabeled negative examples is very sma...
Xiaoling Wang, Zhen Xu, Chaofeng Sha, Martin Ester...
A huge amount of manual effort is required to annotate large image/video archives with text annotations. Several recent works have attempted to automate this task by employing supervis...
Self-training is a semi-supervised learning algorithm in which a learner repeatedly labels unlabeled examples and retrains itself on the enlarged labeled training set. Since the s...
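The loop itself is short. Below is a hedged sketch in which only pseudo-labels above a confidence threshold are absorbed into the training set, one common way of limiting the noise that self-labeled examples introduce; the base classifier, threshold, and round limit are illustrative assumptions rather than any specific paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Fit, pseudo-label the unlabeled pool, keep confident predictions, repeat."""
    X_lab, y_lab, X_unlab = np.array(X_lab), np.array(y_lab), np.array(X_unlab)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        probs = clf.predict_proba(X_unlab)
        keep = probs.max(axis=1) >= threshold     # only trust confident pseudo-labels
        if not keep.any():
            break                                 # nothing confident enough: stop early
        pseudo = clf.classes_[probs[keep].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~keep]
    return clf
```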
We study the problem of learning from positive and unlabeled examples. Although several techniques exist for dealing with this problem, they all assume that positive exam...
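For context, a bare-bones sketch of the common two-step heuristic for learning from positive and unlabeled (PU) data: first treat the unlabeled set as negative to identify "reliable negatives", then retrain on the positives versus those. The fraction of reliable negatives and the logistic-regression learner are assumptions for illustration; this generic recipe is not the specific algorithm of either PU abstract above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pu_two_step(X_pos, X_unlab, reliable_frac=0.3):
    X_pos, X_unlab = np.array(X_pos), np.array(X_unlab)
    # Step 1: positives vs. all unlabeled; the unlabeled examples scored as
    # least positive-like are kept as reliable negatives.
    X = np.vstack([X_pos, X_unlab])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlab))])
    step1 = LogisticRegression(max_iter=1000).fit(X, y)
    pos_scores = step1.predict_proba(X_unlab)[:, 1]
    n_rel = max(1, int(reliable_frac * len(X_unlab)))
    reliable_neg = X_unlab[np.argsort(pos_scores)[:n_rel]]
    # Step 2: retrain on positives vs. reliable negatives only.
    X2 = np.vstack([X_pos, reliable_neg])
    y2 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(reliable_neg))])
    return LogisticRegression(max_iter=1000).fit(X2, y2)
```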