Semi-Supervised Learning with Very Few Labeled Training Examples

In semi-supervised learning, a number of labeled examples are usually required to train an initial weakly useful predictor, which is in turn used to exploit the unlabeled examples. However, many real-world applications offer very few labeled training examples, which makes the weakly useful predictor difficult to generate, so these semi-supervised learning methods cannot be applied. This paper proposes a method that works under a two-view setting. By exploiting the correlations between the views through canonical correlation analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.
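
The abstract only sketches the approach, so the following is a minimal, hypothetical illustration of the general idea rather than the authors' actual algorithm: project two views of the data into a maximally correlated subspace with canonical correlation analysis (CCA), then propagate the single label to the unlabeled examples that are nearest to the lone labeled example in that subspace. The synthetic views X1 and X2, the choice of labeled index, and the nearest-neighbour pseudo-labeling step are all illustrative assumptions.

```python
# Minimal sketch: CCA over two views, then pseudo-labeling from one labeled example.
# This is an assumption-laden illustration, not the paper's exact method.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d1, d2 = 200, 30, 25
shared = rng.normal(size=(n, 5))                      # latent structure shared by both views
X1 = shared @ rng.normal(size=(5, d1)) + 0.1 * rng.normal(size=(n, d1))
X2 = shared @ rng.normal(size=(5, d2)) + 0.1 * rng.normal(size=(n, d2))

labeled_idx, labeled_y = 0, 1                         # the single labeled training example (assumed)

# Project both views into the subspace where they are maximally correlated.
cca = CCA(n_components=3)
Z1, Z2 = cca.fit_transform(X1, X2)
Z = np.hstack([Z1, Z2])                               # combined two-view representation

# Rank unlabeled examples by distance to the lone labeled example in the
# CCA subspace; the closest ones receive its label as pseudo-labels.
dists = np.linalg.norm(Z - Z[labeled_idx], axis=1)
candidates = np.argsort(dists)[1:11]                  # 10 nearest neighbours, excluding itself
pseudo_labels = {int(i): labeled_y for i in candidates}
print(pseudo_labels)
```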
Type: Conference
Year: 2007
Where: AAAI
Authors: Zhi-Hua Zhou, De-Chuan Zhan, Qiang Yang