
WAIM 2009, Springer

Kernel-Based Transductive Learning with Nearest Neighbors

In the k-nearest neighbor (KNN) classifier, the nearest neighbors are drawn only from labeled data, which makes it poorly suited to data sets that contain very few labeled examples. In this paper, we address the classification problem by applying transduction to the KNN algorithm. For each data point we consider two groups of nearest neighbors: one from the labeled data and the other from the unlabeled data. A kernel function assigns weights to the neighbors. We derive a recurrence relation over neighboring data points and then present two solutions to the classification problem. The first solves the problem by matrix computation and is suited to small and medium-size data sets. The second is an iterative algorithm for large data sets that minimizes an energy function during the iteration. Experiments show that our solutions achieve high performance and that the iterative algorithm converges quickly. Keywords: KNN, transductive learning, semi-supervised learning, kernel function
Liangcai Shu, Jinhui Wu, Lei Yu, Weiyi Meng
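
To make the approach concrete, the Python sketch below illustrates the general idea described in the abstract: each unlabeled point collects one group of labeled nearest neighbors and one group of unlabeled nearest neighbors, weights both groups with a kernel, and iteratively refines its soft label until the updates become negligible. The Gaussian kernel, the parameter names (k_l, k_u, sigma), and the simple weighted-average update are assumptions made for illustration only; the paper's actual recurrence relation, energy function, and matrix-based solution are not reproduced here.

import numpy as np

def gaussian_kernel(dist, sigma=1.0):
    # RBF kernel weight for a distance value (sigma is an assumed parameter).
    return np.exp(-dist ** 2 / (2.0 * sigma ** 2))

def transductive_knn(X_l, y_l, X_u, k_l=3, k_u=3, sigma=1.0, n_iter=200, tol=1e-5):
    """Propagate soft labels to unlabeled points from kernel-weighted
    labeled and unlabeled neighbors until the labels stop changing."""
    classes = np.unique(y_l)
    n_u, n_c = X_u.shape[0], classes.size

    # One-hot labels for the labeled points (kept fixed throughout).
    F_l = (y_l[:, None] == classes[None, :]).astype(float)
    # Soft labels for the unlabeled points, initialized to uniform.
    F_u = np.full((n_u, n_c), 1.0 / n_c)

    # Distances from each unlabeled point to labeled and to other unlabeled points.
    D_ul = np.linalg.norm(X_u[:, None, :] - X_l[None, :, :], axis=2)
    D_uu = np.linalg.norm(X_u[:, None, :] - X_u[None, :, :], axis=2)
    np.fill_diagonal(D_uu, np.inf)          # a point is not its own neighbor

    # Indices and kernel weights of the two neighbor groups per unlabeled point.
    nn_l = np.argsort(D_ul, axis=1)[:, :k_l]
    nn_u = np.argsort(D_uu, axis=1)[:, :k_u]
    W_l = gaussian_kernel(np.take_along_axis(D_ul, nn_l, axis=1), sigma)
    W_u = gaussian_kernel(np.take_along_axis(D_uu, nn_u, axis=1), sigma)

    for _ in range(n_iter):
        F_new = np.empty_like(F_u)
        for i in range(n_u):
            # Kernel-weighted average of the fixed labeled neighbors and the
            # current soft labels of the unlabeled neighbors.
            votes = W_l[i] @ F_l[nn_l[i]] + W_u[i] @ F_u[nn_u[i]]
            F_new[i] = votes / (W_l[i].sum() + W_u[i].sum())
        if np.abs(F_new - F_u).max() < tol:  # stop once updates are negligible
            F_u = F_new
            break
        F_u = F_new

    return classes[np.argmax(F_u, axis=1)]

A call such as transductive_knn(X_l, y_l, X_u) returns one predicted class per row of X_u. Because the update is linear in the soft labels, the same fixed point could also be obtained in closed form by solving the corresponding linear system, which mirrors the matrix-computation route mentioned in the abstract for small and medium-size data sets.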