Locally adaptive classifiers are usually superior to a single global classifier. However, there are two major problems in designing locally adaptive classifiers: first, where to place the local classifiers, and second, how to combine them. In this paper, instead of placing the classifiers based on the data distribution alone, we propose a responsibility mixture model that uses the uncertainty associated with the classification of each training sample. Using this model, the local classifiers are placed near the decision boundary, where they are most effective. A set of local classifiers is then learned to form a global classifier by maximizing an estimate of the probability that the samples will be correctly classified with a nearest neighbor classifier. Experimental results on both artificial and real-world data sets demonstrate the superiority of the proposed method over traditional algorithms.
Juan Dai, Shuicheng Yan, Xiaoou Tang, James T. Kwok
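To make the idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's exact formulation: it estimates per-sample classification uncertainty with the entropy of a k-NN class posterior, places local-model centres with uncertainty-weighted k-means (so they drift toward the decision boundary), and combines local logistic regressions through soft Gaussian responsibilities rather than the paper's nearest-neighbor objective. All names and parameter choices (`n_local`, `bandwidth`, the entropy-based uncertainty) are assumptions made for illustration.

```python
# Hedged sketch: uncertainty-guided placement of local classifiers,
# combined by soft responsibilities. Illustrative only; the uncertainty
# measure, placement rule, and local models are assumptions, not the
# authors' exact responsibility mixture model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Two overlapping Gaussian classes in 2-D.
X = np.vstack([rng.normal([-1, 0], 1.0, (200, 2)),
               rng.normal([+1, 0], 1.0, (200, 2))])
y = np.repeat([0, 1], 200)

# Per-sample uncertainty: entropy of k-NN class posteriors
# (high near the decision boundary, low deep inside each class).
knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)
p = knn.predict_proba(X)
uncertainty = -np.sum(p * np.log(p + 1e-12), axis=1)

# Uncertainty-weighted k-means: centres are pulled toward regions
# where classification is hardest, i.e. near the boundary.
n_local = 4
centres = X[rng.choice(len(X), n_local, replace=False)]
for _ in range(50):
    d = np.linalg.norm(X[:, None] - centres[None], axis=2)
    assign = d.argmin(axis=1)
    for m in range(n_local):
        w = uncertainty * (assign == m)
        if w.sum() > 0:
            centres[m] = (w[:, None] * X).sum(0) / w.sum()

def responsibilities(X_in, centres, bandwidth=1.0):
    """Soft responsibility of each local model for each sample."""
    d2 = ((X_in[:, None] - centres[None]) ** 2).sum(axis=2)
    r = np.exp(-d2 / (2 * bandwidth ** 2))
    return r / r.sum(axis=1, keepdims=True)

# Each local classifier is trained on responsibility-weighted samples.
R = responsibilities(X, centres)
local = [LogisticRegression().fit(X, y, sample_weight=R[:, m])
         for m in range(n_local)]

def predict(X_new):
    """Global prediction: responsibility-weighted mixture of local posteriors."""
    R_new = responsibilities(X_new, centres)
    probs = np.stack([clf.predict_proba(X_new)[:, 1] for clf in local], axis=1)
    return ((R_new * probs).sum(axis=1) > 0.5).astype(int)

print("training accuracy:", (predict(X) == y).mean())
```

The key design choice mirrored from the abstract is that placement is driven by classification uncertainty rather than by the data density alone, so the mixture components concentrate where local expertise pays off; the combination step here is a simple soft vote, standing in for the paper's probability-of-correct-classification objective.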