An algorithm is proposed for pruning the prototype vectors (prototype selection) used in a nearest neighbor classifier, so that a compact classifier with similar or even better performance can be obtained. The pruning procedure is error-based: a prototype is pruned if its deletion leads to the smallest increase in classification error. Each pruning iteration is followed by one epoch of Learning Vector Quantization (LVQ) training. Simulation results show that the selected prototypes can approach optimal or near-optimal locations with respect to the training data distribution.
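The following is a minimal sketch of the prune-and-retrain loop described above, assuming a 1-NN classifier over the prototype set and standard LVQ1 updates; the function names, learning rate, and stopping criterion (a target prototype count) are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def nn_error(prototypes, proto_labels, X, y):
        """Classification error of a 1-NN classifier defined by the prototypes."""
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        pred = proto_labels[np.argmin(d, axis=1)]
        return np.mean(pred != y)

    def lvq1_epoch(prototypes, proto_labels, X, y, lr=0.05):
        """One LVQ1 epoch: pull the winning prototype toward a correctly
        classified sample, push it away from a misclassified one."""
        for x, t in zip(X, y):
            w = np.argmin(np.linalg.norm(prototypes - x, axis=1))
            sign = 1.0 if proto_labels[w] == t else -1.0
            prototypes[w] += sign * lr * (x - prototypes[w])
        return prototypes

    def prune_prototypes(prototypes, proto_labels, X, y, n_target):
        """Repeatedly delete the prototype whose removal increases the
        training error the least, then run one LVQ1 epoch on the survivors."""
        while len(prototypes) > n_target:
            errors = [
                nn_error(np.delete(prototypes, i, axis=0),
                         np.delete(proto_labels, i), X, y)
                for i in range(len(prototypes))
            ]
            i_best = int(np.argmin(errors))  # smallest error increase
            prototypes = np.delete(prototypes, i_best, axis=0)
            proto_labels = np.delete(proto_labels, i_best)
            prototypes = lvq1_epoch(prototypes, proto_labels, X, y)
        return prototypes, proto_labels

The one epoch of LVQ after each deletion lets the remaining prototypes shift toward the region vacated by the pruned one, which is what allows the reduced set to retain, or improve on, the original classification accuracy.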
Jiang Li, Michael T. Manry, Changhua Yu