It is well known that editing techniques can be applied to (large) sets of prototypes to bring the error rate of the Nearest Neighbour classifier close to the optimal Bayes risk. In practice, however, these techniques tend to behave much worse than the asymptotic predictions suggest. A novel editing technique is introduced here which explicitly aims at obtaining a good editing rule for each given prototype set. This is achieved by first learning an adequate weight for each prototype and then pruning out those prototypes with large weights. Experiments are presented which clearly show the superiority of the new method, especially for small data sets and/or high dimensions.
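To make the weight-then-prune scheme concrete, the sketch below illustrates the general idea on a toy problem. It is a minimal sketch, not the paper's learning rule: the multiplicative weight update, the injected label noise, the 85th-percentile pruning threshold, and the synthetic data are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class problem (illustrative data only).
def make_data(m):
    X = rng.normal(size=(m, 2)) + np.where(rng.random(m) < 0.5, 0.0, 2.0)[:, None]
    y = (X[:, 0] + X[:, 1] > 2.0).astype(int)
    return X, y

n = 200
X, y = make_data(n)
flip = rng.random(n) < 0.10              # inject 10% label noise into the training set
y[flip] = 1 - y[flip]

# Learn one weight per prototype. Hypothetical update rule: for each training
# point, find its nearest neighbour (leave-one-out, distances scaled by the
# neighbour's weight) and inflate that neighbour's weight whenever it causes
# an error, so unreliable prototypes accumulate large weights.
w = np.ones(n)
for epoch in range(20):
    for i in range(n):
        d = w * np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                    # leave-one-out: exclude the point itself
        j = np.argmin(d)
        if y[j] != y[i]:
            w[j] *= 1.1                  # offending prototype: grow its weight
        else:
            w[j] = max(1.0, w[j] * 0.99) # helpful prototype: decay back towards 1

# Edit: prune the prototypes whose weights grew large.
# The 85th-percentile threshold is an arbitrary illustrative choice.
keep = w < np.quantile(w, 0.85)
X_ed, y_ed = X[keep], y[keep]

def nn_error(Xtr, ytr, Xte, yte):
    """Plain (unweighted) 1-NN error of a prototype set on test data."""
    pred = np.array([ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))] for x in Xte])
    return float(np.mean(pred != yte))

Xte, yte = make_data(500)                # noise-free test sample
print(f"kept {keep.sum()} of {n} prototypes")
print(f"1-NN error, raw set   : {nn_error(X, y, Xte, yte):.3f}")
print(f"1-NN error, edited set: {nn_error(X_ed, y_ed, Xte, yte):.3f}")
```

Note that the edited set is used with an ordinary, unweighted 1-NN rule at classification time; the learned weights serve only to flag prototypes for removal.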