Standard learning procedures are better suited to estimation than to classification problems, and focusing training on appropriate samples yields performance advantages in classification tasks. In this paper, we combine these ideas by creating smooth targets for classification through a convex combination of the original target and the output of an auxiliary classifier, with the combination parameter a function of the auxiliary classifier's error. Experimental results with Multilayer Perceptron architectures support the usefulness of this approach.
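The target-smoothing construction described above can be sketched as follows. The function name, the per-sample error measure, the exponential form of the mixing weight, and the parameter `beta` are all illustrative assumptions for this sketch; the paper's actual choice of combination parameter is only stated to be a function of the auxiliary classifier error.

```python
import numpy as np

def smooth_targets(targets, aux_outputs, beta=1.0):
    """Sketch of smoothed-target construction (hypothetical form).

    targets:     (N, C) one-hot original targets
    aux_outputs: (N, C) auxiliary classifier posteriors in [0, 1], rows sum to 1
    beta:        assumed scale for the error-dependent mixing weight
    """
    # Per-sample auxiliary error: 1 minus the probability the auxiliary
    # classifier assigns to the true class (an assumed error measure).
    aux_error = 1.0 - np.sum(targets * aux_outputs, axis=1, keepdims=True)
    # Hypothetical mixing weight: rely on the auxiliary output less
    # (lam closer to 0) when its error on the sample is larger.
    lam = np.exp(-beta * aux_error)  # lam in (0, 1]
    # Convex combination of the original target and the auxiliary output.
    return (1.0 - lam) * targets + lam * aux_outputs
```

Because each smoothed target is a convex combination of two probability vectors, its entries stay in [0, 1] and each row still sums to 1, so it can be used directly as a training target for a Multilayer Perceptron.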