In this paper, we present an empirical comparison of four different schemes for coding the outputs of a Multilayer Feedforward network. Results are obtained for eight different classification problems from the UCI repository of machine learning databases. Our results show that the usual coding is superior to the rest when one output unit per class is used. However, if several output units per class are used, an improvement in generalization performance can be obtained, and in this case the noisy coding seems to be more appropriate.
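The abstract does not define the individual schemes, so the following Python sketch only illustrates the general idea of output target coding with one versus several output units per class. The specific "noisy" construction shown (repeating each class codeword over a block of units and perturbing the targets with small random noise) is an assumption made for illustration, not the paper's definition; the function names and parameters are likewise hypothetical.

```python
import numpy as np

def one_per_class_targets(labels, n_classes):
    """Usual coding: one output unit per class, target 1 for the
    true class and 0 for the others (one-of-c / one-hot coding)."""
    targets = np.zeros((len(labels), n_classes))
    targets[np.arange(len(labels)), labels] = 1.0
    return targets

def noisy_block_targets(labels, n_classes, units_per_class=3, noise=0.1, rng=None):
    """Illustrative 'noisy' coding with several output units per class:
    the block of units assigned to the true class is set near 1 and the
    rest near 0, with small random perturbations added to the targets.
    This construction is an assumption, not the scheme from the paper."""
    rng = np.random.default_rng(rng)
    n_outputs = n_classes * units_per_class
    targets = np.zeros((len(labels), n_outputs))
    for i, c in enumerate(labels):
        targets[i, c * units_per_class:(c + 1) * units_per_class] = 1.0
    # Perturb the targets and clip to [0, 1], the range of a sigmoid output unit.
    targets += rng.uniform(-noise, noise, size=targets.shape)
    return np.clip(targets, 0.0, 1.0)

# Example: three classes, a few labelled patterns.
labels = np.array([0, 2, 1])
print(one_per_class_targets(labels, n_classes=3))
print(noisy_block_targets(labels, n_classes=3, units_per_class=2, noise=0.1, rng=0))
```

Under this sketch, classification at test time would assign a pattern to the class whose block of output units best matches the network's actual outputs (for example, by nearest codeword), which is one common way of decoding multi-unit output codes.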