Abstract. Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ("early stopping"). The exact criterion used for validation-based early stopping, however, is usually chosen in an ad-hoc fashion or training is stopped interactively. This trick describes how to select a stopping criterion in a systematic fashion; it is a trick for either speeding learning procedures or improving generalization, whichever is more important in the particular situation. An empirical investigation on multi-layer perceptrons shows that there exists a tradeoff between training time and generalization: From the given mix of 1296 training runs using 12 different problems and 24 different network architectures I conclude that slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about factor 4 longer on average).

1 Early s...
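
To make the basic procedure concrete, the following is a minimal sketch of validation-based early stopping, assuming a simple "patience" criterion (stop when the validation error has not improved for a fixed number of epochs). This is only one of many possible stopping criteria, not the specific criteria studied in this trick; the names train_one_epoch and validation_error are hypothetical placeholders for the user's own training and evaluation routines.

    def train_with_early_stopping(model, train_data, val_data,
                                  train_one_epoch, validation_error,
                                  patience=5, max_epochs=1000):
        # Hypothetical sketch: monitor validation error after each epoch and
        # stop training before convergence once the criterion triggers.
        best_val = float("inf")    # lowest validation error seen so far
        best_state = None          # model parameters at that point
        epochs_since_best = 0

        for epoch in range(max_epochs):
            train_one_epoch(model, train_data)            # one epoch of supervised training
            val_err = validation_error(model, val_data)   # error on the held-out validation set

            if val_err < best_val:
                best_val = val_err
                best_state = model.copy()                 # remember the best model so far (assumes a copy method)
                epochs_since_best = 0
            else:
                epochs_since_best += 1
                if epochs_since_best >= patience:         # criterion triggered: stop early
                    break

        return best_state, best_val

In this sketch, a larger patience value corresponds to a "slower" stopping criterion in the sense of the abstract: it tends to train longer and may generalize slightly better, which is exactly the tradeoff the empirical investigation quantifies.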