The number of required hidden units is statistically estimated for feedforward neural networks that are constructed by adding hidden units one at a time. If each new hidden unit is selected as the best of a large number of candidate units, the output error decreases at an almost constant rate per added unit. The expected value of the maximum decrease per hidden unit is derived theoretically as a function of the number of training samples relative to the number of candidates examined by random search. This relation can be extended to other search methods; in that case, the number of candidates indicates how many steps random search would need to achieve the same decrease, so it can be regarded as a parameter that represents the efficiency of the search. Computer simulation shows that estimating this parameter experimentally from the actual decrease in output error is useful for demonstrating the efficiency of the gradient method.
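
The following is a minimal sketch of the constructive procedure described above, assuming a single-hidden-layer regression network with tanh units fitted greedily to the residual error (a matching-pursuit-style variant). The function names, candidate distribution, and all hyperparameters are illustrative assumptions, not the paper's implementation; the point is only to exhibit the best-of-many-candidates selection and the roughly constant rate of log-error decrease per added unit.

    # Illustrative sketch: at each step, generate a large pool of randomly
    # parameterized candidate hidden units and append the single candidate
    # that most reduces the residual squared output error.
    import numpy as np

    rng = np.random.default_rng(0)

    def candidate_outputs(X, n_candidates):
        """Random tanh hidden units: h(x) = tanh(w.x + b) for random (w, b)."""
        d = X.shape[1]
        W = rng.normal(size=(d, n_candidates))
        b = rng.normal(size=n_candidates)
        return np.tanh(X @ W + b)  # shape: (n_samples, n_candidates)

    def add_best_unit(residual, X, n_candidates):
        """Pick the candidate whose least-squares fit to the current residual
        gives the largest decrease in squared output error."""
        C = candidate_outputs(X, n_candidates)
        # For each candidate h, the optimal output weight is (h.r)/(h.h),
        # and the resulting error decrease is (h.r)^2 / (h.h).
        hr = C.T @ residual
        hh = np.einsum('ij,ij->j', C, C)
        decrease = hr**2 / np.maximum(hh, 1e-12)
        k = int(np.argmax(decrease))
        return C[:, k], hr[k] / hh[k]

    # Toy regression problem (assumed here for demonstration only).
    n, d = 200, 3
    X = rng.normal(size=(n, d))
    y = np.sin(X @ rng.normal(size=d))

    residual = y.copy()
    errors = [float(residual @ residual / n)]
    for _ in range(20):
        h, w = add_best_unit(residual, X, n_candidates=1000)
        residual = residual - w * h  # greedy update: subtract the fitted unit
        errors.append(float(residual @ residual / n))

    # With the best of many candidates chosen at each step, log(error) tends
    # to fall at a roughly constant rate per added unit, as stated above.
    rates = np.diff(np.log(errors))
    print('mean log-error decrease per unit:', rates.mean())

In the spirit of the abstract, repeating this experiment with different pool sizes and comparing the observed per-unit decrease against another search method (e.g. gradient-based candidate tuning) would let one read off the effective number of candidates, i.e. the efficiency parameter, for that method.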