We present theoretical results on the construction of confidence intervals for nonlinear regression, based on least squares estimation and on the linear Taylor expansion of the nonlinear model output. We stress the assumptions on which these results rest, in order to derive an appropriate methodology for neural black-box modeling; the latter is then analyzed and illustrated on simulated and real processes. We show that the linear Taylor expansion of a nonlinear model output also provides a tool to detect possible ill-conditioning of neural network candidates, and to estimate their performance. Finally, we show that the approach based on least squares and the linear Taylor expansion compares favourably with other analytic approaches, and that it is an efficient and economical alternative to the non-analytic and computationally intensive bootstrap methods.

Keywords: Nonlinear regression, neural networks, least squares estimation, linear Taylor expansion, confidence intervals, ill-conditioning
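For orientation, a minimal sketch of the confidence interval that this approach yields, in notation assumed here rather than taken verbatim from the text: for a model \(f(x,\theta)\) with \(p\) parameters and least squares estimate \(\hat{\theta}\) obtained from \(N\) examples \((x_k, y_k)\), the linear Taylor expansion of the output around \(\hat{\theta}\) leads to the approximate \(100(1-\alpha)\%\) confidence interval

\[
f(x,\hat{\theta}) \;\pm\; t^{N-p}_{\alpha/2}\, s\, \sqrt{z(x)^{\mathsf{T}} \left(Z^{\mathsf{T}} Z\right)^{-1} z(x)},
\qquad
z(x) = \left.\frac{\partial f(x,\theta)}{\partial \theta}\right|_{\theta=\hat{\theta}},
\]

where \(Z\) denotes the \(N \times p\) Jacobian matrix whose \(k\)-th row is \(z(x_k)^{\mathsf{T}}\), \(s^2 = \frac{1}{N-p}\sum_{k=1}^{N}\bigl(y_k - f(x_k,\hat{\theta})\bigr)^2\) estimates the noise variance, and \(t^{N-p}_{\alpha/2}\) is a Student quantile. Under this reading, the conditioning of \(Z\) is also the quantity that signals ill-conditioned network candidates.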