In this paper, two modified constrained learning algorithms are proposed to achieve better generalization performance and a faster convergence rate. The additional cost terms of the first algorithm are selected based on the first-order derivatives of the activation functions of the hidden neurons and the second-order derivatives of the activation functions of the output neurons, while those of the second algorithm are selected based on the first-order derivatives of the activation functions of the output neurons and the second-order derivatives of the activation functions of the hidden neurons. During training, these additional cost terms simultaneously penalize the input-to-output mapping sensitivity and the high-frequency components of the learned mapping, so that better generalization performance can be obtained. Finally, theoretical justifications and simulation results are given to verify the efficiency and effectiveness of our proposed learning algorithms.
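
As a minimal sketch of the structure described above (not the paper's exact formulation), the augmented cost of the first algorithm can be written as a conventional error term plus two derivative-based penalties; the symbols $E_0$, $\lambda_1$, $\lambda_2$, $f_h$, $f_o$, and $\mathrm{net}$ are illustrative assumptions rather than notation taken from the original text:
\[
E \;=\; E_0 \;+\; \lambda_1 \sum_{p}\sum_{j} \bigl(f_h'(\mathrm{net}_{j}^{\,p})\bigr)^2
\;+\; \lambda_2 \sum_{p}\sum_{k} \bigl(f_o''(\mathrm{net}_{k}^{\,p})\bigr)^2 ,
\]
where $E_0$ is the usual sum-of-squared-errors term, $f_h$ and $f_o$ are the hidden- and output-layer activation functions, $\mathrm{net}_{j}^{\,p}$ and $\mathrm{net}_{k}^{\,p}$ are the net inputs of hidden neuron $j$ and output neuron $k$ for training pattern $p$, and $\lambda_1, \lambda_2 \ge 0$ are penalty weights. In this sketch the first-derivative term discourages high input-to-output sensitivity and the second-derivative term suppresses high-frequency components; the second algorithm would interchange the layers on which the first- and second-order derivative penalties are imposed.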