This paper describes a neural network for function approximation. The activation functions of the hidden nodes are radial basis functions (RBFs) whose parameters are learned by a two-stage gradient-descent strategy. A new growing strategy, which inserts nodes with different radial basis functions, is used to improve the network's performance. The learning strategy saves computation time and memory because nodes are grown selectively, each with an activation function chosen from among different radial basis functions. An analysis of the network's learning capabilities and a comparison of its performance with other approaches are presented. The results show that the proposed network improves approximation accuracy.
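The abstract does not reproduce the paper's exact update rules or insertion criterion, so the following is only a minimal Python sketch of the general scheme it describes: an RBF network whose output weights, centers, and widths are adapted in two gradient-descent stages, with a simple error-driven node-insertion step. The learning rates, the largest-residual insertion heuristic, and the use of a single Gaussian basis (rather than a mix of basis types) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(x, c, s):
    """Gaussian RBF; the paper mixes different basis types, omitted here."""
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

# Target function to approximate and 1-D training samples (assumed).
f = lambda x: np.sin(2.0 * np.pi * x)
X = np.linspace(0.0, 1.0, 100)
Y = f(X)

# Start with a few hidden nodes.
centers = np.array([0.2, 0.5, 0.8])
widths  = np.full(3, 0.15)
weights = rng.normal(scale=0.1, size=3)

def predict(x):
    """Return network output and hidden activations H, shape (len(x), n_nodes)."""
    H = gaussian(x[:, None], centers[None, :], widths[None, :])
    return H @ weights, H

eta_w, eta_c = 0.05, 0.01          # illustrative learning rates
for epoch in range(2000):
    # Stage 1: gradient step on the linear output weights.
    y_hat, H = predict(X)
    err = y_hat - Y
    weights -= eta_w * (H.T @ err) / len(X)

    # Stage 2: gradient step on the centers and widths.
    y_hat, H = predict(X)
    err = y_hat - Y
    G = err[:, None] * weights[None, :] * H            # dE/d(activation)
    dC = np.mean(G * (X[:, None] - centers[None, :]) / widths[None, :] ** 2, axis=0)
    dS = np.mean(G * (X[:, None] - centers[None, :]) ** 2 / widths[None, :] ** 3, axis=0)
    centers -= eta_c * dC
    widths   = np.clip(widths - eta_c * dS, 1e-2, None)  # keep widths positive

    # Selective growing: periodically insert a node where the residual
    # error is largest (one simple heuristic, not the paper's criterion).
    if (epoch + 1) % 500 == 0 and len(centers) < 10:
        k = np.argmax(np.abs(err))
        centers = np.append(centers, X[k])
        widths  = np.append(widths, 0.1)
        weights = np.append(weights, 0.0)

y_hat, _ = predict(X)
print("final RMSE:", np.sqrt(np.mean((y_hat - Y) ** 2)), "nodes:", len(centers))
```

Growing only where the residual is large is what lets such a scheme economize on nodes, and hence on memory and training time, compared with fixing a large hidden layer in advance.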