— Representation of knowledge within a neural model is an active field of research concerned with the development of alternative structures, training algorithms, learning modes, and applications. Radial Basis Function Neural Networks (RBFNNs) constitute an important part of neural networks research, as their operating principle is to discover and exploit similarities between an input vector and a feature vector. In this paper, we compare nine architectures in terms of their learning performance. The Levenberg-Marquardt (LM) technique is coded for every individual configuration, and the model with a linear part augmentation is observed to perform better in terms of the final mean squared error level attained in almost all experiments. Furthermore, according to the results, this model rarely gets trapped in local minima. Overall, this paper presents clear and concise comparative figures for the nine architectures, and this constitutes its major contribution.
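For orientation only, the response of a generic RBFNN with a linear part augmentation can be sketched as below; the Gaussian kernel and the symbols $w_j$, $\mathbf{c}_j$, $\sigma_j$, $\mathbf{a}$, and $b$ are illustrative assumptions and not necessarily the notation or the specific architectures adopted in the paper.
\[
y(\mathbf{x}) \;=\; \sum_{j=1}^{m} w_j \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}_j \rVert^{2}}{2\sigma_j^{2}}\right) \;+\; \underbrace{\mathbf{a}^{\top}\mathbf{x} + b}_{\text{linear part}}
\]
Here each Gaussian unit scores the similarity between the input vector $\mathbf{x}$ and a feature (center) vector $\mathbf{c}_j$, while the linear part supplies a global trend term alongside the localized basis responses.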