In this paper, fully connected RTRL neural networks are studied. An autonomous learning algorithm has been developed to learn the dynamical behaviour of linear processes and to predict time series. The originality of the method lies in the gradient-based adaptation of the learning rate and of the time parameter of the neurons, computed with a small-perturbation method. Starting from zero initial conditions (neural states, learning rate, time parameter and weight matrix), the evolution is driven entirely by the dynamics of the training data. The overfitting phenomenon and the emergence of several equilibrium states are discussed. Examples illustrate how the network learns different kinds of data dynamics.
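To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a fully connected recurrent network trained online with RTRL, in which the learning rate and the neuron time parameter are themselves adjusted by a small-perturbation (finite-difference) estimate of the error gradient. The network size, the readout neuron, the target signal and all numerical constants are illustrative assumptions; likewise, the learning rate and time parameter are seeded with small nonzero values purely for numerical convenience, whereas the paper starts them from zero.

```python
# Illustrative sketch: RTRL with small-perturbation adaptation of eta and tau.
# All sizes, constants and the target series below are assumed for the example.
import numpy as np

n = 4                                        # number of fully connected neurons (assumed)
T = 500                                      # number of online training steps (assumed)
target = np.sin(0.1 * np.arange(T + 1))      # hypothetical time series to learn

def rollout_error(W, tau, steps=50):
    """Short rollout error used as the cost probed by the small perturbations."""
    x = np.zeros(n)
    err = 0.0
    for k in range(steps):
        x = x + (1.0 / tau) * (-x + np.tanh(W @ x))
        err += 0.5 * (x[0] - target[k + 1]) ** 2
    return err

# Near-zero initial conditions: states, weights, RTRL sensitivities.
x = np.zeros(n)
W = np.zeros((n, n))
P = np.zeros((n, n, n))      # P[i, j, l] = d x_l / d W_ij (RTRL sensitivities)
eta, tau = 1e-3, 2.0         # small seeds so adaptation can start (assumption)
delta = 1e-4                 # perturbation size for the finite differences (assumed)

for k in range(T):
    # Leaky-integrator state update; neuron 0 is read out (assumption).
    pre = W @ x
    x_new = x + (1.0 / tau) * (-x + np.tanh(pre))
    e = x_new[0] - target[k + 1]

    # RTRL sensitivity update for the leaky-integrator dynamics.
    d = 1.0 - np.tanh(pre) ** 2              # derivative of tanh at each neuron
    P_new = np.zeros_like(P)
    for i in range(n):
        for j in range(n):
            extra = np.zeros(n)
            extra[i] = x[j]                   # direct effect of w_ij on neuron i
            P_new[i, j] = P[i, j] + (1.0 / tau) * (
                -P[i, j] + d * (W @ P[i, j] + extra))
    grad_W = e * P_new[:, :, 0]               # gradient of the instantaneous error

    # Small-perturbation adaptation of eta and tau (finite differences of the cost).
    base  = rollout_error(W - eta * grad_W, tau)
    d_eta = (rollout_error(W - (eta + delta) * grad_W, tau) - base) / delta
    d_tau = (rollout_error(W - eta * grad_W, tau + delta) - base) / delta
    eta = max(eta - 1e-4 * d_eta, 1e-6)
    tau = max(tau - 1e-3 * d_tau, 1.0)

    # Online weight update and state advance.
    W -= eta * grad_W
    x, P = x_new, P_new

print(f"final eta={eta:.5f}, tau={tau:.3f}, last error={abs(e):.4f}")
```

The sketch only conveys the structure of the method: an RTRL weight update running online, with the two hyperparameters following their own gradient descent estimated by perturbing them and re-evaluating a short-horizon error.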