Abstract-- Genetic algorithms have been used to evolve several neural network architectures. In a previous effort, we introduced the evolution of three well-known ART architectures: Fuzzy ARTMAP (FAM), Ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM). The resulting architectures were shown to achieve competitive generalization and exceptionally small size. A major concern regarding these architectures, and evolved neural network architectures in general, is the added computational time needed to produce the final evolved network. In this paper we investigate ways of reducing this overhead by cutting the computations required to evaluate the fitness of the evolved ART architectures. The results obtained here extend directly to many other evolved neural network architectures beyond the ART architectures studied.