Over time, neural networks have proven to be extremely powerful tools for data exploration, with the capability to discover previously unknown dependencies and relationships in data sets. However, the sheer volume of available data and its dimensionality make data exploration a challenge, and employing neural network training paradigms in such domains can be prohibitively expensive. An algorithm originally proposed for supervised on-line learning has been adapted to make it suitable for deployment in large-volume, high-dimensional domains. The basic strategy is to divide the data into manageable subsets, or blocks, and to maintain multiple copies of a neural network, with each copy training on a different block. A method for combining the results has been defined in such a way that convergence towards stationary points of the global error function can be guaranteed. A parallel implementation of the algorithm has been developed on a Linux-based cluster. Experimental results on popular benchmark...
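The combination step admits a simple interpretation: if each network copy returns the gradient of the error over its own block, summing the block gradients reconstructs the gradient of the global error, so the combined update inherits the convergence behaviour of ordinary batch gradient descent towards stationary points. The sketch below illustrates this block-and-combine pattern on a toy linear network in Python; it is a minimal single-process analogue of the scheme described above, and the function name `block_parallel_epoch`, the learning rate, and the sequential loop standing in for cluster replicas are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def block_parallel_epoch(W, X, y, n_blocks, lr):
    """One epoch of block-wise training.

    Each 'copy' of the network computes the gradient of the
    squared error on its own block; summing the block gradients
    reproduces the gradient of the global error, so the combined
    update behaves like batch gradient descent on the full set.
    """
    grads = []
    for Xb, yb in zip(np.array_split(X, n_blocks),
                      np.array_split(y, n_blocks)):
        # In the cluster setting, each block would be handled by
        # a separate network replica on its own node.
        err = Xb @ W - yb
        grads.append(Xb.T @ err)           # gradient on this block
    return W - lr * sum(grads) / len(X)    # combined global update

# Toy usage: recover a linear map from synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
W_true = rng.normal(size=(5, 1))
y = X @ W_true
W = np.zeros((5, 1))
for _ in range(200):
    W = block_parallel_epoch(W, X, y, n_blocks=8, lr=0.5)
print(np.allclose(W, W_true, atol=1e-3))   # True: converged
```

Because the block gradients are combined before any weights change, the result of an epoch is independent of how the data is partitioned, which is what makes the guarantee about stationary points of the global error plausible regardless of the number of blocks or nodes.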