This paper considers the use of Open MPI's tuned collective communication module to improve the parallelization efficiency of the parallel batch pattern back propagation training algorithm for a multilayer perceptron. The multilayer perceptron model and the standard sequential batch pattern training algorithm are described theoretically. An algorithmic description of a parallel version of the batch pattern training method is then introduced. The parallelization efficiency obtained with Open MPI's tuned collective module is compared with that of MPICH2. Our results show that (i) the Open MPI tuned collective module outperforms the MPICH2 implementation both on an SMP computer and on a computational cluster, and (ii) different internal algorithms of the MPI_Allreduce() collective operation perform best in different scenarios and on different parallel systems. Therefore, the properties of the communication network and of the user application should be taken into account when selecting a specific collective algorithm.