
PROCEDIA
2010

Improvement of parallelization efficiency of batch pattern BP training algorithm using Open MPI

The use of the tuned collectives module of Open MPI to improve the parallelization efficiency of a parallel batch pattern back propagation training algorithm for a multilayer perceptron is considered in this paper. The multilayer perceptron model and the standard sequential batch pattern training algorithm are described theoretically. An algorithmic description of a parallel version of the batch pattern training method is introduced. The parallelization efficiency results obtained with the Open MPI tuned collectives module and with MPICH2 are compared. Our results show that (i) the Open MPI tuned collectives module outperforms the MPICH2 implementation both on an SMP computer and on a computational cluster, and (ii) different internal algorithms of the MPI_Allreduce() collective operation give better results in different scenarios and on different parallel systems. Therefore the properties of the communication network and of the user application should be taken into account when a specific collective algorithm is chosen.
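The core of the parallel batch pattern scheme described above is that each processor computes a partial weight-gradient over its subset of training patterns, after which a global sum (MPI_Allreduce with MPI_SUM) gives every processor the identical total gradient for the weight update. A minimal sketch of that pattern, emulated in plain Python without MPI and using a linear neuron for brevity (this is an illustration of the communication pattern, not the authors' code):

```python
# Illustration (not the paper's implementation): batch pattern training
# splits the pattern set across P workers; each computes a partial
# gradient, and an MPI_Allreduce(..., MPI_SUM)-style reduction gives
# every worker the same total, so all apply the identical update.

def gradient(w, patterns):
    """Batch gradient of the squared error for a linear neuron y = w*x."""
    g = 0.0
    for x, t in patterns:
        g += (w * x - t) * x  # dE/dw accumulated over the batch
    return g

def allreduce_sum(partials):
    """Stand-in for MPI_Allreduce with MPI_SUM: every rank gets the total."""
    total = sum(partials)
    return [total] * len(partials)

patterns = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 9.0)]
w = 0.5

# Sequential batch gradient over all patterns.
g_seq = gradient(w, patterns)

# "Parallel" version: two workers, two patterns each.
chunks = [patterns[:2], patterns[2:]]
partials = [gradient(w, c) for c in chunks]
g_par = allreduce_sum(partials)

# Every worker ends up with the same gradient as the sequential run.
assert all(abs(g - g_seq) < 1e-12 for g in g_par)
```

In the actual algorithm this reduction runs once per training iteration on the accumulated weight-delta vector of the multilayer perceptron, which is why the choice among Open MPI's internal Allreduce algorithms (selectable through the tuned collectives module) has a direct effect on parallelization efficiency.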
Added 30 Jan 2011
Updated 30 Jan 2011
Type Journal
Year 2010
Where PROCEDIA
Authors Volodymyr Turchenko, Lucio Grandinetti, George Bosilca, Jack J. Dongarra