
ICPP 2005, IEEE

Optimizing Collective Communications on SMP Clusters

We describe a generic programming model for designing collective communications on SMP clusters. The model uses shared memory for collective communications and overlaps inter-node and intra-node communication, both of which are normally platform-specific techniques. Several collective communications are designed based on this model and tested on three SMP clusters with different configurations. The results show that the developed collective communications can, with proper tuning, provide significant performance improvements over existing generic implementations. For example, when broadcasting an 8MB message our implementations outperform the vendor’s MPI_Bcast by 35% on an IBM SP system, 51% on a G4 cluster, and 63% on an Intel cluster, the latter two using MPICH’s MPI_Bcast. With all-gather operations using 8MB messages, our implementations outperform the vendor’s MPI_Allgather by 75% on the IBM SP, 60% on the Intel cluster, and 48% on the G4 cluster.
Meng-Shiou Wu, Ricky A. Kendall, Kyle Wright
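The central idea is a two-level layering: ranks on the same node exchange data through shared memory while one leader per node handles network traffic, and the message is split into chunks so the two levels can proceed concurrently. The following is a minimal, hedged sketch of that layering for a broadcast in MPI; it is not the authors' implementation. The communicator layout, the chunk size, and the MPI-3 calls (MPI_Comm_split_type, MPI_Ibcast) are illustrative assumptions that postdate the 2005 paper, which built its shared-memory layer directly.

/* Sketch: two-level, SMP-aware broadcast from world rank 0 that overlaps
 * inter-node transfers with intra-node fan-out by pipelining the message
 * in chunks.  Names and the chunk size are assumptions for illustration. */
#include <mpi.h>
#include <stddef.h>

#define CHUNK (256 * 1024)              /* assumed pipeline chunk size, bytes */

static void smp_bcast_from0(void *buf, int count, MPI_Comm world)
{
    MPI_Comm node_comm, leader_comm;
    int world_rank, node_rank;

    MPI_Comm_rank(world, &world_rank);

    /* Communicator of ranks sharing a node; world rank 0 becomes node rank 0. */
    MPI_Comm_split_type(world, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader (node rank 0) per node; world rank 0 is leader rank 0. */
    MPI_Comm_split(world, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    int nchunks = (count + CHUNK - 1) / CHUNK;
    MPI_Request req = MPI_REQUEST_NULL;

    /* Leaders start pulling the first chunk across the network. */
    if (leader_comm != MPI_COMM_NULL && nchunks > 0) {
        int len0 = count < CHUNK ? count : CHUNK;
        MPI_Ibcast(buf, len0, MPI_BYTE, 0, leader_comm, &req);
    }

    for (int i = 0; i < nchunks; i++) {
        char *p   = (char *)buf + (size_t)i * CHUNK;
        int   len = (count - i * CHUNK < CHUNK) ? count - i * CHUNK : CHUNK;

        if (leader_comm != MPI_COMM_NULL) {
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* chunk i is now on this node */
            if (i + 1 < nchunks) {               /* overlap: fetch chunk i+1 ...  */
                char *q  = p + CHUNK;            /* ... while chunk i fans out    */
                int   ql = (count - (i + 1) * CHUNK < CHUNK)
                               ? count - (i + 1) * CHUNK : CHUNK;
                MPI_Ibcast(q, ql, MPI_BYTE, 0, leader_comm, &req);
            }
        }

        /* Intra-node fan-out; MPI libraries typically move this through shared memory. */
        MPI_Bcast(p, len, MPI_BYTE, 0, node_comm);
    }

    if (leader_comm != MPI_COMM_NULL)
        MPI_Comm_free(&leader_comm);
    MPI_Comm_free(&node_comm);
}

A parameter such as the chunk size is the kind of knob that per-platform tuning, as mentioned in the abstract, would adjust.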
Type Conference
Year 2005
Where ICPP