
Collective communication on architectures that support simultaneous communication over multiple links

Traditional collective communication algorithms are designed with the assumption that a node can communicate with only one other node at a time. On new parallel architectures such as the IBM Blue Gene/L, a node can communicate with multiple nodes simultaneously. We have redesigned and reimplemented many of the MPI collective communication algorithms to take advantage of this ability to send simultaneously, including broadcast, reduce(-to-one), scatter, gather, allgather, reduce-scatter, and allreduce. We show that these new algorithms have lower expected costs than the previously known lower bounds, which were derived under older models of parallel computation. Results are included comparing their performance to the default implementations in IBM's MPI.

Categories and Subject Descriptors: D.m [Software]: Miscellaneous
General Terms: Algorithms, Performance
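For context, the minimal C sketch below shows how two of the collectives named in the abstract (broadcast and allreduce) are invoked through the standard MPI interface. The algorithms studied in the paper sit inside the MPI library behind these calls; this example only illustrates the standard API and is not the authors' implementation.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Broadcast: rank 0 sends one integer to all other ranks.
           The library chooses the underlying algorithm (tree, pipeline, etc.). */
        int value = (rank == 0) ? 42 : 0;
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Allreduce: global sum of one integer per rank, result on every rank. */
        int local = rank, global_sum = 0;
        MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("broadcast value = %d, sum of ranks = %d\n", value, global_sum);

        MPI_Finalize();
        return 0;
    }

Whether the broadcast or allreduce exploits multiple simultaneous links is an implementation decision inside the MPI library, which is exactly the layer the paper redesigns.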
Type Conference
Year 2006
Where PPOPP
Publisher ACM
Authors Ernie Chan, Robert A. van de Geijn, William Gropp, Rajeev Thakur