John Markus Bjørndalen, Adam T. Sampson

Abstract. Distributing process-oriented programs across a cluster of machines requires careful attention to the effects of network latency. The MPI standard, widely used for cluster computation, defines a number of collective operations: efficient, reusable algorithms for performing operations among a group of machines in the cluster. In this paper, we describe our techniques for implementing MPI communication patterns in process-oriented languages, and how we have used them to implement collective operations in PyCSP and occam-π on top of an asynchronous messaging framework. We show how to make use of collective operations in distributed process-oriented applications. We also show how the process-oriented model can be used to increase concurrency in existing collective operation algorithms.

Keywords. CSP, clusters, concurrency, MPI, occam-π, parallel, PyCSP, Python.