Abstract. When parallelizing loop nests for distributed memory parallel computers, we have to specify when the different computations are carried out (computation scheduling), where they are carried out (computation mapping), and where the data are stored (data mapping). We show that even the "best" scheduling and mapping functions, when chosen independently, can lead to a sequential execution once combined. We characterize when combined scheduling and mapping functions actually lead to a parallel execution. We present an algorithm which computes a schedule compatible with a given computation mapping, if such a schedule exists.
Alain Darte, Claude G. Diderich, Marc Gengler, Frédéric Vivien
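As a simple illustration of the pitfall (a sketch assuming affine scheduling and mapping functions, not an example taken from the paper itself), consider a two-dimensional loop nest with iteration domain $\{(i,j) : 0 \le i,j < n\}$, the wavefront schedule
\[
  \theta(i,j) = i + j ,
\]
which allows all iterations on an anti-diagonal to be executed simultaneously, and the computation mapping
\[
  \pi(i,j) = i + j ,
\]
which places each anti-diagonal on a single (virtual) processor. Each function may be perfectly reasonable on its own, but combined they imply that at every time step $t$ only processor $t$ is active: the $n^2$ iterations are executed essentially sequentially even though the schedule by itself exposes up to $n$-fold parallelism.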