llc is a language designed to extend OpenMP to distributed memory systems. We present work in progress on the implementation of a compiler that translates llc code and targets distributed memory platforms. Our approach generates the communication code directly on top of MPI. We present computational results for two different benchmark applications on a PC-cluster platform. The results show comparable performance between the llc-compiled version and an ad hoc MPI implementation, even for applications with fine-grain parallelism.
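To illustrate the kind of input the approach targets, the following is a minimal sketch (not taken from the paper) of an OpenMP-annotated C loop that an llc-style compiler could translate into MPI code: the iterations would be distributed across MPI processes and the written array elements exchanged through MPI communications. Any directive semantics beyond standard OpenMP are assumptions for illustration only.

```c
/* Minimal sketch: an OpenMP parallel loop of the kind an llc-style
 * compiler could translate to MPI on a distributed memory platform.
 * Plain standard OpenMP is used; llc-specific clauses are not shown. */
#include <stdio.h>

#define N 1000

int main(void) {
    double a[N], b[N];

    /* Initialize input data. */
    for (int i = 0; i < N; i++)
        b[i] = (double)i;

    /* On a cluster, a translation on top of MPI would assign blocks of
     * iterations to processes and communicate the elements of a[]
     * produced by each process. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```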
Antonio J. Dorta, José M. Badía, Enr