Saumya K. Debray, Mudita Jain

Most research to date on optimizing program transformations for declarative languages has focused on sequential execution strategies. In this paper, we consider a class of commonly encountered computations whose "natural" specification is essentially sequential, and show how algebraic properties of the operators involved can be used to transform them into divide-and-conquer programs that are considerably more efficient, both in theory and in practice, on parallel machines.
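To make the flavor of such a transformation concrete, here is a minimal sketch in Haskell; the language, the names sumSeq and sumDC, and the choice of list summation as the example are all illustrative assumptions, not taken from the paper. A linear recursion over an operator that is associative and has an identity can be restated in divide-and-conquer form, whose two recursive calls are independent and can therefore be evaluated in parallel:

```haskell
-- "Natural" specification: a linear recursion, which forces
-- strictly sequential evaluation (each step waits on the next).
sumSeq :: [Int] -> Int
sumSeq []     = 0
sumSeq (x:xs) = x + sumSeq xs

-- Divide-and-conquer form: valid precisely because (+) is
-- associative and 0 is its identity. The two recursive calls
-- touch disjoint halves of the input, so a parallel runtime
-- can evaluate them concurrently.
sumDC :: [Int] -> Int
sumDC []  = 0
sumDC [x] = x
sumDC xs  = sumDC left + sumDC right
  where (left, right) = splitAt (length xs `div` 2) xs
```

The rewrite is justified exactly by the kind of algebraic properties the abstract refers to: without associativity (and an identity for the base case), splitting the input and recombining the partial results would not be guaranteed to compute the same value as the sequential specification.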