This paper introduces an analysis technique, commutativity analysis, for automatically parallelizing computations that manipulate dynamic, pointer-based data structures. Commutativity analysis views computations as composed of operations on objects. It then analyzes the program to discover when operations commute, i.e., leave the objects in the same state regardless of the order in which they execute. If all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. Commutativity analysis eliminates many of the limitations that have prevented existing compilers, which use data dependence analysis, from successfully parallelizing pointer-based applications. It enables compilers to parallelize computations that manipulate graphs and eliminates the need to analyze the data structure construction code to extract global properties of the data structure topology. This paper shows how to use symbolic execution and expression manipulation...
Martin C. Rinard, Pedro C. Diniz
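As a minimal sketch (not taken from the paper), the C++ fragment below illustrates the kind of operation commutativity analysis is designed to recognize: an update whose effect on an object's state is the same regardless of invocation order. The class, fields, and method names are hypothetical illustrations, not the paper's benchmarks.

```cpp
#include <vector>

// Hypothetical node in a pointer-based data structure.
class Node {
public:
    Node() : sum(0) {}

    // visit(a); visit(b); leaves the node in the same state as
    // visit(b); visit(a);, because integer addition is commutative
    // and associative. A compiler that proves this could execute
    // the two invocations in parallel (with appropriate synchronization).
    void visit(int weight) { sum += weight; }

    int total() const { return sum; }

private:
    int sum;
};

int main() {
    Node n;
    std::vector<int> weights = {3, 7, 11};
    // Shown sequentially; reordering or interleaving these calls
    // yields the same final object state (sum == 21).
    for (int w : weights) n.visit(w);
    return n.total() == 21 ? 0 : 1;
}
```

The point of the sketch is that the commuting property is a fact about the operations themselves, not about which concrete objects they touch, which is why the analysis does not need to recover the global topology of the data structure.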