DIMACS Princeton Colloquium Talk


Title:

Commutativity Analysis: A New Analysis Framework for Parallelizing Compilers

Speaker:

Martin Rinard
University of California, Santa Barbara

Place:

Room 402, Computer Science (CS) Bldg, 35 Olden St.
Princeton University, Princeton, NJ

Time:

12:00 - 1:30 p.m.
Monday, September 23, 1996

Abstract:

Commutativity analysis is a new analysis framework designed to automatically parallelize object-oriented computations that manipulate pointer-based data structures. It differs from data dependence analysis in that it does not attempt to precisely determine which pieces of computation access which pieces of data. It instead views the computation as composed of operations on objects. It then analyzes the program to discover when operations commute (i.e., generate the same final result regardless of the order in which they execute). If all of the operations required to perform a given computation commute, the compiler can automatically generate parallel code. Even though this parallel code may violate the data dependences of the original serial program, it is guaranteed to generate the same result.
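For intuition only (this sketch is not taken from the talk, and the names Node and visit are hypothetical), consider an operation that updates a pointer-based structure by accumulating a value. Because integer addition is commutative, two such operations on the same node produce the same final state in either execution order, so a compiler that recognizes this could run them in parallel even though they access the same data:

    #include <vector>

    struct Node {
        int sum = 0;                  // running total updated by visit()
        std::vector<Node*> children;  // pointer-based data structure

        // visit(v) adds v to this node's sum and propagates v to the children.
        // visit(a) and visit(b) on the same node commute: the final value of
        // sum is a + b regardless of which call executes first, so the two
        // calls may safely execute in parallel.
        void visit(int v) {
            sum += v;
            for (Node* c : children)
                c->visit(v);
        }
    };

A data dependence analysis would see that both calls read and write sum and would conservatively serialize them; commutativity analysis instead observes that the operations commute and parallelizes anyway.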

This talk presents the basic concepts behind commutativity analysis and discusses some extensions that we have found necessary for its practical application. It also presents performance results for two automatically parallelized computations running on the Stanford DASH machine. These results provide encouraging evidence that commutativity analysis may be able to serve as the basis for a successful parallelizing compiler.


Document last modified on September 20, 1996