## Harry Mairson

Brandeis University

Universal models of computation, as defined by the Church-Turing
thesis, divide naturally into those that are machine based (Turing
machines, register machines, and their variants), and those that are
language based (lambda-calculus, Post systems, equational computing à
la Herbrand–Gödel, etc.). While a great deal is known about the
complexity of machine simulation, little is known about the inherent
complexity of language-based models of computation. How efficiently
can they be implemented? This theoretical question is the natural
complement to applied problems in compiler technology.
The lambda-calculus, while providing an elegant foundation for modern
programming languages, is indeterminate: we are not told in full
detail how to evaluate lambda-terms, though the Church-Rosser theorem
ensures that at most one answer is possible. We discuss the
computational complexity of evaluating lambda-terms in a manner that
is correct (the evaluator does not diverge when another evaluation
strategy could have produced an answer) as well as optimal (no
"duplication of work" is required, defined in a technical sense).
The crude functional programming analogies are that call-by-name
evaluation is correct but not optimal, while call-by-value evaluation
is an approximation of an incorrect but optimal strategy.
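The correctness half of this contrast can be made concrete with a toy interpreter (a minimal sketch, not the evaluators discussed in the talk; the function names `step_cbn`, `step_cbv`, `normalize`, and `subst` are illustrative). Applying the constant function λz.y to the divergent term Ω = (λx.x x)(λx.x x) yields y under call-by-name but loops forever under call-by-value:

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    """Capture-naive substitution; the example terms use distinct names."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        return term if term[1] == name else \
            ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step_cbn(term):
    """One call-by-name step: arguments are passed unevaluated."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)      # beta-reduce without touching a
        fs = step_cbn(f)
        if fs is not None:
            return ('app', fs, a)
    return None

def step_cbv(term):
    """One call-by-value step: arguments are evaluated before the call."""
    if term[0] != 'app':
        return None
    f, a = term[1], term[2]
    fs = step_cbv(f)
    if fs is not None:
        return ('app', fs, a)
    as_ = step_cbv(a)
    if as_ is not None:
        return ('app', f, as_)
    if f[0] == 'lam':
        return subst(f[2], f[1], a)
    return None

def normalize(term, step, limit=1000):
    """Iterate `step` to a normal form; None if the fuel runs out."""
    for _ in range(limit):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    return None  # treated as divergence

delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)                # Omega: reduces to itself
const_y = ('lam', 'z', ('var', 'y'))
t = ('app', const_y, omega)                  # (λz.y) Ω
```

Here `normalize(t, step_cbn)` reaches the answer `('var', 'y')` in one step, while `normalize(t, step_cbv, 100)` exhausts its fuel re-reducing Ω. Call-by-name is nonetheless not optimal: passing a reducible argument to λx.(x x) copies the redex and reduces it twice, the very "duplication of work" that optimal evaluators avoid.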

Recent work by Lamping, with independent revisions by Gonthier, Abadi,
and Lévy, as well as by Asperti, has shown that correct, optimal evaluators
do in fact exist. These beautiful evaluators augment graph reduction
technology, used to implement functional languages, with a mechanism
that can lazily share both values and environments (evaluation
contexts). We analyze their algorithms, showing that while certain
essential operations are indeed optimized, other supportive
"bookkeeping" operations grow exponentially in the number of
optimized operations. Their algorithms also suffer computational
indignities resulting from local, distributed control: global
knowledge matters.
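For contrast, the value sharing that ordinary graph reduction already provides can be pictured with a memoized thunk, as in call-by-need implementations (an illustrative sketch; the `Thunk` class is hypothetical, and the Lamping-style mechanism goes much further by also sharing environments, which no thunk can express):

```python
class Thunk:
    """A suspended computation whose result is computed at most once."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()
            self._forced = True
            self._compute = None   # drop the closure so it can be collected
        return self._value

# Two references to one shared node: the "expensive" work runs only once.
evaluations = []
shared = Thunk(lambda: (evaluations.append('work'), 6 * 7)[1])
total = shared.force() + shared.force()   # both uses hit the same node
```

After both forces, `evaluations` records a single run of the suspended computation; this value-level sharing is the baseline that the lazily shared environments of the optimal evaluators extend.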

Sixty years after Church invented the lambda-calculus, and unlike the
situation for all machine models, no satisfactory cost model for it has
ever been defined, so it is not even clear what "efficient
implementation" means. We propose an implementation-independent
metric, based on the labelled lambda-calculus of Jean-Jacques Lévy,
that satisfies the machine-invariance thesis, and sketch how Lamping's
abstract algorithm can be modified to run in time polynomial in that
metric.

(Joint work with Julia Lawall, currently at the Institut de Recherche
en Informatique et Systèmes Aléatoires (IRISA), Rennes, France.)

Dept. of Computer Science

Brandeis University

Waltham, Massachusetts

USA

**Email:**
mairson@cs.brandeis.edu

**Home page:**
http://www.cs.brandeis.edu/~mairson/
