Ryan Williams [homepage] [email]

Computational Complexity Theory [what is this?]

Mentor: Dieter van Melkebeek [homepage]

In the following, click on a word's link to view a page giving an informal definition.

Reversible Computation.

The main focus of this REU project has been complexity-theoretic aspects of reversible computation. Reversible computation is, in short, computation that can be executed "backwards"; i.e. from the (without loss of generality, unique) final state of a computation, one can execute every step of the computation in reverse, deterministically computing the initial state. More precisely, in reversible computation not only the next configuration of the computation is unique, but the previous configuration as well. Reversible computation is special in that the laws of thermodynamics do not require such computation to dissipate heat; also, quantum computing is a form of reversible computing, in that the laws governing quantum mechanics are reversible.

However, deliberately writing algorithms that can be executed in reverse can be an unintuitive and difficult process. Therefore, we would like to be able to write algorithms in the normal (irreversible) way, with irrecoverable erasures of data, etc., yet still reap the benefits reversible computing provides. So our objective here is to find improved theoretical methods for efficiently simulating the usual deterministic (irreversible) computations in a reversible way. On this web page, we give an overview suitable for anyone with elementary knowledge of Turing machines. The paper linked at the bottom of the page gives a more sophisticated and detailed report of our results.

Let M be a Turing machine (called TM for short).

We define a configuration graph of M to be an infinite set of nodes V, such that there is a 1-1 correspondence between the nodes of V and the possible configurations of M (state, head position, tape contents), and there is an edge (u,v) in the graph if and only if the configuration of M corresponding to node u becomes the configuration of M corresponding to node v after one step. Note that for a deterministic TM, every node in the configuration graph has outdegree <= 1; either there is a unique next step of the computation, or none at all. Also observe that any two configuration graphs of M are isomorphic, so we will just refer to "the" configuration graph from now on.

We say a TM is reversible if and only if its configuration graph on any input x is such that every node has indegree <= 1.
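To make the definition concrete, here is a toy sketch (the function `step` and the finite list of configurations are illustrative stand-ins, not part of the formal model): a deterministic machine is reversible exactly when its step function is injective, i.e. every node of the configuration graph has indegree <= 1.

```python
def is_reversible(step, configs):
    """Check the indegree condition of the configuration graph:
    reversible iff no configuration has two distinct predecessors."""
    indegree = {}
    for c in configs:
        nxt = step(c)  # the unique next configuration, or None if halted
        if nxt is not None:
            indegree[nxt] = indegree.get(nxt, 0) + 1
    return all(d <= 1 for d in indegree.values())
```

For instance, a step function mapping 0 -> 1 -> 2 (halt) is injective on these configurations and hence reversible, while one mapping both 0 and 1 to 2 is not.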

Let T(n) and S(n) be upper bounds on the amount of time and space used by an arbitrary (deterministic) Turing machine M on inputs of size n. We wish to construct a reversible Turing machine N that is equivalent to M, yet uses only cT(n) time and dS(n) space, for some constants c, d.

Let TISP[T(n), S(n)]  ( rTISP[T(n), S(n)] ) be the class of languages accepted by deterministic  (reversible) Turing machines in space <= dS(n) and time <= cT(n) for some c, d. Then we can state our ultimate goal as: rTISP[T(n), S(n)] = TISP[T(n), S(n)]. Note that rTISP[T(n), S(n)] is a subset of TISP[T(n), S(n)] trivially-- every reversible machine is also deterministic.

If such a result could be shown, it would be a major improvement on previous work. A naive reversible simulation N(x), in which each transition taken during the computation of M(x) is saved to a working tape, shows that TISP[T(n), S(n)] is a subset of rTISP[T(n), T(n)]: the time is preserved, but the space blows up to the running time.
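The naive simulation can be sketched in a toy model where a configuration is just a value and `step` is the transition function (the names are illustrative): every intermediate configuration is kept on a history tape, so each step can be undone, at the cost of space proportional to the running time.

```python
def run_with_history(step, c0, t):
    """Naive reversible simulation: record the entire computation
    history.  Time stays O(t), but the history tape also uses O(t)
    space, which is why this only places TISP[T, S] inside rTISP[T, T]."""
    history = [c0]
    for _ in range(t):
        history.append(step(history[-1]))
    # from history[-1] (the final configuration) and the saved history,
    # every step is trivially invertible back to history[0]
    return history
```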

In 1989, C.H. Bennett showed that for all e > 0, TISP[T(n), S(n)] is a subset of rTISP[T(n)^(1+e), S(n) log T(n)], using a high-level idea similar to that found in Savitch's theorem-- we recursively compute the midpoint of the computation reversibly, recursively undo the computation history we saved up to that midpoint, then recursively compute the final configuration starting from the midpoint.
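The checkpointing structure of Bennett's simulation can be sketched in an irreversible toy form (the uncomputation of intermediate checkpoints, which is what makes the real simulation reversible, is only indicated in a comment; `step` and the integer configurations are illustrative):

```python
def bennett(step, c0, t):
    """Divide-and-conquer checkpointing: compute the configuration
    after t steps while keeping only O(log t) checkpoints live at
    once (one per recursion level), rather than the whole history."""
    if t == 0:
        return c0
    if t == 1:
        return step(c0)
    mid = bennett(step, c0, t // 2)        # checkpoint at the midpoint
    end = bennett(step, mid, t - t // 2)   # continue from the midpoint
    # the real reversible simulation would now uncompute `mid` by
    # running the first half backwards, erasing the checkpoint cleanly
    return end
```

The recursion depth is O(log t), which is where the S(n) log T(n) space bound comes from; the two recursive calls per level (plus the uncomputation) are where the T(n)^(1+e) time overhead comes from.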

In 1996, Li and Vitanyi showed that all of the straightforward space-saving simulations (modelling a pebble game on a simplified configuration graph) could save no more space than Bennett's simulation, and conjectured that all reversible simulations of deterministic computation must use >= S(n) log T(n) space.

Recently, Lange, McKenzie, and Tapp (using a clever simulation first introduced by Sipser in 1980) showed that TISP[T(n), S(n)] is a subset of rTISP[2^(O(S(n))), S(n)], refuting the conjecture but obviously leaving open the question of whether one could simulate reversibly in linear space and an efficient amount of time. (Now, Li and Vitanyi have revised their conjecture to claim that rTISP[T(n), S(n)] is not equal to TISP[T(n), S(n)], which is precisely the opposite of the goal of this project!)

The simulation given by Bennett and the simulation of Lange, McKenzie, and Tapp (LMT) are very different from each other. The former method recursively divides the computation into two pieces, reversibly simulating the first piece, saving the configuration of the machine at the halfway point, reversibly erasing the first piece, then reversibly simulating the second piece using the saved configuration. This method thus focuses on modifying the actual transitions of the given machine M, so that the outcome is a machine N with a reversible configuration graph. The same strategy goes for the naive reversible simulation we described earlier. However, the LMT method does not attempt to modify the configuration graph of M at all; it simply executes M in a different way. Starting from the final configuration on input x, the simulation performs a depth-first search on the configuration graph of M for the initial configuration on input x, restricted to configurations that use S(n) space or less. This is done by executing M backwards and forwards in the appropriate way. This simulation, when performed in the right way, uses approximately the same space as M, but possibly exponentially more time.
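The backward search at the heart of the LMT simulation can be sketched on a finite toy configuration space (here predecessors are found by scanning all configurations, standing in for LMT's trick of enumerating them by running M forwards and backwards; the names are illustrative):

```python
def lmt_search(step, configs, start, final):
    """Depth-first search backwards from the final configuration.
    Since each configuration of a deterministic machine has a unique
    successor, the predecessors of `final` form a tree, so no visited
    set is needed -- the space stays small, while the search may take
    exponential time."""
    if final == start:
        return True
    for p in configs:
        if p != final and step(p) == final:
            if lmt_search(step, configs, start, p):
                return True
    return False
```

The absence of a visited set is the crucial space saving: the search never needs to remember where it has been, only where it currently is.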

We believe that there may be a way to combine these two methods, such that we get linear space and efficient (polynomial) time. A result towards this goal is the following:

Lemma. For any function f(n) <= T(n) constructible in O(n) space, TISP[T(n), S(n)] is a subset of rTISP[f(n)^(1+e) 2^(O(min[T(n)/f(n), S(n)])), S(n) log f(n)], for all e > 0.


The proof combines Bennett's simulation and LMT's simulation in a natural way. In essence, it simply replaces the base case of Bennett's recursive algorithm with the LMT simulation. It is easy to see that when f(n) = T(n), we recover the original bounds of Bennett's simulation (improved bounds were later given by Levine and Sherman in 1990). And when f(n) is a constant, we get the same bounds as LMT. When f(n) is a polynomial in S(n), we have O(S log S) reversible space and (significantly) subexponential reversible time.

A generalization of LMT that does not use Bennett's simulation yields the following:

Theorem. For all e > 0, TISP[T(n), S(n)] is a subset of rTISP[2^(min[eT(n), O(S(n))]) / e, S(n) / e].

We have also begun generalizing our results in the form of a game, where we can place pebbles on nodes (save configurations to tape), remove pebbles (erase configurations), color edges (save transitions to tape), and uncolor edges (erase them).
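For reference, the move rule of the classical reversible pebble game (the one underlying Bennett's simulation, played on a line of nodes 1..T) can be stated as a one-line legality check; the edge-coloring moves of our generalized game are not sketched here.

```python
def toggle_pebble(pebbled, i):
    """Reversible pebble game on a line: a pebble may be placed on or
    removed from node i only when node i-1 carries a pebble (node 1 is
    always free).  Returns the new pebble set, or None if illegal."""
    if i == 1 or (i - 1) in pebbled:
        return pebbled ^ {i}   # symmetric difference toggles node i
    return None
```

Note that the placement and removal rules are identical, which is exactly the reversibility constraint: every legal move can be undone by a legal move.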

A (little old) draft of the paper can be found at the following link:  Space-Efficient Reversible Simulations (postscript)

I will post a more "final" version of the paper in a few days.


The complexity of finite automata intersection.

Some time has also been devoted to studying and improving upon the recent result of Karakostas, Lipton, and Viglas, which says that if there is an algorithm that can determine whether k arbitrary finite automata with n states each have nonempty intersection in n^(o(k)) time (note the little-o), then for all e > 0, NTIME[n] is properly contained in DTIME[2^(en)]. Therefore, such an algorithm could be used to factor integers in subexponential time, or find solutions to Boolean logic formulas in subexponential time, which would be surprising. On the other hand, if the algorithm does not exist, that would imply P is properly contained in PSPACE, since the problem of nonempty intersection is complete for PSPACE. In other words, there would exist problems that can be solved using an efficient amount of storage, but cannot in general be solved in an efficient amount of time. Intuitively, this seems true, but formalizing the intuition (and proving the result) has proven to be extremely difficult. At any rate, a proof of the existence or nonexistence of such an algorithm would yield a major result.
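For contrast, here is the straightforward product-construction algorithm, which explores up to n^k product states; the question above is whether this n^k behavior can be beaten to n^(o(k)). (The DFA encoding below is an illustrative choice, not from the paper.)

```python
from collections import deque

def intersect_nonempty(dfas, alphabet):
    """Each DFA is a triple (start, accepting_set, delta) with
    delta[(state, symbol)] -> state.  BFS over the product automaton:
    the intersection is nonempty iff a product state in which every
    component is accepting is reachable from the product start state."""
    start = tuple(d[0] for d in dfas)
    queue, seen = deque([start]), {start}
    while queue:
        states = queue.popleft()
        if all(s in d[1] for s, d in zip(states, dfas)):
            return True
        for a in alphabet:
            nxt = tuple(d[2][(s, a)] for s, d in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

With k automata of n states each, the product has up to n^k states, so this runs in roughly n^k time and space.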

We have spent a little time attempting to improve their result, trying to show that if such an algorithm existed then NL would be properly contained in NP, a step towards the resolution of the P vs. NP problem. Karakostas, Lipton, and Viglas have shown a slightly more general hypothesis implies this result.


A new characterisation of the classical complexity classes and its usefulness.

The newest insight found during the REU was partially discovered while attending the Summer Session on Computational Complexity at the Institute for Advanced Study.

We give new characterisations of the complexity classes NL, P, NP, and PSPACE, by adding new components to logspace Turing machines.  (Actually, the PSPACE characterisation is not new at all, but seen in the light we are about to describe, it seems easier to relate PSPACE to the other classes.)

Start with an arbitrary logspace machine M. On input x, add to M a special polynomial length tape, and a special head to read the tape. The power of the head and contents of the tape will determine which complexity class (NL, P, NP, or PSPACE) is captured.

Informally speaking, let L + HT[P, Q] be the class of languages accepted by the above modified logspace machines when the special head has property P, and the special tape has property Q.

Some possibilities for P are: read-only 1-way, read and write-once 2-way, read-only 2-way, read-write 2-way, etc. We will abbreviate these as 1RO, 2RWO, 2RO, and 2RW, respectively.

Some possibilities for Q are: anything (A) and blank (B). "Anything" means that, independent of the input, the special tape may initially contain anything, and the machine accepts if and only if there exist some possible contents of the tape on which it accepts (basically, a nondeterministic certificate). "Blank" means just that; the tape is initially blank.

We first show that

NL = L + HT[1RO, A].

That is, logspace machines equipped with a special 1-way read-only head on a polynomial-size tape whose initial contents may be anything accept precisely the NL sets. We have also shown that

P = L + HT[2RWO, B],

NP = L + HT[2RO, A],

and PSPACE = L + HT[2RW, B] = L + HT[2RW, A].

The NL and PSPACE proofs are quite straightforward. For NP, just note that a 2-way read-only head on a satisfying assignment, plus a logspace work tape, is enough to verify SAT, and that all NP languages are logspace reducible to SAT. For P, it suffices to observe that if the special head is write-once, only a polynomial number of writes on the special tape ever take place. Now, once the head stops writing, the L machine must cycle (repeat a configuration) within a polynomial number of steps. All in all, these machines loop after polynomial time has passed.
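The NP case can be made concrete with a toy verifier: checking a 3CNF formula against an assignment written on a read-only tape needs only a few counters of working memory, which is the point of the 2RO head. (The encoding of clauses as signed, 1-indexed literals is an illustrative choice.)

```python
def verify_3sat(clauses, cert):
    """Verify a certificate against a 3CNF formula.  `cert` is a 0/1
    string accessed read-only; each clause is a list of nonzero ints,
    where literal l means variable |l| (1-indexed), negated if l < 0.
    Only loop counters are needed -- logspace-style verification."""
    for clause in clauses:
        if not any((cert[abs(l) - 1] == '1') == (l > 0) for l in clause):
            return False          # some clause is unsatisfied
    return True
```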

We use the above characterisation to show that 1-way NL (the class of languages accepted by NL machines whose input head may only move right, though it need not move on every transition) has very, very low power-- unless some major complexity class collapse occurs. That is,

Theorem. If DLOGTIME-uniform AC0 is contained in 1-way NL, then NL = NP.

The proof is very simple. I will describe it roughly here. The proof shows that if the language L = {x | there is a w and integer n such that x = w^n} is decidable in 1-way NL, then 3CNF SAT can be solved in NL. For suppose such a language is in 1-way NL; then an NL machine deciding 3CNF SAT works as follows. On input formula F with n variables and m clauses, it has two stages: one where its one-way read-only special tape of (n+1)m symbols has a satisfying assignment for F written m times on the tape (n+1 because we follow each assignment with a special marker), and another where the machine verifies that the contents of the special tape are indeed a string being repeated over and over. In the first stage, assuming the satisfying assignment is being repeated, the machine checks each clause individually against the satisfying assignment on the special tape. In the second stage, the machine considers the special tape as input and, using 1-way NL, determines that it is a string being repeated. Finally, a counter independently checks that the special marker is guessed every n+1 steps (and in no other steps), to make sure that the string being repeated is n+1 bits long.

Note that this proof could have very easily been carried out without the characterisation; but the formalism of L + HT[1RO, A] simply makes it easier to visualize the certificate being "guessed over and over".

We conjecture that these characterisations will be useful in proving other results relating the classical complexity classes.