Workshop: Special Year on Logic and Algorithms - One Year Later
July 23-25, 1997
DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ
- Organizers:
- Eric Allender, Rutgers University, allender@cs.rutgers.edu
- Robert Kurshan, Bell Labs, k@research.bell-labs.com
- Moshe Vardi, Rice University, vardi@cs.rice.edu
Abstracts
- Anuj Puri
- A Minimized Automaton Representation of Reachable States
We consider the problem of storing a set $S \subset \Sigma^{k}$ as a
minimized deterministic finite automaton (DFA). We show that inserting
a new string $\sigma \in \Sigma^{k}$ or deleting a string from the
set $S$ represented as a minimized DFA can be done in time $O(k |\Sigma|)$ so
that the resulting set is also represented as a minimized DFA.
We then discuss some applications of this work to the problem of
state space exploration.
This is joint work with Gerard Holzmann.
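As an illustration of the underlying idea (our own sketch, not the Puri-Holzmann algorithm itself): for fixed-length strings, hash-consing keeps structurally identical sub-automata shared, so the automaton stays in a canonical, levelled form, and an insertion or deletion rebuilds only the k nodes along one path, each of width |Sigma|.

```python
# Illustrative sketch (not the authors' implementation): a set of
# length-k strings kept as a hash-consed, levelled DFA.  Structurally
# equal sub-automata are shared, which keeps the representation
# canonical, and each insert/delete rebuilds only the k nodes on one
# path, for O(k * |Sigma|) work per update.

ACCEPT, REJECT = "A", "R"

class MinDFA:
    def __init__(self, k, sigma):
        self.k, self.sigma = k, sigma
        self._memo = {}                 # hash-consing table for nodes
        node = REJECT                   # start from the empty set
        for _ in range(k):
            node = self._mk(tuple(node for _ in sigma))
        self.root = node

    def _mk(self, children):
        # return the unique shared node with these children
        return self._memo.setdefault(children, children)

    def _update(self, node, s, depth, accept):
        if depth == self.k:
            return ACCEPT if accept else REJECT
        i = self.sigma.index(s[depth])
        child = self._update(node[i], s, depth + 1, accept)
        return self._mk(node[:i] + (child,) + node[i + 1:])

    def insert(self, s):
        self.root = self._update(self.root, s, 0, True)

    def delete(self, s):
        self.root = self._update(self.root, s, 0, False)

    def contains(self, s):
        node = self.root
        for ch in s:
            node = node[self.sigma.index(ch)]
        return node == ACCEPT

d = MinDFA(3, "ab")
d.insert("aba"); d.insert("bab"); d.delete("aba")
e = MinDFA(3, "ab"); e.insert("bab")
assert d.contains("bab") and not d.contains("aba")
assert d.root == e.root    # canonical form: history-independent
```

Because the representation is canonical, equality of the represented sets reduces to node equality, which is part of what makes such structures attractive for state-space exploration.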
- E. Bounimova
- Compiler Design using a Formal Specification
The talk presents a compiler development methodology based on a formal
specification of a compiler. The formal specification defines a mapping from
a source language of the compiler into its target language through a
special predicate calculus whose interpretation domain consists of
structured pairs of source and target extended derivation trees.
In the suggested methodology, the compiler implementation can be split
into the following phases: designing the mapping, specifying it formally,
testing the mapping, and coding the algorithms which implement
the mapping. The methodology evolved out of a number of realistic
applications, including a compiler from a subset of SDL 92 into COSPAN.
- V. Levin
- Verification of Message Sequence Charts using COSPAN
Message sequence charts (MSCs) are a popular notation for the low-level
design of communication systems. We present a methodology for specifying and
verifying MSCs. A specification is given using templates, namely charts
with only partial information about the participating events and their
interrelated order. Verification is done by a search whose aim is to
match a template against a design MSC. The search is implemented by
COSPAN after the template and design MSC are translated into two
COSPAN processes. If a matching sequence of events exists, the template
process reaches its final state, which is organized as a trap. This gives
rise to a bad cycle, and the matching sequence is extracted from the error
track reported by COSPAN.
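To make the matching problem concrete, here is a small hedged sketch (our own illustration; the paper's actual method compiles both charts into COSPAN processes and searches with the model checker): a template is a set of events carrying only a partial order, and it matches a design MSC if there is an injective, order-preserving embedding of its events into the design's event sequence. The event names below are made up.

```python
def embeds(events, before, design):
    """events: template event labels (assumed distinct here);
    before: set of pairs (e, f) meaning e must precede f;
    design: the design MSC as a list of event labels.
    True iff the template embeds into the design."""
    def go(assigned, rest):
        if not rest:
            return True
        e, used = rest[0], set(assigned.values())
        for i, label in enumerate(design):
            if label != e or i in used:
                continue
            # respect the template's partial order against events
            # that are already placed
            if any(x in assigned and assigned[x] >= i
                   for (x, y) in before if y == e):
                continue
            if any(y in assigned and assigned[y] <= i
                   for (x, y) in before if x == e):
                continue
            if go({**assigned, e: i}, rest[1:]):
                return True
        return False
    return go({}, list(events))

# template: req1 before ack1, req2 before ack2 -- no cross-channel order
events = ['req1', 'ack1', 'req2', 'ack2']
before = {('req1', 'ack1'), ('req2', 'ack2')}
assert embeds(events, before, ['req2', 'req1', 'ack1', 'ack2'])
assert not embeds(events, before, ['ack1', 'req1', 'req2', 'ack2'])
```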
- M. Vishwanathan
- NEXP-completeness and OBDDs
When analyzing the complexity of decision problems on graphs, one
normally assumes that the input size is polynomial in the number of
vertices in the graph. Galperin and Wigderson~\cite{gw}, and later
Papadimitriou and Yannakakis~\cite{py}, investigated the
complexity of these problems when the
input graph is represented by a polylogarithmically succinct circuit. They
showed that, under such a representation, not only do certain trivial
properties become intractable, but also that, in general, there is an
exponential blow up in the complexity of these problems.
In this paper, we show that, even when the input graph is represented
by a small OBDD, there is an exponential blow up in the complexity of
most graph properties. In particular, we show that the GAP and AGAP
problems become complete for PSPACE and EXP, respectively,
when the graphs are succinctly represented by OBDDs.
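A hedged toy example (ours, not from the paper) of why succinctness causes the blow-up: a constant-size edge predicate can describe a graph on 2^n vertices, so any algorithm that solves GAP by explicit search pays time exponential in the size of the succinct input.

```python
# A 2**n-cycle described in O(1) space: the input is just n plus this
# constant-size predicate, yet the graph it defines has 2**n vertices.
def edge(u, v, n):
    return v == (u + 1) % (1 << n)

def reachable(s, t, n):
    # explicit-state search: exponential in the succinct input size n
    seen, frontier = {s}, [s]
    while frontier:
        u = frontier.pop()
        for v in range(1 << n):
            if edge(u, v, n) and v not in seen:
                seen.add(v)
                frontier.append(v)
    return t in seen

assert reachable(0, (1 << 10) - 1, 10)   # walks through 1023 successors
```

An OBDD for such an edge relation is similarly small; the completeness results in the abstract show that, under standard complexity assumptions, the exponential gap is inherent rather than an artifact of naive search.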
- Husnu Yenigun
- Verifying SDL at a Hardware Interface
We present a methodology for Hardware/Software co-verification.
SDL is used to specify the software part of the system, Verilog or VHDL
is used to specify the hardware part, and the combined system specification
is compiled into S/R, the native language of the verification tool COSPAN.
The interface between HW and SW is given in SDL with a modified semantics.
An interface process behaves like an SDL process from the point of view of
SW and like a HW module from that of HW.
We assume that co-design properties refer either to the hardware part or
to the software part. For hardware properties, we apply a HW-centric
methodology in which the interface process is an abstraction of the SW part
while for software properties, we apply a SW-centric methodology in which the
interface is considered to be an abstraction of the HW. In the HW-centric
view, one can use localization reduction. For the SW-centric view, we apply
the ample-set partial order reduction technique statically to define an
automaton that encodes the reduction. Compiling this automaton together with
the source enables us to treat the result as a synchronous system, and thus
to apply a further localization reduction and symbolic state enumeration,
all with no modification to the verification engine COSPAN. As a by-product,
this method gives a way to combine partial order reduction, localization
reduction and symbolic search in a single algorithm by compiling the partial
order reduction into the source, relative to each instance.
- E. A. Emerson
- Abstractions and Related Methods Facilitating Program Analysis
Program analysis methods such as model checking are hindered by the
state explosion problem. Potential solutions include various
types of abstraction such as symmetry reduction methods,
partial order reduction, and symbolic representations. Some relations
among these methods will be discussed.
(This describes in part joint work with
S. Jha, K. Namjoshi, and D. Peled.)
- R. Kurshan
- Verification Technology Transfer
FormalCheck is a CAD tool for computer-aided verification of hardware
which grew out of some important theoretical and technological advances
of the last decade. I'll talk about the process through which it evolved
from concepts to commerce.
The concepts were founded in logic, automata and computational complexity.
The process involved jumping the hurdles of technology transfer, resistance
to change and commercial justification, including the world of pay-back and
back-bite. The experience plows through the center of issues such as:
Is there such a thing as "Technology Transfer"? If so, how do you do
it? If not, why does everyone talk about it? Is there really a role for
fundamental science anymore? If so, who will support it?
This is an update of a similar talk given during the SYLA.
- Alexander A. Razborov
- Lower Bounds for Algebraic Proof Systems
Algebraic proof systems are proof systems that operate with
(commutative) multi-variable polynomials over some ring and prove facts of
the form ``{\em every solution $(a_1,\ldots,a_n)$ to some system of
polynomial equations $f_1(x_1,\ldots,x_n)=\ldots=f_m(x_1,\ldots,x_n)=0$ is
also a solution to another equation $g(x_1,\ldots,x_n)=0$}''. Of special
interest to us in this talk is the ``combinatorial'' framework, in which we
include in the set of initial equations $f_1=\ldots=f_m=0$ all equations
$x_i^2-x_i=0$ as default axioms (this ensures that we are in the Boolean
world, where $a_i\in\{0,1\}$), and pay special attention to those
implications $(f_1=\ldots=f_m=0)\Rightarrow g=0$ that reflect {\em natural}
combinatorial principles (that is, principles that excite the inhabitants
of that world).
Algebraic proof systems originally emerged in the paper ``Lower
bounds on Hilbert's Nullstellensatz\ldots'' by Beame et al.\ (FOCS 1994) as a
tool for proving lower bounds in {\em propositional\/} proof complexity
(more specifically, for attacking the system $F_d+MOD_p$ of constant-depth
Frege proofs that also allow modular counting gates modulo $p$). It soon became clear,
however, that the importance of algebraic proof systems goes far beyond this
particular task: they provide natural and elegant models for studying (our
way of thinking about) the most basic algebraic facts and constructions. At
the same time, in many aspects they are closely related to such already
established areas as propositional proof complexity or automatic theorem
proving. It is also somewhat surprising that lower bounds for algebraic
proof systems are rather hard to come by, given that they amount
to some very ``elementary'' linear algebra.
In this talk we first survey various connections between algebraic
proof systems on the one hand, and such areas as propositional proof systems
or automatic theorem proving on the other. In particular, we try to
explain from the complexity perspective what makes algebraic proof systems
particularly well suited to automatic proof generation.
Then we survey the lower bounds known for algebraic proof systems. We pay
special attention to two recent advances in the area directly influenced by
the 1996 special year at DIMACS. One result establishes strong lower bounds
on the complexity of Nullstellensatz proofs for the {\em onto} version of the
pigeon-hole principle PHP (Beame, Riis). Another result shows the first
non-trivial lower bounds on proofs in the general polynomial calculus for the
(ordinary) PHP (Razborov).
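As a toy illustration of the Nullstellensatz framework (our own example, not from the talk): the pigeon-hole principle for two pigeons and one hole, with $x_i = 1$ meaning pigeon $i$ sits in the hole, yields an unsatisfiable system, and a refutation is a set of coefficient polynomials witnessing that $1$ lies in the generated ideal:

```latex
% PHP, 2 pigeons / 1 hole: every pigeon is placed, no two share the hole
f_1 = x_1 - 1, \qquad f_2 = x_2 - 1, \qquad f_3 = x_1 x_2 .
% A degree-2 Nullstellensatz refutation: \sum_i g_i f_i = 1
(-x_2)\, f_1 \;+\; (-1)\, f_2 \;+\; 1 \cdot f_3
  \;=\; (-x_1 x_2 + x_2) + (-x_2 + 1) + x_1 x_2 \;=\; 1 .
```

The lower bounds surveyed in the talk show that for larger instances of PHP such certificates must have large degree.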
- Andreas Blass
- Choiceless polynomial time
Turing machines define polynomial time (PTime) on strings but
cannot deal with structures like graphs directly, and there is no known,
easily computable string encoding of isomorphism classes of structures.
Is there a computation model whose machines do not distinguish between
isomorphic structures and compute exactly PTime properties? The question
can be recast as follows: Does there exist a logic that captures
polynomial time (without presuming the presence of a linear order)?
Earlier, Gurevich conjectured a negative answer. The problem motivated
a quest for stronger and stronger PTime logics. All these logics avoid
arbitrary choice. Here we attempt to capture the choiceless fragment of
PTime. Our computation model is a version of abstract state machines
(formerly called evolving algebras) operating on the given input
structure plus a universe of finite sets built from it. The idea is to
replace arbitrary choice with parallel execution. The resulting logic is
more expressive than other PTime logics in the literature. A more
difficult theorem shows that the logic does not capture all of PTime.
- Leonid Libkin
- Local Properties in Finite-Model Theory
The development of Descriptive Complexity suggests a very close
connection between proving lower bounds in complexity theory and
proving inexpressibility results in logic. The latter are of the form
"property P cannot be expressed in logic L over the class of finite
models." Developing tools for proving such expressivity bounds is one
of the central problems in Finite-Model Theory. The
Ehrenfeucht-Fraisse (EF) games remain the dominant tool; however, as
we are interested in more complex problems and logics, playing a game
is often a nontrivial task, and thus it becomes important to find
easier tools for proving expressivity bounds.
In this talk, I survey tools based on "local" properties of
logics. Intuitively, such local properties say that a given logic
cannot grasp the whole structure, but rather only a small part of it.
I start by reviewing various locality theorems proved for first-order
logic, FO (theorems by Gaifman, Hanf, Fagin-Stockmeyer-Vardi, and
corollaries) and show how they can be used to prove expressivity
bounds. I also show how locality notions provide winning strategies in
various EF games that capture expressivity in FO logic and beyond.
In the second part of the talk, I show how to extract four abstract
notions of locality by examining locality theorems, and show a chain
of implications among them. This chain of implications immediately
gives new expressivity bounds for some extensions of FO (for example,
with counting and unary generalized quantifiers). I also give a
characterization of these notions on structures of small degree.
At the end, I'll show two applications of the locality techniques. One
deals with a descriptive complexity approach to the TC^0 vs. Logspace
problem. The other is about expressive power of database query
languages with aggregate functions (essentially SQL-like languages).
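As a hedged illustration of the Hanf-style locality technique mentioned above (our own example, not from the talk): a single long cycle and a disjoint union of two cycles realize exactly the same multiset of r-neighborhood types once the cycles are long enough, so no first-order sentence whose quantifier rank is small relative to r can distinguish them; in particular, graph connectivity is not FO-definable.

```python
# Illustration of Hanf locality: a 2n-cycle (connected) and two n-cycles
# (disconnected) have identical multisets of r-neighborhood types
# whenever n > 2r + 1, because every r-ball is then a path of 2r + 1
# vertices centered at its midpoint.
from collections import deque, Counter

def cycle_edges(n, base=0):
    return [(base + i, base + (i + 1) % n) for i in range(n)]

def adjacency(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def ball_type(adj, center, r):
    # canonical form of the r-ball: sorted multiset of BFS distances
    # from the center (a sound invariant for unions of cycles, where
    # every ball is a path)
    dist = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return tuple(sorted(dist.values()))

def neighborhood_census(edges, r):
    adj = adjacency(edges)
    return Counter(ball_type(adj, v, r) for v in adj)

n, r = 10, 2                                       # need n > 2r + 1
one_big = cycle_edges(2 * n)                       # connected
two_small = cycle_edges(n) + cycle_edges(n, 100)   # disconnected
assert neighborhood_census(one_big, r) == neighborhood_census(two_small, r)
```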
- M. Otto
- Decision Problems for Two-Variable Logics
Logics with only two element variables play an important role
as process logics, as temporal logics or as logics for knowledge
representation. Typical applications of logics in these areas hinge
on the tractability of their satisfiability problem.
By a classical result of Mortimer's, two-variable first-order logic
has the finite model property and is decidable for satisfiability.
Recent studies by a number of different authors have addressed the
decision problems (of satisfiability in finite or in general models)
for logics in the vicinity of two-variable first-order logic --
guided by the above-mentioned applications.
This programme has been carried out also with a view to obtaining
extensions that combine features of decidable extensions of modal
logic with those of two-variable first-order logic (viz. global
quantification).
Concerning decidability of extensions, however, two-variable first-order logic
turns out to be nowhere near as robust as modal logic.
For instance, least fixed points (as in the mu-calculus) or transitive
closures (as in PDL or CTL), when added to two-variable first-order logic,
lead to high degrees of undecidability.
One important exception concerns the extension of two-variable
first-order logic by counting quantifiers (as with graded modalities).
Even though this logic does not have the finite model property, it is decidable
(for satisfiability in finite as well as in general models),
and good upper complexity bounds have been obtained.
- S. Burris
- Bootstrapping 0-1 Laws
Finite model theory has been primarily concerned with the properties
of logics (first order and otherwise) on finite models. This is
analogous to the focus of first order model theory until the early
60's. Since then, first order model theory has progressed by
studying the type structure of complete first order theories. We will
pose a number of questions concerning the development of a classification
theory (analogous to stability theory) for complete theories in
$k$-variable logic with arbitrarily large finite models.
A first order theory is unstable if and only if
it has the independence property or the strict order property. Theorem.
(Baldwin and Dawar) A complete theory in $k$-variable logic with
arbitrarily large finite models does not have the strict order
property. Conjecture: A complete theory in $k$-variable logic with
arbitrarily large finite models is either stable or has the independence
property. The conjecture does not follow easily from the theorem
because of a number of complexities in rephrasing first order notions
for $k$-variable logic.
- M. Grohe
- Finite-variable logics and polynomial time
We discuss the role of the finite-variable fragments of first-order
logic in descriptive complexity theory.
Taking the number of variables as a complexity measure sounds strange at
first, but it has turned out to be quite useful. This can best be seen
from the following two fundamental results
(due to Vardi and Immerman/Lander):
- Given a k-variable formula F and a finite structure A, it
can be decided in TIME(n^O(k)) if F holds in A.
- Given two finite structures A and B, it can be decided in
TIME(n^O(k)) if A and B satisfy the same k-variable sentences.
Actually, both problems are PTIME-complete (for k>1). This suggests a
close connection between the number of variables needed to express a
problem and the exponent of a polynomial bounding the time needed to
compute the problem.
Starting from the results above, finite-variable logics have developed
into a framework for important questions in finite model theory, in
particular questions closely related to the problem of
finding a logic for PTIME.
In this talk we report on recent advances in this area.
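The first result above can be sketched directly (our own hedged illustration, not from the talk): if every subformula's value is stored as a set of assignments to the k variables, each connective costs n^O(k) work, so the whole evaluation stays polynomial for fixed k.

```python
# Minimal sketch: evaluating a formula that reuses k variable names
# over a finite graph.  Every subformula denotes a subset of
# universe**k, so the cost per connective is n^O(k).
from itertools import product

def evaluate(phi, universe, edge, variables):
    """Return the set of assignments (tuples indexed by `variables`)
    satisfying phi, a nested-tuple AST over E, not, and, exists."""
    all_asg = set(product(universe, repeat=len(variables)))
    idx = {v: i for i, v in enumerate(variables)}

    def go(f):
        op = f[0]
        if op == 'E':                         # edge atom E(x, y)
            _, x, y = f
            return {a for a in all_asg if (a[idx[x]], a[idx[y]]) in edge}
        if op == 'not':
            return all_asg - go(f[1])
        if op == 'and':
            return go(f[1]) & go(f[2])
        if op == 'exists':                    # exists x . psi
            _, x, psi = f
            sat, i = go(psi), idx[x]
            return {a for a in all_asg
                    if any(a[:i] + (b,) + a[i+1:] in sat for b in universe)}
        raise ValueError(op)
    return go(phi)

# "x has a neighbour that itself has a neighbour", written with only
# the two variables x, y by re-quantifying x inside.
phi = ('exists', 'y', ('and', ('E', 'x', 'y'),
                       ('exists', 'x', ('E', 'y', 'x'))))
path = {(0, 1), (1, 0), (1, 2), (2, 1)}
sat = evaluate(phi, range(3), path, ('x', 'y'))
# every vertex of the 3-path has a neighbour with a neighbour
assert {a[0] for a in sat} == {0, 1, 2}
```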
- S. Cook
- Relating the Provable Collapse of P to NC1 and the Power of Logical Theories
We show that the following three statements are equivalent: QPV is
conservative over QALV, QALV proves its open induction formulas,
and QALV proves P = NC1. Here QPV and QALV are first order theories
whose function symbols range over polynomial-time and NC1 functions,
respectively.
- C. Pollett
- General Definability and Conservation Results in Bounded Arithmetic
Three well-studied bounded arithmetic theories are $T^i_2$, $S^i_2$, $R^i_2$.
It is known that $S^{i+1}_2$ is $\Sigma^b_{i+1}$-conservative over $T^i_2$ and
that the $\Sigma^b_{i+1}$-definable functions of $S^{i+1}_2$ are the class
$FP^{\Sigma^p_i}$. The $\Sigma^b_{i+1}$-multifunctions of $S^i_2$ are the class
$FP^{\Sigma^p_i}(wit,log)$. Definable multifunctions for the classes $R^i_2$
are
less well understood. The main known result is that the $\Sigma^b_1$-functions
of $R^1_2$ are the class $FNC$. In this talk I will discuss results which arose
while trying to understand the $\Sigma^b_{i+1}$ and $\Sigma^b_i$-definable
multifunctions of $R^i_2$. One tactic which allowed me to derive my results was
to consider only prenex $\Sigma^b_i$-definability in theories whose induction
schemas were restricted to prenex formulas. The prenex versions of $S^i_2$ and
$T^i_2$ turn out to be the same as the usual versions. However, this does not
appear to be the case for $R^i_2$. Nevertheless, $R^i_2$ is
$\Sigma^b_i$-conservative
over its prenex version. Results from my thesis about prenex $R^{i+1}_2$ enable
one to show the $\Sigma^b_{i+1}$-multifunctions of $R^{i+1}_2$ are
$FP^{\Sigma^p_i}(wit,log^{O(1)})$ and the $\Sigma^b_{i+1}$-functions are
$FNC^{\Sigma^p_i}$. Besides the above, in this talk I will discuss how prenex
theories allow one to generalize these definability results to much weaker
theories than $T^i_2$, $S^i_2$, or $R^i_2$. I will also describe a nice
generalization of the conservation result between $S^{i+1}_2$ and $T^i_2$
which holds for prenex theories. Finally, I will discuss how a theorem of
Kadin allows one to show that various weak equalities among bounded
arithmetic theories imply the collapse of the polynomial hierarchy.
- J. Toran
- Optimal proof systems for Propositional Logic
and complete sets
A polynomial-time computable function $h:\Sigma^*\to\Sigma^*$ whose range
is the set of tautologies in Propositional Logic (TAUT) is called
a proof system. Cook and Reckhow defined this concept
and, in order to compare the relative strength of different proof systems,
they considered the notion of p-simulation. Intuitively, a proof system
$h$ p-simulates a second one $h'$ if there is a polynomial time computable
function $\gamma$ translating proofs in $h'$ into proofs in $h$.
A proof system is called optimal if it p-simulates every other proof system.
The question of whether p-optimal proof systems exist is an important one
in the field. Kraj\'{\i}\v{c}ek and Pudl\'ak
have given a sufficient condition for the
existence of such optimal systems, showing that if the deterministic
and nondeterministic exponential time classes coincide,
then
p-optimal proof systems exist. They also give a condition implying the
existence of optimal proof systems
(a concept related to that of p-optimal systems).
In this paper we improve this result by giving a weaker sufficient condition
for this fact. We show that if a particular class of sets with low information
content in nondeterministic double exponential time is included in
the corresponding nondeterministic class, then p-optimal proof systems
exist.
We also show some complexity-theoretic consequences that follow from
the assumption that p-optimal systems exist. We prove that
if p-optimal systems exist, then the class UP (and some other related
complexity classes) have many-one complete languages, and that the existence
of many-one complete sets for NP $\cap$ SPARSE follows from the existence of
optimal proof systems.
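To fix intuitions, here is a toy Cook-Reckhow proof system in Python (our own illustration with an assumed encoding, not from the paper): a "proof" is a formula together with padding of length at least 2^(number of variables), and h maps it to the formula it proves when exhaustive evaluation succeeds, and to a fixed tautology otherwise. h runs in time polynomial in the proof length, and its range (over our tiny formula language) is exactly the tautologies.

```python
# Toy "padded truth-table" proof system: h is polynomial time in |proof|
# because the padding pays for the 2**(#vars) evaluation rows.
from itertools import product

def holds(f, asg):
    op = f[0]
    if op == 'var': return asg[f[1]]
    if op == 'not': return not holds(f[1], asg)
    if op == 'or':  return holds(f[1], asg) or holds(f[2], asg)
    if op == 'and': return holds(f[1], asg) and holds(f[2], asg)

def variables(f):
    return ({f[1]} if f[0] == 'var'
            else set().union(*(variables(g) for g in f[1:])))

TRIVIAL = ('or', ('var', 'p'), ('not', ('var', 'p')))   # p or not p

def h(proof):
    formula, padding = proof
    vs = sorted(variables(formula))
    if len(padding) < 2 ** len(vs):       # proof must pay for the table
        return TRIVIAL
    if all(holds(formula, dict(zip(vs, bits)))
           for bits in product([False, True], repeat=len(vs))):
        return formula
    return TRIVIAL                        # bad proof: output a fixed tautology

excluded = ('or', ('var', 'q'), ('not', ('var', 'q')))
not_taut = ('var', 'q')
assert h((excluded, '##')) == excluded    # 2**1 = 2 <= len('##')
assert h((not_taut, '##')) == TRIVIAL     # q is not a tautology
```

A system like this is p-simulated by essentially every system of interest, because its proofs are exponentially long in the formula; p-optimality asks for the opposite extreme, a single system that p-simulates all others.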
- E. Rosen
- On fragments of existential finite variable logics
Finite variable logics have played a prominent role in finite
model theory, for a variety of reasons. We discuss some results
and open questions on finite variable logics of a model theoretic
nature, concentrating on their existential fragments. In particular,
many concepts from classical model theory have natural finite
variable analogs, so that one can ask whether versions of
classical theorems hold in a finite variable context. For
example, the Los-Tarski theorem states that every first order
sentence that defines a class of models that is closed under
extensions is equivalent to an existential first order sentence.
On the other hand, Andreka, van Benthem, and Nemeti proved
that the finite variable version of this statement fails to hold.
Another line of investigation considers questions about the
finite axiomatizability of theories of finite structures.
For example, although the L^k theory of every finite structure
is finitely axiomatized, this is not true of existential L^k
theories. We present some related results and open problems.
- J. Avigad
- Building Finite Approximations to Models of Arithmetic
When it comes to extracting constructive or computational information
from classical proofs, variations of Gentzen's cut-elimination
procedure play a central role in most proof-theoretic analyses to
date. Cut elimination involves applying a sequence of transformations
to the proof in question, and hence has a very syntactic flavor. In
this talk I will discuss a semantic approach to the subject, which is
nonetheless finitistic and computationally oriented. (This work is
still in progress.)
- S. Buss
- Interpolation, Arithmetic and Cryptography
There have been a number of striking applications of the Craig
interpolation theorem to bounded arithmetic and to propositional proof
complexity. Razborov proved that, under a cryptographic
assumption, certain weak fragments of arithmetic cannot prove NP is
unequal to P, and his proof is based on the fact that a certain kind of
interpolation theorem holds for a fragment of arithmetic. On the other
hand, interpolation does not hold for Peano arithmetic. Krajicek and Pudlak
have shown that if interpolation holds for certain fragments of bounded
arithmetic, then commonly used number-theoretic cryptographic schemes
(e.g., RSA) are not secure.
The interpolation theorem holds for many propositional proof systems
as well and there are close connections between feasible interpolation
for propositional proof systems and interpolation for bounded arithmetic.
This talk will give an introduction to Craig interpolation for bounded
arithmetic and propositional proof systems, and its applications to
complexity lower bounds and to cryptography.
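As a one-line reminder of the statement being exploited (our own example): whenever $A \rightarrow B$ is valid, Craig interpolation provides a formula $I$ over the shared variables sitting between them, for instance

```latex
\underbrace{p \wedge q}_{A} \;\rightarrow\; \underbrace{p}_{I}
  \;\rightarrow\; \underbrace{p \vee r}_{B},
\qquad \mathrm{Var}(I) \subseteq \mathrm{Var}(A) \cap \mathrm{Var}(B) = \{p\}.
```

Feasible interpolation strengthens this by demanding that a circuit for $I$ be computable in polynomial time from a proof of $A \rightarrow B$; it is this effective form that drives the applications above.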
- O. Kupferman
- Reduction Hierarchies
In the automata-theoretic approach to verification, we model
programs and specifications by automata on infinite words.
Correctness of a program with respect to a specification can then be
reduced to the language-containment problem.
In a concurrent setting, the program is typically a parallel
composition of many coordinating processes, and the
language-containment problem that corresponds to verification is
(*) $L(P_1) \cap L(P_2) \cap ... \cap L(P_n) \subseteq L(T)$,
where $P_1,P_2,...,P_n$ are automata that model the underlying
coordinating processes, and $T$ is the task they should perform.
In 1994, Kurshan suggested the heuristic of Reduction Hierarchies for
circumventing the exponential blow-up introduced by conventional
methods that solve the problem (*). In the reduction-hierarchy
heuristic, we solve the problem (*) by solving a sequence of easier
problems, which involve only automata of tractable sizes.
Complexity-theoretic conjectures imply that there are settings in
which the heuristic cannot circumvent the exponential blow-up.
In this talk, we demonstrate the strength of the heuristic, study
its properties, characterize settings in which it performs
effectively, and suggest a method for searching for reduction
hierarchies.
This is joint work with Robert Kurshan and Mihalis Yannakakis.
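For finite-word automata, the containment check behind (*) can be sketched directly (our simplified illustration; the talk's setting is automata on infinite words): search the product of the $P_i$ with a deterministic, complete $T$ for a state witnessing a counterexample.

```python
def contained(ps, t, alphabet):
    """ps: list of DFAs (start, delta, accepting) with delta a dict
    (state, letter) -> state; t must be deterministic and complete.
    True iff L(P1) & ... & L(Pn) is a subset of L(T)."""
    start = tuple(p[0] for p in ps) + (t[0],)
    stack, seen = [start], {start}
    while stack:
        state = stack.pop()
        # counterexample state: every P_i accepts but T rejects
        if all(q in p[2] for q, p in zip(state, ps)) and state[-1] not in t[2]:
            return False
        for a in alphabet:
            nxt = tuple(p[1][(q, a)] for q, p in zip(state, ps)) \
                + (t[1][(state[-1], a)],)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True

# made-up toy processes: P1 = even number of a's, P2 = at least one a
p1 = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1}, {0})
p2 = (0, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1}, {1})
# tasks: T2 = at least two a's, T3 = at least three a's
t2 = (0, {(s, c): (min(s + 1, 2) if c == 'a' else s)
          for s in range(3) for c in 'ab'}, {2})
t3 = (0, {(s, c): (min(s + 1, 3) if c == 'a' else s)
          for s in range(4) for c in 'ab'}, {3})
assert contained([p1, p2], t2, 'ab')        # even and >= 1 implies >= 2
assert not contained([p1, p2], t3, 'ab')    # "aa" is a counterexample
```

The product grows with the product of the component sizes, which is exactly the blow-up the reduction-hierarchy heuristic tries to avoid.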
- Doron Peled
- Software Design Tools
Large software design projects often begin with
a requirements capture and analysis phase.
So far, there are few tools that a designer can
rely on to support these tasks.
We show that a portion of the requirements, relating
the architecture and the temporal behavior of the new
system, can be captured in formalized message sequence charts.
In this talk, a set of small, light-weight design tools
will be presented. These tools
support the requirements engineering phase with
standardized descriptions and automated checks.
The design of these tools focuses on the use of graphical
interfaces to avoid special formalisms and hide
the actual algorithms that are used by these tools.
- O. Maler
- Timed regular expressions
In this paper we define {\it timed regular expressions}, an extension
of regular expressions for specifying sets of dense-time
discrete-valued signals. We show that this formalism is
equivalent in expressive power to the timed automata of Alur and Dill
by providing a translation procedure from expressions
to automata and vice versa. The result is extended to omega-regular
expressions.
- V. Sazonov
- From World Wide Web to Bounded Hyperset Theory
It is demonstrated how a Hyperset Theory (satisfying P. Aczel's
Anti-Foundation Axiom) naturally arises from the idea of the World-Wide
Web (WWW). Alternatively, the Web serves as an illustration and possible
application of the abstract notion of antifounded sets. A
$\Delta$-language of Bounded Hyperset Theory is presented as a query
language to the Web or, more generally, to Web-like Data Bases (WDB).
It is shown that it describes exactly all abstract (or generic, up to
bisimulation) PTIME-computable queries with respect to (possibly cyclic)
graph encoding of hypersets. This result is based (i) on
a reduction of the $\Delta$-language for hypersets to the language
FO+LFP (= First-Order Logic with the Least Fixed-Point operator)
over graphs considered up to the bisimulation relation and (ii) on
definability in FO+LFP of a linear ordering on any strongly
extensional finite graph by a single formula (cf.\ also DIMACS
TR-97-05). The case of finitely branching, possibly infinite
graphs and the corresponding hypersets is also discussed; it corresponds
to a finitely branching but infinite Web and deserves further
investigation.
The talk is based on DIMACS TR-97-21 and also 97-05 by
Alexei Lisitsa and Vladimir Sazonov.
Cf.
http://dimacs.rutgers.edu/archive/TechnicalReports/1997.html
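To make the "graphs up to bisimulation" view concrete, here is a hedged sketch (ours, not from the reports): two pointed graphs encode the same hyperset exactly when their roots are bisimilar, which can be computed as a greatest fixed point by discarding pairs whose children cannot be matched.

```python
def bisimilar(children, a, b):
    """children: node -> set of child nodes, a possibly cyclic graph
    encoding hypersets.  Computes the greatest bisimulation by
    repeatedly discarding pairs whose children cannot be matched."""
    nodes = list(children)
    rel = {(u, v) for u in nodes for v in nodes}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(rel):
            matched = (
                all(any((x, y) in rel for y in children[v])
                    for x in children[u])
                and all(any((x, y) in rel for x in children[u])
                        for y in children[v]))
            if not matched:
                rel.discard((u, v))
                changed = True
    return (a, b) in rel

# Omega = {Omega}, the canonical antifounded set, has many encodings:
# a 1-cycle and a 2-cycle both denote it; the empty set does not.
g = {'a': {'a'}, 'b': {'c'}, 'c': {'b'}, 'e': set()}
assert bisimilar(g, 'a', 'b')
assert not bisimilar(g, 'a', 'e')
```

Identifying graphs up to bisimulation in this way is what makes the $\Delta$-language queries "abstract" (generic) in the sense of the abstract above.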
Document last modified on July 2, 1997.