A Celebration of 10 Years of DIMACS
Friday, October 9, 1998
8:00 a.m. - 7:00 p.m.
Busch Campus Center, Busch Campus,
Rutgers University, Piscataway, NJ
The DIMACS Partners:
- Rutgers, The State University of New Jersey
- Princeton University, Department of Computer Science
- AT&T Labs - Research
- Bell Labs
- Bellcore
- NEC Research Institute
DIMACS is an NSF Science and Technology Center funded under grant STC
91-19999 and is also supported by the State of New Jersey Commission
on Science and Technology.
DIMACS 10th Anniversary Celebration Abstracts:
1.
"Analysis of a Strategy for Sequencing the Human Genome"
Dick Karp
University of Washington
The rapid increase in human genome sequencing effort and the emergence of
several alternative strategies for large-scale sequencing raise the need
for a thorough comparison of such strategies. This talk surveys the various
strategies and provides a mathematical analysis of one of them: the
BAC-end strategy of Venter, Smith and Hood.
2.
"Algorithms in real algebraic geometry"
Ricky Pollack
Courant Institute, New York University
For the first decade or so of its existence, computational geometry
was concerned primarily with a linear world: arrangements of points,
lines, line segments and their higher dimensional analogues. In the past
decade, motivated by questions in robotics, motion planning, computer vision,
and learning theory, as well as by theoretical studies of arrangements
in general, computational geometry has investigated arrangements of sets which
cannot be described by linear equations and inequalities but are instead
arrangements of "semi-algebraic sets". These are the solution sets of
polynomial equalities and inequalities over the reals. The study of
geometric, topological, algebraic and algorithmic issues about them
constitutes a significant portion of real algebraic geometry.
We begin the talk with an introduction to the subject, presenting the
ways in which it is both similar to and different from the linear world and
introducing many of the problems in which we are primarily interested.
This is followed by a discussion of earlier results and the techniques
that have made increasingly efficient algorithms for these problems
possible. We report on recent developments by the speaker
together with S. Basu and M. F. Roy, which have produced improved
algorithms for the existential theory of the reals, quantifier elimination
over real closed fields, the computation of roadmaps for semi-algebraic
sets and the computation of a semi-algebraic description of the connected
components of a semi-algebraic set. We conclude by presenting some very
recent results of S. Basu on algorithms to compute the Euler
characteristic of a semi-algebraic set and his bounds on the complexity
of a single cell in an arrangement of n surface patches in d-space.
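To make the definition concrete (an illustrative example of ours, not drawn from the talk): a semi-algebraic set is cut out by finitely many polynomial sign conditions, so membership testing is simply polynomial evaluation. Here, an annulus in the plane:

```python
def in_annulus(x, y):
    """Membership in the semi-algebraic set
    { (x, y) : x^2 + y^2 - 1 >= 0  and  x^2 + y^2 - 4 <= 0 },
    an annulus described purely by polynomial sign conditions."""
    r2 = x * x + y * y
    return r2 - 1 >= 0 and r2 - 4 <= 0

print(in_annulus(1.5, 0.0), in_annulus(0.0, 0.0))  # True False
```

Questions about such sets (connectivity, cell structure, quantifier elimination) are where the algorithmic difficulty lies; membership itself is the easy part.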
3.
"Finite-Size Scaling in Percolation and Satisfiability"
Jennifer Chayes
Microsoft
Phase transitions are abrupt changes in the global behavior
of a system in response to small changes in an external
parameter. They are familiar phenomena in physical systems.
But such phenomena also occur in many probabilistic models
and even in some problems in theoretical computer science.
Technically, phase transitions occur only in infinite systems.
Finite-size scaling is the study of how the transition behavior
emerges from the behavior of large, finite systems near the critical value
of the external parameter. In particular, finite-size scaling
characterizes the sharpness of the transition as a function of
the size of the system.
In this talk, I will focus on finite-size scaling in two systems:
percolation and the 2-satisfiability problem.
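The 2-satisfiability transition can be seen even in a toy experiment (a rough numerical sketch of ours, not the speaker's analysis): random 2-SAT formulas with clause density m/n below the threshold 1 are almost surely satisfiable, and above it almost surely unsatisfiable, with the transition visibly smeared at small n -- the smearing that finite-size scaling quantifies.

```python
import itertools
import random

def random_2sat(n, m, rng):
    """m random 2-clauses over n Boolean variables; a literal is
    (variable index, required sign in {+1, -1})."""
    clauses = []
    for _ in range(m):
        v1, v2 = rng.sample(range(n), 2)
        clauses.append(((v1, rng.choice((1, -1))), (v2, rng.choice((1, -1)))))
    return clauses

def satisfiable(n, clauses):
    """Brute force over all 2^n assignments -- fine for tiny n.
    (2-SAT is in fact decidable in linear time via its implication
    graph, part of what makes its transition amenable to analysis.)"""
    for bits in itertools.product((1, -1), repeat=n):
        if all(bits[v1] == s1 or bits[v2] == s2
               for (v1, s1), (v2, s2) in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 12, 50
results = {}
for alpha in (0.5, 2.0):  # clause density m/n below and above the threshold 1
    results[alpha] = sum(
        satisfiable(n, random_2sat(n, int(alpha * n), rng))
        for _ in range(trials))
    print(f"alpha={alpha}: {results[alpha]}/{trials} satisfiable")
```

At n = 12 the two densities already separate clearly, but a nonnegligible fraction of above-threshold instances remain satisfiable; that fraction shrinks as n grows, at a rate described by finite-size scaling.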
4.
"Transforming Men Into Mice"
Pavel Pevzner
University of Southern California
Many people (including myself) believe that transformations
of humans into mice happen only in fairy tales. However,
despite some differences in appearance and habits, men and
mice are genetically very similar. In a pioneering paper, Nadeau
and Taylor (1984) estimated that surprisingly few genomic
rearrangements (approximately 180) have happened since the divergence
of human and mouse 80 million years ago. However, their analysis
is non-constructive and no rearrangement scenario for human-mouse
evolution has been suggested yet. The problem is complicated by the
fact that rearrangements in multi-chromosomal genomes include
inversions, translocations, fusions and fissions of chromosomes, a
rather complex set of operations.
We prove a duality theorem which expresses the genomic distance
in terms of easily computable parameters reflecting different
combinatorial properties of sets of strings. This theorem leads
to a polynomial-time algorithm for computing most parsimonious
rearrangement scenarios. Based on this result and the comparative
physical mapping data we have constructed a scenario of human-mouse
evolution with 131 reversals/translocations/fusions/fissions. A
combination of the genome rearrangement algorithm with the recently
proposed experimental techniques suggests a new "rearrangement-based"
approach to the problem of reconstructing mammalian evolution.
This is joint work with Sridhar Hannenhalli (SmithKline Beecham
Pharmaceuticals).
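For the unichromosomal case (a toy sketch of ours, not the algorithm from the talk), a most parsimonious reversal scenario can be found for tiny genomes by exhaustive search over signed reversals; the duality theorem is what replaces such exponential search with a polynomial-time computation.

```python
from collections import deque

def reversal_distance(start, target):
    """Minimum number of signed reversals turning `start` into `target`.

    Brute-force breadth-first search over all segment reversals (reverse
    a block of genes and flip their signs). Feasible only for tiny
    permutations, unlike the polynomial-time Hannenhalli-Pevzner
    algorithm, but it returns the exact most parsimonious count.
    """
    start, target = tuple(start), tuple(target)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        perm = queue.popleft()
        if perm == target:
            return dist[perm]
        n = len(perm)
        for i in range(n):
            for j in range(i + 1, n + 1):
                # Reverse the segment perm[i:j] and negate its genes.
                seg = tuple(-g for g in reversed(perm[i:j]))
                nxt = perm[:i] + seg + perm[j:]
                if nxt not in dist:
                    dist[nxt] = dist[perm] + 1
                    queue.append(nxt)
    return None

print(reversal_distance((3, -1, 2), (1, 2, 3)))  # 3
```

Multi-chromosomal genomes add translocations, fusions, and fissions to the move set, which is what makes the duality theorem in the talk nontrivial.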
5.
"Verification of Open Systems"
Moshe Y. Vardi
Rice University
In computer system design, we distinguish between closed and open
systems. A closed system is a system whose behavior is completely
determined by the state of the system. An open system is a system that
interacts with its environment and whose behavior depends on this
interaction.
The ability of temporal logics to describe an ongoing interaction of a
reactive program with its environment makes them particularly appropriate
for the specification of open systems. Nevertheless, model-checking
algorithms used for the verification of closed systems are not appropriate
for the verification of open systems. Correct verification of open systems
should check the system with respect to arbitrary environments and should
take into account uncertainty regarding the environment. This is not the
case with current model-checking algorithms and tools.
Module checking is an algorithmic method that checks, given an open
system (modeled as a finite structure) and a desired requirement
(specified by a temporal-logic formula), whether the open system
satisfies the requirement with respect to all environments. In this
talk we describe and examine the module checking problem and study its
computational complexity. Our results show that module checking is
computationally harder than model checking.
This work was done jointly with Orna Kupferman during the DIMACS
Special Year on Logic and Algorithms.
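For contrast (an illustrative sketch of ours, over a made-up toy system): verifying a safety property of a closed system reduces to reachability over its state graph, since the system alone determines every transition. Module checking must instead consider every environment that restricts the enabled transitions, which is where the extra hardness comes from.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Closed-system model checking of a safety property: explore all
    reachable states and return a counterexample path if some state
    violates the invariant, or None if the invariant always holds."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            # Reconstruct the counterexample path back to the initial state.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # invariant holds in every reachable state

# Toy closed system: a counter mod 5; the invariant "counter != 3" fails.
trace = check_invariant(0, lambda s: [(s + 1) % 5], lambda s: s != 3)
print(trace)  # [0, 1, 2, 3]
```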
6.
"Sampling by Markov Chains: How to Speed It Up?"
Laszlo Lovasz
Yale University
Random walk techniques have many applications to the problem of
sampling, i.e., to the problem of generating a random element from a
given distribution. Applications of such methods include
simulation in statistical mechanics, combinatorial
enumeration, integration, volume computation, generation of
contingency tables, and optimization over convex bodies.
The crucial issue in these applications is the "mixing time"
of the random walk, i.e., the speed of convergence to
the stationary distribution.
Focusing on the application to sampling from a convex body, we discuss
basic techniques and new developments in the analysis and improvement
of sampling algorithms. In particular, we show how a
"log-Cheeger inequality" improves mixing time bounds (joint
work with Ravi Kannan), and how "lifting", a general construction
suggested by the work of Diaconis, Holmes and Neal, leads
to dramatic improvement in the mixing time (not just in the bound!)
in some cases (joint work with Fang Chen and Igor Pak).
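As a minimal sketch of the convex-body setting (our illustration, not the techniques of the talk): the ball walk proposes a uniform step within a small ball and rejects proposals that leave the body. Its stationary distribution is uniform over the body, and the mixing time governs how many steps are needed before an endpoint is a usable sample.

```python
import random

def ball_walk(inside, x, steps, delta, rng):
    """One trajectory of the ball walk in the plane: from x, propose a
    uniform point in the ball of radius delta around x and move there
    only if it stays inside the convex body. The walk's stationary
    distribution is uniform over the body."""
    for _ in range(steps):
        # Draw a uniform point in the delta-ball by rejection sampling
        # from its bounding square.
        while True:
            dx = rng.uniform(-delta, delta)
            dy = rng.uniform(-delta, delta)
            if dx * dx + dy * dy <= delta * delta:
                break
        y = (x[0] + dx, x[1] + dy)
        if inside(y):  # proposals leaving the body are rejected; walk stays put
            x = y
    return x

unit_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
rng = random.Random(0)
samples = [ball_walk(unit_disk, (0.0, 0.0), 200, 0.3, rng) for _ in range(500)]
# After enough steps, the endpoints are approximately uniform over the disk.
```

The step count needed here (a few hundred for a disk) is exactly the mixing-time question: techniques like the log-Cheeger inequality and lifting aim to shrink it, which dominates the cost of applications such as volume computation.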
Document last modified on October 2, 1998.