SCIENTIFIC ACHIEVEMENTS AT DIMACS:

HIGHLIGHTS OF THE FIRST FOUR YEARS


Introduction

DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science, is a National Science Foundation Science and Technology Center, also supported by the New Jersey Commission on Science and Technology. It is a consortium of four institutions, Rutgers and Princeton Universities, AT&T Bell Laboratories, and Bellcore. DIMACS has some 120 permanent members from the four institutions and hosts hundreds of associate members who are visitors, postdoctoral fellows, and graduate students.

DIMACS was founded in February 1989, and in its first four years of existence hundreds of scientific results have been achieved. These results range from the very useful, with almost immediate practical application, to the highly theoretical, building the foundations of a body of research.

DIMACS' research program is centered around two themes, discrete mathematics and theoretical computer science, broadly interpreted. The program is designed to stimulate the scientific interactions of DIMACS' permanent members, to expose a broad community of researchers to discrete problems and methods, and to facilitate interactions between researchers in discrete mathematics/theoretical computer science and the users of these areas of science. This program is carried out through special year research thrusts, additional week-long workshops on special topics, special day-long events, seminars, and lecture series, and it strongly emphasizes the interactions among permanent members; among permanent and associate members; and between DIMACS, the rest of the research community, and the users of discrete mathematics and theoretical computer science.

So far, there have been four special years at DIMACS. Year 1 (September 1989 to August 1990) was on the topic Discrete and Computational Geometry; Year 2 (September 1990 to August 1991) on the topic Complexity Theory of Interactive Computing; Year 3 (September 1991 to August 1992) on the topic Graph Theory and Algorithms; and Year 4 (September 1992 to August 1993, currently ongoing) on the topic Combinatorial Optimization. Year 5 (September 1993 to August 1994) will be on the topic Massively Parallel Computing.

In this booklet, we emphasize specific scientific results. Many of the scientific achievements described here are explicitly traceable to DIMACS' programs. They resulted directly from a special year thrust, bringing the attention of permanent members and/or associate members to bear on a particular problem or type of problem; they arose directly from a DIMACS workshop; they were carried out in collaboration between a DIMACS permanent member and a DIMACS visitor, postdoc, or graduate student; or they were carried out in a newly-forged partnership between DIMACS members at several of the DIMACS institutions. However, research successes in discrete mathematics and theoretical computer science are not always so clearly traceable, often being the result of casual conversations while in the hallway or while sipping coffee during a break in a workshop. Thus, we have chosen to include other achievements that are the work of individual DIMACS permanent members, alone or in collaboration, which are not as immediately traceable to the activities of DIMACS. We include them because they give a good summary of the successes of the individual research programs of DIMACS members and because it is fair to say that at least some of these research programs were stimulated by the interactions at and through DIMACS and by the events and visitors and research thrusts at DIMACS.


Solution of the Gilbert-Pollak Conjecture on Design of Networks

During the first DIMACS special year, on Discrete and Computational Geometry, DIMACS postdoc D.Z. Du and DIMACS permanent member Frank Hwang (AT&T Bell Labs) solved an old problem in the design of networks. Designers of things like computer circuits, long-distance telephone lines, or mail routings seek a collection of routes of minimum total length which will connect up all desired locations. In solving such a problem, it makes a difference whether or not one is allowed to add extra points in the network to use as interconnection points. If interconnection points are not allowed, we talk of finding the minimum spanning tree. If interconnection points are allowed, we talk of finding a Steiner minimum tree. Du and Hwang proved a 22-year-old conjecture of Gilbert and Pollak which says that adding extra interconnection points cannot reduce the total length of the tree by more than about 13 percent. That is, they proved that the minimum ratio between the length of a Steiner minimum tree and the length of a minimum spanning tree is √3/2 (about 0.866).

In 1977, Garey, Graham, and Johnson proved that the Steiner minimum tree problem is NP-hard. Therefore, it is important to find fast heuristics with good performance. By contrast, there are efficient algorithms (like Prim's and Kruskal's) to find minimum spanning trees. The Du-Hwang result suggests that minimum spanning trees are viable heuristics for Steiner minimum trees. While previous proofs of special cases of the Gilbert-Pollak conjecture were typically messy and computation-heavy, the Du-Hwang proof is conceptual in nature and requires essentially no computation. This makes the technique of the proof adaptable to many other minimax problems.
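
To make the connection concrete, here is a small Python sketch of the minimum spanning tree heuristic (the function names and the three-point example are our own illustration, not Du and Hwang's code). It computes the MST length of a point set with Prim's algorithm; by the Du-Hwang theorem, the Steiner minimum tree is at least √3/2 times this length, so the MST is never more than about 15.5 percent longer than optimal.

    # Minimal sketch: a Euclidean minimum spanning tree as a heuristic for
    # the Steiner minimum tree.  By the Du-Hwang theorem, the Steiner
    # optimum is at least sqrt(3)/2 times the MST length.
    import math

    def mst_length(points):
        """Prim's algorithm on the complete Euclidean graph; O(n^2)."""
        n = len(points)
        in_tree = [False] * n
        dist = [math.inf] * n          # distance from each point to the tree
        dist[0] = 0.0
        total = 0.0
        for _ in range(n):
            u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
            in_tree[u] = True
            total += dist[u]
            for v in range(n):
                if not in_tree[v]:
                    d = math.dist(points[u], points[v])
                    if d < dist[v]:
                        dist[v] = d
        return total

    terminals = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]  # unit triangle
    mst = mst_length(terminals)                                    # 2.0
    print(mst, "Steiner optimum is at least", mst * math.sqrt(3) / 2)

For the unit equilateral triangle in the example, the MST has length 2 while the Steiner tree through the center point has length √3, so the √3/2 ratio is achieved exactly.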


Linear-Time Triangulation of a Polygon

A major accomplishment arising directly from DIMACS' first special year, on Discrete and Computational Geometry, was the solution by Bernard Chazelle, DIMACS permanent member from Princeton University, of the longstanding open problem of finding a linear-time algorithm for triangulating a simple polygon. The triangulation problem is to partition a simple polygon into triangles without adding new vertices. Triangulation is the necessary preprocessing step for many nontrivial operations on simple polygons. It is used in computer graphics to fill regions and paint two-dimensional scenes. Operations which commonly assume that polygons have been triangulated include navigating through a polyhedral scene, ray-tracing, computing geodesic paths, performing hidden surface removal, intersecting polygons, and computing visibility regions.

Despite its apparent simplicity, the problem remained elusive for many years. Naive methods require quadratic or even cubic time. While the literature on the problem is abundant, most results have concentrated on special classes of polygons, and breakthroughs on the triangulation problem in its full generality had been few and far between until Chazelle discovered his linear time algorithm.
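
Chazelle's linear-time algorithm is far too intricate to reproduce here, but the following simple quadratic-time "ear clipping" sketch in Python (our own illustration, assuming a simple polygon given in counterclockwise order with no three collinear vertices) shows what the triangulation problem asks for.

    # Naive "ear clipping" triangulation of a simple polygon, O(n^2) per ear.
    # This only illustrates what is being computed; Chazelle's linear-time
    # algorithm is far more intricate.

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def inside(p, a, b, c):
        # strict inclusion of p in triangle abc (abc counterclockwise)
        return cross(a, b, p) > 0 and cross(b, c, p) > 0 and cross(c, a, p) > 0

    def triangulate(poly):
        v = list(range(len(poly)))
        triangles = []
        while len(v) > 3:
            for i in range(len(v)):
                a, b, c = poly[v[i-1]], poly[v[i]], poly[v[(i+1) % len(v)]]
                if cross(a, b, c) <= 0:            # reflex corner: not an ear
                    continue
                if any(inside(poly[j], a, b, c)
                       for j in v if poly[j] not in (a, b, c)):
                    continue                       # another vertex blocks the ear
                triangles.append((a, b, c))
                del v[i]                           # clip the ear
                break
        triangles.append(tuple(poly[j] for j in v))
        return triangles

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(triangulate(square))                     # two triangles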


Interactive Proofs and Approximate Algorithms

One of the most important developments in theoretical computer science in recent years has been the development of ways to reformulate computer "proofs" so that the correctness of a computation can be verified with arbitrarily high confidence by a small number of random spot checks. This development has recently received widespread publicity, with articles in such leading publications as the New York Times. A critical result leading to these developments came out of the second DIMACS special year, on Complexity Theory of Interactive Computing. This was the discovery of connections between interactive proofs and approximate algorithms.

The class of problems that computer scientists call NP-complete are well known for being computationally intractable. Such problems come in different shades. Some of them have fast algorithms yielding high-quality approximate solutions (e.g., the bin-packing problem), while others (e.g., the unrestricted traveling salesman problem) are probably impossible to solve even approximately (assuming that P ≠ NP). One long-standing mystery in combinatorial optimization has been to explain why it has been so hard to find even approximate solutions to the well-known problem of finding the largest clique in a graph. This maximum clique problem has a wide variety of practical applications, ranging from the clustering problems arising in handling large data sets in medicine, signal processing, and robotics, to the problems of locating facilities such as emergency centers, urban services, airport hubs, warehouses, and communications centers, to the problem of discovering blocks in social networks. During the DIMACS special year, DIMACS postdoc Uriel Feige, DIMACS special year visitor Shafi Goldwasser, DIMACS permanent member Laszlo Lovasz, and DIMACS visitors Shmuel Safra and Mario Szegedy solved the mystery by showing that the problem of finding the maximum clique cannot be solved even approximately, unless NP-complete problems can be solved in almost polynomial time. Remarkably, the proof techniques employed to obtain this result were derived from recent research on cryptography, stressed in DIMACS' second special year, and having on the surface no connection whatsoever with the maximum clique problem. The connection was triggered by a DIMACS workshop on "Structural Complexity and Cryptography" and played a central role in the development of the theory of proof checking.
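
The brute-force alternative helps explain why approximation matters. The obvious exact algorithm below (our own illustration) checks subsets of vertices from largest to smallest and thus takes exponential time in the worst case; the Feige-Goldwasser-Lovász-Safra-Szegedy theorem says that even the apparently easier goal of approximating the answer is intractable.

    # Exhaustive maximum clique search: examines up to 2^n vertex subsets.
    from itertools import combinations

    def max_clique(vertices, edges):
        adj = {frozenset(e) for e in edges}
        for k in range(len(vertices), 0, -1):          # try large sizes first
            for subset in combinations(vertices, k):
                if all(frozenset(p) in adj for p in combinations(subset, 2)):
                    return subset
        return ()

    v = range(5)
    e = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    print(max_clique(v, e))                            # (0, 1, 2)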


More on Approximate Solutions to Hard Problems

Recently, there has been great interest in approximate solutions to hard problems. The flurry of interest in this topic was triggered by results, coming out of DIMACS' second special year, on the difficulty of solving approximately the famous maximum clique problem. (This work is described immediately above.) Recent results build on work of Christos Papadimitriou and DIMACS permanent member Mihalis Yannakakis of AT&T Bell Laboratories. The new results are due to DIMACS visitor and now permanent member Carsten Lund of AT&T Bell Laboratories, working in collaboration with Sanjeev Arora, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. Szegedy was a DIMACS visitor when most of this work was done and is now a DIMACS postdoc at AT&T Bell Labs. Although NP-complete problems can be translated into one another by polynomial-time algorithms, the same is not true of the corresponding approximation problems. Papadimitriou and Yannakakis found a class of problems, called MAX SNP, for which a polynomial-time translation of one problem into another extends to their approximations. Arora et al. have shown that one MAX SNP problem, MAX 3-SAT, has a polynomial-time approximation scheme if and only if P = NP. Thus, it is unlikely that there will be such an approximation scheme for various widely studied and important problems such as the Traveling Salesman Problem with the triangle inequality, the Steiner tree problem, and the vertex cover problem. (These problems are mentioned throughout this booklet.) Recent results by Lund and Yannakakis have also shown the nonapproximability of the chromatic number and the minimum set cover problem.
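
There is a classical counterpoint worth recalling: for MAX 3-SAT itself, a uniformly random truth assignment satisfies each clause with three distinct variables with probability 1 - (1/2)^3 = 7/8, so a trivial randomized algorithm gets within a factor 7/8 of optimal in expectation. The hardness results above say that no polynomial-time scheme can achieve every factor 1 - ε unless P = NP. The Python sketch below (our own illustration, not part of the results described above) checks the 7/8 figure empirically.

    # A random assignment satisfies 7/8 of 3-variable clauses in expectation.
    import random

    def satisfied(clauses, assignment):
        # clause: tuple of nonzero ints; literal v means x_v, -v means NOT x_v
        return sum(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
    trials = 10000
    avg = sum(satisfied(clauses, {v: random.random() < 0.5 for v in (1, 2, 3)})
              for _ in range(trials)) / trials
    print(avg / len(clauses))          # close to 7/8 = 0.875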


The Finite Game-Chromatic Number for Planar Graphs

One of the most interesting results coming out of DIMACS' third special year, on Graph Theory and Algorithms, was the direct result of a DIMACS workshop on planar graphs held during this special year. The results link pure combinatorics with genuinely applied topics, and the DIMACS workshop played a key role in making it happen. Much of discrete mathematics and theoretical computer science is concerned with optimization problems. Traditional research concentrates on solving these problems when they are entirely spelled out in advance. However, suppose that in the middle of trying to implement the solution to one of these problems we encounter an "adversary" who tries to make things go wrong. The adversary could be a computer virus, it could be a real-life opponent, or it could just be human error. The study of algorithms that must be modified dynamically on the basis of adversarial behavior is an important topic with many practical applications.

An idealized model of an adversarial situation involves graph coloring: a game is played between two players who alternate turns, each turn coloring a previously uncolored vertex with one of t available colors so that it receives a color different from those of its already-colored neighbors. The first player wins if all vertices are eventually colored, while the second player wins if an impasse is reached and no additional vertex can be colored. This game is originally due to Bodlaender. The game chromatic number of a graph G is the smallest t such that the first player has a winning strategy. Early results showed that the game chromatic number of a tree is at most 4. However, it was not clear that the game chromatic number of a planar graph is even bounded. This problem was recently solved by DIMACS member Tom Trotter of Bellcore and Hal Kierstead of the University of South Carolina. They used an idea presented at the planar graphs workshop in a totally different context: the notion of p-arrangeability of a graph, which is due to G. Chen and R. Schelp and arises from the theory of linear Ramsey numbers for planar graphs, a highly theoretical concept. Specifically, Kierstead and Trotter proved that the game chromatic number of a p-arrangeable graph is at most 1 + (p+1)^2 and that every planar graph is 10-arrangeable; hence, its game chromatic number is at most 122. Since then, they have reduced this upper bound to 41.
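
The rules of the game are easy to simulate. The toy Python sketch below (our own illustration; both players simply make random legal moves, so it does not implement the Kierstead-Trotter strategy) plays the game on a small graph with t colors.

    # Toy simulation of the graph coloring game.
    import random

    def coloring_game(adj, t, seed=0):
        """adj: dict vertex -> set of neighbors; t: number of available colors."""
        rng = random.Random(seed)
        color = {}
        while len(color) < len(adj):
            legal = [(v, c) for v in adj if v not in color
                     for c in range(t)
                     if all(color.get(u) != c for u in adj[v])]
            if not legal:
                return "impasse: player 2 wins"    # no vertex can be colored
            v, c = rng.choice(legal)               # players alternate moves;
            color[v] = c                           # here both just move at random
        return "fully colored: player 1 wins"

    cycle = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}   # the 4-cycle
    print(coloring_game(cycle, t=3))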


The Case k = 5 of Hadwiger's Conjecture

One of the most famous open problems in graph theory is the so-called Hadwiger conjecture, dating from 1943, which states that every graph not contractible to the complete graph on k+1 vertices is colorable in k colors. The famous four-color theorem, proved in 1976, settles the case k = 4 of this conjecture. As an outgrowth of DIMACS' third special year, on Graph Theory and Algorithms, DIMACS permanent member Paul Seymour of Bellcore, together with DIMACS visitors Neil Robertson and Robin Thomas, proved the next case of this conjecture, the case k = 5.


Solution to Very Large Traveling Salesman Problem

One of the most important problems in the field called combinatorial optimization is the traveling salesman problem. Although originally formulated as the problem of finding a shortest route for a traveling salesman to visit all of his customers, the problem has found important applications in drilling holes through printed circuit boards (where the drill has to move through a set of prescribed locations) and in x-ray crystallography (where the diffractometer has to move through a set of prescribed angles). The problem and variations of it have also found important industrial applications in routing robots through automated warehouses, sending couriers to service automatic teller machines, and picking up coins from pay telephone booths. The traveling salesman problem is notoriously difficult. Since the 1950's, mathematicians and computer scientists have been designing algorithms and writing computer programs to solve it. A standard collection of test problems for these programs has been gathered in a publicly available file called TSPLIB. In the Spring of 1992, DIMACS permanent members David Applegate from AT&T Bell Labs, Vasek Chvátal from Rutgers, and Bill Cook from Bellcore, together with Robert Bixby from the Center for Research on Parallel Computation at Rice University, solved ten problems from TSPLIB with sizes ranging from 1060 to 3038 "cities." In particular, the 3038-city problem is the largest traveling salesman problem ever solved. The problem arose from a practical application: finding the most efficient way to connect 3038 points for drilling holes in making a circuit board.

It should be noted that a large number of cities is not the only factor that makes a particular instance of the traveling salesman problem difficult; the number of nodes in a "branching tree" used to divide the problem into subproblems may provide a better measure of the inherent difficulty of the instance. From this point of view, bigger does not always mean harder (for instance, Applegate, Bixby, Chvátal, and Cook disposed of the previous world record, a 2392-city problem, without having to subdivide it at all, but required a subdivision of a 532-city problem into five subproblems); and the new world record looks all the more impressive: In going from 2392 cities to 3038 cities, these four researchers went from a single node in the branching tree to 1761 nodes. The accomplishments of Applegate, Bixby, Chvátal, and Cook have recently been described in Discover Magazine.
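
The idea of a branching tree can be made concrete with a toy example. The Python sketch below (our own illustration; it bears no resemblance to the cutting-plane machinery actually used by Applegate, Bixby, Chvátal, and Cook) grows partial tours depth-first and prunes a branch as soon as a crude lower bound on its completion cost reaches the best tour found so far; the partial tours examined are exactly the nodes of a branching tree.

    # Minimal branch-and-bound sketch for tiny TSP instances.
    import math

    def tsp(points):
        n = len(points)
        d = [[math.dist(p, q) for q in points] for p in points]
        best = [sum(d[i][(i + 1) % n] for i in range(n))]   # some valid tour
        cheapest = [min(row[j] for j in range(n) if row[j] > 0) for row in d]

        def branch(tour, cost):
            if len(tour) == n:
                best[0] = min(best[0], cost + d[tour[-1]][0])
                return
            rest = [v for v in range(n) if v not in tour]
            # every remaining city still needs at least its cheapest edge
            if cost + sum(cheapest[v] for v in rest) >= best[0]:
                return                                      # prune this subtree
            for v in rest:
                branch(tour + [v], cost + d[tour[-1]][v])

        branch([0], 0.0)
        return best[0]

    print(tsp([(0, 0), (0, 1), (2, 0), (2, 1), (1, 3)]))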


Approximate Solutions to Very Large Traveling Salesman Problems

In the section immediately above, we described the Traveling Salesman Problem and its practical significance. The result described there concerned optimal solutions. However, approximate solutions to difficult problems such as the TSP are also of great practical importance, and approximate solutions to the TSP have been widely studied. DIMACS permanent members Jon Bentley and David Johnson of AT&T Bell Labs and DIMACS visitor Lyle McGeoch have developed a heuristic that has been very successful at finding such approximate solutions. For example, applied to a 1,000,000-city TSP randomly generated from points in the unit square, this heuristic got within 1.5% of optimal in about 2.7 hours on a 25 MHz MIPS processor. In 16 minutes, the heuristic got within 2% of optimal on a real-world instance with 85,900 cities.
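
As an illustration of the flavor of such tour-improvement heuristics (the Bentley-Johnson-McGeoch implementations are vastly more engineered, with sophisticated neighbor-list data structures), the following Python sketch builds a nearest-neighbor tour and then repeatedly applies 2-opt moves, each of which reverses a segment of the tour whenever doing so shortens it.

    # Nearest-neighbor construction followed by 2-opt local improvement.
    import math, random

    def tour_length(pts, tour):
        return sum(math.dist(pts[tour[i - 1]], pts[tour[i]])
                   for i in range(len(tour)))

    def nearest_neighbor(pts):
        tour, left = [0], set(range(1, len(pts)))
        while left:
            nxt = min(left, key=lambda j: math.dist(pts[tour[-1]], pts[j]))
            tour.append(nxt)
            left.remove(nxt)
        return tour

    def two_opt(pts, tour):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    a, b = pts[tour[i - 1]], pts[tour[i]]
                    c, d = pts[tour[j]], pts[tour[(j + 1) % len(tour)]]
                    if (math.dist(a, c) + math.dist(b, d)
                            < math.dist(a, b) + math.dist(c, d) - 1e-12):
                        tour[i:j + 1] = reversed(tour[i:j + 1])   # 2-opt move
                        improved = True
        return tour

    pts = [(random.random(), random.random()) for _ in range(200)]
    t = nearest_neighbor(pts)
    print(tour_length(pts, t), tour_length(pts, two_opt(pts, t)))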


Solutions to Very Large Matching Problems

The matching problem of combinatorial optimization has many practical applications, for example in scheduling job assignments, in assigning storage locations in computer programs, in matching buyers with sellers in controlled markets, and in assigning medical school graduates to hospital residency programs. DIMACS permanent members David Applegate of AT&T Bell Labs and Bill Cook of Bellcore have solved matching problems with over 100,000 points and over a billion edges. This work improves the best previously known result for matching by two orders of magnitude.
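
The sketch below (our own illustration, not the Applegate-Cook code, which is an exact and highly engineered implementation of Edmonds' blossom algorithm) shows the problem itself, along with why naive methods are not enough: a greedy rule that repeatedly takes the cheapest available edge can leave vertices unmatched.

    # Greedy matching heuristic: take the lightest edge with both ends free.

    def greedy_matching(edges):
        """edges: iterable of (weight, u, v); returns a set of disjoint edges."""
        matched, matching = set(), []
        for w, u, v in sorted(edges):
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    edges = [(1.0, 'a', 'b'), (0.2, 'b', 'c'), (1.1, 'c', 'd')]
    print(greedy_matching(edges))   # [('b', 'c')] -- leaves 'a' and 'd'
                                    # unmatched, though a-b, c-d would
                                    # match everyone; exact algorithms exist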


Massively Parallel Discrete Event Simulation of Telecommunication Networks

Discrete event simulation is an indispensable tool for the design and analysis of large telecommunication systems. Unfortunately, such simulations present a very large computational burden; the execution time of a typical simulation run on a high-performance workstation is measured in hours. DIMACS visitor Bruno Gaujal and DIMACS permanent member Albert Greenberg of AT&T Bell Laboratories have developed new massively parallel algorithms for call-by-call simulation of large circuit-switched networks, and have implemented the algorithms on SIMD (single instruction, multiple data) and MIMD (multiple instruction, multiple data) parallel machines. On networks similar to the AT&T long distance network, a sweep algorithm developed by Gaujal, Greenberg, and David Nicol of the College of William and Mary achieves a simulation execution rate of over 3 million calls a minute, reducing the time required for a production run to a small number of minutes. A key difficulty in the design of a massively parallel simulation of the network is coping with asymmetries. In realistic networks, the call arrival rates and the link capacities may vary by three orders of magnitude over the network. On the other hand, general-purpose parallel computers are typically quite regular: identical processors with identical memory capacities are linked in a symmetric interconnection network. To cope with this mismatch, the algorithms exploit fine-grained parallelism among the events directed at a single link, allowing Gaujal and Greenberg to assign proportionately more processors to work on the links receiving more events. All the computations are carried out together in a manner well-suited to SIMD architectures, which are characterized by large numbers of processors, each with moderate speed and memory capacity.
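
A sequential toy version conveys what "call-by-call" simulation means (this is our own illustration of the simulation model, not the parallel sweep algorithm): calls arrive at a single link of fixed capacity, each blocked call is lost, and each accepted call holds one circuit for a random duration.

    # Sequential toy call-by-call simulation of one circuit-switched link.
    import heapq, random

    def simulate(arrival_rate, mean_holding, capacity, n_calls, seed=1):
        rng = random.Random(seed)
        t, busy, blocked = 0.0, 0, 0
        departures = []                          # min-heap of hang-up times
        for _ in range(n_calls):
            t += rng.expovariate(arrival_rate)   # next call arrival
            while departures and departures[0] <= t:
                heapq.heappop(departures)        # earlier calls hang up
                busy -= 1
            if busy == capacity:
                blocked += 1                     # all circuits in use: call lost
            else:
                busy += 1
                heapq.heappush(departures, t + rng.expovariate(1 / mean_holding))
        return blocked / n_calls

    # blocking probability; close to the Erlang B formula for this load
    print(simulate(arrival_rate=10, mean_holding=1, capacity=12, n_calls=100000))

The parallel algorithms must, in effect, reproduce this logic simultaneously across thousands of links carrying widely varying traffic.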

Parallel discrete event simulation belongs to an important class of industrial and scientific problems that have inherent parallelism but lack the regular structure that usually accompanies success in parallel processing. Other examples of such problems include general graph and sparse matrix algorithms, branch and bound techniques, adaptive grid methods, and circuit simulations. In such problems, data dependencies are highly irregular and/or dynamically evolving at run time, making it difficult to find a balanced mapping of data and computations onto parallel computers built upon regular communication networks. The results of Gaujal and Greenberg are in part the result of increased DIMACS emphasis on massively parallel computation, as a preliminary to the fifth DIMACS special year, which is on that topic and will be held starting in September 1993. Pre-special year activities include a workshop on the subject "Beyond Regularity in Parallel Computation: Parallel Algorithms for Unstructured and Dynamic Problems," which will investigate inherently parallel irregular problems in greater depth.


Computation of Volume and Random Walks

Computing the volume of a (high-dimensional) convex body is an ancient, basic, but extremely difficult task (going back to Archimedes). There are various negative results showing its difficulty. For example, Bárány and Füredi proved in 1986 that if the convex body is given by a separation oracle (a natural framework which allows the polynomial-time solution of many algorithmic problems), then any algorithm that approximates the volume within a factor of 2 necessarily takes exponential time. A breakthrough in the opposite direction was due to Dyer, Frieze, and Kannan in 1989, who designed a polynomial-time randomized algorithm to approximate the volume of a convex body in n dimensions. This algorithm, though theoretically efficient (polynomial time), was very far from being practical; the dependence on the dimension, proportional to the 27th power, was prohibitive. DIMACS permanent member Laszlo Lovász, working with DIMACS visitor Miklos Simonovits during several of his DIMACS visits, took part in the efforts by various teams to reduce this exponent. They found the first improvement, to the 16th power, and later the currently best exponent of 7. While this is still too large, it now seems to be within reach to find improvements that open up a large area of potential applications in numerical analysis, physics, and statistics.
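
The following naive Monte Carlo sketch (our own illustration) shows both the idea of randomized volume estimation and why random-walk methods are needed: sampling uniformly from a bounding cube works in low dimension, but the acceptance rate, and hence the running time, degrades exponentially as the dimension n grows.

    # Rejection-sampling volume estimate; fails in high dimension, which is
    # what motivates the Markov chain (random walk) methods discussed here.
    import random

    def volume_estimate(inside, n, radius, samples=200000, seed=1):
        """Estimate the volume of {x : inside(x)} within [-radius, radius]^n."""
        rng = random.Random(seed)
        hits = sum(inside([rng.uniform(-radius, radius) for _ in range(n)])
                   for _ in range(samples))
        return (2 * radius) ** n * hits / samples

    in_ball = lambda x: sum(t * t for t in x) <= 1.0   # the unit Euclidean ball
    print(volume_estimate(in_ball, n=5, radius=1.0))   # true value is about 5.264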

A basic tool in the above results is the theory of Markov chains, and a non-negligible byproduct of this research has been a renewed interest in, and an emerging new view of, this classical field of probability theory.

The difference between randomness and determinism remains an important issue in computer science (and in other sciences). Volume computation is a dramatic example where the use of randomization leads to very significant improvements.


Solution of the Borsuk Conjecture

The Borsuk conjecture states that every subset of Euclidean d-space of unit diameter can be covered by d+1 sets each of diameter strictly less than 1. In the book Unsolved Problems in Geometry (by Croft, Falconer, and Guy, Springer, 1991), this conjecture is called "one of the most famous unsolved problems of geometry." A collaboration between a DIMACS permanent member, Jeff Kahn of Rutgers, and a DIMACS visitor, Gil Kalai, recently led to the solution of the Borsuk conjecture. Kahn and Kalai showed that the conjecture is wildly incorrect.


The Number of Linear Extensions of a Partial Order

In a variety of practical decisionmaking problems, one is given data in the form of a partial order and is asked to extend it to a linear order. This occurs, for example, when individual preferences are expressed and a linear order of alternatives is required. It also occurs in activity networks in project planning, where for some activities x and y, we are told that x must be completed before y can be started; and in designing a glossary when we wish to define each word before it is used. The linear extension problem is also fundamental in many problems of sorting in computer science. Knowing the number of linear extensions of a partial order is important in designing algorithms for finding the best linear extension according to some criteria. It is also important in computing the probability that x will be greater than y (preferred to y, or scheduled after y) once the partial order is extended. In the summer of 1990, DIMACS visitor Graham Brightwell and DIMACS permanent member Peter Winkler of Bellcore solved a notorious ten-year-old open problem by showing that the problem of counting the number of linear extensions of a given partially ordered set is #P-complete.
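
The obvious counting procedure (our own illustration) shows what is being claimed: remove any currently minimal element and recurse. Its running time grows explosively with the size of the poset, and the Brightwell-Winkler theorem indicates that this is essentially unavoidable.

    # Brute-force counting of linear extensions of a poset.

    def count_extensions(elements, less_than):
        """less_than: set of (a, b) pairs meaning a < b in the partial order."""
        if not elements:
            return 1
        total = 0
        for x in elements:
            if not any((y, x) in less_than for y in elements if y != x):
                total += count_extensions(elements - {x}, less_than)  # x first
        return total

    elements = frozenset('abcd')
    relations = {('a', 'b'), ('a', 'c')}           # a below b and c; d free
    print(count_extensions(elements, relations))   # 8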


Entropy and Sorting

DIMACS graduate student J.H. Kim of Rutgers and DIMACS permanent member Jeff Kahn of Rutgers have made an important breakthrough on the problem of finding the best way to sort a partially ordered set by comparisons, a problem originally considered by Mike Fredman in the 1970's. The problem of sorting is a fundamental problem in computer science and arises whenever we wish to place a set of items in some natural order on the basis of comparisons. As we noted in describing the previous achievement, it also arises in a variety of decisionmaking problems, in the analysis of activity networks in project planning, and in problems of arranging words in a glossary. Sorting trivially requires at least log(e(P)) comparisons for a partially ordered set P with e(P) linear extensions or possible sorts. Fredman showed in 1976 that log(e(P)) + 2n comparisons suffice, where n = |P|. Kahn and Saks showed in 1984 that it can be done in O(log(e(P))) comparisons, by proving the existence of a "good" comparison, meaning one which more or less splits the set of linear extensions. In both cases, however, finding the desired comparisons seemed quite intractable.

Kahn and Kim have now shown that one can sort a partially ordered set P using O(log(e(P))) comparisons and find the comparisons in deterministic polynomial time. This is a major new development, which makes the results practically useful. The results take a completely new approach to the problem, based on the entropy of the comparability graph and convex minimization via the well-known ellipsoid algorithm.
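
The following brute-force Python sketch (our own illustration; Kahn and Kim find good comparisons via graph entropy and the ellipsoid algorithm, not by enumeration) shows what a "good" comparison looks like on a small poset: among incomparable pairs, it selects one for which the fraction of linear extensions placing x before y is as close to 1/2 as possible.

    # Find the most "balanced" comparison of a small poset by enumeration.
    from itertools import permutations

    def best_comparison(elements, less_than):
        exts = [p for p in permutations(elements)
                if all(p.index(a) < p.index(b) for a, b in less_than)]
        pairs = [(x, y) for x in elements for y in elements
                 if x != y and (x, y) not in less_than and (y, x) not in less_than]
        def balance(pair):
            x, y = pair
            frac = sum(p.index(x) < p.index(y) for p in exts) / len(exts)
            return abs(frac - 0.5)                 # 0 means a perfect split
        return min(pairs, key=balance), len(exts)

    print(best_comparison(('a', 'b', 'c', 'd'), {('a', 'b'), ('a', 'c')}))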


Preemptive vs. Nonpreemptive Scheduling

One of the most important areas of application of discrete mathematics is the field of scheduling. Methods of discrete mathematics are used in scheduling problems in factories, by airlines, and in telecommunications, to give just a few examples. Fundamental work in scheduling theory underlies the methods used in practice. DIMACS permanent members Edward Coffman and Michael Garey from AT&T Bell Labs have settled the 20-year-old "4/3 conjecture" comparing optimal preemptive vs. optimal nonpreemptive schedules on two-processor systems. Here one is given a set of tasks with known processing times to execute on the two processors, subject to given precedence constraints among them. These constraints are of the form "task i must be completed before task j can start." It is desired to schedule the tasks so as to minimize the completion time of the last-finishing task, the so-called "makespan" of the schedule. A schedule is said to be nonpreemptive if each task, once started, is run continuously until its completion, whereas a preemptive schedule allows the running of a task to be temporarily suspended and resumed again at a later time, i.e., run in noncontiguous pieces whose lengths sum up to the total processing time needed for that task. The long-standing conjecture, well known in the field and the subject of at least one published incorrect proof, is that, for any set of tasks and precedence constraints among them, the least possible makespan achievable by a nonpreemptive schedule is no more than 4/3 the least makespan achievable when preemptions are allowed. Hence the potential benefits from allowing preemption, which are to be balanced against the nontrivial practical complexities of keeping track of information about partially completed tasks and their various parameters, are rather limited. The proof of this deceptively simple result is long and extremely intricate, perhaps hinting at why the conjecture remained open for so long.
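
To make the makespan notion concrete, the Python sketch below (our own illustration, using naive list scheduling rather than an optimal scheduler) computes the makespan of a nonpreemptive schedule for tasks with precedence constraints on two processors.

    # List scheduling of precedence-constrained tasks on two processors.
    import heapq

    def makespan(dur, prereqs, machines=2):
        """dur: dict task -> processing time; prereqs: dict task -> set of
        prerequisite tasks (assumed to form a DAG)."""
        done, running, clock = set(), [], 0.0   # running: heap of (finish, task)
        pending = set(dur)
        while pending or running:
            ready = [t for t in pending if prereqs.get(t, set()) <= done]
            while ready and len(running) < machines:
                t = ready.pop()                 # start a ready task now
                pending.remove(t)
                heapq.heappush(running, (clock + dur[t], t))
            clock, t = heapq.heappop(running)   # advance to the next completion
            done.add(t)
        return clock

    dur = {'a': 3, 'b': 2, 'c': 2, 'd': 1}
    prereqs = {'c': {'a'}, 'd': {'a', 'b'}}
    print(makespan(dur, prereqs))               # 5.0 on two processors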


Keller's Cube-Tiling Conjecture

In 1930, O. Keller conjectured that any tiling of d-dimensional space with unit cubes must contain two cubes having a common facet ((d-1)-dimensional face). This generalized a 1907 conjecture of Minkowski from the geometry of numbers, which asserts that the same conclusion holds for all such tilings whose cube centers form a lattice. In 1940, O. Perron proved that Keller's conjecture is true in all dimensions up to 6, and in 1942 G. Hajós proved Minkowski's conjecture for all dimensions. In early 1992, DIMACS permanent members Jeff Lagarias and Peter Shor of AT&T Bell Labs showed that Keller's conjecture is false in all dimensions 10 and above. This result exhibits the great flexibility of movement possible in high-dimensional space, which has also been repeatedly exploited in coding theory. The methods of Lagarias and Shor in fact construct new types of nonlinear codes with interesting properties. Their results will be the subject of a forthcoming feature article in SIAM News.


Digital Time Stamping of Documents

DIMACS permanent member Stuart Haber of Bellcore, together with Scott Stornetta of Bellcore, has won the 1992 Discover Award for Technological Innovation in the Computer Software category for work on digital time stamping. Already, many documents, whether text, audio, picture, or video, are sent in digital form. Unfortunately, documents in digital form are easy to modify, and this raises the difficult question of how to certify when such a document was created or last modified. Currently, there is no reliable way to prove that a digital document has not been altered since its creation. Businesses rely on elaborate procedures in order to ensure the integrity of their paper records, and some records are not kept electronically specifically because in digital form they would not be reliable.

Haber and Stornetta have developed an approach that uses cryptography to attach a time stamp to a digital document which certifies that the document existed prior to the stated time and has not since been modified. They have developed a computationally practical way to time-stamp the data so that it is infeasible for a user to back-date or forward-date a document, even with the collusion of the time-stamping service, i.e., to arrange that once it is time-stamped, a document cannot be altered without invalidating the stamp. Their approach should make it possible for time-stamping to be available at minimal cost per document and should be applicable to almost any business document, including work that is subject to patent or copyright, software, accounting and tax records, official correspondence, and digital photographs. Bellcore has been experimenting with a working model of the software system for digital time stamping and expects it to be available for licensing next year.
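
The core linking idea can be sketched in a few lines of Python (our own simplified illustration; the actual Haber-Stornetta scheme adds signatures and distributed verification). Each certificate incorporates a hash of the previous certificate, so back-dating a document would require recomputing the entire subsequent chain, copies of which are held by other users.

    # Toy hash-linked time-stamping chain.
    import hashlib, time

    def stamp(chain, document_bytes):
        prev = chain[-1]["link"] if chain else "0" * 64
        record = {
            "doc_hash": hashlib.sha256(document_bytes).hexdigest(),
            "time": time.time(),
            "prev": prev,               # binds this record to its predecessor
        }
        record["link"] = hashlib.sha256(
            (record["doc_hash"] + str(record["time"]) + prev).encode()).hexdigest()
        chain.append(record)
        return record

    chain = []
    stamp(chain, b"first contract")
    stamp(chain, b"second contract")
    print(chain[1]["prev"] == chain[0]["link"])   # True: records are linked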


Factoring Large Integers

While factoring small numbers (dividing them into a set of prime numbers whose product is the number) is a skill learned in grade school, factoring very large numbers is very difficult. Many applications in cryptography depend on the difficulty of factoring large integers. The problem is of importance in theoretical computer science, but also to the Defense Department, banks, and anyone who uses electronic mail systems, because all of these users rely on codes whose security rests on the difficulty of factoring large numbers. Any algorithm that makes finding those factors easier is a potential security threat. DIMACS permanent member Arjen Lenstra of Bellcore and Mark Manasse of Digital Equipment Corporation have set several world records for factoring large integers.

Computer scientists have a "hit list" of the ten most-wanted integers. Recently, Lenstra and Manasse were able to factor the first number on this list, the 9th Fermat number 2^(2^9) + 1 = 2^512 + 1, which has 155 decimal digits. This number is special because it is the ninth in a simple series 2^(2^n) + 1 devised by the famous seventeenth-century mathematician Pierre de Fermat, who believed that all numbers in his series were primes. Lenstra and Manasse found the factorization with the help of a powerful new algorithm called the number field sieve. Even with the help of the algorithm, the solution required the assistance of 200 mathematicians and a network of some 1000 powerful workstations around the world (using only idle cycles). This work was the subject of a 1991 article in Discover Magazine.
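
The number field sieve is far beyond a short sketch, but Pollard's rho method, one of the simplest nontrivial factoring algorithms, conveys how factors of a composite number can be found much faster than by trial division (our own illustration, unrelated to the Lenstra-Manasse computation).

    # Pollard's rho factoring method.
    import math

    def pollard_rho(n, c=1):
        x = y = 2
        d = 1
        while d == 1:
            x = (x * x + c) % n            # the "tortoise"
            y = (y * y + c) % n
            y = (y * y + c) % n            # the "hare" moves twice as fast
            d = math.gcd(abs(x - y), n)
        return d if d != n else pollard_rho(n, c + 1)   # retry on failure

    n = 10967535067                        # = 104723 * 104729
    print(pollard_rho(n))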


A Surprising Result about Location Problems

Location problems arise whenever a large set of potential sites for placing certain units is available and a selection must be made of the sites to be utilized. Such problems arise naturally in situations like placing emergency centers, urban services, warehouses, airfields, shopping centers, satellites, watchmen, or communication centers.

Traditionally in the theory of location problems, one is interested in minimizing or maximizing some objective function, and this objective function is assumed a priori. Since there are so many potential objective functions, attention must be paid to how to choose an appropriate function. In 1990, R. Holzman, following in a tradition going back to the 1951 work of Nobel-prize-winning economist Kenneth Arrow in his work on social welfare functions, specified some reasonable conditions that an objective function for solving a location problem should satisfy. He showed that as long as the network in which the facilities were to be located had a tree structure, then the objective function was uniquely determined by these conditions. Later, R. Vohra obtained similar results under other conditions. Networks with tree structures are rather special. DIMACS permanent members Pierre Hansen and Fred Roberts of Rutgers have investigated more general networks and found the startling result that for all connected networks other than trees, there is no objective function satisfying a slight variant of Holzman's conditions. This is an impossibility result in the same spirit as the famous impossibility theorem of Arrow. By omitting one of the assumed conditions, one can restate the result in the following way: Under some reasonable conditions which one would like to impose on the objective function in solving a location problem, the solution can be very sensitive to small changes in locations of the users of the facilities. This has important practical implications for implementing the solutions to real location problems.


Erdös-Lovász Problem Solved

DIMACS permanent member Jeff Kahn from Rutgers has solved a problem listed for many years by Paul Erdös as one of his three favorite combinatorial problems. The problem asks whether there is a linear upper bound on a certain function n(r), defined as the size of the smallest r-uniform, intersecting hypergraph with cover number r. Kahn showed that there is indeed a linear upper bound on this function. The solution is somewhat related to, and motivated by, work over the last ten years by a number of authors on the existence of good packings and covers in hypergraphs, but runs in the opposite direction: those results prove the existence of good packings under certain conditions, whereas the present work, in effect, describes situations where good covers do not exist.


Littlewood Problem Partially Solved

A well-known old problem of Littlewood is to find a polynomial of degree n with coefficients +1 or -1 such that the absolute value of the polynomial on the unit circle always lies between two constant multiples of its L2-norm (the square root of n). More than ten years ago, T. Körner (Cambridge) gave such a polynomial in which the coefficients are allowed to be arbitrary points on the unit circle. A natural idea is to "randomly" round the coefficients of Körner's polynomial to +1 or -1. Unfortunately, this approach gives an extra factor of the square root of log n, even if we relax the condition on the coefficients and allow any finite subset of the unit circle (not just +1 and -1). It was therefore quite unexpected when DIMACS permanent member József Beck of Rutgers showed that a polynomial of the desired kind does exist with coefficients from some finite set: namely, the set of 400th roots of unity.
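
A quick numerical experiment (our own illustration) shows the phenomenon that Beck's theorem overcomes: for random +1/-1 coefficients, the maximum of |p(z)| on the unit circle exceeds the square root of n by a factor that grows like the square root of log n.

    # Sup-norm of a random +-1 polynomial on the unit circle vs. sqrt(n).
    import cmath, math, random

    n = 1024
    coeffs = [random.choice((-1, 1)) for _ in range(n)]
    # sample the unit circle at 4n equally spaced points
    points = [cmath.exp(2j * math.pi * k / (4 * n)) for k in range(4 * n)]

    def p(z):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule
            acc = acc * z + c
        return acc

    sup = max(abs(p(z)) for z in points)
    print(sup / math.sqrt(n))           # typically well above 1; compare sqrt(log n)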

