DIMACS Workshop on Probabilistic Analysis of Algorithms for Hard Problems

November 1 - 3, 1999
Fiber Optic Materials Research Building, Rutgers University, Piscataway, NJ

Organizers:
Martin Dyer, University of Leeds, dyer@scs.leeds.ac.uk
Alan Frieze, Carnegie Mellon University, alan@random.math.cmu.edu
Presented under the auspices of the Special Year on Computational Intractability.

Co-sponsored by the National Security Agency and Microsoft Research.



The analysis of algorithms for NP-hard problems under distributional assumptions on the input is an important approach to dealing with hardness issues. As Karp remarked in his Turing Award lecture, "I felt, however, that in the case of NP-complete problems we weren't going to get the worst-case guarantees we wanted, and that the probabilistic approach was the best way and perhaps the only way to understand why heuristic combinatorial algorithms work so well in practice." Random models are also important as benchmarks for empirical testing of algorithms.

Here tools from probability theory (especially concentration inequalities) and the theory of random combinatorial structures, especially random graphs, play an important role. Problems such as finding Hamilton cycles in graphs or properly coloring them have been shown to be tractable under such assumptions. Simple heuristics have been devised that are asymptotically close to optimal with high probability. Karp's paper on the TSP in the plane was of this type and was fundamental in stimulating interest in the subject. There are also hybrid approaches, such as the analysis of semi-random models, which combine aspects of worst-case and average-case analysis.
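
As a small illustration of the kind of heuristic analyzed in this setting, the sketch below (in Python, with illustrative helper names; it is not taken from any of the papers mentioned above) applies the first-fit greedy coloring rule to an Erdos-Renyi random graph G(n, p). Classical results show that on G(n, 1/2) this rule uses roughly twice the chromatic number of colors with high probability, which is the flavor of guarantee discussed above.

    import random

    def random_graph(n, p, seed=0):
        """Sample an Erdos-Renyi random graph G(n, p) as a list of adjacency sets."""
        rng = random.Random(seed)
        adj = [set() for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v)
                    adj[v].add(u)
        return adj

    def greedy_coloring(adj):
        """First-fit greedy: give each vertex the smallest color unused by its neighbors."""
        color = {}
        for v in range(len(adj)):
            taken = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color

    if __name__ == "__main__":
        n, p = 1000, 0.5
        adj = random_graph(n, p)
        colors_used = 1 + max(greedy_coloring(adj).values())
        # On G(n, 1/2), greedy uses about n / log2(n) colors with high probability,
        # roughly twice the chromatic number.
        print(f"greedy used {colors_used} colors on G({n}, {p})")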

Sometimes there are results showing that the average case can be hard too. Many enumerative algorithms have been shown to take exponential time with high probability. Of particular importance here is the result of Chvatal and Szemeredi showing that, for a certain random model of satisfiability, resolution can take exponential time with high probability. Levin introduced the notion of completeness on average to describe distributional problems that are unlikely to have a fast average-case solution.
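
To make the satisfiability example concrete, here is a minimal sketch (Python, standard library only; the function names random_3sat and dpll are illustrative and not drawn from Chvatal and Szemeredi's paper) that samples a random 3-CNF formula and runs plain DPLL backtracking on it. On an unsatisfiable formula, the tree such a solver explores corresponds to a tree-like resolution refutation, which is why resolution lower bounds of this kind imply exponential running time for the solver with high probability.

    import random

    def random_3sat(n_vars, n_clauses, seed=0):
        """Sample a random 3-CNF: each clause picks 3 distinct variables, signs at random."""
        rng = random.Random(seed)
        formula = []
        for _ in range(n_clauses):
            vs = rng.sample(range(1, n_vars + 1), 3)
            formula.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
        return formula

    def dpll(formula, assignment=None):
        """Plain DPLL backtracking; returns a satisfying assignment or None."""
        if assignment is None:
            assignment = {}
        # Simplify: drop satisfied clauses, remove falsified literals.
        simplified = []
        for clause in formula:
            lits, sat = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)
                elif (lit > 0) == val:
                    sat = True
                    break
            if sat:
                continue
            if not lits:
                return None          # empty clause: conflict
            simplified.append(tuple(lits))
        if not simplified:
            return dict(assignment)  # all clauses satisfied
        # Branch on a variable from a shortest open clause (mimics unit propagation),
        # trying the satisfying value first.
        shortest = min(simplified, key=len)
        var, first = abs(shortest[0]), shortest[0] > 0
        for val in (first, not first):
            assignment[var] = val
            result = dpll(simplified, assignment)
            if result is not None:
                return result
            del assignment[var]
        return None

    if __name__ == "__main__":
        # A clause/variable ratio near 4.26 is empirically the hardest region for random 3-SAT.
        n = 50
        formula = random_3sat(n, int(4.26 * n))
        print("satisfiable" if dpll(formula) else "unsatisfiable")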

There are a variety of research goals: to extend average-case analysis to new problems, to explore the effect of distributional assumptions on the behavior of algorithms, and to identify distributions that are especially hard for particular problems.

The workshop aims to bring together researchers interested in all of these approaches.

The following people have already agreed to participate:

Alexander Barvinok, Bela Bollobas, Ed Coffman, John Franco, Martin Furer, Leslie Goldberg, David Johnson, Dick Karp, Jeong Han Kim, Leonid Levin, George Lueker, David Matula, Andrew Odlyzko, Andrzej Rucinski, Greg Sorkin, Joel Spencer, Angelika Steger, Leen Stougie, C. R. Subramanian, Santosh Vempala, Santosh Venkatesh, Ramarathnam Venkatesan, Van Ha Vu, Peter Winkler, Joe Yukich.

