Co-sponsored by the National Security Agency and Microsoft Research.
Here tools from probability theory (especially concentration inequalities) and the theory of random combinatorial structures, in particular random graphs, play an important role. Problems such as finding Hamilton cycles in graphs or properly coloring them have been shown to be tractable under such assumptions. Simple heuristics have been devised that are asymptotically close to optimal with high probability. Karp's paper on the TSP in the plane was of this type and was fundamental in stimulating interest in the subject. There are also hybrid approaches, such as the analysis of semi-random models, which combine aspects of worst-case and average-case analysis. An illustrative sketch of the first kind of result appears below.
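To make the flavor of such results concrete (this sketch is not part of the announcement, and the helper names are ours), the following Python fragment runs the simple greedy coloring heuristic on a random graph G(n, 1/2). A classical result of Grimmett and McDiarmid shows that on such graphs greedy uses, with high probability, only about twice the minimum number of colors.

    import random

    def random_graph(n, p=0.5, seed=0):
        """Erdos-Renyi G(n, p): each edge present independently with probability p."""
        rng = random.Random(seed)
        adj = [set() for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v)
                    adj[v].add(u)
        return adj

    def greedy_coloring(adj):
        """Scan vertices in order; give each the smallest color unused by its neighbors."""
        color = {}
        for v in range(len(adj)):
            taken = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color

    adj = random_graph(500)
    colors = greedy_coloring(adj)
    print("greedy used", max(colors.values()) + 1, "colors")
    # Whp greedy uses ~ n / log2(n) colors on G(n, 1/2), roughly twice the
    # chromatic number, which is ~ n / (2 log2(n)) with high probability.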
Sometimes there are results showing that the average case can be hard too. Many enumeration-type algorithms have been shown to take exponential time with high probability. Of particular importance here is the result of Chvatal and Szemeredi showing that, on a certain random model of satisfiability, resolution can take exponential time with high probability (see the sketch of the model below). Levin introduced the notion of completeness on average to describe distributional problems that are unlikely to have fast average-case solutions.
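For concreteness (again, this sketch and its helper name are ours, not part of the announcement), here is a generator for the standard random k-SAT distribution: m clauses over n variables, each clause built from k distinct variables with independent random signs. For k = 3 and constant clause density m/n large enough that the formula is unsatisfiable with high probability, the Chvatal-Szemeredi result says every resolution refutation is exponentially long with high probability.

    import random

    def random_ksat(n, m, k=3, seed=0):
        """Sample m clauses over variables 1..n; each clause has k distinct
        variables, each negated with probability 1/2."""
        rng = random.Random(seed)
        clauses = []
        for _ in range(m):
            variables = rng.sample(range(1, n + 1), k)
            clauses.append(tuple(v if rng.random() < 0.5 else -v for v in variables))
        return clauses

    # Density m/n = 6 is above the 3-SAT satisfiability threshold (~4.27),
    # so this formula is unsatisfiable whp; resolution-based solvers such as
    # DPLL then need exponential time on it with high probability.
    formula = random_ksat(n=100, m=600)
    print(formula[:3])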
There are a variety of research goals: to extend average-case analysis to new problems, to explore the effect of distributional assumptions on the behavior of algorithms, and to identify distributions that are especially hard for particular problems.
The workshop aims to bring together researchers interested in all of these approaches.
The following people have already agreed to participate:
Alexander Barvinok, Bela Bollobas, Ed Coffman, John Franco, Martin Furer, Leslie Goldberg, David Johnson, Dick Karp, Jeong Han Kim, Leonid Levin, George Lueker, David Matula, Andrew Odlyzko, Andrzej Rucinski, Greg Sorkin, Joel Spencer, Angelika Steger, Leen Stougie, C. R. Subramanian, Santosh Vempala, Santosh Venkatesh, Ramarathnam Venkatesan, Van Ha Vu, Peter Winkler, Joe Yukich.