Princeton University Dept. of Computer Science

**Organizers:**

- **Adam L. Buchsbaum**, AT&T Labs, alb at research.att.com
- **Valerie King**, University of Victoria, val at uvic.ca
- **Daniel Sleator**, Carnegie Mellon University, sleator at cs.cmu.edu

We are indebted to the following sponsors for vital financial support:

- Princeton University Dept. of Computer Science
- DIMACS
- HP Labs
- Tom Leighton, MIT and Akamai Technologies
- Bruce Maggs, CMU and Akamai Technologies
- Octoshape

Title: Life in the Fast Lane: Ackermann and Its Cousins

Most algorithms on this side of the Milky Way can be analyzed using standard mathematical functions. Then Bob Tarjan came along: he was the first to pin down the extraordinarily exotic behavior of an extraordinarily simple data structure. Putting the spotlight on this shiny star in the algorithmic firmament revealed a whole galaxy of even stranger objects. This talk offers a quick glimpse through the telescope.
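The "exotic behavior" alluded to is governed by the inverse of the Ackermann function, which grows so fast that its inverse is effectively constant. A standard textbook definition of the two-argument (Péter) Ackermann function, not taken from the talk itself, can be sketched as:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Two-argument Ackermann function; grows faster than any
    primitive recursive function. Memoized to tame the recursion."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even tiny inputs explode: `ackermann(2, 3)` is 9, `ackermann(3, 3)` is already 61, and `ackermann(4, 2)` has 19,729 decimal digits.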

Title: Fast Converging Tatonnement Algorithms for One Time and Ongoing Markets

Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, we formalize the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets.

For both market settings, we define and analyze variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects:

- The price update for a good depends only on the price, demand, and supply for that good, and on no other information.
- The price update for each good occurs distributively and asynchronously.
- The algorithms work (and the analyses hold) from an arbitrary starting point.

Our algorithm introduces a new and natural update rule. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property.
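One natural form such an update can take (an illustration under assumed parameters, not necessarily the paper's exact rule) is a multiplicative adjustment driven only by the good's own capped relative excess demand:

```python
def tatonnement_step(price, demand, supply, lam=0.1):
    """One price update using only this good's own price, demand, and supply:
    raise the price in proportion to (capped) relative excess demand,
    lower it when supply exceeds demand."""
    excess = (demand - supply) / supply
    return price * (1.0 + lam * min(excess, 1.0))

def run_market(demand_fn, supply, p0, steps=300):
    """Iterate the update for a single good and return the final price."""
    p = p0
    for _ in range(steps):
        p = tatonnement_step(p, demand_fn(p), supply)
    return p
```

For a unit-elastic demand d(p) = 10/p against supply 5, the iteration contracts geometrically to the equilibrium price p = 2 from any positive starting point, consistent with the "arbitrary starting point" property above.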

Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking:

- A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price.
- A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth.
- The closeness of the market to a Fisher market (a market with buyers starting with money alone).

(This is joint work with Lisa Fleischer.)

Title: The Binary Blocking Flow Algorithm for the Maximum Flow Problem

The maximum flow problem is a classical combinatorial optimization problem with numerous applications. Efficient algorithms for this problem have been studied for over half a century. For a long time, O(nm) has been the goal of algorithm designers, where n and m are the number of input vertices and arcs, respectively. This is a natural target, as a maximum flow decomposition can have Omega(nm) arcs. Algorithms achieving this bound for most graph densities, and coming within a factor of log(n) or less for the remaining ones, have been developed.

However, for the unit capacity case, the problem can be solved in O(min(n^(2/3), m^(1/2))*m) time. The binary blocking flow algorithm extends this result to the case of integral capacities in the range [1...U] and achieves the bound of O(min(n^(2/3), m^(1/2))*m*log(n^2/m)*log(U)). This bound is better than O(nm) unless U is huge. Whereas the previous algorithms treated all residual arcs equally, assigning them length one, our algorithm distinguishes between small and large arcs, assigning length one to the former and length zero to the latter.
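To make the small/large distinction concrete (the threshold here is a placeholder; the actual cutoff is a constant multiple of the algorithm's current flow estimate), one can assign binary lengths and then compute distance labels with 0-1 BFS, which handles {0,1} arc lengths in linear time:

```python
from collections import deque

def binary_lengths(residual_cap, delta):
    """Length 0 for 'large' residual arcs, length 1 for 'small' ones.
    (Illustrative threshold; see the Goldberg-Rao paper for the exact rule.)"""
    return {a: (0 if c >= delta else 1) for a, c in residual_cap.items()}

def distances(n, arcs, length, s):
    """0-1 BFS: deque-based shortest paths under {0,1} arc lengths, O(n + m).
    Length-0 arcs go to the front of the deque, length-1 arcs to the back."""
    dist = [float('inf')] * n
    dist[s] = 0
    dq = deque([s])
    while dq:
        u = dq.popleft()
        for v in arcs.get(u, []):
            w = length[(u, v)]
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                (dq.appendleft if w == 0 else dq.append)(v)
    return dist
```

Because large arcs contribute nothing to the distance labels, paths through high-capacity residual arcs look "short," which is what lets the algorithm push more flow per blocking-flow phase.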

In this talk we describe the binary blocking flow algorithm and its analysis.

This result is closely related to Bob Tarjan's work. Bob is a co-inventor of the unit-capacity flow algorithm mentioned above. He is also a co-inventor of the blocking flow algorithm and the dynamic tree data structure we use. Bob has taught the algorithm and simplified part of it.

Title: Challenges in Web Information Retrieval

In this talk we will give an overview of the architecture of web search engines, and then describe a few open algorithmic problems that arise in this setting.

Title: The Analysis of Self-Adjusting Data Structures: Results, Conjectures, and Alternatives

While over 20 years have passed since Tarjan and his collaborators introduced the world to splay trees, pairing heaps, and skew heaps, there remains a huge gap between how we think these structures perform and how we know they perform. In this talk we will try to present an overview of 20 years of fact, conjecture, transformations, counterexamples and alternatives.

Title: Stabbing a Dynamic Set of Intervals

We describe optimal dynamic data structures for maintaining segments on the line, each with an associated priority, such that we can efficiently find the interval of minimum priority containing a query point. For the case where each pair of intervals is either disjoint or nested (one contains the other), we give a particularly simple data structure. This special case is related to the problem of finding the longest prefix of a query string in a preprocessed collection of strings.
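The query the data structures support can be stated as a one-liner; this brute-force baseline (the linear scan the optimal structures improve on, not the talk's structure itself) makes it precise:

```python
def stab(intervals, q):
    """Return the minimum-priority interval (lo, hi, priority) containing
    query point q, or None. Brute force: O(n) per query; the talk's
    structures answer the same query in polylogarithmic time."""
    best = None
    for lo, hi, pr in intervals:
        if lo <= q <= hi and (best is None or pr < best[2]):
            best = (lo, hi, pr)
    return best
```

In the nested case, intervals containing q form a chain under containment, which is what makes the simpler structure possible.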

Based on joint work with P. Agarwal, L. Arge, M. Hershkoviz, E. Molad, K. Yi, and Bob Tarjan.

Title: Competitive Auctions

Bob Tarjan's seminal work on competitive analysis of online algorithms has been the inspiration for research in a broad array of areas within computer science. In this talk, we survey one such area: competitive auction design and analysis.

Title: Algorithms for Some Network Optimization Problems in Planar Graphs

Tarjan's Ph.D. research with Hopcroft, culminating in a linear-time planarity-testing algorithm, played a fundamental and vital role in the development of efficient graph algorithms. His work with Lipton on planar separators has also been highly influential. Planar graphs continue to be a useful testbed for algorithmic techniques. In this talk, we discuss some graph algorithms that achieve better performance (running time or output quality) by exploiting planarity. We address max-flow, shortest paths, traveling salesman, and Steiner tree.

Some of this work is joint with Glencora Borradaile and with Claire Mathieu.

Title: How to Cut a Birthday Cake and other Planar Separating Tales

This talk will discuss the "separation" of birthday cakes that contain many candles. Luckily such cakes can be modeled as planar graphs, and the Separator Theorem can be applied to divide them. I will discuss the history of this theorem that Bob and I proved last century. I will also point out several variations on this theme, some old and some new. Finally, as a birthday gift to Bob, I will show how a new kind of separator theorem could lead to a more humanly understandable proof of the Four Color Theorem.

Title: Non-linearity in Davenport-Schinzel Sequences

A generalized Davenport-Schinzel sequence is one over a finite alphabet that contains no subsequences isomorphic to a fixed forbidden subsequence. Besides their applications in combinatorial geometry and the analysis of data structures (such as splay trees), Davenport-Schinzel sequences have given us a glimpse into one of Tarjan's Lilliputian fantasies: a mathematical world populated solely by industrious little inverse-Ackermanns, compressing their paths while strolling slowly to infinity.

This talk will be a tour through the non-linear land of forbidden subsequences.
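To make "forbidden subsequence" concrete: the classical order-3 Davenport-Schinzel bound concerns sequences avoiding the alternation ababa, and such sequences over n symbols have length Theta(n alpha(n)), where alpha is the inverse Ackermann function. A brute-force checker for alternation patterns (illustrative only) might look like:

```python
def has_alternation(seq, k):
    """Does seq contain a, b, a, b, ... (length k, a != b) as a subsequence?
    Brute force over symbol pairs with a greedy subsequence match."""
    symbols = set(seq)
    for a in symbols:
        for b in symbols:
            if a == b:
                continue
            pattern = [a if i % 2 == 0 else b for i in range(k)]
            it = iter(seq)
            # `sym in it` consumes the iterator, so this greedily
            # matches `pattern` as a subsequence of `seq`
            if all(sym in it for sym in pattern):
                return True
    return False
```

For example, "abcba" contains the alternation aba but not ababa, so it is a legal Davenport-Schinzel sequence of order 3.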

Title: Fun with Path Compression

I will discuss some refinements and generalizations of the recent top-down approach to analyzing the complexity of path compression.
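The operation under analysis is the classic one: a FIND that repoints every node on the search path directly to the root. A standard two-pass implementation (the operation itself, not the talk's top-down analysis) looks like:

```python
def find(parent, x):
    """FIND with path compression: first pass walks to the root,
    second pass repoints every node on the path to the root."""
    root = x
    while parent[root] != root:
        root = parent[root]
    while parent[x] != root:
        parent[x], x = root, parent[x]
    return root
```

Each compression flattens the tree, which is why sequences of FINDs cost only inverse-Ackermann amortized time.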

Title: Union-Find with Constant Time Deletions

A union-find data structure maintains a collection of disjoint sets under the operations MAKESET, UNION, and FIND. Kaplan, Shafrir, and Tarjan [SODA 2002] designed data structures for an extension of the union-find problem in which elements of the sets maintained may be deleted. The cost of a DELETE operation in their implementations is essentially the same as the cost of a FIND operation. They left open the question of whether DELETE operations can be implemented more efficiently than FIND operations. We resolve this open problem by presenting a relatively simple modification of the classical union-find data structure that supports DELETE, as well as MAKESET and UNION operations, in constant time, while still supporting FIND operations in O(log n) worst-case time and O(alpha_{(M+N)/N}(n)) amortized time. Here n is the number of elements in the set returned by the FIND operation, N is the total number of MAKESET operations performed, M is the total number of FIND operations performed, and alpha_i(j) is a functional inverse of Ackermann's function that decreases as the index i grows.

Our analysis supplies, in particular, a very concise potential-based amortized analysis of the standard union-find data structure that yields an O(alpha_{(M+N)/N}(n)) amortized bound on the cost of FIND operations. All previous potential-based analyses yielded the weaker amortized bound of O(alpha_{(M+N)/N}(N)). Furthermore, our tighter analysis extends to one-pass variants of the path compression technique, such as path splitting.


Document last modified on March 28, 2008.