### DIMACS Workshop on Competitive Algorithms for Packet Scheduling, Buffering and Routing in the Internet

#### July 6 - 8, 2011 DIMACS Center, CoRE Building, Rutgers University

Organizers:
Alex Lopez-Ortiz, University of Waterloo, alopez-o at uwaterloo.ca
Yishay Mansour, Tel Aviv University
Adi Rosen, CNRS, Paris
Presented under the auspices of the DIMACS Special Focus on Algorithmic Foundations of the Internet.

#### Abstracts:

Matthew Andrews, Bell Labs

Title: Packet scheduling in unstructured OFDM-based wireless networks

Yossi Azar, Tel-Aviv University
Nikhil Bansal, IBM T.J. Watson

Title: Online algorithms for Generalized Caching

The generalized caching problem is an extension of the classic online caching problem, where the pages can have both arbitrary sizes and arbitrary fetch costs. In particular, there is a cache and requests for pages arrive in an online manner. If the requested page is already present in the cache, no cost is incurred, otherwise the page must be fetched into the cache, possibly evicting other pages. The goal is to minimize the total fetch cost.
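As a concrete illustration of the cost model only (not of the primal-dual algorithm from the talk), the sketch below simulates a cache where pages carry arbitrary sizes and fetch costs; the evict-cheapest-to-refetch-first rule is a placeholder heuristic chosen purely for illustration:

```python
# Illustrative cost model for generalized caching. The eviction rule
# (evict pages with the smallest fetch cost first) is an arbitrary
# placeholder, not the primal-dual algorithm described in the talk.

def serve(requests, pages, capacity):
    """requests: list of page ids; pages: id -> (size, fetch_cost)."""
    cache, used, total_cost = set(), 0, 0
    for p in requests:
        if p in cache:
            continue                      # hit: no cost incurred
        size, cost = pages[p]
        total_cost += cost                # miss: pay the fetch cost
        # evict until the new page fits (cheapest-to-refetch first)
        for q in sorted(cache, key=lambda q: pages[q][1]):
            if used + size <= capacity:
                break
            cache.remove(q)
            used -= pages[q][0]
        cache.add(p)
        used += size
    return total_cost
```

For example, with a unit-capacity cache and pages a (size 1, cost 1) and b (size 1, cost 5), the request sequence a, b, a pays 1 + 5 + 1 = 7.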

In this talk, I will describe a randomized poly-logarithmic competitive algorithm for the problem based on the recent primal dual method for online algorithms. Prior to our work, similar results were known only for special cases of the problem. The talk will not assume any prior knowledge of the primal dual method.

Joint work with Niv Buchbinder and Seffi Naor, preliminary version in STOC 2008.

Marcin Bienkowski, University of Wroclaw, Poland

Title: An Optimal Lower Bound for Buffer Management in Multi-Queue Switches

In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet-switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to the input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, the excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e-1) ≈ 1.582 on the competitive ratio for throughput maximization, which holds even for randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule.
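A minimal sketch of the model may help: m input buffers of capacity B, unit-size unit-value packets, and one transmission per time step. The longest-queue-first rule below is a simple deterministic baseline, not the Random Schedule algorithm whose bound is matched:

```python
# Toy model of the multi-queue switch: each step, some packets arrive at
# the input buffers (overflow is lost), then the policy transmits one
# packet from a buffer of its choice. Longest-queue-first is a baseline
# policy for illustration, not Random Schedule.

def throughput(arrivals, m, B):
    """arrivals: per-step lists of buffer indices receiving a packet."""
    queues = [0] * m
    sent = 0
    for step in arrivals:
        for i in step:
            if queues[i] < B:
                queues[i] += 1            # buffered
            # else: the packet is lost to overflow
        j = max(range(m), key=lambda i: queues[i])
        if queues[j] > 0:                 # transmit from a longest queue
            queues[j] -= 1
            sent += 1
    return sent + sum(queues)             # buffered packets drain afterwards
```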

Matthias Englert, University of Warwick

Title: Oblivious Routing in the L_p-norm

Gupta et al. introduced a very general multi-commodity flow problem in which the cost of a given flow solution on a graph G=(V,E) is calculated by first computing the link loads via a load function \ell, which describes the load of a link as a function of the flow traversing it, and then aggregating the individual link loads into a single number via an aggregation function \agg.

We show the existence of an oblivious routing scheme with competitive ratio O(log n) and a lower bound of \Omega(log n/loglog n) for this model when the aggregation function \agg is an L_p-norm.

Our results can also be viewed as a generalization of the work on approximating metrics by a distribution over dominating tree metrics (see e.g. [Bar96,Bar98,FRT03]) and the work on minimum congestion oblivious routing [Rae02,HHR03,Rae08]. We provide a convex combination of trees such that routing according to the tree distribution approximately minimizes the L_p-norm of the link loads. The embedding techniques of Bartal [Bar96,Bar98] and Fakcharoenphol et al. [FRT03] can be viewed as solving this problem in the L_1-norm while the result of Räcke [Rae08] solves it for L_\infty. We give a single proof that shows the existence of a good tree-based oblivious routing for any L_p-norm.

(joint work with Harald Raecke)
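As a small illustration of this cost model (with hypothetical paths and demands, and no claim about the quality of the routing), one can route each demand along a fixed path and aggregate the resulting link loads in the L_p-norm:

```python
# Cost model from the abstract: route each demand along a fixed
# (oblivious) path, compute per-link loads, aggregate with an L_p norm.
# The paths and demands here are made-up examples.

def lp_cost(paths, demands, p):
    """paths: demand id -> list of links; demands: id -> flow amount."""
    load = {}
    for d, path in paths.items():
        for link in path:
            load[link] = load.get(link, 0.0) + demands[d]
    return sum(l ** p for l in load.values()) ** (1.0 / p)
```

For p = 1 this is the total load over all links, and as p grows it approaches the maximum congestion, the L_infinity case handled by Räcke.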

Yuval Emek, ETH

Title: Space-Constrained Interval Selection

In this talk we discuss streaming algorithms for the interval selection problem: finding a maximum-cardinality subset of disjoint intervals on the line. We present a deterministic 2-approximation streaming algorithm for this problem. For the special case of proper intervals, the approximation ratio improves to 3/2. Our algorithms have optimal space complexity, i.e., they use space linear in the size of their output. On the negative side, we show that achieving an approximation ratio of 2 - \epsilon (or 3/2 - \epsilon for proper intervals) for any \epsilon > 0 is essentially impossible in a streaming setting: it requires space linear in the size of the input. In passing, we also answer an open question of Adler and Azar (2003) regarding the space complexity of constant-competitive randomized preemptive online algorithms for the same problem. Together, our study provides a fundamental understanding of the interval selection problem in space-constrained settings.

Based on joint work with Magnus M. Halldorsson and Adi Rosen.

Guy Even, Tel Aviv University

Title: Online Packet-Routing in Grids with Bounded Buffers

We present the first online algorithm with a polylogarithmic competitive ratio for the problem of online routing of packets in unidirectional grids. The goal is to maximize the throughput, i.e., the number of delivered packets. Our algorithm is deterministic, centralized, handles packets with deadlines, allows bounded buffers, uses adaptive routing, and may drop packets before they reach their destination.

Nick Feamster, Georgia Tech

Title: The BISMark Project: Broadband Measurements from the Network Gateway

Abstract: TBA

Lisa K. Fleischer, Dartmouth College

Title: Competitive Strategies for Routing Flow Over Time

The study of routing games is motivated by the desire to understand the impact of individual users' decisions on network efficiency. To do this, prior work uses a simplified model of network flow in which all flow exists simultaneously, and users route flow to minimize either their maximum delay or their total delay. Both measures are surrogates for how long it takes to get all of one's traffic through the network over time.

Instead of using these surrogates, we attempt a more direct study of how competition among users affects network efficiency by examining routing games in a flow-over-time model. We show that the network owner can reduce available capacity so that the competitive equilibrium in the reduced network is no worse than a small constant times the optimal solution in the original network, under two natural measures of optimum: the time by which all flow reaches the destination, and the average amount of time it takes flow to reach the destination.

This is joint work with Umang Bhaskar (Dartmouth) and Elliot Anshelevich (RPI).

Michael D. Grigoriadis, Rutgers University

Title: FPTAS for global packet routing and block-structured models

The global packet routing algorithms developed for the early ARPANET, based on minimizing total queuing delay or maximum link utilization, were a significant application of the basic convex programming techniques of the 1950s and 1960s. As a variant of the Frank-Wolfe gradient descent paradigm, the flow deviation method developed for this purpose nearly 40 years ago by Fratta, Gerla and Kleinrock has also had a lasting influence on the more recent development of fully polynomial-time approximation schemes (FPTAS) for large-scale fractional multicommodity flows and other block-structured nonlinear optimization models. Conversely, analyses of such FPTASs point to minor refinements that turn the original flow deviation technique into a practicable FPTAS. The simplicity of the resulting algorithms can serve as a baseline methodology for a competitive analysis of alternative online algorithms, especially by experimental means. This talk examines the design and analysis of such algorithms.

Alex Kesselman, Google

Title: Competitive Buffer Management for Shared-Memory Switches

We consider buffer management policies for shared-memory switches, where overloads result in packet loss due to the limited shared memory capacity. The goal of the buffer management policy is to maximize the total number (or value) of packets transmitted. The problem is online in nature, so competitive analysis is a natural framework in which to analyze the performance of such policies. Remarkably, competitive analysis has helped to differentiate between well-known policies as well as to derive new ones. In this talk, we survey the state-of-the-art results and present open problems.

Boaz Patt-Shamir, Tel Aviv University

Title: Competitive Analysis of Buffer Overflows

Buffers are the fundamental glue of communication: they allow one to put together different modules (such as processors and links) while requiring only minimal synchronization. One of their main features is the ability to mask temporary peaks in traffic rate, but this ability is limited: when "too much" traffic arrives, overflow occurs. Overflows are notoriously hard to analyze, and therefore, until 2000, most research concentrated on how to avoid them. Using the competitive analysis viewpoint, however, many interesting results have been obtained on how to minimize the damage of overflows. In this talk I will survey some of the early results in the area (regarding video smoothing), and some very recent ones.

Harald Raecke, University of Warwick

Title: An Almost Tight Bound for Reordering Buffer Management

In the reordering buffer problem, we are given an input sequence of requests for service, each of which corresponds to a point in a metric space. The cost of serving the requests depends heavily on the processing order: serving a request induces a cost corresponding to the distance between it and the previously served request, measured in the underlying metric space. A reordering buffer with storage capacity k can be used to reorder the input sequence in a restricted fashion so as to construct an output sequence with lower service cost. This simple and universal framework is useful for many applications in computer science and economics, e.g., disk scheduling, rendering in computer graphics, or paint shops in car plants.

We present the first non-constant lower bound on the competitive ratio of deterministic online algorithms for the reordering buffer problem in a uniform metric. Precisely, we show that any deterministic online algorithm for the problem has a competitive ratio of Omega(sqrt(log k / log log k)). We complement this result with a deterministic online algorithm that obtains a competitive ratio of O(sqrt(log k)), improving upon an algorithm by Avigdor-Elgrabli and Rabani [SODA 2010] that obtained a competitive ratio of O(log k / log log k).

This is joint work with Anna Adamaszek, Artur Czumaj, and Matthias Englert.
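The uniform-metric setting can be sketched in a few lines: serving a request costs 1 exactly when its "color" differs from the previously served one, and a buffer of capacity k may hold pending requests for reordering. The stay-with-the-current-color policy below is a simple heuristic for intuition, not the competitive algorithm from the talk:

```python
from collections import Counter

# Toy simulation of the reordering buffer problem in a uniform metric:
# a color change costs 1. The policy (keep serving the current color
# while the buffer holds it, otherwise switch to the most frequent
# buffered color, ties broken by name) is an illustrative heuristic.

def service_cost(sequence, k):
    buf, cur, cost = Counter(), None, 0
    for item in sequence:                 # fill, then emit one per arrival
        buf[item] += 1
        if sum(buf.values()) >= k:
            cur, cost = _emit(buf, cur, cost)
    while buf:                            # drain once the input ends
        cur, cost = _emit(buf, cur, cost)
    return cost

def _emit(buf, cur, cost):
    if cur is None or buf[cur] == 0:      # must switch colors: pay 1
        cur = max(buf, key=lambda c: (buf[c], c))
        cost += 1
    buf[cur] -= 1
    if buf[cur] == 0:
        del buf[cur]
    return cur, cost
```

On the sequence a, a, b, a, b, b a buffer of capacity 3 lets this policy pay 2 color changes, while with capacity 1 it must pay all 4.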

Dror Rawitz, Tel Aviv University

Title: Online Set Packing and Competitive Scheduling of Multi-Part Tasks

In online set packing (OSP), elements arrive online, announcing which sets they belong to, and the algorithm needs to assign each element, upon arrival, to one of its sets. The goal is to maximize the number of sets that are assigned all their elements: a set that misses even a single element is deemed worthless. This is a natural online optimization problem that abstracts allocation of scarce compound resources, e.g., multi-packet data frames in communication networks. We present a randomized competitive online algorithm for the weighted case with general capacity (namely, where sets may have different values, and elements arrive with different multiplicities). We prove a matching lower bound on the competitive ratio for any randomized online algorithm. Our bounds are expressed in terms of the maximum set size and the maximum number of sets an element belongs to. We also present refined bounds that depend on the uniformity of these parameters.

Joint work with Emek, Halldorsson, Mansour, Patt-Shamir and Radhakrishnan.
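A toy version of the unweighted, unit-capacity setting may help fix the model; assigning each element to a (seeded) random containing set is a naive baseline, not the competitive algorithm from the talk:

```python
import random

# Toy online set packing: elements arrive one by one, each announcing
# which sets it belongs to, and must be irrevocably assigned to one of
# them. A set counts only if it receives every one of its elements.
# Uniform random assignment is a naive illustrative baseline.

def online_set_packing(arrivals, sets, rng=None):
    """arrivals: list of (element, [ids of sets containing it]);
    sets: id -> frozenset of that set's elements."""
    rng = rng or random.Random(0)
    assigned = {}                         # element -> the set it went to
    for elem, owners in arrivals:
        assigned[elem] = rng.choice(owners)   # irrevocable online decision
    # a set is won only if every one of its elements was assigned to it
    return [s for s, elems in sets.items()
            if all(assigned.get(e) == s for e in elems)]
```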

Gabriel Scalosub, Ben Gurion University

Title: Providing Performance Guarantees in Multipass Network Processors

Current network processors (NPs) increasingly deal with packets with heterogeneous processing times. As a consequence, packets that require many processing cycles can significantly delay low-latency traffic, because the common approach in today's NPs is to employ run-to-completion processing. These difficulties have led to the emergence of the Multipass NP architecture, where after a processing cycle ends, all processed packets are recycled into the buffer and re-compete for processing resources.

In this work we provide a model that captures many of the characteristics of this architecture, and consider several scheduling and buffer management algorithms that are specially designed to optimize the performance of multipass network processors. In particular, we provide analytical guarantees for the throughput performance of our algorithms. We further conduct a comprehensive simulation study that validates our results.

This is joint work with Isaac Keslassy, Kirill Kogan, and Michael Segal.
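The recycling dynamic can be sketched as follows; the FIFO order and the admit-if-room buffer rule are arbitrary baselines, not the scheduling and buffer management algorithms analyzed in the work:

```python
from collections import deque

# Toy multipass NP model: each packet needs several processing passes.
# Each cycle, one packet arrives, the scheduler processes up to `width`
# packets for one pass each, and unfinished packets are recycled into
# the buffer (capacity B) to re-compete for processing.

def completed(packets, width, B):
    """packets: required pass counts, one packet arriving per cycle."""
    arrivals, buf, done = deque(packets), deque(), 0
    while arrivals or buf:
        if arrivals:
            p = arrivals.popleft()
            if len(buf) < B:
                buf.append(p)             # admit if the buffer has room
            # else: the packet is dropped
        for _ in range(min(width, len(buf))):
            p = buf.popleft() - 1         # one processing pass
            if p == 0:
                done += 1                 # fully processed, departs
            else:
                buf.append(p)             # recycled for another pass
    return done
```

Even this toy model shows the core difficulty: a heavy packet recycling through a small buffer can crowd out later arrivals entirely.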

Lisa Zhang, Bell Laboratories

Title: Energy-aware Scheduling Protocols for Network Stability

A key problem in the control of packet-switched data networks is to schedule the data so that the queue sizes remain bounded over time. Protocols have been developed in a number of different models that ensure network stability as long as no queue is inherently overloaded. However, this literature typically assumes that each server runs at a fixed maximum speed. Although this is optimal for clearing queue backlogs as fast as possible, it may be suboptimal in terms of energy consumption. Indeed, a lightly loaded server could operate at a lower rate, at least temporarily, to save energy.

Within an energy-aware framework, a natural question arises: what is the minimum energy required to keep the network stable? In this work, we demonstrate the following results towards answering that question.

Starting with the simplest case of a single server in isolation, we consider three types of rate adaptation policies: a heuristic policy, which sets server speed depending on queue size only, and two more complex ones that exhibit a tradeoff between queue size and energy usage. In a network environment we propose a combination of the above rate adaptation policies with the standard Farthest-to-Go scheduling protocol. This approach provides stability in the network setting, while using an amount of energy that is within a bounded factor of the optimum. We also examine an analogue of the well-known Weighted Fair Queueing scheduling policy and show how delay bounds are affected under rate adaptation.
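A minimal discrete-time sketch of the first kind of policy, with a hypothetical queue-proportional speed rule and the standard power model P = s^alpha (the concrete rule and constants are illustrative assumptions, not those from the work):

```python
# Toy single-server speed-scaling simulation: each step `a` units of
# work arrive, the server picks a speed from its current queue size
# alone, serves that much work, and pays power speed**alpha. The
# queue-proportional rule is a hypothetical stand-in for the
# queue-size-only heuristic mentioned in the abstract.

def simulate(arrivals, alpha=3.0, gain=0.5):
    queue, energy = 0.0, 0.0
    for a in arrivals:
        queue += a
        speed = gain * queue              # rate depends on queue size only
        queue -= min(speed, queue)        # serve work at the chosen speed
        energy += speed ** alpha          # standard power model P = s^alpha
    return queue, energy
```

Raising `gain` clears the queue faster at a superlinear energy cost, which is exactly the queue-size versus energy tradeoff the more complex policies navigate.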

This is joint work with Matthew Andrews and Spyros Antonakopoulos.

Document last modified on June 20, 2011.