### DIMACS Workshop on Computational Issues in Game Theory and Mechanism Design

#### October 31 - November 2, 2001, DIMACS Center, Rutgers University, Piscataway, NJ

Organizers:
Vijay Vazirani, Georgia Tech, vazirani@cc.gatech.edu
Noam Nisan, Hebrew University, Jerusalem, Israel, noam@cs.huji.ac.il
Presented under the auspices of Next Generation Networks Technologies and Applications and Social Science Methods and Information Technology.

#### Abstracts:


1.

Daniel Lehmann, Hebrew University
Title: Combinatorial Auctions with Decreasing Marginal Utilities (joint work
with Benny Lehmann and Noam Nisan)

Abstract:
In most of microeconomic theory, consumers are assumed to exhibit decreasing
marginal utilities. We consider combinatorial auctions among such buyers.
The valuations of such buyers are placed within a hierarchy of valuations
that exhibit no complementarities, a hierarchy that also includes OR and XOR
combinations of singleton valuations, and valuations satisfying the gross
substitutes property. While we show that the allocation problem among
valuations with decreasing marginal utilities is NP-hard, we present an
efficient greedy $2$-approximation algorithm for this case. No such
approximation algorithm exists in a setting allowing for complementarities.
We also present some results about the strategic aspects of combinatorial
auctions among players with decreasing marginal utilities, as well as
algorithmic results for the more general case of players with limited
complementarities.
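
As a rough illustration of the kind of greedy allocation at work here, the following sketch assigns items one at a time to the bidder with the largest marginal value, given value-oracle access to the valuations; the item order, the oracle interface, and the toy valuations are illustrative assumptions, not details from the talk.

```python
# Minimal sketch (assumed interface): greedy allocation for bidders whose
# valuations exhibit decreasing marginal utilities, given as value oracles.
def greedy_allocation(items, valuations):
    """items: list of item ids; valuations: list of set-functions v(frozenset) -> float."""
    bundles = [set() for _ in valuations]
    for item in items:                      # fixed, arbitrary item order
        # marginal value of adding `item` to each bidder's current bundle
        gains = [v(frozenset(b | {item})) - v(frozenset(b))
                 for v, b in zip(valuations, bundles)]
        winner = max(range(len(valuations)), key=lambda i: gains[i])
        if gains[winner] > 0:
            bundles[winner].add(item)
    return bundles

# toy example: two bidders with additive (hence submodular) valuations
v1 = lambda S: sum({'a': 3, 'b': 1}.get(x, 0) for x in S)
v2 = lambda S: sum({'a': 2, 'b': 2}.get(x, 0) for x in S)
print(greedy_allocation(['a', 'b'], [v1, v2]))   # -> [{'a'}, {'b'}]
```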

2.
Correlated Q-Learning

Amy Greenwald and Keith Hall

Bowling [2000] puts forth two desiderata for multiagent learning
algorithms in general-sum Markov games: rationality and convergence.
This talk introduces correlated-Q learning, a natural generalization
of existing algorithms that satisfies these criteria.  Nash-Q [Hu and
Wellman 1998] satisfies rationality, but in general it does not
converge.  FF-Q [Littman 1994 and 2001] satisfies convergence, but in
general it is not rational.  Correlated-Q satisfies rationality by
construction.  Moreover, we demonstrate (empirical) convergence on the
standard testbed of general-sum Markov games, including those for
which deterministic equilibrium policies do not exist.
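
For readers who want to see the per-state computation concretely, here is a hedged sketch of the linear program a correlated-Q learner might solve at each state under a utilitarian selection rule (maximize the sum of payoffs); the payoff matrices, the choice of selection rule, and the use of scipy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a utilitarian correlated equilibrium of a two-player stage game.
import numpy as np
from scipy.optimize import linprog

def utilitarian_ce(U1, U2):
    """U1, U2: payoff matrices of shape (m, n). Returns a joint distribution p[a1, a2]."""
    m, n = U1.shape
    nvar = m * n
    idx = lambda a1, a2: a1 * n + a2

    A_ub, b_ub = [], []
    # player 1 rationality: no expected gain from deviating a1 -> a1p
    for a1 in range(m):
        for a1p in range(m):
            if a1p == a1:
                continue
            row = np.zeros(nvar)
            for a2 in range(n):
                row[idx(a1, a2)] = U1[a1p, a2] - U1[a1, a2]   # deviation gain <= 0
            A_ub.append(row); b_ub.append(0.0)
    # player 2 rationality, symmetric
    for a2 in range(n):
        for a2p in range(n):
            if a2p == a2:
                continue
            row = np.zeros(nvar)
            for a1 in range(m):
                row[idx(a1, a2)] = U2[a1, a2p] - U2[a1, a2]
            A_ub.append(row); b_ub.append(0.0)

    A_eq, b_eq = [np.ones(nvar)], [1.0]           # probabilities sum to one
    c = -(U1 + U2).reshape(nvar)                  # maximize total payoff
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1))
    return res.x.reshape(m, n)

# Chicken: the correlated equilibrium spreads weight over outcomes and improves
# on the symmetric mixed-Nash payoff.
U1 = np.array([[6, 2], [7, 0]])
U2 = U1.T
print(utilitarian_ce(U1, U2).round(3))
```

A correlated-Q learner would then, roughly, use each player's expected payoff under such a joint distribution as that state's value in its Q backup.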

3.
Suresh Mutuswami, Essex University
Subscription Mechanisms for Network Formation

We analyze a model of network formation where the costs of link formation
are publicly known but individual benefits are not known to
the social planner.  The objective is to design a simple mechanism ensuring
efficiency, budget balance and equity.  We propose two mechanisms
towards this end; the first ensures efficiency and budget balance but
not equity. The second mechanism corrects the asymmetry in payoffs
through a two-stage variant of the first mechanism.  We also discuss
an extension of the basic model to cover the case of directed graphs
and give conditions under which the proposed mechanisms are immune to
coalitional deviations.

4.
Design of Competitive Mechanisms
Andrew V. Goldberg, Research Fellow, InterTrust STAR Lab

ABSTRACT

The development of the Internet raises mechanism design problems
that involve many agents, both human and computerized. In this
context, it is often desirable to have truthful (or strategyproof)
mechanisms with an objective function that is suited to the
application. Classical mechanisms either maximize the total
welfare of the system or divide the operating cost among the
participants. In some cases, however, it is desirable to achieve
different objectives. In particular, in applications with a single
seller and one or more buyers, it may be desirable to maximize the
seller's revenue. This requires mechanisms different
from those provided by classical game theory.

Auctions are natural mechanisms with a seller and buyers.
In the spirit of on-line mechanisms, one can compare the revenue
of a truthful auction with that of a nontruthful pricing mechanism.
A competitive auction is a truthful auction that for any input
has an expected revenue that is within a constant factor of the
highest possible for such an auction.

In this talk we give a high-level overview of competitive
auctions, including motivation and definitions, positive and
negative results, generalizations, and open problems.

Joint work with Amos Fiat, Jason Hartline, Anna Karlin, and
Andrew Wright.

5.
Sven de Vries, München
Linear Programming and Ascending Auctions
(joint work with Sushil Bikhchandani,
James Schummer, and Rakesh V. Vohra)

The Vickrey sealed bid auction occupies a central place in auction theory
because of its efficiency and incentive properties.
Implementing the auction requires the auctioneer to solve $n+1$ optimization
problems, where $n$ is the number of bidders.
In this talk we survey various environments (some old and some new)
where the payments bidders make under the Vickrey auction correspond to
dual variables in certain linear programs.
Thus, in these environments, at most two optimization
problems must be solved to determine the Vickrey outcome.
Furthermore, primal-dual algorithms for some of these linear
programs suggest ascending auctions that implement the Vickrey outcome.
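
To make the "n+1 optimization problems" concrete, here is a hedged brute-force sketch of Vickrey(-Clarke-Groves) payments for a tiny combinatorial auction; the value-oracle interface and the exhaustive solver are illustrative assumptions, not part of the survey.

```python
# Minimal sketch: one welfare maximization with all bidders, plus one per bidder.
from itertools import product

def max_welfare(items, bidders):
    """Brute force: assign each item to one bidder (or nobody) to maximize welfare."""
    best, best_alloc = 0.0, {i: frozenset() for i in bidders}
    for assignment in product([None] + list(bidders), repeat=len(items)):
        bundles = {i: frozenset(it for it, who in zip(items, assignment) if who == i)
                   for i in bidders}
        welfare = sum(bidders[i](bundles[i]) for i in bidders)
        if welfare > best:
            best, best_alloc = welfare, bundles
    return best, best_alloc

def vickrey_payments(items, bidders):
    w_all, alloc = max_welfare(items, bidders)          # optimization problem 1
    payments = {}
    for i in bidders:                                    # ...plus one per bidder
        others = {j: v for j, v in bidders.items() if j != i}
        w_without_i, _ = max_welfare(items, others)
        others_value_now = w_all - bidders[i](alloc[i])
        payments[i] = w_without_i - others_value_now     # Clarke pivot payment
    return alloc, payments

# two items, two bidders: one wants the pair, one wants either single item;
# bidder x wins both items and pays 3 (the value it displaces), y pays nothing
v_pair   = lambda S: 5.0 if {'a', 'b'} <= S else 0.0
v_single = lambda S: 3.0 if S else 0.0
print(vickrey_payments(['a', 'b'], {'x': v_pair, 'y': v_single}))
```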

6.
Rahul Sami
Yale University Department of Computer Science

TITLE: Approximation and Collusion in Multicast Cost Sharing

We investigate multicast cost sharing from both computational and
economic points of view.  Recent work in economics [Moulin and
Shenker,1997] leads naturally to the consideration of two mechanisms:
Marginal Cost (MC), which is efficient and strategyproof, and Shapley
value (SH), which is budget-balanced and group-strategyproof and,
among all mechanisms with these two properties, minimizes the
worst-case efficiency loss.  Subsequent work in computer science
[Feigenbaum, Papadimitriou and Shenker,2000] shows that the MC
mechanism can be computed by a simple, distributed algorithm that uses
only two messages per link of the multicast tree but that computing
the SH mechanism seems, in the worst case, to require a number of
messages that is quadratic in the size of the multicast tree.
We extend these results in two directions.  First, we give a
group-strategyproof mechanism that exhibits a tradeoff between the
other properties of the Shapley value:  It can be computed by a
distributed algorithm that is more communication-efficient than the
natural algorithm (exponentially more so in the worst case), but it
might fail to achieve exact budget balance or exact minimum efficiency
loss (albeit by a bounded amount).  Second, we completely characterize
the groups that can strategize successfully against the MC mechanism.

Joint work with Joan Feigenbaum, Arvind Krishnamurthy, and Scott Shenker
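
For orientation, the sketch below shows the Shapley-value cost shares on a multicast tree for a fixed receiver set: each link's cost is split equally among the receivers downstream of it. The tree encoding and the toy numbers are illustrative assumptions; the actual SH mechanism additionally iterates as receivers with low willingness to pay drop out.

```python
# Minimal sketch (assumed encoding): parent[v] is v's parent in the multicast
# tree, cost[v] is the cost of the link from parent[v] to v; the root is the source.
def shapley_shares(parent, cost, receivers):
    # count how many receivers sit at or below each node's incoming link
    downstream = {v: 0 for v in parent}
    for r in receivers:
        v = r
        while v is not None:               # walk up toward the root
            if v in downstream:
                downstream[v] += 1
            v = parent.get(v)
    # each receiver pays an equal share of every link on its path to the root
    shares = {}
    for r in receivers:
        total, v = 0.0, r
        while v is not None:
            if v in cost:
                total += cost[v] / downstream[v]
            v = parent.get(v)
        shares[r] = total
    return shares

# source -> a -> {r1, r2}: the shared link's cost 6 is split; the leaf links are not
parent = {'a': 'source', 'r1': 'a', 'r2': 'a'}
cost   = {'a': 6.0, 'r1': 2.0, 'r2': 4.0}
print(shapley_shares(parent, cost, ['r1', 'r2']))   # {'r1': 5.0, 'r2': 7.0}
```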

7.
On the complexity of auctions

Rudolf Muller
Maastricht University

Many publications on combinatorial auctions are motivated by two
claims: that the winner determination problem is NP-complete, and therefore
one should develop fast exact or approximation algorithms, and that
progressive auctions are preferable because they do not require
buyers to report their full type. However, if we consider the plain case
of a single-item auction, a sealed-bid auction is
polynomial, while the English auction is exponential in the input.
Also, if buyers would report their full type in the VCG mechanism
for the multi-object case, the winner determination problem would
become tractable. It is therefore necessary to develop more
precise models of the complexity of auctions. Such
models have to consider carefully the encoding length of the input,
which is the information seller and buyers have when the auction
starts, as well as the cost of communication during the auction.
Questions to be answered include: in which cases can we expect a
polynomial auction mechanism at all, and in which not? Furthermore, how
efficient can progressive auctions be if the plain English
auction is exponential? We present ongoing research on such
questions.

8.
Fairness Measures in Optimization
Jon Kleinberg
Cornell University

In a typical discrete optimization problem, the quality
of the solution -- the value of the objective function --
is a single figure of merit that must be maximized or minimized.
But if we consider settings in which resources must be
allocated or provisioned for a set of individuals, there is really
a separate objective function associated with each individual.
(How much bandwidth was I allocated?  How far am I from
the nearest open facility?  When will my job complete?)
The `quality' of the solution in such a case is most
naturally expressed as a vector with a coordinate specifying
the quality of the outcome for each individual.

Once we consider optimization problems of this flavor,
it is far from clear how to define the `optimal' solution.
And without a clear notion of an optimum, it is harder
still to quantify the notion of an `approximate' solution.
We discuss how one can formulate notions of optimization and
approximation in these settings using `fairness measures,' which
intuitively seek solutions that are favorable for all individuals.
Several of these measures originated in the study of bandwidth
allocation, where they can be usefully applied; we will consider
how they can also be applied in other domains, including scheduling
and facility location problems.

The talk is based on joint work with Amit Kumar, Yuval Rabani, and Eva Tardos.
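
As one concrete instance of such a measure, the sketch below compares two allocation vectors under the max-min (leximin) criterion that originated in bandwidth allocation; the vectors are made-up toy data.

```python
# Minimal sketch: leximin comparison of allocation vectors (one entry per individual).
def leximin_key(alloc):
    return sorted(alloc)          # compare the sorted-ascending vectors lexicographically

a = [2, 2, 6]                     # e.g., bandwidth allocated to three users
b = [1, 4, 5]
better = a if leximin_key(a) > leximin_key(b) else b
print(better)                     # [2, 2, 6]: its worst-off user does better
```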

9.
A Generic Analysis of Selfish Routing
Eric Friedman, Cornell University

We show that for most transmission rates the losses due to selfish
routing in networks are bounded by $O(\log(C))$, where $C$ is a
measure of the sensitivity of the network to changes in transmission
rates, for arbitrary latency functions.  This contrasts with the losses for
the worst rate, which
can be $O(C)$. These results also hold for the restricted case of typical
queuing functions (such as M/M/1) and for systems operating under
congestion control algorithms (such as TCP).

10.
A Cryptographic Solution to a Game Theoretic Problem

Yevgeniy Dodis, Shai Halevi and Tal Rabin

In this work we use cryptography to solve a game-theoretic problem
which arises naturally in the area of two party strategic games.
The standard game-theoretic solution concept for such games is that of
an equilibrium. It is known that for many games the expected
equilibrium payoffs can be much higher when a trusted third party (a
``mediator'') assists the players in choosing their moves (correlated
equilibria), than when each player has to choose its move
on its own (Nash equilibria).

It is natural to ask whether there exists a mechanism
that eliminates the need for the mediator yet allows the players to
maintain the high payoffs offered by mediator-assisted strategies.  We
answer this question affirmatively provided the players are
computationally bounded and can have free communication (so-called
``cheap talk'') prior to playing the game.

11.
Title: The Effect of False-name Bids on Auction Protocols

Speaker: Makoto Yokoo

Abstract:
In this talk, we examine the effect of a new type of fraud called
false-name bids, which can be a serious problem in Internet auctions.
False-name bids are bids submitted by a single agent under multiple
fictitious names (e.g., multiple e-mail addresses). From the
game-theoretic/mechanism design viewpoint, the strategy space of each
agent/player is expanded if agents can submit false-name bids. Thus, a
protocol that is dominant-strategy incentive compatible (i.e., honesty
is the best policy) without the possibility of false-name bids might
no longer have a dominant-strategy equilibrium with the possibility of
false-name bids. For example, the Generalized Vickrey Auction protocol
(GVA) satisfies dominant-strategy incentive compatibility, Pareto
efficiency, and individual rationality without the possibility of
false-name bids. On the other hand, with the possibility of false-name
bids, the GVA fails to satisfy the dominant-strategy incentive
compatibility. Furthermore, we prove that it is theoretically
impossible for any combinatorial auction protocol to simultaneously
satisfy these three properties. We develop a new combinatorial auction
protocol and a double auction protocol that satisfy dominant-strategy
incentive compatibility and individual rationality, and can achieve
relatively good (but not Pareto efficient) social surplus even if the
agents can submit false-name bids.

12.
Title: On approximating optimal auctions
Author: Amir Ronen, Stanford

Computational problems that arise in game-theoretic environments are
very different from "usual" algorithmic problems. In this talk I will
focus on a fundamental problem in economics called optimal auction
design: A seller wishes to sell an item to a group of self-interested
agents. Each agent $i$ has a  {\em privately} known valuation $v^i$ for
the object. Given a distribution on these valuations, our goal is to
design an auction (i.e. a protocol) that maximizes the  seller's
expected revenue. We present a simple generic protocol that {\bf
guarantees} at least half of the optimal revenue. When the designer can
only sample the distribution, we show a tight bound of $\ln h$ on the
approximation ratio, where $h$ denotes the ratio between the highest and
the lowest possible valuations. Finally we suggest a generalization of
our auction and argue that it will generate a revenue which is close to
optimal for reasonable distributions. In particular we show this under
an independence assumption. Several open problems are presented as well.

13.
Minimal Preference Elicitation: An Equilibrium Approach

David C. Parkes
Division of Engineering and Applied Sciences,
Harvard University

The core role of price-based mechanisms, such as iBundle, in
reducing the complexity of the agent valuation problem in
combinatorial auctions is demonstrated with a simple observation.
A large class of dynamic preference elicitation methods for
efficient allocation must collect enough information from agents
to compute a set of competitive equilibrium prices, because this
information is implied by the primal solution. As such,
price-based mechanisms can usefully guide preference elicitation,
and can be combined with methods to leverage structured
preference representations. Connections with Vickrey payments also
lead to a coherent integration of incentive considerations into
dynamic mechanism design.

14.
Graphical Models for Game Theory
Michael Kearns
Syntek Capital

In order to scale up to the analysis of large multi-agent systems,
classical game theory must be augmented with compact representations of
multi-player matrix games, and these representations should permit efficient
computation of equilibria and related concepts. In this work, we introduce a
graph-theoretic representation of one-stage, multi-player games that may
yield an exponential reduction in space in many natural cases. Our main
results are efficient algorithms for computing Nash equilibria in graphical
games that are trees or are sparse.

Joint work with Michael Littman and Satinder Singh.
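
A back-of-the-envelope sketch of the space saving the abstract refers to, assuming binary actions and a path-shaped interaction graph (both are illustrative choices):

```python
# Minimal sketch: table sizes for n players with binary actions.
n = 20                                    # players arranged on a path graph
normal_form_entries = n * 2 ** n          # one payoff per player per joint action profile
degrees = [1] + [2] * (n - 2) + [1]       # path-graph degrees (assumed topology)
graphical_entries = sum(2 ** (d + 1) for d in degrees)   # each table: player + neighbors
print(normal_form_entries, graphical_entries)            # 20971520 vs 152
```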

15.
XOR Auctions with Buyer Preferences and Seller Priorities
Ion Mandoiu, Georgia Tech

Joint work with Chaitanya Bandela, Yu Chen, Andrew B. Kahng, and
Alexander Zelikovsky

ABSTRACT

Auctions and exchanges are one of the most important market mechanisms for
price determination and allocation of goods. Combinatorial auctions extend
traditional auction formats by allowing agents to bid on bundles of items
and to place mutually exclusive (XOR) bids. In an XOR auction each buyer
makes multiple bids, but can be allocated at most one item. In this paper
we introduce extensions of XOR auctions that take into account buyer
preferences and seller priorities.

In an XOR double auction with buyer preferences (XOR-DABP), buyers
specify preferences, possibly including ties, on the items on which they
bid. We seek allocations of the items to the buyers which are stable with
respect to buyers' preferences, in the sense that items preferable to the
item allocated to a buyer are sold for a price higher than or equal to what
she offered for them. In the case of double auctions, the allocation should
also ensure fairness to the sellers: if an item received a bid with a
higher value than the allocated price then the buyer who placed that bid
got a more or equally preferable item. We show that (1) both buyers and
sellers are better off in an XOR-DABP than in an XOR auction when there
are no ties in buyer preferences and bid values; (2) finding stable
allocations with either maximum total value/surplus or maximum buyer
satisfaction can be done efficiently in an XOR-DABP without ties, but is
NP-hard when ties are allowed; and (3) in the special case when all bids
for an item have the same value, stable allocations form a greedoid, and
thus maximum size stable allocations can be computed efficiently.

In an XOR auction with seller priorities (XOR-ASP), the seller assigns a
priority to each item, and seeks allocations in which any item is
allocated only if all items with higher priorities are also allocated. We
show that (1) the seller is better off using an XOR-ASP rather than a
series of simple auctions; (2) a feasible allocation with maximum
value/surplus can be found efficiently using maximum-weight perfect
matchings; and (3) feasible allocations form a Gaussian greedoid, and
therefore the maximum value/surplus allocation can be found even more
efficiently when every buyer bids the same value on all acceptable items.

16.
TITLE:
Two Results On Competitive Auctions

SPEAKER:
Jason Hartline

ABSTRACT:

We consider the problem of selling identical items via a truthful (or
strategy-proof) auction, with the goal of maximizing the seller's
revenue. We aim to design "competitive" auctions, auctions which achieve
a revenue that is within a constant factor of the revenue of the optimal
(non-truthful) fixed-price auction.

In this talk, we present two recent results for competitive auctions.
First, we prove that no deterministic truthful auction is competitive.  We
then present a randomized auction, based on the Shapley Value mechanism, that
is 4-competitive.  This auction has the property that it can be used as a
building block for revenue-maximizing mechanisms for other applications.

Joint work with Andrew Goldberg and Andrew Wright.

17.
Eva Tardos, Cornell University

We consider the problem of routing traffic in a congested network.
We are given a network, a rate of traffic between each pair of nodes,
and a latency function for each edge specifying the time needed to
traverse the edge given its congestion. We consider a "selfish" routing
in which each network user routes its traffic on the minimum-latency
path available to it, ignoring the latency of all other users. We will
compare this "selfish" routing to a social optimum, where the objective
is to route traffic such that the sum of all travel times---the total
latency---is minimized. In general the "selfish" assignment of traffic
to paths will not minimize the total latency, i.e., the lack of central
regulation degrades network performance. In this talk we will quantify
the degradation of network performance due to unregulated traffic. The
talk is based on joint work with Tim Roughgarden.
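
A hedged, worked miniature of this degradation on the simplest (Pigou-style) example: two parallel links with latencies 1 and x and one unit of traffic. Using the Beckmann potential to find the Nash flow is a standard device; the scipy call is just one way to do the one-dimensional minimization.

```python
# Minimal sketch: Nash flow minimizes the potential sum_e \int_0^{f_e} l_e(u) du,
# the social optimum minimizes the total latency sum_e f_e * l_e(f_e).
from scipy.optimize import minimize_scalar

# f = flow on the congestible link with latency l(x) = x; the other link has latency 1
potential     = lambda f: (1 - f) * 1.0 + f ** 2 / 2.0
total_latency = lambda f: (1 - f) * 1.0 + f * f

nash = minimize_scalar(potential, bounds=(0, 1), method='bounded').x
opt  = minimize_scalar(total_latency, bounds=(0, 1), method='bounded').x

print(total_latency(nash), total_latency(opt),
      total_latency(nash) / total_latency(opt))   # about 1.0, 0.75, and 4/3
```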

18.
Tim Roughgarden, Cornell University
Title: Designing Networks for Selfish Users is Hard

Abstract:

We consider a directed network in which every edge possesses a
latency function specifying the time needed to traverse
the edge given its congestion.  Selfish, noncooperative
agents constitute the
network traffic and wish to travel from a source
$s$ to a sink $t$ as quickly as possible.
Since the route chosen by one network user
affects the congestion (and hence the latency) experienced by others,
we model the problem as a noncooperative game.
Assuming each agent controls only a negligible
portion of the overall traffic, Nash
equilibria in this noncooperative game correspond
to $s$-$t$ flows in which all flow paths have equal latency.

A natural measure for the performance of a network used by
selfish agents is the common latency experienced by each user in a
Nash equilibrium.  It is a counterintuitive but well-known
fact that removing edges from a network may {\em improve} its
performance; the most famous example of this phenomenon is Braess's
Paradox. This fact motivates the following network design
problem: given such a network, which edges should be removed
to obtain the best possible flow at Nash equilibrium?  Equivalently,
given a large network of candidate edges to be built, which subnetwork
will exhibit the best performance when used selfishly?

We give optimal inapproximability results and approximation algorithms
for several network design problems of this type.
For example, we prove that for networks with $n$ nodes and continuous,
nondecreasing latency functions, there is no approximation
algorithm for this problem with approximation ratio less than $n/2$
(unless $P=NP$).
We also prove this hardness result to be best possible by exhibiting an
$n/2$-approximation algorithm.  For networks in which
the latency of each edge is a linear function of the congestion, we
prove that there
is no $(\frac{4}{3}-\epsilon)$-approximation algorithm for the problem
(for any $\epsilon > 0$, unless $P=NP$); the existence of a
$\frac{4}{3}$-approximation
algorithm follows easily from existing work, proving
this hardness result sharp.

Moreover, we prove that an optimal approximation algorithm for these
problems is what we call the {\em trivial algorithm}: given a network
of candidate edges, build the entire network.   Roughly, this result
implies that the presence of harmful extra edges in a network
(a phenomenon that can lead to extremely poor performance in large
networks with general latency functions) is impossible to detect
efficiently.
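
For concreteness, here is a hedged sketch of the classical four-node Braess instance usually invoked for this phenomenon, checking that each claimed flow is at Nash equilibrium; the node and path names are illustrative.

```python
# Minimal sketch: s-t network with edges s->v and w->t of latency x (their own flow),
# edges s->w and v->t of latency 1, and an optional zero-latency shortcut v->w.
def path_latencies(f_svt, f_swt, f_svwt):
    x_sv = f_svt + f_svwt                       # flow on edge s->v
    x_wt = f_swt + f_svwt                       # flow on edge w->t
    return {'s-v-t': x_sv + 1.0, 's-w-t': 1.0 + x_wt, 's-v-w-t': x_sv + x_wt}

def is_nash(flows, available):
    """Every used path has the minimum latency among the available paths."""
    lat = path_latencies(flows.get('s-v-t', 0.0), flows.get('s-w-t', 0.0),
                         flows.get('s-v-w-t', 0.0))
    best = min(lat[p] for p in available)
    return all(lat[p] <= best + 1e-9 for p in available if flows.get(p, 0.0) > 0)

all_paths = ('s-v-t', 's-w-t', 's-v-w-t')
# With the shortcut: everyone on s-v-w-t is a Nash flow with common latency 2.
print(is_nash({'s-v-w-t': 1.0}, all_paths))                     # True
# Shortcut removed: splitting evenly is a Nash flow with latency 3/2.
print(is_nash({'s-v-t': 0.5, 's-w-t': 0.5}, all_paths[:2]))     # True
```

Removing the shortcut lowers every user's latency from 2 to 3/2, the same 4/3 gap that appears in the linear-latency hardness result above.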

19.
Bidding Agents with Complex Valuation Problems in Auctions

Tuomas Sandholm
Associate Professor
Carnegie Mellon University
Computer Science Department

This talk analyzes automated bidding agents that do not know a priori their
valuations for the items or bundles of items that are being auctioned,
but rather have to compute
the valuations.  This is the case, for example, in transportation exchanges
where a carrier company's cost of handling a set of deliveries is the cost
of her vehicle routing solution including those tasks minus the cost of her
routing solution without them.  In practice, the routing problem instances
are too large to solve optimally.  They can be approximated using anytime
algorithms.  This spawns a host of questions related to bounded rationality:
how should an agent allocate its limited or costly computation?  An agent
needs to decide on which bundles to allocate its computation.  The agent may
also want to allocate computation on the other agents' valuation problems in
order to be able to strategically tailor her bids.  Furthermore, the other
bidders' deliberation strategies affect what the agent's best deliberation
strategy is, and vice versa.  We first introduce a normative,
performance-profile-based deliberation control method for anytime algorithms.  We
combine
this with a game-theoretic solution concept to treat each agent's
deliberation actions (on which agent-bundle pair to invest the next
computation step) and bids uniformly as part of the agent's strategy.  We
show under which auction mechanisms and which models of bounded rationality
(limited or costly computation), different types of strategic computation
(computing on others' valuation problems) occur.  To our knowledge, this
work is the first to treat deliberation actions strategically.

This is joint work with Kate Larson.  The talk covers the following two
papers.  (Other papers on related topics are available at
http://www.cs.cmu.edu/~sandholm).

Larson, K. and Sandholm, T. 2001. Costly Valuation Computation in Auctions:
Deliberation Equilibrium. Theoretical Aspects of Reasoning about Knowledge
(TARK), pp. 169-182, Siena, Italy, July.
http://www.cs.cmu.edu/~sandholm/tark01.pdf

Larson, K. and Sandholm, T. 2001. Computationally Limited Agents in
Auctions. International Conference on Autonomous Agents, Workshop on
Agent-based Approaches to B2B, pp. 27-34, Montreal, Canada, May 28th.
http://www.cs.cmu.edu/~sandholm/computationally_limited.agents01ws.pdf

20.
TITLE:
Open Problems in Competitive Auction Design

Anna R. Karlin
University of Washington

ABSTRACT:
Consider the problem of selling identical items via a truthful (or
strategyproof) auction. The classic Vickrey auction, when faced with the
task of selling k identical items, sells them to the k highest bidders
at the (k+1)-st highest price bid. Suppose, however, that the auctioneer
is willing to sell fewer than k items in order to increase his revenue.
For example, if the top two bids are sufficiently high, his revenue
might be greatly increased by selling only one of the items, say to the
highest bidder at the second highest price. This motivates the problem
of designing truthful auctions that attempt to maximize revenue to the
auctioneer. Specifically, we are interested in designing "competitive"
auctions, auctions which achieve a revenue that is within a constant
factor of the revenue of the optimal (non-truthful) fixed-price auction.

We present a number of intriguing open problems and conjectures in the
area of competitive auction design, as well as limited progress towards
resolving these problems.

Joint work with Amos Fiat, Jason Hartline and Andrew Goldberg.
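
To fix notation, here is a hedged sketch of the two revenue quantities the abstract contrasts: the k-item Vickrey revenue and the optimal single-price revenue in hindsight (restricted, as is common in this line of work, to selling at least two items). The bid vector is toy data.

```python
# Minimal sketch: k-item Vickrey revenue vs. best fixed-price revenue in hindsight.
def vickrey_k_revenue(bids, k):
    """Sell k identical items to the k highest bidders at the (k+1)-st highest bid."""
    b = sorted(bids, reverse=True)
    return k * b[k] if len(b) > k else 0.0

def optimal_fixed_price_revenue(bids, min_winners=2):
    """Best single sale price in hindsight: max over prices p of p * |{bids >= p}|."""
    b = sorted(bids, reverse=True)
    return max((i + 1) * p for i, p in enumerate(b) if i + 1 >= min_winners)

bids = [20, 18, 5, 3, 2]
print(vickrey_k_revenue(bids, 2))          # 2 items at the 3rd highest bid: 2 * 5 = 10
print(sorted(bids)[-2])                    # selling just one item at the 2nd bid: 18
print(optimal_fixed_price_revenue(bids))   # best fixed price, >= 2 winners: 2 * 18 = 36
```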

21.
Cooperative Facility Location Games
Michel X. Goemans
MIT

The location of facilities in order to provide service for customers
is a well-studied problem in the operations research literature. In
the basic model, there is a predefined cost for opening a facility
and also for connecting a customer to a facility, the goal being to
minimize the total cost. Often, both in the case of public
facilities (such as libraries, municipal swimming pools, fire
stations, ...) and private facilities (such as distribution centers,
switching stations, ...), we may want to find a fair' allocation of
the total cost to the customers --- this is known as the cost
allocation problem. A central question in cooperative game theory is
whether the total cost can be allocated to the customers such that
no coalition of customers has any incentive to build their own
facility, i.e. whether the core is non-empty.

We establish strong connections between fair cost allocations and
linear programming relaxations for several variants of the facility
location problem. In particular, we show that a fair cost allocation
exists if and only if there is no integrality gap for a
corresponding linear programming relaxation. We use this insight in
order to give proofs for the existence of fair cost allocations for
several classes of instances based on a subtle variant of randomized
rounding. We also establish that it is NP-complete to decide whether
a fair cost allocation exists and whether a given allocation is
fair.

This is based on joint work with Martin Skutella.

22.
The communication complexity of efficient discrete allocations
Ilya Segal, Stanford

We analyze the communication complexity of mechanisms implementing
value-maximizing discrete allocations, combinatorial auctions being the
leading example.  We study both the continuous model, in which communication
is measured with the dimensionality of the message space, and the discrete
model, in which communication is measured with the number of exchanged bits.
Our basic technique applies to both cases and yields a lower bound on the
amount of communication. This bound is exponential in the number of objects
when the class of agents' possible preferences is sufficiently rich, in
particular when it includes all submodular preferences. In contrast, when
agents' preferences satisfy the gross substitute property, efficiency can be
achieved with polynomial communication (using the Walrasian equilibrium with
single-item prices or Ausubel's deterministic mechanism). Lower bounds are
also obtained for the implementation of approximately efficient allocations.

Joint work with Noam Nisan.

23.
Kamal Jain, Microsoft Research

Equitable, Group Strategyproof Cost Allocations via Primal-Dual-Type Algorithms

We build on the well-studied egalitarian cost-sharing method of
Dutta and Ray [1987] as follows. Using a primal-dual-type algorithm, we
show how to construct, efficiently, a class of cost-sharing methods for a
submodular cost function, parametrized by $n$ functions, one for each user.
Submodular functions capture the notion of economies of scale and such cost
functions arise naturally in practice, e.g., in multicast routing.

All methods in our class are cross-monotone and therefore lead to group
strategyproof mechanisms (i.e., the dominant strategy for each user is to
report her true utility, even in the presence of collusions).

Our class contains cost-sharing methods satisfying a wide range of fairness
criteria, hence the name ``equitable''. For instance, if the $n$ functions
are identical, we get the Dutta-Ray egalitarian method, which attempts to
equalize the amounts charged from the users. If they are probability
distributions from which users pick their utilities, we get a new method that
attempts to equalize the probabilities of users getting service.

(This is a joint work with Vijay Vazirani.)

24.
Cost Monotonicity, Consistency and Minimum Cost Spanning Tree Games

We propose a new cost allocation rule for minimum cost spanning tree
games. The new rule is a core selection and also satisfies cost
monotonicity.

We also give characterization theorems for the new rule as well as the
much-studied Bird allocation. We show that the principal difference
between these two rules is in terms of their consistency properties.
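
Since the Bird allocation figures prominently here, a hedged sketch of how it is commonly computed: run Prim's algorithm from the source and charge each player the cost of the edge through which it joins the tree. The cost data below are toy numbers.

```python
# Minimal sketch: Bird allocation for a minimum cost spanning tree game with source 0.
import heapq

def bird_allocation(n, cost):
    """Players 1..n plus source 0; cost[(i, j)] with i < j is the edge cost."""
    edge = lambda i, j: cost[(min(i, j), max(i, j))]
    in_tree, charge = {0}, {}
    heap = [(edge(0, j), j) for j in range(1, n + 1)]
    heapq.heapify(heap)
    while len(in_tree) <= n:                # until all n players are connected
        c, j = heapq.heappop(heap)
        if j in in_tree:
            continue                        # stale entry
        in_tree.add(j)
        charge[j] = c                       # player j pays its connecting edge
        for k in range(1, n + 1):
            if k not in in_tree:
                heapq.heappush(heap, (edge(j, k), k))
    return charge

# three players; the cheap 1-2 and 2-3 edges mean players 2 and 3 pay little
cost = {(0, 1): 4, (0, 2): 5, (0, 3): 6, (1, 2): 1, (1, 3): 3, (2, 3): 1}
print(bird_allocation(3, cost))   # {1: 4, 2: 1, 3: 1}
```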

Mixing Streaming and Flow-Controlled traffic in Networks:
Distributed Control and Incentives
Peter B. Key
Microsoft Research, Cambridge

In the current Internet, TCP is the dominant flow-control scheme, appropriate
for so-called elastic applications that can tolerate wide variations in
throughput.  Conversely, streaming media typically can only operate at one
rate, or at one of a small number of allowable rates. We show how the use of
distributed control signals based on resource-prices allows the conflicting
demands of the two types of traffic to be traded off, and specific
user-strategies constructed for streaming traffic. The signals can be encoded
by a simple packet-marking scheme, for example by suitable use of the ECN
field in IP.  At present file transfers are categorised as elastic traffic,
and use flow-control protocols; we discuss how under certain assumptions
this may not be an optimal strategy.  Specifically, we discuss the
implications for Web mice and Web elephants.

25.

Title:  An Evolutionary Games Analysis of Bidding Strategies in a Scheduling
Auction

Authors: Jeffrey K. MacKie-Mason and Michael P. Wellman

Abstract: We have embarked on a project to design and study the performance
of market-based mechanisms for decentralized scheduling problems.  Such
problems are hard due to naturally arising nonconvexities (from
complementarities and integer constraints). We call the problem
"decentralized" when self-interested agents possess private information
relevant to a good solution; respecting private information places an
additional constraint on the design of good mechanisms.

In this paper we address a long-standing problem in the literature on
mechanism design for any complex problem: the evaluation of mechanism
performance when optimal agent behavior is unknown.  More often than not it
is impossible to derive Bayesian-Nash strategies for agents participating in
complex mechanisms (e.g., the FCC spectrum auctions).  Yet, absent empirical
data, analysis of mechanism performance requires assumptions about how
agents will interact with that mechanism.

An evolutionary approach to performance analysis has received increasing
attention as a way to understand which strategies are "fit": stable and
successful against other strategies when strategic encounters are frequently
repeated.  We take an evolutionary approach to isolate strategies or sets of
strategies that are relatively (mutually) fit, and then evaluate the
performance of market mechanisms for scheduling against these strategies.
To implement this we form a population of agents playing strategies drawn
from a parameterized space, and then use selection and updating rules to
modify the composition of the population and the characteristics of the
strategies the agents play.
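
As a hedged illustration of the population-based "selection and updating" loop described above (not the authors' actual strategy space or update rule), here is a bare-bones replicator-style step for a two-strategy toy game.

```python
# Minimal sketch: strategies grow in the population in proportion to their fitness.
import numpy as np

def replicator_step(pop, payoff, lr=0.1):
    """pop: distribution over strategies; payoff[i, j]: payoff of strategy i vs. j."""
    fitness = payoff @ pop                        # expected payoff of each strategy
    growth = pop * (1 + lr * (fitness - pop @ fitness))
    return growth / growth.sum()                  # renormalize to a distribution

payoff = np.array([[1.0, 0.2],                    # toy 2-strategy game (made-up numbers)
                   [0.8, 0.6]])
pop = np.array([0.5, 0.5])
for _ in range(200):                              # selection reshapes the population
    pop = replicator_step(pop, payoff)
print(pop.round(3))                               # from this start, strategy 2 takes over
```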

26.

Elias Koutsoupias, UCLA
Selfish resource allocation.

We consider selfish resource allocation and in particular selfish
routing over a network consisting of parallel links. We assume a
collection of users, each employing a mixed strategy ---a probability
distribution over the links--- to control the routing of its own
traffic. The expected latency through a link is assumed to be equal to
the expectation of the traffic assigned to the link; all users sharing
a link incur the same expected latency cost, taken to be the link's
expected latency. In a Nash equilibrium, each user selfishly routes
its traffic on those links that minimize its expected latency cost,
given the network congestion caused by the other users. We study how
much the performance of the network deteriorates due to the selfish
nature of its users. More specifically, we study the coordination
ratio, defined as the ratio of the maximum (over all links) expected
latency in the worst possible Nash equilibrium, over the least possible
maximum latency had global regulation been available.
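
A hedged, worked miniature of the coordination ratio on the smallest interesting instance: two identical links and two users with unit traffic, each mixing uniformly (the fully mixed Nash equilibrium). The enumeration below simply averages the maximum link load over realized link choices.

```python
# Minimal sketch: expected maximum load at the fully mixed Nash vs. the optimum.
from itertools import product

p = (0.5, 0.5)                                # each user's mixed strategy over two links

def expected_max_load():
    total = 0.0
    for choice in product((0, 1), repeat=2):  # realized link of user 1 and user 2
        prob = p[choice[0]] * p[choice[1]]
        loads = [choice.count(link) for link in (0, 1)]
        total += prob * max(loads)
    return total

opt = 1.0                                     # global regulation puts one user on each link
print(expected_max_load(), expected_max_load() / opt)   # 1.5, coordination ratio 3/2
```

The resulting 3/2 matches the known worst-case coordination ratio for two identical links.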

27.

Ilan Kremer
Assistant Professor of Finance
Stanford University
518 Memorial Way, CA 94305-5015
Phone 650-736-0288
Fax 650-725-6152

Divisible Good Auctions - the role of allocation rules

Uniform price auctions are used in many financial and non-financial markets.
Beginning with Wilson (1979), the theoretical literature has argued that
these auctions are subject to possible collusion among bidders. In this
paper we show that these results are due to the way the asset is being
divided rather than the choice of a uniform price format. We focus on
allocation rules that specify the way the asset is divided in cases of
excess demand, and show that these rules have a dramatic effect on the set
of equilibrium prices. In particular, we show that the allocation rule used
in virtually all uniform price auctions has a negative effect on equilibrium
prices. We also question a common assumption regarding agents' strategy
sets. We show that restricting agents to use only continuous strategies may
distort the set of equilibria.

