DIMACS Workshop on the Boundary between Economic Theory and Computer Science

October 24 - 26, 2007
DIMACS Center, CoRE Building, Rutgers University

Organizers:
Lance Fortnow, University of Chicago, fortnow@cs.uchicago.edu
Rakesh Vohra, Northwestern University, r-vohra@kellogg.northwestern.edu
Presented under the auspices of the Special Focus on Computation and the Socio-Economic Sciences.

Abstracts:

Dirk Bergemann, Yale University and Juuso Valimaki, Helsinki School of Economics and University of Southampton

Title: Dynamic Marginal Contribution Mechanism

We consider truthful implementation of the socially efficient allocation in a dynamic private value environment in which agents receive private information over time. We propose a suitable generalization of the Vickrey-Clarke-Groves mechanism, based on the marginal contribution of each agent. In the marginal contribution mechanism, the ex post incentive and ex post participation constraints are satisfied for all agents after all histories. It is the unique mechanism satisfying the ex post incentive, ex post participation, and efficient exit conditions.

We develop the marginal contribution mechanism in detail for a sequential auction of a single object in which each bidder learns over time her true valuation of the object. We show that a modified second-price auction leads to truthtelling.
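The static second-price auction is the building block that the dynamic mechanism generalizes. As a minimal sketch (illustrative only; the names and example values here are not from the paper), the winner's payment equals the externality she imposes on the others, which is what makes truthful bidding a dominant strategy:

```python
# Minimal sealed-bid second-price (Vickrey) auction: the static building
# block that the dynamic marginal contribution mechanism generalizes.
# All names and values here are illustrative, not from the paper.

def second_price_auction(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, payment)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # The winner pays the highest losing bid: her marginal externality
    # on the other bidders, which is why truthful bidding is dominant.
    payment = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, payment

winner, payment = second_price_auction({"a": 10.0, "b": 7.0, "c": 3.0})
# winner == "a", payment == 7.0
```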


Jennifer Chayes, Microsoft Research

Title: Trust-Based Recommendation Systems

The goal of a trust-based recommendation system is to provide personalized recommendations to agents about an item (or items), based on the opinions or reviews of other agents, as well as the declared trust between agents. We take an axiomatic approach to this problem by defining a class of models, and a set of natural axioms that we would like these systems to obey. We first obtain an impossibility result by demonstrating that these axioms cannot be satisfied simultaneously. We also give several positive results, by showing that any proper subset of these axioms can be satisfied by an appropriate system, and characterizing those systems. This is joint work with Reid Andersen, Christian Borgs, Uri Feige, Abie Flaxman, Adam Kalai, Vahab Mirrokni and Moshe Tennenholtz.


Richard Cole, Shahar Dobzinski and Lisa Fleischer

Title: Prompt Mechanisms for Online Scheduling Problems

We study the following online problem: at each time unit, one of m identical items is offered for sale. Bidders arrive and depart dynamically, and each bidder is interested in winning one item between his arrival and departure. Our goal is to design truthful mechanisms that maximize the welfare, the sum of the utilities of winning bidders.

We first consider this problem under the assumption that the private information for each bidder is his value for getting an item. In this model constant-competitive mechanisms are known, but we observe that these mechanisms suffer from the following disadvantage: a bidder might learn his payment only when he departs. We argue that these mechanisms are essentially unusable, because they impose several seemingly undesirable requirements on any implementation of the mechanisms.

To crystallize these issues, we define the notions of prompt and tardy mechanisms. We present two prompt truthful mechanisms, one deterministic and one randomized, that guarantee a constant competitive ratio. We also prove that our deterministic mechanism is optimal for this setting.

We then study a model in which both the value and the departure time are private information. While in the deterministic setting only a trivial competitive ratio can be guaranteed, we use randomization to obtain a prompt truthful Θ(1/log m)-competitive mechanism. Finally, we show that no truthful randomized mechanism can achieve a ratio better than 1/2 in this model.


Joan Feigenbaum, Yale University

Title: Open Problems in Incentive-Compatible Interdomain Routing

The Internet is composed of many independently managed subnetworks called domains or autonomous systems (ASes). The task of discovering and selecting routes between these ASes is called interdomain routing (IDR). The IDR problem exemplifies many of the challenges of Internet-protocol research, e.g., massive scale, subnetwork autonomy, partial information, inadequate computational and economic models, and a highly dynamic execution environment. This talk surveys major open problems in IDR.


Mira Gonen, Tel Aviv University, Rica Gonen, Yahoo, and Elan Pavlov, MIT

Title: Generalized Trade Reduction Mechanisms

When designing a mechanism there are several desirable properties to maintain, such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB). It is well known that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB. There have been several attempts to circumvent this impossibility by trading welfare for BB, e.g., in domains such as double-sided auctions, distributed markets, and supply chain problems.


Geoffrey Gordon, Carnegie Mellon University, Amy Greenwald and Casey Marks, Brown University, Martin Zinkevich, Yahoo!

Title: No-Regret Learning in Convex Games

A good deal is known about minimizing different kinds of regret in experts problems, and how these results translate into statements about different kinds of equilibria in the multiagent setting of repeated matrix games. Much less is known about the possible kinds of regret in online convex programming problems (OCPs), or about the analogous multiagent setting of repeated convex games: most previous OCP algorithms minimize external regret only. External regret corresponds to a very weak form of equilibrium called coarse correlated equilibrium in general-sum games. This gap is unfortunate, since convex games are much more expressive than matrix games, and since many modern machine learning algorithms can be expressed as convex programs. Here we define three types of regret for OCPs, analyze their connection to equilibria in convex games, and explicitly construct algorithms for minimizing the most important of these forms of regret. Moreover, we provide a complete complexity analysis of our algorithms, showing that they are significantly more efficient than previous OCP algorithms that provide comparable guarantees.
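External regret, the weakest of the notions the abstract mentions, can be illustrated with projected online gradient descent on a one-dimensional convex loss. This is a generic OCP sketch under assumed losses, not one of the authors' algorithms:

```python
import numpy as np

# Illustrative external-regret sketch (not the authors' algorithms):
# projected online gradient descent on losses f_t(x) = (x - z_t)^2
# over the interval [0, 1], a simple online convex program.

rng = np.random.default_rng(0)
targets = rng.uniform(0, 1, size=500)

x = 0.5
losses, plays = [], []
for t, z in enumerate(targets, start=1):
    plays.append(x)
    losses.append((x - z) ** 2)
    grad = 2 * (x - z)
    # Gradient step with decaying rate, projected back onto [0, 1].
    x = min(1.0, max(0.0, x - grad / np.sqrt(t)))

# External regret: cumulative online loss minus the loss of the best
# fixed point in hindsight (the mean minimizes sum of squared errors).
best_fixed = targets.mean()
regret = sum(losses) - ((targets - best_fixed) ** 2).sum()
# OGD guarantees regret growing only like sqrt(T), so per-round
# average regret vanishes as T grows.
```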


Sanjeev Goyal, Cambridge University

Title: Learning in Networks

We often have to choose among alternatives without knowing their relative advantages. In arriving at a decision we make use of our past experience as well as the experience of others, especially those who are close to us. These nearby individuals are in turn influenced by others who are close to them, and so on. It is then natural to examine how the patterns of connections between individuals shape the process of social learning. This is the subject matter of my talk.

A study of social learning yields insights into the diffusion process of new ideas and technologies. Diffusion in networks has been extensively studied in sociology, mathematical biology, statistical physics, and more recently in computer science. There are two distinctive elements to an economics approach and they will figure prominently in the framework presented in this talk: one, an explicit model of individual beliefs, objectives and optimal decision rules and two, a concern with the attainment of socially efficient outcomes.


Mingyu Guo and Vincent Conitzer, Duke University

Title: Worst-Case Optimal Redistribution of VCG Payments in Multi-Unit Auctions

For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism (a.k.a. the Clarke mechanism, or Generalized Vickrey Auction) is efficient, strategy-proof, individually rational, and does not incur a deficit. However, it is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. We study mechanisms that redistribute some of the VCG payments back to the agents, while maintaining the desirable properties of the VCG mechanism. Our objective is to come as close to budget balance as possible in the worst case (so that we do not require a prior). For auctions with multiple indistinguishable units in which marginal values are nonincreasing, we derive a mechanism that is optimal in this sense. We also derive an optimal mechanism for the case where we drop the non-deficit requirement. We show that if marginal values are not required to be nonincreasing, then the original VCG mechanism is worst-case optimal. Finally, we show how these results can also be applied to reverse auctions.
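A simple member of the class of redistribution mechanisms the paper optimizes over is the Bailey/Cavallo-style rebate for a unit-demand m-unit Vickrey auction: each agent's rebate depends only on the others' bids, so strategy-proofness is preserved, and the total rebate never exceeds revenue. This sketch is illustrative and is not the worst-case-optimal mechanism derived in the paper:

```python
# Illustrative Bailey/Cavallo-style redistribution for a unit-demand,
# m-unit Vickrey auction: a simple member of the mechanism class the
# paper optimizes over, NOT the worst-case-optimal mechanism itself.

def vcg_with_redistribution(bids, m):
    """bids: list of unit-demand bids; m: number of identical units."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: bids[i], reverse=True)
    winners = set(order[:m])
    # VCG: each winner pays the (m+1)-st highest bid overall.
    clearing = sorted(bids, reverse=True)[m] if n > m else 0.0
    payments = {i: (clearing if i in winners else 0.0) for i in range(n)}
    # Each agent gets back m/n times the (m+1)-st highest bid among the
    # OTHERS, so her rebate is independent of her own bid (preserving
    # strategy-proofness) and the rebate total stays within revenue.
    rebates = {}
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        rebates[i] = (m / n) * (others[m] if len(others) > m else 0.0)
    return winners, payments, rebates

winners, payments, rebates = vcg_with_redistribution([9.0, 7.0, 5.0, 3.0], m=1)
# Revenue is 7.0 (second-highest bid); total rebate of 6.0 stays within it.
```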


Jason Hartline, Microsoft Research

Title: Prior-free Optimal Auction Design

Motivated by the Wilson doctrine, we propose a method for the design and analysis of incentive compatible mechanisms in a prior-free environment. Without a prior there is no auction that is "best" in an absolute sense; thus, we consider a relative analysis. In such a design-and-analysis framework the performance of an auction is measured relative to a given benchmark, and the "optimal" auction is the one that performs best relative to the benchmark, in the worst case over all possible agent types. In this talk I will present the framework for relative analysis and describe a natural benchmark. I will then show how to design a near-optimal auction for this benchmark by first defining and solving an optimal auction "decision" problem and then reducing the optimal auction design problem to the decision problem. Where the optimal auction design problem asks for the auction that maximizes profit, the decision problem is given a target profit and asks for an auction that obtains the target profit whenever the target is attainable. I will then discuss the inherent non-optimality imposed by our prior-free setting and show how to bound the relative performance of any auction away from our benchmark. I will conclude with a discussion of results showing that a natural auction, the "random sampling optimal price" auction, is simultaneously near-optimal in our worst-case relative framework and near-optimal under a natural average-case analysis.


Noam Hazon, Yonatan Aumann, Sarit Kraus, Bar-Ilan University, and Michael Wooldridge, University of Liverpool

Title: Evaluation of Election Outcomes under Uncertainty

We investigate the extent to which it is possible to evaluate the probability of a particular candidate winning an election, given imperfect information about the preferences of the electorate. We assume that for each voter, we have a probability distribution over a set of preference orderings. Thus, for each voter, we have a number of possible preference orderings; we do not know which of these orderings actually represents the voter's preferences, but we know for each one the probability that it does. We give a polynomial algorithm to solve the problem of computing the probability that a given candidate will win when the number of candidates is a constant. However, when the number of candidates is not bounded, we prove that the problem becomes #P-hard for the Plurality, Borda, and Copeland voting protocols. We further show that even evaluating whether a candidate has any chance to be a winner is NP-complete for the Plurality voting protocol in the weighted voters case. We give a polynomial algorithm for this problem when the voters' weights are equal.
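The evaluation problem for Plurality can be stated very concretely by brute force: enumerate every combination of voters' possible orderings and sum the probabilities of the profiles where the candidate wins. This is exponential in the number of voters, so it only serves to define the problem on tiny instances (the paper's algorithm for constantly many candidates is polynomial; the data below are made up):

```python
from itertools import product

# Brute-force definition of the evaluation problem for Plurality: each
# voter has a distribution over preference orderings; enumerate all
# profiles (exponential in the number of voters, so tiny instances only).
# Illustrative sketch; the instance data are invented.

def plurality_win_prob(voters, candidate):
    """voters: list of [(prob, ordering), ...] per voter; an ordering is
    a tuple of candidates, most preferred first. Ties are broken
    lexicographically by candidate name."""
    total = 0.0
    for profile in product(*voters):
        p = 1.0
        scores = {}
        for prob, order in profile:
            p *= prob
            scores[order[0]] = scores.get(order[0], 0) + 1
        winner = min(scores, key=lambda c: (-scores[c], c))
        if winner == candidate:
            total += p
    return total

voters = [
    [(0.6, ("a", "b")), (0.4, ("b", "a"))],
    [(0.5, ("a", "b")), (0.5, ("b", "a"))],
    [(1.0, ("b", "a"))],
]
prob_a = plurality_win_prob(voters, "a")
# "a" wins only when voters 1 and 2 both rank her first: 0.6 * 0.5 = 0.3.
```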


Ehud Kalai, Northwestern University, A.T. Kalai, E. Lehrer and D. Samet

Title: A Commitment Folk Theorem

Real world players often increase their payoffs by voluntarily committing to play a fixed strategy, prior to the start of a strategic game. In fact, the players may further benefit from commitments that are conditional on the commitments of others. This paper proposes a model of conditional commitments that unifies earlier models while avoiding circularities that often arise in such models. A commitment folk theorem shows that the potential of voluntary conditional commitments is essentially unlimited. All feasible and individually-rational payoffs of a two-person strategic game can be attained at the equilibria of one (universal) commitment game that uses simple commitment devices. The commitments are voluntary in the sense that each player maintains the option of playing the game without commitment, as originally defined.


Shankar Kalyanaraman and Christopher Umans, California Institute of Technology

Title: The Complexity of Rationalizing Matchings

We consider the problem of rationalizability of one-one matchings and settle the question of its computational complexity, thereby resolving an open question posed by Echenique.


Michael Kearns, University of Pennsylvania

Title: Behavioral Games on Networks

We have been conducting behavioral experiments in which human subjects attempt to solve challenging graph-theoretic optimization problems through only local interactions and incentives. The primary goal is to shed light on the relationships between network structure and the behavioral and computational difficulty of different problem types.

To date, we have conducted experiments in which subjects are incentivized to solve problems of graph coloring, consensus, independent set, and an exchange economy game. I will report on thought-provoking findings at both the collective and individual behavioral levels, and contrast them with theories from theoretical computer science, sociology, and economics.

This talk discusses joint work with Stephen Judd, Sid Suri, and Nick Montfort.


Stephan Lauermann, University of Bonn

Title: Dynamic Matching and Bargaining Games: A General Approach

Dynamic matching and bargaining games provide models of decentralized markets with trading frictions. A central objective of the literature is to investigate how equilibrium outcomes depend on the size of the frictions. In particular, will the outcome become efficient when frictions become small? Existing specifications of such games give different answers. To investigate what causes these differences, we identify four simple conditions on trading outcomes. We show that for every game which satisfies these conditions, the equilibrium outcome must become efficient when frictions are small. We demonstrate that our conditions hold under several specifications in the literature, suggesting a common cause behind their convergence results. These specifications include, for example, the recent contribution by Satterthwaite and Shneyerov (Econometrica, forthcoming). For those specifications in the literature for which outcomes do not become efficient, we show exactly which of our conditions do not hold. These specifications include, for example, Serrano (2002, JME) and DeFraja and Sakovics (2001, JPE). JEL Classifications: D44, D82, D83


John Ledyard, Caltech

Title: Combinatoric Auctions

Combinatoric auctions sell K objects to N people who have preferences defined on subsets of the items. The direct version of the optimal auction satisfies incentive compatibility (it is a dominant strategy to report true values) and voluntary participation (bidders are not worse off through participation), and maximizes the expected revenue of the auctioneer among such auctions. I characterize the optimal auction for the special case of single-minded bidders. The optimal auction is not a Vickrey-Clarke-Groves mechanism. Thus auctions that are efficient or in the core are not optimal. A detailed example suggests the improvement over the VCG mechanism can be large. The performance of three well-known combinatoric auction designs (SMR, RAD, sealed bid) in economic experiments is compared to the predicted performance of the optimal auction. The sealed bid auction achieves higher revenue than the optimal auction. RAD achieves higher revenues than SMR but less than the optimal auction.


Preston McAfee, Yahoo! Research

Title: Search and the Secretary Problem

The classic results on the secretary problem suggest using 37% of the available candidates to learn about the distribution, and entail a 37% chance of failing to accept any candidate. Both of these numbers seem excessive, and amendments in different environments are presented, connecting optimal search to optimal experimentation.
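The classic 1/e rule that the talk takes as its point of departure is easy to check by simulation: observe the first ~37% of candidates without hiring, then accept the first candidate better than all of them. A minimal Monte Carlo sketch (parameters invented for illustration):

```python
import random

# Monte Carlo check of the classic 1/e ("37%") rule: observe the first
# ~37% of candidates, then accept the first one better than all of them.
# Illustrative sketch; parameters are invented.

def secretary_trial(n, rng, cutoff_frac=0.3679):
    ranks = list(range(n))       # rank 0 is the best candidate
    rng.shuffle(ranks)           # candidates arrive in random order
    cutoff = int(n * cutoff_frac)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0        # accepted this candidate: was she best?
    return False                 # never accepted anyone: failure

rng = random.Random(42)
trials = 20000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
success_rate = wins / trials
# Theory: the success probability tends to 1/e ~ 0.368 as n grows,
# and with probability ~1/e no candidate is ever accepted.
```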


Vahab S. Mirrokni, Microsoft Research, Heiner Ackermann, RWTH Aachen, Paul W. Goldberg, University of Liverpool, Heiko Röglin and Berthold Vöcking, RWTH Aachen

Title: Uncoordinated Two-sided Markets

Various economic interactions can be modeled as two-sided markets. A central solution concept for these markets is the stable matching, introduced by Gale and Shapley. It is well known that stable matchings can be achieved using a centralized polynomial-time algorithm. Many markets, however, do not have any centralized matching mechanism to match agents. In those markets, matchings are formed by actions of self-interested agents. Knuth introduced uncoordinated two-sided markets and showed that the uncoordinated better response dynamics may cycle. However, Roth and Vande Vate showed that the random better response dynamics converges to a stable matching with probability one, but did not address the problem of convergence time.
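The centralized baseline the abstract contrasts with is the Gale-Shapley deferred-acceptance algorithm, which computes a stable matching in polynomial time. A minimal sketch (the preference data are made up for illustration):

```python
# Centralized baseline contrasted in the abstract: Gale-Shapley
# deferred acceptance, which finds a stable matching in polynomial
# time. Sketch only; the preference data below are invented.

def gale_shapley(men_prefs, women_prefs):
    """Each prefs dict maps a person to a list, most preferred first.
    Returns the man-optimal stable matching as a man -> woman dict."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                      # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m            # w was unmatched: accept
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; old partner freed
            engaged[w] = m
        else:
            free.append(m)            # w rejects m; he proposes again
    return {m: w for w, m in engaged.items()}

match = gale_shapley(
    {"m1": ["w1", "w2"], "m2": ["w1", "w2"]},
    {"w1": ["m2", "m1"], "w2": ["m2", "m1"]},
)
# Stable outcome: m2 gets w1 (she prefers him), m1 ends up with w2.
```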

In this paper, we give an exponential lower bound for the convergence time of the random better response dynamics in two-sided markets. We also extend these results to the best response dynamics, i.e., we present a cycle of best responses, and prove that the random best response dynamics converges to a stable matching with probability one, but its convergence time is exponential. Additionally, we identify the special class of monotone two-sided markets with real-life applications, for which we prove that no better response cycle exists and that the random best response dynamics converges in expected polynomial time.

Finally, we extend our results to matroid many-to-one two-sided markets with ties. Our proof for existence of pure Nash equilibria in non-monotone markets and our potential function argument for monotone markets give a unified approach for the existence of equilibria in two-sided markets and player-specific matroid congestion games.


David C. Parkes and Ruggiero Cavallo, Harvard University, and Satinder Singh, University of Michigan

Title: Efficient Online Mechanisms for Persistent, Periodically Inaccessible Self-Interested Agents

We consider the problem of implementing a system-optimal decision policy in the context of self-interested agents with private state in an uncertain world. Unique to our model is that we allow both persistent agents, with an agent having a local MDP model to describe how its local world evolves given actions by a center, and also periodically-inaccessible agents, with an agent unable to report information while inaccessible. We first review the dynamic-VCG mechanism of Bergemann and Välimäki (2006), which handles persistent agents. We offer an independent, simple proof of its correctness. We propose a generalized mechanism to allow also for inaccessibility, and identify conditions for its correctness. In closing, we observe that the mechanism is equivalent to the earlier online-VCG mechanism of Parkes and Singh (2003) in a restricted setting.


Daniel Reeves, Alina Beygelzimer, Yiling Chen, Nicolas Lambert, John Langford, David Pennock, Bethany Soule, Yevgeniy Vorobeychik and Jennifer Wortman

Title: Betting with Budgets

We explore betting mechanisms in which players participate by specifying a belief about an uncertain outcome (typically a probability) and a budget (an amount of money to put at risk). We identify a number of properties that seem desirable in such a mechanism, including budget balance, individual rationality, incentive compatibility, monotonicity (incentivizing agents to specify large budgets), and high stakes (transferring a large fraction of the losers' budgets to the winners). Common betting mechanisms fall short on a number of these properties. For example, double auctions, market makers, and parimutuel markets are not incentive compatible for beliefs. Shared scoring rules are incentive compatible for beliefs but have no notion of budgets. Automated market makers like Hanson's market scoring rule mechanism are not budget balanced. We propose three new betting mechanisms and evaluate them according to our desired properties. Our all-in mechanism generalizes shared scoring rules to handle budgets. Our weighted score mechanism retains every property except high/full stakes. Although our parimutuel score mechanism fails to satisfy incentive compatibility and full stakes, it is close in a quantifiable sense and may have good behavior in practice. We describe and examine dynamic variants of the above mechanisms and discuss the difficulty (impossibility?) in achieving all desired properties.
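The abstract's claim that shared scoring rules are "incentive compatible for beliefs" rests on the standard property of proper scoring rules: reporting your true probability maximizes your expected score. A small sketch with the quadratic (Brier) rule, one common proper rule (the numbers are illustrative):

```python
# Why scoring rules are incentive compatible for beliefs: under a
# proper scoring rule, reporting your true probability maximizes your
# expected score. Quadratic (Brier) rule shown; numbers illustrative.

def quadratic_score(report, outcome):
    """Score for reporting probability `report` of an event, given the
    realized outcome (1 if the event happened, 0 otherwise)."""
    return 1 - (outcome - report) ** 2

def expected_score(report, belief):
    # Expectation over the outcome under the agent's own belief.
    return (belief * quadratic_score(report, 1)
            + (1 - belief) * quadratic_score(report, 0))

belief = 0.7
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda r: expected_score(r, belief))
# best == belief: truthful reporting is optimal, for any belief.
```

Budgets are exactly what this classical setup lacks, which is the gap the all-in, weighted score, and parimutuel score mechanisms address.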


Mark Satterthwaite, Northwestern University

Title: Trading within Markets having Two-Sided Incomplete Information: An Overview

In a market with two-sided incomplete information the valuations that buyers and sellers place on the good being traded are private to them. Financial markets are an example. In this talk I will review models of both static markets, in which trade among buyers and sellers occurs in a centralized market at a single point of time, and dynamic markets, in which buyers and sellers are matched randomly at different points of time. In reporting results I will focus on the robustness and speed of convergence to competitive outcomes.


Michael Schapira, Hagay Levin, and Aviv Zohar, The Hebrew University of Jerusalem

Title: Incentive-Compatible Distributed Routing

We present a novel game-theoretic model that captures many of the intricacies of interdomain routing in today's Internet. In this model, the strategic agents are source nodes located on a network, who aim to send traffic to a unique destination node. The interaction between the agents is dynamic and complex - asynchronous, sequential, and based on partial information.

We consider a simple distributed protocol in which all agents are instructed to behave myopically. We show that in realistic and well-studied settings, this protocol is immune to rational manipulations. That is, not only does myopic behaviour of all players converge to a "stable" routing outcome, but no player has motivation to unilaterally deviate from the protocol. Moreover, we show that even coalitions of players of any size cannot improve their routing outcomes by collaborating.

The only protocol used for interdomain routing, namely the Border Gateway Protocol (BGP), is based on such myopic interaction. Hence, from a practical perspective, our results imply a strategic justification for BGP by showing that it possesses extremely strong game-theoretic properties.

From a theoretical perspective, the strengths of our results lie in the following facts: Firstly, unlike the vast majority of works in mechanism design, our results do not require any monetary transfers (to or by the agents). Secondly, we allow significantly more complex preferences over routes than previous works. Finally, our results are a first step in the exploration of myopic behaviour in Algorithmic Mechanism Design.


Andreas Schulz and Nelson Uhan, Massachusetts Institute of Technology

Title: Encouraging Cooperation in Sharing Supermodular Costs

The least core value of a cooperative game is the minimum penalty we need to charge a coalition for defecting that ensures the existence of a fair and efficient cost allocation. The set of all such cost allocations is called the least core. In this paper, we study the computational complexity and algorithmic aspects of computing the least core value of supermodular cost cooperative games, and uncover some structural properties of the least core of these games. We motivate the study of these games by showing that a particular class of optimization problems has supermodular optimal costs. This class includes a variety of problems in combinatorial optimization, especially in machine scheduling. We show that computing the least core value of supermodular cost cooperative games is NP-hard, and build a framework to approximate the least core value of these games using oracles that approximately determine maximally violated constraints. Combined with recent work on maximizing submodular functions, our framework yields a 3-approximation algorithm for computing the least core value of general supermodular cost games.

We also apply our approximation framework to two particular classes of cooperative games: schedule planning games and matroid profit games. Schedule planning games are cooperative games in which the cost to a coalition is derived from the minimum sum of weighted completion times on a single machine. By specializing some of the results for general supermodular cost cooperative games, we are able to show that the Shapley value is an element of the least core of schedule planning games, and design a fully polynomial time approximation scheme for computing the least core value of these games. Matroid profit games are cooperative games with submodular profits: the profit to a coalition arises from the maximum weight of a matroid basis. We show that an element of the least core and the least core value of matroid profit games can be computed in polynomial time.


Chaitanya Swamy, University of Waterloo and Ron Lavi, Israel Institute of Technology

Title: Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity

We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players. This is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms. We study a well-motivated special case of this problem, where the processing time of a job on each machine may either be "low" or "high", and the low and high values are public and job-dependent. This preserves the multidimensionality of the domain, and generalizes the restricted-machines (i.e., {p_j, ∞}) setting in scheduling. We give a general technique to convert any c-approximation algorithm to a 3c-approximation truthful-in-expectation mechanism. This is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion. When the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism. These are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain.

Our constructions are novel in two respects. First, we do not utilize or rely on explicit price definitions to prove truthfulness; instead we design algorithms that satisfy cycle monotonicity. Cycle monotonicity is a necessary and sufficient condition for truthfulness that is a generalization of value monotonicity for multidimensional domains. However, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in single-dimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting. Second, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthful-in-expectation mechanism. This builds upon an earlier technique, and shows the usefulness of fractional mechanisms in truthful mechanism design.


Document last modified on October 18, 2007.