DIMACS Workshop on Economic Aspects of Information Sharing

February 7 - 8, 2013
Stanford University, Stanford, California

 Mackenzie Room (Huang Room 300)
 Huang Engineering Center
 475 Via Ortega
 Stanford, CA 94305
Organizers:
Nadia Fawaz, Technicolor, Nadia.Fawaz at technicolor.com
Ashish Goel, Stanford University, ashishg at stanford.edu
Stratis Ioannidis, Technicolor, Stratis.Ioannidis at technicolor.com
S. Muthukrishnan, Rutgers University, muthu at cs.rutgers.edu
Presented under the auspices of the Special Focus on Information Sharing and Dynamic Data Analysis and the DIMACS Special Focus on Algorithmic Foundations of the Internet. This workshop is also sponsored by Stanford University, Technicolor, and Google (Student Sponsor).

Abstracts:


Susan Athey, Stanford University

Title: Bottlenecks on the Internet and Platform Competition.

One of the central features of the internet is the content published there. Incentives for content creation are partially determined by the monetization received from online advertising. This talk will discuss how competition in internet search, and in advertising platforms more broadly, affects equilibrium industry structure in various sectors. It will consider the impacts of competition and internet innovation on search advertising market design, on the news industry, and on who benefits from personal data and how it is used.


Moshe Babaioff, Microsoft Research, Shahar Dobzinski, Sigal Oren, and Aviv Zohar

Title: On Bitcoin and Red Balloons

Many large decentralized systems rely on information propagation to ensure their proper function. We examine a common scenario in which only participants that are aware of the information can compete for some reward, and thus informed participants have an incentive not to propagate information to others. One recent example in which such tension arises is the 2009 DARPA Network Challenge (finding red balloons). We focus on another prominent example: Bitcoin, a decentralized electronic currency system.

Bitcoin represents a radical new approach to monetary systems. It has been getting a large amount of public attention over the last year, both in policy discussions and in the popular press. Its cryptographic fundamentals have largely held up even as its usage has become increasingly widespread. We find, however, that it exhibits a fundamental problem of a different nature, based on how its incentives are structured. We propose a modification to the protocol that can eliminate this problem. Bitcoin relies on a peer-to-peer network to track transactions that are performed with the currency. For this purpose, every transaction a node learns about should be transmitted to its neighbors in the network. As the protocol is currently defined and implemented, it does not provide an incentive for nodes to broadcast transactions they are aware of. In fact, it provides an incentive not to do so. Our solution is to augment the protocol with a scheme that rewards information propagation. Since clones are easy to create in the Bitcoin system, an important feature of our scheme is Sybil-proofness. We show that our proposed scheme succeeds in setting the correct incentives, that it is Sybil-proof, and that it requires only a small payment overhead; all of this is achieved under iterated elimination of dominated strategies. We complement this result by showing that there is no reward scheme in which information propagation and no self-cloning are both dominant strategies.

Joint work with Shahar Dobzinski, Sigal Oren, and Aviv Zohar.

You can find the paper at: http://arxiv.org/abs/1111.2626, and http://research.microsoft.com/apps/pubs/?id=156072

A short description appeared at ACM SIGecom Exchanges: http://www.sigecom.org/exchanges/volume_10/3/BABAIOFF.pdf
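The chain-capped reward idea can be sketched in a few lines. This is an illustrative simplification, not the authors' exact protocol: the chain length cap H and per-relayer fee are assumed parameters, and the point is only to show why padding a relay chain with clones can backfire.

```python
# Illustrative sketch (not the authors' exact protocol): a chain-based
# propagation reward in the spirit of the abstract. A transaction reaches
# the authorizing node through a relay chain; each relayer is paid a fixed
# fee, but chains longer than a height cap H earn nothing, which discourages
# a relayer from padding the chain with fake (Sybil) identities.

H = 3          # maximum rewarded chain length (assumed parameter)
FEE = 1.0      # per-relayer reward (assumed parameter)

def payoffs(chain):
    """Return {node: reward} for a relay chain ending at the authorizer."""
    if len(chain) > H:
        return {node: 0.0 for node in chain}   # over-long chains pay nothing
    return {node: FEE for node in chain}

# Honest relay: two real relayers plus the authorizer, all rewarded.
print(payoffs(["relayer_a", "relayer_b", "authorizer"]))

# Sybil attempt: relayer_a inserts a clone; the chain now exceeds H and
# everyone, including relayer_a, earns nothing.
print(payoffs(["relayer_a", "a_clone", "relayer_b", "authorizer"]))
```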


Benjamin Brooks, Princeton University

Title: The Limits of Price Discrimination

We provide a comprehensive analysis of the welfare consequences that result when a monopolist gains access to additional information about consumers' tastes, beyond the prior distribution. The additional information may be used to charge different prices to different segments of the market. We show that the segmentation and pricing induced by additional information can achieve every combination of profit and consumer surplus such that 1) the monopolist's profits are at least as high as under the profit-maximizing policy without additional information, 2) consumer surplus is non-negative, and 3) total welfare does not exceed the surplus produced by socially efficient trade. Finally, we extend the analysis to account for the possibility of quantity and quality discrimination, as well as a non-linear cost function.
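The three conditions above carve out a surplus triangle, which a toy market makes concrete. All numbers below are assumed for illustration (two consumer types, zero production cost); the conditions fix the profit floor, the consumer-surplus floor, and the total-welfare ceiling.

```python
# Toy illustration of the surplus triangle (numbers assumed): a two-type
# market with zero production cost. Condition 1 fixes the profit floor at
# the uniform-price monopoly profit; condition 3 caps profit plus consumer
# surplus at the efficient total surplus.

values = {"low": 1.0, "high": 3.0}    # willingness to pay
mass   = {"low": 0.5, "high": 0.5}    # population shares

def uniform_profit(price):
    """Profit from charging a single price to the whole market."""
    return price * sum(m for t, m in mass.items() if values[t] >= price)

best_price = max(values.values(), key=uniform_profit)
pi_star = uniform_profit(best_price)                       # profit floor
total_welfare = sum(values[t] * mass[t] for t in values)   # welfare ceiling

print(best_price, pi_star, total_welfare)   # 3.0 1.5 2.0
# Any (profit, consumer surplus) pair with profit >= 1.5, surplus >= 0, and
# profit + surplus <= 2.0 is achievable by some market segmentation.
```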


Augustin Chaintreau, Columbia University

Title: Transactional Privacy: Unknotting the Privacy Tussle with Economics

Personal data has become one of the hottest commodities - the "new currency" of the Web. Indeed, it is an invaluable resource of public interest: information collected in social media potentially offers unprecedented knowledge about human behaviors and our collective actions, paving the way to our society's progress in development, health, energy, and other public decisions. Unfortunately, merely accessing most of this sensitive data is today's most vexing challenge for many scientists, while users express increasing concerns about the way this information is collected and used. Massive amounts of data are stored and processed behind closed doors, and multiplied among third parties. Many researchers today are engaged against commercial services in a privacy tussle, offering tools to prevent today's unregulated, and increasingly harmful, personal data collection.

In this talk, we argue that a more pragmatic, albeit ambitious, approach is needed. We propose Transactional Privacy, a new design that protects users' control of their data along with the economic interest they are entitled to claim. By economically rethinking the monetization of personal data, we propose a natural complement to privacy-preserving schemes. We instantiate this design in an information market architecture centered on personal data transactions. Our preliminary results indicate that Transactional Privacy is feasible within today's communication architecture, can protect users and companies against various forms of cheating, and offers one of the first plausible answers to the difficult deployment of privacy-preserving schemes.


Anupam Datta, Carnegie Mellon University

Title: Audit Games

Audit mechanisms are essential to guard against privacy threats from authorized insiders in organizations (such as hospitals, banks, and Web services providers) that hold large volumes of personal information. We study economic considerations in the design of effective audit mechanisms that combine inspections to detect privacy violations by insiders and incentives to encourage privacy-respecting behavior. We model the audit process as a game between a defender (e.g., a hospital) and an adversary (e.g., a hospital employee). The utility function of the defender captures the defender's incentives (e.g., loss from privacy violations detected by a government audit, cost of inspections). For Byzantine adversaries, we design a regret-minimizing audit mechanism that learns from experience to prescribe a resource allocation for inspections in each round of the repeated game. We prove that the mechanism quickly converges to the best fixed resource allocation for the defender [BCDS'11]. For rational adversaries, the defender's action space in the audit game includes, in addition to a resource allocation for inspections, a punishment rate to deter the adversary from committing violations. We present an additive FPTAS that solves a non-convex quadratic optimization problem to compute an approximate Stackelberg equilibrium of this game [BCDPS'13]. We use our model to predict observed practices in industry (e.g., differences in punishment rates of doctors and nurses for the same violation) and the effectiveness of public policy interventions (e.g., data breach notification laws and government audits) in encouraging organizations to adopt accountable data governance practices [BCDS'12].


   [BCDS'11] J. Blocki, N. Christin, A. Datta, A. Sinha, Regret Minimizing Audits: A Learning-Theoretic Basis for Privacy Protection, in Proceedings of the 24th IEEE Computer Security Foundations Symposium, June 2011.

   [BCDS'12] J. Blocki, N. Christin, A. Datta, A. Sinha, Audit Mechanisms for Provable Risk Management and Accountable Data Governance, in Proceedings of the 3rd Conference on Decision and Game Theory for Security, November 2012.

   [BCDPS'13] J. Blocki, N. Christin, A. Datta, A. Procaccia, A. Sinha, Audit Games, Manuscript, January 2013.
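The regret-minimization component for Byzantine adversaries can be sketched with a standard exponential-weights update. This is an assumed simplification, not the mechanism of [BCDS'11]: each candidate inspection allocation is treated as an "expert," and the defender shifts weight toward candidates with low cumulative loss.

```python
import math

# Minimal sketch of the regret-minimization idea (an assumed simplification,
# not the paper's exact mechanism): the defender treats each candidate
# inspection allocation as an "expert" and runs exponential weights,
# shifting toward candidates whose cumulative loss (e.g., cost of violations
# they would have missed) is lowest over the rounds of the repeated game.

def exp_weights(loss_sequence, eta=0.1):
    """loss_sequence: list of rounds, each a list of per-candidate losses.
    Yields the probability weights over candidates used at each round."""
    k = len(loss_sequence[0])
    cum_loss = [0.0] * k
    for round_losses in loss_sequence:
        scores = [math.exp(-eta * c) for c in cum_loss]
        total = sum(scores)
        yield [s / total for s in scores]
        for i, loss in enumerate(round_losses):
            cum_loss[i] += loss

# Candidate 0 consistently incurs more loss, so the defender's weight
# converges onto candidate 1, the best fixed allocation in hindsight.
rounds = [[1.0, 0.2]] * 50
final_weights = list(exp_weights(rounds))[-1]
print(final_weights)
```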



Edward Felten, Princeton University

Title: Toward a Healthier Data Privacy Ecosystem

Data privacy seems to be a frustrating issue for almost everyone. Companies face complex, overlapping legal and social constraints as they struggle to balance privacy against business incentives. Users find laws and practices confusing, and they worry about the scope and implications of information collection. Governments try to keep up as markets and technologies evolve at Internet speed. These problems have no simple cure, but we can at least diagnose some of the causes and consider what a healthier end state might be. We will focus on two problem areas, the economic and the conceptual. The economic problems stem from market failures, which lead to a non-optimal tradeoff between privacy and online functionality. The conceptual problems are deeper and exist because our public discourse about privacy is built on unsound ideas about information and inference. Addressing these problems will require us to reconcile our discourse, laws, and practices with new ideas emerging from the technical community.


Laszlo Gyarmati, Telefonica

Title: Detecting Price and Search Discrimination on the Internet

Price discrimination, setting the price of a given product for each customer individually according to his valuation for it, can benefit from the extensive information collected online about customers and thus contribute to the profitability of e-commerce services. Another way to discriminate among customers with different willingness to pay is to steer them towards different sets of products when they search within a product category. We empirically demonstrate the existence of signs of both price and search discrimination on the Internet, uncover the information vectors used to facilitate them, and outline the structure of our measurement framework. Furthermore, we introduce a distributed watchdog system that allows users to detect discriminatory practices online.
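The core of such a watchdog test can be sketched simply. This is an illustrative toy, not the authors' system: the personas and prices below are assumed, and the test just flags a relative price spread for the same product across user profiles.

```python
# Illustrative sketch (assumed data, not the authors' watchdog system): a
# client compares the prices returned for the same product to different
# user personas; a significant spread is a sign of price discrimination.

def discrimination_signal(prices_by_persona, tolerance=0.01):
    """prices_by_persona: {persona_name: observed_price} for one product.
    Returns True if the relative price spread exceeds the tolerance."""
    lo = min(prices_by_persona.values())
    hi = max(prices_by_persona.values())
    return (hi - lo) / lo > tolerance

observations = {
    "fresh_browser": 100.0,
    "affluent_profile": 115.0,   # persona with a high-end browsing history
    "budget_profile": 100.0,
}
print(discrimination_signal(observations))   # True: a 15% price spread
```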


Thibaut Horel, INRIA

Title: Budget Feasible Mechanisms for Experimental Design

In the classical experimental design setting, an experimenter with a budget has access to a population of multiple potential experiment subjects, each associated with a vector of public features as well as a cost. Conducting an experiment with a subject reveals an unknown output value to the experimenter. Typically, it is assumed that such outputs can be expressed as a function of the public features; the experimenter's goal is to select which experiments to conduct, subject to her budget constraint, to infer this function as accurately as possible.

We initiate the study of mechanisms for experimental design. In this setting, subjects are strategic and may lie about their costs. In particular, assuming that the relationship between outputs and features is linear, we formulate the Experimental Design Problem (EDP) as finding a set S of subjects that maximizes the information gain when the function is learned through linear regression methods (the so-called D-optimality criterion). We present the first known deterministic, polynomial-time, truthful, budget-feasible mechanism for EDP. Our mechanism yields a constant-factor (12) approximation, and we show that no truthful, budget-feasible mechanism can achieve an approximation within a factor of 2. Our approach applies more generally to a wider class of learning problems, for which we obtain polynomial-time, universally truthful (i.e., randomized), budget-feasible mechanisms, also within a constant-factor approximation.

Joint work with Muthu Muthukrishnan and Stratis Ioannidis
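The D-optimality objective can be illustrated with a budgeted greedy heuristic. This is an assumed sketch, not the paper's truthful mechanism: it only shows the determinant-based information gain being maximized, here for 2-dimensional features so the determinant can be computed by hand.

```python
# Sketch of the D-optimality objective behind EDP (an assumed greedy
# heuristic, not the paper's truthful mechanism): greedily pick subjects
# whose 2-dimensional feature vectors most increase det(I + X_S^T X_S),
# a proxy for the information gained by linear regression, while staying
# within the experimenter's budget.

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def add_outer(A, x):
    """Return A + x x^T for a 2x2 matrix A and a 2-vector x."""
    return [[A[i][j] + x[i] * x[j] for j in range(2)] for i in range(2)]

def greedy_d_optimal(features, costs, budget):
    A = [[1.0, 0.0], [0.0, 1.0]]       # regularized information matrix
    chosen, spent = [], 0.0
    while True:
        best, best_gain = None, 1.0
        for i, x in enumerate(features):
            if i in chosen or spent + costs[i] > budget:
                continue
            gain = det2(add_outer(A, x)) / det2(A)   # determinant ratio
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return chosen
        A = add_outer(A, features[best])
        chosen.append(best)
        spent += costs[best]

# The subject with feature vector (1, 1) is the most informative first pick.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(greedy_d_optimal(X, costs=[1.0, 1.0, 1.0], budget=2.0))  # [2, 0]
```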


Katrina Ligett, Caltech

Title: Running a Survey when Privacy Comes at a Cost

In this talk, we consider the problem of estimating a potentially sensitive (individually stigmatizing) statistic on a population. We suppose that individuals are concerned about their privacy, and experience some cost as a function of their privacy loss.

Nevertheless, they would be willing to participate in the survey if they were compensated for their privacy cost. These cost functions are not publicly known, however, nor do we make Bayesian assumptions about their form or distribution. Individuals are rational and will misreport their costs for privacy if doing so is in their best interest.

We'll discuss subtleties in modeling privacy costs in this situation, along with a simple model and mechanism for eliciting participation.


Andrea Montanari, Stanford University

Title: Information Sharing in Recommendation Systems

Modern online services aggregate information from millions of users about their interests and preferences. On the basis of this information, they provide users with 'intelligent' ways to navigate through a universe of items. Obvious examples include e-commerce companies (e.g., Amazon, Netflix, etc.) that recommend products to buy or rent on the basis of the user's past choices. Recommendation systems provide algorithms and protocols whereby information is aggregated and processed jointly across multiple users to provide feedback to each one of them. As such, recommendation systems are intrinsically based on information sharing, and imply a privacy loss on the part of users. I will discuss to what extent this privacy loss might be unintended or undesired, and a possible framework for controlling it.

Based on joint work with J. Bento, S. Bhagat, N. Fawaz, S. Ioannidis, N. Taft, U. Weinsberg, A. Zhang.


Hamid Nazerzadeh, University of Southern California

Title: Dynamic Mechanism Design with Costly Information Acquisition

I present an optimal mechanism for an auction setting where bidders can obtain more information about their valuations at a cost. This is motivated by targeting technologies in online advertising, where advertisers can purchase information about users from third-party companies such as BlueKai and eXelate. We show that the optimal mechanism is sequential and allows only bidders with high valuations to purchase information.

This is based on a joint work with Negin Golrezaei.


Balaji Prabhakar, Stanford University

Title: Designing Large-scale Nudge Algorithms

In many of the challenges faced by the modern world, from overcrowded road networks to overstretched healthcare systems, large benefits for society come about from small changes by very many individuals. We survey the problems and the costs they impose on society. We describe a framework for designing "nudge algorithms" and a series of pilot projects that aim to nudge behavior in networks such as transportation, wellness, and recycling. Pilots have been conducted with Infosys Technologies, Bangalore (commuting) and Accenture-USA (wellness), and two are ongoing: in Singapore (public transit congestion) and at Stanford (congestion and parking). Some salient themes are the use of low-cost sensing (RFID, smartphones) and networking technology to sense individual behavior, and the use of incentives and social norming to nudge that behavior. We present some preliminary results from the pilots.


Bindu Reddy, CEO, MyLikes

Title: Social Publishing - Predictably Irrational

MyLikes is a social advertising platform that enables social publishers to monetize their audience. This talk focuses on how ad technology interplays with social behaviour.


Aaron Roth, University of Pennsylvania

Title: Private Equilibrium Release, Large Games, and No-Regret Learning

We consider "large games": strategic interactions, like commuter traffic or stock investing, among many players, each of whom has only a small effect on the welfare of others. One might like to design mechanisms in such games to suggest equilibrium play to the participants, but there are two potential concerns. The first is privacy: the computation of an equilibrium may reveal sensitive information about the utility function of one of the agents (i.e., his destination for that day's drive, or his stock portfolio). The second concern relates to incentives: it may be beneficial for one of the agents to misreport his utility function to cause the mechanism to select a preferred, purported "equilibrium" of the reported game. We show how differential privacy can be brought to bear to solve both of these problems: we give a privacy-preserving mechanism for computing the equilibria of a large game, which in turn implies an approximately truthful equilibrium selection mechanism.

This is joint work with Michael Kearns, Mallesh Pai, and Jon Ullman


Yaron Singer, Google Research

Title: Online Learning and Posted Prices in Procurement

The growing availability of platforms that incentivize users to generate and share content raises the question of how to procure such services efficiently using the minimum information necessary. In many such scenarios, rather than having users explicitly reveal their cost for performing a service, we are instead interested in posted-price mechanisms that simply present users with take-it-or-leave-it offers and learn from their responses.

In this talk we will present results on the power of posted price mechanisms in the budget feasibility model which is a general framework for procurement. Our main results show that in several interesting cases posted prices can perform nearly as well as mechanisms that require users to reveal their full information. We will also discuss generalizations and applications to social network advertising, crowdsourcing, and privacy auctions.
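The informational advantage of posted prices can be sketched in a few lines. This is an assumed toy setup, not Singer's mechanism: a fixed price is offered to users in sequence, and the mechanism learns only accept/reject responses, never the private costs themselves.

```python
# Minimal sketch (assumed setup, not the talk's exact mechanism) of a
# posted-price procurement rule: each arriving user receives the same
# take-it-or-leave-it offer; a user accepts iff the price covers her
# private cost, and the mechanism observes only the accept/reject
# response, never the cost itself.

def posted_price_procurement(private_costs, price, budget):
    hired = []
    for user, cost in enumerate(private_costs):
        if budget < price:
            break                    # cannot afford another acceptance
        if cost <= price:            # user accepts the offer
            hired.append(user)
            budget -= price
    return hired

# Budget 10 at price 2 affords five acceptances; users with cost <= 2 accept.
print(posted_price_procurement([1.5, 3.0, 0.5, 2.0, 5.0, 1.0],
                               price=2.0, budget=10.0))  # [0, 2, 3, 5]
```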


Dan Suciu, University of Washington

Title: Query-Based Data Pricing

Data has value, and is increasingly being bought and sold on the Web. Current pricing mechanisms, however, are very naive. By far the most common case is that of a fixed price for the entire data set. Big customers can typically afford to purchase the data they need (e.g., the price of one Gartner report is in the range of thousands of dollars), but small customers often need only a few data items from the entire data set and cannot afford to pay the full price.

In this talk I will discuss a framework for pricing data that allows the seller to set explicit prices for a set of views of her choice, and allows the buyer to buy ANY query; the price of the query is derived automatically from the explicit prices set by the seller. We call this framework ``query-based pricing''. A pricing function must satisfy an important property: it must be "arbitrage-free", in the sense that it must prevent the buyer from obtaining the answer to some query by purchasing and combining cheaper queries. Arbitrage-freeness is related to "query-view determinacy", a concept that has been well studied in database theory, and to the "privacy budget" in differential privacy. While the theoretical complexity of computing an arbitrage-free price in a general setting is high, there exist simple practical approaches using ILP solvers.

Joint work with M. Balazinska, B. Howe, P. Koutris, Daniel Li, Chao Li, G. Miklau, P. Upadhyaya
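The arbitrage-freeness condition can be illustrated with a deliberately crude model of determinacy. This is an assumed toy, not the paper's framework: a query counts as "determined" by a bundle of views if the views' attributes cover the query's attributes, and the query's price is the cheapest such bundle, so no buyer can beat the quoted price by combining purchases.

```python
from itertools import combinations

# Toy model (an assumed simplification of query-view determinacy): a query
# is "determined" by a set of views if the views' attributes together cover
# the query's attributes. The arbitrage-free price of a query is then the
# cheapest bundle of seller-priced views that determines it, so no buyer
# can do better by combining cheaper purchases.

views = {                                # seller's explicit price list
    "V1": ({"name", "city"}, 5.0),
    "V2": ({"city", "salary"}, 8.0),
    "V3": ({"name", "salary"}, 10.0),
}

def price(query_attrs):
    """Cheapest bundle of views whose attributes cover query_attrs."""
    best = float("inf")
    names = list(views)
    for r in range(1, len(names) + 1):
        for bundle in combinations(names, r):
            attrs = set().union(*(views[v][0] for v in bundle))
            if query_attrs <= attrs:
                best = min(best, sum(views[v][1] for v in bundle))
    return best

# V3 alone determines {name, salary} for 10; the bundle V1+V2 costs 13,
# so there is no arbitrage against the quoted price.
print(price({"name", "salary"}))   # 10.0
```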


Omar Tawakol, CEO, BlueKai

Title: The Economics of Sharing and Monetizing Anonymous First & Third Party Data

We will discuss different pricing models for monetizing third party anonymous data. We will also examine the peculiar economics of non-rival goods like data and why there are economic advantages for using exclusive first party data assets.


Jean Walrand, UC Berkeley

Title: Impact of Internet Industry Structure on Revenue and Welfare

What is the impact of the internet industry structure on revenue, return on investment and user welfare? Should a large content provider own the content distribution network and possibly the transit networks? Integrated ownership may improve investment efficiency but what is its impact on user welfare? The presentation investigates such questions by analyzing models of joint production of Internet services.

Joint work with Jordi Domingo-Pascual (UPC, Barcelona).


Georgios Zervas, Yale University

Title: Restaurants Fake it Until They Make It: Promotional Reviews on Yelp

Consumer review websites have proliferated over the past decade. Because of the low cost of crowdsourcing, these review websites provide consumers with more information than ever before. At the same time, businesses have an incentive to game the process, either by leaving favorable reviews for themselves or unfavorable reviews for competitors. In this paper, we construct a data set consisting of all reviews left on Yelp, as well as all reviews that Yelp has removed from the main review website after identifying them as suspicious (fake). We investigate the patterns of filtered reviews and the economic incentives to game the system. We find that promotional reviews are predominantly either one or five stars, and that businesses with worse or less established reputations are the most likely to submit fake reviews. This stands in stark contrast with advertising, an alternative way for businesses to improve their reputation: businesses with more reviews and higher ratings are the most likely to advertise.

Joint work with Michael Luca (HBS).


Document last modified on January 24, 2013.