DIMACS/DyDAn Workshop on Internet Privacy: Facilitating Seamless Data Movement with Appropriate Controls

September 18 - 19, 2008
DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ

Organizers:
Dan Boneh, Stanford University, dabo at cs.stanford.edu
Ed Felten, Princeton University, felten at cs.princeton.edu
Helen Nissenbaum, New York University, helen.nissenbaum at nyu.edu
Presented under the auspices of the DIMACS Special Focus on Algorithmic Foundations of the Internet, the DIMACS Special Focus on Communication Security and Information Privacy, and the Center for Dynamic Data Analysis (DyDAn).

Abstracts:

Faiz Currim and Eunjin Jung, University of Iowa

Title: Q-RICE: Query Rewriting and Policy Integration for Access Control and Enforcement

Managing the security and privacy of patient records is increasingly important in the health care field. The challenges include complying with access control policy requirements from multiple sources, such as participating institutions' guidelines, the terms of collaboration, and legal requirements like HIPAA. Previous work has suggested fine-grained access control (FGAC) mechanisms to handle this problem. Early FGAC mechanisms involved tuple-labeling (currently used in MIFAR and similar systems), which is not scalable. Delays hinder data movement between researchers and often lead them to bypass access control mechanisms. Newer FGAC mechanisms combine role-based access control and query rewriting; however, these have limitations of their own.

Our FGAC system integrates policy management and query modification. We extend policy management to handle multiple data sources, the common SQL data types, and aggregate queries. For example, imagine two policies that, to preserve anonymity, allow a user to see only aggregated results over a range, where the two ranges differ but overlap. A query over the intersection of these two ranges may or may not be accepted, depending on the constituent data types. For effective query evaluation, we use a number of heuristics when rendering the outcome of policy management.
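The query-rewriting idea behind such FGAC systems can be conveyed with a minimal sketch. This is not the Q-RICE implementation; the table, roles, and policy predicates below are hypothetical, chosen only to show how a role-specific policy predicate is conjoined onto a user's SQL before execution.

```python
# Minimal sketch of query rewriting for fine-grained access control (FGAC).
# NOT the Q-RICE system: the "patients" table, roles, and predicates are
# hypothetical, used only to illustrate folding a policy into a query.

POLICIES = {
    # role -> predicate appended to any query over the hypothetical patients table
    "researcher": "consent_flag = 1 AND admission_year BETWEEN 2000 AND 2005",
    "physician": "attending_id = :current_user_id",
}

def rewrite_query(base_query: str, role: str) -> str:
    """Conjoin the role's policy predicate onto the query's WHERE clause."""
    predicate = POLICIES.get(role)
    if predicate is None:
        raise PermissionError(f"no access policy defined for role {role!r}")
    if " where " in base_query.lower():
        return f"{base_query} AND ({predicate})"
    return f"{base_query} WHERE {predicate}"

if __name__ == "__main__":
    query = "SELECT age_group, diagnosis FROM patients"
    print(rewrite_query(query, "researcher"))
    # SELECT age_group, diagnosis FROM patients
    #   WHERE consent_flag = 1 AND admission_year BETWEEN 2000 AND 2005
```

A full system like the one described in the abstract must also reconcile overlapping policies, aggregate queries, and multiple data sources; this sketch covers only the single-policy, non-aggregate case.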


James Grimmelmann, New York Law School

Title: Peer-Produced Privacy Violations

Social network sites offer a compelling social experience; users create satisfying identities, relationships, and communities. This social experience, however, is built on the pervasive reuse of personal information. Users must supply extensive personal information to participate. That information is then redeployed to convince others to do the same. The resulting dynamics are viral. Users systematically underestimate the privacy risks involved and experience incessant privacy violations. Those violations are peer-produced, that is, they arise out of the distributed interactions of similarly situated individuals, rather than from the overbearing actions of a powerful central entity. This fact encourages a skeptical attitude towards mandatory data portability and towards purely technical measures for ensuring privacy on social network sites.


Aggelos Kiayias, University of Connecticut

Title: Pirate Evolution in Broadcast Encryption Schemes

This talk will give an overview of pirate evolution, an attack concept against broadcast encryption schemes. A pirate evolution strategy is a method a pirate can use to schedule the compromised key material it possesses so as to maximize its overall reception time in a content distribution system. In the talk we will describe pirate evolution strategies for current broadcast encryption schemes. Moreover, we will discuss countermeasures to pirate evolution that exhibit tradeoffs between the level of susceptibility to pirate evolution and the efficiency parameters of the underlying encryption scheme.
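The scheduling flavor of the attack can be conveyed with a deliberately abstract toy model; it is not any of the broadcast encryption schemes analyzed in the talk. In the toy, the pirate holds a pool of compromised keys, each decoder it releases survives one generation before tracing revokes the key it used plus, under an invented revocation structure, possibly other keys, so the order in which keys are spent determines the pirate's overall reception time.

```python
from itertools import permutations

# Toy abstraction of a pirate evolution strategy (no real scheme modeled):
# one decoder per key, one generation per decoder, and tracing key k also
# collaterally revokes the keys in REVOKES[k]. The revocation structure
# below is purely hypothetical.
REVOKES = {
    "a": {"a", "b"},   # spending key 'a' first burns 'b' as well
    "b": {"b"},
    "c": {"c", "d"},
    "d": {"d"},
}

def reception_time(order):
    """Number of decoder generations the pirate gets under a given schedule."""
    revoked, generations = set(), 0
    for key in order:
        if key in revoked:
            continue                 # key already burned, no decoder possible
        generations += 1             # decoder built from `key` airs one generation
        revoked |= REVOKES[key]      # tracing revokes this key and collateral keys
    return generations

if __name__ == "__main__":
    best = max(permutations(REVOKES), key=reception_time)
    print(best, reception_time(best))            # ('b', 'a', 'd', 'c') 4
    print(reception_time(["a", "b", "c", "d"]))  # naive order yields only 2
```

Even in this contrived setting, a naive schedule yields two generations while the best ordering yields four, which is the sense in which a pirate evolution strategy maximizes the pirate's overall reception time from fixed key material.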


Adam Smith, Pennsylvania State University

Title: Pinning Down "Privacy" in Statistical Databases

This talk discusses a recent, rigorous approach to privacy in statistical databases. This is a specific, important aspect of "privacy" problems in the Internet and in modern information systems generally.

Consider an agency holding a large database of sensitive personal information (perhaps medical records, census survey answers, or web search records). The agency would like to discover and publicly release global characteristics of the data (say, to inform policy and business decisions) while protecting the privacy of individuals' records. This problem is known variously as "statistical disclosure control", "privacy-preserving data mining" or simply "database privacy".

We describe "differential privacy", a notion which emerged from a recent line of work in theoretical computer science that seeks to formulate and satisfy rigorous definitions of privacy for such statistical databases. We also sketch some basic techniques for achieving differential privacy. The techniques are reminiscent of (but different from) those used in noise-tolerant machine learning and robust statistics.
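As one concrete illustration of a basic technique for achieving differential privacy (not necessarily the specific construction presented in the talk), the sketch below applies the Laplace mechanism to a counting query over a hypothetical dataset. A count changes by at most 1 when a single record is added or removed (sensitivity 1), so Laplace noise of scale 1/epsilon yields epsilon-differential privacy.

```python
import numpy as np

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a counting-query answer with epsilon-differential privacy.

    A count changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

if __name__ == "__main__":
    # Hypothetical sensitive records: ages of survey respondents.
    ages = [23, 35, 41, 29, 62, 50, 38, 45]
    print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Smaller values of epsilon give stronger privacy at the cost of noisier answers; the calibration of noise to a query's sensitivity is the core of this family of techniques.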

Based on several papers with Cynthia Dwork, Shiva Kasiviswanathan, Homin Lee, Frank McSherry, Kobbi Nissim, and Sofya Raskhodnikova, published or to appear at TCC 2006, STOC 2007, KDD 2008, and FOCS 2008.


Peter Swire, Ohio State University

Title: Security, Obscurity, and Information Sharing

For this conference on "seamless data movement with appropriate controls," a key question is what data should move seamlessly. A slogan among computer security experts is that there is no security through obscurity. In contrast, military and other security experts in the physical world are inclined to say that "loose lips sink ships," so that sharing more data will undermine security. "Appropriate controls" are needed for privacy as well as security reasons. This talk draws on a continuing research project about appropriate intellectual structures for information sharing in a networked world.

