DIMACS Workshop on Current Trends in Cryptology

April 29 - May 1, 2013
AT&T Building
33 Thomas Street
New York City, New York

Organizers:
David Cash, Rutgers University, david.cash at cs.rutgers.edu
Martijn Stam, University of Bristol, Martijn.Stam at bristol.ac.uk
Yevgeniy Vahlis, AT&T Security Research Center, evahlis at att.com
Presented under the auspices of the DIMACS Special Focus on Cybersecurity.

Abstracts:


Nishanth Chandran, AT&T

Title: Secure Computation on Sparse Networks in the Presence of Malicious Nodes and Edges

The problem of almost everywhere (a.e.) secure computation was formulated by Garay and Ostrovsky [Eurocrypt 08] to model the more realistic setting where not all parties on the network running the computation share a communication channel. In such a situation, parties (or nodes) are connected by a sparse communication network and must communicate with other nodes in the network via their neighbors. We consider this question in the information-theoretic setting, where MPC amongst all honest nodes cannot be guaranteed, leading to some of them being ``given up'' and left out of the computation. The number of such nodes is a function of the underlying communication graph and the adversarial set of nodes.

We significantly improve upon the parameters of existing works by constructing a graph of degree poly-log in n, along with an MPC protocol between n parties, tolerating a constant fraction (t = \alpha n) of corrupted nodes. We also improve upon the number of nodes that are given up and only leave out O(t/log n) of them. We then consider a.e. secure computation when an adversary can additionally corrupt even the communication channels (or edges) in the network. Here, we construct a graph of degree O(n^\epsilon) (for constant 0 < \epsilon < 1) and an MPC protocol between n parties, tolerating a constant fraction of corrupted nodes and edges.

This talk is based on joint works with Juan Garay and Rafail Ostrovsky.


Kai-Min Chung, Cornell

Title: Can Theories be Tested? A Cryptographic Treatment of Forecast Testing

How do we test if a weather forecaster actually knows something about whether it will rain or not? Can we distinguish a ``charlatan'' from a true expert? Can we evaluate whether a scientific theory actually predicts some physical phenomenon?

Intuitively, a ``good'' forecast test should be *complete*---namely, a forecaster knowing the distribution of Nature should be able to pass the test with high probability, and *sound*---an uninformed forecaster should only be able to pass the test with small probability.

In this talk, we present a comprehensive complexity-theoretic study of the feasibility of sound and complete forecast testing, introducing various notions of both completeness and soundness, inspired by the literature on interactive proofs. Our main result is an incompleteness theorem for our most basic notion of computationally sound and complete forecast testing: if Nature is implemented by a polynomial-time algorithm, then every complete polynomial-time test can be passed by a completely uninformed polynomial-time forecaster with high probability.

We will additionally discuss alternative notions of soundness and completeness and present both positive and negative results for these notions.

Joint work with Edward Lui and Rafael Pass.


Yevgeniy Dodis, NYU

Title: Key Derivation Without Entropy Waste

We revisit the classical question of converting an imperfect source X of min-entropy k into a usable m-bit cryptographic key for some underlying application P. If P has security delta (against some class of attackers) with a uniformly random m-bit key, we seek to design a key derivation function (KDF) h that allows us to use R = h(X) as the key for P and results in comparable security delta' close to delta.

Seeded randomness extractors provide a generic way to solve this problem provided that k > m + 2*log(1/delta), and this lower bound on k (called the "RT-bound") is known to be tight in general. Unfortunately, in many situations the "waste" of 2*log(1/delta) bits of entropy is significant, motivating the question of designing KDFs with less waste for important special classes of sources X or applications P.
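
As a quick sanity check on the RT-bound (our own recap of a standard calculation, not one of the talk's results): extracting with a universal hash function, the Leftover Hash Lemma gives a key that is eps-close to uniform for eps = 2^((m-k)/2 - 1), so the derived key achieves security delta' <= delta + eps. Insisting that eps be on the order of delta forces

    k >= m + 2*log(1/delta) - O(1),

which is exactly the 2*log(1/delta)-bit entropy "waste" discussed above.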

We discuss the following new positive and negative results in this regard:

- We design efficient KDFs for all unpredictability applications P (signatures, MACs, one-way functions, etc.) which can either (1) extract all of the entropy k = m with a very modest security loss delta' = O(delta*loglog(1/delta)), or, alternatively, (2) achieve optimal security delta' = O(delta) with a very modest entropy loss k = m + loglog(1/delta). In the process, we abstract out a clean, information-theoretic notion of (k, delta, delta')-unpredictability extractors, relate this notion to the previously-known notion of (min-entropy) condensers, and improve the state-of-the-art parameters for such condensers.

- We abstract a class of "square-friendly" applications, which includes all unpredictability applications, but also many important indistinguishability applications (such as CPA-secure encryption and weak PRFs). For this class of applications, we design efficient KDFs which still substantially beat the RT-bound, and can either (1) extract all of the entropy k = m with reasonable security loss delta' = sqrt{delta}, or, alternatively, (2) achieve optimal security delta' = O(delta) with a modest entropy loss k = m + log(1/delta).

- We show that efficient samplability of the source X does not help, in general, to beat the RT-bound (or to improve our results for unpredictability and square-friendly applications). This resolves the SRT (samplable RT) conjecture of Dachman-Soled et al. [DGKM12] in the affirmative, and also shows that the existence of computationally-secure extractors beating the RT-bound implies the existence of one-way functions.


Timothy Edgar, Brown

Title: Legal Challenges of Addressing Computer Insecurity While Protecting Privacy and Free Expression, Or 'Why Aren't We Using These Crypto Tools?'

The United States has a new military command for cyberspace, with the Director of the National Security Agency (NSA) as its commander.

At the same time, the Secretary of State has announced that the "freedom to connect" is an aspect of fundamental human rights and has criticized countries that attempt to filter the Internet. Despite major advances in cryptology, computer networks remain insecure, and sensitive data is leaked or stolen at increasing rates. This talk will examine the legal powers available to address network and computer insecurity and their impact on privacy, civil liberties and other fundamental values, as well as the policy reasons for the uneven deployment of technological safeguards to protect data. What contributions can advances in cryptology make toward addressing computer insecurity while protecting privacy?


Nelly Fazio, CCNY

Title: Broadcast Steganography

We initiate the study of broadcast steganography (BS), an extension of steganography to the multi-recipient setting. BS enables a sender to communicate covertly with a dynamically designated set of receivers, so that the recipients recover the original content, while unauthorized users and outsiders remain unaware of the covert communication. One of our main technical contributions is the introduction of a new variant of anonymous broadcast encryption that we term outsider-anonymous broadcast encryption with pseudorandom ciphertexts (oABE$). Our oABE$ construction achieves sublinear ciphertext size and is secure in the standard model. Besides being of interest in its own right, oABE$ enables an efficient construction of BS secure in the standard model against adaptive adversaries that also features sublinear ciphertexts.

Joint work with Antonio R. Nicolosi and Irippuge Milinda Perera.


Rosario Gennaro, CCNY

Title: Fully Homomorphic Message Authenticators

We define and construct fully homomorphic message authenticators. In such a scheme, anybody can perform arbitrary computations over authenticated data and produce a short tag that authenticates the result of the computation. The user verifies this tag with her private key to ensure that the claimed result is indeed the correct output of the specified computation over previously authenticated data, without needing to know the underlying data itself. For example, a user can outsource the storage of large amounts of authenticated data to a remote server, and the server can later non-interactively certify the outputs of various computations over this data with only a short tag. Our construction uses fully homomorphic encryption in a novel way.

Joint work with Daniel Wichs.


Craig Gentry, IBM

Title: Candidate Constructions for Functional Encryption and Best Possible Obfuscation

We describe a construction for a functional encryption scheme (FE) that can realize arbitrary polynomial time circuits and withstands arbitrary collusions of secret-key holders. We use a simulation-based notion of security, but bypass the known impossibility results by allowing only a single challenge ciphertext and providing only selective security.

The core of our construction is a functional encryption scheme for branching programs that uses the recent multilinear encoding schemes of Garg et al. We also reduce its security in the standard model to an ad-hoc (but non-interactive) computational problem related to the underlying multilinear schemes. To gain some confidence in the security of our assumption, we prove that it is secure in a generic model, thereby demonstrating its resilience against a wide and natural class of attacks. We then apply a variant of a recent transformation of Gorbunov et al., using randomized encodings and low-depth PRFs, to extend the reach of our scheme from NC1 to any polynomial-size circuit.

We observe that the hardness assumption that we use to prove security of our FE scheme for branching programs immediately implies a method for obfuscating branching programs, satisfying the Goldwasser-Rothblum notion of "best possible" obfuscation and the Barak et al. notion of indistinguishability obfuscation. However, bootstrapping our NC1 result to achieve best-possible obfuscation for arbitrary polynomial-size circuits raises technical difficulties not found in the functional encryption setting; here we require even more unconventional assumptions.

Joint work with Sanjam Garg, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters.


Charanjit S. Jutla, IBM

Title: Decision Procedures for Simulatability

We address the question of automatically proving security theorems in the universally composable (UC) model for ideal and real functionalities composed of if-then-else programs with uniform random number generation and data objects from the additive group of $\F_{2^m}$. We prove that for this restricted yet powerful language framework, there is an effective procedure to decide if a real functionality realizes an ideal functionality, and this procedure runs in computational time independent of $m$, which is essentially the security parameter.

To this end, we consider multivariate pseudo-linear functions, which are functions computed by branching programs over data objects from the additive group of $\F_{2^m}$. The conditionals in such programs are built from equality constraints over linear expressions, closed under negation and conjunction.

Let $f_1, f_2, ..., f_k$ be $k$ pseudo-linear functions in $n$ variables, and let $f$ be another pseudo-linear function in the same $n$ variables. We show that if $f$ is a function of the given $k$ functions, then it must be a pseudo-linear function of the given $k$ functions. This generalizes the straightforward claim for just linear functions. Proceeding further, we generalize the theorem to randomized pseudo-linear functions. We also prove a more general theorem where the $k$ functions can in addition take further arguments, and prove that if $f$ can be represented as an iterated composition of these $k$ functions, then it can be represented as a probabilistic pseudo-linear iterated composition of these functions. Additionally, we allow $f$ itself to be a randomized function, i.e. we give a procedure for deciding if $f$ is a probabilistic sub-exponential (in $m$) time iterated function of the given $k$ randomized functions. The decision procedure runs in computational time independent of $m$.
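
To make these objects concrete, here is a toy evaluator for one pseudo-linear function (our illustration; the example program is ours, not from the paper), with elements of $\F_{2^m}$ for m = 8 modeled as 8-bit integers and the group operation being XOR:

    # Elements of F_{2^m} (m = 8) are integers 0..255; the additive group law is XOR.

    def lin(var_indices, const, xs):
        # A linear expression: XOR of a subset of the variables plus a constant.
        acc = const
        for i in var_indices:
            acc ^= xs[i]
        return acc

    def pseudo_linear(xs):
        # if (x0 + x1 == 0x3A) then x0 + x2 else x1   (every "+" is XOR)
        if lin([0, 1], 0x00, xs) == 0x3A:
            return lin([0, 2], 0x00, xs)
        return lin([1], 0x00, xs)

    # Condition holds (0x3B ^ 0x01 == 0x3A), so this prints 0x3B ^ 0x55 = 110.
    print(pseudo_linear([0x3B, 0x01, 0x55]))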

(Joint work with Arnab Roy.)


Yael Kalai, MSR

Title: Reusable Garbled Circuits and Succinct Functional Encryption

Garbled circuits, introduced by Yao in the mid-80s, allow computing a function f on an input x without leaking anything about f or x besides f(x). Garbled circuits have found numerous applications, but every known construction suffers from one limitation: it offers no security if used on multiple inputs x. In this talk we will show how to construct reusable garbled circuits. The key building block is a new succinct single-key functional encryption scheme.
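
For background, here is a toy one-time garbling of a single AND gate (our simplified sketch; real constructions permute the table rows and use more careful encryption), which illustrates the single-use limitation that reusable garbled circuits remove:

    import os, hashlib

    def H(a, b):
        return hashlib.sha256(a + b).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    wire = lambda: (os.urandom(32), os.urandom(32))  # one label per bit value
    A, B, C = wire(), wire(), wire()                 # input wires A, B; output wire C

    # One ciphertext per input combination: the correct output label, encrypted
    # under the pair of input labels. (A real garbler also shuffles the rows.)
    table = [xor(H(A[a], B[b]), C[a & b]) for a in (0, 1) for b in (0, 1)]

    def evaluate(la, lb):
        # The evaluator holds exactly one label per input wire and recovers
        # exactly one output label; the other rows decrypt to garbage.
        for row in table:
            cand = xor(H(la, lb), row)
            if cand in C:
                return cand
        raise ValueError("no row decrypted")

    assert evaluate(A[1], B[1]) == C[1]  # 1 AND 1 = 1

Reusing the same labels on a second input leaks information, which is precisely why classical garbling is one-time.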

Functional encryption is an ambitious primitive: given an encryption Enc(x) of a value x, and a secret key sk_f for a function f, anyone can compute f(x) without learning any other information about x. In this talk we will show how to construct a succinct functional encryption scheme for any polynomial-time function f, where succinctness means that the ciphertext size does not grow with the size of the circuit for f, but only with its depth. The security of our construction is based on the intractability of the Learning with Errors (LWE) problem and holds as long as an adversary has access to a single key sk_f (or even an a priori bounded number of keys for different functions).

This is joint work with Shafi Goldwasser, Raluca Ada Popa, Vinod Vaikuntanathan, and Nickolai Zeldovich.


Aggelos Kiayias, U. Athens

Title: Anonymous yet Authentic: Secure Multiparty Computation in the Two-Tier Trust Model

Secure multiparty computation (MPC) as a service is becoming a tangible reality. In such a setting, a population of clients wish to utilize a set of servers to delegate privately and reliably a given computation on their inputs. The deployment of the service has a cost, leading resource-constrained providers to utilize a pre-existing set of servers in a cost-effective way.

In this paper we introduce the two-tiered trust model for MPC which naturally captures the problem the service provider faces in the above setting. In this model, the set of pre-existing servers is divided into two tiers of trust, the first of which is typically small and initially fully trusted while the second is typically much larger but entirely untrusted. The main objective is to design an MPC protocol that maximizes the overall resiliency of the system in terms of number of corruptions, compensating for the fact that the service provider is oblivious to the initial state of the tier-2 servers. In this setting, an optimally resilient MPC protocol is one that can withstand a number of (adaptive) corruptions that is less than half the number of first-tier servers plus less than half the initially uncorrupted second-tier servers. Given that a majority of second-tier servers can be initially corrupted, making use of this resource seems, prima facie, implausible.

Somewhat surprisingly, we show that such a protocol exists assuming the adversary is unable to distinguish uncorrupted servers across tiers. Instrumental in our construction is the design of a novel message delivery mechanism, termed anonymous yet authentic communication, which may be of independent interest.

This is joint work with Juan Garay, Ran Gelles, David Johnson and Moti Yung.


Vlad Kolesnikov, Bell Labs

Title: Scalable Private Database Querying for Arbitrary Formulas

In this talk, I will discuss the Columbia-Bell Labs project on scalable private DB querying. This work is part of the IARPA-sponsored effort described in the preceding talk and abstract by Hugo Krawczyk. Namely, we consider complete and scalable provable security of a DBMS, including access control, protection of the data, and, importantly, hiding the SQL query from the server, all while supporting a rich query set. We are restricted by severe performance requirements (a 10TB, 100M-record DB, with performance "just a little slower than an insecure DBMS").

I will present our approach, discuss its benefits and tradeoffs, and highlight some issues that arose in our efforts to achieve both provable security and scale.

This talk is based on works with Seung Geol Choi, Wesley George, Fernando Krell, Tal Malkin, Vasilis Pappas and Binh Vo.


Hugo Krawczyk, IBM

Title: Searchable Encryption in the Real World: Theory and Practice

I will discuss some advances in practical solutions to the problem of searchable encryption in which a data owner D outsources a database to an external server E in encrypted form. Later D can authorize clients to search the data at E while hiding information about the database and queried values from E, and preventing clients from learning information they are not authorized for. We also consider the PIR-like requirement by which the data owner D needs to authorize queries while minimizing the information it learns about queried values.
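
As a toy baseline (ours, for intuition only; this is not the project's protocol, supports only single-keyword queries, and leaks the access pattern), a PRF-keyed encrypted inverted index already gives the basic search-without-revealing-the-keyword functionality:

    import hmac, hashlib, os

    K = os.urandom(32)  # data owner D's secret key

    def prf(key, msg):
        return hmac.new(key, msg, hashlib.sha256).digest()

    def build_index(db):  # db: dict keyword -> list of record ids; built by D
        index = {}
        for w, ids in db.items():
            label = prf(K, b"label|" + w)
            pad_key = prf(K, b"pad|" + w)
            blob = b",".join(str(i).encode() for i in ids)
            pad = hashlib.shake_256(pad_key).digest(len(blob))  # toy encryption
            index[label] = bytes(a ^ b for a, b in zip(blob, pad))
        return index

    def token(w):  # D authorizes a search for keyword w
        return prf(K, b"label|" + w), prf(K, b"pad|" + w)

    def search(index, tok):  # run by server E, which never sees w itself
        label, pad_key = tok
        blob = index.get(label)
        if blob is None:
            return []
        pad = hashlib.shake_256(pad_key).digest(len(blob))
        return bytes(a ^ b for a, b in zip(blob, pad)).split(b",")

    idx = build_index({b"crypto": [1, 7], b"dimacs": [7]})
    print(search(idx, token(b"crypto")))  # [b'1', b'7']

The hard part, addressed by the work described here, is going far beyond this toy: boolean queries, huge scale, and a precise characterization of what is leaked.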

This work is part of an IARPA-funded program that seeks practical and provable solutions to the above problem that support complex queries in databases of very large size. This project has raised a variety of remarkable challenges at many levels, from basic cryptographic design to security modeling/analysis to major system and implementation efforts. I'll discuss some of these challenges, including the "challenge of being imperfect" (giving up all-but-negligible security while still being able to formally characterize and upper-bound the system's leakage), and many other questions that this work raises.

I will also provide an overview of our own solutions that support search via any boolean expression applied to sets of keywords associated with documents in the database. On the practical side, our implementation of the proposed protocol supports boolean search over databases with billions of entries (document-keyword pairs), e.g., a US-scale census database with 100 million records each with 250 associated keywords and a very large collection of crawled web-pages that includes, among others, a full snapshot of the English Wikipedia.

Joint work with David Cash, Stanislaw Jarecki, Charanjit Jutla, Marcel Rosu, and Michael Steiner.


Abhishek Jain, BU & MIT

Title: Concurrently Secure Computation in the Multiple Ideal Query Model

The multiple ideal query (MIQ) model was introduced by Goyal, Jain and Ostrovsky [Crypto'10] as a relaxed notion of security which allows one to construct concurrently secure protocols in the plain model. The main question relevant to the MIQ model is how many queries must we allow to the ideal world adversary?

In this work, we continue the study of the MIQ model and prove severe lower bounds on the number of ideal queries per session. In particular, we show that a constant number of queries per session is not sufficient to achieve security in the MIQ model. Further, it is impossible to achieve security in the MIQ model with any polynomial number of queries if the protocol is constant-round.

Interestingly, our negative results use ideas from the work on obfuscation using tamper-proof hardware, even though our setting does not involve any hardware tokens.

(Joint work with Vipul Goyal.)


Huijia Lin, BU & MIT

Title: Constant-Round Concurrent Zero Knowledge From P-Certificates

We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol relies on the existence of families of collision-resistant hash functions and a new, but in our eyes natural, complexity-theoretic assumption: the existence of P-certificates, that is, "succinct" non-interactive proofs/arguments for P. As far as we know, our result yields the first constant-round concurrent zero-knowledge protocol for NP with an explicit zero-knowledge simulator based on any assumption.


Anna Lysyanskaya, Brown

Title: Controlled Malleability for NIZK and Signatures: Definitions, Constructions and Applications

Depending on the application, malleability in cryptography can be viewed as either a flaw or -- especially if sufficiently understood and restricted -- a feature. In this talk we explore malleability as a feature for two important cryptographic building blocks: non-interactive zero-knowledge proofs (NIZKs) and digital signatures. We define and construct controlled-malleable NIZKs and signatures, and show how to apply them to compactly verifiable shuffles and delegatable anonymous credentials.

This will be a survey talk based on a collection of joint papers with Melissa Chase, Markulf Kohlweiss, and Sarah Meiklejohn.


Mohammad Mahmoody, Cornell

Title: On the (Im)Possibility of Tamper-Resilient Cryptography

We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider p-tampering attackers that may tamper with each bit of the honest parties' random tape with probability p, but have to do so in an "online" fashion. We present both positive and negative results:

* Any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability p by a p-tampering attacker. The core of this result is a novel technique for biasing the output of bounded-value functions (which may be of independent interest, and provides an alternative proof of the classic Santha-Vazirani theorem); a toy illustration of the biasing idea follows this list.

* Assuming the existence of one-way functions, cryptographic primitives such as signatures and identification protocols can be made resilient to p-tampering attacks for any p = 1/n^c, where c > 0 is a constant and n is the security parameter.
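
The following toy simulation (ours; the brute-force conditional expectations below stand in for an efficient online attacker, and parity is just a convenient bounded-value function) illustrates the greedy biasing strategy: whenever a bit may be tampered, set it to the value with the higher conditional expectation of f.

    import itertools, random

    def expect(f, prefix, n):
        # Exact conditional expectation of f given the bits fixed so far,
        # by brute force over all completions (fine for small n).
        rest = n - len(prefix)
        vals = [f(prefix + list(s)) for s in itertools.product([0, 1], repeat=rest)]
        return sum(vals) / len(vals)

    def p_tamper(f, n, p):
        """Generate an n-bit tape, tampering each bit online with probability p."""
        prefix = []
        for _ in range(n):
            if random.random() < p:
                # Tampered bit: greedily pick the value that increases E[f].
                b = max((0, 1), key=lambda v: expect(f, prefix + [v], n))
            else:
                b = random.randint(0, 1)
            prefix.append(b)
        return prefix

    f = lambda bits: 1 if sum(bits) % 2 == 0 else -1  # parity: bounded in [-1, 1]
    n, p, trials = 8, 0.25, 2000
    bias = sum(f(p_tamper(f, n, p)) for _ in range(trials)) / trials
    print(f"E[f] under p-tampering ~ {bias:+.2f} (untampered: 0)")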

Joint work with Per Austrin, Kai-Min Chung, Rafael Pass, and Karn Seth.


Tal Malkin, Columbia

Title: Multi-Party Computation of Polynomials and Branching Programs without Simultaneous Interaction

Halevi, Lindell, and Pinkas recently proposed a model for secure computation that captures communication patterns that arise in many practical settings, such as secure computation on the web. In their model, each party interacts only once, with a single centralized server. Parties do not interact with each other; in fact, the parties need not even be online simultaneously.

In this work we present a suite of new, simple and efficient protocols for secure computation in this ``one-pass'' model. We give protocols that obtain optimal privacy for the following general tasks:

(1) Evaluating any multivariate polynomial $F(x_1, \ldots, x_n)$ (modulo a large RSA modulus $N$), where the parties each hold an input $x_i$. (2) Evaluating any branching program over the parties' inputs, where each input is read once in the program.

As a special case, these function classes include all previous functions for which an optimally private, one-pass computation was known, as well as many new functions, including variance and other statistical functions, string matching, classification algorithms and some classes of finite automata and decision trees.

Joint work with Dov Gordon, Mike Rosulek, and Hoeteck Wee.


Tal Moran, IDC, Israel

Title: Inchworm: Secure Computation, One Step at a Time

The Inchworm project is an implementation of a secure computation "Virtual Machine": it allows parties to perform a secure computation in the RAM model, in which a program is executed as a sequence of instructions read from memory. This is the model most programmers today are familiar with. While practical secure computation has made great strides in recent years, all existing implementations (to the best of our knowledge) work in the circuit model, where the function to be computed is described as a logical circuit (usually consisting of and/or/not/xor gates). This model is harder for most programmers to understand, and although it is possible to transform RAM programs automatically into circuits, the transformation can incur a significant cost. Moreover, if we are willing to relax privacy and reveal the running time of the program, RAM programs have a huge advantage: the actual running time of the protocol can be much smaller than the size of the corresponding circuit, whose size is proportional to the maximal running time.

The idea of implementing RAM-based secure computation has been around for a while, and several recent (theoretical) works have considered this as their main motivation. "In theory", the basic idea is simple: we can design a logical circuit for a CPU, share the CPU state between the parties, and execute each cycle of the CPU as a (circuit-based) secure computation (external memory can be emulated using an oblivious RAM protocol). In practice, this outline leaves a lot of questions unanswered. In this talk we will describe some of the challenges we faced in designing a "virtual CPU" for secure computation, and some of the ideas we used to overcome them.
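
A minimal structural sketch of this outline (our toy, not Inchworm's code: the "secure" pieces here run in the clear, and the toy ISA just sums memory) shows the cycle-by-cycle control flow:

    def cpu_step(state, mem_value):
        # Toy ISA: state = (pc, acc); each cycle adds mem[pc] to the accumulator.
        # In the real protocol this step is a logical circuit evaluated as a
        # circuit-based secure computation over secret-shared state.
        pc, acc = state
        return (pc + 1, acc + mem_value), pc + 1  # (next state, next address)

    def run_vm(memory, max_cycles):
        state, addr = (0, 0), 0
        for _ in range(max_cycles):
            if addr >= len(memory):  # the halt flag would itself be shared
                break
            # In the real protocol, memory[addr] is fetched through an oblivious
            # RAM, so the physical access pattern hides the logical address.
            state, addr = cpu_step(state, memory[addr])
        return state[1]

    print(run_vm([3, 1, 4, 1, 5], max_cycles=100))  # sums the memory: 14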


Antonio Nicolosi, Stevens

Title: Hard Learning Problems Over Non-Commutative Groups

Building on the success of the Learning Parity with Noise (LPN) and Learning With Errors ( LWE) problems as source of intractability for cryptographic applications, Baumslag et al. recently proposed generalized learning problems over arbitrary, possibly non-commutative) groups. We study the average-case hardness of an instantiation of this framework with a family of non-commutative groups known as free Burnside groups of exponent 3. In our main result, we demonstrate that this problem, termed Learning Burnside Homomorphisms with Noise (Bn-LHN), enjoys a strong form of random self-reducibility. Along the way, we also prove a sequence of lemmas regarding homomorphisms of free Burnside groups of exponent 3 that may be of independent interest.
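
For reference, here is the standard LPN sample oracle that this framework generalizes (our background sketch; Bn-LHN replaces the inner product over GF(2) with a noisy homomorphism of a free Burnside group of exponent 3):

    import random

    def lpn_oracle(s, tau):
        # One LPN sample: (a, <a,s> + e) over GF(2), with a uniform and
        # e a Bernoulli(tau) noise bit.
        n = len(s)
        a = [random.randint(0, 1) for _ in range(n)]
        noiseless = sum(ai * si for ai, si in zip(a, s)) % 2
        e = 1 if random.random() < tau else 0
        return a, noiseless ^ e

    s = [random.randint(0, 1) for _ in range(16)]  # the secret
    for a, b in (lpn_oracle(s, tau=0.125) for _ in range(5)):
        print(a, b)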

Joint work with N.Fazio, K.Iga, L.Perret, and W.E.Skeith III.


Rafael Pass, Cornell

Title: Interactive Coding, Revisited

How can we encode a communication protocol between two parties to become resilient to adversarial errors on the communication channel? This question dates back to the seminal works of Shannon and Hamming from the 1940's, initiating the study of error-correcting codes (ECC). But even if we encode each message in the communication protocol with a "good" ECC, the error rate of the encoded protocol becomes poor (namely O(1/m), where m is the number of communication rounds). Towards addressing this issue, Schulman (FOCS'92, STOC'93) introduced the notion of interactive coding. We argue that whereas the method of separately encoding each message with an ECC ensures that the encoded protocol carries the same amount of information as the original protocol, this may no longer be the case when using interactive coding. In particular, the encoded protocol may completely leak a player's private input, even if it would remain secret in the original protocol. Towards addressing this problem, we introduce the notion of knowledge-preserving interactive coding, where the interactive coding protocol is required to preserve the "knowledge" transmitted in the original protocol. Our main results are as follows.

o The method of separately applying ECCs to each message is essentially optimal: no knowledge-preserving interactive coding scheme can achieve an error rate of even 1/m, where m is the number of rounds in the original protocol.

o If restricting to computationally-bounded (polynomial-time) adversaries, then assuming the existence of one-way functions (resp. subexponentially-hard one-way functions), for every eps > 0, there exists a knowledge-preserving interactive coding scheme with constant error rate and information rate n^{-eps} (resp. 1/polylog(n)), where n is the security parameter; additionally, achieving an error rate of even 1/m requires the existence of one-way functions.

o Finally, even if we restrict to computationally-bounded adversaries, knowledge-preserving interactive coding schemes with constant error rate can have an information rate of at most o(1/log n). This result applies even to non-constructive interactive coding schemes.

Joint work with Kai-Min Chung and Sidharth Telang.


Manoj Prabhakaran, UIUC

Title: A Full Characterization of Completeness for Two-party Randomized Function Evaluation

We close a long line of work that has pursued a full characterization of the completeness of (potentially randomized) finite functions for 2-party computation secure against active adversaries. The first such complete function was discovered a quarter of a century ago [Kilian, FOCS 1988]. Since then, the question of which finite 2-party functions are complete has been studied extensively, leading to characterizations in many special cases. In this work, for the first time, we answer this question in full.

The main tools in our solution include:

* A new notion of statistically testable games: a kind of interactive proof in the information-theoretic setting where both parties are computationally unbounded but differ in their knowledge of a secret. We show that evaluation of a 2-party function is statistically testable if and only if the function is redundancy free.

* A new formal notion of redundancy in a 2-party function, and a linear-algebraic characterization of this notion.

* An extension to a converse of Shannon's channel coding theorem, where an adversary can adaptively choose the channel based on its view.

* A new set of protocol tools, building upon ideas of [Ishai, Prabhakaran, and Sahai 2008] and others, that enable us to show how to use any complete function $f$ to implement any (randomized) circuit $C$ using only $O(|C| + \kappa)$ calls to $f$, where $\kappa$ is a statistical security parameter. This establishes a universal notion of quantitative ``cryptographic complexity'' of (2-party) functions, independent (up to constants) of which complete finite 2-party function it is defined with respect to.

We believe that many of these new notions and tools will be of independent interest. Further, we pose the quantitative cryptographic complexity of (2-party) functions as an important but little-understood measure of complexity, with close connections to circuit complexity.

(Joint work with Daniel Kraschewski, Hemanta K. Maji and Amit Sahai.)


Thomas Ristenpart, U. Wisconsin

Title: Message-Locked Encryption and Secure Deduplication

We formalize a new cryptographic primitive, Message-Locked Encryption (MLE), where the key under which encryption and decryption are performed is itself derived from the message. MLE provides a way to achieve secure deduplication (space-efficient secure outsourced storage), a goal currently targeted by numerous cloud-storage providers. We provide definitions both for privacy and for a form of integrity that we call tag consistency. Based on this foundation, we make both practical and theoretical contributions. On the practical side, we provide ROM security analyses of a natural family of MLE schemes that includes deployed schemes. On the theoretical side, the challenge is standard-model solutions, and we make connections with deterministic encryption, hash functions secure on correlated inputs, and the sample-then-extract paradigm to deliver schemes under different assumptions and for different classes of message sources. Our work shows that MLE is a primitive of both practical and theoretical interest.
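
For intuition, here is a toy version of convergent encryption, folklore in the deployed family of MLE schemes mentioned above (our simplified sketch; being deterministic, a scheme like this can be private only for unpredictable messages):

    import hashlib

    def mle_encrypt(msg: bytes):
        key = hashlib.sha256(msg).digest()             # key derived from the message
        pad = hashlib.shake_256(key).digest(len(msg))  # toy deterministic stream
        ctext = bytes(a ^ b for a, b in zip(msg, pad))
        tag = hashlib.sha256(ctext).hexdigest()        # tag used for deduplication
        return key, ctext, tag

    def mle_decrypt(key: bytes, ctext: bytes):
        pad = hashlib.shake_256(key).digest(len(ctext))
        return bytes(a ^ b for a, b in zip(ctext, pad))

    k1, c1, t1 = mle_encrypt(b"same file contents")
    k2, c2, t2 = mle_encrypt(b"same file contents")
    assert t1 == t2                                    # equal messages dedupe
    assert mle_decrypt(k1, c1) == b"same file contents"

Deriving the tag from the ciphertext, as here, is one route to the tag-consistency style of integrity the abstract defines.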


Christian Schaffner, U. Amsterdam

Title: The Garden-Hose Model

We define a new model of communication complexity, called the garden-hose model. Informally, the garden-hose complexity of a function f:{0,1}^n x {0,1}^n to {0,1} is given by the minimal number of water pipes that need to be shared between two parties, Alice and Bob, in order for them to compute the function f as follows: Alice connects her ends of the pipes in a way that is determined solely by her input x \in {0,1}^n and, similarly, Bob connects his ends of the pipes in a way that is determined solely by his input y \in {0,1}^n. Alice turns on the water tap that she also connected to one of the pipes. Then, the water comes out on Alice's or Bob's side depending on the function value f(x,y).
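
To make the model concrete, here is a tiny simulator plus a 3-pipe strategy for equality of one-bit inputs (our toy example; the convention that water exiting on Bob's side means f(x,y) = 1 is ours):

    def garden_hose(alice_match, bob_match, tap_pipe):
        """alice_match/bob_match: dicts pairing pipe indices on each side."""
        pipe, side = tap_pipe, "bob"       # water first flows across to Bob
        while True:
            match = bob_match if side == "bob" else alice_match
            if pipe not in match:
                return side                # water spills out on this side
            pipe = match[pipe]             # continue through the paired pipe
            side = "alice" if side == "bob" else "bob"

    def alice_strategy(x):
        return {}, x                       # no pairings on Alice's side; tap on pipe x

    def bob_strategy(y):
        other = 1 - y                      # route the "wrong" pipe back via pipe 2
        return {other: 2, 2: other}

    for x in (0, 1):
        for y in (0, 1):
            a_match, tap = alice_strategy(x)
            side = garden_hose(a_match, bob_strategy(y), tap)
            print(x, y, "equal" if side == "bob" else "different")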

We provide basic observations about the garden-hose complexity and establish a connection to classical complexity theory by proving that all functions computable in log-space have polynomial garden-hose complexity.

Finally, we show an interesting connection between the garden-hose model and the (in)security of a certain class of quantum position-verification schemes.

Joint work with Harry Buhrman, Serge Fehr, and Florian Speelman. http://arxiv.org/abs/1109.2563 (proceedings of ITCS 2013).


Adam Smith, Penn State

Title: Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical Data Privacy

We propose a new framework for defining privacy in statistical databases that enables reasoning about, and taking advantage of, adversarial uncertainty about the data. Roughly, our framework requires indistinguishability of the real world, in which a mechanism is computed over the real dataset, and an ideal world, in which the same mechanism is computed over a ``scrubbed'' version of the dataset (e.g., one in which an individual user's data is removed). In each case, the underlying dataset X is drawn from the same distribution in some class \Delta (specified as part of the definition), which represents the adversary's uncertainty about X.

We argue that our framework provides meaningful guarantees in a broader range of settings as compared to previous efforts to model privacy in the presence of adversarial uncertainty (technically, because the class of allowed priors is always convex). We also show that several natural, *deterministic* mechanisms (including releasing exact averages, ``truncated'' histograms, or certain discrete statistical estimators) satisfy our definitional framework under realistic assumptions on the distribution of the underlying data.


Hoeteck Wee, George Washington

Title: Attribute-Based Encryption for Circuits

In an attribute-based encryption (ABE) scheme, a ciphertext is associated with descriptive values x, a secret key is associated with functions f, and a secret key decrypts the ciphertext to recover the plaintext iff f(x) = 1. Moreover, the scheme should be secure against collusions of users, namely, given secret keys for polynomially many functions, an adversary learns nothing about the message if none of the secret keys can individually decrypt the ciphertext.

We present ABE schemes for *arbitrary polynomial-size circuits*, where the public parameters and ciphertext grow linearly with the depth of the circuit. Our construction is secure under the standard learning with errors (LWE) assumption. Previous constructions of attribute-based encryption were for Boolean formulas, captured by the complexity class NC1.

In the course of our construction, we present a novel framework for constructing ABE schemes based on a new primitive, a two-to-one recoding scheme.

Joint work with Sergey Gorbunov and Vinod Vaikuntanathan.


Daniel Wichs, Northeastern

Title: Dynamic Proofs of Retrievability via Oblivious RAM

Proofs of retrievability allow a client to store her data on a remote server (``in the cloud'') and periodically execute an efficient audit protocol to check that all of the data is being maintained correctly and can be recovered from the server. For efficiency, the computation and communication of the server and client during an audit protocol should be significantly smaller than reading/transmitting the data in its entirety. Although the server is only asked to access a few locations of its storage during an audit, it must maintain full knowledge of all client data to be able to pass.

Starting with the work of Juels and Kaliski (CCS '07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, so that the server must delete a large fraction of its storage to `lose' any actual content. Unfortunately, this means that even a single bit modification to the original data will need to modify a large fraction of the server storage, which makes updates highly inefficient. Overcoming this limitation was left as the main open problem by all prior works.

In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data by running an efficient protocol with the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client's data. The starting point of our solution is to split up the data into small blocks and redundantly encode each block of data individually, so that an update inside any data block only affects a few codeword symbols. The main difficulty is to prevent the server from identifying and deleting too many codeword symbols belonging to any single data block. We do so by hiding where the various codeword symbols for any individual data block are stored on the server and when they are being accessed by the client, using the algorithmic techniques of oblivious RAM.
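
The following toy (ours, drastically simplified: a repetition code in place of an erasure code, a keyed shuffle in place of oblivious RAM, and no per-symbol MACs) illustrates why scattering the codeword symbols matters: the server cannot tell which positions belong to which block, so deleting any one block's symbols risks detection by a random audit.

    import hmac, hashlib, random

    K = b"client-secret-key"
    REP = 3  # toy redundancy: 3-fold repetition code per block

    def perm_position(i, n):
        # Keyed pseudorandom permutation on [0, n), toy version via sort-by-PRF.
        order = sorted(range(n), key=lambda j: hmac.new(K, str(j).encode(),
                                                        hashlib.sha256).digest())
        return order[i]

    def outsource(blocks):
        symbols = [b for blk in blocks for b in [blk] * REP]  # encode each block
        server = [None] * len(symbols)
        for i, s in enumerate(symbols):
            server[perm_position(i, len(symbols))] = s        # scatter secretly
        return server

    def audit(server, samples=10):
        # Client checks random positions; a real scheme verifies a MAC per symbol.
        return all(server[random.randrange(len(server))] is not None
                   for _ in range(samples))

    srv = outsource([b"block0", b"block1", b"block2"])
    print(audit(srv))                  # True: everything stored
    srv[0] = srv[1] = srv[2] = None    # server deletes a third of the symbols
    print(audit(srv))                  # False with noticeable probability

Note that updating one block rewrites only REP positions, which is the efficiency gain over encoding the data monolithically.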

Joint work with David Cash and Alptekin Kupcu.


Vassilis Zikas, UCLA

Title: Rational Protocol Design: Cryptographic Security Against Incentive-driven Attacks

Existing work on ``rational cryptographic protocols'' treats each party (or coalition of parties) running the protocol as a selfish agent trying to maximize its utility. In this work we propose a fundamentally different approach which not only captures the incentive-driven reasons of (some of the) protocol participants to deviate from their prescribed behavior, but that is also better suited to modeling a (multi-party) protocol under attack from an external entity. Specifically, we define a two-party game between an attacker and a protocol designer. The goal of the attacker is to break security properties such as correctness or confidentiality (depending on its utility function), possibly by corrupting protocol participants; the goal of the protocol designer is to prevent the attacker from succeeding.

We lay the theoretical groundwork for a study of cryptographic protocol design in this setting by providing a methodology for defining the problem within the traditional simulation paradigm. Our framework provides ways of reasoning about important cryptographic concepts (e.g., adaptive corruptions or attacks on communication resources) not handled by previous game-theoretic treatments of cryptography. We also prove composition theorems that for the first time provide a sound way to design rational protocols assuming ``ideal communication resources'' (e.g., broadcast, authenticated channels) and then instantiate these resources via standard cryptographic tools.

Finally, we investigate the problem of secure function evaluation (SFE) in our framework, where the attacker has to pay for each party it corrupts. Our results demonstrate how knowledge of the attacker's incentives can be used to circumvent known impossibility results for SFE in the presence of arbitrarily malicious attackers.

Joint work with Juan Garay, Jonathan Katz, Ueli Maurer and Bjoern Tackmann.

