DIMACS Workshop on Cryptography and its Interactions: Learning Theory, Coding Theory, and Data Structures

July 11 - 13, 2016
DIMACS Center, CoRE Building, Rutgers University

Organizers:
David Cash, Rutgers University, david.cash at cs.rutgers.edu
Yuval Ishai, Technion and UCLA, yuvali at cs.technion.ac.il
Amit Sahai, UCLA, sahai at cs.ucla.edu
Presented under the auspices of the DIMACS Special Focus on Cryptography as part of the DIMACS/Simons Collaboration in Cryptography and the DIMACS Special Focus on Cybersecurity.

Abstracts:


Raluca Ada Popa, University of California, Berkeley

Title: Arx: A Strongly Encrypted Database System

In recent years, encrypted databases have emerged as a promising direction that provides data confidentiality without sacrificing functionality: queries are executed on encrypted data. However, existing practical proposals rely on a set of weak encryption schemes that have been shown to leak sensitive data.

In this paper, we propose Arx, the first practical and functionally rich database system that encrypts the data only with strong encryption schemes offering semantic security. Instead of embedding the computation into special encryption schemes as in prior work, Arx embeds the computation into data structures, which it builds on top of traditional encryption schemes. Arx introduces two new database indices, Arx-RANGE for range and order-by-limit queries, and Arx-EQ for equality queries, both of which combine cryptographic primitives with data structures enabling computation.

We show that Arx supports real applications such as ShareLatex and a health data cloud provider, and that its performance overhead is modest.

Joint work with Rishabh Poddar and Tobias Boelter.


Adi Akavia, Academic College of Tel Aviv-Yafo

Title: Topology-Hiding Computation Beyond Logarithmic Diameter

A distributed computation in which nodes are connected by a partial communication graph is called *topology-hiding* if it does not reveal information about the graph (beyond what is revealed by the output of the function). Previous results [Moran, Orlov, Richelson; TCC'15] have shown that topology-hiding computation protocols exist for graphs of logarithmic diameter (in the number of nodes), but the feasibility question for graphs of larger diameter was open even for very simple graphs such as chains, cycles and trees.

In this work, we take a step towards topology-hiding computation protocols for arbitrary graphs by constructing protocols that can be used in a large class of *large-diameter networks*, including cycles, trees and graphs with logarithmic circumference. Our results use very different methods from [MOR15] and can be based on a standard assumption (such as DDH).

Joint work with Tal Moran.


Sanjeev Arora, Princeton University

Title: Easiness: the More Interesting Phenomenon in Machine Learning

Decades of work have led to a precise theory of computational hardness, which also underlies cryptography. Its methodology inherently views the world in an adversarial way (even when the results are supposedly about average-case hardness). But increasingly one finds this view unsuitable for understanding machine learning. The interesting computational phenomenon of this age is that simple gradient-based optimization (e.g., deep learning) suffices to solve seemingly intractable learning problems. Developing a theoretical understanding of this phenomenon of easiness feels like a more interesting scientific problem than further refinement of old hardness results.


Eli Ben Sasson, Technion

Title: Public-setup computational integrity from quasi-linear PCPs

Interactive proof (IP) and argument systems have many promising applications to secure decentralized payment systems like Bitcoin and Zerocash. The diversity of applications calls for IP systems that can be applied efficiently to "natural" languages in NTIME(T(n)) and which scale well, asymptotically and concretely, with instance size.

Prior implementations of scalable IP systems rely on a trusted setup phase involving a secret trapdoor that, if compromised, ruins security.

In this talk I will discuss our recent implementation of a scalable IP system that has no trapdoors, and which is founded on efficient quasi-linear probabilistically checkable proofs (PCPs).

Based on joint works with Iddo Ben-Tov, Alessandro Chiesa, Ariel Gabizon, Daniel Genkin, Matan Hamilis, Evgenya Pergament, Michael Riabzev, Mark Silberstein, Eran Tromer and Madars Virza.


Andrej Bogdanov, The Chinese University of Hong Kong

Title: Secret Sharing: A Probabilistic Perspective

A secret sharing scheme can be modelled as a collection of distributions that are "locally" indistinguishable but "globally" distinguishable. This probabilistic perspective is complementary to the algorithmic one which puts emphasis on the sharing and reconstruction procedures. It enables us to (partially) answer the following questions:

1. Is linear algebra necessary for secret sharing? The answer is essentially no: We show that for every pair of constants 0 < s < r ≤ 1 and sufficiently large n there exists a scheme for n parties in which perfect secrecy holds against any sn parties, reconstruction can be done by any rn parties, and the sharing and reconstruction algorithms are constant-depth circuits of size polynomial in n.

2. How do perfect security and statistical security relate in the context of low-complexity secret sharing schemes? We give a complete answer when the reconstruction function is an OR of all n shares.

3. Can the share size in Shamir's scheme be reduced when the secret is a single bit? We show that Shamir's scheme is in fact optimal in this respect up to a constant additive term for any number of parties n and any threshold value 1 < t < n.
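
As background for item 3, here is a minimal sketch of Shamir's scheme over a prime field; the prime Q = 257 and the parameters t = 3, n = 5 below are illustrative choices, not values from the talk. It also illustrates the overhead in question: even for a 1-bit secret, each share is a full field element.

    import random

    Q = 257  # a prime larger than the number of parties; shares live in GF(Q)

    def shamir_share(secret, t, n):
        # Random degree-(t-1) polynomial with constant term = secret.
        coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, j, Q) for j, c in enumerate(coeffs)) % Q)
                for x in range(1, n + 1)]

    def shamir_reconstruct(shares):
        # Lagrange interpolation at 0 using any t shares.
        s = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % Q
                    den = den * (xi - xj) % Q
            s = (s + yi * num * pow(den, Q - 2, Q)) % Q
        return s

    shares = shamir_share(secret=1, t=3, n=5)   # the secret is a single bit
    assert shamir_reconstruct(shares[:3]) == 1  # any 3 shares suffice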

The results are obtained via new connections of secret sharing to approximation theory and game theory.

The talk will be based on joint works with Siyao Guo, Yuval Ishai, Ilan Komargodski, Emanuele Viola, and Christopher Williamson.


Elette Boyle, IDC Herzlia

Title: Oblivious RAM and Coding?

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern: i.e., for every input the memory locations accessed are similarly distributed. Since their inception, ORAMs have become an invaluable tool in designing cryptographic systems, where observable memory access patterns crucially must not leak sensitive information.

We briefly survey known upper and lower bounds on the overhead required to obliviously simulate general RAM programs. In particular, we discuss: the original lower bound of Goldreich and Ostrovsky within a restricted "balls and bins" model; recent evidence in a joint work with Naor against this bound when this restriction is lifted; and possible directions for stronger upper/lower bounds using coding theory.
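
To make the notion of oblivious simulation and its overhead concrete, the sketch below is the trivial linear-scan ORAM: every logical access touches all n cells, so the physical access pattern is independent of the input, at the cost of O(n) overhead per access. This is a baseline for illustration, not any of the constructions discussed above.

    class LinearScanORAM:
        """Trivial ORAM: every logical read/write scans all n cells,
        so the physical access pattern reveals nothing about the index."""

        def __init__(self, n):
            self.mem = [0] * n

        def access(self, op, index, value=None):
            out = None
            for i in range(len(self.mem)):       # always touch every cell
                cell = self.mem[i]               # physical read
                if i == index:
                    out = cell
                    if op == "write":
                        cell = value
                self.mem[i] = cell               # physical write-back
            return out

    oram = LinearScanORAM(8)
    oram.access("write", 3, 42)
    assert oram.access("read", 3) == 42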


Flavio P. Calmon, IBM Research

Title: Estimation-Theoretic Tools for Security and Privacy

In this work, we use tools from information theory, statistics and estimation theory to characterize the fundamental limits of estimation when only partial statistics of the data are known. More specifically, we introduce security metrics and associated results based on the spectrum of the conditional expectation operator, called the principal inertia components (PICs). The PICs allow a fine-grained decomposition of the dependence between a hidden and an observed variable which, in turn, is useful for deriving fundamental bounds for estimation problems, and for measuring information leakage in secure communication models. We show how the PICs can be used to quantify the fundamental privacy-utility tradeoff and to design privacy-assuring mechanisms. In addition, we motivate the potential application of the PICs to quantify information-theoretic limits of simple ciphers.

This work was done in collaboration with Muriel Medard (MIT), Ali Makhdoumi (MIT), Mark Christiansen (National University of Ireland Maynooth), and Ken R. Duffy (National University of Ireland Maynooth).


Alessandro Chiesa, UC Berkeley

Title: Testing Proximity to Codes with Interactive Oracle Proofs, with More Efficiency and in Zero Knowledge

Interactive oracle proofs (IOPs) are a new proof system model introduced in [BCS16,RRR16] that combine aspects of IPs and PCPs. I will discuss several constructions of IOPs that obtain results for proximity testing to linear codes that are not known to be achievable via IPs or PCPs alone.


Gil Cohen, Caltech

Title: Recent Advances in Non-malleable Extractors and Privacy Amplification Protocols

Dodis and Wichs (STOC'09) devised a framework for constructing two-round privacy amplification protocols against active adversaries, which is instantiated by the so-called non-malleable extractor. Motivated by this connection, and by further applications, an extensive effort was directed towards constructing non-malleable extractors. In this talk we will survey this line of research.

No prior knowledge is assumed.


Amit Daniely, Google

Title: Non-cryptographic Limitations on Learning

Traditionally, hardness of learning was proved assuming that certain collections of functions form collections of one-way trapdoor permutations. In this talk I will describe a new method to establish hardness of learning based on a different type of assumption. Under a natural generalization of Feige's assumption about random K-SAT and K-XOR, this method substantially reduces gaps between known upper and lower bounds. Concretely, it follows that:

1. Learning DNFs is hard.
2. Learning an intersection of a super-logarithmic number of halfspaces is hard.
3. Learning halfspaces in the presence of a tiny amount of noise is hard.
4. Several additional learning problems are hard; in particular, virtually all problems that were previously shown intractable (under various, mostly cryptographic, assumptions).

Based on joint work with Nati Linial and Shai Shalev-Shwartz.


Yevgeniy Dodis, New York University

Title: Spooky Encryption and its Applications

Consider a setting where inputs x_1,...,x_n are encrypted under independent public keys. Given the ciphertexts c_i = Enc(pk_i,x_i), Alice outputs ciphertexts c'_1,...,c'_n that decrypt to y_1,...,y_n, respectively. What relationships between the x_i's and y_i's can Alice induce?

If the scheme is homomorphic then "local" (component-wise) relationships are achievable. On the other hand security of the scheme disallows "signaling" relationships, meaning that y_i cannot depend on x_j for j \neq i. However, there are also relationships which are neither signaling nor local. Dwork et al. [DLNNR01] asked if it is possible to have encryption schemes that support such "spooky" relationships. Answering this question is the focus of our work.

We show that under the Learning with Errors assumption, there exists an encryption scheme supporting a large and natural class of such spooky relationships. For this result, the public keys all depend on common public randomness. Our second result shows that, assuming sub-exponentially hard indistinguishability obfuscation (iO) (and additional more standard assumptions), we can remove the common randomness and choose the public keys completely independently.

We discuss several implications of spooky encryption. Firstly, it gives a strong counter-example to a method proposed by Aiello et al. [ABOR00] for building arguments for NP from homomorphic encryption. Secondly, it gives a simple 2-round multi-party computation protocol where, at the end of the first round, the parties can locally compute an additive secret sharing of the output. Lastly, it yields a function secret sharing (FSS) scheme for all functions.

Joint work with Shai Halevi, Ron Rothblum, and Daniel Wichs.


Juan Garay, Yahoo Labs

Title: Bootstrapping the Blockchain

The Bitcoin backbone protocol [Eurocrypt 2015] extracts two basic properties of Bitcoin's underlying blockchain data structure, called "common prefix" and "chain quality," and shows how fundamental applications such as consensus and a robust public transaction ledger can be built on top of them based on "proofs of work" (POWs), assuming an adversarial hashing power strictly less than 1/2 and no adversarial pre-computation - or, alternatively, the existence of an unpredictable "genesis" block. In this work we show how to remove the latter assumption, presenting a "bootstrapped" backbone protocol that builds genesis blocks "from scratch" in the presence of adversarial pre-computation and which relies solely on POWs. This is joint work with Aggelos Kiayias, Nikos Leonardos and Giorgos Panagiotakos.
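
For readers unfamiliar with POWs, the sketch below shows the underlying hash puzzle: find a nonce so that the hash of the data has a prescribed number of leading zero bits. The payload and difficulty are illustrative only, not parameters of the bootstrapped backbone protocol.

    import hashlib

    def solve_pow(data: bytes, difficulty_bits: int) -> int:
        """Return a nonce such that SHA-256(data || nonce) starts with
        `difficulty_bits` zero bits (brute-force search)."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            h = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1

    nonce = solve_pow(b"candidate genesis block", difficulty_bits=16)
    print("found nonce:", nonce)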


Sivakanth Gopi, Princeton University

Title: 2-Server PIR with Sub-polynomial Communication

A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the i-th bit of an n-bit database replicated among two servers (which do not communicate) while not revealing any information about i to either server. In this work we construct a 1-round 2-server PIR with total communication cost n^{O(log log n / sqrt(log n))}. This improves over the currently known 2-server protocols which require O(n^{1/3}) communication and matches the communication cost of known 3-server PIR schemes. Our improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.
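
For contrast, the classical information-theoretic 2-server PIR already hides i perfectly but costs O(n) communication: the client sends each server a random-looking subset of indices, and each server returns the XOR of the selected bits. A toy sketch of that baseline (not the sub-polynomial scheme of the talk):

    import secrets

    def query_sets(n, i):
        # Client: a uniformly random subset S of [n], and S with index i toggled.
        S1 = {j for j in range(n) if secrets.randbits(1)}
        S2 = S1 ^ {i}            # symmetric difference flips membership of i
        return S1, S2

    def server_answer(db, S):
        # Each server XORs the requested bits; individually, S looks uniform.
        ans = 0
        for j in S:
            ans ^= db[j]
        return ans

    db = [1, 0, 1, 1, 0, 0, 1, 0]
    i = 5
    S1, S2 = query_sets(len(db), i)
    assert server_answer(db, S1) ^ server_answer(db, S2) == db[i]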

Joint work with Zeev Dvir.


Russell Impagliazzo, University of California, San Diego

Title: Simultaneous Secrecy and Reliability Amplification for a General Channel

We present a general notion of channel for cryptographic purposes that can model a (classical) physical channel, the consequences of a cryptographic protocol, or any hybrid of the two. We consider simultaneous security and reliability amplification for such channels. We start with a channel where the intended receiver gets the transmitted bit except with some probability and the attacker can guess the transmitted bit except with a somewhat higher probability. We wish to use the channel to define one where the receiver gets the transmitted bit almost certainly while only negligible information is leaked to the attacker.

We show that simultaneous security and reliability amplification is not possible for the most general model of channel, but, at least for some values of the parameters, it is possible for a restricted class of channels that still includes both standard information-theoretic channels and keyless cryptographic protocols. Even in the restricted model, we require that for the original channel, the failure chance for the attacker must be a factor c more than that for the intended receiver. We show that for any c > 4, there is a one-way protocol (where the sender sends information to the receiver only) which achieves simultaneous secrecy and reliability.

From results of Holenstein and Renner, there are no such one-way protocols for c < 2. On the other hand, we also show that for c > 1.5, there are two-way protocols that achieve simultaneous secrecy and reliability. We propose using similar models to address other questions in the theory of cryptography, such as using noisy channels for secret agreement, trade-offs between reliability and security, and the equivalence of various notions of oblivious channels and secure computation.

Joint work with: Ragesh Jaiswal, Valentine Kabanets, Bruce M. Kapron, Valerie King, and Stefano Tessaro.


Charanjit Jutla, IBM Research

Title: Upending Stock Market Structure Using Secure Computation

Stock markets have two primary functions: providing liquidity and price discovery. While the market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, M. O'Hara (Journal of Finance, 2003) has established that both liquidity and price discovery affect asset pricing, and in particular asset returns.

While the cost of liquidity provision is borne by investors, and is clearly detrimental to asset returns, periodic price discovery has both positive and negative consequences for asset pricing. In this work we propose using cryptography, and in particular multi-party secure computation and zero-knowledge proofs, to set up a novel stock market structure that, to a large extent, removes the negative consequences of liquidity costs and periodic price discovery. Interestingly, the proposed market structure takes us back to the early days of stock markets, i.e., periodic call markets, but with the not-so-"trusted" auctioneer replaced by distributed computing where no individual party (or small coalition) gets to know the order book.


Seny Kamara, Brown University

Title: SQL on Structurally-Encrypted Databases

We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS '15). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.

This is joint work with Tarik Moataz.


Adam Klivans, University of Texas at Austin

Title: The Complexity of Learning Neural Network Architectures

We will survey recent hardness results on the worst-case learnability of commonly studied neural networks. Given the computational difficulty of the problem, we discuss a distributional approach that yields a variety of efficient algorithms.


Dakshita Khurana, UCLA

Title: Breaking the Three Round Barrier for Non-malleable Commitments

We construct two-message non-malleable commitments with respect to opening in the standard model, assuming only one-to-one one-way functions. Our protocol consists of two unidirectional messages by the committer (with no message from the receiver), and is secure against all polynomial-time adversaries in the standard synchronous setting.

Pass (TCC 2013) proved that any commitment scheme achieving non-malleability with respect to commitment using only 2 rounds of communication cannot be proved secure via a black-box reduction to any "standard" intractability assumption. We extend this by showing a similar impossibility result for 2-message challenge-response protocols achieving non-malleability with respect to opening, another standard notion of non-malleability for commitments.

However, somewhat surprisingly, we show that this barrier breaks down in the setting of two unidirectional messages by the committer (with no message from the receiver), for non-malleability with respect to opening.

Our techniques depart significantly from the commit-challenge-response structure followed by nearly all prior works on non-malleable protocols in the standard model. Our methods are combinatorial in nature and also give an alternate construction of (unconditionally secure) split-state non-malleable codes.

Our protocol, together with our impossibility result, also resolves the round complexity of block-wise non-malleable codes (Chandran et al.) w.r.t. natural black-box reductions.

This is joint work with Vipul Goyal and Amit Sahai.


Vlad Kolesnikov, Bell Labs

Title: Overlaying Circuit Clauses for Secure Computation

An important drawback of circuit-based SFE is that its cost is linear in the circuit size, even though often only a small portion of the circuit needs to be evaluated. This is the case, e.g., when the program contains switch statements and evaluates one of several branches based on a private input or an internal variable. We show how to greatly reduce this overhead. Our approach relies on graph-theoretic and cryptographic technical contributions.

First, given a set S = {C1, ..., Ck} of Boolean circuits, we show how to construct a circuit C0 that is universal for S and is much smaller than Valiant's universal circuit or a circuit incorporating all of C1,...,Ck. Namely, given C1,...,Ck and viewing them as directed acyclic graphs (DAGs) D1, ..., Dk, we embed them in a new graph D0. The embedding is such that SFE of any of C1,...,Ck can be implemented by an SFE of a (specially programmed) circuit corresponding to D0. In our experiments on 32 diverse circuits, we obtain a factor-6 reduction in circuit size, i.e. |D0| < SUM |Di| / 6.

Second, we show how to improve Garbled Circuit (GC) and GMW-based SFE using such an S-universal circuit. We also improve semi-private function evaluation (SPF-SFE), whose goal is to hide (privacy-critical) portions of the evaluated function from the GC evaluator.

In SPF-SFE, there is no cost for programming the clauses. In the GC case, compared to the state of the art, we incur a factor-13 overhead due to private programming of the clauses.

The most interesting case here is the application to the GMW approach. We provide a novel observation that in GMW the cost of processing a gate is almost the same for 5 or more Boolean inputs, as it is for the usual case of 2 Boolean inputs. In our context, this means that GMW gates can be programmed almost for free, based on the secret-shared programming of the clause.

This is joint work with Sean Kennedy and Gordon Wilfong (Bell Labs).


Ilan Komargodski, Weizmann Institute of Science

Title: How to Share a Secret, Infinitely

Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the k-threshold access structure, where the qualified subsets are those of size at least k. When k=2 and there are $n$ parties, there are schemes where the size of the share each party gets is roughly log(n) bits, and this is tight even for secrets of 1 bit. In these schemes, the number of parties n must be given in advance to the dealer.

We consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the t-th arriving party as small a share as possible, as a function of t. Our main result is such a scheme for the k-threshold access structure where the share size of party t is (k-1)*log t + poly(k)*o(log t). For k=2 we observe an equivalence to prefix codes and present matching upper and lower bounds of the form log t + loglog t + logloglog t + O(1). Finally, we show that for any access structure there exists such a secret sharing scheme with shares of size 2^{t-1}.
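
As background for the k=2 prefix-code connection, the sketch below implements the Elias gamma code, a simple prefix-free code that assigns the integer t a codeword of length 2*floor(log2 t) + 1; the near-optimal bounds mentioned above are tighter (log t + loglog t + logloglog t + O(1)). This is an illustrative prefix code, not the construction from the talk.

    def elias_gamma(t: int) -> str:
        """Prefix-free code for positive integers: |code(t)| = 2*floor(log2 t) + 1."""
        assert t >= 1
        b = bin(t)[2:]                  # binary representation of t
        return "0" * (len(b) - 1) + b   # pad with len(b)-1 leading zeros

    # Prefix-freeness: no codeword is a prefix of another.
    codes = [elias_gamma(t) for t in range(1, 200)]
    for i, c in enumerate(codes):
        for j, d in enumerate(codes):
            if i != j:
                assert not d.startswith(c)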

Based on a joint work with Moni Naor and Eylon Yogev.


Swastik Kopparty, Rutgers University

Title: Decoding Reed-Muller Codes on Product Sets

We give a polynomial time algorithm to decode multivariate polynomial codes of degree $d$ up to half their minimum distance, when the evaluation points are an arbitrary product set $S^m$, for every $d < |S|$. Previously known algorithms can achieve this only if the set $S$ has some very special algebraic structure, or if the degree $d$ is significantly smaller than $|S|$. We also give a near-linear time randomized algorithm, which is based on tools from list-decoding, to decode these codes from nearly half their minimum distance, provided $d < (1-\epsilon)|S|$ for constant $\epsilon > 0$.

Our result gives an $m$-dimensional generalization of the well known decoding algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic version of the Schwartz-Zippel lemma.
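
To recall the Schwartz-Zippel lemma that this result algorithmizes: a nonzero m-variate polynomial of degree d vanishes at a random point of a product set S^m with probability at most d/|S|, which yields the standard randomized identity test sketched below (generic background, not the decoding algorithm itself).

    import random

    def probably_zero(poly, m, S, trials=20):
        """Randomized identity test for a polynomial given as a function of
        m field elements; a nonzero poly of degree d passes each trial with
        probability at most d/|S|."""
        for _ in range(trials):
            point = [random.choice(S) for _ in range(m)]
            if poly(*point) != 0:
                return False        # definitely nonzero
        return True                 # zero with high probability

    S = list(range(97))             # evaluation set inside GF(97)
    f = lambda x, y: (x * y - y * x) % 97      # identically zero
    g = lambda x, y: (x * x + y + 1) % 97      # nonzero, degree 2
    assert probably_zero(f, 2, S)
    assert not probably_zero(g, 2, S)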


Ranjit Kumaresan, MIT

Title: Privacy-Preserving Smart Contracts

Cryptographic technologies such as encryption and authentication provide stability to modern electronic commerce. Recent technologies such as Bitcoin have the potential to further enhance the way we conduct electronic commerce. In this talk, I will describe my work in developing a robust theory of privacy-preserving contracts that are self-enforcing and do not require third-party intervention. Starting from Bitcoin-inspired abstractions, the theory enables building provably secure and scalable applications on top of cryptocurrencies.


Xin Li, Johns Hopkins University

Title: Almost Optimal Non-malleable Extractors and Privacy Amplification Protocols

Privacy amplification is a basic problem in cryptography, where two parties communicate over an adversarially controlled channel to convert their shared weak random secret into nearly uniform secret random strings. The primary goal is to use as few rounds of interaction as possible while keeping the entropy loss as small as possible. A key ingredient in designing privacy amplification protocols is an object called a non-malleable extractor. Previously, explicit constructions of such extractors required entropy at least log^2(n/\epsilon) for input length n and error \epsilon, which translates into a security parameter of at most \sqrt{k} when the shared secret has entropy k.

In this talk I'll describe the first explicit construction of a non-malleable extractor that only requires entropy log^{1+o(1)} (n/\epsilon), which in turn gives optimal privacy amplification protocols for nearly all possible security parameters.

Based on joint work with Eshan Chattopadhyay.


Shachar Lovett, UC San Diego

Title: Random local functions as pseudo-random generators

Suppose that you have n truly random bits x_1,...,x_n, and you want to use them to generate m>>n pseudo-random bits y_1,...,y_m using a local encoding, i.e. each output bit y_i is a function of d=O(1) of the random bits. In the polynomial regime of m=n^s, s>1, the only known solution is via "random local functions", where a predicate P:{0,1}^d -> {0,1} is fixed, and each y_i is computed by applying P to a random (and publicly known) subset of x_1,...,x_n.
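
A minimal sketch of the "random local function" construction itself is given below; the 5-bit predicate P is an arbitrary illustrative choice (the talk characterizes which predicates actually yield small-bias or algebraically robust generators).

    import random

    def random_local_prg(x, m, d, P):
        """Expand n truly random bits x into m output bits: output i applies
        the fixed predicate P to a public random d-subset of the input bits."""
        n = len(x)
        subsets = [random.sample(range(n), d) for _ in range(m)]  # publicly known
        y = [P(*(x[j] for j in S)) for S in subsets]
        return y, subsets

    # Illustrative predicate on d=5 bits: XOR of three bits plus an AND of two.
    P = lambda a, b, c, d_, e: (a ^ b ^ c) ^ (d_ & e)

    n = 64
    x = [random.getrandbits(1) for _ in range(n)]
    y, subsets = random_local_prg(x, m=n ** 2, d=5, P=P)   # m = n^2 output bits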

In this work, we characterize which properties of P are needed for: (i) the sequence of pseudo-random bits to be a small-bias generator (that is, to fool F_2-linear tests); and (ii) the sequence of pseudo-random bits to be resilient to algebraic attacks (such as linearization attacks, XL, F4, F5, etc.), and more generally attacks expressed in the Polynomial Calculus proof system.

This extends previous works by several authors, which answered this question in some restricted settings; it proves and/or refutes several recent conjectures for such a characterization, as well as conjectures for the related problem of refuting a random CSP.

Joint work with Benny Applebaum. http://eccc.hpi-web.de/report/2015/172/.


Tal Malkin, Columbia University

Title: Non-malleable Codes for Bounded Depth, Bounded Fan-In Circuits

Informally, non-malleable codes provide a means of transforming an adversarial channel into a channel whose output is either identical to or unrelated to the input. We show how to construct efficient, unconditionally secure non-malleable codes for bounded output locality. In particular, our scheme is resilient against functions such that any output bit depends on at most n^δ bits, where n is the total number of bits in a codeword and 0 ≤ δ < 1 is a constant. Notably, this tampering class includes NC^0.

Joint work with Marshall Ball, Dana Dachman-Soled, and Mukul Kulkarni.


Moni Naor, Weizmann Institute of Science

Title: Bloom Filters and Cryptography

Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency or correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure.

In this talk, we will consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we will consider Bloom filters, a data structure for approximate set membership, and prove a tight connection between Bloom filters in this model and cryptography.
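
For reference, a minimal Bloom filter in the standard (non-adversarial) setting is sketched below; the filter size and the way the k hash positions are derived are illustrative choices, not part of the talk's results.

    import hashlib

    class BloomFilter:
        def __init__(self, m_bits=1024, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits)

        def _positions(self, item):
            # Derive k indices from salted hashes (an illustrative choice).
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h, "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p] = 1

        def might_contain(self, item):
            # No false negatives; false positives occur with some probability.
            return all(self.bits[p] for p in self._positions(item))

    bf = BloomFilter()
    bf.add("alice")
    assert bf.might_contain("alice")   # always true for inserted items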

Joint work with Eylon Yogev.


Kobbi Nissim, Harvard University and Ben-Gurion University

Title: Accessing Data While Preserving Privacy

We initiate a formal study of the privacy-efficiency tradeoff of secure database systems. Such systems allow storing data on a remote server and accessing it efficiently while maintaining the privacy of the stored data. While strong cryptographic tools (such as FHE, ORAM, SFE) can be used for this task, implementations of secure database systems also experiment with a combination of weaker primitives with the goal of striking a good privacy-efficiency tradeoff. We provide abstract models that capture the basic properties of secure database systems and identify their fundamental leakage channels. With these models we can provide an implementation independent investigation of their inherent security-efficiency tradeoffs and point to some inherent limitations. In particular, we provide generic reconstruction attacks where the server (or an eavesdropper on the communication line) recovers the secret attributes of records stored in the database.

We also present a new model of differentially private storage. We give a generic construction of differentially private storage that combines ORAM and differentially private sanitizers, as well as more efficient constructions and lower bounds for the case where the access pattern is revealed.

We have implemented some of our attacks and algorithms, and report on their efficiency.

Joint work with Georgios Kellaris, George Kollios, and Adam O'Neill.


Charalampos Papamanthou, University of Maryland

Title: IntegriDB: Verifiable SQL for Outsourced Databases

We present INTEGRIDB, a system allowing a data owner to outsource storage of a database to an untrusted server, and then enable anyone to perform verifiable SQL queries over that database. Our system handles a rich subset of SQL queries, including multidimensional range queries, JOIN, SUM, MAX/MIN, COUNT, and AVG, as well as (limited) nestings of such queries. Even for tables with 10^5 entries, INTEGRIDB has small proofs (a few KB) that depend only logarithmically on the size of the database, low verification time (tens of milliseconds), and feasible server computation (under a minute). Efficient updates are also supported. We prove security of INTEGRIDB based on known cryptographic assumptions, and demonstrate its practicality and expressiveness via performance measurements and verifiable processing of SQL queries from the TPC-H and TPC-C benchmarks.


Chris Peikert, University of Michigan

Title: How (Not) to Instantiate Ring-LWE

The *learning with errors over rings* (Ring-LWE) problem---or more accurately, family of problems---has emerged as a promising foundation for cryptography due to its practical efficiency, conjectured quantum resistance, and provable *worst-case hardness*: breaking certain instantiations of Ring-LWE is at least as hard as quantumly approximating the Shortest Vector Problem on any ideal lattice in the ring.
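
To fix notation, the sketch below generates a single toy Ring-LWE sample b = a*s + e in the ring Z_q[x]/(x^n + 1). The parameters and the narrow error distribution used as a stand-in are illustrative only and say nothing about which instantiations are safe.

    import random

    N, Q = 16, 3329      # toy ring Z_Q[x]/(x^N + 1); not secure parameters

    def poly_mul(a, b):
        """Negacyclic multiplication modulo x^N + 1 and modulo Q."""
        res = [0] * (2 * N)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                res[i + j] += ai * bj
        return [(res[i] - res[i + N]) % Q for i in range(N)]   # x^N = -1

    def small_poly():
        # Stand-in for a narrow error/secret distribution.
        return [random.randint(-2, 2) % Q for _ in range(N)]

    a = [random.randrange(Q) for _ in range(N)]   # uniform public polynomial
    s, e = small_poly(), small_poly()             # secret and error
    b = [(x + y) % Q for x, y in zip(poly_mul(a, s), e)]
    # A Ring-LWE sample is the pair (a, b); the problem is to recover s
    # (search) or to distinguish b from uniform (decision).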

Despite this hardness guarantee, several recent works have shown that some instantiations of Ring-LWE (for particular rings and error distributions) can be broken by certain algebraic attacks. While the affected instantiations are not supported by worst-case hardness theorems, this state of affairs raises natural questions about what other instantiations might be vulnerable, and in particular whether certain classes of rings are inherently unsafe for Ring-LWE.

In this talk we review the known attacks on Ring-LWE and vulnerable instantiations. We give a new, unified exposition which reveals an elementary geometric reason why the attacks work, and provide rigorous analysis to explain certain phenomena that were previously only exhibited by experiments. In all cases, the insecurity of an instantiation is due to the fact that its error distribution is insufficiently "well spread" relative to its ring.

On the positive side, we show that any Ring-LWE instantiation which satisfies (or only almost satisfies) the hypotheses of the "worst-case hardness of search" theorem is *provably immune* to broad generalizations of the above-described attacks. This holds for the ring of integers in any number field, so the rings themselves are not the source of insecurity in the vulnerable instantiations. Moreover, the hypotheses of the worst-case hardness theorem are nearly minimal ones which provide these immunity guarantees.


Tom Ristenpart, Cornell Tech

Title: Stealing Machine Learning Models and Using Them to Violate Privacy

Machine learning models are increasingly being trained on sensitive data and deployed widely. This means that attackers can obtain access to the models directly or indirectly via APIs that allow querying feature vectors to receive back predictions based on the model. I will present our recent work exploring confidentiality and privacy threats in this context.

First, I will discuss our recent work showing how to steal machine learning models from prediction APIs. We show algorithms for extracting replicas of target linear models, neural networks, and decision trees from production machine learning cloud services such as Amazon Prediction API and BigML. I'll then explore one specific privacy threat that arises from access to models, what we call model inversion. In the model inversion attacks we develop, an adversary can exploit access to a model and some partial information about a person to improve their ability to guess sensitive information about that person, such as their genotype, an image of their face, and private lifestyle behaviors.

This talk will cover joint work with Matthew Fredrikson, Ari Juels, Eric Lantz, Somesh Jha, Simon Lin, David Page, Michael K. Reiter, Florian Tramer, and Fan Zhang.


Noga Ron-Zewi, Institute for Advanced Study and DIMACS

Title: Towards Optimal Deterministic Coding for Interactive Communication

We show an efficient, deterministic interactive coding scheme that simulates any interactive protocol under random errors with nearly optimal parameters.

Specifically, our coding scheme achieves a communication rate of 1 - Õ(\sqrt{\epsilon}) and a failure probability of exp(-n/\log n), where n is the protocol length and each bit is flipped independently with constant probability \epsilon. Prior to our work, all nontrivial deterministic schemes (either efficient or not) had a rate bounded away from 1. Furthermore, the best failure probability achievable by an efficient deterministic scheme with constant rate was only quasi-polynomial, i.e., of the form exp(-polylog(n)) (Braverman, ITCS 2012). The rate of our scheme is essentially optimal (up to poly-logarithmic factors) by a result of Kol and Raz (STOC 2013).

Our coding scheme builds on the recent work of Brakerski and Kalai (FOCS 2012) which gave the first efficient randomized interactive coding scheme with constant rate and exponentially small failure probability, and was inspired in turn by some other known cryptographic protocols. We observe that the coding scheme of Brakerski and Kalai can alternatively be viewed as an instance of code concatenation, a notion standard in coding theory, and exploit this connection to obtain improved interactive coding schemes also in the deterministic setting. This highlights a new connection between cryptographic and coding-theoretic techniques.

Joint work with Ran Gelles, Bernhard Haeupler, Gillat Kol and Avi Wigderson


Guy Rothblum, Samsung Research America

Title: From Obfuscation to the Security of Fiat-Shamir for Proofs

The Fiat-Shamir paradigm is a heuristic for converting three-round identification schemes into signature schemes, and more generally, for collapsing rounds in constant-round public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and its security has been the focus of extensive study. In particular, this paradigm was shown to be secure in the so-called Random Oracle Model. However, in the plain model, mainly negative results were shown.
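
As a reminder of what the heuristic does, the sketch below applies Fiat-Shamir to a Schnorr-style identification protocol: the verifier's random challenge is replaced by a hash of the transcript. The group parameters are toy values chosen for illustration; this is the generic paradigm, not the construction from the talk.

    import hashlib, random

    # Toy Schnorr-style group (illustrative sizes only): prime P, prime Q | P-1,
    # and G generating the order-Q subgroup of Z_P^*.
    P, Q = 607, 101
    G = pow(2, (P - 1) // Q, P)        # G has order Q

    def hash_challenge(y, a):
        # Fiat-Shamir: the challenge is a hash of the statement and commitment.
        h = hashlib.sha256(f"{G}|{y}|{a}".encode()).digest()
        return int.from_bytes(h, "big") % Q

    def prove(x):
        """Non-interactive proof of knowledge of x such that y = G^x mod P."""
        y = pow(G, x, P)
        r = random.randrange(Q)
        a = pow(G, r, P)               # prover's commitment
        c = hash_challenge(y, a)       # hashed (instead of interactive) challenge
        z = (r + c * x) % Q            # response
        return y, (a, z)

    def verify(y, proof):
        a, z = proof
        c = hash_challenge(y, a)
        return pow(G, z, P) == (a * pow(y, c, P)) % P

    x = random.randrange(1, Q)
    y, proof = prove(x)
    assert verify(y, proof)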

We give a positive result for the security of this paradigm in the plain model. Specifically, we construct a hash function for which the Fiat-Shamir paradigm is secure when applied to proofs (as opposed to arguments), assuming the existence of a sub-exponentially secure indistinguishability obfuscator, the existence of an exponentially secure input-hiding obfuscator for the class of multi-bit point functions, and the existence of a sub-exponentially secure one-way function.

Joint work with Yael Kalai and Ron Rothblum.


Ron Rothblum, MIT

Title: Constant Round Interactive Proofs for Delegating Computations

Interactive proofs, introduced by Goldwasser, Micali and Rackoff, have had a dramatic impact on Complexity Theory and Cryptography. In particular, the celebrated IP=PSPACE Theorem [LFKN92,Shamir92] allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a large number of communication rounds and heavy computation for generating the proof.

We introduce new interactive proofs that are very efficient in the number of rounds and in computation time, and that are particularly well suited for delegating bounded-space computations (e.g., in the context of cloud computing). Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof that satisfies the following strict efficiency requirements:

(1) the honest prover runs in polynomial time, (2) the verifier is almost linear time (and under some conditions even sublinear), and (3) the interaction consists of only a constant number of communication rounds. Prior to this work, very little was known about the power of efficient, constant-round interactive proofs.

Joint work with Omer Reingold and Guy Rothblum.


Shubhangi Saraf, Rutgers University

Title: High rate locally-correctable and locally-testable codes with sub-polynomial query complexity

Locally decodable codes (and the closely related locally correctable codes) are error-correcting codes that admit efficient decoding algorithms: They give a method to encode k-bit messages into n-bit codewords such that even after a constant fraction of the bits of the codeword get corrupted, any bit of the original message (or of the original codeword, for locally correctable codes) can be recovered by looking at only a sublinear or even just constant number of bits of the corrupted codeword.
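
The textbook example of such a code is the Hadamard code, which is 2-query locally decodable but has exponentially small rate, exactly the regime the high-rate results in this talk move away from. A sketch of its local decoder, assuming the corruption rate is well below 1/4:

    import random

    def hadamard_encode(x):
        """Codeword lists <x, a> mod 2 for every a in {0,1}^k."""
        k = len(x)
        return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x)) % 2
                for a in range(2 ** k)]

    def locally_decode_bit(codeword, k, i, queries=101):
        """Recover message bit x_i with 2 queries per trial, majority vote."""
        votes = 0
        for _ in range(queries):
            a = random.randrange(2 ** k)
            b = a ^ (1 << i)                      # flip coordinate i
            votes += codeword[a] ^ codeword[b]    # = x_i if both are uncorrupted
        return int(votes > queries // 2)

    x = [1, 0, 1, 1, 0, 1, 0, 0]
    cw = hadamard_encode(x)
    for pos in random.sample(range(len(cw)), len(cw) // 16):   # corrupt ~6%
        cw[pos] ^= 1
    assert locally_decode_bit(cw, len(x), 2) == x[2]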

Locally testable codes are codes that admit efficient testing algorithms: They give a method to encode k-bit messages into n-bit codewords such that there is an efficient tester that queries only a sublinear or even just constant number of bits of a given n-bit string and decides whether the string is close to or far from a true codeword.

The tradeoff between the rate of a code and the locality/efficiency of its decoding and testing algorithms has been studied extensively in the past decade, with numerous applications to complexity theory and pseudorandomness.

In this talk I will discuss some recent results giving efficient sub-polynomial query decoding and testing algorithms for high rate error correcting codes. I will also highlight some of the most interesting challenges that remain.

Based on joint work with Swastik Kopparty, Or Meir and Noga Ron-Zewi.


Ido Shahaf, Hebrew University

Title: Searchable Symmetric Encryption: Optimal Locality in Linear Space via Two Dimensional Balanced Allocations

Searchable symmetric encryption (SSE) enables a client to store a database on an untrusted server while supporting keyword search in a secure manner. Despite the rapidly increasing interest in SSE technology, experiments indicate that the performance of the known schemes scales badly to large databases. Somewhat surprisingly, this is not due to their usage of cryptographic tools, but rather due to their poor locality (where locality is defined as the number of non-contiguous memory locations the server accesses with each query). The only known schemes that do not suffer from poor locality suffer either from an impractical space overhead or from an impractical read efficiency (where read efficiency is defined as the ratio between the number of bits the server reads with each query and the actual size of the answer).

We construct the first SSE schemes that simultaneously enjoy optimal locality, optimal space overhead, and nearly-optimal read efficiency. Specifically, for a database of size $N$, under the modest assumption that no keyword appears in more than $N^{1 - 1/\log \log N}$ documents, we construct a scheme with read efficiency $\tilde{O}(\log \log N)$. This essentially matches the lower bound of Cash and Tessaro (EUROCRYPT '14) showing that any SSE scheme must be sub-optimal in either its locality, its space overhead, or its read efficiency. In addition, even without making any assumptions on the structure of the database, we construct a scheme with read efficiency $\tilde{O}(\log N)$.

Our schemes are obtained via a two-dimensional generalization of the classic balanced allocations ("balls and bins") problem that we put forward. We construct nearly-optimal two-dimensional balanced allocation schemes, and then combine their algorithmic structure with subtle cryptographic techniques.
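
As background, the classic one-dimensional balanced-allocations process that the construction generalizes: placing each ball in the lighter of two random bins keeps the maximum load at O(log log n), versus roughly log n / log log n for a single random choice. A quick simulation sketch (illustrative, not the two-dimensional scheme of the paper):

    import random

    def max_load(n_balls, n_bins, choices):
        bins = [0] * n_bins
        for _ in range(n_balls):
            candidates = [random.randrange(n_bins) for _ in range(choices)]
            best = min(candidates, key=lambda b: bins[b])   # pick the lighter bin
            bins[best] += 1
        return max(bins)

    n = 100_000
    print("one choice :", max_load(n, n, choices=1))   # ~ log n / log log n
    print("two choices:", max_load(n, n, choices=2))   # ~ log log n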

Joint work with Gilad Asharov, Moni Naor and Gil Segev.


Barak Shani, The University of Auckland

Title: Applications and Limitations of the SFT Algorithm

The Goldreich-Levin hardcore bit is an early example of the use of learning theory in cryptography. The work of Kushilevitz and Mansour and of Akavia, Goldwasser and Safra on learning significant Fourier transforms (SFT) led to interesting applications in cryptography and to bit security in particular. However, some of the most important bit security problems (e.g. the security of ECDH bits) are still open. In my talk I will explain some of the applications of the SFT algorithm, as well as the open problems. I will also present an obstacle that limits the uses of the SFT algorithm.
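
To make the object concrete: the Fourier coefficient of a Boolean function f at a character S is an expectation, so any single coefficient can be estimated from random samples, as sketched below; the harder task addressed by the SFT algorithm is to find all significant coefficients efficiently. The function and parameters here are illustrative.

    import random

    def chi(S, x):
        """Character (-1)^{<S, x>} over {0,1}^n, with S and x as bitmasks."""
        return -1 if bin(S & x).count("1") % 2 else 1

    def estimate_fourier_coeff(f, n, S, samples=20000):
        """Empirical estimate of f_hat(S) = E_x[ f(x) * chi_S(x) ],
        for f taking values in {-1, +1}."""
        total = 0
        for _ in range(samples):
            x = random.randrange(2 ** n)
            total += f(x) * chi(S, x)
        return total / samples

    n = 10
    parity_mask = 0b1011001
    f = lambda x: chi(parity_mask, x)                     # f is itself a character
    print(estimate_fourier_coeff(f, n, parity_mask))      # close to 1.0
    print(estimate_fourier_coeff(f, n, 0b1))              # close to 0.0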


Roberto Tamassia, Brown University

Title: Efficient Verifiable Range and Closest Point Queries in Zero-Knowledge

We present an efficient method for answering one-dimensional range and closest-point queries in a verifiable and privacy-preserving manner. We consider a model where a data owner outsources a dataset of key-value pairs to a server, who answers range and closest-point queries issued by a client and provides proofs of the answers. The client verifies the correctness of the answers while learning nothing about the dataset besides the answers to the current and previous queries. Our work yields for the first time a zero-knowledge privacy assurance to authenticated range and closest-point queries. Previous work leaked the size of the dataset and used an inefficient proof protocol. Our construction is based on hierarchical identity-based encryption. We prove its security and analyze its efficiency both theoretically and with experiments on synthetic and real data.

Joint work with Esha Ghosh and Olga Ohrimenko.


Yael Tauman Kalai, Microsoft Research

Title: Interactive Coding with Optimal Round and Communication Blowup

We construct an interactive coding scheme, a notion introduced by Schulman (FOCS 1992, STOC 1993). Loosely speaking, we show how to convert any two-party interactive protocol into one that is resilient to a constant fraction of *insertion* and *deletion* errors, while preserving computational efficiency and blowing up the communication complexity and the *round* complexity only by a constant factor that approaches 1 as the error rate approaches 0. We also construct an error-resilient scheme with communication blowup of 1+\tilde{O}(eps^{1/4}), where eps is a bound on the error rate, and round blowup of O(1). We give evidence that this communication blowup is optimal (in our model).

Previous works were not concerned with the round complexity and typically assumed that one bit is sent per round. Moreover, most previous works considered only substitution errors or erasure errors.

We consider a model in which, in each round, each party may send a message of *arbitrary* length; the lengths of the messages and of the protocol may be adaptive and may depend on the private inputs of the parties and on previous communication. This model, known as the (synchronous) message-passing model, is commonly used in distributed computing and is the most common model used in cryptography.

This is joint work with Klim Efremenko and Elad Haramaty.


Nikos Triandopoulos, Boston University

Title: Taking Authenticated Range Queries to Arbitrary Dimensions

We study the problem of authenticated multi-dimensional range queries over outsourced databases, where an owner outsources its database to an untrusted server, which maintains it and answers queries to clients. Previous schemes either scale exponentially in the number of query dimensions, or rely on heuristic data structures without provable bounds. Most importantly, existing work requires a number of structures exponential in the number of database attributes in order to support queries on every possible combination of dimensions in the database. We present the first schemes that (i) scale linearly with the number of dimensions, and (ii) support queries on any set of dimensions with setup cost and storage linear in the number of attributes. We achieve this through an elaborate fusion of novel and existing set-operation sub-protocols. We prove the security of our solutions relying on the q-Strong Bilinear Diffie-Hellman assumption, and experimentally confirm their feasibility.

Joint work with Dimitris Papadopoulos and Stavros Papadopoulos [CCS, 2014].


Gregory Valiant, Stanford University

Title: Information-Theoretically Secure Databases

We introduce the notion of a database system that is information theoretically "Secure In Between Accesses"--a database system with the properties that 1) users can efficiently access their data, and 2) while a user is not accessing their data, the user's information is information theoretically secure against malicious agents, provided that certain requirements on the maintenance of the database are realized. We stress that the security guarantee is information theoretic and everlasting: it relies neither on unproved hardness assumptions, nor on the assumption that the adversary is computationally or storage bounded.

We propose a realization of such a database system and prove that a user's stored information, in between times when it is being legitimately accessed, is information theoretically secure both against adversaries who interact with the database in the prescribed manner and against adversaries who have installed a virus that has access to the entire database and communicates with the adversary.

The central idea behind our design is the construction of a "re-randomizing database" that periodically changes the internal representation of the information that is being stored. To ensure security, these remappings of the representation of the data must be made sufficiently often in comparison to the amount of information that is being communicated from the database between remappings and the amount of local memory in the database that a virus may preserve during the remappings. The core of the proof of the security guarantee is the following communication/data tradeoff for the problem of learning sparse parities from uniformly random n-bit examples: any algorithm that can learn a parity of size k with probability at least p and extracts at most r bits of information from each example, must see at least p⋅(n/r)^{k/2}c_k examples.
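
The learning problem behind this tradeoff can be stated very compactly: given uniformly random n-bit examples labeled by the parity of an unknown k-subset, recover the subset. Below is a brute-force learner that uses the full r = n bits of every example (so it is not communication-bounded); the parameters are illustrative only, to make the problem statement concrete.

    import itertools, random

    def gen_examples(n, secret, num):
        """Uniform n-bit examples labeled by the parity of the `secret` bits."""
        data = []
        for _ in range(num):
            x = [random.getrandbits(1) for _ in range(n)]
            label = sum(x[i] for i in secret) % 2
            data.append((x, label))
        return data

    def brute_force_parity(data, n, k):
        """Try every k-subset; uses whole examples, i.e. r = n bits each."""
        for cand in itertools.combinations(range(n), k):
            if all(sum(x[i] for i in cand) % 2 == y for x, y in data):
                return cand
        return None

    n, k = 20, 3
    secret = (2, 7, 13)
    data = gen_examples(n, secret, num=200)
    assert brute_force_parity(data, n, k) == secret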

Joint work with Paul Valiant.


David Zuckerman, University of Texas at Austin

Title: Explicit Two-Source Extractors and Resilient Functions

We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C n for a large enough constant C. Our extractor outputs one bit and has error n^{-\Omega(1)}. The best previous extractor, by Bourgain, required each source to have min-entropy .499n.
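
For comparison, the classical explicit two-source extractor, inner product over GF(2) (Chor-Goldreich), requires each source to have min-entropy above n/2; a sketch of that baseline is below, to contrast with the polylogarithmic min-entropy achieved here. The inputs are arbitrary illustrative values.

    def inner_product_extractor(x: int, y: int) -> int:
        """One output bit <x, y> mod 2 from two independent n-bit sources;
        works when each source has min-entropy greater than n/2."""
        return bin(x & y).count("1") % 2

    # Example with two 16-bit samples (illustrative values).
    print(inner_product_extractor(0b1010_1100_0011_0101,
                                  0b0110_0001_1111_0010))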

A key ingredient in our construction is an explicit construction of a monotone, almost-balanced boolean function on n bits that is resilient to coalitions of size n^{1-delta}, for any delta>0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q=n^{1-\delta} bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q=n^{1/2 - \delta}. Non-malleability also plays a role in our construction.

Our explicit two-source extractor directly implies an explicit construction of a 2^{(log log N)^{O(1)}}-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.

Joint work with Eshan Chattopadhyay.

