Title: Non-malleable Codes in the Split-State Model
Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error correction (and even error detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message or a completely unrelated value. Although such codes do not exist if the family of tampering functions F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so-called split-state model. Here the message m is encoded into a constant number of shares, and the attacker is allowed to tamper arbitrarily with each share individually. Split-state tampering arises in many realistic applications, such as the design of non-malleable secret sharing schemes, motivating the question of designing efficient non-malleable codes in this model. Prior to this work, non-malleable codes in the split-state model received considerable attention in the literature, but were either (1) constructed in the random oracle model [DPW10], or (2) relied on advanced cryptographic assumptions (such as non-interactive zero-knowledge proofs and leakage-resilient encryption) [LL12], or (3) could only encode 1-bit messages [DKO13].
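To make the tampering model concrete, here is a toy Python sketch of the split-state interface. The XOR-sharing encoder shown is deliberately *not* non-malleable (flipping a bit of one share flips the same bit of the decoded message, a related value); it only illustrates that the adversary's two tampering functions act on the shares independently.

```python
import os

def encode(msg: bytes):
    """Toy 2-share split-state encoding via XOR secret sharing.
    NOT non-malleable -- for illustrating the interface only."""
    left = os.urandom(len(msg))
    right = bytes(a ^ b for a, b in zip(left, msg))
    return left, right

def decode(left: bytes, right: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(left, right))

def tamper(codeword, f, g):
    """Split-state tampering: f acts on the left share, g on the
    right share; each is arbitrary but independent of the other."""
    left, right = codeword
    return f(left), g(right)

# Flip one bit of a share: exactly the kind of "related" mauling
# that a non-malleable code must rule out (but XOR sharing allows).
flip = lambda s: bytes([s[0] ^ 1]) + s[1:]

msg = b"secret"
cw = encode(msg)
assert decode(*cw) == msg
assert decode(*tamper(cw, flip, lambda r: r)) != msg
```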
As our main result, we give two constructions of efficient, multi-bit, information-theoretically-secure non-malleable codes in the split-state model. Our first construction [ADL14] has two shares of length O(n^7) each (where n is the length of the message), and its analysis uses a surprising connection with results from additive combinatorics and linearity testing. In contrast, our second construction [ADK14] uses 5 shares of length O(n^2) each, and relies only on various properties of randomness extractors, such as the alternating-extraction theorem of [DP07].
Joint works with Yevgeniy Dodis, Tomasz Kazana and Shachar Lovett.
Title: SHA-3 and Tree-based Hash Modes
NIST announced a competition to develop a new cryptographic hash algorithm in 2007. After several workshops and three rounds of competition, Keccak was selected as the winner in 2012. Since the selection of the winner, NIST has been developing a Federal Information Processing Standard (FIPS) based on the Keccak algorithm. In August 2014, NIST will hold a workshop where several different modes of operation will be discussed. In particular, modes for tree-based hashing and incremental hashing will be presented to facilitate increased performance when hashing large datasets. Several questions arise regarding tree-based hashing. What kind of parallelism is most desired: SIMD, multiprocessor, or a combination? How should tree depth and computation overhead be handled? What are efficient incremental hashing techniques for deriving a new hash when only a small piece of the data is modified?
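To illustrate the incremental-hashing question, the following Python sketch (a plain Merkle tree over SHA-256, not any NIST-proposed mode) shows how modifying one leaf requires rehashing only the O(log n) nodes on the path from that leaf to the root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a binary Merkle tree; returns the list of levels, leaves
    first. Assumes the number of leaves is a power of two."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def update_leaf(levels, index, new_data):
    """Incremental update: rehash only the root path of one leaf."""
    levels[0][index] = h(new_data)
    for lvl in range(1, len(levels)):
        index //= 2
        left = levels[lvl - 1][2 * index]
        right = levels[lvl - 1][2 * index + 1]
        levels[lvl][index] = h(left + right)
    return levels[-1][0]  # the new root hash

leaves = [b"block0", b"block1", b"block2", b"block3"]
tree = build_tree(leaves)
root = update_leaf(tree, 2, b"block2-modified")
# Sanity check: the incremental root equals a full rebuild.
assert root == build_tree(
    [b"block0", b"block1", b"block2-modified", b"block3"])[-1][0]
```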
Title: Parallel Programming Models
Modern high-performance computing architectures are composed of many- and multicore processors with vector processing capabilities, which are often clustered in networks to achieve greater aggregate performance. In this talk, we provide a first glimpse at programming standards that address multithreading (OpenMP), vectorization and accelerator programming (OpenCL), and message passing (MPI), as well as issues one has to be aware of as a programmer.
Title: Multivariate Cryptography
Multivariate public key cryptosystems (MPKCs) are one of the four main families of post-quantum public key cryptosystems. In an MPKC, the public key is given by a set of quadratic polynomials, and its security is based on the hardness of solving a system of multivariate polynomial equations. This lecture gives a general introduction to multivariate public key cryptosystems, including the main designs, the main attack tools, and the mathematical theory behind them. We will present state-of-the-art research in the area. We will also present some results on how parallel computation tools (SIMD and multicore) have been used in implementations of MPKCs and the attacks on them.
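To give a feel for the objects involved, here is a toy Python sketch (illustrative only, not any deployed MPKC): it builds a random quadratic map over GF(2) and evaluates it. The map plays the role of the public key; evaluating it is cheap, while inverting it (solving the quadratic system) is the presumed hard problem.

```python
import random

def random_quadratic_map(n, m, seed=0):
    """m random quadratic polynomials in n variables over GF(2);
    this plays the role of a (toy) MPKC public key."""
    rng = random.Random(seed)
    polys = []
    for _ in range(m):
        quad = {(i, j): rng.randrange(2)
                for i in range(n) for j in range(i, n)}   # x_i * x_j terms
        lin = [rng.randrange(2) for _ in range(n)]        # linear terms
        const = rng.randrange(2)
        polys.append((quad, lin, const))
    return polys

def evaluate(polys, x):
    """Evaluate the public map on a bit vector x (all arithmetic mod 2)."""
    out = []
    for quad, lin, const in polys:
        v = const
        for (i, j), c in quad.items():
            v ^= c & x[i] & x[j]
        for i, c in enumerate(lin):
            v ^= c & x[i]
        out.append(v)
    return out

pk = random_quadratic_map(n=6, m=6, seed=42)
print(evaluate(pk, [1, 0, 1, 1, 0, 1]))  # a vector of 6 output bits
```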
Title: Functioning Hardware from Functional Specifications
For performance at low power, tomorrow's chips will be mostly application-specific logic only powered when needed. I propose synthesizing it from the functional language Haskell. My approach -- rewriting to a simple dialect that enables a syntax-directed translation -- enables parallelization and distributed memory systems. Transformations include scheduling arithmetic operations, replacing recursion with iteration, and improving data locality by inlining recursive types. I am developing a compiler based on these principles.
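As a rough illustration of one of the transformations named above, recursion can be rewritten into iteration via an accumulator; the Python sketch below is only an analogy, since the actual compiler operates on a Haskell dialect.

```python
# Recursive form, as a functional specification might express it:
def sum_list_rec(xs):
    if not xs:
        return 0
    return xs[0] + sum_list_rec(xs[1:])

# Step 1 of the rewrite: make it tail-recursive with an accumulator...
def sum_list_acc(xs, acc=0):
    if not xs:
        return acc
    return sum_list_acc(xs[1:], acc + xs[0])

# ...which mechanically becomes a loop. This form is hardware-friendly:
# a fixed set of state registers updated each cycle, no call stack.
def sum_list_iter(xs):
    acc = 0
    while xs:
        acc, xs = acc + xs[0], xs[1:]
    return acc

assert sum_list_rec([1, 2, 3]) == sum_list_iter([1, 2, 3]) == 6
```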
Title: Learning With Errors - Use and Cryptanalysis
The Learning With Errors (LWE) problem is one of the most important tools in lattice-based cryptography. In this talk, we show the typical framework for using LWE in a public-key encryption scheme and show how we used this framework to build a lightweight scheme. One of the main challenges is to find a suitable parameter set that provides security in the presence of high-speed computers and simultaneously allows the scheme to run on embedded devices. In order to do this, we have to estimate the efficacy of the best known attacks. We will therefore take a brief look at one of the most promising attacks, which is very well suited for parallelization, and show how to estimate its complexity.
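The typical framework can be sketched as a toy Regev-style scheme. The Python below uses deliberately tiny, insecure parameters, chosen so the accumulated error never exceeds q/4 and decryption always succeeds; a real scheme needs dimensions in the hundreds and carefully analyzed error distributions.

```python
import random

# Toy parameters (insecure; illustration only). Total error in a
# ciphertext is at most m = 16 < q/4 = 64, so decryption is exact.
n, m, q = 8, 16, 257
rng = random.Random(42)

def keygen():
    s = [rng.randrange(q) for _ in range(n)]                    # secret
    A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [rng.choice([-1, 0, 1]) for _ in range(m)]              # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]                                     # b = As + e
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if rng.randrange(2)]  # random subset of rows
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(sk, ct):
    u, v = ct
    d = (v - sum(u[j] * sk[j] for j in range(n))) % q
    # d is (noise) for bit 0, (q/2 + noise) for bit 1:
    return 1 if q // 4 < d < 3 * q // 4 else 0

pk, sk = keygen()
for bit in (0, 1):
    assert decrypt(sk, encrypt(pk, bit)) == bit
```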
Title: Multicore Architectures
Modern multicore architectures exhibit complex topologies and multiple bottlenecks. On the other hand, processor designers have done a great job in making the "common case" fast, i.e., optimizing the hardware for executing "prevalent" code patterns with high performance. Thus, simple machine models often allow us to make sense of the typical performance behavior of loop-based program code. In this talk we describe contemporary multicore architecture, as far as it is relevant for understanding the key performance features. Then we introduce simple performance models that are based on the central concept of "bottlenecked computing". These models provide insight into the relevant performance limitations and guide the way towards promising code optimizations.
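A minimal example of such a bottleneck model is the roofline-style bound, sketched here in Python with hypothetical machine numbers (500 GF/s peak, 50 GB/s memory bandwidth):

```python
def attainable_gflops(peak_gflops, bw_gbytes, intensity_flops_per_byte):
    """Roofline-style bound: achievable performance is limited by
    whichever bottleneck (compute or memory bandwidth) saturates first."""
    return min(peak_gflops, bw_gbytes * intensity_flops_per_byte)

# A daxpy-like loop does ~2 flops per 24 bytes of traffic, i.e.
# intensity 1/12 flop/byte -- firmly memory bound on this machine:
print(attainable_gflops(500, 50, 2 / 24))
# A cache-blocked matrix multiply reaches high intensity -- compute bound:
print(attainable_gflops(500, 50, 30))
```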
Title: Implementing Homomorphic Encryption
In this talk I will survey homomorphic encryption and the lattice-based and algebraic techniques that are used in its implementation. I will cover at a high level the constructions that we have implemented, and discuss some of the recent work that we did to improve their efficiency.
Title: Secure Multi-party Computation on Multicore Hardware
Secure multi-party computation (MPC) is often based on arithmetic or boolean circuits, which have ample implicit parallelism. This makes MPC an ideal candidate for multicore implementation. SMPCC is a compiler that transforms C programs into distributed multi-party computations, where each party can take advantage of a multicore processor. I will discuss the design and implementation of SMPCC, with a focus on its multicore architecture.
Joint work with Yevgeniy Vahlis (Bionym Inc.)
Title: Public-Key Exchange Using Extensions by Endomorphisms and Matrices over a Galois Field
In this talk, I describe a public key exchange protocol based on an extension of a semigroup by automorphisms (more generally, by endomorphisms). One of its special cases is the standard Diffie-Hellman protocol, which is based on a cyclic group. However, when our protocol is used with a non-commutative (semi)group, it acquires several useful features that make it compare favorably to the Diffie-Hellman protocol. Here we suggest a couple of instantiations of our general protocol, with a non-commutative semigroup of matrices over a Galois field as the platform and show that security of the relevant protocols is based on quite different assumptions compared to that of the standard Diffie-Hellman protocol. Our key exchange protocols with this platform are quite efficient, too: with private keys of size 127 bits and public keys of size 1016 bits, the run time is 0.03 s on a typical desktop computer.
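For reference, the standard Diffie-Hellman special case mentioned above can be sketched in a few lines of Python. The toy platform here is the cyclic group Z_p^* with a 127-bit Mersenne prime; the protocol in the talk replaces this commutative group with a non-commutative matrix semigroup.

```python
import random

# Classical Diffie-Hellman over Z_p^*. Toy choice of parameters:
# p = 2^127 - 1 is a Mersenne prime; g = 3 is the public base.
p, g = 2**127 - 1, 3

a = random.randrange(2, p - 1)   # Alice's private key
b = random.randrange(2, p - 1)   # Bob's private key
A = pow(g, a, p)                 # Alice sends g^a mod p
B = pow(g, b, p)                 # Bob sends g^b mod p

# Both parties derive the same shared secret g^(ab) mod p:
assert pow(B, a, p) == pow(A, b, p)
```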
This is joint work with H. T. Lam and V. Shpilrain.
Title: Parallelization of the Gauss Sieve Heuristic on Multi-core CPU-chips
Lattice-based cryptography is particularly attractive since lattice-based cryptosystems are believed to be quantum-immune, i.e., resistant to attacks mounted with quantum computers. The safety of lattice-based cryptosystems is based on the hardness of specific lattice problems, and becomes vulnerable only if these problems are solved in a timely manner. One of these problems is to find a very short vector in a given lattice, a problem referred to as α-SVP, an approximation of the Shortest Vector Problem (SVP). The SVP consists in finding a nonzero vector v of a given lattice L whose norm ||v|| is the smallest among the norms of all nonzero vectors in L; this minimum norm is denoted by λ1(L). The α-SVP consists in finding a nonzero vector v of a given lattice L whose norm is at most αλ1(L). Algorithms that aim at solving the α-SVP often make use of algorithms that solve the SVP, referred to as SVP-solvers. Hence, the safety of lattice-based cryptosystems is directly connected with the tractability of SVP-solvers. However, the computational tractability of these algorithms has yet to be determined, especially in the presence of brand-new high-performance architectures, such as multi-core CPUs. This talk will address the implementation of two important SVP-solvers, ListSieve and GaussSieve, on multi-core CPU chips. In particular, it will address (1) the challenges involved in the parallelization of these algorithms on shared-memory systems, and how the inherently sequential steps of the algorithms can be parallelized very effectively with very fine-grained synchronization primitives, (2) the opportunities for the optimization and vectorization of their computational work-flows, and (3) algorithmic improvements that the algorithms benefit from.
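The heart of GaussSieve is a pairwise reduction step, in which one lattice vector is shortened against another; the sieve applies it to every pair in a growing list. The following simplified, sequential Python sketch shows just that step (the talk concerns how to parallelize the surrounding loop):

```python
def reduce_pair(v, u):
    """Reduce v against u by subtracting the integer multiple of u
    that minimizes ||v - k*u||; since k = 0 is a candidate, the
    result is never longer than v. This is the core GaussSieve step."""
    uu = sum(x * x for x in u)
    if uu == 0:
        return v
    k = round(sum(a * b for a, b in zip(v, u)) / uu)
    return [a - k * b for a, b in zip(v, u)]

v, u = [7, 3], [4, 1]
r = reduce_pair(v, u)        # k = round(31/17) = 2, so r = [-1, 1]
assert sum(x * x for x in r) <= sum(x * x for x in v)
```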
Title: Behavioral Cybersecurity
Undoubtedly one of the biggest challenges facing the Cybersecurity community is the human weaknesses in the use or abuse of protocols designed to improve security, such as in the community's celebrated efforts in cryptology and digital signatures. These problems are not new, as has been well-documented in the Bletchley Park efforts from World War II.
However, despite our technical expertise, few of us are equipped to understand and benefit from behavioral-science questions such as: What motivates a hacker to launch a DDoS attack? What prevents an ordinary user from making the effort to create a relatively safe password?
In order to address these questions, in the Department of Systems and Computer Science at Howard University, we are modifying our undergraduate concentration in Cybersecurity to include both modified CS courses that address behavioral-science issues and required psychology courses for those in this concentration, which we call "Behavioral Cybersecurity."
This experimental curriculum development is being supported by the Project Kaleidoscope and the American Association of Colleges and Universities.
Title: Multicore Tree Hashing with Functional Programming
We report on our progress in implementing general-purpose tree hashing on multicore computing devices. We implemented several tree-hashing modes (i.e., tree shapes), using the Sakura encoding and writing the implementation in the functional languages Clojure and Haskell, using the languages' mechanisms for specifying parallel computation. We argue that functional languages are appropriate tools for implementing algorithms across multiple cores. We have designed a tree-hash description language, enabling the concise description of a large class of useful tree shapes.
As one part of the general process of establishing a new standard for cryptographic hashing, NIST plans to standardize methods for tree hashing, generalizing the Merkle hash tree to a broader class of sound tree shapes. They plan to make use of the same authors' Sakura encoding to unambiguously describe these shapes. The choice of shape itself was left outside the scope of Sakura, and must be communicated separately between users and verifiers of a tree hashing mode. In order to do this flexibly and concisely we have designed a tree-hash description language, which can specify a superset of the classes of modes illustrated in the Sakura document.
For parallel programming, protecting data structures from stray updates or accesses is hard. Traditional imperative programming languages use exclusion mechanisms such as locks; however, these mechanisms are difficult to use correctly and therefore error-prone. Immutable data structures eliminate these errors, although at a performance cost (which we plan to quantify in this work). Functional programming languages such as Clojure and Haskell and their libraries use immutable data structures natively. Additionally, they have parallel-programming constructs, which enable us to operate at a higher level of abstraction. The simplicity of the implementation and the immutability of the data structures considerably simplify reasoning about correctness.
We implemented our tree-hash description language using two different functional programming languages, namely Clojure, which is untyped, and Haskell, which is strongly typed. In each of these languages we implemented tree hashing, parametrized over a standard hash function (e.g. SHA-256 and Keccak). Our preliminary performance measurements have revealed some promising scale-up with respect to the number of cores in use.
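For readers unfamiliar with the structure, here is a sketch of a binary tree hash parametrized over SHA-256. It is written in Python rather than Clojure or Haskell (our actual implementation languages), and the leading domain-separation byte only gestures at Sakura's leaf/node distinction; leaf hashes are independent, which is where the multicore parallelism comes from.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + chunk).digest()   # leaves tagged 0x00

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()  # nodes 0x01

def tree_hash(chunks, pool=None):
    """Binary tree hash over equal-sized chunks (power-of-two count
    assumed). Leaf hashing parallelizes trivially; the interior
    levels shrink geometrically and are done serially here."""
    if pool is not None:
        level = list(pool.map(leaf_hash, chunks))
    else:
        level = [leaf_hash(c) for c in chunks]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    data = [bytes([i]) * 1024 for i in range(8)]
    with ProcessPoolExecutor() as pool:
        # The parallel and serial computations yield the same root.
        assert tree_hash(data, pool) == tree_hash(data)
```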
Joint work with Stuart Haber.
Title: Towards Practical Implementations of Fully Homomorphic Encryption
One of the first major breakthroughs of computer science in the 21st century is the demonstration of public-key Fully Homomorphic Encryption (FHE). Unfortunately, FHE was not practical when it was discovered; it was several orders of magnitude too inefficient to be economically feasible. Under funding from the DARPA PROCEED program, we have been accelerating the development of practical implementations of FHE schemes in software for single-core and multi-core environments, and in FPGA hardware. For the past three years, our team has succeeded in accelerating various aspects of FHE implementations toward practical use. Our work, in collaboration with other PROCEED performers, is resulting in implementations of FHE schemes developed under PROCEED that have achieved multiple orders of magnitude improvement in computation. Further means of parallel software and hardware acceleration, such as multicore and FPGA hardware, can improve the speed of computation by several additional orders of magnitude.
Specifically, our contribution to this effort has been the development of encryption primitives customized for parallelization on multi-core and FPGA-based hardware, to accelerate computation on encrypted data using an FHE cryptosystem based on a variant of the NTRU scheme, with additional support for efficient key-switching and modulus-reduction operations that reduce the frequency of bootstrapping. Ciphertexts in our scheme are represented as rectangular matrices of 64-bit integers. This bounding of the operand sizes has allowed us to take advantage of modern code-generation tools from MathWorks to produce ANSI C and VHDL code for FPGA circuits directly from rapidly prototyped Matlab code and Simulink models.
This talk will review our advances in FHE, from theory, implementation, and application perspectives. We discuss our design and implementations of cryptographic primitives that can make efficient use of multi/many-core and parallel computing capabilities in both software and hardware. We also discuss our plans to continue this research to enable practical secure outsourced computation for specific application domains, such as supporting homomorphic encryption in resource-limited environments like smartphones.
Title: Codes for Security Against Computationally Unbounded Adversaries
Modern cryptography assumes a computationally bounded adversary and bases security on the hardness of mathematical problems. Advances in computer technologies, combined with groundbreaking developments in algorithms, rejuvenate the question of providing security without any computational assumptions.
We consider the problem of security and reliability of data against a computationally unbounded adversary that has limited access to the underlying physical system. Example scenarios are a secure storage system that the adversary cannot read but can corrupt by adding noise, or a scenario where a sender and a receiver are connected by multiple disjoint paths, some of which are inaccessible to the adversary. In this talk we give an overview of this problem, define security and reliability goals and efficiency measures, look at some of the current results, and discuss open problems for future research.
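As a minimal example of the flavor of such schemes, n-out-of-n XOR sharing over disjoint paths gives perfect secrecy whenever at least one path is inaccessible to the adversary (each observed share is uniformly random on its own), though it provides no reliability against corruption, which the schemes in the talk must also handle. A Python sketch:

```python
import os

def split_over_paths(msg: bytes, n_paths: int):
    """n-out-of-n XOR secret sharing: send one share per disjoint path.
    Any n-1 shares reveal nothing about msg; all n recover it."""
    shares = [os.urandom(len(msg)) for _ in range(n_paths - 1)]
    last = msg
    for s in shares:                       # last = msg XOR (all random shares)
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares):
    """XOR all shares back together to recover the message."""
    out = bytes(len(shares[0]))            # all-zero bytes
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

msg = b"top secret"
shares = split_over_paths(msg, 4)
assert combine(shares) == msg
```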
Title: Design for Security: The Hardware-Up Principle
In this talk, I will describe a new design principle for security: the hardware-up principle. Hardware-up security means that systems should be secured starting from hardware instead of the existing popular approach where software layers are secured, assuming that the lower layers are secure when they are not. I will discuss how systems designed for security from hardware-up offer unique advantages unavailable in current protection systems: a smaller attack surface, energy-efficient execution, and the ability to reason about security compositionally.
I will illustrate hardware-up benefits through three case studies.
For the first hardware-up case study, I will discuss how we can prevent attackers from taking advantage of unintentional hardware design flaws. Taking microarchitectural side channels as an example, I will discuss a new methodology that computer architects can use to reason about microarchitectural side channels at processor design time.
Attackers can also intentionally weaken hardware to break systems. In the second case study, I will discuss how hardware itself can be created in a manner that provides assurance that its security has not been compromised due to design-time backdoors. I will describe our technique for silencing backdoors and a prototype built using this technique that incurs less than 8% area overhead and negligible performance overheads.
Finally, I will describe a hardware malware detector, a first of its kind, that is vastly simpler to implement compared to a traditional software malware detector.
Case study 1: SVF
Case study 2: Hardware Backdoors
Case study 3: Malware Detector