**Abstracts:**

Title: Dynamic Proofs of Retrievability from Oblivious RAM

Proofs of Retrievability allow a client to store her data on a remote server and periodically execute an efficient audit protocol to check that all of the data is being stored correctly. Starting with the work of Juels and Kaliski (CCS '07), all prior solutions to this problem crucially assume that the client data is static and do not allow it to be efficiently updated. Indeed, they all store a redundant encoding of the data on the server, meaning that even a single bit modification to the original data will require a modification of a large fraction of the server storage, which makes updates highly inefficient. Moreover, this property was crucial for their security arguments, and overcoming this limitation was left as the main open problem by all prior works.

In this work, we give the first solution providing proofs of retrievability for dynamic storage, where the client can perform arbitrary reads/writes on any location within her data on the server. At any point in time, the client can execute an efficient audit protocol to ensure that the server maintains the latest version of the client data. The computation and communication complexity of the server and client in our protocols is only polylogarithmic in the size of the client's data. Our technical approach combines prior schemes with the previously unrelated algorithmic techniques of "Oblivious RAM."
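As a toy illustration of the audit idea only (not the paper's construction, which additionally uses redundant encoding and handles dynamic updates via ORAM), a client can spot-check a few random blocks whose MAC tags only it can verify; all names below are my own:

```python
import hmac, hashlib, secrets

BLOCK = 64  # bytes per block (illustrative)

def tag(key, index, block):
    # Bind each tag to its block index so the server cannot swap blocks.
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

class Server:
    """Untrusted storage: holds the data blocks and their tags."""
    def __init__(self, blocks, tags):
        self.blocks, self.tags = list(blocks), list(tags)
    def read(self, i):
        return self.blocks[i], self.tags[i]

class Client:
    """Keeps only a short key; outsources the blocks and tags."""
    def __init__(self, data):
        self.key = secrets.token_bytes(32)
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        self.n = len(blocks)
        self.server = Server(blocks, [tag(self.key, i, b) for i, b in enumerate(blocks)])
    def audit(self, samples=10):
        # Spot-checking random blocks catches a server that has lost a
        # constant fraction of them with high probability.
        for i in [secrets.randbelow(self.n) for _ in range(samples)]:
            block, t = self.server.read(i)
            if not hmac.compare_digest(t, tag(self.key, i, block)):
                return False
        return True
```

Spot-checking alone only guarantees that *most* blocks survive; recovering *all* of the data is exactly why the real constructions layer an erasure code underneath, and why updating encoded data efficiently was the open problem.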

Title: Theory and applications of random partition processes

Partition models have been applied in problems of clustering and classification in a range of areas including linguistics, ecology and astronomy. In this talk, we discuss stochastic processes on the state space of partitions of the set of natural numbers. In particular, we focus on consistent and exchangeable families of partition-valued Markov processes, including representation theorems and convergence rates to the stationary measure. We also talk briefly about some future applications of these models in population genetics and phylogenetics.
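One concrete example of an exchangeable random partition (my illustration, not necessarily a construction from the talk) is the Chinese restaurant process, which builds a partition of {0, ..., n-1} sequentially:

```python
import random

def chinese_restaurant_process(n, alpha, seed=0):
    """Sample a partition of {0, ..., n-1} via the CRP with concentration alpha.

    Element k joins an existing block B with probability |B| / (k + alpha)
    and starts a new block with probability alpha / (k + alpha).
    """
    rng = random.Random(seed)
    blocks = []
    for k in range(n):
        r = rng.uniform(0, k + alpha)
        acc = 0.0
        placed = False
        for b in blocks:
            acc += len(b)
            if r < acc:          # join this block w.p. proportional to its size
                b.append(k)
                placed = True
                break
        if not placed:           # remaining mass alpha starts a new block
            blocks.append([k])
    return blocks
```

The sequential rule makes consistency across n immediate: restricting a CRP partition of n+1 elements to the first n elements gives a CRP partition of n elements.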

Title: Directed Width Parameters: Algorithms and Structural Properties

Treewidth of an undirected graph measures how close the graph is to being a tree. Several problems that are NP-hard on general graphs are solvable in polynomial time on graphs with bounded treewidth. Motivated by the success of treewidth, several directed analogues of treewidth have been introduced to measure the similarity of a directed graph to a directed acyclic graph (DAG). Directed treewidth, D-width, DAG-width and Kelly-width are some such width parameters.

In this talk, I will present some of the algorithms and structural properties of these parameters with emphasis on Kelly-width and its equivalent characterizations. I will present some recent results and new research directions.
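To illustrate why bounded width enables polynomial-time algorithms, here is the simplest case: maximum independent set is NP-hard in general but linear-time on trees (the treewidth-1 case) via bottom-up dynamic programming. This sketch uses my own function names; width parameters generalize its two-entry table to larger "bags":

```python
def max_independent_set_tree(adj, root=0):
    """Size of a maximum independent set of a tree given as an adjacency list.

    in_ms[v]  = best MIS size within v's subtree with v included
    out_ms[v] = best MIS size within v's subtree with v excluded
    """
    n = len(adj)
    in_ms, out_ms = [1] * n, [0] * n
    order, parent, seen = [], [-1] * n, [False] * n
    stack = [root]
    while stack:                      # iterative DFS to get a top-down order
        v = stack.pop()
        seen[v] = True
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                parent[u] = v
                stack.append(u)
    for v in reversed(order):         # combine children bottom-up
        p = parent[v]
        if p >= 0:
            in_ms[p] += out_ms[v]                    # p included => v excluded
            out_ms[p] += max(in_ms[v], out_ms[v])    # p excluded => v free
    return max(in_ms[root], out_ms[root])
```

The directed analogues in the talk aim for the same effect: a DAG-like decomposition along which problems such as Hamiltonian path become tractable.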

Title: Discrepancy minimization by walking on the edges

Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): In any system of n sets in a universe of size n, there always exists a coloring which achieves discrepancy at most 6\sqrt{n}. The original proof of Spencer was existential in nature, and did not give an efficient algorithm to find such a coloring. Recently, a breakthrough work of Bansal (FOCS 2010) gave an efficient algorithm which finds such a coloring. In this work we give a new randomized algorithm to find a coloring as in Spencer's result, based on a restricted random walk we call "Edge-Walk". Our algorithm and its analysis use only basic linear algebra and are "truly" constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.

Joint work with Shachar Lovett.
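For concreteness, the quantity being minimized is max_S |sum_{i in S} x_i| over colorings x in {-1, +1}^n. The helper below (my own illustration, not Edge-Walk) computes it, and shows the baseline that Spencer's theorem beats: a random coloring only achieves O(sqrt(n log n)):

```python
import random

def discrepancy(sets, coloring):
    """max over sets S of |sum_{i in S} coloring[i]|, coloring in {-1,+1}^n."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

def best_random_coloring_discrepancy(n, trials=200, seed=0):
    """Discrepancy of the best of `trials` uniformly random colorings on a
    random system of n sets over [n]; compare against Spencer's 6*sqrt(n)."""
    rng = random.Random(seed)
    sets = [set(rng.sample(range(n), n // 2)) for _ in range(n)]
    best = min(
        ([rng.choice((-1, 1)) for _ in range(n)] for _ in range(trials)),
        key=lambda x: discrepancy(sets, x),
    )
    return discrepancy(sets, best)
```

Edge-Walk replaces the blind resampling above with a random walk on the cube [-1, 1]^n that sticks to faces ("edges") once coordinates or set-constraints become tight, which is what makes the partial coloring argument constructive.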

Title: Using Algorithms Designed for Adversaries on Transfer Learning Problems

In this talk we consider an area of machine learning called transfer learning. The main idea behind transfer learning is to use knowledge acquired on an old machine learning problem to improve learning on a new but related problem. While there are many different techniques for transfer learning, we consider a simple model in which various hyperplane algorithms that depend on p-norms have provable performance guarantees that depend on various differences between the old and new hyperplanes. While we intend for the transfer learning algorithm to be used on distribution-based problems, our techniques use algorithms that have worst-case adversarial performance guarantees. We believe the robustness of these adversarial algorithms makes them useful tools in the construction of new algorithms for a wide range of learning problems.

Title: Indoor localization and robotic cartography

As mobility moves onto the center stage of telecommunications, there is an increasing need for network systems to follow users and objects in all types of environments. Accurate localization enables location-based services as varied as turn-by-turn directions in a large public building, guided first-responder intervention, or personalized advertising. For a telecommunication company specifically, localization can help optimize network deployment and improve the system's energy efficiency. Thanks to the network of Global Positioning Satellites, we can theoretically determine our position anywhere on the planet, with accuracy on the order of a meter, provided that we are in line of sight with a few satellites. But the GPS system does not work well indoors, where alternatives are needed. In this talk, we will give an overview of localization methods, focusing on Radio-Frequency (RF) based methods, such as RF fingerprinting and tracking, and present our state-of-the-art technique for WiFi-based positioning. RF fingerprinting exploits existing telecommunication devices and infrastructure, such as WiFi routers, relying on a database of signal strengths at different locations in an indoor space to subsequently predict the receiver's location.

We have also investigated methods to overcome the tedious fingerprinting procedure (i.e., creating signal maps along with precise position information) and the repeated calibration that are key to the good performance of any localization system. While trying to automate data acquisition and to create a systematic approach to radio-frequency mapping and fingerprinting, we built a low-cost, autonomous, self-localizing robotic platform relying on a Kinect color and depth camera. We designed algorithms for real-time obstacle-avoidance-based navigation, as well as for Simultaneous Localization and Mapping based on visual odometry and computer vision techniques. We will show how our robot can localize itself while collecting and building WiFi maps in medium-sized office spaces.
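A minimal sketch of the fingerprint-matching step (nearest-neighbor matching in signal space; the survey format and function names here are my own simplification of what such a system stores):

```python
import math

def knn_locate(survey, observed, k=3):
    """Estimate an (x, y) position from observed RSSI readings {ap_id: dBm}.

    `survey` maps surveyed (x, y) grid points to their recorded RSSI
    fingerprints. We rank survey points by Euclidean distance in signal
    space (over the access points common to both readings) and average
    the positions of the k nearest fingerprints.
    """
    def signal_dist(fp):
        common = observed.keys() & fp.keys()
        if not common:
            return float("inf")   # no shared APs: cannot compare
        return math.sqrt(sum((observed[a] - fp[a]) ** 2 for a in common))

    nearest = sorted(survey, key=lambda pos: signal_dist(survey[pos]))[:k]
    return (sum(p[0] for p in nearest) / k, sum(p[1] for p in nearest) / k)
```

The cost hidden in `survey` is exactly the tedious calibration discussed above: every grid point must be visited with ground-truth position, which is what the robotic platform automates.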

Title: Designing Markets for Prediction

Mechanism design, or "inverse game theory," is an engineering arm of social science. I will discuss some of our work designing mechanisms to acquire and aggregate information with the goal of making predictions. I will focus on the engineering questions: How do they work and why? What factors and goals are most important in their design? Two somewhat nonstandard objectives are important for good prediction mechanisms: liquidity and expressiveness. Liquidity ensures that agents can be compensated for their information at any time, even when few others are around. I will describe our designs for several automated market maker algorithms that provide desired levels of liquidity. An expressive mechanism offers agents flexibility in how they communicate information; at the extreme, agents can provide any information they have in any form they like. I will discuss our work on combinatorial prediction markets that take expressiveness to the extreme.
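One classic liquidity-providing design, shown here as a standard reference point rather than necessarily the mechanism from the talk, is Hanson's logarithmic market scoring rule (LMSR): the market maker quotes prices from a cost function over the outstanding share quantities, so a trade can always be executed at some price:

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b)).

    The liquidity parameter b bounds the market maker's worst-case loss
    (b * log of the number of outcomes) and sets how fast prices move.
    """
    m = max(quantities)                      # shift to stabilize the exponentials
    return b * (m / b + math.log(sum(math.exp((q - m) / b) for q in quantities)))

def lmsr_price(quantities, i, b):
    """Instantaneous price of outcome i; prices are positive and sum to 1."""
    m = max(quantities)
    z = sum(math.exp((q - m) / b) for q in quantities)
    return math.exp((quantities[i] - m) / b) / z

def trade_cost(quantities, i, shares, b):
    """What a trader pays the market maker to buy `shares` of outcome i."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)
```

Because prices come from a fixed cost function, a lone trader can always be compensated for new information; the combinatorial-market question is how to evaluate these sums when the outcome space is exponentially large.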

Title: Secure Two-Party Computation in Sublinear (Amortized) Time

Traditional approaches to generic secure computation begin by representing the function $f$ being computed as a circuit. If~$f$ depends on each of its input bits, this implies a protocol with complexity at least linear in the input size. In fact, linear running time is inherent for non-trivial functions since each party must ``touch'' every bit of their input lest information about the other party's input be leaked. This seems to rule out many applications of secure computation (e.g., database search) in scenarios where inputs are huge.

Adapting and extending an idea of Ostrovsky and Shoup, we present an approach to secure two-party computation that yields protocols running in sublinear time, in an amortized sense, for functions that can be computed in sublinear time on a random-access machine~(RAM). Moreover, each party is required to maintain state that is only (essentially) linear in its own input size. Our protocol applies generic secure two-party computation on top of oblivious RAM (ORAM). We present an optimized version of our protocol using Yao's garbled-circuit approach and a recent ORAM construction of Shi et al.
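To see concretely why the RAM model matters here (a toy illustration, not the protocol itself): binary search over a sorted database touches only O(log n) entries, whereas a circuit for the same search must touch all n inputs. The ORAM layer is what lets the parties make those few accesses without revealing which entries were read:

```python
def binary_search_with_counter(db, target):
    """Return (index or None, number of db entries touched).

    In the ORAM-based protocol each of these accesses becomes an oblivious
    access with polylogarithmic overhead, so the total work per query stays
    sublinear in len(db).
    """
    touched = 0
    lo, hi = 0, len(db) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        touched += 1                 # one RAM access per probe
        if db[mid] == target:
            return mid, touched
        if db[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, touched
```

The "amortized" qualifier in the title reflects the one-time cost of loading the database into the ORAM, after which each query costs only the polylog-scaled probe count above.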

We describe an implementation of this protocol, and evaluate its performance for the task of obliviously searching a database with over 1~million entries. Because of the cost of our basic steps, our solution is slower than Yao on small inputs. However, our implementation outperforms Yao already on DB sizes of $2^{18}$ entries (a quite small DB by today's standards).