
DIMACS Mixer Series, November 4, 2009

Abstracts:


Juan Garay, AT&T Labs, Research

Title: Secure Message Transmission with Small Public Discussion

In the problem of Secure Message Transmission in the public discussion model (SMT-PD), a Sender wants to send a message to a Receiver privately and reliably. Sender and Receiver are connected by $n$ channels, up to $t < n$ of which may be maliciously controlled by a computationally unbounded adversary, as well as one public channel, which is reliable but not private.

The SMT-PD abstraction has been shown instrumental in achieving secure multi-party computation on sparse networks, where a subset of the nodes are able to realize a broadcast functionality, which plays the role of the public channel. However, implementing such a public channel in a point-to-point network is highly costly and non-trivial, which makes minimizing the use of this resource an intrinsically compelling issue.

In this talk, after a brief introductory survey, we present the first SMT-PD protocol with sublinear (i.e., logarithmic in $m$, the message size) communication on the public channel. In addition, the protocol incurs a private communication complexity of $O(\frac{mn}{n-t})$, which, as we also show, is \emph{optimal}. By contrast, the best previously known bounds in both public and private channels were linear. Furthermore, our protocol has an optimal round complexity of $(3,2)$, meaning three rounds, two of which must invoke the public channel.
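
To make the model concrete, here is a toy Python simulation of the generic paradigm behind SMT-PD protocols: send independent randomness over the private wires, then use the public channel to agree on which wires arrived intact. This is only an illustrative sketch, not the protocol of the talk: SHA-256 stands in for the universal hashing a real information-theoretic protocol would use, all names and parameters are made up, and the public channel is used far more liberally than in the $(3,2)$-round protocol above.

import os, hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def simulate(n=5, t=2, m=16):
    message = os.urandom(m)

    # Round 1 (private wires): Sender pushes an independent random pad
    # down every channel; the adversary reads and garbles the first t.
    pads = [os.urandom(m) for _ in range(n)]
    received = [os.urandom(m) if i < t else pads[i] for i in range(n)]

    # Round 2 (public channel): Sender publishes a short fingerprint of
    # each pad.  (A real protocol uses universal hashing so the leakage
    # can be bounded information-theoretically.)
    fp = [hashlib.sha256(p).digest()[:8] for p in pads]
    good = [i for i in range(n)
            if hashlib.sha256(received[i]).digest()[:8] == fp[i]]

    # Round 3 (public channel): Receiver announces which wires checked
    # out; both sides XOR the surviving pads into a one-time key, and
    # Sender publishes the ciphertext.
    key_S = key_R = b"\x00" * m
    for i in good:
        key_S = xor(key_S, pads[i])
        key_R = xor(key_R, received[i])
    ciphertext = xor(message, key_S)

    # Reliability: honest wires always survive the check, so the keys
    # agree except with probability about t * 2**-64.
    assert xor(ciphertext, key_R) == message
    # Privacy: at least n - t surviving pads were never seen by the
    # adversary, so key_S looks uniform from its point of view.

simulate()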

Finally, we ask whether some of the lower bounds on resource use for a single execution of SMT-PD can be beaten on average through amortization. In other words, if Sender and Receiver must send several messages back and forth (where later messages depend on earlier ones), can they do better than the naive solution of repeating an SMT-PD protocol each time?

We show that amortization can indeed drastically reduce the use of the public channel: it is possible to limit the total number of uses of the public channel to two, no matter how many messages are ultimately sent between two nodes. (Since two uses of the public channel are required to send any reliable communication whatsoever, this is best possible.)
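
One natural way to realize this amortization, sketched below in Python: assume the single SMT-PD execution (accounting for the only two uses of the public channel) has left both parties holding the same long random string, and consume that string as one-time pads plus one-time polynomial MAC keys. Each later message is encrypted, authenticated, and replicated across all $n$ private channels, and the Receiver keeps the first copy whose MAC verifies. The Tape/send/receive names are illustrative, not from the paper.

import os

P = 2**127 - 1  # prime modulus for the one-time polynomial MAC

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class Tape:
    # One party's copy of the shared random string assumed to have been
    # delivered by the initial SMT-PD execution.
    def __init__(self, raw):
        self.raw, self.pos = raw, 0
    def take(self, k):
        chunk = self.raw[self.pos:self.pos + k]
        self.pos += k
        return chunk
    def keys(self, m):
        pad = self.take(m)                             # one-time pad
        a = int.from_bytes(self.take(16), "big") % P   # MAC key part a
        b = int.from_bytes(self.take(16), "big") % P   # MAC key part b
        return pad, a, b

def send(tape, msg, n=5, t=2):
    pad, a, b = tape.keys(len(msg))
    ct = xor(msg, pad)
    tag = (a * int.from_bytes(ct, "big") + b) % P  # MAC(ct) = a*ct + b
    wires = [(ct, tag)] * n       # same packet on every private channel
    for i in range(t):            # adversary garbles its t channels
        wires[i] = (os.urandom(len(ct)), 0)
    return wires

def receive(tape, wires, m):
    pad, a, b = tape.keys(m)
    for ct, tag in wires:         # keep the first copy that authenticates
        if len(ct) == m and (a * int.from_bytes(ct, "big") + b) % P == tag:
            return xor(ct, pad)
    raise RuntimeError("all copies rejected")

shared = os.urandom(4096)         # stand-in for the SMT-PD output
alice, bob = Tape(shared), Tape(shared)
msg = b"later message; the public channel stays silent"
assert receive(bob, send(alice, msg), len(msg)) == msg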

This is joint work with Clint Givens (UCLA) and Rafail Ostrovsky (UCLA).


Jon Kettenring, Drew University

Title: Massive Datasets

Massive datasets are so labeled because of their size and complexity. They do not yield readily to standard statistical analyses. The resulting frustration has served as a spur to researchers to develop better tools. Some progress has been made, but the need for considerably more explains why this line of research remains a top priority. Interdisciplinary teamwork is at least as important as tools and can be the key to cracking the hard challenges that these datasets pose. This overview talk includes background information, examples, and statistical strategies to illustrate the state of the art.

(Reference: Wiley Interdisciplinary Reviews: Computational Statistics, 2009, pp. 25-32.)


Troy Lee, postdoctoral researcher, Department of Computer Science, Rutgers University

Title: From Adversaries to Algorithms

We will tell the happy story of a meeting of lower bounds and upper bounds. Quantum query complexity is a model of quantum computation which captures important algorithms such as Grover's search algorithm and the heart of Shor's factoring algorithm. At the same time, we understand this model well enough to also prove lower bounds. We will discuss one lower bound technique, known as the adversary method, and the beautiful recent work of Ben Reichardt, which turns this lower bound into an algorithm and shows that it characterizes quantum query complexity up to a logarithmic factor.
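
As a small worked instance of the (positive-weight) adversary method, the following Python/numpy snippet evaluates the spectral adversary quantity $\|\Gamma\| / \max_i \|\Gamma \circ D_i\|$ for the OR function on $n$ bits, using the textbook all-ones choice of $\Gamma$ between the all-zero input and the weight-one inputs. It recovers the familiar $\Omega(\sqrt{n})$ Grover lower bound; nothing here is specific to Reichardt's construction.

import numpy as np

n = 8
zero = np.zeros(n, dtype=int)                # the unique input with f = 0
ones = [np.eye(n, dtype=int)[j] for j in range(n)]  # weight-1 inputs, f = 1

Gamma = np.ones((1, n))  # adversary matrix: rows = 0-inputs, cols = 1-inputs

# D_i marks the pairs (x, y) that differ in coordinate i; the adversary
# bound divides ||Gamma|| by the largest norm of an entrywise product.
norms = []
for i in range(n):
    D_i = np.array([[int(zero[i] != y[i]) for y in ones]])
    norms.append(np.linalg.norm(Gamma * D_i, 2))

adv = np.linalg.norm(Gamma, 2) / max(norms)
print(adv, np.sqrt(n))   # both ~2.828: ADV(OR_8) >= sqrt(8)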


Mihai Patrascu, AT&T Labs, Research

Title: On the Possibility of Faster SAT Algorithms

We describe reductions from CNF-SAT to several natural algorithmic problems. We show that attaining improved upper bounds for any of these problems would improve the state of the art in algorithms for SAT.

One may interpret our reductions as new attacks on the complexity of SAT, or as sharp lower bounds conditional on the exponential hardness of SAT.
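
For a feel of what such reductions look like, here is the classical split-and-list reduction from CNF-SAT to the Orthogonal Vectors problem (due to Williams), sketched in Python purely as an illustration of the genre; we do not claim it is among the talk's specific reductions. A CNF on $n$ variables and $m$ clauses becomes two lists of $2^{n/2}$ $m$-bit vectors, and the formula is satisfiable iff some pair (one vector from each list) is orthogonal, so beating $N^2$ time for Orthogonal Vectors on $N = 2^{n/2}$ vectors would beat $2^n$ brute force for SAT.

from itertools import product

def vectors(clauses, block):
    # One vector per assignment to the variables in `block`: coordinate c
    # is 0 iff the partial assignment already satisfies clause c.
    out = []
    for bits in product([False, True], repeat=len(block)):
        assign = dict(zip(block, bits))
        vec = tuple(
            0 if any(assign.get(abs(l)) == (l > 0) for l in clause) else 1
            for clause in clauses)
        out.append((vec, bits))
    return out

def sat_via_ov(clauses, n):
    left = list(range(1, n // 2 + 1))          # variables 1 .. n/2
    right = list(range(n // 2 + 1, n + 1))     # variables n/2+1 .. n
    U, V = vectors(clauses, left), vectors(clauses, right)
    for u, bits_u in U:                 # naive quadratic OV search; any
        for v, bits_v in V:             # subquadratic OV algorithm would
            if all(x * y == 0 for x, y in zip(u, v)):  # speed up SAT too
                return bits_u + bits_v  # orthogonal pair = satisfying assignment
    return None

# (x1 or not x3) and (not x1 or x2) and (x3 or x4), clauses as signed ints
clauses = [(1, -3), (-1, 2), (3, 4)]
print(sat_via_ov(clauses, 4))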


Athanasios (Thanos) Stathopoulos, Alcatel-Lucent

Title: Experiences with Energy-Aware Computing: From Embedded Systems to Multicore Systems and Beyond

In this talk we will present our past and recent experiences in energy-aware computing. We will argue for the importance of accurate measurements, both for individual systems and for large networked systems. In the embedded and mobile computing space, we will discuss the Energy Endoscope architecture, which provides detailed information on the energy consumption of 32-bit embedded platforms running Linux. In the multicore space, we will present a novel energy attribution and accounting architecture that can provide accurate per-process energy information for individual hardware components. Finally, we will discuss our approach to estimating the energy consumption of networked devices on a large scale.
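
As a toy illustration of the attribution idea (not the architecture from the talk), the Python sketch below splits each hardware component's measured energy among processes in proportion to their share of that component's activity counters; every number, component, and process name is made up.

# Measured energy per component over one accounting interval (joules).
measured_energy_J = {"cpu": 12.0, "wifi": 3.0, "flash": 1.5}

# Per-component activity counters per process (e.g. CPU time, bytes sent,
# bytes written), as an OS-level accounting layer might report them.
activity = {
    "cpu":   {"browser": 700, "indexer": 280, "idle": 20},
    "wifi":  {"browser": 900, "indexer": 100, "idle": 0},
    "flash": {"browser": 50,  "indexer": 950, "idle": 0},
}

def attribute(energy, activity):
    # Charge each process energy[comp] * (its share of comp's activity).
    bill = {}
    for comp, counters in activity.items():
        total = sum(counters.values())
        for proc, used in counters.items():
            bill[proc] = bill.get(proc, 0.0) + energy[comp] * used / total
    return bill

for proc, joules in sorted(attribute(measured_energy_J, activity).items()):
    print(f"{proc:8s} {joules:6.2f} J")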


Mustafa Tural, Telcordia

Title: Basis Reduction and the Complexity of Branch-and-Bound

Branch-and-bound is a classical method to solve integer programming feasibility problems. On the theoretical side, it is considered inefficient: it can provably take an exponential number of nodes to prove the infeasibility of a simple integer program.

In this work we show that branch-and-bound is theoretically efficient if we first apply a simple transformation to the constraint matrix of the problem which makes the columns short and nearly orthogonal. We analyze two such reformulation methods, called the rangespace and the nullspace methods. We prove that if the coefficients of the problem are drawn from $\{1, \ldots, M\}$ for a sufficiently large $M$, then for almost all such instances the number of subproblems that branch-and-bound needs to enumerate is at most one.
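
For concreteness, the two reformulations have roughly the following shape (an illustrative sketch in our own notation, not quoted from the paper). Rangespace: for the feasibility problem of finding $x \in \mathbb{Z}^n$ with $l \le Ax \le u$, compute a unimodular $U$ via basis reduction (e.g., LLL) so that the columns of $AU$ are short and nearly orthogonal, run branch-and-bound on $l \le (AU)y \le u$ with $y \in \mathbb{Z}^n$, and recover $x = Uy$. Nullspace: for a problem with equality constraints $Ax = b$, $l \le x \le u$, fix one integer solution $x_0$ and a reduced basis $B$ of the lattice $\{z \in \mathbb{Z}^n : Az = 0\}$, and branch on $\lambda$ in $l \le x_0 + B\lambda \le u$.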

Besides giving an analysis of branch-and-bound, our main result generalizes a result of Furst and Kannan on the solvability of subset sum problems to bounded integer programs.

We also present numerical values of $M$ which ensure that 90 and 99 percent of the reformulated problems solve at the root node. These values turn out to be surprisingly small for moderate values of $n$ (the number of variables) and $m$ (the number of constraints).