We have a very nice program, featuring talks by DIMACS member Jeff Kahn and five of our postdocs, Lubos Thoma, Hui Chen, Anna Gal, Scott Decatur and Amos Beimel.
The purpose of this mixer is to introduce the new postdocs and DIMACS visitors to each other and to DIMACS members from the different DIMACS institutions. The program starts with coffee at 1:45. Titles and abstracts of the talks are attached. More details will soon be available on the DIMACS web pages (http://www.dimacs.rutgers.edu).
Those wishing to attend the mixer should send mail (preferably by October 3) indicating their country of citizenship to Sanjeev Khanna (sanjeev@research.bell-labs.com), so that badges can be prepared. Directions to Bell Labs are available from the DIMACS homepage (http://dimacs.rutgers.edu/archive/Directions/Directions3). For further information about the mixer, contact Mona Singh (mona@cs.princeton.edu).
We hope to schedule additional mixers at other DIMACS locations later in the year. I look forward to seeing you at this mixer.
Fred Roberts, Director
Program for DIMACS Mixer at Bell Labs, Tuesday October 8:

1:45         Coffee
2:00 - 2:05  Fred Roberts, DIMACS Director, Rutgers University
             Opening Remarks
2:05 - 2:55  Jeff Kahn, Rutgers University (Co-Chair, Focus on Discrete Probability)
             "Random Matchings; Random Forests?"
2:55 - 3:20  Lubos Thoma, Rutgers University
             "On the Complexity of Cover Graph Recognition for Finite Lattices"
3:20 - 3:45  Hui Chen, Rutgers University
             "Analysis of Algorithms for Finding Maximal Independent Set in Random Hypergraphs"
3:45 - 4:00  Break
4:00 - 4:25  Anna Gal, Princeton University
             "Superpolynomial Lower Bounds for Monotone Span Programs"
4:25 - 4:50  Scott Decatur, Rutgers University
             "PAC Learning with Nonuniform Classification Noise via Statistical Queries"
4:50 - 5:15  Amos Beimel, Rutgers University
             "On the Applications of Multiplicity Automata in Learning"

Host: Sanjeev Khanna
In this talk, I address the following decision problem:

Instance: an undirected graph G.
Problem: is G the cover graph of a lattice?
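For concreteness, here is a small sketch of the definitions involved (the poset used is an invented example, not from the talk): given a finite poset, check that every pair of elements has a unique meet and join (i.e., that the poset is a lattice), and list its cover (Hasse) edges; forgetting the orientations of those edges yields the cover graph.

```python
# Toy illustration of "lattice" and "cover graph" (not the NP-completeness
# proof): the diamond lattice on {bot, a, b, c, top}, where a, b, c are
# pairwise incomparable.  This example poset is invented for illustration.
from itertools import combinations

elements = ["bot", "a", "b", "c", "top"]
leq = {(x, x) for x in elements}
leq |= {("bot", x) for x in elements}
leq |= {(x, "top") for x in elements}

def is_lattice():
    # a lattice: every pair has a unique meet (greatest lower bound)
    # and a unique join (least upper bound)
    for x, y in combinations(elements, 2):
        lbs = [z for z in elements if (z, x) in leq and (z, y) in leq]
        meets = [m for m in lbs if all((z, m) in leq for z in lbs)]
        ubs = [z for z in elements if (x, z) in leq and (y, z) in leq]
        joins = [j for j in ubs if all((j, z) in leq for z in ubs)]
        if len(meets) != 1 or len(joins) != 1:
            return False
    return True

def cover_edges():
    # x is covered by y when x < y and nothing lies strictly between
    strict = {(x, y) for (x, y) in leq if x != y}
    return sorted((x, y) for (x, y) in strict
                  if not any((x, z) in strict and (z, y) in strict
                             for z in elements))

print(is_lattice())   # True for the diamond lattice
print(cover_edges())  # six cover edges; (bot, top) is not among them
```

The decision problem above runs this reasoning in reverse: given only the undirected cover graph, decide whether *some* lattice could have produced it.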
In joint work with V. Rodl, we proved that this problem is NP-complete. On the other hand, it follows from Alvarez's theorem that recognizing cover graphs of modular or distributive lattices is in the complexity class P.
An important tool in the proof of the first result is the following statement which may be of independent interest:
Given an integer l >= 3, there exists an algorithm which, for a graph G with n vertices, yields in time polynomial in n a graph H whose number of vertices is polynomial in n, satisfying girth(H) >= l and with the chromatic numbers of H and G equal.
This statement and the first result answer a problem of Nesetril and Rodl.
It was shown that the expected running time of a simple parallel algorithm for finding the lexicographically first maximal independent set (lfmis) in a random simple graph is logarithmic in the input size.
We will prove a generalization of this result. We show that if a random k-uniform hypergraph has vertex set {1, 2, ..., n} and its edges are chosen independently with probability p, then our algorithm finds the lfmis in O(log^3 n / log log n) expected time. The hidden constant is independent of k and p.
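The set the algorithm computes can be defined by a simple sequential scan: visit the vertices in order 1, ..., n and keep a vertex unless adding it would place some edge entirely inside the chosen set. A minimal sketch of this scan, with a made-up 3-uniform example (the talk analyzes a parallel version of this process on random hypergraphs; only the sequential definition is shown here):

```python
# Lexicographically first maximal independent set (lfmis) of a hypergraph.
# A set is independent when no edge is entirely contained in it.
def lfmis(n, edges):
    # edges: iterable of frozensets of vertices drawn from {1, ..., n}
    s = set()
    for v in range(1, n + 1):
        cand = s | {v}
        # keep v unless some edge through v would now lie inside the set
        if not any(v in e and e <= cand for e in edges):
            s.add(v)
    return sorted(s)

# Hypothetical 3-uniform hypergraph on 6 vertices (illustrative only).
edges = [frozenset({1, 2, 3}), frozenset({2, 4, 5}), frozenset({3, 5, 6})]
print(lfmis(6, edges))  # → [1, 2, 4, 6]
```

Vertex 3 is rejected because {1, 2, 3} would then be a fully contained edge, and vertex 5 is rejected because of {2, 4, 5}.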
We obtain the first superpolynomial lower bounds for monotone span programs computing explicit functions. The best previous lower bound was $\Omega(n^{5/2})$ by Beimel, Gal, Paterson (FOCS'95). Our proof exploits a general combinatorial lower bound criterion from that paper. Our lower bounds are based on an analysis of Paley-type bipartite graphs via Weil's character sum estimates. We prove an $n^{\Omega (\log n / \log\log n)}$ lower bound for an explicit family of monotone Boolean functions in $n$ variables, which implies the same lower bound for the size of monotone span programs for the clique problem. Our results give the first superpolynomial lower bounds for linear secret sharing schemes.
We demonstrate the surprising power of monotone span programs by exhibiting a function computable in this model in linear size while requiring superpolynomial size monotone circuits and exponential size monotone formulae.
Joint work with Laszlo Babai and Avi Wigderson.
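A monotone span program over a field assigns one or more matrix rows to each variable and accepts an input exactly when a fixed target vector lies in the linear span of the rows belonging to variables set to 1. As a toy illustration of the model (the specific matrix, field, and function here are invented, not taken from the paper), the following sketch computes the threshold function "at least 2 of 3 inputs are 1" over GF(7):

```python
# Toy monotone span program (MSP) for Th_2^3 over GF(7).  Variable i owns
# the row (1, i); the target is (1, 0).  Any two distinct rows span the
# plane, but a single row cannot reach (1, 0) -- hence threshold 2 of 3.
P = 7
ROWS = {1: (1, 1), 2: (1, 2), 3: (1, 3)}
TARGET = (1, 0)

def rank_mod_p(rows):
    # Gaussian elimination over GF(P) on a list of 2-entry rows
    m = [list(r) for r in rows]
    rank, col = 0, 0
    while rank < len(m) and col < 2:
        piv = next((i for i in range(rank, len(m)) if m[i][col] % P), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], P - 2, P)  # modular inverse via Fermat
        m[rank] = [(a * inv) % P for a in m[rank]]
        for i in range(len(m)):
            if i != rank:
                f = m[i][col]
                m[i] = [(a - f * b) % P for a, b in zip(m[i], m[rank])]
        rank += 1
        col += 1
    return rank

def msp_accepts(x):
    # accept iff TARGET lies in the span of the rows of satisfied
    # variables, i.e., appending TARGET does not increase the rank
    rows = [ROWS[i] for i in (1, 2, 3) if x[i]]
    return rank_mod_p(rows) == rank_mod_p(rows + [TARGET])

for bits in [(1, 1, 0), (1, 0, 0), (0, 0, 0)]:
    print(bits, msp_accepts({i + 1: bits[i] for i in range(3)}))
```

The size of a span program is its number of rows; the lower bounds in the talk concern how this quantity must grow for certain explicit monotone functions.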
In this work, we study the problem of nonuniform classification noise and introduce a new model in which the learner knows a small partition of the examples such that examples from the same partition have the same misclassification rate. This model captures, for example, the case in which training data has differing rates of false positives as compared to false negatives. We show how to construct learning algorithms for this type of noise from SQ learning algorithms, thus showing how to learn any SQ learnable task in this setting. These results also hold when the learner knows only a polynomial-sized set of partitions to which the actual partition belongs.
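The fact that makes statistical-query estimates robust to classification noise is that a known flip rate can be inverted: if labels in a part of the partition are flipped independently with known rate eta, then E[noisy label] = p(1 - 2*eta) + eta, so the clean statistic is (p_noisy - eta) / (1 - 2*eta). A minimal simulation of this correction applied per part (the two-part partition, its rates, and the target probabilities are invented for illustration):

```python
# Per-partition noise correction: recover P[label = 1] from noisy labels
# when each part's misclassification rate eta is known.  Parts and rates
# below are made up for the demonstration.
import random

random.seed(0)
parts = {"A": (0.30, 0.05), "B": (0.70, 0.20)}  # part: (true P[label=1], eta)
results = {}
for name, (p_true, eta) in parts.items():
    n = 200_000
    noisy = sum(
        (random.random() < p_true) ^ (random.random() < eta)  # flip w.p. eta
        for _ in range(n)
    )
    p_noisy = noisy / n
    results[name] = (p_noisy - eta) / (1 - 2 * eta)  # invert the noise
    print(f"part {name}: corrected {results[name]:.3f} vs true {p_true}")
```

Running each query separately on each part of the partition and correcting with that part's rate is what lets an SQ algorithm tolerate differing false-positive and false-negative rates.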
We present the following results:

1. A new, simple algorithm for learning multiplicity automata. As a result, we improve the complexity for all classes that use the previously known algorithms, and we also obtain the best query complexity for several classes known to be learnable by other methods, such as decision trees and polynomials over GF[2].
2. We prove the learnability of some new classes that were not previously known to be learnable: most notably, the class of polynomials over finite fields, the class of bounded-degree polynomials over infinite fields, the class of XORs of terms, and a certain class of decision trees.
3. While multiplicity automata have been shown useful for proving the learnability of some subclasses of DNF formulae and various other classes, we study the limitations of this method. We prove that it cannot be used to resolve some other open learnability questions, such as the learnability of general DNF formulae. These results are proven by exhibiting functions in the relevant classes that require multiplicity automata with a superpolynomial number of states.
Joint work with Francesco Bergadano, Nader H. Bshouty, Eyal Kushilevitz and Stefano Varricchio.
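A multiplicity automaton computes f(w) as a product of matrices, one per letter of the word w, bracketed by two fixed vectors: f(w) = lambda · M(w_1)···M(w_n) · gamma, and its size is the matrix dimension. As a toy sketch of the model (this particular automaton and function are illustrative, not from the paper), a 2-state multiplicity automaton over the rationals computes the number of 1s in a binary word:

```python
# Toy multiplicity automaton: 2x2 upper-triangular matrices accumulate a
# running count of 1s in the top-right entry of the matrix product.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = {"0": [[1, 0], [0, 1]],   # identity: a 0 leaves the count alone
     "1": [[1, 1], [0, 1]]}   # reading a 1 adds one to the count

def f(word):
    lam, gam = [1, 0], [0, 1]
    prod = [[1, 0], [0, 1]]
    for ch in word:
        prod = mat_mul(prod, M[ch])
    # lam · prod · gam extracts the top-right entry of the product
    return sum(lam[i] * sum(prod[i][j] * gam[j] for j in range(2))
               for i in range(2))

print(f("10110"))  # → 3
```

The state-count lower bounds in result 3 say that for functions such as general DNF formulae, no such small matrix representation exists, so the learning algorithm of result 1 cannot be applied.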