DIMACS Mixer Series featuring talks by Jeff Kahn (Rutgers University) and five of the DIMACS postdocs, Lubos Thoma, Hui Chen, Anna Gal, Scott Decatur and Amos Beimel.


DIMACS invites you to the second event in its Fall Mixer Series. All DIMACS members, visitors, postdocs, and students are invited to join us on Tuesday afternoon, October 8 at Bell Labs in Murray Hill.

We have a very nice program, featuring talks by DIMACS member Jeff Kahn and five of our postdocs, Lubos Thoma, Hui Chen, Anna Gal, Scott Decatur and Amos Beimel.

The purpose of this mixer is to introduce the new postdocs and DIMACS visitors to each other and to DIMACS members from the different DIMACS institutions. The program starts with coffee at 1:45. Titles and abstracts of the talks are attached. More details will soon be available on the DIMACS web pages (http://www.dimacs.rutgers.edu).

Those wishing to attend the mixer should send mail (preferably by October 3) indicating their country of citizenship to Sanjeev Khanna (sanjeev@research.bell-labs.com), so that badges can be prepared. Directions to Bell Labs are available from the DIMACS homepage (http://dimacs.rutgers.edu/archive/Directions/Directions3). For further information about the mixer, contact Mona Singh (mona@cs.princeton.edu).

We hope to schedule additional mixers at other DIMACS locations later in the year. I look forward to seeing you at this mixer.

Fred Roberts, Director

Program for DIMACS Mixer at Bell Labs, Tuesday October 8:

    1:45              Coffee 

    2:00 - 2:05       Fred Roberts
                      DIMACS Director, Rutgers University
                      Opening Remarks.

    2:05 - 2:55       Jeff Kahn
                      Co-Chair, Focus on Discrete Probability
                      Rutgers University
                      "Random Matchings; Random Forests?"


    2:55 - 3:20       Lubos Thoma
                      Rutgers University 
                      "On the Complexity of Cover Graph Recognition 
                        for Finite Lattices."

    3:20 - 3:45       Hui Chen
                      Rutgers University
                      "Analysis of Algorithms for Finding Maximal
                        Independent Set in Random Hypergraphs."

                   
    3:45 - 4:00       Break

    4:00 - 4:25       Anna Gal
                      Princeton University
                      "Superpolynomial Lower Bounds for Monotone 
                        Span Programs."

    4:25 - 4:50       Scott Decatur
                      Rutgers University
                      "PAC Learning with Nonuniform Classification Noise
                        via Statistical Queries."


    4:50 - 5:15       Amos Beimel
                      Rutgers University
                      "On the Applications of Multiplicity Automata
                        in Learning."



Host: Sanjeev Khanna


Title:

Random Matchings; Random Forests?

Speaker:

Jeff Kahn, Rutgers University

Abstract:

A matching chosen at random (according to uniform or, more generally, a "hard-core" distribution) from the set of all matchings of a large graph or multigraph is surprisingly nicely behaved in various ways. We will describe some of these and perhaps some consequences, and will suggest some analogous possibilities for random forests and other objects.
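A toy illustration of the distribution in question (not of the techniques in the talk): the sketch below enumerates every matching of a small graph and samples one with probability proportional to the product of its edge activities; setting all activities to 1 recovers the uniform distribution on matchings. The graph and the activity values are made up for the example.

    # Sample a matching from a hard-core distribution by brute-force enumeration.
    # Suitable only for toy graphs; the point is the definition, not efficiency.
    import itertools
    import math
    import random

    def all_matchings(edges):
        """Every set of pairwise disjoint edges, including the empty matching."""
        matchings = []
        for r in range(len(edges) + 1):
            for subset in itertools.combinations(edges, r):
                endpoints = [v for e in subset for v in e]
                if len(endpoints) == len(set(endpoints)):   # no shared endpoints
                    matchings.append(subset)
        return matchings

    def sample_hardcore_matching(edges, lam):
        """Pick a matching M with probability proportional to prod of lam[e], e in M."""
        matchings = all_matchings(edges)
        weights = [math.prod(lam[e] for e in m) for m in matchings]
        return random.choices(matchings, weights=weights, k=1)[0]

    # Made-up example: a 4-cycle with one edge given a larger activity.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    lam = {e: 1.0 for e in edges}
    lam[(0, 1)] = 5.0            # biases the sample toward matchings containing (0, 1)
    print(sample_hardcore_matching(edges, lam))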

Title:

On the Complexity of Cover Graph Recognition for Finite Lattices.

Speaker:

Lubos Thoma, Rutgers University

Abstract:

The problem of characterizing the undirected graphs that are cover graphs of finite posets was first posed by Ore. The corresponding decision problem was shown to be NP-complete by Brightwell and by Nesetril and Rodl.

In the talk, I address the following decision problem:

Instance: an undirected graph G.
Problem: is G a cover graph of a lattice?

In joint work with V. Rodl, we proved that this problem is NP-complete. On the other hand, it follows from Alvarez' theorem that recognizing cover graphs of modular or distributive lattices is in the complexity class P.

An important tool in the proof of the first result is the following statement which may be of independent interest:

Given an integer l, l >= 3, there exists an algorithm which, for a graph G with n vertices, yields in time polynomial in n a graph H whose number of vertices is polynomial in n, such that girth(H) >= l and the chromatic numbers of H and G are equal.

This statement and the first result answer a problem of Nesetril and Rodl.
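To make the terminology concrete, here is a small sketch (not part of the talk) that computes the cover graph of a finite poset directly from its order relation: the covering pairs are the comparable pairs with no element strictly between them. The divisor lattice of 12 is a made-up example; the decision problem above runs in the harder, opposite direction, asking whether a given undirected graph arises this way from some lattice.

    # Compute the covering pairs (the edges of the cover graph) of a finite poset
    # given as a strict order predicate. Brute force, for illustration only.
    def cover_graph(elements, less_than):
        covers = set()
        for x in elements:
            for y in elements:
                if less_than(x, y) and not any(
                    less_than(x, z) and less_than(z, y) for z in elements
                ):
                    covers.add((x, y))
        return covers

    # Made-up example: the lattice of divisors of 12 ordered by divisibility.
    divisors = [1, 2, 3, 4, 6, 12]
    print(sorted(cover_graph(divisors, lambda a, b: a != b and b % a == 0)))
    # -> [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]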

Title:

Analysis of Algorithms for Finding Maximal Independent Set in Random Hypergraphs.

Speaker:

Hui Chen, Rutgers University

Abstract:

It is well known that finding a maximal independent set in a graph is in the class NC, and that finding a maximal independent set in a hypergraph of fixed dimension is in RNC. It is not known whether the latter problem remains in NC when the dimension is part of the input. We study the problem when the instances are chosen at random.

It was shown that the expected running time of a simple parallel algorithm for finding the lexicographically first maximal independent set (lfmis) in a random simple graph is logarithmic in the input size.

We prove a generalization of this result. We show that if a random k-uniform hypergraph has vertex set {1, 2, ..., n} and its edges are chosen independently with probability p, then our algorithm finds the lfmis in O(log^3 n / log log n) expected time. The hidden constant is independent of k and p.
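As a point of reference, the sequential greedy rule below (a sketch for illustration, not the parallel algorithm analyzed in the talk) computes the lfmis: scan the vertices in increasing order and keep a vertex whenever adding it does not complete a hyperedge inside the current set. The random k-uniform hypergraph at the end uses made-up values of n, k and p.

    # Sequential computation of the lexicographically first maximal independent set
    # (lfmis) of a hypergraph: a set of vertices is independent if it contains no
    # hyperedge entirely.
    import itertools
    import random

    def lfmis(n, edges):
        """edges: a list of frozensets of vertices drawn from {0, ..., n-1}."""
        chosen = set()
        for v in range(n):
            candidate = chosen | {v}
            if not any(e <= candidate for e in edges):   # no hyperedge fully inside
                chosen.add(v)
        return chosen

    # Random k-uniform hypergraph: each k-subset is an edge independently with prob p.
    n, k, p = 12, 3, 0.1
    edges = [frozenset(e) for e in itertools.combinations(range(n), k)
             if random.random() < p]
    print(sorted(lfmis(n, edges)))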

Title:

Superpolynomial Lower Bounds for Monotone Span Programs.

Speaker:

Anna Gal, Princeton University

Abstract:

The model of span programs was introduced by Karchmer and Wigderson in 1993. It is a computational model based on linear algebra with applications to lower bounds in other models, including Boolean formulae. The monotone version of this model is strongly related to the cryptographic problem of secret sharing.
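As a rough illustration of the model (a made-up example, not the construction from the talk), the sketch below evaluates a small monotone span program: rows of a matrix are labeled by variables, and an input is accepted exactly when a fixed target vector lies in the span of the rows whose variables are set to 1. The example works over the rationals and happens to compute the majority of three bits; real constructions fix a finite field.

    # Evaluate a monotone span program: accept x iff the target vector is in the
    # span of the rows owned by variables that are 1 in x. Membership is tested by
    # comparing ranks (fine for small integer examples over the rationals).
    import numpy as np

    def msp_accepts(rows, labels, target, x):
        """rows: 2D array; labels[i] is the variable owning row i; x: 0/1 tuple."""
        available = np.array([rows[i] for i in range(len(rows)) if x[labels[i]] == 1])
        if available.size == 0:
            return False
        with_target = np.vstack([available, target])
        return np.linalg.matrix_rank(available) == np.linalg.matrix_rank(with_target)

    # Made-up program for majority of (x0, x1, x2): any two rows span the plane
    # containing the target, while no single row does.
    rows = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])
    labels = [0, 1, 2]
    target = np.array([1.0, 1.0])
    for x in [(1, 1, 0), (1, 0, 0), (0, 1, 1), (0, 0, 1)]:
        print(x, msp_accepts(rows, labels, target, x))   # True, False, True, False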

We obtain the first superpolynomial lower bounds for monotone span programs computing explicit functions. The best previous lower bound was $\Omega(n^{5/2})$ by Beimel, Gal, Paterson (FOCS'95). Our proof exploits a general combinatorial lower bound criterion from that paper. Our lower bounds are based on an analysis of Paley-type bipartite graphs via Weil's character sum estimates. We prove an $n^{\Omega (\log n / \log\log n)}$ lower bound for an explicit family of monotone Boolean functions in $n$ variables, which implies the same lower bound for the size of monotone span programs for the clique problem. Our results give the first superpolynomial lower bounds for linear secret sharing schemes.

We demonstrate the surprising power of monotone span programs by exhibiting a function computable in this model in linear size while requiring superpolynomial size monotone circuits and exponential size monotone formulae.

Joint work with Laszlo Babai and Avi Wigderson.

Title:

PAC Learning with Nonuniform Classification Noise via Statistical Queries

Speaker:

Scott Decatur, Rutgers University

Abstract:

Valiant's PAC model of learning is a well-studied formalization of machine learning. A wide variety of extensions have been made to the PAC model in order to capture learning settings with various types of noise or errors in the data used for training. In many of these PAC extensions, learning has been shown possible for a large class of learning tasks, specifically those tasks learnable in Kearns' Statistical Query (SQ) model. One setting whose learnability has not yet been determined is nonuniform classification noise, in which training examples are misclassified randomly, but the probability of misclassification may depend on the example itself.

In this work, we study the problem of nonuniform classification noise and introduce a new model in which the learner knows a partition of the examples into a small number of classes such that examples in the same class have the same misclassification rate. This model captures, for example, the case in which the training data has a different rate of false positives than of false negatives. We show how to construct learning algorithms for this type of noise from SQ learning algorithms, thus showing how to learn any SQ-learnable task in this setting. These results also hold when the learner knows only a polynomial-sized set of candidate partitions to which the actual partition belongs.
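As a hedged illustration of the kind of correction involved (a special case invented for this note, not the general construction in the talk), the sketch below estimates the clean positive rate of a target concept from examples carrying class-dependent label noise, assuming the partition and every class noise rate eta_c < 1/2 are known: within a class, P[noisy label = 1] = eta_c + (1 - 2*eta_c)*P[f(x) = 1], so the clean value can be recovered per class and recombined by class mass.

    # Estimate P[f(x) = 1] from noisily labeled examples when the noise rate is
    # constant within each known class of a partition but may differ across classes.
    import random
    from collections import defaultdict

    def corrected_positive_rate(samples, part, eta):
        """samples: (x, noisy_label) pairs; part(x) -> class id; eta[c] -> noise rate."""
        by_class = defaultdict(list)
        for x, y in samples:
            by_class[part(x)].append(y)
        total = len(samples)
        estimate = 0.0
        for c, labels in by_class.items():
            noisy_rate = sum(labels) / len(labels)
            clean_rate = (noisy_rate - eta[c]) / (1 - 2 * eta[c])   # invert the noise
            estimate += (len(labels) / total) * min(1.0, max(0.0, clean_rate))
        return estimate

    # Hypothetical target concept and partition, invented for the example.
    def f(x):            # target: positive iff x >= 0
        return 1 if x >= 0 else 0

    def part(x):         # known partition into two classes
        return 0 if x < 5 else 1

    eta = {0: 0.3, 1: 0.1}                                  # per-class noise rates
    xs = [random.uniform(-10, 10) for _ in range(20000)]
    samples = [(x, f(x) ^ (random.random() < eta[part(x)])) for x in xs]
    print(corrected_positive_rate(samples, part, eta))      # close to the true 0.5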

Title:

On the Applications of Multiplicity Automata in Learning

Speaker:

Amos Beimel, Rutgers University

Abstract:

Recently the learnability of multiplicity automata has attracted a lot of attention, mainly because of its implications for the learnability of several classes of DNF formulae. In this work we further study the learnability of multiplicity automata. Our starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function f to the rank of a certain matrix F. With this theorem in hand, we obtain the following results:

1. A new simple algorithm for learning multiplicity automata. As a result, we improve the complexity for all the classes learned via the previously known algorithms, and also obtain the best query complexity for several classes known to be learnable by other methods, such as decision trees and polynomials over GF(2).

2. We prove the learnability of some new classes that were not previously known to be learnable, most notably the class of polynomials over finite fields, the class of bounded-degree polynomials over infinite fields, the class of XOR of terms, and a certain class of decision trees.

3. While multiplicity automata have been shown to be useful for proving the learnability of some subclasses of DNF formulae and various other classes, we study the limitations of this method. We prove that it cannot be used to resolve the learnability of some other open problems, such as the learnability of general DNF formulae. These results are proven by exhibiting functions in the above classes that require multiplicity automata with a super-polynomial number of states.

Joint work with Francesco Bergadano, Nader H. Bshouty, Eyal Kushilevitz and Stefano Varricchio.
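For readers unfamiliar with the model, the sketch below (an illustration only, not taken from the paper) evaluates a small multiplicity automaton: each letter of the alphabet is assigned an r x r matrix, and the value of a word is the product of its letters' matrices sandwiched between an initial row vector and a final column vector. The theorem cited above relates the minimal r to the rank of the matrix F with entries F[x, y] = f(xy); the toy automaton here, which counts the letter 'a', is made up for the example.

    # Evaluate a multiplicity automaton: value(w) = init @ mu(w_1) @ ... @ mu(w_n) @ final.
    import numpy as np

    def ma_value(word, mu, init, final):
        """word: a string over the keys of mu; init is a row vector, final a column vector."""
        v = init.copy()
        for letter in word:
            v = v @ mu[letter]
        return float(v @ final)

    # Made-up 2-state automaton over {a, b} computing f(w) = number of a's in w.
    mu = {
        "a": np.array([[1.0, 1.0], [0.0, 1.0]]),   # records one more 'a'
        "b": np.array([[1.0, 0.0], [0.0, 1.0]]),   # identity: 'b' contributes nothing
    }
    init = np.array([1.0, 0.0])
    final = np.array([0.0, 1.0])
    for w in ["", "aba", "bbaab"]:
        print(repr(w), ma_value(w, mu, init, final))   # 0.0, 2.0, 2.0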

