University Inn and Conference Center

New Brunswick Campus, Rutgers University

**Organizers:** **Endre Boros**, Director of RUTCOR, Endre.Boros at rutcor.rutgers.edu

Title: Threshold decision lists

This talk discusses some questions about threshold (or linear) decision lists. These are decision lists in which the constituent tests are linear threshold functions. Such functions generalize decision lists and threshold functions and can be thought of as applying recursive linear separation to classify points of their domain.
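As a concrete illustration (ours, not from the talk), a threshold decision list can be evaluated by scanning its threshold tests in order and returning the label of the first test that fires; the hypothetical `evaluate` helper below sketches this recursive-separation view:

```python
def evaluate(decision_list, default, x):
    """Return the label attached to the first threshold test that x satisfies.

    Each node is (weights w, threshold t, label): the test fires when
    w . x >= t.  Points passing no test receive the default label.
    """
    for w, t, label in decision_list:
        if sum(wi * xi for wi, xi in zip(w, x)) >= t:
            return label
    return default

# Two nested linear separations of {0,1}^2 computing the equality function,
# which no single threshold function can compute.
dl = [((1, 1), 2, 1),   # x1 + x2 >= 2        -> class 1
      ((1, 1), 1, 0)]   # else, x1 + x2 >= 1  -> class 0
print(evaluate(dl, 1, (1, 1)))  # 1
print(evaluate(dl, 1, (1, 0)))  # 0
print(evaluate(dl, 1, (0, 0)))  # 1 (default)
```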

Title: Nondisjoint decomposition of monotone Boolean functions

We discuss nondisjoint decompositions of a monotone Boolean function f of the form f = F(A, B, g(B, C)), where A, B, and C are disjoint sets of variables. If B is the empty set, then the theory of such decompositions, studied in game theory, reliability theory, and elsewhere, is well understood. In the general case, however, little is known. We present some results on the lattice of components g of f for a fixed partition A, B, C, as well as on the properties of the so-called modular (bound) sets of f.

Title: Large Margin LAD Models and LAD-based Regression

We discuss some recent developments in Logical Analysis of Data (LAD), with a focus on (i) the use of the concept of large margin classifiers to build parameter-free LAD classification models, and (ii) the extension of the LAD methodology to address regression problems. Both ideas rely on simple optimization models, strongly based on the original formulation proposed by Hammer et al. in the seminal LAD papers.

We compare the proposed techniques to standard classification and regression techniques through a series of experiments on datasets from the UCI repository. Finally, we discuss some extensions of this work and of the LAD methodology in general.

Title: What Can and What Has To Be Learned From Data

Without diminishing the importance of experimental, expectation-based evaluation of learning approaches, we think that there is also a need for more exact evaluations. After all, not only are there regularities which we may learn from data; there might also be special structures hidden in the data which we may view as necessary not to miss. For rule-based learning approaches we propose a notion of "justifiability", and argue that this notion is compatible with existing methods, does not lead to over-fitting, and helps to identify what a "good and professional" learning method is.

(Joint (old) work with Yves Crama, Peter L. Hammer, Toshi Ibaraki, Alex Kogan, and Kaz Makino.)

Title: Molecular Computing and Probabilistic Circuits

Why does the functioning of biological systems seem miraculous? One reason is that we do not know how to design systems that do what cells do, namely molecular computing. In contrast, we know how to design highly complex information systems. The fundamental reason for the successful evolution of information systems is the development of mathematical abstractions that enable efficient and robust design processes. In particular, Claude Shannon in his classic 1938 master's thesis demonstrated that all Boolean functions can be computed by relay circuits, leading to the development of digital logic and resulting in computer chips with over a billion transistors. Motivated by the challenge of analyzing stochastic gene regulatory networks, we generalize the notion of logic design to probabilistic logic design. Specifically, we consider relay circuits where deterministic switches are replaced by probabilistic switches. We present efficient algorithms for synthesizing probabilistic relay circuits that can compute arbitrary probability distributions.
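As a toy illustration of the underlying semantics (a standard relay-circuit fact, not the speaker's construction): if a probabilistic switch is closed with probability p, a series connection conducts with the product of the probabilities, and a parallel connection conducts when at least one switch is closed:

```python
def series(p, q):
    """Probability that a series connection of two independent probabilistic
    switches conducts: both must be closed."""
    return p * q

def parallel(p, q):
    """Probability that a parallel connection conducts: at least one closed."""
    return 1 - (1 - p) * (1 - q)

# Composing a fair (p = 1/2) switch with itself already yields new dyadic
# probabilities, a starting point for synthesizing richer distributions.
print(series(0.5, 0.5))    # 0.25
print(parallel(0.5, 0.5))  # 0.75
```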

Title: Exclusive and essential sets of implicates of a Boolean function

(joint work with Endre Boros, Alex Kogan, Petr Kucera and Petr Savicky)

In this talk we shall study relationships between CNF representations of a given Boolean function f and certain sets of implicates of f. We introduce two definitions of sets of implicates which are both based on the properties of resolution. The first type of sets, called exclusive sets of implicates, is shown to have a functional property useful for decompositions. The second type of sets, called essential sets of implicates, is proved to possess an orthogonality property, which implies that every CNF representation and every essential set must intersect. The latter property then leads to many interesting consequences, which we plan to discuss in the talk.

Title: Those Were the Days... RUTCOR 1983-1987

RUTCOR turned 25 years old in 2008. I was fortunate enough to be among the very first students to attend the brand new Interdisciplinary Program in Operations Research, from 1983 to 1987, under the guidance of Peter Hammer. In this presentation, I would like to share with the audience some memories of those exciting times.

Title: Models of Voting Power in Corporate Networks

In this talk, we propose to rely on power indices to measure the amount of control held by individual shareholders in corporate networks. The value of the indices is determined by a complex voting game viewed as the composition of interlocked weighted majority games; the compound game reflects the structure of shareholdings. We describe an algorithmic approach that makes it possible to deal with the complexity of computing power indices in real-world financial networks, and we identify some of the research questions which arise in connection with this approach.

(Joint work with Luc Leruth.)
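For a flavor of the basic building block (our toy sketch, not the authors' algorithm for compound games), the raw Banzhaf power index of a single weighted majority game can be computed by brute force over coalitions:

```python
from itertools import product

def banzhaf(weights, quota):
    """Normalized Banzhaf indices of the weighted majority game [quota; weights]:
    for each player, the share of swings, i.e. winning coalitions that become
    losing when that player leaves.  Brute force over all 2^n coalitions."""
    n = len(weights)
    swings = [0] * n
    for coalition in product((0, 1), repeat=n):
        total = sum(w for w, c in zip(weights, coalition) if c)
        for i in range(n):
            if coalition[i] and total >= quota and total - weights[i] < quota:
                swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

# [51; 50, 30, 20]: player 1 holds 50% of the votes but 60% of the power,
# while the two smaller players are interchangeable.
print(banzhaf([50, 30, 20], 51))  # [0.6, 0.2, 0.2]
```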

Title: Adding Unsafe Constraints to Improve SAT Solver Performance

We examine the structure of CNF representations of common problems in Formal Verification, such as bounded model checking. We observe that this structure arises in a variety of other difficult problems as well. We show why such structures are difficult for current satisfiability solvers: namely, inferences typically can be discovered only after a large number of variables have been set. Thus, for example, clause learning is not as effective on such structures as we would like it to be. We propose some ideas which may lead to significantly reduced solution times for these and other problems. These have to do with guessing inferences a priori based on solution structure, not formula structure as is typically done in preprocessing. That is, the structure of the solutions found for smaller versions of a problem is examined for necessary patterns of variable values - either to support or reject the solutions - and constraints are added to the original formula to enforce inclusion or rejection of these patterns. Adding guessed constraints can reduce search enormously but may eliminate some solution traces, so at some search depth these guessed constraints are removed and search continues to the end. The trick is to prevent all solutions from being lost if at least one exists. For Formal Verification problems this means, at worst, getting a result with a certain confidence. Some other problems can be solved more directly. For example, this idea was used to find the van der Waerden number W(2,6) = 1132 by first getting a bound and then using solution patterns to reduce a full search to something manageable. We are still refining this technique.

Title: Improved Branch and Bound Methods for Maximum Monomial Agreement

(joint work with Jonathan Eckstein)

The NP-hard Maximum Monomial Agreement (MMA) problem consists of finding a single logical conjunction that best fits a weighted dataset of "positive" and "negative" binary vectors. Computing a classifier by boosting methods in machine learning involves a maximum agreement subproblem at each iteration, although typically these high-dimensional subproblems have been solved by heuristic methods. Here, we describe an exact branch and bound method for maximum agreement over Boolean monomials, improving on earlier work of Goldberg and Shan. In particular, we develop a tighter upper bounding function and an improved branching procedure that exploits knowledge of the bound and the dataset, while having a lower branching factor. Experimental results show that the new method is able to solve larger problem instances and runs faster within a linear programming boosting procedure applied to medium-sized datasets from the UCI repository.

Title: Correlation immune functions and learning

A Boolean function f is correlation immune if each input variable of f is independent of the output of f with respect to the uniform distribution on the domain of f. The parity function is correlation immune, as is the Boolean function that takes the value 1 exactly when all of its input variables are equal. Correlation immune functions have been extensively studied in cryptography; the lack of correlation between individual inputs and the output is a desirable property of an encoding function. Correlation immune functions pose a problem for standard machine learning algorithms, though, since this same lack of correlation makes it more difficult to distinguish between essential (relevant) and inessential variables of a target function.

We begin by reviewing previous results on correlation immune functions. We then address the problem of identifying essential variables of correlation immune functions. This is a difficult task if only uniform random examples of the target function are available. Using a new structural lemma on Boolean functions, we show how access to examples from non-uniform distributions can be exploited in order to efficiently accomplish the task.
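For small n, the definition above can be checked directly by brute force; the following sketch (ours, illustrative only) uses exact integer counts to test whether each variable is independent of the output:

```python
from itertools import product

def is_correlation_immune(f, n):
    """Brute-force test of correlation immunity: under the uniform distribution,
    x_i is independent of f iff Pr[f=1 | x_i=1] = Pr[f=1], i.e. twice the number
    of truepoints with x_i = 1 equals the total number of truepoints."""
    points = list(product((0, 1), repeat=n))
    total = sum(f(x) for x in points)              # |f^{-1}(1)|
    return all(2 * sum(f(x) for x in points if x[i] == 1) == total
               for i in range(n))

parity = lambda x: sum(x) % 2
all_equal = lambda x: int(len(set(x)) == 1)    # 1 iff all inputs are equal
majority = lambda x: int(2 * sum(x) > len(x))

print(is_correlation_immune(parity, 3))     # True
print(is_correlation_immune(all_equal, 3))  # True
print(is_correlation_immune(majority, 3))   # False
```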

Title: Writing Good Logic Models

Logic-based modeling can result in problem formulations that are natural and easy to debug. To ease solution, they should be written or automatically reformulated to achieve two goals: to be as nearly consistent as possible (in the constraint programming sense) and to have a tight continuous relaxation. The former reduces search and backtracking, and the latter provides better relaxation bounds. This talk shows how to accomplish these goals for formulas of propositional logic, cardinality clauses, and implications between cardinality clauses.

Title: Participating in timetabling competition ITC2007 with a general purpose CSP solver

We report our experience of participating in ITC2007 with a general purpose CSP (constraint satisfaction problem) solver that we have developed in recent years using tabu search ideas. In particular, we describe how the complicated hard and soft constraints of timetabling problems are formulated and handled.

Title: Fourier Analysis and Boolean Function Learning

Fourier analysis has informed a number of positive and negative results regarding the learnability of Boolean functions. In turn, learning research has led to the discovery of Fourier properties of various Boolean function classes. I will survey Fourier properties and associated learning results for several interesting Boolean classes, including monotone functions, disjunctive normal form (DNF) expressions, constant-depth circuits more generally, and halfspaces. Some flavor of the underlying Fourier analysis will be included.
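As a taste of the machinery (a standard computation, with our illustrative code, not material from the talk), the Fourier coefficients of a function f : {0,1}^n -> {-1,+1} are its correlations with the parity characters, and parity itself puts all of its Fourier weight on the top coefficient:

```python
from itertools import product

def fourier_coefficients(f, n):
    """All Fourier (Walsh) coefficients of f : {0,1}^n -> {-1, +1}:
    hat_f(S) = E_x[ f(x) * chi_S(x) ], with chi_S(x) = (-1)^(sum of x_i, i in S).
    The set S is encoded as a 0/1 indicator tuple."""
    points = list(product((0, 1), repeat=n))
    coeffs = {}
    for S in product((0, 1), repeat=n):
        correlation = sum(f(x) * (-1) ** sum(xi for xi, si in zip(x, S) if si)
                          for x in points)
        coeffs[S] = correlation / len(points)
    return coeffs

# Parity in the +/-1 convention: all Fourier weight sits on S = {1,...,n}.
parity = lambda x: (-1) ** sum(x)
c = fourier_coefficients(parity, 3)
print(c[(1, 1, 1)])  # 1.0
print(c[(1, 0, 0)])  # 0.0
```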

Title: Reverse-engineering Country Risk Ratings: A Combinatorial Non-recursive Model

The central objective of this paper is to develop a transparent, consistent, self-contained, and stable country risk rating model, closely approximating the country risk ratings provided by Standard and Poor's (S&P). The model should be non-recursive, i.e., it should not rely on the previous years' S&P ratings. The set of variables selected here includes not only economic-financial but also political variables. We propose a new model based on the novel combinatorial-logical technique of Logical Analysis of Data (which derives a new rating system only from the qualitative information representing pairwise comparisons of country riskiness). We also develop a method for deriving a rating system with any desired level of granularity. The accuracy of the proposed model's predictions, measured by its correlation coefficients with the S&P ratings, and confirmed by k-fold cross-validation, exceeds 95%. The stability of the constructed non-recursive model is shown in three ways: by the correlation of its predictions with those of other agencies (Moody's and The Institutional Investor), by predicting the 1999 ratings using the non-recursive model derived from the 1998 dataset applied to the 1999 data, and by successfully predicting the ratings of several previously non-rated countries. This study provides new insights into the importance of variables, supporting the necessity of including in the analysis, in addition to economic variables, also political variables (in particular "political stability"), and identifying "financial depth and efficiency" as a new critical factor in assessing country risk.

Title: Recognition of positive k-interval functions

Intervals [a_1,b_1],...,[a_k,b_k] of n-bit integers represent a Boolean function f on n variables if these intervals span exactly the integers whose binary representations are truepoints of f. The correspondence between integers and Boolean vectors, and hence the interval representation itself, depends on the order that determines the significance of the individual bits. We consider the problem of recognizing whether a given positive Boolean function can be represented by a specified number of intervals with respect to a suitable order of bits. We present algorithms for threshold and regular functions, an asymptotically optimal algorithm recognizing whether a function given by a positive DNF can be represented by two intervals, and some ideas on generalizing this algorithm to any number of intervals.
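To make the definition concrete, here is a small illustrative helper (ours, not from the talk) that counts the maximal intervals spanned by the truepoints of a function under a fixed bit order; note how the count changes when the roles of the bits are swapped:

```python
def interval_count(f, n):
    """Number of maximal intervals of consecutive integers spanned by the
    truepoints of f, reading each 0/1 vector as an n-bit integer with x[0]
    as the most significant bit."""
    true_ints = [v for v in range(2 ** n)
                 if f(tuple((v >> (n - 1 - i)) & 1 for i in range(n)))]
    if not true_ints:
        return 0
    # One interval, plus one more for every gap between consecutive truepoints.
    return 1 + sum(1 for a, b in zip(true_ints, true_ints[1:]) if b > a + 1)

# x1 OR x2: truepoints 01, 10, 11 -> integers 1, 2, 3 -> a single interval.
print(interval_count(lambda x: x[0] | x[1], 2))  # 1
# The function x2 needs two intervals (integers 1 and 3), while x1 needs only
# one (integers 2 and 3): the count depends on the chosen bit order.
print(interval_count(lambda x: x[1], 2))  # 2
print(interval_count(lambda x: x[0], 2))  # 1
```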

Title: Coverable Boolean Functions

(joint work with Endre Boros, Ondrej Cepek, and Alex Kogan)

An essential set of implicates E of a given Boolean function f is a set of implicates, each derivable by resolution from the prime implicates of f, such that for every implicate C which is a resolvent of implicates C_1 and C_2, if C belongs to E, then at least one of the parent clauses C_1 and C_2 must belong to E as well. These sets have many interesting properties, among them the fact that the number of pairwise disjoint essential sets of implicates of f is less than or equal to the number of clauses in a minimum CNF. A class of functions is called "coverable" if for every function in the class equality holds in the above inequality. We shall show in this talk that the classes of acyclic, quasi-acyclic, quadratic, and CQ functions are coverable. Interestingly, for each of these classes we also have a polynomial time minimization algorithm. This talk is a continuation of the talk to be given by Ondrej Cepek.

Title: Large-Scale Optimization Strategies for Optimal Cancer Treatment Design

Treatment planning for radiation therapy is inherently complex due to the number of input parameters involved. The complexity is amplified by the uncertainty of target shapes due to organ motion, by dose estimation, by availability of biological information, and by competing multiple clinical objectives within the planning procedure. In this talk, we describe some of our experience in cancer treatment design related to these issues. Various optimization methods will be contrasted, and computational challenges will be discussed.

Title: Combinatorial Patterns for Probabilistically Constrained Optimization Problems

We propose a new framework for the solution of probabilistically constrained optimization problems, extending some recent developments in combinatorial pattern theory. The method involves binarizing the probability distribution and generating a consistent partially defined Boolean function (pdBf) representing the combination (F,p) of the binarized probability distribution F and the enforced probability level p. This pdBf is represented as a disjunctive normal form taking the form of a collection of combinatorial patterns. We propose a new integer programming-based method for the derivation of combinatorial patterns and present several methods for constructing a disjunctive normal form that defines necessary and sufficient conditions for the probabilistic constraint to hold. The obtained disjunctive forms are then used to generate deterministic reformulations of the original stochastic problem. The method is implemented for the solution of a numerical problem. Extensions to the present study are discussed.

Title: On various relaxations based on reformulation-linearization for 0-1 MIPs and their specialization to pseudoboolean optimization

(joint work with Endre Boros, Ondrej Cepek, and Alex Kogan)

The use of valid inequalities deduced from optimizing over the rank-1 Lift & Project closure has been shown to provide good quality bounds for some classes of quadratic pseudoboolean optimization problems such as MAX-2SAT. We will discuss links with other families of relaxations, in particular the RLT hierarchy (Sherali & Adams, 1990). We show that known simple links between the rank-1 Sherali-Adams (RLT) relaxation and the rank-1 Lift-and-Project closure do not readily extend to rank 2 or more. To investigate the latter case, we introduce a new hierarchy of relaxations, intermediate in strength between the RLT hierarchy and the L&P hierarchy. This new hierarchy is shown to coincide with RLT for MIPs arising from pseudoboolean function minimization. Some preliminary computational results involving rank-2 relaxations will be reported, concerning both 0-1 optimization problems without special structure and MIP formulations related to linearly constrained pseudoboolean optimization.

Title: Boolean decision trees: problems and results, old and new

Boolean decision trees are perhaps the simplest algorithmic model for computing Boolean functions. In this model, the cost of a computation is simply the number of input bits that are accessed. There is a natural notion of a randomized decision tree, which makes use of randomness in choosing which input bits to look at. As with any model of computation, a primary goal of the research is to obtain upper and lower bounds for specific functions and general classes of functions. The methods are quite varied, including adversary arguments, algebraic/combinatorial topology, information theory, and Fourier analysis.

In this talk, I'll give an overview of some recent and not-so-recent results, and some open problems.

Title: The Largest Number of Prime Implicants of k-term DNF and Cube Partitions

It is an often rediscovered result that every k-term DNF can have at most 2^k - 1 prime implicants, and that this bound is sharp. We complete this result by giving an explicit description of all k-term DNFs with the maximal number of prime implicants: these are the DNFs that can be represented as a certain kind of decision tree.

The proof uses a splitting lemma for partitions of the hypercube into neighboring cubes. We then consider the splittability of general cube partitions, measuring the influence of variables for the cube partition. We also discuss the connection between this topic and decompositions of complete graphs into complete bipartite graphs.

This is joint work with Balazs Szorenyi and Gyorgy Turan.
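The bound can be checked experimentally for tiny cases; the brute-force sketch below (ours, exponential-time and illustrative only) enumerates prime implicants and exhibits the sharp value 2^2 - 1 = 3 on a 2-term DNF:

```python
from itertools import product

def prime_implicants(f, n):
    """Brute-force enumeration of the prime implicants of f over n variables.
    A term is a dict {variable index: required value}; it is an implicant if
    f = 1 at every point consistent with it, and prime if no literal can be
    dropped while keeping it an implicant."""
    points = list(product((0, 1), repeat=n))

    def is_implicant(term):
        return all(f(x) for x in points
                   if all(x[i] == v for i, v in term.items()))

    terms = [{i: v for i, v in enumerate(a) if v is not None}
             for a in product((0, 1, None), repeat=n)]   # None = variable absent
    implicants = [t for t in terms if t and is_implicant(t)]
    return [t for t in implicants
            if not any(is_implicant({i: v for i, v in t.items() if i != j})
                       for j in t)]

# f = x1 x2  OR  x1' x3, a 2-term DNF: consensus adds the term x2 x3,
# for a total of 3 = 2^2 - 1 prime implicants.
f = lambda x: (x[0] and x[1]) or (not x[0] and x[2])
print(len(prime_implicants(f, 3)))  # 3
```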

Title: Structures of Renamable Horn and q-Horn Formulae

In this talk, we will revisit the problem of recognizing renamable Horn and q-Horn formulae, my first research topic on Boolean functions supervised by Dr. Hammer. Using a graph associated with any disjunctive normal form, we identify the structures of renamable Horn and q-Horn formulae, in terms of paths and cycles of the graph, which can be used to recognize renamable Horn or q-Horn formulae in linear time.

Title: On approximate Horn minimization

It is known that minimizing Horn formulas is NP-hard, but the problem has an efficient O(n)-approximation algorithm, where n is the number of variables. We consider lower and upper bounds for the approximability of different versions.

Joint work with Amitava Bhattacharya, Bhaskar DasGupta and Dhruv Mubayi.

Title: Multi-level logic with constant depth: recent research from Italy

(joint work by Anna Bernasconi, Valentina Ciriani, Roberto Cordone, Fabrizio Luccio, Linda Pagli, Tiziano Villa)

In this talk we survey recent research on logic representations with 3 and 4 levels, which trade off optimality against computing time, exploring algebraic forms between classical sum-of-products and the still largely uncharted domain of unrestricted multi-level logic. In particular we present: EXOR-Sum-of-Products (EXOR-SOP or SPP) forms and bounded-fanin SPPs; EXOR-Projected Sum-of-Products (EP-SOP), consisting of a four-level network; and Projected Sum-of-Products (P-SOP), based on projections of minimal SOP forms onto generic subsets of the Boolean space.

Available exact and heuristic minimization algorithms will be mentioned, together with complexity results.

Title: On the Chvatal-Complexity of Binary Knapsack Problems

There is a famous result of Chvatal giving a theoretical iterative procedure that determines the integer hull of a polyhedral set. It starts from the polyhedral set itself, and in each iteration new cuts are introduced. This result is regarded as the theoretical foundation of the Gomory method of integer programming. The simplest integer programming problem is the knapsack problem. The number of iterations in Chvatal's procedure is investigated in the case of some special binary knapsack problems.

Title: The Expressive Power of Binary Submodular Functions

(joint work with David Cohen and Peter Jeavons)

It has previously been an open problem whether all Boolean submodular functions can be decomposed into a sum of binary submodular functions over a possibly larger set of variables. This problem has been considered within several different contexts in computer science, including computer vision, artificial intelligence, and pseudo-Boolean optimisation. Using a connection between the expressive power of valued constraints and certain algebraic properties of functions, we answer this question negatively.

Our results have several corollaries. First, we characterize precisely which submodular functions of arity 4 can be expressed by binary submodular functions. Next, we identify a novel class of submodular functions of arbitrary arities which can be expressed by binary submodular functions, and therefore minimised efficiently using a so-called expressibility reduction to the Min-Cut problem. More importantly, our results imply limitations on this kind of reduction and establish for the first time that it cannot be used in general to minimise arbitrary submodular functions. Finally, we refute a conjecture of Promislow and Young on the structure of the extreme rays of the cone of Boolean submodular functions.


Document last modified on December 18, 2008.