**Logical Analysis of Data**

Inessa Epstein, Rutgers University, New Brunswick, NJ

Mentor: Professor Peter Hammer, Rutgers University, New Brunswick, NJ

One of the major goals of LAD (Logical Analysis of Data) is to classify new observations in a way consistent with past classifications. The available information consists of an archive of previous observations, each of which consists of a vector of attribute values together with its outcome. A specific characteristic of LAD is the detection of logical patterns that distinguish the observations in one class from all other observations. A pattern characteristic of a specific class is a combination of attribute values (or sets of values) occurring together only in some observations of that class. An important feature of the LAD methodology is that patterns can be used to explain the results of classification to human experts by standard formal reasoning. The general purpose of LAD is to observe data for which a positive or negative result is known and to make predictions for data not in that set. Unfortunately, such predictions usually cannot be 100% accurate. What I am working on now is identifying vectors whose attributes are unlike those of the original observed data and for which, as a result, accurate predictions are difficult to make.

A **Boolean vector** is a vector whose only possible values are 0 and 1. A **partially defined Boolean function** assigns an outcome of 0 or 1 to some subset of the Boolean vectors, leaving the rest undefined.

Once we have a partially defined Boolean function for a subset of the vector space, we can employ a procedure for classifying a set of new points, which we refer to as the testing set. The procedure for classifying points with unknown results is as follows:

1. Start with a training set for which the result is known.
2. Use the training set to generate positive and negative patterns.
3. For each new point, check whether it is covered by more positive or more negative patterns and classify it accordingly.
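The three steps above can be sketched in a few lines of Python. The training set is the four-observation example given later in this abstract; the two patterns are written out by hand for illustration (the pattern-generation step of LAD is not shown here), chosen so that x₂ = 1 occurs only in positive observations and x₂ = 0 only in negative ones.

```python
# Step 1: training set -- (attribute vector, known outcome)
training = [((1, 1, 1, 0), 1),
            ((1, 1, 1, 1), 1),
            ((1, 0, 0, 1), 0),
            ((1, 0, 1, 0), 0)]

# A pattern is a list of fixed (index, value) conditions; a point is
# "covered" by a pattern when it satisfies every condition.
def covers(pattern, point):
    return all(point[i] == v for i, v in pattern)

# Step 2: patterns derived from the training set (hand-written here):
# x2 = 1 covers only positive observations, x2 = 0 only negative ones.
positive_patterns = [[(1, 1)]]   # x2 = 1
negative_patterns = [[(1, 0)]]   # x2 = 0

# Step 3: classify a new point by the majority of covering patterns.
def classify(point):
    pos = sum(covers(p, point) for p in positive_patterns)
    neg = sum(covers(p, point) for p in negative_patterns)
    return 1 if pos > neg else 0 if neg > pos else None  # None = undecided

print(classify((1, 1, 0, 1)))  # covered only by the positive pattern -> 1
```

A real LAD system would generate many patterns per class and may weight them; the majority vote here is the simplest version of step 3.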

This procedure works reasonably well and makes decent predictions, but unfortunately some points are misclassified. My problem this summer is to investigate the nature of such mistakes.

For this purpose, we defined a notion of distance for each point that we call the **rank of exclusion**: the minimum number of variables needed to prove that a vector is not in the original training set.

Training Set:

| x₁ | x₂ | x₃ | x₄ | outcome |
|----|----|----|----|---------|
| 1  | 1  | 1  | 0  | 1       |
| 1  | 1  | 1  | 1  | 1       |
| 1  | 0  | 0  | 1  | 0       |
| 1  | 0  | 1  | 0  | 0       |

Rank₁ = x̄₁

Rank₂ = x̄₁ ∨ x₂x̄₃ ∨ x̄₃x̄₄

Rank₃ = x̄₁ ∨ x₂x̄₃ ∨ x̄₃x̄₄ ∨ x̄₂x₃x₄

Vector **0** 0 0 1 — exclusion rank 1 (excluded by x̄₁)

Vector 1 **1 0** 1 — exclusion rank 2 (excluded by x₂x̄₃)
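The rank of exclusion can be computed directly from its definition by brute force: search for the smallest set of coordinates on which the vector disagrees with every training vector. This sketch checks the two example vectors above against the training set from the table.

```python
from itertools import combinations

# Training set (attribute vectors only; outcomes are irrelevant here).
training = [(1, 1, 1, 0), (1, 1, 1, 1), (1, 0, 0, 1), (1, 0, 1, 0)]

def exclusion_rank(v, training=training):
    """Minimum number of variables proving v is not in the training set."""
    n = len(v)
    for k in range(1, n + 1):
        for coords in combinations(range(n), k):
            # If no training vector agrees with v on all chosen coordinates,
            # these k variables certify that v is outside the training set.
            if not any(all(t[i] == v[i] for i in coords) for t in training):
                return k
    return None  # v is in the training set, so it cannot be excluded

print(exclusion_rank((0, 0, 0, 1)))  # -> 1 (x1 = 0 alone excludes it)
print(exclusion_rank((1, 1, 0, 1)))  # -> 2 (x2 = 1 and x3 = 0 together)
```

This exhaustive search is exponential in the number of attributes, so it only serves to illustrate the definition on small examples.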

Vectors with a lower exclusion rank are less similar to the original training set; intuitively, less information is available about them.

Conjecture: Vectors with a higher exclusion rank have a higher probability of being classified correctly.