DIMACS Workshop on Applications of Lattices and Ordered Sets to Computer Science

July 8 - 10, 2003
DIMACS Center, CoRE Building, Rutgers University

Jonathan Farley, Massachusetts Institute of Technology
Mel Janowitz, DIMACS / Rutgers University, melj@dimacs.rutgers.edu
Jimmie Lawson, Louisiana State University, lawson@math.lsu.edu
Michael Mislove, Tulane University, mwm@math.tulane.edu
Presented under the auspices of the Special Focus on Mathematics and the Foundations of Computer & Information Science.


Samson Abramsky, Oxford University

Title: Domains and Interaction

We survey the passage from traditional forms of denotational semantics based on domain theory, which view program meanings as functions, to process models in which the fundamental notion is interaction. We show how domain theory can be used to capture ideas from Geometry of Interaction and game semantics, leading among other things to a simple construction of a fully abstract model for PCF. We then show how these ideas can be developed to provide a basis for a high-level approach to quantum computation.

Mick Adams, SUNY, New Paltz

Title: Two Notions of Universality Considered for Bounded Lattices and Kleene Algebras

For an equationally defined class ${\bf V}$ of algebras, the notions of {\it categorical universality} (namely, that the category of all directed graphs together with all their compatible mappings be isomorphic to a full subcategory of ${\bf V}$) and $Q$-{\it universality} (namely, that the lattice of all subquasivarieties of any quasivariety of algebraic structures of finite type be a divisor lattice of the lattice of all subquasivarieties of ${\bf V}$) give a sense of the structural richness of the class. For example, it is a consequence of the categorical universality of ${\bf V}$ that every monoid may be represented as the endomorphism monoid of each member of a proper class of non-isomorphic algebras each of which belongs to ${\bf V}$. Furthermore, it is a consequence of the $Q$-universality of ${\bf V}$ that the lattice of subquasivarieties of ${\bf V}$ is uncountable and fails every non-trivial lattice identity.\\

Recently a general connection has been established between these two seemingly independent notions. We will give an overview of that relationship and illustrate it with bounded lattices and Kleene algebras. The techniques employed are of a combinatorial nature.

Roland Backhouse, University of Nottingham

Title: The Theory of Fixed Points and Galois Connections Applied to Language-Processing Problems

In 1971, J.H. Conway introduced the notion of a "factor" of a regular language. He established that factors can be organised in a matrix which is reflexive and transitive, and he showed how factors can be used to establish several non-trivial regularity-preserving properties of functions on languages.

Subsequently, Backhouse and Lutz (1977) observed a curious relationship between Conway's factor matrix and the Knuth-Morris-Pratt string-matching algorithm; specifically the KMP algorithm can be reformulated as computing the "factor graph" (the starth root of the factor matrix) of a certain language derived from the pattern.

However, exactly why factors should play a role in string matching remained something of a mystery.

This talk resolves the mystery. We review a number of well-known algorithms, showing how they involve a combination of fixed points and Galois connections. We use this understanding to summarise a methodology for developing algorithms for more difficult language-processing problems. The talk is concluded by showing how Conway's factor theory is used to derive an algorithm for the language-inclusion problem, an instance of which is the Knuth-Morris-Pratt string matching algorithm.
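The Knuth-Morris-Pratt algorithm mentioned above can be sketched as follows. This is the standard textbook rendering via the failure function (the "factor graph" viewpoint of the talk is a reformulation of this table), not the factor-theoretic derivation itself:

```python
def failure_table(pattern):
    """Longest proper border of each prefix (the classic KMP table)."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]          # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Indices where pattern occurs in text, scanning text once."""
    fail, k, hits = failure_table(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]          # keep matching for overlapping hits
    return hits

print(failure_table("ababc"))           # -> [0, 0, 1, 2, 0]
print(kmp_search("ababcabab", "abab"))  # -> [0, 5]
```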

Glenn Bruns, Bell Labs

Title: Lattices in Multi-Valued Model Checking

A model checker is a tool that checks whether a finite-state system model satisfies a property written as a temporal logic formula. Model checkers are typically used to check correctness properties of hardware and software systems. Model checking can be adapted to new kinds of applications by the use of multi-valued temporal logic, in which the truth values are elements of a lattice. For example, three-valued model checkers are well suited to check correctness properties of partially-defined system models, in which information about system states is incomplete.

Another application is temporal logic query checking, in which the problem is to determine the propositions that can be put in a "hole" in a temporal logic formula such that the formula will hold of a system. In this talk I will describe multi-valued temporal logic, discuss some applications, and show how lattice theory plays a part in the design of multi-valued model checkers.
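As a minimal illustration of truth values forming a lattice, here is a sketch of Kleene's three-valued logic, where the values form the chain FALSE <= UNKNOWN <= TRUE. Real multi-valued model checkers work over richer lattices; this only shows how the connectives become lattice operations:

```python
# Kleene's three-valued logic as a lattice: the chain FALSE <= UNKNOWN <= TRUE.
FALSE, UNKNOWN, TRUE = 0, 1, 2

def meet(a, b):      # conjunction = lattice meet on the chain
    return min(a, b)

def join(a, b):      # disjunction = lattice join on the chain
    return max(a, b)

def neg(a):          # negation swaps TRUE and FALSE, fixes UNKNOWN
    return 2 - a

# In a partially-defined model a proposition may be UNKNOWN in some state:
state = {"p": TRUE, "q": UNKNOWN}
print(meet(state["p"], state["q"]))   # -> 1: "p and q" is undetermined
print(join(state["p"], state["q"]))   # -> 2: "p or q" holds regardless
```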

Bob Coecke, Oxford

Title: Probability as Order

I will establish the following equation [1]:

Quantitative Probability = Logic + Partiality of Knowledge + Entropy

A finitary probability space (=all probability measures on {1, ... ,n}) can be fully and faithfully represented by the pair consisting of an abstract partially ordered set and Shannon entropy. The abstract partially ordered set itself can be obtained via a systematic purely order-theoretic procedure (which embodies introduction of partiality of knowledge) on an (algebraic) logic.

This procedure actually applies to any poset. When taking the n-element powerset we obtain the above poset. When taking a finitary quantum logic (=the lattice of subspaces of a Hilbert space) we obtain the domain of quantum states introduced by Keye Martin and myself [2,3]. The concrete partial order on classical states used to establish the above was also introduced in [2,3]. It is a bounded complete lattice.

We will put all this in the perspective of order-theoretic research in the context of quantum physics [4,5].

Ernie Cohen, Microsoft

Title: Some Open Problems in Kleene and Omega Algebras

Omega algebra - an extension of Kleene algebra to omega-regular expressions - has proved to be a useful tool, particularly for reasoning about concurrency control. We describe the algebra and some open problems related to its decision problems and model theory.
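The Kleene-algebra part (though not the omega extension) can be illustrated concretely: binary relations on a finite set form a Kleene algebra with union as +, composition as the product, and reflexive-transitive closure as *. This sketch checks the star-unfold law in that model:

```python
# Binary relations on a finite set as a concrete Kleene algebra:
# + is union, the product is composition, * is reflexive-transitive closure.
def compose(r, s):
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def star(r, universe):
    one = {(x, x) for x in universe}      # the multiplicative identity
    result = one
    while True:                           # iterate to the least fixed point
        nxt = result | compose(result, r)
        if nxt == result:
            return result
        result = nxt

U = {0, 1, 2}
a = {(0, 1), (1, 2)}
one = {(x, x) for x in U}
a_star = star(a, U)
# The star-unfold law a* = 1 + a . a* holds in any Kleene algebra:
assert a_star == one | compose(a, a_star)
print(sorted(a_star))  # -> [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```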

Federico Crazzolara, NEC Europe, C&C Research Laboratories

Title: Event-based methods for security protocols.

The events of a security protocol and their causal dependency can play an important role in the analysis of security properties. Using a certain kind of Petri nets, a compositional event-based semantics is given to an economical, but expressive language for describing security protocols, so the events and dependency of a wide range of protocols are determined once and for all. The net semantics relates to strand spaces and inductive rules, as well as trace languages and event structures, hence unifying a range of approaches.

Melvin Fitting, CUNY

Title: Bilattices

A bilattice is a structure consisting of two lattices with interconnecting conditions. Varying these conditions gives rise to different families of bilattices. Bilattices were introduced by Matt Ginsberg in the late 1980's, with intended applications to artificial intelligence. I came to understand that they provided remarkably good machinery for the semantics of logic programming languages. Indeed, stable (or answer set) semantics, in a bilattice setting, proved to be quite elegant once all the detail had been abstracted away. In a similar way, they were applicable to philosophical theories of truth in the Kripke style. And Arnon Avron and others have shown interesting relationships with formal many-valued logics.

The bloom has gone from logic programming, but answer set semantics still lives. It has supplied a uniform method for the solution of quite a number of graph-related problems, for instance. What I will do in my talk is sketch the basic theory of bilattices, including the fixpoint theorems that apply to answer set semantics. I believe others will discuss answer set applications. But, applications aside, bilattices have a coherent elegance that makes them worth a look for their own sakes.
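A minimal sketch of the best-known example, Belnap's four-valued bilattice FOUR, encodes each value as a pair (evidence for, evidence against). The knowledge order is componentwise, while the truth order raises for-evidence and lowers against-evidence; the two lattice structures on the same carrier are exactly the "interconnecting" feature:

```python
# Belnap's bilattice FOUR, values as (evidence_for, evidence_against) pairs.
BOT = (0, 0)   # no information
F   = (0, 1)   # evidence against only
T   = (1, 0)   # evidence for only
TOP = (1, 1)   # conflicting evidence

def k_join(a, b):   # knowledge join: pool evidence from both sources
    return (max(a[0], b[0]), max(a[1], b[1]))

def t_join(a, b):   # truth join (disjunction): for up, against down
    return (max(a[0], b[0]), min(a[1], b[1]))

def t_meet(a, b):   # truth meet (conjunction): for down, against up
    return (min(a[0], b[0]), max(a[1], b[1]))

assert k_join(T, F) == TOP      # contradictory sources combine to "both"
assert t_join(T, F) == T        # True or False is True
assert t_meet(T, BOT) == BOT    # True and Unknown stays unknown
```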

Deborah S. Franzblau, CUNY/College of Staten Island

Title: Implementing Operations on Set Covers via Lattice Algebra

It is sometimes possible to reduce the cost of storing or processing a set by representing the set with a set cover, a union of special, simpler subsets. For example, a pixel image, a subset of unit squares on a grid, can be represented more compactly as a union of rectangles.

In applications in which such a set must be maintained dynamically during additions and deletions, the following problem arises.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *


Given set covers X and Y for the respective sets S and T, that is, collections of subsets whose unions are S and T, define and construct the following efficiently:

(1) (sum) X + Y, a set cover for S \union T;

(2) (intersection) X \intersect Y, a set cover for S \intersect T;

(3) (difference) X - Y, a set cover for S/T (the elements of S not in T).

In this problem, one is given a universal set U, containing S and T, and a family C of those subsets of U allowed in a covering. Covers X and Y are collections of sets chosen from C. For example, U could be the set of unit squares on a grid and C the collection of grid rectangles.
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

There are two key issues. The first is ensuring that each operation is well-defined; that is, each input pair leads to exactly one output. For example, there could be many different set covers for S/T. The second is ensuring that the computation time and the size of the output are reasonable relative to the sizes of the underlying sets.

In this talk, I show that one can address both of these issues by using the structure of a distributive lattice defined on the collection of all set covers. The addition and intersection of set covers correspond to the join and meet of the lattice, and the difference corresponds to a pseudocomplement on the lattice. Moreover, the algebraic properties of these lattice operations lead to relatively simple and efficient algorithms for set cover operations. If time allows, I will also discuss implementation for covers of pixel images with rectangles on a grid, and for covers of vertex sets of a hypercube with faces of the hypercube (which has applications to logic circuit design).
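A naive sketch of the sum and intersection operations for the rectangle example follows. The half-open rectangle encoding is an assumption for illustration, and the difference operation, which needs the pseudocomplement, is omitted:

```python
# Covers as sets of axis-aligned grid rectangles (x1, y1, x2, y2),
# half-open, so a rectangle is nonempty iff x1 < x2 and y1 < y2.
def cover_sum(X, Y):
    """X + Y covers S union T: just pool the rectangles."""
    return X | Y

def rect_intersect(r, s):
    x1, y1 = max(r[0], s[0]), max(r[1], s[1])
    x2, y2 = min(r[2], s[2]), min(r[3], s[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def cover_intersection(X, Y):
    """X meet Y covers S intersect T: the family of rectangles is
    closed under pairwise intersection, so intersect every pair."""
    out = {rect_intersect(r, s) for r in X for s in Y}
    out.discard(None)
    return out

X = {(0, 0, 4, 2), (0, 0, 2, 4)}    # an L-shaped region
Y = {(1, 1, 3, 3)}
print(sorted(cover_intersection(X, Y)))  # -> [(1, 1, 2, 3), (1, 1, 3, 2)]
```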

Patrice Godefroid, Bell Laboratories, Lucent Technologies

Title: Partial-Order Methods for Model Checking

Verification by state-space exploration, often referred to as "model checking", is an effective approach for analyzing the correctness of concurrent reactive systems. The main limitation of this approach is the often excessive size of the state space. Partial-order methods are algorithms for dynamically pruning the state spaces of concurrent systems without incurring the risk of any incompleteness in the verification results. These methods are used to limit state explosion in popular model-checking tools such as the protocol verification system Spin and the software model checker VeriSoft. In this talk, we will present an overview of partial-order methods, and discuss the contributions that these algorithms have made to the practice of software model checking.

Joshua Guttman, MITRE

Title: A Decidable Class of Security Protocols

When the class of security protocols is broadly defined, security properties such as secrecy and authentication are undecidable (Durgin, Lincoln, Mitchell, Scedrov, 1999). Recently, two separate papers (Ramanujam, Suresh, 2003 and Blanchet, Podelski, 2003) have proposed very similar subclasses that include almost all realistic protocols, but have decidable security problems.

The purpose of this talk is to explain their results within the strand space framework, and to give a novel proof via the protocol independence theorem (Guttman, Thayer 2000). That theorem gives a general condition under which protocols that meet some security goal separately are guaranteed still to meet it if run together on the same network.

Michael Huth, Imperial
Shekhar Pradhan

Title: Consistent Partial Model Checking

We propose assertion-consistency (AC) semi-lattices as suitable orders for the analysis of partial models. Such orders express semantic entailment, multiple-valued analysis, maintain internal consistency of reasoning, and subsume finite De Morgan lattices. We classify those orders that are finite and distributive and apply them to design an efficient algorithm for multiple-viewpoint checking, where checks are delegated to single-viewpoint models --- efficiently driven by the order structure. Instrumentations of this algorithm enable the detection and location of inconsistencies across viewpoint boundaries.

To validate the approach, we investigate multiple-valued models and their compositional property semantics over a finite, distributive AC lattice. We prove that this semantics is computed by our algorithm above whenever the primes of the AC lattice constitute `projected' single-viewpoints and the order between primes results in refinements of single-viewpoint models. As a case study, we discuss a multiple-valued notion of state-machines with first-order logic plus transitive closure.

George Markowsky, University of Maine

Title: Posets, Lattices and Computer Science

An overview of the different types of posets that are useful in computer science. Examples from the theory of programming and data mining will be discussed.

Keye Martin, Oxford University

Title: Physics and domain theory

Domain theory arises in physics and in a grey area on the border of physics and computer science that no one really seems to know what to call. Here are some recent examples: (1) The density operator formulation of quantum mechanics is a picture perfect example of Scott's original idea of `partial' and `total' elements; (2) Entropy is the least fixed point of the copying operator above complexity; (3) Epistemic motion exists and domain theoretic kinematics is what one uses to determine the complexity of Grover's algorithm for quantum searching (for example).

These are three examples, all very different from one another, and there is much to be said about each. But there are other instances of domain theory being used in physics too. There are even physicists using domain theory currently who *do not realize* they are using domain theory. Some details of what we have mentioned above can be found on the website (http://web.comlab.ox.ac.uk/oucl/work/keye.martin). This talk will make use of some of these ideas and maybe some others. The main point: The usual notions from domain theory all find fun applications in physics, and physics suggests some new ideas for domain theory.

Catherine Meadows, Naval Research Laboratory

Title: Applications of Lattices to Computer Security

We discuss applications of lattices to computer security, in particular to multilevel security and survivability. We discuss a model we developed for the aggregation problem, which occurs when data classified at one level when considered individually are classified at a higher level when considered in the aggregate. We then show how this work was applied by Millen to an analog of the aggregation problem that arises in the modeling of survivable systems.


Bernhard Moeller, University of Augsburg

Title: Modal Kleene Algebra

Besides its applications within formal language theory, Kleene algebra supports an abstract treatment of programs or state transition systems. Kozen's Kleene algebra with tests (KAT) extends this classical framework by an abstract representation of sets of states. Recently, the present author and colleagues have enriched KAT with domain/codomain operations that determine the set of initial/final states of a transition system.

In these Kleene algebras with domain (KAD) one can define image and inverse image and hence the backward and forward diamond and box operators of dynamic logic. Generalizing to arbitrary predicate transformers, we can even obtain a Kleene star for monotonic predicate transformers. This yields an LKAD ("left KAD", without right-distributivity). The diamond then becomes an LKAD homomorphism. The use of this predicate transformer algebra is exemplified in the derivation of some general formulations of greedy-like algorithms.
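In the concrete model of binary relations, the domain operation and a diamond operator are easy to sketch. Conventions for forward versus backward diamond vary in the literature; here the diamond is the one-step preimage, and iterating it illustrates the dynamic-logic reading:

```python
# Binary relations over a finite state set as a Kleene algebra with domain.
def domain(r):
    """d(r): the set of states from which r can make a step."""
    return {x for (x, _) in r}

def diamond(r, p):
    """<r> p: states with some r-successor satisfying p (preimage of p)."""
    return {x for (x, y) in r if y in p}

r = {(0, 1), (1, 2), (2, 2)}
p = {2}
print(domain(r))             # states with an outgoing step
print(diamond(r, p))         # states one step away from p
print(diamond(r, diamond(r, p)))  # states two steps away from p
```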

Frank Oles, IBM

Title: Precedence-Inclusion Patterns and Relational Learning

I will introduce precedence-inclusion patterns, which are sets with a strictly partially ordered set of strict partial orders, along with some additional structure. The definition of these structures reflects how multiple partial orders interact in a number of situations such as in text, images, and video. In particular, precedence-inclusion patterns generalize constituent structure trees familiar to computational linguists. Our interest in these objects was initially sparked by their connection with problems of relational learning. I will outline the basic properties of precedence-inclusion patterns, and I will present a basic result for finite patterns: each finite set of finite precedence-inclusion relations has a minimal most specific generalization that is unique up to isomorphism, and I will explain how this result relates to relational learning.

Dusko Pavlovic, Kestrel Institute

Title: Minimal bicompletions

The minimal bicompletion of a category A is a complete and cocomplete category CA, with the embedding A --> CA, which preserves all limits and colimits that may exist in A, and moreover both generates and cogenerates CA. When A is a poset, then CA is the familiar Dedekind-MacNeille completion.
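For a finite poset the Dedekind-MacNeille completion can be computed by brute force: collect the cuts (A, B) with upper(A) = B and lower(B) = A. This sketch is exponential in the size of the poset, so it is only meant to make the construction concrete:

```python
from itertools import chain, combinations

def upper(A, P, leq):
    return {u for u in P if all(leq(a, u) for a in A)}

def lower(A, P, leq):
    return {l for l in P if all(leq(l, a) for a in A)}

def dedekind_macneille(P, leq):
    """All cuts (A, B) with upper(A) = B and lower(B) = A."""
    cuts = set()
    subsets = chain.from_iterable(combinations(sorted(P), k)
                                  for k in range(len(P) + 1))
    for bits in subsets:
        B = upper(set(bits), P, leq)
        A = lower(B, P, leq)        # close the subset to a cut
        cuts.add((frozenset(A), frozenset(B)))
    return cuts

# The 2-element antichain {a, b} completes to a 4-element lattice
# (bottom, a, b, top), just as the Dedekind-MacNeille construction predicts:
P = {"a", "b"}
leq = lambda x, y: x == y           # reflexive order of an antichain
print(len(dedekind_macneille(P, leq)))   # -> 4
```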

The task of generalizing this construction to categories was put forward in Lambek's 1966 book on completions (Springer Lecture Notes in Mathematics Vol 24!). In 1974, Isbell showed that this task has no solution, at least for the usual algebraic notion of (co)generating. I shall describe a solution for a weaker notion of (co)generating, arguably more appropriate for categories than the algebraic one.

The minimal bicompletion turns out to be related to some categorical constructions used in semantics of computation, in particular the Chu construction, and the double gluing. Its possible role in the semantical toolkit will be briefly discussed.

Alex Pogel, New Mexico State University
Tim Hannan and Lance Miller

Title: The visualization of weighted lattices for data representation

Formal Concept Analysis (FCA) has been applied in research projects in a variety of areas outside mathematics, usually as an exploratory data analysis tool. A crucial step of these applications is the visualization of a data set via the presentation of a lattice diagram. The lattices involved are typically small, and lattice drawing methods use only the order structure of the lattice (as is true of drawing methods appearing in general order theory). We are using this traditional lattice construction (the closure system arising from a binary relation) to view time series data, but our applications involve the visualization of larger lattices and require lattice drawing methods that use both the weights and the order in the related lattices. We will discuss this problem in the context of the visual representation of data, and present some of our attempts at creating successful visualization methods based on weight and order.
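The closure system behind FCA can be sketched directly: the formal concepts of a binary relation between objects and attributes are the pairs (extent, intent) closed under the two derivation maps. This brute-force version over object subsets is only feasible for tiny contexts:

```python
from itertools import chain, combinations

def concepts(objects, attributes, incidence):
    """All formal concepts (extent, intent) of the context (G, M, I)."""
    def intent(A):
        return frozenset(m for m in attributes
                         if all((g, m) in incidence for g in A))
    def extent(B):
        return frozenset(g for g in objects
                         if all((g, m) in incidence for m in B))
    result = set()
    subsets = chain.from_iterable(combinations(sorted(objects), k)
                                  for k in range(len(objects) + 1))
    for A in subsets:
        B = intent(set(A))              # derive, then close back
        result.add((extent(B), B))
    return result

G = {"g1", "g2"}
M = {"m1", "m2"}
I = {("g1", "m1"), ("g2", "m1"), ("g2", "m2")}
C = concepts(G, M, I)
for ext, itn in C:
    print(sorted(ext), "|", sorted(itn))
```

Ordered by inclusion of extents, these concepts form the concept lattice that FCA visualizes.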

Ales Pultr, Charles University
B. Banaschewski

Title: A General View of Approximation

As an example consider the set of decadic rationals $\alpha = b_k\cdots b_0 . a_1\cdots a_n$ ordered by
$$\alpha \preceq \alpha' = b'_l\cdots b'_0 . a'_1\cdots a'_m \quad\text{iff}\quad (\alpha - 10^{-n}, \alpha + 10^{-n}) \subseteq (\alpha' - 10^{-m}, \alpha' + 10^{-m})$$
(``$\alpha$ is a better approximation of a real number than $\alpha'$''), stratified by $R_n=\setof{b_k\cdots b_0 . a_1\cdots a_n}{n\ \text{fixed}}$, and represent (uniformly continuous) real functions $f:\RR\to\RR$ by systems $(f_{nm}:R_n\to R_m)_{m,n}$, where the approximations $f_{nm}$ can be viewed as multivalued maps on infinite expansions, with the digits beyond the indicated precisions left undetermined. This leads to a definition of an {\em approximation system} as a collection of data $(X,\preceq,\A)$, where $(X,\preceq)$ is a poset and $\A$ a suitably defined system of subsets $A\subseteq X$ (the strata of precision), and of {\em approximation maps} as suitably defined multimaps that are ``arbitrarily close to maps in any chosen precision''. In more detail, an approximation map $f:(X,\preceq,\A)\to(Y,\preceq,\B)$ is a relation $f\subseteq X\times Y$ such that: if $xfy$, $x\preceq x'$ and $y\preceq y'$, then $x'fy'$; if there exists a $z$ with $z\preceq x_1,\dots,x_n$ and $x_ify_i$, then there is a $u$ with $u\preceq y_1,\dots,y_n$ (``if the $x_i$ can all approximate one thing then so can their images''); and, finally, for each $B\in\B$, $f^{-1}[B]\in\A$.

The resulting category {\bf ApprSys} can be shown to be equivalent to the category of uniform locales, with a fairly transparent interpretation of the localic maps. Another category to which {\bf ApprSys} is equivalent is that of Scott information systems stratified by precision of the information contained in the predicates, with approximable maps respecting the precisions.

Stefan Schmidt, New Mexico State University

Title: Secret sharing schemes realizing access hierarchies

Georg Struth, University of Augsburg

Title: A Calculus for Set-Based Program Development

We propose an algebraic calculus for set-based program development. First, we reconstruct a fragment of set theory via atomic distributive lattices (ADL). Semantically, ADL extends boolean reasoning about sets by element-wise reasoning; it avoids presupposing a universal set. Operationally, ADL yields abstract, concise, elegant proofs from few elementary principles. Second, we develop a focused automated proof-search procedure for ADL with simple deduction and effective reduction and simplification rules. Proof-search is guided by rewriting techniques. The procedure decides several subclasses. Its main application is the proof-support for formal methods like B or Z.

Miroslaw Truszczynski and Victor W. Marek, University of Kentucky
Marc Denecker, K.U. Leuven

Title: Ultimate approximations of lattice operators and their applications in knowledge representation

We study fixpoints of operators on lattices and bilattices in a systematic and principled way. The key concept is that of an approximating operator, a monotone operator on the product bilattice, which gives approximate information on the original operator in an intuitive and well-defined way. With any given approximating operator our theory associates several different types of fixpoints, including the Kripke-Kleene fixpoint, stable fixpoints and the well-founded fixpoint, and relates them to fixpoints of operators being approximated. Compared to our earlier work on approximation theory, this approach provides a more intuitive and better motivated construction of the well-founded and stable fixpoints. In addition, we study the space of approximating operators by means of a precision ordering and show that each lattice operator O has a unique most precise - we call it ultimate - approximation. We demonstrate that fixpoints of this ultimate approximation provide useful insights into fixpoints of the operator O. We then discuss applications of these results in logic programming.
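A hedged sketch of the simplest of these fixpoints: for a propositional logic program, the Kripke-Kleene fixpoint arises by iterating a four-valued step operator on pairs (T, P) that under- and over-approximate the set of true atoms. The rule encoding below is an illustrative assumption, not the paper's formalism:

```python
# Rules are triples (head, positive body atoms, negated body atoms).
def approx_step(rules, T, P):
    """One step of the approximating operator on the pair (T, P)."""
    newT = {h for (h, pos, neg) in rules
            if pos <= T and not (neg & P)}   # body certainly true
    newP = {h for (h, pos, neg) in rules
            if pos <= P and not (neg & T)}   # body possibly true
    return newT, newP

def kripke_kleene(rules, atoms):
    """Iterate from the least precise pair until a fixpoint is reached."""
    T, P = set(), set(atoms)                 # everything starts unknown
    while True:
        newT, newP = approx_step(rules, T, P)
        if (newT, newP) == (T, P):
            return T, P
        T, P = newT, newP

rules = [("p", set(), {"q"}),                # p <- not q
         ("q", set(), {"p"})]                # q <- not p
KK = kripke_kleene(rules, {"p", "q"})
print(KK)   # both atoms remain unknown: true in neither T nor outside P
```

For this mutually negative program the Kripke-Kleene semantics leaves both atoms undefined, whereas the stable fixpoints would commit to one of the two atoms.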

James Worrell, Tulane University
(joint work with Joel Ouaknine, CMU)

Title: Universality and Language Inclusion for Open and Closed Timed Automata

We analyze a recursively defined domain of probabilistic processes, first studied by Desharnais, Gupta, Jagadeesan and Panangaden. The domain models simulation for a class of reactive probabilistic transition systems, called labelled Markov processes. Our main results are as follows:

Guo-Qiang Zhang, Case Western Reserve University

Title: Sequents, Lattices, and Logic Programming

(Tentative) This talk gives an analysis of freely generated frames from entailment relations. We obtain completeness results under the unifying principle of the spatiality of coherence logic. In particular, the domain of disjunctive states, derived from the hyperresolution rule as used in disjunctive logic programs, can be seen as the frame freely generated from the opposite of a sequent structure. This can be seen as a robust way to assign meanings to disjunctive logic programs, in the realm of ``partial models''.

Document last modified on May 29, 2003.