Trust Management Under Law-Governed Interaction


Naftaly H. Minsky and Victoria Ungureanu
Affiliation: Rutgers University
Abstract: Modern distributed systems tend to be conglomerates of heterogeneous subsystems that have been designed separately, by different people, with little, if any, knowledge of each other --- and which may be subject to different security policies. A single software agent operating within such a system may find itself interacting with, or even belonging to, several subsystems, and thus be subject to several disparate security policies. For example, an agent may be classified at a certain military-like security level, which affects the kinds of documents it can obtain; it may carry certain ``capabilities'' meant to provide it with certain access rights; and, while accessing certain financial information, it may be subject to the ``Chinese Wall'' security policy, under which one's access rights depend on one's access history. If every such policy is expressed by means of a different formalism and enforced with a different mechanism, the situation can easily get out of hand.
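The history-dependent character of the Chinese Wall policy mentioned above can be illustrated by a minimal sketch (all class and company names here are hypothetical, and this is not the paper's formalism): an agent may access a company's data only if it has not previously accessed data of a competing company, i.e., one in the same conflict-of-interest class.

```python
# Toy sketch of the Chinese Wall policy: access to a company's data is
# denied once the agent has accessed data of a competitor (a company in
# the same conflict-of-interest class). Names are illustrative only.

class ChineseWall:
    def __init__(self, conflict_classes):
        # conflict_classes: list of sets of mutually competing companies
        self.conflict_classes = conflict_classes
        self.history = {}  # agent -> set of companies already accessed

    def may_access(self, agent, company):
        accessed = self.history.get(agent, set())
        for cls in self.conflict_classes:
            if company in cls and accessed & (cls - {company}):
                return False  # a competitor was already accessed
        return True

    def access(self, agent, company):
        if not self.may_access(agent, company):
            return False
        self.history.setdefault(agent, set()).add(company)
        return True

wall = ChineseWall([{"BankA", "BankB"}, {"OilX", "OilY"}])
print(wall.access("alice", "BankA"))  # True
print(wall.access("alice", "OilX"))   # True: different conflict class
print(wall.access("alice", "BankB"))  # False: competes with BankA
```

Note that the ruling for a given request depends on the agent's entire access history, which is exactly why such policies cannot be expressed with static access-control lists alone.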

We propose to deal with this problem by means of a recently developed security mechanism for distributed systems called Law-Governed Interaction (LGI). LGI can associate a distinct mode of interaction with any given group of distributed agents, subjecting all such interactions to an explicitly specified ``law'' that defines the security policy governing this mode of interaction. An agent operating under a given law L can be trusted implicitly to satisfy the policy defined by this law, without having to validate each operation with some trusted server. This makes LGI scalable to a significant extent, and it contributes to the fault tolerance of the mechanism.
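The local, serverless enforcement described above can be sketched as follows (a simplification, not LGI's actual formalism, which expresses laws in a Prolog-like language): each agent's interactions are mediated by a controller that applies the law --- here modeled as a function from an event and the agent's control state to a ruling --- to every message, so no central trusted server is consulted.

```python
# Minimal sketch of law enforcement by a per-agent controller.
# `budget_law` is an illustrative law, not one from the paper: it lets
# an agent send at most `quota` messages, tracking usage in the agent's
# control state. Compliance is decided locally, per message.

def budget_law(event, state):
    """Ruling for one event under this law: 'forward' or 'block'."""
    if event["type"] == "sent":
        if state["sent"] < state["quota"]:
            state["sent"] += 1
            return "forward"
        return "block"
    return "forward"

class Controller:
    """Mediates all interactions of one agent under a given law."""
    def __init__(self, law, initial_state):
        self.law = law
        self.state = dict(initial_state)  # the agent's control state

    def send(self, msg):
        ruling = self.law({"type": "sent", "msg": msg}, self.state)
        return ruling == "forward"

ctl = Controller(budget_law, {"sent": 0, "quota": 2})
print(ctl.send("hello"))  # True
print(ctl.send("again"))  # True
print(ctl.send("more"))   # False: quota exhausted, message blocked
```

Because every message an agent emits passes through its controller, a peer operating under the same law L can trust the agent's compliance without any per-operation check against a central server.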

LGI can thus support a wide range of security models and policies, including: conventional discretionary models that use capabilities and access-control lists, mandatory lattice-based access control models, and the more sophisticated models and policies required for commercial applications. Moreover, under LGI, a single agent may be involved in several different modes of interaction, and thus be subject to several disparate security policies. All such policies would be defined by laws expressed in a single formalism, and be enforced in a unified manner.

Another advantage of the proposed security mechanism is that it completely hides all aspects of key management from its users. Under LGI, the trust between interacting agents is the result of constraints imposed on the exchange of messages between them.

For more information, see http://athos.rutgers.edu/~minsky.