Because these protocols tended to be short and not terribly complicated, informal arguments were used to prove their correctness. However, when protocols run in parallel, their behavior is more difficult to analyze. Asynchronous composition is already difficult to reason about, and adding issues of who knows what and when makes reasoning about security protocols extremely difficult. One recent approach, taken by Bellare and Rogaway and by Shoup and Rubin, is to try to provide a rigorous mathematical proof of the correctness of a protocol [3,21]. They use properties of pseudo-random functions and mathematical arguments to prove that an adversary does not gain a statistical advantage when trying to discover a key in a session-key distribution protocol.

One of the earliest successful attempts at formally reasoning about
security protocols involved developing a new logic in which one could
express and deduce security properties. The earliest such logic is
commonly referred to as the BAN logic and is due to Burrows, Abadi,
and Needham [6]. Their syntax provides constructs for
expressing intuitive properties like ``A said X,'' ``A believes X,''
``K is a good key,'' and ``S is an authority on X.'' They also
provide a set of proof rules which can then be used to deduce
security properties like ``A and B believe K is a good key'' from a
list of explicit assumptions made about the protocol. This formalism
was successful in uncovering implicit assumptions that had been made
and weaknesses in a number of protocols. However, this logic has been
criticized for the ``protocol idealization'' step required when using
this formalism. Protocols in the literature are typically given as a
sequence of messages. Use of the BAN logic requires that the user
transform each message in the protocol into formulas *about* that
message, so that the inferences can be made within the logic. For
example, if the server sends a message containing the key *K*, then that step might need to be converted into a step where the server
sends a message containing the formula ``*K* is a good key for communication between *A* and *B*.''
An attempt to give this logic a rigorous semantics was made by Abadi
and Tuttle [2],
and other attempts to improve or expand the logic can be found
in [22]. The BAN logic remains popular because of
its simplicity and high level of abstraction.

Recent work in the use of modal logics for verifying security protocols includes the development of a logic that can express accountability [13]. Kailar convincingly argues that in applications such as electronic commerce, it is accountability and not belief that is important. As in the paper world, one would like people to be held accountable for their electronic transactions. This means that it is not enough for the individual participants to believe that a transaction is taking place; they must be able to prove to a third party that it is taking place. Kailar provides a syntax in which such properties can be expressed and a set of proof rules for verifying them. Like the BAN logic, Kailar's accountability logic is at a very high level of abstraction. Still, Kailar is able to use it to analyze four protocols and to find a lack of accountability in a variant of one of CMU's Internet Billing Server Protocols.

An orthogonal line of research revolves around trying to automate the
process of verification when using these logics. Craigen and Saaltink
attempt this by embedding the BAN logic in EVES [7].
The automation resulting from this experiment was not satisfactory.
By adding a forward-chaining mechanism and changing some of the
rules, they were able to construct a system that would try to develop the
entire theory of a set of axioms (find the closure of a set of
formulas under the derivation rules). Kindred and Wing went further
by proposing a theory-checker *generator* [14]. They
provide a formal and well-defined framework with assurances about
correctness and termination. In addition, their system generates
theory checkers for a variety of logics including BAN, AUTLOG, and
Kailar's accountability logic.

The third technique can be placed in the general category of model checking. The common approach here is to model the protocol by defining a set of states and a set of transitions that takes into account an intruder, the messages communicated back and forth, and the information known by each of the principals. This state space can then be traversed to check if some particular state can be reached or if some state trace can be generated. The first attempt at such a formalism is due to Dolev and Yao [8]. They develop an algorithm for determining whether or not a protocol is secure in their model. However, their model is extremely limited. They only consider secrecy issues, and they model only encryption, decryption, and adding, checking, or deleting a principal name.
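The heart of any Dolev-Yao style model is the set of words the intruder can derive from the messages it has observed. A minimal sketch of this closure computation, assuming a nested-tuple representation for terms (the `pair` and `enc` tags and the helper name are illustrative, not taken from Dolev and Yao's paper), might look like:

```python
# Sketch of a Dolev-Yao style intruder closure: starting from the terms the
# intruder has observed, repeatedly project pairs and decrypt ciphertexts
# whose keys are already known.  Terms are nested tuples tagged "pair" or
# "enc"; atoms are plain strings.  The representation is illustrative.

def intruder_closure(knowledge):
    """Close a set of terms under projection and decryption with known keys."""
    terms = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(terms):
            if not isinstance(t, tuple):
                continue
            if t[0] == "pair":                      # ("pair", x, y) -> x, y
                new = {t[1], t[2]}
            elif t[0] == "enc" and t[1] in terms:   # ("enc", key, body) -> body
                new = {t[2]}
            else:
                continue
            if not new <= terms:
                terms |= new
                changed = True
    return terms
```

For instance, an intruder who has seen a ciphertext and its key can derive everything inside it, while a ciphertext alone yields nothing new.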

Meadows used an extension of the Dolev-Yao model in her PROLOG based model checker [17]. In her system, the user models a protocol as a set of rules that describe how an intruder generates knowledge. These rules model both how the intruder can generate knowledge on its own by applying encryption and decryption, and how the intruder can generate new knowledge by receiving responses to messages it sends to the principals participating in the protocol. In addition, the user specifies rewrite rules that indicate how words are reduced. Typically, there are three rules used to capture the notion of equality and the fact that encryption and decryption are inverse functions. These rules are:

- encrypt(X, decrypt(X, Y)) → Y
- decrypt(X, encrypt(X, Y)) → Y
- id_check(X, X) → yes
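As an illustration only (not Meadows' actual PROLOG implementation), these reduction rules can be applied bottom-up to nested-tuple terms until no rule fires; the `reduce_term` helper below is a hypothetical sketch:

```python
# A hypothetical sketch of applying the three reduction rules bottom-up:
#   encrypt(X, decrypt(X, Y)) -> Y
#   decrypt(X, encrypt(X, Y)) -> Y
#   id_check(X, X)            -> yes
# Terms are nested tuples such as ("decrypt", "k", ("encrypt", "k", "m")).

def reduce_term(term):
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [reduce_term(a) for a in args]            # reduce subterms first
    if op in ("encrypt", "decrypt"):
        x, y = args
        inverse = "decrypt" if op == "encrypt" else "encrypt"
        if isinstance(y, tuple) and y[0] == inverse and y[1] == x:
            return y[2]                              # the two inverse rules
    if op == "id_check" and args[0] == args[1]:
        return "yes"                                 # id_check(X, X) -> yes
    return (op, *args)
```

So `reduce_term(("decrypt", "k", ("encrypt", "k", "m")))` reduces to `"m"`, capturing the fact that decryption undoes encryption under the same key.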

To perform the verification, the user supplies a description of an insecure state. The model checker then searches backwards in an attempt to reach an initial state. This is accomplished naturally in PROLOG by attempting to unify the current state against the right-hand side of a rule and thus deducing from the left-hand side what the description of the previous state must be. If the initial state is found, then the system is insecure; otherwise, an attempt is made to prove that the insecure state is unreachable by showing that any state that leads to this particular state is also unreachable. This kind of search often leads to an infinite trace where, in order for the intruder to learn word A, it must learn word B; in order to learn word B, it must learn word C; and so on. For this reason, a facility for formal languages is included which allows the user to prove that no word in a set of words (a language) can be generated by the intruder. The technique involves the following steps:

- Show that the word in question is in the language.
- Show that knowledge of any word in the language requires previous knowledge of another word in the language.
- Show that the initial state does not contain any word in the language.
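The backward search described above can be conveyed, in a drastically simplified finite-state form, by the following sketch. Real tools such as the NRL Analyzer unify symbolic terms rather than comparing ground states, so the rule and state names here are purely illustrative:

```python
# Toy backward-reachability search: transition rules are
# (previous_state, next_state) pairs, and we search backwards from a
# user-supplied insecure state looking for the initial state.  If the
# initial state is found, an attack path exists.

from collections import deque

def reaches_initial(rules, insecure, initial):
    seen, frontier = {insecure}, deque([insecure])
    while frontier:
        state = frontier.popleft()
        if state == initial:
            return True                       # attack path exists
        for prev, nxt in rules:
            if nxt == state and prev not in seen:
                seen.add(prev)                # step backwards through a rule
                frontier.append(prev)
    return False                              # insecure state unreachable
```

The infinite regress mentioned above corresponds, in the symbolic setting, to this backward frontier never becoming empty; the formal-language facility is what lets the user cut such searches off.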

This initial model checker was still too limited. In particular, it did not allow the modeling of freshly generated nonces or session keys. The model checker evolved into the NRL Protocol Analyzer [18], which allows for these operations. In addition, the model changed to include the states of the participants as well as the state of the intruder, while still maintaining the old paradigm of unifying against the right-hand sides of transition rules in order to generate predecessor states. However, if anything, the model has become more complex, and it still suffers from the most important weaknesses of the original system. There is no systematic way of converting a protocol description into a set of transition rules for the NRL Analyzer. The model checker also relies heavily on the user during the verification, much in the same way a theorem prover relies on the user to guide it during the search for a proof. Finally, the algorithms used in the NRL Analyzer are not guaranteed to terminate, and so a limit is placed on the number of recursive calls allowed for some of the model checking routines.

Woo and Lam propose a much more intuitive model for authentication
protocols [23]. Their model resembles sequential programming
with each participating principal being modelled independently. There
is an easy and obvious translation from the common description of a
protocol as a set of messages to their model. Their models are also
more intuitive because they consider all possible execution traces
instead of considering just the set of words obtainable by the
intruder. They are concerned with checking for what they call
*secrecy* and *correspondence properties*. The secrecy property is
expressed as a set of words (usually keys) that the intruder is not
allowed to obtain. The correspondence properties can express
statements of the form: if principal *A* finishes a protocol run with
principal *B*, then *B* must have started (participated in) the
protocol run with *A*. However, they do not provide a general logic in
which to formalize security properties, nor do they provide an
automated tool. Instead they present a set of inference rules with
which one can prove correspondence assertions about a model [24]. In
addition, the description of their model, while intuitive, is not very
precise or formal.
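A correspondence property of the kind Woo and Lam check can be sketched as a simple predicate over finite event traces; the event encoding below is an assumption made for illustration, not their formalism:

```python
# Sketch of a correspondence check on a finite trace: every
# ("finish", A, B) event, meaning A finished a run with B, must be
# preceded by a matching ("start", B, A) event, meaning B started a
# run with A.  The event tuples are an illustrative encoding.

def correspondence_holds(trace):
    started = set()
    for event, a, b in trace:
        if event == "start":
            started.add((a, b))
        elif event == "finish" and (b, a) not in started:
            return False      # a finished with b, but b never started with a
    return True
```

A trace in which *A* finishes a run that *B* never started would be the witness of an authentication failure under this property.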

Bolignano presents a model that is almost a middle point between these last two [4]. Like Meadows, Bolignano emphasizes the algebraic properties of the intruder when trying to derive words. The state of the intruder is then the set of words it can generate, while the state of the participants is determined by the values of the variables that correspond to the protocol and by their program counters. Bolignano provides a number of rules for reasoning about what information is contained in which messages; these can then be used to prove properties about a protocol. In the example given, all properties, including authentication, are expressed in terms of an invariant that must be proven. Because the invariant must be shown to hold for all protocol steps, this can become unwieldy very quickly.

Other recent work in this area has involved trying to use generic verification tools to verify security protocols. In [16], Lowe uses the FDR model checker for CSP [12] to analyze the Needham-Schroeder Public-Key Authentication Protocol [19]. Lowe succeeded in finding a previously unpublished error in the protocol. The fact that he was able to use a generic model checker is promising as well. Unfortunately, the CSP model of the protocol is far from straightforward. In addition, the model is parameterized by the nonces used by the participants, which means that it only models a single run of the protocol. In order to prove the general protocol correct, he must prove a theorem stating that the general protocol is insecure only if this restricted version is insecure.

Leduc and others recently used the LOTOS language [5] and the Eucalyptus tool-box [9] to analyze the Equicrypt protocol [15]. What makes this an interesting case study is the fact that the Equicrypt protocol is a real system currently under design for use in controlling access to multimedia services broadcast on a public channel. They were able to find a couple of security flaws in this proposed system using these generic tools.

Gray and McLean propose encoding the entire protocol in terms of temporal logic [10]. Much like symbolic model checking, they describe the model by giving formulas that express the possible relationships between variable values in the current state and variable values in the next state. This makes their framework more formal than the others, but much more cumbersome as well. They provide a simple example and prove a global invariant for it. The few subcases they consider are very straightforward, but the technique demands very long proofs even for this extremely simple example. They argue that their technique could be automated but provide no tool for their system.

Abadi and Gordon propose the spi calculus, an extension of the pi calculus with cryptographic primitives, as another model for describing and analyzing cryptographic protocols [1]. The spi calculus models communicating processes in a way that is very similar to CSP and CCS. It provides constructs for output on a channel, input on a channel, restriction, composition, testing for equality, pairing and projection, encryption and decryption, and branching on a test for equality to zero. What sets the spi calculus (and the pi calculus) apart from other calculi is the dynamic nature of the scope of restriction. The restriction operator can be thought of as creating a new name to which only processes within its scope can refer. However, one of these processes could output this new name outside the scope of the restriction operator, allowing another process to refer to it. In the pi calculus, these new names can be thought of as private channels. In the spi calculus, the restriction operator is used to model nonces and keys. So far, protocol models have been verified by comparing them to a slightly altered model that is ``obviously'' correct and is, therefore, at the same level of abstraction as the protocol model.

A more concrete and complete model is presented by Heintze and
Tygar [11]. They view protocols as a set of agents
modeled as non-deterministic finite state machines. The actions of a
principal who must follow the protocol depend on the local state of
that principal and so are in some sense restricted. The actions of
adversaries are not restricted by the protocol and hence they are
allowed to perform any actions consistent with their current
knowledge. (In other words, they cannot send messages that they
cannot generate from their current knowledge). Their model also
includes a notion of belief which, along with the sequence of sends
and receives, defines the local state of a principal. Security is
then split into secret-security and time-security. A model is
secret-secure if all beliefs are universally valid. In particular, if
any principal ever believes that a message *M* is only shared among
the principals in *S*, then it is always the case that if a principal
*A* knows *M*, then *A* is a member of *S*. A model is time-secure if
all beliefs eventually expire. In other words, if *b* is a belief held
by a principal *A* at event *e*, then there is an event *e*' such that
*b* is not held at any event following *e*'. The authors go on to
prove that the questions ``Is P secret-secure?'' and ``Is P
time-secure?'' are undecidable.
While this model does a good job of capturing what one means by
``security,'' the model seems too complex to be used in practice.
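To make the secret-security definition concrete, the following toy check tests a single global snapshot against a set of beliefs. The flat representation is an assumption made for illustration; the actual (undecidable) question quantifies over all reachable states of the model:

```python
# Toy check of secret-security on one global snapshot: every belief
# "message M is shared only among the principals in S" must be consistent
# with who actually knows M in that snapshot.
#   beliefs: list of (message, allowed_set) pairs
#   knows:   dict mapping each principal to the set of messages it knows

def secret_secure(beliefs, knows):
    for message, allowed in beliefs:
        for principal, known in knows.items():
            if message in known and principal not in allowed:
                return False    # a principal outside S knows M
    return True
```

If an eavesdropper *E* ever obtains a key believed to be shared only between *A* and *B*, this check fails, matching the intuition behind the definition above.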