Using a Multimodal Logic to Express Conflicting Interests in Security Protocols

Antti Huima (antti.huima@hut.fi) -
Tuomas Aura (tuomas.aura@hut.fi)
Helsinki University of Technology, Finland(note)

Abstract:

Many cryptographic protocols are designed to work in situations where the participants of the protocols have different, or even conflicting, interests. These conflicts must be taken into account when the protocols are developed in order to avoid pitfalls. More generally, it must be ensured that the rights of the different parties can be protected. Usually, the rights and their protection are included in the underlying goals of a protocol. However, no formal method for examining them exists. In this paper we propose a formal method that can be used to analyse situations involving conflicting interests and rights to be protected. The method is based on the standard multimodal logic S4n. With the logic, we formalize and validate reasoning done in protocol design. As an example we analyse a secret election protocol that has appeared in the literature.

1 Introduction and Motivation

A lot of cryptographic protocols are designed to work in situations where the participants of the protocols have different, or even conflicting, interests. Various specialized cryptographic protocols put extra emphasis on ensuring fairness to all involved parties in such situations. For example, designers of electronic payment schemes have to consider the interests of banks, customers, merchants, national banks and economies, and law-enforcement agencies. Similarly, specialized digital signature systems must guarantee different properties to signers, verifiers and law enforcement. Nevertheless, there is no general framework for representing and analysing the conflicting interests.

When systematic proof techniques and automatic verification tools for cryptographic protocols are reaching the level where they will actually be used by protocol developers, it is especially important that the verified properties correspond to the application requirements. That is why we believe that more insight is needed into the derivation of the design goals and the relationships between the formal requirements.

In this paper we present a formal method that can be used to express conflicting interests and to analyse situations that involve them. With the logic, we formalize and validate reasoning done in protocol design. The method that we use is based on a standard multimodal logic known as S4n, applied to the current problem domain. We use both propositional and first-order versions of the logic.

This paper is structured as follows.

2 What Are Modal Logics?

  Modal logics are logics that examine qualified truth. For example, ideas such as ``necessarily true'', ``possibly true'' and ``known to be true'' appear in the applications of modal logics. [Fit93] is a well-written introduction to basic modal logics. More classic works include [Che80,HC84]. A more thorough treatment of the theoretical aspects is found for instance in [HM92].

A propositional modal logic with only one modality is obtained from standard propositional calculus by adding to the language a unary operator $\Box$ (pronounced ``box''); it is the modality operator. Different axioms can be given. Often these axioms reflect the interpretation of the modality. For example, if $\Box \phi$ is read as ``necessarily $\phi$'', then the following axiom:

\begin{displaymath}
\Box \phi \supset\phi\end{displaymath}

makes sense. It says that if something is necessarily true, then it is true. On the other hand, interpreting $\Box \phi$ as ``it is believed that $\phi$'', the axiom above is questionable. For it is possible to believe something that is not true.

Different sets of axioms create different modal logics. Some of the logics are given standard names. For example, the logic that obeys the three modal axioms


\begin{align}
& \Box(P \supset Q) \supset (\Box P \supset \Box Q)
\\
& \Box P \supset \Box\Box P
\\ & \Box P \supset P\end{align}

is given the standard name S4.

Both Hilbert-style and tableaux-style proof methods exist for modal logics.

Multimodal logics are modal logics that have more than one modality. Relevant to us is the multimodal logic S4n where every modality independently obeys (1)-(3).
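Although we do not present the semantics of the logic in this paper, the standard Kripke semantics of S4 gives a concrete way to sanity-check the three axioms: they are valid on every reflexive and transitive frame. The following sketch verifies this exhaustively for two-world frames; the tuple encoding of formulas and all function names are ours, purely for illustration.

```python
from itertools import product

# A tiny Kripke-model evaluator for propositional modal logic.
# Formulas: ('atom', name) | ('not', f) | ('imp', f, g) | ('box', f).
def holds(f, w, worlds, rel, val):
    tag = f[0]
    if tag == 'atom':
        return f[1] in val[w]
    if tag == 'not':
        return not holds(f[1], w, worlds, rel, val)
    if tag == 'imp':
        return (not holds(f[1], w, worlds, rel, val)) or holds(f[2], w, worlds, rel, val)
    if tag == 'box':
        return all(holds(f[1], v, worlds, rel, val) for v in worlds if (w, v) in rel)
    raise ValueError(tag)

def is_s4_frame(worlds, rel):
    reflexive = all((w, w) in rel for w in worlds)
    transitive = all((a, c) in rel for (a, b) in rel for (b2, c) in rel if b == b2)
    return reflexive and transitive

# Axioms (1)-(3); S4 semantics makes them valid on reflexive, transitive frames.
P, Q = ('atom', 'P'), ('atom', 'Q')
axioms = [
    ('imp', ('box', ('imp', P, Q)), ('imp', ('box', P), ('box', Q))),  # axiom 1
    ('imp', ('box', P), ('box', ('box', P))),                          # axiom 2
    ('imp', ('box', P), P),                                            # axiom 3
]

# Exhaustively check the axioms on every S4 frame over two worlds.
worlds = [0, 1]
extra_edges = [(0, 1), (1, 0)]
ok = True
for mask in range(4):
    rel = {(w, w) for w in worlds} | {e for i, e in enumerate(extra_edges) if mask >> i & 1}
    if not is_s4_frame(worlds, rel):
        continue
    for va, vb in product([set(), {'P'}, {'Q'}, {'P', 'Q'}], repeat=2):
        val = {0: va, 1: vb}
        ok = ok and all(holds(ax, w, worlds, rel, val) for ax in axioms for w in worlds)
print(ok)  # True
```

The same evaluator extends to a multimodal logic by keeping one accessibility relation per modality.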

3 The Problem

  In this section we describe the problem of conflicting interests.

Cryptographic protocols are often used in situations where the protocol is meant to ensure protection of the rights of the participating entities. For instance,

a)
In digital cash schemes the authorities must be able to detect and demonstrate fraud when it happens.

b)
Also, the cash users should be able to detect the bank stealing their legally owned cash and to demonstrate that the stealing has happened.

c)
In voting protocols, the voters must be able to detect and demonstrate that some party tries to swindle, for example by dropping or changing the voters' legal votes.

d)
If commercial information services are provided over the network, the providers need to be able to demonstrate plausibly that certain parties have used the services, in order to resolve conflicts that might arise.

These issues have been addressed, at least partially, in existing protocols--but on a case-by-case basis. Although formal proofs have been presented about the workings of protocols, the discussion of the high-level properties pertaining to the protection of rights has been informal and the topic is not well studied.

This is a problem, because the ultimate goals of the protocol are unclearly stated. Technical results of what a protocol achieves can be given and proven using existing methods. However, there is no apparatus for interpreting these results in a standard way. We want to present one here.

4 Presenting Our Method

  We begin to develop a formal apparatus to tackle the problem described in the previous section. The central notion will be that of demonstrability to a third party.(note) We are interested in claims such as ``If X is true then Alice can demonstrate that [to a third party]'' or ``Alice can demonstrate [to a third party] that X implies Y''.

This is because by being able to demonstrate the real situation to anyone else, she can claim justice for herself.

We use ${\mathsf P}_{A}({\phi})$ to denote that ``A can demonstrate $\phi$ [to a third party]'' (from now on we shall drop ``[to a third party]''). In fact, we shall make ${\mathsf P}_{p}$ a modality for every entity p in the system.

4.1 Defining the Logic

Definition 2676

Let ${\cal P}$ be a set of participants. We are interested in the following multimodal logic:

This definition, though lacking rigor, defines a standard multimodal logic called S4n. Because of the binding we make to the set of participants ${\cal P}$, we denote any specific instance of the logic by ${\mathbf {S4}}{\cal P}$.

We assume that the set ${\cal P}$ is finite. For this reason we may use the notations $(\forall {p} \in {\cal P})[{X}]$ and $(\exists {p} \in {\cal P})[{X}]$ in the obvious sense (i.e. as quantification over the participants or modalities). We will also use n-ary predicates, computable functions and the equality operator where necessary.
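Since ${\cal P}$ is finite, these quantified notations can be read as plain finite conjunctions and disjunctions; for instance, with ${\cal P} = \{A, B\}$ the clause $(\forall {p} \in {\cal P})[{{\mathsf P}_{p}({\phi})}]$ abbreviates ${\mathsf P}_{A}({\phi}) \wedge {\mathsf P}_{B}({\phi})$. In general:

\begin{displaymath}
(\forall {p} \in {\cal P})[{X(p)}] \equiv \bigwedge_{p \in {\cal P}} X(p), \qquad
(\exists {p} \in {\cal P})[{X(p)}] \equiv \bigvee_{p \in {\cal P}} X(p)\end{displaymath}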

We need to make proofs in our logic. The example proofs in this paper will be Hilbert-style. We will use shorthand notation in our proofs and well-established derived deduction rules in addition to the two basic deduction rules, Modus Ponens(note) and Necessitation(note).(note) S4n is shown to be decidable in [HM92]. Tableaux-based proof methods exist as well, though they are quite complicated to systematize.

4.2 Why These Axioms?

We must, of course, give some justification for the axioms we have just presented, remembering the interpretation that we gave for the modalities: demonstrability. We will first consider the three modal axiom schemas and then discuss a problem that unavoidably appears.

${\mathsf P}_{p}({X \supset Y}) \supset
 ({\mathsf P}_{p}({X}) \supset{\mathsf P}_{p}({Y}))$.
This is the basic axiom of all frame-based modal logics, to which S4 belongs. When we interpret the modality in this axiom in the way suggested above, we read: ``If p can demonstrate that Y is necessarily true when X is, then if p can additionally demonstrate X, he can also demonstrate Y.'' If I can prove that Y follows from X and that X is true, then I have shown that Y must be true as well, regardless of the interpretation of the subclauses X and Y. For this reason we see that this axiom captures an essential aspect of demonstrability.

${\mathsf P}_{p}({X}) \supset X$.
We claim that X cannot be false if someone is able to prove X, which is what this axiom captures. This implication is inherent in our choice of the interpretation of the modality: you cannot prove a thing that does not hold. For this reason we see that this notion, too, is part of the concept of demonstrability.

If the assumptions we make about a system are not consistent, this axiom has especially great potential to be the cause of contradictory derivations. We assume that our notion of provability has the property captured by this axiom. Thus we must assure ourselves that the assumptions we make do not enable anyone to prove anything that should be false. In our examples this consistency requirement is quite easily seen to hold.

${\mathsf P}_{p}({X}) \supset{\mathsf P}_{p}({{\mathsf P}_{p}({X})})$.
The last axiom claims that everyone who is able to prove X, regardless of the interpretation of X, is also able to prove that he or she is able to prove it. This makes sense because by giving the proof of X, the ability to prove X is undeniably demonstrated.

The problem of logical omniscience. This wording refers to a difficulty that appears inevitably in logics like ours due to the Necessitation rule, though originally it refers to logics of knowledge [FHMV95]. Namely, if X is always true (it is, for example, a tautology), then using Necessitation we get $(\forall {p} \in {\cal P})[{{\mathsf P}_{p}({X})}]$. In essence, all participants must be able to prove all universally true clauses to a third party, regardless of the possible complexity of the proofs. This, however, turns out not to be a problem in our practical applications.

4.3 General Analysis Process

In modal logics there are two kinds of true formulas: those that are locally true and those that are globally true. In our application the locally true formulas will represent facts that hold but that cannot necessarily be proven by anyone. The globally true formulas are facts that can necessarily be proven by everyone.

Our proposed analysis process goes as follows:

1.
Examine the situation/protocol at hand.
2.
Choose appropriate predicates and propositions to model the situation.
3.
Model the logical interrelations between the predicates using clauses of ${\mathbf {S4}}{\cal P}$. These clauses constitute the set ${\cal W}$, which we assume to be locally but not globally true. In other words, the clauses represent facts, but facts that cannot necessarily be proven.
4.
Find the situations where some participants' rights must be protected. Model them using clauses of ${\mathbf {S4}}{\cal P}$, attaching the clauses to the pertaining participants.
5.
Map these clauses representing ``critical situations'' into another set of clauses, requirements ${{\cal R}}$. The idea is that if the requirement clauses are true, then the participants can protect their rights as they were modelled in the previous step.
6.
Try to prove that if ${\cal W}$ is locally true, then so is ${{\cal R}}$. A standard notation for this is

\begin{displaymath}
\emptyset \vdash_{{\mathbf {S4}}{\cal P}}{\cal W} \supset{\cal R}\end{displaymath}

We will drop the $\emptyset$, however, as it denotes the set of global premises, which will always be empty for us.

4.4 Critical Situations

Having referred to critical situations above we now define them.

Definition 2722 (Critical situation)

A certain situation is critical for A if, when in the situation, A needs to be able to know that the situation holds and to prove that. In particular, critical situations include situations where

For example, assume that you have bought a new car and find after two weeks that there is no gear box in the vehicle. This is a critical situation for you, as the seller has cheated you. It does not help much to merely know that the gear box is missing; you must also be able to show that it is missing and that you paid for it, in order to get justice.

On the other hand, assume that you paid only for the car, but suddenly you find that there is also a brand new stereo rack mounted in the car which you didn't pay for. This is not a critical situation for you per se, because you are not deprived of your rights by receiving a free rack. You do not need to be able to prove that you have a free rack.(note)

The main point of critical situations is that when in one, an entity must be able to prove that the situation really holds. This is the subject of the next subsections.

4.5 Critical Pairs, Critical Sets and Mapping Them to Requirements

Definition 2726 (Critical Pair)

Let $\phi$ denote a situation that is critical for p. Then the pair $\langle \phi , p \rangle$ is a critical pair.

Definition 2734 (Mapping)

Let $\Phi$ denote all clauses of an ${\mathbf {S4}}{\cal P}$ logic. We define the mapping ${\mathbf {G}}: \Phi \times {\cal P} \to
\Phi$ as

\begin{displaymath}
{\mathbf {G}}(\langle \phi , p \rangle) =_{\mathit{def}}
 \phi \supset{\mathsf P}_{p}({\phi})\end{displaymath}

For a set ${\cal S}$ of critical pairs we use $
{\mathbf{G}}({\cal S})$ to denote

\begin{displaymath}
\left\{{\mathbf{G}}(c) \; \vert \; c \in {\cal S}\right\}\end{displaymath}

${\mathbf G}$ maps critical pairs into clauses that, when true, guarantee that whenever in a critical situation, the participants concerned can demonstrate it. This is exactly what we discussed in the previous section.
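The mapping ${\mathbf G}$ is simple enough to state directly as a program. A minimal sketch in Python, using a tuple encoding of formulas of our own devising (the encoding and all names are ours, not part of the method):

```python
# Formulas as nested tuples; ('P', agent, f) is the demonstrability modality.
def implies(f, g):
    return ('imp', f, g)

def demonstrates(agent, f):
    return ('P', agent, f)

def G(pair):
    """Map a critical pair <phi, p> to the requirement phi -> P_p(phi)."""
    phi, p = pair
    return implies(phi, demonstrates(p, phi))

def G_set(pairs):
    """Lift G to a set of critical pairs."""
    return {G(c) for c in pairs}

# Example: Alice's critical situation "cheated" yields cheated -> P_Alice(cheated).
req = G((('atom', 'cheated'), 'Alice'))
print(req)  # ('imp', ('atom', 'cheated'), ('P', 'Alice', ('atom', 'cheated')))
```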

We can now define our method more formally.

4.6 Making a Long Story Short

Let ${\cal C}$, a set of critical pairs, and ${\cal W}$, containing rules describing the system at hand, be given.

Definition 2742 (Solution)

Let ${\cal S} \subseteq \Phi$. ${\cal S}$ is a solution of $\langle {\cal C}, {\cal W}\rangle$ iff

\begin{displaymath}
\vdash_{{\mathbf {S4}}{\cal P}}({\cal W} \cup {\cal S}) \supset{\mathbf{G}}(\cal C)\end{displaymath}

Thus, if a set of clauses ${\cal S}$ is a solution for $\langle {\cal C}, {\cal W}\rangle$, then the following holds: if ${\cal S}$ is true in addition to ${\cal W}$, then all participants' rights can be protected in the sense defined previously (they can demonstrate, giving a proof, that their rights are being dishonored if that is the case). We want to separate ${\cal W}$ from ${\cal S}$ for clarity; ${\cal S}$ can contain different assumptions about the behaviour of the participants etc. that cannot be taken directly to be properties of a protocol being examined. However, no theoretical difference between the two types of premises exists.

4.7 Assuming Certain Ideality

It appears that deductions such as ``if A has paid the bill, she must have a receipt'' occur often in applications. However, this kind of thinking suffers from some inconsistency when stated formally. We would like to translate the clause above to the form

\begin{displaymath}
{\mathit {billPayed}} \supset{\mathit {haveReceipt}}\end{displaymath}

This has the contraposition

\begin{displaymath}
\neg{\mathit {haveReceipt}} \supset\neg{\mathit {billPayed}} \end{displaymath}

But this appears to lead to inconsistent results if A has paid the bill but for some reason has lost the receipt. Namely, she has indeed paid the bill, although the logic claims otherwise. For this reason we sometimes need to assume that such situations do not take place. Another solution would be to model these exceptions as well, but it would lead to unnecessary complications. The same assumptions must be made in any traditional paper-based system: lost certificates may lead to contradictory results.

4.8 Proving disprovability

  Another commonly occurring hard situation is one where A must be able to prove that B cannot prove $\phi$, although A cannot give a direct proof of $\neg\phi$. This happens when $\phi$ is beneficial for B and detrimental to A, so that $\neg\phi$ is beneficial for A. A first shows that B can prove $\phi$ whenever it is true. Then, by proving that B cannot do so, A has proven by contraposition that $\phi$ is false. In the terms of the logic:


\begin{align}
& {\mathsf P}_{A}({\phi \supset{\mathsf P}_{B}({\phi})}) & \qquad \text{assumption}
\\ & {\mathsf P}_{A}({\neg{\mathsf P}_{B}({\phi})}) & \qquad \text{assumption}
\\ & {\mathsf P}_{A}({\neg{\mathsf P}_{B}({\phi}) \supset \neg\phi}) & \qquad \text{from (4) by contraposition}
\\ & {\mathsf P}_{A}({\neg\phi}) & \qquad \text{from (5) and (6) by axiom 1}\end{align}

The crucial assumption is (5), and it must be discussed. Namely, it formalizes the argument that if B is able to prove $\phi$ and it is beneficial for him to do so, then he will exhibit a proof. Or, conversely: when B is asked to give a proof but does not, it is concluded that B cannot give one. This is quite common sense in everyday life. For example, a contract may contain wording such as: ``A receipt must be presented within two weeks of the request. In the case of failure to do so, the receipt is assumed not to exist.'' For this reason we will use this rule when necessary, though commenting whenever it is used.
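The propositional skeleton of this argument can be checked mechanically. Writing $b$ as an atomic abbreviation of ``B can prove $\phi$'', the contraposition step rests on the tautology $((\phi \supset b) \wedge \neg b) \supset \neg\phi$, which the following sketch (encoding ours) verifies by truth table:

```python
from itertools import product

# Propositional core of the indirect-proof pattern of Sec. 4.8: abbreviate the
# modal statement "B can prove phi" to a single atom b. The contraposition step
# is sound because ((phi -> b) and (not b)) -> (not phi) is a tautology.
def core(phi, b):
    antecedent = ((not phi) or b) and (not b)   # (phi -> b) and (not b)
    return (not antecedent) or (not phi)        # ... -> not phi

tautology = all(core(phi, b) for phi, b in product([False, True], repeat=2))
print(tautology)  # True
```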

5 Examples

  In this section we give two examples. First we give a very simple example to demonstrate the method. The second example analyses a voting protocol.

5.1 A Small Example

In this example we consider a situation that takes place between a phone user U and a phone company C. The user subscribes to telecommunication services provided by the company. We have three predicates: ${\mathit {called}}$ (the user has dialled a call), ${\mathit {paid}}$ (the user has paid for the call) and ${\mathit {charge}}$ (the company charges the user for the call).

Note that we are just talking about ``a call'' and not making any distinction between different calls. That is, we assume that every call is independently charged.

Naturally ${\cal P} = \{C, U\}$.

We assume that there exists the following relationship between the predicates: the user does not pay for a call before he has dialled it. We assume also that both the user and the company are able to prove whether the user is being charged. This is reasonable, because otherwise the bill would not have much legal force. (Other relationships will later be added in the solution set ${\cal S}$.) We thus get

\begin{displaymath}
{\cal W} = \left\{\begin{array}
{l}
{\mathit {paid}} \supset {\mathit {called}} \\
(\forall {p} \in {\cal P})[\neg{\mathit {charge}} \supset {\mathsf P}_{p}({\neg{\mathit {charge}}})] \\
(\forall {p} \in {\cal P})[{\mathit {charge}} \supset {\mathsf P}_{p}({{\mathit {charge}}})]\end{array}\right\}\end{displaymath}

We proceed to define the critical pairs. From the user's point of view, the company should charge him only if he has made a call that is not yet paid. From the company's point of view, it should be able to charge the user always when this is the situation. From these two observations we get the set of critical pairs

\begin{displaymath}
{\cal C} = \left\{\begin{array}
{l}
\langle \neg({\mathit {charge}} \supset ({\mathit {called}} \wedge \neg{\mathit {paid}})), U\rangle \\
\langle \neg(({\mathit {called}} \wedge \neg{\mathit {paid}})
\supset{\mathit {charge}}), C\rangle\end{array}\right\}\end{displaymath}

From these we get the set of requirements ${\cal R} = {\mathbf G}({\cal C})$:

\begin{displaymath}
{\cal R} = \left\{\begin{array}
{l}
\neg({\mathit {charge}} \supset ({\mathit {called}} \wedge \neg{\mathit {paid}})) \supset {\mathsf P}_{U}({\neg({\mathit {charge}} \supset ({\mathit {called}} \wedge \neg{\mathit {paid}}))}) \\
\neg(({\mathit {called}} \wedge \neg{\mathit {paid}}) \supset {\mathit {charge}}) \supset {\mathsf P}_{C}({\neg(({\mathit {called}} \wedge \neg{\mathit {paid}}) \supset {\mathit {charge}})})\end{array}\right\}\end{displaymath}

We should then try to prove $\vdash_{{\mathbf {S4}}{\cal P}}{\cal W} \supset{\cal R}$. This turns out to be impossible, however. The reason is that the parties cannot prove anything about the predicates ${\mathit {called}}$ and ${\mathit {paid}}$.(note)

We will now show that

\begin{displaymath}
{\cal S} = \left\{\begin{array}
{l}
{\mathsf P}_{U}({{\mathit {called}} \supset {\mathsf P}_{C}({{\mathit {called}}})}) \\
\neg{\mathsf P}_{C}({{\mathit {called}}}) \supset {\mathsf P}_{U}({\neg{\mathsf P}_{C}({{\mathit {called}}})}) \\
{\mathsf P}_{C}({{\mathit {paid}} \supset {\mathsf P}_{U}({{\mathit {paid}}})}) \\
\neg{\mathsf P}_{U}({{\mathit {paid}}}) \supset {\mathsf P}_{C}({\neg{\mathsf P}_{U}({{\mathit {paid}}})})\end{array}\right\}\end{displaymath}

is a solution of $\langle {\cal C}, {\cal W}\rangle$, i.e. that $\vdash_{{\mathbf {S4}}{\cal P}}({\cal W} \cup {\cal S}) \supset{\cal R}$. Indeed,


\begin{align}
& \neg{\mathit {called}} \supset \neg{\mathsf P}_{C}({{\mathit {called}}}) & \qquad \text{contraposition of axiom 3}
\\ & \neg{\mathit {called}} \supset {\mathsf P}_{U}({\neg{\mathsf P}_{C}({{\mathit {called}}})}) & \qquad \text{from the above and } {\cal S}
\\ & {\mathsf P}_{U}({\neg{\mathsf P}_{C}({{\mathit {called}}}) \supset \neg{\mathit {called}}}) & \qquad \text{from } {\cal S} \text{ by contraposition inside } {\mathsf P}_{U}
\\ & \neg{\mathit {called}} \supset {\mathsf P}_{U}({\neg{\mathit {called}}}) & \qquad \text{from the two lines above by axiom 1}
\\ & {\mathit {paid}} \supset {\mathsf P}_{U}({{\mathit {paid}}}) & \qquad \text{from } {\cal S} \text{ by axiom 3}
\\ & (\neg{\mathit {called}} \vee {\mathit {paid}}) \supset {\mathsf P}_{U}({\neg{\mathit {called}} \vee {\mathit {paid}}}) & \qquad \text{from the two lines above}
\\ & {\mathit {charge}} \supset {\mathsf P}_{U}({{\mathit {charge}}}) & \qquad \text{from } {\cal W}
\\ & \neg({\mathit {charge}} \supset ({\mathit {called}} \wedge \neg{\mathit {paid}})) \supset {\mathsf P}_{U}({\neg({\mathit {charge}} \supset ({\mathit {called}} \wedge \neg{\mathit {paid}}))}) & \qquad \text{combining the above}\end{align}

The last line is exactly the first requirement. The second requirement is proved by a similar argument. Note that the clause ${\mathit {paid}} \supset {\mathit {called}}$ of ${\cal W}$ was actually not used at all.

The solution set ${\cal S}$ can be interpreted as resulting from a contract of the form: ``the company must be able to demonstrate a dialled call when requested; otherwise all requests of charge will be declared void'', and similarly for the customer's paid bills. This is exactly the situation discussed in Sec. 4.8.
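As a sanity check of this entailment, one can search small Kripke models for countermodels. The sketch below encodes our reading of ${\cal W}$, ${\cal S}$ and the two requirements (the clause sets are our reconstruction, and a finite search is of course no substitute for the Hilbert-style proof) and finds no two-world countermodel:

```python
from itertools import product

# Formulas: ('atom', a) | ('not', f) | ('and', f, g) | ('imp', f, g)
#           | ('P', agent, f), read "agent can demonstrate f".
def ev(f, w, worlds, rel, val):
    t = f[0]
    if t == 'atom': return f[1] in val[w]
    if t == 'not':  return not ev(f[1], w, worlds, rel, val)
    if t == 'and':  return ev(f[1], w, worlds, rel, val) and ev(f[2], w, worlds, rel, val)
    if t == 'imp':  return (not ev(f[1], w, worlds, rel, val)) or ev(f[2], w, worlds, rel, val)
    if t == 'P':    return all(ev(f[2], v, worlds, rel, val) for v in worlds if (w, v) in rel[f[1]])
    raise ValueError(t)

atom = lambda a: ('atom', a)
neg = lambda f: ('not', f)
imp = lambda f, g: ('imp', f, g)
conj = lambda f, g: ('and', f, g)
P = lambda p, f: ('P', p, f)

called, paid, charge = atom('called'), atom('paid'), atom('charge')

# Our reading of the clause sets (an assumption; see the caveat above).
W = [imp(paid, called)] \
    + [imp(charge, P(p, charge)) for p in 'UC'] \
    + [imp(neg(charge), P(p, neg(charge))) for p in 'UC']
S = [P('U', imp(called, P('C', called))),
     imp(neg(P('C', called)), P('U', neg(P('C', called)))),
     P('C', imp(paid, P('U', paid))),
     imp(neg(P('U', paid)), P('C', neg(P('U', paid))))]
phi1 = neg(imp(charge, conj(called, neg(paid))))   # U is charged wrongly
phi2 = neg(imp(conj(called, neg(paid)), charge))   # an unpaid call is not charged
R = [imp(phi1, P('U', phi1)), imp(phi2, P('C', phi2))]

# Search every two-world model with reflexive accessibility relations (on two
# worlds these are automatically transitive, hence S4 frames) for a world
# where W and S hold but some requirement in R fails.
worlds = [0, 1]
extra = [(0, 1), (1, 0)]
counterexample = None
for mu, mc in product(range(4), repeat=2):
    rel = {'U': {(w, w) for w in worlds} | {e for k, e in enumerate(extra) if mu >> k & 1},
           'C': {(w, w) for w in worlds} | {e for k, e in enumerate(extra) if mc >> k & 1}}
    for bits in product(range(8), repeat=2):
        val = {w: {a for k, a in enumerate(('called', 'paid', 'charge')) if bits[w] >> k & 1}
               for w in worlds}
        for w in worlds:
            if all(ev(f, w, worlds, rel, val) for f in W + S) and \
               not all(ev(f, w, worlds, rel, val) for f in R):
                counterexample = (rel, val, w)
print(counterexample)  # None
```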

5.2 Analysing the Two-Agency Secret Election Protocol

We analyse a modified version [Sal96,Sch96] of the Two-Agency Secret Election Protocol, proposed originally by Salomaa et al. [NSS91,Sal91].

5.2.1 Presenting the Protocol

The protocol goes as follows. There are two agencies: one for legitimizing the voters (L) and another for counting the votes (C). We also have a voter (A).

Step 1:
A asks L for an identification number.
Step 2:
L checks whether A is allowed to vote. If so, and A has not received a number before, L sends A a random identification number iA.
Step 3:
After all voters have requested their identification numbers, L sends the set of the granted numbers, N, to C.
Step 4:
A chooses a secret identification number sA. Then she sends C the triple $\langle i_A, v_A, s_A\rangle$, where vA is her vote.
Step 5:
C finds out whether $i_A \in N$ or not. If it is in the set, C removes iA from N and adds sA to the list of voters that voted for vA.
Step 6:
When the voting period is over, C publishes a list of pairs $\langle s_p, v_p\rangle$, linking every sp to the corresponding vote.

In the following we assume that L works honestly and correctly. Of course, if L were cheating, voters could, for instance, get many identification numbers. This would break the whole system immediately. (L can be replaced by running an all-or-nothing-disclosure-of-secrets protocol with C [NSS91].)

Our analysis will concentrate on the rights of A, an individual voter.

We also make the following minor modification to the protocol. In Step 5, if somebody tries to vote multiple times with the same identification number, the first vote sent is the one registered by C. We change this to let C choose which of the several votes sent with the number to register.
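To make the message flow concrete, the six steps can be sketched as a toy simulation. All class and function names here are ours, and no cryptography is modelled:

```python
import secrets

# A toy simulation of the two-agency election protocol (steps 1-6).
class Legitimizer:                       # agency L
    def __init__(self, electoral_roll):
        self.roll = set(electoral_roll)
        self.issued = {}                 # voter -> identification number

    def request_number(self, voter):     # steps 1-2
        if voter in self.roll and voter not in self.issued:
            self.issued[voter] = secrets.token_hex(8)
        return self.issued.get(voter)

    def granted_numbers(self):           # step 3: the set N sent to C
        return set(self.issued.values())

class Counter:                           # agency C
    def __init__(self, numbers):
        self.numbers = set(numbers)      # the set N
        self.ballots = {}                # identification number -> list of (s, v)

    def receive(self, i, v, s):          # steps 4-5
        if i in self.numbers:
            self.ballots.setdefault(i, []).append((s, v))

    def publish(self):                   # step 6; with our modification C may
        # choose which of several votes sent with the same number to register
        return sorted(pairs[0] for pairs in self.ballots.values())

L = Legitimizer(['Alice', 'Bob'])
iA = L.request_number('Alice')
iB = L.request_number('Bob')
C = Counter(L.granted_numbers())
C.receive(iA, 'yes', 's-alice')
C.receive(iB, 'no', 's-bob')
C.receive('bogus-number', 'yes', 's-eve')   # rejected: not in N
print(C.publish())  # [('s-alice', 'yes'), ('s-bob', 'no')]
```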

5.2.2 First-order Modal Logic

At this point we will begin to use first-order modal logic, i.e. add predicates and quantifiers to the previously propositional logic we have used. (We use rigid designators [Fit93].) Incorporating predicate calculus into modal logics involves some subtleties, due to the number of choices available regarding the interpretation of the semantics of the quantifiers.

5.2.3 Model

We will use the following sets: ${\cal I}$ is the set of possible identification numbers, ${\cal S}$ is the set of possible secret identifications, ${\cal V}$ is the set of possible voting strategies and ${\cal O}$ is the set of voters.

The predicates that we have chosen to model the protocol are presented in Figure 1.


  
Figure 1: Predicates used in analysing the Two-Agency Protocol
\begin{figure*}
\begin{center}
\begin{tabular}
{r\vert l}
${\mathit {sent}}(o, t)$ & voter $o$ has sent the triple $t$ to $C$ in step 4 \\
${\mathit {granted}}(o, i)$ & $L$ has granted the identification number $i$ to voter $o$ in step 2 \\
${\mathit {published}}(p)$ & $C$ has published the pair $p$ in step 6 \\
${\mathit {given}}(i)$ & $L$ has given the identification number $i$ to
$C$ in step 3\end{tabular}\end{center}\end{figure*}

In order to obtain ${\cal W}$ we make a number of observations about the protocol. When we state these observations formally we get the following clauses ${\cal W}$:


  \begin{align}
(\forall o, o' \in {\cal O}; i \in I)[({\mathit {granted}}(o,i) \w...
 ...blised}}(p) \supset{\mathsf P}_{a}({\neg{\mathit {published}}(p)})}]]\end{align}

We let ${\cal P} = \{A, C, L\}$.

We also use the following derived predicates:


\begin{align}
{\mathit {sentLegal}}(o, i) \equiv\; &
 (\exists v \in {\cal V}; s \in {\cal S})[{\mathit {sent}}(o, \langle i, s, v\rangle)] \wedge {\mathit {granted}}(o, i)
\\ {\mathit {noPair}}(i) \equiv\; &
 \neg(\exists o \in {\cal O}; s \in {\cal S}; v \in {\cal V})[{\mathit {sent}}(o, \langle i, s, v\rangle) \wedge {\mathit {published}}(\langle s, v\rangle)]
\\ {\mathit {notCounted}}(o) \equiv\; &
 (\exists i \in {\cal I})[{\mathit {sentLegal}}(o, i) \wedge {\mathit {noPair}}(i)]
\\ {\mathit {duplicate}}(\langle s, v\rangle, \langle s', v'\rangle) \equiv\; &
 \langle s, v\rangle \neq \langle s', v'\rangle \wedge (\exists o, o' \in {\cal O}; i \in {\cal I})[{\mathit {sent}}(o, \langle i, s, v\rangle) \wedge {\mathit {sent}}(o', \langle i, s', v'\rangle)]
\\ {\mathit {dupPub}}(p, p') \equiv\; &
 {\mathit {duplicate}}(p, p') \wedge {\mathit {published}}(p) \wedge {\mathit {published}}(p')
\\ {\mathit {noSender}}(\langle s, v\rangle) \equiv\; &
 \neg(\exists o \in {\cal O}; i \in {\cal I})[{\mathit {sent}}(o, \langle i, s, v\rangle) \wedge {\mathit {granted}}(o, i)]
\\ {\mathit {fraudulentPair}}(p) \equiv\; &
 {\mathit {published}}(p) \wedge ({\mathit {noSender}}(p) \vee (\exists p' \in {\cal S} \times {\cal V})[{\mathit {dupPub}}(p, p')])\end{align}

Here ${\mathit {sentLegal}}$ is true for a voter and an identification number if the voter has sent at least one triple using the identification number and the number is valid, i.e. granted by L. ${\mathit {noPair}}$ is true for an identification number if there exists no $\langle s, v\rangle$-pair that a voter has sent with the identification number and that C has published in protocol step 6. ${\mathit {notCounted}}$ is true for a voter if there exists an identification number that the voter has used to send at least one vote to C, but none of the votes sent with that number appear in the published list.

${\mathit {duplicate}}$ is true for two $\langle s, v\rangle$-pairs when they have been sent using the same identification number. ${\mathit {dupPub}}$ denotes the situation that two published pairs are ${\mathit {duplicate}}$. ${\mathit {noSender}}$ is true for such a pair if nobody has sent it with a valid identification number. Finally, ${\mathit {fraudulentPair}}$ is true for a pair if it has been published but either has no legal sender or another published pair exists that was sent with the same identification number.

Critical pairs for A are then


\begin{align}
& \langle {\mathit {notCounted}}(A), A \rangle
\\ & \langle (\exists p \in {\cal S} \times {\cal V})[{\mathit {fraudulentPair}}(p)], A \rangle\end{align}

In essence, the situations where A's rights are in danger are those where her vote is ignored or modified (45) and those where illegal votes are counted (46).

We get then the requirements


\begin{align}
&{\cal R} = \left\{
\begin{array}
{l}
{\mathit {notCounted}}(A) \supset {\mathsf P}_{A}({{\mathit {notCounted}}(A)}) \\
(\exists p \in {\cal S} \times {\cal V})[{\mathit {fraudulentPair}}(p)] \supset {\mathsf P}_{A}({(\exists p \in {\cal S} \times {\cal V})[{\mathit {fraudulentPair}}(p)]})\end{array}\right\}\end{align}

5.2.4 Checking the Requirements

Trying to prove $\vdash_{{\mathbf {S4}}{\cal P}}{\cal W} \supset{\cal R}$, we get stuck. Starting from (45) and looking at the definition of ${\mathit {notCounted}}$, the first problem seems to be that A cannot prove that she has sent anything. Indeed, this is a problem in the protocol: because A cannot prove that she sent a vote to C, C can claim that he never received it.

Another problem is that A cannot prove ${\mathit {granted}}(A,i)$ for any i.

Problems are encountered when examining (46) as well. A has no way to prove ${\mathit {noSender}}$-clauses, for example.

We thus see that the protocol as formalized above does not fulfil the requirements we stated as the critical pairs (45)-(46). This reflects real-world problems and/or a bad formalization.

We will now revise the formalization and the assumptions concerning the system until we find a result that fulfils (45) and (46). In doing so we aim to obtain a better voting protocol.

We start from (45) and work top-down. We first make the following assumptions (their interpretations will be discussed below):


\begin{align}
& (\forall i \in {\cal I})[
{\mathit {sentLegal}}(A,i) \supset
 {\mathsf P}_{A}({{\mathit {sentLegal}}(A,i)})]
\\ & (\forall i \in {\cal I})[({\mathit {sentLegal}}(A,i) \wedge {\mathit {noPair}}(i)) \supset
 {\mathsf P}_{A}({{\mathit {noPair}}(i)})]\end{align}

(48) and (49) together ensure that (45) is handled:


\begin{align}
& {\mathit {notCounted}}(A) & \qquad \text{assumption}
\\ & (\exists i \in {\cal I})[{\mathit {sentLegal}}(A,i) \wedge {\mathit {noPair}}(i)] & \qquad \text{by the definition}
\\ & {\mathit {sentLegal}}(A,i) \wedge {\mathit {noPair}}(i) & \qquad \text{for some } i
\\ & {\mathsf P}_{A}({{\mathit {sentLegal}}(A,i)}) & \qquad \text{from the above and (48)}
\\ & {\mathsf P}_{A}({{\mathit {noPair}}(i)}) & \qquad \text{from the above and (49)}
\\ & {\mathsf P}_{A}({{\mathit {sentLegal}}(A,i) \wedge {\mathit {noPair}}(i)}) & \qquad \text{by axiom 1}
\\ & {\mathsf P}_{A}({{\mathit {notCounted}}(A)}) & \qquad \text{by the definition}\end{align}

Similarly we take care of (46) by assuming


\begin{align}
&(\forall p \in {\cal S} \times {\cal V})
[{\mathit {noSender}}(p) \supset
 {\mathsf P}_{A}({{\mathit {noSender}}(p)})]
\\ &(\forall p, p' \in {\cal S} \times {\cal V})
[{\mathit {dupPub}}(p, p') \supset
 {\mathsf P}_{A}({{\mathit {dupPub}}(p, p')})]\end{align}

Thus (48), (49), (57) and (58) together form a solution set for the system. What has been assumed? (48) states that A can demonstrate, when appropriate, that she has sent a triple with a legal identification number. (49) formalizes that A can show that none of the $\langle s, v\rangle$-pairs she sent in triples with an identification number is published in the public list. (57) claims that A can prove that a pair has no legal sender when that is the case, and (58) that she can always demonstrate, when appropriate, that the same identification number was used to make two counted votes.

We now show how the protocol can be modified to satisfy the first two of the four necessary assumptions, i.e. (48) and (49).

The seemingly straightforward (48) actually has a problem. Namely, if A proves that she has sent a triple with i, then she reveals the connection between i and herself, and the anonymity of her vote is lost. We assume that a trusted third party can be used to mediate without disclosing A's identity. This is just a design choice we make; we could try to remove this problem by making other assumptions instead of (48). Similar problems appear with (49).

We must now examine how to implement (48) and (49).

(48) is clearly a consequence of


\begin{align}
& (\forall i \in {{\mathit {I}}})[{\mathit {granted}}(A,i) \supset {\mathsf P}_{A}({{\mathit {granted}}(A,i)})]
\\ & (\forall i \in {\cal I}; s \in {\cal S}; v \in {\cal V})[{\mathit {sent}}(A,\langle i, s, v\rangle)
\supset{\mathsf P}_{A}({{\mathit {sent}}(A,\langle i, s, v\rangle)})]\end{align}

In order to be able to implement (59), we assume that the identification numbers granted to voters come with certificates that include the intended user's name. (60) can be implemented by requiring C to send a signed certificate (receipt) for every received message to its sender.
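A minimal sketch of the receipt mechanism behind (60): C signs every received message, and the voter stores the receipt as later proof of sending. We use an HMAC as a symmetric stand-in for C's digital signature (a real protocol would use a public-key signature so that any third party can verify); the key and function names are ours:

```python
import hmac, hashlib

# Toy sketch of the receipts of assumption (60): C returns a signed receipt
# for every received triple, which the voter can later present as evidence.
# HMAC is only a stand-in here: with it, verification needs C's key, whereas a
# real protocol would use a public-key signature verifiable by anyone.
C_SIGNING_KEY = b'secret key held by C (toy)'

def receipt(triple):
    """C's receipt for a received triple <i, s, v>."""
    msg = repr(triple).encode()
    return hmac.new(C_SIGNING_KEY, msg, hashlib.sha256).hexdigest()

def verify_receipt(triple, tag):
    """Check a receipt; models a third party verifying C's signature."""
    return hmac.compare_digest(receipt(triple), tag)

triple = ('i-alice', 's-alice', 'yes')
tag = receipt(triple)
print(verify_receipt(triple, tag))                        # True
print(verify_receipt(('i-alice', 's-alice', 'no'), tag))  # False
```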

Thus, by making these two changes to the protocol, we can ensure (48). (49) is a bit more complicated, as a non-existence must be proved. We accomplish this using an indirect proof, assuming:


\begin{align}
& {\mathsf P}_{A}((\forall o \in {\cal O}; i \in {\cal I}; v \in {\cal V}; s \in {\cal S})[({\mathit {sent}}(o, \langle i, s, v\rangle) \wedge {\mathit {given}}(i)) \supset {\mathsf P}_{C}({{\mathit {sent}}(o, \langle i, s, v\rangle)})])
\\ & (\forall i \in {\cal I})[\neg{\mathsf P}_{C}({\neg{\mathit {noPair}}(i)}) \supset {\mathsf P}_{A}({\neg{\mathsf P}_{C}({\neg{\mathit {noPair}}(i)})})]\end{align}

Here (61) says that whenever something has been sent using a legal identification number, C is able to prove that, and A is able to prove that this holds. This can be implemented so that the identification numbers themselves are public keys and every voter is sent a private key corresponding to the public one. The private keys are used to sign the triples. C receives the public keys in the list N, and L certifies this list. (62) is an instance of the situation presented in Sec. 4.8: A is able to demonstrate that C cannot prove $\neg{\mathit {noPair}}(i)$ when this is the case.

We will give a sketch of a proof that with these assumptions, (49) follows. It is straightforward to prove that whenever $\neg{\mathit {noPair}}(i) \wedge {\mathit {given}}(i)$ holds, so does ${\mathsf P}_{C}({\neg{\mathit {noPair}}(i)})$. This is based on (61). As this implication is provable by all participants, in particular A is able to prove it. From that the conclusion follows via a contraposition argument using (62).

5.3 Conclusive Remarks

We sum up what we have done at this point:

1.
We found that the protocol did not fulfil our requirements. This signals either real problems or formalization bugs. In this case they represent real problems, at least to some extent.
2.
We were able to modify the protocol and the assumptions regarding it in a way that satisfies some of the requirements, i.e. the voter can detect and demonstrate that her vote has been modified or neglected. What we did was the following: (1) the identification numbers are associated with their intended receivers by certificates; (2) the voters get signed receipts when they send their triples; (3) C is able to prove that it has received a certain triple; this is done by making the identification numbers actually pairs of public and private keys and using them to certify the votes anonymously; and (4) when C fails to give a proof of a received triple, it can be assumed guilty of fraud.

We found the method easy and intuitive to use, and it helped us to find problems and to examine solutions to them.

6 Conclusions and Future Work

  In this paper we presented a new method for analysing security protocols from the perspective of protecting the rights of the participants. The method is based on a standard multimodal logic commonly known as S4n. We defined our method and discussed some of its details. We demonstrated the method by analysing a voting protocol that has appeared in the literature.

The main contribution of our work to the area of trustworthy security engineering is giving a language for formally arguing about conflicting interests and the protection of rights, and proposing an applicable method that forces one to examine the underlying assumptions of a protocol in great detail.

It may be possible to use the logic presented here in conjunction with existing authentication logics such as BAN [BAN90]. This is an area that should be examined. More case studies are needed in order to weigh the usefulness of the method. Making proofs by hand is currently very cumbersome. Lemmas and more intuitive notation for certain commonly occurring clauses should be developed. The details of using a first-order multimodal logic still need more attention. The case studies show this to be a promising direction. However, research on this subject is still in a very preliminary phase. Finally, other methods for examining conflicting interests could be developed.

References

BAN90
Michael Burrows, Martin Abadi, and Roger Needham.
A logic of authentication.
ACM Transactions on Computer Systems, 8(1):18-36, February 1990.

Boo93
George Boolos.
The Logic of Provability.
Cambridge University Press, 1993.

Che80
B.F. Chellas.
Modal Logic: an Introduction.
Cambridge University Press, Cambridge, 1980.

FHMV95
Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi.
Reasoning about Knowledge, chapter Logical Omniscience.
The MIT Press, 1995.

Fit93
Melvin Fitting.
Basic modal logic.
In Handbook of Logic in Artificial Intelligence and Logic Programming, volume 1, pages 365-448. Oxford Science Publications, 1993.

HC84
G.E. Hughes and M.J. Cresswell.
A Companion to Modal Logic.
Methuen and Co., London, 1984.

HM92
J.Y. Halpern and Y. Moses.
A guide to completeness and complexity for modal logics of knowledge and belief.
Artificial Intelligence, 54:319-379, 1992.

NSS91
Hannu Nurmi, Arto Salomaa, and Lila Santean.
Secret ballot elections in computer networks.
Computers & Security, 10(6):553-560, October 1991.

Sal91
Arto Salomaa.
Verifying and recasting secret ballots in computer networks.
In H. Maurer, editor, New Results and New Trends in Computer Science, pages 283-289. Springer-Verlag, 1991.
Lecture Notes in Computer Science No. 555.

Sal96
Arto Salomaa.
Public-Key Cryptography.
Springer-Verlag, 2nd edition, 1996.

Sch96
B. Schneier.
Applied Cryptography: Protocols, Algorithms, and Source Code in C.
John Wiley & Sons, New York, 2nd edition, 1996.

Footnotes

...Finland
We want to thank Dr. Ilkka Niemelä for the valuable help he gave us by sharing his expertise in the area of modal logics, and for his time. He is, however, not responsible for the contents of this paper in any way.

...party.
We append the postfix ``to a third party'' in order to clearly distinguish the concept from that of ``real'' logics of provability, such as GL. See e.g. [Boo93].

...Ponens
From $\phi$ and $\phi
\supset\gamma$ infer $\gamma$.

...Necessitation
From $\phi$ infer ${\mathsf P}_{p}({\phi})$.

...Necessitation
Moreover, we will not refer to a semantic interpretation of the logic at all. This is a deliberate choice whose purpose is to avoid the extra complication that presentation of semantics would add to this paper.

...rack.
Of course, this thinking has nothing to do with ethical issues.

...${\mathit {paid}}$.
Were we to refer to the semantics we could easily present a counter model here. We do, however, skip this issue as mentioned previously.