
Normalized Derivations

Intuitively speaking, if B represents some set of information that is known by a principal, then the principal also knows (can generate) all the information in $\overline{B}$. In general $\overline{B}$ is an infinite set; however, we usually are not interested in the set of everything that a principal knows, but instead whether or not a specific message $x \in {\cal M}$ can be generated by a principal. This leads us to the following definition.

Let $x \in \overline{B}$ be a message. A derivation of x from B is an alternating sequence of sets of messages and rule instances written as follows:

\begin{displaymath}
B_{0} \stackrel{R_{0}}{\rightarrow} B_{1}
\stackrel{R_{1}}{\rightarrow} \cdots B_{k-1}
\stackrel{R_{k-1}}{\rightarrow} B_{k} \end{displaymath}

where:

1.
$B_{0} = B$,
2.
each $R_{i}$ is a rule instance $\langle I_{i}, N_{i}, O_{i} \rangle$ with $I_{i} \subseteq B_{i}$,
3.
$B_{i+1} = B_{i} \cup O_{i}$ for all $0 \leq i < k$, and
4.
$x \in B_{k}$.

For example, let $B = \{ \{a\}_{k} \cdot b, k^{-1} \}$. We derive $x = a \cdot b$ as follows:

1.
$B_{0} = B = \{ \{a\}_{k} \cdot b, k^{-1} \}$
2.
$R_{0} = \langle \{\{a\}_{k} \cdot b \}, 3, \{\{a\}_{k}, b \} \rangle$
3.
$B_{1} = \{ \{a\}_{k} \cdot b, k^{-1}, \{a\}_{k}, b \}$
4.
$R_{1} = \langle \{\{a\}_{k}, k^{-1} \}, 5, \{ a \} \rangle$
5.
$B_{2} = \{ \{a\}_{k} \cdot b, k^{-1}, \{a\}_{k}, b, a \}$
6.
$R_{2} = \langle \{ a, b \}, 2, \{ a \cdot b \} \rangle$
7.
$B_{3} = \{ \{a\}_{k} \cdot b, k^{-1}, \{a\}_{k}, b, a, a \cdot b \}$, which contains x.
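To make the definitions concrete, here is a minimal sketch in Python that replays this derivation. The constructors Atom, Pair, and Enc and the helper step are illustrative names of our own, not part of the model, and $k^{-1}$ is modeled as an ordinary atom:

\begin{verbatim}
# A minimal sketch of the message space and one derivation step.
# Atom = atomic message, Pair = concatenation ".", Enc = {m}_k.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Pair:
    left: "Msg"
    right: "Msg"

@dataclass(frozen=True)
class Enc:
    body: "Msg"
    key: "Msg"

Msg = Union[Atom, Pair, Enc]

def step(B, inputs, outputs):
    """One derivation step B_i -> B_{i+1} via an instance <I, N, O>:
    the inputs must already be in B_i, and B_{i+1} = B_i union O."""
    assert set(inputs) <= B, "instance inputs must lie in B_i"
    return B | set(outputs)

# Replay of the example: B = { {a}_k . b, k^{-1} }, deriving x = a . b.
a, b, k, k_inv = Atom("a"), Atom("b"), Atom("k"), Atom("k^-1")
B0 = {Pair(Enc(a, k), b), k_inv}
B1 = step(B0, [Pair(Enc(a, k), b)], [Enc(a, k), b])  # R_0: rule 3 (projection)
B2 = step(B1, [Enc(a, k), k_inv], [a])               # R_1: rule 5 (decryption)
B3 = step(B2, [a, b], [Pair(a, b)])                  # R_2: rule 2 (pairing)
assert Pair(a, b) in B3                              # B_3 contains x
\end{verbatim}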

We would now like to introduce the notion of a normalized derivation, but first we must introduce the notion of shrinking rules and expanding rules by defining a metric $\mu:
{\cal M} \rightarrow {\Bbb N}$. We then define a shrinking rule to be a rule such that for every instance of the rule $\langle I, N,
O\rangle$ we have:

\begin{displaymath}
\max_{m \in I}\mu(m) > \max_{m \in O}\mu(m) \end{displaymath}

Analogously, an expanding rule is a rule such that for every instance $\langle I, N,
O\rangle$ we have:

\begin{displaymath}
\max_{m \in I}\mu(m) < \min_{m \in O}\mu(m) \end{displaymath}

We can now define a normalized derivation as follows:

\begin{displaymath}
B_{0} \stackrel{R_{0}}{\rightarrow} B_{1}
\stackrel{R_{1}}{\rightarrow} \cdots B_{k-1}
\stackrel{R_{k-1}}{\rightarrow} B_{k} \end{displaymath}

is a normalized derivation if and only if for all $0 \leq i < k$, if $N_{i}$ is an expanding rule, then $N_{j}$ is an expanding rule for all $i < j < k$. In other words, all shrinking rules appear to the left of all expanding rules. Recall that in our notation, $R_{i}$ is the rule instance $\langle I_{i}, N_{i}, O_{i} \rangle$.

For example, in our model, we define our metric $\mu$ inductively as follows:

1.
$\mu(a) = 1$ for every atomic message $a$,
2.
$\mu(m_{1} \cdot m_{2}) = \mu(m_{1}) + \mu(m_{2})$, and
3.
$\mu(\{ m \}_{k}) = \mu(m) + \mu(k)$.

Note that $\mu(m)$ is well defined when $m = \{ m_{1} \}_{k_{1}} = \{ m_{2} \}_{k_{2}}$, because the perfect encryption assumption implies that $m_{1} = m_{2}$ and $k_{1} = k_{2}$. In the case $m = m_{1} \cdot m_{2} = m_{1}' \cdot m_{2}'$, either $m_{1}$ is a prefix of $m_{1}'$ or $m_{1}'$ is a prefix of $m_{1}$. Without loss of generality, assume $m_{1} = m_{1}' \cdot b$. Then it must be the case that $m_{2}' = b \cdot m_{2}$, because we have $m = m_{1} \cdot m_{2} = m_{1}' \cdot b \cdot m_{2} = m_{1}' \cdot m_{2}'$. Therefore

\begin{displaymath}
\mu(m) = \mu(m_{1} \cdot m_{2}) = \mu(m_{1}' \cdot b \cdot m_{2}) =
\mu(m_{1}' \cdot m_{2}'). \end{displaymath}
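Continuing the Python sketch above, the metric is a direct transcription of this inductive definition (reusing the Atom, Pair, and Enc constructors from the earlier fragment):

\begin{verbatim}
# The metric mu, transcribed from the inductive definition above.
def mu(m):
    if isinstance(m, Atom):
        return 1
    if isinstance(m, Pair):
        # additive over ".", so mu is invariant under re-association,
        # which is exactly the well-definedness argument above
        return mu(m.left) + mu(m.right)
    if isinstance(m, Enc):
        # well defined by the perfect encryption assumption
        return mu(m.body) + mu(m.key)
    raise TypeError(f"not a message: {m!r}")

# Projection shrinks: mu({a}_k . b) = 3 exceeds mu({a}_k) = 2 and mu(b) = 1.
assert mu(Pair(Enc(Atom("a"), Atom("k")), Atom("b"))) == 3
\end{verbatim}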

The message derivation rules from section 5 can now be categorized. With these definitions, rules 3 and 5 are shrinking rules and rules 2 and 4 are expanding rules.
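As a small illustration, with rules identified by the numbers from section 5, checking that a sequence of rule applications is normalized is a one-pass scan (continuing the Python sketch):

\begin{verbatim}
# A derivation is normalized iff no shrinking rule (3 or 5) occurs to
# the right of an expanding rule (2 or 4); rule numbering per section 5.
SHRINKING, EXPANDING = {3, 5}, {2, 4}

def is_normalized(rule_numbers):
    seen_expanding = False
    for n in rule_numbers:
        if n in EXPANDING:
            seen_expanding = True
        elif n in SHRINKING and seen_expanding:
            return False  # a shrinking rule right of an expanding one
    return True

assert is_normalized([3, 5, 2])   # the example derivation above
assert not is_normalized([2, 3])  # pairing before projection
\end{verbatim}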

We now show that in our model, there is a derivation of x from B if and only if there is a normalized derivation of x from B. First we need the following lemma.

Lemma 1: Let $B_{0} \stackrel{R_{0}}{\rightarrow} B_{1}
\stackrel{R_{1}}{\rightarrow} B_{2}$ be a derivation of length 2 such that $N_{0}$ is an expanding rule and $N_{1}$ is a shrinking rule. Then there exists a derivation $B_{0}' \stackrel{R_{0}'}{\rightarrow}
B_{1}' \stackrel{R_{1}'}{\rightarrow} \cdots B_{k-1}'
\stackrel{R_{k-1}'}{\rightarrow} B_{k}'$ such that

1.
$N_{1}', \ldots, N_{k-1}'$ are expanding rules.
2.
$B_{0} = B_{0}'$
3.
$B_{2} \subseteq B_{k}'$

Proof (sketch): We perform a case analysis on the pair of rules $(N_{0}, N_{1})$. If $R_{1}$'s inputs do not use $R_{0}$'s outputs, then $I_{1} \subseteq B_{0}$ and the two instances can simply be exchanged. If $R_{1}$'s outputs were already among $R_{0}$'s inputs, then $B_{2} = B_{1}$ and the shrinking step can be dropped. The one remaining case is the pairing rule followed by the projection rule, where the pair may be split differently from the way it was built; by the argument used above to show that $\mu$ is well defined, the projection's outputs can be obtained from $B_{0}$ by a single projection followed by pairings, which yields a derivation of the required form.

Theorem 2: Let $B \subseteq {\cal M}$ be a set of messages. Then $x \in \overline{B}$ if and only if x has a normalized derivation from B.

Proof: If x has a normalized derivation from B, then clearly this is a derivation, and by definition $x \in \overline{B}$.

For the other direction, let $x \in \overline{B}$. Then there exists some derivation

\begin{displaymath}
\Gamma = B_{0} \stackrel{R_{0}}{\rightarrow} B_{1}
\stackrel{R_{1}}{\rightarrow} \cdots B_{k-1}
\stackrel{R_{k-1}}{\rightarrow} B_{k} \end{displaymath}

such that $x \in B_{k}$. Let $S = \{ i \vert N_{i}$ is a shrinking rule and $\exists j < i$ such that $N_{j}$ is an expanding rule$\}$. If S is empty, then $\Gamma$ is a normalized derivation and we are done. Otherwise, we induct on the size of S. Let $r = \min S$. By repeatedly applying Lemma 1, we can move $R_{r}$ to the left until either it is the leftmost rule or it is immediately to the right of another shrinking rule. Since the original derivation is finite and each application of Lemma 1 moves rule $R_{r}$ one slot to the left, we need to apply Lemma 1 only a finite number of times. If $R_{r}$ becomes the leftmost rule, then clearly there are no expanding rules to the left of $R_{r}$. If $R_{r}$ is now immediately to the right of another shrinking rule $R_{s}$, then there are still no expanding rules to the left of $R_{r}$: otherwise there would be an expanding rule to the left of $R_{s}$ in the original derivation, so $s \in S$ and $s < r$, contradicting the minimality of r. Now we have a new derivation of x, $\Gamma'$, which is still finite. Since the application of Lemma 1 does not add any new shrinking rules, S', the new S, satisfies $S' = S - \{ r \}$. Furthermore |S'| = |S| - 1, so by the inductive hypothesis we can transform $\Gamma'$ into a normalized derivation of x.

Corollary 3: Given $x \in {\cal M}$ and $B \subseteq {\cal M}$, determining if $x \in \overline{B}$ is decidable.

Proof: By Theorem 2, $x \in \overline{B}$ if and only if x has a normalized derivation from B. We therefore try to find a normalized derivation or show that none exists. First we repeatedly apply shrinking rules to $B = B_{0}$, creating new sets $B_{i}$. Since there are a finite number of rules, each rule creates a finite number of new words, each smaller (by the metric $\mu$) than the largest word used as an input to the rule, and $B_{0}$ is finite to begin with, there are only a finite number of $B_{i}$'s, and hence we apply shrinking rules only a finite number of times. Let us call this final set $B_{s}$. Since $B_{s}$ is the result of repeatedly applying all possible shrinking rules to B, x has a normalized derivation from B if and only if it has a derivation from $B_{s}$ which uses only expanding rules. Furthermore, the length of a minimal derivation of x from $B_{s}$ is bounded by $\mu(x)$, since each expanding rule creates only words that are longer than the words used as its inputs. Since there are a finite number of expanding rules and $B_{s}$ is itself finite, we can simply try all possible sequences of expanding rules of length less than or equal to $\mu(x)$ in a finite number of steps. Therefore, the whole algorithm is guaranteed to terminate.
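The following sketch, continuing the earlier Python fragments, follows this procedure: it saturates B under the shrinking rules and then checks whether x can be rebuilt using expanding rules only. For brevity, the expanding phase is written goal-directed rather than as the exhaustive bounded enumeration in the proof; the key-inverse map inv is an assumed parameter, and pairing is treated as a binary constructor, which sidesteps the re-association subtlety discussed above.

\begin{verbatim}
# Decision procedure sketch for x in closure(B). Reuses Atom/Pair/Enc
# from the earlier sketch; inv maps a key to its inverse.

def shrink_closure(B, inv):
    """Repeatedly apply all possible shrinking rules; terminates because
    every output is smaller (by mu) than the largest input."""
    B, changed = set(B), True
    while changed:
        changed = False
        for m in list(B):
            new = set()
            if isinstance(m, Pair):                        # rule 3
                new = {m.left, m.right}
            elif isinstance(m, Enc) and inv(m.key) in B:   # rule 5
                new = {m.body}
            if not new <= B:
                B |= new
                changed = True
    return B

def derivable(x, B, inv):
    Bs = shrink_closure(B, inv)
    def build(m):  # can m be assembled from Bs by pairing/encryption?
        if m in Bs:
            return True
        if isinstance(m, Pair):                            # rule 2
            return build(m.left) and build(m.right)
        if isinstance(m, Enc):                             # rule 4
            return build(m.body) and build(m.key)
        return False
    return build(x)

# Example: from B = { {a}_k . b, k^{-1} } we can derive a . b but not k.
a, b, k, k_inv = Atom("a"), Atom("b"), Atom("k"), Atom("k^-1")
inv = lambda key: k_inv if key == k else key
B = {Pair(Enc(a, k), b), k_inv}
assert derivable(Pair(a, b), B, inv)
assert not derivable(k, B, inv)
\end{verbatim}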

In the proof of Lemma 1, the majority of cases displayed a kind of independence of rules. Intuitively, independence means that applying one rule does not increase the set of things that can be derived using the other rule. More formally, a shrinking rule s is independent of an expanding rule e if for each pair of instances $\langle I_{s}, s, O_{s} \rangle$ and $\langle I_{e}, e, O_{e}
\rangle$ we have one of the following:

1.
$O_{e} \cap I_{s} = \emptyset$: The output of the expanding rule cannot be used as an input to the shrinking rule. This is the case for pairing with decryption and for encryption with projection.
2.
$O_{s} \subseteq I_{e}$: The information gained by applying the shrinking rule was already present when applying the expanding rule. This is the case, for example, when a message is encrypted and then decrypted with the same key.

Note that this property applies to almost all cases of Lemma 1, and that the only real work in proving Lemma 1 came from the case of the pairing rule and the projection rule, because these are not independent. The other pairs of rules were independent because of the ``perfect encryption assumption.'' In general, this exchanging property (Lemma 1) need only be shown for pairs of rules that are not independent.
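In the running Python sketch, with rule instances represented as (inputs, rule number, outputs) triples, the per-instance condition reads as follows; two rules are independent when it holds for every pair of their instances:

\begin{verbatim}
# Independence check for one pair of instances, per the two conditions
# above; inputs and outputs are sets of messages.
def instances_independent(shrinking, expanding):
    I_s, _, O_s = shrinking
    I_e, _, O_e = expanding
    return (not (O_e & I_s)  # 1: e's output never feeds s
            or O_s <= I_e)   # 2: s's output was already an input of e

# e.g. encryption then decryption with the same key: condition 2 holds.
m, k, k_inv = Atom("m"), Atom("k"), Atom("k^-1")
enc = ({m, k}, 4, {Enc(m, k)})
dec = ({Enc(m, k), k_inv}, 5, {m})
assert instances_independent(dec, enc)
\end{verbatim}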

