Scyther Coding
DOI: 10.6100/IR614943
Document Version: Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers)
PROEFSCHRIFT
by
born in Geleen
This thesis has been approved by the promotor:
Copromotors:
dr. S. Mauw
en
dr. E.P. de Vink
The work on this thesis has been carried out under the auspices of the research
school IPA (Institute for Programming research and Algorithmics). The author was
employed at the Eindhoven University of Technology and supported by the Dutch
Organization for Scientific Research (NWO) within the project “Security Analysis
for Multi-Applet Smart Cards” (SAMASC), grant number 612.064.102.
Preface
I first got into contact with computers when I was 10 years old. At a friend’s house
I saw a Philips MSX-2 home computer, for which one could write programs in
MSX-Basic. The thought of making the computer do something I wanted instantly
attracted me, even more so than the games you could play on it. I nagged my
parents to buy me one. Because they did not immediately give in, I decided to
proceed and borrowed a book on Basic from the local library. When I was 11, a
Dutch broadcasting company organized bi-weekly programming contests. Although
I did not have a computer, I participated anyway. I sent in my solution and, much
to everyone’s surprise, won the contest. I remember receiving for this a book called
“Fatherhood” by B. Cosby, much to my disappointment.
The highly anticipated home computer arrived when I was 13, and I moved from
Basic to machine code. I wrote my first game in machine code somewhat later,
with which I won a much bigger MSX contest at age 15. From that point on, my
high school years were dominated by designing programs during the day (i.e. during
school) and programming them in the evening, as much as I was allowed. When I
finished high school, my goal was clear. I did not want to do more programming.
Rather, I wanted to learn more about the theoretical foundations of computing, and
decided to study Computer Science at the university.
My first year at the university went smoothly and in an optimistic mood I decided
to simultaneously study philosophy at another university, from my second year on.
As my interest in MSX slowly waned, it was replaced by another upcoming and
irresistible trend: the internet. Not realising I already had enough on my hands,
I also started an internet company. As a result I could not put sufficient time into
any of my interests to make them work, causing huge delays. I decided to focus
on theoretical computer science and finished my Master’s degree. Having chosen a
single path I felt committed enough to start a Ph.D. in the Formal Methods group
at the university.
During the first year of my Ph.D. studies, I read papers and brainstormed about
anything that seemed remotely interesting. My supervisors, Sjouke Mauw and Erik de
Vink, tried (and for the most part, managed) to guide me into interesting directions.
One of the directions was trying to understand what the essential elements of security
protocols were, as the literature immediately seemed to jump to computational
models (and conclusions), skipping any foundations. This triggered the development
of the formal security protocol model described in this thesis. Development of
a new protocol model meant reinventing some existing security properties and, as a
side result, even inventing new ones, such as a new form of authentication.
Although the developed model seemed to work on paper, validation was needed.
Manual verification of protocols with known properties seemed to work just fine,
so the next step was to develop tools and really put the theory to the test. After
an initial attempt by a student, I developed a prototype tool. Tests with this tool
seemed to validate the theory, and I could reproduce results regarding protocols
from the literature. Furthermore, the performance of the tool was much better than
that of the other security tools I had access to at that point, even though this had not been
the original motivation. Meanwhile, reading more literature revealed that there
was a new security protocol tool, for which very high performance was claimed.
Although this tool was not available to me to confirm this claim, the
method that was described in the paper seemed very similar to what my manual
proofs looked like. This led to the development of a new algorithm, consisting of
a blend of the model already developed, and the method presented in the paper.
When the new tool prototype was completed, the performance was unlike anything
that I had seen thus far. This opened the way to applying the tool to case studies
which had never been done before, such as investigating compositionality problems.
From this point on, there were more interesting routes open than I had time to write
papers about. There are still many half-developed ideas which are not included in
this thesis, but which I hope to flesh out in the near future.
Since my initial interest in computers years ago, I have completed many side quests.
I enjoyed all these thoroughly, learnt a lot from them, and I dare say they have
enriched me as a person. But, in the end, it seems there is no escape, and I always
find myself returning to computer science. It is time to continue on the main quest.
Acknowledgements
I owe thanks to many people for the last four years: not only because they taught
me things, but also for their support and the great working atmosphere.
I spent by far the most time in discussion with Sjouke Mauw, which was always a
great pleasure. From Sjouke I learnt many things (except for juggling), for which
I am very grateful. Without his influence this thesis would have looked completely
different (i.e. worse) and would probably have progressed less smoothly. Sjouke’s
faith in me helped me to accomplish things I might not have achieved otherwise.
A lot of time I also spent with Erik de Vink, who would always strive for technical
perfection. Typically, Erik asked the right questions which forced me to make my
thoughts comprehensible for others. I thank them both for allowing me the final
sprint of these last six months, which has probably been fairly horrific for all parties
involved. The warriors are still standing! (Just don’t nudge them, because they
might fall over.)
I owe thanks to my roommates, first Jerry den Hartog, and later Hugo Jonker, for
putting up with my rants, interesting working hours, and plenty of coffee breaks.
Jerry was always a great first filter for my brainstorming sessions during the first few
years, and his precision helped clarify problem areas. Later, I thoroughly enjoyed
Hugo’s cheerful mood and his amazing ability to improve himself. Thanks guys!
That I enjoyed my Ph.D. studies so much was also a joint effort of the FM group
members. For this, the person mainly responsible is surely Jos Baeten, who runs the
Formal Methods group in his very unique way, making it seem like an easy job(s).
This has been very inspiring. I especially appreciated the way in which everybody
was always ready to give a helping hand. This included advice on various topics,
from paper writing to (university) politics, as well as simply helping me to arrange
things during times of pressure. For this, I owe all FM-buckets.
Regarding this thesis, I thank Rob Nederpelt for improving the Dutch summary. I
also thank the students who assisted over the years with case studies, prototypes
and other Scyther-related investigations: Gijs Hollestelle, Lutger Kunst, Niek Palm,
and Ingmar Schnitzler.
I also want to thank Ricardo Corin and Martijn Warnier. Together we started
the association of Security Ph.D. students in the Netherlands (SPAN). The relaxed
atmosphere at the SPAN meetings ensured that we always had a great time. Martijn
and I also had some great meetings outside of work, at which we would contemplate
the future whilst enjoying a drink, until it was time to catch the last train.
I thank the reading committee, prof.dr. D.A. Basin, prof.dr. W.J. Fokkink and
prof.dr. B.P.F. Jacobs for assessing this thesis. Their insightful comments have
helped to improve this thesis. I also thank dr. S. Etalle for partaking in my
promotion committee.
I thank my parents, Fred and José, and my brother Koen, for unconditionally supporting
me through easy and less easy times. I could not wish for anything more.
Finally, writing this thesis would never have been possible without the ever loving
support of my wife. Given my tendency to work interesting (i.e. all) hours, my great
concentration skills (i.e. neglecting everything else), and relaxed and optimistic way
of planning (i.e. requiring insane schedules around deadlines for everything), this
certainly is no easy task. Yen-Ha: thank you for everything.
Contents
Preface i
1 Introduction 1
1.1 Historical context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Black box security protocol analysis . . . . . . . . . . . . . . . . . . 4
1.3 Research question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Overview of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Operational Semantics 9
2.1 Domain analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Security protocol specification . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Role terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 Event order . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.3 Static Requirements . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Describing protocol execution . . . . . . . . . . . . . . . . . . . . . . 21
2.3.1 Runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4 Threat model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.1 Network threat model . . . . . . . . . . . . . . . . . . . . . . 28
2.4.2 Agent threat model . . . . . . . . . . . . . . . . . . . . . . . 29
2.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3 Security Properties 33
3.1 Security properties as claim events . . . . . . . . . . . . . . . . . . . 33
3.2 Secrecy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.1 Aliveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.2 Synchronisation . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.3 Message agreement . . . . . . . . . . . . . . . . . . . . . . . . 44
3.4 Authentication hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.5 Verifying injective synchronisation . . . . . . . . . . . . . . . . . . . 54
3.5.1 Injectivity of synchronisation . . . . . . . . . . . . . . . . . . 55
3.5.2 The LOOP property . . . . . . . . . . . . . . . . . . . . . . . 59
3.6 Proving security properties of the NS/NSL protocol . . . . . . . . . 62
3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4 Verification 69
4.1 Trace patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.1.1 Representing classes of traces by trace patterns . . . . . . . . 70
4.1.2 Realizable trace patterns . . . . . . . . . . . . . . . . . . . . 73
4.1.3 Explicit trace patterns . . . . . . . . . . . . . . . . . . . . . . 77
4.1.4 Complete characterization . . . . . . . . . . . . . . . . . . . . 79
4.2 Characterization algorithm . . . . . . . . . . . . . . . . . . . . . . . 83
4.2.1 Basic idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.2.2 Optimization: decryption chains . . . . . . . . . . . . . . . . 84
4.2.3 Algorithm preliminaries . . . . . . . . . . . . . . . . . . . . . 85
4.2.4 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.2.5 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.3 Verification of security properties by means of characterization . . . 93
4.3.1 Verifying secrecy properties . . . . . . . . . . . . . . . . . . . 94
4.3.2 Verifying authentication properties . . . . . . . . . . . . . . . 94
4.3.3 Correctness and complete characterizations . . . . . . . . . . 95
4.4 Prototype implementation: Scyther . . . . . . . . . . . . . . . . . . . 95
4.4.1 Requirements and design . . . . . . . . . . . . . . . . . . . . 95
4.4.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.4.3 Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.4.4 Choosing a heuristic . . . . . . . . . . . . . . . . . . . . . . . 99
4.4.5 Choosing a bound on the number of runs . . . . . . . . . . . 102
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Bibliography 161
Samenvatting 181
Summary 183
CV 184
1 Introduction
If we want to send the secret message consoles, and the diameter of the rod is such
that two characters can be written around the rod, we write c o n s along the front
side of the rod, as in Figure 1.1 on the previous page. We turn the rod and write
the remaining characters o l e s. Now, if we unwrap the paper from the rod and
read the characters, we find that the encrypted message is coolness, as can be seen
in Figure 1.2.
If the recipient wants to decode it, he wraps the paper again around a rod of the
same diameter. If he uses a wrong diameter, e.g. such that three characters can be
written around the rod, the message will read clsonsoe, and he will not receive
the correct message. Here, the diameter of the rod acts as a key. Even when the
encryption method is known to the enemy, encrypting and decrypting require a key
that is known only to the sender and the recipient.
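The scytale described above can be sketched as a simple transposition cipher; the representation below is an assumption for illustration, with the key k standing for the number of characters that fit around the rod. Encryption writes the message in k lines along the rod and reads it off around the rod; decryption with key k collects every k-th character.

```python
# A sketch of the scytale as a transposition cipher. The key k is the number
# of characters that fit around the rod; assumes len(message) is divisible by k.

def encrypt(message: str, k: int) -> str:
    rows = len(message) // k                      # characters along the rod
    lines = [message[i * rows:(i + 1) * rows] for i in range(k)]
    # reading around the rod interleaves the k lines column by column
    return "".join(lines[i][j] for j in range(rows) for i in range(k))

def decrypt(cipher: str, k: int) -> str:
    # unwrapping with diameter k collects every k-th character
    return "".join(cipher[i::k] for i in range(k))

assert encrypt("consoles", 2) == "coolness"       # the example from the text
assert decrypt("coolness", 2) == "consoles"       # correct diameter
assert decrypt("coolness", 3) == "clsonsoe"       # wrong diameter: gibberish
```

With the correct key the recipient recovers consoles; with the wrong diameter he obtains clsonsoe, exactly as in the example above.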
Throughout history, the use of encoding schemes has evolved significantly. Tradi-
tional encryption relies on the fact that an encoding scheme is kept secret. This
design principle is sometimes referred to as “security through obscurity”, and is still
often applied today.
It is possible to design stronger encryption schemes. In the 19th century, Auguste
Kerckhoffs stated that an encryption scheme should be secure even if everything
except the key is known to the enemy. This is often referred to as Kerckhoffs’ law.
In the 20th century Claude Shannon formulated a similar notion, stating that “the
enemy knows the system”, which is referred to as Shannon’s maxim.
Just before and during the Second World War, cryptography was used extensively,
also by the German forces. Although different machines were used, the most
well-known are the Enigma devices, based on a system with multiple rotors. Initial
attempts to crack the code were made as early as 1932 by Polish researchers. Based on
their findings, by the end of the war a team of researchers at Bletchley Park (which
included Alan Turing) was able to break coded messages on a daily basis. This
result was not due to the fact that they had recovered German Enigma machines
which they could analyse. Rather, it involved the development of dedicated
code-breaking machines (which took several years) that tried to recover the key from the
encrypted text. Thus, to some extent, the German Enigma machines were secure in
the sense of Kerckhoffs’ law.
From 1949 onwards, when Claude Shannon published his seminal paper on information
theory, many scientific publications have appeared on the topic of cryptography,
and for a large part these focus on finding new encryption schemes. These encryption
schemes were no longer just inventions with claims, but were developed on
a proper mathematical foundation: cryptographic schemes are derived from
known hard mathematical problems. To prove the security of a scheme, one proves
that if an adversary could crack the scheme, he would also be able to solve a
mathematical problem which is known to be very difficult.
In 1976 Diffie and Hellman published their key paper in which they introduced a
mechanism that later became known as asymmetric encryption. We can give a
somewhat imprecise metaphor to provide intuition about this scheme: the Diffie-Hellman
encryption scheme allows everybody to create their own particular padlock with
corresponding key. Suppose a woman called Alice wants to securely receive messages
which only she can read. She creates a padlock and key. Instead of having to
securely transmit the key, she keeps the key to herself, and simply hands out copies
of the (open) padlock to everybody else. If her brother, Bob, wants to send a
secret message to her, he puts it in a box and closes the box using Alice’s padlock. Now
nobody except for Alice will be able to read the message. This scheme circumvents
the problem with traditional symmetrical cryptography, where both the sender and
the recipient must have the same key.
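The padlock metaphor can be made concrete with a toy RSA example (RSA is one realization of asymmetric encryption, not the thesis's subject; the numbers are the classic textbook choice p=61, q=53 and are far too small for real security). The public key (n, e) plays the role of the open padlock Alice hands out; the private exponent d is the key she keeps.

```python
# Toy RSA sketch of asymmetric encryption. Textbook-sized parameters only:
# real deployments use moduli of thousands of bits.
p, q = 61, 53
n = p * q          # 3233, the modulus, part of both keys
e = 17             # public exponent: "the open padlock"
d = 2753           # private exponent: "Alice's key"; e*d = 1 mod (p-1)*(q-1)

def encrypt(m: int) -> int:
    """Anyone holding the padlock (n, e) can lock a message."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only Alice, holding d, can unlock it."""
    return pow(c, d, n)

assert decrypt(encrypt(65)) == 65   # Bob locks the box; Alice opens it
```

Note that sender and recipient never share a secret: this is exactly what circumvents the key-distribution problem of symmetric cryptography mentioned above.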
This breakthrough has resulted in a number of variations on these schemes, leading
to international standards for encryption mechanisms. Both symmetric and asym-
metric encryption schemes are used extensively today for the encryption of internet
traffic, wireless communications, smart-card applications, cell phone communica-
tions, and many other applications.
From this brief historical account, one might conclude that a secure encryption
scheme is the “holy grail” of communication security. Once the perfect scheme
is found, all communications will be safe, and we don’t ever have to worry again.
Unfortunately, this is not the case. Cryptography is not enough to guarantee security
in communications. And that is why this thesis is not about cryptography.
contents, insert his own messages, or reroute or simply transmit messages. Together,
these three properties are known as the Dolev-Yao model: cryptography is perfect,
messages are abstract terms, and the network is under full control of the intruder.
Given a security protocol, it is possible to develop mathematical techniques to derive
security properties of a protocol under the Dolev-Yao assumptions. As a result, the
work of Dolev and Yao has been responsible for a branch of research which can
be roughly summarized as black-box security protocol analysis. However, building
the three properties sketched above into a precise mathematical model, with clear
assumptions, and clearly defined security properties, has proven to be a hazardous
task.
The defining example for this area of research illustrates some of the subtleties. It
concerns the so-called Needham-Schroeder public key protocol from [145], published
in 1978, five years before the abstract model of Dolev and Yao. The basic version of
this protocol consists of three messages that are exchanged between two partners. It
is intended to provide authentication for both agents. It was assumed to be correct
for over 20 years; now it is either correct or not, depending on the situation in which
it is used. The reason for this change is not to be found in more powerful analysis
methods. Instead, it is the assumptions about the intruder that have changed.
In 1989 Burrows, Abadi and Needham published their groundbreaking paper [44] on
a logic for authentication (which became known as BAN logic), which also depends
on the same black box assumptions as the Dolev-Yao model. Using this logic, they
were able to prove that several protocols satisfy a form of authentication.1 Among
these protocols was the Needham-Schroeder public key protocol. Thus, it seemed
that the (intuitively correct) protocol was now formally proven correct. A version of
the protocol made its way into the Kerberos [20] protocol. Almost twenty years after
the publication of the Needham-Schroeder protocol, in 1996, Gavin Lowe claimed
to have found an attack on the protocol. It turns out that Lowe’s attack required a
stronger intruder than the one Dolev and Yao originally had in mind. Around 1980,
networks were assumed to be used by honest users only: attacks
would come from the outside. Thus, the intruder was not one of the regular users of
the system. During the nineties, this view of networks changed. Many large
networks were now used by users who were not necessarily trusted. Lowe’s attack
requires that the intruder is, or has compromised, a regular user. As a result, the
intruder model was modified, and the intruder was from then on assumed to control
a number of regular users of the system.
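Lowe's attack can be sketched symbolically (the term representation below is an assumption for illustration, not the thesis's formalism). Encryption is a black box that only the intended recipient can open; the intruder E is a regular user with his own key pair, yet B finishes a run believing his peer was A.

```python
# A symbolic sketch of Lowe's man-in-the-middle attack on the
# Needham-Schroeder public-key protocol. E(A) denotes E impersonating A.

def enc_for(agent, payload):
    """{| payload |}_pk(agent): encrypted for agent's public key."""
    return ("enc", agent, payload)

def dec_by(agent, term):
    """Only `agent` holds the matching private key."""
    tag, owner, payload = term
    assert tag == "enc" and owner == agent, "cannot decrypt: wrong key"
    return payload

A, B, E = "alice", "bob", "eve"
na, nb = ("nonce", A), ("nonce", B)

# 1. A starts an honest session with E:       A    -> E : {| A, na |}_pk(E)
msg1 = enc_for(E, (A, na))
# 2. E re-encrypts and impersonates A to B:   E(A) -> B : {| A, na |}_pk(B)
msg2 = enc_for(B, dec_by(E, msg1))
# 3. B replies to "A"; E forwards it to A:    B    -> A : {| na, nb |}_pk(A)
msg3 = enc_for(A, (na, nb))
# 4. A answers within her session with E:     A    -> E : {| nb |}_pk(E)
_, nb_received = dec_by(A, msg3)
msg4 = enc_for(E, nb_received)
# 5. E now knows nb and completes B's run:    E(A) -> B : {| nb |}_pk(B)
msg5 = enc_for(B, dec_by(E, msg4))

# B receives the nonce he generated and wrongly concludes that A was his peer.
assert dec_by(B, msg5) == nb
```

The attack only works because A is willing to run the protocol with E, a regular user: exactly the intruder capability that was absent from the original Dolev-Yao view.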
In the same paper, Lowe introduced a mechanized procedure to find attacks on such
protocols. A high-level description of the protocol is processed by a program called
Casper, which models the behaviour of the protocol and the possible operations
of the intruder in terms of a process algebra. Similarly, the security properties
of the protocol are translated into a second set of processes that model the ideal
behaviour of the system, which would occur if the security properties are satisfied.
1 In retrospect the main contribution of this paper seems to be that the logic made some of
the Dolev-Yao assumptions explicit, and gave a possible mathematical definition of the notion of
authentication.
The Casper tool uses a model checker for this process algebra to verify whether
the actual protocol model has the same set of behaviours as the ideal behaviour
model. If these behaviours are the same, the intruder has no real influence over the
protocol execution. Using this procedure, Lowe was able to automatically find the
attack on the Needham-Schroeder protocol, and was also able to show that no such
attack exists on a repaired version of the protocol, which later became known as the
Needham-Schroeder-Lowe protocol.
After Lowe’s breakthrough, a large number of security protocol formalisms and tools
have been developed. We discuss some of them in Chapter 7. On the one hand,
many security protocol formalisms focus solely on protocol description, and there
are no tools available for them that are based on a formal semantics. On the other
hand, most of the tools have only implicit links with formalisms, and lack formal
definitions of the model and properties that are actually being verified. This makes
it very difficult to interpret the results.
These observations lead to the research question addressed in this thesis.
The goal of this thesis is to investigate how to combine a formal semantics for security
protocols with semantically correct verification, in the sense that the verification
results correspond to the formal protocol semantics.
Research question: How to integrate developments in state-of-the-art black-box
security protocol analysis into a formal semantics with tool support?
By state-of-the-art we refer to recent advancements in the efficiency of protocol
verification, in which protocols such as the Needham-Schroeder protocol can be
verified or falsified within seconds. By black-box we refer to the Dolev-Yao style
of formal analysis. We want to combine a formal semantics, including intuitive formal
definitions of security properties, with efficient tool support.
We start off by developing an explicit foundation in terms of an operational
semantics, which allows us to formally model security protocols and define all possible
behaviours of such protocols. Next, we provide formal definitions of existing and
new security properties. The combination of formal definitions of protocol behaviour
and security properties allows us to verify whether a property holds. We develop a
fully automatic method to verify or falsify security properties and implement this
in a tool set. We apply the automatic tools to establish results about the interaction
between security protocols. We further illustrate the manual application of the
security protocol formalism by developing a family of multi-party protocols.
Our research covers the full range of the analysis methods, from developing a formal
model describing protocol analysis, to the development of verification tools, and
applications of both the model and tools to case studies.
1.4 Overview of the thesis
We briefly sketch the organization of the thesis, and summarize the contributions
made in each chapter.
In Chapter 2 we present a new security protocol model for defining security protocols
and their behaviour. We make protocol execution in this model explicit by means
of an operational semantics. The result is a role-based security protocol model that
is agnostic with respect to the number of concurrent protocols. The model makes
several assumptions about protocol analysis explicit, and allows e.g. for a derivation
of the intruder knowledge from the protocol description. Within the protocol model,
security properties are modeled as local claim events. This chapter is based on [65].
The contribution made in this chapter consists of the development of a new security
protocol model, used to describe protocols and their behaviour. The model provides
several novel features.
Chapter 4: Verification
The prototype tool is used in Chapter 5 for the automated analysis of the parallel
execution of multiple protocols. This situation can occur e.g. in embedded systems,
like smart-card protocols or cellular phone applications. This results in the discovery
of several new attacks and results about safe practices. This chapter is based on [62].
For this chapter, the theoretical contribution consists of a definition of so-called
multi-protocol attacks. Practical contributions consist of the analysis of multi-protocol
attacks on a large set of protocols, and the subsequent discovery of several new
multi-protocol attacks. Two patterns are identified which capture possible problems
with protocols that are executed over the same network.
A further application of the model and tool is examined in Chapter 6, where the
Needham-Schroeder-Lowe protocol is generalized to a family of multi-party
authentication protocols. A proof of secrecy and synchronisation of the generalized
version of Needham-Schroeder-Lowe is sketched. The developed protocols
can serve as an efficient basis for multi-party synchronising protocols. This
chapter is based on [66].
The contributions in this chapter include a link between the notion of multi-party
synchronisation and the minimal number of messages in a protocol. A family of
multi-party authentication protocols is developed. A parameterized proof is sketched
of the correctness of the generalized version of the Needham-Schroeder-Lowe
protocol.
Finally, we discuss related work in Chapter 7 and close off with conclusions and
future work in Chapter 8.
2 Operational Semantics
We first conduct an analysis of the main concepts of security protocols. The purpose
of this analysis is to make the design decisions explicit and to decompose the problem
into smaller parts.
We start with an informal description of involved concepts. A security protocol
distinguishes a number of behaviours. Each such behaviour we will call a role.
We have, for instance, the initiator role and the responder role in a protocol. A
system consists of a number of communicating agents. Each agent performs one
or more roles (possibly from several security protocols). Thus, the system does
not execute “the protocol”. Rather, the system executes protocol roles, performed
by agents. A role performed by an agent is called a run. For instance, agent
Alice can perform two initiator runs and one responder run in parallel. The agents
execute their runs to achieve some security goal (e.g. the confidential exchange of
a message). While agents try to achieve their goals, an intruder may try to oppose
them. The capabilities of the intruder determine its strength in attacking a protocol
run. However, threats do not only come from the outside. Agents taking part in a
protocol run may also conspire with the intruder and try to invalidate the security
goals. In order to resist attacks, an agent can make use of cryptographic primitives
when constructing messages.
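The distinction between protocols, roles, agents, and runs drawn above can be sketched as a small data model (names are assumptions for illustration, not the thesis's formal definitions):

```python
# A minimal data-model sketch of the concepts above: a protocol defines
# roles, agents perform roles, and each performance of a role is a run.
from dataclasses import dataclass

@dataclass(frozen=True)
class Protocol:
    name: str
    roles: tuple            # e.g. ("initiator", "responder")

@dataclass(frozen=True)
class Run:
    protocol: Protocol
    role: str               # which role this run performs
    agent: str              # the agent executing it
    run_id: int             # distinguishes parallel runs by the same agent

ns = Protocol("NS", ("initiator", "responder"))

# The example from the text: agent Alice performs two initiator runs and
# one responder run in parallel. The system executes runs, not "the protocol".
runs = [Run(ns, "initiator", "Alice", 1),
        Run(ns, "initiator", "Alice", 2),
        Run(ns, "responder", "Alice", 3)]
assert len({r.run_id for r in runs}) == 3
```

The run identifier is what keeps the two parallel initiator runs apart, since they are otherwise identical.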
Given this global description, we can identify the following components of the secu-
rity protocol model.
• Protocol specification
• Agent model
• Communication model
• Threat model
• Cryptographic primitives
• Security requirements
We will discuss each of these aspects, list their points of variation and make
appropriate design decisions. Of course, every subdivision of the problem is artificial,
but we found that here it helps in adding structure and restricting the problem
space. The sub-models mentioned are not independent entities. For instance, the
protocol specification makes use of the provided cryptographic primitives and the
communication model is connected to the intruder model if the intruder has control
over (some part of) the network.
Agent model. Agents execute the roles of the protocol. The agent model is based on
a closed world assumption. By this we mean that honest agents show no behaviour
other than the behaviour described in the protocol specification. Thus, unless
specified explicitly in the protocol, an honest agent will never leak classified information.
The closed world assumption does not imply that an agent will only execute one run
of the protocol. We assume that an agent may execute any finite number of runs in
parallel (in an interleaved manner). The agent model also describes how an agent
interprets a role description. An agent executes its role description sequentially,
waiting at read events until an expected input message becomes available. This
implies that an agent ignores unanticipated messages. More specifically, an incoming
message will be matched against the expected message format as described by
the protocol specification. Our semantics will be parameterized over this matching
function, e.g. to allow for detection of type-flaw attacks.
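A sketch of such a matching function is given below (the term representation and variable convention are assumptions for illustration): an incoming message is matched against the expected pattern, binding role variables; a stricter variant could additionally check the type of each bound value, which is how type-flaw detection can be switched on or off.

```python
# A sketch of a matching function over symbolic terms. Terms are tuples;
# strings starting with "?" are pattern variables.

def match(pattern, message, bindings=None):
    """Return the bindings if `message` fits `pattern`, else None."""
    bindings = dict(bindings or {})
    if isinstance(pattern, tuple) and isinstance(message, tuple):
        if len(pattern) != len(message):
            return None
        for p, m in zip(pattern, message):
            bindings = match(p, m, bindings)
            if bindings is None:
                return None
        return bindings
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:          # already bound: values must agree
            return bindings if bindings[pattern] == message else None
        bindings[pattern] = message      # fresh variable: bind it
        return bindings
    return bindings if pattern == message else None  # constants must be equal

# Expecting {| i, ni |}_pk(r): a fitting message binds the nonce variable,
# while a message under another key is ignored.
assert match(("enc", "pk_r", ("alice", "?ni")),
             ("enc", "pk_r", ("alice", "n1"))) == {"?ni": "n1"}
assert match(("enc", "pk_r", ("alice", "?ni")),
             ("enc", "pk_b", ("alice", "n1"))) is None
```

Parameterizing the semantics over this function means the same protocol rules can be analysed under loose (untyped) or strict (typed) matching.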
Communication model. The communication model describes how the messages
between the agents are exchanged. We have chosen the model of asynchronous
communication. In order to express various different threat models, as discussed in
the next paragraph, we define the communication model using two multiset buffers.
These buffers are shared by all agents.
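The two-buffer setup can be sketched as follows (buffer and function names are assumptions for illustration): a send places the message in an output buffer, a network step moves it to the input buffer, and a read consumes it. Which party performs the middle step is exactly where the threat model plugs in.

```python
# A sketch of asynchronous communication via two shared multiset buffers.
from collections import Counter

out_buffer: Counter = Counter()   # sent messages, not yet across the network
in_buffer: Counter = Counter()    # messages deliverable to agents

def send(msg):
    out_buffer[msg] += 1

def transmit(msg):
    """The network (or the intruder) moves a message across."""
    assert out_buffer[msg] > 0
    out_buffer[msg] -= 1
    in_buffer[msg] += 1

def read(msg):
    """An agent consumes an expected message from the input buffer."""
    assert in_buffer[msg] > 0
    in_buffer[msg] -= 1

send("hello")
transmit("hello")
read("hello")
assert out_buffer["hello"] == 0 and in_buffer["hello"] == 0
```

Multisets (here Counter) rather than queues are used so that identical messages sent twice are both available, and delivery order is not fixed by the model.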
Threat model. In 1983 Dolev and Yao laid the basis for a network threat model that
is currently the one most widely used [79]. In the Dolev-Yao model the intruder has
complete control over the communication network. The intruder can intercept any
message and insert any message, as long as he is able to construct its contents from
his knowledge. The commonly used variants of the Dolev-Yao model also implicitly
include conspiring agents. Here we make the notion of conspiring agents explicit,
which allows for a systematic derivation of the initial knowledge of the intruder.
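Intruder deduction in this style can be sketched as a two-phase procedure (symmetric keys only, term shapes assumed): first decompose everything the intruder has seen, closing under projection and decryption with known keys; then check whether a target term can be composed from the result.

```python
# A sketch of Dolev-Yao style intruder deduction over symbolic terms.
# Atoms are strings; ("pair", a, b) is pairing, ("enc", k, m) is
# symmetric encryption of m under key k.

def analyze(knowledge):
    """Close knowledge under projection and decryption with known keys."""
    kn = set(knowledge)
    changed = True
    while changed:                       # repeat until a full pass adds nothing
        changed = False
        for t in list(kn):
            parts = []
            if isinstance(t, tuple) and t[0] == "pair":
                parts = [t[1], t[2]]
            elif isinstance(t, tuple) and t[0] == "enc" and t[1] in kn:
                parts = [t[2]]           # key is known: open the encryption
            for p in parts:
                if p not in kn:
                    kn.add(p)
                    changed = True
    return kn

def can_derive(t, kn):
    """Can the intruder compose t from the analyzed knowledge kn?"""
    if t in kn:
        return True
    if isinstance(t, tuple) and t[0] in ("pair", "enc"):
        return can_derive(t[1], kn) and can_derive(t[2], kn)
    return False

# The intruder saw a message encrypted under k1, and knows k1:
seen = {("enc", "k1", ("pair", "k2", "n1")), "k1"}
kn = analyze(seen)
assert can_derive("n1", kn)              # decrypt, then project
assert not can_derive("n2", kn)          # never sent, never derivable
```

Construction alone would give an infinite set, which is why derivability is checked on demand rather than by enumerating everything the intruder can build.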
Intruder models that are weaker than the Dolev-Yao model are also of interest, for
instance when studying protocol stacks or special communication media. Wireless
communication, for instance, implies that an intruder has the choice of jamming or
eavesdropping, but not both for the same message. Therefore, we will consider the
intruder model as a parameter of our semantics.
The perfect cryptography assumption states that nothing can be learned of a plain
text from its encrypted version, without knowing the decryption key.
[Figure 2.1. MSC of the Needham-Schroeder protocol: the initiator i (holding pk(r), sk(i)) creates nonce ni and sends {| i, ni |}pk(r) to the responder r (holding pk(i), sk(r)); r creates nonce nr and replies {| ni, nr |}pk(i) ; i returns {| nr |}pk(r) ; both roles then claim ni-synch.]
As a running example in this chapter, we use the short version of the Needham-
Schroeder protocol from [145], referred to as NS . Throughout this thesis we use
Message Sequence Charts (MSC) to illustrate security protocols and attacks. MSC
is an ITU-standardized protocol specification language [105]. Figure 2.1 contains
an MSC of the NS protocol. The initiator i holds her own secret key sk(i) and
the public key pk(r) of the responder r. Symmetrically, the responder r possesses
his own secret key sk(r) and the public key pk(i) of the initiator i. We denote
encryption of a message m with a key k as {| m |}k . The initiator first creates a
new nonce ni, denoted by the box, and then sends her name i together with the
nonce ni, encrypted with the public key pk(r), to the responder. After receipt of
this, the responder generates a new nonce nr and sends it, together with the earlier
nonce ni, covered by the public key pk(i) to the initiator. She, in turn, unpacks
the message and returns the nonce nr of the responder, encrypted with his public
key. Security claims are denoted by hexagons. Both the initiator and the responder
claim that the authentication property ni -synch holds. The details of such security
claims will be explained in Chapter 3.
A protocol specification defines the exchange of message terms between agents. Note
that the terms that occur in the protocol specification are different from those that
we will define later for the execution model. Here we define role terms, which are
used in the specification.
Definition 2.1 (Basic term sets). We define the basic sets used to construct Role
terms:
• Var , denoting variables, which receive their value upon receipt of a message;
• Const, denoting constants, which are freshly generated for each instantiation
of a role, and are therefore considered local constants;
• Role, denoting role names;
• Func, denoting function names.
In Table 2.1 on the next page we show some typical elements of these sets, as used
throughout this thesis.
Definition 2.2 (Role Terms). We define the set of Role Terms as the basic term
sets, extended with constructors for pairing and encryption, and we assume that
pairing is right-associative.
Functions from the set Func are considered to be global, and have an arity which
must be respected in all terms. Note that there is no term f to express “the function
f ”. When we want to refer to the function f (with domain X), we will use f as a
shorthand for {f (x) | x ∈ X}.
If global constants occur in a protocol, we model them as functions of arity zero.
Terms that have been encrypted with a term can only be decrypted by either the
same term (for symmetric encryption) or the inverse key (for asymmetric encryption).
To determine which term needs to be known to decrypt a term, we define a
function that yields the inverse for any role term:

⋅−1 : RoleTerm → RoleTerm

We require that ⋅−1 is its own inverse, i.e. (rt−1)−1 = rt.
Throughout this thesis we will assume that pk and sk are functions of arity 1 that
map to asymmetric keys, such that ∀R ∈ Role : pk(R)−1 = sk(R) ∧ sk(R)−1 =
pk(R). In particular, we will use pk to denote the public keys of the agents, and sk
to denote the private keys of the agents.
Example 2.3 (Signing). We model the secret key sk(R) of a role R as the inverse
of the public key pk(R). Encryption of a message m with the public key is modeled
as {| m |}pk(R) . Signing a message does not directly correspond to encryption with the
secret key. Rather, a so-called “message digest” is encrypted, and sent along with
the message. In order to correctly model signing of a message m with a private key
sk(R), we need to introduce a hash function h. Signing m then results in the term
(m, {| h(m) |}sk(R) ).
All other terms rt are considered to be symmetric keys, for which we have rt−1 = rt
unless stated otherwise. Thus, although we use the notation {| |} for both types of
encryption, the type of encryption that is applied can be derived from the key that
is used. Note that we explicitly allow for composed keys, e.g. as in {| rt1 |}(rt2,rt3) .
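The inverse function can be sketched concretely. The following Python fragment is an illustration of ours, using an ad-hoc tagged-tuple encoding of role terms that is not part of the formal definitions:

```python
# Role terms as tagged tuples (an ad-hoc encoding for illustration):
#   ("const", "ni")          local constant
#   ("role", "r")            role name
#   ("func", "pk", args)     function application, e.g. pk(r)
#   ("pair", t1, t2)         pairing
#   ("enc", t, k)            encryption {| t |}_k

def inverse(rt):
    """Return rt^-1, the key needed to decrypt terms encrypted with rt.

    pk(R) and sk(R) are each other's inverses (asymmetric keys);
    all other terms are treated as symmetric keys, i.e. rt^-1 = rt.
    """
    if rt[0] == "func" and rt[1] == "pk":
        return ("func", "sk", rt[2])
    if rt[0] == "func" and rt[1] == "sk":
        return ("func", "pk", rt[2])
    return rt
```

With this encoding, inverse(inverse(rt)) == rt holds for every term, mirroring the requirement (rt−1)−1 = rt.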
We now turn to describing the protocol behaviour. We describe protocols as sets of
roles, which in turn consist of role events.
Definition 2.4 (Role events). We define the set of events RoleEvent using two
new sets, labels Label and security claims Claim, which we explain below.
Event send_ℓ(R, R′, rt) denotes the sending of message rt by the agent bound to R,
intended for the agent bound to R′. Likewise, read_ℓ(R′, R, rt) denotes the reception
of message rt by the agent bound to R′, apparently sent by the agent bound to R.
Event claim_ℓ(R, c, rt) expresses that R upon execution of this event expects security
goal c to hold with optional parameter rt. A claim event denotes a local claim, which
means that it only concerns role R and does not express any expectations at other
roles.
rt ⊑ rt
rt1 ⊑ (rt1, rt2)
rt2 ⊑ (rt1, rt2)
rt1 ⊑ {| rt1 |}rt2
rt2 ⊑ {| rt1 |}rt2

The subterm operator ⊑ identifies subterms of a term, which also includes keys
used in the encryption of terms. Note that terms of the form f (rt) have no proper
subterms, as they are considered to be non-composed terms.
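As an illustration (ours, using an ad-hoc tuple encoding of terms), the subterm rules translate directly into a recursive collection function:

```python
# Terms as tagged tuples: ("pair", t1, t2), ("enc", t, k),
# ("func", name, args), ("const", c), ("role", r), ("var", v).

def subterms(rt):
    """The set of subterms of rt: every term is a subterm of itself;
    a pair contributes both components; an encryption contributes the
    plaintext and the key. Function applications f(rt) are
    non-composed and have no proper subterms."""
    result = {rt}
    if rt[0] in ("pair", "enc"):
        result |= subterms(rt[1]) | subterms(rt[2])
    return result
```

Note that the encryption key is itself a subterm, while the argument of a hash-like function application is not.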
Besides terms to be sent and received, a role specification describes the initial know-
ledge needed to execute the role. This initial knowledge is a set of terms.
16 Chapter 2 Operational Semantics
Definition 2.6 (Role Knowledge). We define the role knowledge set RoleKnow
as the set of all sets of role terms that do not contain variables as a subterm.
We write [ε1, ε2] to denote the list of two events ε1 and ε2. We write [ε1, ε2] · [ε3]
to denote the concatenation of two lists. Often we will omit the square brackets for
single events and write ε1 · ε2 to denote [ε1, ε2].
We write A ⇀ B to denote a partial function, which maps some elements of A to
elements of B.
We will use MR_P (R) as a shorthand for the initial knowledge of role R in a protocol
specification P . In many cases we omit the parameter P if the intended protocol is
clear from the context. The notation MR is a contraction of M , which we will later
use to denote knowledge, and R, used to denote roles.
Role events are required to be unique within a protocol description, which can
be enforced by the labeling scheme. This allows us to define a function role :
RoleEvent → Role. Given a role event, the function role yields the role the event
belongs to. We leave the protocol as an implicit parameter when the intended
protocol is clear from the context.
Example 2.9 (Role description). The following role description models the ini-
tiator role of the Needham-Schroeder protocol, without any security requirements.
This role description follows from Figure 2.1 on page 12 by selecting the left-most
axis and its associated events. Notice that we have to clarify which constructs in the
terms are variables (because they receive their value at reception of a message) and
which are constants (because they are determined by the role itself ). Therefore, we
consider i, r ∈ Role, ni ∈ Const, sk, pk ∈ Func, pk(i)−1 = sk(i), pk(r)−1 = sk(r),
1, 2, 3, 4 ∈ Label , and V ∈ Var .
Each role of the protocol corresponds to a list of events. In other words, a (sequential)
structure is imposed on the set of protocol events belonging to a role R. This total
ordering is denoted by ≺R . Thus, for any role R ∈ Role and ε1, ε2 ∈ RoleEvent
such that role(ε1) = R and role(ε2) = R we have that

ε1 ≺R ε2 ∨ ε1 = ε2 ∨ ε2 ≺R ε1
This relation prescribes how send protocol events and read protocol events corre-
spond.
Example 2.11 (Event order and communication relation). For the example
protocol NS , the role orderings ≺i and ≺r on the roles i and r, respectively, are as
follows:
In the previous section we have explained the abstract syntax for a protocol spec-
ification. A proper protocol specification will also have to satisfy a number of
well-formedness requirements. Not every sequence of send, read and claim events
is considered a security protocol. For example, we require that an agent must be
able to construct the terms he sends out, and that he cannot examine the content
of encrypted terms of which he does not know the key.
Well-Formed Roles. For each role, we require that it meets certain criteria. These
range from the fairly obvious, e.g. each event in a role definition has the same role
that executes it (referred to as the actor of the event), to more subtle requirements
regarding the messages. For the messages we require that the messages that are
sent can actually be constructed by the sender. This is satisfied if the message is in
the knowledge of the sending role. For variables we require that they first occur in
a read event, where they are instantiated, before they can occur in a send event.
For read events the situation is a bit more complex. As can be seen in the example
above, which describes the initiator role of the Needham-Schroeder protocol, a read
event imposes structure upon the incoming messages, in the form of a pattern. A
receiver can only match a message against such an expected pattern if his knowledge
satisfies certain requirements.
We introduce a predicate WF (Well-Formed) to express that a role definition meets
these consistency requirements, using an auxiliary predicate RD (Readable) and a
knowledge inference relation ⊢ : RoleKnow × RoleTerm.
Roles can compose and decompose pair terms. A term can be encrypted if the
agent knows the encryption key, and an encrypted term can be decrypted if the
agent knows the corresponding decryption key.
Definition 2.12 (Knowledge inference operator). Let M be a role knowledge
set. The knowledge inference relation ⊢ : RoleKnow × RoleTerm is defined inductively
as follows, for all role terms rt, rt1, rt2, k:

rt ∈ M =⇒ M ⊢ rt
M ⊢ rt1 ∧ M ⊢ rt2 =⇒ M ⊢ (rt1, rt2)
M ⊢ rt ∧ M ⊢ k =⇒ M ⊢ {| rt |}k
M ⊢ (rt1, rt2) =⇒ M ⊢ rt1 ∧ M ⊢ rt2
M ⊢ {| rt |}k ∧ M ⊢ k−1 =⇒ M ⊢ rt
Example 2.13 (Inference and encryptions). From a term set that consists of
the term {| m |}k , but not k−1 , it is impossible to infer m or k, i.e.

{| m |}k ⊬ m ∧ {| m |}k ⊬ k

Given {| m |}k and k−1 , we can infer m. If k is an asymmetric key (and thus k ≠ k−1),
then given {| m |}k and k−1 it is not possible to infer k:

k ≠ k−1 ⇒ {| m |}k , k−1 ⊬ k
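Operationally, the inference relation can be approximated by first closing the knowledge set under the two decomposition rules (analysis) and then checking whether the goal term can be built with the two composition rules (synthesis). The following Python sketch is our own illustration over an ad-hoc tuple encoding of terms, not part of the formal development:

```python
def inverse(rt):
    """pk/sk are mutual inverses; other terms are symmetric keys."""
    if rt[0] == "func" and rt[1] in ("pk", "sk"):
        return ("func", "sk" if rt[1] == "pk" else "pk", rt[2])
    return rt

def synthesise(M, t):
    """Can t be built from M by pairing and encryption only?"""
    if t in M:
        return True
    if t[0] in ("pair", "enc"):
        return synthesise(M, t[1]) and synthesise(M, t[2])
    return False

def analyse(M):
    """Close M under splitting pairs and decrypting {|t|}_k when the
    inverse key k^-1 is derivable."""
    M, changed = set(M), True
    while changed:
        changed = False
        for t in list(M):
            parts = []
            if t[0] == "pair":
                parts = [t[1], t[2]]
            elif t[0] == "enc" and synthesise(M, inverse(t[2])):
                parts = [t[1]]
            for p in parts:
                if p not in M:
                    M.add(p)
                    changed = True
    return M

def derives(M, t):
    """M |- t, as a two-phase analysis/synthesis approximation."""
    return synthesise(analyse(M), t)
```

Mirroring Example 2.13: from {| m |}k alone neither m nor k is derivable; adding k−1 makes m derivable, but for an asymmetric key k the key itself remains underivable.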
Example 2.14 (Inference and subterms). Given two terms m1 and m2 with
{m1} ⊢ m2, it is in general not the case that m2 ⊑ m1. A counterexample is given
by m1 = (rt1, rt2) and m2 = (rt2, rt1), where rt1 ≠ rt2. The converse does not
hold either: for terms k and m with {m} ⊬ k, we have that k ⊑ {| m |}k
but also that {| m |}k ⊬ k.
Example 2.15 (Inference and functions). Given a term f (rt) where f ∈ Func,
there is no further inference rule that applies. Thus, it is impossible to infer rt.
In this respect functions in our framework act as hash functions. We can safely
model keys (e.g. k(rt1, rt2), sk(rt1)) as hash functions in the same way, because, as
we will see later, the agent names are known initially to the agents as well as to the
intruder. Modeling {k(rt1, rt2)} ⊢ rt1, rt2 therefore does not increase the set of
inferable terms for any of the parties involved. (Within the model, all sets to which
the inference operator is applied will already include rt1 and rt2.)
RD(M, rt) =
  True                                             if rt ∈ Var
  RD(M ∪ {rt2}, rt1) ∧ RD(M ∪ {rt1}, rt2)          if rt ≡ (rt1, rt2)
  (M ⊢ {| rt1 |}rt2 ) ∨ (M ⊢ rt2−1 ∧ RD(M, rt1))   if rt ≡ {| rt1 |}rt2
  M ⊢ rt                                           otherwise
In particular, if we look at the case in which rt ≡ (rt1, rt2), being able to read such
a message can correspond to first extracting some key from rt2 which can be used
to decrypt rt1, or vice versa.
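The case distinction for RD transcribes almost literally into code. The sketch below is our own illustration over an ad-hoc tuple encoding of terms; it bundles a simplified ⊢ check so that the fragment is self-contained:

```python
def inverse(rt):
    """pk/sk are mutual inverses; other terms are symmetric keys."""
    if rt[0] == "func" and rt[1] in ("pk", "sk"):
        return ("func", "sk" if rt[1] == "pk" else "pk", rt[2])
    return rt

def derives_syn(M, t):
    """Build t from M by pairing and encryption only (synthesis)."""
    return t in M or (t[0] in ("pair", "enc")
                      and derives_syn(M, t[1]) and derives_syn(M, t[2]))

def derives(M, t):
    """M |- t: analysis closure followed by a synthesis check."""
    M, changed = set(M), True
    while changed:
        changed = False
        for u in list(M):
            parts = ([u[1], u[2]] if u[0] == "pair" else
                     [u[1]] if u[0] == "enc"
                     and derives_syn(M, inverse(u[2])) else [])
            for p in parts:
                if p not in M:
                    M.add(p)
                    changed = True
    return derives_syn(M, t)

def readable(M, rt):
    """RD(M, rt), by the four cases of the definition."""
    if rt[0] == "var":
        return True
    if rt[0] == "pair":
        return (readable(M | {rt[2]}, rt[1])
                and readable(M | {rt[1]}, rt[2]))
    if rt[0] == "enc":
        return derives(M, rt) or (derives(M, inverse(rt[2]))
                                  and readable(M, rt[1]))
    return derives(M, rt)
```

For instance, a read pattern {| V |}k2 is not readable from knowledge {i, r, k} (the key k2 is unknown), whereas {| i, r, W |}k is, matching the discussion of Example 2.18.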
We can now construct the predicate WF : Role × RoleSpec that expresses that a
role is well formed. The first argument of this predicate is always the role in which
the event occurs. It is used to express that the role that executes an event (i. e. the
sender of a send event) should match the role in which it occurs. Terms occurring in
a send or claim event must be inferable from the knowledge, while terms occurring
in a read event must be readable according to the definition above. Furthermore, an
agent should only read messages directed to him, and should only send out messages
that are labeled with the correct sender. If a term is used as a parameter of a claim
event, there is no such requirement, because whether or not this is appropriate
depends on the function of the parameter within the claim, as we will see in the
next chapter.
Definition 2.17 (Well-formedness). Let R be a role, and let M be a role knowl-
edge set. Let s be a list of role events.
For a protocol specification P : Role ⇀ RoleSpec we require that all roles are well-formed
with respect to their initial knowledge, which is expressed by ∀R ∈ dom(P ) :
WF (R, P (R)), where we use the notation dom(f ) to express the domain of a partial
function f .
Example 2.18 (Correctness of role descriptions). Consider the following in-
correct role description:
The read event in wrong2 contains a subterm {| V |}k2 . The intention is that V is
initialised through this read. However, since k2 is a symmetric key, and k2 is not
in the knowledge of the role, the value of V cannot be determined through this read.
Therefore, this role description is not well-formed. A correct role description is the
following:
wrong2corrected(i) = ( {i, r, k},
    read_1(r, i, {| i, r, W |}k ) ·
    send_2(i, r, W ) )
2.3 Describing protocol execution
2.3.1 Runs
A protocol specification describes a set of roles. These roles serve as a blueprint for
what the actual agents in a system can do. When a protocol is executed, each role
can be executed a number of times, possibly in parallel and by multiple agents. We
call such a single, possibly partial, execution of a role description a run. From an
implementation point of view, a run is similar to a thread of an agent. In our model,
two runs executed by the same agent are independent and share no variables.
Executing a role turns a role description into a run. This is referred to as instanti-
ation. In order to instantiate a role we have to bind the role names to the names
of actual agents and we have to make the local constants unique for each instantia-
tion. Furthermore, we have to take into account that the bindings of values to the
variables are local to a run too. Thus, the set of terms occurring in a run differs
from the set of terms used in role descriptions.
We assume the existence of a set RID of run identifiers and a set Agent of agents.
Run terms are defined similarly to role terms. The differences are that abstract
roles are replaced by concrete agents, that local constants are made unique by
extending them with a run identifier, and that variables are instantiated by concrete
values. The run term set also includes the set IntruderConst of basic run terms
generated by an intruder. This set will only be used from Section 2.4.2 onwards,
where it will be defined and explained in more detail.
Definition 2.21 (Instantiation function). For each run, there is a relation be-
tween the role terms and the run terms. A role term is transformed into a run term
by applying an instantiation.
The first component of an instantiation determines with which run identifier the
constants are extended. The second component determines the instantiation of
roles by agents. The third determines the valuation of the variables.
We extend the inverse function ⋅−1 to RunTerm. The functions roles : RoleTerm →
P(Role) and vars : RoleTerm → P(Var ) determine the roles and variables occurring
in a term. We extend these functions to the domain of RoleSpec in the obvious way.
Often we will extract the first component, the run identifier, from an instantia-
tion. We will use the notation runidof (inst) to denote the run identifier from an
instantiation inst.
As we will see later on, each run in the system will have a unique run identifier by
construction. Because the knowledge of a role is already statically defined by the
role description, we can omit it from the run specification.
The system that we consider consists of a number of runs executed by some agents.
Communication between the runs is asynchronous (buffered). In order to conve-
niently model different types of intruder behaviour, we will route communication
through two buffers. We have one output buffer for the sending runs and one input
buffer for the receiving runs (for a discussion on the expressive power of such a
construction, see [84]). The intruder capabilities will determine how the messages
are transferred from the output buffer to the input buffer.
Both the output buffer and the input buffer store sent messages. Messages contain a
sender, a recipient, and a run term: MSG = Agent × Agent × RunTerm. A buffer is
a multiset of such messages: Buffer = M(MSG). Abusing notation, we will use the
set MSG as if MSG ⊂ RunTerm, to avoid redefining existing operators on RunTerm
for the set MSG.
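Such multiset buffers are conveniently illustrated with Python's collections.Counter, which stores a count per message; the encoding of messages as plain tuples below is our own:

```python
from collections import Counter

# A message is a triple (sender, recipient, run term); a buffer maps
# each message to the number of copies it currently holds.
BS = Counter()                       # output buffer
BS[("a", "b", "ni#1")] += 1
BS[("a", "b", "ni#1")] += 1          # a multiset: duplicates are counted
BS[("b", "a", "nr#2")] += 1

# Removing one copy of a message leaves any further copies in place.
BS[("a", "b", "ni#1")] -= 1
assert BS[("a", "b", "ni#1")] == 1   # one copy remains
```

Using a multiset rather than a set ensures that sending the same message twice really places two copies in the buffer, each of which can be delivered (or intercepted) separately.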
The full network model also contains elements of the intruder model, which have
not been discussed at this point. In particular, to define the notion of state in our
transition system, we need to include a set of terms from P(RunTerm), which will
be used in Section 2.4 to represent the dynamic intruder knowledge.
Definition 2.25 (Network state). The state of a network of agents executing
roles in a security protocol is defined by

State = P(RunTerm) × Buffer × Buffer × P(Run)

and thus a state ⟨M, BS , BR, F ⟩ contains the intruder knowledge M , the contents of
the output buffer BS , the contents of the input buffer BR, and the (remainders of
the) runs F that still have to be executed.
Messages from the buffer are accepted by agents if they match a certain pattern,
specified in the read event. We introduce a predicate Match that expresses that a
message matches the pattern for some instantiation of the variables. The definition
of this predicate is a parameter of our system, but we will give an example of a
straightforward match.
For example, we define for each variable a set of run terms which are allowed values.
We introduce an auxiliary function type : Var → P(RunTerm) that defines the set
of run terms that are valid values for a variable. We give three possible examples for
defining type, which each result in a slightly different semantics for the read events.
Example 2.26 (Type matching: no type flaws). In most cases we assume
that the set of run terms is partitioned into a finite number of sets S1 , . . . , Sn . For
example, these can represent nonces, agent names, session keys, encryptions, tuples,
etc. For the typed matching model we require that
∀V ∈ Var : ∃i : type(V ) = Si
Intuitively, this means that if a variable is of type “Nonce”, it will only match with
run terms that are nonces.
Example 2.27 (Simple type matching: basic type flaws). A second option
is a slightly relaxed instance of the type matching model, with only three partitions,
where S1 represents all encrypted terms, S2 all tuples, and S3 all other terms.
In this case, the match predicate cannot distinguish between an agent name or a
nonce, as both are elements of S3 . It is thus possible that a read event that expects a
nonce term matches with a message that contains an agent name. When this occurs,
we speak of a basic type flaw.
Example 2.28 (No type matching: all type flaws). In the previous setting,
there is still a distinction between the three partitions. Another possibility is no type
matching, in which we have that

∀V ∈ Var : type(V ) = RunTerm

Here, a variable can match with any term. It can therefore happen that a variable
is instantiated with a tuple.
Throughout this thesis, we assume the type matching model, which distinguishes
e.g. agent names from nonces as in Example 2.26, unless stated otherwise.
Definition 2.29 (Well-typedness of instantiation). We define the predicate
Welltyped on (Var → P(RunTerm)) that expresses that a substitution is well-typed:
Using this predicate, we define the typed matching predicate Match : Inst ×
RoleTerm × RunTerm × Inst. The purpose of this predicate is to match an in-
coming message (the third argument) to a pattern specified by a role term (the
second argument). This pattern is already instantiated (the first argument), but
may still contain free variables. The idea is to assign values to the free variables
such that the incoming message equals the instantiated role term. The old instanti-
ation extended with these new assignments provides the resulting instantiation (the
fourth argument).
Definition 2.30 (Match). For all inst, inst0 ∈ Inst, pt ∈ RoleTerm, m ∈
RunTerm, the match predicate is defined as:
inst | pt | m | inst′
Match( (1, ρ, ∅), X, nr♯2, (1, ρ, {X ↦ nr♯2}) )
Match( (1, ρ, ∅), {| r, ni |}pk(i) , {| B, ni♯1 |}pk(A) , (1, ρ, ∅) )
In the first example above, the pattern is simply a single variable X, with a type that
contains the set of local constants. From the first parameter we find that X has not
been instantiated yet. Thus, the agent expects to receive any value of this type. The
message is nr♯2, which is an element of the type of X. Therefore, the message fits
the pattern, and upon reading would assign to the variable X the read term nr♯2.
The second example above shows a more complicated pattern and a message that can
be accepted. The pattern consists of a role name r and a local constant ni, encrypted
with the public key of a role i. From the instantiation with ρ we find that in this
particular run, the r role is performed by B, and the i role by A. Furthermore,
the local constant is unique to this run, of which the run identifier is 1. Thus, the
agent reading this pattern gets exactly what he expects. There are no uninstantiated
variables in the pattern. He expects exactly the term {| B, ni♯1 |}pk(A) , and any other
message will not match this pattern.
Some examples where the predicate does not hold can be produced if we assume type
matching, with the type of X being the set Agent ∪ Const♯RID ∪ IntruderConst. In
the last column inst′ we leave the entry blank to denote that the predicate is false
for any value of inst′.
inst | pt | m | inst′
¬Match( (1, ρ, ∅), nr, nr♯2, )
¬Match( (1, ρ, ∅), X, (nr♯2, ni♯1), )
¬Match( (1, ρ, ∅), {| i, ni |}pk(i) , {| B, ni♯1 |}pk(A) , )
In the first example above, the run with run identifier 1 expects to read his own
local constant nr♯1. In the second example, the message does not fit the type of the
variable X. In the third example, the encryption key is the correct one, but the agent
was expecting the name A inside, not B.
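One possible implementation of such a matching predicate is sketched below (our own illustration over an ad-hoc tuple encoding of terms): the pattern is walked against the message, role names are resolved through ρ, local constants are tagged with the run identifier, and free variables are bound when the candidate value is well-typed.

```python
def match(inst, pt, msg, welltyped):
    """Match(inst, pt, msg, inst'): return the extended instantiation
    inst' such that inst'(pt) equals msg, or None if no well-typed
    extension exists. welltyped(var, term) encodes the type model."""
    rid, rho, sigma = inst
    sigma = dict(sigma)                 # work on a copy

    def walk(pt, msg):
        tag = pt[0]
        if tag == "var":
            if pt in sigma:             # variable bound earlier
                return sigma[pt] == msg
            if not welltyped(pt, msg):
                return False
            sigma[pt] = msg             # bind the free variable
            return True
        if tag == "role":               # roles resolve through rho
            return msg == ("agent", rho[pt[1]])
        if tag == "const":              # local constant ni -> ni#rid
            return msg == ("const", pt[1], rid)
        if tag in ("pair", "enc"):
            return (msg[0] == tag and walk(pt[1], msg[1])
                    and walk(pt[2], msg[2]))
        if tag == "func":
            return (msg[0] == "func" and msg[1] == pt[1]
                    and len(msg[2]) == len(pt[2])
                    and all(walk(p, a) for p, a in zip(pt[2], msg[2])))
        return False

    return (rid, rho, sigma) if walk(pt, msg) else None
```

With a type check that only admits constants for variables, this reproduces the positive and negative cases of the examples above: a fresh variable is bound to an incoming constant, a fully instantiated pattern matches only the exact expected term, and a tuple is rejected for a nonce-typed variable.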
In order to define the behaviour of the system, a connection is made between the
protocol descriptions (i. e. roles and their events) and their execution (i. e. when they
are instantiated). In particular, when an instantiation function is combined with a
role event, we call the result a run event.
Definition 2.32 (Run event). A run event is a tuple (inst, ε), where inst ∈ Inst
and ε ∈ RoleEvent. We use RunEvent to denote the set of all run events.
We write ReadRunEv for the set of run events corresponding to read events,
SendRunEv for those corresponding to send events, and ClaimRunEv for those
corresponding to claim events. Together, these three sets form the set RunEvent.
The behaviour of the system is defined as a transition relation between system states.
A protocol description allows for the creation of runs. The runs that can be created
are defined by the function runsof .
Definition 2.35 (Possible runs). The runs that can be created by a protocol P
are defined by the function runsof : Protocol → P(Run) :
runsof (P ) = { ((θ, ρ, ∅), P (R)) | θ ∈ RID ∧ ρ ∈ roles(P (R)) → Agent ∧ R ∈ dom(P ) }
Definition 2.36 (Active run identifiers). Given a set of runs F , we define the
set of active run identifiers as

runIDs(F ) = { θ | ((θ, ρ, σ), ev) ∈ F }
Let P ∈ Protocol . Then the basic derivation rules for the system are given in Ta-
ble 2.3 on the facing page. The create rule expresses that a new run can only be
created if its run identifier has not been used yet. The send rule states that if a
run executes a send event, the sent message is added to the output buffer and the
executing run proceeds to the next event. The read rule determines when an input
event can be executed. It requires that the (partially) instantiated pattern specified
in the read event should match any of the messages from the input buffer. Upon
execution of the read event, this message is removed from the input buffer and the
executing run advances to the next event. If the input buffer, contains more than
one instance of the message, only one is removed. The claim rule expresses that an
enabled claim event can always be executed. The transmit rule describes transmis-
sion of a message from the output buffer to the input buffer without interference
from the intruder. Notice that in all these cases the intruder knowledge is not
affected. The dynamic behaviour of the intruder knowledge will be defined by the
network/intruder rules in Section 2.4.1.
m ∈ BS
[transmit]
transmit(m)
hM, BS , BR, F i −−−−−−→ hM, BS \ {m}, BR ∪ {m}, F i
Definition 2.38 (Initial network state). In the initial state of the system both
buffers are empty, and no runs have been created yet. Thus the initial state of the
system is given by
s0 = hM0 , ∅, ∅, ∅i
where M0 refers to the initial intruder knowledge, which we define in Section 2.4.2.
2.4 Threat model

In this section we address the threat model parameter. We discern two different
elements of the threat model. First, we incorporate the possibility that the network
is under the control of the intruder, expressed by network rules such as those below.
Second, we consider the possibility that some of the agents conspire with the intruder.
m ∈ BS
[take]
take(m)
hM, BS , BR, F i −−−−−−→ hM ∪ {m}, BS \ {m}, BR, F i
M ⊢ m
[fake]
fake(m)
hM, BS , BR, F i −−−−−−→ hM, BS , BR ∪ {m}, F i
m ∈ BS
[eavesdrop]
eavesdrop(m)
hM, BS , BR, F i −−−−−−→ hM ∪ {m}, BS \ {m}, BR ∪ {m}, F i
m ∈ BS
[jam]
jam(m)
hM, BS , BR, F i −−−−−−→ hM, BS \ {m}, BR, F i
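The four network rules manipulate the state components in a uniform way, as the following Python sketch illustrates (our own, over Counter buffers; the parameter can_derive stands in for the M ⊢ m check of Definition 2.12):

```python
from collections import Counter

def take(M, BS, m):
    """[take]: remove m from the output buffer and add it to M."""
    assert BS[m] > 0, "message not in output buffer"
    BS[m] -= 1
    M.add(m)

def fake(M, BR, m, can_derive):
    """[fake]: insert m into the input buffer, provided M |- m."""
    assert can_derive(M, m), "intruder cannot construct m"
    BR[m] += 1

def eavesdrop(M, BS, BR, m):
    """[eavesdrop]: deliver m to the input buffer while learning it."""
    assert BS[m] > 0
    BS[m] -= 1
    BR[m] += 1
    M.add(m)

def jam(M, BS, m):
    """[jam]: destroy m without learning its contents."""
    assert BS[m] > 0
    BS[m] -= 1
```

A Dolev-Yao intruder is obtained from take and fake alone; eavesdrop and jam model weaker intruders, e.g. on wireless media where jamming and eavesdropping the same message exclude each other.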
Example 2.39 (No intruder). In a network without an intruder we only have the
transmit rule as in Table 2.3 on page 27:

Networkrulename ::= transmit(RunTerm)
Example 2.40 (Dolev-Yao intruder). In the Dolev-Yao model the intruder has
full control over the network. Hence, there is no direct transmit rule. Every message
is read and analysed, and anything that can be constructed can be inserted into the
network:
Networkrulename ::= take(RunTerm) | fake(RunTerm)
Example 2.42 (Passive intruder). If the intruder can only eavesdrop, we have
In the agent threat model, we model the fact that a number of agents may be
compromised by the intruder. Assuming that the intruder can insert messages into
the network and read from it, the only thing that distinguishes the intruder from
a real agent, is some particular piece of knowledge representing the identity of the
agent. When an agent is compromised by the intruder, the intruder learns all the
knowledge of this agent for all the roles. This allows the intruder to act as if it were
one of the compromised agents in each of these roles.
The set Agent is partitioned into sets Agent T (denoting the trusted agents) and
Agent U (denoting the untrusted agents).
Note that in our set-up, the agents do not know which of the agents are trusted
and which are not. Thus, trusted agents might still start protocol sessions with
untrusted agents, or accept requests from untrusted agents.
Within our model, it is possible that any number of agents are compromised. In
other security protocol models, this is often reduced to a single untrusted agent,
typically E. Whether or not this reduction is valid depends on the details of the
protocol model and the specific security property that is tested.2
We assume agents are immediately compromised, and we therefore model them by
adding their knowledge to the initial intruder knowledge. The intruder learns all
possible instantiations of the initial knowledge of a role.
This assumes the intruder can generate local constants just as the agents can. How-
ever, in order to correctly model the fact that local constants (such as nonces)
are uniquely generated, we must ensure these constants are syntactically different
from the constants generated by any other runs in the system. In other words,
all nonces generated by the intruder are distinct from a nonce ni♯1 generated
in run 1. We could express this by introducing a distinctness constraint for the
nonces, effectively reserving certain run identifiers for the intruder generated con-
stants. Instead, we choose to model the fresh constants generated by the intruder
as elements of the set IntruderConst. Thus, the intruder can create any number
of constants, and we require that he can generate values of each type. More formally,
we have that ∀V ∈ Var : ∃t : t ∈ type(V ) ∧ t ∈ IntruderConst.
The initial intruder knowledge will consist of e.g. the names and public keys of all
agents, and the secret keys of the untrusted agents.
To instantiate the role knowledge, we only need to know how the role names are
mapped to agent names, and information about a run or instantiation of the vari-
ables is not needed. For a protocol P , an untrusted agent E in a role R, and a term
rt that is part of the initial role knowledge of R that does not contain local constants,
the knowledge that is passed to the intruder is defined as all possible instantiations
of rt for which R is instantiated to E. The full initial intruder knowledge consists
of these terms for all roles and all untrusted agents.
M0 = IntruderConst ∪ ⋃ { (·, ρ, ·)(rt) | rt ∈ MR(R) ∧ ∀rt′ ⊑ rt : rt′ ∉ Const }

where the union ranges over all R ∈ Role and ρ ∈ Role → Agent with ρ(R) ∈ Agent U
(as rt contains no local constants or variables, the run identifier and valuation
components of the instantiation are irrelevant).
Example 2.45 (Initial intruder knowledge for the NS protocol). For the
NS protocol we find:

M0 = IntruderConst ∪ Agent ∪ { pk(Ag) | Ag ∈ Agent } ∪ { sk(Ag) | Ag ∈ Agent U }

2 To give an idea of an example in which this reduction is not valid, consider a protocol of three
roles R1, R2, R3, for which there is a requirement for each run that ρ(R1) ≠ ρ(R2) ≠ ρ(R3). Now,
if there is an attack on the protocol (along the lines of the Needham-Schroeder attack presented in
the next chapter) that requires an agent (e.g. in role R1) to communicate with untrusted agents
(in roles R2, R3), then the attack requires that there is more than one untrusted agent. In such
a case the reduction to Agent U = {E} would not be valid, as this would cause attacks on the
protocol to be missed.
In this particular case, the initial role knowledge of both roles contributes exactly the
same information to M0 . We consider role i, with MR(i) = {i, r, sk(i), pk(i), pk(r)}.
When such a role is executed by an untrusted agent, we have that ρ(i) ∉ Agent T . No
such restriction holds for the communication partner, and thus ρ(r) ∈ Agent. Be-
cause the initial role knowledge contains r, a compromised run can therefore contain
any agent name, yielding Agent ⊆ M0 . Second, because the role knowledge contains
pk(r), we find that compromised agents also reveal the public keys of all the agents.
Finally, because sk(i) ∈ MR(i), we have that the secret keys of the untrusted agents
are known to the intruder.
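For small protocols, this construction can be executed directly. The sketch below is our own illustration with simplified helpers; it enumerates the role-to-agent assignments in which the considered role is played by an untrusted agent and instantiates the (constant-free) initial role knowledge:

```python
from itertools import product

def roles_of(t):
    """Role names occurring in a role term (ad-hoc tuple encoding)."""
    if t[0] == "role":
        return {t[1]}
    if t[0] in ("pair", "enc"):
        return roles_of(t[1]) | roles_of(t[2])
    if t[0] == "func":
        return set().union(*(roles_of(a) for a in t[2]))
    return set()

def instantiate(t, rho):
    """Apply a role-to-agent mapping rho to a role term."""
    if t[0] == "role":
        return ("agent", rho[t[1]])
    if t[0] in ("pair", "enc"):
        return (t[0], instantiate(t[1], rho), instantiate(t[2], rho))
    if t[0] == "func":
        return ("func", t[1], tuple(instantiate(a, rho) for a in t[2]))
    return t

def initial_knowledge(role_knowledge, agents, untrusted):
    """M0 without IntruderConst: instantiate each role's (constant-free)
    initial knowledge for every rho mapping that role to an untrusted
    agent. role_knowledge maps role names to sets of role terms."""
    M0 = set()
    for role, terms in role_knowledge.items():
        names = sorted({r for t in terms for r in roles_of(t)} | {role})
        for image in product(sorted(agents), repeat=len(names)):
            rho = dict(zip(names, image))
            if rho[role] in untrusted:
                M0 |= {instantiate(t, rho) for t in terms}
    return M0
```

For the NS initiator knowledge MR(i) = {i, r, sk(i), pk(i), pk(r)} with agents {A, E} and Agent U = {E}, this yields all agent names, all public keys, and only the untrusted secret key sk(E), as in Example 2.45.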
Before any events have occurred in a trace, the intruder knowledge is denoted by
M0 . Some intruder events (such as take or eavesdrop) cause the intruder knowledge
to change. Informally stated, we will write M_i^t to denote the intruder knowledge
just before execution of the event t_i .
Where the trace t is clear from the context, we will simply write M_i .
Note that if we include the take or eavesdrop intruder rules from the previous section
in the model (as with the Dolev-Yao model), the intruder can potentially eavesdrop
on any message that is being sent. However, the intruder cannot learn more than
that which can be inferred from the sent messages.
Proof. For all intruder rules, we have that the intruder knowledge set M is changed
only by adding elements from the send buffer BS.
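The monotonicity argument can be mimicked in a small simulation: whichever intruder rules fire, the knowledge set only ever grows, and only with terms drawn from the sent messages. The event names follow the text; the trace encoding is an assumption of this sketch.

```python
# Sketch: intruder knowledge along a trace grows monotonically, and only
# with terms taken from the send buffer (here: the message of the event).

def knowledge_states(m0, trace):
    """trace: list of (event, message); returns M before/after each event."""
    m = set(m0)
    states = [frozenset(m)]
    for event, msg in trace:
        if event in ("take", "eavesdrop"):   # the rules that extend M
            m.add(msg)
        states.append(frozenset(m))
    return states

states = knowledge_states({"n0"}, [("send", "m1"), ("eavesdrop", "m1"), ("take", "m2")])
```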
2.5 Conclusions
This completes our description of the base model for security protocol description.
We proceed by defining and explaining security claims in the next chapter.
3 Security Properties
In the previous chapter we presented a model for the description of security protocols
and their executions in terms of traces. In this chapter we will formally introduce
security properties into our model. We define here only forms of authentication and
secrecy. This does not mean that other properties cannot be expressed in the model.
Rather, it reflects our choice of base properties that can serve as a starting point
from which other trace properties can be easily defined.
We start off by defining the notion of claim events in Section 3.1. Then we address
two classes of properties: secrecy in Section 3.2 and authentication in Section 3.3.
We proceed by establishing in Section 3.4 a hierarchy on various strong forms of
authentication, and discuss some relative merits of the properties. We conclude by
establishing a useful result on the strongest form of authentication (synchronisation)
with respect to so-called injectivity in Section 3.5.
For our model as introduced thus far, we have several options for formalizing security
properties. As a first choice, we can decide to integrate the properties into the pro-
tocol specification, or have the properties as separate entities, e.g. using (temporal)
logic formulas. We choose to integrate the security properties into the specification
of the protocol. The main reason for this is that we consider security properties to
be an essential part of a security protocol, and a security protocol should not be
considered without knowing the exact properties it is supposed to satisfy.
We integrate the security properties into our model by introducing a special type of
role event: the claim event. The main idea behind the concept of a claim event is
locality: agents have a local view on the state of the system, based on the messages
they receive. The protocol should guarantee that based on the local view, the
agent can be sure about some properties of the global state of the system, e.g. that
something is not in the intruder knowledge, or that a certain agent is active.
To provide some intuition about local claim events, consider the protocol in Fig-
ure 3.1 on the next page. We will give a formal definition of the claim events later.
Just for this first example, we will rely on the intuition of the reader as to what
“secrecy of a term” means: if an agent communicates with non-compromised agents,
the term in question should be secret in every possible trace of the protocol.
Figure 3.1: A simple protocol. The initiator i generates nonce ni and sends {| i, ni |}pk(r) to the responder r, who holds sk(r); both roles end with the claim secret ni.
If we analyse the protocol in Figure 3.1, we find that if an agent completes the
initiator role whilst communicating with a trusted agent, he can be sure that the
term ni he created is secret. As the nonce is encrypted with the public key of the
intended recipient, which is trusted, only this agent can decrypt it. Whenever
a run of the i role is completed with trusted communication partners, the nonce ni
generated in the run will not become known to the intruder: we say that the claim
secret ni of the role i holds.
However, this does not mean the protocol guarantees some form of global secrecy.
In particular, if an agent completes the responder role, he cannot be sure that the
received term is secret. In Figure 3.2 on the next page we show an attack on the
protocol in 3.1. The intruder knows the public key of A, and thus he can simply
generate a message containing a nonce ne himself. The agent A will accept the
message, as there is no authentication performed on the origin of the message,
which seems to come from B. We say that the claim secret ni of the role r does not
hold. Such a falsification of a claim constitutes an attack.
The difference between the local views on security by the different roles is captured
by the local claim events. In this particular case, the claim of i is true, but the claim
of r is obviously false.
Example 3.1 (Structure of a claim definition). Let γ be a claim role event of
a protocol P. For the security properties defined in this chapter, we will require that
some predicate Q on traces(P) × ClaimRunEv holds for each instance of γ in all
traces of P. We will say that the property is true for the protocol if and only if
Q(t, c) holds for every trace t of P and every instance c of γ in t.
Figure 3.2: Trace of an attack on the protocol of Figure 3.1. The intruder, knowing pk(A), generates a nonce ne and sends {| B, ne |}pk(A) to agent A, running role r; A's claim secret ne is thereby falsified.
In the next sections we will see how secrecy and authentication can be defined in
terms of claim events.
3.2 Secrecy
The definition states that a secrecy claim γ in a protocol is true, if and only if
we have the following for all traces: for each run in which roles are mapped to
trusted agents only, and secrecy of a term is claimed, the term should never be in
the knowledge of the intruder.
This definition allows us to express secrecy of terms. This also includes variables
(as in the responder role in the example). The responder does not generate the
nonce, but if the protocol is correct, an agent completing the responder role should
be assured that the term he received is secret.
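Read operationally, the secrecy definition above amounts to the following check over a set of traces. The trace and claim encodings here are illustrative assumptions, not the thesis syntax.

```python
# Sketch: a secrecy claim holds iff, in every trace, for every claim made
# in a run whose roles are mapped to trusted agents only, the claimed term
# is never in the intruder knowledge.

def secrecy_holds(traces, trusted):
    for trace in traces:
        for run_agents, term in trace["claims"]:
            if set(run_agents) <= trusted and term in trace["knowledge"]:
                return False          # falsified claim: an attack
    return True

traces = [
    {"claims": [({"A", "B"}, "ni#1")], "knowledge": {"ne#2"}},
    # a claim involving the untrusted agent E carries no guarantee:
    {"claims": [({"A", "E"}, "ni#3")], "knowledge": {"ni#3"}},
]
ok = secrecy_holds(traces, trusted={"A", "B"})
```

Note that the second trace does not falsify the claim: since E is untrusted, the definition makes no promise about that run.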
In the example attack from Figure 3.2 on the preceding page we can see that the
claim does not hold. In the attack the variable ni of the responder role is instantiated
with a term that is under control of the intruder.
In Section 3.6 we illustrate this definition by sketching a proof of a secrecy claim of
the Needham-Schroeder-Lowe protocol.
3.3 Authentication
The security property studied the most in the field of security protocol analysis is
authentication. However, contrary to the requirement of secrecy, there is no general
consensus on the meaning of authentication. In fact, as indicated by Lowe [122],
there is a hierarchy of authentication properties. We will return to this in Section 3.4.
In its most basic form, authentication is a simple existential statement about a
communication partner. A protocol description, especially when written in the form
of a Message Sequence Chart, suggests that at least two agents are communicating.
However, because the network can be under the control of an intruder, not every
role execution guarantees that there actually has been a communication partner, as
e.g. in the attack in Figure 3.2.
Authentication focusses on establishing that executing a protocol role actually guar-
antees that there is at least a communication partner in the network. In most cases
we want to establish something stronger, e.g. that the intended partner is aware he
is communicating with us, and that some protocol is run, and that messages have
been exchanged as expected.
In the next section we will start off by defining a class of simple authentication
properties, which is often called Aliveness.
3.3.1 Aliveness
The definition tells us that when an agent executes a role specification up to the
claim event, and he thinks he is talking to an agent that is trusted, then the intended
communication partner has actually executed an event.
Using the predicate AL, we can refine this definition in several ways. We give a
few examples (in accordance with e.g. [122]): weak aliveness, weak aliveness in the
correct role, and recent aliveness.
Weak Aliveness The notion of weak aliveness corresponds to the weakest possible
form of the generic aliveness class sketched above, i.e.
AL ≡ True
It simply states the intended partner is at least alive, hence the name of the class.
Example 3.4. A protocol where an agent sends the text “Hello, I am Alice.” in
plaintext to Bob, where there is only a passive (eavesdropping) intruder, satisfies
weak aliveness: when Bob receives the message, he can be sure Alice has been active.
However, for the Dolev-Yao intruder model, it does not guarantee weak aliveness.
The intruder can generate such a message, even if there is no Alice. Thus, the
protocol would need to be strengthened, e.g. by having Alice sign the message with
her secret key.
Weak Aliveness in the correct role Knowing that the communication partner is
alive is often not enough. In most cases we at least require that he is executing some
particular role, as could be expected from the protocol description.
AL ≡ role(ε′) = R′
Recent Aliveness Although our model does not include a notion of time, we can
give an interpretation of recentness. The previous notions of aliveness guaranteed
that the communication partner is alive, but it does not tell us whether this was
before, after, or during the run that makes the claim. Recent aliveness guarantees
that an event took place during our run.
AL ≡ ∃inst″, ε″ : (inst″, ε″) < (inst′, ε′) < (inst, γ) ∧ runidof (inst″) = runidof (inst)
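The three refinements can be contrasted in a small sketch. The event records and their field names are assumptions made for illustration, not the thesis notation.

```python
# Sketch: the three aliveness variants as checks on a trace of event
# records. Weak: any partner event suffices; in-role: the partner event is
# in the expected role; recent: some event of the claiming run precedes
# the partner event, which in turn precedes the claim.

def weak_alive(partner_event):
    return partner_event is not None                 # AL ≡ True

def alive_in_role(partner_event, expected_role):
    return partner_event is not None and partner_event["role"] == expected_role

def recent_alive(trace, claim, partner_event):
    i, j = trace.index(partner_event), trace.index(claim)
    return i < j and any(e["runid"] == claim["runid"] for e in trace[:i])

e0 = {"ev": "send1", "runid": 1, "role": "i"}
e1 = {"ev": "read1", "runid": 2, "role": "r"}
cl = {"ev": "claim", "runid": 1, "role": "i"}
trace = [e0, e1, cl]
```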
3.3.2 Synchronisation
The notions of aliveness above have one main thing in common. They only consider
a role R′ (which is a parameter of the claim event) from the perspective of the role R
(in which the claim occurs), and require various properties of the agent performing
this particular role R′. If a specification contains two roles, then upon completion of
one role, we expect there to be an agent performing a run in the other role. Similarly,
the protocol specification suggests that there are runs for each role in the protocol,
and messages are exchanged between these runs. The specification furthermore
suggests an ordering on the events, and unmodified delivery of messages. If an
agent executes a role up to some point, the protocol description suggests that some
events have occurred as expected.
In the presence of an active intruder, other behaviour (not defined by the protocol
description) might occur. For example, if there is no agent performing a certain
role, this contradicts the protocol description. We consider any behaviour that is
not specified, to be unauthenticated. This leads to a natural definition of strong
authentication, which we will call synchronisation.
In order to define synchronisation in a concrete trace, we need some mechanism to
pinpoint which agents are performing which roles. As a protocol description can
consist of a number of roles, which can be instantiated any number of times, we
need some way to express which run is (supposedly) communicating with which
other runs. Therefore, we introduce the notion of a cast, borrowing intuition from a
theatre play that is performed several times. The cast for a particular performance
of the play relates activity in the performance to particular roles. Likewise, the
concrete activities in a protocol instance at the trace level are assigned to the roles
in the protocol. In the theatre case, in different performances an actor can play
different roles. Also, the same role can be taken up, for different performances of
the play, by different actors. Likewise, run events associated with different roles may
belong to the same agent, and run events that are instances of the same role may
belong to different agents. For our purposes the coupling of roles and run events
is more important than the coupling of roles and actors. Therefore, we have the
following definition.
Abusing notation, we write role(θ) to denote the role that the run θ is an instance
of, and runidof (c) to denote the run identifier of a run event.
Definition 3.5 (Cast). A mapping Γ : RunEvent → Role → RID is called a cast
function for a trace t of the protocol P if
Γ(c)(R) = θ =⇒ role(θ) = R (3.1)
for all θ ∈ RID, c ∈ RunEvent, where c is a claim event, R ∈ dom(P ), and
Γ(c)(role(c)) = runidof (c) (3.2)
for all c ∈ ClaimRunEv .
A cast function Γ : RunEvent → Role → RID is called an injective cast function, if
Γ(c)(R) = Γ(c′)(R′) =⇒ c = c′ ∧ R = R′
for every two claim run events c, c′ and every two roles R, R′. We use Cast(P, t)
to denote the collection of all cast functions for a trace t of the protocol P .
The idea behind the notion of a cast function is the assignment of roles in the context
of a concrete claim event c. We require that the run identifier assigned to a certain
role is actually executing that role, as captured by condition (3.1). Furthermore, we
require in condition (3.2) that for a claim in run θ, the claiming role is assigned the
run identifier θ. At this point, we do not require that every role will be performed
completely, leaving room for unfinished role executions.
An injective cast function is, with abuse of language, an injective mapping when
considered to be of functionality RunEvent × Role → RID, rather than injective
as a function of type RunEvent → Role → RID. The main reason for sticking to
the latter function type is the underlying intuition of a cast. The mapping Γ(c)
captures the perspective of the agent executing the instance of the role of the claim
run event c.
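The two conditions of Definition 3.5, plus injectivity, can be transcribed as checks on a finite cast table. The encodings below (a cast as a dictionary keyed by claim and role, and run_role giving each run's role) are assumptions of this sketch.

```python
# Sketch: checking that a candidate cast satisfies conditions (3.1) and
# (3.2), and whether it is injective as a map (claim, role) -> run id.

def is_cast(cast, run_role, claims):
    # (3.1): the run assigned to a role must be an instance of that role
    if any(run_role[rid] != role for (_, role), rid in cast.items()):
        return False
    # (3.2): for a claim in run theta, the claiming role is assigned theta
    return all(cast.get((c["ev"], c["role"])) == c["runid"] for c in claims)

def is_injective(cast):
    # injective when viewed as a map (claim, role) -> run id
    return len(set(cast.values())) == len(cast)

run_role = {1: "i", 2: "r"}
claims = [{"ev": "c1", "role": "i", "runid": 1}]
cast = {("c1", "i"): 1, ("c1", "r"): 2}
```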
Example 3.6. As a running example to illustrate the synchronisation property, we
will use a classical example: the short version of the Needham-Schroeder protocol
from [145], depicted in Figure 2.1 on page 12.
We have Role = { i, r } as the set of roles with initiator i and responder r, and for
the set of protocol events we put
Observe that the nonce ni of the initiator is stored in a variable W by the responder
side. Likewise, the responder nonce nr is stored by the initiator in the variable V .
Note that, on the one hand, we have for the protocol order ≼_P of NS that
send_3(i, r, {| V |}pk(r)) ≼_P read_3(i, r, {| nr |}pk(r)) ⋠_P claim_4(i, ni-synch)
Such a situation is relevant to the notion of synchronisation. For the claim event
claim_4(i, ni-synch) of the initiator role there are only two preceding pairs of send and
read events, viz. those of the first two messages. For the claim claim_5(r, ni-synch)
there are three preceding communication pairs, viz. all three messages.
For the Needham-Schroeder protocol discussed above, the trace representing the well-
known Lowe attack [119] can have the following form. Let ρ1, ρ2 : Role → Agent be
defined as
ρ1 = {i 7→ A, r 7→ E}
ρ2 = {i 7→ A, r 7→ B}
The attack is also shown in a graphical form in Figure 3.3 on the facing page. This
attack violates the ni -synch claim of the r role. It uses the take and fake intruder
rules. In the attack, an agent A executes the initiator role, trying to communicate
to an agent E. Unfortunately, E has been compromised. The intruder takes the
message, and decrypts it which is possible because it knows the private key of E.
The intruder encrypts the message again, but now using the public key of B, and
fakes a message from A to B. Now B replies to A. This message is encrypted with
the public key of A, so the intruder cannot decrypt this message, but it can change
the original sender address to E. A thinks E has replied, and thus it decrypts the
second challenge nr♯2, and encrypts it with the public key of E. Thus, the intruder
learns the value nr♯2, and can complete the protocol with B. This trace violates a
form of authentication, because B thinks he is talking to A, but in fact A is talking
to somebody else. Although neither A nor B is compromised, the intruder can
impersonate A to B.
Example 3.7. In the context of the Lowe attack trace introduced above, we have,
e.g., as a cast the mapping Γ such that
where inst1 = (1, ρ1, {V ↦ nr♯2}) and inst2 = (2, ρ2, {W ↦ ni♯1}).
Note that the cast Γ is bounded by the general restrictions of Definition 3.5 on
page 38. In particular, in this case we use the first restriction on the cast function,
Figure 3.3: The Lowe attack on the Needham-Schroeder protocol. A's first message {| i, ni♯1 |}pk(E) is re-encrypted by the intruder to {| i, ni♯1 |}pk(B); B generates nonce nr♯2, and A's final message {| nr♯2 |}pk(E) is re-encrypted to {| nr♯2 |}pk(B); both runs end with an ni-synch claim.
stating that for any claim event, a role must map to a run which is an instance of
that same role. Because of the limited trace we consider, there is in fact no choice
for the cast function. However, in general, traces can contain many interleavings of
runs, in which case there would be several alternatives.
Regarding the first two clauses above, the completely genuine run events of agent A
are matched by activity of the intruder (as E is an untrusted agent). Just bad luck for
agent A, but no guarantees can be made for agents communicating with untrusted
agents such as E. The situation for the latter two clauses is more seriously wrong.
Here, the role of responder of a trusted agent B, who assumes he is talking to
another trusted agent A, is matched by activity of agent A engaged in a protocol
session with agent E.
Using the cast function, we introduce an authentication property that captures the
following correspondence: at the trace level, we require that the same structures
occur as the ones found at the protocol description level. Informally put, we require
that everything we intended to happen in the protocol description also actually
happens in the trace.
We first give the definition of the authentication property that we call non-injective
synchronisation, and explain it in detail below.
Recall that the function cont(e) extracts the instantiated contents (of the form
(a, b, m)) from a (send or read) run event e.
Definition 3.8 (NI-SYNCH). A protocol P with a claim role event γ is called
non-injectively synchronising, notation NI-SYNCH(P, γ), if
∀t ∈ traces(P) ∃Γ ∈ Cast(P, t)
∀c ∈ t : c = ((θ, ρ, σ), γ) ∧ rng(ρ) ⊆ Agent_T :
∀ς, ϱ : ς → ϱ ≼_P γ ∃s, r : s ≺_t r ≺_t c :
roleevent(s) = ς ∧ runidof (s) = Γ(c)(role(ς)) ∧
roleevent(r) = ϱ ∧ runidof (r) = Γ(c)(role(ϱ)) ∧
cont(s) = cont(r),
where we write s ≺_t r to denote that the run event r is preceded by the run event s
in the trace t.
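For a single claim, a fixed trace, and a given cast, the NI-SYNCH condition boils down to the following search. The labels, cast encoding, and event records are illustrative assumptions of this sketch.

```python
# Sketch: check the NI-SYNCH condition for one claim occurring at position
# claim_idx. For each communication (send label, read label) causally
# preceding the claim, there must be trace events s before r before the
# claim, in the runs assigned by the cast, with equal contents.

def ni_synch_for_claim(trace, claim_idx, preceding_comms, cast):
    for send_lbl, read_lbl in preceding_comms:
        found = any(
            s["label"] == send_lbl and r["label"] == read_lbl
            and s["runid"] == cast[send_lbl] and r["runid"] == cast[read_lbl]
            and s["content"] == r["content"]
            for i, s in enumerate(trace[:claim_idx])
            for r in trace[i + 1:claim_idx]
        )
        if not found:
            return False
    return True

trace = [
    {"label": "send1", "runid": 1, "content": ("A", "B", "ni")},
    {"label": "read1", "runid": 2, "content": ("A", "B", "ni")},
    {"label": "claim", "runid": 1, "content": None},
]
cast = {"send1": 1, "read1": 2}
```

Dropping the requirement that the send occurs before the read (the inner ordering of s and r) would yield the agreement-style check discussed later.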
Figure 3.4: A unilateral authentication protocol. The responder r generates a nonce nr and sends {| i, nr |}sk(r) to the initiator, whose role ends with an ni-synch claim.
In order to rule out such flawed protocols, the additional property of injectivity is
required. For each instance of a claim, we expect there to be a unique set of runs
that execute the other roles as expected.
In order to capture the requirement that each claim instance corresponds to a unique
set of runs fulfilling the other roles, we modify NI -SYNCH slightly and require that
the cast function is injective, yielding the notion of injective synchronisation.
Example 3.12. For the Needham-Schroeder protocol in Figure 2.1 on page 12,
I -SYNCH holds for the role i: for each instance of the claim of role i in a trace,
there must also be a unique instance of the role r to synchronise with.
Figure 3.5: Replay attack on the protocol of Figure 3.4. The intruder learns {| A, nr♯2 |}sk(B) from one session and replays it in another; two ni-synch claims are matched by the same responder run, so injectivity fails.
The main issue to be clarified is the notion of a data variable, which we did not use
to define synchronisation. Since the values of the variables are determined by the
contents of the messages sent and received, we can reformulate the correspondence
between the variables as a requirement on the contents of the sent and received
messages. In a two-party protocol, it is clearly the case that if two parties agree on
the values of all variables, then they agree on the contents of all messages exchanged,
and vice versa.
Agreement in a multi-party protocol means that only the initiator and the responder
agree on their shared variables. There is no such requirement for the variables
maintained by the other roles in the protocol. In order to be able to provide an
intensional definition of agreement, we will have to extend the agreement relation to
all parties involved in the protocol. Therefore, we will require that upon completion
of the protocol all parties agree on all variables. This is somewhat stronger than
the extensional definition provided by Lowe, but for many multi-party protocols
this seems to be a natural extension. Summarising, we see that the agreement
requirement translates to the demand that corresponding sends and receives have
the same contents.
Given this interpretation of agreement, it is easy to see the correspondence with
synchronisation. Like agreement, synchronisation requires correspondence on the
contents of all messages, but in addition it requires that a message is sent before
it can be received. The definition of Lowe does not bother about this send/read
order. Thus, we arrive at the following definition of non-injective agreement, which
is adapted from Definition 3.8 on page 42 by removing the requirement that send
events occur before their corresponding read event.
Definition 3.13 (NI-AGREE). Given a protocol P with a claim protocol event γ,
non-injective agreement holds, notation NI-AGREE(P, γ), if
∀t ∈ traces(P) ∃Γ ∈ Cast(P, t)
∀c ∈ t : c = ((θ, ρ, σ), γ) ∧ rng(ρ) ⊆ Agent_T :
∀ς, ϱ : ς → ϱ ≼_P γ ∃s, r : s ≺_t c ∧ r ≺_t c :
roleevent(s) = ς ∧ runidof (s) = Γ(c)(role(ς)) ∧
roleevent(r) = ϱ ∧ runidof (r) = Γ(c)(role(ϱ)) ∧
cont(s) = cont(r)
The agreement predicate expresses that for all instantiated claims in any trace of a
given security protocol, there exist runs for the other roles in the protocol, such that
all communication events causally preceding the claim must have occurred before
the claim.
Injective agreement is defined in the same way as injective synchronisation is ob-
tained from non-injective synchronisation.
It expresses that for any trace and for any run of any role in the protocol there exist
unique runs for the other roles of the protocol such that for all claims occurring
in the trace all communications preceding the claim must have occurred correctly
within these runs.
The definition of I-AGREE does not involve all communications, but only the set
of events that causally precede a claim. However, it turns out that the way in
which agreement is made precise in terms of CSP, as can be checked by compiling
Casper code into CSP, also takes only preceding communications into account.
For this, running-commit signals (see [164]) are introduced in the protocol. For each
role, a running signal is added to the last communication preceding the agreement
claim. In the role that includes the claim, a commit signal is added to the last
communication. Injective agreement over all roles requires that the running signals
of each role precede the commit signal. This corresponds to the order requirements
of I -AGREE .
Figure 3.6: Hierarchy of authentication properties. I-SYNCH implies both NI-SYNCH and I-AGREE, and each of these implies NI-AGREE.
The question of whether the inclusions in Figure 3.6 are strict is harder to answer.
In part, this is due to the generic approach of our model. Since our approach is
parameterised over the intruder model, we cannot determine for a given protocol to
which class it belongs. Therefore, strictness of the inclusions can only be answered
relative to a given intruder model. Consequently, the following reasoning will be at
a conceptual level only.
If we take, e.g., a model where all agents simply follow their roles and the intruder
has no capabilities at all, then the diamond in Figure 3.6 collapses into a single
class. The same holds if the intruder can only eavesdrop on the communications.
However, in the Dolev-Yao model, all inclusions are strict, as we will see below.
The case of injectivity vs. non-injectivity has been studied extensively before. The
MSC in Figure 3.7 shows a protocol that satisfies NI-SYNCH and NI-AGREE,
but neither I-SYNCH nor I-AGREE.
Figure 3.7: A protocol that is NI-SYNCH and NI-AGREE, but not injective: a single message {| i, r |}sk(i), followed by an ni-synch claim.
The intruder will only be able to construct message {| i, r |}sk(i) after having eaves-
dropped this message from a previous run. Therefore every read event of this mes-
sage is preceded by a corresponding send event, so the protocol is both NI -SYNCH
and NI -AGREE . However, once the intruder has learnt this message, he can replay
it as often as desired, so the protocol is not injective.
3.4 Authentication hierarchy 49
Figure 3.8: A protocol exchanging the messages i, r; {| r, nr |}pk(i) (with nr a nonce of r); and {| nr, i |}pk(r), ending with an i-agree claim.
Example 3.16 (Predictable nonce). In the first example we consider the notion
of predictable nonces. There may be several reasons for such predictability, such as a
bad pseudo-random generator, or the fact that the nonce is implemented as a counter.
Consider, for instance, the protocol in Figure 3.9. The purpose of this protocol is
unilateral authentication of responder r towards initiator i. This is established by
using nonce ni as a challenge. However, if the value of this nonce is predictable by
the intruder, the protocol has a major shortcoming. This is shown in the trace in
Figure 3.10 on the facing page, which displays that the run of r has finished even
before the run of i started. The consequences of this preplay attack are similar to
many well-known replay attacks. The protocol satisfies (injective and non-injective)
agreement, but does not satisfy synchronisation (both variants).
Figure 3.9: Unilateral authentication with a predictable nonce. The initiator i sends the predictable nonce ni in the clear; the responder r, holding sk(r), replies with {| i, ni |}sk(r); the initiator claims ni-agree.
This type of preplay attack is also called a suppress-replay attack [91, 142]. As
pointed out by Chen and Mitchell [142], practical protocols such as the S/KEY user
authentication scheme suffer from this kind of attack because they use predictable
challenges. Roscoe [161] found a similar problem for the Needham-Schroeder Secret
Key protocol in the case that the initiator’s nonce is predictable.
Example 3.17 (Responder controls nonce). The second example concerns the
protocol in Figure 3.11 on page 52, in which we assume that an agent keeps a state
which is shared by all its instances of the protocol. The purpose of this protocol is
again unilateral authentication, but now the responder is in control of the nonce.
After receiving a request from the initiator, the responder sends his nonce to the
initiator. The initiator keeps a set Z in which he stores all nonces from previous
instantiations of the protocol. This is to prevent replay attacks and, thus, to ensure
injectivity. If the nonce is accepted as fresh, the initiator challenges the responder to
prove his identity, which the responder does by replying with the signed nonce. This may
seem a reasonable authentication protocol, and indeed it satisfies injective agreement.
Figure 3.10: Preplay attack on the protocol of Figure 3.9. The intruder predicts ni♯1 and completes an exchange with B (role r) beforehand, obtaining {| i, ni♯1 |}sk(B); later it replays ni♯1 and {| i, ni♯1 |}sk(B) towards A (role i).
However, the preplay attack shown in Figure 3.12 on page 53 indicates a weakness
of the protocol. An initiator can successfully execute his side of the protocol, while
the responder was not even alive when the initiator started the protocol. This ex-
ample shows that even a complex message interaction can be preplayed. Since the
protocol does not satisfy synchronisation, this weakness can be detected by verifying
synchronisation.
Two remarks apply to this example. The first remark is that testing whether the
nonce is already in Z and the extension of Z with the nonce should be implemented
as an atomic action (a test-and-set action) to achieve the desired result. The second
remark is that this protocol still has interesting properties when leaving out the
validation of nr’s freshness by the initiator (i.e. if we remove the set Z and its
operations). The resulting protocol is not injective anymore, since the intruder may
replay the responder behaviour of an earlier run of the protocol. However, this
reduced protocol still satisfies non-injective agreement. Since the displayed attack
remains possible, the protocol does not satisfy synchronisation. Thus, we have a
stateless protocol suffering from a preplay attack. It satisfies non-injective agreement
but it does not satisfy non-injective synchronisation.
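The first remark above can be made concrete: the freshness test on the nonce and the extension of Z must form one atomic test-and-set, or two concurrent runs could both accept the same nonce. A minimal sketch using a lock; the class and method names are illustrative, not part of the protocol model.

```python
# Sketch: atomic test-and-set for the initiator's nonce set Z. Without the
# lock, two runs could interleave between the membership test and the
# insertion, and both would accept the same (replayed) nonce.

import threading

class NonceStore:
    def __init__(self):
        self._z = set()
        self._lock = threading.Lock()

    def accept_fresh(self, nonce):
        """Atomically: if nonce not in Z, then Z := Z ∪ {nonce} and accept."""
        with self._lock:
            if nonce in self._z:
                return False
            self._z.add(nonce)
            return True

z = NonceStore()
```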
In the previous two examples we have seen how the intruder can preplay a complete
protocol session and use it later to fool the initiator into thinking that the responder
is still alive. In the third example, we see that it can already be harmful if only a
Figure 3.11: Responder-controlled nonce. The initiator i (keeping the set Z) sends hello; the responder r generates nonce nr and sends it; i checks nr ∉ Z and updates Z := Z ∪ {nr}; i then sends {| i, nr |}pk(r), and r replies with {| nr, i |}sk(r); the initiator claims ni-agree.
Figure 3.12: Preplay attack on the protocol of Figure 3.11. The intruder first runs a full session with responder B as initiator, obtaining nr♯2 and {| nr♯2, i |}sk(B); later, when A initiates, the intruder replays nr♯2 and {| nr♯2, i |}sk(B), so A completes while B is no longer active.
According to the observations of Roscoe [161], this sort of behaviour, while seemingly innocent,
was certainly unexpected by the protocol designer. After finding such unexpected
behaviour, the designer has two options. First, he may decide that this behaviour
is acceptable, but then he has to take the implications of this behaviour into ac-
count and extend his mental model of the protocol. He should make sure that this
behaviour does not interfere with any other analysis which is based on the intended
order of the protocol events. Still according to Roscoe, the second option is to
strengthen the protocol so as to make it compatible with the mental model again. As
pointed out by Roscoe, the TMN protocol, and a seemingly correct strengthening
thereof, suffer from the same weakness as the protocol in Figure 3.13 on the following
page.
We conclude by stating that failure of a protocol to respect synchronisation does not
always indicate an exploitable weakness of the protocol. However, such unexpected
behaviour should always receive extra attention and should at least lead to adjusting
the mental model of the protocol.
Figure 3.13: A protocol with messages {| i, pk(i) |}sk(s), {| r, nr |}pk(i) (with nr a nonce), and {| nr, i |}pk(r), ending with an ni-agree claim.
Several tools exist to verify whether a protocol satisfies some form of agreement,
e.g. [121, 7, 173]. Because synchronisation is very similar to agreement, we expect
that it will be feasible to adapt most of the verification tools to be able to handle
at least non-injective synchronisation.
In refinement or forward model-checking approaches, agreement is commonly ver-
ified by inserting running and commit or similar signals in the protocol [121, 7].
When somebody commits to some values, the other party needs to have emitted
a running signal. The commit signal corresponds to the claim in our framework,
whereas the running signal denotes the last communication of the other role that
causally precedes the claim. These signals are introduced to ease verification: in-
stead of having to inspect the trace leading up to the claim, only the set of emitted
signals needs to be inspected. In our framework, agreement is a property of the trace
prefix ending in a claim. By introducing running and commit signals, agreement
can be verified by inspecting the set of signals. In much the same way, it is possible
to verify synchronisation by introducing such signals for each communication that
precedes the claim.
In order to verify injective agreement, many tools rely on a counting argument:
if a trace prefix contains n commit signals, there should be at least n preceding
running signals in the trace prefix. If enough commit signals are considered in this
way, this should ensure injectivity. The main drawback of this method is that the
verification complexity is exponential in the length of the traces, and thus in the
number of running claims considered. Another approach to verifying injectivity uses
detailed knowledge of the data model: if the agreement includes data items that are
guaranteed to be unique for each instance of the claim, injectivity can be derived.
However, not all injective protocols include such unique data items. In some cases
it can also be non-trivial to establish the required uniqueness property, as we show
in the next section.
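One way to sketch the counting argument is as a single pass over the signal prefix: each commit must be justified by an earlier, not-yet-consumed running signal. The signal encoding is an illustrative assumption, not any particular tool's format.

```python
# Sketch: injectivity via counting. Every commit signal consumes one
# earlier running signal; a commit with no unconsumed running signal
# indicates a (replay-style) injectivity violation.

def injective_by_counting(signals):
    available = 0
    for kind in signals:
        if kind == "running":
            available += 1
        elif kind == "commit":
            if available == 0:
                return False
            available -= 1
    return True
```

Note that this pass is linear in one prefix; the exponential cost mentioned above comes from having to consider the many possible prefixes and matchings, not from the count itself.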
3.5 Verifying injective synchronisation

This section will focus on the verification of injectivity for protocols that satisfy
non-injective synchronisation. We propose and study a property, the LOOP property,
that can be syntactically verified. We prove a theorem that shows that LOOP is
sufficient to guarantee injectivity. Our result is generic in the sense that it holds
for a wide range of security protocol models, and does not depend on the details of
message content or nonce freshness.
The remainder of this section will proceed as follows. First we elaborate on the
difficulties posed by injectivity and informally describe our theorem. Next, we
define a class of intruder models for which our theorem holds. Then, in Section 3.5.2
we propose and study the LOOP property, and prove the main theorem.
As discussed in Section 3.3.2, protocols that satisfy NI-SYNCH can still be vulnerable
to so-called replay attacks. Looking back at Example 3.10 on page 43 in particular,
we saw that the unilateral authentication protocol from Figure 3.4 clearly does
not satisfy injectivity, as is shown by the replay attack in Figure 3.5. A simple fix
would be to have the initiator determine the value of the nonce, as in Figure 3.14.
Figure 3.14: A fixed unilateral authentication protocol. The initiator creates nonce ni
and sends it to the responder, who replies with {| i, ni |}sk(r) ; the initiator then
claims ni -synch.
The introduction of a causal chain of messages from the initiator to the responder
and back to the initiator seems to do the trick. We will call such a chain a loop.
The presence of such a loop plays a key role in the discussion on injectivity below.
It is folklore that a nonce handshake is sufficient to ensure injectivity. Here we
identify a more abstract property, viz. the occurrence of a loop, which is independent
of the data model, and thus applicable to a wide range of security protocol models.
To give an indication of the limitations of the data based approach, consider a
protocol where a nonce n is created, and some function is applied to it. The result
f (n) is sent to the responder, who applies another function and replies with g(f (n)).
Now, to check whether such a protocol can be injective based on the freshness of
n in a data-based model, we need to know some details of f and g. If for example
f (x) = x mod 2, the protocol will not be injective. Therefore, we propose to only
reason about loops, which does not require any information about the contents of
messages.
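The point about f (x) = x mod 2 can be made concrete: two distinct fresh nonces collide after f is applied, so freshness of n alone says nothing about injectivity. A toy illustration (g is an arbitrary responder function; its details do not matter):

```python
def f(x):
    # the function applied to the nonce before it is sent
    return x % 2

def g(y):
    # the responder's function applied to the received value
    return y + 1

n1, n2 = 1, 3   # two distinct fresh nonces
# the responder's replies coincide, so a reply produced for one run can
# be accepted in the other: injectivity fails despite the fresh nonces
assert n1 != n2
assert g(f(n1)) == g(f(n2))
```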
We show that in our model (but also in others, as detailed in [70]) the LOOP
property guarantees that a synchronising protocol is also injective. For this result
to hold, we require a specific property of the intruder model: informally, the
intruder must at least be able to duplicate messages. To be more precise, the
protocol model (including the intruder model)
must satisfy closure of the set of execution traces under swapping of events. This
class contains, e.g., the model presented in the previous chapter with a duplicating
(or stronger) intruder. Since the LOOP property can easily be verified by means of
static analysis of the security protocol description, we provide, in fact, a practical
syntactic criterion for verifying injectivity.
We extend the use of the · operator, and introduce the notation t = u1 · u2 to denote
that t is a concatenation of run event sequences u1 and u2.
For the remainder of this chapter, we assume that traces contain only role events,
and intruder events are left implicit.
Definition 3.18 (Swap). A security protocol semantics satisfies the SWAP property
if the following two conditions hold:
(i) The trace set traces(P ) is closed with respect to non-read swaps, i.e., for all
traces u1 · e · e′ · u2 ∈ traces(P ) such that roleevent(e′ ) is not a read event and
runidof (e) ≠ runidof (e′ ), it holds that u1 · e′ · e · u2 ∈ traces(P ).
(ii) The trace set traces(P ) is closed with respect to read swaps, i.e., for all traces
u1 · e · e′ · u2 ∈ traces(P ) such that roleevent(e′ ) is a read event, runidof (e) ≠
runidof (e′ ), and u1 contains a send event s with cont(s) = cont(e′ ), it holds that
u1 · e′ · e · u2 ∈ traces(P ).
These properties state that we can shift a non-read event to the left as long as it
does not cross any other events of the same role instance. For the read event we
have an additional constraint: we can only shift it to the left if there remains an
earlier send of the same message.
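The two side conditions can be sketched as a check on concrete traces. The event representation below (dicts with `kind`, `runid`, and `msg` fields) is our own, not the thesis model:

```python
def can_swap_left(trace, i):
    """May trace[i] be moved one position to the left?  Mirrors the two
    SWAP conditions: never cross an event of the same run instance; a
    read may only move if an earlier send of the same message remains."""
    if i == 0:
        return False
    prev, e = trace[i - 1], trace[i]
    if e["runid"] == prev["runid"]:
        return False                      # same role instance: no swap
    if e["kind"] != "read":
        return True                       # non-read events swap freely
    # a read needs an earlier send of the same message before position i-1
    return any(x["kind"] == "send" and x["msg"] == e["msg"]
               for x in trace[:i - 1])
```

Repeatedly applying such single swaps is what shifts an event leftwards in the lemmas below.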
Lemma 3.19 (Duplicating intruder implies SWAP). If the intruder is at least
able to duplicate messages, the SWAP property holds for our model.
Proof. The first condition, swapping non-reads, is related to the fact that indepen-
dent runs share no memory, and trivially holds based on the semantics. The second
condition, swapping reads, is met based on the duplication property. The precon-
dition of the read rule of the semantics (i. e. that the appropriate message is in the
read buffer) can be satisfied at any time by a duplicating intruder, once the message
has been sent once.
For the remainder of this section we assume the protocol P contains a claim γ.
We introduce a predicate χ and a set χ0 for protocols that satisfy non-injective
agreement for this claim γ. Given a trace t, a claim event c, and a cast Γ that maps
the roles to runs, we express the auxiliary predicate χ on the domain traces(P ) ×
Cast(P, t) × RunEvent by
χ(t, Γ, c) ⇐⇒ roleevent(c) = γ ∧
∀ς, ϱ : ς ; ϱ ∧ ϱ ≼P γ ⇒ ∃s, r : s <t r <t c ∧
roleevent(s) = ς ∧ runidof (s) = Γ(c)(role(ς)) ∧
roleevent(r) = ϱ ∧ runidof (r) = Γ(c)(role(ϱ)) ∧
cont(s) = cont(r)
The first conjunct of this predicate expresses the fact that the run executing the
claim role is determined by the parameter c. The second conjunct expresses that
in the trace t, the claim c is valid with respect to the specific cast Γ, i. e. that
the partners have executed all communications as expected. In the formula this
is expressed by the fact that send and read events are executed by the expected
runs, viz. Γ(c)(role(roleevent(s))) and Γ(c)(role(roleevent(r))), respectively, with
identical contents, and in the right order.
Given a valid synchronisation claim c in a trace t, there exists a role instantiation
function Γ such that χ(t, Γ, c) holds. The predicate χ tells us that certain events
exist in the trace. Because we want to reason about these events in the following,
we decide to make this set of events explicit. We define the set of events χ0 (t, Γ, c)
by
χ0 (t, Γ, c) = { e ∈ t | runidof (e) = Γ(c)(role(e)) ∧ roleevent(e) ≼P roleevent(c) }
Assuming that χ holds, its set of events χ0 has two interesting properties. If there
is a read in this set, there is also a matching send in the set. Furthermore, given an
event of a role in the set, all preceding events of the same role are also in the set.
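For a concrete trace, computing χ0 is a simple filter. The representations here (events as dicts, the cast Γ(c) as a role-to-runid map, the role event order as an explicit pair set) are assumptions of this sketch:

```python
def chi0(trace, gamma, claim, precedes):
    """χ0(t, Γ, c): the events executed by the runs that Γ(c) assigns to
    their roles, whose role event precedes (or is) the claim's role event.

    `gamma` maps roles to run identifiers; `precedes` is the role event
    order ≼P given as a set of (roleevent, roleevent) pairs."""
    return [e for e in trace
            if gamma.get(e["role"]) == e["runid"]
            and (e["roleevent"], claim["roleevent"]) in precedes]
```

The two properties noted above (matching sends for reads, prefix-closure per role) are properties of this set whenever χ holds; they are not enforced by the filter itself.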
58 Chapter 3 Security Properties
To prove our main result, the SWAP property from Definition 3.18 on page 56
suffices. However, to ease the explanation of the proof, we introduce two additional
lemmas. These lemmas are implied by the model and the two swap conditions.
The first lemma generalises the swapping of two events to the swapping of a set of
events. The lemma does not hold for arbitrary sets of events: we use the set of
events defined by χ, which are involved in a synchronisation claim. Based
on the two swap properties, we can shift these events (in their original order) to the
beginning of the trace. This trace transformation function shift : P(TE ) × TE ∗ →
TE ∗ is defined by
shift(E, t) = t                           if t ∩ E = ∅
shift(E, t) = e · shift(E, u1 · u2)       if t = u1 · e · u2 ∧ u1 ∩ E = ∅ ∧ e ∈ E
Here, the intersection of a trace and a set yields the collection of elements of the
set occurring in the trace. This function effectively reorders a trace. The next
lemma formulates conditions assuring that the reordering of a trace in traces(P ) is
in traces(P ) as well.
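The definition of shift transcribes directly into a recursive function; a sketch over Python lists:

```python
def shift(E, t):
    """shift(E, t) as defined above: if no event of E occurs in t, return
    t; otherwise split t = u1 · e · u2 with u1 ∩ E = ∅ and e ∈ E, and put
    e in front of shift(E, u1 · u2)."""
    if not any(e in E for e in t):
        return list(t)
    i = next(k for k, e in enumerate(t) if e in E)   # first event of E
    return [t[i]] + shift(E, t[:i] + t[i + 1:])
```

So shift({b, d}, a·b·c·d) = b·d·a·c: a stable partition that moves the events of E (in their original order) to the front, keeping the relative order of the rest.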
Lemma 3.20. Given a protocol P and a trace t ∈ traces(P ), claim event c and role
instantiation function Γ:
χ(t, Γ, c) ∧ t′ = shift(χ0 (t, Γ, c), t) ⇒ t′ ∈ traces(P ) ∧ χ(t′ , Γ, c)
Proof. Induction on the size of the finite set χ0 (t, Γ, c), because χ(t, Γ, c) implies
that the read events can be swapped. Recall that, by convention, each event occurs
at most once in a trace.
The lemma directly generalises to more claim instances (of the same claim). Thus,
instead of a single claim run, we can consider sets of claim runs.
Lemma 3.21. Given a trace t, a set of claim events C ⊆ t and cast Γ ∈ Cast(P, t):
(∀c ∈ C : χ(t, Γ, c)) ∧ t′ = shift(⋃c∈C χ0 (t, Γ, c), t) ⇒
t′ ∈ traces(P ) ∧ (∀c ∈ C : χ(t′ , Γ, c))
If we apply the shift function to a trace of the system, and the conditions of the
lemma are met, we get a reordered trace that is also in traces(P ). The new trace
consists of two segments: in the first segment there are only the preceding events of
the claim events in C , and all other events are in the second segment.
Intuitively, these lemmas express that the events involved in a valid synchronisation
claim are independent of the other events in the trace. A valid synchronisation can
occur at any point in the trace, because it does not require the involvement of other
runs, or of the intruder. However, other events in the trace might depend on events
involved in the synchronisation.
We define a property of protocols, which we call the LOOP property. For protocols
with only two roles, it resembles a ping-pong property. First the claim role executes
an event, then the other role, and then the claim role again. For example, the LOOP
property does not hold for the protocol in Figure 3.5 on page 45, but it does hold
for the protocols in Figures 3.14 on page 55 and 2.1 on page 12.
We generalise this for multi-party protocols with any number of roles. We require
that the partner roles have an event that must occur after the start of the claim
run, but before the claim event itself.
Definition 3.22 (LOOP ). A security protocol P has the LOOP property with
respect to a claim γ if
∀ε : ε ≼P γ ∧ role(ε) ≠ role(γ) ⇒
∃ε′ , ε″ : ε′ ≼P ε″ ≼P γ ∧ role(ε′ ) = role(γ) ∧ role(ε″ ) = role(ε) (3.3)
The property tells us that for each role that has an event ε that precedes the claim
γ, there exists a loop from the claim role to that role and back. This structure is
identified in the formula by ε′ ≼P ε″ ≼P γ.
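Since the role event order is finite, Definition 3.22 can be checked directly. A sketch over an explicit precedence relation; the representation (events as names, ≼P as a reflexive, transitive pair set) is ours:

```python
def loop_property(events, role, precedes, gamma):
    """LOOP(P, γ): every event e ≼P γ of a role other than role(γ) must be
    matched by events e1 ≼P e2 ≼P γ with role(e1) = role(γ) and
    role(e2) = role(e), i.e. a loop from the claim role to e's role and back."""
    for e in events:
        if (e, gamma) in precedes and role[e] != role[gamma]:
            ok = any((e1, e2) in precedes and (e2, gamma) in precedes
                     and role[e1] == role[gamma] and role[e2] == role[e]
                     for e1 in events for e2 in events)
            if not ok:
                return False
    return True
```

On the nonce-handshake shape of Figure 3.14 (i sends, r reads, r sends, i reads, i claims) this returns True; on the one-message protocol shape of Figure 3.4 it returns False.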
Lemma 3.23. Given a security protocol P with a claim γ: if every role R ≠ role(γ)
that has events preceding γ starts with a read event, then LOOP (P, γ) holds.
Proof. The proof of this lemma follows from the definition of the protocol order ≼P .
Let P be a protocol with a claim γ. Let R ≠ role(γ) be a role with an event ε that
precedes the claim. By the precondition of the lemma, R starts with a read,
and thus there must be a preceding event with the same label on a different role R′ .
Suppose R′ = role(γ): in this case we have established a loop. On the other hand,
if R′ ≠ role(γ), this role must in turn have a preceding event
on another role. As the set of role events is finite, we ultimately end up at an event
of the claiming role. Thus we can conclude that LOOP (P, γ) holds.
In practice, this lemma tells us that the LOOP property always holds for the initi-
ating role of a protocol. Thus, we only have to check whether the LOOP property
holds for responder roles.
Now we can state a theorem, which provides a syntactic condition for the injectivity
of a protocol that synchronises.
Theorem 3.24 (LOOP). Let P be a protocol with claim event γ. Then we have
that
NI -SYNCH (P, γ) ∧ LOOP (P, γ) ⇒ I -SYNCH (P, γ)
Proof. By contradiction. Assume that the implication does not hold. Thus we have
NI -SYNCH (P, γ) ∧ LOOP (P, γ) ∧ ¬I -SYNCH (P, γ). (3.4)
The remainder of the proof is done in two steps. The first step of the proof establishes
a trace t of the protocol, in which there are two runs that synchronise with the same
run. In the second step we use the shifting lemmas to transform t into another trace
of the protocol. For this new trace, we will show that NI -SYNCH cannot hold,
which contradicts the assumptions.
From now on, we will omit the type information for t and Γ in the quantifiers and
assume that t ∈ traces(P ).
Given that the protocol synchronises, but is not injective, we derive from Definitions 3.8
on page 42 and 3.11 and formula (3.4) that
∀t ∃Γ ∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c) ∧
¬∀t ∃Γ injective ∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c) (3.5)
Pushing the negation inwards gives
∀t ∃Γ ∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c) ∧
∃t ∀Γ : ¬(Γ injective ∧ ∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c)). (3.6)
Instantiating both conjuncts for such a t and a witness Γ yields
∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c) ∧
¬(Γ injective ∧ ∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c)). (3.7)
Note that in (3.7) the left conjunct also occurs as a sub-formula in the right conjunct.
Rewriting yields
(∀c ∈ t : roleevent(c) = γ ⇒ χ(t, Γ, c)) ∧ ¬(Γ injective). (3.8)
Making the non-injectivity of the function Γ explicit as explained in Definition 3.5
on page 38, there must exist two claim events, for which χ holds:
∃R1, R2, c1, c2 : χ(t, Γ, c1) ∧ χ(t, Γ, c2) ∧ Γ(c1)(R1) = Γ(c2)(R2) ∧ (c1, R1) ≠ (c2, R2) (3.9)
From the predicate χ and formula (3.9), we have that the run Γ(c1)(R1) must be
executing the role R1. Because Γ(c1)(R1) = Γ(c2)(R2) it is also executing role R2.
Runs only execute a single role, and thus we derive that R1 = R2. The fourth
conjunct now reduces to c1 6= c2.
Put R = R1 = R2. We choose two claim events c1, c2 such that Formula (3.9) holds
for R. Now there exists a run identifier θ such that
Γ(c1)(R) = Γ(c2)(R) = θ
Applying Lemma 3.21 to the set C = {c1, c2}, we obtain the trace
t′ = shift(χ0 (t, Γ, c1) ∪ χ0 (t, Γ, c2), t), which is again a trace of the protocol.
In the trace t′ we now have two distinct segments. All events involved with the
synchronisation of c1 and c2 are now in the initial segment of t′ . This includes
the events of θ that precede the claim. The second segment of t′ contains all other
events, which are not involved in the preceding events of c1 and c2.
We will now reorder the initial segment of t′ . To this end, we apply the shift function
a second time, now only for c1. This again yields a trace of the protocol, because
the conditions of Lemma 3.21 on page 58 hold for t′ : the application of shift to t
maintained the order of the events in the shifted set, which implies that χ(t′ , Γ, c1)
holds. Thus, we also know that the trace t″ is an element of traces(P ), where
t″ = shift(χ0 (t′ , Γ, c1), t′ ).
Because the shift function maintains the order of the involved events, we have that
t″ = u1 · u2 · u3, where u1 contains the events involved in the synchronisation of c1,
and u2 contains the remaining events involved in the synchronisation of c2.
All events that are not involved with the synchronisation claims c1 and c2, are in
u3.
Observe that u1 includes all events of θ that are involved with the claim c1. As
all events are unique, these are not part of u2. From the construction of the
involved events set, we know that all involved events of role R are also in θ, because
all other role instances are executing other roles (as indicated by Γ). This implies
that there are no events of role R in the u2 segment at all: these are all in u1.
Now we have arrived at a contradiction: t″ is in the set traces(P ). The LOOP property
combined with NI -SYNCH requires that for each role, there is an event after the
first event of the claim role that occurs before the claim. For the run of c2 all events
are in u2 (including the start and the claim), but in this segment there is no event of
role R. Thus, there can be no Γ for t″ such that χ(t″ , Γ, c2) holds. This implies that
NI -SYNCH does not hold for the protocol, which contradicts the assumptions.
In this section we will take a closer look at the Needham-Schroeder protocol from
Figure 2.1 on page 12 and illustrate our definitions. The protocol goal is to ensure
mutual authentication and, as a side effect, secrecy of the involved nonces. The
starting point of the protocol is a public key infrastructure. This is depicted by the
initial knowledge above each of the roles in the protocol. The initiator starts the protocol
by sending an encrypted initialisation request to the responder. The nonce is used
to prevent replay attacks. Only the responder is able to unpack this message and
replies by sending the initiator’s nonce together with his own fresh nonce. Then the
initiator proves his identity by replying with the responder’s nonce.
The man-in-the-middle attack in Figure 3.3 on page 41 only requires two runs. One
of trusted agent A performing the initiator role in a session with untrusted agent E
and one of trusted agent B performing the responder role in a session with agent A.
The intruder impersonates both E and A and in this way uses A as an oracle to
unpack messages from B. In the end he has fooled B into thinking that he is talking
to A, while B is in fact talking to the intruder.
Knowing this attack, it is straightforward to reconstruct it formally with our se-
mantics. Our experience shows that when trying to prove a flawed protocol correct,
the way in which the proof fails often indicates the attack. Rather than showing
the details here, we will prove correctness of the fixed Needham-Schroeder protocol,
which is called the Needham-Schroeder-Lowe protocol. The protocol is hardened by
extending message 2 with the responder name. It is specified as follows.
nsl(i) = ( {i, r, ni, pk(r), pk(i), sk(i)},
           send 1 (i, r, {| i, ni |}pk(r) ) ·
           read 2 (r, i, {| ni, V, r |}pk(i) ) ·
           send 3 (i, r, {| V |}pk(r) ) ·
           claim 4 (i, secret, ni) ·
           claim 5 (i, secret, V ) ·
           claim 6 (i, nisynch) )

nsl(r) = ( {i, r, nr, pk(i), pk(r), sk(r)},
           read 1 (i, r, {| i, W |}pk(r) ) ·
           send 2 (r, i, {| W, nr, r |}pk(i) ) ·
           read 3 (i, r, {| nr |}pk(r) ) ·
           claim 7 (r, secret, nr) ·
           claim 8 (r, secret, W ) ·
           claim 9 (r, nisynch) )
3.6 Proving security properties of the NS/NSL protocol 63
We assume that there are no trusted roles. For this protocol, the initial intruder
knowledge (cf. Definition 2.44) is given by
M0 = IntruderConst ∪ ⋃a∈Agent {a, pk(a)} ∪ ⋃e∈Agent U {sk(e)}
First we introduce some notation and present results which support verification.
We define msgs(P ) as the set of all role messages sent in the protocol. The first
lemma helps to infer that secret information which is never transmitted remains
secret forever.
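To illustrate, the NSL role specifications above can be encoded as plain data, from which msgs(P ) is a one-line comprehension. The tuple term representation (("enc", payload, key) for {| payload |}key) is our own:

```python
# A hypothetical encoding of the nsl roles as (kind, label, payload) events.
NSL = {
    "i": [("send", 1, ("enc", ("i", "ni"), "pk(r)")),
          ("read", 2, ("enc", ("ni", "V", "r"), "pk(i)")),
          ("send", 3, ("enc", ("V",), "pk(r)")),
          ("claim", 4, ("secret", "ni")),
          ("claim", 5, ("secret", "V")),
          ("claim", 6, ("nisynch",))],
    "r": [("read", 1, ("enc", ("i", "W"), "pk(r)")),
          ("send", 2, ("enc", ("W", "nr", "r"), "pk(i)")),
          ("read", 3, ("enc", ("nr",), "pk(r)")),
          ("claim", 7, ("secret", "nr")),
          ("claim", 8, ("secret", "W")),
          ("claim", 9, ("nisynch",))],
}

def msgs(protocol):
    """msgs(P): the set of all role messages sent in the protocol."""
    return {payload for events in protocol.values()
            for kind, label, payload in events if kind == "send"}
```

For NSL this yields the three encrypted messages of labels 1, 2, and 3.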
Lemma 3.25. Let P be a protocol, inst an instantiation and t a term that is not a
tuple or an encryption. If t is not a subterm of any message that is ever sent, and
hinsti(t) is not a subterm of the initial intruder knowledge, then hinsti(t) will never
be known by the intruder.
The correctness of this lemma follows from Table 2.3 by observing that every run is
“peeled off” from the beginning, while taking into account that the Match predicate
is defined such that it only extends the valuation of the variables.
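The check behind Lemma 3.25 amounts to a syntactic subterm test over the sent messages. A sketch, again with our tuple terms rather than the thesis term algebra:

```python
def subterm(t, m):
    """Is t a syntactic subterm of m?  Terms are atoms (strings) or
    tuples such as ("enc", payload, key)."""
    if t == m:
        return True
    return isinstance(m, tuple) and any(subterm(t, x) for x in m)

def stays_secret(t, sent_msgs, initial_knowledge):
    """Lemma 3.25 (sketch): a term that is not a tuple or encryption,
    never occurs in any sent message, and is not part of the initial
    intruder knowledge, is never learnt by the intruder."""
    return (not any(subterm(t, m) for m in sent_msgs)
            and not any(subterm(t, m) for m in initial_knowledge))
```

For NSL, a secret key sk(r) of a trusted agent passes this test, while a nonce such as ni does not (it occurs under an encryption, so the lemma is simply not applicable to it).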
The next lemma is used to infer from an encrypted message reception that the mes-
sage must have been sent by an agent if it contains a component which is not known
to the intruder. In most applications of this lemma we can infer ℓ′ by inspection
of the role specification, and we have hinsti({| m |}k ) = hinst′i(m′ ), rather than a
subterm relation.
Lemma 3.27. Let α be a trace and let i be an index of α. If
αi = (inst, read ℓ (x, y, {| m |}k )) and M0 ⊬ hinsti({| m |}k ), and Miα ⊬ hinsti(m),
then there exists an index j < i such that αj = (inst′ , send ℓ′ (x′ , y ′ , m′ )) and
hinsti({| m |}k ) ⊑ hinst′i(m′ ).
The correctness of this lemma follows from the fact that if the intruder does not
know m when the message containing {| m |}k is read, he could not have constructed
the encryption. Thus, it must have been sent as a subterm earlier.
The final lemma is characteristic for our model. It expresses that when two instan-
tiations of a constant (such as a nonce or session key) are equal, they were created
in the same run.
Proof. We will sketch the proofs for claim 7 and claim 9 . The other claims are proven
analogously.
First observe that the intruder will never learn secret keys of trusted agents. This
follows directly from Lemma 3.25, since none of the messages contain an encryption
key in the message text. Since the set of keys known to the intruder is constant, it
must be the case that if the intruder learns a basic term he learns it from unpacking
an intercepted message which was encrypted with the key of an untrusted agent.
Proof of claim 7 . In order to prove claim 7 we assume that α is a trace with index
r7, such that αr7 = ((θr7 , ρr7 , σr7 ), claim 7 (r, secret, nr)) and rng(ρr7 ) ⊆ Agent T .
Now we assume that the intruder learns nr and we will derive a contradiction.
Let k be the smallest index such that hθr7 , ρr7 , σr7 i(nr) ∈ Mk+1 , and thus
hθr7 , ρr7 , σr7 i(nr) ∉ Mk . Inspection of the derivation rules reveals that this increase
in knowledge is due to an application of the send rule, followed by an application
of the deflect rule. Therefore, there must be a smallest index p < k such that
αp = ((θ′ , ρ′ , σ′ ), send ℓ (m)) and hθr7 , ρr7 , σr7 i(nr) ⊑ hθ′ , ρ′ , σ′ i(m). Since we have
three possible send events in the NSL protocol, we have three cases: ℓ = 1, 2, or 3.
[ℓ = 1] In the first case we have αp = ((θ′ , ρ′ , σ′ ), send 1 (i, r, {| i, ni |}pk(r) )). Since
the constants i and ni both differ from nr, the intruder cannot learn
hθr7 , ρr7 , σr7 i(nr) from hθ′ , ρ′ , σ′ i(i, r, {| i, ni |}pk(r) ), which yields a contradiction.
[ℓ = 2] In the second case αp = ((θ′ , ρ′ , σ′ ), send 2 (r, i, {| W, nr, r |}pk(i) )). The
intruder can only learn nr if ρ′ (i) is an untrusted agent and either
hθr7 , ρr7 , σr7 i(nr) = hθ′ , ρ′ , σ′ i(W ) or hθr7 , ρr7 , σr7 i(nr) = hθ′ , ρ′ , σ′ i(nr). We
discuss both options separately.
(i) For the former equality we derive that hθ′ , ρ′ , σ′ i(W ) ∉ Mp , so we can apply
Lemmas 3.26 and 3.27 to find i1 with αi1 = ((θi1 , ρi1 , σi1 ), send 1 (i, r, {| i, ni |}pk(r) )).
This gives hθi1 , ρi1 , σi1 i(ni) = hθ′ , ρ′ , σ′ i(W ) = hθr7 , ρr7 , σr7 i(nr), which cannot be
the case since ni and nr are distinct constants.
(ii) That the latter equality yields a contradiction is easy to show. Using Lemma 3.28
we derive θr7 = θ′ and since run identifiers are unique, we have ρr7 = ρ′ . So
ρr7 (i) = ρ′ (i), which contradicts the assumption that ρr7 (i) is a trusted agent.
[ℓ = 3] In the third case we have αp = ((θ′ , ρ′ , σ′ ), send 3 (i, r, {| V |}pk(r) )). In
order to learn hθr7 , ρr7 , σr7 i(nr) from hθ′ , ρ′ , σ′ i(i, r, {| V |}pk(r) ) we must have that
hθ′ , ρ′ , σ′ i(V ) = hθr7 , ρr7 , σr7 i(nr) and that ρ′ (r) is an untrusted agent. Using
Lemma 3.26 we find index i2 such that αi2 = ((θ′ , ρ′ , σ′ ), read 2 (r, i, {| ni, V, r |}pk(i) )).
Because hθ′ , ρ′ , σ′ i(V ) ∉ Mp we can apply Lemma 3.27 to find index r2 with
αr2 = ((θr2 , ρr2 , σr2 ), send 2 (r, i, {| W, nr, r |}pk(i) )). This gives ρ′ (r) = ρr2 (r). (†)
Next, we derive hθr2 , ρr2 , σr2 i(nr) = hθ′ , ρ′ , σ′ i(V ) = hθr7 , ρr7 , σr7 i(nr). Applying
Lemma 3.28 yields θr2 = θr7 and thus ρr2 = ρr7 , so ρ′ (r) = ρr2 (r) = ρr7 (r). Because
ρ′ (r) is an untrusted agent while ρr7 (r) is trusted, we obtain a contradiction. This
finishes the proof of claim 7 .
Note †: The step in the proof marked with † fails for the Needham-Schroeder
protocol, which gives an indication of why the hardening of the second message
exchange is required.
Proof of claim 9 . Let α ∈ Trace(nsl) be a trace of the system. Suppose that for
some r9 and (θr , ρr , σr9 ) ∈ Inst, with rng(ρr ) ⊆ Agent T , we have
αr9 = ((θr , ρr , σr9 ), claim 9 (r, nisynch)) . In order to prove this synchronisation
claim correct, we must find a run executing the initiator role which synchronises
on the events labeled 1, 2, and 3, since prec(nsl, 9) = {1, 2, 3}. By applying
Lemma 3.26, we find r1, r2, r3 (0 ≤ r1 < r2 < r3 < r9) and σr1 ⊆ σr2 ⊆ σr3 ⊆ σr9 ,
such that
We have already proved that nr remains secret, so we can apply Lemma 3.27 and
find index i3 and (θi , ρi , σi3 ) such that i3 < r3 and
αi3 = ((θi , ρi , σi3 ), send 3 (i, r, {| V |}pk(r) )) ∧ hθr , ρr , σr3 i(nr) = hθi , ρi , σi3 i(V ). By
applying Lemma 3.26 we obtain i1 < i2 < i3 such that
Now that we have found that run θi is a candidate, we only have to prove that
it synchronises with run θr . To this end, we have to establish r2 < i2, i1 < r1 and
that the corresponding send and read events match each other.
First, we observe αi2 . Since hθr , ρr , σr3 i(nr) is secret, hθi , ρi , σi2 i(V ) is secret too
and we can apply Lemma 3.27, obtaining index r2′ < i2 such that
αr2′ = ((θr′ , ρr′ , σr2′ ), send 2 (r, i, {| W, nr, r |}pk(i) )) such that we have
hθi , ρi , σi2 i({| ni, V, r |}pk(i) ) = hθr′ , ρr′ , σr2′ i({| W, nr, r |}pk(i) ). This implies that we
have hθr , ρr , σr3 i(nr) = hθi , ρi , σi3 i(V ) = hθr′ , ρr′ , σr2′ i(nr), so from Lemma 3.28 we
have θr = θr′ , and thus r2 = r2′ . This establishes synchronisation of events αi2 and
αr2 .
Next, we look at αr1 . Because hθr , ρr , σr1 i(W ) is secret (cf. claim 8 ), we can apply
Lemma 3.27, which gives index i1′ < r1 such that
αi1′ = ((θi′ , ρi′ , σi1′ ), send 1 (i, r, {| i, ni |}pk(r) )) and hθr , ρr , σr1 i({| i, W |}pk(r) ) =
hθi′ , ρi′ , σi1′ i({| i, ni |}pk(r) ). Correspondence of αi2 and αr2 gives
hθi , ρi , σi2 i(ni) = hθr , ρr , σr2 i(W ) = hθr , ρr , σr1 i(W ) = hθi′ , ρi′ , σi1′ i(ni). By
Lemma 3.28, θi and θi′ are equal, which establishes synchronisation of events αr1
and αi1 . This finishes the synchronisation proof of claim 9 .
The proof method will form the outline of the verification method described in
Chapter 4. Given the existence of certain events, we draw conclusions (directly
based on the semantics) about previous events, branching on the options as they arise.
The theorems used are generic and apply to all protocols. In contrast, e. g., Paulson’s
inductive proofs [152] require the establishment of specific theorems (e. g. unicity,
secrecy) for each protocol.
3.7 Conclusions
Our injectivity result relies on two assumptions about the protocol and intruder
model:
(i) Multiple instances of the protocol are truly independent. They do not share
variables, memory, or time.
(ii) The intruder has the ability to duplicate messages, as holds, for example, in
the standard Dolev-Yao intruder model.
The question arises whether the theorem also holds in an intruder-less model. This is
in fact the case, but of less interest, because injectivity always holds for synchronising
or agreeing protocols when there is no intruder.
Automated verification of the LOOP property can be implemented easily. The
algorithm is an instance of the reachability problem in a finite acyclic graph, and
therefore has linear complexity.
Almost all correct authentication protocols in the literature satisfy NI -SYNCH as
well as LOOP . It seems that LOOP is a necessary condition for injectivity, in
particular for the Dolev-Yao intruder model. However, for peculiar intruder models,
LOOP is not a necessary condition for injectivity.
In the models where LOOP is also a necessary condition for injectivity, our results
imply a minimum number of messages in a multi-party authentication protocol. We
will investigate this in future work.
The LOOP property guarantees injectivity for synchronising protocols. This raises
the question whether LOOP also guarantees injectivity for agreeing protocols; the
protocol in Figure 3.15 shows that it does not.

Figure 3.15: A unilateral agreement protocol with LOOP , but not injective. The
initiator i sends his identity to the responder r, who creates a nonce nr and replies
with {| i, nr |}sk(r) ; the initiator then claims ni -agree.
Verification
In the previous two chapters we introduced a model for security protocols and various
security properties. In this chapter we address verification of security properties,
and develop an automated verification method for security properties defined in our
model.
The verification procedure developed here is based on verification of security prop-
erties by analysing patterns. Instead of analysing all individual traces, analysis
involves classes of traces, defined by trace patterns. Trace patterns are defined by
partially ordered, symbolic sets of events. We start off by explaining trace patterns
in Section 4.1.
Trace patterns allow us to capture the class of all attack traces for a given protocol
and security property: in other words, all attacks on a protocol property must
exhibit a particular pattern, induced by the security property. If there exist traces
of a protocol that exhibit a certain attack pattern, each such trace violates the
security property, and we can conclude that the security property is false. If no
trace of the protocol exhibits the attack pattern, there exists no attack, and the
property is true.
In order to verify a security property of a given protocol, we want to determine
whether or not the attack pattern occurs in any trace. This is where the verification
procedure comes into the picture. In Section 4.2 we develop an iterative algorithm
that given a protocol and a trace pattern, decides whether or not traces of the
protocol exist in which the pattern occurs. There are two possible results of the
algorithm: (1) no such traces exist in the protocol, or (2) there exist traces of the
protocol that include the pattern. In this case, the algorithm returns a special kind
of pattern, called explicit trace patterns, which capture all the possible ways in
which the original pattern can occur in traces of the protocol.
The developed algorithm is not guaranteed to terminate. We address this issue in
Section 4.2.5, and present a modified version of the algorithm that is guaranteed to
terminate, by bounding the number of runs. This bounded version of the algorithm
has four possible outcomes: two as in the unbounded version, and two bounded
variants. The two new outcomes are (3) there exist no traces of the protocol that
include the pattern within the bound, or (4) there exist traces that include the
pattern within the bound (for which explicit trace patterns are produced), but there
might be other ways in which the pattern can occur. Even though a bound is introduced,
70 Chapter 4 Verification
the algorithm can decide security properties for the majority of protocols, for any
number of runs.
In Section 4.3 we show how this algorithm can be used to effectively verify security
properties. In particular, we sketch how the algorithm can be used to verify secrecy
and authentication properties such as synchronisation.
In Section 4.4 we discuss a prototype implementation of the verification method
called Scyther and perform empirical analysis using the prototype. We investigate
the influence of the heuristics of the algorithm, and the choice of the bound, for a
large set of protocols.
We finish this chapter in Section 4.5 by drawing some conclusions.
The algorithm we develop in the next section is based on reasoning about classes
of traces, as opposed to individual traces. We therefore start off by defining trace
patterns, which will be used to represent classes of traces.
The difference between this definition and the old one is that the last two rules
are new. Symbolic instantiation does not require that ρ and σ are defined for all
roles and variables that occur in the term. We have added two clauses to cover
these cases. Thus, throughout this chapter we will use hinsti(rt) to denote symbolic
instantiation. Correspondingly, the set of run terms is extended with basic run terms
of the form R]θ (for uninstantiated roles) and V ]θ (for uninstantiated variables).
The upshot of this is that we can now have run term patterns: a run term (A, R]θ)
effectively represents a class of run terms, of which (A, A), (A, B) etc., are members.
We extend this to instantiating role events, yielding instantiated role events.
Definition 4.2 (Instantiated role events). For all instantiations (θ, ρ, σ), role
events ε, role terms rt and labels `, we define instantiated role events as
h(θ, ρ, σ)i(ε) = send ℓ (h(θ, ρ, σ)i(rt))]θ     if ε = send ℓ (rt)
h(θ, ρ, σ)i(ε) = read ℓ (h(θ, ρ, σ)i(rt))]θ     if ε = read ℓ (rt)
h(θ, ρ, σ)i(ε) = claim ℓ (h(θ, ρ, σ)i(rt))]θ    if ε = claim ℓ (rt)
The set of trace pattern events PatternEvent contains instantiated role events:
PatternEvent ⊇ { hinsti(ε) | inst ∈ Inst, ε ∈ RoleEvent }
Next we define the concrete traces that match the pattern, using the pattern
instantiation function.
In other words: a concrete trace exhibits the pattern, if there exists an instantiation
function f , such that the instantiations of all pattern events occur in the trace, in
the order prescribed by the pattern.
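For the special case of a totally ordered, fully instantiated pattern, the instantiation function adds nothing, and exhibiting the pattern reduces to a subsequence test. A sketch (the general definition also permits instantiating uninstantiated roles and variables):

```python
def exhibits(trace, pattern):
    """Does the concrete trace exhibit the (totally ordered, fully
    instantiated) pattern?  True iff the pattern events occur in the
    trace, in the prescribed order, i.e. as a subsequence."""
    it = iter(trace)
    # `p in it` advances the iterator, so order is enforced implicitly
    return all(p in it for p in pattern)
```

For partially ordered patterns one would instead search for an order-preserving injection of the pattern events into the trace.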
traces(P, (TPE , →)) = traces(P ) ∩ traces((TPE , →))
with
e = claim 5 (r]1, ni -synch)]1
The traces of the Needham-Schroeder protocol that exhibit this pattern, include the
attack on the protocol, and the traces that we consider to be the expected execution
of the protocol.
Not all trace patterns can occur in the traces of a protocol, as the following example
shows.
4.1 Trace patterns 73
Example 4.9 (Trace pattern). Let P be a protocol with two events ε1, ε2 such
that ε1 ≺r ε2. Let e1 = hinsti(ε1) and e2 = hinsti(ε2) for some instantiation
function inst. Then we have that
This pattern does not occur in the traces of any protocol, because according to the
operational semantics, the events of a single run of any protocol conform to the
protocol role order.
Definition 4.10 (Trace pattern refinement). Let TP and TP ′ be trace patterns.
We write TP ′ v TP , and say that TP ′ is a refinement of TP , if and only if
traces(TP ′ ) ⊆ traces(TP )
A first form of refinement is the addition of an event to a trace pattern. Let
(TPE , →) be a trace pattern and let e be a trace pattern event. Then we have that
(TPE ∪ {e}, →) v (TPE , →)
A second form of refinement is adding an edge to a trace pattern. Let (TPE , →) be
a trace pattern, let {e1, e2} ⊆ TPE , and rt a term. Then we have that
(TPE , → ∪ {e1 →rt e2}) v (TPE , →)
A third form is substitution of uninstantiated variables. Let (TPE , →) be a trace
pattern, let V ]θ be an uninstantiated variable occurring in TPE , and let rt be a
term. Then we have that
(TPE , →)[rt/V ]θ] v (TPE , →)
The resulting trace set allows less choice: where we first had the option of choosing
any value for the variable V♯θ in concrete traces, we now fix the choice to a
particular term rt.
That the above three operations constitute refinements is a direct result of
Definition 4.6.
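The three refinement operations can be sketched as follows (a Python model with events and labels represented as strings; the representation is illustrative, not from the thesis):

```python
def add_event(tpe, order, e):
    """First refinement: add an event to the pattern."""
    return tpe | {e}, order

def add_edge(tpe, order, e1, rt, e2):
    """Second refinement: add a labeled edge e1 -rt-> e2 between
    events already in the pattern."""
    assert e1 in tpe and e2 in tpe
    return tpe, order | {(e1, rt, e2)}

def substitute(tpe, order, var, rt):
    """Third refinement: replace an uninstantiated variable by a term.
    With string-modeled events and labels, substitution is textual."""
    sub = lambda s: s.replace(var, rt)
    return ({sub(e) for e in tpe},
            {(sub(a), sub(l), sub(b)) for (a, l, b) in order})

tpe, order = {"send_1#1", "read_1#2"}, set()
tpe, order = add_edge(tpe, order, "send_1#1", "V#2", "read_1#2")
tpe, order = substitute(tpe, order, "V#2", "ni#1")
print(order)  # {('send_1#1', 'ni#1', 'read_1#2')}
```

Each operation only shrinks (or preserves) the set of matching traces, which is exactly the refinement relation ⊑ of Definition 4.10.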
Now that we have defined trace patterns, we investigate when a trace pattern is
realizable with respect to a given protocol, i.e. when there exist traces of the protocol
that include the pattern. We define three predicates on protocols and trace patterns.
The predicates find their origin in the rules of the operational semantics, and have
been tailored specifically to allow us to conclude that the intersection of the traces of
74 Chapter 4 Verification
the protocol and the pattern is not empty, i. e. that there exist traces of the protocol
that exhibit the pattern.
Because labels of role events are unique within a protocol definition, we define an
auxiliary function roleevent that maps trace pattern events to their unique role
events.
Definition 4.12 (Unique runs in a trace pattern). The first predicate
UniqueRuns on trace patterns expresses that role events should not occur twice with
the same run identifier.
This predicate does not refer to any protocol. Rather, it is a necessary requirement
for a pattern to correspond to any traces of the semantics at all. As a result we
have the following theorem.
Theorem 4.13 (Unique runs). Let TP be a trace pattern, and let P be a protocol.
Then
¬UniqueRuns(TP ) ⇒ traces(P, TP ) = ∅
The proof is a direct result of the rules of the operational semantics, as each run
executes each role event only once, and role events are unique within role definitions.
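A minimal check of this predicate can be sketched as follows (Python; events are modeled as (role-event label, run identifier) pairs, an assumption about representation rather than the thesis's notation):

```python
def unique_runs(pattern_events):
    """UniqueRuns: no role event may occur twice with the same run
    identifier. Input is a list of (role_event, run_id) pairs."""
    seen = set()
    for ev in pattern_events:
        if ev in seen:
            return False  # same role event repeated in the same run
        seen.add(ev)
    return True

print(unique_runs([("send_1", 1), ("read_2", 1), ("send_1", 2)]))  # True
print(unique_runs([("send_1", 1), ("send_1", 1)]))                 # False
```

By Theorem 4.13, a negative answer already suffices to conclude traces(P, TP) = ∅, without inspecting the protocol.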
Definition 4.14 (Role consistent trace pattern). The predicate RoleConsistent
expresses consistency of a trace pattern (TPE, →) according to the individual role
definitions of a protocol. Let P be a protocol, with the role event order ≺r for each
role r.

RoleConsistent(P, (TPE, →)) = ∀e ∈ TPE :
    ∃inst ∈ Inst, ε ∈ RoleEvent : e = ⟨inst⟩(ε) ∧
    (∀ε′ ≺r ε : ∃inst′ : inst′ ⊆ inst ∧ ⟨inst′⟩(ε′) →∗ e)
This predicate expresses that each event in the trace pattern must be an instantiation
of a role event. Furthermore it must be preceded by instantiated role events in the
order prescribed by the role definition.
Contrary to the UniqueRuns predicate, the RoleConsistent predicate is not
necessary for a pattern to represent traces of a protocol. As an example, consider
the pattern TP that includes only the second event of a role of a well-formed
protocol P. Clearly, TP does not satisfy RoleConsistent. However, there exist
traces in which this pattern occurs, such as traces in which the protocol P is
executed exactly as prescribed by the role definitions.
For the remainder of this chapter, we restrict the discussion to the strongest intruder
model, for which we have
The notion of a trace pattern and the resulting theorems can easily be modified for
other intruder models, but this would introduce a large amount of case distinctions.
For clarity, we choose only to discuss the strongest intruder model. In this intruder
model, read events are enabled if and only if a term matching their pattern can
be inferred from the intruder knowledge, which consists of all sent terms, in
accordance with Lemma 2.47. This is captured by the third predicate on trace
patterns.
Definition 4.15 (Read enabled trace pattern). The predicate
ReadEnabled (P, TP ) expresses that read events of the pattern TP of the pro-
tocol P are enabled, which corresponds to the intruder being able to generate the
correct message.
A trace pattern of a protocol for which the above three properties hold is called a
realizable trace pattern of the protocol.
Definition 4.16 (Realizable trace pattern). Let TP be a trace pattern, and let
P be a well-formed protocol, such that all events in the pattern are instantiations of
events of P. If we have that UniqueRuns(TP), RoleConsistent(P, TP), as well as
ReadEnabled(P, TP), then we say that trace pattern TP is realizable with respect
to P. A trace pattern for which these three properties hold is called realizable
because there exist traces of the protocol that exhibit the pattern.
Theorem 4.17 (Realizable trace pattern occurs in traces of the protocol).
Let TP be a realizable trace pattern of a well-formed protocol P . Then we have that
traces(P, TP) ≠ ∅.
Proof. Let (TPE , −→) be a trace pattern and P a protocol, such that the above
conditions hold. We will construct a sequence of transitions starting from the initial
state of the semantics, corresponding to the rules in Table 2.3 on page 27, thus
creating a trace that is a trace of the protocol by construction. Initially, we have
that the trace is empty, t = [ ], the network state s = hM, BS, BR, F i is the initial
state, i. e. M = M0 , and BS = BR = F = ∅.
As an invariant, we have that at each step in the construction, each combination
of role event with a run identifier (which is unique based on UniqueRuns) occurs
either in TPE or in t, but not in both. In each step, such an event is removed from
TPE and added to t.
We repeat the following procedure until TPE is empty.
Let e be an element of TPE such that ¬∃e′ ∈ TPE : e′ → e. Because TPE is finite
and → is acyclic, such an event exists. Let θ = runidof(e), and let ε = roleevent(e).
If θ does not occur in t, the run θ is not part of the state s yet, and we apply
the create rule. Because RoleConsistent holds, there exists a (partial) instantiation
function inst such that all events of the run θ in TPE are instantiations of role
events with this instantiation function. Let instθ = (θ, ρθ, σθ) be any instantiation
function such that (1) inst ⊆ instθ, and (2) instθ is not partial, in the sense that
all role names and variables occurring in role role(e) are mapped to terms. Let
run = ((θ, ρθ, ∅), P(role(e))). We extend t by this create event.
t := t · create(run)
• If e is a send event of a message m, we append the instantiated event to the
trace, followed by a take event:

t := t · (inst, ε) · take(m)
Effectively, the message is copied from the send buffer to the intruder
knowledge, and we extend M accordingly: M := M ∪ {m}.
• If e is a read event, note that the precondition for the rule is significantly more
involved than for the other rules. Additionally, it requires that the message that
is to be read is in the read buffer, and that it matches the pattern m.
Observe that based on ReadEnabled, we have that the message m, as read by
e, can be inferred from the preceding send events. By construction, these are
already in the trace, and the messages are in the intruder knowledge, because
of the added take events. Note that the instantiation of variables by concrete
terms preserves the inference relation: if we have that T ` t, and we substitute
role names and variables in T and t by concrete terms, the relation still holds.
Let inst be the instantiation function that occurs last in t for the run θ. The
pattern m can contain variables or role names not yet defined in inst. For
these, we extend inst and define them to be as in instθ , ensuring that the
instantiation of the pattern is a concrete term that does not contain variables.
However, although the resulting concrete message hinsti(m) can be inferred
from M , the messages that can be inferred are not in the read buffer. Thus,
we first insert a fake event for the message m. After that we can add the read
event to the trace as for the claim event.
t := t · fake(m) · (inst, ε)
Finally we remove e from TPE , and repeat this procedure until TPE is empty.
The trace constructed in this way is a trace of the protocol by construction.
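The construction in the proof amounts to a topological traversal of the pattern. A simplified Python sketch (the buffer bookkeeping with take and fake events is elided, the event representation as (label, run identifier) pairs is illustrative, and acyclicity of the order is assumed as in the proof):

```python
def construct_trace(tpe, order):
    """Repeatedly pick an event of TPE with no unprocessed predecessor,
    emitting a create event the first time a run identifier appears.
    `order` is a set of (e1, e2) edges; assumes the order is acyclic."""
    tpe, trace, started = set(tpe), [], set()
    while tpe:
        # a minimal event: no other remaining event precedes it
        e = next(ev for ev in sorted(tpe)
                 if not any((e2, ev) in order for e2 in tpe if e2 != ev))
        _, run = e
        if run not in started:
            trace.append(("create", run))
            started.add(run)
        trace.append(e)
        tpe.remove(e)
    return trace

tpe = {("send_1", 1), ("read_1", 2)}
order = {(("send_1", 1), ("read_1", 2))}
print(construct_trace(tpe, order))
# [('create', 1), ('send_1', 1), ('create', 2), ('read_1', 2)]
```

The invariant of the proof is visible in the loop: each event moves exactly once from TPE into the trace, preceded by a create event for its run if needed.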
In order to give an algorithm that can determine whether a trace pattern contains
only traces of the protocol, we construct a variation on the notion of trace pattern,
which we call an explicit trace pattern.
The main difference of an explicit trace pattern when compared to a trace pattern
is that the construction (involving the use of the encrypt or decrypt rules of the
knowledge inference operator) of the intruder knowledge is made explicit. We use
the labels on the ordering relation, which will now have the following interpretation:
e1 −rt→ e2 denotes that the term rt, where rt is not a tuple, occurs first in the
intruder knowledge after event e1, and is needed to enable event e2. However, some
non-tuple terms such as e.g. {| rt1 |}rt2 might be constructed from two terms, and
we would not be able to determine a single point where they both occur first. To
express the event after which such a composed term occurs first, we introduce two
new events: the encr and decr events. We can now give a complete definition of the
set of trace pattern events.
Definition 4.18 (Trace pattern events). The set PatternEvent of all trace pat-
tern events is defined as
PatternEvent = { ⟨inst⟩(ε) | inst ∈ Inst, ε ∈ RoleEvent } ∪
               { encr(rt1, rt2, {| rt1 |}rt2) | rt1, rt2 ∈ RoleTerm } ∪
               { decr({| rt1 |}rt2, rt2⁻¹, rt1) | rt1, rt2 ∈ RoleTerm }
Note that whereas traces of a protocol contain events as tuples (inst, ε), trace
patterns contain concrete events ⟨inst⟩(ε) to which the substitutions prescribed by
inst have been applied.
For the remainder of this chapter, we abstract away from the initial intruder
knowledge M0. The reason for this is that including it would lead to a large number
of (trivial) case distinctions; furthermore, it can easily be modeled as a single
role of the protocol that contains a send event carrying the base terms from M0,
from which all others can be constructed.
In order to formally express explicit trace patterns, we need to introduce some
auxiliary notation. These correspond directly to the intuitive notions of (1) the
parts of a term after projecting tuples, (2) the non-tuple terms that are added to
the intruder knowledge after an event, and (3) the non-tuple terms that are required
to be in the intruder knowledge in order to enable an event.
For all terms we define the function parts : RunTerm → P(RunTerm) such that
parts(rt) =
    parts(rt1) ∪ parts(rt2)    if rt = (rt1, rt2)
    ∅                          if rt = V♯θ
    {rt}                       otherwise
Thus, we consider all non-tuple terms that are not (uninstantiated) variables to
be a part of the term. Uninstantiated variables are omitted as the initial intruder
knowledge contains intruder constants of all types (IntruderConst).
For all labels ` and terms rt1, rt2, rt3, the parts that are considered the output of
an event e are defined as out : RunEvent → P(RunTerm), where
out(e) =
    parts(rt1)    if e = send_ℓ(rt1)
    parts(rt3)    if e = encr(rt1, rt2, rt3)
    parts(rt3)    if e = decr(rt1, rt2, rt3)
    ∅             otherwise
For all labels ` and terms rt1, rt2, rt3, the parts that are considered the input of an
event e are defined as in : RunEvent → P(RunTerm)
in(e) =
    parts(rt1)                  if e = read_ℓ(rt1)
    parts(rt1) ∪ parts(rt2)    if e = encr(rt1, rt2, rt3)
    parts(rt1) ∪ parts(rt2)    if e = decr(rt1, rt2, rt3)
    ∅                           otherwise
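These three functions translate directly into code. A sketch (Python; terms are modeled as nested tuples — ("pair", a, b), ("var", v), ("enc", m, k) — an encoding chosen purely for illustration):

```python
def parts(rt):
    """Non-tuple parts of a term; uninstantiated variables are omitted,
    matching the definition in the text."""
    if isinstance(rt, tuple) and rt[0] == "pair":
        return parts(rt[1]) | parts(rt[2])
    if isinstance(rt, tuple) and rt[0] == "var":
        return set()
    return {rt}

def out(ev):
    """Terms first available in the intruder knowledge after event ev."""
    if ev[0] == "send":
        return parts(ev[1])
    if ev[0] in ("encr", "decr"):
        return parts(ev[3])
    return set()

def inp(ev):
    """Terms required in the intruder knowledge to enable event ev
    ('in' in the text, renamed because `in` is a Python keyword)."""
    if ev[0] == "read":
        return parts(ev[1])
    if ev[0] in ("encr", "decr"):
        return parts(ev[1]) | parts(ev[2])
    return set()
```

For example, parts of the pair (ni, {| nr |}pk(r)) are the nonce ni and the whole encryption, since encryptions are not projected further.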
(i) If a term occurs first after an event e1, it is an element of its out(e1) set:

∀e1, e2 ∈ TPE, rt : e1 −rt→ e2 ⇒ rt ∈ out(e1)

(iv) All events are enabled: the terms that are required for an event e (defined as
elements of the set in(e)) occur first before the event e.

∀e ∈ TPE : in(e) = { rt | ∃e2 ∈ TPE : e2 −rt→ e }
Three examples of explicit trace patterns (with additional annotations) can be found
in Figures 4.1, 4.2 and 4.3.
Proof. The proof proceeds by iterating over the labeled edges, marking them as
done as they are addressed.
As an invariant we take that all terms occurring as labels of edges that are marked
as done, can be inferred from the preceding set of send events. Any uninstantiated
variables can safely be ignored, as they can be instantiated by intruder constants,
as the set M0 includes intruder constants of any occurring type.
Let e1 −rt→ e2 be an unmarked labeled edge such that there is no unmarked edge
that precedes it in the graph. We apply case distinction on the type of event of e1:
• e1 is a decrypt event. For an explicit trace pattern, we have that there exist
m, k such that rt ∈ parts(m). Furthermore, we have that {| m |}k, k⁻¹ ∈ in(e1).
Based on the invariant and the fact that (TPE, →) is explicit, we have that
{| m |}k and k⁻¹ are inferable from the preceding events. Consequently, we
have that rt is inferable (because of {{| m |}k, k⁻¹} ⊢ m, and projection).
Given a trace pattern and a protocol, we are interested in the existence of traces
of the protocol in which the pattern occurs. In particular, we are interested in
expressing the class of traces of the protocol that exhibit the pattern as a set of
explicit trace patterns. We refer to the process of capturing such a trace class by
means of a set of explicit trace patterns, as complete characterization. We owe
this terminology to the (independently developed) work of [78], in which a different
approach is taken to arrive at such characterizations. We will return to this work
later in Chapter 7.
[Figure 4.1: explicit trace pattern for the initiator role — Id 3, protocol ns3,
role I, claim type Reachable. Runs #1 (role I) and #2 (role R) exchange
{ I#1,ni#1 }pk(R#2), { ni#1,nr#2 }pk(I#1) and { nr#2 }pk(R#2), ending in
claim_I1.]

[Figure 4.2: explicit trace pattern for the responder role (expected execution) —
Id 1, protocol ns3, role R, claim type Reachable. Runs #2 (role I) and #1 (role R)
exchange { I#2,ni#2 }pk(R#1), { ni#2,nr#1 }pk(I#2) and { nr#1 }pk(R#1), ending
in claim_R1.]

[Figure 4.3: explicit trace pattern for the responder role (man-in-the-middle
attack) — Id 2, protocol ns3, role R, claim type Reachable. Run #2 (role I) talks
to Eve: the intruder decrypts { I#2,ni#2 }pk(Eve) using sk(Eve), re-encrypts with
pk(R#1) for run #1 (role R), and similarly decrypts { nr#1 }pk(Eve) and
re-encrypts it, ending in claim_R1.]
For the initiator role of the Needham-Schroeder protocol, there is only one explicit
trace pattern, shown in Figure 4.1 on page 80. Thus, all traces that include the
initiator role, must also include the structure in the graph, which exactly corresponds
to a valid protocol execution. Thus, any synchronisation claim at the end of the
initiator role is correct.
For the responder role, there are exactly two explicit trace patterns, shown in
Figures 4.2 and 4.3. The first of these corresponds to the expected protocol
execution, whilst the second is exactly the man-in-the-middle attack originally
found by Lowe.
As a side result, this shows that every attack on the authentication properties of
Needham-Schroeder includes the man-in-the-middle attack, as the two graphs
represent a complete characterization of the responder role. Each trace that
includes an instance of the responder role either also contains a partner to
synchronize with, or it contains the man-in-the-middle attack.
All three figures are unmodified output of the prototype tool discussed in Section 4.4.
Given a trace pattern TP and a protocol P, the algorithm Ψ yields a set of explicit
trace patterns that represent a complete characterization of the protocol, i. e.
⋃_{E ∈ Ψ(P,TP)} traces(P, E) = traces(P, TP)
The algorithm has its origins in the Athena algorithm developed for Strand Spaces,
as described in [172], although the framework used (and the resulting scope) differs
significantly.
The intuition behind the algorithm is that a trace pattern can only occur in a trace
if it is realizable.
Recall that there were three properties that must hold for a pattern to be realizable.
First, events must be uniquely assigned to a run. Second, if an event of the pattern
is preceded in its role description by another event, then this must also be in the
pattern. Third, all read events must be enabled. Given a pattern that meets the
first two criteria, we explore all possible ways in which it can be refined, until we
can either be sure that the refined pattern cannot occur in traces of the protocol,
or the refined pattern is a realizable pattern, in which case we know that there are
traces in which the pattern occurs.
• In the first case, we assume that the message pattern m occurs first after a send
event, and we explore all possible role events. In order for m (which is a pattern,
and can include variables) to be part of a send message m0 , it must be possible
to unify a part of m0 with m, possibly by instantiating variables. Thus, we
refine the pattern: the send event is added, variables are instantiated where
necessary, and any events that precede the send in the role are also added (in
order to meet the first two requirements). After adding the event and an edge
labeled with m, the read enabled conditions for the event are met. Note that
if read events are added because they precede the send event in the role, these
will also have to be read enabled.
As an example, if we have that m = {| A, r♯1 |}k, then this might have been
sent first after a role event of a run θ that sends m′ = {| i, r |}k. We try to
unify m with m′. As a result, i♯θ = A and r♯θ = r♯1, and thus θ = 1.
• In the second case, we assume the message pattern m occurs first after an
encrypt event. Then it must be the case that m is of the form {| x |}y for some patterns
x and y. We unify m with {| x |}y , and add the encrypt event. The procedure
is similar to the role event, except that instead of adding any preceding role
events, we must now ensure that the patterns x and y satisfy the read enabled
condition.
• In the third case, we assume that the message m occurs first after a decrypt
event. Thus, m must be a part of some pattern x. The intruder decrypts a
message {| x |}y using the key y −1 . As a consequence, the message {| x |}y as
well as the key y −1 must be read enabled.
The algorithm iterates until all read events are enabled, in which case the pattern
is realizable.
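The three cases above can be sketched as an enumeration procedure (Python; the term encoding and the equality-only unifier are simplifications introduced for illustration — the real algorithm uses full pattern unification and refines the trace pattern rather than returning case descriptions):

```python
def term_parts(t):
    """Non-tuple parts of a term after projecting pairs."""
    if isinstance(t, tuple) and t[0] == "pair":
        return term_parts(t[1]) | term_parts(t[2])
    return {t}

def trivial_unify(a, b):
    """Equality-only stand-in for full pattern unification."""
    return {} if a == b else None

def origin_cases(m, protocol_sends, unify):
    """Enumerate how a message pattern m can first enter the intruder
    knowledge: after a send, an encrypt, or a decrypt event."""
    cases = []
    for m2 in protocol_sends:                  # case 1: m is part of a sent m2
        for p in term_parts(m2):
            s = unify(m, p)
            if s is not None:
                cases.append(("send", m2, s))
    if isinstance(m, tuple) and m[0] == "enc": # case 2: intruder encryption;
        cases.append(("encrypt", m[1], m[2]))  # x and y become new goals
    cases.append(("decrypt", m))               # case 3: via a decryption chain
    return cases

m = ("enc", "nr", "k")
cases = origin_cases(m, [("pair", "ni", ("enc", "nr", "k"))], trivial_unify)
print([c[0] for c in cases])  # ['send', 'encrypt', 'decrypt']
```

The decrypt case is left symbolic here; as the text explains next, enumerating it naively does not terminate.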
The algorithm sketched above will never terminate if there is a single read event
that is not enabled. This is caused by the third case, where the option is explored
in which the message was decrypted.
Assume the message pattern that is read is m. Then, for the third option, we
assume it was decrypted. This introduces two new requirements for the read enabled
property to hold. In particular, {| t1 |}t2 and t2−1 must be read enabled (where
m ∈ parts(t1)). The algorithm iterates, and explores at some point the read enabled
cases for {| t1 |}t2 . Again, in the third case, it is explored when this message is
4.2 Characterization algorithm 85
the result of a decryption. Thus, there must have been a message {| t3 |}t4 (where
t1 ∈ parts(t3)), which must be read enabled, and so forth. Thus, in this branch of the
iteration tree, every iteration results in two new read enabled requirements, which
grow in size with each iteration.
The other two cases do not necessarily suffer from this problem. For the encryption
event, we have that the two patterns that must be read enabled decrease in size
with each iteration, ensuring termination. For the send role events, this depends
on the protocol description, and for some protocols the iteration will not terminate.
We return to this issue later.
The non-termination issue caused by repeated application of the decryption case
can be addressed by limiting the maximum number of levels of encryption occurring
in a message. However, this (arbitrary) bound can be avoided. Observe that traces
are finite sequences of events. Thus, although there can be arbitrarily many decrypt
events leading to the decryption of a message m00 with many levels of encryption,
ultimately the message m00 must occur first after a non-decrypt event. This can
be exploited by modifying the algorithm for the decrypt event case. Instead of
exploring where m occurs first after a read event, we explore all decryption chains
that might result in m. In other words, we explore the ways in which m can be part
of a larger encrypted message m00 , and investigate the possibilities for the sending
of m00 from other events.
This optimization results in a more complex algorithm that terminates in many
cases without the need of a bound on the number of encryptions in messages.
The two main mechanisms exploited by the algorithm are case distinction and trace
pattern refinement. The algorithm explores all possible refinements of a trace
pattern, until it can be concluded that such a refinement either represents a class of
traces of the protocol, or represents no traces of the protocol. A given trace pattern
typically does not meet the ReadEnabled requirement, and we need to add events,
or apply suitable substitutions, in order to meet these requirements.
In order to enumerate the possible ways in which a specific read event can be enabled,
the read message is traced back to its origin. Given a trace, there must be a unique
event after which the message occurs first in the intruder knowledge. We examine
the consequences for each type of event (after which a message m can occur first):
• m occurs first after a send event. If this is the case, all events that precede the
send event in the role order, must also be part of any realizable trace pattern.
• m occurs first after an encrypt event. In this case, m must be of the form
{| x |}y .
• m occurs first after a decrypt event. In this case m is a part of x, and the
decrypt event decrypts {| x |}y by applying the key y −1 .
If we assume this event is a role event or an encrypt event, there are only a finite
number of ways in which this is possible, which we can enumerate and explore in
turn. However, no such enumeration is possible for decrypt events: a term t can be
the result of decrypting any term t′ that includes t after repeated concatenation
and encryption with arbitrary terms. This suggests there are infinitely many options,
which we cannot enumerate. We remedy this problem by observing that such an
encrypted term t0 must originally have been the result of a non-decrypt event. After
this initial send of t0 , there are finitely many decrypt events that ultimately produce
t from t0 . We call such a sequence that starts with a non-decrypt event, followed
by zero or more decrypt events, a decryption chain. The following lemma captures
such a sequence.
Lemma 4.24 (Decryption chain). Let (TPE, →) be an explicit trace pattern.
Let e0 be an event and rt0 a term, such that e0 ∈ TPE and rt0 ∈ in(e0). Then
there exist terms rt1, rt2, ..., rtN and events e1, e2, ..., eN+1 such that

eN+1 −rtN→ eN −rtN−1→ · · · −rt1→ e1 −rt0→ e0

where we have that eN+1 is not a decrypt event, and

∀i : 0 < i < N + 1 : ∃m, k ∈ RunTerm : rti = {| m |}k ∧ rti−1 ∈ parts(m)
Proof. For an explicit trace pattern, we have that if rt0 ∈ in(e0), there must exist
an event e1 such that e1 −rt0→ e0. We apply case distinction. If e1 is not a decrypt
event, we have that N = 0 and eN+1 = e1. If e1 is a decrypt event of the form
decr({| m |}k, k⁻¹, m) for some m, k, we have that rt0 ∈ parts(m). Furthermore, we
have that {| m |}k ∈ in(e1), and we iterate for rt1 with this encrypted term. Because
TPE is finite, this iteration terminates.
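The chain of the lemma can be recovered from an explicit pattern by walking the labeled edges backwards (a Python sketch; the map `incoming` from (event, required term) to the providing event is an encoding assumed for illustration):

```python
def decryption_chain(incoming, e0, rt0):
    """Walk backwards from the goal (e0, rt0) of Lemma 4.24: while the
    providing event is a decrypt event decr({m}k, k^-1, m), continue with
    the encrypted term it consumed. Returns [eN+1, ..., e1, e0], whose
    head is guaranteed to be a non-decrypt event."""
    chain, ev = [e0], incoming[(e0, rt0)]
    while ev[0] == "decr":
        chain.insert(0, ev)
        ev = incoming[(ev, ev[1])]  # who first provided the {m}k input
    chain.insert(0, ev)
    return chain

e0 = ("read", "ni")
d  = ("decr", ("enc", "ni", "k"), "k_inv", "ni")
s  = ("send", ("enc", "ni", "k"))
incoming = {(e0, "ni"): d, (d, ("enc", "ni", "k")): s}
print(decryption_chain(incoming, e0, "ni"))  # [s, d, e0]: send, decr, read
```

Termination follows as in the proof: the pattern is finite, so the backwards walk must eventually reach a non-decrypt event.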
Corollary 4.26 (Decryption chain does not start from an encryption). Let
(TPE, →) be an explicit trace pattern. Let eN+1, ..., e0 be a decryption chain of
(TPE, →). If N > 0, we have that eN+1 is not an encryption event.
The above definition assumes variables are typeless. In the remainder, we will
assume that MGU is defined in such a way that the type check constraints on
variables are met.
We generalize the notion of unification to so-called decryption unification, which
captures all the ways in which a term t1 can be unified with a (sub)term of another
term t2 (possibly after repeated decryption and projection operations), and the
terms that need to be decrypted in order to extract this unified term from t2.
Definition 4.28 (most general decryption unifier MGDU ). Let φ be a substi-
tution of uninstantiated variables by run terms. We call (φ, L) a decryption unifier
of a term t1 and a term t2, notation (φ, L) ∈ DU (t1, t2), if either
We call a set of decryption unifiers S the most general decryption unifiers of t1,t2,
notation S = MGDU (t1, t2), if and only if
The set MGDU captures all ways in which a term t1 can be the result of applying
(repeated) decryptions and projections to t2. Each element of the set consists of
a unifying substitution, and a list of terms that need to be decrypted. Each such
tuple uniquely defines a decryption sequence that starts with the sending of t2, and
results in t1 becoming known to the intruder.
Example 4.29. Let V and W be variables, such that their sets of allowed
substitutions type(V), type(W) contain only basic terms, and thus no tuples or
encryptions. Then we have that

MGDU(ni♯1, (V♯1, {| W♯2 |}pk(r♯2))) = { ({V♯1 ↦ ni♯1}, [ ]),
                                        ({W♯2 ↦ ni♯1}, [{| W♯2 |}pk(r♯2)]) }
Lemma 4.30 (Conditions under which the MGDU set is finite). Given two
terms t1, t2, such that for each variable V occurring in either term, type(V) contains
no tuples or encryptions, the set MGDU(t1, t2) is finite.
Proof. Let t1 and t2 be terms. We use the following procedure to compute the set
of most general decryption unifiers:
The left half of the union is a finite branching in the order of the size of t2. The
second component is defined recursively. t2 includes finitely many applications of
the encryption operator, and each subterm contains finitely many parts. As any
subsequent substitutions do not change the number of tuples or encryptions, the
iteration is guaranteed to terminate.
If t2 contains variables which can be instantiated with tuples or encryptions, the set
MGDU is not guaranteed to be finite.
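Under that finiteness condition, MGDU can be computed by recursion over t2, trying direct unification, tuple projection, and decryption at each step. A sketch consistent with Example 4.29 (Python; the term encoding and the naive substitution merge are simplifications of a full MGU implementation):

```python
def mgdu(t1, t2, decrypts=()):
    """Most general decryption unifiers of t1 against t2 (sketch).
    Terms: atoms (str), ("var", v), ("pair", a, b), ("enc", m, k).
    Returns a list of (substitution, decryption list) pairs."""
    results = []
    s = unify_basic(t1, t2)
    if s is not None:
        results.append((s, list(decrypts)))
    if isinstance(t2, tuple) and t2[0] == "pair":   # projection
        results += mgdu(t1, t2[1], decrypts)
        results += mgdu(t1, t2[2], decrypts)
    if isinstance(t2, tuple) and t2[0] == "enc":    # decryption
        results += mgdu(t1, t2[1], decrypts + (t2,))
    return results

def unify_basic(t1, t2):
    """Unification where variables stand only for basic terms,
    matching the finiteness condition of Lemma 4.30."""
    def basic(t):
        return not (isinstance(t, tuple) and t[0] in ("pair", "enc"))
    if t1 == t2:
        return {}
    if isinstance(t1, tuple) and t1[0] == "var" and basic(t2):
        return {t1[1]: t2}
    if isinstance(t2, tuple) and t2[0] == "var" and basic(t1):
        return {t2[1]: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and t1[0] in ("pair", "enc")):
        s1 = unify_basic(t1[1], t2[1])
        s2 = unify_basic(t1[2], t2[2])
        if s1 is not None and s2 is not None:
            return {**s1, **s2}  # naive merge; a real MGU checks clashes
    return None

V, W = ("var", "V1"), ("var", "W2")
res = mgdu("ni1", ("pair", V, ("enc", W, "pk_r2")))
print(len(res))  # 2: bind V1 directly, or bind W2 after one decryption
```

Run on the terms of Example 4.29, the sketch produces exactly the two unifiers listed there: one binding V directly, and one binding W together with the encryption that must be decrypted.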
4.2.4 Algorithm
In essence, the algorithm takes a trace pattern, and applies case distinction to
split the pattern into several refined trace patterns. Each of these patterns is
split and refined further until it either (1) represents an explicit and realizable
trace pattern, or (2) represents a contradictory pattern. We say a trace pattern
is contradictory when it shares no traces with the protocol. We first explain the
splitting and refinement part, after which we address the details of contradictory
patterns, and discuss termination of the algorithm.
Let P be a protocol, and let (TPE, →) be a trace pattern that satisfies both
UniqueRuns and RoleConsistent.
Let S be the set of in terms of the events in TPE which have no incoming edge yet:

S = { (e, rt) | e ∈ TPE ∧ rt ∈ in(e) ∧ ¬∃e2 : e2 −rt→ e }
We call the elements of S the open goals of (TPE, →). If (TPE, →) is extended
with an edge such that a goal that was previously open is removed from S, we refer
to this process as goal binding.
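Computing the set of open goals is straightforward. A Python sketch (the in-set is passed as a function and edges are (source, label, target) triples — encodings chosen for illustration):

```python
def open_goals(tpe, order, inp):
    """S: pairs (e, rt) with rt required by e (rt in in(e)) but not yet
    provided by any incoming labeled edge."""
    bound = {(dst, label) for (_, label, dst) in order}
    return {(e, rt) for e in tpe for rt in inp(e) if (e, rt) not in bound}

# One read event whose message is not yet bound to a source:
read = ("read", "ni")
need = lambda e: {e[1]} if e[0] == "read" else set()
print(open_goals({read}, set(), need))  # {(('read', 'ni'), 'ni')}
# After goal binding, the goal disappears:
print(open_goals({read}, {(("send", "ni"), "ni", read)}, need))  # set()
```

Goal binding is then simply the addition of an edge whose target and label match an element of S.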
If S = ∅, there are no open goals, and the trace pattern (TPE, →) satisfies all three
properties required for an explicit trace pattern. As a consequence it is realizable.

S = ∅ ⇒ Ψ(P, (TPE, →)) = {(TPE, →)}
If S is non-empty, there are open goals, representing terms that are required to
enable some events, for which we do not yet know the point at which they first occur
in the intruder knowledge. We select a single open goal from S using a heuristic.
This heuristic will be explained in detail in Section 4.4.4. For now, we just assume
a single element (e0, rt0) is selected.
We proceed by refining (TPE, →) by applying case distinction. Based on
Lemma 4.24, we have that any refinement of (TPE, →) must contain a decryption
chain eN+1, ..., e0 yielding the term rt0, which is an open goal of the event e0.
The existence of such a chain is the basis for case distinction. We outline the
possible cases before addressing them in detail:
The output of the algorithm is the union of the output of all these subcases. We
explain the details of the cases below.
The main case distinction is based on whether the events of the decryption chain
eN+1, ..., e0 either coincide with some events already in TPE (case 1), or not
(case 2).
Case 1: {eN+1, ..., e1} ∩ TPE ≠ ∅.
Because the decryption chain shares events with TPE, there exists a unique k such
that ek ∈ TPE and ∀i : 1 ≤ i < k : ei ∉ TPE. We explore each option: for each
event e′ ∈ TPE, we determine how rt0 can be the result of repeated decryptions of
the output terms of e′, captured by the set MGDU(rt0, t) for each t ∈ out(e′). For
each possible option (φ, L) of an event e′, we construct a refined trace pattern
(TPE, →)′ as follows:

• If L = [ ], we add e′ −rt0→ e0.

• If L ≠ [ ], we add a decrypt event for each {| m |}k in the list L. We add labeled
edges, as prescribed by the decryption chain, connecting the chain of events.
As a side result, any required decryption keys will be open goals of (TPE, →)′.
where chain is a function that refines a trace pattern. Given a trace pattern, an
event es from which the chain starts, a list of terms that need to be decrypted,
and an event eg and a term rtg representing an unbound goal, it adds the events
and edges that are needed to establish a decryption chain. For decryption chains of
length 0 it is defined as

chain((TPE, →), es, [ ], eg, rtg) = (TPE, → ∪ {es −rtg→ eg})

and for longer chains as

chain((TPE, →), es, {| m |}k · L, eg, rtg) =
    chain((TPE ∪ {e′}, → ∪ {es −{| m |}k→ e′}), e′, L, eg, rtg)

where e′ is a fresh decrypt event decr({| m |}k, k⁻¹, m).
In all cases, we refine the trace pattern by adding an instantiation of the role event.
Case 2.3.1: The run identifier of eN +1 does not occur in TPE .
Because trace pattern interpretation (4.6) abstracts away from the actual choice of
run identifiers, the particular choice of the run identifier for eN+1 is irrelevant, as
long as it does not yet occur in TPE, as the sets of traces for each choice are equal.
Let θ be a run identifier that does not occur in TPE, and let inst = (θ, ∅, ∅).
For all explicit trace patterns, we require that RoleConsistent holds. Thus, if some
event occurs in a trace pattern, it must be the case that all events that precede it in
the role description are also part of TPE. Let pev be the set of events that includes
the instantiation of ε and the events that precede ε according to the role definitions:

pev = { ⟨inst⟩(ε′) | ε′ ≺r ε ∨ ε′ = ε }
Let por be the set of edges that defines the ordering on these events (as required by
the role definitions):

por = { e1 → e2 | e1, e2 ∈ pev ∧ e1 ≺r e2 }
Using these abbreviations, we can define the details of the algorithm for this case:
Contradictory trace patterns
In the process of refining trace patterns, we may refine them in such a way that we
can immediately conclude that the trace pattern can never contain any traces that
are traces of the protocol.
In particular, if the relation → contains a cycle, there can be no linear trace that
satisfies the order. As a second criterion, we observe that if constants of a run θ
occur in events that precede the first send event of the run θ, there can be no trace
in the trace pattern that is also a trace of the protocol.
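The first criterion is a standard cycle check on the (unlabeled) ordering relation. A sketch (Python, depth-first search with three-color marking; the constant-origination check is omitted):

```python
def has_cycle(events, edges):
    """A cyclic ordering admits no linear trace, so the pattern is
    contradictory. `edges` is a set of (src, dst) pairs."""
    succ = {e: [d for (s, d) in edges if s == e] for e in events}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {e: WHITE for e in events}
    def visit(e):
        color[e] = GREY                      # on the current DFS path
        for d in succ[e]:
            if color[d] == GREY or (color[d] == WHITE and visit(d)):
                return True                  # back edge found: cycle
        color[e] = BLACK
        return False
    return any(color[e] == WHITE and visit(e) for e in events)

print(has_cycle(["a", "b"], {("a", "b"), ("b", "a")}))  # True
print(has_cycle(["a", "b"], {("a", "b")}))              # False
```

A pattern failing this check can be discarded without further refinement.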
Results
If the algorithm terminates, the result is a set of refined patterns. The results are
sound, in the sense that the traces in each resulting trace pattern are both traces
of the original trace pattern (as only refinement was applied), and also traces of
the protocol (as the trace patterns are explicit). With respect to completeness, the
method is complete if there are no variables that can contain tuples or encryptions.
4.2.5 Termination
In general, the algorithm is not guaranteed to terminate. This is in line with the
undecidability results for security protocol analysis, as e. g. in [81]. In order to
guarantee termination, we introduce a bound maxruns on the number of runs
represented by the events in TPE. All states where the set of trace events represents
more than maxruns runs are pruned from the search space.
Definition 4.31 (Bounded characterization algorithm). We improve the al-
gorithm Ψ by introducing a parameter maxruns, and by changing the algorithm in
two ways. First, we consider all trace patterns that contain more than maxruns
run identifiers to be contradictory. Second, we add an extra output boolean that is
true if and only if a trace pattern is encountered (and deemed contradictory) where
the number of runs exceeds maxruns. The resulting bounded algorithm Ψ_maxruns
has signature
Lemma 4.32 (Bound on the number of runs and termination). The bounded
version of the algorithm terminates.
Proof. We associate each trace pattern TP with a tuple rnk, which is defined as
Of course, introducing a bound implies that some states might be pruned, and thus
that the characterization is no longer complete: some behaviours of the trace pattern
might be missed. However, the introduction of the incomplete boolean will signal
this case.
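The pruning step and the incomplete flag can be sketched as follows. The (run id, event name) encoding of trace events is an assumption for illustration:

```python
# Sketch of the pruning criterion of the bounded algorithm: a trace pattern is
# deemed contradictory as soon as its events mention more than maxruns distinct
# run identifiers, and an `incomplete` boolean records that pruning happened.

def bounded_filter(patterns, maxruns):
    kept, incomplete = [], False
    for events in patterns:
        runs = {run_id for run_id, _name in events}
        if len(runs) > maxruns:
            incomplete = True     # pruned: result may be incomplete
        else:
            kept.append(events)
    return kept, incomplete

patterns = [[(1, "send"), (2, "read")],
            [(1, "send"), (2, "read"), (3, "send")]]
kept, incomplete = bounded_filter(patterns, maxruns=2)
```

With maxruns = 2 the three-run pattern is pruned and the flag is set; with maxruns = 3 nothing is pruned and the flag stays False, matching the completeness condition of Theorem 4.33.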
Theorem 4.33 (Bound on the number of runs and completeness). For all
protocols P , trace patterns (TPE , −→), bound on the number of runs maxruns, and
sets of explicit trace patterns ES , we have that
Ψ_maxruns(P, (TPE, →)) = (ES, False) ⇒ Ψ(P, (TPE, →)) = ES
The theorem states that if after termination of the bounded version of the algorithm,
no patterns have been deemed contradictory on the basis of having surpassed the
maximum number of runs, the resulting realizable trace patterns represent a com-
plete characterization of the trace pattern. They capture all possible behaviours of
the trace pattern.
Proof. If no trace patterns are pruned, the bounded algorithm behaves identically
to the unbounded algorithm.
This result allows the tool to verify protocols for an unbounded number of runs
in many cases, even though it is guaranteed to terminate. In practice, unbounded
verification results occur for about 90 percent of the protocols, as we will see in
Section 4.4.5.
Note that we can have additional rules for pruning trace patterns, as long as they
are monotonous for each refinement step. For example, our prototype tool includes
rules that, e. g., limit the trace length, or that determine whether agents are allowed to initiate sessions
with themselves. Again, if patterns are pruned because of such rules, this is signalled
through the boolean output.
Given a protocol and trace pattern, the algorithm sketched above yields a complete
characterization of a protocol: a set of explicit trace pattern representations, which
together capture all possible ways in which the trace pattern can occur in a trace.
This does not immediately imply we can verify security properties using this method.
94 Chapter 4 Verification
Secrecy properties are defined such that for each claim that occurs in a trace, such
that all its communication partners are trusted, some term must never be in the
intruder knowledge. We aim to construct a trace pattern that captures all possible
attacks on such a claim. This requires that the trace pattern is defined in such a
way that (1) it includes an instance of the secrecy claim, which claims that some
term t is secret, (2) all agents the run communicates with are trusted, and (3) the
intruder learns the term t.
Regarding (1), we define a trace pattern that includes all events of a run θ of the
role, up to and including the claim.
For (2), enforcing that the agents of the run are trusted, we exploit the mechanism
used for type-checking variables, defined in the type(V ) function, to define similar
constraints for the role names of the run with identifier θ, by defining e. g. type(i♯θ) =
type(r♯θ) = Agent_T.
With respect to (3), we observe that it is not immediately possible to express
statements about the intruder knowledge in terms of trace patterns. Rather, the intruder
knowledge is encoded implicitly in the events. We extend the set of events with
events of the form e = intruderKnows(rt), with in(e) = parts(rt) and out(e) = ∅.
Thus, the set of traces violating the secrecy claim is captured exactly by the trace
pattern that contains two events: a symbolic instance of the claim with secret term
rt, and an event intruderKnows(rt). If we apply the algorithm to this trace set, and
it turns out the complete characterization has no elements, we have that the claim
holds. If the complete characterization is non-empty, each trace pattern it contains
defines traces that violate the secrecy property.2
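The construction of this trace pattern can be sketched as follows. The list-based event encoding is an assumption for illustration, not the thesis's concrete data structure:

```python
# Sketch: build the trace pattern for a secrecy claim. It contains (1) the
# events of run theta up to and including the claim, and (3) an intruderKnows
# event for the claimed-secret term rt.

def secrecy_attack_pattern(role_events, claim_event, theta, rt):
    idx = role_events.index(claim_event)
    run_events = [(theta, e) for e in role_events[: idx + 1]]  # requirement (1)
    leak_event = ("intruderKnows", rt)                          # requirement (3)
    return run_events + [leak_event]

pattern = secrecy_attack_pattern(
    ["send1", "read2", "claim_secret"], "claim_secret", theta=1, rt="ni")
```

Feeding such a pattern to the characterization algorithm yields an empty set of explicit patterns exactly when the secrecy claim holds.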
We apply the algorithm Ψ to the constructed trace pattern. If the set of explicit
trace patterns is non-empty, each trace of these patterns constitutes an attack on
the secrecy claim, and we can use the procedure used in the proof of Theorem 4.17
to establish an attack trace. If it is empty, no attack exists.
2 We therefore refrain from introducing a new event in the implementation, and encode such
intruder knowledge constraints as read events.
4.4 Prototype implementation: Scyther 95
of Theorem 4.17 to establish an attack trace. If the property holds for the trace
pattern, it holds for all traces defined for the pattern.
Intuitively one might expect that a correct protocol allows for its expected behaviour
and nothing else. This thought is also captured by the definition of synchronisation.
Thus, one would expect that a correct protocol has few different behaviours (and
thus only a few explicit trace patterns that include a claim), whereas an incorrect
protocol has more behaviours, some of which were not foreseen by the protocol
designers. Thus, we would expect that for a correct protocol, the complete charac-
terization set of a claim contains a single element.
For our test set of 128 protocols, which includes the vast majority of the protocols
in the SPORE library [174], there are some striking correlations between the cor-
rectness of a protocol, and the number of explicit trace patterns that are output by
the algorithm.
All protocols in the SPORE library that have exactly one realizable trace pattern
for each role, with the exception of the KSL protocol, are correct with respect to
synchronisation.
The converse also holds for the test set. In fact, all correct protocols in the set
that satisfy agreement or synchronisation, have exactly one explicit trace pattern
for each role. We conjecture that for a role that satisfies agreement as defined in
the previous chapter, there exists only one realizable trace pattern. Proving the
conjecture is left as future work.
• The input language closely follows the syntax used for describing protocols.
In Figure 4.4 on the following page we show the input for the tool used to
describe the initiator role of the Needham-Schroeder protocol.
• The semantics of the tool correspond to the model developed in the previous
two chapters.
Regarding the design, we have opted to split the tool into two main components.
The core of the Scyther toolset is the Scyther command-line tool, which incorporates
the characterization and verification algorithms. This stand-alone tool is sufficient
to perform protocol analysis.
A second component has been developed for user convenience, and comprises an
optional graphical user interface for the Scyther command-line tool. This component
acts as a wrapper for the command-line tool, and provides a protocol editor, and
graphical output of the attacks or trace patterns.
4.4.2 Implementation
The current implementation is available for both Linux and Windows platforms,
and can be ported with little effort to other platforms.
Command-line tool
The command-line tool is written in the C language, and is optimized for verification
speed.
The tool takes as input a protocol description, and optional parameters (such as
bounds, or matching functions), and outputs a summary report, and optionally,
representations of trace patterns in XML or visual graph descriptions in the dot
language.3 In Figure 4.5 on the next page we show the input and output formats of
the tool, where a dotted line denotes an optional feature.
3 Graph descriptions in the dot language can be used to generate graph pictures using the GraphViz package.
• By default the tool returns a summary of the claim's status: whether a claim is
true, false, or whether there is no attack within the bounds. Optionally, it can
output explicit trace patterns (which represent attacks or characterizations).
For the output there are two formats available:
– Graph output (in the dot language from the Graphviz project)
– XML output (describing the trace pattern)
• By default, the tool yields at most one attack on each claim. If multiple explicit
trace patterns are found in which a claim is violated, a heuristic is applied to
select the most feasible attack (related to the experimental results from [102]).
To give a simple example: If there are two patterns, but in one pattern an
agent A starts a run in which she tries to communicate with herself, the other
pattern is used to generate an attack. Optionally, the tool can generate all
possible attacks.
Furthermore, a set of Python bindings exists for the Scyther command-line tool,
which provides a convenient means to script Scyther experiments. For example,
a Python program is provided for large-scale multi-protocol analysis experiments,
as described in the next chapter.
For the graphical user interface, speed is not the biggest concern. The interface
is written in the Python language, combined with the wxPython interface library.
The user interface adds no essential features to the command-line tool other than
improved usability.
In Figure 4.6 on the following page we show the input and output formats of the
graphical user interface, and the interaction with the command-line tool.
4.4.3 Validation
The algorithm sketched above includes an important heuristic. Given that all read
events must be enabled, we must ensure that there are no open goals. An open goal
is a tuple (e, rt), consisting of a read event e and a non-tuple term rt; in each
iteration, one open goal is selected for case distinction and pattern refinement.
Although the algorithm
will try to bind any other open goals in further iterations, any substitutions made
by the case distinctions and refinement steps influence the branching factors further
on. Furthermore, contradictory states may occur earlier depending on the choices
made by the heuristic.
Thus, the heuristic can influence the number of states traversed, but also the maxi-
mum size of the set TPE at which contradictions are found. This means the heuristic
is important not only for the speed of the verification, but also for improving the
number of cases in which verification is complete when the algorithm is invoked
with a bound. A similar heuristic must exist in [173], however it is not explained in
detail by the authors of the Athena method.
We have devised over 20 candidate heuristics and investigated their effectiveness.
Here we report our main findings and illustrate them by means of a few selected
heuristics, ordered according to their effectiveness.
• Heuristic 2: Constants. For each open goal term rt, the number of local
constants that are a subterm of rt is divided by the number of basic terms
that are a subterm of rt. The goal with the highest ratio is selected.
• Heuristic 3: Open goals that correspond to the keys needed for decrypt events
are given higher priority, unless these keys are in the initial intruder knowledge.
• Heuristic 4: Give priority to goals that contain a private key as a subterm;
next, give priority to goals that contain a public key; all other terms have
lower priority.
Regarding heuristic 4, we observe that the semantics do not explicitly mention such
concepts as ’private’ or ’public’ key (there might be no such terms, or multiple
key infrastructures). We derive these from the initial intruder knowledge and role
descriptions by identifying function names which are never sent as a subterm, but
only as keys. This will typically include sk and pk. Second, we observe that for
some functions, all applications are in the initial intruder knowledge, usually the
public keys such as pk, whereas for others only a strict subset of the domain is part
of M0 , usually the private keys sk.
For all heuristics, we have that if two open goals are assigned the same priority
value, the open goal that was added first is selected. Tests have shown this to be
slightly more effective for all heuristics involved.
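Heuristic 2, together with the FIFO tie-break, can be sketched as follows. The term representation (nested tuples with string leaves) is an assumption made for illustration:

```python
# Sketch of heuristic 2: score each open goal by the fraction of its basic
# subterms that are run-local constants, and pick the highest-scoring goal,
# preferring the goal that was added first on a tie.

def basic_subterms(term):
    if isinstance(term, tuple):            # tuples/encryptions: recurse
        return [b for t in term for b in basic_subterms(t)]
    return [term]                          # a basic term (leaf)

def constant_ratio(term, local_constants):
    basics = basic_subterms(term)
    return sum(b in local_constants for b in basics) / len(basics)

def select_goal(open_goals, local_constants):
    """open_goals is a FIFO list of (event, term) pairs."""
    # max() returns the earliest maximal element, matching the FIFO tie-break
    return max(open_goals, key=lambda g: constant_ratio(g[1], local_constants))

goals = [("e1", ("ni", "V")), ("e2", ("ni", "nr"))]
chosen = select_goal(goals, local_constants={"ni", "nr"})   # picks "e2"
```

The goal ("e2", (ni, nr)) consists entirely of local constants (ratio 1.0), so it is preferred over the goal containing the variable V (ratio 0.5).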
The first heuristic acts as a reference point for establishing relative effectiveness of
each heuristic. The second heuristic corresponds to the intuition that terms which
contain more local constants of particular runs, can only be bound to very particular
send events (as opposed to terms with many globals or variables), resulting in less
case distinctions. We believe a similar heuristic was used in a version of the Athena
tool. The third heuristic captures the intuition that there should be few ways in
which the intruder can gain access to a decryption key, as in general keys should
not be known to the intruder. (Unless it concerns signatures, in which case the
decryption key is the public key, which is part of the initial intruder knowledge.) For
the fourth heuristic, a strict priority is given to cases where e. g. the intruder decrypts
something with a key that is never sent by the regular agents, usually corresponding
to long-term keys, as these branches often lead to contradictory states. Finally, the
fifth heuristic is a combination of the previous three heuristics, using a lexicographic
order. For the fifth heuristic various weighting functions were also considered, of
which the lexicographical order performed best in general.
[Figure 4.7: The impact of the heuristics on the number of states traversed (for 518
claims, strict bound). Plot: number of states explored (logarithmic scale, 1e+04 to
1e+07) against heuristic type (1 to 5).]
Given a fairly strict bound of four runs, we investigated how each heuristic per-
formed, when applied to a test set of 128 protocol descriptions, with 518 claims.
Our test set includes the vast majority of the protocols in the SPORE library [174],
various protocols from (un)published papers, variations on existing protocols, and
new protocols, modeled by users of the Scyther tool throughout the last two years.
A time limit was set for the iteration procedure, which was only used to abort tests
for the first two heuristics. In Figure 4.7 on the preceding page we show the im-
pact of the heuristics on the number of states explored. From the graph it is clear
that heuristic 5 explores almost 40 times fewer states than the random heuristic 1.
Intuitively, this corresponds to avoiding unnecessary branching, and a tendency to
arrive at contradictory trace patterns in fewer iterations.
Because the effectiveness of the heuristics depends to a large degree on the particular
protocol under investigation, it is difficult to give an analytical explanation of the
results for the complete test set. However, it seems that the heuristics 2, 3 and 4
can be used to support each other, as is shown by the performance of heuristic 5.
[Figure 4.8: The impact of the heuristics on the decidability (for 518 claims, strict
bound). Plot: percentage of claims (0 to 100) against heuristic type (1 to 5),
distinguishing claims proven correct (complete characterization) from claims with an
attack found.]
This also has a direct effect on the completeness of the results, which is depicted
in Figure 4.8. For heuristic 1, we get a complete result (based on complete characterization)
for less than 30 percent of the claims. This improves for each heuristic,
leading to an 82 percent rating for heuristic 5. In other words, if we use heuristic
5, we have that for 82 percent of the claims, the algorithm is able to either find an
attack, or verify correctness for an unbounded number of runs. In the remaining 18
percent of the cases the algorithm determines that there are no attacks involving
four runs or less, but it might be possible that there are attacks involving five or
more runs.
For the protocols analyzed for this thesis, we did not find any attacks that involved
more than x + 1 runs, where x is the number of roles of the protocol, except for the
f^N g^N family of protocols from [139]. The exception involves a family of protocols
that was specifically tailored to be correct for N runs, but incorrect for N + 1 runs.4
This indicates that for practical purposes, initial verification with three or four runs
would be sufficient. If verification with a low bound yields no attacks, but neither
a complete characterization, the bound can be increased.
[Figure 4.9: The impact of the bound on runs on decidability for the protocols in
the test set. Plot: percentage of claims (0 to 100) against the bound on the number
of runs (2 to 7), distinguishing correct claims (full characterization) from false
claims (attacks found).]
Because a higher bound on the number of runs can improve the rate of complete
characterization, but also increases verification time, there is an inherent trade-off
between completeness and verification time. We have investigated the impact of the
bound on the number of complete characterizations, within the set of protocols we
used for the previous graphs. In Figure 4.9 we show the decidability results on the
test set as a function of the bound on the runs, using heuristic 5, and using no time
limit for the tests. There is no difference between the decidability results of 6 and
7 run bounds, but in general, the higher the bound, the more claims are decided.
4 This seems to suggest a correlation between the number of roles in the protocol and the runs
involved in the attacks. In general, the undecidability of the problem [81] implies that there is no
such bound for all protocols, but maybe it is possible to establish a tight lower bound for decidable
subclasses [183].
In the test set, no further attacks are found for bounds of three runs or more, but
some additional claims can be proven to be correct for an unbounded number of
runs.
[Figure 4.10: The impact of the bound on runs on the total verification time for 518
claims. Plot: total time taken to verify the 518 claims in the test set, in seconds
(logarithmic scale, 1 to 10000), against the bound on the number of runs (2 to 7).]
The drawback of a high bound on the number of runs is time. As the algorithm
employs a depth-first search, memory usage is linear in the number of runs, but
verification time is exponential. For the test set, we show verification times in Fig-
ure 4.10 for some specific bounds on the number of runs. The figure thus corresponds
to the time it took to generate the results in Figure 4.9, on a desktop computer with
an AMD 3000+ Sempron CPU running at 1.6 GHz, with 1GB of RAM.5 It is clear
that the total verification time of a large set of protocols is exponential with respect
to the maximum number of runs, even though verification time is constant for the
vast majority of the protocols. This is in line with results such as those in [162].
The protocols in our test set did not include attacks involving more than a handful
of runs. To evaluate the effectiveness of Scyther in finding large attacks, we
analysed instances of the f^N g^N family of protocols from [139]. These protocols were
specifically designed to show theoretical possibilities of the size of attacks. For each
N > 2, the protocol f^N g^N has no attacks with N runs or less, but there exists an
attack involving N + 1 runs. The protocol contains only two roles and four messages,
and the parameter N mainly influences the message size (by increasing the number
of nonces and variables). For this protocol family, Scyther yields the expected results: for
protocols f^N g^N with a bound maxruns ≤ N, bounded verification is performed.
With a bound maxruns > N, an attack is found. As an extreme example, we find
5 Note that because the algorithm uses an iterative depth-first search, it uses a negligible amount
of RAM.
the attack that involves 51 runs (on the f^50 g^50 protocol) in 137 seconds.
4.5 Conclusions
The Scyther tool has proven to be an effective tool for the verification of security
properties. It combines the possibility of verification (proving the protocol correct
within the model) and falsification (finding a concrete attack) whilst still being
guaranteed to terminate. It is the only currently existing tool capable of verifying
synchronisation. It has been extensively used by students and researchers, resulting
in the discovery of many previously unknown attacks.
We investigated several options for fine-tuning the algorithm, e. g. by evaluating
different heuristics, and by analyzing the effect of the choice of the bound on the
verification result (percentage of protocols verified or falsified) and verification time.
Differences between Athena and Scyther The ideas behind the verification method
stem from the Athena tool as described in [171, 173]. Unfortunately, the Athena
tool is not publicly available, making it impossible to compare the tools in any
meaningful way. Due to changes in the theoretical foundations, we improve on the
basic Athena algorithm in a number of ways:
• We extend the method to our operational semantics, allowing for e.g. multiple
key structures. As a side result, security properties are defined as claim events,
and there is no need to set up complex (and thus error-prone) scenarios for
the verification of properties. We return to this in Chapter 7.
• We make the heuristic in the algorithm explicit and analyse its impact on the
verification process.
In fact, our version of the algorithm differs significantly from the algorithm sketched
in [173]. In particular, there seems to be an important problem which is most
obvious in Definitions 4.6 and 4.7 of the journal publication [173] (note that the
same problem exists in [171], and implicitly in [172]). According to Definition 4.6
of that paper, unification is defined as interm unification, roughly corresponding
to the notion that it is possible to interm-unify a term t1 not only with t1 but
also with a tuple (t1, t2). However, if variables are allowed to be instantiated with
tuples, there exists no most general interm unifier for this relation, but rather an
infinite set of incomparable unifiers. As an example, consider the interm unification of a term
A with a variable V. We have that A interm-unifies with A, as well as with (V′, A),
but also with ((V′, A), V″), etc. As the Athena algorithm is based on the Strand
Spaces approach, which models the intruder actions as protocol events, it is strictly
required that variables can be instantiated with tuples, in order to correctly model
encryption and decryption as protocol events. Consequently, the set U P (t) defined
below Definition 4.7 in the same paper cannot be guaranteed to be finite. As a
result, the next state function F is not complete-inclusive, which is required for the
completeness of the method. The upshot of this is that attacks might be missed
by the described algorithm. One of the authors of the Athena paper seems to be
aware of this problem, as described in a technical report [26], where a possible fix
is suggested by using so-called interm constraints. However, the suggested solution
is described as being “possibly undecidable”.
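The infinite family of incomparable instantiations behind this problem is easy to enumerate. The sketch below is purely illustrative (variables and constants are modelled as strings, tuples as Python tuples):

```python
# Sketch: a constant A interm-unifies with a variable V under infinitely many
# incomparable substitutions V := A, V := (V0, A), V := (V1, (V0, A)), ...
# so the candidate set of unifiers cannot be reduced to a finite, most general
# one. We enumerate the first n members of the family.

def interm_unifier_family(n):
    members, term = [], "A"
    for i in range(n):
        members.append(term)
        term = (f"V{i}", term)      # wrap in one more tuple layer
    return members

family = interm_unifier_family(3)
# family == ["A", ("V0", "A"), ("V1", ("V0", "A"))]
```

No member of this family is an instance of another, which is exactly why the set U P(t) in the Athena paper cannot be guaranteed finite.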
The problem of instantiating variables with tuples can be avoided in our version
of the algorithm, as we do not model intruder events as protocol actions. The
additional events decr and encr are only constructed on-demand by the refinement
process, and thus we can restrict ourselves to variables that are not instantiated as
tuples.
5 Multi-Protocol Attacks
In this chapter we will apply some of the methodology and tools developed in the
previous chapters. We turn our attention to one of the assumptions underlying
modern security protocol analysis, namely that the protocol under scrutiny is the
only protocol that is executed in the system.
As indicated in the previous chapters, a number of successful formal methods and
corresponding tools have been developed to analyse security protocols in recent
years. These methods are generally limited to verification of protocols that run in
isolation: for a protocol that is used over an untrusted network, the formal models
generally assume that there is only one protocol being executed over the network.
However, the assumption that a protocol is the only protocol that is executed over
the untrusted network is not realistic. Unfortunately, when multiple protocols are
used over a shared untrusted network, the problem of verifying security properties
becomes significantly harder. The cause of this is the fact that security properties
are not compositional. It may happen that two protocols that are correct when run
in isolation, are vulnerable to new attacks when used over the same network.
An attack that necessarily involves more than one protocol is called a multi-protocol
attack. The existence of such attacks was first established by Kelsey, Schneier and
Wagner (see [111]). They devise a procedure that starts from any correct protocol.
Then they show that it is possible to construct a specially tailored protocol such that
(1) the second protocol is correct, and (2) when these two protocols are executed
over the same network, the intruder can use messages from the second protocol to
mount an attack against the first protocol. Some specific examples can be found
in the literature, e. g. [5, 184]. Because the protocols in these papers are constructed
especially for the purpose of breaking some particular protocol, they seem contrived.
At the other end of the spectrum, sufficient conditions for compositionality have
been established by e.g. Guttman and Thayer in [95]. Alternative requirements
can be found in [93, 48, 47]. If all protocols that use the same network and key
infrastructure satisfy certain requirements, e.g. messages of one protocol can never
be mistaken for messages from the other protocols, compositionality of the individual
security properties is guaranteed. In that case, in order to prove correctness of the
system it suffices to prove correctness of the protocols in isolation. However, the
standard protocols found in literature do not meet these requirements, as many of
these protocols have very similar messages, often containing permutations of a list of
nonces and a list of agent names. Thus the theoretical possibility of multi-protocol
attacks remains.
A third approach to this compositionality problem is formed by methods that aim
to establish that a given set of protocols does not interfere. A recent example of
such an approach is the invariant-based approach in [71].
The question that is answered in this chapter is: Are multi-protocol attacks a re-
alistic threat which we should consider when using protocols from the literature?
This question has not been addressed before, because verification methods for se-
curity protocols traditionally do not scale well. This holds for manual methods
(e.g. proofs by hand) as well as (semi-)automatic methods. It is feasible to verify
small to medium-sized protocols in isolation, but large and/or composed protocols
have been outside the reach of most formal methods. This lack of scalability has
also induced a limited expressiveness of the semantics of security protocol models:
most models only allow for modeling a single protocol and its requirements.
Modifying semantics and tools for multi-protocol analysis was already pointed out as
an open research question in [61], and has been addressed partially in e.g. [181, 125].
The semantics from Chapter 2 can handle multi-protocol analysis in an intuitive
way. The semantics are role-based and the security properties are modeled as local
claim events. Therefore, modeling multiple protocols (and their security properties)
concurrently amounts to analysing the protocol described by the union of the role
descriptions of each protocol.
In the previous chapter a verification tool was developed on the basis of the se-
mantics. In line with the semantics, the tool can handle multi-protocol analysis
in a straightforward manner, by concatenation of multiple protocol descriptions.
Providing the tool with the concatenation of two protocol descriptions amounts to
verification of their security properties when both protocols are executed over the
same network. Using this tool, we have been able to verify two- and three-protocol
composition of 30 protocols from the literature. The tests have revealed a significant
number of multi-protocol attacks. Because this particular type of analysis has not
been performed before, all attacks we find are previously unreported.
We proceed by defining in Section 5.1 the notion of multi-protocol attacks, and
we describe the conducted experiments in Section 5.2. The results of the experi-
ments are analysed in Section 5.3, and two practical attack scenarios are treated in
Section 5.4. We briefly discuss some preventive measures in Section 5.5 and draw
conclusions in Section 5.6.
roles will be connected. We say that such a set of role descriptions contains multiple
protocols.
Definition 5.1 (Connected roles). We call two roles connected if they are equiv-
alent under the equivalence relation generated by the communication relation ;.
All protocols shown in this thesis up to this point are single protocols. There is only
one equivalence class of connected roles. If we join the role specifications of two or
more protocols with disjoint role sets, the result contains multiple protocols.
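Computing the equivalence classes of Definition 5.1 is a standard union-find exercise. The sketch below (with invented role names) illustrates how a role set is checked for containing multiple protocols:

```python
# Sketch: group roles into equivalence classes under the relation generated by
# the communication relation; a set with more than one class contains multiple
# protocols. Union-find with path compression.

def role_classes(roles, communications):
    parent = {r: r for r in roles}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path compression
            r = parent[r]
        return r

    for a, b in communications:             # each communication merges classes
        parent[find(a)] = find(b)
    classes = {}
    for r in roles:
        classes.setdefault(find(r), set()).add(r)
    return list(classes.values())

roles = ["ns_i", "ns_r", "yah_i", "yah_r", "yah_s"]
comms = [("ns_i", "ns_r"), ("yah_i", "yah_r"), ("yah_r", "yah_s")]
assert len(role_classes(roles, comms)) == 2   # two protocols in this set
```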
Now the definition of a multi-protocol attack is fairly straightforward.
Note that this definition does not put any constraints on the intruder behaviour.
Therefore, this definition is more general than the definition given in [75], where
only replaying a message from one protocol into another is allowed.
5.2 Experiments
5.3 Results
The tests reveal that there is a large number of multi-protocol attacks possible on
the selected set of protocols. Out of the 30 protocols from the literature, 23 have
security claims that are correct in isolation, but that can be attacked when the
protocol is put in parallel with one or two other protocols from the set.
The three types of matching that were used in the test have a clear hierarchy. Any
attack that occurs in the strict type model will also occur in the other two models.
Similarly, any attack occurring in the model that allows for basic type flaws, will
also be an attack in the typeless model.
For multi-protocol attacks however, there is no such inherent monotonicity. This is
caused by the fact that the multi-protocol definition states that the property should
be correct in the context of the single protocol. Consider for example the Otway-
Rees protocol. For this protocol, the secrecy claims are true in the strict type model,
so it may be the case that there exist multi-protocol attacks on these claims in the
strict type model. For the typeless model however, there exist type-flaw attacks on
both these claims. Consequently we have from Corollary 5.4 that there can be no
multi-protocol attacks in the typeless model.
We discuss each of these three typing categories separately.
We start off with the most restricted model, in which it is assumed that the agents
can somehow check the type of the data they receive, and thus only accept terms of
the correct type. For the protocols from the literature we found 17 two-protocol attacks,
and no three-protocol attacks. We found attacks violating authentication as well as
secrecy requirements.
The vast majority of these attacks involve variants of the Woo-Lam Pi protocol as
described in [186]. These protocols contain a read/send pattern which can be used as
a so-called encryption oracle, which is a protocol mechanism that allows an intruder
to encrypt arbitrary values with some key. In this case arbitrary nonces can be
encrypted with the symmetric key shared by an agent and the server. This enables
many attacks on other protocols that involve this shared key.
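Abstractly, the oracle behaviour can be sketched as follows. This is a hypothetical encoding; the real Woo-Lam Pi messages contain more structure:

```python
# Sketch (hypothetical encoding): a read/send pattern that acts as an
# encryption oracle. The role reads a value (meant to be a nonce) and sends
# it back encrypted under the key it shares with the server; it cannot tell
# an honest nonce from data chosen by the intruder.

def responder_step(received, shared_key):
    # Echo the received term, encrypted with k(r, s).
    return ("enc", received, shared_key)

k_rs = ("key", "r", "s")          # symmetric key shared by r and the server
chosen = ("nonce", "intruder")    # any value the intruder wants encrypted

# The intruder submits an arbitrary term and obtains its encryption under
# k(r, s), which can then be replayed into another protocol that expects a
# message of the form {| X |}k(r,s).
assert responder_step(chosen, k_rs) == ("enc", chosen, k_rs)
```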
The remainder of the attacks share a common pattern that we call ambiguous
authentication, which will be explained in detail in Section 5.4.
If we relax the typing constraints on messages, such that variables can contain any
term that is not a tuple or an encryption, the number of attack possibilities increases
dramatically. Attacks in this category are called basic type-flaw attacks. In particular,
many attacks become possible because (session) keys can be mistaken for nonces,
leading to the session keys being revealed. This can also give rise to new
authentication attacks. In our tests, we found 40 attacks exploiting basic type flaws.
112 Chapter 5 Multi-Protocol Attacks
As expected, the situation gets worse when any kind of type-flaw attack is possible.
Random values can now be mistaken for any tuple term or encrypted term. We
found 106 multi-protocol attacks based on all type-flaws.
We also found examples of three-protocol attacks. For example, the Yahalom-Lowe
claim of secrecy of the received session key is correct in isolation, and is also
correct in combination with any single other protocol from the test, but can be broken
in the presence of the Denning-Sacco shared key and Yahalom-BAN protocols if all
type-flaws are possible.
To conclude, we have summarized the influence of the strictness of the matching
function (and thus the susceptibility to type-flaw attacks) in Table 5.2.
Attack example
[Figure: message sequence chart of the Yahalom-Lowe protocol between roles i, r,
and server s: i sends (i, ni); r creates nr and sends {| i, ni, nr |}k(r,s) to s;
s creates the session key ks and sends {| r, ks, ni, nr |}k(i,s) to i and
{| i, ks |}k(r,s) to r; i sends {| i, r, s, nr |}ks; both roles claim secrecy of ks.]
with the names of both agents and the nonces {| A, A, ni]1, ni]2 |}k(A,S) . This mes-
sage is intercepted, concatenated with itself, and sent to the Woo-Lam server S.
The server generates a fresh session key and sends back two (identical) messages
{| A, ni]1, ni]2, ks]3 |}k(A,S) . One of these is redirected to the Yahalom-Lowe i role.
This role is expecting a message of the form {| A, Key, ni]2, Nonce |}k(A,S) , where
Key is a new key and Nonce is a nonce, neither of which he has encountered before;
thus, he cannot tell these terms apart. Because of type confusion, he accepts the
message, under the assumption that ni]1 is the fresh session key and that ks]3 is
the responder nonce. Thus, he sends {| A, A, ni]2, ks]3 |}ni]1 (thereby encrypting
the actual key ks]3 under the nonce ni]1, which he believes to be the session key)
and claims that ni]1 is secret. Because the intruder knows ni]1, this is clearly not
the case. This is an attack on the Yahalom-Lowe i role.
However, we can continue the attack. The intruder intercepts this last message.
Because he knows ni]1, he can decrypt the message, and learns the key ks]3. This
enables him to create the last message that is expected by the Woo-Lam i role. This
role then claims secrecy of ks]3, which is also known to the intruder.
This basic type-flaw attack enables an intruder to break two protocols at the same
time.
[Figure: message sequence chart of a second server-based protocol between roles
i, r, and s, with messages (i, ni), (r, nr), {| i, r, ni, nr |}k(i,s) and
{| i, r, ni, nr |}k(r,s), after which s generates the session key ks; both roles
claim secrecy of ks.]
5.4 Attack scenarios
The experiments have revealed that although many multi-protocol attacks can occur,
their scope is limited if type-flaw attacks are prevented. But even if such attacks
are excluded, two main scenarios remain in which multi-protocol attacks are likely
to occur. We discuss these scenarios in this section, and some preventive measures
in the next section.
Protocol updates
The experiments have shown that multi-protocol attacks are likely for protocols that
use similar messages. We describe here a practical scenario where such similarities
arise in practice.
It often occurs that security protocols turn out to be broken in some way, and that
this is discovered only after the protocol has been deployed. The problem can be
fixed by issuing a security update. This update is effectively a second protocol
that shares the same key structure and is very similar to the first one. Such a
situation makes multi-protocol attacks very likely.
As an example, we show in Figure 5.4 on page 116 a broken authentication protocol.
It is susceptible to a man-in-the-middle attack, similar to the one described in [119].
For our purposes, we only assume this protocol has been distributed to and is being
[Figure: message sequence chart of the basic type-flaw attack, in which agent A
runs the Woo-Lam i role (run 1, nonce ni]1) and the Yahalom-Lowe i role (run 2,
nonce ni]2), the intruder learns ni]1 and ni]2, and the Woo-Lam server S (run 3)
generates the session key ks]3 under the shared key k(A, S).]
used by clients, and that we need to update it with a security fix. The easiest
way to fix the protocol is to replace the name in the first message, resulting in
the protocol in Figure 5.5 on page 117. This is the Needham-Schroeder-Lowe
protocol, which we have seen before in this thesis and which can be proven correct
when run in isolation.
If the broken protocol is updated in such a way that the old version of the protocol
can still be run by some clients, then a multi-protocol attack is possible on the
new protocol, as shown in Figure 5.6 on page 118. In this attack, the intruder uses
two instances of the old protocol (denoted by “Broken i” and “Broken r”) to learn
the value of a nonce ni]1. Then, an instance of the responder role of the new
protocol (“NSL r”) is completed using the initiator role of the old protocol.
Thus, an agent completing the responder role of the new protocol claims that the
nonces ni]1 and nr]3 are secret. Because the nonce ni]1 was already leaked by the
old protocol version, this claim is false.
The cause of the problems is that the messages of the updated protocol often closely
resemble the messages of the original protocol. Because of this, many possibilities
[Figure 5.4: protocol Broken, roles i and r: i creates ni and sends
{| r, ni |}pk(r); r creates nr and replies {| ni, nr, r |}pk(i); i sends
{| nr |}pk(r); both roles claim secrecy of ni and nr.]
are available for an intruder to insert messages from one protocol at unforeseen
places in the other protocol, which opens the door for multi-protocol attacks.
Ambiguous authentication
We use the term “ambiguous authentication” to refer to two or more protocols that
share a similar initial authentication phase. This can lead to ambiguity, where
protocol mismatches occur between communicating partners.
In particular, authentication protocols are often used to set up session keys for
other protocols. The resulting protocol then consists of the sequential execution of
the authentication protocol and the protocol that uses the session key. Often the
same protocols are used for authentication, which are then composed with different
follow-up protocols. In such cases ambiguous authentication can occur: although
the authentication protocol is correct in isolation, there can be a multi-protocol
attack involving different follow-up protocols.
In the experiments, ambiguous authentication occurred frequently among similar
protocols, such as in protocol families, and among broken protocols and their fixed
variants.
We give an example of this phenomenon. Consider the protocol pattern “Service 1”,
as shown in Figure 5.7 on page 119. In this figure, there is a large rectangle denoted
as protocol P . For this rectangle, we can substitute any protocol that authenticates
[Figure 5.5: the Needham-Schroeder-Lowe protocol, roles i and r: i creates ni and
sends {| i, ni |}pk(r); r creates nr and replies {| ni, nr, r |}pk(i); i sends
{| nr |}pk(r); both roles claim secrecy of ni and nr.]
the partners and generates a fresh shared secret. (For example we could insert
here the Needham-Schroeder-Lowe protocol from Figure 5.5, and take either of the
nonces as the fresh secret ta.) This protocol P is then extended by a single message
that sends the secret tb, encrypted with the fresh secret value from the protocol P ,
from the initiator i to the responder r. Given that the protocol P is correct, we can
prove that the complete protocol for Service 1 as a single protocol is correct.
Now we re-use the protocol P to implement another protocol referred to as the
Service 2 protocol. See Figure 5.8 on page 119. Here we again use the same base
protocol, but we extend it by sending a session identifier and some message m. For
the session identifier, we use the fresh random value ta from the base protocol. (If
we substitute Needham-Schroeder-Lowe for P , the protocol for Service 2 is correct
in isolation.)
If we run Service 1 in parallel with Service 2, the combined protocols are broken.
The attack is shown in Figure 5.9 on page 120. In this attack, the intruder simply
re-routes the initial messages from Service 1 to Service 2. A executes her protocol
steps of Service 1 as in a normal execution of the protocol. B, on the other hand,
also executes his steps correctly, but follows Service 2. Because both services use
the same initial sequence of messages, the agents cannot detect that the other agent
is performing steps of the wrong protocol. After this initial phase, A is halfway
into Service 1, and B is halfway into Service 2. Therefore B will now use the
random value ta as a session identifier, effectively revealing it to the intruder.
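The essence of the attack can be sketched as follows. The encoding is hypothetical, and the authentication phase P is omitted, since the intruder merely relays its messages unchanged:

```python
# Sketch (hypothetical encoding): two services extending the same
# authentication protocol P. Because their message prefixes are identical,
# the intruder can re-route A's Service 1 run to B's Service 2 run.

def service1_final(ta, tb):
    # Initiator of Service 1: send the secret tb under the fresh value ta.
    return ("enc", tb, ta)

def service2_final(ta, m):
    # Service 2: use ta as a *plaintext* session identifier.
    return ("plain", ta, m)

ta, tb, m = "ta#2", "tb#1", "m"
intruder_knowledge = set()

# B, believing he runs Service 2, sends ta in the clear:
_, sid, _ = service2_final(ta, m)
intruder_knowledge.add(sid)

# A, believing she runs Service 1, encrypts tb under ta:
_, payload, enc_key = service1_final(ta, tb)
if enc_key in intruder_knowledge:   # the intruder knows ta, so tb is lost
    intruder_knowledge.add(payload)

assert tb in intruder_knowledge     # the secrecy claim of Service 1 fails
```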
[Figure 5.6: the multi-protocol attack on the updated protocol. The intruder E
(knowing sk(E)) uses runs 1 and 2 of the Broken protocol (A in role i with r ↦ B;
B in role r with i ↦ E) to learn the nonce ni]1 from {| ni]1, nr]2, B |}pk(E), and
then completes run 3, B in the NSL responder role with i ↦ A, starting from
{| A, ni]1 |}pk(B); B falsely claims secrecy of ni]1 and nr]3.]
Then, when A uses this value ta as a session key for the secret tb, the intruder can
decrypt it. Thus the security claim of Service 1 is violated.
5.5 Preventing multi-protocol attacks
Analysis of the experiments has also indicated what is required to effectively
prevent multi-protocol attacks. We discuss methods used for type-flaw attack
prevention, which can also be used to prevent multi-protocol attacks, and we
furthermore discuss the verification of multi-protocol attacks.
[Figure 5.7: protocol Service 1, roles i and r: the roles first run an
authentication protocol P in which they agree on a fresh shared secret ta; then i
sends {| tb |}ta, and both roles claim secrecy of tb.]
[Figure 5.8: protocol Service 2, roles i and r: the roles first run the same
authentication protocol P, agreeing on a fresh shared secret ta; then ta is sent
in the clear as the shared session identifier, together with a message m.]
[Figure 5.9: the ambiguous authentication attack: the intruder relays the messages
of protocol P between A (run 1, Service 1, role i, r ↦ B) and B (run 2, Service 2,
role r, i ↦ A); B then sends (ta]2, m), revealing ta]2, after which the intruder
decrypts {| tb]1 |}ta]2 and learns tb]1, violating the secrecy claim on tb]1.]
The experiments detailed here have shown that making sure no type-flaw attacks
can occur prevents many multi-protocol attacks. In fact, 84 percent of the attacks
in the test set cannot occur in a setting without type-flaw errors. Ensuring that
all messages are typed is therefore also a preventive measure against a large class
of multi-protocol attacks. For details on preventing type-flaw attacks we refer
the reader to [99, 118].
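One common way to enforce typing is to tag encrypted messages so that a message from one protocol (or protocol position) cannot be accepted at another. The following sketch uses a hypothetical encoding and tag layout, not a scheme from the experiments:

```python
# Sketch (hypothetical encoding): tagging every encrypted message with a
# (protocol, label) pair, so that a recipient rejects messages replayed from
# a different protocol or from a different position in the same protocol.

def make_msg(proto, label, body, key):
    return ("enc", (proto, label, body), key)

def accept(msg, proto, label, key):
    # A well-typed implementation checks the tag before using the body.
    kind, (p, l, _body), k = msg
    return kind == "enc" and k == key and (p, l) == (proto, label)

m = make_msg("NSL", 1, ("ni", "nr", "r"), "pk(i)")
assert accept(m, "NSL", 1, "pk(i)")
assert not accept(m, "Broken", 1, "pk(i)")   # cross-protocol replay rejected
assert not accept(m, "NSL", 2, "pk(i)")      # wrong position rejected
```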
Verification
5.6 Conclusions
As a second application of the methods and tools developed in this thesis, we propose
a protocol for multi-party authentication for any number of parties, which general-
izes the well-known Needham-Schroeder-Lowe protocol. We show that the protocol
satisfies injective synchronisation of the communicating parties and secrecy of the
generated nonces. For p parties, the protocol consists of 2p − 1 messages, which we
show to be the minimal number of messages required to achieve the desired security
properties in the presence of a Dolev-Yao style intruder with compromised agents.
The underlying communication structure of the generalized protocol can serve as
the core of a range of authentication protocols.
In the context of formal black-box analysis of security protocols, several protocols
have been proposed in order to satisfy forms of mutual authentication. For an
overview of authentication protocols see [54, 174]. The Needham-Schroeder-Lowe
protocol (NSL) is such a protocol, which satisfies even the strongest forms of au-
thentication such as injective synchronisation, and has been studied extensively.
[Figure 6.1: the four-party generalized NSL protocol, roles r0–r3 creating nonces
n0–n3, with messages {| n0, r0, r2, r3 |}pk(r1) ; {| n0, n1, r0, r1, r3 |}pk(r2) ;
{| n0, n1, n2, r0, r1, r2 |}pk(r3) ; {| n0, n1, n2, n3, r1, r2, r3 |}pk(r0) ;
{| n1, n2, n3 |}pk(r1) ; {| n2, n3 |}pk(r2) ; {| n3 |}pk(r3) .]
124 Chapter 6 Generalizing NSL for Multi-Party Authentication
[Figure 6.2: schematic communication structure of the generalized protocol,
showing the second-phase messages MsgB(0), . . . , MsgB(i − 1), MsgB(i), . . . ,
MsgB(p − 2).]
The NSL protocol was designed for two parties who want to authenticate each other,
a setting often referred to as bilateral authentication. In many settings, such as
modern e-commerce protocols, there are three or more parties that need to
authenticate each other. In such a setting we could naively instantiate multiple
NSL protocols to mutually authenticate all partners. For p parties, such mutual
authentication would require p(p − 1)/2 instantiations of the protocol, and three
times as many messages.
The existing methodologies and tools for verifying security protocols in a Dolev-
Yao setting have been focussed on protocols with a fixed number of parties. The
challenge is in proving the proposed parameterized protocol correct, using the formal
model developed here.
We proceed as follows. In Section 6.1 we generalize the Needham-Schroeder-Lowe
protocol to any number of parties. In Section 6.2 we show the security properties
that the protocol satisfies and sketch proofs of correctness and preconditions. We
discuss some variations of the pattern of the generalized protocol in Section 6.3.
In Section 6.4 we show that a previously generalized protocol for symmetrical keys
does not satisfy our authentication requirements. We draw conclusions and discuss
further work in Section 6.5.
The basic idea behind the NSL protocol is that each agent has a challenge-response
cycle to validate the other agent’s identity. These two challenge-response cycles are
linked together by identifying the response of the second agent with its challenge.
Its generalization follows the same line of thinking. Every agent conducts a
challenge-response cycle with its neighbouring agent, while combining its own chal-
lenge with a response to another agent’s challenge whenever possible. We will first
explain the four-party version of the protocol in some detail. Afterwards we give a
generalized specification for p parties.
The four-party protocol goes as follows (see Figure 6.1). First, the initiating agent
chooses which parties he wants to communicate with. He creates a new random
value, n0, and combines this with his name and the names of agents r2 and r3.
The resulting message is encrypted with the public key of r1, and sent to r1. Upon
receipt and decryption of this message, the second agent adds his own name and a
fresh nonce, and removes the name of the next agent in the line from the message.
This modified message is then encrypted with the public key of the next agent and
sent along. This continues until each agent has added his nonce, upon which the
message is sent back to the initiating agent. This agent checks whether the message
contains the nonce he created earlier, and whether all agent names match. Then he
can conclude that the other agents are authenticated. Next, in order to prove his
own identity, he sends a message containing the other agents’ nonces to r1. The
subsequent agents again check whether their own nonces are in the message, remove
this nonce, and pass the resulting message on.
This four-party protocol can be generalized to any number of agents p. In Figure 6.2
we schematically describe the communication structure of the protocol. The abstract
messages are defined below. The function next determines the next role in the list
of participants in a cyclic way. The ordered list AL(x) contains all roles, except for
role x. The protocol makes use of two types of messages. The first p messages are
of type MsgA and the final p − 1 messages are of type MsgB.
For i such that 0 ≤ i < 2p − 1, the protocol message labeled with (p, i) is now
defined by
Msg(i) = MsgA(i) if 0 ≤ i < p
Msg(i) = MsgB(i − p) if p ≤ i < 2p − 1
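The message structure can be illustrated with a small generator. In the following sketch the term encoding and helper names are my own, and the exact layout of MsgA and MsgB is inferred from the four-party instance of Figure 6.1 rather than taken from the formal definition:

```python
# Sketch (layout inferred from the four-party instance): the 2p - 1 messages
# of the generalized NSL protocol for p parties.

def AL(x, p):
    """Ordered list of all roles except role x."""
    return [f"r{j}" for j in range(p) if j != x]

def msg(i, p):
    if 0 <= i < p:                    # type A: nonces so far + agent names
        dest = (i + 1) % p
        nonces = [f"n{j}" for j in range(i + 1)]
        return ("enc", nonces + AL(dest, p), f"pk(r{dest})")
    if p <= i < 2 * p - 1:            # type B: remaining nonces only
        j = i - p                     # this is MsgB(j)
        nonces = [f"n{x}" for x in range(j + 1, p)]
        return ("enc", nonces, f"pk(r{j + 1})")
    raise ValueError(i)

# Four-party instance, matching Figure 6.1:
p = 4
assert msg(0, p) == ("enc", ["n0", "r0", "r2", "r3"], "pk(r1)")
assert msg(3, p) == ("enc", ["n0", "n1", "n2", "n3", "r1", "r2", "r3"], "pk(r0)")
assert msg(4, p) == ("enc", ["n1", "n2", "n3"], "pk(r1)")    # MsgB(0)
assert msg(6, p) == ("enc", ["n3"], "pk(r3)")                # MsgB(2)
assert len([msg(i, p) for i in range(2 * p - 1)]) == 2 * p - 1
```

Note how the generator also exhibits the observation made below: a nonce created by role rx first appears in message x and occurs in exactly p consecutive messages.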
The purpose of the protocol is to achieve authentication of all parties and secrecy
of all nonces, in the presence of a Dolev-Yao intruder that has full control over
the network. We will make this precise in the next section, but first we make two
observations. First, a run of role rx reads messages with labels x − 1 and x + p − 1,
and sends out messages with labels x and x + p. Second, a nonce created by a run
of role rx occurs in p messages, in the messages labeled with i, where x ≤ i < x + p.
The protocol can be deployed in two main ways. First, it can be instantiated for a
specific number of parties, to yield e.g. a four-party authentication protocol. In this
way it can be used instead of custom n-way handshake protocols. Second, it can
be used in its most generic form, letting the initiating role r0 choose the number
of participants p. Agents receiving messages can deduce the chosen p at runtime
from the number of agents in the first message, and their supposed role from the
number of nonces in the message. For the analysis in the next section we use the
generic form, where the number of parties p is not fixed. The properties that we
prove will then automatically hold for specific instantiations as well, where only a
certain number of parties is allowed.
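The runtime deduction can be sketched as follows, under the assumption that a first-phase message body consists of the nonces so far, followed by the names of all roles except the recipient (as in the four-party instance of Figure 6.1):

```python
# Sketch (hypothetical encoding): how the recipient of a first-phase message
# deduces the chosen number of parties p and its own supposed role.

def deduce(nonces, agents):
    p = len(agents) + 1       # all roles are listed except the recipient
    role = len(nonces) % p    # message i carries i + 1 nonces and is sent
    return p, role            # to role i + 1 (mod p)

# The first message of a four-party session, {| n0, r0, r2, r3 |}pk(r1):
assert deduce(["n0"], ["r0", "r2", "r3"]) == (4, 1)
# The fourth message, {| n0, n1, n2, n3, r1, r2, r3 |}pk(r0):
assert deduce(["n0", "n1", "n2", "n3"], ["r1", "r2", "r3"]) == (4, 0)
```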
6.2 Analysis
On the other hand, if any of the communication partners is not able to authenticate
himself, the protocol does not terminate successfully. So, this protocol does not
establish authentication of a subset of the selected agents.
A second observation concerning this protocol is the fact that authentication is only
guaranteed if all agents indicated in the first message of the initiator are trusted.
This means that if the decryption key of any of the agents is compromised, the
other agents in the list can be falsely authenticated. The reason is that e.g. an
agent performing role r0 only verifies the authenticity of agent r1. Verification of
the identity of an agent performing role r2 is delegated to an agent performing
role r1, and so on. This chain of trust is essential to the design of an efficient multi-
party authentication protocol. This does not mean that we restrict the Dolev-Yao
intruder model, since sessions with untrusted agents are still possible. As is the case
for the standard NSL protocol and most other authentication protocols, a session
with a compromised partner does not have to satisfy authentication.
Finally we want to mention that the proof does not only imply correctness for any
specific choice of p. Rather, we prove that the protocol is correct when p is chosen
at run-time by the initiator. Put differently, we prove correctness of the protocol in
an environment with all p-party variants running in parallel.
We will show the proofs of the security claims of the generalized version of Needham-
Schroeder-Lowe in some detail.
Proof outline. The proof exploits the fact that nonces generated by agents are
initially secret, even if the intruder learns them later. Using the secrecy of the
nonce up to the point where it is leaked, we establish a sequence of messages. If
the nonce is leaked at the end of the sequence, we establish a contradiction, which
is the basis of the secrecy proof. Once we know the nonce of a role remains secret,
we can use the same message sequence to establish synchronisation.
We assume a matching function that checks the type of the incoming messages. In
other words, we assume the messages are constructed in such a way that the recipient
can distinguish e.g. a nonce from an agent name. This assumption is addressed in
more detail in Section 6.2.3.
In order to prove our results, we use the notation M (α, i) to denote the knowledge
set of the intruder after executing the events in a trace α, up to but not including
αi . If i ≥ |α| we have that M (α, i) is equal to the knowledge of the intruder after
the execution of all events in the trace.
Recall that local constants (e.g. nonces) created in runs are not known to the intruder
initially. Even if the intruder learns them at some point, there is a point before which
the constant was not known.
Lemma 6.1. Given a trace α, and a run term of the form n]θ, we have that:
Proof. We have that M0 ⊬ n]θ by the definition of the initial intruder knowledge.
Furthermore, the intruder knowledge is non-decreasing according to the operational
semantics rules for the Dolev-Yao intruder (i.e. according to the take and fake
rules).
If a local constant of some run is not known to the intruder, he cannot construct
terms that have this constant value as a direct subterm himself. Thus, these mes-
sages must have been learned somewhere before.
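The derivation relation and the growth of the intruder knowledge can be sketched operationally. The following is a simplified model of my own, covering symmetric encryption only, and is not the thesis' formal inference system:

```python
# Sketch (simplified): a small Dolev-Yao style derivation check M |- t.
# Terms are atoms (strings), ("pair", a, b), or ("enc", m, k).

def derives(M, t, depth=8):
    if t in M or depth == 0:
        return t in M
    # Synthesis: build pairs and encryptions from derivable parts.
    if isinstance(t, tuple) and t[0] in ("pair", "enc"):
        return derives(M, t[1], depth - 1) and derives(M, t[2], depth - 1)
    return False

def analyze(M):
    # Analysis: split pairs, decrypt when the key is known; iterate to fixpoint.
    M = set(M)
    changed = True
    while changed:
        changed = False
        for t in list(M):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]}
            elif isinstance(t, tuple) and t[0] == "enc" and t[2] in M:
                new = {t[1]}
            if not new <= M:
                M |= new
                changed = True
    return M

# A fresh constant inside an encryption stays underivable while the key is
# unknown; once the key is learned, the constant (and terms built from it)
# become derivable.
M0 = analyze({("enc", "n#1", "k"), "other"})
assert not derives(M0, "n#1")
M1 = analyze({("enc", "n#1", "k"), "k"})
assert derives(M1, ("pair", "n#1", "k"))
```

This mirrors the observation above: a term with a secret constant as a subterm cannot be synthesized by the intruder, so it must have been sent by some agent earlier.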
The previous result holds for all protocols in our model. In contrast, the following
lemmas only hold for the specific protocol under investigation.
Lemma 6.2. We assume a matching function that does not allow nonce variables
to be instantiated with encrypted terms. Given a trace α, run terms n]θ, m, k and
an index i, and let αi be a protocol event of the generalized NSL protocol. Given
that n]θ ∈ parts(m), we have that
M(α, i) ⊬ n]θ ∧ αi ∈ ReadRunEv ∧ {| m |}k = cont(αi ) ⇒
∃j : j < i ∧ αj ∈ SendRunEv ∧ {| m |}k = cont(αj )
Proof. Observe that {| m |}k is the generic form of all the read messages cont(αi ) of
the generalized NSL protocol. From the read semantics we conclude that M(α, i) ⊢
{| m |}k . Looking at the components of this term, we find that because M(α, i) ⊬ n]θ
and n]θ ∈ parts(m), it must be the case that M(α, i) ⊬ m. In general, for all
knowledge sets M we have that
as a direct result of the definitions of ⊢ and ⊑. The elements of M (α, i) are either
elements of M0 , or they were used as part of a term for take transitions, which
were sent before. As m cannot be part of M0 (as it contains the nonce), in order
for {| m |}k to be in the role knowledge, it must be a subterm of a term that was
previously sent. We identify this send event with αj . We investigate the protocol
send events, and observe that variables may not be instantiated with encryptions.
This implies that {| m |}k = cont(αj ).
Lemma 6.3. Consider the generalized version of NSL. Let α be a trace, and n]θ
an instantiated constant. If we have
then we have
Proof. Given that runidof (αi ) ≠ θ, the run to which αi belongs is not the run that
created the nonce. Thus, there must be a variable V of this run that is instanti-
ated with the nonce. Variables are assigned only at read events, and thus the run
runidof (αi ) must have a preceding read event αi0 of which the nonce is a subterm.
Based on this event and the fact that the nonce is secret, we find from Lemma 6.2
that there must be a previous send event with identical contents.
Intuitively, the lemma states that if an agent sends out a nonce generated in a run
(which can therefore not be part of M0 ), then it is either its own nonce or it is one
that it learned before from the send of some other agent.
The previous lemma gives us a preceding run for a message that is sent. If we
repeatedly apply this lemma to a similar situation where a nonce is received, we can
establish a sequence of events that must have occurred before a nonce is received.
This is expressed by the next lemma.
Lemma 6.4. Consider the generalized version of NSL. Let α be a trace, and rx be
the identifier of a run executing role x in which a nonce n]rx was created. If we
have
then there exists a non-empty finite sequence of events β such that the events in β
are a subset of the events in α, and
Proof. This lemma is the result of repeated application of the previous lemma.
Because the events in the trace that precede αi are finite, β must also be finite.
Note that the last conjunct expresses that if the nonce is read by a certain run, it
is sent out later by that same run, unless it is the final read event (expressed by
k = |β| − 1).
The resulting sequence of events is a chain of send events that include the nonce,
where each sent message is read by the run of the next send. This lemma is used to
trace the path of the nonce from the creator of a nonce to a recipient of a message
with the nonce, as long as the nonce is not known to the intruder. In other words,
β represents a subset of send events of α through which the nonce n]rx has passed
before αi .
We now derive some additional information from the sequence of events β as estab-
lished by Lemma 6.4. Given a send event e, the type of message is either A or B,
and is denoted as mtype(e).
where mtype(e) yields either A or B for an event e, corresponding to the two message
types of the generalized NSL protocol. Type A contains agent names and nonces,
whereas type B contains only nonces.
Proof. This lemma follows from the protocol rules. When a run creates a nonce, it
is sent out first as part of a message of type A. Messages of type A contain agent
names, whereas messages of type B do not. The run that receives such a message,
sends it out again within a type A message (containing agent names), unless it is
executing role r0, in which case it sends out the nonce within a type B message,
without agent names. After receiving a message without agent names (type B), runs
only send out type B messages.
In the lemma, the function of k is to indicate the point where the sequence changes
from type A to type B, which occurs when an agent in run r0 sends out the type B
message after reading the type A message. It can be the case that no such change
point exists. This is covered by k = |β|: all messages are of type A, and thus we
cannot immediately derive that there is an event of role r0 occurring in β.
Given a run identifier r, we denote the local mapping of roles to agents (which
is used to denote the intended communication partners of a run) with ρ(r). This
allows us to draw some further conclusions. Because the messages in the sequence
β are received as they were sent, and because the messages before k include a list
of agents, we deduce:
• The runs executing the events β0 . . . βk have the same parameter p (the number
of agents in the messages plus one) and each run has the same agent mapping
ρ.
• Given the parameter p and the number of nonces in a message, we can uniquely
determine the role of a message.
Lemma 6.6. Given a sequence β resulting from applying Lemma 6.4 for a nonce
created in a run rx executing role x, and an index k resulting from Lemma 6.5 we
have
Proof. All messages of type A explicitly contain the agent names, except for the
name of the agent whom the message is for, which is encoded in the public key that
is used. The number of agents defines the parameter p for both the send and read
event, and combined with the number of nonces, uniquely defines the role of which
the send and read events must be a part.
For all previous lemmas we required as a precondition that some nonce was secret.
In order to identify the point up to which a given nonce is secret, we have the
following lemma:
Lemma 6.7. For the generalized version of NSL, given a trace α, and a nonce n]rx
that was created by a run rx, and a trace index k:
Proof. Observe that for the generalized version of the NSL protocol, the initial
intruder knowledge only contains the secret keys of the untrusted agents. All mes-
sages that are sent by a run ry are encrypted with public keys of agents from the set
rng(ρ(ry)). Thus, from the inference rules we find that the intruder knowledge can
only derive the contents of such a message if a run communicates with untrusted
agents.
Based on the previous lemmas we can prove that nonces generated in a run that
performs role r0, and tries to communicate with trusted agents only, are kept secret.
Lemma 6.8. Given a trace α and a nonce n]rx created by a run rx in role r0, we
have that
rng(ρ(rx)) ⊆ Agent T ⇒ ∀i : M(α, i) ⊬ n]rx
communication with (un)trusted agents in the runs. We apply Lemmas 6.4 and 6.5
to yield a sequence of events β and an index k.
We split cases based on the type of message of the send event at αj . We distinguish
two cases, and show that both lead to a contradiction.
• The message sent at αj is of type B, so it does not contain agent names. Be-
cause the nonce n]rx was not created by the run ry, it must have been received
before. If we look at the protocol definitions, we see that each send of message
B is preceded by a read of a message containing one extra nonce, the one cre-
ated by the run. Thus, there must be a read event αj2 , j2 < j, with message
{| [n]ry, . . . , n]rx, . . .] |}pk(...) , where n]ry is the nonce created by run ry. Because
this message again contains n]rx, the intruder could not have created this mes-
sage himself, and an agent must have sent it. If we look at the messages in the
sequence β and the protocol rules, we find that there must exist an index k2,
such that runidof (βk2 ) = ry (where n]ry was sent out first), and that k2 < k
on the basis of Lemma 6.5. If we combine this with Lemma 6.6, we arrive at
ρ(ry) = ρ(rx), which yields a contradiction.
Given that the secrecy of nonces generated by role r0 holds, the following is straight-
forward:
Proof. We sketch the proof, which is not difficult given the secrecy of the nonce.
Given a trace α with a run rx executing role r0, we have that the nonce n generated
by this role is secret on the basis of Lemma 6.8. Thus, if the agent completes his
run, there must have been two indices j and i, j < i, such that αj = send(p,0) (m)
and αi = read(p,p−1) (m′ ). If we use Lemma 6.4 and Lemma 6.5 we find that the
events in the sequence β are exactly the events that are required to exist for the
synchronisation of the role r0. The messages of these events are received exactly as
they were sent, which is required for synchronisation. This leaves us with only one
proof obligation. We have to show that the sequence β contains all the messages of
type A, in the right order, after the start of run rx, and before the end of run rx.
This follows directly from the role assignment in Lemma 6.6.
6.2 Analysis 133
Lemma 6.10. Given a trace α, a nonce n]rx created by a run rx executing role x
with x > 0, we have that
Proof. Proof by contradiction, similar to that of Lemma 6.8. Assume the nonce n]rx
was created by a run rx executing role x, and is leaked by a run ry to the intruder
in some trace α. We have rng(ρ(rx)) ⊆ Agent T and rng(ρ(ry)) ⊈ Agent T . We use
Lemma 6.4 and Lemma 6.5 to yield a sequence β and index k. We distinguish two
cases:
Theorem 6.11 (Secrecy of all nonces). For the generalized version of the NSL
protocol, we have that all nonces created in runs rx for which we have rng(ρ(rx)) ⊆
Agent T , are secret.
For non-injective synchronisation of role rx for x > 0, we not only have to prove that
all messages of type A have occurred as expected, but also all messages MsgB(x),
and that there is so-called run-consistency: for each role ry we want there to be a
single run that sends and receives the actual messages.
Lemma 6.12. Non-injective synchronisation holds for role rx, where x > 0.
Proof. Proof sketch: based on the secrecy of the nonce generated in such a role,
we determine an index k, and sequence β that precedes the last read event of the
role, with role(β0 ) = rx. Because the sequence must include an event of role r0, for
which non-injective synchronisation holds, we merge the sequences for both (as in the
previous lemma). This gives us a complete sequence of send and read events which
exactly meet the requirements for non-injective synchronisation. The messages of
these events are received exactly as they were sent. The requirement of the existence
of all messages of type A follows from Lemma 6.6, which allows us to conclude that
there is a run for each role in the protocol. Furthermore, the nonce of each of these runs is present in the message sent at αk. If we examine the protocol rules, we see
that the message of type B is only accepted by runs whose own nonce is contained
in the message. Therefore we have that the run executing role r1 must be equal to runidof(αk+1), and that the read event must be labeled MsgB(0). Similarly,
we establish that the correct messages have been consistently read and sent by the
runs that created the nonces. Thus all conditions for non-injective synchronisation
are met.
Theorem 6.13 (Non-injective synchronisation). For the generalized version
of the NSL protocol, we have that for all runs rx with rng(ρ(rx)) ⊆ Agent T , non-
injective synchronisation holds.
6.2.3 Observations
Type-flaw attacks. We have assumed that type-flaw attacks are not possible, i.e.
agents can verify whether an incoming message is correctly typed. There are several
reasons for doing this.
Without this assumption, there are type-flaw attacks on the generalized version of
the protocol. This is not restricted to simple ones for specific instances of p, but also
multi-protocol type-flaw attacks involving instances for several choices of p in one
attack, as in the previous chapter. Thus, we find that typing is crucial.

[Figure: MSC of the two-party NSL protocol, with roles a and b and nonces na, nb: a sends {| na, a |}pk(b) to b; b replies with {| na, nb |}pk(a); a responds with {| nb |}pk(b).]

Solutions for preventing type-flaw attacks using type information are examined in detail in e.g. [99].
Such type information can be easily added to each message, but a simple labeling
will also suffice. If we add a tuple (p, l) before each message inside the encryption,
where p is the number of participants for this instance, and l is the label of the
message, the protocol becomes robust against type-flaw attacks and multi-protocol
attacks with other instances of itself. In the proof, we used the fact that the label
could be derived from the type. If we explicitly add such a label, the proof works
in the same way for untyped models, except that the label is now explicit instead
of being derived from the type.
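The labeling countermeasure described above can be made concrete. The following Python sketch is purely illustrative (the thesis works in a symbolic term model, not in code, and the names are ours): it shows how prefixing every encrypted payload with the tuple (p, l) lets a recipient reject a message replayed under a different label, or one coming from an instance with a different number of participants.

```python
# Illustrative sketch of the (p, label) tagging countermeasure.
# Message bodies are plain tuples; encryption itself is left abstract.

def tag(p, label, body):
    """Prefix the payload with (p, label) before encryption."""
    return ((p, label), body)

def accept(expected_p, expected_label, payload):
    """A recipient only accepts a payload carrying the expected tag."""
    tag_part, body = payload
    return body if tag_part == (expected_p, expected_label) else None

# A type-flaw or multi-protocol attack replays a message under the wrong
# label, or for a different instance size p; the tag check rejects both.
msg = tag(4, "MsgA2", ("n0", "n1", "n2"))
assert accept(4, "MsgA2", msg) == ("n0", "n1", "n2")
assert accept(3, "MsgA2", msg) is None   # wrong number of participants
assert accept(4, "MsgB1", msg) is None   # wrong message label
```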
Using the tools described in Chapter 4, we have established that the type-flaw
attacks are not due to the specific ordering of the nonces and agent names within
the messages. In particular, we examined different options for the message contents
(without adding labels), such as reversing the order of either the agent or the nonce
list, interleaving the lists, etc. We established the existence of type-flaw attacks for
some choices of p for all variants we constructed.
A lower bound on the number of messages. Suppose that p parties take part in the protocol, and consider the first message sent by each of the roles.
If we take rx to be the last of the p roles that becomes active in the protocol, it must
be the case that before rx sends his first message, at least p − 1 messages have been
sent. Adding this to the p messages that must have been sent after that message,
yields a lower bound of 2p − 1 messages.
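The counting argument above amounts to a one-line formula; as an illustrative sanity check (the function name is ours):

```python
def message_lower_bound(p):
    # Before the last of the p roles sends its first message, the other
    # p - 1 roles must each have sent at least one message; after that
    # message, p further messages must still be sent, giving 2p - 1.
    return (p - 1) + p

# For p = 2 this yields 3, the number of messages of the original NSL
# protocol.
assert message_lower_bound(2) == 3
assert message_lower_bound(4) == 7
```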
The communication structure from Figure 6.2 can be instantiated in several different ways so as to obtain authentication protocols satisfying different requirements. In this
section we list some of the more interesting possibilities.
Generalized Bilateral Key Exchange. First, we observe that the nonces generated
in the protocol are random and unknown to the adversary, which makes them suit-
able keys for symmetric encryption. Furthermore, if we examine the proofs, the
authentication of the messages is only derived from the encryption of the messages
of type A, not of type B. Similar to the Bilateral Key Exchange protocol (BKE) as described in [54], we can opt to replace the asymmetric encryption for the messages of type B by symmetric encryption with the nonce of the recipient. We can then omit this nonce from the list, and use a designated constant to represent the empty list. This yields the following message definitions. Figure 6.4 illustrates the four-party BKE protocol.
nlist(i) = [n(i+2) . . . n(p−1)]   if i < p − 1
nlist(i) = the empty list          if i = p − 1

MsgA(i) = {| [n0 . . . ni], AL(next(i)) |}pk(next(i))
MsgB(i) = {| nlist(i) |}n(i+1)
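These definitions can be rendered symbolically. The Python sketch below is illustrative only, under two assumptions drawn from Figure 6.4: role indices are 0-based with next(i) = (i + 1) mod p, and the agent list AL(j) omits the recipient's own name; encryption is kept abstract as tagged tuples.

```python
# Symbolic sketch of the generalized BKE message definitions.
# Assumptions (from Figure 6.4): roles r0 .. r(p-1), next(i) = (i+1) mod p,
# and the agent list AL(j) omits the recipient's own name.

def next_role(i, p):
    return (i + 1) % p

def agent_list(j, p):
    return [f"r{k}" for k in range(p) if k != j]

def nlist(i, p):
    """[n(i+2) .. n(p-1)] for i < p - 1; the empty list for i = p - 1."""
    return [f"n{j}" for j in range(i + 2, p)]

def msg_a(i, p):
    """MsgA(i) = {| [n0 .. ni], AL(next(i)) |}pk(next(i))."""
    j = next_role(i, p)
    return ("pk", f"r{j}", [f"n{k}" for k in range(i + 1)] + agent_list(j, p))

def msg_b(i, p):
    """MsgB(i) = {| nlist(i) |}n(i+1): symmetric, keyed by the recipient's nonce."""
    return ("sym", f"n{i + 1}", nlist(i, p))

# The four-party instance reproduces the messages of Figure 6.4:
assert msg_a(0, 4) == ("pk", "r1", ["n0", "r0", "r2", "r3"])
assert msg_b(0, 4) == ("sym", "n1", ["n2", "n3"])
assert msg_b(2, 4) == ("sym", "n3", [])  # the empty-list message {| |}n3
```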
Using private keys. If secrecy of the nonces is not required, we can use the private
key of the sender of a message for encryption, instead of the public key of the
receiver. This gives the following protocol.
[Figure 6.4: the four-party BKE protocol. Roles r0, r1, r2, r3 (with nonces n0, n1, n2, n3) exchange, in order: {| n0, r0, r2, r3 |}pk(r1); {| n0, n1, r0, r1, r3 |}pk(r2); {| n0, n1, n2, r0, r1, r2 |}pk(r3); {| n0, n1, n2, n3, r1, r2, r3 |}pk(r0); {| n2, n3 |}n1; {| n3 |}n2; {| |}n3.]
[Figure: the four-party variant using private keys. Roles r0, r1, r2, r3 (with nonces n0, n1, n2, n3) exchange, in order: {| n0, r1, r2, r3 |}sk(r0); {| n0, n1, r0, r2, r3 |}sk(r1); {| n0, n1, n2, r0, r1, r3 |}sk(r2); {| n0, n1, n2, n3, r0, r1, r2 |}sk(r3); {| n1, n2, n3, r1, r2, r3 |}sk(r0); {| n2, n3, r0, r2, r3 |}sk(r1); {| n3, r0, r1, r3 |}sk(r2).]
Rearranging message contents. In the proofs of correctness, we have used some (but
not all) information that distinguishes the messages in the protocol. In particular,
we used:
• The ordered agent list AL(). We used this to derive the parameter p from an
incoming message, and the order in the list is required to be able to derive
that the agent list of the sender is identical to the agent list of the recipient.
• The list of nonces. We used the number of nonces to derive the role an agent
is supposed to assume (given p).
A direct consequence of this is that the exact order of the agent list and nonce list is
not relevant. We could e.g. redefine messages of type A so as to start with a reversed list of roles, followed by the list of nonces.
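The two items above can be made concrete with a small Python sketch (the function name and list representation are ours, not the thesis's): what a recipient can derive from a decrypted message of type A is independent of the exact ordering of the two lists.

```python
# Sketch: what a recipient can derive from a decrypted type-A message.
# Assumption (as in Figure 6.4): the agent list omits the recipient's own
# name, so p = len(agent_list) + 1. MsgA(i) carries the nonces n0 .. ni,
# so the number of nonces identifies the sending role i, regardless of
# the order in which either list is transmitted.

def interpret_msg_a(nonces, agent_list):
    p = len(agent_list) + 1
    sender_role = len(nonces) - 1
    return p, sender_role

assert interpret_msg_a(["n0"], ["r0", "r2", "r3"]) == (4, 0)
# Reversing both lists changes nothing: only their lengths matter.
assert interpret_msg_a(["n2", "n1", "n0"], ["r2", "r1", "r0"]) == (4, 2)
```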
As a second observation, we note that besides the distinct type of each message,
the proof did not depend on the fact that there is no other content inside the
encryption besides nonces and agent names. Thus, we can add any payload inside
the encryption, as long as we ensure that it cannot be confused with an agent term
or a nonce.
This opens up several possibilities for establishing e.g. keys between pairs of agents inside the generalized NSL protocol. Next, we discuss one such option.
6.4 Attacks on two previous multi-party authentication protocols

In this section we show why the two multi-party authentication protocols in [45] achieve their intended goals, but do not satisfy agreement, and therefore do not satisfy synchronisation either. We want to mention explicitly that this was not a goal of the protocol designers. Their notion of authentication is what is also known as recent aliveness.
From Definition 1 in the paper [45], recent aliveness is weaker than agreement in the sense that there are no requirements on the role that B is playing, on any shared data, or on whether messages are sent and received correctly.
The protocols in [45] also differ in setup when compared to ours. Whilst we assume
agents have private and public keys, they assume all agents share symmetric keys
before starting the protocol. Here we will write k(r0, r1) to denote the key shared
by the agent in role r0 and the agent in role r1.
In the paper we find two protocols. They both satisfy recent aliveness, which is
established by a challenge-response loop, and the authors make sure no reflection
attack (which can arise from using symmetric keys) is possible. However, neither
of them satisfies agreement or synchronisation. In particular, there can be much
confusion about who is playing which role. Here we show that for both protocols, the
instances for three parties already do not satisfy stronger notions of authentication
such as agreement and synchronisation.
[Figure 6.6: the first protocol from [45], instantiated for three parties. Roles r0, r1, r2 (with nonces n0, n1, n2) exchange, in order: n0; n1, {| r1, n0 |}k(r1,r2); n2, {| r2, n1, r1, n0 |}k(r0,r2); {| r0, n2, r2, n1 |}k(r0,r1); {| r1, r0, n2 |}k(r1,r2).]
[Figure 6.7: the second protocol from [45], instantiated for three parties. Roles r0, r1, r2 (with nonces n0, n1, n2) exchange, in order: n0; n1, {| r1, n0 |}k(r0,r1); n2, {| r2, n1 |}k(r1,r2), {| r2, {| r1, n0 |}k(r0,r1) |}k(r0,r2); {| r0, n2 |}k(r0,r2), {| r0, {| r2, n1 |}k(r1,r2) |}k(r0,r1); {| r1, {| r0, n2 |}k(r0,r2) |}k(r1,r2).]
The first protocol is shown in Figure 6.6, and we show an attack in Figure 6.8. In
the attack, A completes role r0 up to the claim, after starting a session in which he
wants to talk to B in role r1 and C in role r2. However, although B and C are indeed
performing actions (guaranteed by the recent aliveness), it is clear that B performs
role r1 thinking he is talking to D, and not to A. Furthermore, A ends with two nonces, Nonce1 and Nonce2, which can both be determined by the intruder. Clearly, no strong authentication property is satisfied here.
The second protocol is shown in Figure 6.7, and we show an attack in Figure 6.9.
In this particular attack, A again starts a session with B and C. At the end of the
attack, there is no one performing role r1. B thinks A is executing role r1, and C
thinks F is executing role r1, neither of which is true. Furthermore, A ends with a
nonce Nonce2 which can be determined by the intruder.
We therefore conclude that neither of the two protocols from [45] satisfies agreement or synchronisation.
6.5 Conclusions
[Figure 6.8: attack on the first protocol ([Id 9] Protocol bunava13, role R0, claim type Nisynch).
Run #1: A in role R0 (R0 -> A, R1 -> B, R2 -> C); creates n0#1; Var n1 -> Nonce1, n2 -> Nonce2.
  send_1 to B: n0#1
  read_3 from C: (Nonce2, { C,Nonce1,B,n0#1 }k(A,C))
  send_4 to B: { A,Nonce2,C,Nonce1 }k(A,B)
  claim_A2: Nisynch
Run #3: B in role R1 (R0 -> D, R1 -> B, R2 -> C); creates n1#3; Var n0 -> n0#1; the intruder fakes sender D.
  read_1 from D: n0#1
  send_2 to C: (n1#3, { B,n0#1 }k(B,C))
Run #2: C in role R2 (R0 -> A, R1 -> B, R2 -> C); creates n2#2; Var n1 -> Nonce1, n0 -> n0#1; the intruder selects { B,n0#1 }k(B,C).
  read_2 from B: (Nonce1, { B,n0#1 }k(B,C))
  send_3 to A: (n2#2, { C,Nonce1,B,n0#1 }k(A,C))]
[Figure 6.9: attack on the second protocol ([Id 11] Protocol bunava23, role R0, claim type Nisynch).
Run #1: A in role R0 (R0 -> A, R1 -> B, R2 -> C); creates n0#1; Var T0 -> Ticket1, n2 -> Nonce2.
  send_1 to B: n0#1
  read_3 from C: (Nonce2, Ticket1, { C,{ B,n0#1 }k(A,B) }k(A,C))
  send_4 to B: ({ A,Nonce2 }k(A,C), { A,Ticket1 }k(A,B))
  claim_A2: Nisynch
Run #3: B in role R2 (R0 -> D, R1 -> A, R2 -> B); creates n2#3; Var T2 -> Ticket2, n1 -> n0#1; the intruder combines messages.
  read_2 from A: (n0#1, Ticket2)
  send_3 to D: (n2#3, { B,n0#1 }k(A,B), { B,Ticket2 }k(D,B))
Run #2: C in role R2 (R0 -> A, R1 -> F, R2 -> C); creates n2#2; Var T2 -> { B,n0#1 }k(A,B), n1 -> Nonce4; the intruder selects { B,n0#1 }k(A,B), fakes sender F, and redirects to C.
  read_2 from F: (Nonce4, { B,n0#1 }k(A,B))
  send_3 to A: (n2#2, { C,Nonce4 }k(F,C), { C,{ B,n0#1 }k(A,B) }k(A,C))]
Related Work
In the previous chapters we have developed theoretical models and tools, which have
been applied in various settings. In this chapter we discuss related work. During
the last twenty years, much research has been performed in the area of black-box
analysis of security protocols. We discuss security protocol models and security
properties in Section 7.1. We proceed by addressing related work on some recent
verification methods in Section 7.2. We close off by discussing other related work,
including multi-protocol analysis and multi-party protocols in Section 7.3.
We will briefly compare our approach to some prominent approaches: BAN logic
(because of its historic interest), Casper/FDR (because it has powerful tool support),
Strand Spaces (because this approach has much in common with ours), the Protocol
Composition Logic (as a recent promising development), and the Spi calculus (as a
process calculus based approach). This selection is by no means exhaustive, but is meant to be representative of the approaches that are currently used.
BAN logic
Casper/FDR
Developed originally by Gavin Lowe, the Casper/FDR tool set as described in [121]
is not a formal security protocol semantics, but a model checking tool. However,
as the input is translated into a CSP process algebraic model, there is an implicit
semantics. The reason we mention it here is that Casper/FDR is a mature tool set, and none of the other semantics we mention has such a tool set available. For Casper/FDR many interesting security properties have been formulated in terms of CSP models (see e.g. [122]), and some of these have subsequently been adopted in other models. However, for Casper/FDR there is no explicit formal semantics
of the protocol language and properties except in terms of CSP. Casper/FDR has
been used for case studies on many protocols as reported e. g. in [80].
Strand Spaces
The Strand Spaces approach [182] is closely related to the use of Message Sequence
Charts which we advocate for the description of protocols and protocol runs. The
approach provides an elegant way to reason about protocol executions in the presence of a Dolev-Yao intruder, and has been used to prove many theoretical results
[96, 181, 95, 78, 180] and is the basis of several tools [140, 57, 173]. Roughly, the
main difference is that we provide a totally ordered formal semantics that makes
the relation between a protocol and its behaviour explicit, whereas Strand Spaces
describe protocol execution using a partial order on the events. The notion of a
strand is similar to our notion of run, and a strand space is the set of all possible combinations of strands, reflecting our semantic model of interleaved runs. The
notion of a bundle closely corresponds to our notion of realizable trace pattern, and
allows one to effectively reason about classes of traces. In Strand Spaces, a protocol
is modeled as a collection of (parameterized) strands, with additional constraints
that imply which parameters denote locally generated values (nonces), and which
denote variables. Strand Spaces seem to be very tightly linked to the Dolev-Yao
intruder model and although the intruder is modeled as a collection of strands, just
like normal agents, it is not easy to vary over the intruder network model. With
respect to the security properties, we mention that both secrecy and agreement
are expressible in the Strand Spaces model, but no immediate mechanism exists to
formalize injectivity or individual order on events, as required for injective synchronisation. Another difference with the operational semantics presented here is the
way in which some subtleties are handled, e. g. nonce generation, the handling of
initial intruder knowledge and trusted/untrusted agents. In our model, such concepts are explicit and solved at a syntactical level where possible, whereas in the
Strand Spaces model these are often formalized as additional requirements on e. g.
nonce occurrence (that somehow exist outside the base model), or encoded as the
parameters of a strand. In the same vein, there is no strict distinction between
local constants and variables within a strand. Thus, it seems that Strand Spaces is
mainly focused on describing the protocol execution, and only implicitly concerned
with protocol description. As a result, the exact relation between the protocol and
its execution is only implicitly defined.
Spi calculus
Each security protocol model has its own distinct means for defining security properties. Furthermore, specifying security properties is a research area in itself, and
models and logic have been developed with the specific purpose of modeling security
properties, such as e. g. [59, 86, 165].
Looking in more detail at specific security properties, we find that secrecy is handled in a similar way by the majority of models. For authentication properties,
the situation is more complicated. Historically, many different interpretations of
authentication exist for communication protocols. Early authentication concepts
simply mention that one “end” of a communication channel can assure itself regard-
ing the identity of the other end, as in e.g. [145, 159]. An identity is considered to
be an “end”. These concepts seem very similar to Lowe’s later definition of aliveness
in [122]. For this form of authentication, it is only required that the party to be
authenticated performs some action to prove his identity (i.e. applying his secret
key) regardless of the context, or whom he is proving his identity to. This is a
rather weak form of authentication.
Some more advanced notions of authentication can be found in [113], for example,
where mechanisms are sketched to prevent message stream modification. This can
be prevented by achieving three subgoals: determine message authenticity, integrity,
and ordering. Although no formalisation is provided, this concept is akin to our
notion of non-injective synchronisation.
The international standard ISO/IEC 9798-1 [87] states that authentication requires
verification of an entity’s claimed identity. In response, Gollmann points out in [90]
that the concept of a sender of a message should be treated with caution, and
that replays should be prevented. Gollmann argues that authentication of an entire
communication session is done by first setting up a session key, and that further
messages are authenticated on the basis of this key. Based on this assumption,
Gollmann identifies four authentication goals. These goals explicitly assume that
the protocols are implemented using private keys and session keys. Our definition
of injective synchronisation is independent of protocol details, though. Therefore,
it can be applied to a wider range of protocols.
An alternative formulation of authentication is given by Diffie, van Oorschot, and Wiener in [76]. Here, participants are required to have matching message histories. The
message history is allowed to be partial, if the last message sent is never received,
which corresponds to our notion of messages that causally precede a claim. This
notion corresponds closely to non-injective agreement.
Intensional specifications
A notable exception is the definition of authentication by Adi, Debbabi and Mejri in [3]. Their authentication property requires a strict order on the messages. Furthermore, injectivity of the runs is required.
The authentication definition of [3] differs from injective synchronisation on two
main points. First, it has a parameter, consisting of the set of communications to
be authenticated. In our work, this parameter is fixed: it is defined as the set of
communications that causally precede the claim event. We argue that this choice
results in the strongest possible form of authentication. If the parameter is chosen
to be a proper subset of the causally preceding communications set, it can be shown
that the authentication is strictly weaker than injective synchronisation for the normal intruder model.
A second difference is that the authentication definition is strictly tailored for protocols involving two parties that communicate directly with each other. Thus, it
cannot straightforwardly be used to express that two parties authenticate each other
when they only communicate via e.g. a server. Also, it is not clear how it generalises
to multi-party settings.
Extensional specifications
Injectivity
With respect to the analysis and verification of injectivity, we note that most approaches implement Lowe's definition of injectivity by way of a counting strategy: in
any possible execution of the protocol the number of initiator runs may not exceed
the number of corresponding responder runs. This counting argument can easily
be used in a model-checking approach. Indeed, this is how injectivity is verified in
the Casper/FDR tool chain [121, 164]. Since it is only possible to model a finite
and fixed number of scenarios, this approach will only provide an approximation
of injectivity. Other approaches to the verification of security protocols, e.g. those
based on logics (such as [44]) or on term rewriting (such as [89]) do not consider
injectivity. The Strand Spaces [182] approach does not provide formal means to deal
with injectivity. Instead, it is proposed to check authentication based on nonces, for
example by using so-called solicited authentication tests as defined in [96]. These
tests guarantee injectivity, based on nonce freshness. Authentication and injectivity
are strongly connected in this view. In the HLPSL framework used by the Avispa
tool set [104], the definition of agreement includes a parameter, typically a nonce,
over which injectivity is established. For this approach, the injectivity is strongly
connected to the notion of freshness of the parameter. If the correct parameter is
chosen, this can be used to verify that injective agreement holds. A drawback of
this method can be that it is focused on the Dolev-Yao intruder model: for weaker
intruder models, injective agreement might hold even when there is no fresh value
to agree upon, which cannot be captured by this notion of agreement.
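The counting strategy described above can be sketched in a few lines of Python. This is an illustrative model only, not the Casper/FDR implementation: we assume each completed run can be summarised as its role together with the data it claims agreement on.

```python
# Sketch of the counting strategy for (Lowe-style) injective agreement:
# in any execution, the number of initiator runs completing with some
# agreed data must not exceed the number of matching responder runs.
from collections import Counter

def injective_by_counting(runs):
    initiators = Counter(data for role, data in runs if role == "init")
    responders = Counter(data for role, data in runs if role == "resp")
    return all(initiators[d] <= responders[d] for d in initiators)

# One initiator matched by one responder: the check passes.
assert injective_by_counting([("init", "x"), ("resp", "x")])
# Two initiator runs agreeing on the same data, but only one responder
# run: a replay, so injectivity fails.
assert not injective_by_counting([("init", "x"), ("init", "x"), ("resp", "x")])
```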
We mention two examples of security protocol formalisms that deal explicitly with
injectivity. Gordon and Jeffrey have devised a method [106] to verify injective cor-
respondence relations for the π-calculus. This method allows for verification by
type-checking of (injective) correspondences. A second example can be found in the
protocol logic by Adi, Debbabi and Mejri [3], where pattern matching is used to
express injectivity for two-party protocols. However, it is not clear how verification
can be done efficiently.
each other, as one strictly requires decrypt and encrypt events, and the other does
not.
The current generation of security protocol analysis methods works with abstract
black-box security protocol models that are based on abstractions (of computer
networks and cryptographic operations). Even with these simplifying abstractions,
the problem of whether a protocol actually provides the security properties it has
been designed for is undecidable [81]. Despite this fact, over the last two decades a
wide variety of security protocol analysis tools have been developed that are able to
detect attacks on protocols or, in some cases, establish their correctness.
Model Checking. The second kind of verification centers around the use of model
checkers [121, 57, 52, 143, 133, 173, 30], which are fully automatic. We distinguish
three classes: tools that attempt verification (proving a protocol correct), those
that attempt falsification (finding attacks), and hybrids that attempt to provide
both proofs and counterexamples.
The first class of tools, aimed at verification, typically relies on encoding protocols as Horn clauses and applying resolution-based theorem proving to them (without termination guarantee). Analysis tools of this kind include NRL [133] and ProVerif [30].
In contrast to verification, the second class of tools detects protocol errors (i. e. attacks) using model checking [121, 143] or constraint solving [52, 57]. Model checking attempts to find a reachable state where some supposedly secret term is learnt by the intruder, or in which an authentication property fails. Constraint solving uses symbolic representations of classes of such states, using variables that have to satisfy certain constraints (e. g. the term {| m |}k must be inferable from the intruder knowledge). To ensure termination, these tools usually bound the maximum number of
runs of the protocol that can be involved in an attack. Therefore, they can only
detect attacks that involve no more runs of the protocol than the stated maximum.
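The reachability question underlying both approaches rests on intruder deduction. The following minimal Python sketch (our own term representation, far simpler than any of the cited tools) closes a set of observed terms under unpairing and symmetric decryption with known keys, and then asks whether a secret has been learnt:

```python
# Minimal Dolev-Yao style deduction sketch. Terms are nested tuples:
# ("pair", a, b) for pairing and ("enc", key, body) for symmetric
# encryption; everything else is treated as atomic.

def close(knowledge):
    """Close the intruder knowledge under projection and decryption."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            derived = []
            if isinstance(t, tuple) and t[0] == "pair":
                derived = [t[1], t[2]]           # project both components
            elif isinstance(t, tuple) and t[0] == "enc" and t[1] in known:
                derived = [t[2]]                 # decrypt with a known key
            for d in derived:
                if d not in known:
                    known.add(d)
                    changed = True
    return known

observed = {("enc", "k1", ("pair", "k2", "n1")), "k1", ("enc", "k2", "secret")}
assert "secret" in close(observed)   # k1 opens k2, and k2 opens the secret
assert "other" not in close(observed)
```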
In the third class, attempts to combine model checking with elements from theorem
proving have resulted in backward-search-based model checkers. These use pruning
theorems, resulting in hybrid tools that in some cases can establish correctness of a
protocol (for an unbounded number of sessions) or yield a counterexample, but for
which termination cannot be guaranteed [173]. Scyther falls into this last category,
but is guaranteed to terminate.
Tools
Because the model checking approach has proven very successful in the last years,
the vast majority of existing tools is based on model checking rather than on theorem
proving. We briefly describe some of the basic ideas behind the most recent tools.
The selection presented here is by no means exhaustive; rather, we have tried to
include representatives of each type of verification method.
Some earlier approaches rely on general purpose verification tools:
Isabelle. Paulson [152] has proposed to state security properties such as secrecy
as predicates (formalized in higher-order logic) over execution traces of the
protocol, without limitations on the number of agents. These predicates can
be verified by induction with automated support provided by the Isabelle proof
assistant [149].
Casper/FDR. FDR [124] is a model checker for the CSP process algebra. Roughly
speaking, FDR checks whether the behaviors of a CSP process associated with
a protocol implementation are included in the behaviors allowed by its specification. FDR is provided with a user-friendly interface for security protocol analysis through the Casper front-end [121].
NRL. In the NRL Protocol Analyzer [133], the protocol steps are represented as
conditional rewriting rules. NRL invokes a backward search strategy from
some specified insecure state to see if it can be reached from an initial state.
It has been used for verification of e. g. the Internet Key Exchange protocol
[134]. Unfortunately, NRL is not publicly available.
Athena. The Athena [173] tool is an automatic checking algorithm for security
protocols. The algorithm described in [172] served as a starting point for the
development of Scyther. It is based on the Strand Spaces model [182] and,
when terminating, provides either a counterexample if the formula under examination is false, or establishes a proof that the formula is true. Alternatively,
Athena can be used with a bound (e. g. on the number of runs), in which case
termination is guaranteed, but it can guarantee at best that there exist no
attacks within the bound. Unfortunately, Athena is not publicly available.
ProVerif. In ProVerif [30], protocol steps are represented by Horn clauses. The
system can handle an unbounded number of sessions of the protocol but performs some approximations (on random numbers). As a consequence, when the system claims that the protocol preserves the secrecy of some value, this is correct; however, it can generate false attacks too. Recently an algorithm was developed [4] that attempts to reconstruct attacks in case the verification procedure fails, adding the possibility of falsification to ProVerif.
LySatool. The LySatool [34] implements security protocol analysis based on a
process algebra enhanced with cryptographic constructs. The approach is
based on over-approximation techniques and can verify confidentiality and
authentication properties.
Constraint solver. Based on [140], in which verification in the Strand Spaces
model is translated into a constraint solving problem, an efficient constraint
solving method was developed in [57]. The method uses constraint solving, optimized for protocol analysis, and a minimal form of partial order reduction, similar to the one used in [56]. A second version of this tool does not use
partial order reduction, enabling it to verify properties of the logic PS-LTL
[59].
OFMC. The On-the-Fly Model Checker (OFMC, [14]) is part of the AVISPA tool set [7], and is a model checker for security protocols. It combines infinite-state forward model checking with the concept of a lazy intruder [11], where terms are generated on demand during the forward model-checking process. A technique called constraint differentiation [12] is employed to avoid exploring similar states in different branches of the search, which is similar to the ideas in [64]. It furthermore supports user-defined algebraic theories [13], allowing for correct modeling of e. g. Diffie-Hellman exponentiation.

Table 7.1: Overview of the automatic tools.

Tool name            publicly available   falsification   verification (bounded)   verification (unbounded)   termination
Casper/FDR           yes                  yes             yes                      no                         yes
ProVerif             yes                  yes¹            no                       yes                        yes
Constraint solver    yes                  yes             yes                      no                         yes
LySatool             yes                  no              no                       yes                        yes
OFMC                 yes                  yes             yes                      no                         yes
Scyther              yes                  yes             yes                      yes                        yes
NRL                  no                   no              no                       yes                        yes
Athena (bounded)     no                   yes             yes                      no                         yes
Athena (unbounded)   no                   yes             no                       yes                        no
In Table 7.1 we give a brief overview of the automatic tools mentioned here. Of
the tools mentioned here, only NRL and Athena are not publicly available, shown
in the second column of the table. The third column shows whether a tool can
perform falsification, i. e. whether or not it can find attacks. Finding attacks can
be useful for understanding or repairing a protocol, as an attack often suggests
what the nature of the problem with the protocol is. The next column, verification, reports which tools can ensure that a protocol is correct within the model. We
distinguish two cases. Some tools can perform unbounded verification: for some
protocols, they can establish that the property holds within the security protocol
model of the tool. Other tools (typically model-checking based approaches) can
only perform bounded verification: when such a tool finds no attack, this means
that there is no attack within a certain bound, e. g. no attack which involves fewer than three runs. The final column indicates whether the verification procedure
of the tool is guaranteed to terminate.
A more subtle advantage of Scyther over the majority of existing tools is that there
is no need to specify so-called scenarios. A scenario is commonly used for model-
checking methods, and is a way to limit the state space, and is thus comparable to
our concept of maximum number of runs.² In most cases (as e. g. in Casper/FDR), one tailors the scenario in such a way that it closely resembles the attack, to reduce verification time (or even to make verification feasible). For some tools, the scenarios even partially encode the security properties that one wants to verify (e. g. the constraint solver, Athena). The creation of such scenarios is subtle and error-prone. If set up incorrectly, the result can be that attacks are missed.

¹ Recently an attack reconstruction tool was introduced for ProVerif that in many cases can reconstruct an attack when the verification procedure fails [4].
² These can be used in a similar way as the maximum number of runs, thus avoiding the specification of scenarios.
Although Scyther has many advantages over existing methods, it currently has a
smaller scope than some of the other tools in the table. In particular, Scyther
currently cannot handle Diffie-Hellman exponentiation or the use of the exclusive-
or operator. Thus, for protocols that include such operations, which are not yet
supported by Scyther, other state-of-the-art tools such as the AVISPA tool set
would be an appropriate choice.
With regard to multi-protocol attacks, there is only very limited related work. The
concept of multi-protocol attacks originates in [111]. Given a correct protocol, a
procedure is sketched to construct a second protocol. Both protocols can be shown
to be correct in isolation, but when these two protocols are executed over the same network, the intruder can use messages from one protocol to attack the other protocol. An analysis of the interaction between sub-protocols of a single protocol was performed by Meadows in [134]. Some detailed examples of multi-protocol attacks for public-key infrastructures have been given in [5]. For a particular class of
attacks (so-called guessing attacks) the multi-protocol attack concept was further
refined in [128], where preventive measures are also discussed.
With respect to preventing multi-protocol attacks, we observe that most countermeasures that prevent type-flaw attacks, as in [99], introduce a mechanism to ensure that messages are sufficiently distinct. These mechanisms effectively add distinct tags to each message, in order to ensure that a message sent from a send event with label ℓ can only match with the pattern of the read event with the same label ℓ. If such tags are unique throughout a collection of protocols, as suggested in [62], the "disjoint encryption" conditions suggested by Guttman and Thayer in [95] are met.
Consequently, protocols cannot interfere with each other, preventing multi-protocol
attacks.
Alternative requirements addressing protocol interference in various settings are
found in [93, 48, 47].
Regarding multi-party authentication, we find that much research has been performed in the cryptographic setting, e. g. [9, 8, 41]. As a result, these types of analysis usually only address the problem of a static adversary, and do not consider the dynamic behaviour of the protocol and the adversary. These protocols are typically assumed to employ a multicast primitive, and based on this primitive their
complexity can be analyzed, as in e.g. [109, 137]. Unfortunately the protocols in
this category are designed to meet different goals than the protocols presented here,
and are therefore difficult to compare to our approach.
In black-box analysis of security protocols, protocols usually consist of two or three
roles only. All the tools mentioned previously assume the protocols have a fixed
number of participants, and are not suited for analysis of parameterized protocol
specifications. Therefore, they cannot be used to analyze multi-party protocols in
general, but they can be used to analyze specific instances of such protocols. For
example, ProVerif [29] has been used to analyze instances of the GDH protocols
from [9]. In spite of the success of the black-box analysis methods, few multi-party
protocols have been constructed in this setting. As a notable exception we would
like to mention [45], where the authors construct two challenge-response protocols
for any number of parties. However, the protocols described there do not satisfy
synchronisation or agreement, as was shown in Section 6.4.
On the borderline between the cryptographic approach and the formal Dolev-Yao
approach, the Bull protocol from [43] provides an interesting example. This protocol
is flawed, as shown in [163], although a more abstract version was shown to be correct
in [150].3 Unlike the generalized NSL protocol, this protocol is based on agents
sharing a symmetric key with a server, and furthermore the server is involved in
each session.
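The algebraic gap noted in footnote 3 is easy to see concretely. The following hedged sketch (with arbitrary illustrative values) shows why exclusive-or cannot be treated as black-box encryption: XOR is self-inverse, so anyone who knows the "ciphertext" and the plaintext immediately recovers the key.

```python
# Why XOR fails as black-box encryption: it is self-inverse, so a known
# plaintext leaks the key. Values are arbitrary illustrative integers.

key = 0x5EC2E7          # the "encryption key"
msg = 0x1CAFE           # the plaintext term

cipher = key ^ msg      # "encrypting" with XOR

assert cipher ^ key == msg  # decryption, as intended
assert cipher ^ msg == key  # but knowing the plaintext reveals the key
```

A proper black-box encryption operator admits only the first equation; the second one is exactly the kind of additional algebraic equality that enables the attacks of [163] on the concrete Bull protocol.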
Recently, a corpus of multi-party protocols has been established as part of the Coral
project [175], aiming to provide a reference set for multi-party protocol analysis.
Regarding proving authentication protocols correct, there have been some recent
attempts to simplify such proofs. For example, one successful approach is to use
static analysis of the protocol to prove authentication properties, as described in
e.g. [42]. However, the notions of authentication used there are weaker than syn-
chronisation or agreement, and the methods used there do not seem suitable for
proving synchronisation.
Another approach to proving authenticity is taken in [50], where a simple logic is
developed to prove authentication, assuming that some secrecy properties hold. The
idea is then to delegate the proof of the secrecy properties to a dedicated secrecy
logic. While this is a promising idea, in this particular case we have seen that
the lemmas used to prove secrecy (i.e. those yielding a sequence of send events)
are also used to prove authenticity. The proof structure thus suggests that
strictly splitting the proof into two (one part for secrecy, one for authentication)
would duplicate a large part of the proof.
Finally, we would like to mention the preliminary work related to the development of the
models and tools in this thesis. In particular, the development of the semantics has
been guided by conclusions drawn from the work in [148]. An early version of the
Scyther tool was based on partial order reduction as developed in [64], using an
algorithm similar to the one developed in [169]. Development of the Scyther tool
has benefited from the work on attack analysis in [102], of which some heuristics
have found their way into the tool.
3 In this particular case, the cryptographic implementation differs in a significant way from
the abstract version, which allows for new attacks: the exclusive-or operator does not satisfy the
properties required for black-box encryption, because under certain conditions it is possible to
retrieve the key from an encrypted term.
8 Conclusions and Future Work
8.1 Conclusions
Detailed conclusions were drawn at the end of each chapter. Here, we return to the
research question posed in the introduction:
Research question: How to integrate developments in state-of-the-art black-box
security protocol analysis into a formal semantics with tool support?
As shown by this thesis, we can conclude that the model presented here meets our
requirements. Although some issues remain for future work (detailed in the next
section), we have successfully integrated a novel analysis model with corresponding
tool support. Both the model and the tool have contributed to new results.
The semantics give rise to authentication properties with a number of theoretical
results, and have been used for the design and verification of multi-party authentica-
tion protocols. The tool has been used for research as well as education. It has been
used for the verification of existing and new protocols, and has been used to obtain
new results on multi-protocol attacks. In teaching, the semantics have been used
as a model to explain security protocol analysis. The Scyther tool has been used
to give students hands-on experience with protocol verification, and subsequently
make them aware of some of the pitfalls of protocol design, by verifying existing
protocols, or protocols they designed themselves.
In a practical sense, the work here (in the line of other black-box analysis results)
shows that the design of correct security protocols is a difficult task. The current
increase in electronic communications has resulted in the widespread application of
security protocols. Our results with regard to multi-protocol analysis have a direct
impact on current practices in internet protocols (protocols for payments or
transmission of private data) and embedded systems design (protocols for smart
cards or cellular phones). For the protocols used in these areas, the problem of
multi-protocol attacks has barely been addressed. The semantics and tools devel-
oped here can help to verify these protocols. The methodology can also be used to
educate developers about the subtle problems of protocol design. As such, the work
here is at least useful in three phases: (1) during the education of developers, (2)
during the protocol development process itself, and also (3) by enabling the process
of assessment and certification of protocols.
The subjects in this thesis have been presented in the same order in which they
have been researched. Work started from a formal model, with intuitive basic
concepts, and an operational semantics. This led to the development of new secu-
rity properties. Afterwards, tool support was developed, which helped in validating
the methodology. In a later stage, the case studies produced new results, and
serve as a further confirmation of the validity of the model.
8.2 Summary of contributions
Some of the contributions made in this thesis are collected in Table 8.1 on the facing
page, and discussed in more detail below.
In Chapter 2 a new security protocol model for defining security protocols and their
behaviour was presented. The model makes protocol execution explicit by means
of an operational semantics, defining the set of traces (possible behaviours) of a
protocol. The result is a role-based security protocol model, with a parameterized
intruder model. The initial intruder knowledge is derived from the protocol descrip-
tion, as opposed to being defined separately. Within the protocol model, security
properties are modeled as local claim events.
The model was extended with the definitions of several trace-based security proper-
ties in Chapter 3. This included reformulation of existing notions of secrecy, as well
as definitions of existing properties such as agreement and liveness. Furthermore,
we introduced two new strong authentication properties: injective and non-injective
synchronisation. The security properties have been related to each other by means
of a hierarchy. A specific syntactic property, the LOOP property, was defined,
and a theorem was proven which states that non-injectively synchronising protocol
roles that also satisfy LOOP must also satisfy injective synchronisation.
Based on the security protocol model, we introduce the concept of trace classes in
Chapter 4, which can be used to reason about sets of traces. We combine reasoning
about traces with the basic ideas behind the existing Athena algorithm developed
by Song. We develop an improved algorithm, which can be used to verify security
properties or find attacks. The algorithm is semantically sound with respect to the
security protocol model. We establish a link between the output of the algorithm
and the notion of characterization as put forward by Dogmi, Guttman and Thayer.
The new algorithm improves on the Athena algorithm in a number of ways. As
a result of the new underlying semantics, security properties are defined as single
events, as opposed to being encoded in error-prone scenarios, and the algorithm
can cope with e. g. composed keys and multiple key structures. Contrary to the
Athena algorithm, the new algorithm is guaranteed to terminate, whilst retaining
completeness of the results for the vast majority of protocols.
We have implemented the algorithm in a prototype called Scyther. The performance
of the tool is strongly connected to two parameters. The first is the heuristic (which
was undocumented for the Athena algorithm). We investigated several options and
have shown their influence on the performance. The second parameter is the bound
on the number of runs, which is used to guarantee termination. We have investigated
the consequences of particular choices for this parameter. The prototype has been
in active use for over two years and has been successfully applied to both existing
and new verification problems, and used for education.
In Chapter 5 we gave a definition of multi-protocol attacks. Subsequently, the
Scyther tool was used for the automated analysis of multi-protocol attacks on a
set of protocols from literature. This has resulted in the discovery of several new
attacks. From the tests, we have determined likely multi-protocol attack scenarios.
A further application of the model and tool was examined in Chapter 6, where
a link was established between the minimal number of messages in a multi-party
protocol, and injective synchronisation. The Needham-Schroeder-Lowe protocol was
generalized to a family of multi-party authentication protocols, which are minimal
in the number of messages, and which satisfy synchronisation. A parameterized
proof was given for the correctness of the secrecy and synchronisation claims of the
generalized version of Needham-Schroeder-Lowe. The developed protocols can serve
as an efficient basis for multi-party synchronising protocols.
8.3 Future work
Although significant progress has been made, there is still a lot of work to be done.
We discuss some future work related to semantics, verification and applications.
Semantics
The semantics presented here serve as a basis for further study. For the core of the
model we have chosen the essential ingredients needed for security protocol analysis.
Some interesting future work remains.
Strict formalisation in a theorem proving environment: The model presented in
Chapter 2 was developed to serve as a core model for security protocol analysis,
and make the relations between important concepts precise. However, given the
complexity of the resulting model, a strict formalisation (in theorem proving envi-
ronments such as PVS [147] or Isabelle [149]) is of interest, e. g. to prove that the
model is consistent. In a second step, this would also facilitate interactive proofs of
properties of the model.
Additional algebraic operators: There are mechanisms used by protocols that are
not part of the core model yet, such as Diffie-Hellman exponentiation, and the
exclusive-or operator.
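What makes these operators hard to add is that they come with algebraic equations that a free-term model cannot express. As a hedged illustration (toy parameters of our own choosing, not a real group), the defining property of Diffie-Hellman exponentiation equates two syntactically distinct terms:

```python
# Diffie-Hellman commutativity: (g^a)^b = (g^b)^a mod p. The two sides
# are different terms, yet denote the same value, which is exactly what
# a free-term model fails to capture. Toy parameters for illustration
# only; real use requires a large safe prime.

p, g = 23, 5            # small illustrative group
a, b = 6, 15            # the two parties' secret exponents

half_a = pow(g, a, p)   # A sends g^a
half_b = pow(g, b, p)   # B sends g^b

# Both parties compute the same shared key via different terms:
assert pow(half_b, a, p) == pow(half_a, b, p)
```

Supporting such protocols in the model therefore requires reasoning modulo these equations, rather than over free terms alone.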
Additional security properties: Here we have only presented some forms of secrecy
and authentication. There are other security properties which we expect to be easily
integrated into the model, such as e. g. non-repudiation.
Flow-control: For some protocols, it is convenient to express flow control operations
such as branching and looping.
Time: Adding a notion of time to the model is an interesting future option. We
conjecture that adding a useful notion of time to the model adds a significant amount
of complexity to the semantics.
Extending the semantics should be a straightforward process for additional operators
and security properties, and flow control.
Verification
Note that any changes in the semantics also have consequences for the verification
process. We conjecture that it is possible to adapt the current algorithm for efficient
verification of protocols that use Diffie-Hellman exponentiation and exclusive-or
operations.
Verification of arbitrary trace-based security properties: The characterization algo-
rithm is not specifically tailored to any security property, and can be used as a
back-end by other tools for analysis of a large class of security properties. Scyther
can be used to generate XML descriptions of the complete characterizations of trace
patterns, which can be used as input for other verification tools. We expect that
it is possible to efficiently verify formulas of certain security protocol logics using
complete characterizations. A likely candidate would be e. g. PS-LTL as presented
in [59].
Decidability and the unbounded algorithm: The characterization algorithm decides
the security (i. e. establishes correctness for any number of runs, or finds an attack)
for more than 90 percent of the tested protocols. For the remaining protocols, the
algorithm finds that there exists no attack within the bounds. This is in line with
the undecidability of the problem in general. However, the algorithm structure can
serve as a basis to prove decidability of certain properties for a subclass of protocols.
If one can prove that the unbounded algorithm terminates for a certain set of trace
patterns of protocols, this would imply that authentication properties are decidable
for these protocols (and that the algorithm is a decision procedure for them).
Applications
There are more possible applications of the methods developed here. We suggest
some possible future applications.
Minimal protocols: In Chapter 6 we presented some protocols that used a minimal
number of messages required for establishing multi-party injective synchronisation.
Similarly, we conjecture that there are lower bounds on the number of messages
required to satisfy any security goal for a given number of parties. Furthermore,
minimality in messages is not the only possible notion of minimality. Alternatives
include e. g. computational cost, or required storage. Although some of these con-
cerns have been addressed for simple broadcast protocols in a cryptographic setting,
we conjecture that it is more efficient to address these at the black-box abstraction
level. Ultimately this can lead to establishment of a suite of protocols, all of which
are provably correct. From this set of protocols, and given a minimality criterion,
a developer can simply select the best protocol, instead of having to design a new
(and possibly flawed) protocol.
Attack taxonomy: If we compare two different attack traces (of a particular claim in
a protocol), they are usually similar. In fact, for the responder role of the Needham-
Schroeder protocol, there were only two explicit trace patterns: the trace pattern
capturing the normal execution of the protocol, and a trace pattern representing all attacks.
To conclude: Security is a very difficult thing to get right, but also critical for many
applications. The final conjecture is: Security cannot be solved by trial-and-error:
each supposed improvement can result in a new fault. Security can only be solved
by being very strict, something for which formal methods are very well suited.
Bibliography
[1] M. Abadi and A.D. Gordon. A calculus for cryptographic protocols: The Spi
calculus. Information and Computation, 148:1–70, 1999.
[2] M. Abadi and M. Tuttle. A semantics for a logic of authentication. In Proc.
10th ACM Symposium on Principles of Distributed Computing (PODC), pages
201–216, Montreal, 1991. ACM Press.
[3] K. Adi, M. Debbabi, and M. Mejri. A new logic for electronic commerce
protocols. Theoretical Computer Science, 291:223–283, 2003.
[4] X. Allamigeon and B. Blanchet. Reconstruction of attacks against crypto-
graphic protocols. In Proc. 18th IEEE Computer Security Foundations Work-
shop (CSFW), pages 140–154, Aix-en-Provence, France, June 2005. IEEE
Computer Society.
[5] J. Alves-Foss. Multiprotocol attacks and the public key infrastructure. In
Proc. 21st National Information Systems Security Conference, pages 566–576,
Arlington, 1998.
[6] S. Andova, C.J.F. Cremers, K. Gjøsteen, S. Mauw, S.F. Mjølsnes, and
S. Radomirović. Sufficient conditions for composing security protocols, 2006.
In preparation.
[7] A. Armando, D. Basin, Y. Boichut, Y. Chevalier, L. Compagna, L. Cuellar,
P.H. Drielsma, P. Heám, O. Kouchnarenko, J. Mantovani, S. Mödersheim,
D. von Oheimb, M. Rusinowitch, J. Santiago, M. Turuani, L. Viganò, and
L. Vigneron. The AVISPA tool for the automated validation of internet se-
curity protocols and applications. In Proc. Computer Aided Verification’05
(CAV), volume 3576 of Lecture Notes in Computer Science, pages 281–285.
Springer, 2005.
[8] G. Ateniese, M. Steiner, and G. Tsudik. Authenticated group key agreement
and friends. In Proc. 5th ACM Conference on Computer and Communications
Security, pages 17–26. ACM Press, 1998.
[9] G. Ateniese, M. Steiner, and G. Tsudik. New multiparty authentication ser-
vices and key agreement protocols. IEEE Journal on Selected Areas in Com-
munications, 18(4):628–639, 2000.
[14] D. Basin, S. Mödersheim, and L. Viganò. OFMC: A symbolic model checker for
security protocols. International Journal of Information Security, 4(3):181–
208, 2005.
[17] G. Bella, F. Massacci, and L.C. Paulson. Verifying the SET registration proto-
cols. IEEE Journal on Selected Areas in Communications, 21(1):77–87, 2003.
[20] G. Bella and L.C. Paulson. Using Isabelle to prove properties of the Kerberos
authentication system. In H. Orman and C. Meadows, editors, Workshop on
Design and Formal Verification of Security Protocols. DIMACS, 1997.
[21] G. Bella and L.C. Paulson. Kerberos version IV: Inductive analysis of the se-
crecy goals. In J.-J. Quisquater, Y. Deswarte, C. Meadows, and D. Gollmann,
editors, Proc. 5th European Symposium on Research in Computer Security
(ESORICS), volume 1485 of Lecture Notes in Computer Science, pages 361–375.
Springer, 1998.
[22] G. Bella and L.C. Paulson. Mechanical proofs about a non-repudiation proto-
col. In R.J. Boulton and P.B. Jackson, editors, Proc. International Conference
on Theorem Proving in Higher Order Logics (TPHOL), volume 2152 of Lecture
Notes in Computer Science, pages 91–104. Springer, 2001.
[23] M. Bellare and P. Rogaway. Provably secure session key distribution: the three
party case. In Proc. 27th Annual ACM Symposium on Theory of Computing
(STOC), pages 57–66. ACM Press, 1995.
[27] K. Bhargavan, C. Fournet, and A.D. Gordon. A semantics for web services
authentication. Theoretical Computer Science, 340(1):102–153, 2005.
[32] A. Bleeker and L. Meertens. A semantics for BAN logic. In Proc. DIMACS
Workshop on Design and Formal Verification of Security Protocols, 1997.
[33] S.C.C. Blom, J.F. Groote, S. Mauw, and A. Serebrenik. Analysing the BKE-
security protocol with µCRL. In Proc. of the 6th AMAST Workshop on Real-
Time Systems (ARTS 2004), volume 139 of Electronic Notes in Theoretical
Computer Science, pages 49–90. Elsevier ScienceDirect, 2005.
[34] C. Bodei, M. Buchholtz, P. Degano, F. Nielson, and H. Nielson. Static val-
idation of security protocols. Journal of Computer Security, 13(3):347–390,
2005.
[35] C. Bodei, P. Degano, R. Focardi, and C. Priami. Authentication via local-
ized names. In Proc. 12th IEEE Computer Security Foundations Workshop
(CSFW), pages 98–110. IEEE Computer Society, 1999.
[36] C. Bodei, P. Degano, R. Focardi, and C. Priami. Primitives for authentication
in process algebras. Theoretical Computer Science, 283(2):271–304, 2002.
[37] D. Bolignano. Towards a mechanization of cryptographic protocol verification.
In Proc. Computer Aided Verification’97 (CAV), volume 1254 of Lecture Notes
in Computer Science, pages 131–142. Springer, 1997.
[38] C. Boyd. Towards extensional goals in authentication protocols. In Proc.
DIMACS Workshop on Design and Formal Verification of Security Protocols,
1997.
[39] C. Boyd and A. Mathuria. Protocols for Authentication and Key Establish-
ment. Information Security and Cryptography. Springer, 2003. ISBN: 3-540-
43107-1.
[40] C. Boyd and J. Nieto. Round-optimal contributory conference key agreement.
In Proc. of the 6th International Workshop on Theory and Practice in Public
Key Cryptography, volume 2567 of Lecture Notes in Computer Science, pages
161–174. Springer, 2003.
[41] E. Bresson, O. Chevassut, D. Pointcheval, and J.J. Quisquater. Provably
authenticated group Diffie-Hellman key exchange. In Proc. 8th ACM Confer-
ence on Computer and Communications Security, pages 255–264. ACM Press,
2001.
[42] M. Bugliesi, R. Focardi, and M. Maffei. Authenticity by tagging and typing.
In Proc. 2nd ACM Workshop on Formal Methods in Security Engineering
(FMSE), pages 1–12. ACM Press, 2004.
[43] J. Bull and D. Otway. A nested mutual authentication protocol. Operating
Systems Review, 33(4):42–47, 1999.
[44] M. Burrows, M. Abadi, and R. Needham. A logic of authentication. ACM
Transactions on Computer Systems, 8(1):18–36, 1990.
[45] L. Buttyán, A. Nagy, and I. Vajda. Efficient multi-party challenge-response
protocols for entity authentication. Periodica Polytechnica, 45(1):43–64, April
2001.
[46] C. Caleiro, L. Viganò, and D. Basin. Deconstructing Alice and Bob. Proc. of
the Workshop on Automated Reasoning for Security Protocol Analysis, AR-
SPA 2005, 135(1):3–22, 2005.
[51] P.C. Cheng and V.D. Gligor. On the formal specification and verification
of a multiparty session protocol. In 1990 IEEE Symposium on Security and
Privacy, pages 216–233. IEEE Computer Society, 1990.
[54] J.A. Clark and J.L. Jacob. A survey of authentication protocol literature.
http://citeseer.ist.psu.edu/clark97survey.html, 1997.
[55] E. Clarke, S. Jha, and W. Marrero. Partial order reductions for security proto-
col verification. In Tools and Algorithms for the Construction and Analysis of
Systems, volume 1785 of Lecture Notes in Computer Science, pages 503–518.
Springer, 2000.
[56] E.M. Clarke, S. Jha, and W. Marrero. Verifying security protocols with Bru-
tus. ACM Transactions on Software Engineering and Methodology, 9(4):443–487,
2000.
[57] R. Corin and S. Etalle. An improved constraint-based system for the verifica-
tion of security protocols. In M.V. Hermenegildo and G. Puebla, editors, Proc.
9th International Static Analysis Symposium (SAS), volume 2477 of Lecture
Notes in Computer Science, pages 326–341, Spain, Sep 2002. Springer.
[58] R. Corin, S. Etalle, P.H. Hartel, and A. Mader. Timed model checking of se-
curity protocols. In Proc. 2nd ACM Workshop on Formal Methods in Security
Engineering (FMSE), pages 23–32. ACM Press, 2004.
[59] R.J. Corin, A. Saptawijaya, and S. Etalle. A logic for constraint-based security
protocol analysis. In 2006 IEEE Symposium on Security and Privacy, Los
Alamitos, California, 2006. IEEE Computer Society.
[60] C.J.F. Cremers. The Scyther tool: Automatic verification of security proto-
cols. http://www.win.tue.nl/~ccremers/scyther/.
[62] C.J.F. Cremers. Feasibility of multi-protocol attacks. In Proc. of The First In-
ternational Conference on Availability, Reliability and Security (ARES), pages
287–294, Vienna, Austria, April 2006. IEEE Computer Society.
[63] C.J.F. Cremers, V. Issarny, and S. Mauw, editors. STM’05, Proc. of the first
international workshop on security and trust management. Electronic Notes in
Theoretical Computer Science. Elsevier ScienceDirect, Italy, September 2005.
[64] C.J.F. Cremers and S. Mauw. Checking secrecy by means of partial order
reduction. In D. Amyot and A.W. Williams, editors, SAM 2004: Security
Analysis and Modelling, volume 3319 of Lecture Notes in Computer Science,
pages 177–194, Ottawa, Canada, September 2004. Springer.
[68] C.J.F. Cremers, S. Mauw, and E.P. de Vink. Formal methods for security
protocols: Three examples of the black-box approach. NVTI newsletter, 7:21–
32, 2003. Newsletter of the Dutch Association for Theoretical Computing
Scientists.
[69] C.J.F. Cremers, S. Mauw, and E.P. de Vink. A syntactic criterion for injectiv-
ity of authentication protocols. In P. Degano and L. Vigano, editors, ARSPA
2005, volume 135 of Electronic Notes in Theoretical Computer Science, pages
23–38. Elsevier ScienceDirect, July 2005.
[70] C.J.F. Cremers, S. Mauw, and E.P. de Vink. Injective synchronisation: an
extension of the authentication hierarchy. Theoretical Computer Science, 2006.
In preparation.
[71] A. Datta, A. Derek, J.C. Mitchell, and D. Pavlovic. Secure protocol composi-
tion. In Proc. 1st ACM Workshop on Formal Methods in Security Engineering
(FMSE), pages 11–23. ACM Press, 2003.
[72] A. Datta, A. Derek, J.C. Mitchell, and D. Pavlovic. Abstraction and refine-
ment in protocol derivation. In Proc. 17th IEEE Computer Security Founda-
tions Workshop (CSFW), pages 30–45. IEEE Computer Society, 2004.
[73] A. Datta, A. Derek, J.C. Mitchell, and D. Pavlovic. A derivation system
and compositional logic for security protocols. Journal of Computer Security,
13(3):423–482, 2005.
[74] D. Denning and G. Sacco. Timestamps in key distribution protocols. Com-
munications of the ACM, 24(8):533–536, 1981.
[75] X. Didelot. COSP-J: A compiler for security protocols, 2003.
http://web.comlab.ox.ac.uk/oucl/work/gavin.lowe/Security/Casper/COSPJ/secu.pdf.
[76] W. Diffie, P.C. van Oorschot, and M.J. Wiener. Authentication and authen-
ticated key-exchanges. Designs, Codes and Cryptography, 2(2):107–125, 1992.
[77] D. Dill. The Murφ verification system. In Proc. Computer Aided Verifi-
cation’96 (CAV), volume 1102 of Lecture Notes in Computer Science, pages
390–393. Springer, 1996.
[78] S. Dogmi, J.D. Guttman, and F.J. Thayer. Skeletons and the shapes of bun-
dles. http://www.ccs.neu.edu/home/guttman/skeletons.pdf, 2006.
[79] D. Dolev and A.C. Yao. On the security of public key protocols. IEEE
Transactions on Information Theory, 29(2):198–208, March 1983.
[80] B. Donovan, P. Norris, and G. Lowe. Analyzing a library of security protocols
using Casper and FDR, July 1999.
[81] N. Durgin, P.D. Lincoln, J.C. Mitchell, and A. Scedrov. Undecidability of
bounded security protocols. In Proc. of the FLOC’99 Workshop on Formal
Methods and Security Protocols (FMSP’99), 1999.
[82] N.A. Durgin and J.C. Mitchell. Analysis of security protocols. In M. Broy
and R. Steinbruggen, editors, Calculational System Design, pages 369–395.
IOS Press, 1999.
[83] N.A. Durgin, J.C. Mitchell, and D. Pavlovic. A compositional logic for protocol
correctness. In Proc. 14th IEEE Computer Security Foundations Workshop
(CSFW), pages 241–272, 2001.
[86] R. Focardi and F. Martinelli. A uniform approach for the definition of security
properties. In World Congress on Formal Methods (1), volume 1708 of Lecture
Notes in Computer Science, pages 794–813. Springer, 1999.
[88] C. Fournet, A.D. Gordon, and S. Maffeis. A type discipline for authorization
policies. In M. Sagiv, editor, Proc. 14th European Symposium on Programming
(ESOP), volume 3444 of Lecture Notes in Computer Science, pages 141–156,
Edinburgh, UK, 2005. Springer.
[91] L. Gong. Variations on the themes of message freshness and replay—or the
difficulty of devising formal methods to analyze cryptographic protocols. In
Proc. 6th IEEE Computer Security Foundations Workshop (CSFW), volume
3835, pages 131–136. IEEE Computer Society, 1993.
[94] J.F. Groote and A. Ponse. The syntax and semantics of µCRL. In A. Ponse,
C. Verhoef, and S.F.M. van Vlijmen, editors, Algebra of Communicating Pro-
cesses ’94, Workshops in Computing Series, pages 26–62. Springer, 1995.
[95] J.D. Guttman and F.J. Thayer. Protocol independence through disjoint en-
cryption. In Proc. 13th IEEE Computer Security Foundations Workshop
(CSFW), pages 24–34. IEEE Computer Society, 2000.
[96] J.D. Guttman and F.J. Thayer. Authentication tests and the structure of
bundles. Theoretical Computer Science, 283(2):333–380, 2002.
[97] C. He and J.C. Mitchell. Analysis of the 802.11i 4-way handshake. In WiSe
’04: Proc. of the 2004 ACM workshop on Wireless security, pages 43–50. ACM
Press, 2004.
[99] J. Heather, G. Lowe, and S. Schneider. How to prevent type flaw attacks on
security protocols. Journal of Computer Security, 11(2):217–244, 2003.
[100] N. Heintze, J.D. Tygar, J.M. Wing, and H.-C. Wong. Model checking elec-
tronic commerce protocols. In Proc. USENIX 1996 Workshop on Electronic
Commerce, pages 147–166. USENIX Association, 1996.
[103] G.J. Holzmann. The model checker SPIN. IEEE Transactions on Software
Engineering, 23(5):279–295, 1997.
[107] J.R. Burch, E.M. Clarke, K.L. McMillan, D.L. Dill, and L.J. Hwang. Symbolic
Model Checking: 10^20 States and Beyond. In Proc. 5th Annual IEEE Sympo-
sium on Logic in Computer Science (LICS), pages 1–33, Washington, D.C.,
1990. IEEE Computer Society. citeseer.lcs.mit.edu/burch90symbolic.html.
[108] M. Just and S. Vaudenay. Authenticated multi-party key agreement. In ASI-
ACRYPT: Advances in Cryptology – ASIACRYPT: International Conference
on the Theory and Application of Cryptology, volume 1163 of Lecture Notes
in Computer Science, pages 36–49. Springer, 1996.
[109] J. Katz and M. Yung. Scalable protocols for authenticated group key exchange.
In Proc. of Crypto’03, volume 2729 of Lecture Notes in Computer Science,
pages 110–125. Springer, 2003.
[110] C. Kaufman, R. Perlman, and M. Speciner. Network security: private commu-
nication in a public world (2nd ed.). Computer Networking and Distributed
Systems. Prentice Hall PTR, 2002.
[111] J. Kelsey, B. Schneier, and D. Wagner. Protocol interactions and the chosen
protocol attack. In B. Christianson, B. Crispo, T.M.A. Lomas, and M. Roe,
editors, Proc. 5th International Workshop on Security Protocols, volume 1361
of Lecture Notes in Computer Science, pages 91–104. Springer, 1997.
[112] R. Kemmerer, C. Meadows, and J. Millen. Three systems for cryptographic
protocol analysis. Journal of Cryptology, 7:79–130, 1994.
[113] S.T. Kent. Encryption-based protection for interactive user/computer com-
munication. In SIGCOMM ’77: Proc. of the fifth symposium on Data com-
munications, pages 5.7–5.13. ACM Press, 1977.
[114] Y. Kim, A. Perrig, and G. Tsudik. Group key agreement efficient in commu-
nication. IEEE Transactions on Computers, 53, 2004.
[115] L. Lamport. Password authentication with insecure communication. Commu-
nications of the ACM, 24(11):770–772, 1981.
[116] N.Y. Lee and M.F. Lee. Comments on multiparty key exchange scheme. Op-
erating Systems Review, 38(4):70–73, 2004.
[117] Y.R. Lee, H.S. Sook, and H.K. Lee. Multi-party authenticated key agreement
protocols from multi-linear forms. Applied Mathematics and Computation,
159(2):317–331, December 2004.
[118] Y. Li, W. Yang, and C. Huang. On preventing type flaw attacks on security
protocols with a simplified tagging scheme. Journal of Information Science
and Engineering, 21(1):59–84, 2005.
[119] G. Lowe. Breaking and fixing the Needham-Schroeder public-key protocol
using FDR. In Proc. 2nd International Conference on Tools and Algorithms for
the Construction and Analysis of Systems (TACAS), volume 1055 of Lecture
Notes in Computer Science, pages 147–166. Springer, 1996.
Bibliography 171
[120] G. Lowe. Some new attacks upon security protocols. In Proc. 9th IEEE Com-
puter Security Foundations Workshop (CSFW), pages 162–169. IEEE Com-
puter Society, 1996.
[121] G. Lowe. Casper: A compiler for the analysis of security protocols. In Proc.
10th IEEE Computer Security Foundations Workshop (CSFW), pages 18–30.
IEEE Computer Society, 1997.
[122] G. Lowe. A hierarchy of authentication specifications. In Proc. 10th IEEE
Computer Security Foundations Workshop (CSFW), pages 31–44. IEEE Com-
puter Society, 1997.
[123] G. Lowe. Towards a completeness result for model checking of security proto-
cols. In Proc. 11th IEEE Computer Security Foundations Workshop (CSFW).
IEEE Computer Society, 1998.
[124] Formal Systems (Europe) Ltd. Failure-Divergence Refinement — FDR2 Users
Manual, 2000.
[125] M. Maffei. Tags for multi-protocol authentication. In Proc. of the 2nd Interna-
tional Workshop on Security Issues in Coordination Models, Languages, and
Systems, volume 128(5) of Electronic Notes in Theoretical Computer Science,
pages 55–63. Elsevier ScienceDirect, August 2005.
[126] P. Maggi and R. Sisto. Using SPIN to verify security properties of crypto-
graphic protocols. In Proc. of the 9th International SPIN Workshop on Model
Checking of Software, volume 2318 of Lecture Notes in Computer Science,
pages 187–204. Springer, 2002.
[127] S. Malladi, J. Alves-Foss, and R. Heckendorn. On preventing replay attacks
on security protocols. In Proc. International Conference on Security and Man-
agement, pages 77–83. CSREA Press, 2002.
[128] S. Malladi, J. Alves-Foss, and S. Malladi. What are multi-protocol guessing
attacks and how to prevent them. In Proc. 7th International Workshop on
Enterprise Security, pages 77–82. IEEE Computer Society, June 2002.
[129] F. Martinelli. Analysis of security protocols as open systems. Theoretical
Computer Science, 290(1):1057–1106, 2003.
[130] S. Mauw and V. Bos. Drawing Message Sequence Charts with LATEX. TUG-
Boat, 22(1-2):87–92, March 2001.
[131] S. Mauw, W.T. Wiersma, and T.A.C. Willemse. Language-driven system
design. International Journal of Software Engineering and Knowledge Engi-
neering, 14(6):625–663, 2004.
[132] C. Meadows. Analyzing the Needham-Schroeder public-key protocol: A com-
parison of two approaches. In E. Bertino, H. Kurth, G. Martella, and E. Mon-
tolivo, editors, Proc. 4th European Symposium on Research in Computer Se-
curity (ESORICS), volume 1146 of Lecture Notes in Computer Science, pages
351–364. Springer, 1996.
172 Bibliography
[134] C. Meadows. Analysis of the Internet Key Exchange Protocol using the NRL
protocol analyzer. In Proc. 20th IEEE Symposium on Security & Privacy,
pages 216–231. IEEE Computer Society, 1999.
[135] C. Meadows. Open issues in formal methods for cryptographic protocol anal-
ysis. Lecture Notes in Computer Science, 2052:21, 2001.
[136] A.J. Menezes, P.C. Van Oorschot, and S.A. Vanstone. Handbook of Applied
Cryptography. CRC Press, Inc., 5th edition, 2001.
[138] J. Millen and V. Shmatikov. Symbolic protocol analysis with products and
Diffie-Hellman exponentiation. In Proc. 16th IEEE Computer Security Foun-
dations Workshop (CSFW). IEEE Computer Society, 2003.
[139] J.K. Millen. A necessarily parallel attack. In N. Heintze and E. Clarke, editors,
Workshop on Formal Methods and Security Protocols, Trento, Italy, 1999.
[140] J.K. Millen and V. Shmatikov. Constraint solving for bounded-process cryp-
tographic protocol analysis. In Proc. 8th ACM Conference on Computer and
Communications Security, pages 166–175. ACM Press, 2001.
[142] C.J. Mitchell and L. Chen. Comments on the s/key user authentication
scheme. Operating Systems Review, 30(4):12–16, 1996.
[146] P.C. van Oorschot. Extending cryptographic logics of belief to key agreement
protocols. In Proc. 1st ACM Conference on Computer and Communications
Security, pages 232–243. ACM Press, 1993.
Bibliography 173
[147] S. Owre, J.M. Rushby, and N. Shankar. PVS: A prototype verification system.
In Proc. 11th International Conference on Automated Deduction (CADE),
volume 607 of Lecture Notes in Computer Science, pages 748–752. Springer,
1992.
[148] N. Palm. Bewijzen van security protocollen in een trace model. Master’s
thesis, Technische Universiteit Eindhoven, Department of Mathematics and
Computer Science, 2004.
[149] L.C. Paulson. Isabelle: a generic theorem prover, volume 828 of Lecture Notes
in Computer Science. Springer, 1994.
[150] L.C. Paulson. Mechanized proofs for a recursive authentication protocol. In
Proc. 10th IEEE Computer Security Foundations Workshop (CSFW), pages
84–95. IEEE Computer Society, 1997.
[151] L.C. Paulson. Proving properties of security protocols by induction. In Proc.
10th IEEE Computer Security Foundations Workshop (CSFW), pages 70–83,
Rockport, Massachusetts, 1997. IEEE Computer Society.
[152] L.C. Paulson. The inductive approach to verifying cryptographic protocols.
Journal of Computer Security, 6:85–128, 1998.
[153] L.C. Paulson. Inductive analysis of the Internet protocol TLS. ACM Trans-
actions on Information and System Security, 2(3):332–351, August 1999.
[154] L.C. Paulson. Relations between secrets: Two formal analyses of the Yahalom
protocol. Journal of Computer Security, 9(3):197–216, 2001.
[155] L.C. Paulson. SET cardholder registration: the secrecy proofs. In R. Goré,
A. Leitsch, and T. Nipkow, editors, Automated Reasoning — First Interna-
tional Joint Conference, IJCAR 2001, volume 2083 of Lecture Notes in Arti-
ficial Intelligence, pages 5–12. Springer, 2001.
[156] D. Peled. Ten years of partial order reduction. In Proc. Computer Aided
Verification’98 (CAV), pages 17–28. Springer, 1998.
[157] A. Perrig and D. Song. Looking for diamonds in the desert – extending au-
tomatic protocol generation to three-party authentication and key agreement
protocols. In Proc. 13th IEEE Computer Security Foundations Workshop
(CSFW), pages 64–76. IEEE Computer Society, 2000.
[158] G.D. Plotkin. A structural approach to operational semantics. Technical Re-
port DIAMI FN-19, Computer Science Department, Aarhus University, 1981.
[159] G.J. Popek and C.S. Kline. Encryption and secure computer networks. ACM
Computing Surveys, 11(4):331–356, 1979.
[160] A.W. Roscoe. Modelling and verifying key-exchange protocols using CSP
and FDR. In Proc. 8th IEEE Computer Security Foundations Workshop
(CSFW), pages 98–107, Kenmare, 1995.
174 Bibliography
[164] P. Ryan and S. Schneider. Modelling and Analysis of Security Protocols: the
CSP Approach. Addison-Wesley, 2001. With M.H. Goldsmith, G. Lowe and
A.W. Roscoe.
[170] V. Shmatikov and J.C. Mitchell. Finite-state analysis of two contract signing
protocols. Theoretical Computer Science, 283, 2002.
[171] D. Song. Athena: a new efficient automatic checker for security protocol anal-
ysis. In Proc. 12th IEEE Computer Security Foundations Workshop (CSFW),
pages 192–202. IEEE Computer Society, 1999.
[172] D. Song. An Automatic Approach for Building Secure Systems. PhD thesis,
UC Berkeley, December 2003.
[173] D. Song, S. Berezin, and A. Perrig. Athena: A novel approach to efficient auto-
matic security protocol analysis. Journal of Computer Security, 9(1/2):47–74,
2001.
[177] S.G. Stubblebine and R.N. Wright. An authentication logic with formal seman-
tics supporting synchronization, revocation, and recency. IEEE Transactions
on Software Engineering, 28:256–285, 2002.
[178] P.F. Syverson and P.C. van Oorschot. A unified cryptographic protocol logic.
CHACS Report 5540-227, NRL, 1996.
[180] F.J. Thayer, J.C. Herzog, and J.D. Guttman. Honest ideals on strand spaces.
In Proc. 11th IEEE Computer Security Foundations Workshop (CSFW), pages
66–77. IEEE Computer Society, 1998.
[181] F.J. Thayer, J.C. Herzog, and J.D. Guttman. Mixed strand spaces. In Proc.
12th IEEE Computer Security Foundations Workshop (CSFW), pages 72–82.
IEEE Computer Society, 1999.
[182] F.J. Thayer, J.C. Herzog, and J.D. Guttman. Strand spaces: Proving security
protocols correct. Journal of Computer Security, 7:191–230, 1999.
[183] F.L. Tiplea, C. Enea, and C.V. Birjoveneanu. Decidability and complexity
results for security protocols. In E.M. Clarke, M. Minea, and F.L. Tiplea,
editors, VISSAS, volume 1 of NATO Security through Science Series D: In-
formation and Communication Security, pages 185–211. IOS Press, 2005.
[184] W.G. Tzeng and C.M. Hu. Inter-protocol interleaving attacks on some au-
thentication and key distribution protocols. Information Processing Letters,
69(6):297–302, 1999.
[187] H. Zhou and S. Foley. Fast automatic synthesis of security protocols using
backward search. In M. Backes and D. Basin, editors, Proc. 1st ACM Work-
shop on Formal Methods in Security Engineering (FMSE), pages 1–10. ACM
Press, 2003.
Index of subjects

basic term, 13
bounded characterization algorithm, 92
Casper/FDR, 150
cast function, 38
characterization, 80
characterization algorithm, 88
claim
  alive, 36
  i-agree, 46
  i-synch, 44
  ni-agree, 46
  ni-synch, 42
  secret, 35
claim set, 14
ClaimRunEv, 25
communication
  label, 14
  order, 17
  relation, 17
complete characterization, 80
concatenate (·), 16, 56
E, 29
event
  content of, 26
explicit trace pattern, 78
Func, 13
function names, 13
future work, 158
i-agree, 46
inferable
  from role knowledge set, 18
initial intruder knowledge, 30
initial network state, 27
injective agreement, 46
injective synchronisation, 44
instantiation
  instantiation of role term, 22
  symbolic instantiation, 70
  well-typedness of, 24
instantiation function, 21
intruder
  Dolev-Yao, 29
Summary
Recent technologies have cleared the way for the large-scale application of electronic
communication. The open and distributed nature of these communications implies
that the communication medium is no longer completely controlled by the commu-
nicating parties. As a result, there has been an increasing demand for research into
establishing secure communication over insecure networks by means of security
protocols.
In this thesis, a formal model for the description and analysis of security protocols
at the process level is developed. At this level, under the assumption of perfect
cryptography, the analysis focuses on detecting flaws and vulnerabilities in the
security protocol.
Starting from first principles, operational semantics are developed to describe se-
curity protocols and their behaviour. The resulting model is parameterized and
can, for example, capture various intruder models, ranging from a secure network
with no intruder to the strongest intruder model known in the literature. Within
the security protocol model, various security properties are defined, such as secrecy
and various forms of authentication. A number of new results about these properties
are formulated and proven correct.
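The strongest intruder model referred to here is typically the Dolev-Yao intruder, which can intercept any message and build new messages from everything it has learned. As a rough illustration only (not the thesis's formal semantics), a naive knowledge-closure check for such an intruder might look as follows, assuming symmetric encryption and a simple tuple encoding of terms; the function names and term representation are invented for this sketch:

```python
# Terms: atoms are strings; ('pair', a, b) is pairing; ('enc', m, k) is
# symmetric encryption of m under key k.

def close(knowledge):
    """Close a set of terms under the Dolev-Yao analysis rules:
    split known pairs and decrypt with directly known keys."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == 'pair':
                new = {t[1], t[2]}                      # unpair
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                new = {t[1]}                            # decrypt with known key
            else:
                new = set()
            if not new <= known:
                known |= new
                changed = True
    return known

def derivable(knowledge, goal):
    """Can the intruder derive `goal` from `knowledge`?
    Synthesis (pairing, encrypting) is checked recursively."""
    known = close(knowledge)
    def synth(t):
        if t in known:
            return True
        if isinstance(t, tuple) and t[0] == 'pair':
            return synth(t[1]) and synth(t[2])
        if isinstance(t, tuple) and t[0] == 'enc':
            return synth(t[1]) and synth(t[2])          # needs the key too
        return False
    return synth(goal)
```

For instance, an intruder that has observed `('enc', 'nonce', 'k')` and also knows the key `'k'` can derive `'nonce'`, but without `'k'` it cannot. A real verifier, such as the procedure developed in this thesis, works with symbolic terms and unbounded sessions rather than this exhaustive ground closure.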
Based on the model, an automated verification procedure is developed, which sig-
nificantly improves on existing methods. The procedure is implemented in a pro-
totype, which outperforms other tools.
Both the theory and the tool are applied in two novel case studies. Using the tool
prototype, new results are established in the area of protocol composition, leading
to the discovery of a class of previously undetected attacks. Furthermore, a new
protocol in the area of multiparty authentication is developed. The resulting protocol
is proven correct within the framework.
Curriculum Vitae