Modal and Temporal Properties of Processes Compress

The document is a book titled 'Modal and Temporal Properties of Processes' by Colin Stirling, which explores modal and temporal logics for concurrent processes. It introduces basic operators, transition behaviors, and discusses properties like safety and liveness in concurrent systems. The text aims to be accessible for both undergraduate and advanced readers, incorporating games for pedagogical clarity and verification techniques for temporal properties.


TEXTS IN COMPUTER SCIENCE

Editors
David Gries
Fred B. Schneider

Springer Science+Business Media, LLC


TEXTS IN COMPUTER SCIENCE

Alagar and Periyasamy, Specification of Software Systems

Apt and Olderog, Verification of Sequential and Concurrent Programs, Second Edition

Back and von Wright, Refinement Calculus

Beidler, Data Structures and Algorithms

Bergin, Data Structure Programming

Brooks, C Programming: The Essentials for Engineers and Scientists

Brooks, Problem Solving with Fortran 90

Dandamudi, Introduction to Assembly Language Programming

Fitting, First-Order Logic and Automated Theorem Proving,


Second Edition

Grillmeyer, Exploring Computer Science with Scheme

Homer and Selman, Computability and Complexity Theory

Immerman, Descriptive Complexity

Jalote, An Integrated Approach to Software Engineering, Second Edition

Kizza, Ethical and Social Issues in the Information Age

Kozen, Automata and Computability

Li and Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Second Edition
(continued after index)
Colin Stirling

MODAL AND TEMPORAL


PROPERTIES OF PROCESSES

With 45 Illustrations

Springer
Colin Stirling
Division of Informatics
University of Edinburgh
Edinburgh EH9 3JZ UK
[email protected]

Series Editors
David Gries
Department of Computer Science
415 Boyd Studies Research Center
The University of Georgia
Athens, GA 30605, USA

Fred B. Schneider
Department of Computer Science
Upson Hall
Cornell University
Ithaca, NY 14853-7501, USA

Library of Congress Cataloging-in-Publication Data


Stirling, Colin P.
Modal and temporal properties of processes / Colin Stirling.
p. cm. - (Texts in computer science)
Includes bibliographical references and index.
ISBN 978-1-4419-3153-5  ISBN 978-1-4757-3550-5 (eBook)
DOI 10.1007/978-1-4757-3550-5
1. Computer logic. 2. Parallel processing (Electronic computers) I. Title. II. Series.
QA76.9.L63 S75 2001
004'.35-dc21 00-067924

Printed on acid-free paper.

© 2001 Springer Science+Business Media New York


Originally published by Springer-Verlag New York, Inc in 2001.
Softcover reprint of the hardcover 1st edition 2001
All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher, Springer Science+Business Media, LLC,
except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with
any form of information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed is forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the
former are not especially identified, is not to be taken as a sign that such names, as understood by the
Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.

Production managed by Timothy Taylor; manufacturing supervised by Erica Bresler.


Typeset pages prepared using the author's LaTeX 2e files by The Bartlett Press, Inc., Marietta, GA.
9 8 7 6 5 4 3 2 1
To Sarah and Susie
Preface

In this book we examine modal and temporal logics for processes. First, we introduce
concurrent processes as terms of an algebraic language comprising a few
basic operators. Their behaviours are described using transitions. Families of transitions
can be arranged as labelled graphs, concrete summaries of the behaviour
of processes. Various combinations of processes and their resulting behaviour, as
determined by the transition rules, are reviewed. Next, simple modal logics are
introduced for describing the capabilities of processes.
An important question arises as to when two processes may be deemed
to have the same behaviour. Such an abstraction can be presented by defining an
appropriate behavioural equivalence between processes. A more abstract approach
is to consider equivalence in terms of having the same pertinent properties. There
is special emphasis on bisimulation equivalence, since the discriminating power
of modal logic is tied to it.
More generally, practitioners have found it useful to be able to express temporal
properties of concurrent systems, especially liveness and safety properties.
A safety property amounts to "nothing bad ever happens," whereas a liveness
property expresses "something good does eventually happen." The crucial safety
property of a mutual exclusion algorithm is that no two processes are ever in their
critical sections concurrently. An important liveness property is that, whenever
a process requests execution of its critical section, it is eventually granted.
Cyclic properties of systems are also salient: for instance, part of a specification
of a scheduler is that it must continually perform a particular sequence of actions.
A logic expressing temporal notions provides a framework for the precise
formalisation of such specifications.
Formulas of the modal logic are not rich enough to express such temporal
properties, so an extra operator, a fixed point operator, is added. The result is a very expressive

temporal logic, modal mu-calculus. However, it is also very important to be able


to verify that an agent has or does not have a particular property.
The text aims to be reasonably introductory, so that parts of the book could be
used at undergraduate level, as well as at more advanced levels. I have used the
material in this way at Edinburgh. The extensive use of games for both equivalence
and model checking is partly pedagogical, since they are so conceptually clear.
Parts of the book have been presented previously at various summer schools
over the years, and I wish to thank all the organisers for allowing me to present this
material. I should also like to thank current and previous colleagues at Edinburgh
for building such an intellectually stimulating environment to work in. In particular,
I wish to thank Julian Bradfield (a pioneer of infinite state model checking,
who allowed me to use his TeX tree constructor for building derivation trees), Olaf
Burkart, Kim Larsen (who introduced me to modal mu-calculus), Robin Milner
(from whom I learnt about process calculus and bisimulation equivalence), Perdita
Stevens and David Walker.

Colin Stirling
Edinburgh, United Kingdom
Contents

Preface vii
List of Figures xi

1 Processes 1
1.1 First examples 1
1.2 Concurrent interaction 8
1.3 Observable transitions 17
1.4 Renaming and linking 21
1.5 More combinations of processes 25
1.6 Sets of processes 28

2 Modalities and Capabilities 31


2.1 Hennessy-Milner logic I 32
2.2 Hennessy-Milner logic II 36
2.3 Algebraic structure and modal properties 39
2.4 Observable modal logic 42
2.5 Observable necessity and divergence 47

3 Bisimulations 51
3.1 Process equivalences 51
3.2 Interactive games 56
3.3 Bisimulation relations 64
3.4 Modal properties and equivalences 69
3.5 Observable bisimulations 72
3.6 Equivalence checking 77

4 Temporal Properties 83
4.1 Modal properties revisited 83
4.2 Processes and their runs . 85
4.3 The temporal logic CTL 89
4.4 Modal formulas with variables 91
4.5 Modal equations and fixed points 95
4.6 Duality 100

5 Modal Mu-Calculus 103


5.1 Modal logic with fixed points 104
5.2 Macros and normal formulas 107
5.3 Observable modal logic with fixed points 110
5.4 Preservation of bisimulation equivalence 112
5.5 Approximants 115
5.6 Embedded approximants 121
5.7 Expressing properties 128

6 Verifying Temporal Properties 133


6.1 Techniques for verification 133
6.2 Property checking games 135
6.3 Correctness of games 144
6.4 CTL games . . . . . 147
6.5 Parity games . . . . . 151
6.6 Deciding parity games 156

7 Exposing Structure 163


7.1 Infinite state systems 164
7.2 Generalising satisfaction 165
7.3 Tableaux I 168
7.4 Tableaux II 173

References 183

Index 187
List of Figures

1.1 The transition graph for Cl 2


1.2 A vending machine 3
1.3 The transition graph for Ven 3
1.4 A family of counters . . . . 4
1.5 The transition graph for Cti 4
1.6 Flow graphs of User and Cop 10
1.7 Flow graph of Cop | User 11
1.8 Flow graph of Cop | User | User 11
1.9 Flow graph of (Cop | User)\K 11
1.10 A level crossing 12
1.11 Flow graphs of the crossing and its components 13
1.12 Transition graph of Crossing 14
1.13 A simple protocol 14
1.14 Protocol transition graph when there is one message m 15
1.15 A slot machine 15
1.16 Observable transition graphs for (C | U)\{in, ok} and Ucop 20
1.17 Flow graph of n instances of B, and B1 | ... | Bn 22
1.18 The flow graph of Cy'1 23
1.19 Flow graph of Cy'1 | Cy'2 | Cy'3 | Cy'4 24

3.1 Two clocks 52
3.2 Three vending machines 54
3.3 Game graph for G(Cl, Cl5) 58
3.4 Reduced game graph for G(Cl, Cl5) 59
3.5 Game graph for (Ven2, Ven3) 60
3.6 Game play 74
3.7 A simplified slot machine 79

5.1 A simple process 123

6.1 Rules for the next move in a game play 136
6.2 Winning conditions 138
6.3 A simple process 140
6.4 The game G(D, μY.νZ.[a]((⟨b⟩tt ∨ Y) ∧ Z)) 141
6.5 The formula μY.νZ.[a]((⟨b⟩tt ∨ Y) ∧ Z) 142
6.6 The formula μZ.⟨−⟩tt ∧ [−a]Z 142
6.7 Rules for the next move in a CTL game play 147
6.8 Winning conditions 148
6.9 A CTL game 149
6.10 Game 154
6.11 Cases in Theorem 1 159

7.1 Semantics for E ⊨V Φ 167
7.2 Tableaux rules 169
7.3 Terminal goals in a tableau 170
7.4 A successful tableau for D ⊢ Φ 172
7.5 New terminal goal in a tableau 174
7.6 Embedded terminals: F ~ E and F1 ~ E1 176
7.7 Successful tableau 179
1

Processes

1.1 First examples 1
1.2 Concurrent interaction 8
1.3 Observable transitions 17
1.4 Renaming and linking 21
1.5 More combinations of processes 25
1.6 Sets of processes 28

In this chapter, processes are introduced as expressions of a simple language built
from a few basic operators. The behaviour of a process E is characterised by
transitions of the form E -a-> F, meaning that E may become F by performing the action a.
Structural rules prescribe behaviour, since the transitions of a compound process
are determined by those of its components. Concrete pictorial summaries of
behaviour are presented as labelled graphs, which are collections of transitions. We
review various combinations of processes and their resulting behaviour.

1.1 First examples


A simple process is a clock that perpetually ticks.

Cl def= tick.Cl
Names of actions such as tick are in lower case, whereas names of processes such
as Cl have an initial capital letter. A process definition ties a process name to a

process expression. In this case, Cl is attached to tick.Cl, where both occurrences
of Cl name the same process. The defining expression for Cl invokes a prefix
operator "." that builds the process a.E from the action a and the process E.
Behaviour of processes is captured by transitions E -a-> F, meaning that E may evolve
to F by performing or accepting the action a. The behaviour of Cl is elementary,
since it can only perform tick, and in so doing it becomes Cl again. This is a
consequence of the rules for deriving transitions. First is the axiom for the prefix
operator.

R(.)  a.E -a-> E
A process a.E performs the action a and becomes E. An instance of this axiom is
the transition tick.Cl -tick-> Cl. The next transition rule refers to the operator def=,
and is presented with the desired conclusion uppermost.

R(def=)  P -a-> F
         --------  P def= E
         E -a-> F

If the transition E -a-> F is derivable and P def= E, then P -a-> F is also derivable.

Goal-directed transition rules are used because we are interested in discovering
the available transitions of a process. There is a single transition for the clock,
Cl -tick-> Cl. Suppose our goal is to derive a transition Cl -a-> E. Because the only
applicable rule is R(def=), the goal reduces to the subgoal tick.Cl -a-> E, and the
only possibility for deriving this subgoal is an application of R(.), in which case a
is tick and E is Cl.
The behaviour of Cl is represented graphically in Figure 1.1. Ingredients of
this behaviour graph (known as a "transition system") are process expressions and
binary transition relations between them. Each vertex is a process expression, and
one of the vertices is the initial vertex Cl. Each derivable transition of a vertex is
depicted. Transition systems abstract from the derivations of transitions.
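The goal-directed derivation of transitions can be sketched in a few lines of Python. This is a minimal illustration, not anything from the book: the tuple encoding of terms, the DEFS table, and the function name are all assumptions of the sketch.

```python
# A minimal sketch (assumed encoding): process terms are nested tuples, and
# transitions are derived goal-directed from the rules R(.) and R(def=).

DEFS = {"Cl": ("prefix", "tick", ("name", "Cl"))}   # Cl def= tick.Cl

def transitions(p):
    """Return the set of (action, successor) pairs derivable for p."""
    if p[0] == "prefix":            # R(.): a.E -a-> E
        _, a, e = p
        return {(a, e)}
    if p[0] == "name":              # R(def=): unfold P def= E and recurse
        return transitions(DEFS[p[1]])
    return set()

# Cl has exactly one transition, Cl -tick-> Cl.
assert transitions(("name", "Cl")) == {("tick", ("name", "Cl"))}
```

Unfolding a name only when a transition is requested mirrors the goal-directed reading of R(def=): the goal for Cl reduces to the subgoal for tick.Cl.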
An unsophisticated vending machine Ven is defined in Figure 1.2. The
definition of Ven employs the binary choice operator + (which has wider scope
than the prefix operator) from Milner's CCS, Calculus of Communicating Systems
[42, 44]. Initially Ven may accept a 2p or 1p coin, then a button big or little
may be depressed depending on the coin deposited, and finally, after an item is
FIGURE 1.1. The transition graph for Cl

Ven def= 2p.Venb + 1p.Venl
Venb def= big.collectb.Ven
Venl def= little.collectl.Ven

FIGURE 1.2. A vending machine

collected, the process reverts to its initial state. There are two transition rules for +.

R(+)  E + F -a-> E'        E + F -a-> F'
      -----------          -----------
      E -a-> E'            F -a-> F'

The derivation of the transition Ven -2p-> Venb is as follows.

Ven -2p-> Venb
2p.Venb + 1p.Venl -2p-> Venb
2p.Venb -2p-> Venb

The goal reduces to the subgoal beneath it as a result of an application of R(def=),
which in turn reduces to the axiom instance via an application of the first of the R(+)
rules. When presenting proofs of transitions, side conditions in the application of a
rule, such as R(def=), are omitted. Figure 1.3 pictures the transition system for Ven.
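The derivation above can be replayed with a sketch that extends the goal-directed rules with choice, R(+). As before, the tuple encoding and the definition table are illustrative assumptions, not the book's notation.

```python
# Sketch of the transitions of Ven from Figure 1.2, using R(.), R(+) and
# R(def=); the encoding is an assumption of this sketch.

DEFS = {
    "Ven":  ("choice", ("prefix", "2p", ("name", "Venb")),
                       ("prefix", "1p", ("name", "Venl"))),
    "Venb": ("prefix", "big", ("prefix", "collectb", ("name", "Ven"))),
    "Venl": ("prefix", "little", ("prefix", "collectl", ("name", "Ven"))),
}

def transitions(p):
    if p[0] == "prefix":            # R(.)
        return {(p[1], p[2])}
    if p[0] == "choice":            # R(+): either summand may move
        return transitions(p[1]) | transitions(p[2])
    if p[0] == "name":              # R(def=)
        return transitions(DEFS[p[1]])
    return set()

# Ven -2p-> Venb and Ven -1p-> Venl, matching the derivation above.
assert transitions(("name", "Ven")) == {("2p", ("name", "Venb")),
                                        ("1p", ("name", "Venl"))}
```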
A transition E -a-> F is an assertion derivable from the rules for transitions.
To discover the transitions of E, it suffices to examine its main combinator and
the transitions of its components. There is an analogy with rules for expression
evaluation. To evaluate (3 x 2) + 4 it suffices to evaluate the components 3 x 2
and 4, and then sum their values. Such families of rules give rise to a structural

FIGURE 1.3. The transition graph for Ven



Ct0 def= up.Ct1 + round.Ct0
Ct(i+1) def= up.Ct(i+2) + down.Cti

FIGURE 1.4. A family of counters

operational semantics, as pioneered by Plotkin [49]. However, whereas the essence
of an expression is to be evaluated, the essence of a process is to act.
Families of processes can be defined using indexing. A simple case is the set
of counters {Cti : i ∈ N} of Figure 1.4. The counter Ct3 can increase to Ct4
by performing up or decrease to Ct2 by performing down. The derivation of the
transition Ct3 -up-> Ct4 is as follows.

Ct3 -up-> Ct4
up.Ct4 + down.Ct2 -up-> Ct4
up.Ct4 -up-> Ct4

The rule R(def=) is here applied to the instance Ct3 def= up.Ct4 + down.Ct2. Each
member Cti determines the same transition graph of Figure 1.5, which contains an
infinite number of vertices. This graph is "infinite state" because the behaviour of
Cti may progress through any of the processes Ctj, in contrast to the finite state
graphs of Figures 1.1 and 1.3.
The operator + can be extended to indexed families Σ{Ei : i ∈ I}, where I
is a set of indices. E1 + E2 abbreviates Σ{Ei : i ∈ {1, 2}}. Indexed sum may
be coupled with indexing of actions. An example is a register storing numbers,
represented as a family {Regi : i ∈ N}.

Regi def= readi.Regi + Σ{writej.Regj : j ∈ N}

The act of reading the content of the register when i is stored is readi, whereas
writej is the action that updates its value to j. The single transition rule for Σ
generalises the rules for +.

R(Σ)  Σ{Ei : i ∈ I} -a-> E'
      ---------------------  j ∈ I
      Ej -a-> E'

FIGURE 1.5. The transition graph for Cti



Consequently, Regi is able to carry out any writej (and thereby changes to Regj)
as well as readi (and then remains unchanged). A special case is when the indexing
set I is empty. By the rule R(Σ), this process has no transitions, since the subgoal
can never be fulfilled. In CCS the nil process Σ{Ei : i ∈ ∅} is abbreviated to 0
(and to STOP in Hoare's CSP, Communicating Sequential Processes [31]).
Actions can be viewed as ports or channels, means by which processes can
interact. It is then also important to consider the passage of data between processes
along these channels, or through these ports. In CCS, input of data at a port named
a is represented by the prefix a(x).E, where a(x) binds free occurrences of x in
E. (In CSP a(x) is written a?x.) The port label a no longer names a single action;
instead it represents the set {a(v) : v ∈ D} where D is the appropriate family of
data values. The transition axiom for this prefix input form is

R(in)  a(x).E -a(v)-> E{v/x}   if v ∈ D


where E{v/x} is the process term that results from replacing all free occurrences
of x in E with v¹. Output at a port named a is represented in CCS by the prefix
¯a(e).E where e is a data expression. The overbar symbolises output at the named
port. (In CSP ¯a(e) is written a!e.) The transition rule for output depends on extra
machinery for expression evaluation. Assume that Val(e) is the data value in D (if
there is one) to which e evaluates.

R(out)  ¯a(e).E -¯a(v)-> E   if Val(e) = v


The asymmetry between input and output is illustrated by the following process
that copies a value from in and then sends it through out.

Cop def= in(x).¯out(x).Cop

Below is a derivation of the transition Cop -in(v)-> ¯out(v).Cop for v ∈ D.
Cop -in(v)-> ¯out(v).Cop
in(x).¯out(x).Cop -in(v)-> ¯out(v).Cop
The subgoal is an instance of R(in), as (¯out(x).Cop){v/x} is ¯out(v).Cop², and so
the goal follows by an application of R(def=). The process ¯out(v).Cop has only one
transition, ¯out(v).Cop -¯out(v)-> Cop, which is an instance of R(out), since we assume
that Val(v) is v. Whenever Cop inputs a value at in, it immediately disgorges it
through out. The size of the transition graph for Cop depends on the size of the
data domain D, and is finite when D is a finite set.

¹The process a(x).E can be viewed as an abbreviation of the process Σ{av.E{v/x} : v ∈ D}, writing av
instead of a(v).
²Cop contains no free variables because in(x) binds x, and so (¯out(x).Cop){v/x} equals ¯out(v).(Cop{v/x})
because x is free in ¯out(x), and Cop{v/x} is Cop.

Example 1  Cop1 def= in(x).in(x).¯out(x).Cop1 is a different copier. It takes in two data values
at in, discarding the first but sending out the second. Cop1 has initial transition
Cop1 -in(v)-> in(x).¯out(x).Cop1 for v ∈ D.

Input actions and indexing can be mingled, as in the following redescription
of the family of registers, where both i and x have type N.

Regi def= ¯read(i).Regi + write(x).Regx


Regi can output the value i at the port read, or instead it can be updated by being
written to at write. Below is the derivation of Reg5 -write(3)-> Reg3.

Reg5 -write(3)-> Reg3
¯read(5).Reg5 + write(x).Regx -write(3)-> Reg3
write(x).Regx -write(3)-> Reg3

The variable x in write(x) binds the free occurrence of x in Regx. An index can
also be presented explicitly as a parameter.

Example 2  The multiple copier Cop' uses the parameterised subprocess Cop(n, x), where n
ranges over N and x over texts.

Cop' def= no(n).in(x).Cop(n, x)
Cop(0, x) def= ¯out(x).Cop'
Cop(i + 1, x) def= ¯out(x).Cop(i, x)

The initial transition of Cop' determines the number of extra copies of a manuscript,
for instance Cop' -no(4)-> in(x).Cop(4, x). The next transition settles on the text,
in(x).Cop(4, x) -in(v)-> Cop(4, v). Then, before reverting to the initial state, five
copies of v are transmitted through the port out.

Data expressions may involve operations on values, as in the following
example, where x and y range over a space of messages.

App def= in(x).in(y).¯out(x⌢y).App

App receives two messages m and n on in and transmits their concatenation m⌢n
on out. We shall assume different expression types, such as boolean expressions.
An example is that Val(even(i)) = true if i is an even integer and is false otherwise.
This allows us to use conditionals in the definition of a process, as exemplified by
S that sieves odd and even numbers.

S def= in(x).if even(x) then ¯oute(x).S else ¯outo(x).S

Below are the transition rules for the conditional.

R(if1)  if b then E1 else E2 -a-> E'
        ----------------------------  Val(b) = true
        E1 -a-> E'

R(if2)  if b then E1 else E2 -a-> E'
        ----------------------------  Val(b) = false
        E2 -a-> E'

S initially receives a numerical value through the port in. For instance,
S -in(55)-> if even(55) then ¯oute(55).S else ¯outo(55).S. It then outputs through oute,
if the received value is even, or through outo, otherwise. In this example,
if even(55) then ¯oute(55).S else ¯outo(55).S -¯outo(55)-> S.

Example 3  Consider the following family of processes for i ≥ 1.

T(i) def= if even(i) then ¯out(i).T(i/2) else ¯out(i).T((3i + 1)/2)

So T(5) performs the sequence of transitions

T(5) -¯out(5)-> T(8) -¯out(8)-> T(4) -¯out(4)-> T(2)

and then cycles through the transitions T(2) -¯out(2)-> T(1) -¯out(1)-> T(2).
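The run in Example 3 can be replayed with a direct sketch of T(i); the function name and the list-based trace are assumptions of this illustration.

```python
# Sketch of T(i) from Example 3: each step emits out(i) and moves to the
# next index, mirroring the if/else in the definition.

def step(i):
    nxt = i // 2 if i % 2 == 0 else (3 * i + 1) // 2
    return (("out", i), nxt)

run, i = [], 5
for _ in range(6):
    (_, out), i = step(i)
    run.append(out)

# T(5) -out(5)-> T(8) -out(8)-> T(4) -out(4)-> T(2) -out(2)-> T(1) -out(1)-> T(2)
assert run == [5, 8, 4, 2, 1, 2]
```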

Exercises  1. Draw the transition graphs for the following clocks.

a. Cl1 def= tick.tock.Cl1
b. Cl2 def= tick.tick.Cl2
c. Cl3 def= tick.Cl
d. tick.0

2. Show that there are two derivations of the transition Cl4 -tick-> Cl4 when
Cl4 def= tick.Cl4 + tick.Cl4. Draw the transition graph for Cl4.
3. Contrast the behaviour of Cl5 def= tick.Cl5 + tick.0 with that of Cl by
drawing their transition graphs.
4. Define a more rational vending machine than Ven that allows the big button
to be pressed if two 1p coins are entered, and the little button to be depressed
twice after a 2p coin is deposited.
5. Assume that the space of values consists of two elements, 0 and 1. Draw
transition graphs for the following three copiers Cop, Cop1 and Cop2 where
Cop2 def= in(x).¯out(x).¯out(x).Cop2.
6. Draw transition graphs of T(31) and T(17), where T(i) is defined in Example 3.

7. For any processes E, F and G, show that the transition graphs for E + F
and F + E are isomorphic, and that the transition graph for (E + F) + G is
isomorphic to that of E + (F + G).
8. From Walker [60]. Define a process Change that describes a change-making
machine with one input port and one output port, that is capable initially of
accepting either a 20p or a 10p coin, and that can then dispense any sequence
of 1p, 2p, 5p and 10p coins, the sum of whose values is equal to that of the
coin accepted, before returning to its initial state.

1.2 Concurrent interaction


A compelling feature of process theory is modelling of concurrent interaction.
A prevalent approach is to appeal to handshake communication as primitive. At
any one time, only two processes may communicate at a port or along a channel.
In CCS, the resultant communication is a completed internal action. Each
incomplete, or observable, action a has a partner ¯a, its co-action. Moreover, the
co-action of ¯a is a itself, so a and ¯a are partners of each other. The partner of a
parameterised action in(v) is ¯in(v). Simultaneously performing an action and its
co-action produces the internal action τ, which is a complete action that does not
have a partner.
Concurrent composition of E and F is expressed as E | F. Below is the crucial
transition rule for | that conveys communication.

R(|com)  E | F -τ-> E' | F'
         -------------------
         E -a-> E'    F -¯a-> F'

If E can carry out an action and become E', and F can carry out its co-action and
become F', then E | F can perform the completed internal action τ and become
E' | F'. Consider a potential user of the copier Cop of the previous section, who
first writes a file before sending it through the port in.

User def= write(x).Userx
Userv def= ¯in(v).User

As soon as User has written the file v, it becomes the process Userv that can
communicate with Cop at the port in. Rule R(|com) is used in the following

derivation³ of the transition Cop | Userv -τ-> ¯out(v).Cop | User.

Cop | Userv -τ-> ¯out(v).Cop | User
Cop -in(v)-> ¯out(v).Cop        Userv -¯in(v)-> User
in(x).¯out(x).Cop -in(v)-> ¯out(v).Cop        ¯in(v).User -¯in(v)-> User

The goal transition is the resultant communication at in. Through this communication,
the value v is sent from the user to the copier because Userv performs the
output ¯in(v) and Cop performs the input in(v), where they agree on the value v.
Data is thereby passed from one process to another. When the actions a and ¯a do
not involve values, the resulting communication is a synchronization.
Several users can share the copying resource. Cop | (Userv1 | Userv2) involves
two users, but only one at a time is allowed to employ it. So, other transition rules
for | are needed, permitting components to proceed without communicating.

R(|)  E | F -a-> E' | F        E | F -a-> E | F'
      -----------------        -----------------
      E -a-> E'                F -a-> F'

In the first of these rules, the process F does not contribute to the action a that E
performs. Below is a sample derivation.

Cop | (Userv1 | Userv2) -τ-> ¯out(v1).Cop | (User | Userv2)
Cop -in(v1)-> ¯out(v1).Cop        Userv1 | Userv2 -¯in(v1)-> User | Userv2
in(x).¯out(x).Cop -in(v1)-> ¯out(v1).Cop        Userv1 -¯in(v1)-> User
                                                ¯in(v1).User -¯in(v1)-> User

The goal transition reflects a communication between Cop and Userv1, meaning
Userv2 is not a contributor. Cop | (Userv1 | Userv2) is not forced to engage in
communication. Instead, it may carry out an input action in(v), or an output action
¯in(v1) or ¯in(v2).

Cop | (Userv1 | Userv2) -in(v)-> ¯out(v).Cop | (Userv1 | Userv2)
Cop | (Userv1 | Userv2) -¯in(v1)-> Cop | (User | Userv2)
Cop | (Userv1 | Userv2) -¯in(v2)-> Cop | (Userv1 | User)

³We assume that | has greater scope than other process operators. The process ¯out(v).Cop | User is therefore
the parallel composition of ¯out(v).Cop and User.

The second of these transitions is derived using two applications of R(|).

Cop | (Userv1 | Userv2) -¯in(v1)-> Cop | (User | Userv2)
Userv1 | Userv2 -¯in(v1)-> User | Userv2
Userv1 -¯in(v1)-> User
¯in(v1).User -¯in(v1)-> User

The behaviour of the users sharing the copier is not impaired by the order
of parallel subcomponents, or by placement of brackets. Both processes (Cop |
Userv1) | Userv2 and Userv1 | (Cop | Userv2) have the same capabilities as Cop |
(Userv1 | Userv2). These three process expressions have isomorphic transition
graphs, and therefore in the sequel we omit brackets between multiple concurrent
processes⁴.
The parallel operator is expressively powerful. It can be used to describe infinite
state systems without invoking infinite indices or value spaces. A simple example
is the following counter Cnt.

Cnt def= up.(Cnt | down.0)

Cnt can perform up and become Cnt | down.0, which can perform down, or a further
up and become Cnt | down.0 | down.0, and so on.
Figure 1.6 offers an alternative pictorial representation of the copier Cop and
user User. Such diagrams are called "flow graphs" by Milner [44] (and should
be distinguished from transition graphs). A flow graph summarizes the potential
movement of information flowing into and out of ports, and also exhibits the ports
through which a process is, in principle, willing to communicate. In the case of
User, the incoming arrow to the port labelled write represents input, whereas the
outgoing arrow from in symbolises output. Figure 1.7 shows the flow graph for
Cop | User with the crucial feature that there is a potential linkage between the
output port in of User and its input in Cop, permitting information to circulate from
User to Cop when communication takes place. However, this port is still available
for other users. Both users in Cop | User | User are able to communicate at
different times with Cop, as illustrated in Figure 1.8.

FIGURE 1.6. Flow graphs of User and Cop

⁴Equivalences between processes are discussed in Chapter 3.



FIGURE 1.7. Flow graph of Cop | User

FIGURE 1.8. Flow graph of Cop | User | User

FIGURE 1.9. Flow graph of (Cop | User)\K

The situation in which a user has private access to a copier is modelled using
an abstraction or encapsulation operator that conceals ports. CCS has a restriction
operator \J, where J ranges over families of incomplete actions (thereby excluding
the complete action τ). If K is {in(v) : v ∈ D}, where D contains the values that
can flow through in, then the port in within (Cop | User)\K is inaccessible to
other users. The flow graph of (Cop | User)\K is pictured in Figure 1.9, where the
linkage without names at the ports represents their concealment from other users,
so it can be simplified as in the second diagram of the figure.
The visual effect of \K on the flow graph in Figure 1.9 is justified by the
transition rule for restriction, which is as follows, where ¯J is {¯a : a ∈ J}.

R(\)  E\J -a-> F\J
      ------------  a ∉ J ∪ ¯J
      E -a-> F
E~F

The behaviour of E\J is part of that of E, as any action that E\J may carry out
can also be performed by E, but not necessarily the other way round. For instance,
Cop | User is able to perform an in input action, whereas an attempt to derive an in
transition from (Cop | User)\K is precluded because of the side condition on the
rule R(\). The presence of \K in (Cop | User)\K prevents Cop from ever doing
an in transition, except in the context of a communication with User. Restriction
can therefore be used to enforce communication between parallel components.
After the initial write transition (Cop | User)\K -write(v)-> (Cop | Userv)\K, the
next transition must be a communication.

(Cop | Userv)\K -τ-> (¯out(v).Cop | User)\K
Cop | Userv -τ-> ¯out(v).Cop | User
Cop -in(v)-> ¯out(v).Cop        Userv -¯in(v)-> User
in(x).¯out(x).Cop -in(v)-> ¯out(v).Cop        ¯in(v).User -¯in(v)-> User

A port a is concealed by restricting all the actions {a(v) : v ∈ D}, and therefore
we shall usually abbreviate such a subset within a restriction to {a}.
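The side condition of R(\) amounts to filtering a transition set. The sketch below is illustrative (co-actions encoded as ("bar", a) pairs and τ written "tau"): it keeps only actions outside J ∪ ¯J, so restriction enforces the communication.

```python
# Sketch of restriction E\J: transitions labelled in J or its co-actions are
# blocked, while the complete action tau always passes.

def co(a):
    return a[1] if isinstance(a, tuple) and a[0] == "bar" else ("bar", a)

def restrict(trans, J):
    blocked = set(J) | {co(a) for a in J}
    return {(a, s) for a, s in trans if a == "tau" or a not in blocked}

# Cop | Userv can do in(v), ~in(v) and tau; under \{in} only tau survives,
# so the next transition must be the communication.
trans = {("in", "P1"), (("bar", "in"), "P2"), ("tau", "P3")}
assert {a for a, _ in restrict(trans, {"in"})} == {"tau"}
```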
Process descriptions can become quite large, especially when they consist
of multiple components in parallel. We shall therefore employ abbreviations of
process expressions using the relation ≡, where P ≡ F means that P abbreviates
F, which is typically a large expression.

Example 1 The mesh of abstraction and concurrency is further revealed in the finite state
example, without data, of a level crossing in Figure 1.10 from Bradfield and the
author [10], consisting of three components Road, Rail and Signal. The actions
car and train represent the approach of a car and a train, up opens the gates for
the car, ccross is the car crossing, down closes the gates, green is the receipt of a
green signal by the train, tcross is the train crossing, and red automatically sets
the light red. Unlike most crossings, it keeps the barriers down except when a car
actually approaches and tries to cross. The flow graphs of the components, and of
the overall system, are depicted in Figure 1.11. The transition graph is pictured in
Figure 1.12. Both Road and Rail are simple cyclers that can only perform a
determinate sequence of actions repeatedly.

Road     def= car.up.ccross.‾down.Road
Rail     def= train.green.tcross.‾red.Rail
Signal   def= ‾green.red.Signal + ‾up.down.Signal
Crossing ≡ (Road | Rail | Signal)\{green, red, up, down}

FIGURE 1.10. A level crossing



FIGURE 1.11. Flow graphs of the crossing and its components

An important arena for process descriptions is provided by modelling protocols.
An example is the process Protocol of Figure 1.13, taken from Walker [60],
which models an extremely simple communications protocol that allows a message
to be lost during transmission. Its flow graph is the same as that of Cop, and
the size of its transition graph depends on the space of messages. The sender transmits
any message it receives at the port in to the medium. In turn, the medium may
transmit the message to the receiver, or instead the message may be lost, an action
modelled as the silent τ action, in which case the medium sends a timeout signal to
the sender and the message is retransmitted. On receiving a message, the receiver
transmits it at the port out and then sends an acknowledgement directly to the
sender (which we assume can not be lost). Having received the acknowledgement,
the sender may again receive a message at port in.
Although the flow graphs for Protocol and Cop are the same, their levels of
detail are very different. The process Cop is a one-place buffer that takes in a value
and later expels it. Similarly, the protocol takes in a message and later may output
it. The transition graph associated with this process when there is just one message
is pictured in Figure 1.14. It turns out that Protocol and Cop are observationally
equivalent, as defined in Chapter 3. As process descriptions, however, they are very
different. Cop is close to a specification, as its desired behaviour is given merely
in terms of what it does. In contrast, Protocol is closer to an implementation,
because it is defined in terms of how it is built from simpler components.

K = {green, red, up, down}

E1  ≡ (up.ccross.‾down.Road | Rail | Signal)\K
E2  ≡ (Road | green.tcross.‾red.Rail | Signal)\K
E3  ≡ (up.ccross.‾down.Road | green.tcross.‾red.Rail | Signal)\K
E4  ≡ (ccross.‾down.Road | Rail | down.Signal)\K
E5  ≡ (Road | tcross.‾red.Rail | red.Signal)\K
E6  ≡ (ccross.‾down.Road | green.tcross.‾red.Rail | down.Signal)\K
E7  ≡ (up.ccross.‾down.Road | tcross.‾red.Rail | red.Signal)\K
E8  ≡ (‾down.Road | Rail | down.Signal)\K
E9  ≡ (Road | ‾red.Rail | red.Signal)\K
E10 ≡ (‾down.Road | green.tcross.‾red.Rail | down.Signal)\K
E11 ≡ (up.ccross.‾down.Road | ‾red.Rail | red.Signal)\K

FIGURE 1.12. Transition graph of Crossing

Sender   def= in(x).‾sm(x).Send1(x)
Send1(x) def= ms.‾sm(x).Send1(x) + ok.Sender
Medium   def= sm(y).Med1(y)
Med1(y)  def= ‾mr(y).Medium + τ.‾ms.Medium
Receiver def= mr(x).‾out(x).‾ok.Receiver

Protocol ≡ (Sender | Medium | Receiver)\{sm, ms, mr, ok}

FIGURE 1.13. A simple protocol



[Figure: the transition graph of Protocol when there is one message m. Its states
include Protocol, (Send1(m) | Medium | Receiver)\J, (Send1(m) | ms.Medium | Receiver)\J,
(Send1(m) | Medium | ‾out(m).‾ok.Receiver)\J and (Send1(m) | Medium | ‾ok.Receiver)\J,
where J = {sm, ms, mr, ok}.]

FIGURE 1.14. Protocol transition graph when there is one message m.

IO  def= slot.bank.(lost.loss.IO + release(y).win(y).IO)
Bn  def= bank.max(n + 1).left(y).By
D   def= max(z).(lost.left(z).D + Σ{release(y).left(z − y).D : 1 ≤ y ≤ z})
SMn ≡ (IO | Bn | D)\{bank, lost, max, left, release}

FIGURE 1.15. A slot machine

Example 2 An example of an infinite state system from Bradfield and the author [10] is the
slot machine SMn defined in Figure 1.15. Its flow graph is also depicted there. A
coin is input (the action slot) and then, after some silent activity, either a loss or
a winning sum of money is output. The system consists of three components: IO,
which handles the taking and paying out of money; Bn, a bank holding n pounds;
and D, the wheel-spinning decision component.

Exercises 1. Give a derivation of the following transition.

Cop | (Userv1 | Userv2) --τ--> ‾out(v2).Cop | (Userv1 | User)


2. Show that the following three processes
   a. (Cop | Userv1) | Userv2
   b. Userv1 | (Cop | Userv2)
   c. Cop | (Userv1 | Userv2)
   have isomorphic transition graphs (and flow graphs).
3. Sem def= get.put.Sem is a semaphore. Draw the transition graph for Sem |
   Sem | Sem | Sem.
4. How does the transition graph for Cnt differ from that for the counter Ct0 of
   Figure 1.4?
5. Draw the transition graph for Bag def= in(x).(‾out(x).0 | Bag) when the space
   of values contains just two elements, 0 and 1.
6. Let L1 be the set of actions {1p, little} and let L2 be {1p, little, 2p}.
   Also let Use1 def= 1p.little.Use1. Draw flow graphs and transition graphs
   for the processes
   a. Ven | Use1
   b. Ven | (Use1 | Use1)
   c. (Ven | Use1)\Li
   d. (Ven | Use1)\Li | Use1
   e. (Ven | Use1 | Use1)\Li
   when i = 1 and i = 2.
7. Let G(E) be the transition graph for E. Define prefixing (.), +, | and \J
   operators directly on transition graphs so that each of the following pairs is
   isomorphic.
   a. a.G(E) and G(a.E)
   b. G(E + F) and G(E) + G(F)
   c. G(E | F) and G(E) | G(F)
   d. G(E)\J and G(E\J)
8. Consider the definition of the following process from Hennessy and
   Ingolfsdottir [27].

   Fac def= in1(y).in2(z).if y = 0 then ‾out(z).0
            else (‾in1(y − 1).‾in2(z * y).0 | Fac)

   Draw the transition graph of (‾in1(3).‾in2(1).0 | Fac)\{in1, in2}.


9. Draw the transition graph for Road | Rail | Signal, and compare it with that
   for Crossing.
10. Draw flow and transition graphs for the components of Protocol.

11. Refine the description of Protocol so that acknowledgements may also be
    lost.

1.3 Observable transitions


Actions a on the transition relations --a--> between processes can be extended to
finite length sequences w, which are also called "traces". The extended transition
E --w--> F states that E may perform the trace w and become F. There are two
transition rules for traces, where ε is the empty sequence of actions.

R(tr)   E --ε--> E        E --aw--> F
                          ------------------------
                          E --a--> E'   E' --w--> F

First is the axiom that any process may carry out the empty sequence and remain
unchanged. The second rule allows traces to be extended: if E --a--> E' and E'
can perform the trace w and become F, then E --aw--> F. No distinction is made
between carrying out the action a and carrying out the trace a (understood as an
action sequence of length one). Below is the derivation of the extended transition
Venb --big collectb--> Ven, when Venb is part of the vending machine of Section 1.1.

Venb --big collectb--> Ven
  big.collectb.Ven --big--> collectb.Ven
  collectb.Ven --collectb--> Ven
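Rule R(tr) suggests a simple recursive procedure for enumerating traces from the single-action transitions. The sketch below is ours, not the book's: it encodes part of the vending machine of Section 1.1 as a finite transition table and lists all traces up to a given length.

```python
# Single-action transitions as a table: state -> list of (action, next state).
# This is a hypothetical finite encoding of the vending machine.
lts = {
    "Ven": [("2p", "Venb"), ("1p", "Venl")],
    "Venb": [("big", "collectb.Ven")],
    "Venl": [("little", "collectl.Ven")],
    "collectb.Ven": [("collectb", "Ven")],
    "collectl.Ven": [("collectl", "Ven")],
}

def traces(state, n):
    """All traces of length at most n from state, following R(tr):
    the empty trace, plus a.w for each move a and each trace w of the successor."""
    result = [[]]
    if n > 0:
        for action, succ in lts.get(state, []):
            result += [[action] + w for w in traces(succ, n - 1)]
    return result

assert ["big", "collectb"] in traces("Venb", 2)
```

The assertion checks exactly the extended transition derived above.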

Internal τ actions have a different status from incomplete actions. An incomplete
action is "observable" because it is susceptible of interaction in a parallel
context. Suppose that E may at some time perform the action ok, and that
Resource is a resource. In the context (E | ok.Resource)\{ok} access to Resource
is only triggered with an execution of ok by E. Observation of ok is the
same as the release of Resource. The silent action τ cannot be observed in this
way. Consequently, an important abstraction of process behaviour derives from
silent activity.
Consider the following copier C and the user U.

C def= in(x).‾out(x).‾ok.C
U def= write(x).‾in(x).ok.U

U writes a file before sending it through in and then waits for an acknowledgement.
(C | U)\{in, ok} has similar behaviour to Ucop.

Ucop def= write(x).‾out(x).Ucop
18 1. Processes

The only difference in their abilities is internal activity. Both are initially able only
to carry out a write action.

Ucop --write(v)--> ‾out(v).Ucop
(C | U)\{in, ok} --write(v)--> (C | ‾in(v).ok.U)\{in, ok}

Process ‾out(v).Ucop outputs immediately, whereas the other process must first
perform a τ communication before it outputs, and then τ again before a second
write can happen. By abstracting from silent behaviour, this difference disappears.
Outwardly, both processes repeatedly write and output.
A trace w is a sequence of actions. The trace w↾J is the subsequence of w
when actions that do not belong to J are erased.

ε↾J  = ε
aw↾J = a(w↾J)   if a ∈ J
       w↾J      otherwise
Below are three simple examples.

(train τ tcross τ)↾{tcross} = tcross
(τ ccross τ)↾{tcross} = ε
(write(v) τ out(v) τ)↾{write, out} = write(v) out(v)

Associated with any trace w is the observable trace w↾O, where O is a universal
set of observable actions containing at least all actions mentioned in this work
apart from τ. The effect of ↾O on w is to erase all occurrences of the silent action
τ, as illustrated by the following examples.

(in(m) τ τ out(m) τ)↾O = in(m) out(m)
(in(m) τ τ τ τ)↾O = in(m)
(τ τ τ τ τ τ)↾O = ε
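Both projections are directly computable. A minimal sketch in Python, with actions encoded as strings and the silent action written "tau" (our conventions, not the book's notation):

```python
def project(trace, J):
    """w|J: erase from the trace every action not in J."""
    return [a for a in trace if a in J]

def observable(trace):
    """w|O: erase every occurrence of the silent action tau."""
    return [a for a in trace if a != "tau"]

assert project(["train", "tau", "tcross", "tau"], {"tcross"}) == ["tcross"]
assert observable(["in(m)", "tau", "tau", "out(m)", "tau"]) == ["in(m)", "out(m)"]
```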
To capture observable behaviour, another family of transition relations between
processes is introduced. E ==u==> F expresses that E may carry out the observable
trace u and become F. The transition rule for observable traces is as follows.

R(Tr)   E ==u==> F
        -----------   u = w↾O
        E --w--> F

An example is Protocol ==in(m) out(m)==> Protocol, whose derivation utilises the
extended transition Protocol --in(m) τ τ out(m) τ--> Protocol.
Observable traces can also be built from their component observable actions.
The extended transition Crossing ==train tcross==> Crossing is the result of gluing
together Crossing ==train==> E and E ==tcross==> Crossing when the intermediate state E

is E2 or E5 of Figure 1.12. Observable behaviour is constructed from transitions
E ==ε==> F or E ==a==> F when a ∈ O, whose rules are as follows.

R(⇒)   E ==ε==> E

       E ==ε==> F
       -------------------------
       E --τ--> E'   E' ==ε==> F

       E ==a==> F
       ----------------------------------------
       E ==ε==> E1   E1 --a--> E2   E2 ==ε==> F

E ==ε==> F if E can silently evolve to F, and E ==a==> F if E can silently evolve to a
process that carries out a and then silently becomes F.
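These rules amount to closing the single-step relation under silent moves. The following Python sketch is our own encoding, not the book's: a transition table maps each state to its (action, successor) pairs, ε-closure collects the states reachable by τ moves, and a weak step composes closure, a visible action, and closure again. The toy table is loosely shaped like the protocol example, not a faithful encoding of it.

```python
def eps_closure(lts, state):
    """States reachable by zero or more tau moves (the relation ==eps==>)."""
    seen, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for action, t in lts.get(s, []):
            if action == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_step(lts, state, a):
    """States F with state ==a==> F: silent moves, then a, then silent moves."""
    result = set()
    for s in eps_closure(lts, state):
        for action, t in lts.get(s, []):
            if action == a:
                result |= eps_closure(lts, t)
    return result

# Toy system: P --in--> F1 --tau--> F2 --tau--> F3.
lts = {"P": [("in", "F1")], "F1": [("tau", "F2")], "F2": [("tau", "F3")]}
assert weak_step(lts, "P", "in") == {"F1", "F2", "F3"}
```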

Example 1 The derivation of Protocol ==in(m)==> F3, where F3 abbreviates (Send1(m) | Medium |
‾out(m).‾ok.Receiver)\{sm, ms, mr, ok}, uses the following two intermediate
states (see Figure 1.14).

F1 ≡ (‾sm(m).Send1(m) | Medium | Receiver)\{sm, ms, mr, ok}
F2 ≡ (Send1(m) | Med1(m) | Receiver)\{sm, ms, mr, ok}

Below is part of the derivation.

Protocol ==in(m)==> F3
  Protocol ==ε==> Protocol    Protocol --in(m)--> F1    F1 ==ε==> F3

Part of the derivation of F1 ==ε==> F3 is as follows.

F1 ==ε==> F3
  F1 --τ--> F2    F2 ==ε==> F3
                    F2 --τ--> F3    F3 ==ε==> F3

Observable behaviour of a process can also be visually encapsulated as a transition
graph. As in Section 1.1, ingredients of this graph are process terms related
by transitions. Each edge has the form ==ε==> or ==a==> when a ∈ O. Assuming a value
space with just one element v, the observable transition graphs for (C | U)\{in, ok}
and Ucop are pictured in Figure 1.16 (where thick arrows are used instead of ==>).
There are two behaviour graphs associated with any process. Although both
graphs contain the same vertices, they differ in their labelled edges. Observable
graphs are more complex, since they contain more transitions. However, this abundance
of transitions may result in redundant vertices. Figure 1.16 exemplifies this
condition in the case of (C | U)\{in, ok}. The states labelled 1 and 4 have identical
capabilities, as do the states labelled 2 and 3. When minimized with respect to observable
equivalences, as defined in Chapter 3, these graphs may be dramatically
simplified as their vertices are fused.

[Figure: the observable transition graph of (C | U)\{in, ok} has four states, state 2
being (C | ‾in(v).ok.U)\{in, ok}, linked by write(v) and ‾out(v) transitions; the
graph of Ucop cycles between Ucop and ‾out(v).Ucop.]

FIGURE 1.16. Observable transition graphs for (C | U)\{in, ok} and Ucop

Exercises 1. Derive the extended transition SMn --w--> SMn+1 when w is the trace
slot τ τ τ τ loss and SMn is the slot machine.
2. Provide a full derivation of Protocol --s--> Protocol when s is the trace
in(m) τ τ out(m) τ.
3. List the members of the following sets:

{E : Crossing ==train tcross==> E}
{E : Protocol ==in(m)==> E}

4. Show that E ==u==> F is derivable via the rules R(tr) and R(Tr) iff it is derivable
using the rules for ==ε==> and ==a==>.
5. Draw the observable transition graphs for the processes Cl, Ven and
Crossing.
6. Although observable traces abstract from silent activity, this does not mean
that internal actions can not contribute to differences in observable capability.

Let Ven' be a vending machine very similar to Ven of Figure 1.2, except that the
initial 2p action is prefaced by the silent action, Ven' def= τ.2p.Venb + 1p.Venl.
a. Show that Ven and Ven' have the same observable traces.
b. Let Use1 be the user Use1 def= 1p.little.Use1, who is only interested
   in inserting the smaller coin. Show that the process (Ven' |
   Use1)\{1p, 2p, little} may deadlock before an observable action is
   carried out, unlike (Ven | Use1)\{1p, 2p, little}.
c. Draw both kinds of transition graphs for each of the processes in part (b).
7. Assuming just one datum value, draw the observable graphs for processes
(Cop | User)\{in} and Protocol. What states of these graphs can be fused
together?
8. Let G(E) be the transition graph for E, and let G°(E) be its observable
transition graph. Define the graph transformation ° that maps G(E) into G°(E).
9. A process is said to be "divergent" if it can perform the τ action forever.
a. Draw both kinds of transition graph for the following pair of processes,
   τ.0 and Div' def= τ.Div' + τ.0.
b. Do you think that the processes Protocol and Cop have the same
   observable behaviour? Give reasons for and against.

1.4 Renaming and linking


Cop, User and Ucop of previous sections are essentially one-place buffers, taking
in a value and later expelling it. Assume that B is the following canonical buffer.

B def= i(x).‾o(x).B

For instance, Cop is the process B when port i is in and port o is out. Relabelling
of ports can be made explicit by introducing an operator which renames actions.
The crux of renaming is a function mapping actions into actions. To ensure
pleasant properties, a renaming function f is subject to a few restrictions. First, it
should respect complements: for any observable a, the actions f(a) and f(‾a) are
co-actions, that is, f(‾a) is the co-action of f(a). Second, it should conserve the
silent action, f(τ) = τ. Associated with any function f obeying these conditions
is the renaming operator [f], which, when applied to process E, is written as E[f];
this is the process E whose actions are relabelled according to f.
A renaming function f can be abbreviated to its essential part. If each ai
is a distinct observable action, then b1/a1, ..., bn/an represents the function f
that renames ai to bi (and ‾ai to ‾bi), and leaves any other action c unchanged.
For instance, Cop abbreviates the process B[in/i, out/o]; here we maintain the
convention that in stands for the family {in(v) : v ∈ D} and i for {i(v) : v ∈ D},

so in/i symbolises the function that also preserves values by mapping i(v) to
in(v) for each v. The transition rule for renaming is set forth below.

R([f])   E[f] --a--> F[f]
         ----------------   a = f(b)
         E --b--> F

This rule is used in derivations of the following pair of transitions.

B[in/i, out/o] --in(v)--> (‾o(v).B)[in/i, out/o] --‾out(v)--> B[in/i, out/o]

Below is the derivation of the initial transition.

B[in/i, out/o] --in(v)--> (‾o(v).B)[in/i, out/o]
  B --i(v)--> ‾o(v).B
    i(x).‾o(x).B --i(v)--> ‾o(v).B
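Rule R([f]) only relabels transition labels, so it can be sketched as a map over a transition list. In this illustration (ours, not the book's) f is given as a dictionary on observable actions; τ and unlisted actions are left unchanged, and we track only the labels, leaving the [f] operator on reached terms implicit.

```python
def rename(transitions, f):
    """Rule R([f]): E[f] has a move f(b) to F[f] whenever E has a move b to F.
    f is a dict on observable actions; tau and unlisted actions pass through."""
    return [(f.get(b, b), t) for (b, t) in transitions]

# B's initial move relabelled by in/i, out/o:
moves = [("i(v)", "o(v).B")]
assert rename(moves, {"i(v)": "in(v)", "o(v)": "out(v)"}) == [("in(v)", "o(v).B")]
```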

A virtue of process modelling is that it allows building systems from simpler
components. Consider how to model an n-place buffer when n > 1, following
Milner [44], by linking together n instances of B in parallel. The flow graph of n
copies of B is pictured in Figure 1.17. For this to become an n-place buffer we need
to "link," and then internalise, the contiguous o and i ports. Renaming permits
linking, as the following variants of B show.

B1   ≡ B[o1/o]
Bj+1 ≡ B[oj/i, oj+1/o]   1 ≤ j < n − 1
Bn   ≡ B[on−1/i]

The flow graph of B1 | ... | Bn is also shown in Figure 1.17, and contains the
intended links. The n-place buffer is the result of internalizing these contiguous
links, (B1 | ... | Bn)\{o1, ..., on−1}.

FIGURE 1.17. Flow graph of n instances of B, and of B1 | ... | Bn.



Part of the behaviour of a two-place buffer is illustrated by the following cycle.

(B[o1/o] | B[o1/i])\{o1}
  --i(v)-->   ((‾o(v).B)[o1/o] | B[o1/i])\{o1}
  --τ-->      (B[o1/o] | (‾o(v).B)[o1/i])\{o1}
  --i(w)-->   ((‾o(w).B)[o1/o] | (‾o(v).B)[o1/i])\{o1}
  --‾o(v)-->  ((‾o(w).B)[o1/o] | B[o1/i])\{o1}
  --τ-->      (B[o1/o] | (‾o(w).B)[o1/i])\{o1}
  --‾o(w)-->  (B[o1/o] | B[o1/i])\{o1}

Below is the derivation of the second transition.

((‾o(v).B)[o1/o] | B[o1/i])\{o1} --τ--> (B[o1/o] | (‾o(v).B)[o1/i])\{o1}
  (‾o(v).B)[o1/o] | B[o1/i] --τ--> B[o1/o] | (‾o(v).B)[o1/i]
    (‾o(v).B)[o1/o] --‾o1(v)--> B[o1/o]      B[o1/i] --o1(v)--> (‾o(v).B)[o1/i]
      ‾o(v).B --‾o(v)--> B                     B --i(v)--> ‾o(v).B
                                                 i(x).‾o(x).B --i(v)--> ‾o(v).B
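Externally, the composite behaves as a first-in first-out store of capacity two: the internal o1 hand-over is only a silent step. The following Python class is our abstraction of that external behaviour, not a translation of the process terms.

```python
class Buffer2:
    """External behaviour of (B1 | B2)\\{o1}: a FIFO store of capacity two."""

    def __init__(self):
        self.cells = []          # at most two stored values

    def i(self, v):              # input at port i
        assert len(self.cells) < 2, "buffer full"
        self.cells.insert(0, v)

    def o(self):                 # output at port o
        assert self.cells, "buffer empty"
        return self.cells.pop()

b = Buffer2()
b.i("v"); b.i("w")
assert b.o() == "v" and b.o() == "w"   # values leave in arrival order
```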
A more involved example from Milner [44] refers to the construction of a
scheduler from small cycling components. Assume n tasks when n > 1, and that
action ai initiates the ith task, whereas bi signals its completion. The scheduler
plans the order of task initiation, ensuring that the sequence of actions a1 ... an is
carried out cyclically, starting with a1. The tasks may terminate in any order, but a
task can not be restarted until its previous operation has finished. So, the scheduler
must guarantee that the actions ai and bi happen alternately for each i.
Let Cy' be a cycler of length four, Cy' def= a.‾c.b.d.Cy', whose flow graph is
illustrated in Figure 1.18. In this case, the flow graph is very close to its transition
graph, so we have circled the a label to indicate that it is initially active. As soon as a
happens, control passes to the active action ‾c. The clockwise movement of activity

FIGURE 1.18. The flow graph of Cy'



FIGURE 1.19. Flow graph of Cy'1 | Cy'2 | Cy'3 | Cy'4

around this flow graph is its transition graph. A first attempt at building the required
scheduler is as a ring of n cyclers, where the a action is task initiation, the b action
is task termination, and the other actions c and d are used for synchronization.

Cy'1 ≡ Cy'[a1/a, c1/c, b1/b, cn/d]
Cy'i ≡ (d.Cy')[ai/a, ci/c, bi/b, ci−1/d]   1 < i ≤ n

Cy'1 carries out the cycle a1 ‾c1 b1 cn, and Cy'i, for i > 1, carries out the different
cycle ci−1 ai ‾ci bi.
The flow graph of the process Cy'1 | Cy'2 | Cy'3 | Cy'4 with initial active
transitions marked is pictured in Figure 1.19. Next, the ci actions are internalised.
Assume that Sched'4 ≡ (Cy'1 | Cy'2 | Cy'3 | Cy'4)\{c1, ..., c4}. Imagine that the
ci actions are concealed in Figure 1.19, and notice then how the tasks must be
initiated cyclically. For example, a3 can only happen once a1, and then a2, have
both happened. Moreover, no task can be reinitiated until its previous execution has
terminated. For example, a3 can not recur until b3 has happened. However, Sched'4
does not permit all possible acceptable behaviour. Put simply, action b4 cannot
happen before b1 because of the synchronization between c4 and ‾c4, meaning task
four cannot terminate before the initial task.
Milner's solution in [44] to this problem is to redefine the cycler

Cy def= a.‾c.(b.d.Cy + d.b.Cy)

and to use the same renaming functions. Let Cyi for 1 < i ≤ n be the process
(d.Cy)[ai/a, ci/c, bi/b, ci−1/d],
and let Cy1 be Cy[a1/a, c1/c, b1/b, cn/d]. The required scheduler is Schedn, the
process (Cy1 | ... | Cyn)\{c1, ..., cn}.

Exercises 1. Redefine Road and Rail from Section 1.2 as abbreviations of Cy' plus
renaming.
2. Assuming that the space of values consists of one element, draw both kinds
of transition graph for the three-place buffer

(B1 | B2 | B3)\{o1, o2}.

3. What extra condition on a renaming function f is necessary to ensure that the
transition graphs of (E | F)[f] and E[f] | F[f] be isomorphic? Do either
of the buffer and scheduler examples fulfil this condition?
4. a. Draw both kinds of transition graph for the processes Sched4 and Sched'4.
   b. Prove that Sched4 permits all, and only the acceptable, behaviour of a
      scheduler (as described earlier).
5. From Milner [44]. Construct a sorting machine from simple components for
each n ≥ 1 capable of sorting n-length sequences of natural numbers greater
than 0. It accepts exactly n numbers, one by one at in, then delivers them up
one by one in descending order at out, terminated by a 0. Thereafter, it returns
to its initial state.

1.5 More combinations of processes


In previous sections we have emphasised the process combinators of CCS. There
is a variety of process calculi dedicated to precise modelling of systems. Besides
CCS and CSP, there is ACP, due to Bergstra and Klop [5, 3], Hennessy's EPL
[26], MEIJE defined by Austry, Boudol and Simone [2, 51], Milner's SCCS [43],
and Winskel's general process algebra [62]. Although the behavioural meaning
of all the operators of these calculi can be presented using inference rules, their
conception reflects different concerns. ACP is primarily algebraic, highlighting
equations⁵. CSP was devised with a distinguished model in mind, the failures
model⁶, and MEIJE was introduced as a very expressive calculus, initiating general
results about families of transition rules that can be used to define process operators;
see Groote and Vaandrager [25]. The general process algebra in [62] has roots

⁵See Section 3.6.

⁶See Section 2.2 for the notion of failure.

in category theory. Moreover, users of process notation can introduce their own
operators according to the application at hand.
Numerous parallel operators are proposed within the calculi mentioned
above. Their transition rules are of two kinds. First, where × is parallel, is a
synchronization rule.

E × F --ab--> E' × F'
--------------------------   ...
E --a--> E'   F --b--> F'

Here, ab is the concurrent product of the component actions a and b, and ...
may be filled in with a side condition. In the case of the parallel of Section 1.2,
the actions a and b must be co-actions, and their concurrent product is the silent
action. Other rules permit components to act alone.

E × F --a--> E' × F      E × F --a--> E × F'
-------------------      -------------------
E --a--> E'              F --a--> F'

In the case of the parallel | there are no side conditions when applying these rules.
This general format covers a variety of parallel operators. At one extreme is the
case when × is a synchronous parallel (as in SCCS), when only the synchronization
rule applies, thereby forcing maximal concurrent interaction. At the other extreme
is a pure interleaving operator when the synchronization rule never applies. In
between are the parallel operators of ACP, CCS and CSP.
between are the parallel operators of ACP, ees and esp.
A different conception of synchronization underlies the parallel operator of
CSP (when data is not passed). Synchronization is "sharing" the same action. Actions
now do not have partner co-actions because multiple parallel processes may
synchronize. Each process instance in CSP has an associated alphabet consisting
of the actions that it is willing to engage in. Two processes must synchronize on
common actions belonging to both component alphabets. An alternative presentation,
which does not require alphabets, consists of introducing a family of binary
parallel operators ||K indexed by a set K of actions that have to be shared. Rules
for ||K are as follows.

E ||K F --a--> E' ||K F'
-------------------------   a ∈ K
E --a--> E'   F --a--> F'

E ||K F --a--> E' ||K F       E ||K F --a--> E ||K F'
-----------------------  a ∉ K   -----------------------  a ∉ K
E --a--> E'                   F --a--> F'

The first rule requires that both components of E ||K F must share any action in
K. The other pair allows components to proceed independently, so long as they
perform actions outside of K.
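The three rules for ||K can be sketched as a function from component transition tables to the transitions of a pair of states. The tables below are a toy fragment loosely based on the vending machine example; the encoding is ours, not the book's.

```python
def par_K(tE, tF, K, pair):
    """Transitions of E ||K F from tables tE, tF (state -> [(action, next)]).
    Actions in K must be shared by both components; other actions interleave."""
    e, f = pair
    moves = []
    for a, e2 in tE.get(e, []):
        if a in K:
            # synchronization rule: both sides perform a together
            moves += [(a, (e2, f2)) for b, f2 in tF.get(f, []) if b == a]
        else:
            moves.append((a, (e2, f)))       # left component moves alone
    for a, f2 in tF.get(f, []):
        if a not in K:
            moves.append((a, (e, f2)))       # right component moves alone
    return moves

tE = {"Ven": [("1p", "Venl"), ("2p", "Venb")]}
tF = {"Use": [("1p", "Use'")]}
# 1p must be shared; 2p is free to interleave.
assert par_K(tE, tF, {"1p"}, ("Ven", "Use")) == [
    ("1p", ("Venl", "Use'")), ("2p", ("Venb", "Use"))]
```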
Example 1 Assume that Ven is the vending machine of Section 1.1, and let Use def=
1p.little.collect1.Use be a user. The transition graph for Ven ||K Use when
K is the set {1p, little, collect1} is isomorphic to that of Ven. The following
initial transitions are allowed.

Ven ||K Use --1p--> Venl ||K little.collect1.Use
Ven ||K Use --2p--> Venb ||K Use

Adding another user does not change the possible behaviour. The process
Ven ||K Use ||K Use also has a transition graph isomorphic to that of Ven ||K Use,
as all components must synchronize on K actions. If instead K is the set
{1p, little, collect1, 2p}, then the graph of Ven ||K Use is isomorphic to that of Use,
as the initial 2p transition is blocked.

The operator ||K enforces synchronization of actions in K. In CCS, all synchronization
is silent. In CSP, silent activity is achieved using an abstraction or hiding
operator, which we represent as \\K, and whose transition rules are as follows.

E\\K --τ--> F\\K           E\\K --a--> F\\K
----------------  a ∈ K    ----------------  a ∉ K
E --a--> F                 E --a--> F

Hiding is also useful for abstracting from observable behaviour of processes that
do not contain the sharing parallel operator.
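Hiding is thus a relabelling of the K-actions into τ. A one-line sketch, with transitions encoded as (action, target) pairs (our convention):

```python
def hide(transitions, K):
    """Rules for \\\\K: actions in K become tau; all other actions are unchanged."""
    return [("tau" if a in K else a, t) for (a, t) in transitions]

assert hide([("green", "S"), ("car", "T")], {"green"}) == [("tau", "S"), ("car", "T")]
```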

Example 2 The scheduler of the previous section has to ensure that the sequence of actions
a1 ... an happens cyclically. The observable behaviour of Schedn\\{b1, ..., bn}
and of Sched'n\\{b1, ..., bn} is the infinite repetition of the sequence a1 ... an. For
example, Sched'4\\{b1, ..., b4} carries out the cycle a1 a2 a3 a4 repeatedly.
In CCS, values can be passed between ports. A more general idea is to allow
ports themselves to be passed between processes. For example, the process
in(x).x.E receives a port at in, which it may then use to synchronize on. This kind
of general mechanism permits process mobility, since links may be dynamically
altered as a system evolves. This facility is basic to the π-calculus, as developed
by Milner, Parrow and Walker [45].
There is a variety of extensions to basic process calculi for modelling real-time
phenomena, such as timeouts expressed using either action duration or delay
intervals between actions, priorities among actions or among processes, spatially
distributed systems using locations, and the description of stochastic behaviour
using probabilistic, instead of nondeterministic, choice. Some of these extensions
are useful for modelling hybrid systems that involve a mixture of discrete and
continuous behaviour, and can be found in control systems for manufacturing
(such as chemical plants).

Exercises 1. In ACP, sequential composition of processes E; F is a primitive operator. The
idea is that the behaviour of E; F is that of E followed by that of F. Define
transition rules for sequential composition. To what extent can sequential
composition be simulated within CCS using parallel composition?
2. Draw both kinds of transition graph for the following processes.
   a. Sched'4\\{b1, ..., b4}
   b. Sched4\\{b1, ..., b4}
   c. Sched'4\\{a1, ..., a4}
   d. Sched4\\{a1, ..., a4}
3. Show how the operator \\K can be defined in CCS.
4. A process E is determinate provided that, for any trace w, if E --w--> E1 and
E --w--> E2, then E1 = E2. Assume that E and F are determinate, and that
K is the set of actions common to (that is, occurring in) both E and F. Show
that E ||K F is also determinate, but that E | F need not be.

1.6 Sets of processes


Processes can also be used to capture foundational models of computation, such
as Turing machines, counter machines and parallel random-access machines. This
remains true for the following restricted process language, where P ranges over
process names, a over actions, and I over finite sets of indices.

E ::= P | Σ{ai.Ei : i ∈ I} | E1 | E2 | E\{a}

A process expression is either a name, a finite sum of process expressions, a parallel
composition of process expressions, or a restricted process expression. A (closed)
process is given as a finite family {Pi def= Ei : 1 ≤ i ≤ n} of definitions, where
all the process names in each Ei belong to the set {P1, ..., Pn}. For instance, see
the example Count below. Although process expressions such as the counter Ct0
(Figure 1.4), the register Reg0 (Section 1.1) and the slot machine SM0 (Figure 1.15)
are excluded because their definitions appeal to value passing or infinite sets of
indices, their observable behaviour can be "simulated" by processes belonging to
this restricted process language.
As an example, consider the following finite reformulation of the counter Ct0,
due to Taubner [57].

Count  def= round.Count + up.(Count1 | a.Count)\{a}
Count1 def= down.‾a.0 + up.(Count2 | b.Count1)\{b}
Count2 def= down.‾b.0 + up.(Count1 | a.Count2)\{a}

The reader is invited to draw the observable transition graph for Count and compare
it with Figure 1.4.
In the remaining chapters, we shall abstract from the behaviour of processes.
However, in some cases this requires us to define families of processes that encapsulate
the behaviour of some initial processes. This naturally leads to sets of
processes that are "transition closed." A set of processes ℰ is transition closed if,
for any process E in ℰ, and for any action a, and for any transition E --a--> F, then
F also belongs to ℰ. For instance, the set of processes appearing in a transition
graph is transition closed. In later chapters we use P to range over non-empty
transition closed sets, and we use P(E) to range over transition closed sets that
contain E.
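When a process has only finitely many reachable states, the smallest transition closed set containing it is computable by an ordinary graph search. A sketch over a finite transition table (our encoding; for infinite-state processes, such as the slot machine, this search need not terminate):

```python
def reachable(lts, start):
    """Smallest transition closed set containing start: every state reachable
    by some finite trace, found by depth-first search over a finite table."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for _, t in lts.get(s, []):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# A two-state toggle: its whole transition graph is transition closed.
lts = {"P": [("a", "Q")], "Q": [("b", "P")]}
assert reachable(lts, "P") == {"P", "Q"}
```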
There is a smallest transition closed set containing E, given by the set {F :
E --w--> F for some trace w}. However, it may be computationally difficult, and
in some cases undecidable, to determine this set. Instead, we can define larger
transition closed sets containing E inductively on the structure of E. The resultant
set is only an "estimate" of a smallest transition closed set. Consider the following
definition of the set of subprocesses Sub(E) of an initial CCS process E that does
not involve value passing or parameterisation.

Sub(a.E)   = {a.E} ∪ Sub(E)
Sub(E + F) = {E + F} ∪ Sub(E) ∪ Sub(F)
Sub(E | F) = {E | F} ∪ {E' | F' : E' ∈ Sub(E) and F' ∈ Sub(F)}
Sub(E\K)   = {E'\K : E' ∈ Sub(E)}
Sub(E[f])  = {E'[f] : E' ∈ Sub(E)}
Sub(P)     = {P} ∪ Sub(E)   if P def= E
Example 1 The set Sub(Crossing), where Crossing is defined in Figure 1.10, contains 125
elements, including the following, where K is the set {green, red, up, down}.

(Road | Rail | ‾up.down.Signal)\K
(Road | tcross.‾red.Rail | Signal)\K
(‾down.Road | Rail | red.Signal)\K

Only 12 of these elements belong to the smallest transition closed set containing
Crossing; see Figure 1.12.
The above definition of Sub(E) is transition closed (as the reader can check). As
an estimate of a smallest transition closed set, it may be very generous, as illustrated
in Example 1. A more refined definition is possible which, for instance, would
have the consequence that Sub((a.E)\{a}) always has size one. The definition of
Sub can also be extended to processes defined using parameters, or to processes
containing value passing. One method is to instantiate the parameters immediately.
For example, the following would capture the case for the input prefix.

Sub(a(x).E) = {a(x).E} ∪ ⋃{Sub(E{v/x}) : v ∈ D}

Another method is to first define the set of process "shapes," processes with free
parameters, and then to define instances of these shapes using substitution. The
definition for the input prefix would now be as follows, provided that x does not also
occur bound within E. We leave details to the reader.

Sub(a(x).E) = {a(x).E} ∪ Sub(E)
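The clauses for Sub translate directly into a recursive function. In this sketch (our encoding, with processes as nested tuples and with names, renaming and recursion omitted) note that, exactly as in the clause for parallel composition, subcomponents of E | F occur only inside pairs E' | F'.

```python
# Processes as nested tuples: ("pre", a, E), ("sum", E, F), ("par", E, F),
# ("res", E, K); any other value (such as "0") is treated as a leaf.
def sub(E):
    """The set Sub(E) of subprocess expressions, following the clauses above."""
    tag = E[0]
    if tag == "pre":
        return {E} | sub(E[2])
    if tag == "sum":
        return {E} | sub(E[1]) | sub(E[2])
    if tag == "par":
        return {E} | {("par", e, f) for e in sub(E[1]) for f in sub(E[2])}
    if tag == "res":
        return {("res", e, E[2]) for e in sub(E[1])}
    return {E}  # a leaf such as the nil process "0"

ex = ("par", ("pre", "a", "0"), ("pre", "b", "0"))
assert len(sub(ex)) == 4   # the four pairs E' | F', one of which is ex itself
```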

Exercises 1. Draw the observable transition graph for Count. How does it compare with
the graph for Ct0?
2. From Taubner [57]. Using two copies of Count, show how a two-counter
machine can be modelled within the restricted process language of this section.
3. A process E can carry out the "completed observable trace" w if E ==w==> F and
F is deadlocked, or has terminated. Assume that CT(E) is the set of completed
observable traces that E can carry out.
   a. Prove that, for any process E defined in the restricted process language
      of this section, the set CT(E) is recursively enumerable.
   b. Let L be a recursively enumerable language over a finite set of observable
      actions (which, therefore, excludes τ). Prove that there is a process E of
      the restricted process language with the property CT(E) = L.
4. Answer the following open-ended questions.
   a. What criteria should be used for assessing the expressive power of a
      process language?
   b. Should there be a "canonical" process calculus?
   c. Is there a concurrent version of the Church-Turing thesis for sequential
      programs?
5. Consider any process E without parameters or value passing.
   a. Show that Sub(E) as defined above is transition closed.
   b. List all members of Sub(Crossing) and compare that listing with the
      smallest transition closed set P(Crossing).
   c. Refine the definition of Sub(E). What size does Sub(Crossing) have
      with your refined definition?
6. Extend the definition of Sub to processes containing parameters and value
   passing in both ways suggested in the text.
2

Modalities and Capabilities

2.1 Hennessy-Milner logic I 32
2.2 Hennessy-Milner logic II 36
2.3 Algebraic structure and modal properties 39
2.4 Observable modal logic 42
2.5 Observable necessity and divergence 47

Various examples of processes have been presented so far, from a simple clock to a
scheduler. In each case, a process is an expression constructed from a few process
operators. Behaviour is determined by the transition rules for process combinators.
These rules may involve side conditions relying on extra information. For instance,
when data are involved, a partial evaluation function is used. Consequently, the
ingredients of a process description are combinators, predicates and transition rules
that allow us to deduce behaviour.

In this chapter, some abstractions from the overall behaviour of a process are
considered. Already we have contrasted finite state with infinite state processes.
The size of a process is determined by its transition graph, although under some
circumstances an estimate is provided instead using Sub. Also, observable transitions,
marked by the thicker transition arrows =a=>, have been distinguished from
their thinner counterparts -a->. We examine simple properties of processes as given
by modal logics, whose formulas express process capabilities and necessities, and
which can be used to focus on part of the behaviour of a process.

C. Stirling, Modal and Temporal Properties of Processes


© Springer Science+Business Media New York 2001

2.1 Hennessy-Milner logic I

A modal logic M is introduced for describing local capabilities of processes. Formulas
of M are built from boolean connectives and the modal operators [K] ("box
K") and ⟨K⟩ ("diamond K") for any set of actions K. The following abstract
syntax definition specifies formulas of M.

Φ ::= tt | ff | Φ₁ ∧ Φ₂ | Φ₁ ∨ Φ₂ | [K]Φ | ⟨K⟩Φ

A formula can be the constant true formula tt, the constant false formula ff, a
conjunction of formulas Φ₁ ∧ Φ₂, a disjunction of formulas Φ₁ ∨ Φ₂, or a formula
[K]Φ or ⟨K⟩Φ prefaced with a modal operator.

Formulas of M are ascribed to processes. Each process either has a modal
property, or fails to have it. When process E has property Φ, we write E ⊨ Φ, and
when it fails to have Φ, we write E ⊭ Φ. If E has Φ we often say "E satisfies Φ"
or "E realises Φ." The binary satisfaction relation between processes and formulas
is defined inductively on the structure of formulas.

E ⊨ tt
E ⊭ ff
E ⊨ Φ ∧ Ψ iff E ⊨ Φ and E ⊨ Ψ
E ⊨ Φ ∨ Ψ iff E ⊨ Φ or E ⊨ Ψ
E ⊨ [K]Φ iff ∀F ∈ {E' : E -a-> E' and a ∈ K}. F ⊨ Φ
E ⊨ ⟨K⟩Φ iff ∃F ∈ {E' : E -a-> E' and a ∈ K}. F ⊨ Φ

Every process has the property tt, whereas no process has the property ff. A
process has the property Φ ∧ Ψ when it has the property Φ and the property Ψ,
and it satisfies Φ ∨ Ψ if it satisfies one of the disjuncts. The definition of satisfaction
between processes and formulas prefaced by a modal operator appeals to the behaviour
of processes as given by the rules for transitions. A process E has the property
[K]Φ if every process which E evolves to after carrying out any action in K has
the property Φ. And E satisfies ⟨K⟩Φ if E can become a process that satisfies Φ
by carrying out an action in K. To reduce the number of brackets in modalities, we
write [a₁, ..., aₙ] and ⟨a₁, ..., aₙ⟩ instead of [{a₁, ..., aₙ}] and ⟨{a₁, ..., aₙ}⟩.
The modal logic M slightly generalises Hennessy-Milner logic, due to Hennessy
and Milner [29], because sets of actions, instead of single actions, occur in the
modalities.
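The inductive definition of satisfaction can be animated over an explicit finite transition graph. The sketch below is ours: the tuple encoding of formulas and the graph representation (a dict from state to a list of (action, successor) pairs) are assumptions, not the book's notation. Recursion is on the formula, so the checker terminates even on cyclic graphs.

```python
# A minimal sketch of the satisfaction relation E |= Phi over a finite graph.

def sat(trans, state, phi):
    """Decide state |= phi by structural recursion on the formula."""
    op = phi[0]
    if op == "tt":
        return True
    if op == "ff":
        return False
    if op == "and":
        return sat(trans, state, phi[1]) and sat(trans, state, phi[2])
    if op == "or":
        return sat(trans, state, phi[1]) or sat(trans, state, phi[2])
    if op == "box":                      # [K]Phi: every K-successor satisfies Phi
        _, K, body = phi
        return all(sat(trans, s2, body)
                   for act, s2 in trans.get(state, []) if act in K)
    if op == "dia":                      # <K>Phi: some K-successor satisfies Phi
        _, K, body = phi
        return any(sat(trans, s2, body)
                   for act, s2 in trans.get(state, []) if act in K)
    raise ValueError(op)

# The clock Cl def= tick.Cl as a one-state graph, and a formula to check on it:
# [tick](<tick>tt /\ [tock]ff).
cl = {"Cl": [("tick", "Cl")]}
phi = ("box", {"tick"},
       ("and", ("dia", {"tick"}, ("tt",)), ("box", {"tock"}, ("ff",))))
```

Here `sat(cl, "Cl", phi)` is True, agreeing with the worked derivation for Cl in Example 1 of this section.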
The simple formula ⟨tick⟩tt expresses an ability to carry out the action tick.
Process E has this property provided that there is a transition E -tick-> F. The
clock Cl from Section 1.1 has this property, whereas the vending machine Ven
of Figure 1.2 does not. In contrast, the formula [tick]ff expresses an inability
to carry out the action tick, because process E satisfies [tick]ff if E does
not have a transition E -tick-> F. These basic properties can be embedded within

modal operators and between boolean connectives. An example is the formula
[tick](⟨tick⟩tt ∧ [tock]ff)¹, which expresses that, after any tick action, it is
possible to perform tick again, but not possible to perform tock.

Example 1 Cl has the property [tick](⟨tick⟩tt ∧ [tock]ff). Applying the definition of
satisfaction, Cl has this property:

iff ∀F ∈ {E : Cl -tick-> E}. F ⊨ ⟨tick⟩tt ∧ [tock]ff
iff Cl ⊨ ⟨tick⟩tt ∧ [tock]ff
iff Cl ⊨ ⟨tick⟩tt and Cl ⊨ [tock]ff
iff ∃F ∈ {E : Cl -tick-> E} and Cl ⊨ [tock]ff
iff ∃F ∈ {Cl} and Cl ⊨ [tock]ff
iff Cl ⊨ [tock]ff
iff {E : Cl -tock-> E} = ∅
iff ∅ = ∅

On the other hand, Cl ⊭ [tick](⟨tock⟩tt ∨ [tick]ff).

The formula ⟨K⟩tt expresses a capability for carrying out some action in K,
whereas [K]ff expresses an inability to initially perform any action in K. In the
case of the vending machine Ven, a button cannot be depressed before money is
deposited, so Ven ⊨ [big, little]ff. Other interesting properties of Ven are as
follows.

• Ven ⊨ [2p]([little]ff ∧ ⟨big⟩tt): after 2p is deposited the little button
cannot be depressed whereas the big one can
• Ven ⊨ [1p, 2p][1p, 2p]ff: after a coin is entrusted no other coin (2p or 1p)
may be deposited
• Ven ⊨ [1p, 2p][big, little]⟨collectb, collectl⟩tt: after a coin is
deposited and a button is depressed, an item can be collected

Verifying that Ven has these properties is undemanding. A proof merely appeals
to the inductive definition of the satisfaction relation between a process and a
formula, which may rely on the rules for transitions. Similarly, establishing that a
process lacks a property is equally routine.

¹ We assume that ∧ and ∨ have wider scope than the modalities [K], ⟨K⟩, and that brackets are introduced
to resolve any further ambiguities as to the structure of a formula. Therefore, ∧ is the main connective of the
subformula ⟨tick⟩tt ∧ [tock]ff.

Example 2 Ven ⊭ ⟨1p⟩⟨1p, big⟩tt

iff not (∃F ∈ {E : Ven -1p-> E}. F ⊨ ⟨1p, big⟩tt)
iff Venl ⊭ ⟨1p, big⟩tt
iff not (∃F ∈ {E : Venl -a-> E and a ∈ {1p, big}})
iff {E : Venl -a-> E and a ∈ {1p, big}} = ∅
iff ∅ = ∅

Again, this demonstration appeals to the inductive definition of satisfaction
between a process and a formula.

When showing that a process has, or fails to have, a property we do not need
to build its transition graph, as the following example illustrates.
Example 3 Consider the following features of the crossing of Section 1.2.

Crossing ⊨ [train](⟨τ⟩tt ∧ ⟨car⟩[τ][τ]ff)
Crossing ⊨ [car][train][τ](⟨tcross⟩tt ∨ ⟨ccross⟩tt)
Crossing ⊭ [car][train][τ](⟨tcross⟩tt ∧ ⟨ccross⟩tt)

Proofs of these depend only on part of the behaviour of the crossing. For instance,
the first relies only on the processes E₂, E₃, E₅, E₆ and E₇ of Figure 1.12.

Actions in the modalities may contain values. For instance, the register Reg₅
from Section 1.1 can only transmit the value 5, whereas it can be overwritten by
any value k ≥ 0.

Reg₅ ⊨ ⟨read(5)⟩tt ∧ [{read(n) : n ≠ 5}]ff
Reg₅ ⊨ ⟨write(k)⟩tt

Assume that A is a universal set of actions including τ. Hence, A is the set
O ∪ {τ}, where O is the general set of observable actions described in Section 1.3.
A little notation is now introduced for sets of actions within modalities. The set −K
abbreviates A − K, and −a₁, ..., aₙ abbreviates −{a₁, ..., aₙ}. Also, assume that
− abbreviates the set −∅, which is therefore the set A. A process E has the property
[−]Φ when each member of the set {E' : E -a-> E' and a ∈ A} satisfies Φ. The
modal formula [−]ff therefore expresses deadlock or termination, an inability to
carry out any action whatever.

Within M, one can express immediate "necessity" or "inevitability." The property
"a must happen next" is given by the formula ⟨−⟩tt ∧ [−a]ff. The conjunct
⟨−⟩tt affirms that an action is possible, whereas [−a]ff states that every action
except a is impossible. After 2p is deposited, Ven must perform big, and so
Ven ⊨ [2p](⟨−⟩tt ∧ [−big]ff). Ven also has the following property, that the
third action it performs must be a collect.

⟨−⟩tt ∧ [−](⟨−⟩tt ∧ [−](⟨−⟩tt ∧ [−collectl, collectb]ff))

Exercises 1. Show the following

a. Ven ⊨ [2p, 1p]⟨big, little⟩tt
b. Ven ⊭ [2p, 1p](⟨big⟩tt ∧ ⟨little⟩tt)
c. Ven ⊨ [2p]([−big]ff ∧ ⟨−little, 2p⟩tt)
d. Cnt ⊨ [up]⟨down⟩[down]ff
e. Crossing ⊨ [train](⟨τ⟩tt ∧ ⟨car⟩[τ][τ]ff)
f. Crossing ⊨ [car][train][τ](⟨tcross⟩tt ∨ ⟨ccross⟩tt)
g. Crossing ⊭ [car][train][τ](⟨tcross⟩tt ∧ ⟨ccross⟩tt)

where Cnt and Crossing are defined in Section 1.2.

2. Show Ct₃ ⊨ ⟨down⟩⟨up⟩⟨down⟩⟨down⟩tt, but that Ct₁ fails to have this property,
when Ctᵢ is from Figure 1.4. Using induction, show that, for any i and
j, Ctᵢ ⊨ [up]ʲ⟨down⟩ʲtt, that whatever goes up may come down in equal
proportions, where [a]⁰Φ = Φ and [a]ⁿ⁺¹Φ = [a][a]ⁿΦ (and similarly for
⟨a⟩ⁿΦ).

3. Using induction on i show

Cop' ⊨ [no(i)][in(v)]⟨out(v)⟩ⁱ⁺¹tt

where Cop' is defined in Section 1.1.

4. Consider the following three vending machines.

Ven₁ def= 1p.1p.(tea.Ven₁ + coffee.Ven₁)
Ven₂ def= 1p.(1p.tea.Ven₂ + 1p.coffee.Ven₂)
Ven₃ def= 1p.1p.tea.Ven₃ + 1p.1p.coffee.Ven₃

Give modal formulas that distinguish between them: that is, find formulas Φⱼ,
1 ≤ j ≤ 3, such that Venⱼ ⊨ Φⱼ but Venᵢ ⊭ Φⱼ when i ≠ j.

5. Let Cl def= tick.Cl and Cl₂ def= tick.tick.Cl₂. Show that no modal formula
distinguishes between these clocks. That is, prove that Cl ⊨ Φ iff Cl₂ ⊨ Φ
for all modal formulas Φ.

6. A modal formula Φ distinguishes between two processes E and F if either
E ⊨ Φ and F ⊭ Φ, or E ⊭ Φ and F ⊨ Φ. Provide a modal formula that
distinguishes between Schedₐ and Sched′ₐ of Section 1.4.

7. Express as a modal formula the property "the second action must be a
(parameterised) out action," and show that Cop of Section 1.1 has this property.

8. Express as a modal formula the property "the fourth action must be τ," and
show that the slot machine SMn of Figure 1.15 has this property.

2.2 Hennessy-Milner logic II

The modal logic M, as presented in the previous section, does not contain a negation
operator ¬. The semantic clause for negation is as follows.

E ⊨ ¬Φ iff E ⊭ Φ

However, for any formula Φ of M, there is the formula Φᶜ that expresses the
negation of Φ. The complementation operator ᶜ is defined inductively as follows.

ttᶜ = ff        ffᶜ = tt
(Φ₁ ∧ Φ₂)ᶜ = Φ₁ᶜ ∨ Φ₂ᶜ        (Φ₁ ∨ Φ₂)ᶜ = Φ₁ᶜ ∧ Φ₂ᶜ
([K]Φ)ᶜ = ⟨K⟩Φᶜ        (⟨K⟩Φ)ᶜ = [K]Φᶜ

Φᶜ is the result of replacing each operator in Φ with its "dual," where tt and
ff, ∧ and ∨, and [K] and ⟨K⟩ are duals. For instance, the complement of
[tick](⟨tick⟩tt ∧ [tock]ff) is the formula ⟨tick⟩([tick]ff ∨ ⟨tock⟩tt). The
following result shows that Φᶜ expresses ¬Φ.
Proposition 1 E ⊨ Φᶜ iff E ⊭ Φ.

Proof. By induction on the structure of Φ, we show that, for any process F,
F ⊨ Φᶜ iff F ⊭ Φ. The base cases are when Φ = tt and Φ = ff. Clearly,
F ⊨ ff iff F ⊭ tt, and F ⊨ tt iff F ⊭ ff. For the induction step, assume the
result for formulas Φ₁ and Φ₂. If we can show that it also holds for Φ₁ ∧ Φ₂, Φ₁ ∨ Φ₂,
[K]Φ₁ and ⟨K⟩Φ₁, then the result is proved. Let Φ = Φ₁ ∧ Φ₂. F ⊨ (Φ₁ ∧ Φ₂)ᶜ

iff F ⊨ Φ₁ᶜ ∨ Φ₂ᶜ (by definition of ᶜ)
iff F ⊨ Φ₁ᶜ or F ⊨ Φ₂ᶜ (by clause for ∨)
iff F ⊭ Φ₁ or F ⊭ Φ₂ (by induction hypothesis)
iff F ⊭ Φ₁ ∧ Φ₂ (by clause for ∧).

The case Φ = Φ₁ ∨ Φ₂ is very similar. Let Φ = [K]Φ₁. F ⊨ ([K]Φ₁)ᶜ

iff F ⊨ ⟨K⟩Φ₁ᶜ (by definition of ᶜ)
iff ∃G. ∃a ∈ K. F -a-> G and G ⊨ Φ₁ᶜ (by clause for ⟨K⟩)
iff ∃G. ∃a ∈ K. F -a-> G and G ⊭ Φ₁ (by induction hypothesis)
iff F ⊭ [K]Φ₁ (by clause for [K]).

The final case Φ = ⟨K⟩Φ₁ is similar. □

To show that a process fails to have a property is therefore equivalent to showing
that it has the complement property. Notice that the complement of a complement
of a formula is the formula itself, (Φᶜ)ᶜ = Φ.
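The complementation operator is a one-line-per-clause structural recursion, which the following hedged Python sketch makes concrete (the tuple encoding of formulas is our own; the modal action set is carried across unchanged).

```python
# The complementation operator c on formula tuples: each operator is swapped
# with its dual (tt/ff, and/or, box/dia); the action set K is preserved.

def complement(phi):
    op = phi[0]
    if op == "tt":
        return ("ff",)
    if op == "ff":
        return ("tt",)
    if op == "and":
        return ("or", complement(phi[1]), complement(phi[2]))
    if op == "or":
        return ("and", complement(phi[1]), complement(phi[2]))
    if op == "box":
        return ("dia", phi[1], complement(phi[2]))
    if op == "dia":
        return ("box", phi[1], complement(phi[2]))
    raise ValueError(op)

# The example from the text: the complement of [tick](<tick>tt /\ [tock]ff)
# should be <tick>([tick]ff \/ <tock>tt).
phi = ("box", "tick",
       ("and", ("dia", "tick", ("tt",)), ("box", "tock", ("ff",))))
```

Running `complement` twice on any formula returns the original, mirroring the remark above that (Φᶜ)ᶜ = Φ.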
In Section 1.6 we defined the set of subprocesses Sub(E) of a process E.
This set may have infinite size. We now inductively define the set of subformulas,
Sub(Φ), of a formula Φ.


Sub(tt) = {tt}
Sub(ff) = {ff}
Sub(Φ₁ ∧ Φ₂) = {Φ₁ ∧ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub(Φ₁ ∨ Φ₂) = {Φ₁ ∨ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub([K]Φ) = {[K]Φ} ∪ Sub(Φ)
Sub(⟨K⟩Φ) = {⟨K⟩Φ} ∪ Sub(Φ)

For any formula Φ, Sub(Φ) is a finite set of formulas. For instance, if Φ is the
formula [tick](⟨tick⟩tt ∧ [tock]ff), then Sub(Φ) is the following set.

{Φ, ⟨tick⟩tt ∧ [tock]ff, ⟨tick⟩tt, [tock]ff, tt, ff}

The size of a modal formula Ψ, denoted by |Ψ|, is the number of occurrences of
tt, ff, ∧, ∨, [K] and ⟨K⟩ within it. Clearly, the number of formulas in the set
Sub(Φ) is no more than |Φ|.

A modal formula is "realizable" (or "satisfiable") if there is a process that
satisfies it. [tick](⟨tick⟩tt ∧ [tock]ff) is realizable because Cl satisfies it. On
the other hand, ⟨tick⟩(⟨tick⟩tt ∧ [tick]ff) is not realizable because a process
cannot tick, and then be able to both tick again and fail to tick. There is a
simple technique for determining whether a formula is realizable, provided it does
not contain modalities with values. First, realizability is extended to finite sets of
formulas: the finite set Γ is realizable if there is a process satisfying every Φ in Γ.
The method for deciding realizability of a set of formulas consists of reducing it to
realizability of smaller sized sets, by stripping away connectives. The size of a set
is the sum of the sizes of its formulas. An example reduction is that Γ ∪ {Φ ∧ Ψ}
becomes the smaller set Γ ∪ {Φ, Ψ}. The details are left as an exercise for the
reader.
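The subformula set and the size measure are straightforward recursions, sketched below in Python on our own tuple encoding of formulas. This also lets the bound |Sub(Φ)| ≤ |Φ| be spot-checked on the example above.

```python
# Sub(Phi) and |Phi| on a tuple encoding of formulas (encoding ours).

def subformulas(phi):
    op = phi[0]
    if op in ("tt", "ff"):
        return {phi}
    if op in ("and", "or"):
        return {phi} | subformulas(phi[1]) | subformulas(phi[2])
    if op in ("box", "dia"):
        return {phi} | subformulas(phi[2])
    raise ValueError(op)

def size(phi):
    """Number of occurrences of tt, ff, /\, \\/, [K] and <K> in Phi."""
    op = phi[0]
    if op in ("tt", "ff"):
        return 1
    if op in ("and", "or"):
        return 1 + size(phi[1]) + size(phi[2])
    return 1 + size(phi[2])

# The example formula [tick](<tick>tt /\ [tock]ff): six subformulas, size 6.
phi = ("box", "tick",
       ("and", ("dia", "tick", ("tt",)), ("box", "tock", ("ff",))))
```

On this formula, `subformulas(phi)` has exactly the six members listed in the text, and `size(phi)` is 6.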
The presence of values in modal operators suggests that a more extensive
modal logic is appropriate for property expression that permits generality of value.
One extension is to include first-order quantification over values. For instance,
Regk has the property ∀n. ⟨write(n)⟩tt, where n in the modal operator is bound
by the universal quantifier: here it is implicit that n ranges over ℕ. Quantifiers
allow value dependence to be directly expressible. Cop has the property
∀d. [in(d)](⟨out(d)⟩tt ∧ [−out(d)]ff), where d ranges over the appropriate value
space D. Semantic clauses for the quantifiers are as follows:

E ⊨ ∀x. Φ iff ∀d ∈ D. E ⊨ Φ{d/x}
E ⊨ ∃x. Φ iff ∃d ∈ D. E ⊨ Φ{d/x},

where {d/x} is substitution of d for all free occurrences of x. Predicates over values
can also be included (for example, expressing evenness of an integer). This leads
to very rich first-order modal logics. We leave the reader to spell out some of the
possibilities here.

An alternative to using quantifiers over values consists of using infinite conjunction
and disjunction. Infinitary modal logic M∞, where I ranges over arbitrary
finite and infinite indexing families, is defined as follows.

Φ ::= ∧{Φᵢ : i ∈ I} | ∨{Φᵢ : i ∈ I} | [K]Φ | ⟨K⟩Φ

The satisfaction relation between processes and ∧ and ∨ formulas is defined
below.

E ⊨ ∧{Φᵢ : i ∈ I} iff E ⊨ Φⱼ for every j ∈ I
E ⊨ ∨{Φᵢ : i ∈ I} iff E ⊨ Φⱼ for some j ∈ I

tt abbreviates ∧{Φᵢ : i ∈ ∅} and ff abbreviates ∨{Φᵢ : i ∈ ∅}. Quantified
formulas can be interpreted in M∞. For instance,

∀d. [in(d)](⟨out(d)⟩tt ∧ [−out(d)]ff)

can be expressed as

∧{[in(d)](⟨out(d)⟩tt ∧ [−out(d)]ff) : d ∈ D}.

Exercises 1. For each of the following formulas, determine its complement.

a. ⟨a₁⟩⟨a₂⟩⟨a₃⟩tt
b. ⟨a₁⟩⟨a₂⟩⟨a₃⟩[−]ff
c. [train](⟨τ⟩tt ∧ ⟨car⟩[τ][τ]ff)
d. ⟨read(5)⟩tt ∧ [{read(n) : n ≠ 5}]ff
e. ⟨−⟩tt ∧ [−](⟨−⟩tt ∧ [−](⟨−⟩tt ∧ [−collectl, collectb]ff))

2. Prove that (Φᶜ)ᶜ = Φ.

3. For each of the following, determine whether it is realizable, and when it is,
exhibit a realizer.

a. ⟨tick⟩[tock](⟨tick⟩tt ∧ [tick]ff)
b. ⟨tick⟩[tock](⟨tick⟩tt ∧ [tick]ff) ∧ [−]⟨tock⟩tt
c. ⟨tick⟩[tock](⟨tick⟩tt ∧ [tick]ff) ∧ [−]⟨−⟩tt
d. [−]([−](⟨−⟩tt ∧ [−collectl, collectb]ff) ∧ ⟨−⟩tt) ∧ ⟨−⟩tt
e. [in(5)](⟨out(5)⟩tt ∧ ⟨out(7)⟩tt)

4. Design an algorithm that decides whether a modal formula of M is realisable.

5. A modal formula is valid if every process satisfies it. Show that Φ is valid iff
Φᶜ is not realizable. Let ⊃ be the implies connective whose definition is

Φ ⊃ Ψ def= Φᶜ ∨ Ψ.

Which of the following are valid when Φ and Ψ are arbitrary modal formulas?

a. ⟨tick⟩(Φ ∨ Ψ) ⊃ (⟨tick⟩Φ ∨ ⟨tick⟩Ψ)

b. (⟨tick⟩Φ ∧ ⟨tick⟩Ψ) ⊃ ⟨tick⟩(Φ ∧ Ψ)
c. [tick](Φ ⊃ Ψ) ⊃ ([tick]Φ ⊃ [tick]Ψ)
d. ([tick]Φ ⊃ [tick]Ψ) ⊃ [tick](Φ ⊃ Ψ)

6. Two modal formulas Φ and Ψ are "equivalent" if, for all processes E, E ⊨ Φ
iff E ⊨ Ψ. Which of the following pairs are equivalent when Φ₁ and Φ₂ are
arbitrary modal formulas?

a. ⟨tick⟩(Φ₁ ∧ Φ₂), ⟨tick⟩Φ₁ ∧ ⟨tick⟩Φ₂
b. ⟨tick⟩(Φ₁ ∨ Φ₂), ⟨tick⟩Φ₁ ∨ ⟨tick⟩Φ₂
c. [tick](Φ₁ ∧ Φ₂), [tick]Φ₁ ∧ [tick]Φ₂
d. [tick](Φ₁ ∨ Φ₂), [tick]Φ₁ ∨ [tick]Φ₂
e. [tick]Φ₁, ⟨tick⟩Φ₁

7. Define first-order modal logic for value-passing processes, where the
quantifiers range over values.

2.3 Algebraic structure and modal properties

Process behaviour is chronicled through transitions. Processes also have structure,
defined as they are from combinators. An interesting issue is the extent to which
properties of processes are definable from this structure, without appealing to
transitional behaviour. The ascription of boolean combinations of properties to
processes does not immediately depend on their behaviour. For instance, E satisfies
Φ ∨ Ψ if and only if E satisfies one of the disjuncts. Therefore, it is the modal
operators that we need to concern ourselves with, and how algebraic structure
relates to them. A variety of cases is covered in the following proposition.

Proposition 1 1. If a ∉ K then a.E ⊨ [K]Φ and a.E ⊭ ⟨K⟩Φ
2. If a ∈ K then a.E ⊨ [K]Φ iff E ⊨ Φ
3. If a ∈ K then a.E ⊨ ⟨K⟩Φ iff E ⊨ Φ
4. Σ{Eᵢ : i ∈ I} ⊨ [K]Φ iff for all j ∈ I. Eⱼ ⊨ [K]Φ
5. Σ{Eᵢ : i ∈ I} ⊨ ⟨K⟩Φ iff for some j ∈ I. Eⱼ ⊨ ⟨K⟩Φ
6. If P def= E and E ⊨ Φ then P ⊨ Φ

Proof. For 1, if a ∉ K, then the set {F : a.E -b-> F and b ∈ K} = ∅, and
so a.E ⊨ [K]Φ, but a.E ⊭ ⟨K⟩Φ. Cases 2 and 3 follow from the observation that if
a ∈ K, then the set {F : a.E -b-> F and b ∈ K} = {E}. Cases 4 and 5 depend on the
following equality: {F : Σ{Eᵢ : i ∈ I} -a-> F and a ∈ K} is the same set as
{F : Eⱼ -a-> F and a ∈ K and j ∈ I}. For 6, observe that the behaviour of P
when P def= E is that of E. □
Example 1 Using Proposition 1 and the semantic clauses for boolean combinations of properties,
we can now show that the vending machine Ven of Figure 1.2 has the
property [2p](⟨−⟩tt ∧ [−big]ff) without appealing to transitional behaviour.
Using Proposition 1.6, this is established if

2p.Venb + 1p.Venl ⊨ [2p](⟨−⟩tt ∧ [−big]ff),

which reduces by 1.4 to demonstrating

2p.Venb ⊨ [2p](⟨−⟩tt ∧ [−big]ff) and
1p.Venl ⊨ [2p](⟨−⟩tt ∧ [−big]ff).

The second follows from Proposition 1.1 because 1p ∉ {2p}. Using Proposition 1.2,
the first reduces to showing

Venb ⊨ ⟨−⟩tt ∧ [−big]ff.

By Proposition 1.6, this is established if

big.collectb.Ven ⊨ ⟨−⟩tt ∧ [−big]ff.

That is, if the following pair hold

big.collectb.Ven ⊨ ⟨−⟩tt and
big.collectb.Ven ⊨ [−big]ff.

The second of these is true by Proposition 1.1, and the first is established using
Proposition 1.3 because collectb.Ven ⊨ tt.
The effect of removing the restriction \J from a process can be captured by
inductively defining an operator on modal formulas, Φ\J. The intention is that
E\J ⊨ Φ iff E ⊨ Φ\J. In the following, let J⁺ be the set J ∪ J̄.

tt\J = tt        ff\J = ff
(Φ ∧ Ψ)\J = Φ\J ∧ Ψ\J        (Φ ∨ Ψ)\J = Φ\J ∨ Ψ\J
([K]Φ)\J = [K − J⁺](Φ\J)        (⟨K⟩Φ)\J = ⟨K − J⁺⟩(Φ\J)

The operator \J removes actions from modalities, as the following example
illustrates.

[tick, tock](⟨−⟩tt ∧ [tock]ff)\{tick} = [tock](⟨−tick⟩tt ∧ [tock]ff)
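The clauses for \J can be written as a short recursion. The sketch below is ours: formulas are tuples, modal sets are explicit frozensets over a fixed universe of actions, and co-names are written with a "~" prefix (all assumptions of this encoding, not the book's notation). τ is deliberately never removed from a modality, since J⁺ contains only names and co-names.

```python
# A sketch of the \J operator on formulas: remove J+ = J u co(J) from every
# modality, leaving the boolean structure and "tau" untouched.

def co(a):
    """Co-name of an action under the "~" prefix convention (ours)."""
    return a[1:] if a.startswith("~") else "~" + a

def restrict_formula(phi, J):
    """Compute Phi\\J on the tuple encoding of formulas."""
    Jplus = set(J) | {co(a) for a in J}
    op = phi[0]
    if op in ("tt", "ff"):
        return phi
    if op in ("and", "or"):
        return (op, restrict_formula(phi[1], J), restrict_formula(phi[2], J))
    if op in ("box", "dia"):
        return (op, frozenset(phi[1]) - Jplus, restrict_formula(phi[2], J))
    raise ValueError(op)

# The example from the text, over the universe A = {tick, tock, tau}:
# [tick,tock](<->tt /\ [tock]ff)\{tick} = [tock](<-tick>tt /\ [tock]ff).
A = frozenset({"tick", "tock", "tau"})
phi = ("box", frozenset({"tick", "tock"}),
       ("and", ("dia", A, ("tt",)), ("box", frozenset({"tock"}), ("ff",))))
```

Applying `restrict_formula(phi, {"tick"})` reproduces the displayed example, with ⟨−tick⟩ appearing as the set {tock, tau}.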
The operator \J on formulas is an inverse of its application to processes, as the
next result shows.

Proposition 2 E\J ⊨ Φ iff E ⊨ Φ\J.

Proof. By induction on Φ. If Φ is tt or ff, then the result is clear. Suppose Φ
is Φ₁ ∧ Φ₂. So, E\J ⊨ Φ iff E\J ⊨ Φ₁ and E\J ⊨ Φ₂, iff, by the induction
hypothesis, E ⊨ Φ₁\J and E ⊨ Φ₂\J, and now by the inductive definition above,
iff E ⊨ Φ\J. The case when Φ is Φ₁ ∨ Φ₂ is similar. Assume Φ is [K]Ψ and
E\J ⊨ [K]Ψ, but E ⊭ [K − J⁺](Ψ\J). Therefore, E -a-> F with a ∈ K − J⁺
and F ⊭ Ψ\J. By the induction hypothesis it follows that F\J ⊭ Ψ, and because
a ∈ K − J⁺ we also know that E\J -a-> F\J. But then we have a contradiction,
because this shows that E\J ⊭ [K]Ψ. For the other direction, suppose that
E ⊨ [K − J⁺](Ψ\J), but E\J ⊭ [K]Ψ. Therefore, E\J -a-> F\J for some
a ∈ K and F\J ⊭ Ψ. But this a must belong to K − J⁺ by the transition rule for
\J, and by the induction hypothesis F ⊭ Ψ\J. Therefore, E ⊭ ([K]Ψ)\J, which is
a contradiction. The final case when Φ is ⟨K⟩Ψ is similar. □

There are similar inverse operations on formulas for renaming [f] of Section 1.4
and for hiding \\J of Section 1.5. We leave their exact definition as an
exercise.
Much more troublesome is coping with parallel composition. One idea, which
is not entirely satisfactory, is to define an "inverse" of parallel composition on formulas. For
each process F, and for each formula Φ, one defines the new formula Φ/F with
the intention that for any E, E | F ⊨ Φ iff E ⊨ Φ/F. Instead of presenting an
inductive definition of this "slicing operator" Φ/F, we illustrate a particular use
of it.
Example 2 Let Ven be the vending machine and consider the following.

Use1 def= 1p.little.Use1
K = {1p, 2p, little, big}
K⁺ = K ∪ K̄

We show that (Ven | Use1)\K ⊨ [−][−]⟨collectl⟩tt. Proposition 2 is applied
first.

(Ven | Use1)\K ⊨ [−][−]⟨collectl⟩tt iff
Ven | Use1 ⊨ ([−][−]⟨collectl⟩tt)\K iff
Ven | Use1 ⊨ [−K⁺][−K⁺]⟨collectl⟩tt iff
Ven ⊨ ([−K⁺][−K⁺]⟨collectl⟩tt)/Use1

We need to understand the formula ([−K⁺][−K⁺]⟨collectl⟩tt)/Use1. It is the
same as ([−K⁺][−K⁺]⟨collectl⟩tt)/1p.little.Use1. We want to distribute
the process through the formula. The action 1p cannot directly contribute to
the modality [−K⁺] because 1p ∈ K⁺. However, it can contribute as part of
a communication if Ven has 1p transitions. Therefore, either Ven contributes a
transition, or there is a communication between Ven and the user. Therefore, we
need to show the following pair.

Ven ⊨ [−K⁺]([−K⁺]⟨collectl⟩tt / 1p.little.Use1)
Ven ⊨ [1p]([−K⁺]⟨collectl⟩tt / little.Use1)

The first of these is derivable using Proposition 1. By the same Proposition, the
second is equivalent to the following.

Venl ⊨ ([−K⁺]⟨collectl⟩tt)/little.Use1

There is now a similar argument. The process little.Use1 can only contribute to
an action in [−K⁺] if it is part of a communication. Therefore, one needs to show
the following pair.

Venl ⊨ [−K⁺](⟨collectl⟩tt / little.Use1)
Venl ⊨ [little](⟨collectl⟩tt / Use1)

The first is derivable using Proposition 1, and by the same Proposition the second
is equivalent to the following.

collectl.Ven ⊨ ⟨collectl⟩tt / Use1

This clearly holds because collectl.Ven is able to perform collectl, and tt/F
for any process F is just the formula tt.

Exercises 1. Using Proposition 1 and the semantic clauses for boolean connectives, show
the following.

a. Cl ⊨ [tick, tock](⟨tick⟩tt ∧ [tock]ff)
b. Ven ⊨ [2p, 1p]⟨big, little⟩tt
c. Ct₀ ⊨ [up][down, up]([down]ff ∨ [round]ff)

2. Let K = {a, b, c}. What are the following formulas?

a. (⟨τ⟩[a]⟨b⟩tt ∧ [−][−]⟨c⟩tt)\K
b. ([a, b, c, d]⟨d⟩[c, d]ff ∨ [−J][−K]ff)\K

3. Define operators [f] and \\J on modal formulas so that the following hold.

a. E[f] ⊨ Φ iff E ⊨ Φ[f]
b. E\\J ⊨ Φ iff E ⊨ Φ\\J

4. Define the slicing operator /F on modal formulas. Use your definition to prove
the following without appealing to transitional behaviour.

a. Crossing ⊨ [train](⟨τ⟩tt ∧ ⟨car⟩[τ][τ]ff)
b. Crossing ⊨ [car][train][τ](⟨tcross⟩tt ∨ ⟨ccross⟩tt)
c. Crossing ⊭ [car][train][τ](⟨tcross⟩tt ∧ ⟨ccross⟩tt)

2.4 Observable modal logic

Process activity is delineated by the two kinds of transition relation distinguished
by the thickness of their arrows, -a-> and =a=>. The latter captures the operation
of observable transitions because =a=> permits silent activity before and after a
happens. The relation =a=> (see Section 1.3) was defined in terms of -a-> and the
relation =ε=> indicating zero or more silent actions.

The modal logic M does not express observable capabilities of processes because
silent actions are not accorded a special status. To overcome this, it suffices
to introduce new modalities [[ ]] and ⟨⟨ ⟩⟩ as follows.

E ⊨ [[ ]]Φ iff ∀F ∈ {E' : E =ε=> E'}. F ⊨ Φ
E ⊨ ⟨⟨ ⟩⟩Φ iff ∃F ∈ {E' : E =ε=> E'}. F ⊨ Φ

A process has the property [[ ]]Φ provided that it satisfies Φ and, after evolving
through any amount of silent activity, Φ remains true. To satisfy ⟨⟨ ⟩⟩Φ, a process
has to be able to evolve in zero or more τ transitions to a process realizing Φ.
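Over a finite transition graph, both new modalities reduce to a reachability computation: first compute the τ-closure {E' : E =ε=> E'}, then quantify over it. The Python sketch below is ours (graph encoding, "tau" label, and the clock example are assumptions of this sketch).

```python
# tau_closure computes {E' : E =eps=> E'}; [[ ]] and << >> quantify over it.
# The argument `pred` stands in for an arbitrary property Phi of states.

def tau_closure(trans, state):
    """States reachable from `state` by zero or more tau transitions."""
    seen, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for act, s2 in trans.get(s, []):
            if act == "tau" and s2 not in seen:
                seen.add(s2)
                stack.append(s2)
    return seen

def sat_eps_box(trans, state, pred):
    """E |= [[ ]]Phi, with Phi given as a predicate on states."""
    return all(pred(s) for s in tau_closure(trans, state))

def sat_eps_dia(trans, state, pred):
    """E |= << >>Phi."""
    return any(pred(s) for s in tau_closure(trans, state))

# The clock Cls def= tick.Cls + tau.0, modelled as a two-state graph.
cls = {"Cls": [("tick", "Cls"), ("tau", "Stopped")], "Stopped": []}

def can_tick(s):
    return any(act == "tick" for act, _ in cls.get(s, []))
```

For instance, `sat_eps_dia(cls, "Cls", lambda s: not can_tick(s))` is True: Cls may silently evolve to a state that cannot tick, while `sat_eps_box(cls, "Cls", can_tick)` is False for the same reason.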
Neither [[ ]] nor ⟨⟨ ⟩⟩ is definable within the modal logic M. A technique for
showing non-definability employs equivalence of formulas: two formulas Φ and
Ψ are equivalent if, for every process E, E ⊨ Φ iff E ⊨ Ψ. For instance,
⟨tick, tock⟩Φ is equivalent to ⟨tick⟩Φ ∨ ⟨tock⟩Φ, for any Φ. Two formulas
are not equivalent if there is a process that realises one, but not the other, formula.
If [[ ]] is definable in M, then for any formula Φ in M there is also a formula in M
equivalent to [[ ]]Φ (and similarly for definability of ⟨⟨ ⟩⟩). Non-definability of [[ ]]
is established if there is a Φ in M such that [[ ]]Φ is not equivalent to any formula of M.
A simple choice of Φ, namely [a]ff, suffices. A process realises [[ ]][a]ff if it is
unable to perform a after any amount of silent activity. We show that [[ ]][a]ff is
not equivalent to any formula of M. For this purpose, consider the two families of
similar processes {Divᵢ : i ∈ ℕ} and {Divᵢᵃ : i ∈ ℕ}

Div₀ def= τ.0        Divᵢ₊₁ def= τ.Divᵢ
Div₀ᵃ def= a.0        Divᵢ₊₁ᵃ def= τ.Divᵢᵃ

whose transition graphs are as follows.

··· -τ-> Divᵢ₊₁ -τ-> Divᵢ -τ-> ··· -τ-> Div₁ -τ-> Div₀ -τ-> 0
··· -τ-> Divᵢ₊₁ᵃ -τ-> Divᵢᵃ -τ-> ··· -τ-> Div₁ᵃ -τ-> Div₀ᵃ -a-> 0

Divₙ ⊨ [[ ]][a]ff for each n. On the other hand, Divₙᵃ ⊭ [[ ]][a]ff for each n,
because of the transition Divₙᵃ =ε=> Div₀ᵃ. For each formula Ψ of M, there is a
k ≥ 0 with the feature that Divₖ ⊨ Ψ iff Divₖᵃ ⊨ Ψ, and therefore [[ ]][a]ff is
not equivalent to any M formula. The crucial step here, in a strengthened form, is
demonstrated in Proposition 1, below. Recall from Section 2.2 that the size |Ψ|
is the number of occurrences of tt, ff, ∧, ∨, [K] and ⟨K⟩ within it.

Proposition 1 If Ψ ∈ M and |Ψ| = k, then for all m ≥ k, Divₘ ⊨ Ψ iff Divₘᵃ ⊨ Ψ.

Proof. By induction on k. The base case is k = 1, so Ψ is tt or ff, and clearly
the property holds. For the induction step, assume the result for all k ≤ n. Suppose
|Ψ| = n + 1. Four cases need to be dealt with. If Ψ is Ψ₁ ∧ Ψ₂ or Ψ₁ ∨ Ψ₂, then as
each component Ψᵢ, i ∈ {1, 2}, has size less than n + 1, the induction hypothesis
applies to it. It follows that Divₘ ⊨ Ψᵢ iff Divₘᵃ ⊨ Ψᵢ for all m ≥ n + 1. Now
the result follows. Otherwise, Ψ is [K]Ψ₁ or ⟨K⟩Ψ₁. We just consider the first of
these cases and leave the second as an exercise for the reader. If τ ∉ K, then for
all m ≥ 1, Divₘ ⊨ Ψ and Divₘᵃ ⊨ Ψ. So, assume τ ∈ K. As |Ψ₁| = n, by the
induction hypothesis for all m ≥ n, Divₘ ⊨ Ψ₁ iff Divₘᵃ ⊨ Ψ₁. And therefore
for all m ≥ n, Divₘ₊₁ ⊨ [K]Ψ₁ iff Divₘ₊₁ᵃ ⊨ [K]Ψ₁. □

Using the new modal operators, supplementary modalities [[K]] and ⟨⟨K⟩⟩ are
definable as follows, when K is a subset of the observable actions O.

[[K]]Φ def= [[ ]][K][[ ]]Φ        ⟨⟨K⟩⟩Φ def= ⟨⟨ ⟩⟩⟨K⟩⟨⟨ ⟩⟩Φ

Their meanings appeal to the observable transition relations =a=> in the same way
that the meanings of [K] and ⟨K⟩ appeal to the relations -a->. Process E has the
property [[K]]Φ

iff E ⊨ [[ ]][K][[ ]]Φ
iff ∀F ∈ {E' : E =ε=> E'}. F ⊨ [K][[ ]]Φ
iff ∀F ∈ {E' : E =ε=> E₁ -a-> E' and a ∈ K}. F ⊨ [[ ]]Φ
iff ∀F ∈ {E' : E =ε=> E₁ -a-> E₂ =ε=> E' and a ∈ K}. F ⊨ Φ
iff ∀F ∈ {E' : E =a=> E' and a ∈ K}. F ⊨ Φ,

and E ⊨ ⟨⟨K⟩⟩Φ iff ∃F ∈ {E' : E =a=> E' and a ∈ K}. F ⊨ Φ. As with
the modalities of Section 2.1, we write [[a₁, ..., aₙ]] and ⟨⟨a₁, ..., aₙ⟩⟩ instead of
[[{a₁, ..., aₙ}]] and ⟨⟨{a₁, ..., aₙ}⟩⟩.
The simple modal formula ⟨⟨tick⟩⟩tt expresses the observable ability to carry
out the action tick, whereas [[tick]]ff expresses an inability to tick after any
amount of internal activity. Both clocks Cl and Cls from Section 1.1, where Cls def=
tick.Cls + τ.0, have the property ⟨⟨tick⟩⟩tt, but the clock Cls may at any time
silently stop ticking, and therefore also has the property ⟨⟨tick⟩⟩[tick]ff.
Example 1 The crossing of Section 1.2 has the property that, after a car and a train approach,
one of them may cross.

[[car]][[train]](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt)

In the following, the processes Eᵢ are from Figure 1.12.

Crossing ⊨ [[car]][[train]](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt)
iff E₁ ⊨ [[train]](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt) and
    E₄ ⊨ [[train]](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt)
iff E₃ ⊨ ⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt and
    E₆ ⊨ ⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt and
    E₇ ⊨ ⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt

Both E₃ and E₇ have observable tcross transitions, and E₆ has an observable
ccross transition.
The set 0 , introduced in Section 1.3, is a universal set of observable actions
(which does not contain r). We assume the following abbreviations.
def
[-K] <1> = [0- K]
def
((-K}) <1> = ((0 - K)}

Therefore, [-] and (( -)} are abbreviations of [0] and ((O}). The modal formula
[ - ] ff hence expresses an inability to carry out an observable action, so the
= r .D~' v re al'ises it,
process D~' v def ,
Modal formulas can be used to express notions that are basic to the theory of
CSP [31]. A process can carry out the observable trace a1...an provided it has
the property ((a1))...((an))tt. A process is stable if it has no τ-transitions (and,
therefore, if it has the property [τ]ff). The formula [K]ff expresses that the
observable set of actions K is a "refusal," since a realising process is unable to
perform observable actions belonging to K. The pair (a1...an, K) is an observable
failure for a process provided it satisfies ((a1))...((an))([τ]ff ∧ [K]ff): a process
realises this formula if it can carry out the observable trace a1...an and become
a stable process that is unable to carry out observable actions belonging to K.
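For a finite-state process these notions can be checked mechanically. The following sketch is illustrative only (the LTS dict and its state names P, P1, ... are a toy example of ours, not a process from the text): it computes the weak transition relation =a=> and tests whether a pair (trace, K) is an observable failure.

```python
# A toy labelled transition system: state -> list of (action, state).
# "tau" is the internal action; all state names here are illustrative.
LTS = {
    "P":  [("tau", "P1")],
    "P1": [("in", "P2")],
    "P2": [("tau", "P3"), ("out", "P1")],
    "P3": [("out", "P1")],
}

def tau_closure(states):
    """States reachable by zero or more tau-transitions."""
    seen, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for act, t in LTS[s]:
            if act == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_step(states, a):
    """The observable transition =a=>, i.e. tau* a tau*."""
    before = tau_closure(states)
    return tau_closure({t for s in before for act, t in LTS[s] if act == a})

def is_failure(state, trace, refusal):
    """(trace, refusal) is an observable failure of state iff some state
    reachable by the observable trace is stable (no tau-transitions)
    and can perform no action in the refusal set."""
    states = {state}
    for a in trace:
        states = weak_step(states, a)
    return any(all(act != "tau" for act, _ in LTS[s]) and
               all(act not in refusal for act, _ in LTS[s])
               for s in states)

print(is_failure("P", ["in"], {"in"}))   # True: P3 is stable and refuses in
print(is_failure("P", ["in"], {"out"}))  # False: the only stable state offers out
```

The same encoding extends to the value-passing examples below by treating each in(m) and out(m) as a distinct action name.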
Example 2 The processes Cop and Protocol have the same observable failures, as given by
the following sequence of formulas.

(( ))([τ]ff ∧ [-{in(m) : m ∈ D}]ff)
((in(m)))([τ]ff ∧ [-out(m)]ff)
((in(m)))((out(m)))([τ]ff ∧ [-{in(n) : n ∈ D}]ff)

For instance, Protocol satisfies the second formula because of the following
transition,

Protocol =in(m)=> (Send1(m) | Medium | out(m).ok.Receiver)\J,

where J = {sm, ms, mr, ok}.
46 2. Modalities and Capabilities

There are two transition graphs associated with any process, as described in
Section 1.3: one built from the thin transitions -a-> and -τ->, and the other from the
observable transitions =a=> and =ε=>. The modal logic M is associated with the
first kind of graph. For the second kind, we introduce observable modal logic MO,
whose formulas are defined inductively below.

Φ ::= tt | ff | Φ1 ∧ Φ2 | Φ1 ∨ Φ2 | [K] Φ | [ ] Φ | ((K)) Φ | (( )) Φ

K ranges over subsets of O. The logic MO is closed under complement, as the
reader can ascertain (for instance, ([K] Φ)^c is ((K)) Φ^c, and ((( )) Φ)^c is [ ] Φ^c).
Exercises 1. Show that both [ ] and (( )) are definable within the infinitary modal logic M∞ of
Section 2.2.
2. Show that (( )) is not definable in M by employing a similar argument to that
used in Proposition 1, but with respect to the dual formula (( ))(a)tt.
3. The modal depth of a formula Ψ, written md(Ψ), is defined inductively as
follows.
md(tt) = 0 = md(ff)
md(Φ1 ∧ Φ2) = max{md(Φ1), md(Φ2)} = md(Φ1 ∨ Φ2)
md([K] Φ) = 1 + md(Φ) = md((K) Φ)
Show that Proposition 1 remains true if |Ψ| is replaced with md(Ψ).
4. Let M⁻ represent the family of modal formulas of M that do not contain
occurrences of box modalities [J] for any J. Show that, for any non-empty
J, the modality [J] is not definable in M⁻.
5. Prove the following pair.
a. Crossing ⊭ [car] [train] (((tcross))tt ∧ ((ccross))tt)
b. Protocol ⊨ [in(m)] (((out(m)))tt ∧ [{in(m) : m ∈ D}] ff)
6. Consider the following three vending machines.

Ven1 def= 1p.1p.(tea.Ven1 + coffee.Ven1)
Ven2 def= 1p.(1p.tea.Ven2 + 1p.coffee.Ven2)
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3

Show that Ven2 and Ven3 have the same set of observable failures, but that
Ven1 and Ven2 have different observable failures.
7. Show that (C | U)\{in, ok} and Ucop from Section 1.3 have the same MO
modal properties, but not the same M properties.
8. Show that for all Φ ∈ MO, Cop ⊨ Φ iff Protocol ⊨ Φ.
9. An MO formula is realisable if there is a process that satisfies it. Which of the
following formulas are realisable?

a. ((car))((train))(((tcross))tt ∨ ((ccross))tt)
b. [ ] ff
c. ((car))((train))(((tcross))tt ∧ ((ccross))tt)
d. ((car))((train))((( ))((ccross))tt ∧ (( ))[ccross] ff)
e. ((car))((train))((( ))((ccross))tt ∧ [ccross] ff)

2.5 Observable necessity and divergence


Within the modal logics M and MO, capabilities and observable capabilities of
processes are expressible. M also permits expression of immediate necessity (or
inevitability). The formula (-)tt ∧ [-a]ff expresses that a must be the very next
action, because (-)tt asserts that some action is possible, whereas [-a]ff states
that only a is a possible next action. In this section we examine how to express
immediate observable necessity.
A first attempt at expressing in MO the property that a must be the next ob-
servable action is ((-))tt ∧ [-a]ff, which states that some observable action is
possible and that all observable actions except a are initially impossible. However,
it leaves open the possibility that a may become excluded through silent activity.
Both clocks Cl and Cl5 satisfy ((-))tt ∧ [-tick]ff, as mentioned in the previ-
ous section. Cl5 is able to carry out an observable tick transition, Cl5 =tick=> Cl5,
and is also unable to perform any other observable action. However, this clock may
also silently break down, Cl5 -τ-> 0, and become unable to tick. This shortcoming
can be surmounted by strengthening the initial conjunct ((-))tt to [ ]((-))tt,
requiring that an observable action be possible after any amount of silent activity.
Cl5 ⊭ [ ]((-))tt because of the silent transition Cl5 -τ-> 0, whereas Cl does have
this property.
A second attempt at expressing necessity of a is [ ]((-))tt ∧ [-a]ff. Cer-
tainly, this formula expresses that the initial observable action must be a. However,
it does not guarantee that a first observable action will happen. Cl6 is another
clock, Cl6 def= tick.Cl6 + τ.Cl6, which satisfies [ ]((-))tt ∧ [-tick]ff. It
realises both conjuncts because its only observable transitions are Cl6 =ε=> Cl6
and Cl6 =tick=> Cl6. Interpreting this formula as an inevitability that tick happens
next fails to take into account the possibility that Cl6 avoids ticking by perpetually
engaging in silent activity.
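The difference between the two attempted formulas can be checked mechanically on the clocks. Below is a minimal sketch, with the clocks encoded as a dict (the encoding is ours; the state names follow the text): it evaluates [ ]((-))tt, i.e. after every run of silent transitions, some observable action must remain possible.

```python
# The three clocks as a labelled transition system; "tau" is internal.
LTS = {
    "Cl":  [("tick", "Cl")],
    "Cl5": [("tick", "Cl5"), ("tau", "0")],    # may silently break down
    "Cl6": [("tick", "Cl6"), ("tau", "Cl6")],  # may tick, or idle forever
    "0":   [],
}

def eps_closure(state):
    """States reachable by zero or more tau-transitions (the =e=> relation)."""
    seen, stack = {state}, [state]
    while stack:
        s = stack.pop()
        for act, t in LTS[s]:
            if act == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def sat_box_diamond(state):
    """state |= [ ]((-))tt: after any silent activity, some observable
    action is still possible (perhaps after further silent steps)."""
    return all(any(act != "tau" for t in eps_closure(s) for act, _ in LTS[t])
               for s in eps_closure(state))

for clock in ("Cl", "Cl5", "Cl6"):
    print(clock, sat_box_diamond(clock))  # Cl True, Cl5 False, Cl6 True
```

As the text observes, Cl6 passes this check even though it may never tick, which is what motivates the divergence-sensitive modalities below.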
Example 1 Cop and Protocol both have the property that, after an input, an output must
happen next; that is, for any m

[in(m)]([ ]((-))tt ∧ [-out(m)] ff).



In the case of Protocol, the message m may also be continually lost during
transmission, and therefore may never be transmitted. This is not possible for Cop.

A process diverges if it is able to perform internal actions forever. Cl6 diverges
because of the following endless sequence of τ-transitions,

Cl6 -τ-> Cl6 -τ-> ... -τ-> Cl6 -τ-> ...

In contrast, Cl5 does not diverge, and so is said to converge. Following Hennessy
[26], let E↑ abbreviate that E diverges, and E↓ abbreviate that E converges.
Neither convergence nor divergence is definable in the modal logics M and MO.
There is no formula Φ in these logics such that, for every process E, E ⊨ Φ
iff E↑. Consequently, we introduce another pair of modalities, [↓] and ((↑)),
analogous to [ ] and (( )), except that they contain information about divergence
and convergence.

E ⊨ [↓] Φ iff E↓ and ∀F ∈ {E' : E =ε=> E'}. F ⊨ Φ
E ⊨ ((↑)) Φ iff E↑ or ∃F ∈ {E' : E =ε=> E'}. F ⊨ Φ

A process satisfies [↓] Φ if it converges and realises [ ] Φ. Dually, a process
realises ((↑)) Φ if it diverges or satisfies (( )) Φ. Divergence and convergence are
expressible by means of these new modalities: [↓]tt expresses convergence and
its dual ((↑))ff expresses divergence.
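For a finite-state process, divergence is decidable: E↑ holds exactly when a cycle of τ-transitions is reachable from E by τ-transitions alone. A minimal sketch, reusing the illustrative dict encoding of the two clocks:

```python
# Cl5 converges (its only tau-move stops the clock); Cl6 diverges.
LTS = {
    "Cl5": [("tick", "Cl5"), ("tau", "0")],
    "Cl6": [("tick", "Cl6"), ("tau", "Cl6")],
    "0":   [],
}

def diverges(state):
    """E diverges (E up-arrow) iff an infinite tau-run exists; for a
    finite-state process that means a tau-cycle is tau-reachable."""
    done = set()  # states fully explored without finding a cycle

    def search(s, on_path):
        if s in on_path:   # closed a tau-cycle
            return True
        if s in done:
            return False
        done.add(s)
        return any(search(t, on_path | {s})
                   for act, t in LTS[s] if act == "tau")

    return search(state, frozenset())

print(diverges("Cl6"), diverges("Cl5"))  # True False
```

So [↓]tt holds of Cl5 but not Cl6, and ((↑))ff holds of Cl6 but not Cl5.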
Let MO↓ be observable modal logic together with the two new modalities:

Φ ::= tt | ff | Φ1 ∧ Φ2 | Φ1 ∨ Φ2 | [K] Φ | [ ] Φ | [↓] Φ |
      ((K)) Φ | (( )) Φ | ((↑)) Φ

Within MO↓, the strong observable inevitability that a must (and will) happen next
is expressible as

[↓]((-))tt ∧ [-a] ff.

The initial conjunct precludes the possibility that τ could occur forever.
Consequently, Cl6 ⊭ [↓]((-))tt ∧ [-tick]ff.
Example 2 The difference between Cop and Protocol as described in Example 1 is expressed
as [in(m)]([↓]((-))tt ∧ [-out(m)] ff) for any m. Protocol fails to have this
property because the message m may be continually lost during transmission, and
therefore may never be output.
Ancillary modalities can be defined in MO↓ as follows.

[↓K] Φ def= [↓][K] Φ
[K↓] Φ def= [K][↓] Φ
[↓K↓] Φ def= [↓][K][↓] Φ

Features of processes dealing with divergence, appealed to in definitions of be-
havioural refinement [26], can also be expressed as modal formulas in this extended
modal logic. For instance, that E cannot diverge throughout the observable trace
a1...an is captured as E ⊨ [↓a1↓]...[↓an↓]tt.

Exercises 1. Show that the modalities [↓] and ((↑)) are not definable in MO.
2. Show that Cop realises the property
[in(m)]([↓]((-))tt ∧ [-out(m)] ff)
and Protocol fails to have this property.
3. Prove that MO↓ is closed under complement.
4. Prove that, for all Φ in MO, 0 ⊨ Φ iff Div ⊨ Φ.
5. Prove that, for all Φ in MO↓, Ct0 ⊨ Φ iff Count ⊨ Φ.
6. Which of the following MO↓ formulas are realisable?
a. [K]([↓]((tick))tt ∧ ((↑))((tick))tt)
b. [K↓]((↑))[tick] ff
c. [K](((↑))((tick))tt ∧ [ ][tick] ff)
d. [K↓](((↑))((tick))tt ∧ [ ][tick] ff)
3

Bisimulations

3.1 Process equivalences 51
3.2 Interactive games 56
3.3 Bisimulation relations 64
3.4 Modal properties and equivalences 69
3.5 Observable bisimulations 72
3.6 Equivalence checking 77

Example processes were defined in Chapter 1, and in Chapter 2 modal logics were
introduced for expressing their capabilities. An important issue is when two
processes may be deemed to have the same behaviour. Such an abstraction can
be presented by defining an appropriate equivalence relation between processes.
In this chapter, we focus on equivalences for CCS processes defined in terms
of bisimulation relations. However, we present them using games that provide a
powerful metaphor for understanding interaction. There is also an intimate relation
between modal properties and these equivalences.

3.1 Process equivalences


Process expressions are intended to be used for describing interacting systems. So
far, the discussion has omitted criteria applicable to when two expressions may be
said, for all intents and purposes, to describe the same system. Alternatively, we can
consider grounds for differentiating process descriptions. Undoubtedly, the clock
Cl and the vending machine Ven of Section 1.1 are different. They are intended as
models of distinct kinds of objects.

C. Stirling, Modal and Temporal Properties of Processes
© Springer Science+Business Media New York 2001

At all levels of description they differ: in their
algebraic expressions, their action names, their flow graphs and their transition
graphs. A concrete manifestation of their difference is their initial capabilities.
The clock Cl can perform the (observable) action tick, whereas Ven cannot.
Syntactic differences alone should not be sufficient grounds for distinguishing
processes. It is important to allow the possibility that two process descriptions may
be equivalent, even though they may differ markedly in their level of detail. An
example is that of the two descriptions of a counter, Ct0 of Figure 1.4 and Count of
Section 1.6. An account of process equivalence has practical significance when
one views process expressions both as specifications and as descriptions of imple-
mentations. Ct0 is a specification (even the requirement specification) of a counter,
whereas the finite description Count with its very different structure can be seen
as a description of a possible implementation. Similarly, the buffer Cop can be
seen as a specification of the process Protocol of Section 1.2. In this context,
an account of process equivalence could tell us when an implementation meets its
specification.¹

Example 1 A stark description of the slot machine SMn of Figure 1.15 is the process SM'n:

SM'_i def= slot.(τ.loss.SM'_{i+1} + Σ{τ.win(y).SM'_{(i+1)-y} : 1 ≤ y ≤ i + 1}),

which carries no assumptions as to how the slot machine is to be built from separate
but interacting concurrent components.

The two counters Ct0 and Count have the same flow graph. Not only do they
have the same initial observable capabilities, but this feature is also preserved as
observable actions are performed. There is a similarity between their observable
transition graphs, a resemblance not immediately easy to define. A much simpler
case is that of the two clocks Cl def= tick.Cl and Cl2 def= tick.tick.Cl2, pictured
in Figure 3.1. Although they have different transition graphs, whatever transitions
one of these clocks makes can be matched by the other, and the resulting processes
also retain this property. An alternative basis for suggesting that these two clocks
are equivalent starts with the observation that Cl and tick.Cl should count as
equivalent expressions because Cl is defined as tick.Cl. An important principle
[Transition graphs of the two clocks: Cl has a single tick loop; Cl2 ticks between two states]

FIGURE 3.1. Two clocks

1 Similar comments could be made about refinement, where we would expect an ordering on processes.

is that, if two expressions are equivalent, then replacing one with the other in some
other expression should preserve equivalence. Replacing Cl with tick.Cl in the
expression E should result in a process expression equivalent to E. In particular, if
E is tick.Cl, then tick.tick.Cl and tick.Cl should count as equivalent. Because
particular names of processes are unimportant, this should imply that Cl and Cl2
are also equivalent.
The extensionality principle here is that an equivalence should also be a con-
gruence. That is, the equivalence should be preserved by the various process
combinators. For instance, if E and F are equivalent, then E | G should be
equivalent to F | G, and E\J should be equivalent to F\J, and so on for all
the combinators introduced in Chapter 1. If the decision component D of the slot
machine SMn breaks down, then replacing it with an equivalent component should
not affect the overall behaviour of the system (up to equivalence).
Clearly, if two processes have different initial capabilities, then they should not
be deemed equivalent. Distinguishability of processes can be extended to many
other features, such as their initial necessities, their (observable) traces, or their
completed traces.² A simple technique to guarantee that an equivalence is a congru-
ence is as follows. First, choose some simple properties as the basic distinguishable
features. Second, count two processes as equivalent if, whenever they are placed in
a process context, the resulting processes have the same basic properties. A process
context is a process expression "with a hole in it," such as (E | [ ])\J, where [ ]
is the "hole." This approach is sensitive to three important considerations. First is
the choice of what counts as a basic distinguishable property, and whether it refers
to observable behaviour as determined by the =a=> and =ε=> transitions, or with be-
haviour as presented by the single arrow transitions. Second is the choice of process
operators that are permitted in the definition of a process context. Lastly, there is
the question whether the resulting congruence can be characterized independently
of its definition as the equivalence preserved by all process contexts.
Interesting work has been done on this topic, mostly, however, with respect to
the behaviour of processes as determined by the single thin transitions -a->. Candi-
dates for basic distinguishable features include traces and completed traces (given,
respectively, by formulas of the form (a1)...(an)tt and (a1)...(an)[-]ff). There
are elegant results by Bloom et al., and by Groote and Vaandrager [7, 25, 24], that isolate
congruences for traces and completed traces. These cover very general families
of process operators whose behavioural meaning is governed by the permissible
format of their transition rules. The resulting congruences are independently de-
finable as equivalences.³ Results for observable behaviour include those for the
failures model of CSP [31], which takes the notion of observable failure as basic.

2 A trace w for E is completed if there is an F such that E -w-> F and F is unable to perform any action.
3 They include failures equivalence (expressed in terms of formulas of the form (a1)...(an)[K]ff), two-
thirds bisimulation, two-nested simulation equivalence, and bisimulation equivalence. Bisimulation equivalence
is discussed at length in later sections.

[Transition graphs of the three vending machines]

Ven1 def= 1p.1p.(tea.Ven1 + coffee.Ven1)
Ven2 def= 1p.(1p.tea.Ven2 + 1p.coffee.Ven2)
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3

FIGURE 3.2. Three vending machines

Related results are contained in the testing framework of De Nicola and Hennessy
[16, 26], where processes are tested for what they may and must do.

Example 2 Consider the three similar vending machines in Figure 3.2 (where we have left out
the names of the intermediate processes). These machines have the same
(observable) traces. Assume a user

Use def= 1p.1p.tea.ok.0,

who only wishes to drink a single tea by offering coins and, having done so, ex-
presses visible satisfaction as the action ok. For each of the three vending machines,
we can build the process (Veni | Use)\K, where K is the set {1p, tea, coffee}.
If i = 1, then there is a single completed trace τττok:

(Ven1 | Use)\K -τ-> -τ-> -τ-> -ok-> (Ven1 | 0)\K.

The user must then express satisfaction after some silent activity. In the other two
cases, there is another completed trace ττ:

(Veni | Use)\K -τ-> -τ-> (coffee.Veni | tea.ok.0)\K.

The user is then precluded from expressing satisfaction.

With respect to the failures model of CSP [31] and the testing framework of
[16, 26], Ven2 and Ven3 are equivalent. These two processes obey the same failure
formulas. Finer equivalences distinguish them on the basis that, once a coin has
been inserted in Ven3, any possible successful collection of tea is already decided.
Imagine that, after a single coin has been inserted, the resulting process is copied
for a number of users. In the case of Ven3, all these users must express satisfaction,
or all of them must be precluded from doing so. Let Rep be a process operator
that replicates successor processes. There is a single transition rule for Rep.

         E -a-> E'
----------------------
 Rep(E) -a-> E' | E'

The two processes (Rep(Ven2 | Use))\K and (Rep(Ven3 | Use))\K have dif-
ferent completed traces. The first can perform ττττok as follows, where E
abbreviates 1p.tea.Ven2 + 1p.coffee.Ven2.
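The completed traces just described can be reproduced mechanically with a very small CCS interpreter. The sketch below is our own illustrative encoding, not anything defined in the book: terms are nested tuples, and a co-action is written with a trailing apostrophe ("1p'" is the co-action of "1p").

```python
# Minimal CCS interpreter: prefix, summation, parallel composition,
# restriction, and recursively defined names. "tau" is the internal action.
DEFS = {}

def co(a):
    return a[:-1] if a.endswith("'") else a + "'"

def seq(*acts, end=("nil",)):
    """a.b. ... .end as nested prefixes."""
    term = end
    for a in reversed(acts):
        term = ("pre", a, term)
    return term

def steps(p):
    """All transitions (action, successor) of term p."""
    kind = p[0]
    if kind == "nil":
        return []
    if kind == "pre":
        return [(p[1], p[2])]
    if kind == "name":
        return steps(DEFS[p[1]])
    if kind == "sum":
        return [m for q in p[1] for m in steps(q)]
    if kind == "res":
        return [(a, ("res", q, p[2])) for a, q in steps(p[1])
                if a == "tau" or (a not in p[2] and co(a) not in p[2])]
    left, right = p[1], p[2]  # parallel composition
    moves = [(a, ("par", l, right)) for a, l in steps(left)]
    moves += [(a, ("par", left, r)) for a, r in steps(right)]
    moves += [("tau", ("par", l, r))  # synchronisation on complements
              for a, l in steps(left) if a != "tau"
              for b, r in steps(right) if b == co(a)]
    return moves

def completed_traces(p):
    """All maximal finite traces (safe here: both systems deadlock)."""
    ms = steps(p)
    if not ms:
        return {()}
    return {(a,) + t for a, q in ms for t in completed_traces(q)}

DEFS["Ven1"] = seq("1p", "1p", end=("sum", [
    seq("tea", end=("name", "Ven1")), seq("coffee", end=("name", "Ven1"))]))
DEFS["Ven2"] = seq("1p", end=("sum", [
    seq("1p", "tea", end=("name", "Ven2")),
    seq("1p", "coffee", end=("name", "Ven2"))]))
DEFS["Use"] = seq("1p'", "1p'", "tea'", "ok")

K = {"1p", "tea", "coffee"}
for v in ("Ven1", "Ven2"):
    system = ("res", ("par", ("name", v), ("name", "Use")), K)
    print(v, sorted(completed_traces(system)))
```

Running it shows Ven1 with the single completed trace τττok, while Ven2 additionally has the deadlocking trace ττ, matching the analysis above.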
(Rep(Ven2 | Use))\K
  -τ-> (E | 1p.tea.ok.0 | E | 1p.tea.ok.0)\K
  -τ-> (tea.Ven2 | tea.ok.0 | E | 1p.tea.ok.0)\K
  -τ-> (tea.Ven2 | tea.ok.0 | coffee.Ven2 | tea.ok.0)\K
  -τ-> (Ven2 | ok.0 | coffee.Ven2 | tea.ok.0)\K
  -ok-> (Ven2 | 0 | coffee.Ven2 | tea.ok.0)\K

(Rep(Ven3 | Use))\K is unable to perform this completed trace, as can be seen
from its two possible initial transitions. First is the transition

(Rep(Ven3 | Use))\K
  -τ-> (1p.tea.Ven3 | 1p.tea.ok.0 | 1p.tea.Ven3 | 1p.tea.ok.0)\K,

which must continue with two ok transitions in any completed trace. The second
is

(Rep(Ven3 | Use))\K
  -τ-> (1p.coffee.Ven3 | 1p.tea.ok.0 | 1p.coffee.Ven3 | 1p.tea.ok.0)\K,

which will not include ok transitions in any completed trace.



Exercises 1. a. For each i, 1 ≤ i ≤ 3, draw the transition graph of the process
(Rep(Veni | Use))\K.
b. Show the following.
i. (Rep(Ven1 | Use))\K ⊨ [ ]((-))tt ∧ [-ok] ff
ii. (Rep(Ven2 | Use))\K ⊭ [ ]((-))tt ∧ [-ok] ff
iii. (Rep(Ven2 | Use))\K ⊨ ((ok))[ok] ff
iv. (Rep(Ven3 | Use))\K ⊭ ((ok))[ok] ff
c. Let Γ be the set of M failure formulas (a1)...(an)[K]ff, n ≥ 0. Show
that for all Φ ∈ Γ, Ven2 ⊨ Φ iff Ven3 ⊨ Φ.
2. Let Tr(E) be the set of traces of E (that is, the set of words w ∈ A* such that
E -w-> E' for some E'). Let E =Tr F iff Tr(E) = Tr(F).
a. Show that =Tr is a congruence for CCS processes.
b. A deadlock potential for E is a trace w such that E -w-> E' and E' is
unable to perform an action (so w is a completed trace). Let DP(E) be
the set of deadlock potentials of E. Give examples of processes E and F
such that E =Tr F and DP(E) ≠ DP(F).
c. Let E =DP F iff DP(E) = DP(F). Show that =DP is not a congruence
for CCS processes.
d. Define a process operator for which =Tr is not a congruence.

3. Design a process context that distinguishes between the two vending machines
V and U below (by having different completed traces).

V def= 1p.(1p.(tea.V + coffee.V) + 1p.coffee.V)
U def= 1p.(1p.(tea.U + coffee.U) + 1p.coffee.U) + 1p.U1
U1 def= 1p.(tea.U + coffee.U)

3.2 Interactive games


Equivalences for CCS processes begin with the idea that an observer can repeatedly
interact with a process by choosing one of its available transitions. Equivalence
of processes is then defined in terms of the ability of observers to match their
selections so that they can proceed with additional corresponding choices. The
crucial difference from the approach of the previous section is that an observer
can choose a particular transition. Such choices cannot be directly simulated in

terms of process activity.⁴ These equivalences are defined in terms of bisimulation
relations that capture precisely what it is for observers to match their selections.
However, we first proceed with an alternative exposition using games that offer a
powerful image for interaction.
The equivalence game G(E0, F0), where E0 and F0 are processes, is an in-
teractive game played by two participants, players R (the refuter) and V (the
verifier), who are the observers who make choices of transitions. A play of
the game G(E0, F0) is a finite or infinite length sequence of pairs of processes,
(E0, F0) ... (Ei, Fi) .... The refuter attempts to show that the initial pair (E0, F0)
can be distinguished, whereas the verifier wishes to establish that they are equiv-
alent. Suppose an initial part of a play is the finite sequence (E0, F0) ... (Ej, Fj).
The next pair (E_{j+1}, F_{j+1}) is determined by one of the following two moves.
• Player R chooses a transition Ej -a-> E_{j+1}; then player V chooses a transition
with the same label, Fj -a-> F_{j+1}.
• Player R chooses a transition Fj -a-> F_{j+1}; then player V chooses a transition
with the same label, Ej -a-> E_{j+1}.
The play continues with more moves. The refuter always chooses first; then the
verifier, with full knowledge of the refuter's selection, chooses a transition with
the same label from the other process.
A play of a game continues until one of the players wins. As discussed in the
previous section, if two processes have different initial capabilities, then they are
clearly distinguishable. Consequently, any position (En, Fn) where one of these
processes is able to carry out an initial action that the other cannot counts as a win
for the refuter: that is, if there is an action a ∈ A and

(En ⊨ (a)tt and Fn ⊨ [a]ff) or (En ⊨ [a]ff and Fn ⊨ (a)tt).

Player R can then choose a transition, and player V will be unable to match it. We
call such positions "R-wins." A play is won by the refuter if it reaches an R-win
position. Any other play counts as a win for player V. Consequently, the verifier
wins if the play is infinite, or if the play reaches a position (En, Fn) and neither
process has an available transition. In both these circumstances, the refuter has
been unable to find a difference between the starting processes.

Example 1 The verifier wins any play of G(Cl, Cl2). A play proceeds

(Cl, Cl2) (Cl, tick.Cl2) (Cl, Cl2) ...

forever, irrespective of the component from which the refuter chooses to make her move.

4 For instance, if E has two transitions E -a-> E1 and E -a-> E2, then the observer is able to choose either
of them, but there is not a "testing" process ā.F that can guarantee this choice in the context (ā.F | E)\{a}: the
two results (F | E1)\{a} and (F | E2)\{a} are equally likely after synchronisation on a.

[Game graph: round vertices are positions from which player R moves; rectangular
vertices are positions from which player V moves; vertices encircled twice are R-wins]

FIGURE 3.3. Game graph for G(Cl, Cl5)

Example 2 In the case of G(Cl, Cl5), where Cl5 def= tick.Cl5 + tick.0, there are plays that the
refuter wins and plays that the verifier wins. If player R initially moves Cl5 -tick-> 0,
then, after her opponent makes the move Cl -tick-> Cl, the resulting position (Cl, 0)
is an R-win. If player R always chooses the transitions Cl5 -tick-> Cl5 or Cl -tick-> Cl,
then player V can avoid defeat. Figure 3.3 depicts the game graph for
G(Cl, Cl5). Round vertices are positions from which the refuter moves, and rect-
angular vertices are positions from which the verifier moves. Edges of a vertex are
the possible moves that a player can make from that vertex. A V-vertex is labelled
with the transition player R has chosen.⁵ This information constrains the choice of
move that player V can make, since she must respond with a corresponding tran-
sition from the other component. Vertices encircled twice are R-wins. The game
graph represents all possible plays of the game. It begins with a token on the initial
vertex (Cl, Cl5). A play is the movement of the token around the graph. If the
token is at an R-vertex, the refuter moves it, and if it is at a V-vertex the verifier
moves it. If the token reaches an R-win vertex, the game stops and the refuter wins.
5 We suppress the position in which this transition has been chosen, as it can easily be seen from the graph.

Example 2 shows that different plays of a game can have different winners.
Nevertheless, for each game one of the players is able to win any play, irrespective
of what moves her opponent makes. To make this precise, we introduce the notion
of a strategy. A strategy for a player is a family of rules that tell the player how to
move. For the refuter, a rule has the form "if the play so far is (E0, F0) ... (Ei, Fi),
then choose transition t," where t is either Ei -a-> E_{i+1} or Fi -a-> F_{i+1}. Because
the verifier responds to the refuter's choice of transition, a rule for player V has the
form "if the play so far is (E0, F0) ... (Ei, Fi), and player R has chosen transition
t, then choose transition t'," where t' is a corresponding transition of the other
process. However, it turns out that we only need to consider history-free strategies,
whose rules do not depend upon previous positions in the play. For player R, a rule
is therefore of the form

at position (E, F) choose transition t,

where t is either E -a-> E' or F -a-> F'. A rule for player V is

at position (E, F), when player R has chosen t, choose t',

where t is either E -a-> E' or F -a-> F' and t' is a corresponding transition of
the other process. A player uses a strategy in a play if all her moves obey the rules
in it. A strategy is a winning one if the player wins every play in which she uses
it. If a player has a winning strategy for a game, we say that the player "wins the
game."

Example 3 Player R's winning strategy for the game G(Cl, Cl5) consists of the single rule "at
(Cl, Cl5) choose Cl5 -tick-> 0." This has the effect of reducing the game graph of
Figure 3.3 to the smaller subgraph in Figure 3.4, as redundant player R choices
are removed.

Example 4 The game graph for G(Ven2, Ven3), two of the vending machines of Figure 3.2, is
pictured in Figure 3.5. A winning strategy for the refuter consists of the following
two rules (where E, G and H are from Figure 3.5).

at (Ven2, Ven3) choose Ven3 -1p-> G
at (E, G) choose E -1p-> H

The reader is invited to find another winning strategy.

[Reduced game graph: from (Cl, Cl5) player R always chooses Cl5 -tick-> 0]

FIGURE 3.4. Reduced game graph for G(Cl, Cl5)



[Game graph for G(Ven2, Ven3), with the abbreviations:]

E ≡ 1p.tea.Ven2 + 1p.coffee.Ven2
F ≡ 1p.coffee.Ven3    G ≡ 1p.tea.Ven3
H ≡ coffee.Ven2       J ≡ tea.Ven2
K ≡ coffee.Ven3       L ≡ tea.Ven3

FIGURE 3.5. Game graph for G(Ven2, Ven3)

For each game, one of the players has a winning strategy. This we shall prove
below. The proof relies on defining sets of pairs of processes iteratively, using
ordinals as indices. Ordinals are ordered in increasing size, as follows.

0, 1, ..., ω, ω + 1, ..., ω + ω, ω + ω + 1, ...

The initial limit ordinal (that is, one without an immediate predecessor) is ω, and
ω + 1 is its successor. The next limit ordinal is ω + ω.

Theorem 1 For any game G(E, F), either player R or player V has a history-free winning
strategy.

Proof. Consider the game G(E, F). The set of possible player R positions P is
the set

P = {(E', F') : ∃w ∈ A*. E -w-> E' and F -w-> F'},

where -w-> is the extended transition relation defined in Section 1.3. P contains all
possible positions of the game G(E, F) in which player R moves next. Let W ⊆ P
be the subset of positions that are R-wins. We now define the subset of positions
from which player R can force a win by eventually entering W. This set, Force, is
defined iteratively, starting with 1 and using ordinals, where λ is a limit ordinal.

Force^1 = W
Force^{α+1} = Force^α ∪ {(G, H) ∈ P : for some action a,
    ∃G'. G -a-> G' and ∀H'. if H -a-> H' then (G', H') ∈ Force^α,
    or ∃H'. H -a-> H' and ∀G'. if G -a-> G' then (G', H') ∈ Force^α}
Force^λ = ∪{Force^α : α < λ}

Lastly, we define⁶ Force as the following subset of positions.

Force = ∪{Force^α : α > 0}

If (G, H) ∈ Force, then the rank of (G, H) is the least ordinal α such that (G, H) ∈
Force^α. For each (G, H) ∈ Force, player R has a history-free winning strategy for
the game G(G, H). The strategy consists of rules of the form "if (E', F') has rank
α > 1, then choose transition t such that, whatever choice of transition player
V makes, the resulting pair of processes has lower rank." The definition of Force
guarantees that there is a choice of transition with this property.
If (G, H) ∉ Force, then player V has a history-free winning strategy, which
is to avoid the set Force. It consists of rules of the form "if (E', F') ∉ Force and
player R chooses t, then choose t' such that the resulting pair of processes does
not belong to Force." The initial pair of processes (E, F) either belongs to Force
or belongs to P - Force, meaning one of the players has a history-free winning
strategy for the game G(E, F). □

If player V wins the game G(E, F), then we say that process E is "game equiv-
alent" to process F, in which case player R cannot detect a difference in behaviour
between the processes E and F. Game equivalence is indeed an equivalence
relation.

6 The processes considered in this work have countable transition graphs, so the set P of positions is countable;
therefore, we need only consider ordinals whose cardinality is at most that of ℕ.

Proposition 1 Game equivalence between processes is an equivalence relation.

Proof. We show that game equivalence is an equivalence relation: that is, we
show it is reflexive (E is equivalent to E), symmetric (if E is equivalent to F, then
F is equivalent to E) and transitive (if E is equivalent to F and F is equivalent to
G, then E is equivalent to G).
Player V's winning strategy for G(E, E) is the "copy-cat strategy" consisting of
the rules "at (F, F), when player R has chosen t, choose t." For symmetry, suppose
that π is a history-free winning strategy for player V for the game G(E, F). Let
π' be the symmetric strategy that changes each rule "at (G, H) ..." in π to "at
(H, G) ...". Clearly, π' is a history-free winning strategy for player V for the game
G(F, E). Next, assume σ is a winning strategy for player V for G(E, F), and π is
a winning strategy for player V for G(F, G). The composition of these strategies,
π ∘ σ, is a winning strategy for player V for G(E, G). Composition is defined by
the following two closure conditions.

1. If "at (E', F'), when player R has chosen E' -a-> E'', choose t'" is in σ, and
"at (F', G'), when player R has chosen t', choose t" is in π, then "at (E', G'),
when player R has chosen E' -a-> E'', choose t" is in π ∘ σ.
2. If "at (F', G'), when player R has chosen G' -a-> G'', choose t'" is in π, and
"at (E', F'), when player R has chosen t', choose t" is in σ, then "at (E', G'),
when player R has chosen G' -a-> G'', choose t" is in π ∘ σ.

We leave as an exercise that π ∘ σ is a winning strategy for player V for the game
G(E, G). □

If E and F are finite state processes, then the proof of Theorem 1 provides a straightforward algorithm for deciding whether E is game equivalent to F. Assume that the number of processes in the transition graphs of both E and F is at most n. One first computes the set P of possible player R positions, {(E′, F′) : E′ is reachable from E and F′ is reachable from F}. The size of this set is therefore bounded by n². One then picks out Force^1, the subset W ⊆ P of R-wins. Next, one defines iteratively the sets Force^{i+1} for i ≥ 1, by adding pairs from P − Force^i obeying the requirement. The algorithm stops as soon as the set Force^i is the same set as Force^{i-1}. This set is then the set Force. It is clear that there can be at most n² iterations before this happens. If (E, F) ∉ Force, then E is game equivalent to F; otherwise, they are not game equivalent.

Exercises 1. Draw the game graphs for G(Ven1, Ven2) and G(Ven1, Ven3), where the vending machines are defined in Figure 3.2.
3.2. Interactive games 63

2. Show that the pair of vending machines V and U, below, are not game equivalent.

   V ≝ 1p.(1p.(tea.V + coffee.V) + 1p.coffee.V)

   U ≝ 1p.(1p.(tea.U + coffee.U) + 1p.coffee.U) + 1p.U1

   U1 ≝ 1p.(tea.U + coffee.U)
3. Show that player V has a winning strategy for the game

   G((C | U)\{in, ok}, Ucop′),

   where these processes are

   C ≝ in(x).out(x).ok.C
   U ≝ write(x).in(x).ok.U
   Ucop′ ≝ write(x).τ.out(x).τ.Ucop′.
4. Consider adding the following winning condition for player V: if a position is repeated (occurs earlier in a play), then player V wins the play.
   a. Show that this extra winning condition does not affect which player wins a game G(E, F).
   b. Find out what it means for a problem to be P-complete. For example, see Papadimitriou [48].
   c. Show that game equivalence between finite state processes E and F is P-complete.
5. A process F is immediately image-finite if, for each a ∈ A, the set {C : F --a--> C} is finite. F is image-finite if each F′ is immediately image-finite whenever F --w--> F′ for w ∈ A*.
   a. Show that, if the starting processes E and F of Theorem 1 are image-finite, then for the proof of the result it suffices to define Force as follows.

      Force = ∪{Force^i : i ∈ ℕ}

   b. Give an example of a pair of processes (E, F) that are not game equivalent, but that fail to have rank i for any i ∈ ℕ.
6. A pair of processes (E0, F0) is an S-game if, in any play, player R must always choose a transition from the left process, meaning only the first move is allowed. An R-win is any position (En, Fn) such that En --a--> E_{n+1}, but Fn has no available a transition. Player V wins if the play does not reach an R-win.
   a. Show that, for each S-game, one of the players has a history-free winning strategy.

   b. List all the pairs of vending machines from Ven1, Ven2, and Ven3 of the previous section for which player V has a winning strategy for the S-game.
   c. E is S-equivalent to F if player V has a winning strategy for the two S-games (E, F) and (F, E). Prove that S-equivalence is an equivalence relation. Give an example of two processes that are S-equivalent, but not game equivalent.

3.3 Bisimulation relations


When E and F are game equivalent, player V can match player R's choice of transition: if E --a--> E′, then there is a transition F --a--> F′ such that E′ and F′ are also game equivalent, and if F --a--> F′, then again there is a transition E --a--> E′ such that E′ and F′ are game equivalent. The ability to match transitions is the defining feature of a bisimulation relation. Bisimulations were introduced⁷ by Park [47] as a refinement of the iteratively defined equivalence of Hennessy and Milner [29, 42].

Definition 1 A binary relation B between processes is a bisimulation provided that, whenever (E, F) ∈ B and a ∈ A,

• if E --a--> E′ then F --a--> F′ for some F′ such that (E′, F′) ∈ B, and
• if F --a--> F′ then E --a--> E′ for some E′ such that (E′, F′) ∈ B

A binary relation between processes is a bisimulation provided that it obeys the two hereditary conditions in the definition. Simple examples of bisimulations are the identity relation and the empty relation.

Example 1 Assume Cl, Cl2 and Cl5 are the clocks of the previous section. The relation B = {(Cl, Cl2), (Cl, tick.Cl2)} is a bisimulation. For example, if Cl --tick--> Cl then Cl2 --tick--> tick.Cl2 and the resulting pair of processes belongs to B. The relation {(Cl, Cl5)} is not a bisimulation because of the transition Cl5 --tick--> 0. Adding (Cl, 0) does not rectify this because the transition Cl --tick--> Cl cannot then be matched by a transition of the process 0.
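The two hereditary conditions of Definition 1 can be checked mechanically for a finite relation. The sketch below uses an illustrative dictionary encoding of transition graphs (not notation from the text) and replays the analysis of Example 1:

```python
def is_bisimulation(relation, lts1, lts2):
    """Check both hereditary conditions for every pair in `relation`."""
    for e, f in relation:
        for a, e2 in lts1[e]:   # each E --a--> E' needs a matching F move
            if not any((e2, f2) in relation
                       for b, f2 in lts2[f] if b == a):
                return False
        for a, f2 in lts2[f]:   # each F --a--> F' needs a matching E move
            if not any((e2, f2) in relation
                       for b, e2 in lts1[e] if b == a):
                return False
    return True

# The clocks of Example 1:
cl = {"Cl": [("tick", "Cl")]}
cl2 = {"Cl2": [("tick", "tick.Cl2")], "tick.Cl2": [("tick", "Cl2")]}
cl5 = {"Cl5": [("tick", "0")], "0": []}
```

With these processes, {(Cl, Cl2), (Cl, tick.Cl2)} passes the check, while {(Cl, Cl5)} fails, and adding the pair (Cl, 0) still fails, exactly as the example argues.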

Two processes E and F are bisimulation equivalent (or bisimilar) if there is a bisimulation relation B such that (E, F) ∈ B. We write E ∼ F if E and F are bisimilar.

⁷They also occur in a slightly different form in the theory of modal logic as zig-zag relations; see Benthem [6].

Example 2 The following processes are not bisimilar: a.(b.0 + c.0) and a.b.0 + a.c.0. There cannot be a bisimulation relating the pair because it would have to include either (b.0 + c.0, b.0) or (b.0 + c.0, c.0).

Proposition 1 If {Bi : i ∈ I} is a family of bisimulations, then their union ∪{Bi : i ∈ I} is a bisimulation.

Proof. Let B be the relation ∪{Bi : i ∈ I}, and suppose (E, F) ∈ B. Therefore, (E, F) ∈ Bj for some j ∈ I. If E --a--> E′, then F --a--> F′ for some F′ and (E′, F′) ∈ Bj, and similarly if F --a--> F′, then E --a--> E′ for some E′ and (E′, F′) ∈ Bj. Therefore, in both cases (E′, F′) ∈ B. □

A corollary of Proposition 1 is that the binary relation ∼ is itself a bisimulation because it is defined as ∪{B : B is a bisimulation}. Consequently, ∼ is the largest bisimulation (with respect to subset inclusion).

Bisimulation equivalence and game equivalence coincide.
Proposition 2 E is game equivalent to F iff E ∼ F.

Proof. Assume that E is game equivalent to F. We show that E ∼ F by establishing that the relation B = {(E′, F′) : E′ and F′ are game equivalent} is a bisimulation. Suppose E′ --a--> E″; because this is a possible move by player R, we know that player V can respond with F′ --a--> F″ in such a way that (E″, F″) ∈ B, and similarly for a player R move F′ --a--> F″. For the other direction, suppose E ∼ F, so there is a bisimulation relation B such that (E, F) ∈ B. We construct a winning strategy for player V for the game G(E, F). The idea is that in any play, whatever move player R makes, player V responds with a move ensuring that the resulting pair of processes remains in the relation B. So, the winning strategy for the verifier consists of rules of the form "if (E′, F′) ∈ B when R has chosen E′ --a--> E″, then choose F′ --a--> F″ such that (E″, F″) ∈ B" and similarly for the case when the refuter chooses transitions from the other component. □

The parallel operator | and the sum operator + are both commutative and associative with respect to bisimulation equivalence. This means that the following hold for arbitrary processes E, F and G.

   E | F ∼ F | E        (E | F) | G ∼ E | (F | G)
   E + F ∼ F + E        (E + F) + G ∼ E + (F + G)

This is further justification for dropping brackets in the case of a process description having multiple parallel components, or multiple sum components, such as the description of Crossing in Section 1.2.
To show that two processes are bisimilar, it is sufficient to exhibit a bisimulation that contains them. This offers a very straightforward proof technique for bisimilarity.

Example 3 The two processes Cnt and Ct0 are bisimilar, where

   Cnt ≝ up.(Cnt | down.0)
   Ct0 ≝ up.Ct1
   Ct_{i+1} ≝ up.Ct_{i+2} + down.Ct_i    i ≥ 0.

A bisimulation that contains the pair (Cnt, Ct0) has infinite size because the processes are infinite state. Let P_i be the following families of processes for i ≥ 0 (when brackets are dropped between parallel components)

   P_0 = {Cnt | 0^j : j ≥ 0}
   P_{i+1} = {E | 0^j | down.0 | 0^k : E ∈ P_i and j ≥ 0 and k ≥ 0},

where F | 0^0 = F and F | 0^{i+1} = F | 0^i | 0. The following relation

   B = {(E, Ct_i) : i ≥ 0 and E ∈ P_i}

is a bisimulation that contains the pair (Cnt, Ct0). The proof that it is a bisimulation proceeds by case analysis. If i = 0, then (Cnt | 0^j, Ct0) ∈ B for any j ≥ 0. Because Cnt --up--> Cnt | down.0, it follows that Cnt | 0^j --up--> Cnt | down.0 | 0^j. This transition is matched by Ct0 --up--> Ct1 because Cnt | down.0 | 0^j ∈ P_1. The other case, when i > 0, is left as an exercise for the reader.

Example 4 Assume that C and U are as follows.

   C ≝ in(x).out(x).ok.C
   U ≝ write(x).in(x).ok.U

The proof that (C | U)\{in, ok} ∼ (C | C | U)\{in, ok} is given by the following bisimulation relation B.

   {((C | U)\{in, ok}, (C | C | U)\{in, ok})} ∪
   {((C | in(v).ok.U)\{in, ok}, (C | C | in(v).ok.U)\{in, ok}) : v ∈ D} ∪
   {((out(v).ok.C | ok.U)\{in, ok}, (out(v).ok.C | C | ok.U)\{in, ok}) : v ∈ D} ∪
   {((out(v).ok.C | ok.U)\{in, ok}, (C | out(v).ok.C | ok.U)\{in, ok}) : v ∈ D} ∪
   {((ok.C | ok.U)\{in, ok}, (ok.C | C | ok.U)\{in, ok})} ∪
   {((ok.C | ok.U)\{in, ok}, (C | ok.C | ok.U)\{in, ok})}

We leave the reader to check that B is indeed a bisimulation.

Bisimulation equivalence is also a congruence with respect to all the process combinators introduced in previous sections (including the operator Rep).

Proposition 3 If E ∼ F, then for any process G, for any set of actions K, for any action a and for any renaming function f,

   1. a.E ∼ a.F        2. E + G ∼ F + G        3. E | G ∼ F | G
   4. E[f] ∼ F[f]      5. E\K ∼ F\K            6. E\\K ∼ F\\K
   7. E ||K G ∼ F ||K G        8. Rep(E) ∼ Rep(F).

Proof. We show case 3 and leave the other cases for the reader to prove. The relation B = {(E | G, F | G) : E ∼ F} is a bisimulation. Assume that ((E | G), (F | G)) ∈ B and E | G --a--> E′ | G′. There are three possibilities. First, E --a--> E′ and G = G′. Because E ∼ F, we know that F --a--> F′ and E′ ∼ F′ for some F′. Therefore F | G --a--> F′ | G, and so by definition ((E′ | G), (F′ | G)) ∈ B. Next, suppose G --a--> G′ and E′ = E. So F | G --a--> F | G′, and by definition ((E | G′), (F | G′)) ∈ B. The last case is that E | G --τ--> E′ | G′ because E --a--> E′ and G --ā--> G′. However, F --a--> F′ for some F′ such that E′ ∼ F′, so F | G --τ--> F′ | G′, and therefore ((E′ | G′), (F′ | G′)) ∈ B. The argument is symmetric for a transition F | G --a--> F′ | G′. □

Bisimulation equivalence is a very fine equivalence between processes, reflecting the fact that, in the presence of concurrency, a more intensional description of process behaviour is needed than, for instance, its set of traces. For full CCS, the question whether two processes are bisimilar is undecidable. As was mentioned in Section 1.6, Turing machines can be "coded" in CCS. Let TMn be this coding of the nth Turing machine when all observable actions are hidden (using \\, which can be defined in CCS). The undecidable Turing machine halting problem is equivalent to whether TMn ∼ Div, where Div ≝ τ.Div. However, an interesting question is for what subclasses of processes bisimilarity is decidable. Clearly, this is the case for finite state processes, since there are only finitely many candidates for being a bisimulation. Surprisingly, it is also decidable for families of infinite state processes including "context-free processes"⁸ for which other equivalences are undecidable; see the survey by Hirshfeld and Moller [30]. One can also show decidability of bisimilarity for various classes of value passing processes whose data may be drawn from an infinite value space; see Hennessy and Lin [28].
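For finite state processes the decision procedure can also be organised as partition refinement, in the spirit of the approximants ∼0, ∼1, ... of exercise 9 below: start by relating all states and split blocks until nothing changes. A Python sketch under an illustrative dictionary encoding of transition graphs (state and action names are invented for the example):

```python
def bisimilarity_blocks(lts):
    """Greatest bisimulation on a finite LTS: two states end up in the
    same block iff they are bisimilar.  Each round refines the current
    partition by the actions and blocks a state can reach."""
    states = list(lts)
    block = {s: 0 for s in states}          # ~0 relates every state
    while True:
        renum, new = {}, {}
        for s in states:
            sig = frozenset((a, block[t]) for a, t in lts[s])
            new[s] = renum.setdefault(sig, len(renum))
        if new == block:                    # partition stabilised
            return block
        block = new
```

Since each round either splits a block or stops, at most one round per state is needed, matching the finitely-many-candidates argument above.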

Exercises 1. Complete the proof that the relation B in Example 3 is a bisimulation.
2. Show that the relation B of Example 4 is a bisimulation.
3. Prove directly that ∼ is an equivalence relation.
4. Assume that processes C and U are as in Example 4. Show the following

   (C | U)\{in, ok} ∼ (C^n | U)\{in, ok}

for all n ≥ 1, where C^1 = C and C^{i+1} = C^i | C.
5. Suppose that B and S are bisimulations. For each of the following, either prove that it is true, or provide a counterexample.

⁸These are the family of processes given by context-free grammars.



   a. B⁻¹ is a bisimulation
   b. B ∩ S is a bisimulation
   c. B ∪ S is a bisimulation
   d. −B is a bisimulation
   e. B ∘ S is a bisimulation

where

   B⁻¹ = {(F, E) : (E, F) ∈ B}
   −B = {(E, F) : (E, F) ∉ B}
   B ∘ S = {(E, G) : there is an F. (E, F) ∈ B and (F, G) ∈ S}
6. Prove the remaining cases of Proposition 3.
7. Define a process operator for which bisimulation equivalence is not a congruence.
8. A relation B between processes is a simulation (half of a bisimulation) provided that, whenever (E, F) ∈ B and a ∈ A, if E --a--> E′, then F --a--> F′ and (E′, F′) ∈ B for some F′. E and F are simulation equivalent provided that there are simulations B and S such that (E, F) ∈ B and (F, E) ∈ S.
   a. List all the pairs (E, F) of vending machines from Ven1, Ven2, and Ven3 of Figure 3.2 for which there is a simulation B containing (E, F).
   b. Give an example of two processes which are simulation equivalent, but not bisimilar.
   c. Show that E and F are simulation equivalent if, and only if, they are S-equivalent (defined in the exercises of the previous section).
9. For each ordinal α, the notion of α-equivalence, ∼α, is defined as follows. First, the base case: E ∼0 F for all E and F. Next, for a successor ordinal, E ∼_{α+1} F iff for any a ∈ A,

   if E --a--> E′ then for some F′. F --a--> F′ and E′ ∼α F′
   if F --a--> F′ then for some E′. E --a--> E′ and E′ ∼α F′.

Lastly, for a limit ordinal λ, E ∼λ F if, and only if, E ∼α F for all α < λ.
   a. Give an example of a pair of processes E, F such that E ∼3 F but E ≁4 F.
   b. Consider the game G(E, F) as defined in the previous section. Show that, for any possible player R position (G, H) of this game, G ≁α H iff (G, H) ∈ Force^α.
   c. Prove that E ∼ F iff E ∼α F for all α.
   d. E is image-finite if for every word w the set {E′ : E --w--> E′} has finite size. Show that, if E and F are image-finite, then E ∼ F iff E ∼n F for all n ≥ 0.

   e. Give an example of a pair of processes for which E ≁ F, but E ∼n F for all n ≥ 0.
10. Suppose E and F are finite state processes. Using the notion of α-equivalence of the previous exercise, design efficient algorithms
   a. for determining whether E ∼ F. (Hint: define a least function f : ℕ × ℕ → ℕ such that E ∼ F iff E ∼_{f(|E|, |F|)} F.)
   b. which also present a bisimulation relation containing (E, F) when E ∼ F.
How do your algorithms compare with those presented in Kannellakis and Smolka [33]?

3.4 Modal properties and equivalences


An alternative approach to defining equivalence between processes uses properties. Two processes are equivalent if they share the same properties. To understand this further, we need an accounting of properties. In Chapter 2, we introduced a variety of modal logics for describing properties of processes. Therefore, we can use modal formulas as properties. If Γ is a set of modal formulas, then the equivalence =Γ between processes, meaning "sharing the same Γ properties," is defined as follows⁹.

   E =Γ F iff {Φ ∈ Γ : E ⊨ Φ} = {Ψ ∈ Γ : F ⊨ Ψ}

One extreme case is when Γ is the empty set, and then =Γ relates all pairs of processes. Families of formulas provide the basis for defining various equivalences. For instance, if Γ consists of all formulas of the form ⟨a1⟩...⟨an⟩tt for n ≥ 0, the relation =Γ is trace equivalence. Similarly, if Γ consists of formulas ⟨a1⟩...⟨an⟩[−]ff for all n ≥ 0, the induced equivalence is completed trace equivalence. To capture observable equivalences, one uses subsets of M° formulas.
As remarked above, =∅ relates all processes. The other extreme is when Γ consists of all modal formulas defined in Chapter 2. More generally, let Γ be the set of formulas built from the constants tt and ff, the boolean connectives ∧ and ∨, and the modal operators [K], ⟨K⟩, [−], ⟨⟨−⟩⟩, [[ ]] and ⟨⟨ ⟩⟩. Γ encompasses all the modal logics defined in Chapter 2. It turns out that bisimilar processes have the same modal properties.
Proposition 1 If E ∼ F, then E =Γ F.

Proof. By induction on modal formulas Φ, we show that, for any G and H, if G ∼ H, then G ⊨ Φ iff H ⊨ Φ. The base case, when Φ is tt or ff, is

⁹Similarly, a "refinement" preorder ⊑Γ is definable as follows: E ⊑Γ F iff for all Φ ∈ Γ, if E ⊨ Φ then F ⊨ Φ.

clear. For the inductive step, the proof proceeds by case analysis. First, let Φ be Ψ1 ∧ Ψ2 and assume that the result holds for Ψ1 and Ψ2. By the definition of the satisfaction relation, G ⊨ Φ iff G ⊨ Ψ1 and G ⊨ Ψ2, iff by the induction hypothesis H ⊨ Ψ1 and H ⊨ Ψ2, and therefore iff H ⊨ Φ. A similar argument applies to Ψ1 ∨ Ψ2. Next, assume that Φ is [K]Ψ and G ⊨ Φ. Therefore, for any G′ such that G --a--> G′ and a ∈ K, it follows that G′ ⊨ Ψ. To show that H ⊨ Φ, let H --a--> H′ (with a ∈ K). However, we know that for some G′ there is the transition G --a--> G′ and G′ ∼ H′, so by the induction hypothesis H′ ⊨ Ψ, and therefore H ⊨ Φ. The other modal cases are left as an exercise for the reader. □
This result tells us that two bisimilar processes have the same capabilities, the same necessities and the same divergence potentials. Although the converse of Proposition 1 does not hold in general (see Example 1, below), it does hold in the case of a restricted set of processes. A process E is immediately image-finite if, for each a ∈ A, the set {F : E --a--> F} is finite. For each a ∈ A, E has only finitely many a-transitions. E is image-finite if every member of {F : ∃w ∈ A*. E --w--> F} is immediately image-finite. That is, a process is image-finite if all processes in its transition graph are immediately image-finite. With this restriction to image-finite processes, the converse of Proposition 1 holds.
Proposition 2 If E and F are image-finite and E =Γ F, then E ∼ F.

Proof. It suffices to prove the result for the case when Γ is the set of modal formulas M of Section 2.1. We show that the following relation is a bisimulation.

   {(E, F) : E =M F and E, F are image-finite}

But suppose not. Therefore, without loss of generality, G =M H for some G and H, and G --a--> G′ for some a and G′, but G′ ≠M H′ for all H′ such that H --a--> H′. There are two possibilities. First, the set {H′ : H --a--> H′} is empty. But G ⊨ ⟨a⟩tt because G --a--> G′, and H ⊭ ⟨a⟩tt, which contradicts that G =M H. Next, the set {H′ : H --a--> H′} is non-empty. However, it is finite because of image-finiteness, and therefore assume it is {H1, ..., Hn}. Assume G′ ≠M Hi for each i : 1 ≤ i ≤ n, meaning there are formulas Φ1, ..., Φn such that G′ ⊨ Φi and Hi ⊭ Φi. (Here we use the fact that M is closed under complement; see Section 2.2.) Let Ψ be the formula Φ1 ∧ ... ∧ Φn. Clearly G′ ⊨ Ψ and Hi ⊭ Ψ for each i, as it fails the ith component of the conjunction. Therefore, G ⊨ ⟨a⟩Ψ because G --a--> G′, and H ⊭ ⟨a⟩Ψ because each Hi fails to have property Ψ. But this contradicts that G =M H. Therefore, the relation =M between image-finite processes is a bisimulation. □

Clearly if E =Γ F and Δ ⊆ Γ then E =Δ F. Therefore, Proposition 1 remains true when Γ is any subset of modal formulas, including the set M. Proposition 2, as illustrated in its proof, holds when Γ is also the set M. Under this restriction, these two results are known as the "modal characterization of bisimulation equivalence" due to Hennessy and Milner [29].

Example 1 The need for image-finiteness in Proposition 2 is illustrated by the following example. Consider the following family of clocks Cl^i for i > 0,

   Cl^1 ≝ tick.0
   Cl^{i+1} ≝ tick.Cl^i    i ≥ 1

and the clock Cl from Section 1.1. Let E be the process Σ{Cl^i : i ≥ 1}, and let F be E + Cl. The processes E and F are not bisimilar. The transition F --tick--> Cl cannot be matched by any transition E --tick--> Cl^j, j ≥ 1, because Cl ≁ Cl^j. On the other hand, E =M F. This follows from the observation that, for any Φ, Cl ⊨ Φ iff ∃j ≥ 0. ∀k ≥ j. Cl^k ⊨ Φ (which is proved later in Section 4.1).

There is an unrestricted characterization of bisimulation equivalence in the case of the infinitary modal logic M∞ of Section 2.2. The proof is left as an exercise for the reader.

Proposition 3 E ∼ F iff E =M∞ F.

A variety of the process equivalences in the linear and branching time spectrum, as summarized by Glabbeek in [22], can be presented in terms of having the same modal properties drawn from sublogics of M (when restricted to image-finite processes). Also, these equivalences can often be presented game-theoretically by imposing restrictions on possible next moves in a play.

Exercises 1. Recall the definition of α-equivalence from the exercise of the previous section. Consider the restricted case when α ∈ ℕ. First, the base case: E ∼0 F for all E and F. Next, for a successor, E ∼_{i+1} F iff for any a ∈ A,

   if E --a--> E′ then for some F′. F --a--> F′ and E′ ∼i F′
   if F --a--> F′ then for some E′. E --a--> E′ and E′ ∼i F′.

Another idea is that of modal depth (defined in the exercises of Section 2.4). The modal depth of an M formula is the maximum embedding of modal operators within it. We let md(Φ) be the modal depth of Φ, defined as follows:

   md(tt) = 0 = md(ff)
   md(Φ ∧ Ψ) = max{md(Φ), md(Ψ)} = md(Φ ∨ Ψ)
   md([K]Φ) = 1 + md(Φ) = md(⟨K⟩Φ)

Let Mn be the set of modal M formulas Φ such that md(Φ) ≤ n.
   a. Prove that E ∼n F iff E =Mn F.
   b. Let E and F be arbitrary image-finite processes, and assume that E ≁ F. Present a method that constructs a formula Φ ∈ M distinguishing between E and F, that is, for which E ⊨ Φ and F ⊭ Φ.

2. Prove Proposition 3.
3. Give a formula Φ of M∞ such that E ⊨ Φ and F ⊭ Φ when E and F are the processes from Example 1.
4. Let Γ be the subset of M formulas that do not contain any occurrence of a [K] modality. If E and F are image-finite, prove that E =Γ F iff E and F are simulation equivalent (defined in an exercise of the previous section).
5. A relation B between processes is a 2/3-bisimulation (see Larsen and Skou [37]) provided that, whenever (E, F) ∈ B and a ∈ A,

   if E --a--> E′ then for some F′. F --a--> F′ and (E′, F′) ∈ B
   if F --a--> F′ then for some E′. E --a--> E′.

Let Γ be the subset of M formulas with the restriction that, for any subformula [K]Ψ, the formula Ψ is ff. Prove the following for image-finite E and F: E =Γ F iff there are 2/3-bisimulations B and S with (E, F) ∈ B and (F, E) ∈ S.
6. Let Γ be the subset of M formulas that do not contain an occurrence of a [K] modality within the scope of a ⟨J⟩ modality. Provide a definition of an interactive game G′(E, F) such that player V wins G′(E, F) iff E =Γ F, assuming that E and F are image-finite.

3.5 Observable bisimulations


Game equivalence and bisimulation equivalence, as we have seen, coincide. Moreover, two equivalent processes have the same modal properties. Conversely, if they are image-finite and have the same M modal properties, then they are bisimilar. There are three different notions here: games, bisimulations and M properties. Not one of this trio abstracts from the silent action τ because each appeals to the family of transition relations {--a--> : a ∈ A}. By consistently replacing this set with the family of observable transitions, as defined in Section 1.3, these notions uniformly abstract from τ. Observable modal logic M° was defined in Section 2.4 with modalities [[K]], [[ ]], ⟨⟨K⟩⟩ and ⟨⟨ ⟩⟩. Observable games and observable bisimulation relations that appeal to the thicker transition relations ==a==>, a ∈ O ∪ {ε}, are defined below.
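The thicker relations can be computed from the ordinary transitions by closing under silent steps: zero or more τ moves, then (for an observable a) one a move, then zero or more τ moves. A Python sketch, where τ is spelled "tau" and ε is spelled "eps" (illustrative conventions, as is the dictionary encoding of transition graphs):

```python
def weak_successors(lts, state, action, tau="tau"):
    """States s' with  state ==action==> s'  in the thick relation."""
    def tau_closure(frontier):
        seen, stack = set(frontier), list(frontier)
        while stack:
            s = stack.pop()
            for b, t in lts[s]:
                if b == tau and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    before = tau_closure({state})
    if action == "eps":          # ==eps==> is just tau*
        return before
    after = {t for s in before for b, t in lts[s] if b == action}
    return tau_closure(after)
```

For instance, with P --tau--> Q --a--> R --tau--> S, the thick a-successors of P are {R, S}, and its ε-successors are {P, Q}.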
A play of the observable game G°(E0, F0) is a finite or infinite length sequence of pairs (E0, F0) ... (Ei, Fi) played by the refuter R and the verifier V. After an initial part of a play (E0, F0) ... (Ej, Fj), the next pair of processes is determined by one of the following two moves, where a ∈ O ∪ {ε}:

• Player R chooses a transition Ej ==a==> E_{j+1}; then player V chooses a transition with the same label Fj ==a==> F_{j+1}

• Player R chooses a transition Fj ==a==> F_{j+1}; then player V chooses a transition with the same label Ej ==a==> E_{j+1}

The play continues with additional moves.

A position (En, Fn) in which one of the processes is able to perform an initial observable action that the other cannot is an R-win. A play is won by player R if the play reaches an R-win. Any play that fails to reach such a position counts as a win for player V: this is equivalent to the play having infinite length, because player R can always make a move, given that the empty transition Ej ==ε==> Ej or Fj ==ε==> Fj is available.
As in Section 3.2, a history-free strategy for a player is a set of rules, independent of previous moves, that tell the player how to move. For the refuter, a rule has the form "at position (E, F) choose transition t," where t is either E ==a==> E′ or F ==a==> F′. For the verifier, a rule has the form "at position (E, F) when player R has chosen t, choose t′," where t′ is the corresponding transition of the other process from that of t. A player uses a strategy in a play if all her moves obey the rules in it. A strategy is winning if the player wins every play in which she uses it. For every game G°(E, F), one of the players has a history-free winning strategy (whose proof is the same as that of Theorem 1 of Section 3.2, except for the use of the thicker transitions). Two processes E and F are "observationally game equivalent" if player V has a winning strategy for G°(E, F).

Example 1 The processes (C | U)\{in, ok} and Ucop from Section 1.3 are observationally game equivalent. An example play is in Figure 3.6, where the refuter moves from the round vertices and the verifier from the rectangular vertices. The reader is invited to explore other possible plays.

Underpinning observational game equivalence is the existence of an observable bisimulation relation, whose definition is as in Section 3.3, except with respect to observable transitions ==a==>.

Definition 1 A binary relation B between processes is an observable bisimulation provided that, whenever (E, F) ∈ B and a ∈ O ∪ {ε},

• if E ==a==> E′ then F ==a==> F′ for some F′ such that (E′, F′) ∈ B, and
• if F ==a==> F′ then E ==a==> E′ for some E′ such that (E′, F′) ∈ B

E and F are observably bisimilar, written E ≈ F, if there is an observable bisimulation B with (E, F) ∈ B. The relation ≈ has many properties in common with ∼. It is an equivalence relation. The union of a family of observable bisimulations is also an observable bisimulation (compare Proposition 1 of Section 3.3), and therefore ≈ is itself an observable bisimulation. Observable bisimulation equivalence and observable game equivalence also coincide.
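For finite-state processes, this coincidence means observable equivalence can be decided exactly as in Section 3.2, running the Force-set iteration over the thick transitions. A self-contained Python sketch (one dictionary-encoded LTS holds both processes; τ is spelled "tau" and the empty action "eps" — illustrative conventions, not the book's):

```python
from itertools import product

def weakly_bisimilar(lts, e0, f0, tau="tau"):
    """Decide observable bisimilarity of two states of a finite LTS by
    iterating the set of refuter-winning positions over ==a==> moves."""
    def tau_closure(frontier):
        seen, stack = set(frontier), list(frontier)
        while stack:
            s = stack.pop()
            for b, t in lts[s]:
                if b == tau and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    def weak(s, a):                       # states s' with  s ==a==> s'
        before = tau_closure({s})
        if a == "eps":
            return before
        return tau_closure({t for p in before for b, t in lts[p] if b == a})

    obs = {a for moves in lts.values() for a, _ in moves if a != tau}
    obs.add("eps")
    positions = set(product(lts, lts))
    force = set()                         # refuter-winning positions
    while True:
        new = set()
        for e, f in positions - force:
            # R wins at (e, f) if some thick move can only be matched
            # inside the current force set.
            if any(all((e2, f2) in force for f2 in weak(f, a))
                   for a in obs for e2 in weak(e, a)) or \
               any(all((e2, f2) in force for e2 in weak(e, a))
                   for a in obs for f2 in weak(f, a)):
                new.add((e, f))
        if not new:
            break
        force |= new
    return (e0, f0) not in force
```

On the pair τ.a.0 and a.0 it answers yes; adding a summand that the initial τ can preempt makes it answer no, anticipating Example 3 below.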
Proposition 1 E is observationally game equivalent to F iff E ≈ F.

[Figure 3.6. Game play: a play of the observable game between E = (C | U)\{in, ok} and F = Ucop, with intermediate positions E′ = (C | in(v).ok.U)\{in, ok} and F′ = out(v).Ucop.]

The proof of this result is the same as for Proposition 2 of Section 3.3, except with respect to observable transitions. If processes are bisimilar, then they are also observably bisimilar, as the next result shows.

Proposition 2 If E ∼ F, then E ≈ F.

Proof. It suffices to show that the relation ∼ is an observable bisimulation. The details are left to the reader. □

A direct proof that two processes are observably bisimilar consists of exhibiting an observable bisimulation relation containing them.

Example 2 We show that Protocol ≈ Cop by exhibiting an observable bisimulation which contains them. Let B be the following relation.

   {(Protocol, Cop)} ∪
   {((Send1(m) | Medium | ok.Receiver)\J, Cop) : m ∈ D} ∪
   {((sm(m).Send1(m) | Medium | Receiver)\J, out(m).Cop) : m ∈ D} ∪
   {((Send1(m) | Med1(m) | Receiver)\J, out(m).Cop) : m ∈ D} ∪
   {((Send1(m) | Medium | out(m).ok.Receiver)\J, out(m).Cop) : m ∈ D} ∪
   {((Send1(m) | ms.Medium | Receiver)\J, out(m).Cop) : m ∈ D}

The reader is invited to establish that B is an observable bisimulation.

There is also an intimate relationship between observable bisimulation equivalence and having the same properties of the observable modal logic M° of Section 2.4. The following result is the observable correlate of Proposition 1 of Section 3.4. Its proof, which is left as an exercise, is by induction on M° formulas.

Proposition 3 If E ≈ F, then E =M° F.
This result is not true if we include the modalities [K] and ⟨K⟩, or the divergence sensitive modalities of Section 2.5. The converse of Proposition 3 holds for observably image-finite processes. A process E is immediately observably image-finite if, for each a ∈ O ∪ {ε}, the set {F : E ==a==> F} is finite, and E is observably image-finite if each member of the set {F : ∃w ∈ (O ∪ {ε})*. E ==w==> F} is immediately observably image-finite.

Proposition 4 If E and F are observably image-finite and E =M° F, then E ≈ F.


The proof ofthis result is very similar to the proof ofProposition 2 ofSection 3.4,
except that one appeals to observable transitions.
So far there is a smooth passage from results based on transitions ~ to similar
results which use observable transitions ~ . There is one important exception,
Proposition 3, case 2 of Section 3.3. Observable bisimilarity is not a congruence
with respect to the + operator because of the initial preemptive power of r , The
two processes E and r. E are observably bisimilar, but for many instances of F
the processes E + F and r . E + F are not equivalent.
Example 3 The two processes r .2p.O and 2p .O are observably bisimilar, but r .2p.O + lp.O is
not equivalent to 2p .O + lp.O. For instance, the first process in this pair has the
property (( )} [1p] ff, which the second fails.
In CCS [29, 44], observable equivalence ≈c is defined as the largest subset of ≈ that is also a congruence¹⁰. Observable bisimilarity is a congruence for all

¹⁰For instance, E ≈c F implies E ≈ F and for all G, E + G ≈ F + G.



the operators¹¹ of CCS, except sum. It therefore turns out that the equivalence ≈c can be described independently of process contexts in terms of transitions; for it is only the initial preemptive τ transition that causes problems.

Proposition 5 E ≈c F iff
1. E ≈ F, and
2. if E --τ--> E′, then F --τ--> F1 ==ε==> F′ and E′ ≈ F′ for some F1 and F′, and
3. if F --τ--> F′, then E --τ--> E1 ==ε==> E′ and E′ ≈ F′ for some E1 and E′.

This Proposition can be viewed as the criterion for when E ≈c F holds. If E and F are initially unable to perform a silent action (as is the case with Protocol and Cop of Example 2), E ≈ F implies E ≈c F.
There is also a finer observable bisimulation equivalence, called "branching bisimulation equivalence", which is due to Glabbeek and Weijland [23]. Observable bisimilarity and its congruence are not sensitive to divergence. So, they do not preserve the strong necessity properties discussed in Section 2.5. However, it is possible to define equivalences that take divergence into account [26, 31, 61].

Exercises 1. Prove that ≈ is an equivalence relation.
2. Let Cross be the following simple crossing

   Cross ≝ train.tcross.Cross + car.ccross.Cross

and let Crossing be as in Figure 1.10. Using games, show that these two crossings are not observably game equivalent.
3. Using games, show that SM′n ≈ SMn, where SM′n is defined in Section 3.1 and SMn is as in Figure 1.15.
4. Show that (User | Cop)\{in} is not observably game equivalent to Ucop, where User and Cop are as in Sections 1.1 and 1.2.
5. Prove Propositions 2 and 3 above.
5. Prove Propositions 2 and 3 above.
6. Prove the following result. If E ~ F, then for any process G, and set of
observable actions K , action a, and renaming function f, the following an
hold .
a .E ~a .F EIG~FIG
E[f] ~ F[f] E\K ~ F\K
E\\K ~ F\\K EIIKG ~ FIIKG
Rep(E) ~ Rep(F)
7. Prove Proposition 4.
8. Show that if E ∼ F, then E ≈c F.
9. Prove Proposition 5.
10. Show that Sched ≉ Sched′, where these processes are defined in Section 1.4. However, show that

   Sched\\{b1, ..., b4} ≈ Sched′\\{b1, ..., b4}.

11. For a ∈ A, let â be a if a ≠ τ, and let τ̂ be ε. A binary relation B between processes is an ob bisimulation just in case whenever (E, F) ∈ B and a ∈ A,
   a. if E --a--> E′ then F ==â==> F′ for some F′ such that (E′, F′) ∈ B, and
   b. if F --a--> F′ then E ==â==> E′ for some E′ such that (E′, F′) ∈ B.
Two processes are ob equivalent, denoted by ≈′, if they are related by an ob bisimulation relation. Prove that ≈ = ≈′.
12. Prove that Ct0 ≈c Count, where these processes are given in Figure 1.4 and Section 1.6.
13. Extend the modal logic M° so that two image-finite processes have the same modal properties if, and only if, they are observably congruent.

¹¹More generally, only case 2 of Proposition 3 of Section 3.3 fails for ≈.

3.6 Equivalence checking


A direct proof that two processes are bisimilar, or observably equivalent, is to
exhibit an appropriate bisimulation relation that contains them. Examples in Sec-
tions 3.3 and 3.5 show the proof technique. In the case that the processes are finite
state, this can be done automatically. A variety of tools include this capability,
among them the Edinburgh Concurrency Workbench [14].
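To illustrate how such a check can be mechanised, here is a minimal sketch (not the Workbench's actual algorithm) of the naive approach for a finite transition system: start from the full relation over the states and repeatedly delete pairs that violate the bisimulation transfer conditions. The tiny transition system below is an illustrative stand-in.

```python
# Naive bisimilarity check for a finite-state transition system: start from
# the full relation and delete pairs violating the transfer conditions
# until the relation stabilises at the largest bisimulation.
def bisimilar(states, trans, p, q):
    # trans: set of (source, action, target) triples
    def succ(s):
        return [(a, t) for (src, a, t) in trans if src == s]
    rel = {(x, y) for x in states for y in states}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(rel):
            # every move of x must be matched by y, and vice versa
            fwd = all(any(b == a and (t, u) in rel for (b, u) in succ(y))
                      for (a, t) in succ(x))
            bwd = all(any(b == a and (t, u) in rel for (b, t) in succ(x))
                      for (a, u) in succ(y))
            if not (fwd and bwd):
                rel.discard((x, y))
                changed = True
    return (p, q) in rel

# Cl = tick.Cl and Cl2 = tick.tick.Cl2 are bisimilar; a stopped clock is not.
states = {"Cl", "Cl2", "Cl2'", "Stop"}
trans = {("Cl", "tick", "Cl"),
         ("Cl2", "tick", "Cl2'"), ("Cl2'", "tick", "Cl2")}
print(bisimilar(states, trans, "Cl", "Cl2"))   # True
print(bisimilar(states, trans, "Cl", "Stop"))  # False
```

This quadratic-per-pass fixpoint computation is far less efficient than the partition-refinement algorithms real tools use, but it computes the same relation on finite graphs.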
Alternatively, equivalence proofs can utilize conditional equational reasoning.
There is an assortment of algebraic, and semi-algebraic, theories of processes,
depending on the equivalence and the process combinators. For details, see the
references [26, 31, 3, 44]. It is essential that the equivalence be a congruence.
To give a flavour of equational reasoning, we present a proof in the equational
theory for CCS that a simplified slot machine without data values is equivalent to a
streamlined process description. The equivalence involved is ≈c, the observational
congruence defined in the previous section.
The following important CCS laws are used in the proof. The variables x and
y stand for arbitrary CCS process expressions.

a.τ.x = a.x
x + τ.x = τ.x
(x + y)\K = x\K + y\K
(a.x)\K = a.(x\K)   if a ∉ K ∪ K̄
(a.x)\K = 0         if a ∈ K ∪ K̄
x + 0 = x

The last four are clear from the behavioural meanings of the operators. The first two
are τ-laws, and show that we are dealing with an observable equivalence. We shall
also appeal to a rule schema, called an "expansion law" by Milner [44], relating
concurrency and choice.

if x_i = Σ{a_ij.x_ij : 1 ≤ j ≤ n_i} for i : 1 ≤ i ≤ m,
then x_1 | ... | x_m = Σ{a_ij.y_ij : 1 ≤ i ≤ m and 1 ≤ j ≤ n_i}
                     + Σ{τ.y_klij : 1 ≤ k < i ≤ m and a_kl = ā_ij},
where y_ij == x_1 | ... | x_{i-1} | x_ij | x_{i+1} | ... | x_m
and y_klij == x_1 | ... | x_{k-1} | x_kl | x_{k+1} | ... | x_{i-1} | x_ij | x_{i+1} | ... | x_m.

For example, if

x₁ = ā.x₁₁ + b.x₁₂ + ā.x₁₃
x₂ = a.x₂₁ + c.x₂₂,

then

x₁ | x₂ = ā.(x₁₁|x₂) + b.(x₁₂|x₂) + ā.(x₁₃|x₂) + a.(x₁|x₂₁) + c.(x₁|x₂₂)
        + τ.(x₁₁|x₂₁) + τ.(x₁₃|x₂₁).

The expansion rule is justified by the transition rules for the parallel operator.
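The computation the expansion law describes can be sketched mechanically. In the following sketch, processes are encoded as (name, summand-list) pairs, and the convention that action "a'" is the complement of "a" is an ad hoc encoding, not CCS syntax from the book; the two processes mirror the worked example above.

```python
# Compute the top-level summands of x1 | ... | xm by the expansion law, for
# processes given as (name, [(action, continuation-name), ...]) pairs.
def complement(a):
    return a[:-1] if a.endswith("'") else a + "'"

def expansion(xs):
    summands = []
    # interleaving: component i moves, the others stay put
    for i, x in enumerate(xs):
        for (a, cont) in x[1]:
            state = [y[0] for y in xs]
            state[i] = cont
            summands.append((a, tuple(state)))
    # communication: components k and i perform complementary actions
    for k in range(len(xs)):
        for i in range(k + 1, len(xs)):
            for (a, ck) in xs[k][1]:
                for (b, ci) in xs[i][1]:
                    if b == complement(a):
                        state = [y[0] for y in xs]
                        state[k], state[i] = ck, ci
                        summands.append(("tau", tuple(state)))
    return summands

# x1 = a'.x11 + b.x12 + a'.x13 and x2 = a.x21 + c.x22 (a' complements a)
x1 = ("x1", [("a'", "x11"), ("b", "x12"), ("a'", "x13")])
x2 = ("x2", [("a", "x21"), ("c", "x22")])
for s in expansion([x1, x2]):
    print(s)
```

Running it lists the five interleaving summands and the two τ communications, matching the example.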
Proof rules for recursion (def=) are also needed. If E does not contain any occur-
rences of the parallel operator |, then P is said to be "guarded" in E provided that
all occurrences of P in E are within the scope of a prefix a. where a is an observable
action (that is, not τ). Assume that P is the only process constant in E. The guard-
edness condition guarantees that the equation P = E has a unique solution up to
≈c. A solution to the equation P = E is a process F such that F ≈c E{F/P}.
Uniqueness of solution means that, if both F and G are solutions, then F ≈c G.

Example 1 The clock Cl is a solution to the equation P = tick.P because Cl ≈c tick.Cl.
Moreover, any other solution E (such as Cl₂) has the property that Cl ≈c E.
In contrast, the equation P = τ.P, where P is not guarded, has as solutions any
process τ.E because τ.E ≈c τ.τ.E.

The specific recursion proof rules used are the following.

• if P def= E then P = E
• if P = E and P is guarded in E, and Q = F and Q is guarded in F, and
  E{Q/P} = F, then P = Q

IO  def= slot.IO₁
IO₁ def= bank.IO₂
IO₂ def= lost.loss.IO + release.win.IO

B  def= bank.B₁
B₁ def= max.left.B

D  def= max.D₁
D₁ def= lost.left.D + release.left.D

SM def= (IO | B | D)\K   where K = {bank, max, left, lost, release}

SM′ def= slot.(τ.loss.SM′ + τ.win.SM′)

FIGURE 3.7. A simplified slot machine

Example 2 The recursion rules can be used to prove that Cl ≈c Cl₂ (where these processes
are pictured in Figure 3.1) as follows.

Cl = tick.Cl                               by definition of Cl
tick.Cl = tick.tick.Cl                     by congruence
Cl = tick.tick.Cl                          by transitivity of =
Cl₂ = tick.tick.Cl₂                        by definition of Cl₂
(tick.tick.Cl){Cl₂/Cl} = tick.tick.Cl₂     by equality
Cl = Cl₂                                   by the second recursion rule

The slot machine SM without data values, and its succinct description SM′,
appear in Figure 3.7. We prove that SM = SM′. The idea behind the proof is first to
simplify SM by showing that it is equal to an expression E that does not contain
the parallel operator. The proof proceeds on SM using the expansion law, the
earlier laws for \K, 0 and τ, and the first recursion rule.

SM = (IO | B | D)\K
   = (slot.IO₁ | bank.B₁ | max.D₁)\K
   = (slot.(IO₁ | B | D) + bank.(IO | B₁ | D) + max.(IO | B | D₁))\K
   = (slot.(IO₁ | B | D))\K + (bank.(IO | B₁ | D))\K + (max.(IO | B | D₁))\K
   = slot.(IO₁ | B | D)\K + 0 + 0
   = slot.(IO₁ | B | D)\K

Let SM₁ == (IO₁ | B | D)\K. By reasoning similar to the above, we obtain

SM₁ = τ.τ.(IO₂ | left.B | D₁)\K.

Assume SM₂ == (IO₂ | left.B | D₁)\K. By similar reasoning,

SM₂ = τ.SM₃ + τ.SM₄, where
SM₃ == (loss.IO | left.B | left.D)\K
SM₄ == (win.IO | left.B | left.D)\K.

The τ-laws are used in the following chain of reasoning.

(loss.IO | left.B | left.D)\K
  = loss.(IO | left.B | left.D)\K + τ.(loss.IO | B | D)\K
  = loss.(τ.SM + slot.(IO₁ | left.B | left.D)\K) + τ.loss.SM
  = loss.(τ.SM + slot.τ.(IO₁ | B | D)\K) + τ.loss.SM
  = loss.(τ.SM + slot.(IO₁ | B | D)\K) + τ.loss.SM
  = loss.(τ.SM + SM) + τ.loss.SM
  = loss.τ.SM + τ.loss.SM
  = loss.SM + τ.loss.SM
  = τ.loss.SM
By similar reasoning, SM₄ = τ.win.SM. Backtracking, substituting equals for
equals, and then applying the τ-laws gives the following.

SM = slot.SM₁
   = slot.τ.τ.SM₂
   = slot.τ.τ.(τ.SM₃ + τ.SM₄)
   = slot.τ.τ.(τ.τ.loss.SM + τ.τ.win.SM)
   = slot.(τ.loss.SM + τ.win.SM)
We have now shown that SM = E, where E does not contain the parallel
operator and SM is guarded in E. The expression E is very close to the
definition of SM′ (and SM′ is also guarded within it). Clearly, E{SM′/SM} =
slot.(τ.loss.SM′ + τ.win.SM′), so by the second recursion rule SM = SM′, which
completes the proof.

Exercises 1. For each of the following cases of x₁, x₂ and x₃, define x₁ | x₂ | x₃ using the
expansion theorem.

   a. x₁ = a.x₁₁ + b.x₁₂ + a.x₁₂
      x₂ = a.x₂₁ + c.x₂₂
      x₃ = a.x₃₁ + c.x₃₂

   b. x₁ = τ.x₁₁ + τ.x₁₂
      x₂ = a.x₂₁ + b.x₂₂
      x₃ = a.x₃₁ + a.x₃₂ + τ.x₃₃

   c. x₁ = a.x₁₁ + b.x₁₂ + a.x₁₂ + a.x₁₃
      x₂ = a.x₂₁ + b.x₂₂ + a.x₂₃
      x₃ = a.x₃₁ + b.x₃₂ + a.x₃₃

2. Refine the expansion law to take account of the restriction operator. That is,
   assume that each x_i has the form Σ{a_ij.x_ij : 1 ≤ j ≤ n_i} and the parallel
   form is (x₁ | ... | x_m)\K.
3. Let Cl′ def= tick.tick.tick.Cl′. Use the recursion proof rules to prove that
   Cl′ = Cl₂.
4. Assume that there is just one datum value in the set D. Using equational
   reasoning only, prove that Protocol ≈c Cop (where these processes are
   defined in Chapter 1).
5. Compare the different methods for proving equivalence, by showing that SM ≈
   SM′:
   a. directly using games
   b. directly by exhibiting an observable bisimulation
   c. directly by showing that they obey the same modal properties in M°
6. Extend the proof rules of the equational system to take into account value
   passing, and then prove equationally that SMn = SM′n.
7. Extend the second recursion rule so that the expressions E and F can contain
   occurrences of parallel. (To do this we need to refine the definition of being
   guarded.)
4

Temporal Properties

4.1 Modal properties revisited 83
4.2 Processes and their runs 85
4.3 The temporal logic CTL 89
4.4 Modal formulas with variables 91
4.5 Modal equations and fixed points 95
4.6 Duality 100

Modal logics as introduced in Chapter 2 can express local capabilities and neces-
sities of processes, such as "tick is a possible next action" or "tick must happen
next." However, they cannot express enduring capabilities such as "tick is always
a possible next action," or long term inevitabilities such as "tick eventually hap-
pens." These features, especially in the guise of safety and liveness properties, have
been found to be very useful when analysing the behaviour of concurrent systems.
Another abstraction from behaviour is a run of a process, which is a finite or infinite
length sequence of transitions. Runs provide a basis for understanding longer term
capabilities. Logics where properties are primarily ascribed to runs of systems are
called "temporal" logics. An alternative foundation for temporal logic is to view
enduring features as extremal solutions to recursive modal equations.

4.1 Modal properties revisited


C. Stirling, Modal and Temporal Properties of Processes
© Springer Science+Business Media New York 2001

A property partitions a family of processes into two disjoint sets: the subset of
processes that have the property and the subset that do not. For example, the
formula ⟨tick⟩tt divides {Cl₁, tock.Cl₁} into the two subsets {Cl₁} and
{tock.Cl₁}. Given a formula Φ and a set of processes E, we define ||Φ||^E to be
the subset of processes in E having the modal property Φ.

||Φ||^E def= {E ∈ E : E ⊨ Φ}

Therefore, each modal formula Φ partitions E into ||Φ||^E and E − ||Φ||^E.
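When the set of processes is finite and given by an explicit transition graph, the set ||Φ||^E can be computed directly by recursion on the formula. The following sketch assumes an illustrative tuple encoding of formulas (not notation from the book):

```python
# Compute || phi ||^E over a finite set of processes. Formulas are encoded as
# nested tuples such as ("box", K, phi) for [K]phi and ("dia", K, phi) for
# <K>phi -- an ad hoc encoding for this sketch.
def sat(phi, procs, trans):
    # trans: set of (source, action, target) triples over procs
    kind = phi[0]
    if kind == "tt":
        return set(procs)
    if kind == "ff":
        return set()
    if kind == "and":
        return sat(phi[1], procs, trans) & sat(phi[2], procs, trans)
    if kind == "or":
        return sat(phi[1], procs, trans) | sat(phi[2], procs, trans)
    if kind == "box":   # all K-successors satisfy the subformula
        _, K, psi = phi
        good = sat(psi, procs, trans)
        return {p for p in procs
                if all(t in good for (s, a, t) in trans if s == p and a in K)}
    if kind == "dia":   # some K-successor satisfies the subformula
        _, K, psi = phi
        good = sat(psi, procs, trans)
        return {p for p in procs
                if any(t in good for (s, a, t) in trans if s == p and a in K)}
    raise ValueError(kind)

procs = {"Cl1", "tock.Cl1"}
trans = {("Cl1", "tick", "tock.Cl1"), ("tock.Cl1", "tock", "Cl1")}
print(sat(("dia", {"tick"}, ("tt",)), procs, trans))  # the <tick>tt partition
```

On this two-state graph, ⟨tick⟩tt picks out exactly {Cl₁}, as in the partition above.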
There are many different notations for describing properties of processes.
Modal logic is one such formalism. Other notations are also logical, and include
first-order and second-order logic over transition graphs. Another kind of formal-
ism is automata, which recognise words, trees or graphs. The expressive power
of different notations can be compared by examining the properties of processes
they are able to define. Whatever the notation, a property of a set of processes can
be identified with the subset that has it. Consider the following family of clocks
E = {Cl^i, Cl : i ≥ 1}, where Cl^i is defined in Example 1 of Section 3.4. For
instance, the transition graph for Cl^4 is as follows.

Cl^4 --tick--> Cl^3 --tick--> Cl^2 --tick--> Cl^1 --tick--> 0

Cl is distinguishable from the other members of E because of its long term capability
for ticking endlessly. Each clock Cl^i ticks exactly i times before stopping. The
property "can tick forever" partitions E into the two subsets {Cl} and E − {Cl}. How-
ever, this partition cannot be captured by a single modal formula, as the following
result shows.
Proposition 1 For any modal Φ ∈ M ∪ M°↓, if Cl ⊨ Φ, then there is a j ≥ 1 such that, for all
k ≥ j, Cl^k ⊨ Φ.

Proof. By induction on the structure of Φ. The base case when Φ is tt or ff is
clear. The induction step divides into subcases. Let Φ = Ψ₁ ∧ Ψ₂ and assume that
Cl ⊨ Φ. Therefore, Cl ⊨ Ψ₁ and Cl ⊨ Ψ₂. By the induction hypothesis, there
is a j₁ ≥ 1 and a j₂ ≥ 1 such that Cl^k1 ⊨ Ψ₁ for all k1 ≥ j₁ and Cl^k2 ⊨ Ψ₂
for all k2 ≥ j₂. Let j be the maximum of {j₁, j₂}, and so Cl^k ⊨ Φ for all
k ≥ j. Next, assume that Φ = Ψ₁ ∨ Ψ₂ and Cl ⊨ Φ. So Cl ⊨ Ψ₁ or Cl ⊨ Ψ₂.
Suppose it is the first of these two. By the induction hypothesis, there is a j ≥ 1
such that Cl^k ⊨ Ψ₁ for all k ≥ j, and therefore also Cl^k ⊨ Φ for each k ≥ j.
Let Φ = [K]Ψ and assume that Cl ⊨ Φ. If tick ∉ K, then Cl^i ⊨ Φ for all
i ≥ 1. Otherwise tick ∈ K, and since Cl --tick--> Cl it follows that Cl ⊨ Ψ. By
the induction hypothesis, there is a j ≥ 1 such that Cl^k ⊨ Ψ for all k ≥ j.
However, since Cl^{i+1} --tick--> Cl^i, it follows that Cl^k ⊨ Φ for all k ≥ j + 1. The case
Φ = ⟨K⟩Ψ is similar. The other modal cases, when Φ is ⟨⟨ ⟩⟩Ψ, [[ ]]Ψ, [↓]Ψ, or
⟨⟨↑⟩⟩Ψ, are more straightforward and are left as an exercise. □
Proposition 1 shows that no modal formula partitions the set E into the pair {Cl}
and E − {Cl}, and therefore the enduring capability of being able to tick forever is
not expressible in the modal logics introduced in Chapter 2. A similar argument
establishes that the property "tick eventually happens" is also not definable within
these modal logics.

Exercises 1. a. Define ||Φ||^E directly by induction on the structure of Φ. What
      assumptions do you make about the set E?
   b. Let E be the set {Ct_j : j ≥ 0} of counters from Figure 1.4. Work out
      the following sets using your inductive definition.
      i. || ⟨down⟩tt ∧ ⟨up⟩tt ||^E
      ii. || [down](⟨down⟩tt ∧ ⟨round⟩tt) ||^E
      iii. || [up]ff ||^E
   c. Assume instead that E is the subset {Ct_2j : j ≥ 0}. What sets are now
      determined by the formulas above, according to your inductive definition?
2. An important feature of modal logic is that its basic modal operators are
   monotonic with respect to subset inclusion. Prove the following, assuming
   that # ∈ {[K], ⟨K⟩, [[ ]], ⟨⟨ ⟩⟩, [↓], ⟨⟨↑⟩⟩} in part c.
   a. If E₁ ⊆ E₂ then ||Φ||^(E ∩ E₁) ⊆ ||Φ||^(E ∩ E₂)
   b. If E₁ ⊆ E₂ then ||Φ||^(E ∪ E₁) ⊆ ||Φ||^(E ∪ E₂)
   c. If ||Φ||^E ⊆ ||Ψ||^E then ||#Φ||^E ⊆ ||#Ψ||^E
3. We can view the meaning of a modal operator # as a process transformer ||#||^E
   mapping subsets of E to subsets of E. For example, ||[K]||^E is the function
   which for any E₁ ⊆ E is defined as follows.

   ||[K]||^E E₁ = {F ∈ E : if F --a--> E′ and a ∈ K then E′ ∈ E₁}

   a. Define the transformers ||⟨K⟩||^E, ||⟨⟨K⟩⟩||^E and ||[↓]||^E.
   b. An indexed family of subsets {E_j ⊆ E : j ≥ 0} of E is a chain if E_i ⊆ E_j
      when i ≤ j. A modal operator # is ∪-continuous if for any such chain the
      set ||#||^E (∪{E_j : j ≥ 0}) is the same as ∪{ ||#||^E E_j : j ≥ 0}. Prove that, if
      each member of E is finitely branching (that is, {F : E --a--> F with a ∈ A}
      is finite for each E ∈ E), then [K] is ∪-continuous. Give an example
      process which fails ∪-continuity.
4. Prove that the property "tick eventually happens" is not definable within M
   ∪ M°↓.

4.2 Processes and their runs


Proposition 1 of the previous section shows that modal logic is not very expressive.
Although able to describe local or immediate capabilities and necessities, modal
formulas cannot capture global or long term features of processes. Consider the
contrast between the local capability for ticking and the enduring capability for
ticking forever, or the contrast between the urgent inevitability that tick must
happen next and the lingering inevitability that tick eventually happens.
Another abstraction from behaviour is a run of a process. A run of E₀ is a finite
or infinite length sequence of transitions

E₀ --a₁--> E₁ --a₂--> E₂ --a₃--> ⋯

of "maximal" length. This means that, if a run has finite length, then its final
process is unable to perform a transition because, otherwise, the sequence could be
extended. A deadlocked process E only has the zero length run E. A run of a
process carves out a path through its transition graph.
Example 1 Cl₁ --tick--> tock.Cl₁ --tock--> Cl₁ --tick--> ⋯ is the only run from the clock Cl₁, and it
has infinite length. Cls¹ has infinitely many finite length runs, each of the form
Cls --tick--> ⋯ --tick--> Cls --tock--> 0, and the single infinite length run Cls --tick--> Cls --tick-->
⋯. An infinite length cyclic run of Crossing is

Crossing --car--> E₁ --τ--> E₄ --ccross--> E₈ --τ--> Crossing --car--> ⋯,

where E₁, E₄, and E₈ are as in Figure 1.12.

Runs provide a means for distinguishing between local and long term features
of processes. For instance, a process has the capability for ticking forever provided
that it has a perpetual run of tick transitions. The property of a process that tick
eventually happens is the requirement that, within every run of it, there is at least one
tick transition. The clock Cl₁ of Example 1 has the property that tock eventually
happens. However, the other clock Cls fails to have this trait because of its sole
infinite length run, which does not contain the action tock.
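For finite-state processes, such run properties reduce to questions about the transition graph; for instance, "some run performs tick forever" holds exactly when, moving along tick transitions only, the process can keep going without ever getting stuck. A sketch under that finite-state assumption (the state names are hypothetical):

```python
# "Some run performs `action` forever" for a finite-state process: restrict
# to `action`-transitions, then repeatedly prune states with no surviving
# successor; an infinite run exists iff the start state survives.
def can_do_forever(start, trans, action):
    step = {}
    for (s, a, t) in trans:
        if a == action:
            step.setdefault(s, []).append(t)
    # states reachable from start along `action`-transitions only
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(step.get(s, []))
    # prune states that cannot take another `action`-step within the set
    live = set(seen)
    changed = True
    while changed:
        changed = False
        for s in list(live):
            if not any(t in live for t in step.get(s, [])):
                live.discard(s)
                changed = True
    return start in live

trans = {("Cl", "tick", "Cl"),
         ("Cl2", "tick", "Cl1"), ("Cl1", "tick", "Cl0")}
print(can_do_forever("Cl", trans, "tick"))   # True
print(can_do_forever("Cl2", trans, "tick"))  # False
```

Here Cl can tick forever via its self-loop, while the terminating chain Cl2, Cl1, Cl0 cannot.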
Bisimulation equivalence "preserves" runs in the sense that, if two processes
are bisimilar, then for any run of one of the processes there is a corresponding run
of the other process.

Proposition 1 Assume that E₀ ∼ F₀.

1. If E₀ --a₁--> E₁ --a₂--> ⋯ --an--> En is a finite length run, then there is a run
   F₀ --a₁--> F₁ --a₂--> ⋯ --an--> Fn such that Ei ∼ Fi for all i : 0 ≤ i ≤ n; and
2. If E₀ --a₁--> E₁ --a₂--> ⋯ is an infinite length run, then there is an infinite length
   run F₀ --a₁--> F₁ --a₂--> ⋯ such that Ei ∼ Fi for all i ≥ 0.

Because bisimulation equivalence is symmetric², Proposition 1 also implies that
every run from F₀ has to be matched with a corresponding run from E₀. We leave
the proof of this result as an exercise for the reader.

¹Cls def= tick.Cls + tock.0.
²If E₀ ∼ F₀, then also F₀ ∼ E₀.

Many significant properties of systems can be understood as features of their
runs. Especially important is a classification of properties into "safety" and "live-
ness," originally due to Lamport [35]. A safety property states "nothing bad ever
happens," whereas a liveness property expresses "something good eventually hap-
pens." A process has a safety property if none of its runs has the bad feature, and
it has a liveness property if all of its runs have the good feature. That is, we can
informally express them as follows.

Safety(Φ)    for all runs π, Φ is never true in π
Liveness(Φ)  for all runs π, Φ is eventually true in π

Example 2 A property that distinguishes each clock Cl^i of the previous section from Cl is
eventual termination. The good feature is expressed by the formula [−]ff, and
the property is given as Liveness([−]ff). On the other hand, termination can also
be viewed as defective, as exhaustion of the clock. In this case, Cl has the safety
property of absence of deadlock, expressed as Safety([−]ff), which each Cl^i fails
to have.

Liveness and safety properties of a process pertain to all of its runs. Weaker
properties relate to some runs of a process. A "weak" safety property states "in
some run nothing bad ever happens," and a "weak" liveness property asserts "in
some run something good eventually happens."

WSafety(Φ)    for some run π, Φ is never true in π
WLiveness(Φ)  for some run π, Φ is eventually true in π

Notice that a weak safety property is the "dual" of a liveness property (and a weak
liveness property is the "dual" of a safety property).

WSafety(Φ) iff not(Liveness(Φ))
WLiveness(Φ) iff not(Safety(Φ))

Example 3 A weak liveness property of SMn of Figure 1.15 is that it may eventually pay out a
windfall of a million pounds. The good feature is ⟨win(10⁶)⟩tt, and so the property
is given by WLiveness(⟨win(10⁶)⟩tt).
There are also intermediate cases between all and some runs, when liveness
or safety properties pertain to special families of runs which obey some general
constraints.

Example 4 A desirable property of Crossing is that, whenever a car approaches, eventually
it crosses. This requirement is that any run containing the action car also contains
ccross as a later action.

Sometimes, the relevant constraints are complex and depend on assumptions out-
with the possible behaviour of the process itself. For example, Protocol of
Figure 1.13 fails to have the property "whenever a message is input, eventually
it is output" because of runs in which a message is forever retransmitted. However,
we may assume that the medium does eventually pass a repeatedly retransmitted
message to the receiver, meaning these deficient runs are thereby precluded.

Exercises 1. Enumerate the runs of Ven and Crossing.
2. Prove Proposition 1.
3. For each of the following, state whether it is a liveness or a safety property,
   and identify the good or bad feature.
   a. At most five messages are in the buffer at any time.
   b. The cost of living never decreases.
   c. The temperature never rises.
   d. All good things must come to an end.
   e. If an interrupt occurs, then a message is printed within one second.
4. Prove the following dualities.
   WSafety(Φ) iff not(Liveness(Φ))
   WLiveness(Φ) iff not(Safety(Φ))
5. Observable bisimulation equivalence, ≈, does not "preserve" runs in the sense
   of Proposition 1. For instance, τ.0 does not have a corresponding infinite length
   run to the following run of Div (where Div def= τ.Div)

   Div --τ--> Div --τ--> ⋯

   even though Div ≈ τ.0. We may try to weaken the matching requirement by
   stipulating that, for any run from one process, there is a corresponding run
   from the other process such that there is a finite or an infinite partition across
   these runs containing equivalent processes.
   a. Spell out a definition of equivalence = based on this partitioning
      requirement.
   b. Prove a.(E + τ.F) + a.E ≈ a.(E + τ.F) for any E and F. Is this still
      true if you replace ≈ with =?
   c. What is the relation between = and branching bisimulation, as defined
      by van Glabbeek and Weijland [23]?
6. Prove that the following properties are not definable within modal logic M ∪
   M°↓.
   a. "in some run tick does not happen"
   b. "in some run eventually tick happens"
   c. "the sequence of actions a₁ ... a₄ happens cyclically forever, starting with
      a₁"

4.3 The temporal logic CTL


Modal logic expresses properties of processes in terms of their transitions. Tempo-
ral logic, on the other hand, ascribes properties to processes by expressing features
of their runs. In fact, there is not a clear demarcation between modal and temporal
logic, because modal operators can also be viewed as temporal operators as follows.

E₀ ⊨ [K]Φ iff for any run E₀ --a₁--> E₁ ⋯, if a₁ ∈ K then E₁ ⊨ Φ

E₀ ⊨ ⟨K⟩Φ iff there is a run E₀ --a₁--> E₁ ⋯ with a₁ ∈ K and E₁ ⊨ Φ

The operator [−] expresses "next" over all runs, and its dual ⟨−⟩ expresses weak
"next."
A useful temporal operator is the binary until operator U. A finite or infinite
length run E₀ --a₁--> E₁ --a₂--> ⋯ satisfies the formula Φ U Ψ, "Φ is true until Ψ,"
provided there is an i ≥ 0 such that Ei ⊨ Ψ, and for each j : 0 ≤ j < i the
intermediate process Ej has property Φ.

E₀ --a₁--> ⋯ --aj--> Ej --⋯--> Ei ⋯    (Ej ⊨ Φ for each j < i, and Ei ⊨ Ψ)

The index i can be 0 (in which case Φ does not have to be true at any point of the
run). If the run has zero length³ then the index i must be zero. A special instance
of U is when the first formula Φ is tt. The formula tt U Ψ expresses "eventually
Ψ," which we abbreviate to FΨ.
A finite or infinite length run E₀ --a₁--> E₁ --a₂--> ⋯ satisfies ¬(tt U ¬Ψ)⁴, that
is ¬F¬Ψ, if every process Ei within the run has the property Ψ.

E₀ --a₁--> E₁ --a₂--> ⋯    (Ei ⊨ Ψ for every i)

This, therefore, expresses "always Ψ," which we abbreviate to GΨ. Notice that F
and G are duals of each other.
Modal logic can be enriched by adding temporal operators to it. For each
temporal operator (such as F) there are two variants: the strong variant ranging
over all runs of a process, and the weak variant ranging over some run of the
process. We preface the strong variant with A, "for all runs," and the weak variant
with E, "for some run." Liveness and safety as described in the previous section
can now be properly defined.

Safety(Φ)     = AG ¬Φ
Liveness(Φ)   = AF Φ
WSafety(Φ)    = EG ¬Φ
WLiveness(Φ)  = EF Φ

³The length of a run is the number of transitions within it.
⁴Here ¬ is the negation operator.
If modal logic is extended with the two kinds of U operator, the resulting
temporal logic is a slight variant of computation tree temporal logic, CTL, due to
Clarke, Emerson and Sistla [12]. We present the logic with an explicit negation
operator.

Φ ::= tt | ¬Φ | Φ₁ ∧ Φ₂ | [K]Φ | A(Φ₁ U Φ₂) | E(Φ₁ U Φ₂)

The definition of satisfaction between a process E₀ and a formula proceeds by
induction on the formula. The only new clauses are for the two U operators, which
appeal to runs of E₀.

E₀ ⊨ A(Φ U Ψ) iff for all runs E₀ --a₁--> E₁ --a₂--> ⋯ there is an i ≥ 0
               with Ei ⊨ Ψ and for all j : 0 ≤ j < i, Ej ⊨ Φ

E₀ ⊨ E(Φ U Ψ) iff for some run E₀ --a₁--> E₁ --a₂--> ⋯ there is an i ≥ 0
               with Ei ⊨ Ψ and for all j : 0 ≤ j < i, Ej ⊨ Φ


The two variants of "eventually" and "always" are definable as follows.

AF Φ def= A(tt U Φ)    EF Φ def= E(tt U Φ)
AG Φ def= ¬EF ¬Φ       EG Φ def= ¬AF ¬Φ
Example 1 The level crossing has the crucial safety property that it is never possible for a train
and a car to cross at the same time. In terms of runs, this means that no run of
Crossing passes through a process that can perform both tcross and ccross as
next actions, so the bad feature is ⟨tcross⟩tt ∧ ⟨ccross⟩tt. The safety property
is therefore expressed by the CTL formula AG([tcross]ff ∨ [ccross]ff).

Example 2 The weak liveness property of the slot machine SMn, that it may eventually pay out
a windfall, is expressed as EF⟨win(10⁶)⟩tt.
As with modal operators, temporal operators may be embedded within each
other to express complex process features. An example is "eventually, action a is
possible until b is always impossible," A(⟨a⟩tt U AG[b]ff).
In Section 4.1 it was shown that the ability to tick forever is not expressible in
modal logic. It is also not directly expressible in CTL as defined here. For instance,
the formula EG⟨tick⟩tt states that the action tick is possible throughout some run,
and the process Cl′ def= tock.Cl′ + tick.0 satisfies it. The problem is that the CTL
temporal operators are not relativised to actions. A variant "always" operator is
G_K, which includes action information. A run satisfies G_K Φ if every transition in
the run belongs to K and Φ is true throughout. The ability to tick forever is then
directly expressible with the formula EG_{tick}⟨−⟩tt (where ⟨−⟩tt ensures that
the run is infinite).
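On a finite transition graph, the run-based operators EF and AF can be computed as limits of simple iterations. The following sketch assumes finite-state processes and hypothetical state names; a deadlocked state, having only its zero-length run, satisfies AF goal only if it is itself in the goal set.

```python
# EF: keep adding states with some successor already inside the result.
# AF: keep adding states all of whose successors are inside (and that have
# at least one successor, so that every maximal run is forced into the goal).
def ef(goal, procs, trans):
    succ = {p: [t for (s, a, t) in trans if s == p] for p in procs}
    result = set(goal)
    changed = True
    while changed:
        changed = False
        for p in procs - result:
            if any(t in result for t in succ[p]):
                result.add(p)
                changed = True
    return result

def af(goal, procs, trans):
    succ = {p: [t for (s, a, t) in trans if s == p] for p in procs}
    result = set(goal)
    changed = True
    while changed:
        changed = False
        for p in procs - result:
            if succ[p] and all(t in result for t in succ[p]):
                result.add(p)
                changed = True
    return result

# Cls = tick.Cls + tock.0: "possibly tock" holds at Cls, but "tock
# eventually happens" fails, since the infinite tick run avoids tock.
procs = {"Cls", "0"}
trans = {("Cls", "tick", "Cls"), ("Cls", "tock", "0")}
after_tock = {"0"}
print(sorted(ef(after_tock, procs, trans)))  # ['0', 'Cls']
print(sorted(af(after_tock, procs, trans)))  # ['0']
```

The computation mirrors the run-based definitions: Cls satisfies the E-variant but not the A-variant, exactly as the clock Cls above fails "tock eventually happens."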
In this work, we do not found temporal logic on runs. This is partly because we
wish to integrate action capabilities with temporal properties more elegantly than
suggested above, and partly because we wish to suppress the notion of a run. Instead
we shall define appropriate closure conditions on sets of processes that express
long term capabilities, by appealing to inductive definitions built from modal logic.
The idea is that a long term capability is just a particular closure of an immediate
capability.

Exercises 1. Show the following.
   a. Ven ⊨ AF⟨collectb, collectl⟩tt
   b. Ven ⊨ EF⟨collectb⟩tt
   c. Ven ⊭ AF⟨collectb⟩tt
2. Show that Crossing ⊨ AG([tcross]ff ∨ [ccross]ff).
3. a. Show that CTL properties are preserved by bisimulation equivalence.
      That is, prove that if E ∼ F, then for all Φ ∈ CTL, E ⊨ Φ iff F ⊨ Φ.
   b. Let WCTL be CTL, except that the modal operators [K] and ⟨K⟩ are re-
      placed with the modalities of M° of Section 2.4. Notice that the temporal
      operators of WCTL are still interpreted over runs involving τ tran-
      sitions. Are WCTL properties preserved by observational equivalence
      ≈?
4. A run E₀ --a₁--> E₁ --a₂--> ⋯ satisfies FGΦ provided there is an i ≥ 0 such that
   for all j ≥ i, Ej ⊨ Φ.
   a. Contrast the different meanings of the operators AFG and AFAG.
   b. Give an example of a process E and a modal formula Φ such that E ⊨
      AFGΦ but E ⊭ AFAGΦ.
5. The clock Cl₁ def= tick.tock.Cl₁ has the property that ⟨tick⟩tt is true
   at every even point of its single run. Prove that the property "for every run,
   ⟨tick⟩tt is true at every even point" is not definable in CTL.

4.4 Modal formulas with variables


The modal logic M of Section 2.1 is extended with propositional variables, which
are ranged over by Z, as follows.

Φ ::= Z | tt | ff | Φ₁ ∧ Φ₂ | Φ₁ ∨ Φ₂ | [K]Φ | ⟨K⟩Φ

Modal formulas may contain propositional variables, and therefore the satisfaction
relation ⊨ between a process and a formula needs to be refined. The important
case is when a process has the property Z. Think of a propositional variable as a
colour that can be ascribed to processes. A particular process may have a variety of
different colours, and processes can be coloured arbitrarily. A particular colouring
is defined by a "valuation" function V that assigns to each colour Z a subset of
processes V(Z) having this colour. If there are two colours, X (red) and Z (blue),
then V(X) is the set of red coloured processes and V(Z) is the set of blue coloured
processes.
The satisfaction relation between a process and a formula is relativised to a
valuation. We write E ⊨V Φ when E has the property Φ relative to the valuation
V, and E ⊭V Φ when E fails to have the property Φ relative to V. First is the
semantic clause for a variable.

E ⊨V Z iff E ∈ V(Z)

Process E has colour Z relative to V iff E belongs to V(Z). The remaining semantic
clauses are as in Section 2.1 for the logic M, except for the relativisation to the
colouring V. For example, the semantic clause for ∧ is as follows.

E ⊨V Φ₁ ∧ Φ₂ iff E ⊨V Φ₁ and E ⊨V Φ₂
The notation ||Φ||^E for the subset of E processes with property Φ is also
refined by an additional colouring component, ||Φ||^E_V.

||Φ||^E_V def= {E ∈ E : E ⊨V Φ}

The set E in ||Φ||^E_V is invariably a transition closed set P or P(E), as defined in
Section 1.6, and for each Z it is expected that the set of coloured processes V(Z)
is a subset of E.
Valuations may be revised, so that a colouring is updated. A useful notation
is V[E/Z], which represents the valuation similar to V except that E is the set of
processes coloured Z.

(V[E/Z])(Y) = E      if Y = Z
(V[E/Z])(Y) = V(Y)   otherwise

There are many uses for revised valuations. Assume that Z does not occur in Φ.
Then whether E ⊨V Φ is independent of the colour Z. It follows that, for any colouring
V and for any set E, E ⊨V Φ iff E ⊨V[E/Z] Φ. In particular, if Φ does not contain
any variables (and is therefore also a formula of M), then whether E satisfies Φ
is completely independent of colourings: in this special case we write E ⊨ Φ, as
before.
A valuation V′ extends the valuation V, which is written V ⊑ V′, if V(Z) ⊆
V′(Z) for all variables Z. The colouring V′ is uniformly more generous than V, so
for any variable Z it follows that ||Z||_V is a subset of ||Z||_V′. This feature extends
to all formulas of M with variables.
Proposition 1 If V′ extends V, then ||Φ||_V ⊆ ||Φ||_V′.

Proof. The proof proceeds by induction on Φ. We drop the index P. The
base cases are when Φ is a variable Z, tt or ff. The first of these cases fol-
lows directly from the definition of ⊑ on valuations. Clearly, it also holds for
tt and for ff. The general case divides into the various subcases. First, sup-
pose Φ is Ψ₁ ∧ Ψ₂, and assume that V ⊑ V′. By definition ||Φ||_V is the set
{E : E ⊨V Ψ₁ ∧ Ψ₂}, which is just the set ||Ψ₁||_V ∩ ||Ψ₂||_V. By the induction
hypothesis ||Ψi||_V ⊆ ||Ψi||_V′ for i = 1 and i = 2. Therefore, ||Φ||_V ⊆ ||Φ||_V′.
The case when Φ is Ψ₁ ∨ Ψ₂ is similar. Next, assume that Φ is [K]Ψ. The set
||Φ||_V is therefore equal to {E : if E --a--> F and a ∈ K then F ⊨V Ψ}, which is
{E : if E --a--> F and a ∈ K then F ∈ ||Ψ||_V}. By the induction hypothesis this
is a subset of {E : if E --a--> F and a ∈ K then F ∈ ||Ψ||_V′}, which is the def-
inition of ||Φ||_V′. The other modal case, when Φ is ⟨K⟩Ψ, is similar and is left as an
exercise. □

2^P denotes the set of all subsets of P. For instance, if P is {Cl₁, tock.Cl₁},
then 2^P is {∅, {Cl₁}, {tock.Cl₁}, P}. A function g : 2^P → 2^P maps elements of
2^P, subsets of P, into elements of 2^P. For any E ⊆ P, the element g(E) is also
a subset of P. With respect to a colour Z and valuation V, a modal formula Φ
determines the function f[Φ, Z] : 2^P → 2^P, which when applied to the set E ⊆ P
is as follows.

f[Φ, Z](E) def= {E ∈ P : E ⊨V[E/Z] Φ}

Example 1 For any valuation V, the function f[⟨tick⟩Z, Z] maps E into the set of processes
that have a tick transition into E.

f[⟨tick⟩Z, Z](E) = {E ∈ P : E ⊨V[E/Z] ⟨tick⟩Z}
                 = {E ∈ P : ∃F. E --tick--> F and F ⊨V[E/Z] Z}
                 = {E ∈ P : ∃F ∈ E. E --tick--> F}

Changing colour also changes the function. For any V, and Y ≠ Z, f[⟨tick⟩Z, Y]
is a constant function.

f[⟨tick⟩Z, Y](E) = {E ∈ P : E ⊨V[E/Y] ⟨tick⟩Z}
                 = {E ∈ P : ∃F. E --tick--> F and F ⊨V[E/Y] Z}
                 = {E ∈ P : ∃F ∈ V(Z). E --tick--> F}

A second example is that the function f[[tick]Z, Z] maps the set E to {E ∈ P :
∀F. if E --tick--> F then F ∈ E}.
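The first transformer of Example 1 is easy to write down concretely. The following sketch, over a small hypothetical transition graph, computes f[⟨tick⟩Z, Z] as a set-to-set function and checks monotonicity on one instance (E ⊆ F gives f(E) ⊆ f(F)):

```python
# The transformer f[<tick>Z, Z] of Example 1: it sends a set E of processes
# to the set of processes with a tick transition into E. States p, q, r are
# hypothetical, chosen only to illustrate the mapping.
def f_dia_tick(E, procs, trans):
    return {p for p in procs
            if any(a == "tick" and t in E for (s, a, t) in trans if s == p)}

procs = {"p", "q", "r"}
trans = {("p", "tick", "q"), ("q", "tick", "r"), ("r", "tock", "p")}

small, large = {"q"}, {"q", "r"}
out_small = f_dia_tick(small, procs, trans)
out_large = f_dia_tick(large, procs, trans)
print(sorted(out_small), sorted(out_large))  # ['p'] ['p', 'q']
assert out_small <= out_large  # monotonic on this pair of sets
```

Enlarging the argument set from {q} to {q, r} enlarges the result from {p} to {p, q}, as the monotonicity corollary below guarantees in general.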

Example 2  Assume that Φ is a formula that does not contain the variable Z. For any V, the function f[Φ ∨ ⟨−⟩Z, Z] maps ℰ into the set of processes that have property Φ, or can do a transition into ℰ.

    f[Φ ∨ ⟨−⟩Z, Z](ℰ) = {E ∈ P : E ⊨V[ℰ/Z] Φ ∨ ⟨−⟩Z}
                      = {E ∈ P : E ⊨V[ℰ/Z] Φ or E ⊨V[ℰ/Z] ⟨−⟩Z}
                      = {E ∈ P : E ⊨V Φ or ∃F ∈ ℰ. ∃a. E –a→ F}

Because Φ does not contain Z, E ⊨V[ℰ/Z] Φ iff E ⊨V Φ.

With respect to P and V, the set f[Φ, Z](ℰ) can also be defined directly by induction on Φ. For example, the following covers the case when the formula is a conjunction.

    f[Φ1 ∧ Φ2, Z](ℰ) = f[Φ1, Z](ℰ) ∩ f[Φ2, Z](ℰ)

A function g : 2^P → 2^P is "monotonic" with respect to the subset ordering, ⊆, if it obeys the following condition.

    if ℰ ⊆ ℱ then g(ℰ) ⊆ g(ℱ)

Proposition 1, above, has the consequence that, for any formula Φ and variable Z, the function f[Φ, Z] is monotonic with respect to any P and V.
Corollary 1  For any P and V, the function f[Φ, Z] is monotonic.

Proof. If ℰ ⊆ ℱ, then by definition V[ℰ/Z] ⊑ V[ℱ/Z]. It follows from Proposition 1 that ‖Φ‖V[ℰ/Z] ⊆ ‖Φ‖V[ℱ/Z]. However, ‖Φ‖V[ℰ/Z] is f[Φ, Z](ℰ) and ‖Φ‖V[ℱ/Z] is f[Φ, Z](ℱ), and therefore for any P and V, the function f[Φ, Z] is monotonic. □

Because V may be an arbitrary colouring, it is not in general true that, if E ∼ F, then for any formula Φ, E ⊨V Φ iff F ⊨V Φ. For instance, if V(Z) is the singleton set {E} and E ∼ F with E ≠ F, then E ⊨V Z but F ⊭V Z. However, if the colouring respects bisimulation equivalence, that is, if each set V(Z) of processes is bisimulation closed⁵, then bisimilar processes will agree on properties. We extend the notion of bisimulation closure to colourings. The valuation V is bisimulation closed if, for each variable Z, the set V(Z) is bisimulation closed.

Proposition 2  If V is bisimulation closed and E ∼ F, then for all modal formulas Φ possibly containing variables, E ⊨V Φ iff F ⊨V Φ.

The proof of this result is a minor elaboration of the proof of Proposition 1 of Section 3.4, and is left as an exercise for the reader.

⁵ℰ is bisimulation closed if, whenever E ∈ ℰ and F ∈ P and E ∼ F, then F ∈ ℰ.



Exercises
1. Prove by induction on Φ that, for any set ℰ, if Z does not occur in Φ, then E ⊨V Φ iff E ⊨V[ℰ/Z] Φ.
2. Show that, if for every variable Z, V'(Z) = V(Z) ∩ ℰ, then the following is true: ‖Φ‖ℰ_V = ‖Φ‖ℰ_V'.
3. Relative to P and V, define the function f[Φ, Z] directly by induction on the structure of Φ.
4. If negation, ¬, is added to modal logic with variables, then show that Proposition 1 fails to hold, and that therefore there are functions f[Φ, Z] that are not monotonic.
5. Consider the extended modal logic with variables and the CTL temporal operators A(ΦUΨ) and E(ΦUΨ). Show that Proposition 1 still holds for this extended logic.
6. Assume that Φ and Ψ do not contain the variable Z. For any V and P, work out the following functions.
   a. f[Φ ∧ [−]Z, Z]
   b. f[Φ ∧ ⟨−⟩Z, Z]
   c. f[Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)), Z]
   d. f[Ψ ∨ (Φ ∧ ⟨−⟩Z), Z]
   e. f[[a]((⟨b⟩tt ∨ Y) ∧ Z), Z]
   f. f[[a]((⟨b⟩tt ∨ Y) ∧ Z), Y]
7. Prove Proposition 2.

4.5 Modal equations and fixed points

Definitional equality, ≝, is essential for describing perpetual processes, as in the simplest case of the uncluttered clock Cl. Modal logic can be extended with this facility, following Larsen [36]. A modal equation has the form Z ≝ Φ, stipulating that Z expresses the same property as the formula Φ. The effect of this equation is to constrain the colour Z to processes having the property Φ. For instance, Z ≝ ⟨tick⟩tt stipulates that only processes that may immediately tick are coloured Z.
  In the recursive modal equation Z ≝ ⟨tick⟩Z, both occurrences of Z select the same trait. What property is thereby expressed by Z? Recall from the previous section that f[⟨tick⟩Z, Z] is a monotonic function that, when applied to argument ℰ ⊆ P, is

    f[⟨tick⟩Z, Z](ℰ) = ‖⟨tick⟩Z‖V[ℰ/Z]
                     = {E ∈ P : E ⊨V[ℰ/Z] ⟨tick⟩Z}
                     = {E ∈ P : ∃F ∈ ℰ. E –tick→ F}.

Consequently, the recursive modal equation Z ≝ ⟨tick⟩Z constrains Z to be any set which obeys the following equality.

    ℰ = f[⟨tick⟩Z, Z](ℰ)
      = {E ∈ P : ∃F ∈ ℰ. E –tick→ F}

There may be many different subsets of P that are solutions. One example is the empty set

    ∅ = {E ∈ P : ∃F ∈ ∅. E –tick→ F}

because the right hand set must be empty. When P is the set {Cl}, then P is also a solution because of the transition Cl –tick→ Cl.

    {Cl} = {E ∈ {Cl} : ∃F ∈ {Cl}. E –tick→ F}

A function g : 2^P → 2^P transforms subsets into subsets. ℰ ⊆ P is said to be a "fixed point" of g if the transformation leaves ℰ unchanged, g(ℰ) = ℰ. Further applications of g also leave ℰ fixed: g(g(ℰ)) = ℰ, g(g(g(ℰ))) = ℰ, and so on. Every subset of P is a fixed point of the identity function. If g maps every subset into ∅, then ∅ is its only fixed point. On the other hand, if g maps every set to a different set, so g(ℰ) ≠ ℰ for all ℰ, then g does not have a fixed point. The fixed point constraint, g(ℰ) = ℰ, can be dissected.

    ℰ is a "prefixed point" of g, if g(ℰ) ⊆ ℰ
    ℰ is a "postfixed point" of g, if ℰ ⊆ g(ℰ)

A fixed point has to be both a prefixed and a postfixed point. P is always a prefixed point, and ∅ is always a postfixed point.
  A solution to the recursive modal equation Z ≝ ⟨tick⟩Z is therefore a fixed point of the function f[⟨tick⟩Z, Z]. The definitions of prefixed and postfixed points can be viewed as "closure" conditions on putative solution sets ℰ.

    PRE   f[⟨tick⟩Z, Z](ℰ) ⊆ ℰ
          {E ∈ P : ∃F ∈ ℰ. E –tick→ F} ⊆ ℰ
          if E ∈ P and F ∈ ℰ and E –tick→ F, then E ∈ ℰ

    POST  ℰ ⊆ f[⟨tick⟩Z, Z](ℰ)
          ℰ ⊆ {E ∈ P : ∃F ∈ ℰ. E –tick→ F}
          if E ∈ ℰ, then E –tick→ F for some F ∈ ℰ

A fixed point must obey both closure conditions.

Example 1  If P is {Cl}, then both candidate sets ∅ and P obey the closure conditions PRE and POST, and are therefore fixed points of f[⟨tick⟩Z, Z]. The solutions can be ordered by subset inclusion, ∅ ⊆ {Cl}, offering a least and a greatest solution. In the case of the more sonorous clock Cl1, which alternately ticks and tocks, there are more candidates for solutions: the sets ∅, {Cl1}, {tock.Cl1}, {Cl1, tock.Cl1}. Let f abbreviate f[⟨tick⟩Z, Z].

    f(∅) = ∅
    f({Cl1}) = ∅
    f({tock.Cl1}) = {Cl1}
    f({Cl1, tock.Cl1}) = {Cl1}

The prefixed points of f are ∅, {Cl1} and {Cl1, tock.Cl1}, and ∅ is the only postfixed point. Therefore, there is just a single fixed point in this example.
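The table in Example 1 can be reproduced mechanically by enumerating all subsets of P and testing the two closure conditions; note in particular that {Cl1} is a prefixed point, since f({Cl1}) = ∅. A sketch, in which the encoding of Cl1's single tick transition is an assumption:

```python
from itertools import chain, combinations

P = ["Cl1", "tock.Cl1"]
tick = {"Cl1": {"tock.Cl1"}}            # Cl1 --tick--> tock.Cl1 is the only tick

def f(E):
    # f[<tick>Z, Z](E): processes with a tick transition into E
    return {p for p in P if tick.get(p, set()) & E}

# all four subsets of P, smallest first
subsets = [set(c) for c in
           chain.from_iterable(combinations(P, n) for n in range(len(P) + 1))]
pre   = [E for E in subsets if f(E) <= E]    # prefixed points:  f(E) subset of E
post  = [E for E in subsets if E <= f(E)]    # postfixed points: E subset of f(E)
fixed = [E for E in subsets if f(E) == E]

assert pre   == [set(), {"Cl1"}, {"Cl1", "tock.Cl1"}]
assert post  == [set()]
assert fixed == [set()]
```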

With respect to any set of processes P, the equation Z ≝ ⟨tick⟩Z has both a least and a greatest solution (which may coincide) with respect to the subset ordering. The general result guaranteeing these extremal solutions is due to Tarski and Knaster. It shows that the least solution is the intersection of all prefixed points, of all those subsets obeying PRE, and that the greatest solution is the union of all postfixed points, of all those subsets fulfilling POST. The result applies to arbitrary monotonic functions from subsets of P to subsets of P.

Proposition 1  If g : 2^P → 2^P is a monotonic function with respect to ⊆, then g
1. has a least fixed point, given as the set ∩{ℰ ⊆ P : g(ℰ) ⊆ ℰ},
2. has a greatest fixed point, given as the set ∪{ℰ ⊆ P : ℰ ⊆ g(ℰ)}.

Proof. We show 1, leaving 2, which is proved by dual reasoning, as an exercise. Let ℰ' be the set ∩{ℰ ⊆ P : g(ℰ) ⊆ ℰ}. First, we establish that g(ℰ') ⊆ ℰ'. Consider any prefixed point ℰ, so g(ℰ) ⊆ ℰ. By the definition of ℰ' as an intersection, ℰ' ⊆ ℰ, and so by monotonicity of g, g(ℰ') ⊆ g(ℰ) ⊆ ℰ. Hence g(ℰ') is included in every prefixed point, and therefore also in their intersection, which means that g(ℰ') ⊆ ℰ'. Next, because g(ℰ') ⊆ ℰ', monotonicity of g gives g(g(ℰ')) ⊆ g(ℰ'), so g(ℰ') is itself a prefixed point. By the definition of ℰ', it follows that ℰ' ⊆ g(ℰ'). We have now shown that ℰ' is a fixed point of g. Consider any other fixed point ℱ. Because g(ℱ) = ℱ, it follows that g(ℱ) ⊆ ℱ, and by the definition of ℰ' we know that ℰ' ⊆ ℱ, which means ℰ' is the least fixed point. □
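When P is finite, the extremal fixed points of a monotone g can also be computed by iteration rather than by intersecting all prefixed points: starting from ∅ (respectively P) and applying g repeatedly gives an ascending (descending) chain that stabilizes within |P| steps at the least (greatest) fixed point. A sketch, assuming g is given as a Python function on sets:

```python
def lfp(g):
    """Least fixed point of a monotone g : 2^P -> 2^P, by iteration from the empty set."""
    E = set()
    while True:
        nxt = g(E)
        if nxt == E:
            return E
        E = nxt

def gfp(g, P):
    """Greatest fixed point, by iteration downward from P itself."""
    E = set(P)
    while True:
        nxt = g(E)
        if nxt == E:
            return E
        E = nxt

# With P = {Cl} and Cl's tick self-loop, f[<tick>Z, Z](E) = {Cl} if Cl in E, else {}.
g = lambda E: {"Cl"} if "Cl" in E else set()
assert lfp(g) == set()          # the least solution is empty
assert gfp(g, {"Cl"}) == {"Cl"} # the greatest solution contains Cl
```

The two assertions match Example 1 for P = {Cl}: the least and greatest solutions of Z ≝ ⟨tick⟩Z are ∅ and {Cl}.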

Proposition 1 guarantees that any recursive modal equation Z ≝ Φ has extremal solutions, which are least and greatest fixed points of the monotonic function f[Φ, Z]. Relinquishing the equational format, let μZ. Φ express the property given by the least fixed point of f[Φ, Z], and let νZ. Φ express the property determined by its greatest fixed point.
  What properties are expressed by the extremal solutions of the modal equation Z ≝ ⟨tick⟩Z? The least case, μZ. ⟨tick⟩Z, is of little import because it expresses

the same property as ff⁶. More interesting is νZ. ⟨tick⟩Z, which expresses the longstanding ability to tick forever. This property is not expressible in modal logic, as was shown in Section 4.1. To see this, let ℰ ⊆ P consist of all those processes E0 that have an infinite length run of the form E0 –tick→ E1 –tick→ ⋯. Each process Ei mentioned in this run also belongs to ℰ, so ℰ obeys the closure condition POST earlier (that if E ∈ ℰ, then E –tick→ F for some F ∈ ℰ). The set determined by νZ. ⟨tick⟩Z must therefore include ℰ. Assume that there is a larger set ℰ' ⊃ ℰ that also satisfies POST, and that F0 belongs to the set ℰ' − ℰ. By the requirement POST, F0 –tick→ F1 for some F1 in ℰ'. The process F1 also belongs to ℰ' − ℰ because, if it belonged to ℰ, then F0 would also be in ℰ. The POST requirement can now be applied to F1, so F1 has a transition F1 –tick→ F2 with F2 in ℰ' − ℰ. Repeated application of this construction produces a perpetual run from F0 of the form F0 –tick→ F1 –tick→ ⋯, where each Fi ∈ ℰ' − ℰ; but this is a contradiction, since each Fi can then tick forever and therefore belongs to ℰ. The ability to tick forever is therefore given as a simple closure condition on the immediate ability to tick.
  Generalizing slightly, the formula νZ. ⟨K⟩Z expresses an ability to perform K actions forever. There are two special cases: νZ. ⟨−⟩Z expresses a capacity for never ending behaviour, and νZ. ⟨τ⟩Z captures divergence, ↑ of Section 2.5, the ability to engage in infinite internal chatter.
A more composite recursive equation is Z ≝ Φ ∨ ⟨−⟩Z. Assume that Φ does not contain Z. For P and V, the monotonic function f[Φ ∨ ⟨−⟩Z, Z] applied to ℰ is ‖Φ ∨ ⟨−⟩Z‖V[ℰ/Z], which is

    {E ∈ P : E ⊨V Φ} ∪ {E ∈ P : ∃a ∈ A. ∃F ∈ ℰ. E –a→ F}.

Because Φ does not contain Z, valuation V can be used instead of V[ℰ/Z] in the first subset. A fixed point ℰ of this function is subject to the following closure conditions.

    PRE   if E ∈ P and (E ⊨V Φ or ∃a ∈ A. ∃F ∈ ℰ. E –a→ F), then E ∈ ℰ
    POST  if E ∈ ℰ, then E ⊨V Φ or ∃a ∈ A. ∃F ∈ ℰ. E –a→ F

A subset ℰ satisfying the condition PRE has to contain those processes with the property Φ. But then it also has to include processes F1 that fail to satisfy Φ, but have a transition F1 –a→ E, where E ⊨V Φ. And so on. It turns out that a process E0 has the property μZ. Φ ∨ ⟨−⟩Z if there is a run E0 –a1→ E1 –a2→ ⋯ and an i ≥ 0 with Ei ⊨V Φ. That is, if E0 has the weak eventually property EFΦ of CTL. The largest solution also includes the extra possibility of performing actions forever without Φ ever becoming true, as the reader can check. A slight variant is to consider μZ. Φ ∨ ⟨K⟩Z, where K is a family of actions. This expresses that a process is able to perform K actions until Φ holds. When K is the singleton

⁶Because ∅ is always a fixed point of f[⟨tick⟩Z, Z].



set {τ}, this formula expresses that, after some silent activity, Φ is true, expressed modally as ⟨⟨ ⟩⟩Φ.
  Another example is the recursive equation Z ≝ Ψ ∨ (Φ ∧ ⟨−⟩Z), where neither Φ nor Ψ contains Z. The function f[Ψ ∨ (Φ ∧ ⟨−⟩Z), Z] applied to ℰ is the set

    {E : E ⊨V Ψ} ∪ {E : E ⊨V Φ and ∃F ∈ ℰ. ∃a ∈ A. E –a→ F}.

Its least fixed point with respect to P and V is the smallest set ℰ that obeys the closure condition

    ({E : E ⊨V Ψ} ∪ {E : E ⊨V Φ and ∃F ∈ ℰ. ∃a ∈ A. E –a→ F}) ⊆ ℰ.

If E ∈ P and E ⊨V Ψ, then E ∈ ℰ. Also, if E ⊨V Φ and E has a transition E –a→ F with F ∈ ℰ, then E ∈ ℰ. Therefore, we can build up this least set ℰ in stages as follows.

    ℰ1 = {E : E ⊨V Ψ}
    ℰ2 = ℰ1 ∪ {E : E ⊨V Φ and ∃F ∈ ℰ1. ∃a ∈ A. E –a→ F}
    ⋮
    ℰi+1 = ℰi ∪ {E : E ⊨V Φ and ∃F ∈ ℰi. ∃a ∈ A. E –a→ F}

The least set ℰ will be the union of the sets ℰi: a process enters stage ℰi+1 just if it has a run of length at most i to a process satisfying Ψ whose intermediate states all satisfy Φ. The formula μZ. Ψ ∨ (Φ ∧ ⟨−⟩Z) captures the weak until of CTL, E(ΦUΨ), as described in Section 4.3. Later, we shall describe the idea of computing stages more formally using approximants. The strong until A(ΦUΨ) of CTL is defined as μZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)), where Z does not occur in Ψ or Φ. Later, we shall see that we can also define properties that are not expressible in CTL.
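The stage-by-stage construction of the least solution can be run directly: each stage adds the processes that satisfy Φ and have a transition into the previous stage. A small sketch in which the three-state system and the choice of which states satisfy Φ and Ψ are invented for illustration:

```python
# Invented example: s0 --a--> s1 --b--> s2, with Psi true only at s2
# and Phi true at s0 and s1.
trans = {"s0": {"s1"}, "s1": {"s2"}, "s2": set()}
sat_psi = {"s2"}
sat_phi = {"s0", "s1"}

def least_solution():
    """Stages of mu Z. Psi v (Phi /\\ <->Z), accumulated until nothing new appears."""
    E = set(sat_psi)                       # E1 = {E : E |= Psi}
    while True:
        step = {p for p in trans
                if p in sat_phi and trans[p] & E}
        if step <= E:
            return E                       # no new processes: least fixed point
        E |= step                          # E_{i+1} = E_i u {...}

assert least_solution() == {"s0", "s1", "s2"}
```

Here s1 enters at stage 2 (it satisfies Φ and steps into {s2}) and s0 at stage 3, so all three states satisfy E(Φ U Ψ).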

Exercises
1. Show that if g : 2^P → 2^P maps all arguments to different sets (that is, g(ℰ) ≠ ℰ for all ℰ), then g is not monotonic.
2. Assume h : 2^P → 2^P is monotonic with respect to ⊆. Prove the following.
   a. if ℰ1 and ℰ2 are prefixed points of h, then ℰ1 ∩ ℰ2 is also a prefixed point of h
   b. if ℰ1 and ℰ2 are postfixed points of h, then ℰ1 ∪ ℰ2 is also a postfixed point of h

3. Prove Proposition 1, part 2.
4. Assume g : 2^P → 2^P is an arbitrary function. The function g is inflationary if ℰ ⊆ g(ℰ) for any set ℰ ⊆ P. Show that if g is inflationary, then g has fixed points.
5. What properties are expressed by the following formulas?
   a. νZ. [tick]Z
   b. μZ. [tick]Z
   c. νZ. ⟨tick⟩⟨−⟩Z
   d. νZ. ⟨tick⟩Z ∧ ⟨tock⟩Z
6. Show that νZ. Φ ∨ ⟨−⟩Z, when Φ does not contain Z, expresses the CTL property EFΦ ∨ EG⟨−⟩tt.
7. Contrast the properties expressed by the formulas μZ. Φ ∧ [τ]Z and νZ. Φ ∧ [τ]Z when Φ does not contain Z.
8. Assume that Φ and Ψ do not contain Z.
   a. What property does νZ. Ψ ∨ (Φ ∧ ⟨−⟩Z) express?
   b. Show that μZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)) captures the strong until A(ΦUΨ) of CTL.
   c. What does νZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)) express?
9. Prove that νZ. Φ ∧ [−]Z expresses AGΦ (when Z does not occur in Φ).
10. What property does νZ. Φ ∧ [−][−]Z express, assuming that Φ does not contain Z? Prove that it is not expressible in CTL.
11. What property is expressed by the formula μY. Φ ∨ (⟨K⟩Y ∧ ⟨J⟩Y) when Φ does not contain Y?

4.6 Duality

The expressive power of modal logic is increased when extended with least and greatest solutions of recursive modal equations. For instance, the least solution to Z ≝ Φ ∨ ⟨−⟩Z (when Φ does not contain Z) expresses the weak liveness property EFΦ. The complement of weak liveness is safety. A process E does not satisfy μZ. Φ ∨ ⟨−⟩Z if Φ never becomes true in any run of E.
  Complements are directly expressible when negation is freely admitted into formulas, as with CTL. The reason for avoiding negation in modal formulas is to preserve monotonicity. A simple example is the equation Z ≝ ¬Z. The function f[¬Z, Z] is not monotonic because f[¬Z, Z](ℰ) = P − ℰ. A fixed point ℰ of this function must obey ℰ = P − ℰ, which is impossible because P is non-empty.
  Another example is Z ≝ ⟨tock⟩tt ∧ [tick]¬Z. The function f[⟨tock⟩tt ∧

[tick]¬Z, Z] applied to ℰ is the set

    {E ∈ P : ∃F. E –tock→ F} ∩ {E ∈ P : ∀F. if E –tick→ F then F ∉ ℰ}.

In general, this function is not monotonic, and whether it has fixed points depends on the structure of the set P.
  Negation can be admitted into modal formulas provided the following restriction is placed on the form of a recursive modal equation Z ≝ Φ: every free occurrence of Z in Φ lies within the scope of an even number of negations. This guarantees that the function f[Φ, Z] is monotonic. The two examples above do not comply with this condition.
  The complement of a formula is also in the logic without the explicit presence of negation. This was shown for modal formulas of M in Section 2.2, where Φᶜ, the complement of Φ, is defined inductively as follows.

    ttᶜ = ff                  ffᶜ = tt
    (Φ ∧ Ψ)ᶜ = Φᶜ ∨ Ψᶜ        (Φ ∨ Ψ)ᶜ = Φᶜ ∧ Ψᶜ
    ([K]Φ)ᶜ = ⟨K⟩Φᶜ           (⟨K⟩Φ)ᶜ = [K]Φᶜ

It also turns out that the fixed point operators are duals of each other.

    Zᶜ = Z
    (νZ. Φ)ᶜ = μZ. Φᶜ
    (μZ. Φ)ᶜ = νZ. Φᶜ

Assume that V(Z) ⊆ P for all Z. The complement valuation Vᶜ with respect to P is given as Vᶜ(Z) = P − V(Z) for all Z. The following result shows that Φᶜ is the complement of Φ modulo complementation of V.
Proposition 1  E ⊨V Φ iff E ⊭Vᶜ Φᶜ.

Proof. This result generalises Proposition 1 of Section 2.2 and is proved by structural induction on Φ. The base cases are when Φ is tt, ff, or Z. The first two are clear. E ⊨V Z iff E ∈ V(Z) iff E ∉ Vᶜ(Z) iff E ⊭Vᶜ Z. For the inductive step, the boolean and modal cases are as in Proposition 1 of Section 2.2. The new cases are those involving fixed points.

    E ⊨V νZ. Φ iff E ∈ ∪{ℰ ⊆ P : ℰ ⊆ ‖Φ‖V[ℰ/Z]}
               iff E ∉ P − ∪{ℰ ⊆ P : ℰ ⊆ ‖Φ‖V[ℰ/Z]}
               iff E ∉ ∩{P − ℰ : ℰ ⊆ ‖Φ‖V[ℰ/Z]},

which by the induction hypothesis is as follows.

               iff E ∉ ∩{P − ℰ : ‖Φᶜ‖Vᶜ[P−ℰ/Z] ⊆ P − ℰ}
               iff E ∉ ∩{ℰ ⊆ P : ‖Φᶜ‖Vᶜ[ℰ/Z] ⊆ ℰ}
               iff E ⊭Vᶜ μZ. Φᶜ
               iff E ⊭Vᶜ (νZ. Φ)ᶜ

The other case, E ⊨V μZ. Φ, is similar and is left as an exercise. □
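The complement translation Φ ↦ Φᶜ is a straightforward structural recursion, dualising every connective. A mechanical rendering may help; the tuple encoding of formulas used here ("and", "box", "nu" nodes, and so on) is an invented convention, not the book's notation:

```python
def comp(phi):
    """Syntactic complement: dualise every connective, fixed points included."""
    if phi == "tt": return "ff"
    if phi == "ff": return "tt"
    if isinstance(phi, str): return phi           # a variable Z is self-dual
    op = phi[0]
    if op == "and": return ("or", comp(phi[1]), comp(phi[2]))
    if op == "or":  return ("and", comp(phi[1]), comp(phi[2]))
    if op == "box": return ("dia", phi[1], comp(phi[2]))
    if op == "dia": return ("box", phi[1], comp(phi[2]))
    if op == "nu":  return ("mu", phi[1], comp(phi[2]))
    if op == "mu":  return ("nu", phi[1], comp(phi[2]))
    raise ValueError(phi)

# (nu Z. <tau>Z)^c = mu Z. [tau]Z: divergence dualises to convergence
assert comp(("nu", "Z", ("dia", "tau", "Z"))) == ("mu", "Z", ("box", "tau", "Z"))
# comp is an involution
phi = ("mu", "Z", ("or", "ff", ("dia", "a", "Z")))
assert comp(comp(phi)) == phi
```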

If a property is an extremal solution to the equation Z ≝ Φ, then its complement is the dual solution to the equation Z ≝ Φᶜ. Consider convergence, E↓, which holds if E is unable to perform silent actions for ever. The formula νZ. ⟨τ⟩Z expresses its complement, divergence. Hence, convergence is defined as the least solution to the equation Z ≝ (⟨τ⟩Z)ᶜ, which is μZ. [τ]Z.
  Safety is the complement of weak liveness. The formula μZ. Φ ∨ ⟨−⟩Z captures EFΦ. Its complement is the largest solution to Z ≝ (Φ ∨ ⟨−⟩Z)ᶜ, which is νZ. Φᶜ ∧ [−]Z. This formula expresses "Φ is never true", AGΦᶜ.

Example 1  The level crossing of Figure 1.10 has the crucial safety property that it is never possible for a train and a car to cross at the same time. The feature to be avoided is ⟨tcross⟩tt ∧ ⟨ccross⟩tt, so the safety property is given by νZ. ([tcross]ff ∨ [ccross]ff) ∧ [−]Z.
  The slot machine eventually produces winnings or an indication of loss. A slightly better description is "whenever a coin is input, eventually either a loss or a winning sum of money is output." This property is expressed using both fixed point operators. Embedding fixed point operators within each other goes beyond the simple equational format here described.
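For a finite model, a safety formula of this shape can be checked by iterating its defining function downward from P, repeatedly discarding states that violate the invariant or can step outside the surviving set. A sketch on an invented three-state abstraction of the crossing (the real model of Figure 1.10 is larger; state and action names here are illustrative):

```python
# Invented abstraction: only in state "both" are a train and a car free to cross.
trans = {
    "rail": {("tcross", "road")},
    "road": {("ccross", "rail")},
    "both": {("tcross", "rail"), ("ccross", "rail")},
}
P = set(trans)

# states violating [tcross]ff v [ccross]ff, i.e. with both crossings enabled
bad = {p for p in P
       if any(a == "tcross" for a, _ in trans[p])
       and any(a == "ccross" for a, _ in trans[p])}

E = set(P)
while True:      # iterate f[([tcross]ff v [ccross]ff) /\ [-]Z, Z] down from P
    nxt = {p for p in E - bad if all(q in E for _, q in trans[p])}
    if nxt == E:
        break
    E = nxt

assert bad == {"both"}
assert E == {"rail", "road"}    # the states satisfying the safety formula
```

In this toy model "both" is unreachable from "rail" and "road", so those two states satisfy the greatest fixed point while "both" does not.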

Exercises
1. Let f be the function f[⟨tock⟩tt ∧ [tick]¬Z, Z].
   a. Give a set P such that f does not have fixed points
   b. Give a set P such that f has both least and greatest fixed points
2. Assume that formulas may contain occurrences of ¬. Prove that, if every occurrence of Z within Φ lies within the scope of an even number of negations, then the recursive equation Z ≝ Φ has both a least and a greatest solution.
3. When ¬ is freely admitted into formulas, show the following (where two formulas Φ and Ψ are equivalent if for all processes E, E ⊨ Φ iff E ⊨ Ψ).
   a. μZ. Φ is equivalent to ¬νZ. ¬Φ{¬Z/Z}
   b. νZ. Φ is equivalent to ¬μZ. ¬Φ{¬Z/Z}
4. What property of processes does μZ. [−]Z express?
5. Prove that [[ ]]Φ is expressed as the formula νZ. Φ ∧ [τ]Z.
5

Modal Mu-Calculus

5.1 Modal logic with fixed points . . . 104
5.2 Macros and normal formulas . . . 107
5.3 Observable modal logic with fixed points . . . 110
5.4 Preservation of bisimulation equivalence . . . 112
5.5 Approximants . . . 115
5.6 Embedded approximants . . . 121
5.7 Expressing properties . . . 128

In the previous chapter we saw that modal formulas are not very expressive. They cannot capture enduring traits of processes, the properties definable within temporal logic. However, these longer-term properties can be viewed as closure conditions on the immediate capabilities and necessities that modal logic captures. By permitting recursive modal equations, these temporal properties are expressible as extremal solutions of such equations. The property "whenever a coin is inserted, eventually an item is collected" is expressed using two recursive modal equations with different solutions. In the previous chapter, least and greatest solutions to recursive modal equations were represented using the fixed point quantifiers μZ and νZ. In this chapter we shall explicitly add these connectives to modal logic, thereby providing a very rich temporal logic.

C. Stirling, Modal and Temporal Properties of Processes
© Springer Science+Business Media New York 2001

5.1 Modal logic with fixed points

Modal logic with the extremal fixed point operators νZ and μZ is known as "modal mu-calculus", μM. Formulas of μM are built from variables, boolean connectives, modal operators and the fixed point operators.

    Φ ::= tt | ff | Z | Φ1 ∧ Φ2 | Φ1 ∨ Φ2 | [K]Φ | ⟨K⟩Φ | νZ. Φ | μZ. Φ

In the sequel, we let σ range over the set {μ, ν}. Formulas of μM may contain multiple occurrences of fixed point operators. An occurrence of a variable Z is "free" within a formula if it is not within the scope of an occurrence of a σZ. The operator σZ in the formula σZ. Φ is a quantifier that binds free occurrences of Z in Φ. We assume that σZ has wider scope than the other operators. The scope of μZ is the rest of the formula in

    μZ. ⟨J⟩Y ∨ (⟨b⟩Z ∧ μY. νZ. ([b]Y ∧ [K]Z))

and it binds the occurrence of Z in the subformula ⟨b⟩Z, but does not bind the occurrence of Z in [K]Z, which is bound by the occurrence of νZ. There is just one free variable occurrence in this formula, that of Y in ⟨J⟩Y. An occurrence of σZ may bind more than one occurrence of Z, as in νZ. ⟨tick⟩Z ∧ ⟨tock⟩Z.
  The satisfaction relation ⊨V between processes and formulas (relative to the valuation V) is defined inductively on the structure of formulas. First are the cases that have been presented previously.

    E ⊨V tt
    E ⊭V ff
    E ⊨V Z      iff  E ∈ V(Z)
    E ⊨V Φ ∧ Ψ  iff  E ⊨V Φ and E ⊨V Ψ
    E ⊨V Φ ∨ Ψ  iff  E ⊨V Φ or E ⊨V Ψ
    E ⊨V [K]Φ   iff  ∀F ∈ {E' : E –a→ E' and a ∈ K}. F ⊨V Φ
    E ⊨V ⟨K⟩Φ   iff  ∃F ∈ {E' : E –a→ E' and a ∈ K}. F ⊨V Φ

The remaining cases are the fixed point operators, and in their definition we appeal to the sets of processes ‖Φ‖V = {E ∈ P : E ⊨V Φ}, as defined in Section 4.4. It is assumed in both cases that E belongs to P.

    E ⊨V νZ. Φ  iff  E ∈ ∪{ℰ ⊆ P : ℰ ⊆ ‖Φ‖V[ℰ/Z]}
    E ⊨V μZ. Φ  iff  E ∈ ∩{ℰ ⊆ P : ‖Φ‖V[ℰ/Z] ⊆ ℰ}

These clauses are instances of Proposition 1 of Section 4.5. A greatest fixed point is the union of postfixed points, and a least fixed point is the intersection of prefixed points. To justify their use here, we need to show that, for any variable Z and μM formula Φ, the function f[Φ, Z] is monotonic¹. The formula Φ may contain fixed points, and therefore we need to extend Proposition 1 of Section 4.4 from modal formulas to μM formulas. Recall that the valuation V' extends V, written V ⊑ V', if for each variable Z, V(Z) ⊆ V'(Z).

Proposition 1  If V' extends V, then ‖Φ‖V ⊆ ‖Φ‖V'.

Proof. We show by induction on Φ that, if V ⊑ V', then ‖Φ‖V ⊆ ‖Φ‖V'. We now drop the index P. The base cases, boolean cases and modal cases are exactly as in the proof of Proposition 1 of Section 4.4. This just leaves the fixed point cases. Let Φ be the formula νY. Ψ. Suppose E ∈ ‖Φ‖V. By the semantic clause for νY, there is a set ℰ containing E with the property that ℰ ⊆ ‖Ψ‖V[ℰ/Y]. Because V ⊑ V', it follows that V[ℰ/Y] ⊑ V'[ℰ/Y], and therefore by the induction hypothesis ‖Ψ‖V[ℰ/Y] ⊆ ‖Ψ‖V'[ℰ/Y]. Consequently, ℰ ⊆ ‖Ψ‖V'[ℰ/Y], and therefore E ∈ ‖Φ‖V' too. The other case, when Φ is μY. Ψ, is similar and is left as an exercise. □

A straightforward corollary of Proposition 1 is that, for any μM formula Φ and valuation V, if ℰ ⊆ ℱ then ‖Φ‖V[ℰ/Z] ⊆ ‖Φ‖V[ℱ/Z], and that therefore the function f[Φ, Z] is monotonic.
  A slightly different presentation of the clauses for the fixed points dispenses with explicit use of the sets ‖Φ‖V.

    E ⊨V νZ. Φ  iff  ∃ℰ ⊆ P. E ∈ ℰ and ∀F ∈ ℰ. F ⊨V[ℰ/Z] Φ
    E ⊨V μZ. Φ  iff  ∀ℰ ⊆ P. if E ∉ ℰ then ∃F ∈ P. F ⊨V[ℰ/Z] Φ and F ∉ ℰ

The first is a simple reformulation of the clause above, and the second follows by routine calculation:

    E ⊨V μZ. Φ  iff  E ∈ ∩{ℰ ⊆ P : ‖Φ‖V[ℰ/Z] ⊆ ℰ}
                iff  ∀ℰ ⊆ P. if ‖Φ‖V[ℰ/Z] ⊆ ℰ then E ∈ ℰ
                iff  ∀ℰ ⊆ P. if E ∉ ℰ then ‖Φ‖V[ℰ/Z] ⊈ ℰ
                iff  ∀ℰ ⊆ P. if E ∉ ℰ then ∃F ∈ P. F ⊨V[ℰ/Z] Φ and F ∉ ℰ.

An "unfolding" of a fixed point formula σZ. Φ is the formula Φ{σZ. Φ/Z}: the fixed point formula is substituted for all free occurrences of Z in the "body" Φ. For instance, the unfolding of νZ. ⟨−⟩Z is ⟨−⟩(νZ. ⟨−⟩Z). The meaning of a fixed point formula is the same as its unfolding.

Proposition 2  E ⊨V σZ. Φ iff E ⊨V Φ{σZ. Φ/Z}.

¹Relative to P and V, for any ℰ ⊆ P, f[Φ, Z](ℰ) is the set ‖Φ‖V[ℰ/Z].

Modal mu-calculus was originally proposed by Kozen [34] (and see also Pratt
[50]) as an extension of propositional dynamic logic/ . Its roots lie in more general
program logics with extremal fixed points, originally developed by Park, De Bakker
and De Roever. Larsen suggested that Hennessy-Milner logic with fixed points is
useful for describing properties ofprocesses [36]. Previously, Clarke and Emerson
used extremal fixed points on top of a temporallogic for expressing properties of
concurrent systems [18].
Example 1  The vending machine Ven of Section 1.1 has the property "whenever a coin is inserted, eventually an item is collected", expressed as νZ. [2p, 1p]Ψ ∧ [−]Z, when Ψ is μY. ⟨−⟩tt ∧ [−{collectb, collectl}]Y. The appropriate set P is

    {Ven, Venb, Venl, collectb.Ven, collectl.Ven}.

Let V be any valuation. First we show that ‖Ψ‖V is the full set P. Clearly, P is a prefixed point.

    ‖⟨−⟩tt ∧ [−{collectb, collectl}]Y‖V[P/Y] ⊆ P

It is in fact the smallest prefixed point. Any proper subset ℰ of P fails the associated closure condition, where A' is the set A − {collectb, collectl}.

    If (∃F. ∃a. E –a→ F and ∀F. ∀a ∈ A'. E –a→ F implies F ∈ ℰ), then E ∈ ℰ

For example, if ℰ is the subset P − {Venb}, then Venb satisfies the antecedent of this closure condition, and therefore ℰ is not a prefixed point. Similarly, ∅ fails to be a prefixed point because collectb.Ven satisfies the antecedent. Given that ‖Ψ‖V is P, it follows that ‖νZ. [2p, 1p]Ψ ∧ [−]Z‖V is also P.

As with modal formulas, a formula Φ is said to be realizable (or satisfiable) if there is a process that satisfies it. For example, the clock Cl realizes νZ. ⟨tick⟩Z. In contrast, μZ. ⟨tick⟩Z is not satisfiable. There is a technique for deciding whether a formula is realizable, due to Streett and Emerson [56]. An important consequence of their proof is that modal mu-calculus has the "finite model property": if a formula holds of a process, then there is a finite state process satisfying it.

Proposition 3  If E ⊨V Φ, then there is a finite state process F and valuation V' such that F ⊨V' Φ.

Exercises
1. Show the following.
   a. Cl ⊨ νZ. ⟨tick⟩Z ∨ [tick]ff
   b. tick.0 ⊨ νZ. ⟨tick⟩Z ∨ [tick]ff

²The modalities of μM slightly extend those of Kozen's logic because sets of labels may appear within them instead of single labels, and on the other hand Kozen has explicit negation. Kozen calls the logic "propositional mu-calculus", which would be more appropriate for boolean logic with fixed points.

   c. Cl ⊭ μZ. ⟨tick⟩Z ∨ [tick]ff
   d. tick.0 ⊨ μZ. ⟨tick⟩Z ∨ [tick]ff
2. Let P = P(Ven). Determine the sets ‖Φ‖V when Φ is each of the following formulas.
   a. μZ. ⟨2p, 1p⟩tt ∨ [−]Z
   b. μZ. ⟨big⟩tt ∨ [−]Z
   c. μZ. ⟨little⟩tt ∨ [−]Z
   d. νZ. [2p](μY. ⟨collectb⟩tt ∨ [−]Y) ∧ [−]Z
3. Assume that D and D' are the following two processes: D ≝ a.D' and D' ≝ b.0 + a.D. Show the following.
   a. D ⊨ νZ. μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)
   b. D' ⊨ νZ. μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)
   c. D ⊭ μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
   d. 0 ⊨ μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
4. Show that, if Z is not free in Φ, then E ⊨V Φ iff E ⊨V[ℰ/Z] Φ.
5. a. Carefully define the substitution operation Φ{Ψ/Z} by induction on Φ.
   b. Prove Proposition 2, above.
6. In propositional dynamic logic there is some structure on actions.

    E –a;b→ F  iff  E –a→ E1 –b→ F for some E1
    E –a*→ F   iff  E = F, or E –a→ E1 –a→ ⋯ –a→ En –a→ F for some n ≥ 0 and E1, ..., En

   Show the following, assuming that Φ does not contain Z as a free variable.
   a. E ⊨V [a;b]Φ iff E ⊨V [a][b]Φ
   b. E ⊨V [a*]Φ iff E ⊨V νZ. Φ ∧ [a]Z
7. What properties are expressed by the following formulas?
   a. μY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y
   b. νY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y
   c. μX. νY. [a]X ∧ [−a]Y
   d. νZ. [a](μY. ⟨−⟩tt ∧ [−b]Y) ∧ [−]Z
   e. νZ. (μX. [b](νY. [c](νY1. X ∧ [−a]Y1) ∧ [−a]Y) ∧ [−]Z)

5.2 Macros and normal formulas

A common complaint about μM is that formulas can be difficult to understand, and that it can be hard to find the right formula to express a particular property. Of course, this is also true of almost any notation involving binding or embedded operators. For example, it is not immediately clear what property the CTL formula AG(EF⟨tick⟩tt ∧ AF[tock]ff) expresses. However, the problem is more acute in the case of μM because of fixed points.
  Because μM is very expressive, we can introduce a variety of macros. For example, as we saw in Chapter 4, the temporal operators of CTL are definable as fixed points in a straightforward fashion (where Φ and Ψ do not contain Z).

    A(Φ U Ψ) ≝ μZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z))
    E(Φ U Ψ) ≝ μZ. Ψ ∨ (Φ ∧ ⟨−⟩Z)

Macros can also be introduced for the starring operator of propositional dynamic logic (where again Φ does not contain Z).

    [K*]Φ ≝ νZ. Φ ∧ [K]Z

Formulas of μM exhibit a duality. A formula Φ that does not contain occurrences of free variables has a straightforward complement Φᶜ, as defined in Section 4.6. The following is an example.

    (νZ. μY. νX. [a](⟨b⟩X ∧ Z) ∨ [K]Y)ᶜ = μZ. νY. μX. ⟨a⟩(([b]X ∨ Z) ∧ ⟨K⟩Y)

Because CTL has explicit negation, we can also introduce macros for negations of the two kinds of until formulas as follows (where again Z does not occur in Φ or Ψ).

    ¬A(Φ U Ψ) ≝ νZ. Ψᶜ ∧ (Φᶜ ∨ ([−]ff ∨ ⟨−⟩Z))
    ¬E(Φ U Ψ) ≝ νZ. Ψᶜ ∧ (Φᶜ ∨ [−]Z)

The fixed point versions of these formulas are arguably easier to understand than their CTL versions. For instance, the second formula expresses "Ψᶜ unless Φᶜ ∧ Ψᶜ is true."
  A formula that contains free occurrences of variables does not have an explicit complement. For example, Z does not have a complement in μM. However, the semantics of free formulas requires valuations. Therefore, we can appeal to the complement valuation Vᶜ, as in Section 4.6. Consequently, E ⊨V Φ iff E ⊭Vᶜ Φᶜ.
  Bound variables can be changed in formulas without affecting their meaning. If Y does not occur at all in σZ. Φ, then this formula can be rewritten σY. (Φ{Y/Z}). It is useful to write formulas in such a way that bound variables are unique. This supports the following definition.

Definition 1  A formula Φ is "normal" provided that
1. if σ1Z1 and σ2Z2 are two different occurrences of binders in Φ, then Z1 ≠ Z2; and
2. no occurrence of a free variable Z is also used in a binder σZ in Φ.

Every formula can be easily converted into a normal formula of the same size and
shape by renaming bound variables.

μZ. ⟨f⟩Y ∨ (⟨b⟩Z ∧ μY. νZ. ([b]Y ∧ [K]Z))

can be rewritten as

μZ. ⟨f⟩Y ∨ (⟨b⟩Z ∧ μX. νU. ([b]X ∧ [K]U)).

This helps us to understand which occurrences of variables are free, and which
occurrences are bound. In the sequel, we shall exclusively make use of normal
formulas.
In later applications, we shall need to know the set of subformulas of a formula
Φ. This set Sub(Φ) is finite and extends the definition provided in Section 2.2 for
modal formulas.
Definition 2 Sub(Φ) is defined inductively by case analysis on Φ.

Sub(tt) = {tt}
Sub(ff) = {ff}
Sub(X) = {X}
Sub(Φ₁ ∧ Φ₂) = {Φ₁ ∧ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub(Φ₁ ∨ Φ₂) = {Φ₁ ∨ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub([K]Φ) = {[K]Φ} ∪ Sub(Φ)
Sub(⟨K⟩Φ) = {⟨K⟩Φ} ∪ Sub(Φ)
Sub(νZ. Φ) = {νZ. Φ} ∪ Sub(Φ)
Sub(μZ. Φ) = {μZ. Φ} ∪ Sub(Φ)

Example 1 Sub(μX. νY. ([b]X ∧ [K]Y)) is the set

{μX. νY. ([b]X ∧ [K]Y), νY. [b]X ∧ [K]Y, [b]X ∧ [K]Y, [b]X, [K]Y, X, Y},

which contains seven subformulas.
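The clauses of Definition 2 translate directly into a recursive computation. The following is a minimal illustrative sketch, not taken from the text: formulas are encoded as nested tuples, and the tuple tags and the helper name `sub` are this sketch's own assumptions.

```python
# Formulas as nested tuples; Sub computed by the clauses of Definition 2.
# The tags ('var', 'and', 'or', 'box', 'dia', 'nu', 'mu') are this
# sketch's own encoding, not notation from the text.
def sub(phi):
    """Return the set of subformulas of phi."""
    tag = phi[0]
    if tag in ('tt', 'ff', 'var'):        # atomic formulas and variables
        return {phi}
    if tag in ('and', 'or'):              # binary connectives
        return {phi} | sub(phi[1]) | sub(phi[2])
    # modalities [K]F, <K>F and binders nuZ.F, muZ.F each add themselves
    # to the subformulas of their single formula argument
    if tag in ('box', 'dia', 'nu', 'mu'):
        return {phi} | sub(phi[2])
    raise ValueError(tag)

# Example 1: muX. nuY. ([b]X /\ [K]Y)
body = ('and', ('box', 'b', ('var', 'X')), ('box', 'K', ('var', 'Y')))
phi = ('mu', 'X', ('nu', 'Y', body))
print(len(sub(phi)))  # 7, matching Example 1
```

Because formulas are plain tuples, the set of subformulas is computed with ordinary set union, mirroring the ∪ in each clause of the definition.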


If Φ is normal and σZ. Ψ belongs to Sub(Φ), then the binding variable Z can be
used to uniquely identify this subformula.
Later we shall need to understand when one fixed point formula is more
"outermost" than another. For this we introduce the notion of subsumption.
Definition 3 Assume Φ is normal and that σ₁X. Ψ, σ₂Z. Ψ′ ∈ Sub(Φ). The variable X
"subsumes" Z if σ₂Z. Ψ′ ∈ Sub(σ₁X. Ψ).
For instance, in the case of the formula of Example 1, X subsumes Y but not vice
versa, since νY. ([b]X ∧ [K]Y) ∈ Sub(μX. νY. ([b]X ∧ [K]Y)). The following
are some simple but useful properties of subsumption, whose proofs are left as an
exercise for the reader.
110 5. Modal Mu-Calculus

Proposition 1 1. X subsumes X.
2. If X subsumes Z and Z subsumes Y, then X subsumes Y.
3. If X subsumes Y and X ≠ Y, then Y does not subsume X.

Exercises 1. Introduce the following operators of CTL as macros by defining them as fixed
points: AF, EF, AG and EG.
2. Put the following formulas into normal form.
a. μY. [K]Y ∧ νY. ⟨a⟩Y
b. [J]Y ∧ ⟨K⟩Y
c. νX. μY. (Z ∨ μZ. νX. ⟨b⟩(X ∧ Z) ∨ [a]Y)
3. A normal μM formula is "singular" if each occurrence of a binder σZ
binds exactly one occurrence of Z. Prove that every formula can be re-
written as a singular formula. (Hint: show that σZ. Φ(Z, Z) is equivalent
to σZ₁. σZ₂. Φ(Z₁, Z₂).)
4. Work out the following.
a. Sub(tt ∧ (tt ∨ ⟨a⟩ff))
b. Sub(νX. μY. [K]X ∨ (⟨−⟩X ∧ [−K][−]Y))
c. Sub(μX. ⟨a⟩X ∧ ⟨a⟩⟨a⟩X)
d. Sub(μY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y)
e. Sub(νZ. [a](μY. ⟨−⟩tt ∧ [−b]Y) ∧ [−]Z)

5. For each of the following, determine whether X subsumes Y.
a. νX. μY. [K]X ∨ (⟨−⟩X ∧ [−K]Y)
b. μY. νX. ⟨a⟩X ∧ ⟨a⟩⟨a⟩X
c. νZ. ((νX. ⟨a⟩X) ∨ (μY. [b]Y)) ∧ [−]Z
d. μX. νY. [a]X ∧ [−a]Y

6. Prove Proposition 1.

5.3 Observable modal logic with fixed points


In Sections 2.4 and 2.5, the observable modal logics M° and M°↓ are described.
Their modal operators, such as ⟨⟨ ⟩⟩, [ ], [↓] and ⟨⟨↑⟩⟩, are not definable in the
modal logic M. However, they are definable in μM, as follows (where as usual it
is assumed that Z is not free in Φ).

⟨⟨ ⟩⟩Φ =def μZ. Φ ∨ ⟨τ⟩Z
[ ]Φ =def νZ. Φ ∧ [τ]Z
⟨⟨↑⟩⟩Φ =def νZ. Φ ∨ ⟨τ⟩Z
[↓]Φ =def μZ. Φ ∧ [τ]Z

The contrast in meaning between [ ] and [↓] is the difference in their fixed points.
The formula [ ]Φ is just [τ*]Φ, since Φ has to hold throughout any amount of
silent activity. A divergent process may have the property [ ]Φ, but it cannot
satisfy [↓]Φ. Hence the change in fixed point. The set

∩{ℰ ⊆ P : || Φ ∧ [τ]Z ||V[ℰ/Z] ⊆ ℰ}

at least contains the stable processes (those unable to perform a silent action) with the
property Φ. Therefore, the set also contains any process that satisfies Φ and that
eventually stabilizes after some amount of silent activity, provided that Φ continues
to be true.
The modal logics M° and M°↓ are therefore special sublogics of μM. The
derived operators [[K]] and [[↓ K]] are defined using embedded fixed points, as
follows (where Φ does not contain Z or Y).

[[K]]Φ =def [ ][K][ ]Φ
        = νZ. [K][ ]Φ ∧ [τ]Z
        = νZ. [K](νY. Φ ∧ [τ]Y) ∧ [τ]Z

[[↓ K]]Φ =def [↓][K][ ]Φ
        = μZ. [K][ ]Φ ∧ [τ]Z
        = μZ. [K](νY. Φ ∧ [τ]Y) ∧ [τ]Z

Observable modal logic M° can be extended with fixed points. The formulas
of observable mu-calculus, μM°, are as follows.

Φ ::= Z | tt | ff | Φ₁ ∧ Φ₂ | Φ₁ ∨ Φ₂ | [[K]]Φ | ⟨⟨K⟩⟩Φ | [ ]Φ | ⟨⟨ ⟩⟩Φ | νZ. Φ | μZ. Φ

Here K ranges over sets of observable actions (which exclude τ). This sublogic is
suitable for describing properties of observable transition systems.
We can also define the sublogic μM°↓, which contains the additional modalities
[↓] and ⟨⟨↑⟩⟩. Unlike μM°, this logic is sensitive to divergence.

Exercises 1. Show that the fixed point definitions of ⟨⟨ ⟩⟩, [ ], ⟨⟨↑⟩⟩ and [↓] are indeed
correct.
2. Provide fixed point definitions for the following modalities: [K↓], [↓ K↓],
⟨⟨K⟩⟩, and ⟨⟨↑ K ↑⟩⟩.
3. Show that divergence is not definable in μM°. That is, prove that there is no
formula Φ ∈ μM° with the feature that E↑ iff E ⊨ Φ for any E.
4. Show that Protocol and Copy have the same μM° properties, but not the same
μM°↓ properties.

5.4 Preservation of bisimulation equivalence


The modal logic M characterizes strong bisimulation equivalence, as shown in
Section 3.4. There are two parts to the characterisation.
1. Two bisimilar processes have the same modal properties.
2. Two image-finite processes having the same modal properties are bisimilar.
Because M is a sublogic of μM, 2 is also true for μM (and in fact for any exten-
sion of modal logic). Let Γ be the set of formulas of μM which do not contain
free variables. Recall that E =Γ F abbreviates that E and F share the same Γ
properties.
Proposition 1 If E and F are image-finite and E =Γ F, then E ∼ F.
The proof of Proposition 1 does not rely on the extra expressive power of μM
over and above M. However, one may ask whether the restriction to image-finite
processes is still essential to this result, given that fixed points are expressible
using infinitary conjunction and disjunction, and that infinitary modal logic M∞
characterizes bisimulation equivalence exactly. In Section 3.4, two examples of
clocks showed that image finiteness is essential in the case of modal formulas.
However, although these clocks have the same modal properties, they do not have
the same μM properties. One of the clocks has an infinite tick capability, expressed
by the formula νZ. ⟨tick⟩Z, that the other fails to satisfy. The following example,
due to Roope Kaivola, shows that image finiteness (or a weakened version of it)
is still necessary.
Example 1 Let {Qᵢ : i ∈ I} be the set of all finite state processes whose actions belong to
{a, b} and, assuming n ∈ ℕ, consider the following processes.

P(n) =def aⁿ.b.P(n + 1)
R =def Σ{a.Qᵢ : i ∈ I}
P =def P(1) + R

The behaviour of P(1) is as follows.

P(1) --ab--> P(2) --aab--> P(3) --aaab--> ... --aⁿb--> P(n + 1) --aⁿ⁺¹b--> ...

The processes P and R are not bisimilar because no finite state process can be
bisimulation equivalent to b.P(2) (via the pumping lemma for regular languages).
However, P and R have the same μM properties, when expressed by formulas with-
out free variables. To see this, suppose that there is a formula Φ that distinguishes
between P and R: it follows that there is a formula Ψ such that b.P(2) ⊨ Ψ and
Qᵢ ⊭ Ψ for all i ∈ I. By the finite model theorem, Proposition 3 of Section 5.1,
there is a finite state process E such that E ⊨ Ψ. A small argument shows that E
can be built from the actions a and b. Consequently, E is Qⱼ for some j ∈ I. But
this contradicts that every process Qᵢ fails to have the property Ψ.
Not only do bisimilar processes have the same modal properties, but they also
have the same μM properties.
Proposition 2 If E ∼ F, then E =Γ F.
An indirect proof of this proposition uses the facts that it holds for the logic M∞,
and that μM is a sublogic of M∞, as we shall see in the next section. However,
we shall prove this result directly, for we wish to expose some of the inductive
structure of modal mu-calculus.
A subset ℰ of P is bisimulation closed if, whenever E ∈ ℰ and F ∈ P and
E ∼ F, then F is also in ℰ. The desired result, Proposition 2, is equivalent to the
claim that, for any formula Φ without free variables and set of processes P, the set
|| Φ ||P is bisimulation closed. If this is true, then it is not possible that there can be
a μM formula Φ and a pair of bisimilar processes E and F such that E ⊨ Φ and
F ⊭ Φ. Conversely, if || Φ ||P is bisimulation closed, then any pair of processes E
and F such that E ⊨ Φ and F ⊭ Φ cannot be bisimilar. The next lemma states
some straightforward features of bisimulation closure.

Lemma 1 If ℰ and ℱ are bisimulation closed subsets of P, then

1. ℰ ∩ ℱ and ℰ ∪ ℱ are bisimulation closed,

2. {E ∈ P : if E --a--> F and a ∈ K then F ∈ ℰ} is bisimulation closed,
3. {E ∈ P : ∃F ∈ ℰ. ∃a ∈ K. E --a--> F} is bisimulation closed.

Proof. Assume that the subsets ℰ and ℱ of P are bisimulation closed. If E ∈ ℰ ∩ ℱ,
then E ∈ ℰ and E ∈ ℱ. Consequently, if E ∼ F and F ∈ P, then F ∈ ℰ and
F ∈ ℱ, and so F ∈ ℰ ∩ ℱ. Assume E ∈ ℰ ∪ ℱ. Therefore E ∈ ℰ or E ∈ ℱ. If
E ∼ F and F ∈ P, then F ∈ ℰ or F ∈ ℱ because these sets are bisimulation
closed, and therefore ℰ ∪ ℱ is also bisimulation closed. For 2, suppose E belongs
to 𝒢 = {G ∈ P : if G --a--> H and a ∈ K then H ∈ ℰ} and E ∼ F with F ∈ P.
To show that F is also in 𝒢, it suffices to demonstrate that, if F --a--> F′ when
a ∈ K, then F′ ∈ ℰ. Because P is transition closed, F′ ∈ P. Moreover, because
E ∼ F, it follows that E --a--> E′ and E′ ∼ F′ for some E′, and E′ ∈ ℰ since
E ∈ 𝒢. Therefore F′ ∈ ℰ because ℰ is bisimulation closed. Part 3 has a similar proof. □

Associated with any subset ℰ of P are the following two subsets.

ℰᵈ = {E ∈ ℰ : if E ∼ F and F ∈ P then F ∈ ℰ}
ℰᵘ = {E ∈ P : ∃F ∈ ℰ. E ∼ F}

The set ℰᵈ is the largest bisimulation closed subset of ℰ, and ℰᵘ is the smallest
bisimulation closed superset of ℰ (both with respect to P).

Lemma 2 For any subsets ℰ and ℱ of P,

1. ℰᵈ and ℰᵘ are bisimulation closed,
2. ℰᵈ ⊆ ℰ ⊆ ℰᵘ,
3. if ℰ is bisimulation closed, then ℰᵈ = ℰᵘ,
4. if ℰ ⊆ ℱ, then ℰᵈ ⊆ ℱᵈ and ℰᵘ ⊆ ℱᵘ.
Proof. These are straightforward consequences of the definitions of ℰᵈ and
ℰᵘ. □
A valuation V is bisimulation closed if, for each variable Z, the set V(Z) is
bisimulation closed. Therefore, we can associate the bisimulation closed valuations
Vᵈ and Vᵘ with any valuation V: for any variable Z, the set Vᵈ(Z) = (V(Z))ᵈ and
Vᵘ(Z) = (V(Z))ᵘ. Proposition 2 is a corollary of the following result, in which
Φ is an arbitrary formula of modal mu-calculus and therefore may contain free
variables.

Proposition 3 If V is bisimulation closed, then || Φ ||V is bisimulation closed.


Proof. The proof proceeds by simultaneous induction on the structure of Φ with
the following three propositions.
1. If V is bisimulation closed, then || Φ ||V is bisimulation closed.
2. If || Φ ||V ⊆ ℰ, then || Φ ||Vᵈ ⊆ ℰᵈ.
3. If ℰ ⊆ || Φ ||V, then ℰᵘ ⊆ || Φ ||Vᵘ.
We now drop the index P. The base cases are when Φ is tt, ff or a variable Z. The
first two are clear. Suppose Φ is Z. Because V is bisimulation closed, it follows
that || Z ||V is also bisimulation closed. For 2, suppose V(Z) ⊆ ℰ. By Lemma 2
part 4, it follows that Vᵈ(Z) ⊆ ℰᵈ. Similarly for 3, if ℰ ⊆ V(Z), then ℰᵘ ⊆ Vᵘ(Z).
The induction step divides into the various subcases. Suppose Φ = Φ₁ ∧ Φ₂.
For 1, since || Φ ||V = || Φ₁ ||V ∩ || Φ₂ ||V and by the induction hypothesis both
|| Φᵢ ||V are bisimulation closed, it follows from Lemma 1 part 1 that || Φ ||V is also
bisimulation closed. A similar argument establishes that || Φ ||Vᵈ is bisimulation
closed. By monotonicity, || Φ ||Vᵈ ⊆ || Φ ||V and so || Φ ||Vᵈ ⊆ ℰ, and therefore
|| Φ ||Vᵈ ⊆ ℰᵈ as required for 2. For 3, assume that E ∈ ℰᵘ. By the definition of
ℰᵘ, there is an F ∈ ℰ such that F ∼ E. But then F ∈ || Φ ||V and therefore, by
monotonicity, F ∈ || Φ ||Vᵘ. The same argument as in case 1 shows that || Φ ||Vᵘ
is bisimulation closed, and therefore E ∈ || Φ ||Vᵘ. The case of Φ = Φ₁ ∨ Φ₂
is similar. Suppose Φ = [K]Ψ. For 1, by the induction hypothesis || Ψ ||V is
bisimulation closed, and therefore by Lemma 1 part 2 the set || Φ ||V is as well.
The arguments for 2 and 3 follow those for the case of ∧ above: both sets || Φ ||Vᵈ
and || Φ ||Vᵘ are bisimulation closed. So, if E ∈ || Φ ||Vᵈ, then E ∈ || Φ ||V and also
E ∈ ℰᵈ. And if E ∈ ℰᵘ, then there is some F ∈ ℰ such that F ∼ E, meaning
F ∈ || Φ ||V and therefore F ∈ || Φ ||Vᵘ. The other modal case, when Φ = ⟨K⟩Ψ,
is similar, using Lemma 1 part 3.
The interesting cases are when Φ is a fixed point formula. Suppose first that Φ is
μZ. Ψ. To show 1, we need to establish that the least ℰ such that || Ψ ||V[ℰ/Z] ⊆ ℰ is
bisimulation closed. By assumption, V is bisimulation closed, and so V = Vᵈ from
Lemma 2 parts 2 and 3. Therefore, by the induction hypothesis on 2, we know that
|| Ψ ||V[ℰᵈ/Z] ⊆ ℰᵈ. Because ℰᵈ ⊆ ℰ and ℰ is the least set obeying || Ψ ||V[ℰ/Z] ⊆ ℰ,
it follows that ℰ = ℰᵈ and is therefore bisimulation closed. Cases 2 and 3 follow
the same pattern as before. The final case is Φ = νZ. Ψ. The argument is similar
to that just employed for the least fixed point, except that to establish 1 we use
the induction hypothesis on 3: we need to show that the largest set ℰ such that
ℰ ⊆ || Ψ ||V[ℰ/Z] is bisimulation closed. Because V is bisimulation closed, V = Vᵘ,
so ℰᵘ ⊆ || Ψ ||V[ℰᵘ/Z] by the induction hypothesis on 3, and therefore ℰ = ℰᵘ.
Cases 2 and 3 follow as before. □

This result tells us more than that bisimilar processes have the same proper-
ties when expressed as a formula without free variables. They also have the same
properties when expressed by formulas with free variables, provided the meanings
of the free variables are bisimulation closed. The proof of this result also estab-
lishes that formulas of μM°, observable modal mu-calculus, which do not contain
free variables, are preserved by observable bisimulation equivalence. The earlier
lemmas and Proposition 3 all hold for observable bisimulation equivalence and
μM°.

Exercises 1. Prove Lemma 1 part 3.

2. Show that if ℰ is a bisimulation closed subset of P, then its complement P − ℰ
is also bisimulation closed.
3. Let {ℰᵢ : i ∈ I} be an indexed family of bisimulation closed subsets of P.
Show that ∩{ℰᵢ : i ∈ I} and ∪{ℰᵢ : i ∈ I} are bisimulation closed.
4. Prove Lemma 2.
5. Extend modal mu-calculus so that there is a formula Φ that does not contain
free variables such that || Φ || is not bisimulation closed.

5.5 Approximants
At first sight, there is a chasm between the meaning of an extremal fixed point and
techniques (other than exhaustive analysis) for actually finding it. However, there
is a more mechanical method, an iterative technique, due to Tarski and others,
for discovering least and greatest fixed points. Let μg be the least fixed point,
and νg the greatest fixed point, of the monotonic function g : 2^P → 2^P. From
Proposition 1 of Section 4.5 the following hold.

μg = ∩{ℰ ⊆ P : g(ℰ) ⊆ ℰ}
νg = ∪{ℰ ⊆ P : ℰ ⊆ g(ℰ)}
Suppose we wish to determine the set νg. Let νⁱg for i ≥ 0 be defined iteratively
as follows.

ν⁰g = P    νⁱ⁺¹g = g(νⁱg)

Because P is the largest subset of itself, ν¹g ⊆ ν⁰g and, by monotonicity of
g, this implies that g(ν¹g) ⊆ g(ν⁰g), that is, ν²g ⊆ ν¹g. Applying g again to
both sides, ν³g ⊆ ν²g, and consequently for all i, νⁱ⁺¹g ⊆ νⁱg. Moreover, the
required fixed point set νg is a subset of every νⁱg. First, νg ⊆ ν⁰g and so by
monotonicity g(νg) ⊆ ν¹g. Because νg is a fixed point of g, g(νg) = νg, and
therefore νg ⊆ ν¹g. Using monotonicity of g once more, we obtain νg ⊆ ν²g.
Consequently, with repeated application of g it follows that νg ⊆ νⁱg for any i.
Therefore, we have the following situation.

ν⁰g ⊇ ν¹g ⊇ ... ⊇ νⁱg ⊇ ...
 ∪      ∪           ∪
 νg     νg          νg

If νⁱg = νⁱ⁺¹g, then the fixed point νg is νⁱg. Why is this? Because νⁱg is then a
fixed point of g, and νg ⊆ νⁱg and νg is the greatest fixed point.
These observations suggest a strategy for discovering νg. Iteratively construct
the sets νⁱg starting with i = 0, until νⁱg is the same set as νⁱ⁺¹g. If P is a finite
set containing n processes, then this iterative construction terminates at, or before,
the case i = n, and therefore νg is equal to νⁿg.
Example 1 Let P be {Cl, tick.0, 0}, and let g be the function f[⟨tick⟩Z, Z].

ν⁰g = P = {Cl, tick.0, 0}
ν¹g = || ⟨tick⟩Z ||V[ν⁰g/Z] = {Cl, tick.0}
ν²g = || ⟨tick⟩Z ||V[ν¹g/Z] = {Cl}
ν³g = || ⟨tick⟩Z ||V[ν²g/Z] = {Cl}

Stabilization occurs at the stage ν²g because this set coincides with ν³g. The fixed
point νg, the set of processes || νZ. ⟨tick⟩Z ||V, is therefore the singleton set {Cl}.
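On a finite transition system this iteration is immediate to program. Below is a minimal sketch, not from the text: the transition relation is transcribed from Example 1, and the helper names `g` and `gfp` are this sketch's own.

```python
# Transition system of Example 1: Cl --tick--> Cl and tick.0 --tick--> 0.
P = {'Cl', 'tick.0', '0'}
trans = {('Cl', 'tick', 'Cl'), ('tick.0', 'tick', '0')}

def g(E):
    # || <tick>Z || with Z interpreted as E: processes with a tick-step into E
    return {p for p in P if any((p, 'tick', q) in trans for q in E)}

def gfp(g, top):
    E = top                 # nu^0 g = P
    while g(E) != E:        # iterate nu^{i+1} g = g(nu^i g)
        E = g(E)
    return E                # first stage with nu^i g = nu^{i+1} g

print(gfp(g, P))  # {'Cl'}, i.e. || nuZ. <tick>Z ||
```

The loop performs exactly the chain ν⁰g ⊇ ν¹g ⊇ ν²g of the example, stopping at the first repetition.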

If P is not a finite set of processes, then we can still guarantee that νg is reachable
iteratively by invoking ordinals as indices. Recall that ordinals are ordered as
follows.

0, 1, ..., ω, ω + 1, ..., ω + ω, ω + ω + 1, ...

The ordinal ω is the initial limit ordinal (which has no immediate predecessor),
whereas ω + 1 is its successor. Assume that α and λ range over ordinals. We
define ν^α g, for any ordinal α ≥ 0, with the base case and successor case as before,
ν⁰g = P and ν^(α+1) g = g(ν^α g). The case when λ is a limit ordinal is defined as
follows.

ν^λ g = ∩{ν^α g : α < λ}

By the same arguments as above, there is the following possibly decreasing
sequence.³

ν⁰g ⊇ ν¹g ⊇ ... ⊇ ν^ω g ⊇ ν^(ω+1) g ⊇ ...
 ∪                 ∪
 νg                νg

The fixed point set νg appears somewhere in the sequence, at the first point when
ν^α g = ν^(α+1) g.
Example 2 Let P be the set {C, Bᵢ : i ≥ 0}, where C is the cell

C =def in(x).B_x where x : ℕ
B_(n+1) =def down.B_n for n ≥ 0

Let g be the function f[⟨−⟩Z, Z]. The fixed point νg is the empty set.

ν⁰g = P = {C, Bᵢ : i ≥ 0}
ν¹g = || ⟨−⟩Z ||V[ν⁰g/Z] = {C, Bᵢ : i ≥ 1}
...
ν^(j+1) g = || ⟨−⟩Z ||V[ν^j g/Z] = {C, Bᵢ : i ≥ j + 1}

The set ν^ω g is ∩{νⁱg : i < ω}, which is {C} because each B_j is excluded from
ν^(j+1) g. The very next iterate is the fixed point: ν^(ω+1) g is || ⟨−⟩Z ||V[ν^ω g/Z] = ∅. So
stabilization occurs at ν^(ω+1) g. This example can be further extended.

C′ =def in(x).B′_x where x : ℕ
B′_(n+2) =def down.B′_(n+1)    B′₁ =def down.C

³Notice that νg ⊆ ν^λ g when λ is a limit ordinal, because νg ⊆ ν^α g for all α < λ.

Consider the following iterates.

νⁱg = {C′, B′ₖ, C, B_j : k ≥ 1 and j ≥ i}
ν^(ω+1) g = {C′, B′ₖ : k ≥ 1}
ν^(ω+ω) g = {C′}
ν^(ω+ω+1) g = ∅

The fixed point νg therefore stabilizes at stage ω + ω + 1.


°
The situation for aleast fixed point J-tg is dual . Let J-t g be the smallest subset
of P, that is 0 , and let J-ta+l g = g(J-ta g) . The following defines a limit ordinal A.

J-t Ag = U{J-tag:a<A}

There is the following possibly increasing sequence of sets .

J-tg J-tg
U U
C u" g C J-tw+l g C

The fixed point J-tg is a superset of each of the iterates J-ta g. First, J-t0 g ~ p.g, and
so by monotonicity of g (and limit considerations) J-ta g f; J-tg for any a. The first
point that J-ta g is equal to its successor J-ta+l gis the required fixed point J-tg . An
iterative method for finding tig is to construct the sets J-ta g starting with J-t 0g until
it is the same as its successor.
Example 3 Let g be the function f[[tick]ff ∨ ⟨−⟩Z, Z]. Assume P is the set of processes
in Example 1 above.

μ⁰g = ∅
μ¹g = || [tick]ff ∨ ⟨−⟩Z ||V[μ⁰g/Z] = {0}
μ²g = || [tick]ff ∨ ⟨−⟩Z ||V[μ¹g/Z] = {tick.0, 0}
μ³g = || [tick]ff ∨ ⟨−⟩Z ||V[μ²g/Z] = {tick.0, 0}

Stabilization occurs at μ²g, which is the required fixed point. Notice that if we
consider νg instead, then we obtain the following different set.

ν⁰g = P = {Cl, tick.0, 0}
ν¹g = || [tick]ff ∨ ⟨−⟩Z ||V[ν⁰g/Z] = P

This stabilizes at the initial point.
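The dual iteration from the empty set can be sketched in the same style, again over the transition system of Example 1 (a hedged illustration, not from the text; the helper names are this sketch's own).

```python
# P of Example 1, with Cl --tick--> Cl and tick.0 --tick--> 0.
P = {'Cl', 'tick.0', '0'}
trans = {('Cl', 'tick', 'Cl'), ('tick.0', 'tick', '0')}

def g(E):
    # || [tick]ff \/ <->Z ||: either no tick-step at all, or some step into E
    no_tick = {p for p in P if not any(s == p for (s, a, q) in trans)}
    step_into_E = {p for p in P
                   if any(s == p and q in E for (s, a, q) in trans)}
    return no_tick | step_into_E

def lfp(g):
    E = set()               # mu^0 g = empty set
    while g(E) != E:        # iterate mu^{i+1} g = g(mu^i g)
        E = g(E)
    return E

print(sorted(lfp(g)))  # ['0', 'tick.0']: stabilization at mu^2 g, as in Example 3
```

The process Cl never enters the iteration, since its only tick-step loops back to itself, matching the example's result that Cl lacks the least fixed point property.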

Example 4 Consider the following family of clocks Clⁱ, i > 0.

Cl¹ =def tick.0
Clⁱ⁺¹ =def tick.Clⁱ for i ≥ 1

Let E be Σ{Clⁱ : i ≥ 1}. E models an arbitrary new clock, which will eventually
break down. Let P be the set {E, 0, Clⁱ : i ≥ 1}. Because all behaviour is
finite, each process in P has the property μZ. [tick]Z. Let g be the function
f[[tick]Z, Z].

μ⁰g = ∅
μ¹g = || [tick]Z ||V[μ⁰g/Z] = {0}
...
μⁱ⁺¹g = || [tick]Z ||V[μⁱg/Z] = {0, Clʲ : j < i + 1}

The initial limit point μ^ω g is the following set.

∪{μⁱg : i < ω} = {0, Clʲ : j ≥ 1}

At the next stage, the required fixed point is reached.

μ^(ω+1) g = || [tick]Z ||V[μ^ω g/Z] = P

Therefore, for all α ≥ ω + 1, μ^α g = P.
Each set σ^α g is an approximation to the set σg. Increasing the index α provides
a closer approximation. Each ν^α g approximates νg from above, whereas each μ^α g
approximates μg from below. Consequently, an extremal fixed point is the limit
of a sequence of approximants.
There is a more syntactic characterization of the extremal fixed points using the
extended modal logic M∞ of Section 2.2 (which contains infinitary conjunction
⋀ and disjunction ⋁). If g is the function f[Φ, Z], then with respect to P and V
the element νg is || νZ. Φ ||V and μg is || μZ. Φ ||V. The initial approximant ν⁰g
is the set P, which is just || tt ||V, and the initial approximant μ⁰g is ∅, which is
|| ff ||V. The element ν¹g is g(ν⁰g), which is || Φ ||V[|| tt ||V / Z], that is, || Φ{tt/Z} ||V.
Similarly, μ¹g is || Φ{ff/Z} ||V. For each ordinal α, we define σZ^α. Φ as a formula
of the extended modal logic. As before, let λ be a limit ordinal.

νZ⁰. Φ = tt                         μZ⁰. Φ = ff
νZ^(α+1). Φ = Φ{νZ^α. Φ / Z}        μZ^(α+1). Φ = Φ{μZ^α. Φ / Z}
νZ^λ. Φ = ⋀{νZ^α. Φ : α < λ}        μZ^λ. Φ = ⋁{μZ^α. Φ : α < λ}

Proposition 1 Fix P and V and let g be the function f[Φ, Z]. Then the approximant σ^α g =
|| σZ^α. Φ ||V for any ordinal α.

Proof. By induction on α. We drop the index P. The base cases are straight-
forward. Suppose the result holds for all α < δ. If δ is a successor ordinal,
say α + 1, then σ^(α+1) g is g(σ^α g), which by the induction hypothesis is
|| Φ ||V[|| σZ^α. Φ ||V / Z], which in turn is equal to || Φ{σZ^α. Φ / Z} ||V. The result follows
because Φ{σZ^α. Φ / Z} is σZ^(α+1). Φ. If δ is a limit ordinal, then ν^δ g is ∩{ν^α g :
α < δ}. By the induction hypothesis, this set is ∩{|| νZ^α. Φ ||V : α < δ}, which
is || ⋀{νZ^α. Φ : α < δ} ||V, and is therefore || νZ^δ. Φ ||V. A similar argument
establishes the least fixed point case. □

A simple consequence of Proposition 1 is a more direct definition of satisfaction
E ⊨V Φ when Φ is a fixed point formula.

E ⊨V νZ. Φ iff E ⊨V νZ^α. Φ for all ordinals α

E ⊨V μZ. Φ iff E ⊨V μZ^α. Φ for some ordinal α

The quantification over ordinals in these clauses is bounded by the size of P(E).
The following is a corollary of the discussion in this section. It will turn out to be
very useful later, since it provides "least approximants" for when a process has a
least fixed point property, and for when a process fails to have a greatest fixed point
property.

Proposition 2 1. If E ⊨V μZ. Φ, then there is a least ordinal α such that E ⊨V μZ^α. Φ and,
for all β < α, E ⊭V μZ^β. Φ.
2. If E ⊭V νZ. Φ, then there is a least ordinal α such that E ⊭V νZ^α. Φ and,
for all β < α, E ⊨V νZ^β. Φ.

Proof. Notice that for any E, E ⊭V μZ⁰. Φ. Therefore, if E ⊨V μZ^α. Φ, then
for all β > α it follows by monotonicity that E ⊨V μZ^β. Φ. Consequently, there
is a least ordinal α for which E ⊨V μZ^α. Φ when E has the property μZ. Φ
relative to V. Case 2 is dual. □

Example 5 In Section 5.3, the definitions of [ ]Φ and [↓]Φ in μM were contrasted. Let νZ. Ψ
be the formula νZ. Φ ∧ [τ]Z (expressing [ ]Φ) and let μZ. Ψ be μZ. Φ ∧ [τ]Z
(expressing [↓]Φ).⁴ These formulas generate different approximants.

νZ⁰. Ψ = tt
νZ¹. Ψ = Φ ∧ [τ]tt = Φ
νZ². Ψ = Φ ∧ [τ]Φ
...
νZⁱ. Ψ = Φ ∧ [τ](Φ ∧ [τ](Φ ∧ ... ∧ [τ]Φ ...))

μZ⁰. Ψ = ff
μZ¹. Ψ = Φ ∧ [τ]ff
μZ². Ψ = Φ ∧ [τ](Φ ∧ [τ]ff)
...
μZⁱ. Ψ = Φ ∧ [τ](Φ ∧ [τ](Φ ∧ ... ∧ [τ]ff ...))

The approximant μZⁱ. Ψ carries the extra demand that there cannot be a sequence
of silent actions of length i. Hence, [↓]Φ requires all immediate τ behaviour to
eventually peter out.

⁴It is assumed that Z is not free in Φ.

Exercises 1. Let P be the set of processes P(Ven). Using approximants, determine the sets
|| Φ ||V when Φ is each of the following.

a. μZ. ⟨2p, 1p⟩tt ∨ [−]Z
b. μZ. ⟨little⟩tt ∨ [−]Z
c. μZ. ⟨little⟩tt ∨ ⟨−⟩Z
d. νZ. [2p](μY. ⟨collect_b⟩tt ∨ [−]Y) ∧ [−]Z
2. Let P be the set of processes P(Crossing). Using approximants, determine
the sets || Φ ||V when Φ is each of the following.
a. νZ. ([tcross]ff ∨ [ccross]ff) ∧ [−]Z
b. μY. ⟨−⟩tt ∧ [ccross]Y
3. If |P| = n, prove that for any E ∈ P
a. E ⊨V νZ. Φ iff E ⊨V νZⁿ. Φ
b. E ⊨V μZ. Φ iff E ⊨V μZⁿ. Φ
4. Work out the approximants σZⁱ. Φ for the following formulas for i ≤ 4.
a. μZ. [−]Z
b. νZ. [−]⟨tick⟩Z
c. μZ. [−]Z ∧ [−][−]Z
d. νZ. ⟨tick⟩Z ∧ ⟨tock⟩Z
e. μZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z))
f. νZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z))
g. μZ. νY. [a]Z ∧ [−a]Y
5. Prove Proposition 2 part 2.

5.6 Embedded approximants


Fixed point sets can be calculated iteratively using approximants. The examples
in the previous section involved a single fixed point. In this section, we examine
the iterative technique in the presence of multiple fixed points, and comment on
the entanglement of approximants.

Example 1 Ven has the property νZ. [2p, 1p]Ψ ∧ [−]Z, where Ψ is the formula μY. ⟨−⟩tt ∧
[−{collect_b, collect_l}]Y; see Example 1 of Section 5.1. Let P be the set
{Ven, Ven_b, Ven_l, collect_b.Ven, collect_l.Ven}. First, the subset of P is calcu-
lated for the embedded fixed point Ψ, as follows. Its ith approximant is represented
as μYⁱ (where we drop the index P).

μY⁰ = ∅
μY¹ = || ⟨−⟩tt ∧ [−{collect_b, collect_l}]Y ||V[μY⁰/Y]
    = {collect_b.Ven, collect_l.Ven}
μY² = || ⟨−⟩tt ∧ [−{collect_b, collect_l}]Y ||V[μY¹/Y]
    = {Ven_b, Ven_l, collect_b.Ven, collect_l.Ven}
μY³ = || ⟨−⟩tt ∧ [−{collect_b, collect_l}]Y ||V[μY²/Y]
    = P

Therefore, || Ψ ||V is P. Next, the outermost fixed point is evaluated, given that
every process in P has the property Ψ. Its ith approximant is νZⁱ.

νZ⁰ = P
νZ¹ = || [2p, 1p]Ψ ∧ [−]Z ||V[νZ⁰/Z] = P

In this example, the embedded fixed point can be evaluated independently of the
outermost fixed point.

Example 1 illustrates how the iterative technique works for formulas with mul-
tiple fixed points that are independent of each other. The formula of Example 1
has the form νZ. Φ(Z, μY. Ψ(Y)), where the notation makes explicit which vari-
ables can be free in subformulas. Z does not occur free within the subformula
μY. Ψ(Y), but may occur within the subformula Φ(Z, μY. Ψ(Y)). Consequently,
when evaluating the outermost fixed point, we have that:

νZ⁰ = P
νZ¹ = || Φ(Z, μY. Ψ(Y)) ||V[νZ⁰/Z]
...
νZⁱ⁺¹ = || Φ(Z, μY. Ψ(Y)) ||V[νZⁱ/Z]

Throughout these approximants, the subset of processes with the property
μY. Ψ(Y) is invariant. This is because the subformula does not contain Z free.
Therefore, || μY. Ψ(Y) ||V[νZ^α/Z] is the same set as || μY. Ψ(Y) ||V[νZ^β/Z] for any
ordinals α and β.

The subset of formulas of μM, written μMᴵ, which has the property that all
its fixed points are independent of each other, is characterised as follows.

Φ ∈ μMᴵ iff whenever σ₁Y. Ψ₁ ∈ Sub(Φ) and σ₂Z. Ψ₂ ∈ Sub(Φ) and Y ≠ Z,
then Y is not free in Ψ₂ and Z is not free in Ψ₁.

CTL formulas, when understood as fixed point formulas, belong to this subclass
of μM formulas.
Proposition 1 CTL formulas belong to μMᴵ.

Proof. This follows from the fixed point definitions of until formulas (and their
negations) presented in Section 5.2. For example, A(Φ U Ψ) is defined as μZ. Ψ ∨
(Φ ∧ (⟨−⟩tt ∧ [−]Z)),
where Z does not occur in Φ or Ψ. Therefore, any further
fixed points introduced into these subformulas do not contain Z. □

Example 2 Consider the simple processes in Figure 5.1. Let P be the set {D, D′, D″} and let Ψ
be the formula

Ψ = μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z).

This formula does not belong to μMᴵ because its innermost fixed point subfor-
mula νZ. ... contains Y free. The calculation of the outer fixed point depends on
calculating the inner fixed point at each index. We use νZⁱʲ to represent the jth
approximant of the subformula prefaced with νZ when any free occurrence of Y
is understood as the approximant μYⁱ. Again, we drop the index P.

FIGURE 5.1. A simple process

μY⁰ = ∅
|| νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z) ||V[μY⁰/Y]:
νZ⁰⁰ = P
νZ⁰¹ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY⁰/Y])[νZ⁰⁰/Z] = {D″, D}
νZ⁰² = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY⁰/Y])[νZ⁰¹/Z] = {D″}
νZ⁰³ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY⁰/Y])[νZ⁰²/Z] = {D″}
So μY¹ = {D″}.
|| νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z) ||V[μY¹/Y]:
νZ¹⁰ = P
νZ¹¹ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY¹/Y])[νZ¹⁰/Z] = {D″, D}
νZ¹² = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY¹/Y])[νZ¹¹/Z] = {D″}
νZ¹³ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||(V[μY¹/Y])[νZ¹²/Z] = {D″}
So μY² = {D″}.

The innermost fixed point is evaluated with respect to more than one outermost
approximant.
Example 2 illustrates dependence of fixed points. The formula has the form
μY. Φ(Y, νZ. Ψ(Y, Z)), where Y is free in the innermost fixed point. The inter-
pretation of the subformula νZ. Ψ(Y, Z) may vary according to the interpretation
of Y.
A simple measure of how much work has to be done when calculating the
subset of P with a fixed point property is the number of approximants that must
be calculated. For a simple formula σZ. Φ, when Φ does not contain fixed points
and when |P| = n, the maximum number of approximants is n. If Φ ∈ μMᴵ and
contains k fixed points, then the maximum calculation needed is k × n. In the
case of the formula of Example 2 with two fixed points, it appears that n² is the
upper bound. For each outermost approximant, we may have to calculate n inner
approximants. There are at most n outer approximants, and therefore the upper
bound is n².
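The n² behaviour can be seen in a direct implementation: the inner fixed point is recomputed from scratch for every outer approximant. The following is a generic sketch under assumed interfaces (`phi` and `psi` stand for set-transformers monotone in each argument; the function and example below are this sketch's own, not from the text).

```python
def nested_mu_nu(P, phi, psi):
    """Evaluate muY. phi(Y, nuZ. psi(Y, Z)) by naive iteration:
    every outer mu-step restarts the inner nu-iteration from P."""
    Y = set()                      # muY^0 = empty set
    while True:
        Z = set(P)                 # inner nuZ^{i0} restarted at P each time
        while psi(Y, Z) != Z:      # iterate down to nuZ. psi(Y, Z)
            Z = psi(Y, Z)
        nxt = phi(Y, Z)            # one outer approximant muY^{i+1}
        if nxt == Y:
            return Y
        Y = nxt

# Toy monotone functions, purely illustrative:
P = {0, 1, 2}
psi = lambda Y, Z: Z & (Y | {2})   # monotone in Y and in Z
phi = lambda Y, W: Y | W           # monotone in both arguments
print(nested_mu_nu(P, phi, psi))   # {2}
```

With n outer stages each paying up to n inner stages, the worst case is the n² approximants discussed above.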
The initial approximants for fixed points are νZ⁰ = P and μZ⁰ = ∅. Initial
approximations that are closer to the required fixed point will, in general, mean that
less work has to be done in the calculation. For instance, if we know that a set ℰ satisfies
P ⊇ ℰ ⊇ || νZ. Φ ||V, then we can set νZ⁰ to ℰ. This observation can be
used when evaluating the embedded fixed point formula νZ. Φ(Z, νY. Ψ(Z, Y)).

νZ⁰ = P
νZ¹ = || Φ(Z, νY. Ψ(Z, Y)) ||V[νZ⁰/Z]
  νY⁰⁰ = P
  νY⁰¹ = || Ψ(Z, Y) ||(V[νZ⁰/Z])[νY⁰⁰/Y]
  ...
νZ² = || Φ(Z, νY. Ψ(Z, Y)) ||V[νZ¹/Z]

To evaluate νZ², one needs to calculate || νY. Ψ(Z, Y) ||V[νZ¹/Z]. However, by
monotonicity we know that

|| νY. Ψ(Z, Y) ||V[νZ¹/Z] ⊆ || νY. Ψ(Z, Y) ||V[νZ⁰/Z] ⊆ P.

Therefore, we can use || νY. Ψ(Z, Y) ||V[νZ⁰/Z] as the initial approximant νY¹⁰, and
so on as follows.

νZ² = || Φ(Z, νY. Ψ(Z, Y)) ||V[νZ¹/Z]
  νY¹⁰ = || νY. Ψ(Z, Y) ||V[νZ⁰/Z]
  νY¹¹ = || Ψ(Z, Y) ||(V[νZ¹/Z])[νY¹⁰/Y]
  ...
νZⁱ⁺¹ = || Φ(Z, νY. Ψ(Z, Y)) ||V[νZⁱ/Z]
  νYⁱ⁰ = || νY. Ψ(Z, Y) ||V[νZⁱ⁻¹/Z]
  νYⁱ¹ = || Ψ(Z, Y) ||(V[νZⁱ/Z])[νYⁱ⁰/Y]
  ...

Consequently, instead of requiring at most n² approximants, the upper bound is
2 × n, for in total the maximum number of approximants for the inner fixed point is
n. This technique can be extended to long sequences of embedded maximal fixed
points

νZ₁. Φ₁(Z₁, νZ₂. Φ₂(Z₁, Z₂, ..., νZₖ. Φₖ(Z₁, ..., Zₖ) ...)),

when at most k × n approximants need to be calculated.
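For two greatest fixed points, the reuse argument can be sketched concretely: since each outer approximant shrinks, the inner fixed point computed at the previous outer stage over-approximates the next one and is therefore a sound starting point, so inner iterations total at most n steps overall. This is a generic illustration under assumed interfaces (the helper and the toy functions are this sketch's own, not from the text).

```python
def nested_nu_nu(P, phi, psi):
    """Evaluate nuZ. phi(Z, nuY. psi(Z, Y)), reusing the previous inner
    fixed point as the initial approximant of the next inner iteration."""
    Z = set(P)                     # outer nuZ^0 = P
    Y = set(P)                     # inner start, carried across outer stages
    while True:
        while psi(Z, Y) != Y:      # inner nu-iteration from the previous Y
            Y = psi(Z, Y)
        nxt = phi(Z, Y)            # next outer approximant
        if nxt == Z:
            return Z
        Z = nxt                    # Z shrinks, so old Y still over-approximates

# Toy monotone functions, purely illustrative:
P = {0, 1, 2}
psi = lambda Z, Y: Y & Z           # monotone in both arguments
phi = lambda Z, W: W & {1, 2}      # monotone
print(nested_nu_nu(P, phi, psi))   # {1, 2}
```

The soundness of the warm start rests exactly on the displayed monotonicity inclusion: each new inner fixed point lies below the previous one, so downward iteration from the old value still converges to it.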
The situation is dual for least fixed points. If ∅ ⊆ E ⊆ ‖ μZ. Φ ‖V, then we can
set the initial set μZ^0 to be E. In the case of the formula μZ. Φ(Z, μY. Ψ(Z, Y)),
by monotonicity

∅ ⊆ ‖ μY. Ψ(Z, Y) ‖V[μZ^i/Z] ⊆ ‖ μY. Ψ(Z, Y) ‖V[μZ^{i+1}/Z],

126 5. Modal Mu-Calculus

so the set ‖ μY. Ψ(Z, Y) ‖V[μZ^i/Z] is a better initial approximation than ∅ for μY^{i0}.
Again, we need only calculate 2 × n approximants instead of n². This can be
extended to multiple occurrences of embedded least fixed points.

μZ1. Φ1(Z1, μZ2. Φ2(Z1, Z2, ..., μZk. Φk(Z1, ..., Zk) ...))
More complex are formulas, as in example 2, that contain multiple occurrences
of dependent but different fixed points. In the case of a formula
νZ. Φ(Z, μY. Ψ(Z, Y)), we cannot use ‖ μY. Ψ(Z, Y) ‖V[νZ^0/Z] as an initial
approximant μY^{10}, because the ordering is the wrong way around to be of help,
since νZ^0 ⊇ νZ^1. Least fixed points are approximated from below and not from
above. Similar comments apply to the calculation of μZ. Φ(Z, νY. Ψ(Z, Y)). In
both these cases, the best worst case estimate of the number of approximants is n².
However, when we consider three fixed points

μY. Φ1(Y, νZ. Φ2(Y, Z, μX. Φ3(Y, Z, X))),
the maximum number of calculations needed is less than n³. As the reader can
verify, monotonicity can be used on the innermost fixed point, because μX^{ijk} ⊆
μX^{i'jk} when i ≤ i'. Roughly speaking, the number of calculations required for k
dependent alternating fixed points is n^{⌊k/2⌋+1}, as was proved by Long et al. [38].
The general issue of how many approximants are needed to calculate an em-
bedded fixed point remains an open problem. The observations here suggest that
the more alternating dependent fixed points there are, the more calculations are
needed, and that the number of calculations is exponential in this alternation. In-
deed, we may wonder if somehow the expressive power of μM grows as more
dependent alternating fixed points are permitted. The answer is "yes," as was
proved by Bradfield [9].
A more generous subset of μM than μM1 is the alternation-free fragment,
μM^A, defined as follows.

Φ ∈ μM^A iff whenever μY. Ψ1 ∈ Sub(Φ) and νZ. Ψ2 ∈ Sub(Φ),
then Y is not free in Ψ2 and Z is not free in Ψ1.

This subset permits dependence of fixed points, provided they are of the same kind.
For instance, a formula of the following form is permitted.

νZ1. Φ1(Z1, νZ2. Φ2(Z1, Z2, μY1. Ψ1(Y1, μY2. Ψ2(Y1, Y2))))

An example of a formula that does not belong to μM^A is Ψ from example 2, since
Y is free in the subformula νZ. .... From the analysis above, any formula of μM^A
with k fixed point operators requires the calculation of at most k × n approximants.
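The μM^A membership condition is purely syntactic, so it can be checked mechanically. Below is a hedged sketch, not from the book, using an illustrative encoding of formulas as nested tuples such as ('mu', 'Y', body), ('box', K, body) and ('var', 'Y'); the encoding and helper names are inventions for illustration.

```python
# A hedged sketch (not from the book) of the muM^A membership test, over an
# illustrative tuple encoding of formulas: ('tt',), ('ff',), ('var', 'Y'),
# ('and', p, q), ('or', p, q), ('box', K, p), ('dia', K, p),
# ('mu', 'Y', p) and ('nu', 'Z', p).

def subformulas(phi):
    yield phi
    tag = phi[0]
    if tag in ('and', 'or'):
        yield from subformulas(phi[1])
        yield from subformulas(phi[2])
    elif tag in ('box', 'dia', 'mu', 'nu'):
        yield from subformulas(phi[2])

def free_vars(phi):
    tag = phi[0]
    if tag == 'var':
        return {phi[1]}
    if tag in ('and', 'or'):
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag in ('box', 'dia'):
        return free_vars(phi[2])
    if tag in ('mu', 'nu'):
        return free_vars(phi[2]) - {phi[1]}
    return set()

def alternation_free(phi):
    """No mu-variable free in the body of a nu-subformula, and conversely."""
    subs = list(subformulas(phi))
    mus = [s for s in subs if s[0] == 'mu']
    nus = [s for s in subs if s[0] == 'nu']
    return (all(m[1] not in free_vars(n[2]) for m in mus for n in nus) and
            all(n[1] not in free_vars(m[2]) for n in nus for m in mus))
```

For instance, μX. νY. [a]X ∧ [−a]Y fails the test, since X is free in the body of the νY subformula, while nestings of fixed points of the same kind pass it.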

Exercises 1. Show precisely that, if |P| = n, then for both formulas

νZ1. Φ1(Z1, νZ2. Φ2(Z1, Z2, ..., νZk. Φk(Z1, ..., Zk) ...))
μZ1. Φ1(Z1, μZ2. Φ2(Z1, Z2, ..., μZk. Φk(Z1, ..., Zk) ...))

at most k x n approximants need to be calculated.


2. For the following formulas, work out their approximants and embedded
approximants up to stage 4.
a. μY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y
b. νY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y
c. μX. [a](μY. [a](νZ. [a]ff ∧ [−]Z) ∧ ⟨−⟩tt ∧ [−a]Y) ∧ ⟨−⟩tt ∧ [−a]X
d. μX. νY. [a]X ∧ [−a]Y
e. νZ. [a](μY. ⟨−⟩tt ∧ [−b]Y) ∧ [−]Z
f. νZ. (Ψ^c ∨ (Ψ ∧ νY. Φ ∧ [−]Y)) ∧ [−]Z
g. νZ. (μX. [b](νY. [c](νY1. X ∧ [−a]Y1) ∧ [−a]Y) ∧ [−]Z)
3. Give an exact upper bound on the number of approximants that need to be
calculated in the case of the following formula with respect to |P| = n.

What upper bound is there for a formula with k dependent alternating fixed
points?
4. Prove Proposition 1 in full. Show, using approximants, that there is a linear
time algorithm for checking whether E ⊨ Φ, when E is finite state and Φ is
a CTL formula.
5. Show that, for any Φ ∈ μM^A with k fixed point operators, at most k × n
approximants need to be calculated to work out which subset of P has property
Φ when |P| = n.
6. The following is a technical definition of the full alternation hierarchy from
Niwiński and Bradfield [46, 9].
a. If Φ contains no fixed point operators, then Φ ∈ Σ0 and Φ ∈ Π0
b. If Φ ∈ Σn ∪ Πn, then Φ ∈ Σn+1 and Φ ∈ Πn+1
c. If Φ, Ψ ∈ Σn (Πn), then [K]Φ, ⟨K⟩Φ, Φ ∧ Ψ, Φ ∨ Ψ ∈ Σn (Πn)
d. If Φ ∈ Σn, then μZ. Φ ∈ Σn
e. If Φ ∈ Πn, then νZ. Φ ∈ Πn
f. If Φ, Ψ ∈ Σn (Πn), then Φ{Ψ/Z} ∈ Σn (Πn)
Let ADn be {Φ : Φ ∈ Σn+1 ∩ Πn+1}.
a. Prove that AD1 = μM^A.
b. Give an example of a property that is definable in ADn+1, but not definable
in ADn. (See Bradfield [9].)
c. Provide an upper bound for the number of approximants that need to be
calculated for any Φ ∈ ADn containing k fixed points, when |P| = n.

5.7 Expressing properties


Modal mu-calculus is a very powerful temporal logic that permits expression of a
very rich class of properties. In this section, we examine how to express a range
of liveness and safety properties.
A safety property, as described in the previous chapter, has the form "nothing
bad ever happens." Safety can either be ascribed to "states" (or "processes"), that
bad states can never be reached, or to "actions," that bad actions never happen. In
the former case, if the formula Φ^c captures the bad states, then the CTL formula
AGΦ or its μM equivalent νZ. Φ ∧ [−]Z expresses safety. As mentioned before,
the safety property for the crossing of Figure 1.10 is that it is never possible to
reach a state such that a train and a car are both able to cross, AG([tcross]ff ∨
[ccross]ff).

It is useful to allow the full freedom of μM notation by allowing open formulas
with free variables and appropriate valuations that capture their intended meaning.
In the case of a safety property, let ℰ be the family of bad states. The formula
νZ. Q ∧ [−]Z expresses safety relative to the valuation V that assigns P − ℰ to Q.
The idea is that the free variable Q has a definite intended meaning captured by
the particular valuation V.
Example 1 The slot machine of Figure 1.15 never has a negative amount of money. A simple
way of expressing this feature is the open formula νZ. Q ∧ [−]Z relative to the
valuation V that assigns to the free variable Q the set P − {SMj : j < 0}.
This device can be used to express properties succinctly. If the particular valuation
V is bisimulation closed with respect to the free variables of Φ, we say that the
property expressed by Φ relative to V is extensional. In this case ‖ Φ ‖V is also
bisimulation closed, by Proposition 3 of Section 5.4. The formula of example 1 is
extensional when P is the set of processes of the transition graph for SMn for n ≥ 0.
If Φ relative to V is not extensional, then it is intensional. Examples of intensional
properties of a process include "has two tick-transitions" and "consists of three
parallel components."

Safety can also be ascribed to actions, "no bad action belonging to K ever
happens." It is expressed by the μM formula νZ. [K]ff ∧ [−]Z (or the equivalent
CTL formula AG[K]ff). Really, there is no distinction between safety in terms of
bad states and safety in terms of bad actions. In the action case, safety is equivalent
to "the bad state ⟨K⟩tt is never reached."
A liveness property has the form "something good eventually happens." Again,
the good feature can be ascribed either to states or to actions. If Φ is true at the good
states, then the CTL formula AFΦ or its μM equivalent μZ. Φ ∨ (⟨−⟩tt ∧ [−]Z)
expresses liveness.

In contrast, that eventually some action in K happens means that all possible
runs contain a transition whose action belongs to K. This liveness property is

expressed by the following formula.

μZ. ⟨−⟩tt ∧ [−K]Z

To satisfy it, a process must not be able to perform actions from the set A − K
forever. Consider the syntactic approximants (as described in Section 5.5) this
formula generates. We abbreviate the ith approximant to μZ^i.

μZ^0 = ff
μZ^1 = ⟨−⟩tt ∧ [−K]μZ^0 = ⟨−⟩tt ∧ [−K]ff
μZ^2 = ⟨−⟩tt ∧ [−K]μZ^1 = ⟨−⟩tt ∧ [−K](⟨−⟩tt ∧ [−K]ff)
⋮
μZ^{i+1} = ⟨−⟩tt ∧ [−K]μZ^i
         = ⟨−⟩tt ∧ [−K](⟨−⟩tt ∧ ... [−K](⟨−⟩tt ∧ [−K]ff) ...)

The approximant μZ^1 expresses that a K action must happen next. μZ^2 states
that a K action must happen within two transitions, that is, for any sequence of
transitions E --a--> E' --b--> E'', either a ∈ K or b ∈ K. Moreover, the formula requires that,
if there is a run with just one transition, then it is a K transition. Generalising,
μZ^i states that any sequence of transitions of length i contains a K action (and
any run whose transition length is less than i contains a K transition). Therefore,
μZ^ω = ⋁{μZ^i : i ≥ 0} guarantees the liveness property that in every run
there is a K action.
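On a finite-state system these approximants can be iterated directly to compute which states satisfy μZ. ⟨−⟩tt ∧ [−K]Z. A hedged sketch, not from the book; encoding the transition system as a dict from states to (action, target) pairs is an illustrative choice made here.

```python
# A hedged sketch (not from the book): iterating the approximants of
# mu Z. <->tt /\ [-K]Z on a finite labelled transition system, encoded here
# (an illustrative choice) as a dict {state: [(action, target), ...]}.

def eventually_K(lts, K):
    Z = set()                                    # mu Z^0 = ff
    while True:
        # mu Z^{i+1}: some transition exists (<->tt), and every transition
        # whose action is outside K leads into the previous approximant.
        nxt = {s for s, trans in lts.items()
               if trans and all(t in Z for a, t in trans if a not in K)}
        if nxt == Z:
            return Z
        Z = nxt
```

For instance, on the cycle s0 --b--> s1 --a--> s0 with K = {a}, both states are included after two iterations, while a state with only a b-loop is never included.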
Liveness with respect to actions does not appear to be expressible in terms of
liveness with respect to state, that is, as a formula of the form μZ. Φ ∨ (⟨−⟩tt ∧
[−]Z) where Φ does not contain fixed points (or Z). For instance, neither of the
following formulas captures liveness.

Φ1 = μZ. ⟨K⟩tt ∨ (⟨−⟩tt ∧ [−]Z)
Φ2 = μZ. (⟨−⟩tt ∧ [−K]ff) ∨ (⟨−⟩tt ∧ [−]Z)

Φ1 is too weak, since it merely states that eventually some action in K is possible,
without any guarantee that it happens. In contrast, Φ2 is too strong, since it states
that eventually only K actions are possible (and therefore must happen).
Example 2 Consider the following definitions of processes.

A0 def= a. Σ{Ai : i ≥ 0}
Ai+1 def= b. Ai,  i ≥ 0

B0 def= a. Σ{Bi : i ≥ 0} + b. Σ{Bi : i ≥ 0}
Bi+1 def= b. Bi,  i ≥ 0

For any i ≥ 0, process A0 can perform the cycle A0 --a b^i--> A0 (an a followed
by i b's), whereas B0 can perform the cycles B0 --a b^i--> B0 and B0 --b b^i--> B0.
When j ≥ 0, Aj | Aj has the property "eventually a happens," which is not shared
by Bj | Bj: in every run of Aj | Aj, the action a occurs. There are runs from Bj | Bj
consisting only of b actions. Therefore, Aj | Aj ⊨ μZ. ⟨−⟩tt ∧ [−a]Z and
Bj | Bj ⊭ μZ. ⟨−⟩tt ∧ [−a]Z. On the other hand, both these processes have the
property Φ1, above, when K is the singleton set {a}. Process Bj | Bj eventually
reaches B0 | Bk or Bk | B0, and both satisfy ⟨a⟩tt. Moreover, both fail to have
the property Φ2, above, when K is the set {a}. For instance, in the case of Aj | Aj,
there is no guarantee that in every run A0 | A0 is reached, because this is the only
process of the form Ai | Aj satisfying ⟨−⟩tt ∧ [−a]ff.
Liveness and safety may relate to subsets of runs. For instance, they may be
triggered by particular actions or states. A simple case is "if action a ever happens,
then eventually b happens": any run with an a action has a later b action. This is
expressed by the following formula.

νZ. [a](μY. ⟨−⟩tt ∧ [−b]Y) ∧ [−]Z

An example of a conditional safety property is "whenever Ψ holds, Φ^c will never
become true," which is expressed as follows.

νZ. (Ψ^c ∨ (Ψ ∧ νY. Φ ∧ [−]Y)) ∧ [−]Z


In both these examples, the fonnulas belong to J.,LMI (as defined in the previous
section).
More involved is the expression of liveness properties under fairness . An ex-
ample is "in any run , if b and c happen infinitely often, then so does a ," expressed
as folIows .

vZ. (J.,LX . [b](v Y. [c](v YI. X /\ [-a]YI) /\ [-a]Y) /\ [- ]Z)

There is an essential fixed point dependence, as described in the previous section,
because X occurs free within the fixed point subformula prefaced with νY.
Example 3 The desirable liveness property for the crossing, "whenever a car approaches the
crossing, eventually it crosses," is captured by the following formula.

νZ. [car](μY. ⟨−⟩tt ∧ [−ccross]Y) ∧ [−]Z

However, this only holds if we assume that the signal is fair. Let Q and R be
variables, and V a valuation, such that Q is true when the crossing is in any state
where Rail has the form green.tcross.red.Rail (the states E2, E3, E6, and
E10 of Figure 1.12) and R holds when it is in any state where Road has the form
up.ccross.down.Road (the states E1, E3, E7 and E11). The liveness property now
becomes "for any run, if Q^c is true infinitely often and R^c is also true infinitely
often, then whenever a car approaches the crossing, eventually it crosses," which

is expressed by the open formula relative to V,

νY. [car](μX. νY1. (Q ∨ [−ccross](Ψ ∧ [−ccross]Y1))) ∧ [−]Y,

where Ψ is νY2. (R ∨ X) ∧ [−{ccross}]Y2. The property expressed here is
extensional.
Another class of properties is until properties, as in CTL. The formula
A(Φ U Ψ) expresses "for any run, Φ holds until Ψ becomes true," which in μM
is the formula μZ. Ψ ∨ (Φ ∧ ⟨−⟩tt ∧ [−]Z), where Z does not occur in Φ or
Ψ. This property does require that Ψ eventually becomes true. This commitment
can be removed by changing fixed points. The property "in every run, Φ holds
unless Ψ becomes true" does not imply that Ψ does eventually hold, and therefore
is expressed as νY. Ψ ∨ (Φ ∧ [−]Y). Until properties may concern actions instead
of states. For example, "in any run, K actions happen until a J action happens"
is expressed as μY. [−(K ∪ J)]ff ∧ ⟨−⟩tt ∧ [−J]Y, with the implication that
eventually a J action occurs. This implication can be removed, as in the property
"in any run, K actions happen unless a J action occurs," by changing fixed points:
νY. [−(K ∪ J)]ff ∧ [−J]Y. The reader is invited to formulate weaker versions of
these properties with respect to some runs.
Cyclic properties can also be described in μM. A simple example is that tock
recurs at each even point: if E0 --a1--> E1 --a2--> ... is a finite or infinite length run, then
each a2i is tock. The formula νZ. [−]([−tock]ff ∧ [−]Z) expresses this property,
and Cl1 satisfies it. The clock Cl1 in fact has an even more regular cyclic property,
that every run is a repeating cycle of tick and tock actions, νZ. [−tick]ff ∧
[tick]([−tock]ff ∧ [−]Z)⁵. These properties can also be weakened to some
family of runs. Cyclic properties that allow other actions to intervene within a
cycle can also be expressed.

Example 4 Recall the scheduler from Section 1.4 that schedules a sequence of tasks, and must
ensure that a task cannot be restarted until its previous operation has finished.
Suppose that initiation of one of the tasks is given by the action a and its termination
by b. The scheduler therefore has to guarantee the cyclic behaviour a b when other
actions may occur before and after each occurrence of a and each occurrence of
b. This property can be defined inductively as follows.

Cycle(ab) def= [b]ff ∧ [a]Cycle(ba) ∧ [−a]Cycle(ab)
Cycle(ba) def= [a]ff ∧ [b]Cycle(ab) ∧ [−b]Cycle(ba)

Here we have left open the possibility that runs have finite length. Appropriate oc-
currences of ⟨−⟩tt within the definition would preclude it. An important issue is whether
these recursive definitions are to be interpreted with least or greatest fixed points,
⁵This formula leaves open the possibility that a run has finite length. To preclude it, we add ⟨−⟩tt at the
outer and inner levels.

or a mixture of the two. This depends upon whether intervening actions are al-
lowed to continue forever without the next a or b happening. If we prohibit this,
the cyclic property is expressed using least fixed points.

μY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y

If we permit other actions to intervene forever, but insist that whenever a happens
b happens later, a mixture of fixed points is required.

νY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [−b]Z) ∧ [−a]Y

The length of the cycle can be extended. As an exercise, the reader is invited to
define the formula for Cycle(abcd).
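To see concretely what Cycle(ab) demands of a single run, here is a small sketch, not from the book, that checks one finite trace (represented as a list of action names, an encoding chosen purely for illustration) for the a/b alternation with arbitrary intervening actions.

```python
# A hedged sketch (not from the book): what Cycle(ab) demands of one finite
# run, encoded here as a list of action names (an illustrative convention).

def obeys_cycle_ab(trace):
    expected, forbidden = 'a', 'b'   # start in state Cycle(ab)
    for action in trace:
        if action == forbidden:      # the [b]ff (resp. [a]ff) conjunct
            return False
        if action == expected:       # [a]Cycle(ba) (resp. [b]Cycle(ab))
            expected, forbidden = forbidden, expected
        # any other action leaves the state unchanged ([-a] / [-b] clauses)
    return True
```

So the trace c a d b a is accepted, while b alone, or two a's without an intervening b, is rejected. The least/greatest fixed point choice discussed above only shows up on infinite runs, which a finite-trace check like this cannot distinguish.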
Another class of properties involves counting. An instance is that, in each run,
there are exactly two a actions, given as follows.

μX. [a](μY. [a](νZ. [a]ff ∧ [−]Z) ∧ ⟨−⟩tt ∧ [−a]Y) ∧ ⟨−⟩tt ∧ [−a]X

Another property is that, in every run, there are at least two a actions. Even more
general is the property "in each run, a can only happen finitely often," which
is expressed by μX. νY. [a]X ∧ [−a]Y. However, there are also many counting
properties that are not expressible in the logic. A notable case is the following
property of a buffer, "the number of out actions never exceeds the number of in
actions."

Exercises 1. Define in μM the following properties.
a. Eventually either tick happens or Φ becomes true
b. In some run Φ is always true
c. tick happens until Φ
d. tick happens until tock
e. tick happens unless tock happens
2. Prove that liveness with respect to actions cannot be expressed in terms of
liveness with respect to state, that is, as a formula of the form μZ. Φ ∨ (⟨−⟩tt ∧
[−]Z), where Φ is a modal formula that does not contain fixed points, or the
free variable Z.
3. Define in μM the following properties.
a. In any run, a and b happen finitely often
b. If a, b and c happen infinitely often, then Φ is true infinitely often
c. In any run, Φ is true twice and Ψ is true twice
4. Define fixed point formulas for the properties Cycle(abcd), which depend on
assumptions about intervening actions.
5. Prove that "the number of out actions never exceeds the number of in actions"
is not expressible in μM (see Sistla et al. [52] for a proof technique).
6

Verifying Temporal Properties

6.1 Techniques for verification 133


6.2 Property checking games 135
6.3 Correctness of games 144
6.4 CTL games 147
6.5 Parity games 151
6.6 Deciding parity games 156

A very rich temporal logic, modal mu-calculus, has been described. Formulas of
the logic can express liveness, safety, cyclic and other properties of processes. The
next step is to provide techniques for verification, for showing when processes
have, or fail to have, these features. In this chapter, we show that game theoretic
ideas provide a general framework for verification.

6.1 Techniques for verification


To show that a process has, or fails to have, a modal property, we can appeal
to the inductive definition of satisfaction between a process and a formula. A
simple approach is goal directed. We start with the goal E ⊨ Φ?, that is, "does
E satisfy Φ?", and then continue reducing goals to subgoals until we reach either
"obviously true" or "obviously false" subgoals. The reduction of goals to subgoals
can proceed via rules that depend on the main connective of the formula in the
goal. For example, the goal

Goal: E ⊨ [K]Φ ?

C. Stirling, Modal and Temporal Properties of Processes


© Springer Science+Business Media New York 2001

reduces to the subgoals

Subgoal: F ⊨ Φ ?

for each F such that E --a--> F and a ∈ K. Similarly, the next goal

Goal: E ⊨ Φ1 ∨ Φ2 ?

reduces to

Subgoal: E ⊨ Φ1 ?

or to

Subgoal: E ⊨ Φ2 ?

This technique has the merit that the formula Ψ in any subgoal is a proper sub-
formula of the goal formula Φ, that is, Ψ ∈ Sub(Φ). The method thereby supports
the general principle that a proof of a property "follows from" subproofs of
subproperties.
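For formulas without fixed points, this goal-directed reduction terminates and can be written down directly. A hedged sketch, not from the book: the LTS encoding as a dict and the tuple encoding of formulas are illustrative conventions, and each case mirrors one goal-reduction rule.

```python
# A hedged sketch (not from the book): a goal-directed checker for
# fixed-point-free modal formulas, mirroring the goal/subgoal rules above.
# The LTS is a dict {state: [(action, target), ...]}; formulas are tuples
# ('tt',), ('ff',), ('and', p, q), ('or', p, q), ('box', K, p), ('dia', K, p).

def sat(lts, E, phi):
    tag = phi[0]
    if tag == 'tt':
        return True
    if tag == 'ff':
        return False
    if tag == 'and':   # both subgoals must succeed
        return sat(lts, E, phi[1]) and sat(lts, E, phi[2])
    if tag == 'or':    # one of the two subgoals must succeed
        return sat(lts, E, phi[1]) or sat(lts, E, phi[2])
    if tag == 'box':   # E |= [K]phi: every K-derivative must satisfy phi
        return all(sat(lts, F, phi[2]) for a, F in lts[E] if a in phi[1])
    if tag == 'dia':   # E |= <K>phi: some K-derivative must satisfy phi
        return any(sat(lts, F, phi[2]) for a, F in lts[E] if a in phi[1])
    raise ValueError(phi)
```

Because every subgoal formula is a proper subformula of the goal formula, the recursion is well-founded without any fixed point machinery.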
Checking whether a process satisfies a μM formula is not as straightforward.
The problem is what to do with fixed point formulas, for instance with the following
goal.

Goal: E ⊨V νZ. Φ ?

According to the semantic clause for νZ in Section 5.1, the goal reduces to the
subgoals

Subgoal: F ⊨V[ℰ/Z] Φ ?

for each F ∈ ℰ, when ℰ is a subset of P(E). This requires us first to identify the
set P(E), and then to choose an appropriate subset ℰ.

We can avoid calculating P(E) and choosing subsets of it by appealing to
approximants, as presented in Sections 5.5 and 5.6. The goal now reduces to the
subgoals

Subgoal: E ⊨V νZ^α. Φ ?

for each ordinal α. At this point we may use induction over ordinals. However,
we need to be careful with limit ordinals. If the formula contains embedded fixed
points, we may need to use simultaneous induction over ordinals. However, if E
is a finite state process, then the goal reduces to the single subgoal

Subgoal: E ⊨V νZ^n. Φ ?

where n is the number of processes in the transition graph for E (or an overestimate
of its size). This allows one to reduce verification of a μM property to an M property.

However, it has the disadvantage that a proof of a property no longer follows from
subproofs of subproperties.

Discovering a fixed point set in general is not easy, and is therefore prone to
error. We therefore prefer simpler, and consequently safer, methods for checking
whether temporal properties hold. Towards this end, we first provide a different
characterisation of the satisfaction relation between a process and a formula in
terms of game playing.

Exercises 1. a. Develop a set of goal directed rules for checking M properties of finite
state processes. This can be viewed as a "top down" approach to property
checking.
b. Develop a "bottom up" approach to property checking of finite state pro-
cesses. That is, develop a method that, when given the processes that have
the subformula properties, constructs the processes having the property itself.
c. Give advantages and disadvantages of these two approaches, top down
and bottom up. Is there a way of combining their advantages?
2. a. Develop a set of goal directed rules for checking CTL properties of finite
state systems. Use your rules to show the following.
i. Ven ⊨ AG([tick]ff ∨ AF⟨collectl⟩tt)
ii. Cl ⊨ AG(⟨−⟩tt ∧ [−tick]ff)
iii. Crossing ⊨ EF⟨ccross⟩tt
iv. Cl1 ⊨ A((⟨−⟩tt ∧ [−tick]ff) U ⟨tock⟩tt)
b. Develop a bottom up approach to CTL property checking of finite state
processes. That is, develop a method that, when given the processes that have
the subformula properties, constructs the processes having the property itself.
Illustrate your technique on the examples above.
3. Present an inductive technique for showing when E ⊨V νZ. Φ that uses
induction on ordinals α.

6.2 Property checking games


We now present a game theoretic account of when a process E has a μM property
Φ, relative to a valuation V. We assume that the formula Φ is normal, as defined
in Section 5.2. Our intention is to present a game for property checking, as we do
for bisimulation checking in Chapter 3, so that a process has a property whenever
player V has a winning strategy for the corresponding game.

• if Φj = Ψ1 ∧ Ψ2, then player R chooses a conjunct Ψi where i ∈ {1, 2}: the
process Ej+1 is Ej and Φj+1 is Ψi
• if Φj = Ψ1 ∨ Ψ2, then player V chooses a disjunct Ψi where i ∈ {1, 2}: the
process Ej+1 is Ej and Φj+1 is Ψi
• if Φj = [K]Ψ, then player R chooses a transition Ej --a--> Ej+1 with a ∈ K,
and Φj+1 is Ψ
• if Φj = ⟨K⟩Ψ, then player V chooses a transition Ej --a--> Ej+1 with a ∈ K,
and Φj+1 is Ψ
• if Φj = σZ. Ψ, then Φj+1 is Z and Ej+1 is Ej
• if Φj = Z and the subformula of Φ0 identified by Z is σZ. Ψ, then Φj+1 is
Ψ and Ej+1 is Ej

FIGURE 6.1. Rules for the next move in a game play

The "property checking game" GV(E, Φ), when V is a valuation, E is a process,
and Φ is a normal formula, is played by two participants, players R (the refuter)
and V (the verifier). Player R attempts to refute that E ⊨V Φ, whereas player V
wishes to establish that it is true.

A play of the game GV(E0, Φ0) is a finite or infinite length sequence of the
form

(E0, Φ0)(E1, Φ1) ... (Ei, Φi) ...

where each formula Φi is a subformula of Φ0, that is, Φi ∈ Sub(Φ0), and each
process Ei belongs to P(E0). If part of a play is (E0, Φ0) ... (Ej, Φj), then the next
move, and which player makes it, depends on the main connective of the formula
Φj. All the possibilities are presented in Figure 6.1. Players do not necessarily
take turns, as in the bisimulation game¹. There is a duality between the rules for ∧
and ∨, and for [K] and ⟨K⟩. Player R makes the move when the main connective is ∧
or [K], whereas player V makes a similar move when it is ∨ or ⟨K⟩. The rules for
fixed point formulas use the fact that the starting formula Φ0 is normal, so each
fixed point subformula is uniquely identified by its bound variable. Each time the
current position is (E, σZ. Ψ), the next position is (E, Z), and each time it is (F, Z),
the next position is (F, Ψ), meaning the fixed point subformula Z identifies is, in
effect, unfolded once. Because there are no choices, neither player is responsible
for these moves.

¹It would be straightforward, but somewhat artificial, to make players take turns, by adding extra null moves.

Example 1 Consider the game G(Cl, νZ. [tock]ff ∧ ⟨tick⟩Z).² The positions are as follows.

(Cl, νZ. [tock]ff ∧ ⟨tick⟩Z)
        ↓
     (Cl, Z)
        ↓
(Cl, [tock]ff ∧ ⟨tick⟩Z)  R
     ↙            ↘
(Cl, [tock]ff)   (Cl, ⟨tick⟩Z)  V
                      ↓
                   (Cl, Z)

The arrows indicate which of the positions can lead to a subsequent position, and
the label at a position indicates which player is responsible for the move. The position
(Cl, Z) is repeated. Hence, this game has only finitely many different positions.
The "game graph" is the graphical presentation of the positions, as above. A play
of the game can be viewed as the sequence of positions that a token passes through
as the players move it around the game graph. For instance, the single infinite play
repeatedly cycles through (Cl, Z).
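The moves of Figure 6.1 can be generated mechanically from a concrete representation of the process and formula. A hedged sketch, not from the book: it uses an illustrative tuple encoding of normal formulas, with `binders` mapping each bound variable to the fixed point subformula it identifies (possible because the formula is normal).

```python
# A hedged sketch (not from the book) of the move rules of Figure 6.1, over
# an illustrative tuple encoding of normal formulas. `binders` maps each
# bound variable to the fixed point subformula that identifies it; the LTS
# is a dict {state: [(action, target), ...]}.

def moves(lts, binders, E, phi):
    """Return (responsible player or None, list of possible next positions)."""
    tag = phi[0]
    if tag in ('and', 'or'):         # R chooses a conjunct, V a disjunct
        return ('R' if tag == 'and' else 'V'), [(E, phi[1]), (E, phi[2])]
    if tag in ('box', 'dia'):        # R ([K]) or V (<K>) picks a transition
        player = 'R' if tag == 'box' else 'V'
        return player, [(F, phi[2]) for a, F in lts[E] if a in phi[1]]
    if tag in ('mu', 'nu'):          # (E, sigma Z. psi) -> (E, Z), no choice
        return None, [(E, ('var', phi[1]))]
    if tag == 'var':                 # unfold: (E, Z) -> (E, psi)
        return None, [(E, binders[phi[1]][2])]
    return None, []                  # tt or ff: the play is over
```

Running this on example 1 reproduces its game graph: the fixed point and variable positions have no responsible player, the conjunction belongs to R, and the ⟨tick⟩ position to V.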
Example 2 Consider the counter Ct0 of Figure 1.4, which is a simple infinite state system,
and the game G(Ct0, μZ. [up]Z). The positions are very straightforward.

(Ct0, μZ. [up]Z)
        ↓
    (Ct0, Z)
        ↓
 (Ct0, [up]Z)  R
        ↓
    (Ct1, Z)
        ↓
        ⋮

²If a game does not depend on a valuation, we drop this index.



Player R wins
1. The play is (E0, Φ0) ... (En, Φn) and
• Φn = ff, or
• Φn = Z and Z is free in Φ0 and En ∉ V(Z), or
• Φn = ⟨K⟩Ψ and {F : En --a--> F and a ∈ K} = ∅
2. The play (E0, Φ0) ... (En, Φn) ... has infinite length, and the unique vari-
able X, which occurs infinitely often and which subsumes all other variables
occurring infinitely often, identifies a least fixed point subformula μX. Ψ

Player V wins
1. The play is (E0, Φ0) ... (En, Φn) and
• Φn = tt, or
• Φn = Z and Z is free in Φ0 and En ∈ V(Z), or
• Φn = [K]Ψ and {F : En --a--> F and a ∈ K} = ∅
2. The play (E0, Φ0) ... (En, Φn) ... has infinite length, and the unique vari-
able X, which occurs infinitely often and which subsumes all other variables
occurring infinitely often, identifies a greatest fixed point subformula νX. Ψ

FIGURE 6.2. Winning conditions

The result is a game with an infinite number of different positions.

Next, we define when a player is said to win a play of a game. The winning
conditions are presented in Figure 6.2. The refuter wins if a blatantly false position
is reached, such as (En, Z), where Z is free in Φ0 and En ∉ V(Z). The verifier wins
if a blatantly true position is reached, for instance, if the play reaches the position
(Cl, [tock]ff) in example 1, because Cl has no tock transitions.

The second winning condition in Figure 6.2 identifies the winning player in an
infinite length play. The winner depends on the "outermost fixed point" subformula
that is unfolded infinitely often: if it is a least fixed point subformula, player R
wins, and if it is a greatest fixed point subformula, player V wins. Because there is
only one fixed point subformula in the only infinite play of example 1, and it is a
greatest fixed point, player V wins that play. Similarly, player R wins the only play
of example 2, because the single fixed point variable Z identifies a least fixed point
subformula. If there is more than one fixed point variable occurring infinitely often,
then we need to know which of them is outermost. The definition in condition 2 is
in terms of subsumption, as defined in definition 3 of Section 5.2. If X identifies
σ1X. Ψ and Z identifies σ2Z. Ψ', then X subsumes Z if σ2Z. Ψ' ∈ Sub(σ1X. Ψ). In
the case of a game involving a formula σ1X1. σ2X2. ... σnXn. Φ(X1, ..., Xn), any
Xi may occur infinitely often in an infinite length play. However, there is just one
Xj that occurs infinitely often and that subsumes any other Xk occurring infinitely

often. This Xj is the outermost fixed point subformula, and it decides which player
wins the play. Proposition 1 makes this observation precise.
Proposition 1 If (E0, Φ0) ... (En, Φn) ... is an infinite length play of the game GV(E0, Φ0), then
there is a unique variable X that
1. occurs infinitely often, that is, for infinitely many i, X = Φi, and
2. if Y also occurs infinitely often, then X subsumes Y.

Proof. Let σX1. Ψ1, ..., σXn. Ψn be all the fixed point subformulas in Sub(Φ0), in
decreasing order of size³. Therefore, if i < j, then Xj cannot subsume Xi. Consider
the next position (Ej+1, Φj+1) after (Ej, Φj) in a play: if Φj is not a variable, then
|Φj+1| < |Φj|. Because each subformula has finite size, an infinite length play
must proceed infinitely often through variables belonging to {X1, ..., Xn}. Hence,
there is at least one variable occurring infinitely often. If a subpart of the play has
the form

(E, Xi) ... (F, Xj) ... (G, Xi)

and Xi ≠ Xj, then either Xi subsumes Xj or Xj subsumes Xi, but not both; see
Proposition 1 of Section 5.2. Consequently, by transitivity of subsumption, there
is exactly one variable Xi occurring infinitely often and subsuming any other Xj,
which also occurs infinitely often. □

A strategy for a player is a family of rules telling the player how to move.
It suffices (as with the bisimulation game of Chapter 3) to consider history-free
strategies, whose rules do not depend upon previous positions in the play. For
player R, rules have the following form.
• at position (E, Φ1 ∧ Φ2), choose (E, Φi) where i = 1 or i = 2
• at position (E, [K]Φ), choose (F, Φ) where E --a--> F and a ∈ K
For the verifier, rules have a similar form.
• at position (E, Φ1 ∨ Φ2), choose (E, Φi) where i = 1 or i = 2
• at position (E, ⟨K⟩Φ), choose (F, Φ) where E --a--> F and a ∈ K
A player uses the strategy π in a play provided all her moves in the play obey
the rules in π. The strategy π is winning if the player wins every play in which
she uses π. The following result provides an alternative account of the satisfaction
relation between processes and formulas.

Theorem 1 1. E ⊨V Φ iff player V has a history-free winning strategy for GV(E, Φ).
2. E ⊭V Φ iff player R has a history-free winning strategy for GV(E, Φ).
The proof of Theorem 1 is deferred until the next section.

³The size of a formula Φ, written |Φ|, is the number of connectives within it.

[Transition graph: D --a--> D', D' --a--> D, D' --b--> D'']

FIGURE 6.3. A simple process

Property checking games can be presented graphically, as in examples 1 and


2, where all the possible positions (those that are reachable in some play) are
represented, together with which player is responsible for moving from a position,
and where she is able to move to. Player V has a winning strategy for the game
in example 1 and player R has a winning strategy for the game in example 2. In
both cases the strategy consists ofthe empty set of'rules (since the winning players
have no choices).
Consider the simple process 0 in Figure 6.3, and the following formula \It.
\It = f.LY. vZ . [a]«{b}tt v y) 1\ Z)
D fails to have the property Ψ; see example 2 of Section 5.6. The refuter has
a winning strategy for the game G(D, Ψ). The full game graph is pictured in
Figure 6.4. The node labelled 16, (D, ⟨b⟩tt), is a winning position for the refuter
and node 9, (D″, tt), is a winning position for the verifier. Player R's winning
strategy consists of the following two rules.
at 14, (D, (⟨b⟩tt ∨ Y) ∧ Z), choose 15, (D, ⟨b⟩tt ∨ Y)
at 6, (D′, (⟨b⟩tt ∨ Y) ∧ Z), choose 12, (D′, Z)
Play will proceed from node 6 to node 12, and from node 14 to node 15. At node 15,
player V either loses immediately by moving to 16 or returns to node 2. If player
V always chooses node 2 when she is at 15, the play will be infinite. Although the
two variables Z and Y occur infinitely often in this play (at nodes 2, 4 and 12), the
refuter wins because Y subsumes Z and Y abbreviates a least fixed point formula.
In this graphical account, the positions represent the board of the game, and a
token at a position represents the current position. A play is then a movement of
the token around the board, with the responsible player at a position choosing the
next position. An alternative presentation is to keep the process separate from the
formula. Figure 6.5 is a picture of the above formula Ψ. Now we can think of
a position as a pair of tokens, one that moves around the transition graph of the
process, and the other that moves around the graph of the formula, which is always
finite. For example, the position (D, (⟨b⟩tt ∨ Y) ∧ Z) would be represented with a
token over D in Figure 6.3 and a token over 6 of Figure 6.5. It is the formula that
6.2. Property checking games 141

[game graph not reproduced; its positions are listed below]

1: (D, μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))
2: (D, Y)
3: (D, νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))
4: (D, Z)
5: (D, [a]((⟨b⟩tt ∨ Y) ∧ Z))
6: (D′, (⟨b⟩tt ∨ Y) ∧ Z)
7: (D′, ⟨b⟩tt ∨ Y)
8: (D′, ⟨b⟩tt)
9: (D″, tt)
10: (D′, Y)
11: (D′, νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))
12: (D′, Z)
13: (D′, [a]((⟨b⟩tt ∨ Y) ∧ Z))
14: (D, (⟨b⟩tt ∨ Y) ∧ Z)
15: (D, ⟨b⟩tt ∨ Y)
16: (D, ⟨b⟩tt)

FIGURE 6.4. The game G(D, μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))



[formula graph not reproduced; its nodes are listed below]

1: μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
2: Y
3: νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
4: Z
5: [a]((⟨b⟩tt ∨ Y) ∧ Z)
6: (⟨b⟩tt ∨ Y) ∧ Z
7: ⟨b⟩tt ∨ Y
8: ⟨b⟩tt
9: tt

FIGURE 6.5. The formula μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)

decides which player makes the next move. This is a more compact representation
of a game because it requires only the sum of the size of the formula and the
number of processes in a transition graph, whereas the representation using explicit
positions may require the product of the two.
Example 3 The following family of processes is from example 2 of Section 5.7.

B₀ def= a.Σ{Bᵢ : i ≥ 0} + b.Σ{Bᵢ : i ≥ 0}
Bᵢ₊₁ def= b.Bᵢ, i ≥ 0

For any j ≥ 0, Bⱼ | Bⱼ ⊭ μZ. ⟨-⟩tt ∧ [-a]Z. This formula is pictured in
Figure 6.6. Player R's winning strategy for any j is as follows.

at Bₖ | Bⱼ and formula 3, choose formula 5
at Bₖ₊₁ | Bⱼ and formula 5, choose transition Bₖ₊₁ | Bⱼ -b-> Bₖ | Bⱼ
at B₀ | Bⱼ and formula 5, choose transition B₀ | Bⱼ -b-> Bⱼ | Bⱼ

1: μZ. ⟨-⟩tt ∧ [-a]Z
2: Z
3: ⟨-⟩tt ∧ [-a]Z
4: ⟨-⟩tt
5: [-a]Z
6: tt

FIGURE 6.6. The formula μZ. ⟨-⟩tt ∧ [-a]Z



For each j, the result is an infinite play passing through formula 2 infinitely often.

The game theoretic account of having a property holds for arbitrary processes,
whether they be finite or infinite state. Generally, property checking is undecidable:
for instance, the halting problem is equivalent to whether TMₙ ⊨ νZ. ⟨-⟩tt ∧
[-]Z, where TMₙ is the coding of the nth Turing machine. However, for some
classes of infinite state processes property checking is decidable; see, for example,
Esparza [21].
The general problem of property checking is closed under complement. Recall
the definition of the complement Φᶜ of a formula Φ. For any valuation V, assume that
Vᶜ is its "complement" (with respect to a fixed E), the valuation such that for any
Z, the set Vᶜ(Z) = P(E) - V(Z). In the following result, let player P be either the
refuter R or the verifier V throughout the Proposition.
Proposition 2 Player P does not have a history-free winning strategy for G_V(E, Φ) iff player P
has a history-free winning strategy for G_{Vᶜ}(E, Φᶜ).
Proof. G_{Vᶜ}(E, Φᶜ) is the dual of G_V(E, Φ), where the players reverse their roles,
and the blatantly true and false configurations are interchanged. Therefore, if player
P does not win G_V(E, Φ), then the opponent player O wins this game. But then P
wins the dual game G_{Vᶜ}(E, Φᶜ). The converse holds by similar reasoning. □

Exercises 1. Give winning strategies for the following properties of clocks

a. Cl₁ ⊨ νZ. ⟨tick⟩⟨tock⟩Z
b. Cl₁ ⊨ μY. ⟨tock⟩tt ∨ [-]Y
c. Cl₂ ⊨ νZ. [tick, tock]Z
d. Cl₅ ⊨ νZ. [tick]Z
e. Cl₆ ⊨ νZ. ⟨τ⟩Z
f. Cl₇ ⊨ νZ. ⟨tick⟩⟨tick⟩Z ∧ ⟨tock⟩⟨tock⟩Z
where Cl₁, Cl₂, and Cl₅ are as in Section 1.1, Cl₆ def= tick.Cl₆ + τ.Cl₆, and
Cl₇ def= tick.Cl₇ + tock.Cl₇.
2. Show the following using games
a. Cl₁ ⊨ Cycle(tick tock)
b. Sched₃′ ⊨ Cycle(a₁ a₂ a₃)
c. Sched₃ ⊨ Cycle(a₁ a₂ a₃)
where the property Cycle(. . .) is from Section 5.7, and the schedulers are from
Section 1.4.
3. Show the following using games
a. T(17) ⊨ μY. ⟨-⟩tt ∧ [-out(1)]Y
b. Sem | Sem | Sem ⊨ νZ. [get](μY. ⟨-⟩tt ∧ [-put]Y) ∧ [-]Z

where T(i) is defined in Section 1.1, and Sem def= get.put.Sem.

4. Using games, show Aⱼ | Aⱼ ⊨ μZ. ⟨-⟩tt ∧ [-a]Z, for any j ≥ 0, where Aⱼ
is defined in example 2 of Section 5.7.
5. Define complement games for the following.
a. Cl₁ ⊨ νZ. ⟨tick⟩⟨tock⟩Z
b. Cl₁ ⊨ μY. ⟨tock⟩tt ∨ [-]Y
c. Cl₁ ⊨ Cycle(tick tock)

6.3 Correctness of games

This section is devoted to the proof of Theorem 1 of the previous section, which
shows that games provide an alternative basis for processes having, or failing to
have, a property. The result appeals to approximants, and uses an extension of the
"least approximant" result, Proposition 2 of Section 5.5.

Theorem 1 1. E ⊨_V Φ iff player V has a history-free winning strategy for G_V(E, Φ).
2. E ⊭_V Φ iff player R has a history-free winning strategy for G_V(E, Φ).
Proof. Assume that E₀ ⊨_V Φ₀. We show that player V has a history-free winning
strategy for G_V(E₀, Φ₀). The proof idea is straightforward, since we show that
player V is always able to preserve "truth" of game configurations by making
her choices wisely, so winning any play. The subtlety is the construction of the
history-free winning strategy, and it is here that we use approximants to make
optimal choices. We develop first the idea of "least" approximants.
Let σ₁Z₁. Ψ₁, . . ., σₙZₙ. Ψₙ be all the fixed point subformulas in Sub(Φ₀) in
decreasing order of size (as in the proof of Lemma 1 of the previous section).
Therefore, if Zᵢ subsumes Zⱼ, this means that i ≤ j: it is not possible for Zⱼ to
subsume Zᵢ when i < j. A position in the game G_V(E₀, Φ₀) has the form (F, Ψ),
where Ψ may contain free variables in {Z₁, . . ., Zₙ}. But really these variables are
bound. Therefore, we now consider the correct semantics for understanding the
formulas in a play.
Let P be the set of processes P(E₀). We define valuations V₀, . . ., Vₙ iteratively
as follows
V₀ = V
Vᵢ₊₁ = Vᵢ[Eᵢ₊₁/Zᵢ₊₁],
where Eᵢ₊₁ = {E ∈ P : E ⊨_Vᵢ σᵢ₊₁Zᵢ₊₁. Ψᵢ₊₁}. The valuation Vₙ captures the
meaning of all the bound variables Zᵢ. We say that a game position (F, Ψ) is true if
F ⊨_Vₙ Ψ. Clearly, the starting position is true because E₀ ⊨_Vₙ Φ₀ iff E₀ ⊨_V Φ₀,
and by assumption E₀ ⊨_V Φ₀.

Given any true position, we define a refined valuation that identifies the smallest
least fixed point approximants making the configuration true. To define this, let
μY₁. Ψ₁′, . . ., μYₖ. Ψₖ′ be all least fixed point subformulas in Sub(Φ₀) in decreasing
order of size, meaning each Yᵢ is some Zⱼ. We define a signature as a sequence of
ordinals of length k, α₁ . . . αₖ. We assume the lexicographic ordering on signatures:
α₁ . . . αₖ < β₁ . . . βₖ if there is an i ≤ k such that αⱼ = βⱼ for all 1 ≤ j < i and
αᵢ < βᵢ. Given a signature s = α₁ . . . αₖ, we define its associated valuation Vₙ^s as
follows. Again the definition is iterative, since we define valuations V₀^s, . . ., Vₙ^s

V₀^s = V
Vᵢ₊₁^s = Vᵢ^s[Eᵢ₊₁/Zᵢ₊₁],
where now the set Eᵢ₊₁ depends on the kind of fixed point σᵢ₊₁Zᵢ₊₁.

1. If σᵢ₊₁ = ν then Eᵢ₊₁ = {E ∈ P : E ⊨_Vᵢ^s σᵢ₊₁Zᵢ₊₁. Ψᵢ₊₁}

2. If σᵢ₊₁Zᵢ₊₁ = μYⱼ then Eᵢ₊₁ = {E ∈ P : E ⊨_Vᵢ^s μ^{αⱼ}Yⱼ. Ψⱼ′}

We leave as an exercise that, if F ⊨_Vₙ Ψ, then there is indeed a smallest signature
s such that F ⊨_Vₙ^s Ψ. It is this result that generalises the "least approximant" result
of Section 5.5. Given a true configuration (F, Ψ), we therefore define its signature
to be the least s such that F ⊨_Vₙ^s Ψ. Signatures were introduced by Streett and
Emerson in [56].
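For a finite-state game the ordinals appearing in a signature can be taken to be natural numbers, so a signature is just a k-tuple and the ordering above is ordinary lexicographic comparison, which Python's built-in tuple ordering implements directly. A small illustrative sketch (the function name is ours, not from the text):

```python
# Signatures as fixed-length tuples of naturals, ordered lexicographically.
# Python already compares tuples lexicographically, matching the ordering
# alpha_1...alpha_k < beta_1...beta_k defined in the text.

def sig_less(s, t):
    """s < t iff for some i, s[j] == t[j] for all j < i and s[i] < t[i]."""
    assert len(s) == len(t)
    return s < t

# Example with k = 3 least fixed point variables:
assert sig_less((0, 5, 7), (1, 0, 0))      # differs first at index 0
assert sig_less((1, 2, 9), (1, 3, 0))      # equal at index 0, differs at index 1
assert not sig_less((2, 0, 0), (2, 0, 0))  # the ordering is strict
```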
Next, we define the history-free winning strategy for player V. Consider any
true position which is a player V position. If the position is (F, Ψ₁ ∨ Ψ₂), then
consider its signature s. It follows that F ⊨_Vₙ^s Ψ₁ ∨ Ψ₂. Therefore, F ⊨_Vₙ^s Ψᵢ
for i = 1 or i = 2. Choose one of these Ψᵢ that holds, and add the rule "at
(F, Ψ₁ ∨ Ψ₂) choose (F, Ψᵢ)." The construction is similar for a true position of
the form (F, ⟨K⟩Ψ). Let s be its signature, and therefore F ⊨_Vₙ^s ⟨K⟩Ψ. Hence,
there is a transition F -a-> F′ with a ∈ K, and F′ ⊨_Vₙ^s Ψ. Therefore, the rule
in player V's strategy is "at position (F, ⟨K⟩Ψ) choose (F′, Ψ)." The result is a
history-free strategy. The rest of the proof consists of showing that indeed it is a
winning strategy.
But suppose not. Assume that player R can defeat player V. The starting position
is true, and as play proceeds we show that the play cannot reach a false position
because player V uses her strategy. Moreover, we shall carry around the signature
of each position. The initial position is (E₀, Φ₀) and E₀ ⊨_Vₙ^{s₀} Φ₀. Assume that
(Eₘ, Φₘ) is the current position in the play and Eₘ ⊨_Vₙ^{sₘ} Φₘ. If the position is a
final position, then clearly player R is not the winner. Therefore, either player V
wins, or the play is not yet complete. In the latter case, we show how the play is
extended to (Eₘ₊₁, Φₘ₊₁) by case analysis on Φₘ.
If Φₘ = Ψ₁ ∧ Ψ₂, then player R chooses Ψᵢ, i ∈ {1, 2}, and the next position
(Eₘ₊₁, Φₘ₊₁) is true where Eₘ₊₁ = Eₘ and Φₘ₊₁ = Ψᵢ, and Eₘ₊₁ ⊨_Vₙ^{sₘ₊₁} Φₘ₊₁.
Clearly, the signature sₘ₊₁ ≤ sₘ. A similar argument applies to the other player
R move when Φₘ = [K]Ψ, and again sₘ₊₁ ≤ sₘ.

If Φₘ = Ψ₁ ∨ Ψ₂, then player V uses the strategy to choose the next position.
Clearly, this preserves truth because sₘ must be the same signature as when defining
the strategy. A similar argument applies to the other case of a player V move when
Φₘ = ⟨K⟩Ψ. Again, in both cases sₘ₊₁ ≤ sₘ.
If Φₘ = σᵢZᵢ. Ψᵢ, then the next game configuration is the true position
(Eₘ₊₁, Φₘ₊₁), where Eₘ₊₁ = Eₘ and Φₘ₊₁ = Zᵢ. Notice the signature may
increase if σᵢ = μ. If Φₘ = Zᵢ and σᵢ = ν, then the next position (Eₘ₊₁, Φₘ₊₁) is
true, where Eₘ₊₁ = Eₘ and Φₘ₊₁ = Ψᵢ and sₘ₊₁ = sₘ. If Φₘ = Zᵢ and σᵢ = μ
then the next position (Eₘ₊₁, Φₘ₊₁) is true, where Eₘ₊₁ = Eₘ and Φₘ₊₁ = Ψᵢ
and sₘ₊₁ < sₘ because, in effect, the fixed point has been unfolded. In this case
there is a genuine decrease in signature.
The proof is completed by showing that player V must win any such play. As
we have seen, this is the case when a play has finite length. Consider therefore
an infinite length play. By Proposition 1 of the previous section, there is a unique
Zᵢ occurring infinitely often, and which subsumes any other Zⱼ also occurring
infinitely often. Consider the game play (Eₖ, Φₖ) . . . from the point k such that
every occurrence of a variable Zⱼ in the play is subsumed by Zᵢ, and let k₁, k₂,
. . . be the positions in this suffix play where Zᵢ occurs. Zᵢ cannot identify a least
fixed point subformula. For by the construction above, this would require there
to be a strictly decreasing sequence of signatures s_{k₁} > s_{k₂} > . . ., which is
impossible. Therefore Zᵢ identifies a greatest fixed point subformula.
Part 2 of the proof is similar. We need to generalise the other half of the least
approximant result, Proposition 2 of Section 5.5. The proof is left as an exercise
for the reader. □

Exercises 1. Prove the smallest signature result used in Theorem 1: if F ⊨_Vₙ Ψ, then there
is a smallest signature s such that F ⊨_Vₙ^s Ψ.
2. For each of the following construct its signature

a. Cl₁ ⊨ νZ. ⟨tick⟩⟨tock⟩Z
b. Cl₁ ⊨ μY. ⟨tock⟩tt ∨ [-]Y
c. T(19) ⊨ μY. ⟨-⟩tt ∧ [-out(1)]Y
d. Sem | Sem | Sem ⊨ νZ. [get](μY. ⟨-⟩tt ∧ [-put]Y) ∧ [-]Z
e. A₅ | A₅ ⊨ μZ. ⟨-⟩tt ∧ [-a]Z

where A₅ is defined in example 2 of Section 5.7.

3. Prove part 2 of Theorem 1.

6.4 CTL games

The two until operators of the logic CTL of Section 4.3 can be viewed as macros
for μM formulas, as described in Section 5.2.
μZ. Ψ ∨ (Φ ∧ (⟨-⟩tt ∧ [-]Z))
μZ. Ψ ∨ (Φ ∧ ⟨-⟩Z)
Negations of until formulas are therefore maximal fixed point formulas, as also
described in Section 5.2. In this section, property checking games for CTL are
presented. The idea is to develop CTL games entirely within the syntax of CTL,
even though their justification owes to the fact that CTL can be embedded in μM1,
as shown in Section 5.6. Similar games could be developed for any other logic that
can be systematically defined in μM, such as propositional dynamic logic.
The syntax of CTL is given in Section 4.3. In the following, for ease of
exposition, the following abbreviation is used.
⟨K⟩Ψ ≡ ¬[K]¬Ψ
(K)W == ....,[K]....,W
A play of the CTL property checking game G(E₀, Φ₀), where Φ₀ is a CTL formula,
is a finite or an infinite length sequence of pairs (Eᵢ, Φᵢ). The next move in the
play (E₀, Φ₀) . . . (Eⱼ, Φⱼ) is defined in Figure 6.7. The rules appear to be
complicated, but this is because of the presence of negation in the logic. The refuter

• if Φⱼ = Ψ₁ ∧ Ψ₂, then player R chooses a conjunct Ψᵢ, where i ∈ {1, 2}: the
process Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is Ψᵢ
• if Φⱼ = ¬(Ψ₁ ∧ Ψ₂), then player V chooses a conjunct Ψᵢ, where i ∈ {1, 2}:
the process Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is ¬Ψᵢ
• if Φⱼ = [K]Ψ, then player R chooses a transition Eⱼ -a-> Eⱼ₊₁ with a ∈ K
and Φⱼ₊₁ is Ψ
• if Φⱼ = ¬[K]Ψ, then player V chooses a transition Eⱼ -a-> Eⱼ₊₁ with a ∈ K
and Φⱼ₊₁ is ¬Ψ
• if Φⱼ = ¬¬Ψ, then Eⱼ₊₁ is Eⱼ and Φⱼ₊₁ is Ψ
• if Φⱼ = A(Ψ₁UΨ₂), then ¬(¬Ψ₂ ∧ ¬(Ψ₁ ∧ (⟨-⟩tt ∧ [-]A(Ψ₁UΨ₂)))) is Φⱼ₊₁
and Eⱼ₊₁ is Eⱼ
• if Φⱼ = ¬A(Ψ₁UΨ₂), then ¬Ψ₂ ∧ ¬(Ψ₁ ∧ (⟨-⟩tt ∧ [-]A(Ψ₁UΨ₂))) is Φⱼ₊₁
and Eⱼ₊₁ is Eⱼ
• if Φⱼ = E(Ψ₁UΨ₂), then ¬(¬Ψ₂ ∧ ¬(Ψ₁ ∧ ⟨-⟩E(Ψ₁UΨ₂))) is Φⱼ₊₁ and Eⱼ₊₁
is Eⱼ
• if Φⱼ = ¬E(Ψ₁UΨ₂), then ¬Ψ₂ ∧ ¬(Ψ₁ ∧ ⟨-⟩E(Ψ₁UΨ₂)) is Φⱼ₊₁ and Eⱼ₊₁
is Eⱼ

FIGURE 6.7. Rules for the next move in a CTL game play
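The four until rules of Figure 6.7 deterministically rewrite the formula while leaving the process fixed, so they can be sketched as a function on formulas alone. Formulas are encoded here as nested tuples; this ad hoc encoding and the function name are our own, not from the text:

```python
# Formulas as nested tuples: ('and', f, g), ('not', f), ('box', '-', f)
# for [-]f, ('dia', '-', f) for <->f, ('AU', f, g), ('EU', f, g), and 'tt'.

def unfold(phi):
    """One game move for an (optionally negated) until formula,
    following the last four rules of Figure 6.7."""
    if phi[0] == 'AU':
        _, p1, p2 = phi
        return ('not', ('and', ('not', p2),
                        ('not', ('and', p1,
                                 ('and', ('dia', '-', 'tt'),
                                         ('box', '-', phi))))))
    if phi[0] == 'EU':
        _, p1, p2 = phi
        return ('not', ('and', ('not', p2),
                        ('not', ('and', p1, ('dia', '-', phi)))))
    if phi[0] == 'not' and phi[1][0] in ('AU', 'EU'):
        # A negated until unfolds to the same formula without the outer negation.
        return unfold(phi[1])[1]
    raise ValueError('not an until formula')

# The unfolding of A(P U Q) matches Figure 6.7:
au = ('AU', 'P', 'Q')
assert unfold(au) == ('not', ('and', ('not', 'Q'),
                              ('not', ('and', 'P',
                                       ('and', ('dia', '-', 'tt'),
                                               ('box', '-', au))))))
```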



Player R wins
1. The play is (E₀, Φ₀) . . . (Eₙ, Φₙ) and
• Φₙ = ¬tt, or
• Φₙ = ¬[K]Ψ and {F : Eₙ -a-> F and a ∈ K} = ∅
2. The play (E₀, Φ₀) . . . (Eₙ, Φₙ) . . . has infinite length and there is an until
formula A(Ψ₁UΨ₂) or E(Ψ₁UΨ₂) occurring infinitely often in the play
Player V wins
1. The play is (E₀, Φ₀) . . . (Eₙ, Φₙ) and
• Φₙ = tt, or
• Φₙ = [K]Ψ and {F : Eₙ -a-> F and a ∈ K} = ∅
2. The play (E₀, Φ₀) . . . (Eₙ, Φₙ) . . . has infinite length and there is a negated
until formula ¬A(Ψ₁UΨ₂) or ¬E(Ψ₁UΨ₂) occurring infinitely often in the
play

FIGURE 6.8. Winning conditions

chooses the next position when the formula has the form Ψ₁ ∧ Ψ₂ or [K]Ψ, and
the verifier chooses when it has the form ¬(Ψ₁ ∧ Ψ₂) or ¬[K]Ψ. Because there
are no choices in the remaining rules, neither player is responsible for them. The
first of these reduces a double negation. The remaining rules are for until formulas
and their negations, and are determined by their fixed point definitions.
Figure 6.8 captures when a player is said to win a play of a game. Player R
wins a play if a blatantly false position is reached, and V wins if an obviously true
position is reached. The other condition identifies which of the players wins an
infinite length play. This is much easier to decide than for full μM. For any infinite
length play of a CTL game, there is only one until formula or negation of an until
formula occurring infinitely often. It is this formula that decides who wins. If it is
an until formula, and therefore a least fixed point formula, then R wins the play; if
it is the negation of an until formula, and therefore a maximal fixed point formula,
then V wins the play.
Again a history-free strategy for a player is a family of rules telling the player
how to move, independent of previous positions. For the refuter, rules have
the following form.
• at position (E, Φ₁ ∧ Φ₂) choose (E, Φᵢ) where i = 1 or i = 2
• at position (E, [K]Φ) choose (F, Φ) where E -a-> F and a ∈ K
For the verifier, rules have a similar form.
• at position (E, ¬(Φ₁ ∧ Φ₂)) choose (E, ¬Φᵢ) where i = 1 or i = 2
• at position (E, ¬[K]Φ) choose (F, ¬Φ) where E -a-> F and a ∈ K
A player uses the strategy π in a play if all her moves in the play obey the rules in
it, and π is winning if the player wins every play in which she uses π.

[game graph not reproduced; its positions are listed below]

Φ ≡ A(⟨tick⟩tt U ⟨tock⟩tt)

1: (Cl, Φ)
2: (Cl, ¬(¬⟨tock⟩tt ∧ ¬(⟨tick⟩tt ∧ (⟨-⟩tt ∧ [-]Φ))))
3: (Cl, ¬¬⟨tock⟩tt)
4: (Cl, ⟨tock⟩tt)
5: (Cl, ¬¬(⟨tick⟩tt ∧ (⟨-⟩tt ∧ [-]Φ)))
6: (Cl, ⟨tick⟩tt ∧ (⟨-⟩tt ∧ [-]Φ))
7: (Cl, ⟨tick⟩tt)
8: (Cl, tt)
9: (Cl, ⟨-⟩tt ∧ [-]Φ)
10: (Cl, ⟨-⟩tt)
11: (Cl, [-]Φ)

FIGURE 6.9. A CTL game

Example 1 Just for illustration, we show that the refuter has a winning strategy for the game
G(Cl, A(⟨tick⟩tt U ⟨tock⟩tt)). The game graph is presented in Figure 6.9. Po-
sition 4 is a winning position for the refuter, and 8 is a winning position for the
verifier. The refuter's winning strategy is to choose 9 at 6, and 11 at 9. Therefore,
the only infinite play cycles through the until formula, and is thereby won by player
R.

The following Proposition is a corollary of Theorem 1 of Section 6.3.

Proposition 1 1. If Φ ∈ CTL, then E ⊨ Φ iff player V has a history-free winning strategy
for G(E, Φ).
2. If Φ ∈ CTL, then E ⊭ Φ iff player R has a history-free winning strategy for
G(E, Φ).
The characteristic of a play of a CTL game, that for any infinite play there is just
one fixed point variable that occurs infinitely often, is inherited by any play of a
game G_V(E, Φ) when Φ ∈ μM1. A more general fragment of μM is μMA,
the alternation free formulas of modal mu-calculus. These fragments are defined
in Section 5.6. In any infinite play of G_V(E, Φ) when Φ ∈ μMA all fixed point
variables occurring infinitely often are of the same kind, that is, they are either all
greatest or all least fixed point variables.

Proposition 2 1. If Φ ∈ μM1, then in any infinite play of the game G_V(E, Φ) there is just one
fixed point variable X occurring infinitely often.
2. If Φ ∈ μMA, then in any infinite play of the game G_V(E, Φ) the variables
occurring infinitely often are either all greatest fixed point variables, or are
all least fixed point variables.
Proof. For both cases it is clear that, in any infinite length play, there is at least
one fixed point variable that occurs infinitely often. If there is just one fixed point
variable that occurs infinitely often, then the results follow. Otherwise, assume an
infinite play in which the distinct variables X₁, . . ., Xₙ all occur infinitely often.
Assume that they identify the fixed point subformulas σ₁X₁. Ψ₁, . . ., σₙXₙ. Ψₙ, in
decreasing order of size. Consider now the first case, when Φ ∈ μM1. Because it is
possible to proceed in a number of moves from some position (F₁, X₁) to (F₂, X₂),
and then to (F₃, X₃) and so on to (Fₙ, Xₙ) and then back to (F₁′, X₁), it follows that
X₁ must occur free in at least one of the subformulas σ₂X₂. Ψ₂, . . ., σₙXₙ. Ψₙ,
which in the first case contradicts that the starting formula Φ ∈ μM1. For the
second case, assume that Xᵢ is the first variable that identifies a different kind of
fixed point to that of X₁. Because it is possible to proceed in a number of moves
from (Fᵢ, Xᵢ) to (F₁′, X₁), at least one of the variables Xⱼ, 1 ≤ j < i, must occur
free in σᵢXᵢ. Ψᵢ, which contradicts that the starting formula Φ ∈ μMA. □

Exercises 1. Give winning strategies for the following properties of Ven

a. Ven ⊨ A(tt U ⟨collect_b, collect_l⟩tt)
b. Ven ⊨ E(tt U ⟨collect_b⟩tt)
c. Ven ⊨ E(tt U ⟨collect_l⟩tt)
d. Ven ⊨ ¬A(tt U ⟨collect_l⟩tt)

2. Extend the rules of CTL game playing to formulas of the form AFΦ, EFΦ.
Give the winning strategies for the following.
a. SM₀ ⊨ EF⟨win(5)⟩tt
b. Crossing ⊨ ¬EF(⟨tcross⟩tt ∧ ⟨ccross⟩tt)
3. Consider ADₙ as defined in the exercises in Section 5.6. Give a condition,
similar in spirit to Proposition 2, that characterises how many different alter-
nations of fixed point variables any infinite play of a game involving Φ ∈ ADₙ
can have.

6.5 Parity games

Assume that E is a finite state process. An algorithm answering the question "is
it the case that E ⊨_V Φ?" is known as a model checker. Process E determines a
model, its transition graph, and the question is whether the property Φ holds at state
E of this model (relative to V). Model checking was introduced by Clarke, Emerson
and Sistla for the logic CTL [12]. Games offer a foundation for model checking.
However, there are other foundations including nondeterministic automata and
alternating automata; see Vardi and Wolper, and Bernholtz et al. [59, 4].
An important issue is how much resource is needed to model check. There are
questions of time (How quickly can it be done?) and space (How much memory
is needed?). The quantitative results depend on what counts as the size of the
decision problem. One notion of size is that of the resulting game graph, whose
upper bound is |P(E)| × |Φ|. Alternatively, because the input to model checking
is the model E and the formula Φ, the size is |P(E)| + |Φ|. The contrast between
these two notions of size is illustrated by the two graphical presentations of the
property checking game in Section 6.2. Both notions of size depend on the model
of E as a transition graph. A third notion of size does not appeal to this graph.
Instead, the size of E is the size of its description in CCS. For instance, if E is a
parallel composition of processes E₁ | . . . | Eₙ then the size of its description is the
sum of the sizes of the components, whereas the size of its transition graph is the
product of the sizes of its components.
Whatever the notion of size, there is also the question of an efficient implementation
of the decision question. What succinct data structures should be used
for representing processes, their behaviour, and formulas? One popular method is
to represent graphs succinctly using OBDDs (ordered binary decision diagrams).
Model checking provides a "yes" or "no" answer to the question, "is it the
case that E ⊨_V Φ?" A more informative answer includes why E satisfies or fails
to satisfy Φ. The property checking game offers the possibility of exhibiting the
winning strategy that can then be used as evidence.

In this section, model checking is abstracted into a simpler graph game (which
is a slight variant of what is called the parity game; see Emerson and Jutla [19]).
A parity game is a directed graph G = (N, →, L) whose set of vertices N is a
finite subset of ℕ and whose binary edge relation → relates vertices. As usual we
write i → j instead of (i, j) ∈ →. The vertices are the positions of the game. The
third component L labels each vertex with R or with V: L(i) tells us which player
is responsible for moving from vertex i. A play always has infinite length because
we impose the condition that each vertex i has at least one edge i → j. A parity
game is a contest between players R and V. It begins with a token on the least vertex
j (with respect to < on ℕ). When the token is on vertex i and L(i) = P, player
P moves it along one of the outgoing edges of i. A play therefore consists of an
infinite length path through the graph along which the token passes.
The winner of a play is determined by the label of the least vertex i that occurs
infinitely often in the play. If L(i) = R, then player R wins; otherwise L(i) = V,
and player V wins. We now show how to transform the property checking game
for finite state processes into an equivalent parity game. Let G_V(E, Φ) be the
property checking game for the finite state process E and the normal formula Φ.
The transformation into the parity game G_V[E, Φ] proceeds as follows.
1. Let E₁, . . ., Eₘ be a list of all processes in P(E) with E = E₁.
2. Let Z₁, . . ., Zₖ be a list of all bound variables in Φ. Therefore, for each Zᵢ
there is the associated subformula σᵢZᵢ. Ψᵢ.
3. Let Φ₁, . . ., Φₗ be a list of all formulas in Sub(Φ) - {Z₁, . . ., Zₖ} in
decreasing order of size. This means that Φ = Φ₁. Extend the list by in-
serting each Zᵢ directly after the fixed point subformula associated with it:
Φ₁, . . ., σᵢZᵢ. Ψᵢ, Zᵢ, . . ., Φₗ. The result is a list of all formulas Φ₁, . . ., Φₙ
in Sub(Φ) in decreasing order of size, except that the bound variable Zᵢ is
"bigger" than Ψᵢ but "smaller" than σᵢZᵢ. Ψᵢ.
4. The possible positions of the property checking game G_V(E, Φ) are now listed
in the following order. Positions earlier in the list contain "larger" formulas.
(E₁, Φ₁), . . ., (Eₘ, Φ₁), (E₁, Φ₂), . . ., (E₁, Φₙ), . . ., (Eₘ, Φₙ)
The vertex set N of the parity game G_V[E, Φ] is {1, . . ., m × n}. Each vertex
i = m × (k - 1) + j represents the position (Eⱼ, Φₖ).
5. Next, we define the labelling L(i) of vertex i and its edges by case analysis
on the position (F, Ψ) that i represents.
• If Ψ is Z and Z is free in the starting formula Φ and F ∈ V(Z), then
L(i) = V and there is the edge i → i. Instead, if F ∉ V(Z), then
L(i) = R and there is the edge i → i
• If Ψ is tt, then L(i) = V and there is the edge i → i
• If Ψ is ff, then L(i) = R and there is the edge i → i
• If Ψ is Ψ₁ ∧ Ψ₂, then L(i) = R and there are edges i → j1 and i → j2
where j1 represents (F, Ψ₁) and j2 represents (F, Ψ₂)
• If Ψ is Ψ₁ ∨ Ψ₂, then L(i) = V and there are edges i → j1 and i → j2
where j1 represents (F, Ψ₁) and j2 represents (F, Ψ₂)
• If Ψ is [K]Ψ′ and {F′ : F -a-> F′ and a ∈ K} = ∅, then L(i) = V and
there is the edge i → i
• If Ψ is [K]Ψ′ and {F′ : F -a-> F′ and a ∈ K} ≠ ∅, then L(i) = R and
there is an edge i → j for each j representing a position (F′, Ψ′) such
that F -a-> F′ for a ∈ K
• If Ψ is ⟨K⟩Ψ′ and {F′ : F -a-> F′ and a ∈ K} = ∅, then L(i) = R and
there is the edge i → i
• If Ψ is ⟨K⟩Ψ′ and {F′ : F -a-> F′ and a ∈ K} ≠ ∅, then L(i) = V and
there is an edge i → j for each j representing a position (F′, Ψ′) such
that F -a-> F′ for a ∈ K
• If Ψ = νZⱼ. Ψⱼ, then L(i) = V and there is the edge i → j′ where j′
represents (F, Zⱼ)
• If Ψ = μZⱼ. Ψⱼ, then L(i) = R and there is the edge i → j′ where j′
represents (F, Zⱼ)
• If Ψ = Zⱼ and νZⱼ. Ψⱼ is in Sub(Φ), then L(i) = V and there is the edge
i → j′ where j′ represents (F, Ψⱼ)
• If Ψ = Zⱼ and μZⱼ. Ψⱼ is in Sub(Φ), then L(i) = R and there is the
edge i → j′ where j′ represents (F, Ψⱼ)
6. Finally, we tidy up and remove any positions not reachable from the initial
position (E, Φ).
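Step 4's numbering is a bijection between positions and vertices. A minimal sketch of the encoding and its inverse (the function names are ours, not from the text):

```python
# Vertex i = m*(k-1) + j represents position (E_j, Phi_k), where the
# processes are E_1..E_m and the formulas Phi_1..Phi_n, so i ranges
# over 1..m*n.

def vertex(j, k, m):
    """Vertex representing position (E_j, Phi_k)."""
    return m * (k - 1) + j

def position(i, m):
    """Inverse: recover (j, k) from vertex i."""
    j = ((i - 1) % m) + 1
    k = ((i - 1) // m) + 1
    return j, k

m, n = 3, 4                      # e.g. 3 processes and 4 formulas
for k in range(1, n + 1):
    for j in range(1, m + 1):
        assert position(vertex(j, k, m), m) == (j, k)
assert vertex(1, 1, m) == 1 and vertex(m, n, m) == m * n
```

Vertices with smaller numbers carry larger formulas, which is what the winning condition below exploits.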
Consider any play of the resulting parity game G_V[E, Φ]. The least vertex i
that occurs infinitely often must represent one of the following positions.
1. (F, Z) and Z is free in Φ
2. (F, tt)
3. (F, ff)
4. (F, [K]Ψ) and {F′ : F -a-> F′ and a ∈ K} = ∅
5. (F, ⟨K⟩Ψ) and {F′ : F -a-> F′ and a ∈ K} = ∅
6. (F, Zⱼ)
In all but the last case, there is the cycle i → i, and the winner is the same as
in the property checking game. In the parity game, these positions turn into loops
to make playing perpetual. Otherwise, position i represents (F, Zⱼ). Consider any
other j′ occurring infinitely often that represents a position (F′, Zₗ). Because
i is the least vertex occurring infinitely often, position (F′, Zₗ) appears later than
(F, Zⱼ) in the position ordering. Therefore, either Zⱼ = Zₗ or the fixed point
formula identified by Zₗ is strictly smaller than that identified by Zⱼ; in which
case, Zⱼ subsumes Zₗ. If νZⱼ. Ψⱼ is in Sub(Φ), then player V is the winner, and

□ player V vertices
○ player R vertices

[game graph not reproduced; its vertices are listed below]

1: (D, νZ. ⟨b⟩tt ∧ ⟨-⟩Z)
2: (D′, νZ. ⟨b⟩tt ∧ ⟨-⟩Z)
3: (D″, νZ. ⟨b⟩tt ∧ ⟨-⟩Z)
4: (D, Z)
5: (D′, Z)
6: (D″, Z)
7: (D, ⟨b⟩tt ∧ ⟨-⟩Z)
8: (D′, ⟨b⟩tt ∧ ⟨-⟩Z)
9: (D″, ⟨b⟩tt ∧ ⟨-⟩Z)
10: (D, ⟨-⟩Z)
11: (D′, ⟨-⟩Z)
12: (D″, ⟨-⟩Z)
13: (D, ⟨b⟩tt)
14: (D′, ⟨b⟩tt)
15: (D″, ⟨b⟩tt)
16: (D, tt)
17: (D′, tt)
18: (D″, tt)

FIGURE 6.10. The game G[D, νZ. ⟨b⟩tt ∧ ⟨-⟩Z]

instead if μZⱼ. Ψⱼ is in Sub(Φ), then player R wins. This agrees with the property
checking game.
The notions of a (history-free) strategy and of a winning strategy for the parity
game are very similar to the property checking game. For player P, a history-free
strategy is a set of rules of the form "at i choose j" where L(i) = P and there
is an edge i → j. A strategy is winning for player P if she wins every play in
which that strategy is employed. The following proposition is deducible from the
analysis and observations above.

Proposition 1 Player P has a winning strategy for G_V(E, Φ) iff player P has a winning strategy
for G_V[E, Φ].

Example 1 The game G[D, νZ. ⟨b⟩tt ∧ ⟨-⟩Z], where D is depicted in Figure 6.3, is given in
Figure 6.10. Vertices (such as 2 and 3) not reachable from vertex 1 in the game are
excluded. The representations of positions from the model checking game are also
presented.
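When both players use history-free strategies on a finite graph, a play is ultimately periodic (a finite stem followed by a repeated cycle), and the vertices occurring infinitely often are exactly those on the cycle. The winning condition can then be sketched as follows (the helper is our own, not from the text):

```python
def parity_winner(cycle, label):
    """Winner of an ultimately periodic play: the label of the least
    vertex occurring infinitely often. Since only the cycle's vertices
    recur infinitely often, that is the least vertex on the cycle."""
    return label[min(cycle)]

# Example: a play that settles into the cycle 2 -> 3 -> 4 -> 2,
# where vertex 2 is labelled V, is won by player V.
label = {1: 'R', 2: 'V', 3: 'R', 4: 'V'}
assert parity_winner([2, 3, 4], label) == 'V'
assert parity_winner([3, 4], label) == 'R'
```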

Exercises 1. Define the parity games G[E, Φ] when E and Φ are the following.
a. D and Ψ from Section 6.2.
b. Ven and νX. ⟨-⟩X
c. Crossing and μX. [-]X
2. Prove Proposition 1.
3. Prove that there is a converse to Proposition 1: any parity game can be trans-
formed into a property checking game whose size is polynomially bounded
by the parity game. (See Mader for an explicit construction [39].)
4. Define infinite state parity games so that Proposition 1 also holds for infinite
state processes E. (Hint: use a finite set of indexed colours.)
5. Boolean fixed point logic is defined as follows.

Φ ::= Z | tt | ff | Φ₁ ∧ Φ₂ | Φ₁ ∨ Φ₂ | νZ. Φ | μZ. Φ

Formulas are interpreted over the two element set consisting of ∅ (false) and
{1} (true). A closed formula is therefore either true or false. For instance, μZ. Z
is false because ∅ is the least solution to the equation Z = Z.
a. Show that a parity game can be directly translated into a closed formula
of boolean fixed point logic in such a way that player V wins the game
iff the translated formula is true.
b. Show the converse, that any closed boolean formula can be translated into
a game in such a way that the formula is true iff player V wins the game.
6. A simple stochastic game, SSG, is a graph game whose vertices are labelled R,
V or A (average), and where there are two special vertices R-sink and V-sink
(which have no outgoing edges). Each R and V vertex (other than the sinks)
has at least one outgoing edge, and each A vertex has exactly two outgoing
edges. At an average vertex during a game play a coin is tossed to determine
which of the two edges is traversed, each having probability ½. A game play
ends when a sink vertex is reached: player V wins if it is the V-sink, and player
R otherwise (which includes the eventuality of the game continuing forever).
The decision question is whether the probability that player V wins is greater
than ½. It is not known whether this problem can be solved in polynomial
time; see Condon [15].
Show how to (polynomially) transform a parity game into an SSG in such a
way that the same player wins both games. (Hint: add the two sink vertices,
and an average vertex i1 for each vertex i for which there is an edge j → i
with j ≥ i. Each such edge j → i when j ≥ i is removed, and the edge
j → i1 is added. Two new edges are added for each A vertex i1: first an
edge to i, and second an edge to R-sink, if i is labelled R, or to V-sink if it
is labelled V. The difficult question is: what probabilities should these edges
have?)
156 6. Verifying Temporal Properties

6.6 Deciding parity games


The model checking question "is it the case that E ⊨_V Φ?" is equivalent to
the decision question "which player has a winning strategy for the parity game
G_V[E, Φ]?" In this section, we develop a method for deciding parity games and
for exhibiting a winning strategy. The technique uses subgames of a parity game.
If G = (N, →, L) is a parity game and i is a vertex in N, then G(i) is the
game G, except that the starting vertex is i. The game G itself is G(j), where j is
the smallest vertex. If X ⊆ N, then G − X is the result of removing all vertices
in X from G, all edges from vertices in X, and all edges into vertices in X. It is
therefore the subgraph G′ = (N′, →′, L′) whose vertices N′ = N − X, whose
edge relation →′ is → ∩ (N′ × N′), and whose labelling L′(j) = L(j)
for each j ∈ N′. The subgraph G′ may not be a game because it may not satisfy
the requirement that every vertex i ∈ N′ have an edge i →′ j. If it does obey
this condition, then G′ is said to be a subgame of G. A degenerate case is when
N′ is the empty set. Now we shall consider appropriate subsets X so that G − X is
guaranteed to be a subgame.
A useful notion is the set of vertices of a parity game from which a player P can
force play to enter a subset X of vertices, written Force_P(X). This idea of a
Force set was used in Section 3.2 in the case of bisimulation games. It was used by
McNaughton and others [41, 58] in the case of more general games. A force set is
defined iteratively as follows.
Definition 1 Let G = (N, →, L) be a parity game and X ⊆ N.

Force^0_P(X) = X for P ∈ {R, V}

Force^{i+1}_R(X) = Force^i_R(X)
    ∪ {j : L(j) = R and ∃k ∈ Force^i_R(X). j → k}
    ∪ {j : L(j) = V and ∀k. if j → k then k ∈ Force^i_R(X)}

Force^{i+1}_V(X) = Force^i_V(X)
    ∪ {j : L(j) = V and ∃k ∈ Force^i_V(X). j → k}
    ∪ {j : L(j) = R and ∀k. if j → k then k ∈ Force^i_V(X)}

Force_P(X) = ∪ {Force^i_P(X) : i ≥ 0} for P ∈ {R, V}.

If vertex j ∈ Force_P(X) and the current position is j, then player P can force play
from j into X irrespective of whatever moves her opponent makes. Vertex j itself
need not belong to player P. The rank of such a vertex j is the least index i such
that j ∈ Force^i_P(X). The rank is an upper bound on the total number of moves it
takes for player P to force play into X. There is an associated strategy for player P.

For every vertex i ∈ Force_P(X) belonging to P, either i ∈ X, or there is an edge
i → k with k ∈ Force_P(X), so the strategy for P is to choose a k with the least
rank.
The definition of a force set provides a method for computing it. As i increases,
we calculate Force^i_P(X) until it is the same set as Force^{i−1}_P(X). Clearly, this must
hold when i ≤ (|N| − |X|) + 1.
Example 1 Consider the following force set, where the vertices are from Figure 6.10.

Force^0_V({12, 15}) = {12, 15}
Force^1_V({12, 15}) = {12, 15} ∪ {9}
Force^2_V({12, 15}) = {9, 12, 15} ∪ {6}
Force^3_V({12, 15}) = {6, 9, 12, 15} ∪ {11}
Force^4_V({12, 15}) = {6, 9, 11, 12, 15} ∪ ∅

So, Force_V({12, 15}) = {6, 9, 11, 12, 15}, and 11 has rank 3. The different set
Force_R({12, 15}) = {6, 9, 12, 15}.
The following result shows that removing a force set from a game leaves a
subgame.
Proposition 1 If G is a game and X is a subset of vertices, then the subgraph G − Force_P(X) is
a subgame.

Proof. Assume that G = (N, →, L) is a game and X ⊆ N, and that P is a player.
Consider the structure G′ = G − Force_P(X). G′ fails to be a subgame if there is
a vertex j in N′ such that there is no edge j →′ k. Consider any such vertex j.
Clearly, each k such that j → k belongs to Force_P(X), so there is a least index i
such that k ∈ Force^i_P(X) for each such k. But then j ∈ Force^{i+1}_P(X), and therefore
j ∈ Force_P(X), which contradicts j ∈ N′. □

We wish to provide a decision procedure for parity games that not only computes
the winner but also a winning strategy. Such a procedure can be extracted
from the proof of the following theorem.
Theorem 1 For any parity game G and vertex i, one of the players has a history-free winning
strategy for G(i).

Proof. Let G = (N, →, L) be a parity game. The proof is by induction on |N|.
The base case is when |N| = 0 and the result holds. For the inductive step, let
|N| > 0 and assume that k is the least vertex in N. Let X be the set Force_{L(k)}({k}).
If X = N, then player L(k) has a history-free winning strategy for G(i) for
each i ∈ N, by forcing play infinitely often through vertex k. More precisely, the
strategy consists of the rules "at j choose j₁" when L(j) = L(k) and where j₁ has
a least rank in Force_{L(k)}({k}) among the set {j′ : j → j′}: this strategy ensures
that k will be traversed infinitely often in every play.

Otherwise, X ≠ N. By Proposition 1, G′ = G − X is a subgame. The size of
N′ is strictly smaller than the size of N. Therefore, by the induction hypothesis,
for each j ∈ N′, player P_j has a history-free winning strategy σ_j for the game
G′(j). Partition these vertices into W′_R, the vertices won by player R, and W′_V, the
vertices won by player V, as in Figure 6.11. Notice that, with respect to G′, the set
W′_P = Force_P(W′_P) when P is R or V. The proof now consists of examining two
subcases, depending on the set Y = {j : k → j}.
Case 1 Y ∩ (W′_{L(k)} ∪ X) ≠ ∅. There is an edge k → j₁ and vertex j₁ ∈ X or j₁ ∈ W′_{L(k)},
as in Figure 6.11. Player L(k) has a history-free winning strategy for the game
G(i) for each i ∈ X ∪ W′_{L(k)}. Let π be the substrategy for L(k) that forces play from
any vertex in X to vertex k, as described earlier. Add to π the rule "at k choose j₁",
and let π′ be the resulting strategy. If j₁ ∈ X, then π′ is a history-free winning
strategy for G(i) for each i ∈ X, and σ_i ∪ π′ is a history-free winning strategy for
G(i) for any i ∈ W′_{L(k)}: the opponent may move from W′_{L(k)} into X. If j₁ ∈ W′_{L(k)},
then the strategy π′ ∪ σ_{j₁} is winning for G(i) for i ∈ X ∪ {j₁}, and σ_i ∪ π′ ∪ σ_{j₁}
is a winning strategy for G(i) for i in W′_{L(k)}. Again, the opponent may move from
W′_{L(k)} into X. The opponent O of L(k) has the history-free winning strategy σ_i for
each game G(i) when i ∈ W′_O.
Case 2 Y ∩ (W′_{L(k)} ∪ X) = ∅. This means that, for every j₁ such that k → j₁, the opponent
O of L(k) has a history-free winning strategy for G′(j₁). Let Z = Force_O(W′_O)
with respect to the full game G (see the shaded picture in Figure 6.11): notice that
Z contains the vertex k and possibly vertices from W′_{L(k)}. For each i ∈ Z player O
has a history-free winning strategy for G(i): the strategy consists of forcing play
into W′_O and then using the winning strategies determined from G′, much like we
described. Let σ_i be the history-free strategy for any i ∈ Z. If Z = N, then σ_i is
the strategy for G(i). Otherwise, consider the subgame G″ = G − Z (the unshaded
part of the game in Figure 6.11). By the induction hypothesis, for each j in G″
player P_j has a history-free winning strategy σ′_j for G″(j). If P_j = L(k), then σ′_j
is a history-free winning strategy for L(k) for the game G(j). Otherwise, P_j = O,
and player O has a history-free winning strategy for G(j). Player O uses the partial
strategy σ′_j until (if at all) player L(k) plays into the set Z, in which case player
O uses the appropriate winning strategy, which keeps the play in Z. We leave the
details to the reader. □

The proof of Theorem 1 contains a recursive algorithm for model checking,
which also computes winning strategies. Notice that a winning strategy is linear in
the size of the game. The decision question is, given a parity game G = (N, →, L),
determine for each vertex i which player wins G(i), and what the winning strategy
is. Below is a summary of the algorithm that leaves out the computation of winning
strategies, which the reader can add. It computes the sets W_R and W_V, which are
the vertices that player R wins and the vertices that player V wins.
1. Let k be the least vertex in N, let X = Force_{L(k)}({k}) and let O be the opponent
of L(k)
2. If X = N, then return W_{L(k)} = N and W_O = ∅
FIGURE 6.11. Cases in Theorem 1

3. Else solve the subgame G − X, and let W′_R and W′_V be the winning vertices
for R and V in the subgame. Let Y = {j : k → j}
a. If Y ∩ (W′_{L(k)} ∪ X) ≠ ∅, then return W_{L(k)} = W′_{L(k)} ∪ X and also W_O = W′_O
b. Else let Z = Force_O(W′_O). Solve the subgame G − Z, and let W″_R and W″_V
be the winning vertices for R and V in the subgame. Return W_O = Z ∪ W″_O
and W_{L(k)} = W″_{L(k)}
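The summarised procedure can be rendered as a short recursive program. The sketch below is one possible reading, not the book's own code: the representation (successor sets over integer vertices ordered by `min`, labels 'R'/'V' naming both the mover and the winner if that vertex is the least one visited infinitely often, and the helper names `force`, `subgame`, `solve`) is assumed for illustration.

```python
def force(succ, label, P, X):
    """Force_P(X): vertices from which player P can drive play into X."""
    F = set(X)
    changed = True
    while changed:
        changed = False
        for j in succ:
            if j in F:
                continue
            if (label[j] == P and any(k in F for k in succ[j])) or \
               (label[j] != P and all(k in F for k in succ[j])):
                F.add(j)
                changed = True
    return F

def subgame(succ, label, X):
    """G - X: remove the vertices in X and every edge touching them."""
    return ({i: succ[i] - X for i in succ if i not in X},
            {i: label[i] for i in label if i not in X})

def solve(succ, label):
    """Return (W_R, W_V), the sets of vertices won by each player."""
    if not succ:
        return set(), set()
    k = min(succ)                      # step 1: least vertex
    P = label[k]
    O = 'V' if P == 'R' else 'R'
    X = force(succ, label, P, {k})
    if X == set(succ):                 # step 2: P forces k infinitely often
        return (X, set()) if P == 'R' else (set(), X)
    WR, WV = solve(*subgame(succ, label, X))        # step 3
    WP1, WO1 = (WR, WV) if P == 'R' else (WV, WR)
    if succ[k] & (WP1 | X):            # step 3a
        WP, WO = WP1 | X, WO1
    else:                              # step 3b
        Z = force(succ, label, O, WO1)
        WR2, WV2 = solve(*subgame(succ, label, Z))
        WP = WR2 if P == 'R' else WV2
        WO = Z | (WV2 if O == 'V' else WR2)
    return (WP, WO) if P == 'R' else (WO, WP)
```

For instance, on the two-vertex game with edges 1 → 2 and 2 → 1 and labels L(1) = V, L(2) = R, every play cycles through the least vertex 1, so `solve` awards both vertices to V.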
The algorithm in Theorem 1 is exponential in the size of N because it may call
the solve procedure twice on games of smaller size, once at stage 3 and then again
at stage 3b. It is an open question whether there is a polynomial time algorithm for
this problem. We shall now look at subcases where model checking can be done
more quickly.
The first case is for alternation free formulas, the sublogic μMA that contains
μM1 and also CTL. Recall that if Φ is a CTL formula, then the game G(E, Φ)
has the property that, in any infinite length run, there is just one until formula, or
the negation of an until formula, occurring infinitely often. More generally (see
Proposition 2 of Section 6.4), if Φ belongs to μMA, then the game G_V(E, Φ) has
the property that, in any infinite length run, all the variables that occur infinitely
often are of the same kind. This means that the resulting parity game G_V[E, Φ]
has a pleasant structure. Its vertices can be partitioned into a family of subsets
N₁, …, N_n such that the following properties hold.
1. N_i ∩ N_j = ∅ when i ≠ j
2. if any play is at a vertex in N_j, then the play cannot return to a vertex in N_i
when i < j
3. for each i, all plays that remain in N_i forever are always won by the same
player
For such a parity game there is a straightforward iterative linear time algorithm that
finds the winner as follows. Start with the subgame whose vertices are N_n. This
must be a subgame, for there are no edges from N_n to N_j, j < n. One of the players
wins every vertex of N_n. Suppose it is player P. Let X_n = Force_P(N_n). Player P
wins every vertex in X_n. Let G′ = G − X_n and assume its vertex set is N′. This is
a subgame. Consider the non-empty sets in the partition N₁ ∩ N′, …, N_{n−1} ∩ N′.
These sets will inherit the properties above. Consider the final such set. One of the
players wins every vertex in this set, and so on.
The second case we consider is when one of the players never has a choice of
move. The game obeys the following condition for one of the players P.

for any vertex i, if L(i) = P and i → j and i → k, then j = k

Such games display a subgame invariance property for the opponent O. Let Q be one
of the players, and let G′ = G − Force_Q(X). If player O wins vertex i ∈ G′, then
she also wins i ∈ G, because player P cannot escape from G′ into Force_Q(X).

Therefore, we can use Theorem 1 to provide a polynomial time algorithm for this
case.

Exercises 1. Consider vertices in the parity game of Figure 6.10. Work out the following
force sets.
a. Force_V({18})
b. Force_R({18})
c. Force_V({13, 15, 12})
d. Force_R({13, 15, 12})
2. Use the algorithm from Theorem 1 to decide who wins each vertex, and what
the winning strategy is, for the game in Figure 6.10.
3. Emerson, Jutla and Sistla show that the model checking decision problem
belongs to NP ∩ co-NP because deciding who wins a parity game belongs to
NP, and model checking is closed under complement [20]. Give a proof that
deciding who wins a parity game belongs to NP.
4. Prove that, if Φ ∈ μMA, then the parity game G_V[E, Φ] has the pleasant
structure described. That is, its vertices can be partitioned into a family of
subsets N₁, …, N_n such that the following properties hold.
a. N_i ∩ N_j = ∅ when i ≠ j
b. if any play is at a vertex in N_j, then the play cannot return to a vertex in
N_i when i < j
c. for each i, all plays that remain in N_i forever are always won by the same
player
5. Assume the parity game G obeys the following condition for player P whose
opponent is O:
for any vertex i, if L(i) = P and i → j and i → k, then j = k.
Prove that, if Q is one of the players, and G′ = G − Force_Q(X), and player O
wins vertex i ∈ G′, then she also wins i ∈ G. Use this fact to provide a quick
algorithm for deciding parity games obeying this condition.
7

Exposing Structure

7.1 Infinite state systems 164
7.2 Generalising satisfaction 165
7.3 Tableaux I 168
7.4 Tableaux II 173

In the previous chapter, an alternative characterization of when a process has a
property was presented in terms of games. Simple property checking games were
presented where the verifier has a winning strategy for a game if, and only if, the
process satisfies the formula. When the state space of a finite state process is large,
a proof using games may become unwieldy. Moreover, we should like to be able to
cope with processes that have infinite state spaces. Plays of games involving such
processes may have infinitely many different positions. The question then arises as
to when there can be a finitely presented strategy, as a summary of the successful
strategy for a player.
There is another reason for examining more general ideas. Often we are
interested in showing properties of schematic or parameterised processes. Processes
involving value passing constitute one such family. Another kind of example is
illustrated by the scheduler Sched_n of Section 1.4, which is parametric on the size
of the cycle n. The techniques of the previous chapter allow us to prove that, for
instance, Sched₃₂ is free from deadlock. However, they do not allow us to directly
show that Sched_n for any n > 1 is deadlock free.

C. Stirling, Modal and Temporal Properties of Processes


© Springer Science+Business Media New York 2001

7.1 Infinite state systems


Process definitions may involve explicit parameterisation. Examples include the
counters Ct_i and registers Reg_i of Section 1.1, and the slot machines SM_n of
Section 1.2. Each instantiation of these processes, when i and n are set to a particular
value, is itself infinite state and contains the other family members within its
transition graph. However, the parameterisation is very useful because it reveals
straightforward structural similarities within these families of processes.
Another class of processes is infinite state owing entirely to the presence
of data values. This family includes the copier Cop and the processes T(i) of
Section 1.1 and the protocol Protocol of Section 1.2. However, there are different
degrees of involvement of data within these processes, depending on the extent
to which data determines future behaviour. At one extreme are examples such as
Cop and Protocol, which pass data items through the system oblivious to their
values. Different authors have identified classes of processes that are in this sense
data independent. At the other extreme are systems such as T(i), where future
behaviour strongly depends on the value i. In between are systems such as the
registers Reg_i, where particular values are essential to change of state.
A third class of processes is infinite state independently of parameterisation
and data values. An instance is the counter Count of Section 1.5, which evolves its
structure as it performs actions. In certain cases, processes that are infinite state,
in that they determine an infinite state transition graph, are in fact bisimulation
equivalent to a finite state process. A simple example is that C and C′ are bisimilar,
where these processes are as follows.

C ≝ a.C | b.C
C′ ≝ a.C′ + b.C′

Although the structure of C evolves through behaviour, C −a→ C | b.C for example,
and therefore the transition graph for C is infinite state, each state of the graph
is bisimulation equivalent to the starting state, and hence to C′. A more general
interesting subclass of processes consists of those that can be infinite state, but
for which bisimulation equivalence is decidable. Two examples are context free
processes and basic parallel processes¹; see Hirshfeld and Moller for a survey of
results [30].
A final class of systems is also parameterised. However, for each instance of
the parameter the system is finite state. Two paradigm examples are the buffer
Buff_n and the scheduler Sched_n, both from Section 1.4. Although the techniques
for verification of temporal properties apply to instances, they do not apply to the
general families. In such cases, we should like to prove properties generally, to
show for instance that Sched_n is free from deadlock for each n > 1. The proof of

¹The processes C and C′ are basic parallel processes.



this requires us to expose structure that is common to the whole family of processes.
In this case, of freedom from deadlock, the property itself is not parameterised.
Much more complex is the situation wherein it is. An example is again the case
of the scheduler, that Sched_n has the property Cycle(a₁ … a_n) for every n > 1
(where the parameterised property is defined in Section 5.7).

Exercises 1. a. Provide a definition for when a value passing process is data independent.
b. Can you generalise your definition to when a value passing process
only depends on finitely many different values (and so is "almost" data
independent)?
Compare your definitions with those of Jonsson and Parrow [32].
2. Show that C ∼ C′, where these processes are given above.
3. Prove the following for all n > 1.
a. Sched_n ⊨ νZ. ⟨−⟩tt ∧ [−]Z
b. Sched_n ⊨ Cycle(a₁ … a_n)
4. Assume Cy_n ≝ a₁. … .a_n.Cy_n for any n > 1.
a. Prove that Sched_n \ {b₁, …, b_n} ∼ Cy_n for all n > 1
b. Now provide a definition of a parameterised bisimulation relation
covering the example in part (a)

7.2 Generalising satisfaction


The satisfaction relation between an individual process and a property is not
sufficiently general to capture properties of parameterised processes. Although it allows
us to express, for instance, Sched_n ⊨ νZ. ⟨−⟩tt ∧ [−]Z, that Sched_n is deadlock
free, it does not allow the more general claim that Sched_n for all n > 1 has this
property. A straightforward remedy is to generalise the satisfaction relation so that
it holds between a family of processes and a formula. We use the same relation
⊨_V for this extension.

ℰ ⊨_V Φ iff E ⊨_V Φ for all E ∈ ℰ

We write ℰ ⊨ Φ when the formula contains no free variables. The more general
claim that the schedulers are deadlock free is therefore represented as follows.

{Sched_n : n > 1} ⊨ νZ. ⟨−⟩tt ∧ [−]Z

This generalisation does not cover all possibilities discussed in the previous section.
In particular, we leave as an exercise the situation wherein the property is also
parameterised.

Example 1 The family of counters of Figure 1.4, {Ct_i : i ≥ 0}, has the property
[up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt). The following "proof" uses properties
of the generalised satisfaction relation, which are formalised precisely below.

{Ct_i : i ≥ 0} ⊨ [up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt)
iff {Ct_i : i ≥ 1} ⊨ [round]ff ∧ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ [round]ff and {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 2} ⊨ ⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ ⟨down⟩tt
iff {Ct_i : i ≥ 0} ⊨ tt

Arguably, this is a more direct proof than an appeal to induction on process indices.
The reader is invited to prove it inductively, and expose the required induction
hypothesis.
Example 1 uses various features of the satisfaction relation between sets of
processes and formulas. The case when a formula is a conjunction of two subformulas
is the most straightforward: ℰ ⊨_V Φ ∧ Ψ iff ℰ ⊨_V Φ and also ℰ ⊨_V Ψ.
Disjunction is more complicated. If ℰ ⊨_V Φ ∨ Ψ, then this does not imply that ℰ ⊨_V Φ
or ℰ ⊨_V Ψ. A simple illustration is that, although {0, a.0} ⊨ ⟨a⟩tt ∨ [a]ff, it is
neither the case that {0, a.0} ⊨ ⟨a⟩tt, nor is it the case that {0, a.0} ⊨ [a]ff. In
general, if ℰ ⊨_V Φ ∨ Ψ, then some processes in ℰ may have the property Φ and
the rest have the property Ψ. Therefore, ℰ can be split into two subsets ℰ₁ and ℰ₂
in such a way that ℰ₁ ⊨_V Φ and ℰ₂ ⊨_V Ψ. One of these sets could be empty².
To understand the situation when a set of processes satisfies a modal formula,
a little notation is introduced. If K is a set of actions, and ℰ a set of processes, then
K(ℰ) is the following set.

K(ℰ) ≝ {F : E −a→ F for some E ∈ ℰ and a ∈ K}

Therefore, K(ℰ) is the set of processes reachable by K transitions from members
of ℰ. For instance, {up}({Ct_i : i ≥ 0}) in Example 1 is the set {Ct_i : i ≥ 1}.
Clearly, ℰ ⊨_V [K]Φ iff K(ℰ) ⊨_V Φ. This principle is used in Example 1. One
case³ is {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt iff
{up}({Ct_i : i ≥ 1}) ⊨ ⟨down⟩⟨down⟩tt.
A function f : ℰ → K(ℰ) that maps processes in ℰ into processes in K(ℰ) is
a "choice function" if, for each E ∈ ℰ, there is an a ∈ K such that E −a→
f(E). If f : ℰ → K(ℰ) is a choice function, then f(ℰ) is the set of processes

²By definition ∅ ⊨_V Φ for any Φ.

³Another use of the principle in Example 1 is that {Ct_j : j ≥ 1} ⊨ [round]ff because {round}({Ct_j : j ≥ 1}) = ∅ and ∅ ⊨ ff.

ℰ ⊨_V Z iff ℰ ⊆ V(Z)
ℰ ⊨_V Φ ∧ Ψ iff ℰ ⊨_V Φ and ℰ ⊨_V Ψ
ℰ ⊨_V Φ ∨ Ψ iff ℰ = ℰ₁ ∪ ℰ₂ and ℰ₁ ⊨_V Φ and ℰ₂ ⊨_V Ψ
ℰ ⊨_V [K]Φ iff K(ℰ) ⊨_V Φ
ℰ ⊨_V ⟨K⟩Φ iff there is a choice function f : ℰ → K(ℰ)
             and f(ℰ) ⊨_V Φ
ℰ ⊨_V σZ. Φ iff ℰ ⊨_V Φ{σZ. Φ/Z}

FIGURE 7.1. Semantics for ℰ ⊨_V Φ

{f(E) : E ∈ ℰ}. Example 1 uses the choice function f : {Ct_i : i ≥ 2} →
{down}({Ct_i : i ≥ 2}), where f(Ct_{i+1}) = Ct_i. Choice functions are appealed to
when understanding satisfaction in the case of ⟨K⟩ modal formulas: ℰ ⊨_V ⟨K⟩Φ
iff there is a choice function f : ℰ → K(ℰ) such that f(ℰ) ⊨ Φ. Consequently,
in Example 1, {Ct_i : i ≥ 1} ⊨ ⟨down⟩tt iff {Ct_i : i ≥ 0} ⊨ tt.
The final cases to examine are the fixed points. We shall make use of the
principles developed in the previous chapter using games. Notice, however, that
the fixed point unfolding principle, Proposition 2 of Section 5.1, holds for the
generalised satisfaction relation. In Figure 7.1 we summarise the conditions for
satisfaction between a family of processes and a formula.
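For finite families and closed, fixed-point-free formulas, the clauses of Figure 7.1 can be executed directly. The Python sketch below is illustrative only: the transition table, the tuple encoding of formulas, and the names `K` and `sat` are all invented. It exploits the fact that, for closed formulas, both the disjunction split and the existence of a choice function can be computed pointwise.

```python
# Processes are names in a transition table: process -> action -> successors.
trans = {'0': {}, 'a.0': {'a': {'0'}}}

def K(S, acts):
    """K(E): all processes reachable from members of S by one action in K."""
    return {F for E in S for a in acts for F in trans[E].get(a, set())}

def sat(S, phi):
    """S |= phi, following Figure 7.1 (closed, fixed-point-free formulas)."""
    op = phi[0]
    if op == 'tt':
        return True
    if op == 'ff':
        return S == set()          # only the empty family satisfies ff
    if op == 'and':
        return sat(S, phi[1]) and sat(S, phi[2])
    if op == 'or':
        # split S into S1 |= phi1 and S2 |= phi2 (either part may be empty);
        # for closed formulas the split can be chosen pointwise
        S1 = {E for E in S if sat({E}, phi[1])}
        return sat(S - S1, phi[2])
    if op == 'box':
        return sat(K(S, phi[1]), phi[2])
    if op == 'dia':
        # a choice function exists iff every member has a suitable successor
        return all(any(sat({F}, phi[2]) for F in K({E}, phi[1]))
                   for E in S)
    raise ValueError(op)

# {0, a.0} |= <a>tt v [a]ff, although neither disjunct holds of the whole set.
```

Running `sat` on the family `{'0', 'a.0'}` reproduces the disjunction example from the text: the whole family satisfies `⟨a⟩tt ∨ [a]ff`, yet satisfies neither disjunct on its own.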

Exercises 1. Prove Example 1 using the principles of Figure 7.1. Also provide a proof that
Ct_i ⊨ [up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt) for all i ≥ 0 using induction
on i. What induction hypothesis do you appeal to?
2. Show the following
a. {Ct_i : i ≥ 0} ⊭ [up]⟨down⟩⟨down⟩tt
b. {T(i) : i ≤ 10} ⊨ μY. ⟨−⟩tt ∧ [−out(1)]Y
c. {Sched_n : n > 1} ⊨ Cycle(a₁a₂)
where T(i) is defined in Section 1.1.
3. Can you show the following?

{Sem_n : n ≥ 1} ⊨ νZ. [get](μY. ⟨−⟩tt ∧ [−put]Y) ∧ [−]Z,

where Sem ≝ get.put.Sem and

Sem₁ = Sem
Sem_{i+1} = Sem | Sem_i, i ≥ 1.

4. Assume a copier shared by n users, where n ≥ 2, given as a system Sys
which is (Cop | User₁ | … | User_n)\{in}, whose components are defined as

follows.

Cop ≝ in(x).out(x).Cop
User_i ≝ write(x).in(x).User_i

Demonstrate that this system has the property that, if v₁ and then v₂ are written,
then the output of either of these values may be the next observable action.
5. Develop similar principles to those in Figure 7.1 that also allow parameterisation
of formulas. Can you prove the following using these principles, for all
n > 1?

Sched_n ⊨ Cycle(a₁ … a_n)

7.3 Tableaux I
In the previous chapter, games were developed for checking properties of individual
processes. Our interest is now in checking properties of sets of processes. Instead of
defining games to do this, we provide a tableau proof system for proving properties
of sets of processes. A tableau proof system is goal directed, similar in spirit to
the mechanism in Chapter 1 for deriving transitions. Its main ingredient is a finite
family of proof rules that allow reduction of goals to subgoals. Tableaux have
been used for property checking of both finite state and infinite state systems; see
Bradfield, Cleaveland, Larsen and Walker [8, 10, 11, 13, 36, 55].
A goal has the form ℰ ⊢_V Φ, where ℰ is a set of processes, Φ is a normal
formula as defined in Section 5.2 and V is a valuation. The intention is to try to
achieve the goal ℰ ⊢_V Φ, that is, to show that it is true, that ℰ ⊨_V Φ. The proof
rules allow us to reduce goals to subgoals, and each rule has one of the following
two forms:

ℰ ⊢_V Φ              ℰ ⊢_V Φ
─────────  C     ──────────────────  C
ℱ ⊢_V Ψ          ℱ₁ ⊢_V Ψ₁   ℱ₂ ⊢_V Ψ₂

where C is a side condition. In both cases, the premise, ℰ ⊢_V Φ, is the goal that
we are trying to achieve (that ℰ has the property Φ relative to V), and the subgoals
in the consequent are what the goal reduces to.
The tableau proof rules are presented in Figure 7.2. Other than Thin, each rule
operates on the main logical connective of the formula in the goal. The logical rules
for boolean and modal connectives follow the stipulations described in Figure 7.1.
For instance, to establish the goal ℰ ⊢_V Φ ∧ Ψ, it is necessary to establish that ℰ
has the property Φ, and that ℰ has the property Ψ, both relative to V. A fixed point
formula is abbreviated by its bound variable, since we are dealing with normal
formulas. The rule for a bound variable is just the fixed point unfolding rule. Thin
is a structural rule, which allows the set of processes in a goal to be expanded.
The rules are backwards sound, which means that, if all the subgoals in the
consequent of a rule are true, then so is the premise goal. The stipulations introduced

∧     ℰ ⊢_V Φ ∧ Ψ
      ───────────────────
      ℰ ⊢_V Φ   ℰ ⊢_V Ψ

∨     ℰ ⊢_V Φ ∨ Ψ
      ───────────────────  ℰ = ℰ₁ ∪ ℰ₂
      ℰ₁ ⊢_V Φ   ℰ₂ ⊢_V Ψ

[K]   ℰ ⊢_V [K]Φ
      ───────────────────
      K(ℰ) ⊢_V Φ

⟨K⟩   ℰ ⊢_V ⟨K⟩Φ
      ───────────────────  f : ℰ → K(ℰ) is a choice function
      f(ℰ) ⊢_V Φ

σZ    ℰ ⊢_V σZ. Φ
      ───────────────────
      ℰ ⊢_V Z

Z     ℰ ⊢_V Z
      ───────────────────  Z identifies the subformula σZ. Φ
      ℰ ⊢_V Φ

Thin  ℰ ⊢_V Φ
      ───────────────────  ℰ ⊆ ℱ
      ℱ ⊢_V Φ

FIGURE 7.2. Tableau rules

in the previous section justify this for the boolean and modal operators, and in the
case of the fixed point rules this follows from the fixed point unfolding property,
Proposition 2 of Section 5.1. Backwards soundness of Thin is also clear because,
if ℱ ⊨_V Φ and ℰ ⊆ ℱ, then ℰ ⊨_V Φ.
The other ingredient of a tableau proof system is when a goal counts as a
"terminal" goal, so that no rule is then applied to it. As we shall see, terminal goals
are either "successful" or "unsuccessful". To show that all the processes in ℰ have
the property Φ relative to V, we try to achieve the goal ℰ ⊢_V Φ by building a
successful tableau whose root is this initial goal. A successful tableau is a finite
proof tree whose root is the initial goal, and all of whose leaves are successful terminal
goals. Intermediate subgoals of the proof tree are determined by an application of
one of the rules to the goal immediately above them.
The definition of when a goal in a tableau is terminal is underpinned by the game
theoretic characterization of satisfaction. A tableau represents a family of games.
Each process in the set of processes of a goal determines a play. The definition of
when a goal ℱ ⊢_V Ψ in a proof tree is terminal is presented in Figure 7.3. Clearly,
goals fulfilling 1 or 2 are correct. For instance, ℱ ⊭_V ⟨K⟩Φ if there is an F ∈ ℱ
with F ⊭_V ⟨K⟩Φ. The justification for the success of condition 3 in the case of a
successful terminal follows from considering any infinite length game play from
a process in ℰ with respect to the property Z that "cycles" through the terminal
goal of the proof tree ℱ ⊢_V Z. Because ℱ ⊆ ℰ, the play jumps back to the
companion goal (the goal ℰ ⊢_V Z). Such an infinite play must pass through the
variable Z infinitely often, and because Z subsumes any other variable that also
occurs infinitely often, and Z identifies a maximal fixed point formula, the play is
therefore a win for the verifier. The justification for the failure of condition 3 in the
case of an unsuccessful leaf is more involved, and will be properly accounted for in
the next section when correctness of the proof method is shown. However, notice
that if ℰ = ℱ, then any infinite length play from a process in ℰ that repeatedly

Successful terminal ℱ ⊢_V Ψ

1. Ψ = tt, or Ψ = Z and Z is free in the initial formula and ℱ ⊆ V(Z)
2. ℱ = ∅
3. Ψ = Z and there is a goal ℰ ⊢_V Z above ℱ ⊢_V Z, such that there is a path as
follows

ℰ ⊢_V Z
  ⋮        at least one application of a rule other than Thin
ℱ ⊢_V Z

and ℰ ⊇ ℱ and Z identifies a maximal fixed point formula, νZ. Φ, and for any
other fixed point variable Y on this path, Z subsumes Y

Unsuccessful terminal ℱ ⊢_V Ψ
1. Ψ = ff, or Ψ = Z and Z is free in the initial formula and ℱ ⊈ V(Z)
2. Ψ = ⟨K⟩Φ and for some F ∈ ℱ, K({F}) = ∅
3. Ψ = Z and there is a goal ℰ ⊢_V Z above ℱ ⊢_V Z, such that there is a path as
follows

ℰ ⊢_V Z
  ⋮        at least one application of a rule other than Thin
ℱ ⊢_V Z

and ℰ ⊆ ℱ and Z identifies a minimal fixed point formula, μZ. Φ, and for any
other fixed point variable Y on this path, Z subsumes Y

FIGURE 7.3. Terminal goals in a tableau

cycles through the terminal goal ℱ ⊢_V Z (with the play repeatedly jumping to the
companion goal) is won by the refuter.
A successful tableau for ℰ ⊢_V Φ is a finite proof tree, all of whose terminal
goals are successful. A successful tableau only contains true goals. The proof of
this is deferred until the next section.
Proposition 1 If ℰ ⊢_V Φ has a successful tableau, then ℰ ⊨_V Φ.
However, as the proof system stands, the converse is not true. A further termination
condition is needed. But this condition is a little complex, so we defer its discussion
until the next section. Instead, we present various examples that can be proved
without it. Below we often write E ⊢_V Φ instead of {E} ⊢_V Φ, and also drop the
index V when it is not germane to the proof.

Example 1 The model checking game showing that Cnt has the property νZ. ⟨up⟩Z consists
of an infinite set of positions. Cnt is the infinite state process Cnt ≝ up.(Cnt |
down.0). There is a very simple proof of the property using tableaux. Let Cnt₀ be
Cnt and let Cnt_{i+1} be Cnt_i | down.0 for i ≥ 0.

    Cnt ⊢ νZ. ⟨up⟩Z
    {Cnt_i : i ≥ 0} ⊢ νZ. ⟨up⟩Z
(*) {Cnt_i : i ≥ 0} ⊢ Z
    {Cnt_i : i ≥ 0} ⊢ ⟨up⟩Z
    {Cnt_i : i ≥ 1} ⊢ Z

Thin is used immediately to extend the set of processes from Cnt to Cnt_i for all
i ≥ 0. The application of the ⟨up⟩ rule employs the choice function f, which we
have left implicit in the proof and which maps each Cnt_i to Cnt_{i+1}. The final goal
is a successful terminal, by condition 3 of Figure 7.3, owing to the goal (*) above
it.
The tableau proof system also applies to finite state processes, as illustrated in
the next example.
Example 2 We consider the process D described in Figure 5.1 and the following property Φ:

νZ. μY. [a]((⟨b⟩tt ∧ Z) ∨ Y).

Process D has the property Φ. A successful tableau proving this is given in
Figure 7.4. The terminal goals are all successful. The goal (**) is terminal because
of its companion (*). Although both variables Z and Y occur on the path between
these goals, Z subsumes Y and Z identifies a maximal fixed point formula; therefore,
the terminal goal (**) is successful. Notice that the goal 2 : D ⊢ Y is not a
leaf, even though there is the goal 1 : D ⊢ Y above it: this is because Z occurs on
the path between these goals, at (*), and Y does not subsume Z.
Example 3 Recall the level crossing of Figure 1.10. Its safety property is expressed as
νZ. ([tcross]ff ∨ [ccross]ff) ∧ [−]Z. Let Φ be this formula. We employ the
abbreviations in Figure 1.12, and let ℰ be the full set {Crossing} ∪ {E₁, …, E₁₁}.

D ⊢ Φ
D ⊢ Z
D ⊢ μY.[a]((⟨b⟩tt ∧ Z) ∨ Y)
1 : D ⊢ Y
D ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
D' ⊢ (⟨b⟩tt ∧ Z) ∨ Y
D' ⊢ ⟨b⟩tt ∧ Z
D' ⊢ ⟨b⟩tt          (*) D' ⊢ Z
D ⊢ tt              D' ⊢ μY.[a]((⟨b⟩tt ∧ Z) ∨ Y)
                    D' ⊢ Y
                    D' ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
                    D ⊢ (⟨b⟩tt ∧ Z) ∨ Y
                    2 : D ⊢ Y
                    D ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
                    D' ⊢ (⟨b⟩tt ∧ Z) ∨ Y
                    D' ⊢ ⟨b⟩tt ∧ Z
                    D' ⊢ ⟨b⟩tt    (**) D' ⊢ Z
                    D ⊢ tt
FIGURE 7.4. A successful tableau for D ⊢ Φ

Below is a successful tableau showing that the crossing has this property.

Crossing ⊢ Φ
E ⊢ Φ
E ⊢ Z
E ⊢ ([tcross]ff ∨ [ccross]ff) ∧ [−]Z
E ⊢ [tcross]ff ∨ [ccross]ff          E ⊢ [−]Z
E − {E5, E7} ⊢ [tcross]ff    E − {E4, E6} ⊢ [ccross]ff    E ⊢ Z
∅ ⊢ ff    ∅ ⊢ ff

Notice the essential use of the Thin rule at the first step.
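The safety property that this tableau establishes can also be checked by direct greatest fixed point iteration, which the tableau compresses into a few goals. The sketch below is hypothetical: the three-state transition system is an invented stand-in, not the actual crossing states E1, ..., E11 of Figure 1.12.

```python
# A toy transition system standing in for the crossing (assumption:
# the real Figure 1.12 system is not reproduced here).
trans = {
    "s0": [("car", "s1"), ("train", "s2")],
    "s1": [("ccross", "s0")],
    "s2": [("tcross", "s0")],
}

def box_ff(state, action):
    # state satisfies [action]ff iff it has no `action`-transition
    return all(a != action for a, _ in trans[state])

def safe_states():
    # Greatest fixed point of Z = ([tcross]ff or [ccross]ff) and [-]Z,
    # computed by iterating downwards from the full state space.
    z = set(trans)
    while True:
        z2 = {s for s in z
              if (box_ff(s, "tcross") or box_ff(s, "ccross"))
              and all(t in z for _, t in trans[s])}
        if z2 == z:
            return z
        z = z2

print(safe_states() == set(trans))  # True: every state is safe
```

Here the iteration stabilizes immediately at the full state space, which is exactly what the single application of Thin to the set E achieves in the tableau.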

Exercises 1. Recall the definitions of [ ], [K], ⟨ ⟩ and ⟨K⟩ as fixed points in Section 5.3.
Provide tableau proofs of the following:

a. {Divn : n ≥ 0} ⊨ [ ][a]ff
b. Crossing ⊨ [car][train](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt)
c. Protocol ⊨ [in(m)](⟨⟨out(m)⟩⟩tt ∧ [{in(m) : m ∈ D}]ff),

where Divi for each i is defined in Section 2.4.


2. Prove {Schedn : n ≥ 1} ⊨ νZ.⟨−⟩tt ∧ [−]Z.
3. The safety property of the slot machine in Figure 1.15, that the machine never
pays out more than it has in its bank, as described in example 1 of Section 5.7,
is given by the formula νZ. Q ∧ [−]Z relative to the valuation V that assigns
the set P − {SMj : j < 0} to the variable Q. Give a successful tableau
demonstrating that the slot machine SMn has this property.
4. Give successful tableaux for the following:

a. Cl1 ⊨ νZ.⟨tick⟩⟨tock⟩Z
b. Cl1 ⊨ μY.⟨tock⟩tt ∨ [−]Y
c. T(19) ⊢ μY.⟨−⟩tt ∧ [−out(1)]Y
d. Sem | Sem | Sem ⊢ νZ.[get](μY.⟨−⟩tt ∧ [−put]Y) ∧ [−]Z
e. As | As ⊨ μZ.⟨−⟩tt ∧ [−a]Z,

where As is defined in example 2 of Section 5.7.

7.4 Tableaux II
In the previous section, a tableau proof system was presented for showing that
sets of processes have properties. The idea is to build proofs around goals of the
form E ⊢V Φ, which, if successful, show that each process in E has the property
Φ relative to V. As it stands, the proof system is not complete.
An example for which there is not a successful tableau is given by the following
cell, from example 2 of Section 5.5.

C    def=  in(x).Bx      where x : N
Bn+1 def=  down.Bn      for n ≥ 0

This cell has the property of eventual termination, μZ.[−]Z. The only possible
tableau for C ⊢ μZ.[−]Z, up to inessential applications of Thin, is as follows.

C ⊢ μZ.[−]Z
C ⊢ Z
C ⊢ [−]Z
1 : {Bi : i ≥ 0} ⊢ Z
2 : {Bi : i ≥ 0} ⊢ [−]Z
3 : {Bi : i ≥ 0} ⊢ Z

The goal 3 is terminal because of the repeat goal 1, and it is unsuccessful because
Z identifies a least fixed point formula. However, the verifier wins the game
G(C, μZ.[−]Z), as the reader can verify. One solution to the problem is to permit
induction on top of the current proof system. The goal labelled 1 is provable using
induction on i. There is a successful tableau for the base case B0 ⊢ Z, and assuming
successful tableaux for Bi ⊢ Z for all i ≤ n, it is straightforward to show
that there is also one for Bn+1 ⊢ Z. However, we wish to avoid explicit induction
principles. Instead, we shall present criteria for success that capture player V's
winning strategy. This requires one more condition for termination.
The additional circumstance for being a terminal goal of a proof tree concerns
least fixed point variables. A goal F ⊢V Z is also a terminal if it obeys the (almost
repeat) condition of Figure 7.5. This circumstance is very similar to condition 3 of
Figure 7.3 of the previous section for being an unsuccessful terminal goal, except
that here F ⊆ E. It is also similar to condition 3 for being a successful terminal,
except that here Z identifies a least fixed point variable. Not all terminal goals that
obey this new condition are successful. The definition of success (taken essentially
from Bradfield and the author [11]) is intricate, and requires some notation.

New terminal  F ⊢V Z

4. There is a goal E ⊢V Z above F ⊢V Z such that there is a path as follows

   E ⊢V Z
     |    at least one application of a rule other than Thin
   F ⊢V Z

and E ⊇ F, and Z identifies a minimal fixed point formula μZ.Φ, and for any
other fixed point variable Y on this path, Z subsumes Y

FIGURE 7.5. New terminal goal in a tableau



A terminal goal that obeys condition 3 of Figure 7.3 for being a successful
terminal, or the new terminal condition of Figure 7.5, is called a "σ-terminal,"
where σ may be instantiated by ν or μ, depending on the fixed point variable Z.
A node of a tableau (which is just a proof tree) contains a goal, and we use
boldface numbering to indicate nodes. For instance, n : E ⊢V Φ picks out the
node n of the tableau that has the goal E ⊢V Φ. Suppose node n' is an immediate
successor of n, and n contains the goal E ⊢V Φ and n' contains E' ⊢V Φ'.

n : E ⊢ Φ
. . . n' : E' ⊢ Φ' . . .

The . . . on both sides of the bottom goal suggest that there may be another subgoal
in one of these positions. A game play proceeding through (E, Φ), where E ∈ E,
can have as its next configuration (E', Φ'), where E' ∈ E', provided the rule applied
at n is not the rule Thin. Which processes E' ∈ E' can be in this next
configuration depends on the structure of Φ. This motivates the following notion.
We say that E' ∈ E' at n' is a "dependant" of E ∈ E at n if

• the rule applied to n is ∧, ∨, σZ, Z or Thin, and E = E', or
• the rule is [K] and E --a--> E' for some a ∈ K, or
• the rule is ⟨K⟩ and E' = f(E), where f is the choice function.
All the possibilities are covered here. An example is that each Bi at node 2 in
the earlier tableau is a dependant of the same Bi at node 1, and each Bi at 1 is a
dependant of C at the node directly above 1, since the rule applied is the [K] rule,
and for each Bi there is the transition C --in(i)--> Bi.
The "companion" of a σ-terminal is the most recent node above it that makes
it a terminal. (There may be more than one node above a σ-terminal that makes it
a terminal, hence we take the lowest.) Next, we define the notion of a "trail."
Definition 1 Assume that node nk is a μ-terminal and node n1 is its companion. A trail from
process E1 at n1 to Ek at nk is a sequence of pairs of nodes and processes
(n1, E1), ..., (nk, Ek) such that for all i with 1 ≤ i < k either

1. Ei+1 at ni+1 is a dependant of Ei at ni, or
2. ni is the immediate predecessor of a σ-terminal node n' (where n' is different
from nk) whose companion is nj for some j : 1 ≤ j ≤ i, and ni+1 = nj and
Ei+1 at n' is a dependant of Ei at ni.
A simple trail from B2 at 1 to B1 at 3 in the earlier tableau is

(1, B2) (2, B2) (3, B1).

Here B2 at 2 is a dependant of B2 at 1, and B1 at 3 is a dependant of B2 at 2. Condition
2 of Definition 1 is necessary to take account of the possibility of embedded fixed
points, as pictured in Figure 7.6. A trail from (n1, E1) to (nk, Ek) may pass through
nj repeatedly before continuing to nk via ni. In this case, nk is a μ-terminal,
meaning Z identifies a least fixed point formula. However, nl may be either a μ- or
a ν-terminal: in both cases F1 ⊆ E1 and in both cases Z must subsume Y because

n1 : E ⊢ Z

nj : E1 ⊢ Y

nk : F ⊢ Z        nl : F1 ⊢ Y

FIGURE 7.6. Embedded terminals: F ⊆ E and F1 ⊆ E1.

nk labels a leaf. In fact, node nj here could be n1, with nk and nl both sharing the
same companion. There can be additional iterations of embedded σ-terminals. For
instance, there could be another σ-companion along the path from nj to nl whose
partner labels a σ-terminal, and so on. There is an intimate relation between the
sequence of processes in a trail and the sequence of processes in the part of a game
play from a companion node to a σ-terminal.
The companion node n of a μ-terminal induces a relation ▷n, defined as
follows.

Definition 2 E ▷n F if there is a trail from E at n to F at a μ-terminal whose companion is n.

For example, in the earlier tableau, B2 ▷1 B1 because of the above trail
(1, B2) (2, B2) (3, B1). More generally, Bi+1 ▷1 Bi for any i ≥ 0.
A ν-terminal node is successful, as described in the previous section. The
definition of when a μ-terminal n' : E ⊢V Z is successful now follows.

Definition 3 A μ-terminal whose companion is node n is successful if there is no infinite
"descending" chain E0 ▷n E1 ▷n · · · .

Success means, therefore, that the relation induced by the companion of a μ-terminal
is well founded. This precludes the possibility of an infinite game play,
associated with a tableau that cycles through the node n infinitely often, in which
player R wins. If there is an infinite descending chain E0 ▷n E1 ▷n · · · , then any
μ-terminal whose companion is node n is unsuccessful.
A tableau is successful if it is finite and all its terminal goals are successful
(each obeys one of conditions 1, 2 or 3 of Figure 7.3 for being successful, or is a
successful μ-terminal). The tableau technique is both sound and complete for
arbitrary (infinite state) processes. Again, the result is proved using the game
characterization of satisfaction. The following theorem includes Proposition 1
from the previous section.

Theorem 1 There is a successful tableau with root node E ⊢V Φ iff E ⊨V Φ.

Proof. Assume that the goal E ⊢V Φ has a successful tableau. We show that
player V wins every game GV(E, Φ) when E ∈ E. Consider a play of such a game.
The idea is to follow the play through the tableau, returning to the companion node
of a σ-terminal. All choices for player R are given in the tableau. Choices for player
V are determined by the tableau. Suppose (F, Ψ1 ∨ Ψ2) is the position reached
in the game play, and F ⊢V Ψ1 ∨ Ψ2 is the goal reached in the tableau, where
F ∈ F and the goal is not a leaf. Then F1 ⊢V Ψ1 and F2 ⊢V Ψ2 are the immediate
successors in the tableau. If F ∈ F1, then player V chooses (F, Ψ1) as the next
position in the game. If F ∉ F1, then F ∈ F2 and player V chooses (F, Ψ2) as the
next position. Next, assume that (F, ⟨K⟩Ψ) is the position reached in the game
play, and F ⊢V ⟨K⟩Ψ is the goal reached in the tableau, where F ∈ F and the goal
is not a leaf. Then f(F) ⊢V Ψ is the immediate successor in the tableau, where
f is a choice function. Player V chooses (f(F), Ψ) as the next position in the
game. If the game configuration is (F, Ψ) and the associated goal in the tableau is
F ⊢V Ψ with Thin applied to it, then we associate the same game position with the
successor goal in the tableau. Any such play must be won by player V, as the reader
can verify. The only interesting case is to show that a play cannot proceed through
a least fixed point variable Z infinitely often, where Z subsumes all other variables
occurring infinitely often: this would break the well-foundedness requirement on
μ-terminals.

For the other half of the proof, assume that E ⊨V Φ. We build a successful
tableau for E ⊢V Φ by preserving truth, and with a judicious use of the Thin rule.
Suppose we have built part of the tableau, and goal F ⊢V Ψ still has to be developed,
where F ⊨V Ψ. If Ψ is tt, or a variable Z that is free in the starting formula Φ,
then the goal is a successful terminal. If Ψ is Ψ1 ∧ Ψ2, then the goal reduces to
the subgoals F ⊢V Ψ1 and F ⊢V Ψ2. If Ψ is Ψ1 ∨ Ψ2, then consider each game
GV(F, Ψ) for F ∈ F. By assumption, player V has a history-free winning strategy
for any such game. Let Fi for i ∈ {1, 2} contain the processes F ∈ F such that player
V's winning strategy for GV(F, Ψ) includes the rule "at (F, Ψ) choose (F, Ψi)." The
goal F ⊢V Ψ reduces to the subgoals F1 ⊢V Ψ1 and F2 ⊢V Ψ2. Clearly, Fi ⊨V Ψi
and F = F1 ∪ F2. If Ψ = [K]Ψ1, then the goal reduces to the subgoal K(F) ⊢ Ψ1. If
Ψ is ⟨K⟩Ψ1, then again consider each game GV(F, ⟨K⟩Ψ1) for F ∈ F. Player V has
a history-free winning strategy for any such game. Let f be the following choice
function: f(F) is the process G such that "at (F, ⟨K⟩Ψ1) choose F --a--> G" is the
rule in the winning strategy for GV(F, Ψ). The goal F ⊢V Ψ therefore reduces to
the true subgoal f(F) ⊢V Ψ1. Next, suppose Ψ is σZ.Ψ1. Let P be the smallest
transition closed set containing F as a subset. Let F1 be the set ‖σZ.Ψ1‖ determined
over P. Since F1 ⊇ F, apply the rule Thin to give the subgoal F1 ⊢V σZ.Ψ1, and then
one introduces the subgoal F1 ⊢V Z followed by F1 ⊢V Ψ1. The final part of the proof
is to show that the tableau is successful, and this we leave as an exercise for the
reader. □

Example 1 The tableau at the start of this section, reproduced below, is now successful.

C ⊢ μZ.[−]Z

C ⊢ Z
C ⊢ [−]Z
1 : {Bi : i ≥ 0} ⊢ Z
2 : {Bi : i ≥ 0} ⊢ [−]Z
3 : {Bi : i ≥ 0} ⊢ Z

The only trail from Bi+1 at node 1 to node 3 is

(1, Bi+1)(2, Bi+1)(3, Bi)

and therefore Bi+1 ▷1 Bi. Hence, there cannot be an infinite sequence of the form
Bj1 ▷1 Bj2 ▷1 · · · .
Suppose the definition of Bi+1 is amended as follows.

Bi+1 def= down.Bi + up.Bi+2

Each Bi+1 has the extra capability of performing up. An attempt to prove that C
eventually terminates yields the same tableau as above. However, the tableau is
now unsuccessful. There are two trails from Bi+1 at 1 to the leaf node 3:

(1, Bi+1)(2, Bi+1)(3, Bi)
(1, Bi+1)(2, Bi+1)(3, Bi+2)

Therefore Bi+1 ▷1 Bi and Bi+1 ▷1 Bi+2. Consequently, there is a variety of infinite
decreasing sequences from Bi+1. One example is the sequence Bi+1 ▷1
Bi+2 ▷1 Bi+1 ▷1 · · · .
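The well-foundedness check behind these two verdicts can be sketched in code (a hypothetical illustration, not part of the proof system): represent Bi by the integer i, let a relation function return the ▷1-successors determined by the trails, and search for chains that never terminate.

```python
def rel_original(i):
    # B_{i+1} |>1 B_i: the single trail (1, B_{i+1})(2, B_{i+1})(3, B_i).
    return [i - 1] if i >= 1 else []

def rel_amended(i):
    # With B_{i+1} = down.B_i + up.B_{i+2} there are two trails,
    # giving B_{i+1} |>1 B_i and B_{i+1} |>1 B_{i+2}.
    return [i - 1, i + 1] if i >= 1 else []

def chain_survives(rel, start, bound, steps=10_000):
    """Bounded search: do some |>1-chains from `start` keep going?
    Only indices up to `bound` are explored, so this is a sketch,
    not a decision procedure for well-foundedness in general."""
    frontier = {start}
    for _ in range(steps):
        frontier = {j for i in frontier for j in rel(i) if 0 <= j <= bound}
        if not frontier:
            return False   # every chain terminated
    return True            # chains persist: here an infinite descent exists

print(chain_survives(rel_original, 5, bound=10))  # False: |>1 well founded
print(chain_survives(rel_amended, 5, bound=10))   # True: B_5 |>1 B_6 |>1 B_5 ...
```

In the original definition every chain strictly decreases and dies at B0; in the amended definition the chain may oscillate between neighbouring indices forever, which is exactly the infinite descending sequence noted above.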
Example 2 The crossing in Section 1.2 has the liveness property "whenever a car approaches,
eventually it crosses" provided the signal is fair, as stated in Section 5.7. Let Q
and R be variables and V a valuation such that Q is true when the crossing is in
any state where Rail has the form green.tcross.red.Rail (the states E2, E3,
E6, and E10 of Figure 1.12) and R holds when it is in any state where Road is
up.ccross.down.Road (the states E1, E3, E7 and E11). The liveness property now
becomes "for any run, if Q is true infinitely often and R is also true infinitely
often, then whenever a car approaches the crossing eventually it crosses," which is
expressed by the following formula with free variables relative to V, νY.[car]Ψ1 ∧
[−]Y, where Ψ1 is

μX.νY1.(Q ∨ [−ccross](νY2.(R ∨ X) ∧ [−ccross]Y2)) ∧ [−ccross]Y1.

Let E be the full set {Crossing} ∪ {E1, ..., E11} of processes in Figure 1.12. A
proof that the crossing has this property is given as a successful tableau in stages
in Figure 7.7. Throughout the tableau, we omit the index V on ⊢V.

{Crossing} ⊢ νY.[car]Ψ1 ∧ [−]Y

E ⊢ νY.[car]Ψ1 ∧ [−]Y

E ⊢ Y
E ⊢ [car]Ψ1 ∧ [−]Y
E ⊢ [car]Ψ1          E ⊢ [−]Y
{E1, E3, E7, E11} ⊢ Ψ1          E ⊢ Y
T1

E1 = {E1, E3, E4, E6, E7, E11}

T1
E1 ⊢ Ψ1
1 : E1 ⊢ X
E1 ⊢ νY1.(Q ∨ [−ccross](νY2.(R ∨ X) ∧ [−ccross]Y2)) ∧ [−ccross]Y1
E1 ⊢ Y1
E1 ⊢ (Q ∨ [−ccross](νY2.(R ∨ X) ∧ [−ccross]Y2)) ∧ [−ccross]Y1
E1 ⊢ Q ∨ [−ccross](νY2.(R ∨ X) ∧ [−ccross]Y2)          E1 ⊢ [−ccross]Y1

T2
{E3, E6} ⊢ Q          {E1, E4, E7, E11} ⊢ [−ccross](νY2.(R ∨ X) ∧ [−ccross]Y2)
{E1, E3, E4, E6, E11} ⊢ νY2.(R ∨ X) ∧ [−ccross]Y2
{E1, E3, E4, E6, E7, E11} ⊢ νY2.(R ∨ X) ∧ [−ccross]Y2
{E1, E3, E4, E6, E7, E11} ⊢ Y2

{E1, E3, E4, E6, E7, E11} ⊢ R ∨ X          {E1, E3, E4, E6, E7, E11} ⊢ [−ccross]Y2

{E1, E3, E7, E11} ⊢ R    2 : {E4, E6} ⊢ X    {E1, E3, E4, E6, E7, E11} ⊢ Y2

FIGURE 7.7. Successful tableau

In this tableau there is one μ-terminal, labelled 2, whose companion is 1. The
relation ▷1 is well founded because we only have E1 ▷1 E4, E4 ▷1 E6, and
E3 ▷1 E6. Therefore, the tableau is successful.

Example 3 The proof that the slot machine SMn of Section 1.2 has the property that a windfall
can be won infinitely often is given by the following successful tableau.

{SMn : n ≥ 0} ⊢ νY.μZ.⟨win(10^6)⟩Y ∨ ⟨−⟩Z

E ⊢ νY.μZ.⟨win(10^6)⟩Y ∨ ⟨−⟩Z

E ⊢ Y
E ⊢ μZ.⟨win(10^6)⟩Y ∨ ⟨−⟩Z
1 : E ⊢ Z
⋮
E1 ⊢ Y
E is the set of all derivatives. The vital rules in this tableau are the disjunction
at node 1, where E1 is exactly those processes capable of performing a win(10^6)
action and E2 is the remainder; and the ⟨K⟩ rule at node 3, where f is defined to
ensure that E1 is eventually reached: for processes with less than 10^6 in the bank,
f chooses events leading towards loss, so as to increase the amount in the bank;
and for processes with more than 10^6, f chooses to release(10^6). The formal
proof requires partitioning E2 into several classes, each parametrised by an integer
n, and showing that while n < 10^6, n is strictly increasing over a cycle through
the classes; then, when n = 10^6, f selects a successor that is not in E2, so a trail
from E0 through nodes 1, 2, 3, 4 terminates, and therefore node 4 is a successful
μ-terminal.
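The effect of this choice function can be sketched with a toy simulation (hypothetical code, with the windfall threshold scaled down from 10^6 so the loop stays short): while the bank is below the threshold, f steers towards losing rounds, which grow the bank; at the threshold it selects the win action, so the windfall keeps recurring.

```python
WINDFALL = 10  # scaled-down stand-in for the 10^6 threshold

def f(bank):
    # Hypothetical choice function for the verifier's strategy.
    if bank >= WINDFALL:
        return "win", 0          # release the windfall; the bank empties
    return "lose", bank + 1      # a losing round grows the bank

bank, wins = 0, 0
for _ in range(100):
    action, bank = f(bank)
    wins += (action == "win")
print(wins)  # 9: a win occurs once every WINDFALL + 1 rounds
```

Because the bank strictly increases between wins, every descent in the induced ▷ relation terminates at a win, mirroring the well-foundedness argument for node 4.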
Example 4 Consider the processes T(i) for i ≥ 1 from Section 1.1.

T(i) def= if even(i) then out(i).T(i/2) else out(i).T((3i + 1)/2)

If T(n) for all n ≥ 1 stabilizes into the following cycle

T(2) --out(2)--> T(1) --out(1)--> T(2) --out(2)--> · · · ,

then the following tableau is successful, and otherwise it is not. But which of these
holds is not known!

{T(i) : i ≥ 1} ⊢ μY.⟨out(2)⟩tt ∨ [−]Y

1 : {T(i) : i ≥ 1} ⊢ Y
{T(i) : i ≥ 1} ⊢ ⟨out(2)⟩tt ∨ [−]Y

T(2) ⊢ ⟨out(2)⟩tt          {T(i) : i ≥ 1 ∧ i ≠ 2} ⊢ [−]Y

T(1) ⊢ tt                  {T(i) : i ≥ 1} ⊢ Y

The problem is that we do not know whether the relation ▷1 induced by the
companion node 1 of the μ-terminal is well founded.
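This open question is the Collatz problem in disguise: whether every T(i) eventually reaches the out(2), out(1) cycle. A short sketch (hypothetical code, not part of the book) makes the connection explicit; testing finitely many i is easy, but no finite search settles well-foundedness of ▷1 for all i.

```python
def step(i):
    # One transition of T(i): perform out(i), then continue as
    # T(i/2) if i is even, and as T((3i+1)/2) otherwise.
    return i // 2 if i % 2 == 0 else (3 * i + 1) // 2

def reaches_cycle(i, fuel=10_000):
    # Does T(i) reach T(2), and hence the out(2), out(1) cycle?
    # `fuel` bounds the search, since termination is exactly what is unknown.
    while i != 2 and fuel > 0:
        i, fuel = step(i), fuel - 1
    return i == 2

# Every tested instance stabilizes, but this proves nothing about all i.
print(all(reaches_cycle(i) for i in range(1, 10_000)))  # True
```

A successful run over any finite range only shows that ▷1 has no short descending chain among the tested processes; an infinite descending chain, if one exists, would correspond to some T(i) that never reaches T(2).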

Exercises 1. Show that the verifier wins the game G(C, μZ.[−]Z).
2. The following processes are from Section 5.7.

A0   def= a.Σ{Ai : i ≥ 0}
Ai+1 def= b.Ai      for i ≥ 0
B0   def= a.Σ{Bi : i ≥ 0} + b.Σ{Bi : i ≥ 0}
Bi+1 def= b.Bi      for i ≥ 0

Provide a successful tableau whose root is

{Aj | Aj : j ≥ 0} ⊢ μY.⟨−⟩tt ∧ [−a]Y.

Show that there is not a successful tableau when A is replaced by B.
3. Complete the proof of Theorem 1.
4. For the various interpretations of Cycle(...) of Section 5.7, give successful
tableaux for

a. Sched3' ⊨ Cycle(a1 a2 a3)
b. Sched3 ⊨ Cycle(a1 a2 a3)

where the schedulers are from Section 1.4.
References

[1] Andersen, H., Stirling, C., and Winskel, G. (1994). A compositional proof system for the modal mu-calculus. Procs 9th IEEE Symposium on Logic in Computer Science, 144-153.
[2] Austry, D., and Boudol, G. (1984). Algèbre de processus et synchronisation. Theoretical Computer Science 30, 90-131.
[3] Baeten, J., and Weijland, W. (1990). Process Algebra. Cambridge University Press.
[4] Bernholtz, O., Vardi, M., and Wolper, P. (1994). An automata-theoretic approach to branching-time model checking. Lecture Notes in Computer Science 818, 142-155.
[5] Bergstra, J., and Klop, J. (1989). Process theory based on bisimulation semantics. Lecture Notes in Computer Science 354, 50-122.
[6] Benthem, J. van (1984). Correspondence theory. In Handbook of Philosophical Logic, Vol. II, ed. Gabbay, D. and Guenthner, F., 167-248. Reidel.
[7] Bloom, B., Istrail, S., and Meyer, A. (1988). Bisimulation can't be traced. In 15th Annual Symposium on the Principles of Programming Languages, 229-239.
[8] Bradfield, J. (1992). Verifying Temporal Properties of Systems. Birkhäuser.
[9] Bradfield, J. (1998). The modal mu-calculus alternation hierarchy is strict. Theoretical Computer Science 195, 133-153.
[10] Bradfield, J., and Stirling, C. (1990). Verifying temporal properties of processes. Lecture Notes in Computer Science 458, 115-125.
[11] Bradfield, J., and Stirling, C. (1992). Local model checking for infinite state spaces. Theoretical Computer Science 96, 157-174.
[12] Clarke, E., Emerson, E., and Sistla, A. (1983). Automatic verification of finite state concurrent systems using temporal logic specifications: a practical approach. Proc. 10th ACM Symposium on Principles of Programming Languages, 117-126.
[13] Cleaveland, R. (1990). Tableau-based model checking in the propositional mu-calculus. Acta Informatica 27, 725-747.
[14] The Concurrency Workbench, Edinburgh University, http://www.dcs.ed.ac.uk/home/cwb/index.html.
[15] Condon, A. (1992). The complexity of stochastic games. Information and Computation 96, 203-224.
[16] De Nicola, R., and Hennessy, M. (1984). Testing equivalences for processes. Theoretical Computer Science 34, 83-133.
[17] De Nicola, R., and Vaandrager, F. (1990). Three logics for branching bisimulation. Proc. 5th IEEE Symposium on Logic in Computer Science, 118-129.
[18] Emerson, E., and Clarke, E. (1980). Characterizing correctness properties of parallel programs using fixpoints. Lecture Notes in Computer Science 85, 169-181.
[19] Emerson, E., and Jutla, C. (1991). Tree automata, mu-calculus and determinacy. In Proc. 32nd IEEE Foundations of Computer Science, 368-377.
[20] Emerson, E., Jutla, C., and Sistla, A. (1993). On model checking for fragments of μ-calculus. Lecture Notes in Computer Science 697, 385-396.
[21] Esparza, J. (1997). Decidability of model-checking for concurrent infinite-state systems. Acta Informatica 34, 85-107.
[22] Glabbeek, R. van (1990). The linear time-branching time spectrum. Lecture Notes in Computer Science 458, 278-297.
[23] Glabbeek, R. van, and Weijland, W. (1989). Branching time and abstraction in bisimulation semantics. Information Processing Letters 89, 613-618.
[24] Groote, J. (1993). Transition system specifications with negative premises. Theoretical Computer Science 118, 263-299.
[25] Groote, J., and Vaandrager, F. (1992). Structured operational semantics and bisimulation as a congruence. Information and Computation 100, 202-260.
[26] Hennessy, M. (1988). An Algebraic Theory of Processes. MIT Press.
[27] Hennessy, M., and Ingolfsdottir, A. (1993). Communicating processes with value-passing and assignment. Formal Aspects of Computing 3, 346-366.
[28] Hennessy, M., and Lin, H. (1995). Symbolic bisimulations. Theoretical Computer Science 138, 353-389.
[29] Hennessy, M., and Milner, R. (1985). Algebraic laws for nondeterminism and concurrency. Journal of the Association for Computing Machinery 32, 137-162.
[30] Hirshfeld, Y., and Moller, F. (1996). Decidability results in automata and process theory. Lecture Notes in Computer Science 1043, 102-149.
[31] Hoare, C. (1985). Communicating Sequential Processes. Prentice Hall.
[32] Jonsson, B., and Parrow, J. (1993). Deciding bisimulation equivalence for a class of non-finite-state programs. Information and Computation 107, 272-302.
[33] Kannellakis, P., and Smolka, S. (1990). CCS expressions, finite state processes, and three problems of equivalence. Information and Computation 86, 43-68.
[34] Kozen, D. (1983). Results on the propositional mu-calculus. Theoretical Computer Science 27, 333-354.
[35] Lamport, L. (1983). Specifying concurrent program modules. ACM Transactions on Programming Languages and Systems 6, 190-222.
[36] Larsen, K. (1990). Proof systems for satisfiability in Hennessy-Milner logic with recursion. Theoretical Computer Science 72, 265-288.
[37] Larsen, K., and Skou, A. (1991). Bisimulation through probabilistic testing. Information and Computation 94, 1-28.
[38] Long, D., Browne, A., Clarke, E., Jha, S., and Marrero, W. (1994). An improved algorithm for the evaluation of fixpoint expressions. Lecture Notes in Computer Science 818, 338-350.
[39] Mader, A. (1997). Verification of modal properties using boolean equation systems. Doctoral thesis, Technical University of Munich.
[40] Manna, Z., and Pnueli, A. (1991). The Temporal Logic of Reactive and Concurrent Systems. Springer.
[41] McNaughton, R. (1993). Infinite games played on finite graphs. Annals of Pure and Applied Logic 65, 149-184.
[42] Milner, R. (1980). A Calculus of Communicating Systems. Lecture Notes in Computer Science 92.
[43] Milner, R. (1983). Calculi for synchrony and asynchrony. Theoretical Computer Science 25, 267-310.
[44] Milner, R. (1989). Communication and Concurrency. Prentice Hall.
[45] Milner, R., Parrow, J., and Walker, D. (1992). A calculus of mobile processes, Parts I and II. Information and Computation 100, 1-77.
[46] Niwinski, D. (1997). Fixed point characterization of infinite behavior of finite state systems. Theoretical Computer Science 189, 1-69.
[47] Park, D. (1981). Concurrency and automata on infinite sequences. Lecture Notes in Computer Science 154, 561-572.
[48] Papadimitriou, C. (1994). Computational Complexity. Addison-Wesley.
[49] Plotkin, G. (1981). A structural approach to operational semantics. Technical Report DAIMI FN-19, Aarhus University.
[50] Pratt, V. (1982). A decidable mu-calculus. 22nd IEEE Symposium on Foundations of Computer Science, 421-427.
[51] Simone, R. de (1985). Higher-level synchronizing devices in Meije-SCCS. Theoretical Computer Science 37, 245-267.
[52] Sistla, P., Clarke, E., Francez, N., and Meyer, A. (1984). Can message buffers be axiomatized in linear temporal logic? Information and Control 68, 88-112.
[53] Stirling, C. (1987). Modal logics for communicating systems. Theoretical Computer Science 49, 311-347.
[54] Stirling, C. (1995). Local model checking games. Lecture Notes in Computer Science 962, 1-11.
[55] Stirling, C., and Walker, D. (1991). Local model checking in the modal mu-calculus. Theoretical Computer Science 89, 161-177.
[56] Streett, R., and Emerson, E. (1989). An automata theoretic decision procedure for the propositional mu-calculus. Information and Computation 81, 249-264.
[57] Taubner, D. (1989). Finite Representations of CCS and TCSP Programs by Automata and Petri Nets. Lecture Notes in Computer Science 369.
[58] Thomas, W. (1995). On the synthesis of strategies in infinite games. Lecture Notes in Computer Science 900, 1-13.
[59] Vardi, M., and Wolper, P. (1986). Automata-theoretic techniques for modal logics of programs. Journal of Computer and System Sciences 32, 183-221.
[60] Walker, D. (1987). Introduction to a calculus of communicating systems. Technical Report ECS-LFCS-87-22, Dept. of Computer Science, Edinburgh University.
[61] Walker, D. (1990). Bisimulations and divergence. Information and Computation 85, 202-241.
[62] Winskel, G. (1988). A category of labelled Petri nets and compositional proof system. Procs 3rd IEEE Symposium on Logic in Computer Science, 142-153.