Modal and Temporal Properties of Processes
Editors
David Gries
Fred B. Schneider
With 45 Illustrations
Springer
Colin Stirling
Division of Informatics
University of Edinburgh
Edinburgh EH9 3JZ UK
[email protected]
Series Editors
David Gries
Department of Computer Science
415 Boyd Studies Research Center
The University of Georgia
Athens, GA 30605, USA

Fred B. Schneider
Department of Computer Science
Upson Hall
Cornell University
Ithaca, NY 14853-7501, USA
In this book we examine modal and temporal logics for processes. First, we in-
troduce concurrent processes as terms of an algebraic language comprising a few
basic operators. Their behaviours are described using transitions. Families of tran-
sitions can be arranged as labelled graphs, concrete summaries of the behaviour
of processes. Various combinations of processes and their resulting behaviour, as
determined by the transition rules, are reviewed. Next, simple modal logics are
introduced for describing the capabilities of processes.
An important question is when two processes may be deemed to have the same
behaviour. Such an abstraction can be presented by defining an appropriate
behavioural equivalence between processes. A more abstract approach is to
consider equivalence in terms of having the same pertinent properties. Special
emphasis is placed on bisimulation equivalence, since the discriminating power
of modal logic is tied to it.
More generally, practitioners have found it useful to be able to express tem-
poral properties of concurrent systems, especially liveness and safety properties.
A safety property amounts to "nothing bad ever happens," whereas a liveness
property expresses "something good does eventually happen." The crucial safety
property of a mutual exclusion algorithm is that no two processes are ever in their
critical sections concurrently. And an important liveness property is that, whenever
a process requests execution of its critical section, then eventually it is granted.
Cyclic properties of systems are also salient: for instance, part of a specification
of a scheduler is that it must continually perform a particular sequence of ac-
tions. A logic expressing temporal notions provides a framework for the precise
formalisation of such specifications.
Formulas of the modal logic are not rich enough to express such temporal
properties, so an extra fixed point operator is added. The result is a very expressive
Colin Stirling
Edinburgh, United Kingdom
Contents
Preface vii
List of Figures xi
1 Processes 1
1.1 First examples 1
1.2 Concurrent interaction 8
1.3 Observable transitions 17
1.4 Renaming and linking 21
1.5 More combinations of processes 25
1.6 Sets of processes 28
3 Bisimulations 51
3.1 Process equivalences 51
3.2 Interactive games 56
3.3 Bisimulation relations 64
3.4 Modal properties and equivalences 69
3.5 Observable bisimulations 72
3.6 Equivalence checking 77
4 Temporal Properties 83
4.1 Modal properties revisited 83
4.2 Processes and their runs 85
4.3 The temporal logic CTL 89
4.4 Modal formulas with variables 91
4.5 Modal equations and fixed points 95
4.6 Duality 100
References 183
Index 187
List of Figures
Processes
Cl def= tick.Cl
Names of actions such as tick are in lower case, whereas names of processes such
as Cl have an initial capital letter. A process definition ties a process name to a
process expression. The transition axiom for the prefix operator is

R(.)   a.E --a--> E
A process a.E performs the action a and becomes E. An instance of this axiom is
the transition tick.Cl --tick--> Cl. The next transition rule refers to the operator
def= and is presented with the desired conclusion uppermost.
R(def=)   P --a--> F
          ----------   P def= E
          E --a--> F
Since the only applicable rule is R(def=), the goal Cl --tick--> E reduces to the
subgoal tick.Cl --tick--> E, and the only possibility for deriving this subgoal is
an application of R(.), in which case a is tick and E is Cl.
The behaviour of Cl is represented graphically in Figure 1.1. Ingredients of
this behaviour graph (known as a "transition system") are process expressions and
binary transition relations between them. Each vertex is a process expression, and
one of the vertices is the initial vertex Cl. Each derivable transition of a vertex is
depicted. Transition systems abstract from the derivations of transitions.
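As a concrete illustration (not part of the calculus itself), a transition system can be encoded as, say, a Python dictionary from vertices to sets of labelled transitions; the names and the encoding here are our own.

```python
# A transition system encoded as: vertex -> set of (action, successor) pairs.
# The graph of Cl (Figure 1.1) is a single vertex with one tick-loop.
lts = {"Cl": {("tick", "Cl")}}

def successors(lts, state, action):
    """States reachable from `state` by a single transition labelled `action`."""
    return {s for (a, s) in lts.get(state, set()) if a == action}
```

For Cl, `successors(lts, "Cl", "tick")` yields the singleton `{"Cl"}`, mirroring the single derivable transition.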
An unsophisticated vending machine Ven is defined in Figure 1.2. The
definition of Ven employs the binary choice operator + (which has wider scope
than the prefix operator) from Milner's CCS, Calculus of Communicating Systems
[42, 44]. Initially Ven may accept a 2p or 1p coin, and then a button big or little
may be depressed depending on the coin deposited, and finally after an item is
FIGURE 1.1. The transition graph for Cl
collected the process reverts to its initial state. There are two transition rules for +.

R(+)   E + F --a--> E'        E + F --a--> F'
       --------------         --------------
       E --a--> E'            F --a--> F'
Ct0 def= up.Ct1 + round.Ct0
Ct_{i+1} def= up.Ct_{i+2} + down.Ct_i

Ct3 --up--> Ct4
up.Ct4 + down.Ct2 --up--> Ct4
up.Ct4 --up--> Ct4
The rule R(def=) is here applied to the instance Ct3 def= up.Ct4 + down.Ct2. Each
member Cti determines the same transition graph of Figure 1.5, which contains an
infinite number of vertices. This graph is "infinite state" because the behaviour of
Cti may progress through any of the processes Ctj, in contrast to the finite state
graphs of Figures 1.1 and 1.3.
The operator + can be extended to indexed families Σ{E_i : i ∈ I} where I
is a set of indices. E1 + E2 abbreviates Σ{E_i : i ∈ {1, 2}}. Indexed sum may
be coupled with indexing of actions. An example is a register storing numbers,
represented as a family {Reg_i : i ∈ N}.
Reg_i def= read_i.Reg_i + Σ{write_j.Reg_j : j ∈ N}

The act of reading the content of the register when i is stored is read_i, whereas
write_j is the action that updates its value to j. The single transition rule for Σ
generalises the rules for +.

R(Σ)   Σ{E_i : i ∈ I} --a--> F
       -----------------------   j ∈ I
       E_j --a--> F
FIGURE 1.5. The transition graph of the counter: vertices Ct0, Ct1, ..., Cti, ... with up and down transitions between neighbouring vertices
Consequently, Reg_i is able to carry out any write_j (and thereby change to Reg_j)
as well as read_i (and then remain unchanged). A special case is when the indexing
set I is empty. By the rule R(Σ), this process has no transitions, since the subgoal
can never be fulfilled. In CCS the nil process Σ{E_i : i ∈ ∅} is abbreviated to 0
(and to STOP in Hoare's CSP, Communicating Sequential Processes [31]).
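The transitions of the register can be sketched by enumerating the indexed sum directly; here D is a hypothetical small finite value set standing in for N, and the encoding is our own illustration.

```python
# Transitions of Reg_i under an illustrative finite value set D (standing in for N).
D = range(4)

def reg_transitions(i):
    """Outgoing transitions of Reg_i: a read_i loop plus one write_j per j in D."""
    moves = {(f"read_{i}", i)}                 # reading leaves the content i unchanged
    moves |= {(f"write_{j}", j) for j in D}    # write_j updates the content to j
    return moves
```

With four values, Reg_2 has five transitions: its read_2 loop and a write_j for each j.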
Actions can be viewed as ports or channels, means by which processes can
interact. It is then also important to consider the passage of data between processes
along these channels, or through these ports. In CCS, input of data at a port named
a is represented by the prefix a(x).E, where a(x) binds free occurrences of x in
E¹. (In CSP a(x) is written a?x.) The port label a no longer names a single action;
instead it represents the set {a(v) : v ∈ D}, where D is the appropriate family of
data values. The transition axiom for this prefix input form is

R(in)   a(x).E --a(v)--> E{v/x}   v ∈ D

An example is the copier Cop, which outputs at port out (output actions are
overlined, written here as ‾out) whatever it receives at in.

Cop def= in(x).‾out(x).Cop
Below is a derivation of the transition Cop --in(v)--> ‾out(v).Cop for v ∈ D.

Cop --in(v)--> ‾out(v).Cop
in(x).‾out(x).Cop --in(v)--> ‾out(v).Cop
The subgoal is an instance of R(in), as (‾out(x).Cop){v/x} is ‾out(v).Cop², and so
the goal follows by an application of R(def=). The process ‾out(v).Cop has only one
transition, ‾out(v).Cop --‾out(v)--> Cop, which is an instance of R(out), since we
assume that Val(v) is v. Whenever Cop inputs a value at in, it immediately disgorges it
through out. The size of the transition graph for Cop depends on the size of the
data domain D, and is finite when D is a finite set.
¹The process a(x).E can be viewed as an abbreviation of the process Σ{a_v.E{v/x} : v ∈ D}, writing a_v
instead of a(v).
²Cop contains no free variables because in(x) binds x, and so (‾out(x).Cop){v/x} equals ‾out(v).(Cop{v/x})
because x is free in ‾out(x), and Cop{v/x} is Cop.
Reg5 --write(3)--> Reg3
‾read(5).Reg5 + write(x).Reg_x --write(3)--> Reg3
write(x).Reg_x --write(3)--> Reg3
The variable x in write(x) binds the free occurrence of x in Reg_x. An index can
also be presented explicitly as a parameter.
Example 2 The multiple copier Cop' uses the parameterised subprocess Cop(n, x), where n
ranges over N and x over texts.

Cop' def= no(n).in(x).Cop(n, x)
Cop(0, x) def= ‾out(x).Cop'
Cop(i + 1, x) def= ‾out(x).Cop(i, x)

The initial transition of Cop' determines the number of extra copies of a manuscript,
for instance Cop' --no(4)--> in(x).Cop(4, x). The next transition settles on the text,
in(x).Cop(4, x) --in(v)--> Cop(4, v). Then before reverting to the initial state, five
copies of v are transmitted through the port out.
App def= in(x).in(y).‾out(x^y).App

App receives two messages m and n on in and transmits their concatenation m^n
on out. We shall assume different expression types, such as boolean expressions.
An example is that Val(even(i)) = true if i is an even integer and is false otherwise.
This allows us to use conditionals in the definition of a process, as exemplified by
S, which sieves odd and even numbers.

S def= in(x).if even(x) then ‾out_e(x).S else ‾out_o(x).S
R(if1)   if b then E1 else E2 --a--> E'
         ------------------------------   Val(b) = true
         E1 --a--> E'

R(if2)   if b then E1 else E2 --a--> E'
         ------------------------------   Val(b) = false
         E2 --a--> E'
S initially receives a numerical value through the port in. For instance,
S --in(55)--> if even(55) then ‾out_e(55).S else ‾out_o(55).S. It then outputs
through out_e if the received value is even, or through out_o otherwise. In this
example,

if even(55) then ‾out_e(55).S else ‾out_o(55).S --‾out_o(55)--> S.
7. For any processes E, F and G, show that the transition graphs for E + F
and F + E are isomorphic, and that the transition graph for (E + F) + G is
isomorphic to that of E + (F + G).
8. From Walker [60]. Define a process Change that describes a change-making
machine with one input port and one output port, that is capable initially of
accepting either a 20p or a 10p coin, and that can then dispense any sequence
of 1p, 2p, 5p and 10p coins, the sum of whose values is equal to that of the
coin accepted, before returning to its initial state.
R(|com)   E | F --τ--> E' | F'
          --------------------
          E --a--> E'    F --‾a--> F'
If E can carry out an action a and become E', and F can carry out its co-action ‾a and
become F', then E | F can perform the completed internal action τ and become
E' | F'. Consider a potential user of the copier Cop of the previous section, who
first writes a file before sending it through the port in.
User def= write(x).User_x
User_v def= ‾in(v).User

As soon as User has written the file v, it becomes the process User_v that can
communicate with Cop at the port in. Rule R(|com) is used in the following
derivation.
The goal transition is the resultant communication at in. Through this communi-
cation, the value v is sent from the user to the copier because User_v performs the
output ‾in(v) and Cop performs the input in(v), where they agree on the value v.
Data is thereby passed from one process to another. When the actions a and ‾a do
not involve values, the resulting communication is a synchronization.
Several users can share the copying resource. Cop | (User_v1 | User_v2) involves
two users, but only one at a time is allowed to employ it. So, other transition rules
for | are needed, permitting components to proceed without communicating.

E | F --a--> E' | F        E | F --a--> E | F'
------------------         ------------------
E --a--> E'                F --a--> F'

In the first of these rules, the process F does not contribute to the action a that E
performs. Below is a sample derivation.
The goal transition reflects a communication between Cop and User_v1, meaning
User_v2 is not a contributor. Cop | (User_v1 | User_v2) is not forced to engage in
communication. Instead, it may carry out an input action in(v), or an output action
‾in(v1) or ‾in(v2).
Cop | (User_v1 | User_v2) --in(v)--> ‾out(v).Cop | (User_v1 | User_v2)
Cop | (User_v1 | User_v2) --‾in(v1)--> Cop | (User | User_v2)
Cop | (User_v1 | User_v2) --‾in(v2)--> Cop | (User_v1 | User)
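The three rules for | can be sketched as a function that combines the transition functions of the two components; states are pairs, the overbar is written as a leading '~', and the whole encoding is our own illustration rather than anything prescribed by the calculus.

```python
def co(a):
    """Co-action: the overbar is written as a leading '~', so co('in') is '~in'."""
    return a[1:] if a.startswith("~") else "~" + a

def par(trans_e, trans_f):
    """Transition function for E | F, given transition functions for E and F.
    States are pairs (e, f); the two interleaving rules and R(|com) are encoded."""
    def trans(state):
        e, f = state
        moves = {(a, (e1, f)) for (a, e1) in trans_e(e)}       # E moves alone
        moves |= {(a, (e, f1)) for (a, f1) in trans_f(f)}      # F moves alone
        moves |= {("tau", (e1, f1))                            # communication
                  for (a, e1) in trans_e(e)
                  for (b, f1) in trans_f(f) if b == co(a)}
        return moves
    return trans
```

With one component offering an in input and the other the matching '~in' output, the combined state has the two interleaved moves plus a tau communication.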
³We assume that | has greater scope than other process operators. The process ‾out(v).Cop | User is therefore
the parallel composition of ‾out(v).Cop and User.
The behaviour of the users sharing the copier is not impaired by the order
of parallel subcomponents, or by placement of brackets. Both processes (Cop |
User_v1) | User_v2 and User_v1 | (Cop | User_v2) have the same capabilities as Cop |
(User_v1 | User_v2). These three process expressions have isomorphic transition
graphs, and therefore in the sequel we omit brackets between multiple concurrent
processes³.
The parallel operator is expressively powerful. It can be used to describe infinite
state systems without invoking infinite indices or value spaces. A simple example
is the following counter Cnt.

Cnt def= up.(Cnt | down.0)

Cnt can perform up and become Cnt | down.0 that can perform down, or a further
up and become Cnt | down.0 | down.0, and so on.
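Since order and bracketing of parallel components do not matter, a state Cnt | down.0 | ... | down.0 is determined by its number of pending down.0 components, so the counter's infinite state space can be sketched as follows (our own encoding):

```python
def cnt_transitions(n):
    """Transitions of Cnt | down.0 | ... | down.0 with n pending down components:
    up always spawns one more down.0; down fires one off when n > 0."""
    moves = {("up", n + 1)}
    if n > 0:
        moves.add(("down", n - 1))
    return moves
```

Every state admits up, so no finite bound on n is ever reached: the graph is infinite state.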
Figure 1.6 offers an alternative pictorial representation of the copier Cop and
user User. Such diagrams are called "flow graphs" by Milner [44] (and should
be distinguished from transition graphs). A flow graph summarizes the potential
movement of information flowing into and out of ports, and also exhibits the ports
through which a process is, in principle, willing to communicate. In the case of
User, the incoming arrow to the port labelled write represents input, whereas the
outgoing arrow from in symbolises output. Figure 1.7 shows the flow graph for
Cop | User with the crucial feature that there is a potential linkage between the
output port in of User and its input in Cop, permitting information to circulate from
User to Cop when communication takes place. However, this port is still available
for other users. Both users in Cop | User | User are able to communicate at
different times with Cop, as illustrated in Figure 1.8.
FIGURE 1.6. Flow graphs of User and Cop

FIGURE 1.7. Flow graph of Cop | User

FIGURE 1.8. Flow graph of Cop | User | User

FIGURE 1.9. Flow graph of (Cop | User)\K and its simplification
The situation in which a user has private access to a copier is modelled using
an abstraction or encapsulation operator that conceals ports. CCS has a restriction
operator \J, where J ranges over families of incomplete actions (thereby excluding
the complete action τ). If K is {in(v) : v ∈ D} where D contains the values that
can flow through in, then the port in within (Cop | User)\K is inaccessible to
other users. The flow graph of (Cop | User)\K is pictured in Figure 1.9, where the
linkage without names at the ports represents their concealment from other users,
so it can be simplified as in the second diagram of the figure.
The visual effect of \K on the flow graph in Figure 1.9 is justified by the
transition rule for restriction, which is as follows, where ‾J is {‾a : a ∈ J}.

R(\)   E\J --a--> F\J
       --------------   a ∉ J ∪ ‾J
       E --a--> F
The behaviour of E\J is part of that of E, as any action that E\J may carry out
can also be performed by E, but not necessarily the other way round. For instance,
Cop | User is able to perform an in input action, whereas an attempt to derive an in
transition from (Cop | User)\K is precluded because of the side condition on the
rule R(\). The presence of \K in (Cop | User)\K prevents Cop from ever doing
an in transition, except in the context of a communication with User. Restriction
can therefore be used to enforce communication between parallel components.
After the initial write transition (Cop | User)\K --write(v)--> (Cop | User_v)\K, the
next transition must be a communication.
A port a is concealed by restricting all the actions {a(v) : v ∈ D}, and therefore
we shall usually abbreviate such a subset within a restriction to {a}.
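The rule R(\) can be read as a filter on transitions: E\J keeps exactly those moves whose action lies outside J and its co-set, and τ, having no co-action, always passes. A sketch in the same illustrative encoding as before (the '~' prefix standing for the overbar, all names our own):

```python
def co(a):
    """Co-action, writing the overbar as a leading '~'."""
    return a[1:] if a.startswith("~") else "~" + a

def restrict(trans, J):
    """Transition function for E\\J: drop every move labelled by J or its co-set."""
    blocked = set(J) | {co(a) for a in J}
    def trans_r(state):
        return {(a, s) for (a, s) in trans(state) if a not in blocked}
    return trans_r
```

Applied to a parallel composition, this filter removes the lone in and '~in' moves while keeping the tau communications, which is exactly how restriction enforces communication between components.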
Process descriptions can become quite large, especially when they consist
of multiple components in parallel. We shall therefore employ abbreviations of
process expressions using the relation ==, where P == F means that P abbreviates
F , which is typically a large expression.
Example 1 The mesh of abstraction and concurrency is further revealed in the finite state
example without data of a level crossing in Figure 1.10 from Bradfield and the
author [10], consisting of three components Road, Rail and Signal. The actions
car and train represent the approach of a car and a train, up opens the gates for
the car, ccross is the car crossing, down closes the gates, green is the receipt of a
green signal by the train, tcross is the train crossing, and red automatically sets
the light red. Unlike most crossings, it keeps the barriers down except when a car
actually approaches and tries to cross. The flow graphs of the components, and of
the overall system are depicted in Figure 1.11. The transition graph is pictured in
Figure 1.12. Both Road and Rail are simple cyclers that can only perform a
determinate sequence of actions repeatedly.
Road def= car.up.ccross.down.Road
Rail def= train.green.tcross.red.Rail
Signal def= green.red.Signal + up.down.Signal
Crossing def= (Road | Rail | Signal)\{green, red, up, down}
FIGURE 1.11. Flow graphs of Road, Rail, Signal and of Crossing
Sender def= in(x).sm(x).Send1(x)
Send1(x) def= ms.sm(x).Send1(x) + ok.Sender
Medium def= sm(y).Med1(y)
Med1(y) def= mr(y).Medium + τ.ms.Medium
Receiver def= mr(x).out(x).ok.Receiver
[Figure: part of the transition graph for Protocol, including the states (Send1(m) | Medium | out(m).ok.Receiver)\J and (Send1(m) | ms.Medium | Receiver)\J]
IO def= slot.bank.(lost.loss.IO + release(y).win(y).IO)
B_n def= bank.max(n + 1).left(y).B_y
D def= max(z).(lost.left(z).D + Σ{release(y).left(z − y).D : 1 ≤ y ≤ z})
Example 2 An example of an infinite state system from Bradfield and the author [10] is the
slot machine SM_n defined in Figure 1.15. Its flow graph is also depicted there. A
coin is input (the action slot) and then, after some silent activity, either a loss or
a winning sum of money is output. The system consists of three components: IO,
which handles the taking and paying out of money; B_n, a bank holding n pounds;
and D, the wheel-spinning decision component.
R(tr)   E ==ε==> E        E ==aw==> F
                          ---------------------------
                          E --a--> E'    E' ==w==> F

First is the axiom that any process may carry out the empty sequence and remain
unchanged. The second rule allows traces to be extended. If E --a--> E' and E'
can perform the trace w and become F, then E ==aw==> F. No distinction is made
between carrying out the action a and carrying out the trace a (understood as an
action sequence of length one). Below is the derivation of the extended transition
Venb ==big collectb==> Ven, where Venb is part of the vending machine Ven of
Section 1.1.

Venb ==big collectb==> Ven
big.collectb.Ven --big--> collectb.Ven
collectb.Ven --collectb--> Ven
U writes a file before sending it through in and then waits for an acknowledgement.
(C | U)\{in, ok} has similar behaviour to Ucop.

Ucop def= write(x).‾out(x).Ucop
The only difference in their abilities is internal activity. Both are initially able only
to carry out a write action.

Ucop --write(v)--> ‾out(v).Ucop
(C | U)\{in, ok} --write(v)--> (C | ‾in(v).ok.U)\{in, ok}

Process ‾out(v).Ucop outputs immediately, whereas the other process must first
perform a communication before it outputs, and then τ again before a second
write can happen. By abstracting from silent behaviour, this difference disappears.
Outwardly, both processes repeatedly write and output.
A trace w is a sequence of actions. The trace w↾J is the subsequence of w
when actions that do not belong to J are erased.

ε↾J = ε
(aw)↾J = a(w↾J)   if a ∈ J
(aw)↾J = w↾J      otherwise

Below are three simple examples.

(train τ tcross τ)↾{tcross} = tcross
(τ ccross τ)↾{tcross} = ε
(write(v) τ out(v) τ)↾{write, out} = write(v) out(v)

Associated with any trace w is the observable trace w↾O, where O is a universal
set of observable actions containing at least all actions mentioned in this work
apart from τ. The effect of ↾O on w is to erase all occurrences of the silent action
τ, as illustrated by the following examples.

(in(m) τ τ out(m) τ)↾O = in(m) out(m)
(in(m) τ τ τ τ)↾O = in(m)
(τ τ τ τ τ τ)↾O = ε
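The projection w↾J and the observable trace w↾O are straightforward to sketch, representing a trace as a list of action names and τ as the string "tau" (our own encoding, leaving the value-passing conventions aside):

```python
def project(trace, J):
    """w restricted to J: the subsequence of w keeping only actions in J."""
    return [a for a in trace if a in J]

def observable(trace):
    """w restricted to O: erase every occurrence of the silent action tau."""
    return [a for a in trace if a != "tau"]
```

The examples above then read off directly: projecting train τ tcross τ onto {tcross} leaves just tcross, and erasing τ from in(m) τ τ out(m) τ leaves in(m) out(m).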
To capture observable behaviour, another family of transition relations between
processes is introduced. E ==u==> F expresses that E may carry out the observable
trace u and become F. The transition rule for observable traces is as follows.

R(Tr)   E ==u==> F
        -----------   u = w↾O
        E ==w==> F
An example is Protocol ==in(m) out(m)==> Protocol, whose derivation utilises the
extended transition Protocol ==in(m) τ τ out(m) τ==> Protocol.
Observable traces can also be built from their component observable actions.
The extended transition Crossing ==train tcross==> Crossing is the result of gluing
together Crossing ==train==> E and E ==tcross==> Crossing when the intermediate
state E
R(⇒)   E ==ε==> E        E ==u==> F
                         --------------------------   u = (av)↾O
                         E --a--> E'    E' ==v==> F

F1 ==u==> F3
F1 --τ--> F2    F2 ==u==> F3
F2 --τ--> F3    F3 ==u==> F3
FIGURE 1.16. Observable transition graphs for (C | U)\{in, ok} and Ucop
Exercises 1. Derive the extended transition SM_n ==w==> SM_{n+1} when w is the following trace
slot τ τ τ τ loss and SM_n is the slot machine.
2. Provide a full derivation of Protocol ==s==> Protocol when s is the trace
in(m) τ τ out(m) τ.
3. List the members of the following sets:

{E : Crossing ==train tcross==> E}
{E : Protocol ==in(m)==> E}
4. Show that E ==u==> F is derivable via the rules R(tr) and R(Tr) iff it is derivable
using the rules R(⇒).
5. Draw the observable transition graphs for the processes Cl, Ven and
Crossing.
6. Although observable traces abstract from silent activity, this does not mean
that internal actions cannot contribute to differences in observable capability.
Let Ven' be a vending machine very similar to Ven of Figure 1.2, except that the
initial 2p action is prefaced by the silent action, Ven' def= τ.2p.Venb + 1p.Venl.
a. Show that Ven and Ven' have the same observable traces.
b. Let Use_l be the user Use_l def= 1p.little.Use_l, who is only inter-
ested in inserting the smaller coin. Show that the process (Ven' |
Use_l)\{1p, 2p, little} may deadlock before an observable action is
carried out, unlike (Ven | Use_l)\{1p, 2p, little}.
c. Draw both kinds of transition graphs for each of the processes in part (b).
7. Assuming just one datum value, draw the observable graphs for processes
(Cop | User)\{in} and Protocol. What states of these graphs can be fused
together?
8. Let G(E) be the transition graph for E, and let G°(E) be its observable
transition graph. Define the graph transformation ° that maps G(E) into G°(E).
9. A process is said to be "divergent" if it can perform the τ action forever.
a. Draw both kinds of transition graph for the following pair of processes,
τ.0 and Div' def= τ.Div' + τ.0.
b. Do you think that the processes Protocol and Cop have the same
observable behaviour? Give reasons for and against.
B def= i(x).‾o(x).B

For instance, Cop is the process B when port i is in and port o is out. Relabelling
of ports can be made explicit by introducing an operator which renames actions.
The crux of renaming is a function mapping actions into actions. To ensure
pleasant properties, a renaming function f is subject to a few restrictions. First, it
should respect complements. For any observable a, the actions f(a) and f(‾a) are
co-actions, that is, f(‾a) = ‾f(a). Second, it should conserve the silent action, f(τ)
= τ. Associated with any function f obeying these conditions is the renaming
operator [f], which, when applied to process E, is written as E[f]; this is the
process E whose actions are relabelled according to f.
A renaming function f can be abbreviated to its essential part. If each a_i
is a distinct observable action, then b1/a1, ..., bn/an represents the function f
that renames a_i to b_i (and ‾a_i to ‾b_i), and leaves any other action c unchanged.
For instance, Cop abbreviates the process B[in/i, out/o]; here we maintain the
convention that in stands for the family {in(v) : v ∈ D} and ‾in for {‾in(v) : v ∈ D},
so in/i symbolises the function that also preserves values by mapping i(v) to
in(v) for each v. The transition rule for renaming is set forth below.

R([f])   E[f] --f(a)--> F[f]
         -------------------
         E --a--> F

For instance:

B[in/i, out/o] --in(v)--> (‾o(v).B)[in/i, out/o] --‾out(v)--> B[in/i, out/o]
B1 == B[o1/o]
Bj+1 == B[oj/i, oj+1/o]   for 1 ≤ j < n − 1
Bn == B[on−1/i]

The flow graph of B1 | ... | Bn is also shown in Figure 1.17, and contains the
intended links. The n-place buffer is the result of internalizing these contiguous
links, (B1 | ... | Bn)\{o1, ..., on−1}.
FIGURE 1.17. Flow graphs of B1, ..., Bn and of B1 | ... | Bn
Part of the behaviour of a two-place buffer is illustrated by the following cycle.

(B[o1/o] | B[o1/i])\{o1} --i(v)--> ((‾o(v).B)[o1/o] | B[o1/i])\{o1}
   ↓ τ
(B[o1/o] | (‾o(v).B)[o1/i])\{o1} --i(w)--> ((‾o(w).B)[o1/o] | (‾o(v).B)[o1/i])\{o1}
   ↓ ‾o(v)
((‾o(w).B)[o1/o] | B[o1/i])\{o1}
   ↓ τ
(B[o1/o] | (‾o(w).B)[o1/i])\{o1}
   ↓ ‾o(w)
(B[o1/o] | B[o1/i])\{o1}

Below is the derivation of the second transition.

i(x).‾o(x).B --i(v)--> ‾o(v).B
A more involved example from Milner [44] refers to the construction of a
scheduler from small cycling components. Assume n tasks when n > 1, and that
action ai initiates the ith task, whereas bi signals its completion. The scheduler
plans the order of task initiation, ensuring that the sequence of actions a1 ... an is
carried out cyclically starting with a1. The tasks may terminate in any order, but a
task cannot be restarted until its previous operation has finished. So, the scheduler
must guarantee that the actions ai and bi happen alternately for each i.
Let Cy' be a cycler of length four, Cy' def= a.c.b.d.Cy', whose flow graph is
illustrated in Figure 1.18. In this case, the flow graph is very close to its transition
graph, so we have circled the a label to indicate that it is initially active. As soon as a
happens, control passes to the active action c. The clockwise movement of activity
around this flow graph is its transition graph. A first attempt at building the required
scheduler is as a ring of n cyclers, where the a action is task initiation, the b action
is task termination, and the other actions c and d are used for synchronization.
Cy'1 carries out the cycle Cy'1 ==a1 c1 b1 cn==> Cy'1, and Cy'i, for i > 1, carries
out the different cycle Cy'i ==c_{i−1} ai ci bi==> Cy'i.
The flow graph of the process Cy'1 | Cy'2 | Cy'3 | Cy'4 with initial active
transitions marked is pictured in Figure 1.19. Next, the ci actions are internalised.
Assume that Sched'4 == (Cy'1 | Cy'2 | Cy'3 | Cy'4)\{c1, ..., c4}. Imagine that the
ci actions are concealed in Figure 1.19, and notice then how the tasks must be
initiated cyclically. For example, a3 can only happen once a1, and then a2, have
both happened. Moreover, no task can be reinitiated until its previous execution has
terminated. For example, a3 cannot recur until b3 has happened. However, Sched'4
does not permit all possible acceptable behaviour. Put simply, action b4 cannot
happen before b1 because of the synchronization between c4 and its co-action,
meaning task four cannot terminate before the initial task.
Milner's solution in [44] to this problem is to redefine the cycler:

Cy def= a.c.(b.d.Cy + d.b.Cy)
and to use the same renaming functions. Let Cyi for 1 < i ≤ n be the process
(d.Cy)[ai/a, ci/c, bi/b, c_{i−1}/d]
and let Cy1 be Cy[a1/a, c1/c, b1/b, cn/d]. The required scheduler is Schedn, the
process (Cy1 | ... | Cyn)\{c1, ..., cn}.
Exercises 1. Redefine Road and Rail from Section 1.2 as abbreviations of Cy' plus
renaming.
2. Assuming that the space of values consists of one element, draw both kinds
of transition graph for the three-place buffer
in category theory. Moreover, users of process notation can introduce their own
operators according to the application at hand.
Numerous parallel operators are proposed within the calculi mentioned
above. Their transition rules are of two kinds. First, where × is a parallel
operator, there is a synchronization rule.
E × F --ab--> E' × F'
---------------------   ...
E --a--> E'    F --b--> F'
Here, ab is the concurrent product of the component actions a and b, and .. .
may be filled in with a side condition. In the case of the parallel of Section 1.2,
the actions a and b must be co-actions, and their concurrent product is the silent
action. Other rules permit components to act alone.
E × F --a--> E' × F        E × F --b--> E × F'
-------------------        -------------------
E --a--> E'                F --b--> F'
In the case of the parallel | there are no side conditions when applying these rules.
This general format covers a variety of parallel operators. At one extreme is the
case when × is a synchronous parallel (as in SCCS), when only the synchronization
rule applies, thereby forcing maximal concurrent interaction. At the other extreme
is a pure interleaving operator when the synchronization rule never applies. In
between are the parallel operators of ACP, CCS and CSP.
A different conception of synchronization underlies the parallel operator of
CSP (when data is not passed). Synchronization is "sharing" the same action. Ac-
tions now do not have partner co-actions because multiple parallel processes may
synchronize. Each process instance in CSP has an associated alphabet consisting
of the actions that it is willing to engage in. Two processes must synchronize on
common actions belonging to both component alphabets. An alternative presenta-
tion, which does not require alphabets, consists of introducing a family of binary
parallel operators ||K indexed by a set K of actions that have to be shared. Rules
for ||K are as follows.
Ven ||K Use --1p--> Venl ||K little.collectl.Use
Ven ||K Use --2p--> Venb ||K Use
Adding another user does not change the possible behaviour. The process
Ven ||K Use ||K Use also has an isomorphic transition graph to that of Ven ||K Use,
as all components must synchronize on K actions. If instead K is the set
{1p, little, collectl, 2p}, then the graph of Ven ||K Use is isomorphic to Use,
as the initial 2p transition is blocked.
Hiding is also useful for abstracting from observable behaviour of processes that
do not contain the sharing parallel operator.
Example 2 The scheduler of the previous section has to ensure that the sequence of actions
a1 ... an happens cyclically. The observable behaviour of Schedn\\{b1, ..., bn}
and of Sched'n\\{b1, ..., bn} is the infinite repetition of the sequence a1 ... an. For
example, Sched'4\\{b1, ..., b4} carries out the following cycle.
In CCS, values can be passed between ports. A more general idea is to al-
low ports themselves to be passed between processes. For example, the process
in(x).x.E receives a port at in, which it may then use to synchronize on. This kind
of general mechanism permits process mobility, since links may be dynamically
altered as a system evolves. This facility is basic to the π-calculus, as developed
by Milner, Parrow and Walker [45].
There is a variety of extensions to basic process calculi for modelling real-
time phenomena, such as timeouts expressed using either action duration or delay
intervals between actions, priorities among actions or among processes, spatially
distributed systems using locations, and the description of stochastic behaviour
using probabilistic, instead of nondeterministic, choice. Some of these extensions
are useful for modelling hybrid systems that involve a mixture of discrete and con-
tinuous behaviour, and can be found in control systems for manufacturing (such
as chemical plants).
Count def= round.Count + up.(Count1 | a.Count)\{a}
Count1 def= down.‾a.0 + up.(Count2 | b.Count1)\{b}
Count2 def= down.‾b.0 + up.(Count | a.Count2)\{a}
The reader is invited to draw the observable transition graph for Count and compare
it with Figure 1.4.

In the remaining chapters, we shall abstract from the behaviour of processes.
However, in some cases this requires us to define families of processes that en-
capsulate the behaviour of some initial processes. This naturally leads to sets of
processes that are "transition closed." A set of processes ℰ is transition closed if,
for any process E in ℰ, for any action a, and for any transition E --a--> F,
F also belongs to ℰ. For instance, the set of processes appearing in a transition
graph is transition closed. In later chapters we use P to range over non-empty
transition closed sets, and we use P(E) to range over transition closed sets that
contain E.
There is a smallest transition closed set containing E, given by the set {F :
E ==w==> F for some trace w}. However, it may be computationally difficult, and
in some cases undecidable, to determine this set. Instead, we can define larger
transition closed sets containing E inductively on the structure of E. The resultant
set is only an "estimate" of a smallest transition closed set. Consider the following
definition of the set of subprocesses Sub(E) of an initial CCS process E that does
not involve value passing or parameterisation.
Sub(a.E)   = {a.E} ∪ Sub(E)
Sub(E + F) = {E + F} ∪ Sub(E) ∪ Sub(F)
Sub(E | F) = {E | F} ∪ {E′ | F′ : E′ ∈ Sub(E) and F′ ∈ Sub(F)}
Sub(E\K)   = {E′\K : E′ ∈ Sub(E)}
Sub(E[f])  = {E′[f] : E′ ∈ Sub(E)}
Sub(P)     = {P} ∪ Sub(E)  if P def= E
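These clauses translate directly into a recursive function over process syntax. The sketch below is illustrative only: the tuple encoding of terms, the `defs` table of defining equations and the function name are conventions of this example, not notation from the text.

```python
# Process terms as nested tuples: ("nil",), ("pre", a, E), ("sum", E, F),
# ("par", E, F), ("res", E, K) with K a tuple of actions, and ("name", P)
# for a named constant with defining equation defs[P].
defs = {}

def sub(e, seen=None):
    """Sub(E), following the clauses above; `seen` stops a named
    constant from being unfolded more than once."""
    seen = set() if seen is None else seen
    kind = e[0]
    if kind == "nil":
        return {e}
    if kind == "pre":      # Sub(a.E) = {a.E} u Sub(E)
        return {e} | sub(e[2], seen)
    if kind == "sum":      # Sub(E+F) = {E+F} u Sub(E) u Sub(F)
        return {e} | sub(e[1], seen) | sub(e[2], seen)
    if kind == "par":      # all parallel pairs of subprocesses
        return {e} | {("par", x, y)
                      for x in sub(e[1], seen) for y in sub(e[2], seen)}
    if kind == "res":      # restriction distributes over subprocesses
        return {("res", x, e[2]) for x in sub(e[1], seen)}
    if kind == "name":     # Sub(P) = {P} u Sub(E) when P def= E
        if e[1] in seen:
            return {e}
        seen.add(e[1])
        return {e} | sub(defs[e[1]], seen)
    raise ValueError(kind)

defs["P"] = ("pre", "a", ("name", "P"))   # P def= a.P
print(sorted(sub(("name", "P"))))
# [('name', 'P'), ('pre', 'a', ('name', 'P'))]
```

Note how the parallel clause makes the estimate generous: it pairs every subprocess of one component with every subprocess of the other, which is exactly why Sub(Crossing) below has 125 elements.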
Example 1  The set Sub(Crossing), where Crossing is defined in Figure 1.10, contains 125
elements, including the following, where K is the set {green, red, up, down}.
(Road | Rail | up.down.Signal)\K
(Road | tcross.red.Rail | Signal)\K
(down.Road | Rail | red.Signal)\K
Only 12 of these elements belong to the smallest transition closed set containing
Crossing; see Figure 1.12.
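When only finitely many processes are reachable, the smallest transition closed set containing a process is just its reachability closure, computable by a breadth-first search. The dictionary representation of the transition relation below is an assumption of this sketch:

```python
from collections import deque

def transition_closure(start, transitions):
    """Smallest transition closed set containing `start`, where
    `transitions` maps each process to its (action, successor) pairs.
    Plain breadth-first reachability; terminates only on finite graphs."""
    closed, queue = {start}, deque([start])
    while queue:
        for _action, successor in transitions.get(queue.popleft(), []):
            if successor not in closed:
                closed.add(successor)
                queue.append(successor)
    return closed

# Two toy clocks: Cl ticks forever, Clstop may tick on or stop.
transitions = {
    "Cl": [("tick", "Cl")],
    "Clstop": [("tick", "Clstop"), ("tick", "0")],
    "0": [],
}
print(transition_closure("Cl", transitions))      # {'Cl'}
print(transition_closure("Clstop", transitions))  # the set {'Clstop', '0'}
```

In general, of course, the reachable state space may be infinite, which is why the text falls back on the syntactic estimate Sub.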
The above definition of Sub(E) is transition closed (as the reader can check). As
an estimate of a smallest transition closed set, it may be very generous, as illustrated
in Example 1. A more refined definition is possible which, for instance, would
have the consequence that Sub((a.E)\{a}) always has size one. The definition of
Sub can also be extended to processes defined using parameters, or to processes
containing value passing. One method is to instantiate the parameters immediately.
For example, the following would capture the case for the input prefix.
Sub(a(x).E) = {a(x).E} ∪ ⋃{Sub(E{v/x}) : v ∈ D}
Another method is first to define the set of process "shapes," processes with free
parameters, and then to define instances of these shapes using substitution. The defi-
nition for the input prefix would now be as follows, provided that x does not also
occur bound within E. We leave the details to the reader.
Sub(a(x).E) = {a(x).E} ∪ Sub(E)
Exercises 1. Draw the observable transition graph for Count. How does it compare with
the graph for Ct0?
2. From Taubner [57]. Using two copies of Count, show how a two-counter
machine can be modelled within the restricted process language of this section.
3. A process E can carry out the "completed observable trace" w if E ==w==> F and
F is deadlocked, or has terminated. Assume that CT(E) is the set of completed
observable traces that E can carry out.
a. Prove that, for any process E defined in the restricted process language
of this section, the set CT(E) is recursively enumerable.
b. Let L be a recursively enumerable language over a finite set of observable
actions (which, therefore, excludes τ). Prove that there is a process E of
the restricted process language with the property CT(E) = L.
4. Answer the following open-ended questions.
a. What criteria should be used for assessing the expressive power of a
process language?
b. Should there be a "canonical" process calculus?
c. Is there a concurrent version of the Church-Turing thesis for sequential
programs?
5. Consider any process E without parameters or value passing.
a. Show that Sub(E) as defined above is transition closed.
b. List all members of Sub(Crossing) and compare that listing with the
smallest transition closed set P(Crossing).
c. Refine the definition of Sub(E). What size does Sub(Crossing) have
with your refined definition?
6. Extend the definition of Sub to processes containing parameters and value
passing in both ways suggested in the text.
2  Modalities and Capabilities
Various examples of processes have been presented so far, from a simple clock to a
scheduler. In each case, a process is an expression constructed from a few process
operators. Behaviour is determined by the transition rules for process combinators.
These rules may involve side conditions relying on extra information. For instance,
when data are involved, a partial evaluation function is used. Consequently, the
ingredients of a process description are combinators, predicates and transition rules
that allow us to deduce behaviour.
In this chapter, some abstractions from the overall behaviour of a process are
considered. Already we have contrasted finite-state with infinite-state processes.
The size of a process is determined by its transition graph, although under some
circumstances an estimate is provided instead using Sub. Also, observable transi-
tions, marked by the thicker transition arrows ==a==>, have been distinguished from
their thinner counterparts --a-->. We examine simple properties of processes as given
by modal logics, whose formulas express process capabilities and necessities, and
which can be used to focus on part of the behaviour of a process.
E ⊨ tt
E ⊭ ff
E ⊨ Φ ∧ Ψ  iff  E ⊨ Φ and E ⊨ Ψ
E ⊨ Φ ∨ Ψ  iff  E ⊨ Φ or E ⊨ Ψ
E ⊨ [K]Φ   iff  ∀F ∈ {E′ : E --a--> E′ and a ∈ K}. F ⊨ Φ
E ⊨ ⟨K⟩Φ   iff  ∃F ∈ {E′ : E --a--> E′ and a ∈ K}. F ⊨ Φ
Every process has the property tt, whereas no process has the property ff. A
process has the property Φ ∧ Ψ when it has the property Φ and the property Ψ,
and it satisfies Φ ∨ Ψ if it satisfies one of the disjuncts. The definition of satisfaction
between processes and formulas prefaced by a modal operator appeals to the behaviour
of processes as given by the rules for transitions. A process E has the property
[K]Φ if every process which E evolves to after carrying out any action in K has
the property Φ. And E satisfies ⟨K⟩Φ if E can become a process that satisfies Φ
by carrying out an action in K. To reduce the number of brackets in modalities, we
write [a1, ..., an] and ⟨a1, ..., an⟩ instead of [{a1, ..., an}] and ⟨{a1, ..., an}⟩.
The modal logic M slightly generalises Hennessy-Milner logic, due to Hennessy
and Milner [29], because sets of actions, instead of single actions, occur in the
modalities.
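Over a finite transition graph, the satisfaction clauses above can be run directly as a recursive checker. The tuple encoding of formulas and the dictionary form of the transition relation are conventions of this sketch, not of the text:

```python
def sat(process, formula, transitions):
    """Decide E |= Phi for the modal logic M over a finite graph.
    Formulas: ("tt",), ("ff",), ("and", p, q), ("or", p, q),
    ("box", K, p), ("dia", K, p) with K a set of action names."""
    kind = formula[0]
    if kind == "tt":
        return True
    if kind == "ff":
        return False
    if kind == "and":
        return all(sat(process, f, transitions) for f in formula[1:])
    if kind == "or":
        return any(sat(process, f, transitions) for f in formula[1:])
    derivatives = [f for a, f in transitions.get(process, [])
                   if a in formula[1]]
    if kind == "box":   # every K-derivative must satisfy the body
        return all(sat(f, formula[2], transitions) for f in derivatives)
    if kind == "dia":   # some K-derivative must satisfy the body
        return any(sat(f, formula[2], transitions) for f in derivatives)
    raise ValueError(kind)

# Cl |= [tick](<tick>tt and [tock]ff), the clock example of this section.
transitions = {"Cl": [("tick", "Cl")]}
phi = ("box", {"tick"},
       ("and", ("dia", {"tick"}, ("tt",)), ("box", {"tock"}, ("ff",))))
print(sat("Cl", phi, transitions))  # True
```

The box clause is vacuously true when there are no K-derivatives, which is exactly why [tock]ff holds of the clock.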
The simple formula ⟨tick⟩tt expresses an ability to carry out the action tick.
Process E has this property provided that there is a transition E --tick--> F. The
clock Cl from Section 1.1 has this property, whereas the vending machine Ven
of Figure 1.2 does not. In contrast, the formula [tick]ff expresses an inability
to carry out the action tick, because process E satisfies [tick]ff if E does
not have a transition E --tick--> F. These basic properties can be embedded within
other modalities.
Cl ⊨ [tick](⟨tick⟩tt ∧ [tock]ff)
 iff ∀F ∈ {E : Cl --tick--> E}. F ⊨ ⟨tick⟩tt ∧ [tock]ff
 iff Cl ⊨ ⟨tick⟩tt ∧ [tock]ff
 iff Cl ⊨ ⟨tick⟩tt and Cl ⊨ [tock]ff
 iff ∃F ∈ {E : Cl --tick--> E} and Cl ⊨ [tock]ff
 iff ∃F ∈ {Cl} and Cl ⊨ [tock]ff
 iff Cl ⊨ [tock]ff
 iff {E : Cl --tock--> E} = ∅
 iff ∅ = ∅
The formula ⟨K⟩tt expresses a capability for carrying out some action in K,
whereas [K]ff expresses an inability to initially perform any action in K. In the
case of the vending machine Ven, a button cannot be depressed before money is
deposited, so Ven ⊨ [big, little]ff. Other interesting properties of Ven are as
follows.
• Ven ⊨ [2p]([little]ff ∧ ⟨big⟩tt): after 2p is deposited, the little button
cannot be depressed whereas the big one can
• Ven ⊨ [1p, 2p][1p, 2p]ff: after a coin is entrusted, no other coin (2p or 1p)
may be deposited
• Ven ⊨ [1p, 2p][big, little]⟨collectb, collectl⟩tt: after a coin is
deposited and a button is depressed, an item can be collected
Verifying that Ven has these properties is undemanding. A proof merely appeals
to the inductive definition of the satisfaction relation between a process and a
formula, which may rely on the rules for transitions. Similarly, establishing that a
process lacks a property is equally routine.
1 We assume that ∧ and ∨ have wider scope than the modalities [K], ⟨K⟩, and that brackets are introduced
to resolve any further ambiguities as to the structure of a formula. Therefore, ∧ is the main connective of the
subformula ⟨tick⟩tt ∧ [tock]ff.
Copi ⊨ [no(i)][in(v)]⟨out(v)⟩^{i+1}tt
Ven1 def= 1p.1p.(tea.Ven1 + coffee.Ven1)
Ven2 def= 1p.(1p.tea.Ven2 + 1p.coffee.Ven2)
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3
Give modal formulas that distinguish between them: that is, find formulas Φj,
1 ≤ j ≤ 3, such that Venj ⊨ Φj but Veni ⊭ Φj when i ≠ j.
5. Let Cl def= tick.Cl and Cl2 def= tick.tick.Cl2. Show that no modal formula
distinguishes between these clocks. That is, prove that Cl ⊨ Φ iff Cl2 ⊨ Φ
for all modal formulas Φ.
6. A modal formula Φ distinguishes between two processes E and F if either
E ⊨ Φ and F ⊭ Φ, or E ⊭ Φ and F ⊨ Φ. Provide a modal formula that
distinguishes between Sched and Sched′ of Section 1.4.
7. Express as a modal formula the property "the second action must be a
(parameterised) out action," and show that Cop of Section 1.1 has this property.
8. Express as a modal formula the property "the fourth action must be τ," and
show that the slot machine SMn of Figure 1.15 has this property.
However, for any formula Φ of M, there is a formula Φ^c that expresses the
negation of Φ. The complementation operator c is defined inductively as follows.
tt^c = ff        ff^c = tt
(Φ1 ∧ Φ2)^c = Φ1^c ∨ Φ2^c    (Φ1 ∨ Φ2)^c = Φ1^c ∧ Φ2^c
([K]Φ)^c = ⟨K⟩Φ^c            (⟨K⟩Φ)^c = [K]Φ^c
Φ^c is the result of replacing each operator in Φ with its "dual," where tt and
ff, ∧ and ∨, and [K] and ⟨K⟩ are duals. For instance, the complement of
[tick](⟨tick⟩tt ∧ [tock]ff) is the formula ⟨tick⟩([tick]ff ∨ ⟨tock⟩tt). The
following result shows that Φ^c expresses ¬Φ.
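The complementation operator is a purely syntactic transformation, so it is easy to state as a small recursive function. The tuple encoding below is the convention of these sketches, not of the text:

```python
def complement(formula):
    """Phi^c: replace each operator by its dual.  Formula encoding:
    ("tt",), ("ff",), ("and", p, q), ("or", p, q), ("box", K, p),
    ("dia", K, p)."""
    kind = formula[0]
    if kind == "tt":
        return ("ff",)
    if kind == "ff":
        return ("tt",)
    if kind == "and":
        return ("or", complement(formula[1]), complement(formula[2]))
    if kind == "or":
        return ("and", complement(formula[1]), complement(formula[2]))
    if kind == "box":
        return ("dia", formula[1], complement(formula[2]))
    if kind == "dia":
        return ("box", formula[1], complement(formula[2]))
    raise ValueError(kind)

# ([tick](<tick>tt and [tock]ff))^c = <tick>([tick]ff or <tock>tt)
phi = ("box", "tick",
       ("and", ("dia", "tick", ("tt",)), ("box", "tock", ("ff",))))
assert complement(complement(phi)) == phi   # complementation is an involution
print(complement(phi))
# ('dia', 'tick', ('or', ('box', 'tick', ('ff',)), ('dia', 'tock', ('tt',))))
```

The assertion records a useful consequence of duality: applying c twice returns the original formula.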
Proposition 1  E ⊨ Φ^c iff E ⊭ Φ.
Proof. By induction on the structure of Φ, we show that, for any process F,
F ⊨ Φ^c iff F ⊭ Φ. The base cases are when Φ = tt and Φ = ff. Clearly,
F ⊨ ff iff F ⊭ tt, and F ⊨ tt iff F ⊭ ff. For the induction step, assume the
result for formulas Φ1 and Φ2. If we can show that it also holds for Φ1 ∧ Φ2, Φ1 ∨ Φ2,
[K]Φ1 and ⟨K⟩Φ1, then the result is proved. Let Φ = Φ1 ∧ Φ2. F ⊨ (Φ1 ∧ Φ2)^c
iff F ⊨ Φ1^c ∨ Φ2^c (by definition of c)
iff F ⊨ Φ1^c or F ⊨ Φ2^c (by clause for ∨)
iff F ⊭ Φ1 or F ⊭ Φ2 (by induction hypothesis)
iff F ⊭ Φ1 ∧ Φ2 (by clause for ∧).
The case Φ = Φ1 ∨ Φ2 is very similar. Let Φ = [K]Φ1. F ⊨ ([K]Φ1)^c
iff F ⊨ ⟨K⟩Φ1^c (by definition of c)
iff ∃G. ∃a ∈ K. F --a--> G and G ⊨ Φ1^c (by clause for ⟨K⟩)
iff ∃G. ∃a ∈ K. F --a--> G and G ⊭ Φ1 (by induction hypothesis)
iff F ⊭ [K]Φ1 (by clause for [K]).
The case Φ = ⟨K⟩Φ1 is dual. □
For any formula Φ, Sub(Φ) is a finite set of formulas. For instance, if Φ is the
formula [tick](⟨tick⟩tt ∧ [tock]ff), then Sub(Φ) is the following set.
{[tick](⟨tick⟩tt ∧ [tock]ff), ⟨tick⟩tt ∧ [tock]ff, ⟨tick⟩tt, [tock]ff, tt, ff}
The restriction operator \J on formulas is defined inductively as follows.
tt\J = tt        ff\J = ff
(Φ ∧ Ψ)\J = Φ\J ∧ Ψ\J    (Φ ∨ Ψ)\J = Φ\J ∨ Ψ\J
([K]Φ)\J = [K − J+](Φ\J)   (⟨K⟩Φ)\J = ⟨K − J+⟩(Φ\J)
The operator \J removes actions from modalities, as the following example
illustrates.
([tick, tock](⟨−⟩tt ∧ [tock]ff))\{tick} = [tock](⟨−tick⟩tt ∧ [tock]ff)
The operator \J on formulas is an inverse of its application to processes, as the
next result shows.
hypothesis, E ⊨ Φ1\J and E ⊨ Φ2\J, and now by the inductive definition above,
iff E ⊨ Φ\J. The case when Φ is Φ1 ∨ Φ2 is similar. Assume Φ is [K]Ψ and
E\J ⊨ [K]Ψ, but E ⊭ [K − J+](Ψ\J). Therefore, E --a--> F with a ∈ K − J+
and F ⊭ Ψ\J. By the induction hypothesis it follows that F\J ⊭ Ψ, and because
a ∈ K − J+ we also know that E\J --a--> F\J. But then we have a contradiction,
because this shows that E\J ⊭ [K]Ψ. For the other direction, suppose that
E ⊨ [K − J+](Ψ\J), but E\J ⊭ [K]Ψ. Therefore, E\J --a--> F\J for some
a ∈ K and F\J ⊭ Ψ. But this a must belong to K − J+ by the transition rule for
\J, and by the induction hypothesis F ⊭ Ψ\J. Therefore, E ⊭ ([K]Ψ)\J. The
final case when Φ is ⟨K⟩Ψ is similar. □
There are similar inverse operations on formulas for the renaming [f] of Sec-
tion 1.4 and for the hiding \\J of Section 1.5. We leave their exact definition as an
exercise.
Much more troublesome is coping with parallel composition. One idea, which
is not entirely satisfactory, is to define an "inverse" of parallel on formulas. For
each process F, and for each formula Φ, one defines the new formula Φ/F with
the intention that, for any E, E | F ⊨ Φ iff E ⊨ Φ/F. Instead of presenting an
inductive definition of this "slicing operator" Φ/F, we illustrate a particular use
of it.
Example 2  Let Ven be the vending machine, and consider the following.
Use1 def= 1p.little.Use1
K = {1p, 2p, little, big}
K+ = K ∪ K̄
The first of these is derivable using Proposition 1. By the same Proposition, the
second is equivalent to the following.
Ven1 ⊨ ([−K+]⟨collectl⟩tt)/little.Use1
There is now a similar argument. The process little.Use1 can only contribute to
an action in [−K+] if it is part of a communication. Therefore, one needs to show
the following pair.
Ven1 ⊨ [−K+](⟨collectl⟩tt/little.Use1)
Ven1 ⊨ [little](⟨collectl⟩tt/Use1)
The first is derivable using Proposition 1, and by the same Proposition the second
is equivalent to the following.
collectl.Ven ⊨ (⟨collectl⟩tt)/Use1
This clearly holds because collectl.Ven is able to perform collectl, and tt/F
for any process F is just the formula tt.
Exercises 1. Using Proposition 1 and the semantic clauses for the boolean connectives, show
the following.
3. Cl ⊨ [tick, tock](⟨tick⟩tt ∧ [tock]ff)
A process has the property [[ ]]Φ provided that it satisfies Φ and, after evolving
through any amount of silent activity, Φ remains true. To satisfy ⟨⟨ ⟩⟩Φ, a process
has to be able to evolve via zero or more τ-transitions to a process realizing Φ.
Neither [[ ]] nor ⟨⟨ ⟩⟩ is definable within the modal logic M. A technique for
showing non-definability employs equivalence of formulas: two formulas Φ and
Ψ are equivalent if, for every process E, E ⊨ Φ iff E ⊨ Ψ. For instance,
⟨tick, tock⟩Φ is equivalent to ⟨tick⟩Φ ∨ ⟨tock⟩Φ, for any Φ. Two formulas
are not equivalent if there is a process that realises one, but not the other, formula.
If [[ ]] is definable in M, then for any formula Φ in M there is also a formula in M
equivalent to [[ ]]Φ (and similarly for definability of ⟨⟨ ⟩⟩). Non-definability of [[ ]]
is established if there is a Φ in M such that [[ ]]Φ is not equivalent to any formula of M.
A simple choice of Φ, namely [a]ff, suffices. A process realises [[ ]][a]ff if it is
unable to perform a after any amount of silent activity. We show that [[ ]][a]ff is
not equivalent to any formula of M. For this purpose, consider the two families of
similar processes {Div_j : j ∈ N} and {Div_j^a : j ∈ N}, whose transitions are
as follows.
Div_0 --τ--> Div_0        Div_{j+1} --τ--> Div_j
Div_0^a --τ--> Div_0^a    Div_0^a --a--> 0        Div_{j+1}^a --τ--> Div_j^a
Div_n ⊨ [[ ]][a]ff for each n. On the other hand, Div_n^a ⊭ [[ ]][a]ff for each n,
because of the transition Div_n^a ==> Div_0^a. For each formula Ψ of M, there is a
k ≥ 0 with the feature that Div_k ⊨ Ψ iff Div_k^a ⊨ Ψ, and therefore [[ ]][a]ff is
not equivalent to any M formula. The crucial step here, in a strengthened form, is
demonstrated in Proposition 1 below. Recall from Section 2.2 that the size |Ψ|
is the number of occurrences of tt, ff, ∧, ∨, [K] and ⟨K⟩ within it.
Proposition 1  If Ψ ∈ M and |Ψ| = k, then for all m ≥ k, Div_m ⊨ Ψ iff Div_m^a ⊨ Ψ.
Using the new modal operators, supplementary modalities [[K]] and ⟨⟨K⟩⟩ are
definable as follows, when K is a subset of the observable actions O.
E ⊨ [[K]]Φ iff E ⊨ [[ ]][K][[ ]]Φ
 iff ∀F ∈ {E′ : E ==> E′}. F ⊨ [K][[ ]]Φ
 iff ∀F ∈ {E′ : E ==> E1 --a--> E′ and a ∈ K}. F ⊨ [[ ]]Φ
 iff ∀F ∈ {E′ : E ==> E1 --a--> E2 ==> E′ and a ∈ K}. F ⊨ Φ
Therefore, [[−]] and ⟨⟨−⟩⟩ are abbreviations of [[O]] and ⟨⟨O⟩⟩. The modal formula
[[−]]ff hence expresses an inability to carry out an observable action, so the
process Div def= τ.Div realises it.
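The unfolding of [[K]] as [[ ]][K][[ ]] suggests a direct way to evaluate observable modalities over a finite graph: close under τ, take one K step, then close under τ again. In the sketch below, the explicit "tau" label and the function names are conventions of this example:

```python
def tau_closure(process, transitions):
    """All E' with E ==> E': zero or more "tau" transitions."""
    seen, stack = {process}, [process]
    while stack:
        for action, nxt in transitions.get(stack.pop(), []):
            if action == "tau" and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def sat_obs_box(process, actions, body_holds, transitions):
    """E |= [[K]]Phi via the unfolding [[ ]][K][[ ]]Phi: silent steps,
    then one action from K, then silent steps, must always land on a
    process where `body_holds` (deciding Phi) is true."""
    return all(body_holds(final)
               for e1 in tau_closure(process, transitions)
               for action, e2 in transitions.get(e1, []) if action in actions
               for final in tau_closure(e2, transitions))

# Div def= tau.Div never performs an observable action, so it satisfies
# [[K]]ff for any observable K -- vacuously, since no K step exists.
transitions = {"Div": [("tau", "Div")]}
print(sat_obs_box("Div", {"tick"}, lambda f: False, transitions))  # True
```

A process that can silently reach a tick, say P --τ--> Q --tick--> 0, fails [[tick]]ff on the same evaluation, as the box is no longer vacuous.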
Modal formulas can be used to express notions that are basic to the theory of
CSP [31]. A process can carry out the observable trace a1...an provided it has
the property ⟨⟨a1⟩⟩...⟨⟨an⟩⟩tt. A process is stable if it has no τ-transitions (and,
therefore, if it has the property [τ]ff). The formula [K]ff expresses that the
observable set of actions K is a "refusal," since a realizing process is unable to
perform observable actions belonging to K. The pair (a1...an, K) is an observable
failure for a process provided it satisfies ⟨⟨a1⟩⟩...⟨⟨an⟩⟩([τ]ff ∧ [K]ff): a process
realises this formula if it can carry out the observable trace a1...an and become
a stable process that is unable to carry out observable actions belonging to K.
Example 2  The processes Cop and Protocol have the same observable failures, as given by
the following sequence of formulas.
⟨⟨ ⟩⟩([τ]ff ∧ [−{in(m) : m ∈ D}]ff)
⟨⟨in(m)⟩⟩([τ]ff ∧ [−out(m)]ff)
⟨⟨in(m)⟩⟩⟨⟨out(m)⟩⟩([τ]ff ∧ [−{in(n) : n ∈ D}]ff)
For instance, Protocol satisfies the second formula because of the following
transition.
There are two transition graphs associated with any process, as described in
Section 1.3: one built from the thin transitions --a--> and --τ-->, and the other from
the observable transitions ==a==> and ==>. The modal logic M is associated with the
first kind of graph. For the second kind, we introduce the observable modal logic MO,
whose formulas are defined inductively below.
Φ ::= tt | ff | Φ1 ∧ Φ2 | Φ1 ∨ Φ2 | [[K]]Φ | [[ ]]Φ | ⟨⟨K⟩⟩Φ | ⟨⟨ ⟩⟩Φ
K ranges over subsets of O. The logic MO is closed under complement, as the
reader can ascertain (for instance, ([[K]]Φ)^c is ⟨⟨K⟩⟩Φ^c and (⟨⟨ ⟩⟩Φ)^c is [[ ]]Φ^c).
Exercises 1. Show that both [[ ]] and ⟨⟨ ⟩⟩ are definable within the infinitary modal logic M∞ of
Section 2.2.
2. Show that ⟨⟨ ⟩⟩ is not definable in M by employing a similar argument to that
used in Proposition 1, but with respect to the dual formula ⟨⟨ ⟩⟩⟨a⟩tt.
3. The modal depth of a formula Ψ, written md(Ψ), is defined inductively as
follows.
md(tt) = 0 = md(ff)
md(Φ1 ∧ Φ2) = max{md(Φ1), md(Φ2)} = md(Φ1 ∨ Φ2)
md([K]Φ) = 1 + md(Φ) = md(⟨K⟩Φ)
Show that Proposition 1 remains true if |Ψ| is replaced with md(Ψ).
4. Let M− be the family of modal formulas of M that do not contain
occurrences of box modalities [J], for any J. Show that, for any non-empty
J, the modality [J] is not definable in M−.
5. Prove the following pair.
a. Crossing ⊭ [[car]][[train]](⟨⟨tcross⟩⟩tt ∧ ⟨⟨ccross⟩⟩tt)
b. Protocol ⊨ [[in(m)]](⟨⟨out(m)⟩⟩tt ∧ [[{in(m) : m ∈ D}]]ff)
6. Consider the following three vending machines.
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3
Show that Ven2 and Ven3 have the same set of observable failures, but that
Ven1 and Ven2 have different observable failures.
7. Show that (C | U)\{in, ok} and Ucop from Section 1.3 have the same MO
modal properties, but not the same M properties.
8. Show that, for all Φ ∈ MO, Cop ⊨ Φ iff Protocol ⊨ Φ.
9. An MO formula is realisable if there is a process that satisfies it. Which of the
following formulas are realisable?
2.5. Observable necessity and divergence
In the case of Protocol, the message m may also be continually lost during
transmission, and therefore may never be transmitted. This is not possible for Cop.
In contrast, Cls does not diverge, and so is said to converge. Following Hennessy
[26], let E↑ abbreviate that E diverges, and E↓ that E converges.
Neither convergence nor divergence is definable in the modal logics M and MO.
There is no formula Φ in these logics such that, for every process E, E ⊨ Φ
iff E↑. Consequently, we introduce another pair of modalities, [↓] and ⟨⟨↑⟩⟩,
analogous to [[ ]] and ⟨⟨ ⟩⟩, except that they contain information about divergence
and convergence.
A process satisfies [↓]Φ if it converges and realises [[ ]]Φ. Dually, a process
realises ⟨⟨↑⟩⟩Φ if it diverges or satisfies ⟨⟨ ⟩⟩Φ. Divergence and convergence are
expressible by means of these new modalities: [↓]tt expresses convergence, and
its dual ⟨⟨↑⟩⟩ff expresses divergence.
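Over a finite transition graph, E↑ is decidable: it holds exactly when a τ-cycle is reachable from E by τ transitions alone. A depth-first sketch, using the same dictionary representation as the earlier examples:

```python
def diverges(process, transitions):
    """E diverges iff an infinite run of "tau" transitions exists; over a
    finite graph, iff a tau-cycle is tau-reachable from E."""
    tau_next = {p: [q for a, q in ts if a == "tau"]
                for p, ts in transitions.items()}
    finished = set()

    def search(p, on_path):
        if p in on_path:
            return True      # found a tau-cycle
        if p in finished:
            return False     # fully explored earlier; no cycle from here
        finished.add(p)
        return any(search(q, on_path | {p}) for q in tau_next.get(p, []))

    return search(process, frozenset())

transitions = {
    "Div": [("tau", "Div")],   # Div def= tau.Div: diverges
    "Cl":  [("tick", "Cl")],   # no tau transitions at all: converges
}
print(diverges("Div", transitions))  # True
print(diverges("Cl", transitions))   # False
```

With `diverges` in hand, [↓]Φ can be evaluated as "not diverges and every τ-derivative satisfies Φ", mirroring the definition above.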
Let MO↓ be the observable modal logic together with the two new modalities:
Φ ::= tt | ff | Φ1 ∧ Φ2 | Φ1 ∨ Φ2 | [[K]]Φ | [[ ]]Φ | [↓]Φ |
      ⟨⟨K⟩⟩Φ | ⟨⟨ ⟩⟩Φ | ⟨⟨↑⟩⟩Φ
Within MO↓, the strong observable inevitability that a must (and will) happen next
is expressible as
[↓]⟨⟨−⟩⟩tt ∧ [[−a]]ff.
The initial conjunct precludes the possibility that τ could occur forever.
Consequently, Cls ⊭ [↓]⟨⟨−⟩⟩tt ∧ [[−tick]]ff.
Example 2  The difference between Cop and Protocol as described in Example 1 is expressed
as [[in(m)]]([↓]⟨⟨−⟩⟩tt ∧ [[−out(m)]]ff), for any m. Protocol fails to have this
property because the message m may be continually lost during transmission, and
therefore may never be output.
Ancillary modalities can be defined in MO↓ as follows.
[↓K]Φ def= [↓][[K]]Φ
[K↓]Φ def= [[K]][↓]Φ
[↓K↓]Φ def= [↓][[K]][↓]Φ
Exercises 1. Show that the modalities [↓] and ⟨⟨↑⟩⟩ are not definable in MO.
2. Show that Cop realises the property
[[in(m)]]([↓]⟨⟨−⟩⟩tt ∧ [[−out(m)]]ff)
and Protocol fails to have this property.
3. Prove that MO↓ is closed under complement.
4. Prove that, for all Φ in MO, 0 ⊨ Φ iff Div ⊨ Φ.
5. Prove that, for all Φ in MO↓, Ct0 ⊨ Φ iff Count ⊨ Φ.
6. Which of the following MO↓ formulas are realisable?
a. [[K]]([↓]⟨⟨tick⟩⟩tt ∧ ⟨⟨↑⟩⟩⟨⟨tick⟩⟩tt)
b. [K↓]⟨⟨↑⟩⟩[[tick]]ff
c. [[K]](⟨⟨↑⟩⟩⟨⟨tick⟩⟩tt ∧ [[ ]][[tick]]ff)
d. [K↓](⟨⟨↑⟩⟩⟨⟨tick⟩⟩tt ∧ [[ ]][[tick]]ff)
3  Bisimulations
models of distinct kinds of objects. At all levels of description they differ: in their
algebraic expressions, their action names, their flow graphs and their transition
graphs. A concrete manifestation of their difference is their initial capabilities.
The clock Cl can perform the (observable) action tick, whereas Ven cannot.
Syntactic differences alone should not be sufficient grounds for distinguishing
processes. It is important to allow the possibility that two process descriptions may
be equivalent, even though they may differ markedly in their level of detail. An
example is that of the two descriptions of a counter, Ct0 of Figure 1.4 and Count of
Section 1.6. An account of process equivalence has practical significance when
one views process expressions both as specifications and as descriptions of imple-
mentations. Ct0 is a specification (even the requirement specification) of a counter,
whereas the finite description Count, with its very different structure, can be seen
as a description of a possible implementation. Similarly, the buffer Cop can be
seen as a specification of the process Protocol of Section 1.2. In this context,
an account of process equivalence could tell us when an implementation meets its
specification¹.
Example 1  A stark description of the slot machine SMn of Figure 1.15 is the process SMn^I:
SMi^I def= slot.(τ.loss.SM_{i+1}^I + Σ{τ.win(y).SM_{(i+1)−y}^I : 1 ≤ y ≤ i + 1}),
which carries no assumptions as to how the slot machine is to be built from separate
but interacting concurrent components.
The two counters Ct0 and Count have the same flow graph. Not only do they
have the same initial observable capabilities, but this feature is also preserved as
observable actions are performed. There is a similarity between their observable
transition graphs, a resemblance not immediately easy to define. A much simpler
case is that of the two clocks, Cl def= tick.Cl and Cl2 def= tick.tick.Cl2, pictured
in Figure 3.1. Although they have different transition graphs, whatever transitions
one of these clocks makes can be matched by the other, and the resulting processes
also retain this property. An alternative basis for suggesting that these two clocks
are equivalent starts with the observation that Cl and tick.Cl should count as
equivalent expressions, because Cl is defined as tick.Cl. An important principle
Figure 3.1. The transition graphs of the clocks Cl and Cl2.
1 Similar comments could be made about refinement, where we would expect an ordering on processes.
is that, if two expressions are equivalent, then replacing one with the other in some
other expression should preserve equivalence. Replacing Cl with tick.Cl in the
expression E should result in a process expression equivalent to E. In particular, if
E is tick.Cl, then tick.tick.Cl and tick.Cl should count as equivalent. Because
particular names of processes are unimportant, this should imply that Cl and Cl2
are also equivalent.
The extensionality principle here is that an equivalence should also be a con-
gruence. That is, the equivalence should be preserved by the various process
combinators. For instance, if E and F are equivalent, then E | G should be
equivalent to F | G and E\J should be equivalent to F\J, and so on for all
the combinators introduced in Chapter 1. If the decision component D of the slot
machine SMn breaks down, then replacing it with an equivalent component should
not affect the overall behaviour of the system (up to equivalence).
Clearly, if two processes have different initial capabilities, then they should not
be deemed equivalent. Distinguishability of processes can be extended to many
other features, such as their initial necessities, their (observable) traces, or their
completed traces². A simple technique to guarantee that an equivalence is a congru-
ence is as follows. First, choose some simple properties as the basic distinguishable
features. Second, count two processes as equivalent if, whenever they are placed in
a process context, the resulting processes have the same basic properties. A process
context is a process expression "with a hole in it," such as (E | [ ])\J, where [ ]
is the "hole." This approach is sensitive to three important considerations. First is
the choice of what counts as a basic distinguishable property, and whether it refers
to observable behaviour as determined by the ==a==> and ==> transitions, or to be-
haviour as presented by the single arrow transitions. Second is the choice of process
operators that are permitted in the definition of a process context. Lastly, there is
the question whether the resulting congruence can be characterized independently
of its definition as the equivalence preserved by all process contexts.
Interesting work has been done on this topic, mostly, however, with respect to
the behaviour of processes as determined by the single thin transitions --a-->. Candi-
dates for basic distinguishable features include traces and completed traces (given,
respectively, by formulas of the form ⟨a1⟩...⟨an⟩tt and ⟨a1⟩...⟨an⟩[−]ff). There
are elegant results by Bloom et al., Groote and Vaandrager [7, 25, 24] that isolate
congruences for traces and completed traces. These cover very general families
of process operators whose behavioural meaning is governed by the permissible
format of their transition rules. The resulting congruences are independently de-
finable as equivalences³. Results for observable behaviour include those for the
failures model of CSP [31], which takes the notion of observable failure as basic.
2 A trace w for E is completed if there is an F such that E ==w==> F and F is unable to perform any action.
3 They include failures equivalence (expressed in terms of formulas of the form ⟨a1⟩...⟨an⟩[K]ff), two-
thirds bisimulation, two-nested simulation equivalence, and bisimulation equivalence. Bisimulation equivalence
is discussed at length in later sections.
Figure 3.2. Three similar vending machines.
Ven1 def= 1p.1p.(tea.Ven1 + coffee.Ven1)
Ven2 def= 1p.(1p.tea.Ven2 + 1p.coffee.Ven2)
Ven3 def= 1p.1p.tea.Ven3 + 1p.1p.coffee.Ven3
Related results are contained in the testing framework of De Nicola and Hennessy
[16, 26], where processes are tested for what they may and must do.
Example 2  Consider the three similar vending machines in Figure 3.2 (where we have left out
the names of the intermediate processes). These machines have the same
(observable) traces. Assume a user
Use def= 1p.1p.tea.ok.0,
who only wishes to drink a single tea by offering coins and, having done so, ex-
presses visible satisfaction as the action ok. For each of the three vending machines,
we can build the process (Veni | Use)\K, where K is the set {1p, tea, coffee}.
If i = 1, then there is a single completed trace τ τ τ ok:
(Ven1 | Use)\K  --τ--> --τ--> --τ--> --ok-->  (Ven1 | 0)\K
The user must then express satisfaction after some silent activity. In the other two
cases, there is another completed trace τ τ.
With respect to the failures model of CSP [31] and the testing framework of
[16, 26], Ven2 and Ven3 are equivalent. These two processes obey the same failure
formulas. Finer equivalences distinguish them on the basis that, once a coin has
been inserted in Ven3, any possible successful collection of tea is already decided.
Imagine that, after a single coin has been inserted, the resulting process is copied
for a number of users. In the case of Ven3, all these users must express satisfaction,
or all of them must be precluded from doing so. Let Rep be a process operator
that replicates successor processes: there is a single transition rule for Rep,
allowing Rep(E) --a--> F | F whenever E --a--> F.
(Rep(Ven2 | Use))\K
 --τ--> (E | 1p.tea.ok.0 | E | 1p.tea.ok.0)\K
 --τ--> (tea.Ven2 | tea.ok.0 | E | 1p.tea.ok.0)\K
 --τ--> (tea.Ven2 | tea.ok.0 | coffee.Ven2 | tea.ok.0)\K
 --τ--> (Ven2 | ok.0 | coffee.Ven2 | tea.ok.0)\K
 --ok--> (Ven2 | 0 | coffee.Ven2 | tea.ok.0)\K
(Rep(Ven3 | Use))\K is unable to perform this completed trace, as can be seen
from its two possible initial transitions. First is the transition
(Rep(Ven3 | Use))\K
 --τ--> (1p.tea.Ven3 | 1p.tea.ok.0 | 1p.tea.Ven3 | 1p.tea.ok.0)\K,
which must continue with two ok transitions in any completed trace. The second
is
(Rep(Ven3 | Use))\K
 --τ--> (1p.coffee.Ven3 | 1p.tea.ok.0 | 1p.coffee.Ven3 | 1p.tea.ok.0)\K.
Exercises 1. a. For each i, 1 ≤ i ≤ 3, draw the transition graph of the following process:
(Rep(Veni | Use2))\K.
b. Show the following.
3. Design a process context that distinguishes between the two vending machines
V and U below (by having different completed traces).
Player R can then choose a transition, and player V will be unable to match it. We
call such positions "R-wins." A play is won by the refuter if it reaches an R-win
position. Any other play counts as a win for player V. Consequently, the verifier
wins if the play is infinite, or if the play reaches a position (En, Fn) and neither
process has an available transition. In both these circumstances, the refuter has
been unable to find a difference between the starting processes.
Example 1  The verifier wins any play of G(Cl, Cl2). A play proceeds
forever, irrespective of the component the refuter chooses to make her move from.
4 For instance, if E has two transitions E --a--> E1 and E --a--> E2, then the observer is able to choose either
of them, but there is not a "testing" process ā.F that can guarantee this choice in the context (ā.F | E)\{a}: the
two results (F | E1)\{a} and (F | E2)\{a} are equally likely after synchronization on a.
Figure 3.3. The game graph for G(Cl, Cls) (legend: round vertices, player R to
move; rectangular vertices, player V to move; doubly encircled vertices, R-wins).
then player V can avoid defeat. Figure 3.3 depicts the game graph for
G(Cl, Cls). Round vertices are positions from which the refuter moves, and rect-
angular vertices are positions from which the verifier moves. Edges of a vertex are
the possible moves that a player can make from that vertex. A V-vertex is labelled
with the transition player R has chosen⁵. This information constrains the choice of
move that player V can make, since she must respond with a corresponding tran-
sition from the other component. Vertices encircled twice are R-wins. The game
graph represents all possible plays of the game. It begins with a token on the initial
vertex (Cl, Cls). A play is the movement of the token around the graph. If the
token is at an R-vertex, the refuter moves it, and if it is at a V-vertex, the verifier
moves it. If the token reaches an R-win vertex, the game stops and the refuter wins.
5 We suppress the position from which this transition has been chosen, as it can easily be seen from the graph.
3.2. Interactive games
Example 2 shows that different plays of a game can have different winners.
Nevertheless, for each game one of the players is able to win any play irrespective
of what moves her opponent makes. To make this precise, we introduce the notion
of a strategy. A strategy for a player is a family of rules that tell the player how to
move. For the refuter, a rule has the form "if the play so far is (E0, F0) ... (Ei, Fi),
then choose transition t," where t is either Ei -a-> E_{i+1} or Fi -a-> F_{i+1}. Because
the verifier responds to the refuter's choice of transition, a rule for player V has the
form "if the play so far is (E0, F0) ... (Ei, Fi), and player R has chosen transition
t, then choose transition t'," where t' is a corresponding transition of the other
process. However, it turns out that we only need to consider history-free strategies
whose rules do not depend upon previous positions in the play. For player R, a rule
is therefore of the form

at position (E, F) choose transition t.
Example 3 Player R's winning strategy for the game G(Cl, Cl5) consists of the single rule "at
(Cl, Cl5) choose Cl5 -tick-> 0." This has the effect of reducing the game graph of
Figure 3.3 to the smaller subgraph in Figure 3.4, as redundant player R choices
are removed.
Example 4 The game graph for G(Ven2, Ven3), two of the vending machines of Figure 3.2, is
pictured in Figure 3.5. A winning strategy for the refuter consists of the following
two rules (where E, G and H are from Figure 3.5).

at (Ven2, Ven3) choose Ven3 -1p-> G
at (E, G) choose E -1p-> H
The reader is invited to find another winning strategy.
[Figure 3.4: the game graph for G(Cl, Cl5) reduced by player R's winning strategy.]

[Figure 3.5: the game graph for G(Ven2, Ven3), with

E ≡ 1p.tea.Ven2 + 1p.coffee.Ven2
F ≡ 1p.coffee.Ven3        G ≡ 1p.tea.Ven3
H ≡ coffee.Ven2           J ≡ tea.Ven2
K ≡ coffee.Ven3           L ≡ tea.Ven3 ]
For each game, one of the players has a winning strategy. This we shall prove
below. The strategy relies on defining sets of pairs of processes iteratively, using
ordinals as indices. Ordinals are ordered in increasing size, as follows.

0, 1, ..., ω, ω + 1, ..., ω + ω, ω + ω + 1, ...

The initial limit ordinal (that is, one without an immediate predecessor) is ω, and
ω + 1 is its successor. The next limit ordinal is ω + ω.
Theorem 1 For any game G(E, F), either player R or player V has a history-free winning
strategy.

Proof. Consider the game G(E, F). The set of possible player R positions P is
the set

{(E', F') : E =w=> E' and F =w=> F' for some word w},

where =w=> is the extended transition relation defined in Section 1.3. P contains all
possible positions of the game G(E, F) in which player R moves next. Let W ⊆ P
be the subset of positions that are R-wins. We now define the subset of positions
from which player R can force a win by eventually entering W. This set Force is
defined iteratively, starting with 1 and using ordinals, where λ below is a limit ordinal.

Force¹ = W
Force^{α+1} = Force^α ∪ {(G, H) ∈ P :
    ∃G'. G -a-> G' and ∀H'. if H -a-> H' then (G', H') ∈ Force^α,
    or ∃H'. H -a-> H' and ∀G'. if G -a-> G' then (G', H') ∈ Force^α}
Force^λ = ∪{Force^α : α < λ}

Lastly we define⁶ Force as the following subset of positions.

Force = ∪{Force^α : α is an ordinal}
If player V wins the game G(E, F), then we say that process E is "game equiv-
alent" to process F, in which case player R cannot detect a difference in behaviour
between the processes E and F. Game equivalence is indeed an equivalence
relation.

⁶ The processes considered in this work have countable transition graphs, so the set P of positions is countable;
therefore, we need only consider ordinals whose cardinality is at most that of ℕ.
1. If "at (E', F'), when player R has chosen E' -a-> E'', choose t'" is in σ, and
"at (F', G'), when player R has chosen t', choose t" is in π, then "at (E', G'),
when player R has chosen E' -a-> E'', choose t" is in π ∘ σ.

2. If "at (F', G'), when player R has chosen G' -a-> G'', choose t'" is in π, and
"at (E', F'), when player R has chosen t', choose t" is in σ, then "at (E', G'),
when player R has chosen G' -a-> G'', choose t" is in π ∘ σ.

We leave as an exercise that π ∘ σ is a winning strategy for player V for the game
G(E, G). □
If E and F are finite state processes, then the proof of Theorem 1 provides a
straightforward algorithm for deciding whether E is game equivalent to F. Assume
that the number of processes in the transition graphs for both E and F is at most
n. One first computes the set P of possible player R positions, {(E', F') : E =w=>
E' and F =w=> F'}. The size of this set is therefore bounded by n². One then picks
out Force¹, the subset W ⊆ P of R-wins. Next, one defines iteratively the sets
Force^{i+1} for i ≥ 1, by adding pairs from P − Force^i obeying the requirement.
The algorithm stops as soon as the set Force^{i+1} is the same set as Force^i. This set
is then the set Force. It is clear that there can be at most n² iterations before this
happens. If (E, F) ∉ Force, then E is game equivalent to F; otherwise, they are
not game equivalent.
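For finite-state processes this iteration can be run directly. The sketch below is our own Python encoding, not the book's: transition graphs are dictionaries, and Force is grown from the R-win positions exactly as in the proof of Theorem 1, using the clocks Cl, Cl2 and Cl5 of Section 3.1 as data.

```python
# Our own encoding of the clocks: Cl = tick.Cl, Cl2 = tick.tick.Cl2,
# Cl5 = tick.Cl5 + tick.0
lts = {
    "Cl": [("tick", "Cl")],
    "Cl2": [("tick", "tick.Cl2")], "tick.Cl2": [("tick", "Cl2")],
    "Cl5": [("tick", "Cl5"), ("tick", "0")],
    "0": [],
}

def reachable(s):
    seen, todo = {s}, [s]
    while todo:
        for _, t in lts[todo.pop()]:
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def succs(s, a):
    return [t for b, t in lts[s] if b == a]

def actions(s):
    return {a for a, _ in lts[s]}

def game_equivalent(e, f):
    # P: all player R positions; W: R-wins (one side has an unmatchable action)
    positions = {(g, h) for g in reachable(e) for h in reachable(f)}
    force = {(g, h) for (g, h) in positions if actions(g) != actions(h)}
    while True:
        new = set()
        for g, h in positions - force:
            # R wins from (g, h) if some transition leaves V only responses in force
            if any(all((g2, h2) in force for h2 in succs(h, a))
                   for a in actions(g) for g2 in succs(g, a)) or \
               any(all((g2, h2) in force for g2 in succs(g, a))
                   for a in actions(h) for h2 in succs(h, a)):
                new.add((g, h))
        if not new:
            return (e, f) not in force
        force |= new

print(game_equivalent("Cl", "Cl2"))  # → True
print(game_equivalent("Cl", "Cl5"))  # → False
```

The loop stabilizes after at most n² rounds, matching the bound above.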
Exercises 1. Draw the game graphs for G(Ven1, Ven2) and G(Ven1, Ven3), where the
vending machines are defined in Figure 3.2.
2. Show that the pair of vending machines V and U, below, are not game
equivalent.

V ≝ 1p.tea.V + 1p.coffee.V
U ≝ 1p.(tea.U + coffee.U)
3. Show that Player V has a winning strategy for the game
b. List all the pairs of vending machines from Ven1, Ven2, and Ven3 of
the previous section for which player V has a winning strategy for the
S-game.
c. E is S-equivalent to F if player V has a winning strategy for the two
S-games (E, F) and (F, E). Prove that S-equivalence is an equivalence
relation. Give an example of two processes that are S-equivalent, but not
game equivalent.
Example 1 Assume Cl, Cl2 and Cl5 are the clocks of the previous section. The relation B
= {(Cl, Cl2), (Cl, tick.Cl2)} is a bisimulation. For example, if Cl -tick-> Cl then
Cl2 -tick-> tick.Cl2 and the resulting pair of processes belongs to B. The relation
{(Cl, Cl5)} is not a bisimulation because of the transition Cl5 -tick-> 0. Adding
(Cl, 0) does not rectify this because the transition Cl -tick-> Cl cannot then be
matched by a transition of the process 0.
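Checking that a given finite relation satisfies the two bisimulation conditions is mechanical. A minimal sketch (our own Python encoding of the clocks' transitions, not from the text):

```python
# Cl = tick.Cl, Cl2 = tick.tick.Cl2, Cl5 = tick.Cl5 + tick.0
trans = {
    "Cl": [("tick", "Cl")],
    "Cl2": [("tick", "tick.Cl2")], "tick.Cl2": [("tick", "Cl2")],
    "Cl5": [("tick", "Cl5"), ("tick", "0")],
    "0": [],
}

def is_bisimulation(B):
    """Test both transfer conditions for every pair in B."""
    for e, f in B:
        for a, e2 in trans[e]:
            if not any(b == a and (e2, f2) in B for b, f2 in trans[f]):
                return False
        for a, f2 in trans[f]:
            if not any(b == a and (e2, f2) in B for b, e2 in trans[e]):
                return False
    return True

print(is_bisimulation({("Cl", "Cl2"), ("Cl", "tick.Cl2")}))  # → True
print(is_bisimulation({("Cl", "Cl5")}))                      # → False
print(is_bisimulation({("Cl", "Cl5"), ("Cl", "0")}))         # → False
```

The third call confirms the remark above: adding (Cl, 0) does not repair the relation, since 0 cannot match Cl -tick-> Cl.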
⁷ They also occur in a slightly different form in the theory of modal logic as zig-zag relations; see Benthem
[6].
3.3. Bisimulation relations
Example 2 The following processes are not bisimilar: a.(b.0 + c.0) and a.b.0 + a.c.0. There
cannot be a bisimulation relating the pair because it would have to include either
(b.0 + c.0, b.0) or (b.0 + c.0, c.0).

The parallel operator | and the sum operator + are both commutative and
associative with respect to bisimulation equivalence. This means that the following
hold for arbitrary processes E, F and G.

E | F ~ F | E        (E | F) | G ~ E | (F | G)
E + F ~ F + E        (E + F) + G ~ E + (F + G)
Ct_{i+1} ≝ up.Ct_{i+2} + down.Ct_i,  i ≥ 0.

A bisimulation that contains the pair (Cnt, Ct0) has infinite size because the pro-
cesses are infinite state. Let P_i be the following families of processes for i ≥ 0
(when brackets are dropped between parallel components).

P0 = {Cnt | 0^j : j ≥ 0}
P_{i+1} = {E | 0^j | down.0 | 0^k : E ∈ P_i and j ≥ 0 and k ≥ 0}

This transition is matched by Ct0 -up-> Ct1 because Cnt | down.0 | 0^j ∈ P1. The
other case, when i > 0, is left as an exercise for the reader.
The proof that (C | U)\{in, ok} ~ (C | C | U)\{in, ok} is given by the following
bisimulation relation B.

{((C | U)\{in, ok}, (C | C | U)\{in, ok})} ∪
{((C | in(v).ok.U)\{in, ok}, (C | C | in(v).ok.U)\{in, ok}) : v ∈ D} ∪
{((out(v).ok.C | ok.U)\{in, ok}, (out(v).ok.C | C | ok.U)\{in, ok}) : v ∈ D} ∪
{((out(v).ok.C | ok.U)\{in, ok}, (C | out(v).ok.C | ok.U)\{in, ok}) : v ∈ D} ∪
{((ok.C | ok.U)\{in, ok}, (ok.C | C | ok.U)\{in, ok})} ∪
{((ok.C | ok.U)\{in, ok}, (C | ok.C | ok.U)\{in, ok})}

We leave the reader to check that B is indeed a bisimulation.
Proposition 3 If E ~ F, then for any process G, for any set of actions K, for any action a and
for any renaming function f:

1. a.E ~ a.F
2. E + G ~ F + G
3. E | G ~ F | G
4. E\K ~ F\K
5. E[f] ~ F[f]
Proof. We show case 3 and leave the other cases for the reader to prove. The
relation B = {(E | G, F | G) : E ~ F} is a bisimulation. Assume that
E | G -a-> E' | G'. First, suppose E -a-> E' and G' = G. As E ~ F, F -a-> F'
and E' ~ F' for some F'. Therefore F | G -a-> F' | G, and so by definition ((E' |
G), (F' | G)) ∈ B. Next, suppose G -a-> G' and E' = E. So F | G -a-> F | G', and by
definition ((E | G'), (F | G')) ∈ B. The last case is that E | G -τ-> E' | G' and
E -a-> E' and G -ā-> G'. However, F -a-> F' for some F' such that E' ~ F',
so F | G -τ-> F' | G', and therefore ((E' | G'), (F' | G')) ∈ B. The argument is
symmetric for a transition F | G -a-> F' | G'. □
a. B⁻¹ is a bisimulation
b. B ∩ S is a bisimulation
c. B ∪ S is a bisimulation
d. −B is a bisimulation
e. B ∘ S is a bisimulation

where

B⁻¹ = {(F, E) : (E, F) ∈ B}
−B = {(E, F) : (E, F) ∉ B}
B ∘ S = {(E, G) : there is an F. (E, F) ∈ B and (F, G) ∈ S}
6. Prove the remaining cases of Proposition 3.
7. Define a process operator for which bisimulation equivalence is not a
congruence.
8. A relation B between processes is a simulation (half of a bisimulation) pro-
vided that, whenever (E, F) ∈ B and a ∈ A, if E -a-> E', then F -a-> F'
and (E', F') ∈ B for some F'. E and F are simulation equivalent provided
that there are simulations B and S such that (E, F) ∈ B and (F, E) ∈ S.
a. List all the pairs (E, F) of vending machines from Ven1, Ven2, and Ven3
of Figure 3.2 for which there is a simulation B containing (E, F).
b. Give an example of two processes which are simulation equivalent but
not bisimilar.
c. Show that E and F are simulation equivalent if, and only if, they are
S-equivalent (defined in the exercises of the previous section).
9. For each ordinal α, the notion of α-equivalence, ~α, is defined as follows.
First, the base case: E ~0 F for all E and F. Next, for a successor ordinal,
E ~α+1 F iff for any a ∈ A,

if E -a-> E' then for some F'. F -a-> F' and E' ~α F'
if F -a-> F' then for some E'. E -a-> E' and E' ~α F'.

Lastly, for a limit ordinal λ, E ~λ F if, and only if, E ~α F for all α < λ.

a. Give an example of a pair of processes E, F such that E ~3 F but
E ≁4 F.
b. Consider the game G(E, F) as defined in the previous section. Show that,
for any possible player R position (G, H) of this game, G ≁α H iff
(G, H) ∈ Force^α.
c. Prove that E ~ F iff E ~α F for all α.
d. E is image-finite if for every word w the set {E' : E =w=> E'} has finite
size. Show that, if E and F are image-finite, then E ~ F iff E ~n F for
all n ≥ 0.
3.4. Modal properties and equivalences

⁹ Similarly a "refinement" preorder ⊑r is definable as follows: E ⊑r F iff for all Φ ∈ Γ, if E ⊨ Φ then
F ⊨ Φ.
clear. For the inductive step, the proof proceeds by case analysis. First, let Φ be
Ψ1 ∧ Ψ2 and assume that the result holds for Ψ1 and Ψ2. By the definition of
the satisfaction relation, G ⊨ Φ iff G ⊨ Ψ1 and G ⊨ Ψ2, iff by the induction
hypothesis H ⊨ Ψ1 and H ⊨ Ψ2, and therefore iff H ⊨ Φ. A similar argument
applies to Ψ1 ∨ Ψ2. Next, assume that Φ is [K]Ψ and G ⊨ Φ. Therefore, for any
G' such that G -a-> G' and a ∈ K, it follows that G' ⊨ Ψ. To show that H ⊨ Φ,
let H -a-> H' (with a ∈ K). However, we know that for some G' there is the
transition G -a-> G' and G' ~ H', so by the induction hypothesis H' ⊨ Ψ, and
therefore H ⊨ Φ. The other modal cases are left as an exercise for the reader. □
This result tells us that two bisimilar processes have the same capabilities, the
same necessities and the same divergence potentials. Although the converse of
Proposition 1 does not hold in general (see Example 1, below), it does hold in the
case of a restricted set of processes. A process E is immediately image-finite if, for
each a ∈ A, the set {F : E -a-> F} is finite. For each a ∈ A, E has only finitely
many a-transitions. E is image-finite if every member of {F : ∃w ∈ A*. E =w=> F}
is immediately image-finite. That is, a process is image-finite if all processes in its
transition graph are immediately image-finite. With this restriction to image-finite
processes, the converse of Proposition 1 holds.
Proposition 2 If E and F are image-finite and E =Γ F, then E ~ F.

Proof. It suffices to prove the result for the case when Γ is the set of modal
formulas M of Section 2.1. We show that the following relation is a bisimulation.

{(E, F) : E =M F and E, F are image-finite}

But suppose not. Then, without loss of generality, G =M H for some G
and H, and G -a-> G' for some a and G', but G' ≠M H' for all H' such that
H -a-> H'. There are two possibilities. First, the set {H' : H -a-> H'} is empty.
But G ⊨ ⟨a⟩tt because G -a-> G', and H ⊭ ⟨a⟩tt, which contradicts that
G =M H. Next, the set {H' : H -a-> H'} is non-empty. However, it is finite
because of image-finiteness, and therefore assume it is {H1, ..., Hn}. As
G' ≠M Hi for each i : 1 ≤ i ≤ n, there are formulas Φ1, ..., Φn
such that G' ⊨ Φi and Hi ⊭ Φi. (Here we use the fact that M is closed under
complement; see Section 2.2.) Let Ψ be the formula Φ1 ∧ ... ∧ Φn. Clearly
G' ⊨ Ψ, and Hi ⊭ Ψ for each i, as it fails the ith component of the conjunction.
Therefore, G ⊨ ⟨a⟩Ψ because G -a-> G', and H ⊭ ⟨a⟩Ψ because each Hi fails
to have property Ψ. But this contradicts that G =M H. Therefore, the relation =M
between image-finite processes is a bisimulation. □
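Read contrapositively, the proof also suggests how to extract a distinguishing formula for two non-bisimilar processes (compare exercise 1b below). The sketch is our own Python encoding of Example 2 of Section 3.3; the naive recursion shown terminates on acyclic graphs like this one, while cyclic image-finite systems need the stratification of exercise 9.

```python
# Our encoding of p = a.(b.0 + c.0) and q = a.b.0 + a.c.0
trans = {
    "p": [("a", "p1")], "p1": [("b", "0"), ("c", "0")],
    "q": [("a", "q1"), ("a", "q2")],
    "q1": [("b", "0")], "q2": [("c", "0")],
    "0": [],
}

def succs(s, a):
    return [t for b, t in trans[s] if b == a]

# Greatest fixpoint: repeatedly delete pairs violating the bisimulation conditions.
pairs = {(e, f) for e in trans for f in trans}
changed = True
while changed:
    changed = False
    for e, f in list(pairs):
        ok = all(any((e2, f2) in pairs for f2 in succs(f, a)) for a, e2 in trans[e]) \
             and all(any((e2, f2) in pairs for e2 in succs(e, a)) for a, f2 in trans[f])
        if not ok:
            pairs.discard((e, f))
            changed = True

def distinguish(e, f):
    """A formula true at e but false at f, for (e, f) outside `pairs`."""
    for a, e2 in trans[e]:
        if all((e2, f2) not in pairs for f2 in succs(f, a)):
            # <a>(conjunction separating e2 from each a-derivative of f)
            parts = [distinguish(e2, f2) for f2 in succs(f, a)] or ["tt"]
            return f"<{a}>({' & '.join(parts)})"
    # the distinguishing move is on f's side: use closure under complement
    return f"~{distinguish(f, e)}"

print(distinguish("p", "q"))  # → <a>(<c>(tt) & <b>(tt))
```

The result mirrors the discussion of Example 2: p can reach a state offering both b and c, which q cannot match.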
Example 1 The need for image finiteness in Proposition 2 is illustrated by the following
example. Consider the following family of clocks Cl^i for i > 0,

Cl^1 ≝ tick.0
Cl^{i+1} ≝ tick.Cl^i,  i ≥ 1

and the clock Cl from Section 1.1. Let E be the process Σ{Cl^i : i ≥ 1}, and let
F be the process E + Cl.
A variety of the process equivalences in the linear and branching time spec-
trum, as summarized by Glabbeek in [22], can be presented in terms of having the
same modal properties drawn from sublogics of M (when restricted to image-finite
processes). Also, these equivalences can often be presented game theoretically by
imposing restrictions on possible next moves in a play.
Exercises 1. Recall the definition of α-equivalence from the exercise of the previous section.
Consider the restricted case when α ∈ ℕ. First is the base case, E ~0 F for
all E and F. Next, for a successor, E ~i+1 F iff for any a ∈ A,

if E -a-> E' then for some F'. F -a-> F' and E' ~i F'
if F -a-> F' then for some E'. E -a-> E' and E' ~i F'.
Another idea is that of modal depth (defined in the exercises of Section 2.4).
The modal depth of an M formula is the maximum embedding of modal
operators within it. We let md(Φ) be the modal depth of Φ, defined as follows:

md(tt) = 0 = md(ff)
md(Φ ∧ Ψ) = max{md(Φ), md(Ψ)} = md(Φ ∨ Ψ)
md([K]Φ) = 1 + md(Φ) = md(⟨K⟩Φ)

Let Mn be the set of modal M formulas Φ such that md(Φ) ≤ n.
a. Prove that E ~n F iff E =Mn F.
b. Let E and F be arbitrary image-finite processes, and assume that E ≁ F.
Present a method that constructs a formula Φ ∈ M distinguishing
between E and F, that is, for which E ⊨ Φ and F ⊭ Φ.
2. Prove Proposition 3.
3. Give a formula Φ of M∞ such that E ⊨ Φ and F ⊭ Φ when E and F are
the processes from Example 1.
4. Let Γ be the subset of M formulas that do not contain any occurrence of a [K]
modality. If E and F are image-finite, prove that E =Γ F iff E and F are
simulation equivalent (defined in an exercise of the previous section).
5. A relation B between processes is a 2/3-bisimulation (see Larsen and Skou
[37]) provided that, whenever (E, F) ∈ B and a ∈ A,

if E -a-> E' then for some F'. F -a-> F' and (E', F') ∈ B
if F -a-> F' then for some E'. E -a-> E'.

Let Γ be the subset of M formulas with the restriction that, for any subformula
[K]Ψ, the formula Ψ is ff. Prove the following for image-finite E and F:
E =Γ F iff there are 2/3-bisimulations B and S with (E, F) ∈ B and (F, E) ∈
S.
6. Let Γ be the subset of M formulas that do not contain an occurrence of a
[K] modality within the scope of a ⟨J⟩ modality. Provide a definition of an
interactive game G'(E, F) such that player V wins G'(E, F) iff E =Γ F,
assuming that E and F are image-finite.
Example 1 The processes (C | U)\{in, ok} and Ucop from Section 1.3 are observationally
game equivalent. An example play is in Figure 3.6, where the refuter moves from
the round vertices and the verifier from the rectangular vertices. The reader is
invited to explore other possible plays.

Underpinning observational game equivalence is the existence of an observable
bisimulation relation, whose definition is as in Section 3.3 except with respect to
observable transitions =a=>.

• if E =a=> E' then F =a=> F' for some F' such that (E', F') ∈ B, and
• if F =a=> F' then E =a=> E' for some E' such that (E', F') ∈ B
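The observable transitions =a=> underlying this definition can be computed from the raw transitions by saturating with τ-steps on both sides. A minimal sketch (our own hypothetical Python encoding, with `tau` marking silent actions):

```python
# A small toy system of our own: E -tau-> E1 -a-> E2 -tau-> E2
trans = {
    "E": [("tau", "E1")],
    "E1": [("a", "E2")],
    "E2": [("tau", "E2")],
}

def eps(s):
    """States reachable from s by zero or more tau transitions."""
    seen, todo = {s}, [s]
    while todo:
        for a, t in trans.get(todo.pop(), []):
            if a == "tau" and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def weak_succs(s, a):
    """E =a=> E' for a != tau: tau*, then a, then tau*."""
    out = set()
    for s1 in eps(s):
        for b, s2 in trans.get(s1, []):
            if b == a:
                out |= eps(s2)
    return out

print(weak_succs("E", "a"))  # → {'E2'}
```

For a silent action the appropriate relation is just `eps`, that is, zero or more τ-steps.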
[Figure 3.6: an example play of the observational game, where E ≡ (C | U)\{in, ok} and F ≡ Ucop.]
The proof of this result is the same as for Proposition 2 of Section 3.3, except
with respect to observable transitions. If processes are bisimilar, then they are also
observably bisimilar, as the next result shows.

Proposition 2 If E ~ F, then E ≈ F.

A direct proof that two processes are observably bisimilar consists of exhibiting
an observable bisimulation relation containing them.
3.5. Observable bisimulations
{(Protocol, Cop)} ∪
{((Send1(m) | Medium | ok.Receiver)\J, Cop) : m ∈ D} ∪
{((sm(m).Send1(m) | Medium | Receiver)\J, out(m).Cop) : m ∈ D} ∪
{((Send1(m) | Med1(m) | Receiver)\J, out(m).Cop) : m ∈ D} ∪
{((Send1(m) | Medium | out(m).ok.Receiver)\J, out(m).Cop) : m ∈ D} ∪
{((Send1(m) | ms.Medium | Receiver)\J, out(m).Cop) : m ∈ D}

The reader is invited to establish that B is an observable bisimulation.
the operators of CCS, except sum. It therefore turns out that the equivalence ≈c
can be described independently of process contexts in terms of transitions; for it
is only the initial preemptive τ transition that causes problems.

Proposition 5 E ≈c F iff

1. E ≈ F, and
2. if E -τ-> E', then F -τ-> F1 =ε=> F' and E' ≈ F' for some F1 and F', and
3. if F -τ-> F', then E -τ-> E1 =ε=> E' and E' ≈ F' for some E1 and E'.

This proposition can be viewed as the criterion for when E ≈c F holds. If E and
F are initially unable to perform a silent action (as is the case with Protocol and
Cop of Example 2), E ≈ F implies E ≈c F.
There is also a finer observable bisimulation equivalence called "branching
bisimulation equivalence," which is due to Glabbeek and Weijland [23]. Observable
bisimilarity and its congruence are not sensitive to divergence, so they do not
preserve the strong necessity properties discussed in Section 2.5. However, it is
possible to define equivalences that take divergence into account [26, 31, 61].
9. Prove Proposition 5.
10. Show that Sched_a ≉ Sched'_a, where these processes are defined in Section 1.4.
However, show that

Sched_a\{b1, ..., b4} ≈ Sched'_a\{b1, ..., b4}.
11. For a ∈ A, let â be a if a ≠ τ, and let τ̂ be ε. A binary relation B between
processes is an ob bisimulation just in case whenever (E, F) ∈ B and a ∈ A,

a. if E -a-> E' then F =â=> F' for some F' such that (E', F') ∈ B, and
b. if F -a-> F' then E =â=> E' for some E' such that (E', F') ∈ B.

Two processes are ob equivalent, denoted by ≈', if they are related by an ob
bisimulation relation. Prove that ≈ = ≈'.
12. Prove that Ct0 ≈c Count, where these processes are given in Figure 1.4 and
Section 1.6.
13. Extend the modal logic M° so that two image-finite processes have the same
modal properties if, and only if, they are observably congruent.
The last four are clear from the behavioural meanings of the operators. The first two
are τ-laws and show that we are dealing with an observable equivalence. We shall
also appeal to a rule schema called an "expansion law" by Milner [44], relating
concurrency and choice.

if Xi = Σ{aij.Xij : 1 ≤ j ≤ ni} for i : 1 ≤ i ≤ m,
then X1 | ... | Xm = Σ{aij.Yij : 1 ≤ i ≤ m and 1 ≤ j ≤ ni}
    + Σ{τ.Yklij : 1 ≤ k < i ≤ m and akl = āij},
where Yij ≡ X1 | ... | Xi−1 | Xij | Xi+1 | ... | Xm
and Yklij ≡ X1 | ... | Xk−1 | Xkl | Xk+1 | ... | Xi−1 | Xij | Xi+1 | ... | Xm.
For example, if
The expansion rule is justified by the transition rules for the parallel operator.
Proof rules for recursion are also needed. If E does not contain any occur-
rences of the parallel operator |, then P is said to be "guarded" in E, provided that
all occurrences of P in E are within the scope of a prefix a., and a is an observable
action (that is, not τ). Assume that P is the only process constant in E. The guard-
edness condition guarantees that the equation P = E has a unique solution up to
≈c. A solution to the equation P = E is a process F such that F ≈c E{F/P}.
Uniqueness of solution is that, if both F and G are solutions, then F ≈c G.

• if P ≝ E then P = E
• if P = E and P is guarded in E, and Q = F and Q is guarded in F, and
E{Q/P} = F, then P = Q
3.6. Equivalence checking
IO  ≝ slot.IO1
IO1 ≝ bank.IO2
IO2 ≝ lost.loss.IO + release.win.IO
B   ≝ bank.B1
B1  ≝ max.left.B
D   ≝ max.D1
D1  ≝ lost.left.D + release.left.D
SM' ≝ slot.(τ.loss.SM' + τ.win.SM')

FIGURE 3.7. A simplified slot machine
Example 2 The recursion rules can be used to prove that Cl ≈c Cl2 (where these processes
are pictured in Figure 3.1) as follows.

Cl = tick.Cl                               by definition of Cl
tick.Cl = tick.tick.Cl                     by congruence
Cl = tick.tick.Cl                          by transitivity of =
Cl2 = tick.tick.Cl2                        by definition of Cl2
(tick.tick.Cl){Cl2/Cl} = tick.tick.Cl2     by equality
Cl = Cl2                                   by the second recursion rule
The slot machine SM without data values, and its succinct description SM',
appear in Figure 3.7. We prove that SM = SM'. The idea behind the proof is to first
simplify SM by showing that it is equal to an expression E that does not contain
the parallel operator. The proof proceeds on SM using the expansion law, and the
laws earlier for \K, 0 and τ (and the first recursion rule).

SM = (IO | B | D)\K
   = (slot.IO1 | bank.B1 | max.D1)\K
   = (slot.(IO1 | B | D) + bank.(IO | B1 | D) + max.(IO | B | D1))\K
   = (slot.(IO1 | B | D))\K + (bank.(IO | B1 | D))\K + (max.(IO | B | D1))\K
   = slot.(IO1 | B | D)\K + 0 + 0
   = slot.(IO1 | B | D)\K
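For processes presented as sums of prefixes, the expansion law is directly executable. The sketch below is our own Python encoding, not the book's: process constants map to lists of (action, continuation) prefixes, co-actions are marked with a trailing apostrophe (an assumption of the encoding), and restriction by K is applied afterwards to mirror the derivation above.

```python
defs = {
    "IO": [("slot", "IO1")],
    "B":  [("bank'", "B1")],   # bank', max': co-actions in our marking convention
    "D":  [("max'", "D1")],
}

def co(a):
    """Co-action of a: add or strip the trailing apostrophe."""
    return a[:-1] if a.endswith("'") else a + "'"

def expand(names):
    """Summands of names[0] | names[1] | ... by the expansion law."""
    out = []
    # interleaving summands a_ij.Y_ij
    for i, n in enumerate(names):
        for a, cont in defs[n]:
            out.append((a, names[:i] + [cont] + names[i+1:]))
    # tau-summands Y_klij for complementary pairs a_kl = co(a_ij), k < i
    for k in range(len(names)):
        for i in range(k + 1, len(names)):
            for a, ca in defs[names[k]]:
                for b, cb in defs[names[i]]:
                    if a == co(b):
                        new = list(names)
                        new[k], new[i] = ca, cb
                        out.append(("tau", new))
    return out

# SM = (IO | B | D)\K: restriction by K removes the visible bank/max summands.
K = {"bank", "max"}
visible = [(a, rest) for a, rest in expand(["IO", "B", "D"])
           if a == "tau" or a.rstrip("'") not in K]
print(visible)  # → [('slot', ['IO1', 'B', 'D'])]
```

Only the slot summand survives the restriction, matching SM = slot.(IO1 | B | D)\K in the derivation above.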
Exercises 1. For each of the following cases of x1, x2 and x3, define x1 | x2 | x3 using the
expansion theorem.

a.
x1 = a.x11 + b.x12 + a.x12
x2 = a.x21 + c.x22
x3 = a.x31 + c.x32
b.
x1 = τ.x11 + τ.x12
x2 = a.x21 + b.x22
x3 = a.x31 + a.x32 + τ.x33

c.
x1 = a.x11 + b.x12 + a.x12 + a.x13
x2 = a.x21 + b.x22 + a.x23
2. Refine the expansion law to take account of the restriction operator. That is,
assume that each xi has the form Σ{aij.xij : 1 ≤ j ≤ ni} and the parallel
form is (x1 | ... | xm)\K.
3. Let Cl' ≝ tick.tick.tick.Cl'. Use the recursion proof rules to prove that
Cl' = Cl2.
4. Assume that there is just one datum value in the set D. Using equational
reasoning only, prove that Protocol ≈c Cop (where these processes are
defined in Chapter 1).
5. Compare the different methods for proving equivalence, by showing that SM ≈
SM':
a. directly using games
b. directly by exhibiting an observable bisimulation
c. directly by showing that they obey the same modal properties in M°
6. Extend the proof rules of the equational system to take into account value
passing, and then prove equationally that SMn = SM'n.
7. Extend the second recursion rule so that the expressions E and F can contain
occurrences of parallel. (To do this we need to refine the definition of being
guarded.)
4. Temporal Properties
example, the formula ⟨tick⟩tt divides {Cl1, tock.Cl1} into two subsets {Cl1}
and {tock.Cl1}. Given a formula Φ and a set of processes 𝓔, we define ∥Φ∥𝓔 to be
the subset of processes in 𝓔 having the modal property Φ.

∥Φ∥𝓔 = {E ∈ 𝓔 : E ⊨ Φ}

Therefore, each modal formula Φ partitions 𝓔 into ∥Φ∥𝓔 and 𝓔 − ∥Φ∥𝓔.
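Over a finite set of processes the map Φ ↦ ∥Φ∥ is directly computable; a minimal sketch for ⟨tick⟩tt over the two clocks just mentioned (our own hypothetical Python encoding):

```python
# Transitions of the two processes: Cl1 = tick.tock.Cl1 unfolded one step.
trans = {
    "Cl1": [("tick", "tock.Cl1")],
    "tock.Cl1": [("tock", "Cl1")],
}

def sat_diamond(a, states):
    """|| <a>tt || over a set of states: those possessing an a-transition."""
    return {s for s in states if any(b == a for b, _ in trans[s])}

E = set(trans)
print(sat_diamond("tick", E))      # → {'Cl1'}
print(E - sat_diamond("tick", E))  # → {'tock.Cl1'}
```

The two printed sets are exactly the partition described in the text.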
There are many different notations for describing properties of processes.
Modal logic is one such formalism. Other notations are also logical and include
first-order and second-order logic over transition graphs. Another kind of formal-
ism is automata, which recognise words, trees or graphs. The expressive power
of different notations can be compared by examining the properties of processes
they are able to define. Whatever the notation, a property of a set of processes can
be identified with the subset that has it. Consider the following family of clocks
𝓔 = {Cl^i, Cl : i ≥ 1}, where Cl^i is defined in Example 1 of Section 3.4. For
instance, the transition graph for Cl^4 is as follows.

Cl^4 -tick-> Cl^3 -tick-> Cl^2 -tick-> Cl^1 -tick-> 0

Cl is distinguishable from other members of 𝓔 because of its long term capability
for ticking endlessly. Each clock Cl^i ticks exactly i times before stopping. The
property "can tick forever" partitions 𝓔 into two subsets {Cl} and 𝓔 − {Cl}. How-
ever, this partition cannot be captured by a single modal formula, as the following
result shows.

Proposition 1 For any modal Φ ∈ M ∪ M°↓, if Cl ⊨ Φ, then there is a j ≥ 1 such that, for all
k ≥ j, Cl^k ⊨ Φ.
Proof. By induction on the structure of Φ. The base case when Φ is tt or ff is
clear. The induction step divides into subcases. Let Φ = Ψ1 ∧ Ψ2 and assume that
Cl ⊨ Φ. Therefore, Cl ⊨ Ψ1 and Cl ⊨ Ψ2. By the induction hypothesis, there
is a j1 ≥ 1 and a j2 ≥ 1 such that Cl^k1 ⊨ Ψ1 for all k1 ≥ j1 and Cl^k2 ⊨ Ψ2
for all k2 ≥ j2. Let j be the maximum of {j1, j2}, and so Cl^k ⊨ Φ for all
k ≥ j. Next, assume that Φ = Ψ1 ∨ Ψ2 and Cl ⊨ Φ. So Cl ⊨ Ψ1 or Cl ⊨ Ψ2.
Suppose it is the first of these two. By the induction hypothesis, there is a j ≥ 1
such that Cl^k ⊨ Ψ1 for all k ≥ j, and therefore also Cl^k ⊨ Φ for each k ≥ j.
Let Φ = [K]Ψ and assume that Cl ⊨ Φ. If tick ∉ K, then Cl^i ⊨ Φ for all
i ≥ 1. Otherwise tick ∈ K, and since Cl -tick-> Cl it follows that Cl ⊨ Ψ. By
the induction hypothesis, there is a j ≥ 1 such that Cl^k ⊨ Ψ for all k ≥ j.
However, since Cl^{i+1} -tick-> Cl^i it follows that Cl^k ⊨ Φ for all k ≥ j + 1. The case
Φ = ⟨K⟩Ψ is similar. The other modal cases, when Φ is ⟨⟨K⟩⟩Ψ, [[K]]Ψ, [↓]Ψ, or
⟨↑⟩Ψ, are more straightforward and are left as an exercise. □
Proposition 1 shows that no modal formula partitions the set 𝓔 into the pair {Cl}
and 𝓔 − {Cl}, and therefore the enduring capability of being able to tick forever is
not expressible in the modal logics introduced in Chapter 2. A similar argument
establishes that the property "tick eventually happens" is also not definable within
these modal logics.

4.2. Processes and their runs
∥[K]∥𝓔1 = {F ∈ 𝓔 : if F -a-> E and a ∈ K then E ∈ 𝓔1}

a. Define the transformers ∥⟨K⟩∥, ∥⟨⟨K⟩⟩∥ and ∥[↓]∥.
b. An indexed family of subsets {𝓔j ⊆ 𝓔 : j ≥ 0} of 𝓔 is a chain if 𝓔i ⊆ 𝓔j
when i ≤ j. A modal operator # is ∪-continuous if for any such chain the
set ∥#∥(∪{𝓔j : j ≥ 0}) is the same as ∪{∥#∥𝓔j : j ≥ 0}. Prove that, if
each member of 𝓔 is finitely branching (that is, {F : E -a-> F with a ∈ A}
is finite for each E ∈ 𝓔), then [K] is ∪-continuous. Give an example
process which fails ∪-continuity.
4. Prove that the property "tick eventually happens" is not definable within M
∪ M°↓.
contrast between the local capability for ticking and the enduring capability for
ticking forever, and the contrast between the urgent inevitability that tick must
happen next with the lingering inevitability that tick eventually happens.

Another abstraction from behaviour is a run of a process. A run of E0 is a finite
or infinite length sequence of transitions

E0 -a1-> E1 -a2-> E2 -a3-> ...

with "maximal" length. This means that, if a run has finite length, then its final
process is unable to perform a transition because, otherwise, the sequence can be
extended. A deadlocked process E only has the zero length run E. A run of a
process carves out a path through its transition graph.
Example 1 Cl1 -tick-> tock.Cl1 -tock-> Cl1 -tick-> ... is the only run from the clock Cl1 and it
has infinite length. Cl5 has infinitely many finite length runs, each of the form
Cl5 -tick-> ... -tick-> Cl5 -tick-> 0, and the single infinite length run Cl5 -tick-> Cl5 -tick->
.... An infinite length cyclic run of Crossing is

Crossing -car-> E1 -τ-> E4 -ccross-> E8 -τ-> Crossing -car-> ...,

where E1, E4, and E8 are as in Figure 1.12.
Runs provide a means for distinguishing between local and long term features
of processes. For instance, a process has the capability for ticking forever provided
that it has a perpetual run of tick transitions. The property of a process that tick
eventually happens is the requirement that, within every run of it, there is at least one
tick transition. The clock Cl1 of Example 1 has the property that tock eventually
happens. However, the other clock Cl5 fails to have this trait because of its sole
infinite length run, which does not contain the action tock.

Bisimulation equivalence "preserves" runs in the sense that, if two processes
are bisimilar, then for any run of one of the processes there is a corresponding run
of the other process.
Example 2 A property that distinguishes each clock Cl^i of the previous section from Cl is
eventual termination. The good feature is expressed by the formula [−]ff, and
the property is given as Liveness([−]ff). On the other hand, termination can also
be viewed as defective, as exhaustion of the clock. In this case, Cl has the safety
property of absence of deadlock, expressed as Safety(⟨−⟩tt), which each Cl^i fails
to have.
Liveness and safety properties of a process pertain to all of its runs. Weaker
properties relate to some runs of a process. A "weak" safety property states "in
some run nothing bad ever happens," and a "weak" liveness property asserts "in
some run something good eventually happens."

WSafety(Φ): for some run π, Φ is always true in π
WLiveness(Φ): for some run π, Φ is eventually true in π

Notice that a weak safety property is the "dual" of a liveness property (and a weak
liveness property is the "dual" of a safety property).

WSafety(Φ) iff not(Liveness(¬Φ))
WLiveness(Φ) iff not(Safety(¬Φ))
Example 3 A weak liveness property of SMn of Figure 1.15 is that it may eventually pay out a
windfall of a million pounds. The good feature is ⟨win(10^6)⟩tt, and so the property
is given by WLiveness(⟨win(10^6)⟩tt).
There are also intermediate cases between all and some runs, when liveness
or safety properties pertain to special families of runs which obey some general
constraints.

Example 4 A desirable property of Crossing is that, whenever a car approaches, eventually
it crosses. This requirement is that any run containing the action car also contains
ccross as a later action.

Sometimes, the relevant constraints are complex and depend on assumptions outwith
the possible behaviour of the process itself. For example, Protocol of
Figure 1.13 fails to have the property "whenever a message is input eventually
it is output" because of runs where a message is forever retransmitted. However,
we may assume that the medium does eventually pass to the receiver a repeatedly
retransmitted message, meaning these deficient runs are thereby precluded.
Div -τ-> Div -τ-> ...

even though Div ≈ τ.0. We may try to weaken the matching requirement by
stipulating that, for any run from one process, there is a corresponding run
from the other process such that there is a finite or an infinite partition across
these runs containing equivalent processes.

a. Spell out a definition of equivalence ≍ based on this partitioning
requirement.
b. Prove a.(E + τ.F) + a.E ≈ a.(E + τ.F) for any E and F. Is this still
true if you replace ≈ with ≍?
c. What is the relation between ≍ and branching bisimulation, as defined
by Glabbeek and Weijland [23]?
6. Prove that the following properties are not definable within modal logic M ∪
M°↓.

a. "in some run tick does not happen"
b. "in some run eventually tick happens"
c. "the sequence of actions a1 ... a4 happens cyclically forever, starting with
a1"
4.3. The temporal logic CTL
The operator [-] expresses "next" over all runs and its dual (-) expresses weak
"next."
A useful temporal operator is the binary until operator U. A finite or infinite length run E₀ −a₁→ E₁ −a₂→ ⋯ satisfies the formula Φ U Ψ, "Φ is true until Ψ," provided there is an i ≥ 0 such that Eᵢ ⊨ Ψ, and for each j with 0 ≤ j < i the intermediate process Eⱼ has property Φ:
E₀ −a₁→ ⋯ −aᵢ→ Eᵢ with E₀, …, E_{i−1} ⊨ Φ and Eᵢ ⊨ Ψ
The index i can be 0 (in which case Φ does not have to be true at any point of the run). If the run has zero length, then the index i must be zero. A special instance of U is when the first formula Φ is tt. The formula tt U Ψ expresses "eventually Ψ," which we abbreviate to FΨ.
A finite or infinite length run E₀ −a₁→ E₁ −a₂→ ⋯ satisfies ¬(tt U ¬Ψ), that is ¬F¬Ψ, if every process Eᵢ within the run has the property Ψ:
E₀ −a₁→ ⋯ −a_{i+1}→ ⋯ with each Eᵢ ⊨ Ψ
This, therefore, expresses "always Ψ," which we abbreviate to GΨ. Notice that F and G are duals of each other.
Modal logic can be enriched by adding temporal operators to it. For each temporal operator (such as F) there are two variants, the strong variant ranging over all runs of a process, and the weak variant ranging over some run of the process. We preface the strong variant with A, "for all runs," and the weak variant with E, "for some run." Liveness and safety as described in the previous section
Φ ::= tt | … | Φ₁ ∧ Φ₂ | [K]Φ | A(Φ₁ U Φ₂) | E(Φ₁ U Φ₂) | …
The definition of satisfaction between a process E₀ and a formula proceeds by induction on the formula. The only new clauses are for the two U operators, which appeal to runs of E₀.
Example 2 The weak liveness property of the slot machine SM_n that it may eventually pay out a windfall is expressed as EF⟨win(10^6)⟩tt.
As with modal operators, temporal operators may be embedded within each other to express complex process features. An example is "eventually, action a is possible until b is always impossible," AF(⟨a⟩tt U AG[b]ff).
In Section 4.1 it was shown that the ability to tick forever is not expressible in modal logic. It is also not directly expressible in CTL as defined here. For instance, the formula EG⟨tick⟩tt states that action tick is possible throughout some run, and the process Cl' ≝ tock.Cl' + tick.0 satisfies it. The problem is that the CTL temporal operators are not relativised to actions. A variant "always" operator is
4.4 Modal formulas with variables
Modal formulas may contain propositional variables, and therefore the satisfaction relation ⊨ between a process and a formula needs to be refined. The important case is when a process has the property Z. Think of a propositional variable as a colour that can be ascribed to processes. A particular process may have a variety of different colours, and processes can be "coloured" arbitrarily. A particular colouring is defined by a "valuation" function V that assigns to each colour Z a subset of processes V(Z) having this colour. If there are two colours, X (red) and Z (blue), then V(X) is the set of red coloured processes and V(Z) is the set of blue coloured processes.
The satisfaction relation between a process and a formula is relativised to a valuation. We write E ⊨_V Φ when E has the property Φ relative to the valuation V, and E ⊭_V Φ when E fails to have the property Φ relative to V. First is the semantic clause for a variable.
E ⊨_V Z iff E ∈ V(Z)
Process E has colour Z relative to V iff E belongs to V(Z). The remaining semantic clauses are as in Section 2.1 for the logic M, except for the relativization to the colouring V. For example, the semantic clause for ∧ is as follows.
E ⊨_V Φ₁ ∧ Φ₂ iff E ⊨_V Φ₁ and E ⊨_V Φ₂
The notation for the subset of P processes with property Φ, ‖Φ‖^P, is also refined by an additional colouring component, ‖Φ‖_V^P.
for any variable Z it follows that ‖Z‖_V is a subset of ‖Z‖_{V'}. This feature extends to all formulas of M with variables.
Proposition 1 If V' extends V, then ‖Φ‖_V^P ⊆ ‖Φ‖_{V'}^P.
Proof. The proof proceeds by induction on Φ. We drop the index P. The base cases are when Φ is a variable Z, tt or ff. The first of these cases follows directly from the definition of ⊆ on valuations. Clearly, it also holds for tt and for ff. The general case divides into the various subcases. First, suppose Φ is Ψ₁ ∧ Ψ₂, and assume that V ⊆ V'. By definition ‖Φ‖_V is the set {E : E ⊨_V Ψ₁ ∧ Ψ₂}, which is just the set ‖Ψ₁‖_V ∩ ‖Ψ₂‖_V. By the induction hypothesis ‖Ψᵢ‖_V ⊆ ‖Ψᵢ‖_{V'} for i = 1 and i = 2. Therefore, ‖Φ‖_V ⊆ ‖Φ‖_{V'}. The case when Φ is Ψ₁ ∨ Ψ₂ is similar. Next, assume that Φ is [K]Ψ. The set ‖Φ‖_V is therefore equal to {E : if E −a→ F and a ∈ K then F ⊨_V Ψ}, which is {E : if E −a→ F and a ∈ K then F ∈ ‖Ψ‖_V}. By the induction hypothesis this is a subset of {E : if E −a→ F and a ∈ K then F ∈ ‖Ψ‖_{V'}}, which is the definition of ‖Φ‖_{V'}. The other modal case, Φ is ⟨K⟩Ψ, is similar and is left as an exercise. □
f[Φ, Z](ℰ) ≝ {E ∈ P : E ⊨_{V[ℰ/Z]} Φ}
Example 1 For any valuation V, the function f[⟨tick⟩Z, Z] maps ℰ into the set of processes that have a tick transition into ℰ.
f[⟨tick⟩Z, Z](ℰ) = {E ∈ P : E ⊨_{V[ℰ/Z]} ⟨tick⟩Z}
= {E ∈ P : ∃F. E −tick→ F and F ⊨_{V[ℰ/Z]} Z}
= {E ∈ P : ∃F ∈ ℰ. E −tick→ F}
Example 2 Assume that Φ is a formula that does not contain the variable Z. For any V, the function f[Φ ∨ ⟨−⟩Z, Z] maps ℰ into the set of processes that have property Φ, or can do a transition into ℰ.
f[Φ ∨ ⟨−⟩Z, Z](ℰ) = {E ∈ P : E ⊨_{V[ℰ/Z]} Φ ∨ ⟨−⟩Z}
= {E ∈ P : E ⊨_{V[ℰ/Z]} Φ or E ⊨_{V[ℰ/Z]} ⟨−⟩Z}
= {E ∈ P : E ⊨_V Φ or ∃F ∈ ℰ. ∃a. E −a→ F}
With respect to P and V, the set f[Φ, Z](ℰ) can also be defined directly by induction on Φ. For example, the following covers the case when the formula is a conjunction.
Proposition 1, above, has the consequence that, for any formula Φ and variable Z, the function f[Φ, Z] is monotonic with respect to any P and V.
Corollary 1 For any P and V, the function f[Φ, Z] is monotonic.
then for any formula Φ, E ⊨_V Φ iff F ⊨_V Φ. For instance, if V(Z) is the singleton set {E} and E ≠ F, then E ⊨_V Z but F ⊭_V Z. However, if the colouring respects ∼
Exercises 1. Prove by induction on Φ that, for any set ℰ, if Z does not occur in Φ, then E ⊨_V Φ iff E ⊨_{V[ℰ/Z]} Φ.
2. Show that, if for every variable Z, V'(Z) = V(Z) ∩ ℰ, then the following is true: ‖Φ‖_V^ℰ = ‖Φ‖_{V'}^ℰ.
3. Relative to P and V, define the function f[Φ, Z] directly by induction on the structure of Φ.
4. If negation, ¬, is added to modal logic with variables, then show that Proposition 1 fails to hold, and that therefore there are functions f[Φ, Z] that are not monotonic.
5. Consider the extended modal logic with variables and the CTL temporal operators A(Φ U Ψ) and E(Φ U Ψ). Show that Proposition 1 still holds for this extended logic.
6. Assume that Φ and Ψ do not contain the variable Z. For any V and P, work out the following functions.
a. f[Φ ∧ [−]Z, Z]
b. f[Φ ∧ ⟨−⟩Z, Z]
c. f[Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)), Z]
d. f[Ψ ∨ (Φ ∧ ⟨−⟩Z), Z]
e. f[[a]((⟨b⟩tt ∨ Y) ∧ Z), Z]
f. f[[a]((⟨b⟩tt ∨ Y) ∧ Z), Y]
7. Prove Proposition 2.
There may be many different subsets of P that are solutions. One example is the empty set
∅ = {E ∈ P : ∃F ∈ ∅. E −tick→ F}
because the right-hand set must be empty. When P is the set {Cl}, then P is also a solution because of the transition Cl −tick→ Cl.
{Cl} = {E ∈ {Cl} : ∃F ∈ {Cl}. E −tick→ F}
Example 1 If P is {Cl}, then both candidate sets ∅ and P obey the closure conditions PRE and POST, and are therefore fixed points of f[⟨tick⟩Z, Z]. The solutions can be ordered by subset inclusion, ∅ ⊆ {Cl}, offering a least and a greatest solution. In the case of the more sonorous clock Cl₁, which alternately ticks and tocks, there are more candidates for solutions: the sets ∅, {Cl₁}, {tock.Cl₁} and {Cl₁, tock.Cl₁}.
Let f abbreviate f[⟨tick⟩Z, Z].
f(∅) = ∅
f({Cl₁}) = ∅
f({tock.Cl₁}) = {Cl₁}
f({Cl₁, tock.Cl₁}) = {Cl₁}
The prefixed points of f are ∅, {Cl₁} and {Cl₁, tock.Cl₁}, and ∅ is the only postfixed point. Therefore, there is just a single fixed point in this example.
With respect to any set of processes P, the equation Z = ⟨tick⟩Z has both a least and a greatest solution (which may coincide) with respect to the subset ordering. The general result guaranteeing these extremal solutions is due to Tarski and Knaster. It shows that the least solution is the intersection of all prefixed points, of all those subsets obeying PRE, and that the greatest solution is the union of all postfixed points, of all those subsets fulfilling POST. The result applies to arbitrary monotonic functions from subsets of P to subsets of P.
Proposition 1 If g : 2^P → 2^P is a monotonic function with respect to ⊆, then g
1. has a least fixed point given as the set ⋂{ℰ ⊆ P : g(ℰ) ⊆ ℰ},
2. has a greatest fixed point given as the set ⋃{ℰ ⊆ P : ℰ ⊆ g(ℰ)}.
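For a small finite P, the proposition can be checked by brute force. The following sketch (names and the helper `subsets` are mine, not the book's) enumerates every subset of P, computes the least fixed point as the intersection of all prefixed points and the greatest as the union of all postfixed points, and replays the clock example.

```python
from itertools import chain, combinations

def subsets(P):
    s = list(P)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def lfp(g, P):
    # intersection of all prefixed points g(E) <= E
    return frozenset(P).intersection(*[E for E in subsets(P) if g(E) <= E])

def gfp(g, P):
    # union of all postfixed points E <= g(E)
    return frozenset().union(*[E for E in subsets(P) if E <= g(E)])

# g = f[<tick>Z, Z] over P = {Cl, tick.0, 0}: the processes with a tick
# transition into the argument set
P = {"Cl", "tick.0", "0"}
tick = {"Cl": {"Cl"}, "tick.0": {"0"}}
g = lambda E: frozenset(F for F in P if tick.get(F, set()) & E)
# gfp(g, P) is {"Cl"}; lfp(g, P) is the empty set
```

The enumeration is exponential in |P|, so this is only a check of the definitions, not a model-checking algorithm.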
Proposition 1 guarantees that any recursive modal equation Z = Φ has extremal solutions, which are least and greatest fixed points of the monotonic function f[Φ, Z]. Relinquishing the equational format, let μZ. Φ express the property given by the least fixed point of f[Φ, Z] and let νZ. Φ express the property determined by its greatest fixed point.
What properties are expressed by the extremal solutions of the modal equation Z = ⟨tick⟩Z? The least case, μZ. ⟨tick⟩Z, is of little import because it expresses
{E ∈ P : E ⊨_V Φ} ∪ {E ∈ P : ∃a ∈ A. ∃F ∈ ℰ. E −a→ F}.
Because Φ does not contain Z, the valuation V can be used instead of V[ℰ/Z] in the first subset. A fixed point ℰ of this function is subject to the following closure conditions.
PRE if E ∈ P and (E ⊨_V Φ or ∃a ∈ A. ∃F ∈ ℰ. E −a→ F) then E ∈ ℰ
POST if E ∈ ℰ then E ⊨_V Φ or ∃a ∈ A. ∃F ∈ ℰ. E −a→ F
A subset ℰ satisfying the condition PRE has to contain those processes with the property Φ. But then it also has to include processes F that fail to satisfy Φ but have a transition F −a→ E, where E ⊨_V Φ. And so on. It turns out that a process E₀ has the property μZ. Φ ∨ ⟨−⟩Z if there is a run E₀ −a₁→ E₁ −a₂→ ⋯ and an i ≥ 0 such that Eᵢ ⊨_V Φ. That is, if E₀ has the weak eventually property EFΦ of CTL. The largest solution also includes the extra possibility of performing actions forever without Φ ever becoming true, as the reader can check. A slight variant is to consider μZ. Φ ∨ ⟨K⟩Z, where K is a family of actions. This expresses that a process is able to perform K actions until Φ holds. When K is the singleton set {τ}, this formula expresses that, after some silent activity, Φ is true, expressed modally as ⟨⟨ ⟩⟩Φ.
Another example is the recursive equation Z = Ψ ∨ (Φ ∧ ⟨−⟩Z), where neither Φ nor Ψ contains Z. The function f[Ψ ∨ (Φ ∧ ⟨−⟩Z), Z] applied to ℰ is the set
The least set ℰ will be the union of the sets ℰᵢ. Pictorially, a process enters stage ℰ_{i+1} when it heads a run E₀ −a₁→ E₁ −a₂→ ⋯ −aᵢ→ Eᵢ with Eⱼ ⊨ Φ for each j < i and Eᵢ ⊨ Ψ.
The formula μZ. Ψ ∨ (Φ ∧ ⟨−⟩Z) captures the weak until of CTL, E(Φ U Ψ), as described in Section 4.3. Later, we shall describe the idea of computing stages more formally using approximants. The strong until A(Φ U Ψ) of CTL is defined as μZ. Ψ ∨ (Φ ∧ (⟨−⟩tt ∧ [−]Z)), where Z does not occur in Ψ or Φ. Later, we shall see that we can also define properties that are not expressible in CTL.
Exercises 1. Show that if g : 2^P → 2^P maps all arguments to different sets (that is, g(ℰ) ≠ ℰ for all ℰ), then g is not monotonic.
2. Assume h : 2^P → 2^P is monotonic with respect to ⊆. Prove the following.
a. if ℰ₁ and ℰ₂ are prefixed points of h, then ℰ₁ ∩ ℰ₂ is also a prefixed point of h
b. if ℰ₁ and ℰ₂ are postfixed points of h, then ℰ₁ ∪ ℰ₂ is also a postfixed point of h
4.6 Duality
The expressive power of modal logic is increased when extended with least and greatest solutions of recursive modal equations. For instance, the least solution to Z = Φ ∨ ⟨−⟩Z (when Φ does not contain Z) expresses the weak liveness property EFΦ. The complement of weak liveness is safety. A process E does not satisfy μZ. Φ ∨ ⟨−⟩Z if Φ never becomes true in any run of E.
Complements are directly expressible when negation is freely admitted into formulas, as with CTL. The reason for avoiding negation in modal formulas is to preserve monotonicity. A simple example is the equation Z = ¬Z. The function f[¬Z, Z] is not monotonic because f[¬Z, Z](ℰ) = P − ℰ. A fixed point ℰ of this function must obey ℰ = P − ℰ, which is impossible because P is non-empty. Another example is Z = ⟨tock⟩tt ∧ [tick]¬Z. The function f[⟨tock⟩tt ∧
which by the induction hypothesis is as follows.
iff E ∉ ⋂{P − ℰ : ‖Φᶜ‖_{Vᶜ[P−ℰ/Z]} ⊆ P − ℰ}
iff E ∉ ⋂{ℰ ⊆ P : ‖Φᶜ‖_{Vᶜ[ℰ/Z]} ⊆ ℰ}
iff E ⊭_{Vᶜ} μZ. Φᶜ
iff E ⊭_{Vᶜ} (νZ. Φ)ᶜ
The other case, E ⊨_V μZ. Φ, is similar and is left as an exercise. □
b. Give a set P such that f has both least and greatest fixed points.
2. Assume that formulas may contain occurrences of ¬. Prove that, if every occurrence of Z within Φ lies within the scope of an even number of negations, then the recursive equation Z = Φ has both a least and a greatest solution.
3. When ¬ is freely admitted into formulas, show the following (where two formulas Φ and Ψ are equivalent if for all processes E, E ⊨ Φ iff E ⊨ Ψ).
a. μZ. Φ is equivalent to ¬νZ. ¬Φ{¬Z/Z}
b. νZ. Φ is equivalent to ¬μZ. ¬Φ{¬Z/Z}
4. What property of processes does μZ. [−]Z express?
5. Prove that [ ]Φ is expressed as the formula νZ. Φ ∧ [τ]Z.
5 Modal Mu-Calculus
In the previous chapter we saw that modal formulas are not very expressive. They cannot capture enduring traits of processes, the properties definable within temporal logic. However, these longer-term properties can be viewed as closure conditions on the immediate capabilities and necessities that modal logic captures. By permitting recursive modal equations, these temporal properties are expressible as extremal solutions of such equations. The property "whenever a coin is inserted, eventually an item is collected" is expressed using two recursive modal equations with different solutions. In the previous chapter, least and greatest solutions to recursive modal equations were represented using the fixed point quantifiers μZ and νZ. In this chapter we shall explicitly add these connectives to modal logic, thereby providing a very rich temporal logic.
In the sequel, we let σ range over the set {μ, ν}. Formulas of μM may contain multiple occurrences of fixed point operators. An occurrence of a variable Z is "free" within a formula if it is not within the scope of an occurrence of σZ. The operator σZ in the formula σZ. Φ is a quantifier that binds free occurrences of Z in Φ. We assume that σZ has wider scope than other operators. The scope of μZ is the rest of the formula in
μZ. ⟨f⟩Y ∨ (⟨b⟩Z ∧ μY. νZ. ([b]Y ∧ [K]Z))
and it binds the occurrence of Z in the subformula ⟨b⟩Z, but does not bind the occurrence of Z in [K]Z, which is bound by the occurrence of νZ. There is just one free variable occurrence in this formula, that of Y in ⟨f⟩Y. An occurrence of σZ may bind more than one occurrence of Z, as in νZ. ⟨tick⟩Z ∧ ⟨tock⟩Z.
The satisfaction relation ⊨_V between processes and formulas (relative to the valuation V) is defined inductively on the structure of formulas. First are the cases that have been presented previously.
E ⊨_V tt
E ⊭_V ff
E ⊨_V Z iff E ∈ V(Z)
E ⊨_V Φ ∧ Ψ iff E ⊨_V Φ and E ⊨_V Ψ
E ⊨_V Φ ∨ Ψ iff E ⊨_V Φ or E ⊨_V Ψ
E ⊨_V [K]Φ iff ∀F ∈ {E' : E −a→ E' and a ∈ K}. F ⊨_V Φ
E ⊨_V ⟨K⟩Φ iff ∃F ∈ {E' : E −a→ E' and a ∈ K}. F ⊨_V Φ
The remaining cases are the fixed point operators, and we appeal to the sets of processes ‖Φ‖_V^P, as defined in Section 4.4 as {E ∈ P : E ⊨_V Φ}, in their definition. It is assumed in both cases that E belongs to P.
E ⊨_V νZ. Φ iff E ∈ ⋃{ℰ ⊆ P : ℰ ⊆ ‖Φ‖_{V[ℰ/Z]}^P}
E ⊨_V μZ. Φ iff E ∈ ⋂{ℰ ⊆ P : ‖Φ‖_{V[ℰ/Z]}^P ⊆ ℰ}
These clauses are instances of Proposition 1 of Section 4.5. A greatest fixed point is the union of postfixed points, and a least fixed point is the intersection of prefixed points. To justify their use here we need to show that, for any variable Z and μM formula Φ, the function f[Φ, Z] is monotonic¹. The formula Φ may contain fixed points, and therefore we need to extend Proposition 1 of Section 4.4 from modal formulas to μM formulas. Recall that the valuation V' extends V, written V ⊆ V', if for each variable Z, V(Z) ⊆ V'(Z).
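The clauses above can be animated over a finite transition system. The sketch below (the tuple representation and all names are mine, not the book's) evaluates ‖Φ‖_V by recursion on the formula; the fixed points are computed by iterating from P (for ν) and from the empty set (for μ), which the approximant technique of Section 5.5 justifies when P is finite.

```python
# Formulas are nested tuples: ("tt",), ("ff",), ("var", Z), ("and"/"or", p, q),
# ("box"/"dia", K, p), ("nu"/"mu", Z, p).  The example system is the clock.
P = {"Cl", "tick.0", "0"}
trans = {("Cl", "tick"): {"Cl"}, ("tick.0", "tick"): {"0"}}

def succ(E, K):
    return set().union(*(trans.get((E, a), set()) for a in K))

def sem(phi, V):
    op = phi[0]
    if op == "tt":  return set(P)
    if op == "ff":  return set()
    if op == "var": return set(V[phi[1]])
    if op == "and": return sem(phi[1], V) & sem(phi[2], V)
    if op == "or":  return sem(phi[1], V) | sem(phi[2], V)
    if op == "box": return {E for E in P if succ(E, phi[1]) <= sem(phi[2], V)}
    if op == "dia": return {E for E in P if succ(E, phi[1]) & sem(phi[2], V)}
    Z, body = phi[1], phi[2]               # remaining cases: "nu" or "mu"
    S = set(P) if op == "nu" else set()
    while True:                            # iterate until the stage repeats
        S2 = sem(body, {**V, Z: S})
        if S2 == S:
            return S
        S = S2

# sem(("nu", "Z", ("dia", {"tick"}, ("var", "Z"))), {}) gives {"Cl"}:
# only the perpetual clock can tick forever.
```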
Proposition 1 If V' extends V, then ‖Φ‖_V^P ⊆ ‖Φ‖_{V'}^P.
Proof. We show by induction on Φ that, if V ⊆ V', then ‖Φ‖_V ⊆ ‖Φ‖_{V'}. We now drop the index P. The base cases, boolean cases and modal cases are exactly as in the proof of Proposition 1 of Section 4.4. This just leaves the fixed point cases. Let Φ be the formula νY. Ψ. Suppose E ∈ ‖Φ‖_V. By the semantic clause for νY, there is a set ℰ containing E with the property that ℰ ⊆ ‖Ψ‖_{V[ℰ/Y]}. Because V ⊆ V', it follows that V[ℰ/Y] ⊆ V'[ℰ/Y] and therefore by the induction hypothesis ‖Ψ‖_{V[ℰ/Y]} ⊆ ‖Ψ‖_{V'[ℰ/Y]}. Consequently, ℰ ⊆ ‖Ψ‖_{V'[ℰ/Y]}, and therefore E ∈ ‖Φ‖_{V'} too. The other case, when Φ is μY. Ψ, is similar and is left as an exercise. □
A straightforward corollary of Proposition 1 is that, for any μM formula Φ and valuation V, if ℰ ⊆ ℱ then ‖Φ‖_{V[ℰ/Z]}^P ⊆ ‖Φ‖_{V[ℱ/Z]}^P, and that therefore the function f[Φ, Z] is monotonic.
A slightly different presentation of the clauses for the fixed points dispenses with explicit use of the sets ‖Φ‖_V^P.
¹Relative to P and V, for any ℰ ⊆ P, f[Φ, Z](ℰ) is the set ‖Φ‖_{V[ℰ/Z]}^P.
Modal mu-calculus was originally proposed by Kozen [34] (and see also Pratt [50]) as an extension of propositional dynamic logic. Its roots lie in more general program logics with extremal fixed points, originally developed by Park, De Bakker and De Roever. Larsen suggested that Hennessy-Milner logic with fixed points is useful for describing properties of processes [36]. Previously, Clarke and Emerson used extremal fixed points on top of a temporal logic for expressing properties of concurrent systems [18].
Example 1 The vending machine Ven of Section 1.1 has the property "whenever a coin is inserted, eventually an item is collected," expressed as νZ. [2p, 1p]Ψ ∧ [−]Z, when Ψ is μY. ⟨−⟩tt ∧ [−{collect_b, collect_l}]Y. The appropriate set P is
For example, if ℰ is the subset P − {Ven_b}, then Ven_b satisfies the antecedent of this closure condition, and therefore ℰ is not a prefixed point. Similarly, ∅ fails to be a prefixed point because collect_b.Ven satisfies the antecedent. Given that ‖Ψ‖_V^P is P, it follows that ‖νZ. [2p, 1p]Ψ ∧ [−]Z‖_V^P is also P.
Macros can also be introduced for the starring operator of propositional dynamic logic (where again Φ does not contain Z).
Formulas of μM exhibit a duality. A formula Φ that does not contain occurrences of free variables has a straightforward complement Φᶜ, as defined in Section 4.6. The following is an example.
(νZ. μY. νX. [a](⟨b⟩X ∧ Z) ∨ [K]Y)ᶜ = μZ. νY. μX. ⟨a⟩([b]X ∨ Z) ∧ ⟨K⟩Y
Because CTL has explicit negation, we can also introduce macros for negations of the two kinds of until formulas as follows (where again Z does not occur in Φ or Ψ).
The fixed point versions of these formulas are arguably easier to understand than their CTL versions. For instance, the second formula expresses "Ψᶜ unless Φᶜ ∧ Ψᶜ is true."
A formula that contains free occurrences of variables does not have an explicit complement. For example, Z does not have a complement in μM. However, the semantics of free formulas requires valuations. Therefore, we can appeal to the complement valuation Vᶜ, as in Section 4.6. Consequently, E ⊨_V Φ iff E ⊭_{Vᶜ} Φᶜ.
Bound variables can be changed in formulas without affecting their meaning. If Y does not occur at all in σZ. Φ, then this formula can be rewritten σY. (Φ{Y/Z}). It is useful to write formulas in such a way that bound variables are unique. This supports the following definition.
Definition 1 A formula Φ is "normal" provided that
1. if σ₁Z₁ and σ₂Z₂ are two different occurrences of binders in Φ, then Z₁ ≠ Z₂; and
2. no occurrence of a free variable Z is also used in a binder σZ in Φ.
Every formula can be easily converted into a normal formula of the same size and shape by renaming bound variables. This helps us to understand which occurrences of variables are free, and which occurrences are bound. In the sequel, we shall exclusively make use of normal formulas.
In later applications, we shall need to know the set of subformulas of a formula Φ. This set Sub(Φ) is finite and extends the definition provided in Section 2.2 for modal formulas.
Definition 2 Sub(Φ) is defined inductively by case analysis on Φ.
Sub(tt) = {tt}
Sub(ff) = {ff}
Sub(X) = {X}
Sub(Φ₁ ∧ Φ₂) = {Φ₁ ∧ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub(Φ₁ ∨ Φ₂) = {Φ₁ ∨ Φ₂} ∪ Sub(Φ₁) ∪ Sub(Φ₂)
Sub([K]Φ) = {[K]Φ} ∪ Sub(Φ)
Sub(⟨K⟩Φ) = {⟨K⟩Φ} ∪ Sub(Φ)
Sub(νZ. Φ) = {νZ. Φ} ∪ Sub(Φ)
Sub(μZ. Φ) = {μZ. Φ} ∪ Sub(Φ)
For example, Sub(μX. νY. ([b]X ∧ [K]Y)) is the set
{μX. νY. ([b]X ∧ [K]Y), νY. ([b]X ∧ [K]Y), [b]X ∧ [K]Y, [b]X, [K]Y, X, Y},
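Definition 2 translates directly into a recursive function. A sketch (the tuple encoding of formulas is mine, not the book's); action sets are frozensets so that formulas remain hashable and can be collected into a Python set.

```python
def sub(phi):
    """Sub(phi), following Definition 2 over a tuple encoding of formulas."""
    op = phi[0]
    if op in ("tt", "ff", "var"):
        return {phi}
    if op in ("and", "or"):
        return {phi} | sub(phi[1]) | sub(phi[2])
    return {phi} | sub(phi[2])   # box, dia, nu, mu: the formula plus Sub of its body

# muX. nuY. ([b]X /\ [K]Y) -- its Sub has the seven members listed above
b, K = frozenset({"b"}), frozenset({"K"})
f = ("mu", "X", ("nu", "Y",
     ("and", ("box", b, ("var", "X")), ("box", K, ("var", "Y")))))
```

Since formulas are finite trees, `sub` always terminates and the resulting set is finite, as the definition claims.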
Proposition 1 1. X subsumes X.
2. If X subsumes Z and Z subsumes Y, then X subsumes Y.
3. If X subsumes Y and X ≠ Y, then Y does not subsume X.
Exercises 1. Introduce the following operators of CTL as macros by defining them as fixed points: AF, EF, AG and EG.
2. Put the following formulas into normal form.
a. μY. [K]Y ∧ νY. ⟨a⟩Y
b. [J]Y ∧ ⟨K⟩Y
6. Prove Proposition 1.
The contrast in meaning between [ ] and [↓] is the difference in their fixed point. The formula [ ]Φ is just [τ*]Φ, since Φ has to hold throughout any amount of silent activity. A divergent process may have the property [ ]Φ, but it cannot satisfy [↓]Φ. Hence, the change in fixed point. The set
⋂{ℰ ⊆ P : ‖Φ ∧ [τ]Z‖_{V[ℰ/Z]}^P ⊆ ℰ}
at least contains the stable processes (those unable to perform a silent action) with the property Φ. Therefore, the set also contains any process that satisfies Φ and that eventually stabilizes after some amount of silent activity, provided that Φ continues to be true.
The modal logics Mᵒ and Mᵒ↓ are therefore special sublogics of μM. The derived operators [[K]] and [↓K] are defined using embedded fixed points, as follows (where Φ does not contain Z or Y).
[[K]]Φ ≝ [ ][K][ ]Φ
= νZ. [K][ ]Φ ∧ [τ]Z
= νZ. [K](νY. Φ ∧ [τ]Y) ∧ [τ]Z
[↓K]Φ ≝ [↓][K][ ]Φ
= μZ. [K][ ]Φ ∧ [τ]Z
= μZ. [K](νY. Φ ∧ [τ]Y) ∧ [τ]Z
Observable modal logic Mᵒ can be extended with fixed points. The formulas of observable mu-calculus, μMᵒ, are as follows.
Φ ::= Z | tt | ff | Φ₁ ∧ Φ₂ | Φ₁ ∨ Φ₂ | [[K]]Φ | ⟨⟨K⟩⟩Φ | [ ]Φ | ⟨⟨ ⟩⟩Φ | νZ. Φ | μZ. Φ
Here K ranges over sets of observable actions (which exclude τ). This sublogic is suitable for describing properties of observable transition systems.
We can also define the sublogic μMᵒ↓, which contains the additional modalities [↓] and ⟨⟨↑⟩⟩. Unlike μMᵒ, this logic is sensitive to divergence.
Exercises 1. Show that the fixed point definitions of ⟨⟨ ⟩⟩, [ ], ⟨⟨↑⟩⟩ and [↓] are indeed correct.
2. Provide fixed point definitions for the following modalities: [K↓], [↓K↓], ⟨⟨K⟩⟩ and ⟨⟨↑K↑⟩⟩.
3. Show that divergence is not definable in μMᵒ. That is, prove that there is no formula Φ ∈ μMᵒ with the feature that E↑ iff E ⊨ Φ for any E.
4. Show that Protocol and Copy have the same μMᵒ properties, but not the same μMᵒ↓ properties.
P(n) ≝ aⁿ.b.P(n + 1)
R ≝ Σ{a.Q_i : i ∈ I}
P ≝ P(1) + R
The behaviour of P(1) is as follows.
P(1) −ab→ P(2) −aab→ P(3) −aaab→ ⋯ P(n) −aⁿb→ P(n + 1) ⋯
The processes P and R are not bisimilar because no finite state process can be bisimulation equivalent to b.P(2) (via the pumping lemma for regular languages). However, P and R have the same μM properties, when expressed by formulas without free variables. To see this, suppose that there is a formula Φ that distinguishes between P and R: it follows that there is a formula Ψ such that b.P(2) ⊨ Ψ and Q_i ⊭ Ψ for all i ∈ I. By the finite model theorem, Proposition 3 of Section 5.1, there is a finite state process E such that E ⊨ Ψ. A small argument shows that E can be built from the actions a and b. Consequently, E is Q_j for some j ∈ I. But this contradicts that every process Q_i fails to have the property Ψ.
Not only do bisimilar processes have the same modal properties, but they also have the same μM properties.
Proposition 2 If E ∼ F, then E ≡ F.
An indirect proof of this proposition uses the facts that it holds for the logic M∞, and that μM is a sublogic of M∞, as we shall see in the next section. However, we shall prove this result directly, for we wish to expose some of the inductive structure of modal mu-calculus.
A subset ℰ of P is bisimulation closed if, whenever E ∈ ℰ and F ∈ P and E ∼ F, then F is also in ℰ. The desired result, Proposition 2, is equivalent to the claim that, for any formula Φ without free variables and set of processes P, the set ‖Φ‖^P is bisimulation closed. If this is true, then it is not possible that there can be a μM formula Φ and a pair of bisimilar processes E and F such that E ⊨ Φ and F ⊭ Φ. Conversely, if ‖Φ‖^P is bisimulation closed, then any pair of processes E and F such that E ⊨ Φ and F ⊭ Φ cannot be bisimilar. The next lemma states some straightforward features of bisimulation closure.
bisimulation closed, and therefore by Lemma 1 part 2 the set ‖Φ‖_V is as well.
The arguments for 2 and 3 follow those for the case of ∧, above: both sets ‖Φ‖_{Vᵈ} and ‖Φ‖_{Vᵘ} are bisimulation closed. So, if E ∈ ‖Φ‖_{Vᵈ}, then E ∈ ‖Φ‖_V and also E ∈ ℰᵈ. And if E ∈ ℰᵘ, then there is some F ∈ ℰ such that F ∼ E, meaning F ∈ ‖Φ‖_V and therefore F ∈ ‖Φ‖_{Vᵘ}. The other modal case, when Φ = ⟨K⟩Ψ, is similar, using Lemma 1 part 3.
The interesting cases are when Φ is a fixed point formula. Suppose first that Φ is μZ. Ψ. To show 1, we need to establish that the least ℰ such that ‖Ψ‖_{V[ℰ/Z]} ⊆ ℰ is bisimulation closed. By assumption, V is bisimulation closed, and so V = Vᵈ from Lemma 2 parts 2 and 3. Therefore, by the induction hypothesis on 2, we know that ‖Ψ‖_{V[ℰᵈ/Z]} ⊆ ℰᵈ. Because ℰᵈ ⊆ ℰ and ℰ is the least set obeying ‖Ψ‖_{V[ℰ/Z]} ⊆ ℰ, it follows that ℰ = ℰᵈ and is therefore bisimulation closed. Cases 2 and 3 follow the same pattern as before. The final case is Φ = νZ. Ψ. The argument is similar to that just employed for the least fixed point, except that to establish 1 we use the induction hypothesis on 3: we need to show that the largest set ℰ such that ℰ ⊆ ‖Ψ‖_{V[ℰ/Z]} is bisimulation closed. Because V is bisimulation closed, V = Vᵘ, so ℰᵘ ⊆ ‖Ψ‖_{V[ℰᵘ/Z]} by the induction hypothesis on 3, and therefore ℰ = ℰᵘ. Cases 2 and 3 follow as before. □
This result tells us more than that bisimilar processes have the same properties when expressed as a formula without free variables. They also have the same properties when expressed by formulas with free variables, provided the meanings of the free variables are bisimulation closed. The proof of this result also establishes that formulas of μMᵒ, observable modal mu-calculus, which do not contain free variables, are preserved by observable bisimulation equivalence. The earlier lemmas and Proposition 3 all hold for observable bisimulation equivalence and μMᵒ.
5.5 Approximants
At first sight, there is a chasm between the meaning of an extremal fixed point and techniques (other than exhaustive analysis) for actually finding it. However, there
μg = ⋂{ℰ ⊆ P : g(ℰ) ⊆ ℰ}
νg = ⋃{ℰ ⊆ P : ℰ ⊆ g(ℰ)}
Suppose we wish to determine the set νg. Let νⁱg for i ≥ 0 be defined iteratively as follows: ν⁰g = P and νⁱ⁺¹g = g(νⁱg), so that
ν⁰g ⊇ ν¹g ⊇ ν²g ⊇ ⋯ with νg ⊆ νⁱg at every stage.
If νⁱg = νⁱ⁺¹g, then the fixed point νg is νⁱg. Why is this? Because νⁱg is then a fixed point of g, and νg ⊆ νⁱg and νg is the greatest fixed point.
These observations suggest a strategy for discovering νg. Iteratively construct the sets νⁱg starting with i = 0, until νⁱg is the same set as νⁱ⁺¹g. If P is a finite set containing n processes, then this iterative construction terminates at, or before, the case i = n, and therefore νg is equal to νⁿg.
Example 1 Let P be {Cl, tick.0, 0}, and let g be the function f[⟨tick⟩Z, Z].
ν⁰g = P = {Cl, tick.0, 0}
ν¹g = ‖⟨tick⟩Z‖_{V[ν⁰g/Z]}^P = {Cl, tick.0}
ν²g = ‖⟨tick⟩Z‖_{V[ν¹g/Z]}^P = {Cl}
ν³g = ‖⟨tick⟩Z‖_{V[ν²g/Z]}^P = {Cl}
Stabilization occurs at the stage ν²g because this set coincides with ν³g. The fixed point νg, the set of processes ‖νZ. ⟨tick⟩Z‖_V^P, is therefore the singleton set {Cl}.
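The stages of this example can be replayed mechanically; a sketch, with strings standing for the processes:

```python
P = {"Cl", "tick.0", "0"}
tick = {"Cl": {"Cl"}, "tick.0": {"0"}}      # the tick transitions

def g(E):                                   # f[<tick>Z, Z](E)
    return {F for F in P if tick.get(F, set()) & E}

stages, S = [set(P)], set(P)                # nu^0 g = P
while len(stages) < 2 or stages[-1] != stages[-2]:
    S = g(S)
    stages.append(S)
# stages: [{'Cl','tick.0','0'}, {'Cl','tick.0'}, {'Cl'}, {'Cl'}]
```

The loop stops as soon as two successive stages coincide, matching the stabilization at ν²g above.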
If P is not a finite set of processes, then we can still guarantee that νg is reachable iteratively by invoking ordinals as indices. Recall that ordinals are ordered as follows.
0, 1, …, ω, ω + 1, …, ω + ω, ω + ω + 1, …
The ordinal ω is the initial limit ordinal (which has no immediate predecessor), whereas ω + 1 is its successor. Assume that α and λ range over ordinals. We define ν^αg, for any ordinal α ≥ 0, with the base case and successor case as before, ν⁰g = P and ν^{α+1}g = g(ν^αg). The case when λ is a limit ordinal is defined as follows.
ν^λg = ⋂{ν^αg : α < λ}³
The fixed point set νg appears somewhere in the sequence, at the first point when ν^αg = ν^{α+1}g.
Example 2 Let P be the set {C, B_i : i ≥ 0} where C is the cell
C ≝ in(x).B_x where x : ℕ
B_{n+1} ≝ down.B_n for n ≥ 0
Let g be the function f[⟨−⟩Z, Z]. The fixed point νg is the empty set.
ν⁰g = P = {C, B_i : i ≥ 0}
ν¹g = ‖⟨−⟩Z‖_{V[ν⁰g/Z]}^P = {C, B_i : i ≥ 1}
³Notice that νg ⊆ ν^λg when λ is a limit ordinal because νg ⊆ ν^αg for all α < λ.
ν^{ω+1}g = {C′, B′_k : k ≥ 1}
ν^{ω+ω}g = {C′}
ν^{ω+ω+1}g = ∅
μ^λg = ⋃{μ^αg : α < λ}
μ⁰g ⊆ μ¹g ⊆ ⋯ ⊆ μ^ωg ⊆ μ^{ω+1}g ⊆ ⋯, with μ^αg ⊆ μg at every stage.
The fixed point μg is a superset of each of the iterates μ^αg. First, μ⁰g ⊆ μg, and so by monotonicity of g (and limit considerations) μ^αg ⊆ μg for any α. The first point at which μ^αg is equal to its successor μ^{α+1}g is the required fixed point μg. An iterative method for finding μg is to construct the sets μ^αg starting with μ⁰g until it is the same as its successor.
Example 3 Let g be the function f[[tick]ff ∨ ⟨−⟩Z, Z]. Assume P is the set of processes in Example 1 above.
μ⁰g = ∅
μ¹g = ‖[tick]ff ∨ ⟨−⟩Z‖_{V[μ⁰g/Z]}^P = {0}
μ²g = ‖[tick]ff ∨ ⟨−⟩Z‖_{V[μ¹g/Z]}^P = {tick.0, 0}
μ³g = ‖[tick]ff ∨ ⟨−⟩Z‖_{V[μ²g/Z]}^P = {tick.0, 0}
Stabilization occurs at μ²g, which is the required fixed point. Notice that if we consider νg instead, then we obtain the following different set.
ν⁰g = P = {Cl, tick.0, 0}
ν¹g = ‖[tick]ff ∨ ⟨−⟩Z‖_{V[ν⁰g/Z]}^P = P
This stabilizes at the initial point.
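The μ iteration of Example 3 can be replayed in the same style (a sketch; as before, strings stand for the processes):

```python
P = {"Cl", "tick.0", "0"}
succ = {"Cl": {"Cl"}, "tick.0": {"0"}, "0": set()}   # every transition is a tick

def g(E):                     # f[[tick]ff v <->Z, Z](E)
    no_tick = {F for F in P if not succ[F]}          # [tick]ff
    step = {F for F in P if succ[F] & E}             # <->Z
    return no_tick | step

S, prev = set(), None
while S != prev:              # mu iteration: start from the empty set
    prev, S = S, g(S)
# S is {'tick.0', '0'}: the processes with a run to a state where tick is impossible
```

Starting instead from P would stabilize immediately at P, just as the νg calculation above shows.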
Cl₁ ≝ tick.0
Cl_{i+1} ≝ tick.Cl_i for i ≥ 1
Let E be Σ{Cl_i : i ≥ 1}. E models an arbitrary new clock, which will eventually break down. Let P be the set {E, 0, Cl_i : i ≥ 1}. Because all behaviour is finite, each process in P has the property μZ. [tick]Z. Let g be the function f[[tick]Z, Z].
μ⁰g = ∅
μ¹g = ‖[tick]Z‖_{V[μ⁰g/Z]}^P = {0}
Proposition 1 Fix P and V, and let g be the function f[Φ, Z]. Then the approximant σ^αg = ‖σZ^α. Φ‖_V^P for any ordinal α.
Proof. By induction on α. We drop the index P. The base cases are straightforward. Suppose the result holds for all α < δ. If δ is a successor ordinal,
The quantification over ordinals in these clauses is bounded by the size of P(E). The following is a corollary of the discussion in this section. It will turn out to be very useful later, since it provides "least approximants" for when a process has a least fixed point property, and for when a process fails to have a greatest fixed point property.
Proposition 2  1. If E ⊨_V μZ. Φ, then there is a least ordinal α such that E ⊨_V μZ^α. Φ and for all β < α, E ⊭_V μZ^β. Φ.
2. If E ⊭_V νZ. Φ, then there is a least ordinal α such that E ⊭_V νZ^α. Φ and for all β < α, E ⊨_V νZ^β. Φ.
Proof. Notice that for any E, E ⊭_V μZ⁰. Φ. Therefore, if E ⊨_V μZ^α. Φ, then for all β > α it follows by monotonicity that E ⊨_V μZ^β. Φ. Consequently, there is a least ordinal α for which E ⊨_V μZ^α. Φ when E has the property μZ. Φ relative to V. Case 2 is dual. □
Example 5  In Section 5.3, the definitions of []Φ and [↓]Φ in μM were contrasted. Let νZ. Ψ be the formula νZ. Φ ∧ [τ]Z (expressing []Φ) and let μZ. Ψ be μZ. Φ ∧ [τ]Z (expressing [↓]Φ)⁴. These formulas generate different approximants.
νZ⁰. Ψ = tt                μZ⁰. Ψ = ff
νZ¹. Ψ = Φ ∧ [τ]tt = Φ     μZ¹. Ψ = Φ ∧ [τ]ff
νZ². Ψ = Φ ∧ [τ]Φ          μZ². Ψ = Φ ∧ [τ](Φ ∧ [τ]ff)
The approximant μZ^i. Ψ carries the extra demand that there cannot be a sequence of silent actions of length i. Hence, [↓]Φ requires all immediate τ behaviour to eventually peter out.
Exercises  1. Let P be the set of processes P(Ven). Using approximants, determine the sets || Φ ||_V when Φ is each of the following.
the iterative technique in the presence of multiple fixed points, and comment on the entanglement of approximants.
Example 1  Ven has the property νZ. [2p, 1p]Ψ ∧ [-]Z, when Ψ is the formula μY. ⟨-⟩tt ∧ [-{collect_b, collect_l}]Y; see example 1 of Section 5.1. Let P be the set {Ven, Ven_b, Ven_l, collect_b.Ven, collect_l.Ven}. First, the subset of P is calculated for the embedded fixed point Ψ, as follows. Its ith approximant is represented as μY^i (where we drop the index P).
μY⁰ = ∅
μY¹ = || ⟨-⟩tt ∧ [-{collect_b, collect_l}]Y ||_{V[μY⁰/Y]}
Therefore, || Ψ ||_V is P. Next, the outermost fixed point is evaluated, given that every process in P has the property Ψ. Its ith approximant is νZ^i.
νZ⁰ = P
νZ¹ = || [2p, 1p]Ψ ∧ [-]Z ||_{V[νZ⁰/Z]} = P
In this example, the embedded fixed point can be evaluated independently of the outermost fixed point.
Example 1 illustrates how the iterative technique works for formulas with multiple fixed points that are independent of each other. The formula of example 1 has the form νZ. Φ(Z, μY. Ψ(Y)), where the notation makes explicit which variables can be free in subformulas. Z does not occur free within the subformula μY. Ψ(Y), but may occur within the subformula Φ(Z, μY. Ψ(Y)). Consequently, when evaluating the outermost fixed point, we have that:
νZ⁰ = P
νZ¹ = || Φ(Z, μY. Ψ(Y)) ||_{V[νZ⁰/Z]}
The subset of formulas of μM, written μMI, which has the property that all its fixed points are independent of each other, is characterised as follows.
Φ ∈ μMI iff whenever σ₁Y. Ψ₁ ∈ Sub(Φ) and σ₂Z. Ψ₂ ∈ Sub(Φ) and Y ≠ Z, then Y is not free in Ψ₂ and Z is not free in Ψ₁.
CTL formulas, when understood as fixed point formulas, belong to this subclass of μM formulas.
Proposition 1  CTL formulas belong to μMI.
Proof. This follows from the fixed point definitions of until formulas (and their negations) presented in Section 5.2. For example, A(Φ U Ψ) is defined as μZ. Ψ ∨ (Φ ∧ (⟨-⟩tt ∧ [-]Z)), where Z does not occur in Φ or Ψ. Therefore, any further fixed points introduced into these subformulas do not contain Z. □
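The μMI condition is purely syntactic, so it can be tested mechanically. Below is a sketch under my own tuple encoding of formulas (not the book's notation); it assumes the formula is normal, so distinct fixed points bind distinct variables, and checks that no fixed-point body has a free occurrence of a variable bound by a different fixed point.

```python
def free_vars(phi):
    """Free variables of a formula encoded as nested tuples."""
    tag = phi[0]
    if tag == "var":
        return {phi[1]}
    if tag in ("mu", "nu"):
        return free_vars(phi[2]) - {phi[1]}
    if tag in ("and", "or"):
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag in ("box", "dia"):
        return free_vars(phi[2])
    return set()                      # tt, ff

def fixpoints(phi):
    """All fixed-point subformulas (sigma, variable, body)."""
    if phi[0] in ("mu", "nu"):
        yield phi
    for part in phi[1:]:
        if isinstance(part, tuple):
            yield from fixpoints(part)

def in_muMI(phi):
    fps = list(fixpoints(phi))
    return all(y == z or (y not in free_vars(q) and z not in free_vars(p))
               for _, y, p in fps for _, z, q in fps)

# The shape of example 1 (independent) versus example 2 (dependent):
independent = ("nu", "Z", ("and",
    ("box", "2p", ("mu", "Y", ("dia", "-", ("var", "Y")))),
    ("box", "-", ("var", "Z"))))
dependent = ("mu", "Y", ("nu", "Z", ("box", "a",
    ("and", ("or", ("dia", "b", ("tt",)), ("var", "Y")),
            ("var", "Z")))))
```

The first formula passes the test; the second fails it because Y is free inside the νZ body.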
Example 2  Consider the simple processes in Figure 5.1. Let P be the set {0, 0′, 0′′} and let Ψ be the formula
μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)
This formula does not belong to μMI because its innermost fixed point subformula νZ. ... contains Y free. The calculation of the outer fixed point depends on calculating the inner fixed point at each index. We use νZ^{ij} to represent the jth approximant of the subformula prefaced with νZ when any free occurrence of Y is interpreted as the ith approximant μY^i.
μY⁰ = ∅
μY¹ = || νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{V[μY⁰/Y]}
νZ⁰⁰ = P
νZ⁰¹ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY⁰/Y])[νZ⁰⁰/Z]} = {0′′, 0}
νZ⁰² = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY⁰/Y])[νZ⁰¹/Z]} = {0′′}
νZ⁰³ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY⁰/Y])[νZ⁰²/Z]} = {0′′}
So μY¹ = {0′′}
μY² = || νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{V[μY¹/Y]}
νZ¹⁰ = P
νZ¹¹ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY¹/Y])[νZ¹⁰/Z]} = {0′′, 0}
νZ¹² = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY¹/Y])[νZ¹¹/Z]} = {0′′}
νZ¹³ = || [a]((⟨b⟩tt ∨ Y) ∧ Z) ||_{(V[μY¹/Y])[νZ¹²/Z]} = {0′′}
So μY² = {0′′}
The innermost fixed point is evaluated with respect to more than one outermost approximant.
Example 2 illustrates dependence of fixed points. The formula has the form μY. Φ(Y, νZ. Ψ(Y, Z)), where Y is free in the innermost fixed point. The interpretation of the subformula νZ. Ψ(Y, Z) may vary according to the interpretation of Y.
A simple measure of how much work has to be done when calculating the subset of P with a fixed point property is the number of approximants that must be calculated. For a simple formula σZ. Φ, when Φ does not contain fixed points and when |P| = n, the maximum number of approximants is n. If Φ ∈ μMI and contains k fixed points, then the maximum calculation needed is k × n. In the case of the formula of example 2 with two fixed points, it appears that n² is the upper bound. For each outermost approximant, we may have to calculate n inner approximants. There are at most n outer approximants, and therefore the upper bound is n².
5.6. Embedded approximants 125
The initial approximants for fixed points are νZ⁰ = P and μZ⁰ = ∅. Initial approximants that are closer to the required fixed point will, in general, mean that less work has to be done in the calculation. For instance, if we know a set E such that P ⊇ E ⊇ || νZ. Φ ||_V, then we can set νZ⁰ to E. This observation can be used when evaluating the embedded fixed point formula νZ. Φ(Z, νY. Ψ(Z, Y)).
νZ⁰ = P
νZ¹ = || Φ(Z, νY. Ψ(Z, Y)) ||_{V[νZ⁰/Z]}
νY⁰⁰ = P
νY⁰¹ = || Ψ(Z, Y) ||_{(V[νZ⁰/Z])[νY⁰⁰/Y]}
Therefore, we can use || νY. Ψ(Z, Y) ||_{V[νZ⁰/Z]} as the initial approximant νY¹⁰, and so on as follows.
νZ² = || Φ(Z, νY. Ψ(Z, Y)) ||_{V[νZ¹/Z]}
νY¹⁰ = || νY. Ψ(Z, Y) ||_{V[νZ⁰/Z]}
νY¹¹ = || Ψ(Z, Y) ||_{(V[νZ¹/Z])[νY¹⁰/Y]}
νY^{i0} = || νY. Ψ(Z, Y) ||_{V[νZ^{i-1}/Z]}
νY^{i1} = || Ψ(Z, Y) ||_{(V[νZ^i/Z])[νY^{i0}/Y]}
νZ₁. Φ₁(Z₁, νZ₂. Φ₂(Z₁, Z₂, ..., νZ_k. Φ_k(Z₁, ..., Z_k) ...))
for which at most k × n approximants need to be calculated.
The situation is dual for least fixed points. If ∅ ⊆ E ⊆ || μZ. Φ ||_V, then we can set the initial set μZ⁰ to be E. In the case of the formula μZ. Φ(Z, μY. Ψ(Z, Y)), by monotonicity the inner fixed points increase with the outer approximants, so the set || μY. Ψ(Z, Y) ||_{V[μZ^{i-1}/Z]} is a better initial approximant than ∅ for μY^{i0}. Again, we need only calculate 2 × n approximants instead of n². This can be extended to multiple occurrences of embedded least fixed points.
μZ₁. Φ₁(Z₁, μZ₂. Φ₂(Z₁, Z₂, ..., μZ_k. Φ_k(Z₁, ..., Z_k) ...))
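The saving can be observed by counting approximant computations. The sketch below is entirely my own toy example (the set P and the monotone functions Phi and Psi are assumptions, not the book's): it evaluates νZ. Φ(Z, νY. Ψ(Z, Y)) twice, once restarting the inner iteration from P each time, and once seeding it with the previous inner fixed point as just described.

```python
def gfp(f, start, count):
    """Greatest fixed point of f, iterated downward from `start`."""
    S = start
    while True:
        count[0] += 1                  # one approximant computed
        nxt = f(S)
        if nxt == S:
            return S
        S = nxt

def nested(P, Phi, Psi, seeded):
    """Evaluate nuZ. Phi(Z, nuY. Psi(Z, Y)) over P, counting steps."""
    count = [0]
    Z, inner = set(P), set(P)
    while True:
        start = inner if seeded else set(P)   # seed with previous nuY
        inner = gfp(lambda S: Psi(Z, S), start, count)
        nxt = Phi(Z, inner)
        if nxt == Z:
            return Z, count[0]
        Z = nxt

# Arbitrary monotone functions on subsets of a six-element P.
P = set(range(6))
Psi = lambda Z, S: {p for p in Z if p + 1 in S or p == 5}
Phi = lambda Z, Y: {p for p in Y if p != 5}

res_naive, n_naive = nested(P, Phi, Psi, seeded=False)
res_seed, n_seed = nested(P, Phi, Psi, seeded=True)
```

Both runs compute the same fixed point, but the seeded run uses strictly fewer approximant computations on this input.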
More complex are formulas, as in example 2, that contain multiple occurrences of dependent but different fixed points. In the case of a formula νZ. Φ(Z, μY. Ψ(Z, Y)), we cannot use || μY. Ψ(Z, Y) ||_{V[νZ⁰/Z]} as an initial approximant μY¹⁰ because the ordering is the wrong way around to be of help, since νZ⁰ ⊇ νZ¹. Least fixed points are approximated from below and not from above. Similar comments apply to the calculation of μZ. Φ(Z, νY. Ψ(Z, Y)). In both these cases, a best worst-case estimate of the number of approximants is n².
However, when we consider three fixed points
μY. Φ₁(Y, νZ. Φ₂(Y, Z, μX. Φ₃(Y, Z, X))),
the maximum number of calculations needed is less than n³. As the reader can verify, monotonicity can be used on the innermost fixed point because μX^{ijk} ⊆ μX^{i′jk} when i ≤ i′. Roughly speaking, the number of calculations required for k dependent alternating fixed points is n^{⌊k/2⌋+1}, as was proved by Long et al. [38].
The general issue of how many approximants are needed to calculate an embedded fixed point remains an open problem. The observations here suggest that the more alternating dependent fixed points there are, the more calculations are needed, and that the number of calculations is exponential in this alternation. Indeed, we may wonder if somehow the expressive power of μM grows as more dependent alternating fixed points are permitted. The answer is "yes," as was proved by Bradfield [9].
A more generous subset of μM than μMI is the alternation-free fragment μMA, defined as follows.
Φ ∈ μMA iff whenever μY. Ψ₁ ∈ Sub(Φ) and νZ. Ψ₂ ∈ Sub(Φ), then Y is not free in Ψ₂ and Z is not free in Ψ₁.
This subset permits dependence of fixed points, provided they are of the same kind. For instance, a formula of the following form is permitted.
νZ₁. Φ₁(Z₁, νZ₂. Φ₂(Z₁, Z₂, μY₁. Ψ₁(Y₁, μY₂. Ψ₂(Y₁, Y₂))))
An example of a formula that does not belong to μMA is Ψ from example 2, since Y is free in the subformula νZ. ... . From the analysis above, any formula of μMA with k fixed point operators requires the calculation of at most k × n approximants.
What upper bound is there for a formula with k dependent alternating fixed points?
4. Prove Proposition 1 in full. Show using approximants that there is a linear time algorithm for checking whether E ⊨ Φ, when E is finite state and Φ is a CTL formula.
5. Show that, for any Φ ∈ μMA with k fixed point operators, at most k × n approximants need to be calculated to work out which subset of P has property Φ when |P| = n.
6. The following is a technical definition of the full alternation hierarchy from Niwinski and Bradfield [46, 9].
a. If Φ contains no fixed point operators, then Φ ∈ Σ₀ and Φ ∈ Π₀.
b. If Φ ∈ Σₙ ∪ Πₙ, then Φ ∈ Σₙ₊₁ and Φ ∈ Πₙ₊₁.
c. If Φ, Ψ ∈ Σₙ (Πₙ), then [K]Φ, ⟨K⟩Φ, Φ ∧ Ψ, Φ ∨ Ψ ∈ Σₙ (Πₙ).
To satisfy it, a process must not be able to perform actions from the set A − K forever. Consider the syntactic approximants (as described in Section 5.5) this formula generates. We abbreviate the ith approximant to μZ^i.
μZ⁰ = ff
μZ¹ = ⟨-⟩tt ∧ [-K]μZ⁰ = ⟨-⟩tt ∧ [-K]ff
μZ² = ⟨-⟩tt ∧ [-K]μZ¹ = ⟨-⟩tt ∧ [-K](⟨-⟩tt ∧ [-K]ff)
The approximant μZ¹ expresses that a K action must happen next. μZ² states that a K action must happen within two transitions, that is, for any sequence of transitions E -a-> F -b-> G, either a ∈ K or b ∈ K. Moreover, the formula requires that, if there is a run with just one transition, then it is a K transition. Generalising, μZ^i states that any sequence of transitions of length i contains a K action (and any run whose transition length is less than i contains a K transition). Therefore, μZ^ω = ⋁{μZ^i : i ≥ 0} guarantees the liveness property that in every run there is a K action.
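On a finite-state graph these approximants stabilize quickly. A hypothetical three-state illustration (the states and transitions are my own, not from the text): p and q are forced into a k action on every run, while r can do a forever.

```python
# Toy graph: p and q must eventually perform k; r can avoid K forever.
TRANS = {
    "p": [("a", "q"), ("k", "r")],
    "q": [("k", "r")],
    "r": [("a", "r")],
}
K = {"k"}
P = set(TRANS)

def step(S):
    """|| <->tt /\\ [-K]Z || with Z interpreted as S."""
    return {p for p in P
            if TRANS[p]                                   # <->tt
            and all(q in S for a, q in TRANS[p] if a not in K)}

S = set()                 # muZ^0 = ff
while step(S) != S:
    S = step(S)           # muZ^{i+1} = <->tt /\ [-K] muZ^i
```

The iteration adds q first (its only transition is a k), then p, and never r.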
Liveness with respect to actions does not appear to be expressible in terms of liveness with respect to state, that is, as a formula of the form μZ. Φ ∨ (⟨-⟩tt ∧ [-]Z) where Φ does not contain fixed points (or Z). For instance, neither of the following formulas captures liveness.
Φ₁ = μZ. ⟨K⟩tt ∨ (⟨-⟩tt ∧ [-]Z)
Φ₂ = μZ. (⟨-⟩tt ∧ [-K]ff) ∨ (⟨-⟩tt ∧ [-]Z)
Φ₁ is too weak, since it merely states that eventually some action in K is possible, without any guarantee that it happens. In contrast, Φ₂ is too strong, since it states that eventually only K actions are possible (and therefore must happen).
Example 2  Consider the following definitions of processes.
A₀ def= a. Σ{A_i : i ≥ 0}
A_{i+1} def= b.A_i   (i ≥ 0)
B₀ def= a. Σ{B_i : i ≥ 0} + b. Σ{B_i : i ≥ 0}
B_{i+1} def= b.B_i   (i ≥ 0)
For any i ≥ 0, process A₀ can perform the cycle A₀ -a b^i-> A₀, whereas B₀ can perform the cycles B₀ -a b^i-> B₀ and B₀ -b b^i-> B₀. When j ≥ 0, A_j | A_j has the property "eventually a happens," which is not shared by B_j | B_j: in every run of A_j | A_j, the action a occurs, whereas there are runs from B_j | B_j consisting only of b actions. Therefore, A_j | A_j ⊨ μZ. ⟨-⟩tt ∧ [-a]Z and B_j | B_j ⊭ μZ. ⟨-⟩tt ∧ [-a]Z. On the other hand, both these processes have the property Φ₁ above, when K is the singleton set {a}. Process B_j | B_j eventually reaches B₀ | B_k or B_k | B₀, and both satisfy ⟨a⟩tt. Moreover, both fail to have the property Φ₂ above, when K is the set {a}. For instance, in the case of A_j | A_j, there is no guarantee that in every run A₀ | A₀ is reached, because this is the only process of the form A_i | A_j satisfying ⟨-⟩tt ∧ [-a]ff.
Liveness and safety may relate to subsets of runs. For instance, they may be triggered by particular actions or states. A simple case is "if action a ever happens, then eventually b happens": any run with an a action has a later b action. This is expressed by the following formula.
An example of a conditional safety property is "whenever Ψ holds, Φ^c will never become true," which is expressed as follows.
However, this only holds if we assume that the signal is fair. Let Q and R be variables, and V a valuation, such that Q is true when the crossing is in any state where Rail has the form green.tcross.red.Rail (the states E₂, E₃, E₆, and E₁₀ of figure 1.12), and R holds when it is in any state where Road has the form up.ccross.down.Road (the states E₁, E₃, E₇ and E₁₁). The liveness property now becomes "for any run, if Q is true infinitely often and R is also true infinitely often, then whenever a car approaches the crossing, eventually it crosses," which
5.7. Expressing properties 131
Example 4  Recall the scheduler from Section 1.4 that schedules a sequence of tasks, and must ensure that a task cannot be restarted until its previous operation has finished. Suppose that initiation of one of the tasks is given by the action a and its termination by b. The scheduler therefore has to guarantee the cyclic behaviour a b, when other actions may occur before and after each occurrence of a and each occurrence of b. This property can be defined inductively as follows.
Cycle(ab) def= [b]ff ∧ [a]Cycle(ba) ∧ [-a]Cycle(ab)
Cycle(ba) def= [a]ff ∧ [b]Cycle(ab) ∧ [-b]Cycle(ba)
Here we have left open the possibility that runs have finite length. Appropriate occurrences of ⟨-⟩tt within the definition preclude it. An important issue is whether these recursive definitions are to be interpreted with least or greatest fixed points,
⁵This formula leaves open the possibility that a run has finite length. To preclude it, we add ⟨-⟩tt at the outer and inner levels.
or a mixture of the two. This depends upon whether intervening actions are allowed to continue forever without the next a or b happening. If we prohibit this, the cyclic property is expressed using least fixed points.
μY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [-b]Z) ∧ [-a]Y
If we permit other actions to intervene forever, but insist that whenever a happens b happens later, a mixture of fixed points is required.
νY. [b]ff ∧ [a](μZ. [a]ff ∧ [b]Y ∧ [-b]Z) ∧ [-a]Y
The length of the cycle can be extended. As an exercise, the reader is invited to define the formula for Cycle(abcd).
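Under the greatest fixed point reading, Cycle(ab) and Cycle(ba) can be computed together as a pair of mutually recursive fixed points iterated downward. The three-state scheduler below is my own toy example (s0 -a-> s1 -c-> s2 -b-> s0, with c an intervening action), not from the text.

```python
TRANS = {
    "s0": [("a", "s1")],
    "s1": [("c", "s2")],
    "s2": [("b", "s0")],
}
P = set(TRANS)

def box(pred, S):
    """|| [K]Phi || for K = {a : pred(a)}, Phi interpreted as S."""
    return {p for p in P
            if all(q in S for a, q in TRANS[p] if pred(a))}

def step(Yab, Yba):
    # Cycle(ab) = [b]ff /\ [a]Cycle(ba) /\ [-a]Cycle(ab)
    nab = (box(lambda a: a == "b", set())
           & box(lambda a: a == "a", Yba)
           & box(lambda a: a != "a", Yab))
    # Cycle(ba) = [a]ff /\ [b]Cycle(ab) /\ [-b]Cycle(ba)
    nba = (box(lambda a: a == "a", set())
           & box(lambda a: a == "b", Yab)
           & box(lambda a: a != "b", Yba))
    return nab, nba

Yab, Yba = set(P), set(P)           # greatest fixed points: start at P
while step(Yab, Yba) != (Yab, Yba):
    Yab, Yba = step(Yab, Yba)
```

The iteration settles with s0 satisfying Cycle(ab) and s1, s2 satisfying Cycle(ba), as expected of a scheduler that has just done, respectively, a b or an a.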
Another class of properties involves counting. An instance is that, in each run there are exactly two a actions, given as follows.
μX. [a](μY. [a](νZ. [a]ff ∧ [-]Z) ∧ ⟨-⟩tt ∧ [-a]Y) ∧ ⟨-⟩tt ∧ [-a]X
Another property is that, in every run, there are at least two a actions. Even more general is the property "in each run, a can only happen finitely often," which is expressed by μX. νY. [a]X ∧ [-a]Y. However, there are also many counting properties that are not expressible in the logic. A notable case is the following property of a buffer: "the number of out actions never exceeds the number of in actions."
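Formulas such as the finitely-often property just given can be checked by a general evaluator. The following is a naive sketch under my own tuple encoding (not the book's notation); it recomputes nested fixed points from scratch, so it exhibits the worst-case behaviour discussed in Section 5.6.

```python
def sem(phi, trans, env):
    """Subset of processes satisfying phi; env maps variables to sets."""
    P = set(trans)
    tag = phi[0]
    if tag == "tt":
        return P
    if tag == "ff":
        return set()
    if tag == "var":
        return env[phi[1]]
    if tag == "and":
        return sem(phi[1], trans, env) & sem(phi[2], trans, env)
    if tag == "or":
        return sem(phi[1], trans, env) | sem(phi[2], trans, env)
    if tag == "box":
        S = sem(phi[2], trans, env)
        return {p for p in P
                if all(q in S for a, q in trans[p] if a in phi[1])}
    if tag == "dia":
        S = sem(phi[2], trans, env)
        return {p for p in P
                if any(q in S for a, q in trans[p] if a in phi[1])}
    z, body = phi[1], phi[2]                 # mu or nu
    S = set() if tag == "mu" else P
    while True:
        nxt = sem(body, trans, {**env, z: S})
        if nxt == S:
            return S
        S = nxt

# "a happens only finitely often": muX. nuY. [a]X /\ [-a]Y,
# with [-a] rendered as [b] over the alphabet {a, b}.
FIN_A = ("mu", "X", ("nu", "Y", ("and",
         ("box", {"a"}, ("var", "X")),
         ("box", {"b"}, ("var", "Y")))))
TRANS = {
    "t0": [("b", "t0"), ("a", "t1")],
    "t1": [("b", "t1")],
    "t2": [("a", "t2")],
}
```

On this toy graph t0 and t1 satisfy the property (a occurs at most once on any run), while t2, which loops on a, does not.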
Subgoal: F ⊨ Φ?
reduces to
Subgoal: E ⊨ Φ₁?
or to
Subgoal: E ⊨ Φ₂?
This technique has the merit that the formula Ψ in any subgoal is a proper subformula of the goal formula Φ, that is, Ψ ∈ Sub(Φ). The method thereby supports the general principle that a proof of a property "follows from" subproofs of subproperties.
Checking whether a process satisfies a μM formula is not as straightforward. The problem is what to do with fixed point formulas, for instance with the following goal.
Goal: E ⊨_V νZ. Φ?
According to the semantic clause for νZ in Section 5.1, the goal reduces to the subgoals
for each F ∈ 𝓔, when 𝓔 is a subset of P(E). This requires us first to identify the set P(E), and then to choose an appropriate subset 𝓔.
We can avoid calculating P(E) and choosing subsets of it by appealing to approximants, as presented in Sections 5.5 and 5.6. The goal now reduces to the subgoals
for each ordinal α. At this point we may use induction over ordinals. However, we need to be careful with limit ordinals. If the formula contains embedded fixed points, we may need to use simultaneous induction over ordinals. However, if E is a finite state process, then the goal reduces to the single subgoal
where n is the number of processes in the transition graph for E (or an overestimate of its size). This allows one to reduce verification of a μM property to an M property.
6.2. Property checking games 135
However, it has the disadvantage that a proof of a property no longer follows from subproofs of subproperties.
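For a finite-state E the reduction can be carried out syntactically: replace σZ. Φ by its nth syntactic approximant, which is fixed-point free. A sketch under my own tuple encoding, assuming Φ itself contains no further binders so that substitution is capture-free:

```python
def subst(phi, z, psi):
    """phi[psi / z], assuming no binder in phi captures z."""
    if phi[0] == "var":
        return psi if phi[1] == z else phi
    return tuple(subst(part, z, psi) if isinstance(part, tuple) else part
                 for part in phi)

def approximant(sigma, z, body, n):
    """nth syntactic approximant: nuZ^0 = tt, muZ^0 = ff, and
    sigma Z^{i+1}. Phi = Phi[sigma Z^i. Phi / Z]."""
    acc = ("tt",) if sigma == "nu" else ("ff",)
    for _ in range(n):
        acc = subst(body, z, acc)
    return acc

# nuZ. [tock]ff /\ <tick>Z unfolded twice into a pure modal formula.
body = ("and", ("box", "tock", ("ff",)), ("dia", "tick", ("var", "Z")))
unrolled = approximant("nu", "Z", body, 2)
```

Each unfolding replaces the bound variable by the previous approximant, so the result grows linearly in n here.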
Discovering a fixed point set in general is not easy, and is therefore prone to error. We therefore prefer simpler, and consequently safer, methods for checking whether temporal properties hold. Towards this end, we first provide a different characterisation of the satisfaction relation between a process and a formula in terms of game playing.
Exercises  1. a. Develop a set of goal directed rules for checking M properties of finite state processes. This can be viewed as a "top down" approach to property checking.
b. Develop a "bottom up" approach to property checking of finite state processes. That is, develop a method that, when given the processes that have subformula properties, constructs the processes having the property itself.
c. Give advantages and disadvantages of these two approaches, top down and bottom up. Is there a way of combining their advantages?
2. a. Develop a set of goal directed rules for checking CTL properties of finite state systems. Use your rules to show the following.
i. Ven ⊨ AG([tick]ff ∨ AF⟨collect_l⟩tt)
ii. Cl ⊨ AG(⟨-⟩tt ∧ [-tick]ff)
iii. Crossing ⊨ EF⟨ccross⟩tt
iv. Cl′ ⊨ A((⟨-⟩tt ∧ [-tick]ff) U ⟨tock⟩tt)
b. Develop a bottom up approach to CTL property checking of finite state processes. That is, develop a method that, when given the processes that have subformula properties, constructs the processes having the property itself. Illustrate your technique on the examples above.
3. Present an inductive technique for showing when E ⊨_V νZ. Φ that uses induction on ordinals α.
• if Φ_j = Ψ₁ ∧ Ψ₂, then player R chooses a conjunct Ψ_i where i ∈ {1, 2}: the process E_{j+1} is E_j and Φ_{j+1} is Ψ_i
• if Φ_j = Ψ₁ ∨ Ψ₂, then player V chooses a disjunct Ψ_i where i ∈ {1, 2}: the process E_{j+1} is E_j and Φ_{j+1} is Ψ_i
• if Φ_j = [K]Ψ, then player R chooses a transition E_j -a-> E_{j+1} with a ∈ K, and Φ_{j+1} is Ψ
• if Φ_j = ⟨K⟩Ψ, then player V chooses a transition E_j -a-> E_{j+1} with a ∈ K, and Φ_{j+1} is Ψ
• if Φ_j = σZ. Ψ, then Φ_{j+1} is Z and E_{j+1} is E_j
• if Φ_j = Z and the subformula of Φ₀ identified by Z is σZ. Ψ, then Φ_{j+1} is Ψ and E_{j+1} is E_j
where each formula Φ_i is a subformula of Φ₀, that is Φ_i ∈ Sub(Φ₀), and each process E_i belongs to P(E₀). If part of a play is (E₀, Φ₀) ... (E_j, Φ_j), then the next move, and which player makes it, depends on the main connective of the formula Φ_j. All the possibilities are presented in Figure 6.1. Players do not necessarily take turns, as in the bisimulation game¹. There is a duality between the rules for ∧ and ∨, and [K] and ⟨K⟩. Player R makes the move when the main connective is ∧ or [K], whereas player V makes a similar move when it is ∨ or ⟨K⟩. The rules for fixed point formulas use the fact that the starting formula Φ₀ is normal, so each fixed point subformula is uniquely identified by its bound variable. Each time the current position is (E, σZ. Ψ), the next position is (E, Z), and each time it is (F, Z) the next position is (F, Ψ), meaning the fixed point subformula Z identifies is, in effect, unfolded once. Because there are no choices, neither player is responsible for these moves.
¹It would be straightforward, but somewhat artificial, to make players take turns, by adding extra null moves.
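The move rules can be written down directly. Below is a sketch under my own tuple encoding of formulas (not the book's): `fixmap` sends each bound variable of the normal formula Φ₀ to its fixed-point subformula, and `moves` returns the responsible player, if any, together with the possible next positions. The clock game used for illustration is my reconstruction of the following example.

```python
def moves(pos, trans, fixmap):
    E, phi = pos
    tag = phi[0]
    if tag == "and":                       # R chooses a conjunct
        return "R", [(E, phi[1]), (E, phi[2])]
    if tag == "or":                        # V chooses a disjunct
        return "V", [(E, phi[1]), (E, phi[2])]
    if tag == "box":                       # R picks a K-transition
        return "R", [(F, phi[2]) for a, F in trans[E] if a in phi[1]]
    if tag == "dia":                       # V picks a K-transition
        return "V", [(F, phi[2]) for a, F in trans[E] if a in phi[1]]
    if tag in ("mu", "nu"):                # forced: move to the variable
        return None, [(E, ("var", phi[1]))]
    if tag == "var":                       # forced: unfold once
        return None, [(E, fixmap[phi[1]][2])]
    return None, []                        # tt or ff: the play is over

# Assumed clock: Cl = tick.Cl, and the game G(Cl, nuZ.[tock]ff /\ <tick>Z).
TRANS = {"Cl": [("tick", "Cl")]}
PHI0 = ("nu", "Z", ("and",
        ("box", {"tock"}, ("ff",)),
        ("dia", {"tick"}, ("var", "Z"))))
FIX = {"Z": PHI0}
```

At position (Cl, [tock]ff) the refuter has no move, so the finite-play winning condition for V applies.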
(Cl, νZ. [tock]ff ∧ ⟨tick⟩Z)
|
(Cl, Z)
|
(Cl, [tock]ff ∧ ⟨tick⟩Z)   R
/                \
(Cl, [tock]ff)    (Cl, ⟨tick⟩Z)   V
|
(Cl, Z)
The arrows indicate which positions can lead to a subsequent position, and the label at a position indicates which player is responsible for the move. The position (Cl, Z) is repeated. Hence, this game has only finitely many different positions. The "game graph" is the graphical presentation of the positions as above. A play of the game can be viewed as the sequence of positions that a token passes through as the players move it around the game graph. For instance, the single infinite play repeatedly cycles through (Cl, Z).
Example 2  Consider the counter Ct₀ of Figure 1.4, which is a simple infinite state system, and the game G(Ct₀, μZ. [up]Z). The positions are very straightforward.
(Ct₀, μZ. [up]Z)
|
(Ct₀, Z)
|
(Ct₀, [up]Z)   R
|
(Ct₁, Z)
|
⋮
Player R wins
1. The play is (E₀, Φ₀) ... (E_n, Φ_n) and
• Φ_n = ff, or
• Φ_n = Z and Z is free in Φ₀ and E_n ∉ V(Z), or
• Φ_n = ⟨K⟩Ψ and {F : E_n -a-> F and a ∈ K} = ∅
2. The play (E₀, Φ₀) ... (E_n, Φ_n) ... has infinite length and the unique variable X, which occurs infinitely often and which subsumes all other variables occurring infinitely often, identifies a least fixed point subformula μX. Ψ
Player V wins
1. The play is (E₀, Φ₀) ... (E_n, Φ_n) and
• Φ_n = tt, or
• Φ_n = Z and Z is free in Φ₀ and E_n ∈ V(Z), or
• Φ_n = [K]Ψ and {F : E_n -a-> F and a ∈ K} = ∅
2. The play (E₀, Φ₀) ... (E_n, Φ_n) ... has infinite length and the unique variable X, which occurs infinitely often and which subsumes all other variables occurring infinitely often, identifies a greatest fixed point subformula νX. Ψ
often. This X_j is the outermost fixed point subformula, and it decides which player wins the play. Proposition 1 makes this observation precise.
Proposition 1  If (E₀, Φ₀) ... (E_n, Φ_n) ... is an infinite length play of the game G_V(E₀, Φ₀), then there is a unique variable X that
1. occurs infinitely often, that is, for infinitely many i, X = Φ_i, and
2. if Y also occurs infinitely often, then X subsumes Y.
Proof. Let σX₁. Ψ₁, ..., σX_n. Ψ_n be all the fixed point subformulas in Sub(Φ₀) in decreasing order of size³. Therefore if i < j, then X_j cannot subsume X_i. Consider the next position (E_{j+1}, Φ_{j+1}) after (E_j, Φ_j) in a play: if Φ_j is not a variable, then |Φ_{j+1}| < |Φ_j|. Because each subformula has finite size, an infinite length play must proceed infinitely often through variables belonging to {X₁, ..., X_n}. Hence, there is at least one variable occurring infinitely often. If a subpart of the play has the form
and X_i ≠ X_j, then either X_i subsumes X_j or X_j subsumes X_i, but not both; see Proposition 1 of Section 5.2. Consequently, by transitivity of subsumption, there is exactly one variable X_i occurring infinitely often and subsuming any other X_j, which also occurs infinitely often. □
A strategy for a player is a family of rules telling the player how to move. It suffices (as with the bisimulation game of Chapter 3) to consider history-free strategies, whose rules do not depend upon previous positions in the play. For player R, rules have the following form.
• at position (E, Φ₁ ∧ Φ₂) choose (E, Φ_i) where i = 1 or i = 2
• at position (E, [K]Φ) choose (F, Φ) where E -a-> F and a ∈ K
For the verifier, rules have a similar form.
• at position (E, Φ₁ ∨ Φ₂) choose (E, Φ_i) where i = 1 or i = 2
• at position (E, ⟨K⟩Φ) choose (F, Φ) where E -a-> F and a ∈ K
A player uses the strategy π in a play provided all her moves in the play obey the rules in π. The strategy π is winning if the player wins every play in which she uses π. The following result provides an alternative account of the satisfaction relation between processes and formulas.
Theorem 1  1. E ⊨_V Φ iff player V has a history-free winning strategy for G_V(E, Φ).
2. E ⊭_V Φ iff player R has a history-free winning strategy for G_V(E, Φ).
The proof of Theorem 1 is deferred until the next section.
³The size of a formula Φ, written |Φ|, is the number of connectives within it.
140 6. Verifying Temporal Properties
[Figure: the game graph for G(0, μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)), drawn over the sixteen positions keyed below; each move is labelled R or V according to the responsible player.]
1: (0, μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))        9: (0′′, tt)
2: (0, Y)                                  10: (0′, Y)
3: (0, νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))           11: (0′, νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z))
4: (0, Z)                                  12: (0′, Z)
5: (0, [a]((⟨b⟩tt ∨ Y) ∧ Z))               13: (0′, [a]((⟨b⟩tt ∨ Y) ∧ Z))
6: (0′, (⟨b⟩tt ∨ Y) ∧ Z)                   14: (0, (⟨b⟩tt ∨ Y) ∧ Z)
7: (0′, ⟨b⟩tt ∨ Y)                         15: (0, ⟨b⟩tt ∨ Y)
8: (0′, ⟨b⟩tt)                             16: (0, ⟨b⟩tt)
1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9, with R responsible for the moves from 5 and 6, V for those from 7 and 8, and a back edge returning to an earlier formula.
1: μY. νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)        5: [a]((⟨b⟩tt ∨ Y) ∧ Z)
2: Y                                   6: (⟨b⟩tt ∨ Y) ∧ Z
3: νZ. [a]((⟨b⟩tt ∨ Y) ∧ Z)            7: ⟨b⟩tt ∨ Y
4: Z                                   8: ⟨b⟩tt
9: tt
decides which player makes the next move. This is a more compact representation of a game because it requires only the sum of the size of the formula and the number of processes in a transition graph, whereas the representation using explicit positions may require the product of the two.
Example 3  The following family of processes is from example 2 of Section 5.7.
B₀ def= a. Σ{B_i : i ≥ 0} + b. Σ{B_i : i ≥ 0}
B_{i+1} def= b.B_i   (i ≥ 0)
For each j, the result is an infinite play passing through formula 2 infinitely often.
The game theoretic account of having a property holds for arbitrary processes, whether they be finite or infinite state. Generally, property checking is undecidable: for instance, the halting problem is equivalent to whether TM_n ⊨ νZ. ⟨-⟩tt ∧ [-]Z, where TM_n is the coding of the nth Turing machine. However, for some classes of infinite state processes property checking is decidable; see for example Esparza [21].
The general problem of property checking is closed under complement. Recall the definition of the complement Φ^c of a formula Φ. For any valuation V, assume that V^c is its "complement" (with respect to a fixed E), the valuation such that for any Z, the set V^c(Z) = P(E) − V(Z). In the following result, let player P be either the refuter R or the verifier V throughout the Proposition.
Proposition 2  Player P does not have a history-free winning strategy for G_V(E, Φ) iff player P has a history-free winning strategy for G_{V^c}(E, Φ^c).
Proof. G_{V^c}(E, Φ^c) is the dual of G_V(E, Φ), where the players reverse their roles, and the blatantly true and false configurations are interchanged. Therefore, if player P does not win G_V(E, Φ), then the opponent player O wins this game. But then P wins the dual game G_{V^c}(E, Φ^c). The converse holds by similar reasoning. □
Theorem 1  1. E ⊨_V Φ iff player V has a history-free winning strategy for G_V(E, Φ).
2. E ⊭_V Φ iff player R has a history-free winning strategy for G_V(E, Φ).
Proof. Assume that E₀ ⊨_V Φ₀. We show that player V has a history-free winning strategy for G_V(E₀, Φ₀). The proof idea is straightforward, since we show that player V is always able to preserve "truth" of game configurations by making her choices wisely, so winning any play. The subtlety is the construction of the history-free winning strategy, and it is here that we use approximants to make optimal choices. We develop first the idea of "least" approximants.
Let σ₁Z₁. Ψ₁, ..., σ_nZ_n. Ψ_n be all the fixed point subformulas in Sub(Φ₀) in decreasing order of size (as in the proof of Proposition 1 of the previous section). Therefore, if Z_i subsumes Z_j, this means that i ≤ j: it is not possible for Z_j to subsume Z_i when i < j. A position in the game G_V(E₀, Φ₀) has the form (F, Ψ), where Ψ may contain free variables in {Z₁, ..., Z_n}. But really these variables are bound. Therefore, we now consider the correct semantics for understanding the formulas in a play.
Let P be the set of processes P(E₀). We define valuations V₀, ..., V_n iteratively as follows
V₀ = V
V_{i+1} = V_i[𝓔_{i+1}/Z_{i+1}],
where 𝓔_{i+1} = {E ∈ P : E ⊨_{V_i} σ_{i+1}Z_{i+1}. Ψ_{i+1}}. The valuation V_n captures the meaning of all the bound variables Z_i. We say that a game position (F, Ψ) is true if F ⊨_{V_n} Ψ. Clearly, the starting position is true, because E₀ ⊨_{V_n} Φ₀ iff E₀ ⊨_V Φ₀, and by assumption E₀ ⊨_V Φ₀.
6.3. Correctness of games 145
Given any true position, we define a refined valuation that identifies the smallest least fixed point approximants making the configuration true. To define this, let μY₁. Ψ′₁, ..., μY_k. Ψ′_k be all the least fixed point subformulas in Sub(Φ₀) in decreasing order of size, meaning each Y_i is some Z_j. We define a signature as a sequence of ordinals of length k, α₁ ... α_k. We assume the lexicographic ordering on signatures: α₁ ... α_k < β₁ ... β_k if there is an i ≤ k such that α_j = β_j for all 1 ≤ j < i and α_i < β_i. Given a signature s = α₁ ... α_k, we define its associated valuation V^s_n as follows. Again the definition is iterative, since we define valuations V^s_0, ..., V^s_n
V^s_0 = V
V^s_{i+1} = V^s_i[𝓔_{i+1}/Z_{i+1}],
where now the set 𝓔_{i+1} depends on the kind of fixed point σ_{i+1}Z_{i+1}.
We leave as an exercise that, if F ⊨_{V_n} Ψ, then there is indeed a smallest signature s such that F ⊨_{V^s_n} Ψ. It is this result that generalises the "least approximant" result of Section 5.5. Given a true configuration (F, Ψ), we therefore define its signature to be the least s such that F ⊨_{V^s_n} Ψ. Signatures were introduced by Streett and Emerson in [56].
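For finite-state processes signatures are finite tuples of natural numbers, and the lexicographic ordering just defined coincides with Python's built-in tuple comparison. A minimal sketch, purely for concreteness:

```python
def sig_lt(s, t):
    """s < t iff at the first index where they differ, s is smaller."""
    for a, b in zip(s, t):
        if a != b:
            return a < b
    return False      # equal signatures are not strictly smaller

# Agrees with Python's built-in comparison on equal-length tuples:
assert sig_lt((0, 5), (1, 0)) == ((0, 5) < (1, 0))
```

The well-foundedness of this ordering on ordinal sequences is what makes the strictly decreasing chains in the proof below impossible.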
Next, we define the history-free winning strategy for player V. Consider any true position which is a player V position. If the position is (F, Ψ₁ ∨ Ψ₂), then consider its signature s. It follows that F ⊨_{V^s_n} Ψ₁ ∨ Ψ₂. Therefore, F ⊨_{V^s_n} Ψ_i for i = 1 or i = 2. Choose one of these Ψ_i that holds, and add the rule "at (F, Ψ₁ ∨ Ψ₂) choose (F, Ψ_i)." The construction is similar for a true position of the form (F, ⟨K⟩Ψ). Let s be its signature, and therefore F ⊨_{V^s_n} ⟨K⟩Ψ. Hence, there is a transition F -a-> F′ with a ∈ K and F′ ⊨_{V^s_n} Ψ. Therefore, the rule in player V's strategy is "at position (F, ⟨K⟩Ψ) choose (F′, Ψ)." The result is a history-free strategy. The rest of the proof consists of showing that it is indeed a winning strategy.
But suppose not. Assume that player R can defeat player V. The starting position is true, and as play proceeds we show that the play cannot reach a false position because player V uses her strategy. Moreover, we shall carry around the signature of each position. The initial position is (E₀, Φ₀) and E₀ ⊨_{V^{s₀}_n} Φ₀. Assume that (E_m, Φ_m) is the current position in the play and E_m ⊨_{V^{s_m}_n} Φ_m. If the position is a final position, then clearly player R is not the winner. Therefore, either player V wins, or the play is not yet complete. In the latter case, we show how the play is extended to (E_{m+1}, Φ_{m+1}) by case analysis on Φ_m.
If Φ_m = Ψ₁ ∧ Ψ₂, then player R chooses Ψ_i, i ∈ {1, 2}, and the next position (E_{m+1}, Φ_{m+1}) is true, where E_{m+1} = E_m and Φ_{m+1} = Ψ_i, and E_{m+1} ⊨_{V^{s_{m+1}}_n} Φ_{m+1}. Clearly, the signature s_{m+1} ≤ s_m. A similar argument applies to the other player R move, when Φ_m = [K]Ψ, and again s_{m+1} ≤ s_m.
If Φ_m = Ψ₁ ∨ Ψ₂, then player V uses the strategy to choose the next position. Clearly, this preserves truth, because s_m must be the same signature as when defining the strategy. A similar argument applies to the other case of a player V move, when Φ_m = ⟨K⟩Ψ. Again, in both cases s_{m+1} ≤ s_m.
If Φ_m = σ_iZ_i. Ψ_i, then the next game configuration is the true position (E_{m+1}, Φ_{m+1}), where E_{m+1} = E_m and Φ_{m+1} = Z_i. Notice the signature may increase if σ_i = μ. If Φ_m = Z_i and σ_i = ν, then the next position (E_{m+1}, Φ_{m+1}) is true, where E_{m+1} = E_m and Φ_{m+1} = Ψ_i and s_{m+1} = s_m. If Φ_m = Z_i and σ_i = μ, then the next position (E_{m+1}, Φ_{m+1}) is true, where E_{m+1} = E_m and Φ_{m+1} = Ψ_i and s_{m+1} < s_m because, in effect, the fixed point has been unfolded. In this case there is a genuine decrease in signature.
The proof is completed by showing that player V must win any such play. As we have seen, this is the case when a play has finite length. Consider therefore an infinite length play. By Proposition 1 of the previous section, there is a unique Z_i occurring infinitely often which subsumes any other Z_j also occurring infinitely often. Consider the game play (E_k, Φ_k) ... from the point k such that every occurrence of a variable Z_j in the play is subsumed by Z_i, and let k_1, k_2, ... be the positions in this suffix play where Z_i occurs. Z_i cannot identify a least fixed point subformula, for by the construction above this would require a strictly decreasing sequence of signatures s_{k_1} > s_{k_2} > ..., which is impossible. Therefore, Z_i identifies a greatest fixed point subformula.
Part 2 of the proof is similar. We need to generalise the other half of the least approximant result, Proposition 2 of Section 5.5. The proof is left as an exercise for the reader. □
Exercises 1. Prove the smallest signature result used in Theorem 1: if F ⊨_V Ψ, then there is a smallest signature s such that F ⊨_V^s Ψ.
2. For each of the following, construct its signature.
a. Cl ⊨ νZ. ⟨tick⟩⟨tock⟩Z
b. Cl ⊨ μY. ⟨tock⟩tt ∨ [−]Y
c. T(19) ⊨ μY. ⟨−⟩tt ∧ [−out(1)]Y
d. Sem | Sem | Sem ⊨ νZ. [get](μY. ⟨−⟩tt ∧ [−put]Y) ∧ [−]Z
Player R wins
1. The play is (E_0, Φ_0) ... (E_n, Φ_n) and
• Φ_n = ¬tt, or
• Φ_n = ¬[K]Ψ and {F : E_n -a-> F and a ∈ K} = ∅
2. The play (E_0, Φ_0) ... (E_n, Φ_n) ... has infinite length and there is an until formula A(Ψ_1 U Ψ_2) or E(Ψ_1 U Ψ_2) occurring infinitely often in the play

Player V wins
1. The play is (E_0, Φ_0) ... (E_n, Φ_n) and
• Φ_n = tt, or
• Φ_n = [K]Ψ and {F : E_n -a-> F and a ∈ K} = ∅
2. The play (E_0, Φ_0) ... (E_n, Φ_n) ... has infinite length and there is a negated until formula ¬A(Ψ_1 U Ψ_2) or ¬E(Ψ_1 U Ψ_2) occurring infinitely often in the play
The refuter chooses the next position when the formula has the form Ψ_1 ∧ Ψ_2 or [K]Ψ, and the verifier chooses when it has the form ¬(Ψ_1 ∧ Ψ_2) or ¬[K]Ψ. Because there are no choices in the remaining rules, neither player is responsible for them. The first of these reduces a double negation. The remaining rules are for until formulas and their negations, and are determined by their fixed point definitions.
Figure 6.8 captures when a player is said to win a play of a game. Player R wins a play if a blatantly false position is reached, and V wins if an obviously true position is reached. The other condition identifies which of the players wins an infinite length play. This is much easier to decide than for full μM. For any infinite length play of a CTL game, there is only one until formula or negation of an until formula occurring infinitely often. It is this formula that decides who wins. If it is an until formula, and therefore a least fixed point formula, then R wins the play; if it is the negation of an until formula, and therefore a maximal fixed point formula, then V wins the play.
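These winning conditions are simple enough to state directly in code. The sketch below is a minimal illustration, assuming our own tuple encoding of formulas and a transition relation given as (source, action, target) triples; neither is the book's notation.

```python
# Clause 1 of Figure 6.8: final positions. Formulas are nested tuples,
# e.g. ("tt",), ("neg", phi), ("box", K, phi); K is a set of actions.

def successors(transitions, E, K):
    """Processes reachable from E by an action in K."""
    return {F for (E0, a, F) in transitions if E0 == E and a in K}

def winner_of_final(transitions, E, phi):
    """Who wins a finite play ending at position (E, phi)? None if not final."""
    if phi == ("tt",):
        return "V"                                  # obviously true position
    if phi == ("neg", ("tt",)):
        return "R"                                  # blatantly false position
    if phi[0] == "box" and not successors(transitions, E, phi[1]):
        return "V"                                  # [K]Psi holds vacuously
    if (phi[0] == "neg" and phi[1][0] == "box"
            and not successors(transitions, E, phi[1][1])):
        return "R"                                  # its negation is false
    return None

def winner_of_infinite(recurring_formula):
    """Clause 2: an until formula favours R, a negated until favours V."""
    return "V" if recurring_formula[0] == "neg" else "R"

trans = {("E", "a", "F")}
print(winner_of_final(trans, "E", ("box", {"b"}, ("tt",))))   # V: no b-moves
```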
Again, a history-free strategy for a player is a family of rules telling the player how to move, independent of previous positions. For the refuter, rules have the following form.
• at position (E, Φ_1 ∧ Φ_2) choose (E, Φ_i) where i = 1 or i = 2
• at position (E, [K]Φ) choose (F, Φ) where E -a-> F and a ∈ K
For the verifier, rules have a similar form.
• at position (E, ¬(Φ_1 ∧ Φ_2)) choose (E, ¬Φ_i) where i = 1 or i = 2
• at position (E, ¬[K]Φ) choose (F, ¬Φ) where E -a-> F and a ∈ K
A player uses the strategy π in a play if all her moves in the play obey the rules in it, and π is winning if the player wins every play in which she uses π.
6.4. CTL games 149
[Figure 6.9. The game graph for the game of Example 1: positions 1 to 11, each labelled with the player, V or R, who moves there.]
Example 1 Just for illustration, we show that the refuter has a winning strategy for the game G(Cl, A(⟨tick⟩tt U ⟨tock⟩tt)). The game graph is presented in Figure 6.9. Position 4 is a winning position for the refuter, and 8 is a winning position for the verifier. The refuter's winning strategy is to choose 9 at 6, and 11 at 9. Therefore, the only infinite play cycles through the until formula, and is thereby won by player R.
Proposition 1 1. If Φ ∈ CTL, then E ⊨ Φ iff player V has a history-free winning strategy for G(E, Φ).
2. If Φ ∈ CTL, then E ⊭ Φ iff player R has a history-free winning strategy for G(E, Φ).
The characteristic feature of a play of a CTL game, that for any infinite play there is just one fixed point variable that occurs infinitely often, is inherited by any play of a game G_V(E, Φ) when Φ ∈ μM1. A more general fragment of μM is μMA, the alternation free formulas of modal mu-calculus. These fragments are defined in Section 5.6. In any infinite play of G_V(E, Φ) when Φ ∈ μMA, all fixed point variables occurring infinitely often are of the same kind; that is, they are either all greatest or all least fixed point variables.
Proposition 2 1. If Φ ∈ μM1, then in any infinite play of the game G_V(E, Φ) there is just one fixed point variable X occurring infinitely often.
2. If Φ ∈ μMA, then in any infinite play of the game G_V(E, Φ) the variables occurring infinitely often are either all greatest fixed point variables, or are all least fixed point variables.
Proof. For both cases it is clear that, in any infinite length play, there is at least one fixed point variable that occurs infinitely often. If there is just one fixed point variable that occurs infinitely often, then the results follow. Otherwise, assume an infinite play in which the distinct variables X_1, ..., X_n all occur infinitely often. Assume that they identify the fixed point subformulas σ_1 X_1. Ψ_1, ..., σ_n X_n. Ψ_n, in decreasing order of size. Consider now the first case, when Φ ∈ μM1. Because it is possible to proceed in a number of moves from some position (F_1, X_1) to (F_2, X_2), and then to (F_3, X_3), and so on to (F_n, X_n), and then back to (F′_1, X_1), it follows that X_1 must occur free in at least one of the subformulas σ_2 X_2. Ψ_2, ..., σ_n X_n. Ψ_n, which contradicts that the starting formula Φ ∈ μM1. For the second case, assume that X_i is the first variable that identifies a different kind of fixed point to that of X_1. Because it is possible to proceed in a number of moves from (F_i, X_i) to (F′_1, X_1), at least one of the variables X_j, 1 ≤ j < i, must occur free in σ_i X_i. Ψ_i, which contradicts that the starting formula Φ ∈ μMA. □
In this section, model checking is abstracted into a simpler graph game (which is a slight variant of what is called the parity game; see Emerson and Jutla [19]). A parity game is a directed graph G = (N, →, L) whose set of vertices N is a finite subset of ℕ and whose binary edge relation → relates vertices. As usual, we write i → j instead of (i, j) ∈ →. The vertices are the positions of the game. The third component L labels each vertex with R or with V: L(i) tells us which player is responsible for moving from vertex i. A play always has infinite length because we impose the condition that each vertex i has at least one edge i → j. A parity game is a contest between players R and V. It begins with a token on the least vertex j (with respect to < on ℕ). When the token is on vertex i and L(i) = P, player P moves it along one of the outgoing edges of i. A play therefore consists of an infinite length path through the graph along which the token passes.
The winner of a play is determined by the label of the least vertex i that occurs infinitely often in the play. If L(i) = R, then player R wins; otherwise L(i) = V, and player V wins.

We now show how to transform the property checking game for finite state processes into an equivalent parity game. Let G_V(E, Φ) be the property checking game for the finite state process E and the normal formula Φ. The transformation into the parity game G_V[E, Φ] proceeds as follows.
1. Let E_1, ..., E_m be a list of all processes in P(E) with E = E_1.
2. Let Z_1, ..., Z_k be a list of all bound variables in Φ. Therefore, for each Z_i there is the associated subformula σ_i Z_i. Ψ_i.
3. Let Φ_1, ..., Φ_l be a list of all formulas in Sub(Φ) − {Z_1, ..., Z_k} in decreasing order of size. This means that Φ = Φ_1. Extend the list by inserting each Z_i directly after the fixed point subformula associated with it: Φ_1, ..., σ_i Z_i. Ψ_i, Z_i, ..., Φ_l. The result is a list of all formulas Φ_1, ..., Φ_n in Sub(Φ) in decreasing order of size, except that the bound variable Z_i is "bigger" than Ψ_i but "smaller" than σ_i Z_i. Ψ_i.
4. The possible positions of the property checking game G_V(E, Φ) are now listed in the following order. Positions earlier in the list contain "larger" formulas.
(E_1, Φ_1), ..., (E_m, Φ_1), (E_1, Φ_2), ..., (E_m, Φ_2), ..., (E_1, Φ_n), ..., (E_m, Φ_n)
The vertex set N of the parity game G_V[E, Φ] is {1, ..., m × n}. Each vertex i = m × (k − 1) + j represents the position (E_j, Φ_k).
5. Next, we define the labelling L(i) of vertex i and its edges by case analysis on the position (F, Ψ) that i represents.
• If Ψ is Z, and Z is free in the starting formula Φ and F ∈ V(Z), then L(i) = V and there is the edge i → i. Instead, if F ∉ V(Z), then L(i) = R and there is the edge i → i
• If Ψ is tt, then L(i) = V and there is the edge i → i
• If Ψ is ff, then L(i) = R and there is the edge i → i
• If Ψ is Ψ_1 ∧ Ψ_2, then L(i) = R and there are edges i → j_1 and i → j_2, where j_1 represents (F, Ψ_1) and j_2 represents (F, Ψ_2)
6.5. Parity games 153
[Figure legend: □ player V vertices; ○ player R vertices.]
instead, if μZ_1. Ψ_1 is in Sub(Φ), then player R wins. This agrees with the property checking game.
The notions of a (history-free) strategy and of a winning strategy for the parity game are very similar to those for the property checking game. For player P, a history-free strategy is a set of rules of the form "at i choose j", where L(i) = P and there is an edge i → j. A strategy is winning for player P if she wins every play in which that strategy is employed. The following proposition is deducible from the analysis and observations above.
Proposition 1 Player P has a winning strategy for G_V(E, Φ) iff player P has a winning strategy for G_V[E, Φ].
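Because both players may restrict themselves to history-free strategies, a play against fixed strategies is eventually periodic, a "lasso". For such plays the winning condition is immediate to compute; the sketch below uses our own encoding of a play as a prefix followed by a repeated cycle.

```python
# The least vertex on the cycle is exactly the least vertex occurring
# infinitely often in the play, so its label decides the winner.

def winner_of_lasso(label, prefix, cycle):
    """label: dict vertex -> 'R' or 'V'; the play is prefix, then cycle forever."""
    return label[min(cycle)]

label = {1: "V", 2: "R", 3: "V"}
print(winner_of_lasso(label, [1], [2, 3]))   # R: vertex 2 recurs and is least
```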
Example 1 The game G[D, νZ. ⟨b⟩tt ∧ ⟨−⟩Z], where D is depicted in Figure 6.3, is given in Figure 6.10. Vertices (such as 2 and 3) not reachable from vertex 1 in the game are excluded. The representations of positions from the model checking game are also presented.
Exercises 1. Define the parity games G[E, Φ] when E and Φ are the following.
a. D and Ψ from Section 6.2
b. Ven and νX. ⟨−⟩X
c. Crossing and μX. [−]X
2. Prove Proposition 1.
3. Prove that there is a converse to Proposition 1: any parity game can be transformed into a property checking game whose size is polynomially bounded by the parity game. (See Mader for an explicit construction [39].)
4. Define infinite state parity games so that Proposition 1 also holds for infinite state processes E. (Hint: use a finite set of indexed colours.)
5. Boolean fixed point logic is defined as follows.
Φ ::= Z | tt | ff | Φ_1 ∧ Φ_2 | Φ_1 ∨ Φ_2 | νZ. Φ | μZ. Φ
Formulas are interpreted over the two element set consisting of ∅ (false) and {1} (true). A closed formula is therefore either true or false. For instance, μZ. Z is false because ∅ is the least solution to the equation Z = Z.
a. Show that a parity game can be directly translated into a closed formula of boolean fixed point logic in such a way that player V wins the game iff the translated formula is true.
b. Show the converse, that any closed boolean formula can be translated into a game in such a way that the formula is true iff player V wins the game.
6. A simple stochastic game, SSG, is a graph game whose vertices are labelled R, V or A (average), and where there are two special vertices R-sink and V-sink (which have no outgoing edges). Each R and V vertex (other than the sinks) has at least one outgoing edge, and each A vertex has exactly two outgoing edges. At an average vertex during a game play, a coin is tossed to determine which of the two edges is traversed, each having probability 1/2. A game play ends when a sink vertex is reached: player V wins if it is the V-sink, and player R otherwise (which includes the eventuality of the game continuing forever). The decision question is whether the probability that player V wins is greater than 1/2. It is not known whether this problem can be solved in polynomial time; see Condon [15].
Show how to (polynomially) transform a parity game into an SSG in such a way that the same player wins both games. (Hint: add the two sink vertices, and an average vertex i_1 for each vertex i for which there is an edge j → i with j ≥ i. Each such edge j → i when j ≥ i is removed, and the edge j → i_1 is added. Two new edges are added for each A vertex i_1: first an edge to i, and second an edge to R-sink if i is labelled R, or to V-sink if it is labelled V. The difficult question is: what probabilities should these edges have?)
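For exercise 5, a direct evaluator for boolean fixed point logic is a useful sanity check. The sketch below is our own: it encodes formulas as nested tuples and computes each fixed point by iteration, which over a two element value space stabilises after at most two rounds.

```python
# A sketch of an evaluator for the boolean fixed point logic of exercise 5.
# Formulas are nested tuples (a hypothetical encoding): ("tt",), ("ff",),
# ("var", "Z"), ("and", p, q), ("or", p, q), ("nu", "Z", p), ("mu", "Z", p).

def evaluate(phi, env=None):
    """Return True or False; env gives the values of free variables."""
    env = env or {}
    tag = phi[0]
    if tag == "tt":
        return True
    if tag == "ff":
        return False
    if tag == "var":
        return env[phi[1]]
    if tag == "and":
        return evaluate(phi[1], env) and evaluate(phi[2], env)
    if tag == "or":
        return evaluate(phi[1], env) or evaluate(phi[2], env)
    # fixed points: nu iterates down from true, mu iterates up from false;
    # by monotonicity this stabilises after at most two rounds
    z, body = phi[1], phi[2]
    current = (tag == "nu")
    while True:
        nxt = evaluate(body, {**env, z: current})
        if nxt == current:
            return current
        current = nxt

# mu Z. Z is false: the empty set is the least solution of Z = Z
print(evaluate(("mu", "Z", ("var", "Z"))))   # False
print(evaluate(("nu", "Z", ("var", "Z"))))   # True
```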
Force_P^0(X) = X for P ∈ {R, V}
Force_R^{i+1}(X) = Force_R^i(X)
  ∪ {j : L(j) = R and ∃k ∈ Force_R^i(X). j → k}
  ∪ {j : L(j) = V and ∀k. if j → k then k ∈ Force_R^i(X)}
Force_V^{i+1}(X) = Force_V^i(X)
  ∪ {j : L(j) = V and ∃k ∈ Force_V^i(X). j → k}
  ∪ {j : L(j) = R and ∀k. if j → k then k ∈ Force_V^i(X)}
If vertex j ∈ Force_P(X) and the current position is j, then player P can force play from j into X irrespective of whatever moves her opponent makes. Vertex j itself need not belong to player P. The rank of such a vertex j is the least index i such that j ∈ Force_P^i(X). The rank is an upper bound on the total number of moves it takes for player P to force play into X. There is an associated strategy for player P.
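The iteration above translates directly into code. The following sketch computes Force_P(X) together with ranks; the four vertex game at the end is a made-up example, not one of the book's figures.

```python
# Force_P(X) as a round-by-round fixed point; rank[j] is the round at which
# vertex j entered the set. graph: vertex -> set of successors;
# label: vertex -> 'R' or 'V'.

def force(graph, label, player, X):
    rank = {j: 0 for j in X}
    i = 0
    while True:
        i += 1
        new = set()
        for j, succs in graph.items():
            if j in rank:
                continue
            if label[j] == player:
                ok = any(k in rank for k in succs)   # P picks an edge into the set
            else:
                ok = bool(succs) and all(k in rank for k in succs)  # no escape
            if ok:
                new.add(j)
        if not new:
            return set(rank), rank
        for j in new:
            rank[j] = i

graph = {1: {2}, 2: {3}, 3: {3}, 4: {1, 3}}
label = {1: "V", 2: "R", 3: "V", 4: "R"}
fs, rank = force(graph, label, "V", {3})
print(fs)         # {1, 2, 3, 4}
print(rank[1])    # 2: V needs two rounds to pull vertex 1 into the set
```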
6.6. Deciding parity games 157
So, Force_V({12, 15}) = {6, 9, 11, 12, 15}, and 11 has rank 3. The different set Force_R({12, 15}) = {6, 9, 12, 15}.
The following result shows that removing a force set from a game leaves a subgame.
Proposition 1 If G is a game and X is a subset of vertices, then the subgraph G − Force_P(X) is a subgame.
We wish to provide a decision procedure for parity games that not only computes the winner but also a winning strategy. Such a procedure can be extracted from the proof of the following theorem.
Theorem 1 For any parity game G and vertex i, one of the players has a history-free winning strategy for G(i).
[Diagram from the proof of Theorem 1, showing the decomposition of the game G into the subgame G − X and the winning sets W′_R and W′_V, omitted.]
3. Else solve the subgame G − X, and let W′_R and W′_V be the winning vertices for R and V in the subgame. Let Y = {j : k → j}.
a. If Y ∩ (W′_{L(k)} ∪ X) ≠ ∅, then return W_{L(k)} = W′_{L(k)} ∪ X and also W_O = W′_O
b. Else let Z = Force_O(W′_O). Solve the subgame G − Z, and let W″_R and W″_V be the winning vertices for R and V in the subgame. Return W_O = Z ∪ W″_O and W_{L(k)} = W″_{L(k)}
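Such a recursive procedure can be written out in full. The sketch below is a standard attractor-based solver in the same spirit (a variant of Zielonka's algorithm, with the force set computation playing the role of the attractor); the dictionary encoding of games is our own, and only the winning sets are computed, not the strategies.

```python
# Winner of an infinite play: the label of the least vertex seen infinitely often.

def attractor(graph, label, player, target):
    """Force set of `target` for `player` within `graph`."""
    result = set(target)
    changed = True
    while changed:
        changed = False
        for j, succs in graph.items():
            if j in result:
                continue
            if ((label[j] == player and succs & result)
                    or (label[j] != player and succs and succs <= result)):
                result.add(j)
                changed = True
    return result

def subgame(graph, removed):
    return {j: succs - removed for j, succs in graph.items() if j not in removed}

def solve(graph, label):
    """Return {'R': ..., 'V': ...}: which player wins from each vertex."""
    if not graph:
        return {"R": set(), "V": set()}
    k = min(graph)                        # the least vertex is the significant one
    p = label[k]
    o = "V" if p == "R" else "R"
    a = attractor(graph, label, p, {k})
    w1 = solve(subgame(graph, a), label)
    if not w1[o]:
        return {p: set(graph), o: set()}  # p wins everywhere
    b = attractor(graph, label, o, w1[o])
    w2 = solve(subgame(graph, b), label)
    return {p: w2[p], o: w2[o] | b}

# R can loop at vertex 2 forever, so the least recurring vertex is 2: R wins both.
print(solve({1: {2}, 2: {1, 2}}, {1: "V", 2: "R"}))
```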
The algorithm in Theorem 1 is exponential in the size of N because it may twice call the solve procedure on games of smaller size, at stage 3 and then again at stage 3b. It is an open question whether there is a polynomial time algorithm for this problem. We shall now look at subcases where model checking can be done more quickly.
The first case is for alternation free formulas, the sublogic μMA that contains μM1 and also CTL. Recall that if Φ is a CTL formula, then the game G(E, Φ) has the property that, in any infinite length run, there is just one until formula, or the negation of an until formula, occurring infinitely often. More generally (see Proposition 2 of Section 6.4), if Φ belongs to μMA, then the game G_V(E, Φ) has the property that, in any infinite length run, all the variables that occur infinitely often are of the same kind. This means that the resulting parity game G_V[E, Φ] has a pleasant structure. Its vertices can be partitioned into a family of subsets N_1, ..., N_n such that the following properties hold.
1. N_i ∩ N_j = ∅ when i ≠ j
2. if any play is at a vertex in N_j, then the play cannot return to a vertex in N_i when i < j
3. for each i, all plays that remain in N_i forever are always won by the same player
For such a parity game there is a straightforward iterative linear time algorithm that finds the winner, as follows. Start with the subgame whose vertices are N_n. This must be a subgame, for there are no edges from N_n to N_j, j < n. One of the players wins every vertex of N_n. Suppose it is player P. Let X_n = Force_P(N_n). Player P wins every vertex in X_n. Let G′ = G − X_n and assume its vertex set is N′. This is a subgame. Consider the non-empty sets in the partition N_1 ∩ N′, ..., N_{n−1} ∩ N′. These sets inherit the properties above. Consider the final such set. One of the players wins every vertex in this set, and so on.
The second case we consider is when one of the players never has a choice of move. The game obeys the following condition in the case of one of the players P: for any vertex i, if L(i) = P and i → j and i → k, then j = k. Games display a subgame invariance property for the opponent O. Let Q be one of the players, and let G′ = G − Force_Q(X). If player O wins vertex i ∈ G′, then she also wins i ∈ G because player P cannot escape from G′ into Force_Q(X). Therefore, we can use Theorem 1 to provide a polynomial time algorithm for this case.
Exercises 1. Consider vertices in the parity game of Figure 6.10. Work out the following force sets.
a. Force_V({18})
b. Force_R({18})
c. Force_V({13, 15, 12})
d. Force_R({13, 15, 12})
2. Use the algorithm from Theorem 1 to decide who wins each vertex, and what the winning strategy is, for the game in Figure 6.10.
3. Emerson, Jutla and Sistla show that the model checking decision problem belongs to NP ∩ co-NP because deciding who wins a parity game belongs to NP, and model checking is closed under complement [20]. Give a proof that deciding who wins a parity game belongs to NP.
4. Prove that, if Φ ∈ μMA, then the parity game G_V[E, Φ] has the pleasant structure described. That is, its vertices can be partitioned into a family of subsets N_1, ..., N_n such that the following properties hold.
a. N_i ∩ N_j = ∅ when i ≠ j
b. if any play is at a vertex in N_j, then the play cannot return to a vertex in N_i when i < j
c. for each i, all plays that remain in N_i forever are always won by the same player
5. Assume the parity game G obeys the following condition for player P whose opponent is O:
for any vertex i, if L(i) = P and i → j and i → k, then j = k.
Prove that, if Q is one of the players, and G′ = G − Force_Q(X), and player O wins vertex i ∈ G′, then she also wins i ∈ G. Use this fact to provide a quick algorithm for deciding parity games obeying this condition.
7
Exposing Structure
this requires us to expose structure that is common to the whole family of processes. In this case, of freedom from deadlock, the property itself is not parameterised. Much more complex is the situation wherein it is. An example is again the case of the scheduler: Sched_n has the property Cycle(a_1 ... a_n) for every n > 1 (where the parameterised property is defined in Section 5.7).
Exercises 1. a. Provide a definition for when a value passing process is data independent.
b. Can you generalise your definition for when a value passing process only depends on finitely many different values (and so is "almost" data independent)?
Compare your definitions with those of Jonsson and Parrow [32].
2. Show that C ∼ C′, where these processes are given above.
3. Prove the following for all n > 1.
a. Sched_n ⊨ νZ. ⟨−⟩tt ∧ [−]Z
b. Sched_n ⊨ Cycle(a_1 ... a_n)
4. Assume Cy_n ≝ a_1. ... .a_n.Cy_n for any n ≥ 1.
a. Prove that Sched_n \ {b_1, ..., b_n} ≈ Cy_n for all n > 1
b. Now provide a definition of a parameterised bisimulation relation covering the example in part (a)
Example 1 The family of counters of Figure 1.4, {Ct_i : i ≥ 0}, has the property [up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt). The following "proof" uses properties of the generalised satisfaction relation, which are formalised precisely below.
{Ct_i : i ≥ 0} ⊨ [up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt)
iff {Ct_i : i ≥ 1} ⊨ [round]ff ∧ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ [round]ff and {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 2} ⊨ ⟨down⟩⟨down⟩tt
iff {Ct_i : i ≥ 1} ⊨ ⟨down⟩tt
iff {Ct_i : i ≥ 0} ⊨ tt
Arguably, this is a more direct proof than an appeal to induction on process indices. The reader is invited to prove it inductively, and to expose the required induction hypothesis.
Example 1 uses various features of the satisfaction relation between sets of processes and formulas. The case when a formula is a conjunction of two subformulas is the most straightforward: E ⊨_V Φ ∧ Ψ iff E ⊨_V Φ and also E ⊨_V Ψ. Disjunction is more complicated. If E ⊨_V Φ ∨ Ψ, then this does not imply that E ⊨_V Φ or E ⊨_V Ψ. A simple illustration is that, although {0, a.0} ⊨ ⟨a⟩tt ∨ [a]ff, it is neither the case that {0, a.0} ⊨ ⟨a⟩tt, nor is it the case that {0, a.0} ⊨ [a]ff. In general, if E ⊨_V Φ ∨ Ψ, then some processes in E may have the property Φ and the rest have the property Ψ. Therefore, E can be split into two subsets E_1 and E_2 in such a way that E_1 ⊨_V Φ and E_2 ⊨_V Ψ. One of these sets could be empty.
To understand the situation when a set of processes satisfies a modal formula, a little notation is introduced. If K is a set of actions and E a set of processes, then K(E) is the following set.
K(E) ≝ {F : E -a-> F for some E ∈ E and a ∈ K}
Therefore, K(E) is the set of processes reachable by K transitions from members of E. For instance, {up}({Ct_i : i ≥ 0}) in Example 1 is the set {Ct_i : i ≥ 1}. Clearly, E ⊨_V [K]Φ iff K(E) ⊨_V Φ. This principle is used in Example 1. One case is {Ct_i : i ≥ 1} ⊨ [up]⟨down⟩⟨down⟩tt iff {up}({Ct_i : i ≥ 1}) ⊨ ⟨down⟩⟨down⟩tt.
A function f : E → K(E) that maps processes in E into processes in K(E) is a "choice function" if, for each E ∈ E, there is an a ∈ K such that E -a-> f(E). If f : E → K(E) is a choice function, then f(E) is the set of processes {f(E) : E ∈ E}.
E ⊨_V Z iff E ⊆ V(Z)
E ⊨_V Φ ∧ Ψ iff E ⊨_V Φ and E ⊨_V Ψ
E ⊨_V Φ ∨ Ψ iff E = E_1 ∪ E_2 and E_1 ⊨_V Φ and E_2 ⊨_V Ψ, for some E_1 and E_2
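The set K(E) and the choice functions used for the modal principles can be computed directly over a finite transition relation. A minimal sketch, using a hypothetical finite fragment of the counter family:

```python
# Transitions are (source, action, target) triples; the Ct processes below are
# a made-up finite cut of the counters, for illustration only.

def K_of(transitions, K, E):
    """K(E): processes reachable from members of E by an action in K."""
    return {F for (P, a, F) in transitions if P in E and a in K}

def is_choice_function(transitions, K, E, f):
    """f is a choice function if each P in E has some a in K with P -a-> f(P)."""
    return all(any((P, a, f[P]) in transitions for a in K) for P in E)

trans = {("Ct0", "up", "Ct1"), ("Ct1", "up", "Ct2"),
         ("Ct1", "down", "Ct0"), ("Ct2", "down", "Ct1")}

E = {"Ct0", "Ct1"}
print(K_of(trans, {"up"}, E) == {"Ct1", "Ct2"})                            # True
print(is_choice_function(trans, {"up"}, E, {"Ct0": "Ct1", "Ct1": "Ct2"}))  # True
```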
Exercises 1. Prove Example 1 using the principles of Figure 7.1. Also provide a proof that Ct_i ⊨ [up]([round]ff ∧ [up]⟨down⟩⟨down⟩tt) for all i ≥ 0 using induction on i. What induction hypothesis do you appeal to?
2. Show the following.
a. {Ct_i : i ≥ 0} ⊭ [up]⟨down⟩⟨down⟩tt
b. {T(i) : i ≤ 10} ⊨ μY. ⟨−⟩tt ∧ [−out(1)]Y
c. {Sched_n : n > 1} ⊨ Cycle(a_1 a_2)
where T(i) is defined in Section 1.1.
3. Can you show the following?
... = Sem
... = Sem | Sem; i ≥ 1.
4. ... follows.
Cop ≝ in(x).out(x).Cop
User ≝ write(x).in(x).User
Demonstrate that this system has the property that, if v_1 and then v_2 are written, then the output of either of these values may be the next observable action.
5. Develop similar principles to those in Figure 7.1 that also allow parameterisation of formulas. Can you prove the following using these principles, for all n > 1?
Sched_n ⊨ Cycle(a_1 ... a_n)
7.3 Tableaux I
In the previous chapter, games were developed for checking properties of individual processes. Our interest is now in checking properties of sets of processes. Instead of defining games to do this, we provide a tableau proof system for proving properties of sets of processes. A tableau proof system is goal directed, similar in spirit to the mechanism in Chapter 1 for deriving transitions. Its main ingredient is a finite family of proof rules that allow reduction of goals to subgoals. Tableaux have been used for property checking of both finite state and infinite state systems; see Bradfield, Cleaveland, Larsen and Walker [8, 10, 11, 13, 36, 55].
A goal has the form E ⊢_V Φ, where E is a set of processes, Φ is a normal formula as defined in Section 5.2, and V is a valuation. The intention is to try to achieve the goal E ⊢_V Φ, that is, to show that it is true, that E ⊨_V Φ. The proof rules allow us to reduce goals to subgoals, and each rule has one of the following two forms:

  E ⊢_V Φ                E ⊢_V Φ
  ---------  C      -------------------------  C
  F ⊢_V Ψ           F_1 ⊢_V Ψ_1   F_2 ⊢_V Ψ_2

where C is a side condition. In both cases, the premise, E ⊢_V Φ, is the goal that we are trying to achieve (that E has the property Φ relative to V), and the subgoals in the consequent are what the goal reduces to.
The tableau proof rules are presented in Figure 7.2. Other than Thin, each rule operates on the main logical connective of the formula in the goal. The logical rules for boolean and modal connectives follow the stipulations described in Figure 7.1. For instance, to establish the goal E ⊢_V Φ ∧ Ψ, it is necessary to establish that E has the property Φ, and that E has the property Ψ, both relative to V. A fixed point formula is abbreviated by its bound variable, since we are dealing with normal formulas. The rule for a bound variable is just the fixed point unfolding rule. Thin is a structural rule, which allows the set of processes in a goal to be expanded.
The rules are backwards sound, which means that, if all the subgoals in the consequent of a rule are true, then so is the premise goal. The stipulations introduced
7.3. Tableaux I 169
∧     E ⊢_V Φ ∧ Ψ
      --------------------
      E ⊢_V Φ    E ⊢_V Ψ

∨     E ⊢_V Φ ∨ Ψ
      ------------------------   E = E_1 ∪ E_2
      E_1 ⊢_V Φ    E_2 ⊢_V Ψ

⟨K⟩   E ⊢_V ⟨K⟩Φ
      ------------   f : E → K(E) a choice function
      f(E) ⊢_V Φ

[K]   E ⊢_V [K]Φ
      ------------
      K(E) ⊢_V Φ

Thin  E ⊢_V Φ
      ---------   E ⊆ F
      F ⊢_V Φ
in the previous section justify this for the boolean and modal operators, and in the case of the fixed point rules this follows from the fixed point unfolding property, Proposition 2 of Section 5.1. Backwards soundness of Thin is also clear because, if F ⊨_V Φ and E ⊆ F, then E ⊨_V Φ.
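The deterministic rules are easy to mechanise. The sketch below applies the ∧, [K] and fixed point abbreviation rules to a goal; the tuple encoding of formulas and the example transitions are our own, and the ∨, ⟨K⟩ and Thin rules are omitted since they involve a genuine choice.

```python
# One reduction step of the deterministic tableau rules of Figure 7.2.
# A goal pairs a frozenset of processes with a formula (nested tuples).

def K_successors(transitions, K, E):
    return frozenset(F for (P, a, F) in transitions if P in E and a in K)

def reduce_goal(transitions, E, phi):
    """Return the list of subgoals produced by the rule for phi's connective."""
    tag = phi[0]
    if tag == "and":                 # E |- Phi /\ Psi  =>  E |- Phi, E |- Psi
        return [(E, phi[1]), (E, phi[2])]
    if tag == "box":                 # E |- [K]Phi      =>  K(E) |- Phi
        return [(K_successors(transitions, phi[1], E), phi[2])]
    if tag in ("nu", "mu"):          # abbreviate the fixed point by its variable
        return [(E, ("var", phi[1]))]
    raise NotImplementedError("or, diamond and Thin involve a choice")

trans = {("E", "a", "F"), ("E", "b", "G")}
goal = reduce_goal(trans, frozenset({"E"}), ("box", {"a"}, ("tt",)))
print(goal == [(frozenset({"F"}), ("tt",))])   # True
```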
The other ingredient of a tableau proof system is the specification of when a goal counts as a "terminal" goal, so that no rule is then applied to it. As we shall see, terminal goals are either "successful" or "unsuccessful." To show that all the processes in E have the property Φ relative to V, we try to achieve the goal E ⊢_V Φ by building a successful tableau whose root is this initial goal. A successful tableau is a finite proof tree whose root is the initial goal, and all of whose leaves are successful terminal goals. Intermediate subgoals of the proof tree are determined by an application of one of the rules to the goal immediately above them.
The definition of when a goal in a tableau is terminal is underpinned by the game theoretic characterisation of satisfaction. A tableau represents a family of games. Each process in the set of processes of a goal determines a play. The definition of when a goal F ⊢_V Ψ in a proof tree is terminal is presented in Figure 7.3. Clearly, goals fulfilling 1 or 2 are correct. For instance, F ⊭_V ⟨K⟩Φ if there is an F ∈ F and F ⊭_V ⟨K⟩Φ. The justification for the success of condition 3 in the case of a successful terminal follows from considering any infinite length game play, from a process in E with respect to the property Z, that "cycles" through the terminal goal F ⊢_V Z of the proof tree. Because F ⊆ E, the play jumps back to the companion goal (the goal E ⊢_V Z). Such an infinite play must pass through the variable Z infinitely often, and because Z subsumes any other variable that also occurs infinitely often, and Z identifies a maximal fixed point formula, the play is therefore a win for the verifier. The justification for the failure of condition 3 in the case of an unsuccessful leaf is more involved, and will be properly accounted for in the next section when correctness of the proof method is shown. However, notice that if E = F, then any infinite length play from a process in E that repeatedly
cycles through the terminal goal F ⊢_V Z (with the play repeatedly jumping to the companion goal) is won by the refuter.
A successful tableau for E ⊢_V Φ is a finite proof tree, all of whose terminal goals are successful. A successful tableau only contains true goals. The proof of this is deferred until the next section.
Proposition 1 If E ⊢_V Φ has a successful tableau, then E ⊨_V Φ.
However, as the proof system stands, the converse is not true. A further termination condition is needed. But this condition is a little complex, so we defer its discussion until the next section. Instead, we present various examples that can be proved without it. Below we often write E ⊢_V Φ instead of {E} ⊢_V Φ, and also drop the index V when it is not germane to the proof.
Example 1 The model checking game showing that Cnt has the property νZ. ⟨up⟩Z consists of an infinite set of positions. Cnt is the infinite state process Cnt ≝ up.(Cnt | down.0). There is a very simple proof of the property using tableaux. Let Cnt_0 be Cnt, and let Cnt_{i+1} be Cnt_i | down.0 for i ≥ 0.

Cnt ⊢ νZ. ⟨up⟩Z
{Cnt_i : i ≥ 0} ⊢ νZ. ⟨up⟩Z
(*) {Cnt_i : i ≥ 0} ⊢ Z
{Cnt_i : i ≥ 0} ⊢ ⟨up⟩Z
{Cnt_i : i ≥ 1} ⊢ Z

Thin is used immediately to extend the set of processes from Cnt to Cnt_i for all i ≥ 0. The application of the ⟨up⟩ rule employs the choice function f, which we have left implicit in the proof and which maps each Cnt_i to Cnt_{i+1}. The final goal is a successful terminal, by condition 3 of Figure 7.3, owing to the goal (*) above it.
The tableau proof system also applies to finite state processes, as illustrated in the next example.
Example 2 We consider the process D described in Figure 5.1 and the following property Φ: νZ. μY. [a]((⟨b⟩tt ∧ Z) ∨ Y). Process D has the property Φ. A successful tableau proving this is given in Figure 7.4. The terminal goals are all successful. The goal (**) is terminal because of its companion (*). Although both variables Z and Y occur on the path between these goals, Z subsumes Y and Z identifies a maximal fixed point formula; therefore, the terminal goal (**) is successful. Notice that the goal 2 : D ⊢ Y is not a leaf, even though there is the goal 1 : D ⊢ Y above it: this is because Z occurs on the path between these goals, at (*), and Y does not subsume Z.
Example 2 illustrates an application of the tableau proof system to a finite state example. However, in this case the proof is really very close to the game. Of more interest is that we can use the proof system to provide a much more succinct presentation of a verifier's winning strategy by considering sets of processes.
Example 3 Recall the level crossing of Figure l.lO. Its safety property is expressed as
vZ .([tcross]ff V [ccross]ff) /\ [-]Z. Let <I> be this formula. We employ the
abbreviations in Figure 1.12, and let E be the full set {Crossing} U {EI, ... , EIl}.
D ⊢ Φ
D ⊢ Z
D ⊢ μY. [a]((⟨b⟩tt ∧ Z) ∨ Y)
1 : D ⊢ Y
D ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
D′ ⊢ (⟨b⟩tt ∧ Z) ∨ Y
D′ ⊢ ⟨b⟩tt ∧ Z
(*) D′ ⊢ Z
D′ ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
D ⊢ (⟨b⟩tt ∧ Z) ∨ Y
2 : D ⊢ Y
D ⊢ [a]((⟨b⟩tt ∧ Z) ∨ Y)
D′ ⊢ (⟨b⟩tt ∧ Z) ∨ Y
D′ ⊢ ⟨b⟩tt ∧ Z
D′ ⊢ ⟨b⟩tt    (**) D′ ⊢ Z
D ⊢ tt
FIGURE 7.4. A successful tableau for D ⊢ Φ
Below is a successful tableau showing that the crossing has this property.

Crossing ⊢ Φ
E ⊢ Φ
E ⊢ Z
E ⊢ ([tcross]ff ∨ [ccross]ff) ∧ [−]Z
E ⊢ [tcross]ff ∨ [ccross]ff    E ⊢ [−]Z
E − {E_5, E_7} ⊢ [tcross]ff    E − {E_4, E_6} ⊢ [ccross]ff    E ⊢ Z
∅ ⊢ ff    ∅ ⊢ ff

Notice the essential use of the Thin rule at the first step.
Exercises 1. Recall the definitions of [[ ]], [[K]], ⟨⟨ ⟩⟩ and ⟨⟨K⟩⟩ as fixed points in Section 5.3. Provide tableau proofs of the following.
a. {Div_n : n ≥ 0} ⊨ [[ ]][a]ff
b. Crossing ⊨ [car][train](⟨⟨tcross⟩⟩tt ∨ ⟨⟨ccross⟩⟩tt)
c. Protocol ⊨ [in(m)](⟨⟨out(m)⟩⟩tt ∧ [[{in(m) : m ∈ D}]]ff)
2. Provide tableau proofs of the following.
a. Cl ⊨ νZ. ⟨tick⟩⟨tock⟩Z
b. Cl ⊨ μY. ⟨tock⟩tt ∨ [−]Y
c. T(19) ⊨ μY. ⟨−⟩tt ∧ [−out(1)]Y
d. Sem | Sem | Sem ⊨ νZ. [get](μY. ⟨−⟩tt ∧ [−put]Y) ∧ [−]Z
e. As | As ⊨ μZ. ⟨−⟩tt ∧ [−a]Z
7.4 Tableaux II
In the previous section, a tableau proof system was presented for showing that
sets of processes have properties. The idea is to build proofs around goals of the
form E ⊢V Φ which, if successful, show that each process in E has the property
Φ relative to V. As it stands, the proof system is not complete.
An example for which there is not a successful tableau is given by the following
cell, from Example 2 of Section 5.5.

C def= in(x).B_x   where x : N

B_{n+1} def= down.B_n   for n ≥ 0
This cell has the property of eventual termination, μZ.[-]Z. The only possible
tableau for C ⊢ μZ.[-]Z, up to inessential applications of Thin, is as follows.

C ⊢ μZ.[-]Z
C ⊢ Z
C ⊢ [-]Z
1 : {B_i : i ≥ 0} ⊢ Z
2 : {B_i : i ≥ 0} ⊢ [-]Z
3 : {B_i : i ≥ 0} ⊢ Z
The goal 3 is terminal because of the repeat goal 1, and it is unsuccessful be-
cause Z identifies a least fixed point formula. However, the verifier wins the game
G(C, μZ.[-]Z), as the reader can verify. One solution to the problem is to permit
induction on top of the current proof system. The goal labelled 1 is provable using
induction on i. There is a successful tableau for the base case B_0 ⊢ Z, and, as-
suming successful tableaux for B_i ⊢ Z for all i ≤ n, it is straightforward to show
that there is also one for B_{n+1} ⊢ Z. However, we wish to avoid explicit induction
principles. Instead, we shall present criteria for success that capture player V's
winning strategy. This requires one more condition for termination.
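For intuition, eventual termination of such a cell can also be checked mechanically once the input value is bounded: a finite, finitely branching graph has only finite runs exactly when no cycle is reachable. The following is a minimal sketch under that bounding assumption; the bound N and the state encoding are invented for illustration.

```python
N = 5  # hypothetical bound on the input value

def successors(state):
    kind, n = state
    if kind == "C":
        return [("B", i) for i in range(N + 1)]  # C --in(i)--> B_i
    if n > 0:
        return [("B", n - 1)]                    # B_{n+1} --down--> B_n
    return []                                    # B_0 has no transitions

def every_run_terminates(start):
    # DFS with an "on the current path" set: every run is finite
    # exactly when no cycle is reachable from the start state.
    on_path, done = set(), set()
    def dfs(s):
        if s in on_path:
            return False
        if s in done:
            return True
        on_path.add(s)
        ok = all(dfs(t) for t in successors(s))
        on_path.discard(s)
        done.add(s)
        return ok
    return dfs(start)

print(every_run_terminates(("C", 0)))  # -> True
```

The bounded check is no substitute for the infinite state proof that follows; it only illustrates why the verifier wins the game.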
The additional circumstance for being a terminal goal of a proof tree concerns
least fixed point variables. A goal F ⊢V Z is also a terminal if it obeys the (almost
repeat) condition of Figure 7.5. This circumstance is very similar to condition 3 of
Figure 7.3 of the previous section for being an unsuccessful terminal goal, except
that here F ⊆ E. It is also similar to condition 3 for being a successful terminal,
except that here Z identifies a least fixed point variable. Not all terminal goals that
obey this new condition are successful. The definition of success (taken essentially
from Bradfield and the author [11]) is intricate, and requires some notation.
A terminal goal that obeys condition 3 of Figure 7.3 for being a successful
terminal, or the new terminal condition of Figure 7.5, is called a "σ-terminal,"
where σ may be instantiated by ν or μ, depending on the fixed point variable Z.
A node of a tableau (which is just a proof tree) contains a goal, and we use
boldface numbering to indicate nodes. For instance, n : E ⊢V Φ picks out the
node n of the tableau that has the goal E ⊢V Φ. Suppose node n′ is an immediate
successor of n, and n contains the goal E ⊢V Φ and n′ contains E′ ⊢V Φ′.

n : E ⊢ Φ
... n′ : E′ ⊢ Φ′ ...

The ... on both sides of the bottom goal suggest that there may be another subgoal
in one of these positions. A game play proceeding through (E, Φ) where E ∈ E
can have as its next configuration (E′, Φ′), where E′ ∈ E′, provided the rule applied
at n is not the rule Thin. Which possible processes E′ ∈ E′ can be in this next
configuration depend on the structure of Φ. This motivates the following notion.
We say that E′ ∈ E′ at n′ is a "dependant" of E ∈ E at n if

• the rule applied at n is ∧, ∨, σZ, Z or Thin, and E = E′, or
• the rule is [K] and E —a→ E′ for some a ∈ K, or
• the rule is ⟨K⟩, and E′ = f(E) where f is the choice function.
All the possibilities are covered here. An example is that each B_i at node 2 in
the earlier tableau is a dependant of the same B_i at node 1, and each B_i at 1 is a
dependant of C at the node directly above 1, since the rule applied is the [K] rule
and for each B_i there is the transition C —in(i)→ B_i.
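The three dependant cases can be summarised as a small function. The encodings below (rule names, transitions as triples, and the choice function f) are illustrative assumptions rather than notation from the text.

```python
def dependants(E, rule, K=None, transitions=(), f=None):
    # For each process in the goal set E, the set of its dependants at the
    # successor node, following the three cases above.
    dep = {}
    if rule in ("and", "or", "sigmaZ", "Z", "Thin"):
        for p in E:
            dep[p] = {p}                 # the same process carries over
    elif rule == "box":                  # the [K] rule
        for p in E:
            dep[p] = {q for (r, a, q) in transitions if r == p and a in K}
    elif rule == "diamond":              # the <K> rule
        for p in E:
            dep[p] = {f(p)}              # the choice function picks one successor
    return dep

# The cell example: each B_i at node 1 is a dependant of C via C --in(i)--> B_i.
trans = [("C", "in(%d)" % i, "B%d" % i) for i in range(3)]
print(dependants({"C"}, "box", K={"in(0)", "in(1)", "in(2)"}, transitions=trans))
```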
The "companion" of a σ-terminal is the most recent node above it that makes
it a terminal. (There may be more than one node above a σ-terminal that makes it
a terminal, hence we take the lowest.) Next, we define the notion of a "trail."
Definition 1 Assume that node n_k is a μ-terminal and node n_1 is its companion. A trail from
process E_1 at n_1 to E_k at n_k is a sequence of pairs of nodes and processes
(n_1, E_1), ..., (n_k, E_k) such that for all i with 1 ≤ i < k either
1. E_{i+1} at n_{i+1} is a dependant of E_i at n_i, or
2. n_i is the immediate predecessor of a σ-terminal node n′ (where n′ is different
from n_k) whose companion is n_j for some j : 1 ≤ j ≤ i, and n_{i+1} = n_j and
E_{i+1} at n′ is a dependant of E_i at n_i.
A simple trail from B_2 at 1 to B_1 at 3 in the earlier tableau is:

(1, B_2) (2, B_2) (3, B_1).

Here B_2 at 2 is a dependant of B_2 at 1, and B_1 at 3 is a dependant of B_2 at 2. Condition
2 of Definition 1 is necessary to take account of the possibility of embedded fixed
points as pictured in Figure 7.6. A trail from (n_1, E_1) to (n_k, E_k) may pass through
n_j repeatedly before continuing to n_k via n_i. In this case, n_k is a μ-terminal,
meaning Z identifies a least fixed point formula. However, n_t may be either a μ- or
a ν-terminal: in both cases F_t ⊆ E_t and in both cases Z must subsume Y because
n_k labels a leaf.

n_1 : E ⊢ Z
n_j : F′ ⊢ Φ′
n_k : F ⊢ Z    n_t : F_t ⊢ Y

In fact, node n_j here could be n_1, with n_k and n_t both sharing the
same companion. There can be additional iterations of embedded σ-terminals. For
instance, there could be another σ-companion along the path from n_j to n_t whose
partner labels a σ-terminal, and so on. There is an intimate relation between the
sequence of processes in a trail and a sequence of processes in part of a game play
from a companion node to a σ-terminal.
The companion node n of a μ-terminal induces a relation ▷n, defined as
follows: E ▷n E′ if and only if there is a trail from process E at n to process E′
at a μ-terminal whose companion is n.
Success means, therefore, that the relation induced by the companion of a μ-
terminal is well founded. This precludes the possibility of an infinite game play
associated with a tableau that cycles through the node n infinitely often so that
player R wins. If there is an infinite descending chain E_0 ▷n E_1 ▷n ···, then any
μ-terminal whose companion is node n is unsuccessful.
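When the tableau is finite and each goal mentions finitely many processes, the induced relation ▷n is finite and finitely branching, so well-foundedness reduces to the absence of cycles. A sketch of such a check, with the relation represented as a hypothetical successor map:

```python
def well_founded(rel):
    # rel maps each element E to the set of E' with E |> E'.  On a finite,
    # finitely branching relation an infinite descending chain must revisit
    # some element, so well-foundedness reduces to cycle detection.
    on_path, done = set(), set()
    def descend(x):
        if x in on_path:
            return False      # a cycle yields an infinite descending chain
        if x in done:
            return True
        on_path.add(x)
        ok = all(descend(y) for y in rel.get(x, ()))
        on_path.discard(x)
        done.add(x)
        return ok
    return all(descend(x) for x in rel)

print(well_founded({"B2": {"B1"}, "B1": {"B0"}}))  # -> True
print(well_founded({"B1": {"B2"}, "B2": {"B1"}}))  # -> False
```

For genuinely infinite state spaces, as in the examples below, no such finite check applies, which is exactly why the success criterion is stated in terms of well-foundedness.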
A tableau is successful if it is finite and all its terminal goals are successful
(obey one of conditions 1, 2, or 3 of Figure 7.3 for being successful, or are successful
μ-terminals). The tableau technique is both sound and complete for arbitrary (infi-
nite state) processes. Again, the result is proved using the game characterization
of satisfaction. The following theorem includes Proposition 1 from the previous
section.
Theorem 1 There is a successful tableau with root node E ⊢V Φ iff E ⊨V Φ.
Proof. Assume that the goal E ⊢V Φ has a successful tableau. We show that
player V wins every game G_V(E, Φ) when E ∈ E. Consider a play of such a game.
The idea is to follow the play through the tableau, returning to the companion node
of a σ-terminal. All choices for player R are given in the tableau. Choices for player
V are determined by the tableau. Suppose (F, Ψ1 ∨ Ψ2) is the position reached
in the game play, and F ⊢V Ψ1 ∨ Ψ2 is the goal reached in the tableau, where
F ∈ F and the goal is not a leaf. Then F1 ⊢V Ψ1 and F2 ⊢V Ψ2 are the immediate
successors in the tableau. If F ∈ F1, then player V chooses (F, Ψ1) as the next
position in the game. If F ∉ F1, then F ∈ F2 and player V chooses (F, Ψ2) as the
next position. Next, assume that (F, ⟨K⟩Ψ) is the position reached in the game
play, and F ⊢V ⟨K⟩Ψ is the goal reached in the tableau, where F ∈ F and the goal
is not a leaf. Then f(F) ⊢V Ψ is the immediate successor in the tableau, where
f is a choice function. Player V chooses (f(F), Ψ) as the next position in the
game. If the game configuration is (F, Ψ) and the associated goal in the tableau is
F ⊢V Ψ with Thin applied to it, then we associate the same game position with the
successor goal in the tableau. Any such play must be won by player V, as the reader
can verify. The only interesting case is to show that a play cannot proceed through
a least fixed point variable Z infinitely often, where Z subsumes all other variables
occurring infinitely often. This would break the well-foundedness requirement on
μ-terminals.
For the other half of the proof, assume that E ⊨V Φ. We build a successful
tableau for E ⊢V Φ by preserving truth, and with a judicious use of the Thin rule.
Suppose we have built part of the tableau, and goal F ⊢V Ψ still has to be developed,
where F ⊨V Ψ. If Ψ is tt, or the variable Z that is free in the starting formula Φ,
then the goal is a successful terminal. If Ψ is Ψ1 ∧ Ψ2, then the goal reduces to
the subgoals F ⊢V Ψ1 and F ⊢V Ψ2. If Ψ is Ψ1 ∨ Ψ2, then consider each game
G_V(F, Ψ) for F ∈ F. By assumption, player V has a history-free winning strategy for any
such game. Let F_i for i ∈ {1, 2} contain the processes F ∈ F such that player V's
winning strategy for G_V(F, Ψ) includes the rule "at (F, Ψ) choose (F, Ψ_i)." The
goal F ⊢V Ψ reduces to the subgoals F1 ⊢V Ψ1 and F2 ⊢V Ψ2. Clearly, F_i ⊨V Ψ_i
and F = F1 ∪ F2. If Ψ = [K]Ψ1, then the goal reduces to the subgoal K(F) ⊢ Ψ1. If
Ψ is ⟨K⟩Ψ1, then again consider each game G_V(F, ⟨K⟩Ψ1) for F ∈ F. Player V has
a history-free winning strategy for any such game. Let f be the following choice
function: f(F) is the process G such that "at (F, ⟨K⟩Ψ1) choose F —a→ G" is the
rule in the winning strategy for G_V(F, Ψ). The goal F ⊢V Ψ therefore reduces to
the true subgoal f(F) ⊢V Ψ1. Next, suppose Ψ is σZ.Ψ1. Let P be the smallest
transition closed set containing F as a subset. Let F1 be the set ‖σZ.Ψ1‖P. If
F1 ⊇ F, apply the rule Thin to give the subgoal F1 ⊢V σZ.Ψ1, and then one
introduces the subgoal F1 ⊢V Z followed by F1 ⊢V Ψ1. The final part of the proof
is to show that the tableau is successful, and this we leave as an exercise for the
reader. □
Example 1 The tableau at the start of this section, reproduced below, is now successful.

C ⊢ μZ.[-]Z
C ⊢ Z
C ⊢ [-]Z
1 : {B_i : i ≥ 0} ⊢ Z
2 : {B_i : i ≥ 0} ⊢ [-]Z
3 : {B_i : i ≥ 0} ⊢ Z

Now suppose instead that each B_{i+1} has the extra capability of performing up,
with the transition B_{i+1} —up→ B_{i+2}. An attempt to prove that C
eventually terminates yields the same tableau as above. However, the tableau is
unsuccessful. There are two trails from B_{i+1} at 1 to the leaf node 3:

(1, B_{i+1}) (2, B_{i+1}) (3, B_i)
(1, B_{i+1}) (2, B_{i+1}) (3, B_{i+2})

Therefore B_{i+1} ▷1 B_i and B_{i+1} ▷1 B_{i+2}. Consequently, there is a variety of infinite
decreasing sequences from B_{i+1}. One example is the sequence B_{i+1} ▷1
B_{i+2} ▷1 B_{i+1} ▷1 ···.
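Restricting the subscripts to a bounded range makes the failure visible mechanically: the two kinds of trails contribute the pairs B_{i+1} ▷1 B_i and B_{i+1} ▷1 B_{i+2}, and a chain that never bottoms out can be read off by always preferring the second kind. The bound N and the greedy traversal below are illustrative only.

```python
N = 6  # illustrative bound on the subscripts
rel = set()
for i in range(N):
    rel.add((i + 1, i))          # from the trail (1,B_{i+1}) (2,B_{i+1}) (3,B_i)
    if i + 2 <= N:
        rel.add((i + 1, i + 2))  # from the trail (1,B_{i+1}) (2,B_{i+1}) (3,B_{i+2})

# Read off a descending |>1-chain, preferring the "up" pair when available.
chain, x = [1], 1
for _ in range(8):
    x = x + 1 if (x, x + 1) in rel else x - 1
    chain.append(x)
print(chain)  # -> [1, 2, 3, 4, 5, 6, 5, 6, 5]
```

Each adjacent pair of the printed chain lies in the relation, so ▷1 is not well founded and the μ-terminal at node 3 is unsuccessful.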
Example 2 The crossing in Section 1.2 has the liveness property "whenever a car approaches,
eventually it crosses" provided the signal is fair, as stated in Section 5.7. Let Q
and R be variables and V a valuation such that Q is true when the crossing is in
any state where Rail has the form green.tcross.red.Rail (the states E2, E3,
E6, and E10 of Figure 1.12) and R holds when it is in any state where Road is
up.ccross.down.Road (the states E1, E3, E7 and E11). The liveness property now
becomes "for any run, if Q is true infinitely often and R is also true infinitely
often, then whenever a car approaches the crossing eventually it crosses," which is
expressed by the following formula with free variables relative to V, νY.[car]Ψ1 ∧
[-]Y, where Ψ1 is

μX.νY1.(Q ∨ [-ccross](νY2.(R ∨ X) ∧ [-ccross]Y2)) ∧ [-ccross]Y1.

Let E be the full set {Crossing} ∪ {E1, ..., E11} of processes in Figure 1.12. A
proof that the crossing has this property is given as a successful tableau in stages
in Figure 7.7. Throughout the tableau, we omit the index V on ⊢V.
E ⊢ Y
E ⊢ [car]Ψ1 ∧ [-]Y
E ⊢ [car]Ψ1    E ⊢ [-]Y
{E1, E3, E7, E11} ⊢ Ψ1    E ⊢ Y
T1

E1 = {E1, E3, E4, E6, E7, E11}

T1
E1 ⊢ Ψ1
1 : E1 ⊢ X
E1 ⊢ νY1.(Q ∨ [-ccross](νY2.(R ∨ X) ∧ [-ccross]Y2)) ∧ [-ccross]Y1
E1 ⊢ Y1
E1 ⊢ (Q ∨ [-ccross](νY2.(R ∨ X) ∧ [-ccross]Y2)) ∧ [-ccross]Y1
E1 ⊢ Q ∨ [-ccross](νY2.(R ∨ X) ∧ [-ccross]Y2)    E1 ⊢ [-ccross]Y1
T2
{E1, E4, E7, E11} ⊢ [-ccross](νY2.(R ∨ X) ∧ [-ccross]Y2)
{E1, E3, E4, E6, E11} ⊢ νY2.(R ∨ X) ∧ [-ccross]Y2
{E1, E3, E4, E6, E7, E11} ⊢ νY2.(R ∨ X) ∧ [-ccross]Y2
{E1, E3, E4, E6, E7, E11} ⊢ Y2
Example 3 The proof that the slot machine SMn of Section 1.2 has the property that a windfall
can be won infinitely often is given by the following successful tableau.

E ⊢ Y
E ⊢ μZ.⟨win(10^6)⟩Y ∨ ⟨-⟩Z
1 : E ⊢ Z
E′ ⊢ Y
E is the set of all derivatives. The vital rules in this tableau are the disjunction
at node 1, where E1 is exactly those processes capable of performing a win(10^6)
action and E2 is the remainder, and the ⟨K⟩ rule at node 3, where f is defined to
ensure that E1 is eventually reached: for processes with less than 10^6 in the bank,
f chooses events leading towards loss, so as to increase the amount in the bank;
and for processes with more than 10^6, f chooses to release(10^6). The formal
proof requires partitioning E2 into several classes, each parametrised by an integer
n, and showing that while n < 10^6, n is strictly increasing over a cycle through
the classes; then, when n = 10^6, f selects a successor that is not in E2, so a trail
from E_0 through nodes 1, 2, 3, 4 terminates, and therefore node 4 is a successful
μ-terminal.
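The effect of the choice function f can be mimicked on a toy model in which a state is just the amount n in the bank; the action names and the stake size below are invented for illustration, and the real SMn has considerably more structure.

```python
GOAL = 10**6   # the windfall threshold
STAKE = 10**5  # invented stake size

def f(n):
    # The choice function: steer toward loss until the bank reaches the
    # windfall amount, then choose the win; action names are illustrative.
    if n < GOAL:
        return ("loss", n + STAKE)  # losing bets grow the bank
    return ("win", 0)               # the windfall is released, bank resets

n, wins = 0, 0
for _ in range(25):
    action, n = f(n)
    wins += action == "win"
print(wins)  # -> 2
```

Every cycle strictly increases the bank until the win becomes available, mirroring the strict increase of n over a cycle through the classes in the formal proof.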
Example 4 Consider the processes T(i) for i ≥ 1 from Section 1.1. The following tableau is
successful if, and only if, each process T(i) has the property μY.⟨out(2)⟩tt ∨ [-]Y;
but which of these holds is not known!

1 : {T(i) : i ≥ 1} ⊢ Y
{T(i) : i ≥ 1} ⊢ ⟨out(2)⟩tt ∨ [-]Y

The problem is that we don't know whether the relation ▷1 induced by the
companion node 1 of the μ-terminal is well founded.
Exercises 1. Show that the verifier wins the game G(C, μZ.[-]Z).
2. The following processes are from Section 5.7.

A_0 def= a.Σ{A_i : i ≥ 0}
A_{i+1} def= b.A_i   for i ≥ 0
B_0 def= a.Σ{B_i : i ≥ 0} + b.Σ{B_i : i ≥ 0}
B_{i+1} def= b.B_i   for i ≥ 0

Provide a successful tableau whose root is

{A_j | A_j : j ≥ 0} ⊢ μY.⟨-⟩tt ∧ [-a]Y.

Show that there is not a successful tableau when A is replaced by B.
3. Complete the proof of Theorem 1.
4. For the various interpretations of Cycle(...) of Section 5.7, give successful
tableaux for
a. Sched′3 ⊨ Cycle(a1 a2 a3)
b. Sched3 ⊨ Cycle(a1 a2 a3)
where the schedulers are from Section 1.4.
References
[1] Andersen, H., Stirling, C., and Winskel, G. (1994). A compositional proof system for
the modal mu-calculus. Procs 9th IEEE Symposium on Logic in Computer Science,
144-153.
[2] Austry, D., and Boudol, G. (1984). Algèbre de processus et synchronisation.
Theoretical Computer Science 30, 90-131.
[3] Baeten, J., and Weijland, W. (1990). Process Algebra. Cambridge University Press.
[4] Bernholtz, O., Vardi, M., and Wolper, P. (1994). An automata-theoretic approach to
branching-time model checking. Lecture Notes in Computer Science 818, 142-155.
[5] Bergstra, J., and Klop, J. (1989). Process theory based on bisimulation semantics.
Lecture Notes in Computer Science 354, 50-122.
[6] Benthem, J. van (1984). Correspondence theory. In Handbook of Philosophical
Logic, Vol. II, ed. Gabbay, D. and Guenthner, F., 167-248. Reidel.
[7] Bloom, B., Istrail, S., and Meyer, A. (1988). Bisimulation can't be traced. In 15th
Annual Symposium on the Principles of Programming Languages, 229-239.
[8] Bradfield, J. (1992). Verifying Temporal Properties of Systems. Birkhäuser.
[9] Bradfield, J. (1998). The modal mu-calculus alternation hierarchy is strict.
Theoretical Computer Science 195, 133-153.
[10] Bradfield, J., and Stirling, C. (1990). Verifying temporal properties of processes.
Lecture Notes in Computer Science 458, 115-125.
[11] Bradfield, J., and Stirling, C. (1992). Local model checking for infinite state spaces.
Theoretical Computer Science 96, 157-174.
[12] Clarke, E., Emerson, E., and Sistla, A. (1983). Automatic verification of finite state
concurrent systems using temporal logic specifications: a practical approach. Proc.
10th ACM Symposium on Principles of Programming Languages, 117-126.
[13] Cleaveland, R. (1990). Tableau-based model checking in the propositional mu-
calculus. Acta Informatica 27, 725-747.
[14] The Concurrency Workbench, Edinburgh University,
http://www.dcs.ed.ac.uk/home/cwb/index.html.
[38] Long, D., Browne, A., Clarke, E., Jha, S., and Marrero, W. (1994). An improved
algorithm for the evaluation of fixpoint expressions. Lecture Notes in Computer
Science 818, 338-350.
[39] Mader, A. (1997). Verification of modal properties using boolean equation systems.
Doctoral thesis, Technical University of Munich.
[40] Manna, Z., and Pnueli, A. (1991). The Temporal Logic of Reactive and Concurrent
Systems. Springer.
[41] McNaughton, R. (1993). Infinite games played on finite graphs. Annals of Pure and
Applied Logic 65, 149-184.
[42] Milner, R. (1980). A Calculus of Communicating Systems. Lecture Notes in Computer
Science 92.
[43] Milner, R. (1983). Calculi for synchrony and asynchrony. Theoretical Computer
Science 25, 267-310.
[44] Milner, R. (1989). Communication and Concurrency. Prentice Hall.
[45] Milner, R., Parrow, J., and Walker, D. (1992). A calculus of mobile processes, Parts
I and II. Information and Computation 100, 1-77.
[46] Niwinski, D. (1997). Fixed point characterization of infinite behavior of finite state
systems. Theoretical Computer Science 189, 1-69.
[47] Park, D. (1981). Concurrency and automata on infinite sequences. Lecture Notes in
Computer Science 154, 561-572.
[48] Papadimitriou, C. (1994). Computational Complexity. Addison-Wesley.
[49] Plotkin, G. (1981). A structural approach to operational semantics. Technical Report,
DAIMI FN-19, Aarhus University.
[50] Pratt, V. (1982). A decidable mu-calculus. 22nd IEEE Symposium on Foundations
of Computer Science, 421-427.
[51] Simone, R. de (1985). Higher-level synchronizing devices in Meije-SCCS.
Theoretical Computer Science 37, 245-267.
[52] Sistla, P., Clarke, E., Francez, N., and Meyer, A. (1984). Can message buffers be
axiomatized in linear temporal logic? Information and Control 68, 88-112.
[53] Stirling, C. (1987). Modal logics for communicating systems. Theoretical Computer
Science 49, 311-347.
[54] Stirling, C. (1995). Local model checking games. Lecture Notes in Computer Science
962, 1-11.
[55] Stirling, C., and Walker, D. (1991). Local model checking in the modal mu-calculus.
Theoretical Computer Science 89, 161-177.
[56] Streett, R., and Emerson, E. (1989). An automata theoretic decision procedure for
the propositional mu-calculus. Information and Computation 81, 249-264.
[57] Taubner, D. (1989). Finite Representations of CCS and TCSP Programs by Automata
and Petri Nets. Lecture Notes in Computer Science 369.
[58] Thomas, W. (1995). On the synthesis of strategies in infinite games. Lecture Notes
in Computer Science 900, 1-13.
[59] Vardi, M., and Wolper, P. (1986). Automata-theoretic techniques for modal logics of
programs. Journal of Computer and System Sciences 32, 183-221.
[60] Walker, D. (1987). Introduction to a calculus of communicating systems. Technical
Report ECS-LFCS-87-22, Dept. of Computer Science, Edinburgh University.
[61] Walker, D. (1990). Bisimulations and divergence. Information and Computation 85,
202-241.
[62] Winskel, G. (1988). A category oflabelled Petri Nets and compositional proofsystem .
Procs 3rd IEEE Symposium on Logic in Computer Science, 142-153.
Index
D, 5    User, 8
K(E), 166    Use, 54
Rep, 55    Ven, 2
0, 5    ▷n, 176
P, 29    ∼, 64
P(E), 29    μ-terminal, 175
V[E/Z], 92    companion, 175
=,12 ~, 73
=r,69 ~c, 75
11 eI> lIe, 92 freI>, Z], 93, 105
11 eI> nE , 84
11.2,97 ACP, 25, 26, 28
μM, 104    actions
νZ, 97    −, 34
π-calculus, 27    −K, 34
Cl2, 52    J+, 40
Cl, 1    A, 34
Cnt, 10    0, 18
Cop, 5    τ (internal action), 8, 17
Crossing, 12    all runs (A), 89
Cti, 4    alternation hierarchy, 127
Div, 45    always (G), 89
Divi, 43    Austry, 25
Protocol, 13
Regi, 6    Backwards sound, 168
SMn, 15    Benthem, 64
Schedn, 25    Bergstra, 25
T(i), 7    Bernholtz, 151
Ucop, 17    bisimilar, 64
μM0, 111    refusal, 45
μMA, 126    run, 83, 86
μM1, 123
characterization of bisimilarity, 70    Satisfies, 32
dynamic logic, 107, 108    E ⊨V Φ, 165
Hennessy-Milner logic, 32 SCCS, 25, 26
M,32 signature, 145
Mot,48 Simone,25
MO, 46 simulation , 68
M∞, 38    equivalence, 68
mu-calculus, 104 Sistla, 90,132,151,161
propositional dynamic logic, 147 Skou, 72
with variables, 92 Smolka,69
modal mu-calculus, 104 some run (E), 89
model checking , 151 specification, 52
Moller, 67, 164    Streett, 106, 145
monotonic, 94, 105    Sub(Φ), 37, 109
Sub(E),29
Negation    substitution {·/·}, 5
(¬), 36    subsumes, 109, 138
(·ᶜ), 36, 101    synchronization, 9
Niwinski, 127
normal formula, 108    Tableau proof, 168
NP ∩ co-NP, 161    rule, 168
successful, 169, 176
Observable action, 17    terminal goal, 169
converges (↓), 48    Tarski, 97, 116
ordinals, 60 Taubner, 28, 30
output of data, 5    temporal logic, 83, 89
CTL,90,147
Papadimitriou, 63    testing, 54
parity game, 152 trace, 17,53
Park, 64, 106 completed, 53
Parrow, 27, 165 observable, 18
Plotkin, 4    trail, 175
postfixed point, 96    transition closed, 29
Pratt, 106 transition system, 2
prefixed point, 96 finite state, 4
properties infinite state, 4, 10
extensional, 128 observable, 19
intensional , 128 transitions, 2
liveness, 83, 87 observable, 18
safety, 83, 87 rules for, 2-4, 8, 9,11 ,17,19,22,26,55
weak Jiveness, 87
weak safety, 87 Until (U), 89