TCCS : A Temporal Calculus of

Communicating Systems
(DRAFT)
Faron Moller Chris Tofts
Department of Computer Science Department of Computer Science
University of Edinburgh University College Swansea
fm@[Link] cschris@[Link]

November 15, 2001

Abstract

In this paper we describe the calculus TCCS, an extension of the process algebra
CCS with temporal constructs. The calculus is useful for the formal analysis of
the timing aspects of systems, as well as their functional behaviour. Such aspects
include when events can and must occur, as well as how fast a process may execute.
The paper includes extensive examples, as well as a detailed description of the
theoretical underpinnings of the language. We also include a description of a logic
for specification, which is an extension of Hennessy-Milner logic. Finally, we present
a brief comparison with various other similar approaches to the analysis of the
temporal properties of systems.

Contents

1 Introduction 1

2 The Language TCCS 2

3 Examples 6
3.1 Mutual Exclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 Clocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.3 Horse Racing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.4 Simple Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

4 The Semantics of TCCS 9

5 Bisimulation Relations 12
5.1 Strong Similarity and Bisimilarity . . . . . . . . . . . . . . . . . . . . . . . 12
5.2 Observational Similarity and Bisimilarity . . . . . . . . . . . . . . . . . . . 13
5.3 Strong Faster Than . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.4 Observational Faster Than . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

6 Equational Theories 20

7 Example: The Alternating Bit Protocol 27

8 A Logic for Specification 30

9 Conclusion 30

1 Introduction

Process algebra is the general study of distributed concurrent systems in an algebraic framework. There have been many successful models formulated within this framework, notably Milner's CCS ([Mil80, Mil89]), Hoare's CSP ([Hoa78, Hoa85]), Bergstra
and Klop’s ACP ([BK85]), Boudol’s Meije ([Bou85]), and Hennessy’s EPL ([Hen88]).
Each of these approaches has added to more than a decade of fruitful discoveries on the
mathematical foundations of concurrent processes, so that now it is the case that these
theories can be applied in practice, perhaps using automated tools such as the Edinburgh
Concurrency Workbench ([Cle89, Mol91a]).
However, perhaps by necessity for the development of such a complex field, each of
these approaches includes restrictions on the nature of the systems which they attempt to
model; restrictions which are now being challenged more and more by the engineering com-
munity who try to apply such mathematical theories to develop rigorously-verified systems
in practice. Such restrictions include for example the inability within these frameworks
to naturally describe properties of timing, probability and priority of events performed
by the components of the system being modelled.
Recently however, there has been considerable interest in the use of process algebraic
methods to describe the temporal properties of systems. Presentations of such frameworks
include [Mil83, RR86, Tof88, Gro90, NRSV90, Wan90, BB91, Che91]. The aim of such
a development is the ability to express and analyze system timing properties, including
fundamental notions such as system timeouts and duration control.
In this paper we describe the Temporal Calculus of Communicating Systems, TCCS,
which is an extension of Milner’s Calculus of Communicating Systems CCS. The semantic

underpinning of the language has undergone development in [Tof90, MT90, Mol91, MT91,
MT92]. We present the main technical results and several examples of the descriptive
use of the calculus. The calculus has been implemented in the Edinburgh Concurrency
Workbench (Version 6.0), a prototype tool for the analysis of concurrent systems. Details
of this implementation can be found in [Mol91a].
The paper is organized as follows. In Section 2 we introduce the syntax of the calculus,
with an informal description of each of the constructs of the language. In Section 3 we
present several simple examples which demonstrate the use of TCCS for describing sys-
tems. In Section 4 we formalize the definition of the language by providing an operational
semantic interpretation of the calculus. In Section 5 we present several preorder and
equivalence relations based on the notion of bisimilarity of [Par81]. These relations define
different ideas of equivalence, simulation, and ideas of when processes are functionally
equivalent but temporally distinct; that is, one executes faster than the other. In Sec-
tion 6 we provide axiom systems for reasoning about systems syntactically. In Section 7
we present a nontrivial example of the use of the calculus, in the form of the analysis
of an alternating bit protocol. In Section 8 we describe a logic for specification based
on Hennessy-Milner Logic HML. Finally, in Section 9 we compare the present approach
with that of several other similar approaches.
We include proofs of all technical results. However, we minimize these by making
much reference to [Mil89] where the corresponding results are proven for the symbolic
calculus CCS. The proofs referred to are assumed to be familiar to the reader, so we provide only the refinements required to adapt them to the setting of our timed calculus.

2 The Language TCCS

To define the language, we first presuppose a set L of atomic action symbols not containing
τ or ε, and we define Act = L ∪ {τ}. We assume that L can be partitioned into two equinumerous sets related by a complementation bijection a ↦ ā which is an involution (the complement of ā is again a). These complementary actions form the basis of the handshake communication
used in our calculus, analogous to that of CCS. We also presuppose some set X of process
variables, and take T to represent some time domain (that is, a totally ordered set,
bounded below by, but not containing, 0, the unit of a natural addition operation +
defined on T ), be it for instance the discrete domain of positive integers, or the dense
domain of positive rationals, or the continuous domain of positive reals. We shall for the
most part not put any restriction on the time domain, but we shall present some results
which hold only over some particular time domain such as the integers.
The collection of TCCS expressions, ranged over by P , is then defined by the BNF
expression given in Figure 1, where we take a ∈ Act, x ∈ X, t ∈ T, and f ranging over relabelling functions: those f : Act → Act such that f(ā) is the complement of f(a) for a ≠ τ, and f(τ) = τ.
We also stipulate that all variables appear both closed and guarded; that is, each variable
x appears within the scope of an action or temporal prefix, which in turn appears within
the scope of a recursive construct µx. The intuitive interpretation of these terms can be

P ::= 0            nil process
   |  0̲            nontemporal nil process
   |  x            process variable
   |  a.P          (atomic) action prefix
   |  a̲.P          delayed action prefix
   |  (t).P        temporal prefix
   |  P + P        weak choice
   |  P ++ P       strong choice
   |  P | P        parallel composition
   |  P \a         restriction
   |  P [f]        relabelling
   |  µx.P         (recursive) process definition

Figure 1: Syntax of TCCS

given as follows.

– 0 represents the completely dead process. It can neither perform any computation,
nor can it witness the passage of any time. In terms of a concrete machine, it may
be thought of as one which is shut down or with its “plug pulled”. One of the
consequences is that whenever 0 is a concurrent component of a machine, then the
whole machine will be temporally dead; the local temporal deadlock gives rise to a
global temporal deadlock.

– 0̲ represents the process which can never perform any actions, but rather idles
forever.

– x represents the process bound to the variable x in some assumed environment.

– a.P represents the process which can perform the (atomic) action a and evolve into
the process P upon so doing. The process cannot progress through time before
performing the action a, and the action is assumed to occur instantaneously in
time. If we wish to model a system where actions have some amount of duration
(for instance, if we wish to talk of the computation of some hardware component
with a certain known propagation delay), then we can naturally treat the action as a composite event consisting of the two atomic actions a_s (start of a) and a_f (finish of a), with the two actions separated in time by the appropriate duration t_a.
Also note that atomic actions can be causally related without one being temporally

dependent on the other. For instance, if we consider the process a.b.P , the action b
is causally dependent on the action a in that it cannot have occurred unless (notice
that we do not say until) the action a has occurred; however, the two actions are
considered to happen simultaneously in time.

– a̲.P represents the process which can perform the (atomic) action a, but will allow
any amount of time to pass before doing the action.

– (t).P represents the process which will do no computation for an amount of time t, but at that point in time will commence behaving as the process P. Thus prefixing P by a composite event a consisting of start and finish actions a_s and a_f respectively, with duration t_a, would be written as a_s.(t_a).a_f.P.

– P + Q represents a choice between the two processes P and Q. The process behaves
as the process P or the process Q, with the choice being made at the time of the first
action, or else at the occurrence of a passage of time when only one of the operands
may allow the time passage to occur. In this latter case, the second temporally
deadlocked process is dropped from the computation. This operator is referred to
as weak choice.

– P ++ Q represents a stronger notion of choice between the two processes P and Q.


The process behaves as the process P or the process Q, with the choice being made
only at the time of the first action. Thus for instance any initial passage of time
must be allowed by both P and Q. This operator is referred to as strong choice.

– P | Q represents the parallel composition of the two processes P and Q. Each


of the processes may do any actions independently, or they may synchronise on
complementary actions, resulting in a τ action. Any passage of time must be allowed
and recorded by each of P and Q.

– P \ a represents the process P with the action a ∈ L (as well as its complement
action a) restricted away, that is, not allowed to occur.

– P [f ] represents the process P with its actions renamed by the relabelling function
f.

– µx.P represents the (least fixed point) solution to the equation x = P .
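To make the constructor-by-constructor reading above concrete, the abstract syntax of Figure 1 can be transcribed directly into a datatype. The sketch below is ours and is purely illustrative: it assumes a discrete time domain (the positive integers), uses Haskell, and its constructor names are not part of TCCS.

type Name = String            -- atomic action symbols drawn from L, plus "tau"
type Time = Integer           -- a discrete time domain: the positive integers

data Proc
  = DeadNil                   -- 0    : can neither act nor let time pass
  | IdleNil                   -- 0̲    : never acts, idles forever
  | Var Name                  -- x    : process variable
  | Act Name Proc             -- a.P  : immediate action prefix
  | DAct Name Proc            -- a̲.P  : delayed action prefix
  | Delay Time Proc           -- (t).P : temporal prefix
  | WSum Proc Proc            -- P + Q  : weak choice
  | SSum Proc Proc            -- P ++ Q : strong choice
  | Par Proc Proc             -- P | Q  : parallel composition
  | Res Name Proc             -- P \ a  : restriction of a and its complement
  | Rel (Name -> Name) Proc   -- P [f]  : relabelling
  | Rec Name Proc             -- mu x.P : recursive process definition

-- Example: the mutual-exclusion system of Section 3.1 below,
--   A = a.A' + (1).B  with A' = c.B,  and  B = b.B' + (1).A  with B' = d.A.
mutexA :: Proc
mutexA = Rec "A" (WSum (Act "a" (Act "c" defB))   -- a.A'
                       (Delay 1 defB))            -- (1).B
  where
    defB = WSum (Act "b" (Act "d" (Var "A")))     -- b.B'
                (Delay 1 (Var "A"))               -- (1).A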

Remark 2.1 (The Calculus CCS) The symbolic calculus CCS can be viewed as the
subcalculus of TCCS obtained by omitting deadlock 0, action prefix a.P , temporal prefix
(t).P and strong choice P ++ Q. In CCS, the interpretation of behaviour is that it occurs
within time, but that the time is abstracted away; one action occurs after another, but there
is no way of expressing or knowing how much time may or must pass before this second
action occurs. The nil process of CCS is interpreted by 0̲ rather than 0, as time can always pass when it is being ignored. For a similar reason, action prefix is interpreted by a̲.P rather than a.P. There is no place for temporal prefix in the subcalculus for CCS, as the
delaying would simply be ignored. Finally, in this situation, where no temporal deadlocking

can occur, the two choice operators coincide semantically, so the strong choice operator is
dropped.
It would have made more sense to reverse the rôles of the two nil constants, as well
as the two action prefix operators to stay consistent with the standard syntax of CCS
(where the nil and prefix constructs are written without underscores). However, we chose
to abide by the convention introduced in [BB91] of having undecorated actions interpreted
as immediate events and underscored actions interpreted as delayed events.

Remark 2.2 (The Calculus `TCCS) By adding to the above subcalculus for CCS the
operation of temporal prefix (t).P , we arrive at a subcalculus which can express exactly
those processes which can never deadlock temporally; every process term must allow time to
pass. (This property can be proved using a simple structural inductive argument; however,
so far this claim can only be based on the informal explanation of the constructors of the
language.) This proves to be a useful subcalculus, both practically and theoretically. Prac-
tically, we can imagine its utility for example in modelling hardware components which
are connected in a pipelined fashion in which the components have different propagation
delays. One component may be ready to receive data from the previous component, but
this component might not be prepared to send the data. In this case, the data request will
remain at the input port of the first component until such time as the sending component is
prepared to send it. In this sense, the components are delay insensitive, in a sense likened
to that of [Udd86]. Theoretically, this subcalculus is important when we consider compar-
ing the relative speeds of behaviourally functionally equivalent processes. This aspect is
explored in Section 5. We shall henceforth emphasize the importance of this subcalculus
by referring to it as `TCCS, or liberal TCCS.

As a point of notation, we shall occasionally omit the dot when applying the prefix
operators, and also drop trailing 0’s, thus for instance rendering a.0 + (t).b.0 as a + (t)b.
Also, we shall allow the prefix operators to take precedence over the concurrency operator,
which will take precedence over the recursion operator, which finally in turn will take
precedence over the choice operators; outside of this, the binary operators will be left
associative, and we shall freely use parentheses to override the precedence rules. We
shall allow ourselves to write P \L for L ⊆ L, and interpret it in the expected fashion.
Finally, we shall allow ourselves to specify processes definitionally, by providing recursive
definitions of processes. For example, we shall write A ≝ a.A rather than A ≝ µx.a.x.
Each process term P has an associated sort L(P ) which represents the set of actions
which the process is potentially capable of performing during its execution. This syntactic
property is defined as follows.

Definition 2.3 (Syntactic sort)

L(0) = L(0̲) = L(x) = ∅
L(a.P) = L(a̲.P) = {a : a ≠ τ} ∪ L(P)
L((t).P) = L(P)
L(P + Q) = L(P ++ Q) = L(P) ∪ L(Q)
L(P | Q) = L(P) ∪ L(Q)
L(P\a) = L(P) − {a, ā}
L(P[f]) = f(L(P))
L(µx.P) = L(P)
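As an illustration of Definition 2.3, the syntactic sort can be transcribed clause by clause over the Proc datatype sketched above; again this is only a sketch of ours, and the convention that a complement action carries a leading tilde is an assumption made purely for the example.

import qualified Data.Set as Set

-- Syntactic sort L(P), clause by clause after Definition 2.3.
sortOf :: Proc -> Set.Set Name
sortOf p = case p of
  DeadNil   -> Set.empty
  IdleNil   -> Set.empty
  Var _     -> Set.empty
  Act a q   -> actionSort a `Set.union` sortOf q
  DAct a q  -> actionSort a `Set.union` sortOf q
  Delay _ q -> sortOf q
  WSum q r  -> sortOf q `Set.union` sortOf r
  SSum q r  -> sortOf q `Set.union` sortOf r
  Par q r   -> sortOf q `Set.union` sortOf r
  Res a q   -> sortOf q `Set.difference` Set.fromList [a, complementOf a]
  Rel f q   -> Set.map f (sortOf q)
  Rec _ q   -> sortOf q
  where
    actionSort a = if a == "tau" then Set.empty else Set.singleton a
    -- assumed encoding: the complement of "a" is "~a" and vice versa
    complementOf ('~':a) = a
    complementOf a       = '~' : a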

Throughout the sequel, we reserve the symbol ≡ to represent syntactic identity.

3 Examples

In this section, we present several simple examples of using TCCS for describing (compo-
nents of) concurrent systems.

3.1 Mutual Exclusion

Consider the following two recursively-defined processes.


A ≝ a.A′ + (1).B
B ≝ b.B′ + (1).A

[Diagram omitted: the two states A and B connected by unit delays, with action a leading from A to A′ and action b leading from B to B′.]

Here we are representing a system which is allowing one of two possible computation
paths to be followed, and furthermore allowing the choice to be determined by time, in
that the first path can be followed only at even time units, whilst the second path can
be followed only at odd time units. This system can be considered as the basis of, for
example, a mutual exclusion algorithm for two independent processes sharing a common
critical section. We can see this by interpreting the actions a and b as the first and second
processes entering the critical section, respectively. Then the processes A′ and B′ can reasonably be defined by A′ ≝ c.B and B′ ≝ d.A, where the actions c and d represent
the first and second processes exiting the critical section, respectively. We can trivially
generalise this system to give a simple mutual exclusion algorithm for any number of
processes sharing a common resource.
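To see the timing at work (an informal derivation of ours, justified by the transition rules of Section 4): at an even instant the system sits in state A, where only the first process may enter and leave the critical section,

    A −a→ A′ −c→ B,

whereas if a full time unit passes with no entry, the immediate a-branch is discarded by the weak choice and only the second process may then enter,

    A ⇝1 B −b→ B′ −d→ A.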
Note that this example can be interpreted over a real time domain, as well as an
integer time domain; it just insists that accesses via the actions a and b occur at integral
moments in time. In the next example, we present an inherently real-time system.

3.2 Clocks

In this example, we present three clocks of varying accuracy. This example comes from
[BB91], and shows how inherently real-time systems can be elegantly modelled in TCCS.
The three clocks are defined as follows.

C0 ≝ (1).tick.C0

C1 ≝ (1 − ε).(tick.0 ++ (2ε).0) | (1).C1

C2 ≝ (1 − ε).(tick.C2 ++ (2ε).0)

The first clock is a perfect clock, with its ticks occurring exactly one unit of time apart.
The second clock keeps accurate time, though its ticks may individually fluctuate within
ε of when they are supposed to occur. Finally, the third clock is inaccurate, in that its
ticks fluctuate as with the second clock, but these fluctuation errors are cumulative.
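A rough calculation (ours, not part of the original example) makes the difference concrete. In C1 a fresh tick component is released exactly once every time unit by the (1).C1 recursion, and each such component must tick between 1 − ε and 1 + ε after its release, so the n-th tick of C1 falls in the interval [n − ε, n + ε]: the error never grows. In C2 each tick is constrained only relative to its predecessor, so the n-th tick may fall anywhere in [n(1 − ε), n(1 + ε)]. With ε = 0.01, for example, the hundredth tick of C1 is still within 0.01 of time 100, while the hundredth tick of C2 may already be a full time unit early or late.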

3.3 Horse Racing

Suppose we have some collection of (distinct) times t_i ∈ T (1 ≤ i ≤ n), and we define the
simple processes H_i by

H_i ≝ (t_i).finish_i.0.

These processes can be viewed as horses competing in a race; the time which horse i takes
to cross the finish line is given by t_i.
Following this interpretation, if we then denote a race to be the parallel composition
of the horse processes, that is,

Race ≝ H_1 | H_2 | · · · | H_n,

then we discover that

Race = (t_m).finish_m.0,

where we interpret equality informally as “having the same transitions,” and where m is
such that t_m < t_i for each i ≠ m. Hence the effect of a race is to report the winning horse.
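As a small concrete instance (ours, for illustration), take n = 2 with t_1 = 2 and t_2 = 5, so that m = 1:

    Race = (2).finish_1.0 | (5).finish_2.0 ⇝2 finish_1.0 | (3).finish_2.0 −finish_1→ 0 | (3).finish_2.0.

Once the winner has crossed the line, the dead component 0 prevents any further passage of time, so finish_2 never becomes available; the transitions of Race are therefore exactly those of (2).finish_1.0.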
If we now denote another process Race+ by

Race+ ≝ H_1 + H_2 + · · · + H_n,

then we can discover in fact that

Race = Race+ .

This anomaly, where parallel computation is confused with (nondeterministic) choice, has
an analogy in computation theory, where the process Race is viewed as the nondetermin-
istic Turing machine concept of acceptance — try each path in parallel, and accept if one
of the paths succeeds.

Note that if we changed the definition of our horses slightly by defining the processes
as

H_i′ ≝ (t_i).finish_i.0̲,

then this analogy somewhat breaks down. The corresponding race processes Race′ and Race′+ no longer have the same transition sequences, as Race′+ still only reports the winning horse, whereas Race′ reports the finishing order of all of the horses.

3.4 Simple Gates

Consider the definition of an And gate in CCS, using the paradigm of transition signalling
[Sut89], where only changes in the signals (from high to low or low to high) are observed.
We label the input lines of the gate by a and b, and the output line by c. Its definition is
as follows.

A   ≝ [Link] + [Link]          Ab  ≝ [Link] + b.A
Aa  ≝ a.A + [Link]             Aab ≝ [Link] + [Link]

[Diagram omitted: the And gate, with input lines a and b and output line c.]

In the state Aa if the line b goes high followed by line a going low, then we will observe
two output transitions, representing the output going high and then low. This must occur
no matter what the separation in time between the two signals. However if the changes
occur in the opposite order then no change will occur. If we use the component in some
external sense, then we can ensure that there is no temporal collision between changes on
the a and b lines. However when this process is used as part of a bigger system, it will
often be the case that changes on the a and b lines should be considered as simultaneous.
Unfortunately it is clear from the above that the order in which we interleave the a and b
actions is critical to the performance of the above system. Technically the system is not
confluent (in an external sense) to the effects of a and b applications.
Whilst it may be important for circuit designers to be wary of the possibility of prob-
lems caused by such races, it is often the case that the designer is aware of the problem
and knows that the physics of the circuit will in fact render the race irrelevant. For in-
stance, the response time of the device may be sufficiently slow to ensure that both the
inputs change before the output is changed. However we still want the ability to respond
to a single line change. Basically we need a period during which if both lines change
then we consider them as simultaneous; otherwise we consider them as separate. We can
model this easily in TCCS using a timeout construct. Clearly in real hardware this time
period will be very short; for convenience we shall assume it has duration 1 and all other
durations have been adjusted to take this into account.

And   ≝ [Link] + [Link]

Anda  ≝ [Link] + b.([Link] ++ [Link] ++ (1).[Link])

Andb  ≝ a.([Link] ++ [Link] ++ (1).[Link]) + [Link]

Andab ≝ a.([Link] ++ [Link] ++ (1).[Link]) + b.([Link] ++ [Link] ++ (1).[Link])

If we now consider the state Anda , if transitions occur on a and b within one time unit
of each other, then the circuit responds by moving to state Andb irrespective of the order
of arrival. Unfortunately it is still the case that if the transitions differ by at least 1 unit
of time, this may not work; however, defining this threshold duration, and ensuring that
the environment using the component respects this threshold, is the responsibility of the
circuit designer.

4 The Semantics of TCCS

We have now seen the whole of the language TCCS, along with several examples of its
use in describing systems. With these examples, we gave informal explanations of their
behaviours, based on informal intuitions inferred from the description of the language
constructs. However, to perform rigorous (mathematical) analyses of systems, we must
provide a mathematical semantic definition of the language. This is provided in the
present section.
The semantics of TCCS is transition based, structurally presented in the style of
[Plo81], outlining what actions and time delays a process can witness. In order to define
our semantics however, we must first define a function which will allow us to describe
when a process may idle for a particular amount of time. This function is defined as
follows.

Definition 4.1 The function | · |T : TCCS → {0, 1, 2, . . . , ∞} defines the maximum delay
which a process may allow before forcing a computation (or deadlock) to occur. Formally
this function is defined as follows:

|0|_T   = 0          |(t).P|_T  = t + |P|_T
|0̲|_T   = ∞          |P + Q|_T  = max( |P|_T , |Q|_T )
|a.P|_T = 0          |P ++ Q|_T = min( |P|_T , |Q|_T )
|a̲.P|_T = ∞          |P | Q|_T  = min( |P|_T , |Q|_T )
|P\a|_T = |P|_T      |P[f]|_T   = |P|_T
|µx.P|_T = |P{µx.P/x}|_T

This function is well defined over TCCS, due to the requirement that all terms be
closed and guarded; it would be impossible however to assign a unique value to a term
containing an unguarded recursive variable, such as µx.x.
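Definition 4.1 can be transcribed directly over the Proc datatype sketched in Section 2. The following is an illustrative sketch of ours for the discrete time domain, with ∞ modelled explicitly; the use of max for weak choice and min for strong choice and parallel composition follows the temporal rules of Figure 3 below (a temporally dead summand may be dropped by +, whereas ++ and | require both operands to allow the delay).

-- Maximum delay |P|_T of Definition 4.1, over the Proc sketch of Section 2.
-- Note: the unfolding of recursion may fail to terminate on terms whose
-- recursion is guarded only by temporal prefixes (where |P|_T is infinite).

data MaxDelay = Fin Integer | Inf deriving (Eq, Ord, Show)

addD :: Integer -> MaxDelay -> MaxDelay
addD t (Fin n) = Fin (t + n)
addD _ Inf     = Inf

maxDelay :: Proc -> MaxDelay
maxDelay p = case p of
  DeadNil   -> Fin 0
  IdleNil   -> Inf
  Var _     -> Fin 0                          -- unreachable for closed, guarded terms
  Act _ _   -> Fin 0
  DAct _ _  -> Inf
  Delay t q -> addD t (maxDelay q)
  WSum q r  -> max (maxDelay q) (maxDelay r)  -- a temporally dead summand may be dropped
  SSum q r  -> min (maxDelay q) (maxDelay r)  -- both summands must allow the delay
  Par q r   -> min (maxDelay q) (maxDelay r)  -- both components must allow the delay
  Res _ q   -> maxDelay q
  Rel _ q   -> maxDelay q
  Rec x q   -> maxDelay (substitute x (Rec x q) q)

-- substitute x p q replaces free occurrences of the variable x by p in q.
substitute :: Name -> Proc -> Proc -> Proc
substitute x p q = case q of
  Var y | y == x   -> p
  Rec y r | y == x -> Rec y r                 -- x is shadowed here
  DeadNil          -> DeadNil
  IdleNil          -> IdleNil
  Var y            -> Var y
  Act a r          -> Act a (substitute x p r)
  DAct a r         -> DAct a (substitute x p r)
  Delay t r        -> Delay t (substitute x p r)
  WSum r s         -> WSum (substitute x p r) (substitute x p s)
  SSum r s         -> SSum (substitute x p r) (substitute x p s)
  Par r s          -> Par (substitute x p r) (substitute x p s)
  Res a r          -> Res a (substitute x p r)
  Rel f r          -> Rel f (substitute x p r)
  Rec y r          -> Rec y (substitute x p r)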

a.P −a→ P                                          a̲.P −a→ P

P −a→ P′  implies  P + Q −a→ P′                    Q −a→ Q′  implies  P + Q −a→ Q′

P −a→ P′  implies  P ++ Q −a→ P′                   Q −a→ Q′  implies  P ++ Q −a→ Q′

P −a→ P′  implies  P | Q −a→ P′ | Q                Q −a→ Q′  implies  P | Q −a→ P | Q′

P −a→ P′ and Q −ā→ Q′  implies  P | Q −τ→ P′ | Q′

P −a→ P′ (a ≠ b, b̄)  implies  P\b −a→ P′\b

P −a→ P′  implies  P[f] −f(a)→ P′[f]

P{µx.P/x} −a→ P′  implies  µx.P −a→ P′

Figure 2: Action Derivation Rules

0̲ ⇝t 0̲                                             a̲.P ⇝t a̲.P

(s + t).P ⇝s (t).P                                 (t).P ⇝t P

P ⇝s P′  implies  (t).P ⇝s+t P′

P ⇝t P′ and Q ⇝t Q′  implies  P + Q ⇝t P′ + Q′

P ⇝t P′ and |Q|_T < t  implies  P + Q ⇝t P′        Q ⇝t Q′ and |P|_T < t  implies  P + Q ⇝t Q′

P ⇝t P′ and Q ⇝t Q′  implies  P ++ Q ⇝t P′ ++ Q′

P ⇝t P′ and Q ⇝t Q′  implies  P | Q ⇝t P′ | Q′

P ⇝t P′  implies  P\a ⇝t P′\a                      P ⇝t P′  implies  P[f] ⇝t P′[f]

P{µx.P/x} ⇝t P′  implies  µx.P ⇝t P′

Figure 3: Temporal Derivation Rules

In Figures 2 and 3, we present the operational rules for our language. They are
presented in an identical natural deduction style, and are to be read as follows: if the
relation(s) above the inference line can be inferred, then we can infer the relation below
the line. Our transitional semantics over TCCS is then defined by the least relations −→ ⊆ TCCS × Act × TCCS and ⇝ ⊆ TCCS × T × TCCS (written P −a→ Q and P ⇝t Q for (P, a, Q) ∈ −→ and (P, t, Q) ∈ ⇝ respectively) satisfying the rules laid out in Figures 2 and 3. These could have been presented as a single transition system over TCCS × (Act ∪ T) × TCCS,
as long as Act and T were disjoint. They are presented separately to clearly distinguish
between the two notions. Notice that these rules respect the informal description of the
constructs given in the previous section.
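The temporal rules lend themselves to the same treatment as the syntax. The sketch below (ours, reusing the Proc datatype and the substitute function from the earlier sketches) computes unit time steps over the discrete time domain; by Fact 4.3 below, longer delays are obtained simply by iterating.

-- delay1 p returns Just p' when p can let one unit of time pass and become p',
-- and Nothing when p cannot idle; it follows the rules of Figure 3.
delay1 :: Proc -> Maybe Proc
delay1 p = case p of
  DeadNil  -> Nothing
  IdleNil  -> Just IdleNil
  Var _    -> Nothing                        -- unreachable for closed, guarded terms
  Act _ _  -> Nothing
  DAct a q -> Just (DAct a q)
  Delay t q
    | t > 1     -> Just (Delay (t - 1) q)
    | t == 1    -> Just q
    | otherwise -> delay1 q                  -- (0).q is read as q
  WSum q r -> case (delay1 q, delay1 r) of
      (Just q', Just r') -> Just (WSum q' r')
      (Just q', Nothing) -> Just q'          -- the temporally dead summand r is dropped
      (Nothing, Just r') -> Just r'          -- the temporally dead summand q is dropped
      (Nothing, Nothing) -> Nothing
  SSum q r -> SSum <$> delay1 q <*> delay1 r -- both summands must allow the step
  Par q r  -> Par  <$> delay1 q <*> delay1 r -- both components must allow the step
  Res a q  -> Res a <$> delay1 q
  Rel f q  -> Rel f <$> delay1 q
  Rec x q  -> delay1 (substitute x (Rec x q) q)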
We shall often write P −a→ to mean that P −a→ P′ for some P′, and similarly write the negated form to mean that there is no such P′. We use similar abbreviations for ⇝ and for the derived transition relations defined later.
There are a few points of coherence in the semantic transition relations which are
worth noting. Our first point justifies the informal description of the function | · |T .

Fact 4.2 P ⇝t if and only if |P|_T ≥ t.

Proof: The only if direction is proven by a straightforward induction on the depth of inference of P ⇝t P′. The if direction is proven by a straightforward induction on the number of times the function |·|_T is required in order to evaluate |P|_T. □

We state two further points of interest regarding the idling behaviour of process terms.
The first concerns the temporal determinism of our model, in that it demonstrates that
the passage of time cannot in itself cause nondeterministic behaviour. The latter concerns
the coherence of the passage of time.

Fact 4.3

• If P ⇝t P′ and P ⇝t P″ then P′ ≡ P″.

• P ⇝s P′ ⇝t P″ for some P′ if and only if P ⇝s+t P″.

Proof: Straightforward inductions on depths of inferences. The if part of the second point requires the only if result, as well as Fact 4.2. □

We shall eventually want to reason about the external behaviour of systems, where
internal communication actions τ are ignored. To do this, we shall require the following
definitions for τ abstractions.

Definition 4.4 If s ∈ Act∗, then ŝ ∈ Λ is the sequence s with all occurrences of τ deleted.

Definition 4.5

=ε⇒  =  (−τ→)*   (also written =⇒)
=as⇒ =  =⇒ −a→ =s⇒
≈>t  =  =⇒ ⇝t1 =⇒ · · · =⇒ ⇝tn =⇒   where t = t1 + · · · + tn.

Finally, as one last point of notation, we shall sometimes allow ourselves to write (0)P to be read as P, (∞)P to be read as 0̲, and P ⇝0 Q to be read as P ≡ Q. However, we
shall make it clear when this is allowed, so that unless stated, temporal constants from
T are always to be assumed to be finite and positive.

5 Bisimulation Relations

In this section, we describe several bisimulation-based equivalences and preorders for


TCCS, which give formal definitions of various notions of equivalence. The asymmetric
preorders are of two varieties, being either simulation preorders, or faster-than preorders
which relate functionally equivalent processes differing only in the relative speeds of the
processes.
We shall define all of our relations only over closed process terms, but we shall interpret
them over arbitrary open terms using the following extensionality definition.

Definition 5.1 For any binary relation R over closed TCCS terms, we let E R F whenever E{P̃/x̃} R F{P̃/x̃} for all instantiations P̃ of the free variables x̃ of E and F.

5.1 Strong Similarity and Bisimilarity

Milner defines simulation and bisimulation relations over CCS processes informally as
follows: a process P simulates another process Q if whatever event Q may perform, P
may mimic this event and evolve into a new state which continues to be in the simulation
relation with the new state of Q. Bisimilarity of two processes is then simply defined by
requiring that the simulation relation is symmetric, that is, each process can mimic any
event of the other whilst remaining in the bisimulation relation with the new state of the
former process.
Analogous to Milner’s strong (pre-)congruence for CCS, we have a form of (pre-)con-
gruence for TCCS which requires processes to match both on the performance of actions
and their time passage capabilities.

Definition 5.2 A binary relation R over agents is a strong T -simulation if whenever


(P, Q) ∈ R then ∀a ∈ Act and ∀t ∈ T :

(i) if Q −a→ Q′ then P −a→ P′ for some P′ with (P′, Q′) ∈ R;

(ii) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P strongly T -simulates Q, written P Q, if (P, Q) ∈ R for some strong T -simulation R.


If this R is symmetric, then P strongly T -bisimulates Q, written P ∼ Q.

Fact 5.3

• is the largest strong T -simulation, and is a precongruence. That is, it is a preorder


relation, and P Q implies that C [P ] C [Q] for all contexts C [·].

• ∼ is the largest strong T -bisimulation, and is a congruence. That is, it is an equiv-


alence relation, and P ∼ Q implies that C [P ] ∼ C [Q] for all contexts C [·].

Proof: Standard; see [Mil89], Proposition 4.2 and Section 4.4. 2

Bisimilarity was defined and examined in [MT90] where several simple examples were
included, and a collection of equational laws were presented for the calculus. However,
as with CCS, this relation does not abstract away internal communication, so is of little
interest when dealing with real verification problems.
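For finite-state systems Definition 5.2 can be checked mechanically. The following naive fixed-point computation over an explicitly listed transition system is our own illustration (labels are plain strings, so action transitions and unit time steps can share one relation); the Concurrency Workbench cited above uses far more efficient partition-refinement algorithms.

import Data.List (nub)

type State = String
type Label = String                -- e.g. "a", "tau", or "t:1" for a unit time step
type LTS   = [(State, Label, State)]

successors :: LTS -> State -> Label -> [State]
successors lts p l = [ q | (p', l', q) <- lts, p' == p, l' == l ]

labels :: LTS -> [Label]
labels lts = nub [ l | (_, l, _) <- lts ]

-- One refinement step: keep a pair only if each transition of either component
-- can be matched by the other into a pair that is still in the relation.
refine :: LTS -> [(State, State)] -> [(State, State)]
refine lts rel = [ (p, q) | (p, q) <- rel, simulated p q, simulated q p ]
  where
    simulated p q =
      and [ any (\q' -> (p', q') `elem` rel) (successors lts q l)
          | l <- labels lts, p' <- successors lts p l ]

-- Greatest fixed point, starting from the full relation over the given states.
bisimilar :: LTS -> [State] -> State -> State -> Bool
bisimilar lts states p q = (p, q) `elem` gfp [ (s, t) | s <- states, t <- states ]
  where
    gfp rel = let rel' = refine lts rel
              in if rel' == rel then rel else gfp rel'

-- Example: two copies of the transition system of a.(1).b.0 are bisimilar.
example :: Bool
example = bisimilar lts ["p0","p1","p2","p3","q0","q1","q2","q3"] "p0" "q0"
  where
    lts = [ ("p0","a","p1"), ("p1","t:1","p2"), ("p2","b","p3")
          , ("q0","a","q1"), ("q1","t:1","q2"), ("q2","b","q3") ]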

5.2 Observational Similarity and Bisimilarity

In CCS, a notion of observational (bi-)similarity is introduced which treats internal com-


munications as silent transitions. We have suitably extended this notion to TCCS. How-
ever there are several subtle difficulties which appear in the calculus as a result of the
temporal extensions.

Definition 5.4 A binary relation R over agents is a T -simulation if whenever (P, Q) ∈ R


then ∀a ∈ Act and ∀t ∈ T :

(i) if Q −a→ Q′ then P =â⇒ P′ for some P′ with (P′, Q′) ∈ R;

(ii) if Q ⇝t Q′ then P ≈>t P′ for some P′ with (P′, Q′) ∈ R.

P T-simulates Q, written P ≾ Q, if (P, Q) ∈ R for some T-simulation R. If this R is symmetric, then P T-bisimulates Q, written P ≈ Q.

Fact 5.5

• ≾ is the largest T-simulation, and is a preorder relation.

• ≈ is the largest T-bisimulation, and is an equivalence relation.

Proof: Standard; see [Mil89], Proposition 5.2. □

As with the case of the symbolic calculus CCS, these relations are not substitutive over
the whole calculus, in particular, over the choice operators. The usual counterexamples
are as follows: a ≈ τ.a, yet neither a + b ≾ τ.a + b nor a ++ b ≾ τ.a ++ b.

There are also other interesting counterexamples to congruicity involving the timing con-
structs. For example, we have that (s)τ (t)0 ≈ (s + t)0, but

(s)τ(t)0 ++ (s + t)aP is not ≾-related to (s + t)0 ++ (s + t)aP.

In fact, the left hand side will always deadlock after time s + t, whereas the right hand
side will always perform the a event and evolve into P after time s + t.
We are thus interested in refining this relation to obtain a congruence. This stronger
relation can in fact be defined as follows.

Definition 5.6 A binary relation R over agents is a T -precongruence if whenever (P, Q) ∈ R


then ∀a ∈ Act and ∀t ∈ T :

(i) if Q −a→ Q′ then P =a⇒ P′ for some P′ with P′ ≈ Q′;

(ii) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P and Q are T-precongruent, written P ⪯ Q, if (P, Q) ∈ R for some T-precongruence R. If this R is symmetric, then P and Q are T-congruent, written P ≅ Q.

Hence the basic refinement required to Definition 5.4 is in the very first action of the
process. While the process idles, the definition remains restrictive (the final clause),
whereas after performing an action, the resulting derivative terms need only be T -similar,
according to Definition 5.4.

Fact 5.7

• ⪯ is the largest observational T-precongruence, and is indeed a precongruence.

• ≅ is the largest observational T-congruence, and is indeed a congruence.

Proof: Straightforward; again see Proposition 5.2 of [Mil89]. □

This is the least refinement we could have made of our original T -bisimulation relation
in order to get a congruence, in the sense that ∼= is the largest congruence contained in
≈. To prove this fact, we rely on a sequence of lemmata. Firstly, we demonstrate the
easy side of this fact.

Fact 5.8 ≅ is contained within ≈.

Proof: It is easy to verify that ≅ is a T-bisimulation. □

We now proceed to demonstrate that T -congruence is indeed a congruence. We start


out by showing that it is substitutive with respect to all of the finite constructs of the
calculus.

Fact 5.9 If P ≅ Q, then
(i) a.P ≅ a.Q         (iv) P + R ≅ Q + R       (vii) P\a ≅ Q\a
(ii) a̲.P ≅ a̲.Q         (v) P ++ R ≅ Q ++ R      (viii) P[f] ≅ Q[f]
(iii) (t).P ≅ (t).Q    (vi) P | R ≅ Q | R

Proof: Straightforward by constructing appropriate T -congruences. For (vi), (vii) and


(viii), we first need to establish that P ≈ Q implies P | R ≈ Q | R for all R, and
that P \a ≈ Q\a and P [f ] ≈ Q[f ] for all a and f . 2

In order to prove that ∼= is a congruence over the whole calculus with recursive defi-
nitions, we need first to introduce an auxiliary relation.

Definition 5.10 A binary relation S over agents is a T-bisimulation up to ≅ if (P, Q) ∈ S implies ∀a ∈ Act and ∀t ∈ T:

(i) if P −a→ P′ then Q =a⇒ Q′ for some Q′ with (P′, Q′) ∈ S ≈;
(ii) if Q −a→ Q′ then P =a⇒ P′ for some P′ with (P′, Q′) ∈ ≈ S;
(iii) if P ⇝t P′ then Q ⇝t Q′ for some Q′ with (P′, Q′) ∈ S ≅;
(iv) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ ≅ S.

Fact 5.11 If S is a T-bisimulation up to ≅, then ≈ S ≈ is a T-equivalence, and ≅ S ≅ is a T-congruence.

Proof: The proofs proceed analogously to that for Proposition 4.5 of [Mil89]. The first conclusion is needed to prove the second. □

Fact 5.12 If P ≅ Q then µx.P ≅ µx.Q.

Proof: Let A ≡ µx.P and B ≡ µx.Q. It suffices to demonstrate that the relation

S ≝ { (G{A/x}, G{B/x}) : G any TCCS expression }

is a bisimulation up to ≅. From this, by taking G ≡ x, we will get (A, B) ∈ S, so that A ≅S≅ B, which by Fact 5.11 will give us A ≅ B as required.
The proof now proceeds analogously to that for Proposition 4.12 of [Mil89]. □

Thus we do indeed have a congruence relation. Moreover, this is the least refinement
we could have made of our original T -bisimulation in order to get a congruence, in the
sense that ≅ is the largest congruence contained in ≈. To prove this, we use the following alternative characterization of ≅.

Fact 5.13 Assume that L(P) ∪ L(Q) ≠ L. Then P ≅ Q iff P ++ R ≈ Q ++ R for all R.

Proof: (=⇒) It is straightforward to check that


{(P ++ R, Q ++ R) : P ∼
= Q} ∪ ≈
is a T -bisimulation.
(⇐=) It suffices to demonstrate that
{(P, Q) : P ++ R ≈ Q ++ R for all R}
is a T -congruence. Thus assume that P ++ R ≈ Q ++ R for all R. To start, we
assume that a 6∈ L(P ) ∪ L(Q) is in L.
a a a a
If P −→ P 0 then Q ++ a =⇒ Q0 ≈ P 0 , so Q0 −→,
6 from which we get Q =⇒ Q0 .
b

t t a
If P ; P 0 , then Q ++ (a + 0) ≈> S ≈ P 0 ++ (a + 0). Since S =⇒, we must
t
have that S ≡ Q0 ++ (t)(a + 0) where Q ; Q0 . Furthermore, we can verify from
P 0 ++ (a + 0) ≈ Q0 ++ (a + 0) that P 0 ≈ Q0 . To do this, we note that to match
a a a
P 0 −→ P 00 , we have that Q0 ++ (a + 0) =⇒ Q00 ≈ P 00 , and since Q00 =⇒,
6 we must in
b
a t
fact have that Q0 =⇒ Q00 . We also note in matching P 0 ; P 00 that 0 is a unit for
b

++ with respect to ≈.
It remains now to show that P 0 ++ R ≈ Q0 ++ R for all R. To do this, we demonstrate
that
n
t t
(P 0 ++ R, Q0 ++ R) : ∃P, Q, t with P ; P 0 , Q ; Q0 ,
o
and P ++ R ≈ Q ++ R for all R ∪ ≈
a a
is a T -bisimulation. To match the transition P 0 ++ R −→ P 00 where P 0 −→ P 00 , we
again use the fact that P 0 ++ (a + 0) ≈ Q0 ++ (a + 0). Finally to match the transition
s s s s+t
P 0 ++ R ; P 00 ++ R0 where P 0 ; P 00 and R ; R0 , we use the above to get Q ; Q00
s
(and hence Q0 ; Q00 ) 2

Now we can finally state and prove our result.

Fact 5.14 ≅ is the largest congruence contained within ≈.

Proof: Let ≈c denote the largest congruence contained in ≈. We have from Fact 5.8 that

= is a congruence contained in ≈. It simply remains to show that ≈c is contained
within ∼
=. Hence assume that P ≈c Q. Then P ++ R ≈c Q ++ R for all R, so
P ++ R ≈ Q ++ R for all R. But then by Fact 5.13, we have that P ∼= Q. 2

5.3 Strong Faster Than

Within the calculus of TCCS, we can describe different timeout processes which will allow
a certain computation to take place within a given amount of time, but will preempt the
computation and proceed with an alternate computation if that amount of time is allowed
to pass without the desired computation being performed. As an example, the TCCS term
P ++ (1).b.0 will allow the process P to proceed within 1 unit of time, but will subsequently
perform the action b and evolve into a deadlocked 0 process if the process P does not
proceed within the required time. Hence, P ++ (1).b.0 can be regarded as a timeout
context. If we were to replace P with each of the terms (1).a.0 and (2).a.0 which represent
the processes which may perform an a action after 1 and 2 units of time respectively, then
in the first case, we would result in the process (behaviour) (1).(a.0 ++ b.0), whereas in
the second case we would result in (1).b.0 due to the different timeout scenarios. Clearly
we would want to consider the process term (1).a.0 to be faster than the process term
(2).a.0. However, if we further desired that our faster than preorder be a precongruence
(that is, that it be substitutive), then we would be led to deduce that the process term
(1).(a.0 ++ b.0) is faster than the process term (1).b.0. This is undesirable though, as
these two terms are not even functionally behaviourally equivalent: the first may perform
an a action at some point in its computation, whereas the second cannot.
The problem arises due to the preemptive nature of passing time in the semantics of
the operators — it is possible to lose the capability of following a particular computation
path through idling. In [Tof90], this problem is challenged by allowing all processes to
idle indefinitely without changing state. However, this approach involves an undesirable
weakening of the semantic development of the calculus TCCS. Instead we opt here for
a more subtle approach to weakening the expressive power of the calculus. What we
actually do is restrict our language to a subset of TCCS where the above undesirable
timeout notions are no longer definable. We need to insist that, though we can have
timeout events made available in the environment after some amount of time, we can
never lose the ability to do some other event due to the passing of time. This is not
an unreasonable restriction, particularly in the design of hardware circuits, as this is
precisely how an implementation behaves — if a port is prepared to communicate, then
this communication capability can only be removed through an actual change in state, and
not through idling. In order that we may reason about relative process speeds we work
within the liberal subcalculus `TCCS of TCCS defined in Remark 2.2, where processes
lack the power of preemption.
We can then define our faster than preorder using the following bisimulation-like
definition.

Definition 5.15 A binary relation R over agents is a -bisimulation if whenever (P, Q) ∈ R

then ∀a ∈ Act and ∀t ∈ T:

(i) if P −a→ P′ then Q ⇝s · −a→ Q″ and P′ ⇝s P″ for some s (possibly 0), Q″ and P″ with (P″, Q″) ∈ R;
(ii) if Q −a→ Q′ then P −a→ P′ for some P′ with (P′, Q′) ∈ R;
(iii) if P ⇝t P′ then Q ⇝t Q′ for some Q′ with (P′, Q′) ∈ R;
(iv) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P is strongly faster than Q, written P Q, if (P, Q) ∈ R for some -bisimulation R.

Thus the only difference between this definition and the definition of the equivalence
given above appears in the first clause: if the first (faster) process term can perform a
particular action, then the second (slower) process term can either perform that action
right away and evolve into a new process state which is slower than that into which the
first process evolved, or else it can idle for some amount of time s and reach a state in
which it can perform the action and thus evolve into a state which, while not necessarily
itself slower than that into which the first process evolved, but slower than that state once
the idling time is accounted for. As an example, we would clearly want that

a0 | (1)b0 (1)a0 | (1)b0.

Now in the faster term, the action transition

a0 | (1)b0 −a→ 0 | (1)b0

is matched in the slower term by the sequence of transitions

(1)a0 | (1)b0 ⇝1 a0 | b0 −a→ 0 | b0

and while 0 | (1)b0 is not faster than 0 | b0, we only require in the definition that 0 | b0 is faster than 0 | b0, as

0 | (1)b0 ⇝1 0 | b0.

Fact 5.16 The relation is a precongruence over `TCCS. That is, it is a preorder, and
P Q implies that C [P ] C [Q] for all `TCCS contexts C [·].

Proof: Straightforward. 2

It is clear from the similarities in the definitions of ∼ and that for P and Q being
two terms of `TCCS, if P ∼ Q then P Q (and QP ). However the reverse implication does
not hold; that is, P Q and QP does not necessarily imply that P ∼ Q. A suitable simple
counter-example is provided by the following two process terms:

A ≝ ab0 + a(1)b0 + a(2)b0          B ≝ ab0 + a(2)b0

These two processes are equally fast by the above definition, yet are not equivalent, as
A −a→ (1)b0, but for no B′ ∼ (1)b0 does B −a→ B′. Hence another equivalence of interest
is ∩ , which we represent by ∼
=.
We have another general anomaly with this — or indeed any — faster than preorder
for nondeterministic processes: we cannot guarantee that if P Q, then P will necessarily
execute faster than Q, but only that it has the capability of so doing. We would for
example insist by reflexivity that for the above process A, AA; but in executing the two
instances of A, the first (supposedly faster) version may start with an a transition to the
state (2)b0, whereas the second version may start with an a transition to the state b0.
However, this problem only arises in the presence of nondeterminism, and also vanishes if
we assume some form of built-in priority allowing faster computation paths to be followed
whenever possible.

5.4 Observational Faster Than

We extend our notion of faster than by allowing it to abstract internal communications


or τ actions.

Definition 5.17 A binary relation R over agents is a -bisimulation if whenever (P, Q) ∈ R


then ∀a ∈ Act and ∀t ∈ T :
(i) if P =a⇒ P′ then Q ≈>s · =a⇒ Q″ and P′ ≈>s P″ for some s (possibly 0), Q″ and P″ with (P″, Q″) ∈ R;
(ii) if Q =a⇒ Q′ then P =a⇒ P′ for some P′ with (P′, Q′) ∈ R;
(iii) if P ≈>t P′ then Q ≈>t Q′ for some Q′ with (P′, Q′) ∈ R;
(iv) if Q ≈>t Q′ then P ≈>t P′ for some P′ with (P′, Q′) ∈ R;

P is faster pre-equivalent to Q, written P Q, if (P, Q) ∈ R for some -bisimulation R.

Fact 5.18 is the largest strong -bisimulation, and is an equivalence relation.

Proof: Straightforward. 2

However for the usual reasons this is not substitutive for all the operators of `TCCS. In particular τ.a is related to a, but τ.a + b is not related to a + b. So we need to correct the definition as follows.

Definition 5.19 A binary relation R over agents is a -bisimulation if whenever (P, Q) ∈ R


then ∀a ∈ Act and ∀t ∈ T :
(i) if P −a→ P′ then Q ⇝s · =a⇒ Q″ and P′ ≈>s P″ for some s (possibly 0), Q″ and P″ with (P″, Q″) ∈ R;
(ii) if Q −a→ Q′ then P =a⇒ P′ for some P′ with (P′, Q′) ∈ R;
(iii) if P ⇝t P′ then Q ⇝t Q′ for some Q′ with (P′, Q′) ∈ R;
(iv) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P is faster than Q, written P Q, if (P, Q) ∈ R for some -bisimulation R.

Fact 5.20 is the largest -bisimulation, and is a precongruence. That is, it is a preorder,
and P Q implies that C [P ] C [Q].

Proof: Straightforward. 2

This is the least refinement we could have made of our original -bisimulation in order
to get a precongruence, in the sense that is the largest precongruence contained in as
stated in the following result.

Fact 5.21 is the largest precongruence contained within .

Proof: Straightforward. 2

6 Equational Theories

In the previous section, we formally defined several congruence and precongruence rela-
tions for process terms. They are all based on the notion of a bisimulation, so they can be
decided (for finite-state processes) using a partition-based algorithm. Such an automated
decision procedure is implemented in the Edinburgh Concurrency Workbench [Mol91a],
and has been used to analyze several simple timed systems such as the protocol examined
in Section 7. However, using this definition by hand can be tedious, so in this section we
consider equational (and inequational) theories for proving congruicity between processes.
Firstly, we develop a set of equational laws for the calculus TCCS. We shall use = to
represent derivability in our equational theory.
As a start, we restrict our attention to the finite sequential subset of TCCS, that is,
the subcalculus given by the following syntax.

P ::= 0 | 0̲ | a.P | a̲.P | (t).P | P + P | P ++ P

We shall refer to this subcalculus as TCCS0 .


Our equational theory for TCCS0 is presented in Figure 4; these laws are collectively
referred to by A. We can straightforwardly verify that these laws are sound with re-
spect to congruence ∼ over the full calculus TCCS, by constructing appropriate strong
congruences. Hence we get the following result.

(s)(t)x = (s + t)x 0 = (t)0

(x + y) + z = x + (y + z) (x ++ y) ++ z = x ++ (y ++ z)
x+y = y+x x ++ y = y ++ x
x+x = x x ++ x = x
x+0 = x x ++ 0 = x

(t)x + (t)y = (t)(x + y) (t)x ++ (t)y = (t)(x ++ y)

ax + (y ++ 0) = ax ++ y ax + by = ax ++ by
ax + (t)by = ax ++ (t)by

x ++ (y + z) = (x ++ y) + (x ++ z)

ax + 0 = ax ax ++ 0 = ax
ax = ax + ax (t)x ++ 0 = 0
ax = ax + (t)ax
     
ax ++ (t)y + (t) ax ++ z = ax ++ (t) y + z

Figure 4: Equational Theory A for TCCS0

ax = aτ x a(t)x = a(t)τ x τx = τx + x
   
τ x + (ay ++ z) = τ x + (ay ++ z) + ay
   
a x + (τ y ++ z) = a x + (τ y ++ z) + ay

Figure 5: Tau Laws for TCCS0

Fact 6.1 (Soundness of strong T -congruence laws) A ` P = Q implies P ∼ Q.

Proof: Straightforward, by constructing appropriate strong T -bisimulations. 2

These laws are actually complete for reasoning over TCCS0 for strong congruence, as
indicated by the following.

Fact 6.2 (Completeness of strong T -congruence laws) For P and Q in T CCS0 ,


P ∼ Q implies A ` P = Q.

Proof: See [MT90]. 2

The tau laws presented in Figure 5 are the timed versions of the analogous tau laws
for CCS as presented in [Mil89]; these taken with the theory A are collectively referred
to by Aτ . There are many varieties of tau laws in our calculus such as x ++ τ x = τ x and
a(x ++ τ y) + ay = a(x ++ τ y) + ay. However, these are all derivable from those given in
Figure 5. We can show that the laws of the theory Aτ are sound with respect to the
congruence ∼ = over the full calculus TCCS thus obtaining the following result.

Fact 6.3 (Soundness of T -congruence laws) Aτ ` P = Q implies P ∼


= Q.

Proof: Straightforward, by constructing appropriate T -bisimulations. 2

We now spend some effort on demonstrating the completeness of the theory Aτ for
reasoning about T -congruence over TCCS0 . To do this, we define several normal forms.
First we define some notation.
 
Notation We use the abbreviation [a]^t_s p to stand for (s)(a̲p ++ (t)0), which represents the process which must perform the action a and evolve into process p after time s but before time s + t.
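For instance (our illustration), [a]^2_1 p abbreviates (1)(a̲p ++ (2)0): after one unit of time the action a becomes available, and since the strong choice with the deadline (2)0 can let at most two further units pass, a must occur somewhere between times 1 and 3, after which the process behaves as p.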

Definition 6.4 A term P is in normal form (NF) if it is of the form

    Σ_{i∈I} [a_i]^{s′_i}_{s_i} p_i + (s)0

where each p_i is itself in NF, s_i may be 0, each of s and s′_i may be 0 or ∞, and s ≥ s_i + s′_i for each i ∈ I.

A term P is in saturated normal form (SNF) if it is of the form

    Σ_{i∈I} [a_i]^{s′_i}_{s_i} p_i + (s)0

where each p_i is itself in SNF, s_i may be 0, each of s and s′_i may be 0 or ∞, s ≥ s_i + s′_i for each i ∈ I, and P ⇝t · =a⇒ P′ implies P ⇝t · −a→ P′.

A term P is in disjoint saturated normal form (DSNF) if it is of the form

    Σ_{i∈I} [a_i]^{s′_i}_{s_i} p_i + (s)0

where each p_i is itself in DSNF, s_i may be 0, each of s and s′_i may be 0 or ∞, s ≥ s_i + s′_i for each i ∈ I, P ⇝t · =a⇒ P′ implies P ⇝t · −a→ P′, no p_i is of the form (t)(τp ++ 0) + 0, and if a_i = a_j and p_i ≈ p_j and s_i ≤ s_j then s_i + s′_i < s_j.

We aim to demonstrate that every term in TCCS0 can be equated to some term in
DSNF; the weaker varieties of normal forms are included simply to break down the proof
of this fact. First we demonstrate that any term can be converted into NF.

Fact 6.5 Every term in TCCS0 can be equated to a term in NF.

Proof: By induction on the structure of terms. We demonstrate here only the case of
P ++ Q. So we can assume that P and Q can be equated to NF terms.
Suppose

P = Σ_{i∈I} [a_i]^{s′_i}_{s_i} p_i + (s)0     and     Q = Σ_{j∈J} [b_j]^{t′_j}_{t_j} q_j + (t)0

are normal form expressions for P and Q. Using distributivity of ++ over + we can derive

P ++ Q = Σ_{i∈I, j∈J} ( [a_i]^{s′_i}_{s_i} p_i ++ [b_j]^{t′_j}_{t_j} q_j )
       + Σ_{i∈I} ( [a_i]^{s′_i}_{s_i} p_i ++ (t)0 )  +  Σ_{j∈J} ( [b_j]^{t′_j}_{t_j} q_j ++ (s)0 )

Each summand can easily be shown to be provably equal to a NF term. □

Next, we further convert such a term into SNF.

Fact 6.6 Every term in NF can be equated using the equational theory to a term in SNF.

Proof: We will demonstrate how to transform a term in NF into one in SNF. Hence
assume that
X s0
P = [ai ]sii pi + (s)0
i∈I

t a t a
is in NF. If P ; · =⇒ P 0 implies P ; · −→ P 0 , then P is already in SNF. The
proof will proceed now by induction on the action depth of P (hence we shall assume
t a
that each pi is in SNF) as well as the number of ways in which P ; · =⇒ P 0 where
t a t a t a
P ; · 6−→ P 0 for some t. Thus we assume that P ; · =⇒ P 0 and P ; · −→ 6 P 0.
There is some i ∈ I with si ≤ t ≤ si + s0i , and either a = ai and pi =⇒ P 0 , or ai = τ
a
and pi =⇒ P 0 . Let
X t0
pi = [bj ]tjj qj + (t)0.
j∈J

t a
First assume that a = ai and pi =⇒ P 0 . If pi ≡ P 0 then P ; · −→ P 0 , which is a
τ τ
contradiction. Hence assume that pi =⇒ P 0 . By induction, pi −→ P 0 . Thus there is
some j ∈ J such that tj = 0, bj = τ and qj ≡ P 0 . Hence
     
0
ai pi = ai p i + τ P ++ (t0j )0 = ai p i + τ P 0
++ (t0j )0 + ai P 0 ,
so ai pi = ai pi + ai P 0 .
Now
s0
 
[ai ]sii pi = (si ) (ai pi + ai P 0 ) ++ (s0i )0
s0 s0
= [ai ]sii pi + [ai ]sii pi .
s0
Thus P = P + [ai ]sii P 0 , giving our required result.
a a
Now assume that ai = τ and pi =⇒ P 0 . By induction, pi −→ P 0 . Thus there is some
j ∈ J such that tj = 0, bj = a and qj ≡ P 0 . Hence
    
0 0
τ pi = τ pi + (aP ++ (tj )0) = τ pi + aP ++ (tj )0 + aP 0
so τ pi = τ pi + aP 0 .
Now
s0
P = P + [τ ]sii pi  
= P + (si ) (τ pi + aP 0 ) ++ (si )0
   
= P + (si ) τ pi ++ (s0i )0 + (si ) aP 0 ++ (s0i )0
s0
so P = P + [a]sii P 0 , giving our required result. 2

We can almost now finish off our proof obligation by further transforming such a term
into DSNF. First though we prove the following result.

Fact 6.7 If P and Q are in DSNF and P ≈ Q, then aP = aQ.

Proof: ... 2

Fact 6.8 Every term in SNF can be equated using the equational theory to a term in
DSNF.

Proof: We will demonstrate how to transform a term in SNF into one in DSNF. Hence
assume that
X s0
P = [ai ]sii pi + (s)0
i∈I

is in SNF. The proof will proceed by induction on the number of (j,k) pairs such
that aj = ak , pj ≈ pk , and sj ≤ sk ≤ sj + s0j .
Suppose aj = ak , pj ≈ pk , and sj ≤ sk ≤ sj + s0j . Then using aj pj = ak pk we have
s0 s0
[aj ]sjj pj + [ak ]skk pk
s0

= (sj ) (aj pj ++ (s0j )0) + [aj ]skk −sj pj
 
= (sj ) aj pj ++ (sk − sj )((sj + s0j − sk )0 + (s0k )0)
max(s0j ,sk +s0k −sj )
= [aj ]sj pj .
Hence
X s0 max(s0j ,sk +s0k −sj )
P = [ai ]sii pi + [aj ]sj pj
i∈I\{j,k}

which by induction can be equated to a term in DSNF. 2

The importance of terms being expressible in DSNF is to facilitate the following result.

Fact 6.9 If P = Σ_{i∈I} [a_i]^{s′_i}_{s_i} p_i + (s)0 is in DSNF and P ≅ P + [a]^{s′}_s p, then P = P + [a]^{s′}_s p.

Proof: If there is some i such that ai = a, pi ≈ p, and si ≤ s ≤ s + s0 ≤ si + s0i , then we


can show that
s0 0 s0
[ai ]sii pi + [a]ss p = [ai ]sii pi
from which the result follows.
Hence suppose that there is no i such that ai = a, pi ≈ p, and si ≤ s ≤ s + s0 ≤ si + s0i .
We must have some i such that ai = a, pi ≈ p, and si ≤ s ≤ si + s0i . As P is in DSNF,
we can choose some t with si + s0i < t ≤ s + s0 such that there is no j with aj = a,
t a
pj ≈ p, and sj ≤ t ≤ sj + s0j . But then P ; · −→ P 0 for no P 0 ≡ p, contradicting
P ∼
0
= P + [a]ss p. 2

From here, we can now finally demonstrate the completeness of the equational theory
over TCCS0 by considering congruent terms in DSNF.

Fact 6.10 If P and Q are in DSNF and P ≅ Q, then P = Q.

Proof: Suppose
X s0 X t0
P = [ai ]sii pi + (s)0 and Q = [bj ]tjj qj + (t)0
i∈I j∈J

are in DSNF. We will demonstrate that P = P + Q, and by a symmetric argument
that Q = P + Q, from which we can deduce our result. The proof will proceed by
induction on the action depths of P and Q, so that we will assume the conclusion to
hold for all pi and qj .
t0
From this, using Fact 6.9, we can show that P = P + [bj ]tjj qj for each j ∈ J.
It is then straightforward to show that P = P + Q. 2

Corollary 6.11 (Completeness of T-congruence laws) For p and q in TCCS0, p ≅ q implies p = q.

Proof: Assume that p ∼= q for some p and q in T CCS0 . By Proposition 6.8, we can find
DSNF terms P and Q such that p = P and q = Q. But then by Proposition 6.10,
we must have that P = Q, whence we have that p = q. 2

We can add laws for reasoning completely about terms containing relabelling and
restriction, as these operators distribute through the sequential operators as expected (cf
the analogous laws in CCS as presented in [Mil89]). However, there is a problem with the
complete axiomatization of the language of finite agents containing the parallel construct.
The usual approach is via an Expansion Theorem which is used to transform any term
into a congruent sequential one. However, this cannot be accomplished in general. For
instance, the term

(1)a0 | b0

has no equivalent sequential term. Naı̈vely one might expect this term to be equivalent to
the term b(1)a0 + (1)(a0 | b0); however, this expanded term cannot perform any action
after any fraction of a time unit. In the case of a discrete time domain though, there does
exist an Expansion Theorem; this has been presented in [MT90].
In [Che91], a subcalculus of TCCS has been extended with time variables which allow
an Expansion Theorem to be proven for strong congruence over any arbitrary time domain.
However, the price paid with the introduction of time variables is a more complicated
calculus with restrictions on the use of the time variables to maintain decidability of
strong congruence.
Over the discrete time domain {1, 2, 3, . . .} we have the following progression law for
delayed actions:

a̲.p = a̲.p + (1)a̲.p
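Intuitively (our gloss, using only laws already listed in Figure 4), unfolding the law twice gives

    a̲.p = a̲.p + (1)a̲.p = a̲.p + (1)a̲.p + (2)a̲.p,

so a delayed prefix is provably able to stand idle for any whole number of time units before firing; this is what lets the expansion laws of Figure 6 below push unit time steps through a parallel composition.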

When we add the parallel operator to the finite language, we get a complete theory
for strong congruence by adding the above law to those of Figure 4 and Figure 6. For
observational congruence we get a complete theory by taking the above law, together with
those of Figure 4, Figure 5 and Figure 6.

Let X = Σ_{1≤i≤m} a_i.x_i and Y = Σ_{1≤j≤n} b_j.y_j.

(E1)  X | Y  =  Σ_{1≤i≤m} a_i.(x_i | Y) + Σ_{1≤j≤n} b_j.(X | y_j) + Σ_{a_i = b̄_j} τ.(x_i | y_j)

(E2)  X | (Y + (1)y)  =  Σ_{1≤i≤m} a_i.(x_i | (Y + (1)y)) + Σ_{1≤j≤n} b_j.(X | y_j) + Σ_{a_i = b̄_j} τ.(x_i | y_j)

(E3)  (X + (1)x) | Y  =  Σ_{1≤i≤m} a_i.(x_i | Y) + Σ_{1≤j≤n} b_j.((X + (1)x) | y_j) + Σ_{a_i = b̄_j} τ.(x_i | y_j)

(E4)  (X + (1)x) | (Y + (1)y)  =  Σ_{1≤i≤m} a_i.(x_i | (Y + (1)y)) + Σ_{1≤j≤n} b_j.((X + (1)x) | y_j)
                                  + Σ_{a_i = b̄_j} τ.(x_i | y_j) + (1)(x | y)

Figure 6: Equational Theory for Parallel Terms
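As a small illustration, consider an instance of (E4) with m = n = 1, using action names a, b, c and d chosen so that a and b are not complementary, and hence no communication summand arises (this is a worked example rather than one of the laws):

    (a.0 + (1)c.0) | (b.0 + (1)d.0)  =  a.(0 | (b.0 + (1)d.0)) + b.((a.0 + (1)c.0) | 0) + (1)(c.0 | d.0)

Either component may act immediately, or one unit of time may pass, after which only c.0 | d.0 remains. By contrast, laws (E1)–(E3) contain no (1)(· | ·) summand: if either component is unable to idle, then the whole composition is unable to idle.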

x ≤ (t)x

Figure 7: Inequational law for `TCCS


[Figure 8 shows the sender Ac (with external action accept) and the replier Tm (with external action deliver), connected by the message channel Tns from Ac to Tm and the acknowledgement channel Ack from Tm back to Ac.]

Figure 8: Alternating Bit Protocol

Taking the laws of Figure 4 together with the law of Figure 7, we have a sound and complete
system for reasoning inequationally about the strong faster-than preorder over the sequential
fragment of `TCCS. Unfortunately the expansion laws of Figure 6 cannot be expressed in a
manner wholly contained within `TCCS. However, by replacing the nil 0 and the action prefix
a.P throughout by their delayed counterparts, we arrive at valid axiom schemata.
Taking the laws of Figure 4, Figure 5 and Figure 7, we have a sound and complete set of
inequational laws for the observational faster-than preorder over the sequential fragment of
`TCCS. Again, we obtain Expansion Theorems as above.
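As an indication of how the inequational law of Figure 7 is used, note that (reading the inequations so that the smaller term is the faster one, and using the law for combining delays, (s)(t)x = (s + t)x — both assumptions of this sketch) from x ≤ (t)x we obtain a.p ≤ (1)a.p, and by substitution and transitivity

    a.p ≤ (1)a.p ≤ (1)(1)a.p = (2)a.p

over the discrete time domain: delaying a process can only make it slower.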

7 Example: The Alternating Bit Protocol

In [Mil89] Milner presents an implementation of the alternating bit protocol in CCS, and
demonstrates that the protocol is correct. Perforce this implementation ignores the exact
temporal properties of the system and its components. We extend the implementation in
CCS to one in TCCS, over the discrete time domain {1, 2, 3, · · ·}, where we can analyze
(and take advantage of) the temporal properties of the system components. In this version
some of the system complexity can be reduced by exploiting the temporal information.
This proof was checked using the Edinburgh Concurrency Workbench [Mol91a].
Our alternating bit protocol realisation is depicted in Figure 8. The process Ac will
work in the following manner. After accepting a message, it sends it with bit b along the
channel Tns and waits. Subsequently there are three possibilities:

• it times out and retransmits the message;


• it gets an acknowledgement b from the Ack line (signifying a correct transmission),
so that it can now accept another message;
• it gets an acknowledgement ¬b from the Ack line (signifying a superfluous extra
acknowledgement of an earlier message) which it ignores.

The replier Tm works in a dual manner. After a message is delivered it sends an
acknowledgement with bit b along the Ack line. Subsequently there are again three
possibilities:

• it times out, and retransmits the acknowledgement;


• it gets a new message with bit ¬b from the Tns line, which it delivers and acknowl-
edges with bit ¬b;
• it gets a repetition of the old message with bit b which it ignores.

We define the transmitter and receiver by the following TCCS processes.

Tmc(b)     def=   ack(b).Tmc(b) ++ ack(¬b).Tmc(b) ++ [Link](b).Tms(¬b)
Tms(b)     def=   ack(b).Tms(b) ++ ack(¬b).stops.Tmc(¬b)
Sends      def=   starts(b).send(b).Send(b) ++ stops.Sends
Send(b)    def=   (trt)send(b).Send(b) ++ stops.Sends
Recc(b)    def=   transmit(b).Recc(b) ++ transmit(¬b).Recc(b) ++ [Link](b).Recs(b)
Recs(b)    def=   transmit(¬b).Recs(b) ++ transmit(b).stopr.Recc(¬b)
Reply      def=   startr(b).reply(b).Replys(b) ++ stopr.Reply
Replys(b)  def=   (trt)reply(b).Replys(b) ++ stopr.Reply

The channels in the implementation have the ability to lose messages. For convenience,
we ignore actual messages and concentrate on the value of the control bits.

Ack      def=   reply(b).Ack(b)
Ack(b)   def=   (ts)ack(b).Ack + τ.Ack
Tns      def=   send(b).Tns(b)
Tns(b)   def=   (ts)transmit(b).Tns + τ.Tns

We impose the condition on the retry rate that trt > 2ts. This seems reasonable, since each
channel takes ts time units to pass a value on, so we cannot be sure that a transmission has
failed until a period of 2ts has elapsed (with the value ts = 2 used below, the smallest
admissible discrete retry rate is trt = 5). The external environment is forced to take delivery
as soon as possible. This enables us to calculate the period of time between the reception of
an accept action and the return of the process to a position where the next transmission may
be attempted.
It should be noted that we have arranged matters so that an immediate ack or transmit
action of either type is never impossible, except for a period immediately after such an
action has been received; and in that case the required evolution can take place before the
channels can produce any further actions of that type. Thus we can use undelayed actions
in the channels to force the evolution to proceed at the appropriate intervals. The initial
state of our alternating bit process is the following.
 
(Tmc(0) | Sends | Tns | Ack | Recs(1) | Reply) \ {transmit, ack, send, reply,
startr, starts, stopr, stops}

  
[Figure 9 shows a static flow diagram of the implementation: the components Tm and Sends on the transmitting side and Recs and Reply on the receiving side, connected by the channels Tns (carrying messages from Sends to Recs) and Ack (carrying acknowledgements from Reply to Tm); accept is performed on the transmitting side and deliver on the receiving side.]

Figure 9: Flow Diagram 2

A static flow diagram for the above is given in Figure 9.


Using our equational theory for observational congruence we find that our alternating
bit system is equivalent to the following process (when ts = 2 and trt = 5).

Atb(0)     =  accept.TryLR(0)
TryLR(0)   =  (2)ScLR(0) + τ.(3)τ.(τ.(2)TryLR(0) + (2)τ.TryLR(0))
ScLR(0)    =  τ.deliver.TryRL(0)
TryRL(0)   =  (2)τ.Atb(1) + τ.(3)τ.(τ.(2)TryRL(0) + (2)τ.TryRL(0))

We can define an ideal timed alternating bit protocol as follows.

Iab(b)       =  accept.ItryLR(b)
ItryLR(b)    =  (2)IsuccLR(b) + τ.(5)ItryLR(b)
IsuccLR(b)   =  deliver.ItryRL(b)
ItryRL(b)    =  (2)Iab(¬b) + τ.(5)ItryRL(b)

For the above it is clear that

Atb(0) ≈ Iab(0)

It is relatively easy to check that the time performance of transmission is linearly
affected by the number of transmission errors.
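Indeed, this can be read off the equations above (assuming, as seems natural, that each τ summand of ItryLR and ItryRL corresponds to the loss of a single message or acknowledgement in the respective channel): in the error-free case a deliver occurs 2 time units after the accept, and the next accept is possible 2 units after that; each loss, in either direction, incurs one further retry period of 5 time units before the transfer is re-attempted, so that k losses delay the cycle by exactly 5k units.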

8 A Logic for Specification

In the usual case of CCS with bisimulation semantics, we have a powerful logic for
specification in the form of Hennessy-Milner Logic (HML), as described in [HM85, Mil89].
It would be ideal if this logic extended to the calculus TCCS in some analogous fashion.
The logic in question would consist of the following formulae, which we refer to as
timed HML formulae.

φ ::= true | ¬φ | φ₀ ∧ φ₁ | ⟨a⟩φ | ⟨t⟩φ

We can define the rest of the typical formulae as derived notions in the expected fashion:

false  def=  ¬true                    [a]φ  def=  ¬⟨a⟩¬φ
φ₀ ∨ φ₁  def=  ¬(¬φ₀ ∧ ¬φ₁)           [t]φ  def=  ¬⟨t⟩¬φ

Terms in TCCS (or in `TCCS) would then satisfy such formulae according to the following
rules.

P |= true          for all P
P |= ¬φ            iff P ⊭ φ
P |= φ₀ ∧ φ₁       iff P |= φ₀ and P |= φ₁
P |= ⟨a⟩φ          iff P −a→ P′ for some P′ such that P′ |= φ
P |= ⟨t⟩φ          iff P ⤳t P′ for some P′ such that P′ |= φ
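As an illustration of these rules, the formula ⟨1⟩⟨a⟩true distinguishes (1)a.0 from (2)a.0 over the discrete time domain: the former can perform a after one unit of time, while the latter must wait two. The satisfaction relation is also easy to mechanise for finite-state processes; the following is a minimal sketch (in Python, not tied to any particular tool), in which a process is given by an action-transition relation and a unit-time-step relation, and a delay of t units is modelled — an assumption of the sketch — by t successive unit steps.

from dataclasses import dataclass

# Timed HML formulae: true, negation, conjunction, <a>phi and <t>phi.
@dataclass
class T:
    pass

@dataclass
class Not:
    phi: object

@dataclass
class And:
    phi0: object
    phi1: object

@dataclass
class DiaAct:          # <a>phi
    a: str
    phi: object

@dataclass
class DiaTime:         # <t>phi
    t: int
    phi: object

def delays(proc, t, tick):
    """States reachable from proc by letting exactly t units of time pass."""
    states = {proc}
    for _ in range(t):
        states = {q for p in states for q in tick.get(p, [])}
    return states

def sat(proc, phi, act, tick):
    """P |= phi, following the satisfaction clauses above."""
    if isinstance(phi, T):
        return True
    if isinstance(phi, Not):
        return not sat(proc, phi.phi, act, tick)
    if isinstance(phi, And):
        return sat(proc, phi.phi0, act, tick) and sat(proc, phi.phi1, act, tick)
    if isinstance(phi, DiaAct):
        return any(a == phi.a and sat(q, phi.phi, act, tick)
                   for (a, q) in act.get(proc, []))
    if isinstance(phi, DiaTime):
        return any(sat(q, phi.phi, act, tick) for q in delays(proc, phi.t, tick))
    raise ValueError(phi)

# (1)a.0 versus (2)a.0, with states named by the delay remaining before a is offered;
# the undelayed prefix a.0 is urgent and so has no time step here.
act  = {"a.0": [("a", "0")]}
tick = {"(2)a.0": ["(1)a.0"], "(1)a.0": ["a.0"]}

f = DiaTime(1, DiaAct("a", T()))        # the formula <1><a>true
print(sat("(1)a.0", f, act, tick))      # True
print(sat("(2)a.0", f, act, tick))      # False

Running the sketch reports True for (1)a.0 and False for (2)a.0 against ⟨1⟩⟨a⟩true, as expected from the clauses above.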

We can then define an equivalence ≃ between terms according to the timed HML formulae
which the terms respectively satisfy. That is, we could allow P ≃ Q if and only if

for all timed HML formulae φ, P |= φ iff Q |= φ.

In fact, this equivalence corresponds to strong congruence, as expected. Also as
expected, we can define a weak version of the logic which characterizes weak equivalence.
We would also be interested in finding asymmetric relations based on this (or some
similar) logic which would coincide with the faster-than relations. To do so, we could
imagine having to define a faster-than relation over timed HML formulae to capture the
asymmetry in the definition of the relation. Furthermore, we would have to capture the
“borrowing-of-time” notion from the first clause of the faster-than bisimulation definitions.

9 Conclusion

We have presented the theory of observational congruence over the calculus TCCS. This
extends the work on the calculus introduced in [MT90]. Our language is one of many
attempts to provide a concurrent calculus for reasoning about the effects of time. Similar
works can be found for example in [Mil83, BB91, Che91, HR90, Gro90, RR86, NRSV90,

Wan90]. We believe that TCCS gives one of the clearer accounts of the nature of non-
determinism that can occur within temporal systems. Though syntactically it is a simple
extension to CCS, its naturality is demonstrated by the similarity of the equational ex-
tensions required to address the inclusion of abstraction.
The syntax of the calculus has changed somewhat from that of the original paper in several
respects. Firstly, the arbitrary delay operator δ.P has been replaced by the delaying
constructs presented above, namely delayed action prefix a.P and nontemporal nil 0,
as these are the only interesting occurrences of temporal delay. More importantly, the
weak and strong choice operators have changed from ⊕ and + respectively to + and
++ respectively. Though this may lead to confusion, there are several reasons for this
recasting of the syntax.

• The choice operator of CCS is more naturally represented in a timing framework
by the weak choice operator. In particular, the Expansion Theorem presented in
[MT90] is expressed using the weak sum of prefixed terms.
• The ⊕ operator has a usual interpretation (as advocated for example by [Hoa78,
Hen88]) as internal nondeterminism.
• Most other timed calculi have only one explicit choice operator, generally represented
by the + operator, which for the most part closely resembles the weak choice operator
of TCCS. Hence the new syntax allows for an easier comparison of the various calculi.
• The calculus is implemented in the Edinburgh Concurrency Workbench [Mol91a].
For this reason, some ASCII symbol had to be found to replace the ⊕ operator in the user interface.

The definition of behavioural abstraction which we have presented seems the natural
extension to that of CCS for a timed system. The cases where the equivalence is not a
congruence show some interesting properties of silent actions with respect to temporal
behaviour, notably the fact that (t)τ.(s)P ≇ (t + s)P.
The equational theory is a simple and natural extension of the basic equational theory
of TCCS, matching one's intuitions in the light of the additions required to the theory of
CCS to represent abstraction.
Finally, as we demonstrated with our example, it is possible to derive the properties
of highly complex systems, without the terms of the calculus themselves becoming too
complicated.

References
[BB91] Baeten, J.C.M. and J.A. Bergstra, Real Time Process Algebra, Formal Aspects
of Computing, Vol 3, 1991.
[BK85] Bergstra, J.A. and J.W. Klop, Algebra for Communicating Processes with
Abstraction, Journal of Theoretical Computer Science, Vol 37, 1985.
[Bou85] Boudol, G., Notes on Algebraic Calculi of Processes, Logics and Models of
Concurrent Systems, NATO ASI Series f13 (K. Apt, ed), 1985.

[BHR84] Brookes, S.D., C.A.R. Hoare and A.W. Roscoe, A Theory of Communicating
Sequential Processes, Journal of the ACM, Vol 31, 1984.
[Cle89] Cleaveland, R., J. Parrow and B. Steffen, The Concurrency Workbench: A
Semantics-based Verification Tool for Finite-state Systems, Proceedings of
the Workshop on Automated Verification Methods for Finite-state Systems,
Lecture Notes in Computer Science 407, Springer-Verlag, 1989.
[Che91] Chen, L., Decidability and Completeness in Real Time Processes, Research
Report No ECS-LFCS-91-185, University of Edinburgh Computer Science De-
partment, 1991.
[Gro90] Groote, J.F., Specification and Verification of Real Time Systems in ACP,
Proceedings of the 10th International Symposium on Protocol Specification,
Testing and Verification, Ottawa, 1990.
[HM85] Hennessy, M.C. and R. Milner, Algebraic Laws for Nondeterminism and Con-
currency, Journal of the ACM, Vol 32, No 1, 1985.
[Hen88] Hennessy, M.C., Algebraic Theory of Processes, MIT Press, 1988.
[HR90] Hennessy, M. and T. Regan, A Temporal Process Algebra, Technical Report
No. 2/90, University of Sussex Computer Science Department, April 1990.
[Hoa78] Hoare, C.A.R., Communicating Sequential Processes, Communications of
the ACM, Vol 21, 1978.
[Hoa85] Hoare, C.A.R., Communicating Sequential Processes, Prentice–Hall In-
ternational, 1985.
[Mil80] Milner, R., A Calculus of Communicating Systems, Lecture Notes in
Computer Science 92, Springer-Verlag, 1980.
[Mil83] Milner, R., Calculi for Synchrony and Asynchrony, Theoretical Computer
Science, Vol 25, 1983.
[Mil89] Milner, R., Communication and Concurrency, Prentice–Hall Interna-
tional, 1989.
[Mil89a] Milner, R., A Complete Axiomatisation for Observational Congruence of
Finite-State Behaviours, Information and Computation, Vol 81, 1989.
[MT90] Moller, F. and C. Tofts, A Temporal Calculus of Communicating Systems,
Proceedings of CONCUR’90, Lecture Notes in Computer Science 458, Ams-
terdam, August 1990.
[MT91] Moller, F. and C. Tofts, Relating Processes with Respect to Speed, Proceed-
ings of CONCUR’91, Lecture Notes in Computer Science 527, (Theories of
Concurrency: Unification and Extension), Amsterdam, August 1991.
[MT92] Moller, F. and C. Tofts, Behavioural Abstraction in TCCS, Proceedings of
ICALP’92, Lecture Notes in Computer Science, Vienna, July 1992.
[Mol91] Moller, F., Process Algebra as a Tool for Real Time Analysis, Proceedings of
the IV Banff Higher Order Workshop, Springer-Verlag, 1991.
[Mol91a] Moller, F., The Edinburgh Concurrency Workbench (Version 6.0), Technical
Note No LFCS-TN-34, University of Edinburgh Computer Science Depart-
ment, 1991.

[NRSV90] Nicollin, X., J.L. Richier, J. Sifakis and J. Voiron, ATP: An Algebra for Timed
Processes, Proceedings of IFIP Working Conference on Programming Con-
cepts and Methods, North Holland, 1990.
[Par81] Park, D.M.R., Concurrency and Automata on Infinite Sequences, Lecture
Notes in Computer Science 104, Springer–Verlag, 1981.
[Plo81] Plotkin, G.D., A Structural Approach to Operational Semantics, Re-
port DAIMI FN-19, Computer Science Department, Århus University, Den-
mark, 1981.
[RR86] Reed, G.M. and A. Roscoe, A Timed Model for Communicating Sequential
Processes, Proceedings of ICALP’86, Lecture Notes in Computer Science 226,
Springer-Verlag, 1986.
[Sut89] Sutherland, I.E., Micropipelines, Communications of the ACM, Vol 32, No 6,
pp720-738, 1989.
[Tof88] Tofts, C., Temporal Ordering for Concurrency, Research Report No ECS-
LFCS-88-48, University of Edinburgh Computer Science Department, 1988.
[Tof90] Tofts, C., Proof Systems and Pragmatics for Parallel Programming, PhD The-
sis, University of Edinburgh, 1990.
[Udd86] Udding, J.T., A Formal Model for Defining and Classifying Delay-Insensitive
Circuits and Systems, Distributed Computing, Vol 1, 1986.
[Wan90] Wang Yi, Real-time Behaviour of Asynchronous Agents, Proceedings of CON-
CUR’90 (Theories of Concurrency: Unification and Extension), Amsterdam,
August 1990.

