A Temporal Calculus of Communicating Systems
(DRAFT)

Faron Moller                              Chris Tofts
Department of Computer Science            Department of Computer Science
University of Edinburgh                   University College Swansea
fm@[Link]                                 cschris@[Link]
Abstract
In this paper we describe the calculus TCCS, an extension of the process algebra
CCS with temporal constructs. The calculus is useful for the formal analysis of
the timing aspects of systems, as well as their functional behaviour. Such aspects
include when events can and must occur, as well as how fast a process may execute.
The paper includes extensive examples, as well as a detailed description of the
theoretical underpinnings of the language. We also include a description of a logic
for specification, which is an extension of Hennessy-Milner logic. Finally, we present
a brief comparison with various other similar approaches to the analysis of the
temporal properties of systems.
Contents

1 Introduction
3 Examples
   3.1 Mutual Exclusion
   3.2 Clocks
   3.3 Horse Racing
   3.4 Simple Gates
5 Bisimulation Relations
   5.1 Strong Similarity and Bisimilarity
   5.2 Observational Similarity and Bisimilarity
   5.3 Strong Faster Than
   5.4 Observational Faster Than
6 Equational Theories
9 Conclusion
1 Introduction
underpinning of the language has undergone development in [Tof90, MT90, Mol91, MT91,
MT92]. We present the main technical results and several examples of the descriptive
use of the calculus. The calculus has been implemented in the Edinburgh Concurrency
Workbench (Version 6.0), a prototype tool for the analysis of concurrent systems. Details
of this implementation can be found in [Mol91a].
The paper is organized as follows. In Section 2 we introduce the syntax of the calculus,
with an informal description of each of the constructs of the language. In Section 3 we
present several simple examples which demonstrate the use of TCCS for describing sys-
tems. In Section 4 we formalize the definition of the language by providing an operational
semantic interpretation of the calculus. In Section 5 we present several preorder and
equivalence relations based on the notion of bisimilarity of [Par81]. These relations capture various notions of equivalence and simulation, as well as of when processes are functionally equivalent but temporally distinct; that is, when one executes faster than the other. In Section 6 we provide axiom systems for reasoning about systems syntactically. In Section 7
we present a nontrivial example of the use of the calculus, in the form of the analysis
of an alternating bit protocol. In Section 8 we describe a logic for specification based
on Hennessy-Milner Logic HML. Finally, in Section ?? we compare the present approach
with that of several other similar approaches.
We include proofs of all technical results. However, we keep these brief by referring extensively to [Mil89], where the corresponding results are proven for the symbolic calculus CCS. We assume the reader is familiar with the proofs referred to, and we provide only the refinements required to adapt them to the setting of our timed calculus.
2 The Language TCCS
To define the language, we first presuppose a set L of atomic action symbols not containing
τ or ε, and we define Act = L ∪ {τ}. We assume that L can be partitioned into two equinumerous sets, with a complementation bijection between them mapping each action a to its complement ā, where complementation is an involution (the complement of ā is again a). These complementary actions form the basis of the handshake communication
used in our calculus, analogous to that of CCS. We also presuppose some set X of process
variables, and take T to represent some time domain (that is, a totally ordered set,
bounded below by, but not containing, 0, the unit of a natural addition operation +
defined on T ), be it for instance the discrete domain of positive integers, or the dense
domain of positive rationals, or the continuous domain of positive reals. We shall for the
most part not put any restriction on the time domain, but we shall present some results
which hold only over some particular time domain such as the integers.
The collection of TCCS expressions, ranged over by P , is then defined by the BNF expression given in Figure 1, where we take a ∈ Act, x ∈ X , t ∈ T , and f ranging over relabelling functions: those f : Act → Act such that f(ā) is the complement of f(a) for a ≠ τ, and f(τ) = τ.
We also stipulate that all variables appear bound and guarded; that is, each variable x appears within the scope of an action or temporal prefix, which in turn appears within the scope of a recursive construct µx binding x. The intuitive interpretation of these terms can be
P ::= 0           nil process
    | 0̲           nontemporal nil process
    | x           process variable
    | a.P         (atomic) action prefix
    | a̲.P         delayed action prefix
    | (t).P       temporal prefix
    | P + P       weak choice
    | P ++ P      strong choice
    | P | P       parallel composition
    | P \ a       restriction
    | P [f ]      relabelling
    | µx.P        (recursive) process definition

Figure 1: The syntax of TCCS, with the underscored forms denoting the delayed variants
given as follows.
– 0 represents the completely dead process. It can neither perform any computation,
nor can it witness the passage of any time. In terms of a concrete machine, it may
be thought of as one which is shut down or with its “plug pulled”. One of the
consequences is that whenever 0 is a concurrent component of a machine, then the
whole machine will be temporally dead; the local temporal deadlock gives rise to a
global temporal deadlock.
– 0̲ represents the process which can never perform any actions, but rather idles forever.
– a.P represents the process which can perform the (atomic) action a and evolve into
the process P upon so doing. The process cannot progress through time before
performing the action a, and the action is assumed to occur instantaneously in
time. If we wish to model a system where actions have some amount of duration
(for instance, if we wish to talk of the computation of some hardware component
with a certain known propagation delay), then we can naturally treat the action as a
composite event consisting of the two atomic actions a_s (the start of a) and a_f (the finish of a), with the two actions separated in time by the appropriate duration t_a.
Also note that atomic actions can be causally related without one being temporally
dependent on the other. For instance, if we consider the process a.b.P , the action b
is causally dependent on the action a in that it cannot have occurred unless (notice
that we do not say until) the action a has occurred; however, the two actions are
considered to happen simultaneously in time.
– a̲.P represents the process which can perform the (atomic) action a, but will allow any amount of time to pass before doing the action.
– (t).P represents the process which will do no computation for an amount of time t,
but at that point in time will commence behaving as the process P . Thus prefixing
P by a composite event a consisting of start and finish actions a_s and a_f respectively with duration t_a would be written as a_s.(t_a).a_f.P.
– P + Q represents a choice between the two processes P and Q. The process behaves
as the process P or the process Q, with the choice being made at the time of the first
action, or else upon a passage of time which only one of the operands can allow. In this latter case, the temporally deadlocked operand is dropped from the computation. This operator is referred to
as weak choice.
– P ++ Q represents a choice between the two processes P and Q in which, in contrast to weak choice, time may pass only when both operands allow it; as with weak choice, the choice itself is resolved by the first action. This operator is referred to as strong choice.

– P | Q represents the parallel composition of the two processes P and Q, which proceed concurrently and may communicate via complementary actions, such a communication appearing as the internal action τ; time passes in the composition only when both components allow it.
– P \ a represents the process P with the action a ∈ L (as well as its complement
action a) restricted away, that is, not allowed to occur.
– P [f ] represents the process P with its actions renamed by the relabelling function
f.
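For readers who prefer a concrete rendering, the grammar of Figure 1 transcribes directly into a small abstract-syntax sketch. The Python names below are illustrative choices, not the paper's; the `delayed` flags stand in for the underscored variants.

```python
# A minimal AST for the TCCS grammar of Figure 1 (illustrative names).
from dataclasses import dataclass

class Proc:
    pass

@dataclass
class Nil(Proc):            # 0; delayed=True models the underscored, idling nil
    delayed: bool = False

@dataclass
class Var(Proc):            # process variable x
    name: str

@dataclass
class Prefix(Proc):         # a.P; delayed=True models the underscored prefix
    action: str
    body: Proc
    delayed: bool = False

@dataclass
class TDelay(Proc):         # (t).P: temporal prefix
    t: int
    body: Proc

@dataclass
class Choice(Proc):         # P + Q (weak) or P ++ Q (strong)
    left: Proc
    right: Proc
    strong: bool = False

@dataclass
class Par(Proc):            # P | Q
    left: Proc
    right: Proc

@dataclass
class Restrict(Proc):       # P \ a
    body: Proc
    action: str

@dataclass
class Relabel(Proc):        # P[f]
    body: Proc
    f: dict

@dataclass
class Rec(Proc):            # mu x. P
    var: str
    body: Proc

# The term a.0 + (t).b.0 (rendered a + (t)b later in the text), with t = 2:
example = Choice(Prefix("a", Nil()), TDelay(2, Prefix("b", Nil())))
```

A term such as µx.a.x then becomes `Rec("x", Prefix("a", Var("x")))`, a small tree of these constructors.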
Remark 2.1 (The Calculus CCS) The symbolic calculus CCS can be viewed as the subcalculus of TCCS obtained by omitting the deadlocked nil 0, action prefix a.P , temporal prefix (t).P and strong choice P ++ Q. In CCS, the interpretation of behaviour is that it occurs within time, but that the time is abstracted away; one action occurs after another, but there is no way of expressing or knowing how much time may or must pass before this second action occurs. The nil process of CCS is interpreted by 0̲ rather than 0, as time can always pass when it is being ignored. For a similar reason, action prefix is interpreted by a̲.P rather than a.P . There is no place for temporal prefix in the subcalculus for CCS, as the delaying would simply be ignored. Finally, in this situation, where no temporal deadlocking can occur, the two choice operators coincide semantically, so the strong choice operator is dropped.
It would have made more sense to reverse the rôles of the two nil constants, as well
as the two action prefix operators to stay consistent with the standard syntax of CCS
(where the nil and prefix constructs are written without underscores). However, we chose
to abide by the convention introduced in [BB91] of having undecorated actions interpreted
as immediate events and underscored actions interpreted as delayed events.
Remark 2.2 (The Calculus ℓTCCS) By adding to the above subcalculus for CCS the operation of temporal prefix (t).P , we arrive at a subcalculus which can express exactly those processes which can never deadlock temporally; every process term must allow time to pass. (This property can be proved using a simple structural inductive argument; at this point, however, the claim can only be based on the informal explanation of the constructors of the language.) This proves to be a useful subcalculus, both practically and theoretically. Practically, we can imagine its utility for example in modelling hardware components which are connected in a pipelined fashion, in which the components have different propagation delays. One component may be ready to receive data from the previous component, but that component might not yet be prepared to send the data. In this case, the data request will remain at the input port of the first component until such time as the sending component is prepared to send it. In this sense, the components are delay insensitive, in a sense akin to that of [Udd86]. Theoretically, this subcalculus is important when we consider comparing the relative speeds of functionally equivalent processes. This aspect is explored in Section 5. We shall henceforth emphasize the importance of this subcalculus by referring to it as ℓTCCS, or liberal TCCS.
As a point of notation, we shall occasionally omit the dot when applying the prefix operators, and also drop trailing 0's, thus for instance rendering a.0 + (t).b.0 as a + (t)b. Also, we shall allow the prefix operators to take precedence over the concurrency operator, which will take precedence over the recursion operator, which finally in turn will take precedence over the choice operators; outside of this, the binary operators will be left associative, and we shall freely use parentheses to override the precedence rules. We shall allow ourselves to write P \L for L ⊆ L, and interpret it in the expected fashion. Finally, we shall allow ourselves to specify processes definitionally, by providing recursive definitions of processes. For example, we shall write A def= a.A rather than A def= µx.a.x.
Each process term P has an associated sort L(P ) which represents the set of actions
which the process is potentially capable of performing during its execution. This syntactic
property is defined as follows.
Throughout the sequel, we reserve the symbol ≡ to represent syntactic identity.
3 Examples
In this section, we present several simple examples of using TCCS for describing (compo-
nents of) concurrent systems.
3.1 Mutual Exclusion
A def= a.A′ + (1).B

B def= b.B′ + (1).A
Here we are representing a system which is allowing one of two possible computation
paths to be followed, and furthermore allowing the choice to be determined by time, in
that the first path can be followed only at even time units, whilst the second path can
be followed only at odd time units. This system can be considered as the basis of, for
example, a mutual exclusion algorithm for two independent processes sharing a common
critical section. We can see this by interpreting the actions a and b as the first and second
processes entering the critical section, respectively. Then the processes A′ and B′ can be reasonably defined by A′ def= c.B and B′ def= d.A, where the actions c and d represent
the first and second processes exiting the critical section, respectively. We can trivially
generalise this system to give a simple mutual exclusion algorithm for any number of
processes sharing a common resource.
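The alternation enforced by the one-unit delays can be sketched as a toy simulation. The encoding below hand-rolls this particular example (it is not a general TCCS interpreter); the state names A1, B1 for A′, B′ are our own, and we make the simplifying assumption that the exit actions stay on offer while time passes.

```python
# States A, B offer the entry actions a, b; A1, B1 offer the exit actions c, d.
# Idling one time unit swaps A and B, so a is only on offer at even times and
# b only at odd times; the exit states idle in place (a modelling assumption).
ACT = {"A": ("a", "A1"), "B": ("b", "B1"), "A1": ("c", "B"), "B1": ("d", "A")}
IDLE = {"A": "B", "B": "A", "A1": "A1", "B1": "B1"}

def run(choices, state="A"):
    """choices: 'act' to perform the offered action, 'idle' to let 1 unit pass."""
    trace = []
    for c in choices:
        if c == "act":
            action, state = ACT[state]
            trace.append(action)
        else:
            state = IDLE[state]
    return trace

# Entries strictly alternate with exits, so the critical section is exclusive:
print(run(["act", "act", "act", "act"]))   # first process enters/exits, then second
```

Every trace interleaves an entry with the corresponding exit before the other process can enter, which is exactly the mutual-exclusion property claimed above.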
Note that this example can be interpreted over a real time domain, as well as an
integer time domain; it just insists that accesses via the actions a and b occur at integral
moments in time. In the next example, we present an inherently real-time system.
3.2 Clocks
In this example, we present three clocks of varying accuracy. This example comes from
[BB91], and shows how inherently real-time systems can be elegantly modelled in TCCS.
The three clocks are defined as follows.
C0 def= (1).tick.C0

C1 def= (1 − ε).(tick.0 ++ (2ε).0) | (1).C1

C2 def= (1 − ε).(tick.C2 ++ (2ε).0)
The first clock is a perfect clock, with its ticks occurring exactly one unit of time apart.
The second clock keeps accurate time, though its ticks may individually fluctuate within
ε of when they are supposed to occur. Finally, the third clock is inaccurate, in that its
ticks fluctuate as with the second clock, but these fluctuation errors are cumulative.
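The difference between the second and third clocks can be illustrated numerically. The simulation below is an informal sketch, not part of the calculus; the per-tick errors are hypothetical values, each within ε of zero.

```python
# Hypothetical per-tick fluctuations, each within eps of zero:
eps = 0.1
errors = [0.08, 0.09, 0.07, 0.09, 0.06]

# The second clock schedules tick i against the absolute time i+1, so each
# error stays local to its tick:
c1_ticks = [(i + 1) + e for i, e in enumerate(errors)]

# The third clock schedules each tick relative to the previous one, so the
# errors accumulate:
c2_ticks, now = [], 0.0
for e in errors:
    now += 1 + e
    c2_ticks.append(now)

c1_drift = max(abs(t - (i + 1)) for i, t in enumerate(c1_ticks))
c2_drift = max(abs(t - (i + 1)) for i, t in enumerate(c2_ticks))
print(c1_drift <= eps, c2_drift > eps)  # bounded drift vs cumulative drift
```

After only five ticks the cumulative clock has already drifted well beyond ε, while the accurate clock never does.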
3.3 Horse Racing

Suppose we have some collection of (distinct) times ti ∈ T (1 ≤ i ≤ n), and we define the simple processes Hi by

Hi def= (ti).finishi.0.
These processes can be viewed as horses competing in a race; the time which horse i takes
to cross the finish line is given by ti .
Following this interpretation, we denote a race by the parallel composition of the horse processes:

Race def= H1 | H2 | · · · | Hn.

Interpreting equality informally as “having the same transitions,” the only behaviour of Race is to idle until time tm and then report finishm, where m is such that tm < ti for each i ≠ m. Hence the effect of a race is to report the winning horse.
If we now denote another process Race+ by

Race+ def= H1 + H2 + · · · + Hn,

then Race = Race+.
This anomaly, where parallel computation is confused with (nondeterministic) choice, has an analogy in computation theory, where the process Race may be viewed as capturing the nondeterministic Turing machine notion of acceptance: try each path in parallel, and accept if one of the paths succeeds.
Note that if we changed the definition of our horses slightly by defining the processes as

Hi′ def= (ti).finishi.0,

then this analogy somewhat breaks down. The corresponding race processes Race′ and Race′+ no longer have the same transition sequences, as Race′+ still only reports the winning horse, whereas Race′ reports the order of all subsequent horses.
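The contrast between the primed race processes can be tabulated concretely. The finishing times below are hypothetical, and the two traces simply transcribe the behaviour described in the text.

```python
# Hypothetical finishing times t_i for three horses:
times = {1: 3.0, 2: 1.5, 3: 2.2}

# Race' = H1' | H2' | H3': time keeps passing, so every finish_i occurs,
# ordered by the t_i:
race_trace = ["finish%d" % i for i, _ in sorted(times.items(), key=lambda kv: kv[1])]

# Race'+ = H1' + H2' + H3': only the winner's finish is reported:
winner = min(times, key=times.get)
race_plus_trace = ["finish%d" % winner]

print(race_trace)       # full finishing order
print(race_plus_trace)  # winner only
```

With these times the parallel race reports horses 2, 3 and 1 in that order, while the choice form reports only horse 2.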
3.4 Simple Gates

Consider the definition of an And gate in CCS, using the paradigm of transition signalling [Sut89], where only changes in the signals (from high to low or low to high) are observed. We label the input lines of the gate by a and b, and the output line by c. Its definition is as follows.
A def= [Link] + [Link]          Ab def= [Link] + b.A

Aa def= a.A + [Link]            Aab def= [Link] + [Link]
In the state Aa if the line b goes high followed by line a going low, then we will observe
two output transitions, representing the output going high and then low. This must occur
no matter what the separation in time between the two signals. However if the changes
occur in the opposite order then no change will occur. If we use the component in some
external sense, then we can ensure that there is no temporal collision between changes on
the a and b lines. However when this process is used as part of a bigger system, it will
often be the case that changes on the a and b lines should be considered as simultaneous.
Unfortunately it is clear from the above that the order in which we interleave the a and b
actions is critical to the behaviour of the above system. Technically, the system is not confluent (in an external sense) with respect to a and b transitions.
Whilst it may be important for circuit designers to be wary of the possibility of prob-
lems caused by such races, it is often the case that the designer is aware of the problem
and knows that the physics of the circuit will in fact render the race irrelevant. For in-
stance, the response time of the device may be sufficiently slow to ensure that both the
inputs change before the output is changed. However we still want the ability to respond
to a single line change. Basically we need a period during which if both lines change
then we consider them as simultaneous; otherwise we consider them as separate. We can
model this easily in TCCS using a timeout construct. Clearly in real hardware this time
period will be very short; for convenience we shall assume it has duration 1 and all other
durations have been adjusted to take this into account.
And def= [Link] + [Link]

Anda def= [Link] + b.([Link] ++ [Link] ++ (1).[Link])

Andb def= a.([Link] ++ [Link] ++ (1).[Link]) + [Link]

Andab def= a.([Link] ++ [Link] ++ (1).[Link]) + b.([Link] ++ [Link] ++ (1).[Link])
If we now consider the state Anda , if transitions occur on a and b within one time unit
of each other, then the circuit responds by moving to state Andb irrespective of the order
of arrival. Unfortunately it is still the case that if the transitions differ by at least 1 unit
of time, this may not work; however, defining this threshold duration, and ensuring that
the environment using the component respects this threshold, is the responsibility of the
circuit designer.
4 Operational Semantics
We have now seen the whole of the language TCCS, along with several examples of its
use in describing systems. With these examples, we gave informal explanations of their
behaviours, based on informal intuitions inferred from the description of the language
constructs. However, to perform rigorous (mathematical) analyses of systems, we must
provide a mathematical semantic definition of the language. This is provided in the
present section.
The semantics of TCCS is transition based, structurally presented in the style of
[Plo81], outlining what actions and time delays a process can witness. In order to define our semantics, however, we must first define a function which will allow us to describe
when a process may idle for a particular amount of time. This function is defined as
follows.
Definition 4.1 The function | · |T : TCCS → {0, 1, 2, . . . , ∞} defines the maximum delay
which a process may allow before forcing a computation (or deadlock) to occur. Formally
this function is defined as follows:
This function is well defined over TCCS, due to the requirement that all terms be
closed and guarded; it would be impossible however to assign a unique value to a term
containing an unguarded recursive variable, such as µx.x.
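The defining table for | · |T is not reproduced above, so the following sketch reconstructs plausible clauses from the informal descriptions of the operators and from the delay rules of Figure 3; it is our inference, not the paper's own table.

```python
# Tagged-tuple terms, e.g. ("tdelay", t, P) for (t).P; names are illustrative.
INF = float("inf")

def max_delay(p):
    """Greatest time the term may idle before an action (or deadlock) is forced."""
    tag = p[0]
    if tag == "nil":                 # the dead nil 0: no time may pass
        return 0
    if tag == "idle":                # the idling nil: time passes forever
        return INF
    if tag == "prefix":              # an undecorated prefix a.P is immediate
        return 0
    if tag == "dprefix":             # a delayed prefix may idle indefinitely
        return INF
    if tag == "tdelay":              # (t).P idles for t, then behaves as P
        return p[1] + max_delay(p[2])
    if tag == "wchoice":             # weak choice: a deadlocked side is dropped
        return max(max_delay(p[1]), max_delay(p[2]))
    if tag in ("schoice", "par"):    # both sides must allow the delay
        return min(max_delay(p[1]), max_delay(p[2]))
    if tag in ("restrict", "relabel"):
        return max_delay(p[1])
    if tag == "rec":                 # guardedness makes this recursion terminate
        return max_delay(p[2])
    raise ValueError("unguarded variable or unknown term: %r" % (tag,))

a0 = ("prefix", "a", ("nil",))
assert max_delay(("tdelay", 2, a0)) == 2      # (2).a.0 may idle for exactly 2
assert max_delay(("wchoice", a0, ("tdelay", 3, a0))) == 3
assert max_delay(("schoice", ("idle",), a0)) == 0
```

Note that a term such as µx.x, encoded as `("rec", "x", ("var", "x"))`, makes the function raise rather than return a value, mirroring the remark above about unguarded recursive variables.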
Figure 2: The action transition rules of TCCS.

    ⊢  a.P −a→ P                               ⊢  a̲.P −a→ P
    P −a→ P′  ⊢  P + Q −a→ P′                  Q −a→ Q′  ⊢  P + Q −a→ Q′
    P −a→ P′  ⊢  P ++ Q −a→ P′                 Q −a→ Q′  ⊢  P ++ Q −a→ Q′
    P −a→ P′  ⊢  P | Q −a→ P′ | Q              Q −a→ Q′  ⊢  P | Q −a→ P | Q′
    P −a→ P′, Q −ā→ Q′  ⊢  P | Q −τ→ P′ | Q′
    P −a→ P′  ⊢  P\b −a→ P′\b   (a ≠ b, b̄)
    P −a→ P′  ⊢  P[f] −f(a)→ P′[f]
    P{µx.P/x} −a→ P′  ⊢  µx.P −a→ P′

Figure 3: The temporal transition rules of TCCS.

    ⊢  0̲ ⇝t 0̲                                 ⊢  a̲.P ⇝t a̲.P
    ⊢  (s + t).P ⇝s (t).P                      ⊢  (t).P ⇝t P
    P ⇝s P′  ⊢  (t).P ⇝s+t P′
    P ⇝t P′, Q ⇝t Q′  ⊢  P + Q ⇝t P′ + Q′
    P ⇝t P′  ⊢  P + Q ⇝t P′   (|Q|T < t)
    Q ⇝t Q′  ⊢  P + Q ⇝t Q′   (|P|T < t)
    P ⇝t P′, Q ⇝t Q′  ⊢  P ++ Q ⇝t P′ ++ Q′
    P ⇝t P′, Q ⇝t Q′  ⊢  P | Q ⇝t P′ | Q′
    P ⇝t P′  ⊢  P\a ⇝t P′\a
    P ⇝t P′  ⊢  P[f] ⇝t P′[f]
    P{µx.P/x} ⇝t P′  ⊢  µx.P ⇝t P′

In Figures 2 and 3, we present the operational rules for our language. They are presented in a natural deduction style, with the premises of each rule written to the left of the turnstile ⊢ and its conclusion to the right: if the relation(s) on the left can be inferred, then we can infer the relation on the right. Our transitional semantics over TCCS is then defined by the least relations −→ ⊆ TCCS × Act × TCCS and ⇝ ⊆ TCCS × T × TCCS (written P −a→ Q and P ⇝t Q for (P, a, Q) ∈ −→ and (P, t, Q) ∈ ⇝, respectively) satisfying the rules laid out in Figures 2 and 3. These could have been presented as a single transition system −→ ⊆ TCCS × (Act ∪ T ) × TCCS, as long as Act and T were disjoint; they are presented separately to distinguish clearly between the two notions. Notice that these rules respect the informal description of the constructs given in the previous section.
We shall often write P −a→ to mean that P −a→ P′ for some P′, and P −a↛ to mean that P −a→ P′ for no P′. We use similar abbreviations for ⇝t and for the derived transition relations defined later.
There are a few points of coherence in the semantic transition relations which are
worth noting. Our first point justifies the informal description of the function | · |T .
Fact 4.2 P ⇝t if and only if |P|T ≥ t.
We state two further points of interest regarding the idling behaviour of process terms.
The first concerns the temporal determinism of our model, in that it demonstrates that
the passage of time cannot in itself cause nondeterministic behaviour. The latter concerns
the coherence of the passage of time.
Fact 4.3

• If P ⇝t P′ and P ⇝t P″, then P′ ≡ P″.

• P ⇝s P′ ⇝t P″ for some P′ if and only if P ⇝s+t P″.
We shall eventually want to reason about the external behaviour of systems, where
internal communication actions τ are ignored. To do this, we shall require the following
definitions for τ abstractions.
Definition 4.4 If s ∈ Act*, then ŝ ∈ Λ is the sequence s with all occurrences of τ deleted.
Definition 4.5

=ε⇒ = (−τ→)*   (also written =⇒)
=as⇒ = =⇒ −a→ =s⇒
≈t> = =⇒ ⇝t1 =⇒ · · · =⇒ ⇝tn =⇒,  where t = t1 + · · · + tn.
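The τ-abstraction of these derived relations can be sketched as a reachability computation over a labelled transition system; the transition table below is hypothetical, chosen only to exercise the definitions.

```python
# A hypothetical labelled transition system; "tau" stands for the internal action.
TRANS = {
    "P":  [("tau", "P1")],
    "P1": [("a", "P2")],
    "P2": [("tau", "P3")],
    "P3": [],
}

def tau_closure(states):
    """All states reachable from `states` by any number of tau moves (=⇒)."""
    seen, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for lbl, t in TRANS[s]:
            if lbl == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_moves(state, action):
    """All states reachable from `state` by =action⇒: taus, the action, taus."""
    before = tau_closure({state})
    after = {t for s in before for lbl, t in TRANS[s] if lbl == action}
    return tau_closure(after)

print(sorted(weak_moves("P", "a")))  # states reachable by a weak a-transition
```

Here P can perform a weak a-transition to either P2 or P3, although its only strong transition is the initial τ.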
Finally, as one last point of notation, we shall sometimes allow ourselves to write (0)P to be read as P, (∞)P to be read as 0̲, and P ⇝0 Q to be read as P ≡ Q. However, we shall make it clear when this is allowed, so that unless stated, temporal constants from T are always to be assumed to be finite and positive.
5 Bisimulation Relations
Definition 5.1 For any binary relation R over closed TCCS terms, we let E R F whenever E{P̃/x̃} R F{P̃/x̃} for all instantiations P̃ of the free variables x̃ of E and F.
5.1 Strong Similarity and Bisimilarity
Milner defines simulation and bisimulation relations over CCS processes informally as
follows: a process P simulates another process Q if whatever event Q may perform, P
may mimic this event and evolve into a new state which continues to be in the simulation
relation with the new state of Q. Bisimilarity of two processes is then simply defined by
requiring that the simulation relation is symmetric, that is, each process can mimic any
event of the other whilst remaining in the bisimulation relation with the new state of the
former process.
Analogous to Milner’s strong (pre-)congruence for CCS, we have a form of (pre-)con-
gruence for TCCS which requires processes to match both on the performance of actions
and their time passage capabilities.
Definition 5.2 A binary relation R over closed TCCS terms is a strong simulation if whenever (P, Q) ∈ R, then for all a ∈ Act and t ∈ T:

(i) if Q −a→ Q′ then P −a→ P′ for some P′ with (P′, Q′) ∈ R;

(ii) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P strongly simulates Q if (P, Q) ∈ R for some strong simulation R; if this R is symmetric, then P and Q are strongly bisimilar, written P ∼ Q.
Fact 5.3 ∼ is the largest strong bisimulation; it is an equivalence relation, and is moreover a congruence over TCCS.
Bisimilarity was defined and examined in [MT90] where several simple examples were
included, and a collection of equational laws were presented for the calculus. However,
as with CCS, this relation does not abstract away internal communication, so is of little
interest when dealing with real verification problems.
5.2 Observational Similarity and Bisimilarity
Definition 5.4 A binary relation R over closed TCCS terms is a T-simulation if whenever (P, Q) ∈ R, then for all a ∈ Act and t ∈ T:

(i) if Q −a→ Q′ then P =â⇒ P′ for some P′ with (P′, Q′) ∈ R;

(ii) if Q ⇝t Q′ then P ≈t> P′ for some P′ with (P′, Q′) ∈ R.

P T-simulates Q, written P ⪅ Q, if (P, Q) ∈ R for some T-simulation R. If this R is symmetric, then P T-bisimulates Q, written P ≈ Q.
Fact 5.5
• ≈ is the largest T -bisimulation, and is an equivalence relation.
As with the case of the symbolic calculus CCS, these relations are not substitutive over
the whole calculus, in particular, over the choice operators. The usual counterexamples
are as follows: a ≈ τ.a, but
a + b 6≺
≈ τ.a + b and a ++ b 6 ≺
≈ τ.a ++ b.
There are also other interesting counterexamples to congruence involving the timing constructs. For example, we have that (s)τ(t)0 ≈ (s + t)0, but

(s)τ(t)0 ++ (s + t)a.P ≉ (s + t)0 ++ (s + t)a.P.

In fact, the left hand side will always deadlock after time s + t, whereas the right hand side will always perform the a event and evolve into P after time s + t.
We are thus interested in refining this relation to obtain a congruence. This stronger relation can in fact be defined as follows.

Definition 5.6 P ≅ Q if and only if (P, Q) ∈ R for some symmetric relation R such that whenever (P, Q) ∈ R, then for all a ∈ Act and t ∈ T:

(i) if Q −a→ Q′ then P =a⇒ P′ for some P′ with P′ ≈ Q′;

(ii) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

The precongruence ⪯ is defined analogously, without the requirement that R be symmetric.
Hence the basic refinement required to Definition 5.4 is in the very first action of the
process. While the process idles, the definition remains restrictive (the final clause),
whereas after performing an action, the resulting derivative terms need only be T -similar,
according to Definition 5.4.
Fact 5.7

• ⪯ is the largest observational T-precongruence, and is indeed a precongruence.

• ≅ is the largest observational T-congruence, and is indeed a congruence.
This is the least refinement we could have made of our original T-bisimulation relation in order to get a congruence, in the sense that ≅ is the largest congruence contained in ≈. To prove this fact, we rely on a sequence of lemmata. Firstly, we demonstrate the easy side of this fact.

Fact 5.8 ≅ is contained within ≈.
Fact 5.9 If P ≅ Q, then

(i) a.P ≅ a.Q        (iv) P + R ≅ Q + R        (vii) P\a ≅ Q\a
(ii) a̲.P ≅ a̲.Q      (v) P ++ R ≅ Q ++ R       (viii) P[f] ≅ Q[f]
(iii) (t).P ≅ (t).Q   (vi) P | R ≅ Q | R
In order to prove that ∼= is a congruence over the whole calculus with recursive defi-
nitions, we need first to introduce an auxiliary relation.
Proof: The proofs proceed analogously to that for Proposition 4.5 of [Mil89]. The first conclusion is needed to prove the second. □
Fact 5.12 If P ≅ Q then µx.P ≅ µx.Q.
Proof: Let A ≡ µx.P and B ≡ µx.Q. It suffices to demonstrate that the relation

S def= { (G{A/x}, G{B/x}) : G a TCCS term with at most the variable x free }

is a bisimulation up to ≅. From this, by taking G ≡ x, we will get (A, B) ∈ S, so that A ≅S≅ B, which by Fact 5.11 will give us A ≅ B as required.

The proof now proceeds analogously to that for Proposition 4.12 of [Mil89]. □
Thus we do indeed have a congruence relation. Moreover, this is the least refinement we could have made of our original T-bisimulation in order to get a congruence, in the sense that ≅ is the largest congruence contained in ≈. To prove this, we use the following alternative characterization of ≅.

Fact 5.13 P ≅ Q if and only if P ++ R ≈ Q ++ R for all R.
If P ⇝t P′, then Q ++ (a + 0) ≈t> S ≈ P′ ++ (a + 0). Since S =a⇒, we must have that S ≡ Q′ ++ (t)(a + 0) where Q ⇝t Q′. Furthermore, we can verify from P′ ++ (a + 0) ≈ Q′ ++ (a + 0) that P′ ≈ Q′. To do this, we note that to match P′ −a→ P″, we have that Q′ ++ (a + 0) =a⇒ Q″ ≈ P″, and since Q″ cannot perform =â⇒, we must in fact have that Q′ =a⇒ Q″. We also note, in matching P′ ⇝t P″, that 0 is a unit for ++ with respect to ≈.

It remains now to show that P′ ++ R ≈ Q′ ++ R for all R. To do this, we demonstrate that

{ (P′ ++ R, Q′ ++ R) : ∃P, Q, t with P ⇝t P′, Q ⇝t Q′, and P ++ R ≈ Q ++ R for all R } ∪ ≈

is a T-bisimulation. To match the transition P′ ++ R −a→ P″ where P′ −a→ P″, we again use the fact that P′ ++ (a + 0) ≈ Q′ ++ (a + 0). Finally, to match the transition P′ ++ R ⇝s P″ ++ R′ where P′ ⇝s P″ and R ⇝s R′, we use the above to get Q ⇝s+t Q″ (and hence Q′ ⇝s Q″). □
Fact 5.14 ≅ is the largest congruence contained within ≈.
Proof: Let ≈c denote the largest congruence contained in ≈. We have from Fact 5.8 that ≅ is a congruence contained in ≈. It simply remains to show that ≈c is contained within ≅. Hence assume that P ≈c Q. Then P ++ R ≈c Q ++ R for all R, so P ++ R ≈ Q ++ R for all R. But then by Fact 5.13, we have that P ≅ Q. □
5.3 Strong Faster Than
Within the calculus of TCCS, we can describe different timeout processes which will allow
a certain computation to take place within a given amount of time, but will preempt the
computation and proceed with an alternate computation if that amount of time is allowed
to pass without the desired computation being performed. As an example, the TCCS term
P ++ (1).b.0 will allow the process P to proceed within 1 unit of time, but will subsequently
perform the action b and evolve into a deadlocked 0 process if the process P does not
proceed within the required time. Hence, P ++ (1).b.0 can be regarded as a timeout
context. If we were to replace P with each of the terms (1).a.0 and (2).a.0, which represent the processes which may perform an a action after 1 and 2 units of time respectively, then in the first case the resulting behaviour would be (1).(a.0 ++ b.0), whereas in the second case it would be (1).b.0, due to the different timeout scenarios. Clearly
we would want to consider the process term (1).a.0 to be faster than the process term
(2).a.0. However, if we further desired that our faster than preorder be a precongruence
(that is, that it be substitutive), then we would be led to deduce that the process term
(1).(a.0 ++ b.0) is faster than the process term (1).b.0. This is undesirable though, as
these two terms are not even functionally behaviourally equivalent: the first may perform
an a action at some point in its computation, whereas the second cannot.
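The two timeout scenarios just described can be tabulated mechanically. The helper below hard-codes the single context P ++ (1).b.0 with P = (t).a.0 for integral t ≥ 1, and is only a sketch of the behaviour just derived, not a general interpreter.

```python
def offers_after_delay(t):
    """Actions on offer once (t).a.0 ++ (1).b.0 has idled as long as it can.

    In a strong choice both sides must allow the delay, so the composite
    can idle for exactly min(t, 1) = 1 unit before an action is forced.
    """
    if t == 1:
        return ["a", "b"]   # (1).(a.0 ++ b.0): both actions become available
    return ["b"]            # t > 1: the timeout fires first and preempts a

assert offers_after_delay(1) == ["a", "b"]
assert offers_after_delay(2) == ["b"]
```

The substitutivity problem is visible here: the t = 1 instance still offers a, while the t = 2 instance has lost it, so the two residuals are not even functionally equivalent.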
The problem arises due to the preemptive nature of passing time in the semantics of
the operators — it is possible to lose the capability of following a particular computation
path through idling. In [Tof90], this problem is addressed by allowing all processes to
idle indefinitely without changing state. However, this approach involves an undesirable
weakening of the semantic development of the calculus TCCS. Instead we opt here for
a more subtle approach to weakening the expressive power of the calculus. What we
actually do is restrict our language to a subset of TCCS where the above undesirable
timeout notions are no longer definable. We need to insist that, though we can have
timeout events made available in the environment after some amount of time, we can
never lose the ability to do some other event due to the passing of time. This is not
an unreasonable restriction, particularly in the design of hardware circuits, as this is
precisely how an implementation behaves — if a port is prepared to communicate, then
this communication capability can only be removed through an actual change in state, and
not through idling. In order that we may reason about relative process speeds we work
within the liberal subcalculus ℓTCCS of TCCS defined in Remark 2.2, where processes
lack the power of preemption.
We can then define our faster than preorder using the following bisimulation-like
definition.
Definition 5.15 A binary relation R over closed ℓTCCS terms is a faster-than relation if whenever (P, Q) ∈ R, then ∀a ∈ Act and ∀t ∈ T:

(i) if P −a→ P′ then Q ⇝s · −a→ Q″ and P′ ⇝s P″ for some s (possibly 0), Q″ and P″ with (P″, Q″) ∈ R;

(ii) if Q −a→ Q′ then P −a→ P′ for some P′ with (P′, Q′) ∈ R;

(iii) if P ⇝t P′ then Q ⇝t Q′ for some Q′ with (P′, Q′) ∈ R;

(iv) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.

P is faster than Q, written P ≼ Q, if (P, Q) ∈ R for some faster-than relation R.
Thus the only difference between this definition and the definition of the equivalence
given above appears in the first clause: if the first (faster) process term can perform a
particular action, then the second (slower) process term can either perform that action
right away and evolve into a new process state which is slower than that into which the
first process evolved, or else it can idle for some amount of time s and reach a state in
which it can perform the action and thus evolve into a state which, while not necessarily
itself slower than that into which the first process evolved, is slower than that state once
the idling time is accounted for. As an example, we would clearly want that
a0 | (1)b0 ≼ (1)a0 | (1)b0. Here

    a0 | (1)b0 −a→ 0 | (1)b0    while    (1)a0 | (1)b0 ⇝1 a0 | b0 −a→ 0 | b0,

and while 0 | (1)b0 ⋠ 0 | b0, we only require in the definition that 0 | b0 ≼ 0 | b0, as
0 | (1)b0 ⇝1 0 | b0.
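As a concrete (if naive) reading of the four clauses, the following sketch decides the faster-than relation on small finite-state systems over a discrete time domain. The encoding and state names are our own illustration, not the paper's: we list every achievable total delay explicitly, so multi-step idling needs no extra closure.

```python
# `act` maps a state to a set of (action, successor) pairs for P --a--> P',
# and `time` maps a state to a set of (delay, successor) pairs for P ~t~> P'.
def clause_i(p, q, R, act, time):
    # p --a--> p1 is matched by q idling s (possibly 0) and then performing a,
    # with p1 idling the same s so the borrowed time is accounted for.
    for (a, p1) in act[p]:
        ok = any(b == a and (p1, q1) in R for (b, q1) in act[q])
        for (s, qm) in time[q]:
            for (b, q2) in act[qm]:
                if b == a and any(s2 == s and (p2, q2) in R
                                  for (s2, p2) in time[p1]):
                    ok = True
        if not ok:
            return False
    return True

def clause_ii(p, q, R, act):
    # every action of the slower q is answered immediately by p
    return all(any(b == a and (p1, q1) in R for (b, p1) in act[p])
               for (a, q1) in act[q])

def clauses_iii_iv(p, q, R, time):
    # delays must be matched exactly in both directions
    fwd = all(any(t2 == t and (p1, q1) in R for (t2, q1) in time[q])
              for (t, p1) in time[p])
    bwd = all(any(t2 == t and (p1, q1) in R for (t2, p1) in time[p])
              for (t, q1) in time[q])
    return fwd and bwd

def faster_than(act, time):
    """Greatest fixpoint: start from all state pairs and delete violations."""
    states = set(act) | set(time)
    for m in (act, time):
        for s in list(m):
            for (_, s2) in m[s]:
                states.add(s2)
    for s in states:
        act.setdefault(s, set())
        time.setdefault(s, set())
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for pair in list(R):
            p, q = pair
            if not (clause_i(p, q, R, act, time) and clause_ii(p, q, R, act)
                    and clauses_iii_iv(p, q, R, time)):
                R.discard(pair)
                changed = True
    return R

# Delayed prefixes a.0 and (1)a.0 in the loose calculus: both may idle freely.
act = {"a0": {("a", "0")}, "1a0": set(), "0": set()}
time = {"a0": {(1, "a0"), (2, "a0")},
        "1a0": {(1, "a0"), (2, "a0")},
        "0": {(1, "0"), (2, "0")}}
R = faster_than(act, time)
```

Here R contains the pair ("a0", "1a0") but not its converse: the undelayed prefix is faster than the delayed one, as one would expect.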
Fact 5.16 The relation ≼ is a precongruence over `TCCS. That is, it is a preorder, and
P ≼ Q implies that C[P] ≼ C[Q] for all `TCCS contexts C[·].

Proof: Straightforward. □
It is clear from the similarities in the definitions of ∼ and ≼ that for P and Q two
terms of `TCCS, if P ∼ Q then P ≼ Q (and Q ≼ P). However, the reverse implication
does not hold; that is, P ≼ Q and Q ≼ P do not necessarily imply that P ∼ Q. A suitably
simple counter-example is provided by the following two process terms:
A def= ab0 + a(1)b0 + a(2)b0        B def= ab0 + a(2)b0

These two processes are equally fast by the above definition, yet are not equivalent, as
A −a→ (1)b0, but for no B′ ∼ (1)b0 does B −a→ B′. Hence another equivalence of interest
is ≼ ∩ ≽, which we represent by ≅.
We have another general anomaly with this (or indeed any) faster-than preorder
for nondeterministic processes: we cannot guarantee that if P ≼ Q then P will necessarily
execute faster than Q, but only that it has the capability of doing so. We would for
example insist by reflexivity that A ≼ A for the above process A; but in executing the two
instances of A, the first (supposedly faster) version may start with an a-transition to the
state (2)b0, whereas the second version may start with an a-transition to the state b0.
However, this problem only arises in the presence of nondeterminism, and also vanishes if
we assume some form of built-in priority allowing faster computation paths to be followed
whenever possible.
Proof: Straightforward. □

However, for the usual reasons this relation is not substitutive for all the operators of
`TCCS: in particular τ.a ≲ a, but not τ.a + b ≲ a + b. So we need to correct the definition
as follows.
(ii) if Q −a→ Q′ then P =a⇒ P′ for some P′ with (P′, Q′) ∈ R;

(iii) if P ⇝t P′ then Q ⇝t Q′ for some Q′ with (P′, Q′) ∈ R;

(iv) if Q ⇝t Q′ then P ⇝t P′ for some P′ with (P′, Q′) ∈ R.
Fact 5.20 The relation so defined is the largest such bisimulation, and is a precongruence.
That is, it is a preorder, and P ≲ Q implies that C[P] ≲ C[Q].

Proof: Straightforward. □

This is the least refinement we could have made of our original bisimulation in order
to obtain a precongruence, in the sense that it is the largest precongruence contained in
the original relation, as stated in the following result.
Proof: Straightforward. □
6 Equational Theories
In the previous section, we formally defined several congruence and precongruence rela-
tions for process terms. They are all based on the notion of a bisimulation, so they can be
decided (for finite-state processes) using a partition-based algorithm. Such an automated
decision procedure is implemented in the Edinburgh Concurrency Workbench [Mol91a],
and has been used to analyze several simple timed systems such as the protocol examined
in Section 7. However, using these definitions by hand can be tedious, so in this section we
consider equational (and inequational) theories for proving congruences between processes.
Firstly, we develop a set of equational laws for the calculus TCCS. We shall use = to
represent derivability in our equational theory.
As a start, we restrict our attention to the finite sequential subset of TCCS, that is,
the subcalculus given by the following syntax.
(s)(t)x = (s + t)x                      0 = (t)0
(x + y) + z = x + (y + z)               (x ++ y) ++ z = x ++ (y ++ z)
x + y = y + x                           x ++ y = y ++ x
x + x = x                               x ++ x = x
x + 0 = x                               x ++ 0 = x
ax + (y ++ 0) = ax ++ y                 ax + by = ax ++ by
ax + (t)by = ax ++ (t)by
x ++ (y + z) = (x ++ y) + (x ++ z)
ax + 0 = ax                             ax ++ 0 = ax
ax = ax + ax                            (t)x ++ 0 = 0
ax = ax + (t)ax
(ax ++ (t)y) + (t)(ax ++ z) = ax ++ (t)(y + z)

Figure 4: The equational theory A
ax = aτx        a(t)x = a(t)τx        τx = τx + x
τx + (ay ++ z) = τx + (ay ++ z) + ay
a(x + (τy ++ z)) = a(x + (τy ++ z)) + ay

Figure 5: The tau laws
Fact 6.1 (Soundness of strong T-congruence laws) A ⊢ P = Q implies P ∼ Q.
These laws are actually complete for reasoning over TCCS0 for strong congruence, as
indicated by the following.
The tau laws presented in Figure 5 are the timed versions of the analogous tau laws
for CCS as presented in [Mil89]; these taken with the theory A are collectively referred
to as Aτ. There are many varieties of tau laws in our calculus, such as x ++ τx = τx and
a(x ++ τy) = a(x ++ τy) + ay. However, these are all derivable from those given in
Figure 5. We can show that the laws of the theory Aτ are sound with respect to the
congruence ≅ over the full calculus TCCS, thus obtaining the following result.
We now spend some effort on demonstrating the completeness of the theory Aτ for
reasoning about T -congruence over TCCS0 . To do this, we define several normal forms.
First we define some notation.
Notation We use the abbreviation [a]_s^t p to stand for (s)(ap ++ (t)0), which represents
the process which must perform the action a and evolve into process p after time s but
before time s + t.
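For instance (a worked instance of our own, taking s = 1 and t = 2):

```latex
[a]_1^2\, p \;=\; (1)\bigl(a p \mathbin{++} (2)0\bigr)
```

the process which idles for one time unit, and must then perform a and evolve into p at some point before time 3.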
A term P is in normal form (NF) if it is of the form

    Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0

where each p_i is itself in NF, s_i may be 0, each of s and s'_i may be 0 or ∞, and
s ≥ s_i + s'_i for each i ∈ I.

A term P is in saturated normal form (SNF) if it is of the form

    Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0

where each p_i is itself in SNF, s_i may be 0, each of s and s'_i may be 0 or ∞,
s ≥ s_i + s'_i for each i ∈ I, and P ⇝t · =a⇒ P′ implies P ⇝t · −a→ P′.

A term P is in disjoint saturated normal form (DSNF) if it is of the form

    Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0

where each p_i is itself in DSNF, s_i may be 0, each of s and s'_i may be 0 or ∞,
s ≥ s_i + s'_i for each i ∈ I, P ⇝t · =a⇒ P′ implies P ⇝t · −a→ P′, no p_i is of the form
(t)(τp ++ 0) + 0, and if a_i = a_j and p_i ≈ p_j and s_i ≤ s_j then s_i + s'_i < s_j.
We aim to demonstrate that every term in TCCS0 can be equated to some term in
DSNF; the weaker varieties of normal forms are included simply to break down the proof
of this fact. First we demonstrate that any term can be converted into NF.
Proof: By induction on the structure of terms. We demonstrate here only the case of
P ++ Q; hence we may assume that P and Q can be equated to NF terms. Suppose

    P = Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0    and    Q = Σ_{j∈J} [b_j]_{t_j}^{t'_j} q_j + (t)0

are normal form expressions for P and Q. Using distributivity of ++ over + we can
derive

    P ++ Q = Σ_{i∈I, j∈J} ([a_i]_{s_i}^{s'_i} p_i ++ [b_j]_{t_j}^{t'_j} q_j)
           + Σ_{i∈I} ([a_i]_{s_i}^{s'_i} p_i ++ (t)0) + Σ_{j∈J} ([b_j]_{t_j}^{t'_j} q_j ++ (s)0)
Fact 6.6 Every term in NF can be equated using the equational theory to a term in SNF.
Proof: We will demonstrate how to transform a term in NF into one in SNF. Hence
assume that

    P = Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0

is in NF. If P ⇝t · =a⇒ P′ implies P ⇝t · −a→ P′, then P is already in SNF. The
proof will proceed now by induction on the action depth of P (hence we shall assume
that each p_i is in SNF) as well as the number of ways in which P ⇝t · =a⇒ P′ where
P ⇝t · −a→ P′ fails. Thus we assume that P ⇝t · =a⇒ P′ but not P ⇝t · −a→ P′.
There is some i ∈ I with s_i ≤ t ≤ s_i + s'_i, and either a = a_i and p_i =⇒ P′, or a_i = τ
and p_i =a⇒ P′. Let

    p_i = Σ_{j∈J} [b_j]_{t_j}^{t'_j} q_j + (t)0.

First assume that a = a_i and p_i =⇒ P′. If p_i ≡ P′ then P ⇝t · −a→ P′, which is a
contradiction. Hence assume that p_i =τ⇒ P′. By induction, p_i −τ→ P′. Thus there is
some j ∈ J such that t_j = 0, b_j = τ and q_j ≡ P′. Hence

    a_i p_i = a_i (p_i + (τP′ ++ (t'_j)0)) = a_i (p_i + (τP′ ++ (t'_j)0)) + a_i P′,

so a_i p_i = a_i p_i + a_i P′. Now

    [a_i]_{s_i}^{s'_i} p_i = (s_i)((a_i p_i + a_i P′) ++ (s'_i)0)
                           = [a_i]_{s_i}^{s'_i} p_i + [a_i]_{s_i}^{s'_i} P′.

Thus P = P + [a_i]_{s_i}^{s'_i} P′, giving our required result.

Now assume that a_i = τ and p_i =a⇒ P′. By induction, p_i −a→ P′. Thus there is some
j ∈ J such that t_j = 0, b_j = a and q_j ≡ P′. Hence

    τ p_i = τ p_i + (aP′ ++ (t'_j)0) = τ p_i + (aP′ ++ (t'_j)0) + aP′,

so τ p_i = τ p_i + aP′. Now

    P = P + [τ]_{s_i}^{s'_i} p_i
      = P + (s_i)((τ p_i + aP′) ++ (s'_i)0)
      = P + (s_i)(τ p_i ++ (s'_i)0) + (s_i)(aP′ ++ (s'_i)0),

so P = P + [a]_{s_i}^{s'_i} P′, giving our required result. □
We can now almost finish off our proof obligation by further transforming such a term
into DSNF. First, though, we prove the following result.

Proof: ... □
Fact 6.8 Every term in SNF can be equated using the equational theory to a term in
DSNF.
Proof: We will demonstrate how to transform a term in SNF into one in DSNF. Hence
assume that

    P = Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0

is in SNF. The proof will proceed by induction on the number of pairs (j, k) such that
a_j = a_k, p_j ≈ p_k, and s_j ≤ s_k ≤ s_j + s'_j. Suppose a_j = a_k, p_j ≈ p_k, and
s_j ≤ s_k ≤ s_j + s'_j. Then using a_j p_j = a_k p_k we have

    [a_j]_{s_j}^{s'_j} p_j + [a_k]_{s_k}^{s'_k} p_k
      = (s_j)((a_j p_j ++ (s'_j)0) + [a_j]_{s_k − s_j}^{s'_k} p_j)
      = (s_j)(a_j p_j ++ (s_k − s_j)((s_j + s'_j − s_k)0 + (s'_k)0))
      = [a_j]_{s_j}^{max(s'_j, s_k + s'_k − s_j)} p_j.

Hence

    P = Σ_{i∈I\{j,k}} [a_i]_{s_i}^{s'_i} p_i + [a_j]_{s_j}^{max(s'_j, s_k + s'_k − s_j)} p_j + (s)0,

from which the result follows by induction. □
The importance of terms being expressible in DSNF is to facilitate the following result.

Fact 6.9 If P = Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0 is in DSNF and P ≅ P + [a]_s^{s'} p, then
P = P + [a]_s^{s'} p.
From here, we can now finally demonstrate the completeness of the equational theory
over TCCS0 by considering congruent terms in DSNF.
Proof: Suppose

    P = Σ_{i∈I} [a_i]_{s_i}^{s'_i} p_i + (s)0    and    Q = Σ_{j∈J} [b_j]_{t_j}^{t'_j} q_j + (t)0

are in DSNF. We will demonstrate that P = P + Q, and by a symmetric argument
that Q = P + Q, from which we can deduce our result. The proof will proceed by
induction on the action depths of P and Q, so that we will assume the conclusion to
hold for all p_i and q_j. From this, using Fact 6.9, we can show that P = P + [b_j]_{t_j}^{t'_j} q_j
for each j ∈ J. It is then straightforward to show that P = P + Q. □
Proof: Assume that p ≅ q for some p and q in TCCS0. By Proposition 6.8, we can find
DSNF terms P and Q such that p = P and q = Q. But then by Proposition 6.10,
we must have that P = Q, whence we have that p = q. □
We can add laws for reasoning completely about terms containing relabelling and
restriction, as these operators distribute through the sequential operators as expected (cf.
the analogous laws in CCS as presented in [Mil89]). However, there is a problem with the
complete axiomatization of the language of finite agents containing the parallel construct.
The usual approach is via an Expansion Theorem which is used to transform any term
into a congruent sequential one. However, this cannot be accomplished in general. For
instance, the term
(1)a0 | b0
has no equivalent sequential term. Naïvely one might expect this term to be equivalent to
the term b(1)a0 + (1)(a0 | b0); however, this expanded term cannot perform any action
after any fraction of a time unit. In the case of a discrete time domain though, there does
exist an Expansion Theorem; this has been presented in [MT90].
In [Che91], a subcalculus of TCCS has been extended with time variables which allow
an Expansion Theorem to be proven for strong congruence over any arbitrary time domain.
However, the price paid with the introduction of time variables is a more complicated
calculus with restrictions on the use of the time variables to maintain decidability of
strong congruence.
Over the discrete time domain {1, 2, 3, . . .} we have the following progression law for
delayed actions:
When we add the parallel operator to the finite language, we get a complete theory
for strong congruence by adding the above law to those of Figure 4 and Figure 6. For
observational congruence we get a complete theory by taking the above law, together with
those of Figure 4, Figure 5 and Figure 6.
Let X = Σ_{1≤i≤m} a_i x_i and Y = Σ_{1≤j≤n} b_j y_j.

(E1)  X | Y = Σ_{1≤i≤m} a_i (x_i | Y) + Σ_{1≤j≤n} b_j (X | y_j) + Σ_{a_i = b̄_j} τ(x_i | y_j)

(E2)  X | (Y + (1)y) = Σ_{1≤i≤m} a_i (x_i | (Y + (1)y)) + Σ_{1≤j≤n} b_j (X | y_j)
                     + Σ_{a_i = b̄_j} τ(x_i | y_j)

(E3)  (X + (1)x) | Y = Σ_{1≤i≤m} a_i (x_i | Y) + Σ_{1≤j≤n} b_j ((X + (1)x) | y_j)
                     + Σ_{a_i = b̄_j} τ(x_i | y_j)

(E4)  (X + (1)x) | (Y + (1)y) = Σ_{1≤i≤m} a_i (x_i | (Y + (1)y)) + Σ_{1≤j≤n} b_j ((X + (1)x) | y_j)
                              + Σ_{a_i = b̄_j} τ(x_i | y_j) + (1)(x | y)

Figure 6: The expansion laws
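The shape of these laws can be sketched operationally. The following encoding is our own illustration of E1 to E4 over the discrete time domain, not the paper's: a finite process is a pair (prefixes, delayed), where prefixes lists the summands a.x as (action, continuation) pairs, and delayed is either None (no (1)-summand, so the process insists on acting now) or the continuation x of a summand (1)x; the hypothetical complementary actions "a" and "a'" synchronise to "tau".

```python
NIL = ([], None)  # the nil process: no actions, no (1)-summand

def complementary(a, b):
    # complementary action names synchronise, as in CCS
    return a == b + "'" or b == a + "'"

def par(x, y):
    """Eliminate one parallel composition, following E1-E4."""
    xp, xd = x
    yp, yd = y
    prefixes = [(a, par(p, y)) for (a, p) in xp]        # left component moves
    prefixes += [(b, par(x, q)) for (b, q) in yp]       # right component moves
    prefixes += [("tau", par(p, q))                     # synchronisations
                 for (a, p) in xp for (b, q) in yp if complementary(a, b)]
    # Time may pass only when both components offer a (1)-summand (E4);
    # in E1-E3 an insistent component preempts the delay.
    delayed = par(xd, yd) if xd is not None and yd is not None else None
    return (prefixes, delayed)

# a.0 | a'.0 yields the two interleavings plus a communication:
actions = sorted(a for (a, _) in par(([("a", NIL)], None),
                                     ([("a'", NIL)], None))[0])
```

The distinction between E1-E3 and E4 is carried entirely by the `delayed` component: the expanded term has a (1)-summand exactly when both arguments do.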
x ≼ (t)x

Figure 7: The faster-than law
[Figure 8: the alternating bit protocol. The sender Ac and the replier Tm are connected
by the message channel Tns and the acknowledgement channel Ack; accept and deliver
are the external ports.]
Taking the laws of Figure 4 and the law of Figure 7, we have a sound and complete
system for reasoning inequationally over the sequential fragment of `TCCS. Unfortunately
the expansion laws of Figure 6 cannot be expressed in a manner wholly contained within
`TCCS. However, by replacing the nil 0 and the action prefixes a.P throughout by their
delaying counterparts, we arrive at valid axiom schemata.

Taking the laws of Figure 4, Figure 5 and Figure 7, we have a sound and complete
set of inequational laws for the sequential fragment of `TCCS. Again, we get Expansion
Theorems as above.
7 The Alternating Bit Protocol

In [Mil89] Milner presents an implementation of the alternating bit protocol in CCS, and
demonstrates that the protocol is correct. Perforce this implementation ignores the exact
temporal properties of the system and its components. We extend the implementation in
CCS to one in TCCS, over the discrete time domain {1, 2, 3, · · ·}, where we can analyze
(and take advantage of) the temporal properties of the system components. In this version
some of the system complexity can be reduced by exploiting the temporal information.
This proof was checked using the Edinburgh Concurrency Workbench [Mol91a].
Our alternating bit protocol realisation is depicted in Figure 8. The process Ac will
work in the following manner. After accepting a message, it sends it with bit b along the
channel T ns and waits. Subsequently there are three possibilities:
The replier T m works in a dual manner. After a message is delivered it sends an
acknowledgement with bit b along the Ack line. Subsequently there are again three
possibilities:
Tmc(b)     def=  ack(b).Tmc(b) ++ ack(¬b).Tmc(b) ++ [Link](b).Tms(¬b)
Tms(b)     def=  ack(b).Tms(b) ++ ack(¬b).stops.Tmc(¬b)
Sends      def=  starts(b).send(b).Send(b) ++ stops.Sends
Send(b)    def=  (trt)send(b).Send(b) ++ stops.Sends
Recc(b)    def=  transmit(b).Recc(b) ++ transmit(¬b).Recc(b) ++ [Link](b).Recs(b)
Recs(b)    def=  transmit(¬b).Recs(b) ++ transmit(b).stopr.Recc(¬b)
Reply      def=  startr(b).reply(b).Replys(b) ++ stopr.Reply
Replys(b)  def=  (trt)reply(b).Replys(b) ++ stopr.Reply
The channels in the implementation have the ability to lose messages. For convenience,
we ignore actual messages and concentrate on the value of the control bits.
Ack     def=  reply(b).Ack(b)
Ack(b)  def=  (ts)ack(b).Ack + τ.Ack
Tns     def=  send(b).Tns(b)
Tns(b)  def=  (ts)transmit(b).Tns + τ.Tns
We impose the condition on the retry rate that trt > 2ts . This seems reasonable as we
cannot be sure that a transmission has failed until this period of time has elapsed. The
external environment is forced to take delivery as soon as possible. This enables us to
calculate the period of time between the reception of an accept action, and the return of
the process to a position where the next transmission may be attempted.
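The arithmetic behind the condition trt > 2ts can be made concrete with a small timing calculation of our own (an illustration, not the paper's formal analysis): the earliest an acknowledgement can return is 2ts after a send, one channel delay in each direction, so a retry period of at most 2ts could retransmit while the acknowledgement is still in flight.

```python
def completion_time(losses, trt, ts):
    """Time from the first send until an ack returns, where losses[i] records
    whether the i-th round trip is lost; None if every attempt is lost."""
    t = 0
    for lost in losses:
        if not lost:
            return t + 2 * ts  # message out (ts) plus acknowledgement back (ts)
        t += trt               # wait out the retry period, then retransmit
    return None
```

For example, with trt = 5 and ts = 2 (so trt > 2ts holds), a clean run completes at time 4, and one lost round trip delays completion to time 9.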
It should be noted that we have arranged that at no time is an immediate ack or
transmit action of either type impossible, apart from a period immediately after the
reception of such an action, and then the evolution is possible before any more actions
of that type can be produced by the channels. Thus we can use actions with no delay
guarding in the channels to force the evolution to proceed at certain intervals. The initial
state of our alternating bit process is the following.
(Tmc(0) | Sends | Tns | Ack | Recs(1) | Reply) \ {transmit, ack, send, reply,
startr, starts, stopr, stops}
[Figure: the components Sends, Tns, Recs and Tm, Ack, Reply of the implementation,
with external ports accept and deliver.]

Iab(b)      def=  [Link](b)
ItryLR(b)   def=  (2)IsuccLR(b) + τ.(5)ItryLR(b)
IsuccLR(b)  def=  [Link](b)
ItryRL(b)   def=  (2)Iab(¬b) + τ.(5)ItryRL(b)

Atb(0) ≈ Iab(0)
8 A Logic for Specification

In the usual case of CCS with bisimulation semantics, we have a powerful logic for
specification in the form of Hennessy-Milner Logic (HML), as described in [HM85, Mil89]. It
would be ideal if the logic extended to the calculus TCCS in some
analogous fashion.
The logic in question would consist of the following formulae, which we refer to as
timed HML formulae.
We can define the rest of the typical formulae as derived notions in the expected fashion:
false def= ¬true                      [a]φ def= ¬⟨a⟩¬φ
φ0 ∨ φ1 def= ¬(¬φ0 ∧ ¬φ1)            [t]φ def= ¬⟨t⟩¬φ
Terms in TCCS (or in `TCCS) would then satisfy such formulae according to the following
rules.
We can then define an equivalence ≃ between terms according to the timed HML formulae
which the terms respectively satisfy. That is, we could allow P ≃ Q if and only if P and Q
satisfy exactly the same timed HML formulae.
The fact is, this equivalence corresponds to strong congruence, as expected. Also as
expected, we can define a weak version of the logic which characterizes weak equivalence.
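Satisfaction of such formulae is directly mechanisable. The following checker is our own sketch over a finite transition system with action transitions `act` and time transitions `time`; the tuple encoding of formulae is an assumption of this illustration.

```python
# Formulae are tuples: ("tt",), ("not", f), ("and", f, g),
# ("dia", a, f) for <a>f, and ("delay", t, f) for the timed modality <t>f.
def sat(s, f, act, time):
    op = f[0]
    if op == "tt":
        return True
    if op == "not":
        return not sat(s, f[1], act, time)
    if op == "and":
        return sat(s, f[1], act, time) and sat(s, f[2], act, time)
    if op == "dia":            # some a-successor satisfies the subformula
        _, a, g = f
        return any(b == a and sat(s2, g, act, time)
                   for (b, s2) in act.get(s, ()))
    if op == "delay":          # some t-delay successor satisfies the subformula
        _, t, g = f
        return any(t2 == t and sat(s2, g, act, time)
                   for (t2, s2) in time.get(s, ()))
    raise ValueError("unknown connective: " + repr(op))

# (1)a.0 satisfies <1><a>tt but not <a>tt:
act = {"a0": {("a", "0")}}
time = {"1a0": {(1, "a0")}}
```

The derived connectives (false, disjunction, the box modalities) follow from negation exactly as in the table above.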
We would also be interested in finding asymmetric relations based on this (or some
similar) logic which would coincide with the faster-than relations. To do so, we could
imagine having to define a faster-than relation over timed HML formulae to capture the
asymmetry in the definition of the relation. Furthermore, we would have to capture the
“borrowing-of-time” notion from the first clause of the faster-than bisimulation definitions.
9 Conclusion
We have presented the theory of observational congruence over the calculus TCCS. This
extends the work on the calculus introduced in [MT90]. Our language is one of many
attempts to provide a concurrent calculus for reasoning about the effects of time. Similar
works can be found for example in [Mil83, BB91, Che91, HR90, Gro90, RR86, NRSV90,
Wan90]. We believe that TCCS gives one of the clearer accounts of the nature of
nondeterminism that can occur within temporal systems. Though syntactically it is a simple
extension to CCS, its naturalness is demonstrated by the similarity of the equational
extensions required to address the inclusion of abstraction.
The syntax of the calculus has changed somewhat from that original paper in several
respects. Firstly, the arbitrary delay operator δ.P has been replaced by the delaying
constructs presented above, namely delayed action prefix a.P and nontemporal nil 0,
as these are the only interesting occurrences of temporal delay. More importantly, the
weak and strong choice operators have changed from ⊕ and + respectively to + and
++ respectively. Though this may lead to confusion, there are several reasons for this
recasting of the syntax.
The definition of behavioural abstraction which we have presented seems the natural
extension to that of CCS for a timed system. The cases where the equivalence is not a
congruence show some interesting properties of silent actions with respect to temporal
behaviour, notably the fact that (t)τ(s)P ≇ (t + s)P.
The equational theory is a simple and natural extension of the basic equational theory
of TCCS, matching one's intuitions in the light of the additions required to the theory of
CCS to represent abstraction.
Finally, as we demonstrated with our example, it is possible to derive the properties
of highly complex systems, without the terms of the calculus themselves becoming too
complicated.
References
[BB91] Baeten, J.C.M. and J.A. Bergstra, Real Time Process Algebra, Formal Aspects
of Computing, Vol 3, 1991.
[BK85] Bergstra, J.A. and J.W. Klop, Algebra for Communicating Processes with
Abstraction, Journal of Theoretical Computer Science, Vol 37, 1985.
[Bou85] Boudol, G., Notes on Algebraic Calculi of Processes, Logics and Models of
Concurrent Systems, NATO ASI Series f13 (K. Apt, ed), 1985.
[BHR84] Brookes, S.D., C.A.R. Hoare and A.W. Roscoe, A Theory of Communicating
Sequential Processes, Journal of ACM, Vol 31, 1984.
[Cle89] Cleaveland, R., J. Parrow and B. Steffen, “The Concurrency Workbench: A
Semantics-based Verification Tool for Finite-state Systems”, Proceedings of
the Workshop on Automated Verification Methods for Finite-state Systems,
Lecture Notes in Computer Science 407, Springer-Verlag, 1989.
[Che91] Chen, L., Decidability and Completeness in Real Time Processes, Research
Report No ECS-LFCS-91-185, University of Edinburgh Computer Science De-
partment, 1991.
[Gro90] Groote, J.F., Specification and Verification of Real Time Systems in ACP,
Proceedings of the 10th International Symposium on Protocol Specification,
Testing and Verification, Ottawa, 1990.
[HM85] Hennessy, M.C. and R. Milner, Algebraic Laws for Nondeterminism and Con-
currency, Journal of the ACM, Vol 32, No 1, 1985.
[Hen88] Hennessy, M.C., Algebraic Theory of Processes, MIT Press, 1988.
[HR90] Hennessy, M. and T. Regan, A Temporal Process Algebra, Technical Report
No. 2/90, University of Sussex Computer Science Department, April 1990.
[Hoa78] Hoare, C.A.R., Communicating Sequential Processes, Communications of
ACM, Vol 21, 1978.
[Hoa85] Hoare, C.A.R., Communicating Sequential Processes, Prentice–Hall In-
ternational, 1985.
[Mil80] Milner, R., A Calculus of Communicating Systems, Lecture Notes in
Computer Science 92, Springer-Verlag, 1980.
[Mil83] Milner, R., Calculi for Synchrony and Asynchrony, Theoretical Computer
Science, Vol 25, 1983.
[Mil89] Milner, R., Communication and Concurrency, Prentice–Hall Interna-
tional, 1989.
[Mil89a] Milner, R., A Complete Axiomatisation for Observational Congruence of
Finite-State Behaviours, Information and Computation, Vol 81, 1989.
[MT90] Moller, F. and C. Tofts, A Temporal Calculus of Communicating Systems,
Proceedings of CONCUR’90, Lecture Notes in Computer Science 458, Ams-
terdam, August 1990.
[MT91] Moller, F. and C. Tofts, Relating Processes with Respect to Speed, Proceed-
ings of CONCUR’91, Lecture Notes in Computer Science 527, (Theories of
Concurrency: Unification and Extension), Amsterdam, August 1991.
[MT92] Moller, F. and C. Tofts, Behavioural Abstraction in TCCS, Proceedings of
ICALP’92, Lecture Notes in Computer Science, Vienna, July 1992.
[Mol91] Moller, F., Process Algebra as a Tool for Real Time Analysis, Proceedings of
the IV Banff Higher Order Workshop, Springer-Verlag, 1991.
[Mol91a] Moller, F., The Edinburgh Concurrency Workbench (Version 6.0), Technical
Note No LFCS-TN-34, University of Edinburgh Computer Science Depart-
ment, 1991.
[NRSV90] Nicollin, X., J.L. Richier, J. Sifakis and J. Voiron, ATP: An Algebra for Timed
Processes, Proceedings of IFIP Working Conference on Programming Con-
cepts and Methods, North Holland, 1990.
[Par81] Park, D.M.R., Concurrency and Automata on Infinite Sequences, Lecture
Notes in Computer Science 104, Springer–Verlag, 1981.
[Plo81] Plotkin, G.D., A Structural Approach to Operational Semantics, Re-
port DAIMI FN-19, Computer Science Department, Århus University, Den-
mark, 1981.
[RR86] Reed, G.M. and A. Roscoe, A Timed Model for Communicating Sequential
Processes, Proceedings of ICALP’86, Lecture Notes in Computer Science No
226, Springer Verlag, 1986.
[Sut89] Sutherland, I.E., Micropipelines, Communications of the ACM, Vol 32, No 6,
pp. 720-738, 1989.
[Tof88] Tofts, C., Temporal Ordering for Concurrency, Research Report No ECS-
LFCS-88-48, University of Edinburgh Computer Science Department, 1988.
[Tof90] Tofts, C., Proof Systems and Pragmatics for Parallel Programming, PhD The-
sis, University of Edinburgh, 1990.
[Udd86] Udding, J.T., A Formal Model for Defining and Classifying Delay-Insensitive
Circuits and Systems, Distributed Computing, Vol 1, 1986.
[Wan90] Wang Yi, Real-time Behaviour of Asynchronous Agents, Proceedings of CON-
CUR’90 (Theories of Concurrency: Unification and Extension), Amsterdam,
August 1990.