Anoma Research Topics | TECHNICAL REPORT
Message Logic: a logical foundation for
service commitments
Murdoch J. Gabbay a and Naqib Zarin a
a Heliax AG
Abstract
We introduce Message Logic, a dynamic modal logic for specifying and reasoning about message-
passing in distributed systems. We study in particular how to express that participants make
commitments to each other about their future behaviour. This leads us to develop a kind
of ‘partial implication’ which we call pragmatic implication, and to an axiomatisation of the
action of offering a commitment. We then discuss how this logic can be understood in the
context of industrial practice.
Keywords: Distributed Systems; Dynamic Modal Logic; Service Commitments
(Received: November 30, 2024; Version: December 4, 2024)
Contents
1 Introduction 2
1.1 What is a distributed system? . . . . . . . . . . . . . . . . . . . 2
1.2 An informal example: back-scratching . . . . . . . . . . . . . . . 3
1.3 An example from distributed systems: packet forwarding . . . . 4
1.4 General observations . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Research objectives . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 The logic 6
2.1 Types and syntax . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 The in and out modalities . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Pragmatic implication . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Example: the PingCommit service commitment . . . . . . . . . 20
2.6 Expert mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.7 Using EOU and BOU to evaluate service commitments . . . . . . . 21
2.8 How to express offers, using axioms . . . . . . . . . . . . . . . . 22
DOI: 10.5281/zenodo.14251398 Anoma Research Topics | December 4, 2024 | 1–36
3 The operational framework 24
3.1 Differences between our logic and the real world . . . . . . . . . 24
3.2 The seven stages of a service commitment . . . . . . . . . . . . 26
3.2.1 Service offer creation . . . . . . . . . . . . . . . . . . . . 26
3.2.2 Service offer dissemination . . . . . . . . . . . . . . . . 27
3.2.3 Service offer negotiation . . . . . . . . . . . . . . . . . . 28
3.2.4 Service commitment activation . . . . . . . . . . . . . . 28
3.2.5 Service consumption . . . . . . . . . . . . . . . . . . . . 28
3.2.6 Service commitment evaluation . . . . . . . . . . . . . . 29
3.2.7 Service commitment termination . . . . . . . . . . . . . 29
3.3 Application scenario . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3.1 An extremely simple application scenario . . . . . . . . 29
3.3.2 A simple application scenario . . . . . . . . . . . . . . . 31
3.3.3 A silly case . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Conclusions 32
4.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.1 Extending Message Logic for real-world scenarios . . . 34
4.3.2 Building infrastructure around Message Logic . . . . . . 35
References 36
1. Introduction
1.1. What is a distributed system?
A distributed system consists of a set of participants (or processes) working
in collaboration by passing messages to one another. Furthermore, we assume
that messages are the only way for participants to communicate, and the only
observable of the system.1
In such a situation, the behaviour of a system is the messages that get sent,
and expectations on its behaviour are expectations about who sends messages,
the contents of the messages, and when the messages are sent. Accordingly, we
define:
Definition 1.1.1. A service commitment is an assertion that purports to restrict
messaging behaviour.
1 If participants have internal state, the only way this has any relevance to the system is if . . . they put it in a message.
We write ‘purports to’ because, as we shall see, there are (at least) three
concepts in play here:
1. making a service commitment,
2. upholding a service commitment, and
3. being evaluated by another participant to have upheld a service commit-
ment.
Matters are further complicated because service commitments may be conditional
(‘𝑝 will uphold 𝑋 , barring network outages’), probabilistic (‘𝑝 will uphold 𝑋 at
least half of the time’), and evaluation may be based on incomplete or gossiped
information (‘𝑝 has upheld 𝑋 so far, but we haven’t been doing business for very
long’ or ‘𝑞 told me that 𝑝 upheld 𝑋 , and I trust 𝑞’).
In this document, we will build a logical and operational framework within
which the intuitions of
• making and upholding (or not upholding) service commitments, and also
• evaluating whether other participants have done so,
can be formulated and made operationally useful. This logical framework should
be of interest to the following readers:
1. Programmers of distributed systems, who might benefit from a logical
framework within which to design and implement correct and safe algorithms
that make, communicate, evaluate, and act on service commitments.
2. Computer scientists, especially those interested in applying formal verifi-
cation to assert and prove correctness properties from a high-level logical
description.
3. Users, who may also want a high-level description of how a distributed
system handles its service commitments — one that is higher-level than
raw code, but more rigorous than a verbal description.
4. Mathematicians (especially logicians active in modal and epistemic logics)
who may be interested in our framework as a modal logic structure with
strong practical applications.
1.2. An informal example: back-scratching
We will start by introducing service commitments, using a simple informal
example. Even from this example, we will see that the simplicity is deceptive:
matters are not completely straightforward, and the ideas conceal quite a lot of
structure.
Example 1.2.1. Consider this (apparently) simple service commitment taken from
informal life:
‘If you scratch my back, I’ll scratch yours’.
This seems a straightforward enough commitment, but it is not. For, suppose
that you scratch my back — and then you wait, and you wait, but I do not scratch
yours. Am I in breach of my service commitment?
Maybe, but maybe not. It depends:
1. I could say ‘I said I would scratch your back, but I didn’t say when; be patient’.
When do you stop waiting and conclude that I broke my promise? A microsecond?
A day? A hundred years?
2. I could say ‘For my requirements, your backscratch was not of sufficient quality;
I’m not reciprocating on the basis of that’.
3. I could say ‘Yes, I did say I would scratch your back . . . but every time I came
to scratch your back, you were out’ or ‘. . . but you kept asking me at clearly
inappropriate times, like at 2am’, and so on.
4. I could say ‘Actually, I didn’t say that at all! Someone must have impersonated
me to get a free backscratch. Sorry for your trouble, but I can’t help’.
5. You could have been lying:
(a) You could have claimed to have scratched my back (without having actually
done so) and then you demand a ‘free’ backscratch from me.
(b) Regardless of whether I actually scratched your back and/or you actually
scratched mine, once we have been seen to communicate, you can attack my
reputation by reporting negative reviews: that I breached our agreement,
or that my backscratching was of poor quality. This may cause third parties
to do business with you rather than with me.
1.3. An example from distributed systems: packet forwarding
One of the key challenges of distributed systems is how information gets dissemi-
nated to the network of participants. Coordination in an all-to-all communication
pattern is too resource-intensive, especially if the network grows large. Instead,
participants coordinate with a smaller portion of the network (their peers) and
rely on these participants to receive or forward new information packets.
Example 1.3.1. Here is a packet relay service commitment that participants might
make to their peers:
‘if you send me a data packet destined for a participant 𝑝, then I will forward it
to 𝑝 as soon as possible’.
Again, this is not as straightforward as it seems:
1. What does “as soon as possible” mean? Perhaps the packet contains a vote
for a block proposal, which becomes irrelevant after a certain deadline.
2. What does “forward it to 𝑝” mean? Am I in breach of my service commit-
ment if I forward the packet to one of my peers 𝑞 and request that 𝑞 deliver
the packet to 𝑝, because I know that 𝑞 has 𝑝’s IP address (and I do not)?
3. What if I have successfully relayed 99 data packets, but then I fail to forward
the 100th? Am I in breach of my service commitment — or can I rely on
other participants excusing this, given my previous 99 successes?
4. How will you be able to tell if I actually sent the packet to 𝑝, or just claim
to have done so?
5. Am I responsible only for sending the packet, or am I also responsible for
node 𝑝 actually receiving it?
6. What if 𝑝 maliciously claims that it has not received the packet, even
though I have sent it?
7. Am I in breach of my service commitment if my failure to deliver the 100th
packet was due to an event outside of my reasonable control, such as some
major incident that takes down the entire network? How about if that
incident was initiated by 𝑝? What if it was provoked by you?
8. Am I in breach of my service commitment if 𝑝 left the network and cannot
receive any messages?
9. Am I in breach of my service commitment if I leave the network (voluntarily,
or perhaps not)?
1.4. General observations
Based on the previous two examples, we make some general observations about
service commitments:
1. Service commitments should be unambiguous. An omniscient observer
should be able to decide for sure what a service commitment means and
whether it has been kept. Participants may have only partial or incorrect
information, so may only be able to approximate this certainty, but a service
commitment should be something that has a truth-value, even if it may not
always be easy or even possible for participants in the network to know
for sure what it is.
2. Service commitments should be time-bounded. In practice, we care about
commitments that are bounded to a specific time interval (e.g. ‘the next 10
microseconds’ or ‘every Tuesday and Wednesday for the next six months’).
3. Service commitments can be probabilistic, and/or subject to rate-limiting and
other similar provisos.
4. Service commitments may be gossiped in the network. Participants may com-
municate amongst themselves the service commitments they know about,
and also evaluations of to what extent and how reliably these commitments
have been kept.
5. Service commitments cannot be made on behalf of someone else. The only
commitment that a participant 𝑝 can make is about its own behaviour.
(This is important especially in the presence of gossip; if 𝑝 says that 𝑞 made
a commitment, this is not in and of itself a commitment on 𝑞.)
1.5. Research objectives
Our challenge is to build a logical and operational framework within which the
intuitions in the previous sections can be formulated and then made operationally
useful in practical large scale distributed systems.
2. The logic
2.1. Types and syntax
Remark 2.1.1. We proceed as follows:
1. We will need to talk about cardinalities and proportions, so we start by defining
the extended rational numbers in Definition 2.1.3.
2. We will need to talk about data being passed in messages, so we define a simple
type system using base datatypes, tuples, sums, and powersets, in Definition 2.1.6
and Figure 1.
3. We define the terms and predicates of message logic in Definition 2.1.9 and
Figure 2.
2.1.1. Types
Notation 2.1.2. Write N = {0, 1, 2, . . . } for the natural numbers (starting at 0). If
𝑛 ∈ N, write N≥𝑛 = {𝑛′ ∈ N | 𝑛′ ≥ 𝑛} for the natural numbers greater than or equal to 𝑛.
Definition 2.1.3. Define the extended rational numbers Q∞ by
Q∞ = Q ∪ {∞}.
We let Q∞ have addition +, multiplication ∗, and division /, which agree with
those on Q where they coincide and are such that:
1. ∞ + 𝑥 = 𝑥 + ∞ = ∞.
𝛼, 𝛽, 𝛾 ::= (𝛿 ∈ BaseDatatype) | 𝛼 × · · · × 𝛼 | 𝛼 + · · · + 𝛼 | Pow(𝛼).
𝜁 ::= 𝛼 → 𝛼
Figure 1. Syntax of datatypes and function types (Definition 2.1.6)
𝜙 ::= ⊥ | Π^𝑟 𝑎 : 𝜙 . 𝜙 | P(𝑡) | @𝑡 𝜙 | in𝑝(𝑡) | out𝑝(𝑡) | BOU(𝜙) | EOU(𝜙)
𝑡 ::= (𝑎 ∈ VarSymb) | (𝑑 ∈ 𝛿 ∈ BaseDatatype) | f(𝑡) | here | now | { 𝑎 : 𝛼 | 𝜙 }
Figure 2. Syntax of terms and predicates (Definition 2.1.9)
2. ∞ ∗ 𝑥 = 𝑥 ∗ ∞ = ∞.
3. 𝑥/0 = ∞ for any 𝑥 (including 𝑥 = 0).
4. 𝑥/∞ = 0 for any 𝑥 (including 𝑥 = ∞).
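The arithmetic above can be sketched directly. Here is a minimal illustration in Python, with `Fraction` standing in for Q and a string tag for the single unsigned ∞; the representation and helper names are ours, not the paper's, and only the four laws of Definition 2.1.3 are taken from the text.

```python
# A minimal sketch of the extended rationals Q-infinity of Definition 2.1.3.
from fractions import Fraction

INF = "inf"  # the single unsigned infinity of Q-infinity

def q_add(x, y):
    # Rule 1: inf + x = x + inf = inf
    if x == INF or y == INF:
        return INF
    return x + y

def q_mul(x, y):
    # Rule 2: inf * x = x * inf = inf
    if x == INF or y == INF:
        return INF
    return x * y

def q_div(x, y):
    # Rule 4: x / inf = 0 for any x (including x = inf)
    if y == INF:
        return Fraction(0)
    # Rule 3: x / 0 = inf for any x (including x = 0)
    if y == 0:
        return INF
    return Fraction(x) / Fraction(y)
```

Note that 0/0 = ∞ under these laws; this choice matters later, when ratios of cardinalities are compared against thresholds.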
Remark 2.1.4. We currently have just one ∞ (rather than a positive or negative
one). Topologically, this means that Q∞ is a circle, where we can view 0 as being
at the bottom of the circle and ∞ at the top.2
Definition 2.1.5. We assume some base datatypes 𝛿 ∈ BaseDatatype:
1. The empty datatype Empty = {}.
2. The singleton datatype Singleton = {∗}.
3. The bool datatype Bool = {⊤, ⊥}.
4. The datatype of extended rationals Q∞ .
5. A set of primitive data 𝑑 ∈ PrimData (we may freely assume more than one
primitive datatype if required).
6. A partially-ordered set of times 𝑛 ∈ Time which for simplicity we identify with
some (possibly infinite) initial segment of N.3
7. A set of places 𝑝 ∈ Place (or processes or participants).
8. A finite set of message types 𝑚 ∈ MessageType.
9. We assume that the base datatypes are pairwise disjoint, so that (for example)
Time ∩ Place = ∅.
Definition 2.1.6. Define the syntax of datatypes and function types induc-
tively as per Figure 1.
2 The Wikipedia article on the point at infinity (permalink) is clear, and has good references.
3 An initial segment of N is either N or a set of the form {0, . . . , 𝑛} for some 𝑛 ∈ N. That is, it is a subset 𝐼 of N that is down-closed, meaning
that ∀𝑥, 𝑥 ′ ∈ N.(𝑥 ∈ 𝐼 ∧ 𝑥 ′ < 𝑥 ) ⇒ 𝑥 ′ ∈ 𝐼 . There is only one infinite initial segment of N, namely N itself.
Remark 2.1.7. Note the following:
1. For simplicity, we identify datatypes with their denotation; e.g. we use Q∞ both
as a syntactic symbol for ‘the datatype of extended rationals’ and as itself, namely
the set of extended rationals. Similarly, function-types are also function-spaces.
Our intended denotation for types will not change, so it makes sense to identify
a (data)type with its intended denotation.
2. We do not nest → in function-types: e.g. (𝛼 → 𝛽) → 𝛾 is not a legal function-
type, as per Definition 2.1.6. Neither is 𝛼 → (𝛽 → 𝛾), though (𝛼 × 𝛽) → 𝛾 is a
legal function-type.
There is no technical obstacle to doing this, but for now, we do not need the
power of explicit higher-order functions4 and it is helpful for clarity to create a
distinction between data, and functions on data.
3. In Definition 2.1.5(6) we identify time with a (possibly infinite) initial segment
of the natural numbers. In fancy language, time is a discrete, totally ordered,
connected partial order with a least element. Generalisations are doubtless
possible, but for now this will suffice.
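The down-closure condition of footnote 3 is easy to check mechanically for a finite set of naturals; a small sketch (the helper name is ours):

```python
# Check the down-closure condition from footnote 3: a finite set I of naturals
# is down-closed when x in I and x' < x together imply x' in I, i.e. I has the
# form {0, ..., n}.
def is_initial_segment(I):
    return all(x_prime in I for x in I for x_prime in range(x))
```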
2.1.2. Syntax
Definition 2.1.8. We fix some more data:
1. A set of function symbols f ∈ FSymb, to each of which is associated a function-
type, which we will write type(f). If f ∈ FSymb then we may write
f : 𝛼 → 𝛼′ for the assertion type(f) = (𝛼 → 𝛼 ′).
We may freely assume standard function symbols, including 0, 1, +, ∗, tupling
(-, . . . , -), and projections proj𝑖 , with their usual meanings and types. Meaning
will always be clear.
2. For each datatype 𝛼 we assume a cardinality function symbol
# : Pow(𝛼) → Q∞ .
If 𝑃 ⊆ 𝛼 then #𝑃 returns the (finite) cardinality of 𝑃 if 𝑃 is finite, and #𝑃 returns
∞ if 𝑃 is infinite.
3. A set of predicate symbols P ∈ PSymb to each of which is associated a datatype,
which we will write arity(P) and call the arity of P. If P ∈ PSymb then we may
write
P : 𝛼 for the assertion arity(P) = 𝛼 .
4 This comment is bogus in the sense that Pow (𝛼 ) is a legal datatype and it is just as higher-order as 𝛼 → 𝛼 , but operationally we will use
Pow (𝛼 ) as a powerset, not a function-space.
We may freely assume standard predicate symbols, such as =, <, and ≤, with
their usual meanings. Meaning will always be clear.
For each datatype 𝛼 we assume a set membership predicate symbol ∈ : 𝛼 ×
Pow(𝛼).
4. For each datatype 𝛼, we assume a disjoint set of variable symbols 𝑎, 𝑏, 𝑐, · · · ∈
VarSymb(𝛼).
We may annotate a variable with its type, as 𝑎 : 𝛼 or 𝑐 : 𝛾. This is especially
useful when writing a comprehension term, so that if 𝑎 ∈ VarSymb(𝛼) and 𝜙 is a
predicate then we may write { 𝑎 | 𝜙 } as { 𝑎 : 𝛼 | 𝜙 } . As typeset strings these look
different, but they denote identical abstract syntax — in the latter case, we just
remind ourselves of the (unique) type of 𝑎.5
Definition 2.1.9. Define the syntax of terms and predicates inductively as per
Figure 2.
We call a predicate or a term closed when it has no free variables.
Notation 2.1.10. Our syntax for datatypes from Figure 1 admits tuple types, and
our syntax for terms and predicates includes P(𝑡) (a predicate symbol applied to
a term) and f (𝑡) (a function symbol applied to a term).
This means that if we have a tuple (𝑥 1, . . . , 𝑥𝑛 ) and wish to apply a predicate
P or function symbol f to it, then we should write these P((𝑥 1, . . . , 𝑥𝑛 )) and
f ((𝑥 1, . . . , 𝑥𝑛 )). For our sanity and for that of the reader, we will elide double
brackets and shorten this to P(𝑥 1, . . . , 𝑥𝑛 ) and f (𝑥 1, . . . , 𝑥𝑛 ).
Remark 2.1.11.
1. We could easily unify function symbols and predicates, using the base datatype
of booleans Bool = {⊥, ⊤}. Boolean connectives like ∧ and ⇒ could also become
function symbols — but not ∀, unless we also admit higher types, in which case
Pow(𝛼) could become 𝛼 → Bool.
However, note that if predicates were functions then the type corresponding
to P : 𝛼 would not be 𝛼 → Bool; it would be (Time × Place) × 𝛼 → Bool. See
Definition 2.2.1(4) and Remark 2.2.2(3).
2. There is some sneaky subtyping going on, because Time ⊆ Q∞ . This lets us
write things like now+1 and not worry about edge cases.6
5 If you are asked to buy milk and are provided with a shopping list in which the word ‘milk’ is circled three times in red ballpoint pen,
perhaps because you forgot to buy milk in the past, then this does not change the syntax of the shopping list. The red circle is metadata,
intended to help you notice the word ‘milk’ (and perhaps laden with other meaning, such as judgement on your past failures and warning
not to repeat them), but the word ‘milk’ is still just the word ‘milk’ and the syntax of the underlying shopping list is unaffected. In the
same way, ‘: 𝛼 ’ is metadata added to the syntax { 𝑎 | 𝜙 } to make sure that we know what the type of 𝑎 is.
6 There may be other ways to set things up.
Time
8 • • • • • • • • • •
7 • • • • • • • • • •
6 • • • • • • • • • •
5 • • • • • • • • • •
4 • • • • • • • • • •
3 • • • • • • • • • •
2 • • • • • • • • • •
1 • • • • • • • • • •
0 • • • • • • • • • •
Place
Figure 3. Picture of a model with 10 places and 9 times
2.2. Semantics
Definition 2.2.1. A model 𝜍 consists of the following data:
1. Sets Time𝜍 and Place𝜍 of times and places. We will usually omit the subscript
and just write these Time and Place.
2. Zero, one, or more sets of primitive data as convenient and required.
3. For each function symbol f ∈ FSymb of type (𝛼 → 𝛼 ′) = type(f), an interpreta-
tion function 𝜍 (f) : 𝛼 → 𝛼 ′.
The cardinality function # : Pow(𝛼) → Q∞ must be interpreted to map a finite
set to its cardinality, and an infinite set to ∞.
4. For each predicate symbol P ∈ PSymb of type 𝛼 = type(P), an interpretation
function 𝜍 (P) : (Time × Place × 𝛼) → Bool.
The set membership predicate symbol ∈ : 𝛼 × Pow(𝛼) must be interpreted as
actual set membership, which maps (𝑥, 𝑋 ) to ⊤ if 𝑥 ∈ 𝑋 and to ⊥ if 𝑥 ∉ 𝑋 .
5. A function 𝜍 (msg) : (Time × Place × Place × Data) → Bool.
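The heart of a model is the message-passing function 𝜍(msg). A tiny concrete model in the sense of Definition 2.2.1 can be sketched as follows; the representation of 𝜍(msg) as a set of (time, sender, receiver, payload) tuples, and all concrete times, places, and payloads, are our own illustration.

```python
# A tiny concrete model in the sense of Definition 2.2.1, restricted to the
# message-passing component sigma(msg).
from dataclasses import dataclass

@dataclass
class Model:
    times: range                   # an initial segment of N, e.g. range(9)
    places: frozenset              # the set Place
    msgs: frozenset = frozenset()  # tuples (n, p, p2, x): at time n, p sends p2 payload x

    def msg(self, n, p, p2, x):
        # sigma(msg)(n, p, p2, x) is True iff the tuple is in the relation
        return (n, p, p2, x) in self.msgs

m = Model(times=range(9), places=frozenset({"p", "q"}),
          msgs=frozenset({(3, "p", "q", "ping")}))
```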
Remark 2.2.2 (Comments on the model).
1. Intuitively, a model is shaped like a rectangle, with places 𝑝 ∈ Place along the
x-axis and times 𝑛 ∈ Time along the y-axis, as illustrated in Figure 3.
2. The meaning 𝜍 (f) of a function symbol f ∈ FSymb does not vary over time and
space, in the sense that if f : 𝛼 → 𝛼 ′, then 𝜍 (f) is just a function in 𝛼 → 𝛼 ′.
Intuitively, this reflects the fact that functions represent things that we do to data
(like ‘addition’ or ‘multiplication’). For example, if 3! returns 6 here and now,
then 3! should return 6 tomorrow and somewhere else.
3. The meaning 𝜍 (P) of a predicate symbol P ∈ PSymb does vary over time and
space, in the sense that if P : 𝛼, then 𝜍 (P) is a function in (Time × Place × 𝛼) →
Bool.
[[𝑑]]𝑛,𝑝,𝜍 = 𝑑
[[f(𝑡1, . . . , 𝑡ar(f))]]𝑛,𝑝,𝜍 = 𝜍(f)([[𝑡1]]𝑛,𝑝,𝜍, . . . , [[𝑡ar(f)]]𝑛,𝑝,𝜍)
[[here]]𝑛,𝑝,𝜍 = 𝑝
[[now]]𝑛,𝑝,𝜍 = 𝑛
[[{ 𝑎 : 𝛼 | 𝜙 }]]𝑛,𝑝,𝜍 = {𝑥 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑥]}
Above, f ranges over function symbols (Definition 2.1.8(1)); 𝜙 ranges over predi-
cates (Figure 2); and 𝑥 ranges over data (times, places, extended rationals, or any
other data).
Figure 4. Meaning of terms (Definition 2.2.5)
Intuitively, this reflects that predicates represent things that we say about data,
and this can depend on time and place. For example, we might assert ‘it’s raining’
here (in London) and now (today), and then ‘it’s not raining’ tomorrow.7
4. Suppose 𝑛 ∈ Time and 𝑝, 𝑝 ′ ∈ Place and 𝑥 ∈ 𝛼. Then intuitively,
𝜍 (msg) (𝑛, 𝑝, 𝑝 ′, 𝑥) = ⊤
means that at time 𝑛, 𝑝 sends a message to 𝑝 ′ with payload 𝑥.
We do not need to worry about multiple message types here — we only need one
msg — because we gave ourselves a datatype MessageType of message types in
Definition 2.1.5(8), so 𝛼 can have the form MessageType × 𝛼 ′ (if we wish) so that
the payload 𝑥 = (𝑚, 𝑥 ′) contains information about what type of message it is.
In this model of message-passing, messages arrive instantly and reliably (no la-
tency; no message loss; no corruption of payload). Obviously, this is a (deliberate)
simplification of real-world behaviour.
Definition 2.2.3. Suppose 𝜍 is a model (Definition 2.2.1) and 𝑛 ∈ Time𝜍 . Then
we define:
1. 𝜍↓𝑛 is the model obtained by restricting 𝜍 to times until 𝑛, including 𝑛.
2. 𝜍↑𝑛 is the model obtained by restricting 𝜍 to times from 𝑛, including 𝑛.
Remark 2.2.4. Pedant point: We distinguish in Figure 5 between premises above a
line, that are part of the derivation-tree, and side-conditions above a line, that are
not part of the derivation-tree and which we put in brackets.8 For example in
7 Bonus fact: ‘rainy London’ is a myth. It rains more in Miami and Orlando. Plus, London doesn’t get hurricanes.
8 Sometimes, this distinction can really matter. True story: the first author once got a paper rejected because the referees didn’t understand
this distinction.
𝐴 = {𝑥 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑥]}
𝐵 = {𝑥 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑥] ∧ 𝜓[𝑎:=𝑥]}
(#𝐵/#𝐴 ≥ 𝑟)
──────────────────────────── (PiValid)
𝑛, 𝑝 ⊨𝜍 Π^𝑟 𝑎 : 𝜙 . 𝜓

𝜍(P)(𝑛, 𝑝, [[𝑡]]𝑛,𝑝,𝜍)
──────────────────────────── (PValid)
𝑛, 𝑝 ⊨𝜍 P(𝑡)

𝜍(msg)(𝑛, 𝑝′, 𝑝, [[𝑡]]𝑛,𝑝,𝜍)
──────────────────────────── (inValid)
𝑛, 𝑝 ⊨𝜍 in𝑝′(𝑡)

𝜍(msg)(𝑛, 𝑝, 𝑝′, [[𝑡]]𝑛,𝑝,𝜍)
──────────────────────────── (outValid)
𝑛, 𝑝 ⊨𝜍 out𝑝′(𝑡)

[[𝑡]]𝑛,𝑝,𝜍 ∈ [[𝑡′]]𝑛,𝑝,𝜍
──────────────────────────── (∈Valid)
𝑛, 𝑝 ⊨𝜍 𝑡 ∈ 𝑡′

(𝑛 ≥ #Time)
──────────────────────────── (tOFlow)
𝑛, 𝑝 ⊨𝜍 𝜙

[[𝑡]]𝑛,𝑝,𝜍, 𝑝 ⊨𝜍 𝜙    ([[𝑡]]𝑛,𝑝,𝜍 ∈ N)
──────────────────────────── (tValid)
𝑛, 𝑝 ⊨𝜍 @𝑡 𝜙

𝑛, [[𝑡]]𝑛,𝑝,𝜍 ⊨𝜍 𝜙    ([[𝑡]]𝑛,𝑝,𝜍 ∈ Place)
──────────────────────────── (pValid)
𝑛, 𝑝 ⊨𝜍 @𝑡 𝜙

𝑛, 𝑝 ⊨𝜍↑𝑛 𝜙
──────────────────────────── (BOUValid)
𝑛, 𝑝 ⊨𝜍 BOU(𝜙)

𝑛, 𝑝 ⊨𝜍↓𝑛 𝜙
──────────────────────────── (EOUValid)
𝑛, 𝑝 ⊨𝜍 EOU(𝜙)
Figure 5. Valid sequents (Definition 2.2.5)
(tValid), [[𝑡]] 𝑛,𝑝,𝜍 , 𝑝 ⊨𝜍 𝜙 is a premise of the rule, whereas [[𝑡]] 𝑛,𝑝,𝜍 ∈ N is not: it
is a side-condition which must be satisfied in order for the instance of the rule to
be valid.
Definition 2.2.5. Suppose that:
1. 𝜍 is a model.
2. 𝑛 ∈ Time is a time.
3. 𝑝 ∈ Place is a place.
Suppose that 𝑡 is a closed term and 𝜙 is a closed predicate (ones having no free
variable symbols). Then we define
1. [[𝑡]] 𝑛,𝑝,𝜍 the denotation of 𝑡 as per the rules in Figure 4, and
2. the set of valid sequents 𝑛, 𝑝 ⊨𝜍 𝜙 in Figure 5.
If 𝑛, 𝑝 ⊨𝜍 𝜙 is valid, then we say that 𝜙 holds at time 𝑛 and place 𝑝.
Remark 2.2.6 (Comments on the rules). We discuss the rules in Figure 5:
1. Rule (∈Valid):
This is routine: 𝑡 ∈ 𝑡 ′ holds at time 𝑛 and place 𝑝 when the denotation of 𝑡 at time
𝑛 and place 𝑝 is an element of the denotation of 𝑡 ′ at time 𝑛 and place 𝑝.
2. Rule (PValid):
P(𝑡) holds at time 𝑛 and place 𝑝 when 𝜍 (P)(𝑥) returns ⊤ at time 𝑛 and place 𝑝,
where 𝑥 is the value denoted by 𝑡 at time 𝑛 and place 𝑝.
3. Rules (inValid) and (outValid):
Rules (inValid) and (outValid) specify that:
• in𝑝 ′ (𝑡) holds at time 𝑛 and place 𝑝, when 𝑝 receives a message from 𝑝 ′ at
time 𝑛 with payload 𝑡, and
• out𝑝 ′ (𝑡) holds at time 𝑛 and place 𝑝, when 𝑝 sends a message to 𝑝 ′ at time
𝑛 with payload 𝑡.
Thus, the message-passing relation 𝜍 (msg)(𝑛, 𝑝, 𝑝 ′, 𝑥) in the model manifests
itself in the predicate language as two modalities in𝑝 (𝑡) and out𝑝 (𝑡), as per
Remark 2.3.1.
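Both modalities read off the same underlying msg relation, from the receiver's and the sender's points of view respectively. A hedged sketch, with the relation again represented as a set of (time, sender, receiver, payload) tuples of our own choosing:

```python
# Sketch of rules (inValid) and (outValid) over a concrete msg relation.
msgs = {(3, "p", "q", "ping")}

def holds_in(n, p, p_from, x):
    # n, p |= in_{p_from}(x): at time n, p receives payload x from p_from
    return (n, p_from, p, x) in msgs

def holds_out(n, p, p_to, x):
    # n, p |= out_{p_to}(x): at time n, p sends payload x to p_to
    return (n, p, p_to, x) in msgs
```

By construction, `holds_out(n, p, q, x)` and `holds_in(n, q, p, x)` coincide, which is the content of Lemma 2.3.2(1).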
4. Rule (tOFlow):
This rule can only be triggered if Time is a finite initial segment of N; if Time is
infinite, i.e. if time is all of N, then (tOFlow) cannot be used, because 𝑛 ≥ #Time
is impossible.
If 𝑛 is a time greater than or equal to #Time — e.g. suppose time is finite and
we try to evaluate @now+1⊥ on the final day of time — then every predicate 𝜙 is
deemed valid (including ⊥ ). We call this a time overflow. This is expressed by
rule (tOFlow).
In contrast, if 𝑛 denotes a negative time — as might happen if for example we try
to evaluate @now-1⊤ on day 0 — then every predicate is deemed invalid (including
⊤ ). We call this a time underflow. This is expressed by the absence of any rule
that could derive a valid sequent of the form 𝑛, 𝑝 ⊨𝜍 𝜙 where 𝑛 < 0.
Why should we deem everything to hold after the end of time (even false), and
nothing to hold before the beginning of time (even true)? There is a specific
technical reason for this: typically service commitments have the form
‘if something happens today, I will do something in the future’
or
‘if something happened in the past, then I must do something now’.
By letting time underflows be invalid (even ⊤ ) and time overflows be valid (even
⊥ ), this makes service commitments fail gracefully at the edge cases, in the sense
that they become trivially true.
The price paid is that a service commitment of the form ‘if something happens
tomorrow, I will do something today’ may fail ungracefully. If this becomes an
issue then we can reevaluate our design choices, but so far pushing ungraceful
behaviour into this corner case seems to be helpful, because no examples of
interest (that we have seen so far) actually use it.
5. Rule (pValid):
Suppose that at time 𝑛 and place 𝑝, 𝑡 denotes a place 𝑝 ′. Then @𝑡 𝜙 holds at time
𝑛 and place 𝑝, if 𝜙 holds at time 𝑛 and place 𝑝 ′. Thus, @𝑡 𝜙 ‘teleports’ us to 𝑡.
6. Rule (tValid):
Suppose that at time 𝑛 and place 𝑝, 𝑡 denotes a natural number 𝑛′. Then @𝑡 𝜙
holds at time 𝑛 and place 𝑝, if 𝜙 holds at 𝑛′ and place 𝑝.
Note the subtlety here that 𝑛′ need not be a time; 𝑛′ could be greater than #Time,
or 𝑛′ could be negative.
• If 𝑛′ is negative then by design our rules will not derive a valid sequent, so
in this case @𝑡 𝜙 cannot hold at time 𝑛 and place 𝑝.
• If 𝑛′ is greater than #Time then 𝑛′, 𝑝 ⊨𝜍 𝜙 holds by rule (tOFlow), so
𝑛, 𝑝 ⊨𝜍 @𝑡 𝜙 holds.
• If 𝑛′ ∈ Time then the sequent 𝑛′, 𝑝 ⊨𝜍 𝜙 may or may not be valid, just
depending on the rest of the rules.
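The three cases above can be sketched as a small dispatcher; `evaluate` is a hypothetical stand-in for the full satisfaction relation, and the concrete #Time is our own.

```python
# Sketch of the time edge cases for @t: overflowed times validate every
# predicate (rule (tOFlow)); underflowed (negative) times validate none (no
# rule applies); in-range times defer to the rest of the semantics.
NUM_TIMES = 9  # #Time, for a finite model with times 0..8

def holds_at(n_prime, p, phi, evaluate):
    if n_prime >= NUM_TIMES:
        return True   # time overflow: everything holds, even falsity
    if n_prime < 0:
        return False  # time underflow: nothing holds, even truth
    return evaluate(n_prime, p, phi)
```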
7. (BOUValid) and (EOUValid):
• EOU is a modality that ends time when it is invoked (thus deleting the
future). EOU stands for ‘End Of Time’ (a riff on ‘EOF’ for ‘End Of File’ and
‘EOS’ for ‘End Of String’).
• BOU is a modality that begins time when it is invoked (thus deleting the
past). BOU stands for ‘Beginning Of Time’ (a riff on ‘BOF’ for ‘Beginning
of File’).
Both modalities work by truncating (restricting) the model, as per Definition 2.2.3.
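The truncations of Definition 2.2.3 can be sketched on the tuple representation of the msg relation used earlier (a representation of our own, not the paper's):

```python
# Sketch of the truncations that (BOUValid)/(EOUValid) evaluate against:
# EOU(phi) checks phi in the model cut down to times <= n (deleting the
# future); BOU(phi) checks phi in the model from n onwards (deleting the
# past). Messages are (time, sender, receiver, payload) tuples.
def restrict_down(msgs, n):
    # the restricted model: keep times up to and including n
    return {m for m in msgs if m[0] <= n}

def restrict_up(msgs, n):
    # the restricted model: keep times from n onwards, including n
    return {m for m in msgs if m[0] >= n}
```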
8. Rule (PiValid):
Intuitively, Π^𝑟 𝑎 : 𝜙 . 𝜓 holds at time 𝑛 and place 𝑝 when the assertion

‘𝜙[𝑎:=𝑣] implies 𝜓[𝑎:=𝑣]’

holds for at least 𝑟 ∗ 100 percent of the values 𝑣 that make 𝜙[𝑎:=𝑣] hold. It may
hold for more, but it cannot hold for less.
So if 𝜙[𝑎:=𝑣] holds for 10 values of 𝑣, and of those values 𝜓[𝑎:=𝑣] holds for
9 values, then Π^0.9 𝑎 : 𝜙 . 𝜓 and Π^0.8 𝑎 : 𝜙 . 𝜓 hold, and Π^1 𝑎 : 𝜙 . 𝜓 does not.
Π^1 𝑎 : 𝜙 . 𝜓 is equivalent to what we would normally write just as ∀𝑎.𝜙 ⇒ 𝜓.
More discussion of this is in Remark 2.4.1.
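Rule (PiValid) can be sketched over a finite candidate domain; the function names and the concrete running example (10 witnesses, 9 of which satisfy the conclusion) are ours.

```python
# Sketch of rule (PiValid): the implication holds when the witnesses of
# phi-and-psi are at least fraction r of the witnesses of phi. An empty
# witness set gives #B/#A = 0/0 = infinity in Q-infinity (Definition
# 2.1.3(3)), so the implication then holds for every r.
from fractions import Fraction

def pi_holds(r, domain, phi, psi):
    A = [v for v in domain if phi(v)]
    B = [v for v in A if psi(v)]
    if not A:
        return True  # 0/0 = infinity >= r
    return Fraction(len(B), len(A)) >= r

# 10 witnesses of phi, 9 of which also satisfy psi:
domain = range(10)
phi = lambda v: True
psi = lambda v: v != 0
```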
Remark 2.2.7 (Design choices for time). We consider some design choices in
Figure 4 having to do with how time (and place) are treated:
1. We continue a discussion from Remark 2.2.2(2&3).
The meaning 𝜍 (f) of a function symbol f in the clause
[[f(𝑡1, . . . , 𝑡ar(f))]]𝑛,𝑝,𝜍 = 𝜍(f)([[𝑡1]]𝑛,𝑝,𝜍, . . . , [[𝑡ar(f)]]𝑛,𝑝,𝜍)
in Figure 4 does not depend on time or place. Put another way, the clause is not
this:
[[f(𝑡1, . . . , 𝑡ar(f))]]𝑛,𝑝,𝜍 = 𝜍(f)(𝑛, 𝑝, [[𝑡1]]𝑛,𝑝,𝜍, . . . , [[𝑡ar(f)]]𝑛,𝑝,𝜍)
Contrast with rule (PValid) for P in Figure 5, which does let the meaning of P
vary in time and place.
2. [[𝑡]]𝑛,𝑝,𝜍 in Figure 4 interprets a term 𝑡 in the context of a time 𝑛, a place 𝑝, and
an interpretation 𝜍. Thus as per Figure 4
• [[here]] 𝑛,𝑝,𝜍 = 𝑝, reflecting that ‘here’ means ‘our current location’, and
• [[now]] 𝑛,𝑝,𝜍 = 𝑛, reflecting the fact that ‘now’ means ‘our current time’.
Note that participants in a distributed system may not have access to a global
clock (they may have local clocks, but there is no guarantee that these are
synchronised [Lam78]), so [[-]] 𝑛,𝑝,𝜍 evaluates a reality which is known to an
omniscient observer, but which may not necessarily be directly accessible to
local observers within the system.
We could model local clocks of participants in various ways, e.g. just by providing
a constant-symbol along with suitable axioms (e.g. an axiom that local time
should be within some 𝜖 of ‘real’ time).
3. Concepts such as ‘today’ (for whatever this means to a participant) can be
expressed manually by computations on time, e.g. designing a predicate to
describe the times between the latest previous midnight and the earliest next
midnight (whatever that means).
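The local-clock idea above can be sketched as a simple check; all names are ours, and the paper only suggests this as one possible option.

```python
# Hedged sketch of the local-clock axiom from Remark 2.2.7(2): a place's
# local clock reading is constrained to lie within epsilon of the global
# ('real') time known to the omniscient observer.
def clock_axiom_holds(local_clock, n, epsilon):
    # local_clock(n) is the place's local reading at global time n
    return abs(local_clock(n) - n) <= epsilon
```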
2.3. The in and out modalities
Remark 2.3.1. Our predicate syntax from Figure 2 features predicate-formers
in𝑝 (𝑡) and out𝑝 (𝑡). These are two natural modal operators (arguably the two
natural modal operators) corresponding to the message-passing primitive 𝜍 (msg)
in the model from Definition 2.2.1(5).
There is of course much precedent for this: for instance recall how the accessi-
bility relation on possible worlds in Kripke models gives rise to two modalities □
and ^ [BRV01, Definition 1.9].
A simple lemma expresses the connection between ‘in’ and ‘out’:
Lemma 2.3.2. Suppose 𝑛 ∈ Time and 𝑝, 𝑝′ ∈ Place and 𝑡 is a term and 𝑑 is a datum
of the same type as 𝑡, and suppose 𝜍 is a model. Then:
1. 𝑛, 𝑝 ⊨𝜍 out𝑝′(𝑑) if and only if 𝑛, 𝑝′ ⊨𝜍 in𝑝(𝑑).
2. 𝑛, 𝑝 ⊨𝜍 out𝑝′(𝑡) if and only if 𝑛, 𝑝′ ⊨𝜍 in𝑝(@𝑝 𝑡).
Proof. We consider each part in turn:
1. We check the derivation rules in Figure 5 and see that both hold if and only if
𝜍(msg)(𝑛, 𝑝, 𝑝′, [[𝑡]]𝑛,𝑝,𝜍) holds.
2. From part 1 of this result, noting that [[@𝑝 𝑡]] 𝑛,𝑝 ′,𝜍 = [[𝑡]] 𝑛,𝑝,𝜍 . □
Remark 2.3.3. In view of Lemma 2.3.2, the reader might wonder if we can express
out using in, analogously to how (for example) we can write 𝑃 ∨ 𝑄 = ¬(¬𝑃 ∧ ¬𝑄)
or ◇𝑃 = ¬□¬𝑃.
It is true that we can express out𝑞(𝑥) using in𝑝(𝑥), but this turns out to be
quite unnatural:

out𝑞(𝑥) is ∃𝑝. 𝑝 = here ∧ @𝑞 in𝑝(𝑥).

Above, ∃𝑝 is quantification over places, and = is an equality relation on places.
The situation here is not as for 𝑃 ∨ 𝑄 = ¬(¬𝑃 ∧ ¬𝑄) or ◇𝑃 = ¬□¬𝑃, because
∃𝑝. 𝑝 = here ∧ @𝑞 in𝑝(𝑥) is more complicated than out𝑞(𝑥) (involving two powerful
constructs: existential quantification over places, and equality) whereas ¬(¬𝑃 ∧ ¬𝑄)
and ¬□¬𝑃 are longer than, but not more complicated than, 𝑃 ∨ 𝑄 and ◇𝑃
respectively.
Thus, we can say that in and out are both natural primitives in our logic.
By similar arguments we can see why we use the modal-flavoured in and
out instead of referring directly to the underlying message-passing relation in
the model. Suppose we assume a predicate symbol msg in the syntax that gets
interpreted directly as 𝜍(msg). Then we can express in𝑝(𝑥) and out𝑞(𝑥) as
follows:

in𝑝(𝑥) is ∃𝑞. 𝑞 = here ∧ msg(now, 𝑝, 𝑞, 𝑥) and
out𝑞(𝑥) is ∃𝑝. 𝑝 = here ∧ msg(now, 𝑝, 𝑞, 𝑥).
One reason that modal logic is useful is that, as we see above, modalities can be
clearer and more convenient than direct reference to an underlying relational
structure, if we want to talk about properties from the point of view of points in
the model — which is what we will mostly want to do in this document.
2.4. Pragmatic implication
Remark 2.4.1. A pragmatic implication9 has the form

Π^𝑟_𝑐 𝑎 : 𝜙. 𝜓
where
• 𝑎 is a variable symbol of type 𝛼,
• 𝜙 and 𝜓 are predicates,
• 𝑟 ∈ [0, 1] is a ratio, and
• 𝑐 ≥ 0 is a threshold.
To evaluate 𝑛, 𝑝 ⊨𝜍 Π^𝑟_𝑐 𝑎 : 𝜙. 𝜓, we define

𝐴 = {𝑥 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑥]} and
𝐵 = {𝑥 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑥] ∧ 𝜓[𝑎:=𝑥]} = {𝑥 ∈ 𝐴 | 𝑛, 𝑝 ⊨𝜍 𝜓[𝑎:=𝑥]}
and we proceed as follows:
9 The Π notation quotes the standard notation for dependent product (permalink), which is a generalised implication.
• If #𝐴 < 𝑐 then we return ⊤ .
• If #𝐴 ≥ 𝑐 and #𝐵/#𝐴 ≥ 𝑟 then we return ⊤ .
• If #𝐴 ≥ 𝑐 and #𝐵/#𝐴 < 𝑟 then we return ⊥ .
The pragmatic implication treated in Figure 5 lacks the threshold 𝑐, but we shall
see in Example 2.4.3(6) that the version with the threshold is expressible using
the version without.
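The evaluation clauses above can be made concrete with a small executable sketch (ours, not part of the paper's formalism; all names are illustrative). Here `phi` and `psi` play the roles of 𝜙[𝑎:=𝑥] and 𝜓[𝑎:=𝑥] over a finite domain, and we additionally treat an empty 𝐴 as vacuously true, an assumption about an edge case the clauses do not mention.

```python
# Sketch of evaluating a pragmatic implication over a finite domain.
# `phi` and `psi` stand in for the instantiated predicates phi[a:=x], psi[a:=x].
def pragmatic_implication(domain, phi, psi, r, c=0):
    A = [x for x in domain if phi(x)]        # instances where phi holds
    B = [x for x in A if psi(x)]             # ...where psi also holds
    if len(A) < c or len(A) == 0:            # below threshold (or empty): trivially true
        return True
    return len(B) / len(A) >= r              # kept-fraction meets the ratio

# 12 pings, 11 answered; commitment: answer at least 0.9, given at least 10 pings.
answered = set(range(11))
print(pragmatic_implication(range(12), lambda x: True, lambda x: x in answered, 0.9, c=10))
```

With only 9 pings the threshold clause fires and the commitment holds vacuously, whatever the answered fraction.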
Some notation will be useful in Example 2.4.3 and later:
Notation 2.4.2. Suppose that msgtyp ∈ MessageType is a message type and sup-
pose 𝑝 ∈ Place is a place. Define syntactic sugar msgtypIn𝑝 (𝑡) and msgtypOut𝑝 (𝑡)
and msgtypIn𝑝 () and msgtypOut𝑝 () as follows:
msgtypIn𝑝 (𝑡) is in𝑝 (msgtyp, 𝑡)
msgtypOut𝑝 (𝑡) is out𝑝 (msgtyp, 𝑡)
msgtypIn𝑝 () is in𝑝 (msgtyp)
msgtypOut𝑝 () is out𝑝 (msgtyp).
Example 2.4.3.
1. Π^1_0 𝑎 : 𝜙. 𝜓 expresses a meaning that we would write in normal predicate logic
as ∀𝑎. 𝜙 ⇒ 𝜓. Thus, we can express 𝜙 ⇒ 𝜓 just as

𝜙 ⇒ 𝜓 is Π^1_0 𝑎 : 𝜙. 𝜓

for 𝑎 a fresh variable symbol that is not free in 𝜙 or 𝜓.
2. ⊤ is just ⊥ ⇒ ⊥ and ¬𝜙 is just 𝜙 ⇒ ⊥ .
3. ∀𝑎. 𝜙 is just Π^1_0 𝑎 : ⊤. 𝜙 and ∃𝑎. 𝜙 is just ¬∀𝑎. ¬𝜙.
4. Suppose 𝜙 and 𝜙′ are predicates. Define disjunction 𝜙 ∨ 𝜙′ and conjunction
𝜙 ∧ 𝜙′ by

𝜙 ∨ 𝜙′ is (¬𝜙) ⇒ 𝜙′        𝜙 ∧ 𝜙′ is ¬(¬𝜙 ∨ ¬𝜙′).
5. Suppose 𝑎 is a variable symbol with type 𝛼 and 𝑡 is a term with type Pow(𝛼).
Then define bounded existential and universal quantification ∃ 𝑎∈𝑡 .𝜙 and
∀𝑎∈𝑡 .𝜙 by
∃𝑎∈𝑡. 𝜙 is ∃𝑎. 𝑎∈𝑡 ∧ 𝜙        ∀𝑎∈𝑡. 𝜙 is ∀𝑎. 𝑎∈𝑡 ⇒ 𝜙.
6. Threshold.
To express Π^𝑟_𝑐 𝑎 : 𝜙. 𝜓 using Π^𝑟 𝑎 : 𝜙. 𝜓, we just write

(∃𝑎_1, …, 𝑎_𝑐. ⋀_{1≤𝑖<𝑗≤𝑐} (𝜙[𝑎:=𝑎_𝑖] ∧ 𝑎_𝑖 ≠ 𝑎_𝑗)) ⇒ Π^𝑟 𝑎 : 𝜙. 𝜓.
In practice we would want to offer the user the version with the threshold as
a primitive, but the above encoding shows that this makes no difference to
expressivity.
7. To express that over the next 1000 time steps (not including now) we will respond
to a fraction of (at least) 0.9 of ping requests, assuming we get at least 10 ping
requests, we could write

pingCommit = Π^0.9_10 𝑛′, 𝑞 : (now < 𝑛′ ≤ now+1000 ∧ @𝑛′ pingIn𝑞()). @𝑛′+1 pongOut𝑞().
As per Notation 2.4.2, pingIn𝑞 () above is shorthand for in𝑞 (ping) where ping :
MessageType, and similarly for pongOut𝑞 (). We also assume above that 𝑞 can
send at most one ping message to 𝑝 per timestamp.
8. To express that over the next 1000 time steps (not including now) we will relay
a fraction of 0.9 of messages of type m : MessageType from a particular 𝑞 to a
particular 𝑞′, assuming we get at least 10 such requests, we could write

relayCommit𝑞→𝑞′ = Π^0.9_10 𝑛′, 𝑣 : (now < 𝑛′ ≤ now+1000 ∧ @𝑛′ mIn𝑞(𝑣)). @𝑛′+1 mOut𝑞′(𝑣).

As per Notation 2.4.2, mIn𝑞(𝑣) is shorthand for in𝑞((m, 𝑣)), and similarly for
mOut𝑞′(𝑣).
9. To express that over the next 1000 time steps (not including now) we will relay a
fraction of 0.9 of any type of message from any 𝑞 to a particular 𝑞′, assuming
we get at least 10 such requests, we could write

relayCommit∗→𝑞′ = Π^0.9_10 𝑛′, 𝑞, 𝑣 : (now < 𝑛′ ≤ now+1000 ∧ @𝑛′ in𝑞(𝑣)). @𝑛′+1 out𝑞′(𝑣).
10. Composition of service commitments: A service commitment is just a predicate.
Composition of service commitments comes ‘for free’ because predicates
can be combined using logical connectives, like ⇒.
For example, to express that if 𝑞′ will commit to pingCommit then we will provide
relayCommit∗→𝑞 ′ , we could write:
(@𝑞 ′ pingCommit) ⇒ relayCommit∗→𝑞 ′ .
11. Rate limiting: To express that from now onwards, in any 1000 units of time at
most 10 pings arrive, we could write:

∀𝑛. 𝑛 ≥ now ⇒ @𝑛 Π^1_11 𝑛′ : (𝑛 ≤ 𝑛′ < 𝑛+1000 ∧ @𝑛′ ∃𝑞. pingIn𝑞()). ⊥.
To express that there are at most 10 datums, just use existential quantification
and inequality checks.
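The commitments in items 7–9 can be checked mechanically on a finite model. The following sketch (our illustration; the trace, names, and encoding are assumptions) represents 𝜍(msg) as a set of (time, sender, receiver, datum) tuples and checks the pingCommit-style property of item 7 at participant p:

```python
# Check: over times now+1..now+horizon, at least a fraction r of pings received
# from q are answered with a pong the next step, provided at least c pings arrive.
def ping_commit(msg, p, q, now, horizon=1000, r=0.9, c=10):
    pings = [n for n in range(now + 1, now + horizon + 1)
             if (n, q, p, "ping") in msg]                 # in_q(ping) at p, time n
    if len(pings) < c:
        return True                                       # threshold not reached
    ponged = [n for n in pings if (n + 1, p, q, "pong") in msg]
    return len(ponged) / len(pings) >= r

# Toy trace: q pings p at times 1..12; p answers all but the time-3 ping.
msg = {(n, "q", "p", "ping") for n in range(1, 13)}
msg |= {(n + 1, "p", "q", "pong") for n in range(1, 13) if n != 3}
print(ping_commit(msg, "p", "q", now=0))  # 11/12 >= 0.9, so the commitment is kept
```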
2.5. Example: the PingCommit service commitment
Some notation will be useful:
Notation 2.5.1. Suppose 𝜙 is a predicate.
1. Define a tomorrow modality tomorrow(𝜙) by:
tomorrow(𝜙) is @now+1𝜙 .
2. Define a henceforth (or from today) modality henceforth(𝜙) by:
henceforth(𝜙) is ∀𝑛∈Time. 𝑛 ≥ now ⇒ @𝑛 𝜙.
Example 2.5.2. Let’s unpack the meaning of 𝑛, 𝑝 ⊨𝜍 tomorrow(𝜙):
1. 𝑛, 𝑝 ⊨𝜍 tomorrow(𝜙) is 𝑛, 𝑝 ⊨𝜍 @now+1𝜙.
2. 𝑛, 𝑝 ⊨𝜍 @now+1𝜙 is [[now+1]] 𝑛,𝑝,𝜍 , 𝑝 ⊨𝜍 𝜙.
3. [[now+1]] 𝑛,𝑝,𝜍 = [[now]] 𝑛,𝑝,𝜍 +1 = 𝑛+1.
4. Thus, 𝑛, 𝑝 ⊨𝜍 tomorrow(𝜙) is valid just when 𝑛+1, 𝑝 ⊨𝜍 𝜙.
Thus we can say
tomorrow(𝜙) is true precisely when 𝜙 is true tomorrow.
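As a sanity check of Notation 2.5.1, the two modalities can be sketched in code: a predicate becomes a function of (time, place), tomorrow shifts the time index, and henceforth quantifies over the remaining times. The finite bound MAX_TIME is our assumption for the toy model; the logic's Time need not be bounded.

```python
# Temporal sugar over a finite toy model of Time = 0..MAX_TIME.
MAX_TIME = 10

def tomorrow(phi):
    # @now+1 phi: evaluate phi one step later, at the same place
    return lambda n, p: phi(n + 1, p)

def henceforth(phi):
    # forall n' >= n. @n' phi, over the finite time domain
    return lambda n, p: all(phi(m, p) for m in range(n, MAX_TIME + 1))

good_after_3 = lambda n, p: n >= 3
print(tomorrow(good_after_3)(2, "p"))    # evaluates phi at time 3
print(henceforth(good_after_3)(2, "p"))  # fails: phi is false at time 2
```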
Example 2.5.3. We can now express the PingCommit service commitment. It
conveniently illustrates how we can use our machinery to make a very simple, but
still meaningful, service commitment. We continue Example 2.4.3(7) and assume
message types ping, pong ∈ MessageType. Then for a participant 𝑝 ∈ Place:
1. 𝑝 can express that henceforth, if it receives a ping message from 𝑞 then it will
send a pong message to 𝑞′ tomorrow, as follows:
PingCommit = henceforth(in𝑞(ping) ⇒ tomorrow(out𝑞′(pong))).
We can use our sugar from Notation 2.4.2 to put this more succinctly:
PingCommit = henceforth(pingIn𝑞() ⇒ tomorrow(pongOut𝑞′())).
2. To express that 𝑝 commits to PingCommit at some time 𝑛, we assert a judgement
𝑛, 𝑝 ⊢ PingCommit.
This judgement (and the commitment it expresses) is valid in a model 𝜍 when
𝑛, 𝑝 ⊨𝜍 PingCommit.
2.6. Expert mode
Pragmatic implication seems to be enough for now, but users may want to express
more subtle notions of implication, e.g. based on information about behaviour
that has been gossiped from other participants. For this, we need to be able
to collect events, pass them as data, and (most likely) do arithmetic on their
cardinalities. Accordingly, our logic allows us to collect instances of a particular 𝜙
as

{𝑑 ∈ 𝛼 | 𝑛, 𝑝 ⊨𝜍 𝜙[𝑎:=𝑑]} : Pow(𝛼).
We can recover pragmatic implication Π^𝑟 𝑎 : 𝜙. 𝜓 using collection (and the
connectives of propositional logic) as follows:

#{𝑎 | 𝜙} ≠ 0 ⇒ #{𝑎 | 𝜙 ∧ 𝜓} / #{𝑎 | 𝜙} ≥ 𝑟.
Participants are free to collect sets as data 𝑆 : Pow(𝛼) and pass these around
as data to other participants, allowing sophisticated participants to ‘roll their own’
variants of pragmatic implication (or whatever else they find useful, since set
collection is very expressive).
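The recovery of pragmatic implication from collection can be phrased directly as arithmetic on cardinalities. A minimal sketch (illustrative names; a finite domain is assumed):

```python
# Collection {a | phi} as a set-builder, then pragmatic implication as
# #{a|phi} != 0  =>  #{a|phi and psi} / #{a|phi} >= r.
def collect(domain, phi):
    return {x for x in domain if phi(x)}

def prag_impl_via_collection(domain, phi, psi, r):
    A = collect(domain, phi)
    B = collect(domain, lambda x: phi(x) and psi(x))
    return len(A) == 0 or len(B) / len(A) >= r

# Of the 10 even numbers below 20, 9 are below 18: the ratio 0.9 is met exactly.
print(prag_impl_via_collection(range(20), lambda x: x % 2 == 0, lambda x: x < 18, 0.9))
```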
2.7. Using EOU and BOU to evaluate service commitments
Recall from Remark 2.2.6(7) that EOU and BOU truncate the model, deleting the
future and the past respectively. This can be useful for expressing that predicates
are valid up to, or from, a certain point, which is a common
practical use-case. For example, the sentiment ‘so far so good’ can be expressed
as EOU(good) (for a suitable predicate expressing ‘good’).
Consider participants 𝑝 and 𝑝′, a predicate 𝜙, and times 𝑛 ≤ 𝑛′. Then 𝑝′ may
be interested in the validity of the following:
1. 𝑝 has, up to now, kept a service commitment 𝜙 made at time 𝑛.
𝑛′, 𝑝 ⊨𝜍 EOU(@𝑛 (𝜙)).
2. 𝑝 has not kept a service commitment 𝜙 made at time 𝑛.
𝑛′, 𝑝 ⊨𝜍 EOU(¬@𝑛(𝜙)).
3. 𝑝 has mostly (at least 0.9 of the time) kept a service commitment henceforth(𝜙),
made at time 𝑛.
𝑛′, 𝑝 ⊨𝜍 EOU(Π^0.9 𝑛′′ : 𝑛′′ ≥ 𝑛. @𝑛′′ 𝜙).
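Operationally, EOU-style truncation can be pictured as discarding every message event after the evaluation time and then evaluating as usual. A toy sketch (our representation of 𝜍(msg) as a set of tuples is an assumption):

```python
# EOU as model truncation: delete the future by dropping all message events
# strictly later than time n; evaluation then proceeds on the truncated model.
def eou(msg, n):
    return {(m, p, q, d) for (m, p, q, d) in msg if m <= n}

msg = {(1, "p", "q", "pong"), (5, "p", "q", "pong")}
print(eou(msg, 3))  # only the time-1 event survives
```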
2.8. How to express offers, using axioms
In practice, commitments are not made arbitrarily. Typically, an offer is sent out
to some set of eligible participants, and the offer has only a limited
number of free slots.
A real-life example is: an online retailer advertising 50 items for sale, who will
not post abroad. Thus the offer is available to (at most) any fifty from the set of
all participants in the same country as the retailer.
It is useful to fix some terminology:
1. A predicate is a statement in a formal language that can be evaluated to
true or false. In this document, we are specifically interested in predicates in
the formal language in Figure 2.
2. A commitment is a predicate for which there is some social or technical
agreement that this predicate should evaluate to true.10
3. An offer is a mechanism by which a predicate is converted into a commitment.
Thus, an offer is a formal mechanism for committing to what a
predicate expresses.
We now study how we can express offers within message logic. Specifically:
we study the example of offering to commit to 𝜙, to at most 𝑚 ≥ 0 participants
from a set 𝑃 ⊆ Place, on a first-come first-served basis.
We assume a tuple of data
(𝑎, 𝑏, 𝜙, 𝑃, 𝑚, accept[𝑎.𝜙], offer[𝑎.𝜙])
as follows:
1. 𝜙 is a predicate, 𝑎 : Place is a variable of type Place, and 𝑏 : 𝛽 is a variable
of some other type. Here:
• 𝜙 represents a commitment which we are offering to make.
• 𝑎, which may be free in 𝜙, represents the participant to whom we
make the commitment.
• 𝑏, which may be free in 𝜙, represents some configuration data for
the commitment (for this discussion, we do not care what it is).
2. 𝑃 ⊆ Place is a set of candidate promisees.
Intuitively, if 𝑝 ∈ 𝑃 then we are willing to commit to 𝜙 [𝑎:=𝑝], subject to
availability.
10 Note the word ‘should’ here, rather than ‘must’. A predicate that must be true is an axiom; that is a different idea again.
3. 𝑚 : N ≥0 is a number, which we call the number of free slots.
We are willing to commit to 𝜙 for any 𝑚 of the elements of 𝑃 (and possibly
fewer, if fewer ask, but no more).
4. accept[𝑎.𝜙] : MessageType is a message type, and offer[𝑎.𝜙] : Pow(Place) ×
N≥0 is a predicate symbol.
Note that accept[𝑎.𝜙] and offer[𝑎.𝜙] are atomic tokens in our syntax; the
subscripts 𝑎.𝜙 are part of how we write them, to remind us that we consider
these tokens to be associated to 𝜙, but this association exists only in our
mind. As far as the mathematics is concerned, these are just two symbols.
Then we can impose an axiom on offer[𝑎.𝜙] as follows:

∀𝑃 : Pow(Place). ∀𝑚 : N≥0. 𝑚 ≥ 1 ⇒
  offer[𝑎.𝜙](𝑃, 𝑚) ⇒ ( (∃𝑝∈𝑃, 𝑦 : 𝛽. in𝑝(accept[𝑎.𝜙], 𝑦)) ⇒
    ∃𝑝∈𝑃, 𝑦 : 𝛽. in𝑝(accept[𝑎.𝜙], 𝑦) ∧
      tomorrow(𝜙[𝑎:=𝑝, 𝑏:=𝑦] ∧ offer[𝑎.𝜙](𝑃\{𝑝}, 𝑚−1)) )
Above, ⇒ represents (normal logical) implication, and ∃𝑝∈𝑃 represents bounded
quantification, as per Example 2.4.3(1&5). This asserts that at each timestep, if
we have offered 𝜙 and at least one participant sends us a message accepting this
offer (with some configuration 𝑦), then tomorrow we commit to 𝜙 for one of the
participants who messaged in.
We have used tomorrow just as an example: any other time modality (including
none at all) would be valid depending on what we want to express. We could
also accept multiple offers at a given timestep; again, it would just be a matter of
writing up the relevant predicate.
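One operational reading of this axiom, as a first-come first-served loop, can be sketched as follows (this is our toy simulation, not the axiom's semantics; in particular, picking the first eligible accept is just one way of resolving the existential):

```python
# Each timestep: if the offer is live (m >= 1) and some eligible participant
# has sent an accept, commit to one of them, remove them from P, decrement m.
def run_offer(P, m, accepts_by_step):
    P, commitments = set(P), []
    for accepts in accepts_by_step:            # one list of (p, y) pairs per step
        if m < 1:
            break                              # no free slots left
        eligible = [(p, y) for (p, y) in accepts if p in P]
        if eligible:
            p, y = eligible[0]                 # first-come, first-served
            commitments.append((p, y))         # "tomorrow phi[a:=p, b:=y]"
            P.discard(p)
            m -= 1                             # offer continues with (P \ {p}, m-1)
    return commitments

print(run_offer({"a", "b", "c"}, 2, [[("a", 0)], [("b", 1), ("c", 2)], [("c", 3)]]))
# two slots only, so c's accept at the last step is not honoured
```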
Remark 2.8.1. Even though we have shown how to express an offer as an axiom
(about a predicate called offer [𝑎.𝜙] ), this does not mean that an offer is an axiom
in the sense of pure mathematics.
In mathematics, if a model does not satisfy the axioms of (say) being a group,
then it is not a group. This is by definition; there is no debate about it.
In our model of service commitments, if a model does not satisfy the axiom
for offer [𝑎.𝜙] , this does not necessarily mean that the offer has not been made. It
could also mean that the offer was made but not fulfilled. There is an additional
step here, where participants in the distributed system decide how serious the
breach of this commitment is; conversely even if an offer is made and fulfilled,
participants in the distributed system might decide for one reason or another
that they are still not satisfied. See the discussions in Subsections 3.2.6 and 3.3.3
and Remark 3.3.1.
3. The operational framework
So far, we have used message logic to describe service commitments: now we
will discuss how we could apply it to help us implement decision-making and
coordination in practical large scale distributed systems.
1. In Subsection 3.1 we outline the differences between our logic and the real
world, and explain why the differences make sense.
2. In Subsection 3.2 we explain the seven stages in the life cycle of a service
commitment in a distributed system.
3. In Subsection 3.3 we spell out application scenarios, portraying how participants
can use service commitments for local decision-making.
3.1. Differences between our logic and the real world
Logic is by design an abstraction of some (usually messy) reality. It is important
and useful to abstract — but it is also important to remember what we have
abstracted, why, and to what extent realistic details could be put back into the
logical model if required. We consider a list of such differences next:
1. Messages have latency and may get lost:
• In the logic, messages arrive instantly and do not go missing.
• In the real world, messages take time to arrive (i.e., they have latency)
and may get lost.
The logic is a reasonable first approximation to the real world, but a more
detailed model could include aspects of latency and possibly message loss.
Including this would not be technically difficult.
2. Observability:
• In the logic, predicates are evaluated by an omniscient observer
standing outside the model’s notion of time and space.
• In the real world, participants are not omniscient. They can only see
local and partial information and are trapped in the model’s notion
of time and space.
An omniscient observer can stand outside the model and make specifi-
cations about what should happen. Participants can then reason locally
about whether such a specification has actually been met, based on their
local knowledge. Building infrastructure to facilitate such local reasoning
remains future work.
3. Network communication:
• In the logic, any participant can directly send a message to any other
participant. In effect, all participants in the network are connected
to each other (i.e., everyone is each other’s peer).
• In the real world, participants can only directly send messages to a
small number of other nodes; longer-range communication happens
through gossiping.
Gossiping could be folded into the logic quite easily. This is future work.
4. Information flow control:
• In the real world, nodes have the flexibility to express who should
be able to read what part of a message by means of encryption/decryption.
• In the logic, the omniscient observer knows everything, including
any decryption keys that may be required. Thus information flow
control is inherently not represented in the logic as it is currently
designed.
Folding this into the logic is possible, but would add more complexity.
Future work could explore information flow control in our logic model.
5. Evaluation:
In Subsection 2.8 we distinguish between a predicate, a commitment (=
a predicate for which there is agreement that it should be true), and an
offer (= an offer to commit). Offers may be made to many participants, and
actual commitments made to only some of them (as for example in a sale
of goods that is available ‘while stocks last’).
However, deciding whether breaking a commitment, or keeping it, is a
good or a bad thing, may vary.11 See the discussion of quality evaluation in
11 This seems to be a significant difference between pure logic, where if we assert a predicate as true and it isn’t, then that’s bad; and applied
logic, where we may assert a predicate as true and then, for whatever reason, change our mind.
Subsection 3.2, and the discussion with examples in Subsection 3.3 (and in
particular the discussion in Subsection 3.3.3). Deciding how much of this
evaluation stage to express in the logic, and how much to leave external to
it, is a design question for future work.
3.2. The seven stages of a service commitment
We will now split the life cycle of a service commitment — as it works in a real-life
distributed system — into seven stages as follows:
1. Offer creation.
2. Offer dissemination.
3. Offer negotiation.
4. Commitment activation.
5. Consumption.
6. Quality evaluation.
7. Termination.
We discuss each stage in turn:
3.2.1. Service offer creation
Recall from Definition 1.1.1 that a service commitment is a commitment to
some pattern of messaging behavior. Any participant 𝑝 can create a service
commitment SC about its own future behaviour, but usually first it offers to do
so:
Definition 3.2.1. A service offer is an announcement that a participant is
willing to commit to a service commitment (Definition 1.1.1).
We need service offers for service commitments because in practice
1. a service commitment might be one half of a trade (‘I’ll give you a donut if
you’ll give me a dollar’), or
2. a service commitment, even if free, might have limited availability (e.g.
‘free doughnuts! only while stocks last!’).
So in practice, before 𝑝 commits to SC, it usually creates a corresponding service
offer SO to let other participants know what behavior 𝑝 can commit to, and the
terms of agreement for that commitment.
Concretely, a service offer SO is a data tuple
(Promiser, Promisees, Content, ActivationStatus, ExpirationTime, Identifier), (1)
as we now describe:
1. Promiser: The promiser has to be 𝑝 (the participant making the offer). It is
important to note that 𝑝 cannot make offers or commitments on behalf of
others.
2. Promisees: The participants for whom the commitment encoded in SO is
intended. This set is identified (in a standard way) by a function that inputs
a participant identifier and outputs ‘yes’ or ‘no’.12
3. Content: A predicate in message logic identifying the specific constraint on
behaviour that 𝑝 commits to, and the terms of agreement for a commitment.
4. Activation Status: The current availability status of the service offer (active
or inactive). The activation status can differ per promisee.
5. Expiration Time: The deadline after which SO becomes invalid. The expiration
time can differ per promisee.
6. Identifier: A unique identifier by which 𝑝 or any other participant may
refer to the service offer SO at 𝑝.
To make the service offer SO active (rather than it being just some data on
disk), 𝑝 adds SO to a special field in its memory called its configuration.
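A possible concrete encoding of the tuple in equation (1), with field names following the list above; the types are illustrative assumptions (e.g. content as a serialised predicate, and the promisees as a membership function per footnote 12):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ServiceOffer:
    promiser: str                        # must be p, the offering participant
    promisees: Callable[[str], bool]     # membership function over participant ids
    content: str                         # message-logic predicate, serialised
    activation_status: bool              # active / inactive
    expiration_time: int                 # deadline after which SO is invalid
    identifier: str                      # unique handle for this offer

so = ServiceOffer("p", lambda q: q in {"q1", "q2"}, "PingCommit", True, 1000, "SO-1")
print(so.promisees("q1"), so.promisees("q3"))  # a promise to q1 and q2 only
```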
3.2.2. Service offer dissemination
Once a participant 𝑝 has created a service offer SO to express an offer of some
commitment SC, it can communicate SO to other participants as a message using
established gossiping and pub-sub protocols.
Concretely, the message consists of a tuple of the fields in equation (1) in
Subsection 3.2.1. 𝑝 can choose to cryptographically encrypt (some of) the fields.
For example, 𝑝 may not want to disclose the list of promisees, and may encrypt
the content such that only the target promisee can decrypt it (rather than all
promisees or other participants). The message is cryptographically signed by 𝑝:
we write this sig𝑝(SO).13
There is no restriction that sig𝑝(SO) can only be communicated to participants
in the promisees of SO — such a restriction would make no sense on a gossip
network! — but when sig𝑝(SO) does reach a promisee 𝑞, 𝑞 can update its local
configuration by storing the pair (SO, 𝑝). 𝑞 can also subscribe to changes in
12 The ‘always yes’ function is a promise to everyone; the ‘always no’ function is a (vacuous) promise to nobody; a singleton function that
returns ‘yes’ on just a single participant represents a promise to that one participant; and so on.
13We assume that signatures cannot be forged.
the service offer (e.g., deactivation or expiration time modifications) or to stay
informed of any updates to the offer’s state.
In some cases, participants may need to discover existing service commitments
dynamically. This typically involves querying the network for commitments
that match certain criteria, such as commitments involving a specific promiser,
promisee, or service. The exact query language is considered future work, but
one possibility would be to use Anoma intents as a basis for discovery.
3.2.3. Service offer negotiation
When a service offer SO from promiser 𝑝 reaches a promisee 𝑞, 𝑞 will choose
precisely one of the following options:
1. Do nothing: Promisee 𝑞 may not be interested in consuming the service
offer from 𝑝.
2. Negotiate: Promisee 𝑞 may want to negotiate the terms of the offer. This
phase is valuable for 𝑞 if it can get the same service for lower costs,14 and
this also provides pricing information for 𝑝 to use in future offers.
3. Request for service commitment activation: Promisee 𝑞 may agree on the
terms of the service offer and request 𝑝 to commit to the promised behavior.
3.2.4. Service commitment activation
When promiser 𝑝 receives a service commitment activation request message
from a promisee 𝑞, 𝑝 can decide to:
1. Accept the request: Promiser 𝑝 creates a service commitment SC with
the same fields as the service offer SO. 𝑝 sends a signed commitment
activation message sig𝑝(SC) to 𝑞 (or initiates a three-way handshake).
When 𝑞 receives the activation message, it updates its local configuration
with SC, and adds a Performance Status field indicating to what extent 𝑞
assesses the service commitment has been kept, or violated.
2. Reject the request: Promiser 𝑝 simply ignores the request message.
3.2.5. Service consumption
Consuming a service commitment refers to the act of (not) engaging with the
promised behavior. A promisee is not obligated to use the promised service.
For example, a promisee may not be happy with the quality of a provided relay
service, and decide to interact with other promisers.
14 could be financial and/or non-financial costs
3.2.6. Service commitment evaluation
Evaluation occurs as the participant monitors whether the terms of the service
commitment are being met. The system checks whether the promiser has fulfilled
the obligations within the specified time frame and according to the predefined
quality parameters (if any). If the promiser 𝑝 successfully completes the commitment,
promisee 𝑞 marks the service commitment as ‘fulfilled’. If the promiser 𝑝 fails
to fulfill its commitment, 𝑞 will modify the Performance Status field mentioned
in item 1 in Subsection 3.2.4 accordingly.
We propose to use service commitments evaluation data as a ‘unit of trust’
in Anoma’s Node Trust Architecture. More concretely: promisers will have
a reputation that reflects how reliable their commitments are. Promisers that
breach their commitments may lose reputation and find that nobody invokes
their services in the future.
3.2.7. Service commitment termination
Service commitments are considered to be terminated in the following ways:
1. Gracefully: The service commitment SC is considered terminated when its
expiration deadline has passed, after which phase 1 can start again.
2. Not gracefully: Promiser 𝑝 leaves the network while it has active commitments
to other participants. We rely on existing networking protocols to
learn that a participant has left the network, and to prevent promisers
with bad reputations from whitewashing their reputation [FPCS04].
3.3. Application scenario
We now discuss application scenarios to illustrate how service commitments can
be used for decision-making by participants in a distributed system.
3.3.1. An extremely simple application scenario
Consider the simple network displayed in Figure 6, having four participants:
Alice, Bob, Carol and Dave.
Alice is connected to Bob and Carol; Bob is connected to Dave; and Carol
is also connected to Dave. Furthermore, Bob has committed to a pragmatic
implication that he will relay at least 95% of any messages he receives to Dave
within 10ms, and Carol has committed to a pragmatic implication that she will
relay all messages she receives to Dave within 1000ms.
Suppose that Alice wants to get a message to Dave. She has two choices:
Figure 6. An extremely simple application scenario: Alice chooses whether to send to Bob (𝑟=0.95, 𝑙≤10) or to Carol (𝑟=1, 𝑙≤1000), each of whom relays to Dave.
1. She can send a message to Bob, who (if he keeps his commitment) will
probably relay it to Dave quite quickly, though Bob has reserved the right
to drop up to 5% of the messages.
2. She can send a message to Carol, who (if she keeps her commitment) will
certainly relay it to Dave, but only commits to do so within a second.
In spite of the simplicity of this scenario, there are a few points to note:
1. The ratios 𝑟 = 0.95 and 𝑟 = 1 are not probabilities. It is not the case that Bob
will forward messages with 95% probability. He has promised to forward
at least 95% of messages, but he might forward more.
2. In the scenario as presented in Figure 6, there is no guarantee of fairness;
e.g. if Bob is getting one message every second from each of 20 participants,
he would fulfill his commitment if he forwards all messages from 19 of the
participants and drops all messages from one of them.15
We highlight this because probabilistic behaviour is often assumed to be fair;
however, as just discussed, pragmatic implication is not probabilistic.
3. Bob and Carol may have committed to relay messages, but they are not
necessarily forced to meet their commitments.
We are presented with a model, and some predicates which for social
reasons we may consider ‘should’ be true, and for which in a real-life
system there may be incentives to keep them true, but this does not necessarily
imply that they must be true. This is a feature, not a bug, which
reflects the fact that in real life, commitments may be made but not met,
sometimes for legitimate reasons which are understood and recognised as
legitimate by the community of participants on the network, as we discuss
in Remark 3.3.1.
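The fairness caveat in item 2 is just arithmetic, which we can spell out (the per-sender message rate is an assumed toy figure):

```python
# 20 senders at equal rates; Bob forwards everything from 19 of them and
# nothing from the 20th. The overall kept-fraction still meets r = 0.95.
messages_per_sender = 100
forwarded = 19 * messages_per_sender
total = 20 * messages_per_sender
print(forwarded / total, forwarded / total >= 0.95)  # exactly 0.95: commitment kept
```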
Remark 3.3.1. A toy example of how commitments may be broken for good
reasons is given in Subsection 3.3.3.
15 More sophisticated commitments, corresponding to more elaborate predicates, are certainly possible; we just have not written them out.
Figure 7. A simple application scenario: Alice may send to Bob or to Carol; Bob relays to Dave (𝑟=1, 𝑙≤300) and to Carol (𝑟=1, 𝑙≤100); Carol relays to Dave (𝑟=0.5, 𝑙≤100).
An example from real life is a force majeure exclusion in contracts, which frees
parties from liability or obligation when an extraordinary event or circumstance
beyond the control of the parties interferes with delivery of a service. Crucially,
what constitutes ‘extraordinary event’ and ‘beyond the control of the parties’ is a
social decision, not a purely legalistic, logical, or technical one. So essentially, the
force majeure clause just expresses “we commit to this contract, unless society
decides that there is a good reason not to”, which is more-or-less the same idea
as the service commitment evaluation step from Subsection 3.2.6.16
So it is important to understand that we are using logic to model service
commitments, and offers of service commitments, but an interesting added
dose of ‘real world’ is implicit in our narrative and lurking behind the logical
formalism.17
There are predicates and axioms in our formalism, but they are more than just
that.
3.3.2. A simple application scenario
This case is similar to the previous one, but here Bob has committed to forward
messages to both Dave (within 300ms) and Carol (within 100ms). Again, this
illustrates that the 𝑟 -labels are not probabilities; they are commitments.
In this case, if Alice has to choose to whom to send a message — and assuming
she has no other information than what is illustrated in the diagram, and assuming
that she would prefer that more than half of her messages arrive — then she has a
single best strategy: send everything to Bob. If Bob keeps all of his commitments,
then Alice’s messages will reach Dave — once (perhaps) within 200ms, and
again (certainly) within 300ms.
16 There is also a dual to this: the law recognises cases where the letter of a contract is fulfilled but not its spirit, and depending on
circumstances this too may be taken very seriously. For instance: a retail customer might challenge a contract with a large company that
contains some fine print, on the grounds that the significance of that fine print was not made sufficiently clear at the time of entering into
the contract.
17Well, interesting for a logician like the first author. To an industry practitioner like the second author, this is all rather obvious — and, if
anything, it’s pure mathematical axiom systems that may seem a little odd.
Figure 8. A silly application scenario: a relay cycle in which Alice forwards to Dave, Dave to Carol, and Carol back to Alice, each with 𝑟=1.
3.3.3. A silly case
In the silly application scenario in Figure 8, if Alice sends a message to Dave
then he will relay it to Carol, who will relay it to Alice, who will relay it to Dave,
and so on forever. This is clearly a pathological situation: the question is how to
understand it within our logic.
There are various ways to do so, including:
1. Blame it on the fact that the agents committed with certainty (i.e. with
𝑟 = 1) to forward their messages. That is, we pin the error on the 1; perhaps
it should be (say) 𝑟 = 0.9 instead.
The problem with this is that we do not distinguish between ‘silly’ messages
that should be dropped, and ‘sensible’ messages that should be forwarded.
2. Accept that sometimes commitments will be broken and that this may be
a good thing.
Essentially, this pushes design effort out of the logical system and into
the evaluation mechanisms which agents may use in the real world to
decide whether some violation of a service commitment is bothersome to
them. Promises may be made and broken without penalty, if circumstances
justify it.18
3. Extend the logic such that agents can trace the path a message has followed.
Then the service commitment could be refined so that (for example) Alice
commits to forward messages that have not already passed through Dave.
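Option 3 can be made concrete with a small sketch. The following Python fragment is our own illustration, not part of Message Logic: the `Message` type, the `forward` function, and the `avoid` parameter are invented for this sketch. A message records the agents it has visited, and a relay's refined commitment is to forward only messages that have not already passed through a designated node.

```python
# Illustrative sketch of option 3 (all names invented for this example):
# messages carry the path they have followed, and a relay's refined
# commitment permits dropping any message that has already passed
# through a designated node.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    payload: str
    path: List[str] = field(default_factory=list)  # agents visited so far

def forward(agent: str, msg: Message, avoid: str) -> Optional[Message]:
    """Relay msg onward from `agent`, unless it already passed through `avoid`."""
    if avoid in msg.path:
        return None  # dropping is now permitted by the refined commitment
    msg.path.append(agent)
    return msg

# The cycle of Figure 8 now terminates: the third relay refuses the message.
m = Message("hello")
m = forward("Alice", m, avoid="Dave")   # path becomes ["Alice"]
m = forward("Dave", m, avoid="Carol")   # path becomes ["Alice", "Dave"]
m = forward("Carol", m, avoid="Alice")  # "Alice" already visited: dropped
```

Here each relay avoids re-injecting a message into the cycle, so the pathological infinite relay of Figure 8 cannot occur.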
4. Conclusions
4.1. Summary
Message Logic is a simple way to formally describe and reason about messages
passed around a network. This is interesting and useful because message-passing
18 This is not a hypothetical; it happens all the time in the real world. For example: if you leave your baby with a babysitter and say “Don’t
leave the flat”, and the babysitter promises not to — and then there is a fire in the flat — you would be furious with the babysitter if they
did keep their promise. The point here is not that (surprise, surprise!) common sense exists; it’s that the raw material of our commitments
to one another, including those expressed using Message Logic, is embedded, and known to be embedded, within a social framework
within which they are intended to be understood.
is the relevant observable of a distributed system, so this logical model captures
a fundamental and useful aspect of reality.
Our model as it is currently set up is (deliberately) simplistic — e.g. messages
are passed with zero latency and zero loss — but often that is a reasonable first
approximation to real life (a detailed discussion is in Subsection 3.1).19
In our model:
1. Service commitments are modelled by predicates that are valid in a model
if and only if the service commitment is kept. See the example in Subsec-
tion 2.5.
2. Offers become axiomatisations, as outlined in Subsection 2.8.
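Item 1 above can be illustrated with a small sketch in Python rather than in Message Logic itself. The trace representation and the ping/pong message names below are our own assumptions, loosely in the spirit of the PingCommit example of Subsection 2.5 but not its actual definition:

```python
# Illustrative sketch: a service commitment as a predicate over a finite
# message trace, which holds if and only if the commitment is kept.
# The trace format and message names are invented for this example.

def commitment_kept(trace):
    """Every ('ping', peer) in the trace must be followed, later in the
    trace, by a matching ('pong', peer)."""
    for i, (kind, peer) in enumerate(trace):
        if kind == "ping" and ("pong", peer) not in trace[i + 1:]:
            return False
    return True

commitment_kept([("ping", "Alice"), ("pong", "Alice")])                   # kept
commitment_kept([("ping", "Alice"), ("ping", "Bob"), ("pong", "Alice")])  # broken: Bob unanswered
```

The point is only that "valid in a model iff the commitment is kept" has a direct computational reading once traces are finite and concrete.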
This is embedded in a rich real-life context of discovery protocols, evaluation of
whether commitments have been kept and corresponding reputation systems,
and so on. With time and further research, we will see how much of this we
want to include in the logic, and how much is inherently unrelated to logical
formalisation and should be left external to the model (e.g. we may want to
abstract over the gossip or pub-sub protocol used).
We envisage that Message Logic could be applied as follows:
1. Informally, as a rigorous logical language for specifying service commit-
ments (think: first-order logic for distributed systems).
2. Formally, for machine verification. A system of service commitments
and offers could be expressed in (say) Isabelle or Agda; and correctness
properties could be asserted and proved (or refuted).
3. Programmatically. In this scenario, Message Logic is used to express service
commitments directly in an implementation. We wrote earlier that service
commitments are ‘modelled’ by predicates, but in a programmatic setting,
the service commitment would be a predicate in Message Logic.
Message Logic is designed to seem simple, because we want it to be easy to use;
however, it is also a powerful and expressive logic.
4.2. Related work
Message Logic draws inspiration from various fields in distributed systems,
formal methods, and service-oriented architectures. Relevant related work is as
follows:
19 If we make a phone call, to a first approximation we assume that data will be delivered effectively with zero latency and zero loss.
Sometimes that assumption is violated; the connection becomes laggy and/or unreliable. At this point we become frustrated, precisely
because we had this assumption.
1. Promise theory
Promise theory [Bur15] provides a conceptual framework for reasoning
about cooperative systems through promises made by agents, which in-
clude guarantees of behavior and performance. Message Logic builds on
this notion by formalising service commitments as predicates, providing
a logical foundation for verifying and analyzing such commitments in
distributed systems.
2. Service coordination in distributed systems
Research on service coordination mechanisms, such as service-oriented ar-
chitectures [PVDH07, SBD+ 10], protocols for consensus [Kwo14, SWvRM20,
BG17], and service discovery services [PTT12, MRPM08], intersects with
our work. While these protocols aim to ensure reliable coordination among
distributed components, Message Logic abstracts these mechanisms into a
logical framework, allowing formal reasoning about commitments and
offers independently of implementation-specific details.
3. Temporal and modal logics for distributed systems
Logics such as Linear Temporal Logic (LTL) [DLH01] and Computation
Tree Logic (CTL) [BH05] have been extensively used to model and verify
distributed system behaviors. These frameworks capture the temporal
aspects of distributed systems but lack the expressiveness to directly model
service commitments and their violations. Message Logic complements
these approaches by focusing on service-level guarantees within distributed
systems.
4.3. Future work
We see two natural ways to extend Message Logic: we can extend the logic, and
we can build infrastructure with which to apply it.20
4.3.1. Extending Message Logic for real-world scenarios
The current logical framework makes some simplifying assumptions (e.g. zero-
latency and no message loss; see Subsection 3.1). This is of course unrealistic, so
we can consider expanding the logic in different ways (depending on what we
care to model more precisely), including:
20 Or perhaps both; e.g. a practical application may require new tooling, and it might motivate a specific logic extension.
1. Incorporating Latency and Message Loss: Including bounded or unbounded
delays and probabilistic message loss, to reflect the inherent uncertainties
of distributed environments.
2. Local Reasoning for Participants: Currently, an omniscient observer de-
termines whether commitments are upheld. If we want to explicitly reason
within the logic about what participants know, believe, or know or be-
lieve that other participants know or believe (and so on), then epistemic
modalities are well-studied and could be added [RSW24].
3. Dynamic Service Commitment Evolution: Mechanisms to handle changes
to service commitments (such as updates to offers or modifications of
commitments during execution) would make the logic more adaptable to
evolving systems.
4. Integration with Reputation Systems: Formalising how service commit-
ment evaluation feeds into reputation or trust systems could enhance the
practical relevance of Message Logic for real-world distributed systems
with decentralised decision-making.
Note that these extensions would probably not increase expressive power in
principle; they would simply make some things easier to express.
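As a toy illustration of extension 1, suppose each hop on a relay path independently delivers with some probability 𝑟 (so the 𝑟 = 1 commitments seen earlier are the special case of certainty); then a path delivers end-to-end with the product of its per-hop reliabilities. The independence assumption and the numbers below are ours, purely for illustration:

```python
# Toy arithmetic for probabilistic message loss: assuming hops fail
# independently, a path delivers end-to-end with the product of the
# per-hop reliabilities. Independence is an assumption of this sketch.

def path_delivery(per_hop):
    """End-to-end delivery probability along independent hops."""
    p = 1.0
    for r in per_hop:
        p *= r
    return p

path_delivery([0.9, 0.9])       # two hops at r = 0.9: 0.81 end-to-end
path_delivery([1.0, 1.0, 1.0])  # a path of r = 1 commitments stays certain
```

Even this trivial calculation shows why sub-certain commitments compose less well than 𝑟 = 1 ones: reliability decays multiplicatively along a path.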
4.3.2. Building infrastructure around Message Logic
To apply Message Logic in practice, we would require infrastructure and tools.
Potential directions include:
1. Specification and verification frameworks: Developing tools that allow
practitioners to specify service commitments in Message Logic and verify
their correctness automatically using theorem provers like Isabelle, Coq,
or SMT solvers.
2. Protocol representation and composition: Building data structures and
libraries that allow the expression of complete protocols (e.g., consensus
algorithms, mempool mechanisms) within the logic. This would demon-
strate the practical applicability of Message Logic to design and implement
distributed system protocols.
3. Simulation and monitoring tools: Implementing simulation environments
to test distributed systems against formalised commitments. Monitoring
tools could also dynamically evaluate service-level compliance during
system operation.
4. Programmatic integration: Extending programming languages or mid-
dleware to natively support Message Logic predicates, enabling direct use
of service commitments as part of distributed system implementations.
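The monitoring idea in item 3 can be sketched as an online checker. The commitment monitored below (every message a relay accepts must eventually be forwarded) and the event format are simplified inventions of ours, not a construct of Message Logic:

```python
# Illustrative sketch of a runtime monitor: observe messages one at a
# time and report which commitments are (so far) unfulfilled. Both the
# commitment and the event format are invented for this example.

class Monitor:
    """Checks online that every message received by `relay` is forwarded."""

    def __init__(self, relay):
        self.relay = relay
        self.pending = set()  # messages received but not yet forwarded

    def observe(self, event, msg_id, agent):
        if agent != self.relay:
            return  # events at other agents are irrelevant to this commitment
        if event == "recv":
            self.pending.add(msg_id)
        elif event == "send":
            self.pending.discard(msg_id)

    def violations(self):
        """Messages the relay has accepted but (so far) not forwarded."""
        return set(self.pending)

mon = Monitor("Bob")
mon.observe("recv", 1, "Bob")
mon.observe("send", 1, "Bob")  # message 1 fulfilled
mon.observe("recv", 2, "Bob")  # message 2 pending: a violation if the trace ends here
```

A real monitor would evaluate Message Logic predicates over the observed trace; the point here is only that service-level compliance admits this kind of incremental evaluation.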
References
BG17. Vitalik Buterin and Virgil Griffith. Casper the friendly finality gadget. arXiv preprint
arXiv:1710.09437, 2017. URL: https://arxiv.org/abs/1710.09437. (cit. on p. 34.)
BH05. Alexander Bell and Boudewijn R Haverkort. Sequential and distributed model
checking of Petri nets. International Journal on Software Tools for Technology Transfer,
7:43–60, 2005. doi:10.1016/S1571-0661(05)80391-3. (cit. on p. 34.)
BRV01. Patrick Blackburn, Maarten de Rijke, and Yde Venema. Modal Logic. Cambridge
Tracts in Theoretical Computer Science. Cambridge University Press, 2001. (cit. on
p. 16.)
Bur15. Mark Burgess. Thinking in promises: designing systems for cooperation. O’Reilly
Media, Inc., 2015. (cit. on p. 34.)
DLH01. Falk Dietrich, Xavier Logean, and J-P Hubaux. Modeling and testing object-oriented
distributed systems with linear-time temporal logic. Concurrency and Computation:
Practice and Experience, 13(5):385–420, 2001. doi:10.1002/cpe.571. (cit. on p. 34.)
FPCS04. Michal Feldman, Christos Papadimitriou, John Chuang, and Ion Stoica. Free-riding
and whitewashing in peer-to-peer systems. In Proceedings of the ACM SIGCOMM
workshop on Practice and theory of incentives in networked systems, pages 228–236,
2004. (cit. on p. 29.)
Kwo14. Jae Kwon. Tendermint: Consensus without mining. Draft v. 0.6, fall, 1(11):1–11,
2014. (cit. on p. 34.)
Lam78. Leslie Lamport. Time, clocks, and the ordering of events in a distributed system.
Communications of the ACM, 21(7):558–565, July 1978. doi:10.1145/359545.359563.
(cit. on p. 16.)
MRPM08. Elena Meshkova, Janne Riihijärvi, Marina Petrova, and Petri Mähönen. A survey
on resource discovery mechanisms, peer-to-peer and service discovery frameworks.
Computer networks, 52(11):2097–2128, 2008. (cit. on p. 34.)
PTT12. Giuseppe Pirrò, Domenico Talia, and Paolo Trunfio. A DHT-based semantic overlay
network for service discovery. Future Generation Computer Systems, 28(4):689–707,
2012. (cit. on p. 34.)
PVDH07. Mike P Papazoglou and Willem-Jan Van Den Heuvel. Service oriented architectures:
approaches, technologies and research issues. The VLDB journal, 16:389–415, 2007.
doi:10.1007/s00778-007-0044-3. (cit. on p. 34.)
RSW24. Rasmus Rendsvig, John Symons, and Yanjing Wang. Epistemic Logic. In Edward N.
Zalta and Uri Nodelman, editors, The Stanford Encyclopedia of Philosophy. Meta-
physics Research Lab, Stanford University, Summer 2024 edition, 2024. (cit. on
p. 35.)
SBD+ 10. Jan Sacha, Bartosz Biskupski, Dominik Dahlem, Raymond Cunningham, René Meier,
Jim Dowling, and Mads Haahr. Decentralising a service-oriented architecture.
Peer-to-Peer Networking and Applications, 3:323–350, 2010. (cit. on p. 34.)
SWvRM20. Isaac Sheff, Xinwen Wang, Robbert van Renesse, and Andrew C Myers. Hetero-
geneous paxos: Technical report. arXiv preprint arXiv:2011.08253, 2020. URL:
https://arxiv.org/abs/2011.08253. (cit. on p. 34.)
Acknowledgements. We thank Tobias Heindel for comments on a draft, and for asking good questions.