Probability on graphs (MA4406)

Nina Gantert¹
April 2, 2024

Contents

1 Random walks on graphs
  1.1 Reversible Markov chains
  1.2 Harmonic functions
  1.3 Electric networks
  1.4 Green function
  1.5 Escape probabilities
  1.6 Reduction of networks

2 Energy
  2.1 Definitions
  2.2 Thomson's principle
  2.3 Monotonicity

3 Recurrence and transience
  3.1 Nash-Williams inequality
  3.2 Energy and transience

4 Percolation
  4.1 The model
  4.2 Percolation on regular trees
  4.3 Monotonicity
  4.4 Bounds for the critical probability
  4.5 Uniqueness of the infinite open cluster

¹ Zentrum Mathematik, Bereich M14, Technische Universität München, D-85747 Garching bei München, Germany. e-mail: gantert@[Link]
Preface
These lecture notes are intended for the students of the course Probability on graphs.
They are based on the books [LP16] and [Gri10] and on Silke Rolles' lecture notes.
Please do not distribute them. They are not supposed to replace any existing books.
The course covers two topics:

• random walks on graphs and

• percolation.

The part on random walks is based on Chapter 2 of the book [LP16] by Russell Lyons and
Yuval Peres. The book [DS84] by Doyle and Snell provides an elementary introduction.
The aim is to study random walks on weighted graphs and to establish rigorous connections
with electric networks. The intuition from electric networks will allow us to deduce elegant
proofs of recurrence and transience of the random walks. As a special case, we will study
recurrence and transience of the simple random walk on Z^d, d ≥ 1.

1 Random walks on graphs


1.1 Reversible Markov chains
Definition 1.1 An (undirected) graph is a pair G = (V, E) consisting of a countable set
V of vertices and a set
E ⊆ {{u, v} : u, v ∈ V }
of edges. In particular, all edges are undirected. We denote an edge with endpoints u and
v by {u, v}.

Here is some frequently used notation.

• A vertex v is called incident to an edge e if v ∈ e.

• The degree of a vertex v ∈ V equals the number of edges incident to it:

degree(v) = |{e ∈ E : v ∈ e}|.

• The graph G is called locally finite if every vertex is incident to a finite number of
edges.

• Let u, v ∈ V . A path from u to v is a sequence γ = (v0 , . . . , vn ) of vertices with


n ∈ N0 , v0 = u, vn = v, and {vi , vi+1 } ∈ E for all 0 ≤ i ≤ n − 1.

• A graph G is connected if for any two vertices u, v ∈ V there is a finite path from
u to v in G.

Every edge e ∈ E is given a weight c_e > 0. For v ∈ V, set

    c_v = ∑_{u∈V: {u,v}∈E} c_{u,v}.

Throughout this chapter, let G = (V, E) be a connected, undirected graph with vertex
set V and edge set E, and assume that either

• G is locally finite or

• cv < ∞ for all v ∈ V .

Note that G can be infinite. If G is finite, both assumptions are clearly satisfied.

Example 1.2 (Random walk on a weighted graph)


The random walk on the weighted graph G is a Markov chain with state space V and
transition matrix

    p(u, v) = c_{u,v} / c_u   if {u, v} ∈ E,
              0               otherwise.

If we set c_{u,v} = 0 for u, v ∈ V with {u, v} ∉ E, we have

    p(u, v) = c_{u,v} / c_u   for all u, v ∈ V.

If all weights are equal, e.g. c_e = 1 for all e ∈ E, the process is called simple random
walk. The transition probabilities of simple random walk are given by

    p(u, v) = 1 / degree(u)   if {u, v} ∈ E,
              0               otherwise.
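As a quick sanity check, the transition probabilities can be computed mechanically. The sketch below uses a hypothetical triangle graph with made-up weights; all helper names are ours, not from the notes. The final assertion checks the detailed-balance identity c_u p(u, v) = c_{u,v} = c_v p(v, u) that drives the reversibility arguments in this section.

```python
from fractions import Fraction

# Hypothetical edge weights c_e on a triangle graph with V = {0, 1, 2}.
c = {frozenset({0, 1}): Fraction(1),
     frozenset({1, 2}): Fraction(2),
     frozenset({0, 2}): Fraction(3)}
V = [0, 1, 2]

def c_vertex(u):
    """c_u = sum of the weights of the edges incident to u."""
    return sum(w for e, w in c.items() if u in e)

def p(u, v):
    """Transition probability p(u, v) = c_{u,v} / c_u (0 if {u,v} is not an edge)."""
    return c.get(frozenset({u, v}), Fraction(0)) / c_vertex(u)

# Rows sum to 1, and detailed balance c_u p(u,v) = c_v p(v,u) holds.
assert all(sum(p(u, v) for v in V) == 1 for u in V)
assert all(c_vertex(u) * p(u, v) == c_vertex(v) * p(v, u) for u in V for v in V)
```

Exact fractions are used so the identities hold with equality rather than up to floating-point error.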

Lemma 1.3 Random walk on a weighted graph is reversible.

Proof. For all u, v ∈ V one has

    c_u p(u, v) = c_{u,v} = c_v p(v, u).

Hence, π(v) = c_v, v ∈ V, is a reversible measure.

Theorem 1.4 Let (Xn )n∈N0 be an irreducible Markov chain with reversible measure π.
Then, (Xn )n∈N0 is a random walk on a weighted graph.

Proof. Let V denote the state space of (X_n)_{n∈N_0}. Since the Markov chain is irreducible
and every reversible measure is stationary (i.e. π(u) = ∑_{v∈V} π(v)p(v, u) for all u ∈ V),
one has π(v) > 0 for all v ∈ V. Define

    E = {{u, v} ⊆ V : p(u, v) > 0}.

The detailed balance equation

    π(u)p(u, v) = π(v)p(v, u)    (1.1)

implies p(u, v) > 0 ⇔ p(v, u) > 0. Hence, E is well-defined. Define the weights

    c_{u,v} = π(u)p(u, v).

By (1.1), the weights are well-defined. Then,

    c_u = ∑_{v∈V: {u,v}∈E} c_{u,v} = ∑_{v∈V: p(u,v)>0} π(u)p(u, v) = π(u) ∑_{v∈V} p(u, v) = π(u).

Consequently,

    c_{u,v} / c_u = π(u)p(u, v) / π(u) = p(u, v).

Hence, (X_n)_{n∈N_0} is a random walk on the weighted graph G = (V, E).

1.2 Harmonic functions


In the following, let (X_n)_{n∈N_0} be an irreducible Markov chain with (possibly infinite) state
space V and transition matrix P (where P_{xy} = p(x, y)).

Definition 1.5 A function f : V → R is called harmonic on U ⊆ V if for all u ∈ U one has

    f(u) = ∑_{v∈V} p(u, v) f(v).

For simple random walk, this means that the value of the function at u ∈ U equals the
average of its values at neighboring points:

    f(u) = (1/degree(u)) ∑_{v∈V: {u,v}∈E} f(v).

Example 1.6 For A ⊆ V, let

    τ_A = inf{n ∈ N_0 : X_n ∈ A}

denote the first hitting time of A. For Z ⊆ V with A ∩ Z = ∅, consider

    f(u) = P_u(τ_A < τ_Z),

the probability that the Markov chain visits the set A before visiting the set Z given X_0 = u.
Observe that

    f(u) = 0 if u ∈ Z,
           1 if u ∈ A.

Furthermore, for u ∈ V \ (A ∪ Z), one has

    f(u) = ∑_{v∈V} P_u(X_1 = v, τ_A < τ_Z)
         = ∑_{v∈V: P_u(X_1=v)>0} P_u(X_1 = v) P_u(τ_A < τ_Z | X_1 = v)
         = ∑_{v∈V} p(u, v) P_v(τ_A < τ_Z)    (by the Markov property)
         = ∑_{v∈V} p(u, v) f(v).

Hence, f is harmonic on V \ (A ∪ Z). This method of proving that f is harmonic is called
one-step analysis.
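When V is finite, the one-step analysis turns the computation of f(u) = P_u(τ_A < τ_Z) into a finite linear system: one equation per interior vertex, with the boundary values substituted in. A minimal sketch under assumed data (simple random walk on the path 0, ..., 4 with A = {4} and Z = {0}; the solver is our own helper, not part of the notes):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gaussian elimination over exact fractions."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [M[i][n] for i in range(n)]

# Harmonic equations f(u) = (f(u-1) + f(u+1))/2 for u = 1, 2, 3,
# with boundary values f(0) = 0 (on Z) and f(4) = 1 (on A):
#   2 f(1) -   f(2)          = 0
#    -f(1) + 2 f(2) -   f(3) = 0
#          -   f(2) + 2 f(3) = 1
F = Fraction
A = [[F(2), F(-1), F(0)],
     [F(-1), F(2), F(-1)],
     [F(0), F(-1), F(2)]]
b = [F(0), F(0), F(1)]
f = solve(A, b)
assert f == [F(1, 4), F(1, 2), F(3, 4)]   # the gambler's-ruin values k/4
```

By the uniqueness principle proved below for finite U, this system has exactly one solution, so the solver recovers f.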

For U ⊆ V, a state v ∈ V is called accessible from u_0 ∈ U for the chain absorbed
outside U if there exist n ∈ N_0 and u_1, ..., u_n ∈ U with

    P(X_i = u_i for all 0 ≤ i ≤ n, X_{n+1} = v) > 0.

In other words, there exists a path γ from u_0 to v which lies inside U except possibly at the
last point, and the probability that the Markov chain follows this path is strictly
positive. We set

    access(U, u_0) := {v ∈ V : v is accessible from u_0 for the chain absorbed outside U}.

Theorem 1.7 (Maximum principle) Let f : V → R be harmonic on U ⊆ V. If there
exists u_0 ∈ U with

    f(u_0) = sup_{v∈V} f(v),

then f is constant on access(U, u_0).

Proof. Let M := sup_{v∈V} f(v). Consider u ∈ V with p(u_0, u) > 0. By the definition of
M, f(u) ≤ M. Assume f(u) < M. Since f is harmonic at u_0, we have

    M = f(u_0) = ∑_{v∈V} p(u_0, v) f(v) < M,

which is a contradiction (the term for v = u is strictly smaller than p(u_0, u)M, while every
other term is at most p(u_0, v)M). Hence, f(u) = M. The claim follows by induction along
paths starting at u_0.

Theorem 1.8 (Uniqueness principle) Let U be a finite subset of V with U ≠ V. Let
f, g : V → R be harmonic on U with

    f(x) = g(x) for all x ∈ V \ U.

Then f = g.

Proof. Consider h = f − g. Then h is harmonic on U and h = 0 on V \ U. Since U is
finite, there exists v_0 ∈ V with h(v_0) = sup_{v∈V} h(v).

• Case v_0 ∈ V \ U: h(v_0) = 0.

• Case v_0 ∈ U: By the maximum principle, h is constant on access(U, v_0). Since
  V \ U ≠ ∅ and the Markov chain is irreducible, there exists v ∈ (V \ U) ∩ access(U, v_0).
  Hence, h(v_0) = 0.

In both cases, we conclude h ≤ 0 on V. Applying the same argument to −h, we find
−h ≤ 0 and conclude h = 0.
Here are some consequences of the uniqueness principle:

Remark 1.9 If V is finite, the function

f (u) = Pu (τA < τZ ), u ∈ V,

from Example 1.6 is completely determined by the following properties:

• f is harmonic on V \ (A ∪ Z);

• the boundary values f = 0 on Z and f = 1 on A.

Proof. Assume we have two functions f and g with these properties. Then, by the
uniqueness principle with U = V \ (A ∪ Z) we conclude that f = g.
By an analogous argument we obtain:

Remark 1.10 (Superposition principle) Let f, f_1, f_2 be harmonic on a finite subset
U of V with U ≠ V. If f = a_1 f_1 + a_2 f_2 on V \ U for some a_1, a_2 ∈ R, then

    f = a_1 f_1 + a_2 f_2 on V.

Theorem 1.11 (Dirichlet problem) Let U ⊊ V and let g : V \ U → R be bounded.
There exists a function f : V → R with the following properties:

• f is harmonic on U;

• f = g on V \ U.

One says that f solves the Dirichlet problem with boundary value g.

Proof. Consider

    f : V → R,  f(v) = E_v[ g(X_{τ_{V\U}}) 1_{{τ_{V\U} < ∞}} ].

It is not hard to see that f satisfies the claim.

1.3 Electric networks

It is convenient to consider a weighted graph as an electric network where the weight c_e
of an edge e corresponds to a conductance c_e. The resistance of e is given by

    r_e := c_e^{-1}.

In the following, we will always take

A, Z ⊆ V with A ∩ Z = ∅.

Definition 1.12 (Voltage) A function U : V → R which is harmonic on V \ (A ∪ Z) is
called a voltage function.

Usually we require U = 1 on A and U = 0 on Z. Note that

    u ↦ P_u(τ_A < τ_Z)

is a voltage.
Sometimes we need to replace every undirected edge {u, v} by the two directed edges
(u, v) and (v, u). The corresponding set of directed edges equals

    E⃗ = {(u, v), (v, u) : {u, v} ∈ E}.

Definition 1.13 (Current) Given a voltage U, the associated current i : E⃗ → R is
given by

    i(u, v) = c_{u,v} (U(u) − U(v)).

Note that i(u, v) > 0 ⇔ U(u) > U(v). This means that current flows in the direction
of decreasing voltage.
For convenience of notation, we set i(u, v) = 0 for u, v ∈ V with {u, v} ∉ E.

Definition 1.14 (Flow) A function θ : E⃗ → R is called a flow between A and Z if

(a) it is antisymmetric, i.e. θ(u, v) = −θ(v, u) for all {u, v} ∈ E, and

(b) ∑_{v∈V: {u,v}∈E} θ(u, v) = 0 for all u ∉ A ∪ Z.

By the definition of current, we have

Theorem 1.15 (Ohm's law) For {u, v} ∈ E, one has U(u) − U(v) = r_{u,v} i(u, v).

Remark 1.16 The current is uniquely determined by the voltage via Ohm's law.

Theorem 1.17 (Kirchhoff's node law) The current is a flow between A and Z.

Proof. Clearly, i(u, v) = −i(v, u). We calculate, for u ∈ V \ (A ∪ Z),

    ∑_{v∈V: {u,v}∈E} i(u, v) = ∑_{v∈V: {u,v}∈E} c_{u,v} (U(u) − U(v))
                             = c_u U(u) − ∑_{v∈V: {u,v}∈E} c_{u,v} U(v)
                             = c_u ( U(u) − ∑_{v∈V: {u,v}∈E} p(u, v) U(v) ) = 0,

because U is harmonic on V \ (A ∪ Z).


The last proof shows the following statement:

Remark 1.18 If Ohm’s law and Kirchhoff ’s node law hold for functions U and i, then
U is a voltage.

Theorem 1.19 (Kirchhoff's cycle law) Let γ = (v_0, ..., v_n = v_0) be a cycle, i.e.
{v_j, v_{j+1}} ∈ E for all 0 ≤ j ≤ n − 1. Then,

    ∑_{j=0}^{n−1} r_{v_j, v_{j+1}} i(v_j, v_{j+1}) = 0.

Proof. Using Ohm's law, we obtain

    ∑_{j=0}^{n−1} r_{v_j, v_{j+1}} i(v_j, v_{j+1}) = ∑_{j=0}^{n−1} (U(v_j) − U(v_{j+1})) = U(v_0) − U(v_n) = 0.

In the following, we assume that a ∈ V, a ∉ Z, and V \ ({a} ∪ Z) is finite. Let

    τ_a^+ = inf{n ≥ 1 : X_n = a}

denote the first return time to a. We are interested in

    P(a → Z) := P_a(τ_Z < τ_a^+),

the probability of hitting the set Z before returning to a. If Z is the boundary of a big
box in Z^d and a = 0, this escape probability can be used to characterize recurrence and
transience.
Consider a voltage U : V → R with value U(a) ≠ 0 at a and U(v) = 0 for all v ∈ Z.
Recall that

    f(u) = P_u(τ_a < τ_Z)

is harmonic on V \ ({a} ∪ Z) with f = 0 on Z and f(a) = 1. The function g(u) = U(u)/U(a)
is also harmonic on V \ ({a} ∪ Z) with boundary values g = 0 on Z and g(a) = 1. By the
uniqueness principle,

    P_u(τ_a < τ_Z) = U(u)/U(a).
We calculate

    P(a → Z) = P_a(τ_Z < τ_a^+) = ∑_{v∈V} p(a, v) P_a(τ_Z < τ_a^+ | X_1 = v).

Using the Markov property, one obtains

    P_a(τ_Z < τ_a^+ | X_1 = v) = P_v(τ_Z < τ_a).

Since V \ Z is finite, it follows that P_v(τ_Z < ∞) = 1 for v ∈ V \ Z. For v ∈ Z, one has
τ_Z = 0. Hence, P_v(τ_Z < ∞) = 1 for all v ∈ V. Consequently,

    P_v(τ_Z < τ_a) = 1 − P_v(τ_a < τ_Z) = 1 − U(v)/U(a)

(we need P_v(τ_Z < ∞) = 1 to ensure that P_v(τ_Z < τ_a) = 1 − P_v(τ_a < τ_Z): the event
{τ_Z = τ_a = ∞} has probability 0!). Inserting this yields

    P(a → Z) = ∑_{v∈V} (c_{a,v}/c_a) (1 − U(v)/U(a))
             = (1/(c_a U(a))) ∑_{v∈V} c_{a,v} (U(a) − U(v)) = (1/(c_a U(a))) ∑_{v∈V} i(a, v).

Consequently,

    U(a) = (1/(c_a P(a → Z))) ∑_{v∈V} i(a, v).    (1.2)

By definition, ∑_{v∈V} i(a, v) is the total amount of current flowing into the network at
a. Hence, we can consider the network between a and Z as a single conductor with
conductance given by c_a P(a → Z).
Definition 1.20 The effective conductance between a and Z is defined by

    C_eff := c_a P(a → Z) =: C(a ↔ Z).

The effective resistance between a and Z is defined by

    R_eff := 1/C_eff =: R(a ↔ Z).

The definition of the effective conductance yields the important formula for the escape
probability:

Lemma 1.21

    P(a → Z) = C_eff / c_a.
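Lemma 1.21 can be checked mechanically on small networks. The sketch below (all helper names are ours; exact arithmetic) solves the harmonic equations for the voltage with U(a) = 1 and U = 0 on Z, reads off C_eff as the total current ∑_v i(a, v) leaving a, and returns C_eff / c_a:

```python
from fractions import Fraction

def escape_probability(edges, a, Z):
    """P(a -> Z) = C_eff / c_a, via the voltage with U(a) = 1, U = 0 on Z."""
    V = sorted({v for e in edges for v in e})
    Z = set(Z)
    c = lambda u, v: edges.get(frozenset({u, v}), Fraction(0))
    cv = {u: sum(c(u, w) for w in V) for u in V}
    unknowns = [v for v in V if v != a and v not in Z]
    idx = {v: i for i, v in enumerate(unknowns)}
    n = len(unknowns)
    # Row for u: c_u U(u) - sum_v c_{u,v} U(v) = 0, with U(a) = 1 on the RHS.
    M = [[Fraction(0)] * (n + 1) for _ in range(n)]
    for i, u in enumerate(unknowns):
        M[i][i] += cv[u]
        for v in V:
            if v in idx:
                M[i][idx[v]] -= c(u, v)
            elif v == a:
                M[i][n] += c(u, v)
    for i in range(n):                      # Gaussian elimination
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    U = {v: M[idx[v]][n] for v in unknowns}
    U[a] = Fraction(1)
    U.update({z: Fraction(0) for z in Z})
    c_eff = sum(c(a, v) * (U[a] - U[v]) for v in V)   # total current out of a
    return c_eff / cv[a]

# Path 0-1-2-3 with unit conductances: three unit resistors in series give
# R_eff = 3, so Lemma 1.21 predicts P(0 -> {3}) = (1/3) / 1 = 1/3.
path = {frozenset({i, i + 1}): Fraction(1) for i in range(3)}
assert escape_probability(path, 0, {3}) == Fraction(1, 3)
```

This is a sketch for connected finite networks with a ∉ Z; it assumes the restricted system is nonsingular, which holds in that setting.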

1.4 Green function

Definition 1.22 The Green function of the random walk absorbed when hitting Z is the
function G_Z : V × V → [0, ∞] defined by

    G_Z(a, v) = E_a[ ∑_{n=0}^{∞} 1_{{X_n = v}} 1_{{n < τ_Z}} ].

This is the expected number of visits to v before hitting Z for the random walk starting
at a.

Lemma 1.23 (a) G_Z(a, v) = 0 if a ∈ Z or v ∈ Z;

(b) G_Z(a, a) = 1/P(a → Z) = c_a/C_eff = c_a R_eff for a ∈ V \ Z.

Proof.

(a) If X_0 = a ∈ Z, then τ_Z = 0 and hence 1_{{n<τ_Z}} = 0 for all n. Thus, G_Z(a, v) = 0. If
X_n = v ∈ Z, then n ≥ τ_Z and hence 1_{{n<τ_Z}} = 0. The claim G_Z(a, v) = 0 follows.

(b) To compute G_Z(a, a), note that the random walk visits a at time 0 if it starts in a.
Therefore, the probability of precisely k + 1 visits to a before time τ_Z equals

    P_a(τ_a^+ < τ_Z)^k P_a(τ_Z < τ_a^+) = (1 − p)^k p

with p = P(a → Z). This is the distribution of a geometric random variable with
success probability P(a → Z). Consequently,

    G_Z(a, a) = 1/P(a → Z) = c_a/C_eff = c_a R_eff.
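Part (b) can be checked on a concrete network (a hypothetical example; the helper name is ours). On the path 0, ..., n with unit conductances and Z = {n}, the walk started at 0 must step to 1 and then returns to 0 before hitting n with the gambler's-ruin probability (n − 1)/n, so the number of visits to 0 is geometric:

```python
from fractions import Fraction

def green_at_start(n):
    """G_Z(0, 0) for simple random walk on the path 0, ..., n with Z = {n}:
    the return probability to 0 before hitting n is (n-1)/n, so the number
    of visits to 0 is geometric and G_Z(0,0) = 1 / (1 - (n-1)/n) = n."""
    p_return = Fraction(n - 1, n)
    return 1 / (1 - p_return)

# Lemma 1.23(b) predicts G_Z(0,0) = c_0 * R_eff = 1 * n on this path.
assert green_at_start(3) == 3
assert green_at_start(10) == 10
```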

Lemma 1.24 For all a, v ∈ V, one has c_a G_Z(a, v) = c_v G_Z(v, a).

Proof. If a ∈ Z or v ∈ Z, both sides equal zero. For a, v ∈ V \ Z and n ∈ N, let Γ_n(a, v)
denote the set of paths of length n from a to v which avoid the set Z. We calculate

    c_a G_Z(a, v) = c_a ∑_{n=0}^{∞} P_a(X_n = v, n < τ_Z) = c_a δ_{av} + ∑_{n=1}^{∞} ∑_{(v_0,...,v_n)∈Γ_n(a,v)} c_a ∏_{i=0}^{n−1} p(v_i, v_{i+1}).

By reversibility,

    c_a ∏_{i=0}^{n−1} p(v_i, v_{i+1}) = c_v ∏_{i=0}^{n−1} p(v_{i+1}, v_i),

which equals c_v times the probability to traverse the reversed path (v_n, ..., v_0) ∈ Γ_n(v, a).
Hence,

    c_a G_Z(a, v) = c_v δ_{av} + ∑_{n=1}^{∞} ∑_{(v_0,...,v_n)∈Γ_n(a,v)} c_v ∏_{i=0}^{n−1} p(v_{i+1}, v_i)
                  = c_v ∑_{n=0}^{∞} P_v(X_n = a, n < τ_Z) = c_v G_Z(v, a).

When we multiply the voltage U by a constant, the current is multiplied by the same
constant. This can be used to achieve different normalizations, e.g.

• the voltage equals one at the vertex a: U(a) = 1, or

• a unit current flows from a to Z: ∑_{v∈V} i(a, v) = 1. In this case, by (1.2),

    U(a) = 1/(c_a P(a → Z)) = 1/C_eff = R_eff.    (1.3)

Theorem 1.25 (Green function as voltage) Let G be a finite graph. Impose a voltage
U on {a} ∪ Z with U = 0 on Z such that a unit current flows from a to Z. Then the
voltage is given by

    U(v) = G_Z(a, v)/c_v  for all v ∈ V.

Proof. Case v ∈ Z: both sides equal zero.
Case v = a: By (1.3), U(a) = R_eff. Lemma 1.23 yields G_Z(a, a)/c_a = R_eff.
Case v ∈ V \ ({a} ∪ Z): By Lemma 1.24,

    G_Z(a, v)/c_v = G_Z(v, a)/c_a.

Using a one-step analysis as in Example 1.6, one can show that v ↦ G_Z(v, a) is harmonic
on V \ ({a} ∪ Z). The claim follows from the uniqueness principle.
For x, y ∈ R, we abbreviate x ∧ y := min{x, y}.

Theorem 1.26 Let G be a finite graph. For {u, v} ∈ E, let S_{uv} be the number of transi-
tions from u to v for the Markov chain (X_{n∧τ_Z})_{n∈N_0} absorbed when it hits Z. Then,

    E_a[S_{uv}] = G_Z(a, u) p(u, v).    (1.4)

Furthermore, if i is the current in G which arises when a potential U is applied between
a and Z in such a way that a unit current flows from a to Z, then

    E_a[S_{uv} − S_{vu}] = i(u, v).    (1.5)

Proof. If a ∈ Z or u ∈ Z, both sides of (1.4) equal zero. Assume a, u ∉ Z. Then,

    E_a[S_{uv}] = E_a[ ∑_{k=0}^{∞} 1_{{X_{k∧τ_Z} = u, X_{(k+1)∧τ_Z} = v}} ] = ∑_{k=0}^{∞} P_a(X_k = u, X_{k+1} = v, τ_Z > k)
                = ∑_{k=0}^{∞} P_a(X_{k+1} = v | X_k = u, τ_Z > k) P_a(X_k = u, τ_Z > k).

Since τ_Z is a stopping time,

    {τ_Z > k} = {τ_Z ≤ k}^c ∈ σ(X_0, ..., X_k).

Hence, by the Markov property,

    P_a(X_{k+1} = v | X_k = u, τ_Z > k) = p(u, v).

Thus, we obtain

    E_a[S_{uv}] = ∑_{k=0}^{∞} E_a[1_{{X_k = u, τ_Z > k}}] p(u, v)
                = E_a[ ∑_{k=0}^{∞} 1_{{X_k = u, τ_Z > k}} ] p(u, v) = G_Z(a, u) p(u, v).

This proves (1.4).


To prove (1.5), we calculate, for u, v ∈ V,

    E_a[S_{uv} − S_{vu}] = G_Z(a, u) p(u, v) − G_Z(a, v) p(v, u)    (by (1.4))
                         = U(u) c_u p(u, v) − U(v) c_v p(v, u)      (by Theorem 1.25)
                         = (U(u) − U(v)) c_{u,v} = i(u, v).

1.5 Escape probabilities


Let G = (V, E) be an infinite weighted graph. Assume G is connected and locally finite.
Let Gn = (Vn , En ), n ∈ N, be finite graphs such that

• Gn is a subgraph of G in the sense that Vn ⊆ V and En = {{u, v} ∈ E : u, v ∈ Vn };

• Gn is connected;

• Gn , n ∈ N, exhaust G in the sense that Vn ↑ V .

Every edge e ∈ En is given the weight it has in G.
A loop is an edge with the same endpoints. Multiple edges are parallel edges between
the same pair of vertices. A multigraph is a graph which may contain loops or multiple
edges.
Set Z_n = V \ V_n. Let

    G_n^w = (V_n ∪ {z_n}, E_n^w)

be the graph obtained from G by identifying all vertices in Z_n to a single vertex z_n and then
removing all loops while keeping multiple edges. We say G_n^w has wired boundary conditions.
A random walk on G which is stopped when hitting Z_n is in one-to-one correspondence
with a random walk on G_n^w which is stopped when it hits the vertex z_n.
Assume the random walk starts in a. If a ∈ V_n, then {τ_{Z_{n+1}} < τ_a^+} ⊆ {τ_{Z_n} < τ_a^+}.
Hence, the events {τ_{Z_n} < τ_a^+}, n ∈ N, are decreasing. One has

    ⋂_{n∈N} {τ_{Z_n} < τ_a^+} = {hit all Z_n before returning to a}
                              = {never return to a} = {τ_a^+ = ∞},

where the second equality holds only P_a-almost surely (since the event that the random walk
does not return to a but also does not hit all the Z_n has probability 0 under P_a). In particular,

    lim_{n→∞} P(a → Z_n) = lim_{n→∞} P_a(τ_{Z_n} < τ_a^+) = P_a(τ_a^+ = ∞).    (1.6)

P_a(τ_a^+ = ∞) is called the escape probability from a.

Lemma 1.27 Let G = (V, E) be an infinite, connected weighted graph. Then the following
statements are equivalent:

(a) The random walk on G is transient.

(b) Pa (τa+ = ∞) > 0 for all a ∈ V .

(c) Pa (τa+ = ∞) > 0 for some a ∈ V .

Proof. By definition, a is transient if P_a(τ_a^+ = ∞) > 0. The random walk on G is
called transient if all a ∈ V are transient. Thus, (a)⇔(b) is true by definition. Since the
random walk is an irreducible Markov chain, transience of one state implies transience of
all states. Hence, (b)⇔(c).
We know

    P(a → Z_n) = P_a(τ_{Z_n} < τ_a^+) = C(a ↔ Z_n, G)/c_a = C(a ↔ z_n, G_n^w)/c_a;    (1.7)

here the second component indicates the underlying graph for which the effective conduc-
tance is calculated. Since P(a → Z_n) is decreasing in n, the same is true for C(a ↔ z_n, G_n^w).

Definition 1.28 The effective conductance between a and ∞ is defined by

    C(a ↔ ∞) = lim_{n→∞} C(a ↔ Z_n, G) = C_eff.    (1.8)

The effective resistance between a and ∞ is defined by

    R(a ↔ ∞) = 1/C(a ↔ ∞) = R_eff.    (1.9)

One can show directly (exercise) that the limits in (1.8) and (1.9) do not depend on the
choice of the sequence Z_n, n ≥ 1. This also follows from (1.7) and (1.6): the limit of
P(a → Z_n) equals P_a(τ_a^+ = ∞) and does not depend on the choice of the sequence Z_n.
The following criterion for recurrence and transience is very useful.
Theorem 1.29 Random walk on an infinite graph is recurrent if and only if, for some
a ∈ V,

    C(a ↔ ∞) = 0  (equivalently, R(a ↔ ∞) = ∞).

The random walk is transient if and only if, for some a ∈ V,

    C(a ↔ ∞) > 0  (equivalently, R(a ↔ ∞) < ∞).

In both cases "for some a ∈ V" can be replaced by "for all a ∈ V". The escape probability
is given by

    P_a(τ_a^+ = ∞) = C(a ↔ ∞)/c_a = 1/(c_a R(a ↔ ∞)).

Proof. For a ∈ V, we calculate

    P_a(τ_a^+ = ∞) = lim_{n→∞} P(a → Z_n)           (by (1.6))
                   = lim_{n→∞} C(a ↔ Z_n, G)/c_a    (by (1.7))
                   = C(a ↔ ∞)/c_a = 1/(c_a R(a ↔ ∞)).

The claim follows from Lemma 1.27.

1.6 Reduction of networks


By Theorem 1.29, it suffices to calculate or bound the effective conductance in order to
get control over escape probabilities.
In the following, a resistor r is an edge with resistance r and a conductor c is an edge
with conductance c.
Theorem 1.30 (Series law) Two resistors r_1 and r_2 in series are equivalent to a single
resistor r_1 + r_2. More formally, assume that v ∈ V \ (A ∪ Z) has degree 2 with neighbors
u_1 and u_2, and that we replace the edges {u_1, v}, {u_2, v} with resistances r_{u_1,v} = r_1 and
r_{u_2,v} = r_2 by the single edge {u_1, u_2} with resistance r_1 + r_2. Then,

• the voltage on V \ {v} and the current along edges of G with both endpoints in V \ {v}
  do not change, and

• the current flowing from u_1 to u_2 equals i(u_1, v).

Proof. Consider the new multigraph G̃ = (Ṽ, Ẽ) with Ṽ = V \ {v},
Ẽ = (E \ {{u_1, v}, {u_2, v}}) ∪ {{u_1, u_2}}, and r̃_e = r_e for e ∈ Ẽ ∩ E, r̃_{u_1,u_2} = r_1 + r_2.
By Remarks 1.16 and 1.18, it suffices to show that Ohm's law and Kirchhoff's node law
hold on G̃ for the functions

    ĩ(w_1, w_2) = i(w_1, w_2)   if w_1, w_2 ∈ V \ {v} with {w_1, w_2} ∈ E,
                  i(u_1, v)     if (w_1, w_2) = (u_1, u_2),
                  i(v, u_1)     if (w_1, w_2) = (u_2, u_1),

    Ũ(w) = U(w) for w ∈ V \ {v}.

Claim: Ohm's law holds.

It suffices to check it for the edge between u_1 and u_2. One has

    Ũ(u_1) − Ũ(u_2) = U(u_1) − U(u_2) = (U(u_1) − U(v)) + (U(v) − U(u_2))
                    = r_1 i(u_1, v) + r_2 i(v, u_2)    (by Ohm's law)
                    = r_1 i(u_1, v) − r_2 i(u_2, v).   (1.10)

Kirchhoff's node law at v yields

    i(u_1, v) + i(u_2, v) = 0,  hence  −i(u_2, v) = i(u_1, v).

Inserting this into (1.10) yields

    Ũ(u_1) − Ũ(u_2) = (r_1 + r_2) i(u_1, v) = r̃_{u_1,u_2} ĩ(u_1, u_2).
Claim: Kirchhoff's node law holds.

By the definition of ĩ, one has ĩ(w_1, w_2) = −ĩ(w_2, w_1) for all {w_1, w_2} ∈ Ẽ. It remains to
check the node law at u_1 and u_2. We calculate

    ∑_{w∈Ṽ: {u_1,w}∈Ẽ} ĩ(u_1, w) = ∑_{w∈V\{v}: {u_1,w}∈E} i(u_1, w) + ĩ(u_1, u_2).

The sum on the right-hand side equals −i(u_1, v) by Kirchhoff's law. By definition,
ĩ(u_1, u_2) = i(u_1, v). Hence,

    ∑_{w∈Ṽ: {u_1,w}∈Ẽ} ĩ(u_1, w) = 0.
Lemma 1.31 For α ∈ (0, 1), consider a nearest-neighbor random walk on Z with

    P(X_{n+1} = u + 1 | X_n = u) = α  and  P(X_{n+1} = u − 1 | X_n = u) = β = 1 − α

for all n ∈ N_0. For n ∈ N and k ∈ {0, ..., n}, the probability of hitting 0 before n given
X_0 = k is given by

    P_k(τ_0 < τ_n) = (n − k)/n                          if α = 1/2,
                     ((α/β)^{n−k} − 1)/((α/β)^n − 1)    if α ≠ 1/2.

Proof. Case α = 1/2: We consider the line graph with V = {0, ..., n} with resistors given
by r_e = 1 for all edges. We set U(0) = 1 and U(n) = 0. Then the probability P_k(τ_0 < τ_n)
is the voltage at k.
By the series law, the voltage does not change if we replace the k unit resistors between
0 and k by one resistor with resistance r_{0,k} = k. Similarly, we can replace the n − k
unit resistors between k and n by one resistor with resistance r_{k,n} = n − k. Since U is
harmonic at k and U(n) = 0, we obtain

    U(k) = (c_{0,k}/(c_{0,k} + c_{k,n})) U(0) + (c_{k,n}/(c_{0,k} + c_{k,n})) U(n)
         = c_{k,n}^{-1}/(c_{k,n}^{-1} + c_{0,k}^{-1}) = r_{k,n}/(r_{k,n} + r_{0,k}) = (n − k)/n.    (1.11)

Case α ≠ 1/2: We use a similar argument. In this case, we set, for u ∈ {0, ..., n − 1},

    c_{u,u+1} = (α/β)^u  ⇒  r_{u,u+1} = (β/α)^u.

This gives the right transition probabilities because for all u ∈ {1, ..., n − 1}, one has

    p(u, u + 1) = c_{u,u+1}/(c_{u,u+1} + c_{u,u−1}) = (α/β)^u / ((α/β)^u + (α/β)^{u−1}) = α/(α + β) = α.

We replace the k resistors between 0 and k by

    r_{0,k} = ∑_{u=0}^{k−1} r_{u,u+1} = ∑_{u=0}^{k−1} (β/α)^u = (1 − (β/α)^k)/(1 − β/α)

and the n − k resistors between k and n by

    r_{k,n} = ∑_{u=k}^{n−1} r_{u,u+1} = ∑_{u=k}^{n−1} (β/α)^u = (β/α)^k (1 − (β/α)^{n−k})/(1 − β/α) = ((β/α)^k − (β/α)^n)/(1 − β/α).
Using (1.11), we obtain

    U(k) = r_{k,n}/(r_{k,n} + r_{0,k}) = ((β/α)^k − (β/α)^n)/(1 − (β/α)^n) = ((α/β)^{n−k} − 1)/((α/β)^n − 1).
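The closed form of Lemma 1.31 can be double-checked against a direct solution of the first-step recursion f(j) = α f(j+1) + β f(j−1) with f(0) = 1 and f(n) = 0 (by the uniqueness principle, both solve the same boundary value problem). The helper names below are ours:

```python
from fractions import Fraction

def hit_zero_first(alpha, n, k):
    """Closed form of Lemma 1.31 for P_k(tau_0 < tau_n)."""
    beta = 1 - alpha
    if alpha == Fraction(1, 2):
        return Fraction(n - k, n)
    rho = alpha / beta
    return (rho ** (n - k) - 1) / (rho ** n - 1)

def hit_zero_first_rec(alpha, n, k):
    """Same probability from the recursion f(j) = alpha f(j+1) + beta f(j-1),
    f(0) = 1, f(n) = 0: write f(j) = a_j + b_j * t with unknown slope
    t = f(1), iterate f(j+1) = (f(j) - beta f(j-1)) / alpha, enforce f(n) = 0."""
    beta = 1 - alpha
    a, b = [Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]
    for _ in range(n - 1):
        a.append((a[-1] - beta * a[-2]) / alpha)
        b.append((b[-1] - beta * b[-2]) / alpha)
    t = -a[n] / b[n]
    return a[k] + b[k] * t

alpha = Fraction(2, 3)
assert all(hit_zero_first(alpha, 6, k) == hit_zero_first_rec(alpha, 6, k)
           for k in range(7))
assert hit_zero_first(Fraction(1, 2), 6, 2) == Fraction(2, 3)
```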

Remark 1.32 These arguments are robust: α could also depend on u!


Theorem 1.33 (Parallel law) Two conductors c_1 and c_2 in parallel are equivalent to a
single conductor c_1 + c_2. More formally, assume that two parallel edges e_1, e_2 ∈ E between
u_1 and u_2 with conductances c_{e_1} = c_1 and c_{e_2} = c_2 are replaced by a single edge e between
u_1 and u_2 with conductance c_1 + c_2. Then,

• the voltage on V and the current along edges in E \ {e_1, e_2} do not change, and

• i(e) = i(e_1) + i(e_2), provided e, e_1, and e_2 have the same orientation.
Proof. The proof follows the same lines as the proof of the series law. We assume that
e, e_1, and e_2 have the same orientation and denote by −e, −e_1, −e_2 the edges with the
opposite orientation. We check that Ohm's law and Kirchhoff's node law hold on the new
graph G̃ = (V, F) with edge set F = (E \ {e_1, e_2}) ∪ {e}, with c_e = c_1 + c_2, for the
voltage Ũ = U and

    ĩ(f) = i(f)               for f ∈ F⃗ \ {e, −e},
           i(e_1) + i(e_2)    for f = e,
           i(−e_1) + i(−e_2)  for f = −e.

Claim: Ohm's law holds.

It suffices to consider the edge e because at all other edges neither the voltage nor the
current changes. We calculate

    ĩ(e) = i(e_1) + i(e_2) = c_1 (U(u_1) − U(u_2)) + c_2 (U(u_1) − U(u_2)) = c_e (Ũ(u_1) − Ũ(u_2)).
Claim: Kirchhoff's node law holds.

It suffices to verify that Ũ is harmonic at u_1 and u_2 in G̃. Without loss of generality, we
consider only u_1. For f ∈ E with u_1 ∈ f, let f̌ denote the endpoint of f different from u_1.
We know that

    U(u_1) = ∑_{f∈E: u_1∈f} (c_f/c_{u_1}) U(f̌)
           = ∑_{f∈E\{e_1,e_2}: u_1∈f} (c_f/c_{u_1}) U(f̌) + (c_{e_1}/c_{u_1}) U(u_2) + (c_{e_2}/c_{u_1}) U(u_2).

Note that c_{u_1} is unchanged in G̃. Since c_e = c_{e_1} + c_{e_2}, we obtain

    Ũ(u_1) = ∑_{f∈F\{e}: u_1∈f} (c_f/c_{u_1}) U(f̌) + (c_e/c_{u_1}) U(u_2) = ∑_{f∈F: u_1∈f} (c_f/c_{u_1}) Ũ(f̌).
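The series and parallel laws can be packaged as two one-line helpers (names ours, not from the notes) and composed to reduce small networks. A hypothetical example: between a and z there is a direct unit resistor in parallel with a two-edge route through an intermediate vertex.

```python
from fractions import Fraction

def series(*r):
    """Series law: resistances add."""
    return sum(r, Fraction(0))

def parallel(*r):
    """Parallel law: conductances add, i.e. 1/r_eff = sum of 1/r_i."""
    return 1 / sum(1 / x for x in r)

# Direct unit resistor in parallel with two unit resistors in series (r = 2):
r_eff = parallel(Fraction(1), series(Fraction(1), Fraction(1)))
assert r_eff == Fraction(2, 3)
```

Repeatedly applying these two reductions is exactly the strategy used in the proofs of Lemma 1.31 and Theorem 1.34.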

An automorphism of the weighted graph (G = (V, E), c) consists of two bijections
ϕ : V → V and ψ : E → E with the following properties for all v ∈ V and e ∈ E:

• v ∈ e ⇒ ϕ(v) ∈ ψ(e);

• c_e = c_{ψ(e)}.
For u, v ∈ V, we define d(u, v) to be the length of the shortest path connecting u and v.
In other words,

    d(u, v) = inf{n ∈ N_0 : ∃ v_0, ..., v_n ∈ V with v_0 = u, v_n = v, {v_i, v_{i+1}} ∈ E for all i ∈ {0, ..., n−1}}.

We call a weighted graph (G, c) spherically symmetric about 0 ∈ V if for any u, v ∈ V
with d(u, 0) = d(v, 0) there exists an automorphism of the weighted graph with

    ϕ(0) = 0 and ϕ(u) = v.

Theorem 1.34 Let (G, c) be an infinite spherically symmetric graph. For n ∈ N, let

    E_n = {{u, v} ∈ E : d(u, 0) = n − 1, d(v, 0) = n}

be the set of edges connecting vertices on levels n − 1 and n. Then,

    R(0 ↔ ∞) = ∑_{n=1}^{∞} 1/C_n  with  C_n = ∑_{e∈E_n} c_e.

In particular, the random walk on G is transient if and only if

    ∑_{n=1}^{∞} 1/C_n < ∞.
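Two standard instances of this criterion, with unit conductances (hypothetical small computations; the helper name is ours): on the binary tree rooted at 0 there are 2^n edges between levels n − 1 and n, while on Z, viewed as spherically symmetric about 0, there are two.

```python
from fractions import Fraction

def resistance_partial_sum(C, N):
    """Partial sum of R(0 <-> infinity) = sum_{n >= 1} 1 / C_n."""
    return sum(Fraction(1, C(n)) for n in range(1, N + 1))

# Binary tree rooted at 0: C_n = 2^n, so R(0 <-> infinity) = sum 2^{-n} = 1
# is finite and the walk is transient.
tree = resistance_partial_sum(lambda n: 2 ** n, 50)
assert 1 - tree == Fraction(1, 2 ** 50)

# Z about 0: C_n = 2 for every n, so the partial sums grow like N/2 and
# diverge; the walk is recurrent.
line = resistance_partial_sum(lambda n: 2, 50)
assert line == 25
```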

Proof. For n ∈ N, let

    V_n = {v ∈ V : d(v, 0) = n},  V_{≤n} = ⋃_{i=0}^{n} V_i.

Let n ∈ N and u, v ∈ V_n. In particular, d(u, 0) = d(v, 0). Let U be harmonic on V_{≤n} \ {0}
and let U(0) = 1, U(w) = 0 for all w ∈ V \ V_{≤n}.
Claim 1: U(u) = U(v).

By assumption, there exists an automorphism of the weighted graph with ϕ(0) = 0 and
ϕ(u) = v. Since the graph is spherically symmetric,

    Ũ = U ∘ ϕ

is also harmonic on V_{≤n} \ {0} and satisfies

    Ũ(0) = U(ϕ(0)) = U(0) = 1,  Ũ(w) = 0 for all w ∈ V \ V_{≤n}.

By the uniqueness principle, Ũ = U. In particular,

    U(u) = Ũ(u) = U(ϕ(u)) = U(v).

Consequently, zero current flows through edges between u and v. Hence, if such
edges exist, we can remove them without changing the voltage U. Thus, we can remove
all edges between vertices having the same distance to 0. Without loss of generality, we
assume that G contains no such edges.
Let G̃ denote the graph obtained from G by identifying u and v. We denote the
identification of u and v by v.
Claim 2: The restriction of U to V \ {u} is harmonic on G̃.

It suffices to consider the vertex v. Since U is harmonic on G, we know

    U(u) = ∑_{w∈V: {w,u}∈E} (c_{u,w}/c_u) U(w),    (1.12)

    U(v) = ∑_{w∈V: {w,v}∈E} (c_{v,w}/c_v) U(w).    (1.13)

Since U(u) = U(v), we obtain

    U(v) = (U(u) + U(v))/2 = ∑_{w∈V: {w,u}∈E} (c_{u,w}/(2c_u)) U(w) + ∑_{w∈V: {w,v}∈E} (c_{v,w}/(2c_v)) U(w).    (1.14)

Note that on the new graph

    c̃_v = c_u + c_v.

Since G is spherically symmetric, c_u = c_v and thus c̃_v = 2c_u = 2c_v. Consequently, (1.14)
shows that U is harmonic at v. Note that U remains harmonic if we identify, for each n,
all vertices in V_n simultaneously.
We construct from G a new graph

    Ḡ = (N_0, Ē = {{i, i + 1} : i ∈ N_0})

as follows. For any n ∈ N we identify all vertices in V_n and call the new vertex n. Hence,
by Claims 1 and 2, the escape probabilities P(0 → V \ V_{≤n}) do not change. Between n
and n + 1 there are |E_{n+1}| parallel edges. By the parallel law, these parallel edges can be
replaced by one edge {n, n + 1} with conductance C_{n+1}. By the series law, the effective
resistance of the new network Ḡ equals

    ∑_{n=1}^{∞} 1/C_n.

By construction, the effective resistance of Ḡ equals the effective resistance of G. The
second claim follows from Theorem 1.29.

2 Energy
2.1 Definitions
Let G = (V, E) be a graph of bounded degree, i.e. sup_{v∈V} degree(v) < ∞. Consider the
Hilbert space

    ℓ²(V) = {f : V → R : ∑_{v∈V} f(v)² < ∞}

with the real scalar product

    ⟨f, g⟩ = ∑_{v∈V} f(v)g(v).

For e = (u, v) ∈ E⃗, let −e := (v, u) denote the reversed edge. Let E⃗_{1/2} ⊆ E⃗ denote a set
of directed edges such that for each e = {u, v} ∈ E precisely one of the edges (u, v) or
(v, u) is contained in E⃗_{1/2}. Consider the Hilbert space

    ℓ²₋(E) = {θ : E⃗ → R : θ(e) = −θ(−e), ∑_{e∈E⃗} θ(e)² < ∞}

with the real scalar product

    ⟨θ, η⟩ = (1/2) ∑_{e∈E⃗} θ(e)η(e) = ∑_{e∈E⃗_{1/2}} θ(e)η(e).

For e = (u, v) ∈ E⃗, we set ě = u, ê = v.

Definition 2.1 We define the coboundary operator

    d : ℓ²(V) → ℓ²₋(E),  (df)(e) = f(ě) − f(ê),  e ∈ E⃗,

and the boundary operator

    d* : ℓ²₋(E) → ℓ²(V),  (d*θ)(v) = ∑_{e∈E⃗: ě=v} θ(e),  v ∈ V.

One has to show that d and d* indeed map into the indicated ℓ²-spaces; see the exercises.
df gives the flow through the edges induced by f; d*θ gives the net flow at the vertices.
Clearly, d and d* are linear operators.

Lemma 2.2 d and d∗ are adjoints of each other: For all f ∈ ℓ2 (V ) and all θ ∈ ℓ2− (E),
one has
⟨θ, df ⟩ = ⟨d∗ θ, f ⟩.

Proof. We calculate

    ⟨θ, df⟩ = (1/2) ∑_{e∈E⃗} θ(e)(f(ě) − f(ê)) = (1/2) ∑_{e∈E⃗} θ(e)f(ě) − (1/2) ∑_{e∈E⃗} θ(e)f(ê)
            = (1/2) ∑_{v∈V} f(v) ∑_{e∈E⃗: ě=v} θ(e) − (1/2) ∑_{v∈V} f(v) ∑_{e∈E⃗: ê=v} θ(e)
            = (1/2) ∑_{v∈V} f(v) ( d*θ(v) + ∑_{e∈E⃗: ê=v} θ(−e) ).

Note that

    ∑_{e∈E⃗: ê=v} θ(−e) = ∑_{e∈E⃗: ě=v} θ(e) = d*θ(v).

Inserting this yields

    ⟨θ, df⟩ = ∑_{v∈V} f(v) d*θ(v) = ⟨d*θ, f⟩.
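On a finite graph, Lemma 2.2 is a finite identity that can be checked directly. A sketch on a triangle with arbitrary test data (all names and numbers below are ours; θ is stored on one chosen orientation per edge and extended antisymmetrically):

```python
from fractions import Fraction

V = [0, 1, 2]
E_half = [(0, 1), (1, 2), (0, 2)]          # one chosen orientation per edge

def d(f):
    """Coboundary (df)(e) = f(tail) - f(head) on the chosen orientations."""
    return {e: f[e[0]] - f[e[1]] for e in E_half}

def d_star(theta):
    """Boundary (d* theta)(v): net flow out of v, using theta(-e) = -theta(e)."""
    return {v: sum(theta[e] for e in E_half if e[0] == v)
               - sum(theta[e] for e in E_half if e[1] == v)
            for v in V}

f = {0: Fraction(3), 1: Fraction(-1), 2: Fraction(2)}
theta = {(0, 1): Fraction(1), (1, 2): Fraction(2), (0, 2): Fraction(-1)}

lhs = sum(theta[e] * d(f)[e] for e in E_half)     # <theta, df>
rhs = sum(d_star(theta)[v] * f[v] for v in V)     # <d* theta, f>
assert lhs == rhs == -3
```

The inner products are computed over E⃗_{1/2}, which by antisymmetry agrees with the (1/2)-weighted sum over all directed edges used in the proof.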

Remark 2.3 Using the operators d and d*, we can rewrite Ohm's law and Kirchhoff's
node law for the voltage U, the current i, and the resistance r as follows:

• Ohm's law: dU(e) = i(e) r_e for all e ∈ E⃗.

• Kirchhoff's node law: d*i(v) = 0 for all v ∈ V \ (A ∪ Z).

Let θ ∈ ℓ²₋(E). Recall that θ is called a flow between A and Z if d*θ = 0 on V \ (A ∪ Z).
If in addition d*θ ≥ 0 on A and d*θ ≤ 0 on Z, then θ is a flow from A to Z. Define

    strength(θ) := ∑_{a∈A} d*θ(a)

to be the strength of the flow. This is the total amount flowing into the network. A unit
flow is a flow of strength 1.

Lemma 2.4 (Flow conservation) Let G be a finite graph and let A, Z ⊆ V with A ∩
Z = ∅. For any flow θ between A and Z, one has

    ∑_{a∈A} d*θ(a) = − ∑_{z∈Z} d*θ(z).    (2.1)

In other words, the total amount flowing into the network at A equals the total amount
flowing out of the network at Z.
If in addition f : V → R is a function with f|_A = α and f|_Z = ζ for constants
α, ζ ∈ R, then

    ⟨θ, df⟩ = strength(θ)(α − ζ).
Proof. Let 1 : V → R, 1(v) = 1, denote the constant function 1. Recall that d*θ(v) = 0
for all v ∈ V \ (A ∪ Z). We calculate

∑_{a∈A} d*θ(a) + ∑_{z∈Z} d*θ(z) = ∑_{v∈V} d*θ(v)1(v) = ⟨d*θ, 1⟩ = ⟨θ, d1⟩ = 0

because d1(e) = 0 for all e ∈ E⃗. This proves (2.1).
Using again d*θ(v) = 0 for all v ∈ V \ (A ∪ Z), we calculate

⟨θ, df⟩ = ⟨d*θ, f⟩ = ∑_{v∈V} d*θ(v)f(v) = ∑_{a∈A} d*θ(a)α + ∑_{z∈Z} d*θ(z)ζ = strength(θ)(α − ζ),

where the last equality follows from (2.1).


From now on, let G be a finite graph with strictly positive edge weights given by a
weight function h : E → R, h > 0. For θ, η ∈ ℓ²₋(E), we set

⟨θ, η⟩_h := ⟨θh, η⟩ = ⟨θ, ηh⟩.

The associated norm is given by

∥θ∥_h = √⟨θ, θ⟩_h .

Definition 2.5 We define the energy of an antisymmetric function θ : E⃗ → R by

E(θ) := ∥θ∥²_r = (1/2) ∑_{e∈E⃗} θ(e)² r_e .

Lemma 2.6 Let i : E ⃗ → R be a unit flow from A to Z such that the voltage U satisfies
U = UA on A and U = UZ on Z with constants UA , UZ ∈ R. The effective resistance
R(A ↔ Z) between A and Z is defined as the effective resistance between a and Z if all
vertices in A are identified to one vertex a. Then, one has

E(i) = UA − UZ = R(A ↔ Z).

Proof. By Ohm’s law, we obtain

E(i) = ⟨i, ri⟩ = ⟨i, dU⟩.

By assumption, i is a flow of strength one. Hence, Lemma 2.4 implies

E(i) = U_A − U_Z .

Since all vertices in A are at the same voltage, there is no current flowing through edges
between vertices of A. Hence, we can remove all these edges and identify all vertices in
A with one vertex a. Since i is a unit flow, I_{AZ} := ∑_{a∈A} ∑_{v∈V} i(a, v) = 1. By (1.2),

U_A − U_Z = R(A ↔ Z) I_{AZ} = R(A ↔ Z).
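Lemma 2.6 can be checked numerically: solve the discrete Dirichlet problem for the voltage, normalize the current to unit strength, and compare its energy with the effective resistance. The 4-vertex network and its conductances below are an arbitrary illustration, assuming numpy is available:

```python
import numpy as np

# Conductances on a 4-vertex network; source a = 0, sink z = 3.
edges = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 2.0, (2, 3): 1.0, (1, 2): 1.0}
n, a, z = 4, 0, 3

# Weighted graph Laplacian: L[v][v] = c_v, L[u][v] = -c_{uv}.
L = np.zeros((n, n))
for (u, v), c in edges.items():
    L[u, v] -= c; L[v, u] -= c
    L[u, u] += c; L[v, v] += c

# Voltage with U(a) = 1, U(z) = 0, harmonic at the interior vertices.
interior = [1, 2]
A = L[np.ix_(interior, interior)]
b = -L[interior, a] * 1.0            # boundary condition U(a) = 1
U = np.zeros(n); U[a] = 1.0
U[interior] = np.linalg.solve(A, b)

# Strength of the current i(u, v) = c_{uv}(U(u) - U(v)) flowing out of a.
strength = sum(c * (U[a] - U[v if u == a else u])
               for (u, v), c in edges.items() if a in (u, v))

R = (U[a] - U[z]) / strength         # effective resistance
# Energy of the *unit* current flow: sum over edges of i(e)^2 r_e.
energy = sum(c * (U[u] - U[v]) ** 2 for (u, v), c in edges.items()) / strength ** 2
```

Here R = 5/7, both as (U_A − U_Z)/strength and as the energy of the unit current, in line with Lemma 2.6.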

For e ∈ E⃗, let χ^e : E⃗ → R,

χ^e = 1_{e} − 1_{−e},

be the unit flow along the edge e. In other words,

χ^e(ẽ) =  1 if ẽ = e,   −1 if ẽ = −e,   0 else.

Lemma 2.7 For any antisymmetric function θ : E⃗ → R, one has

⟨χ^e, θ⟩_r = θ(e) r_e   for all e ∈ E⃗,                            (2.2)

⟨ ∑_{e∈E⃗: ě=v} c_e χ^e , θ ⟩_r = d*θ(v)   for all v ∈ V.          (2.3)

Proof. By definition of χ^e, one has

⟨χ^e, θ⟩_r = (1/2) ∑_{ẽ∈E⃗} χ^e(ẽ) θ(ẽ) r_ẽ = (1/2)(θ(e) − θ(−e)) r_e = θ(e) r_e .

Using this and the linearity of the scalar product, we calculate

⟨ ∑_{e∈E⃗: ě=v} c_e χ^e , θ ⟩_r = ∑_{e∈E⃗: ě=v} c_e ⟨χ^e, θ⟩_r = ∑_{e∈E⃗: ě=v} c_e θ(e) r_e = ∑_{e∈E⃗: ě=v} θ(e) = d*θ(v).

Remark 2.8 Lemma 2.7 allows us to reformulate Kirchhoff ’s laws as follows:

• Kirchhoff’s node law: ⟨ ∑_{e∈E⃗: ě=v} c_e χ^e , i ⟩_r = 0 for all v ∈ V \ (A ∪ Z).

• Kirchhoff’s cycle law: For any cycle γ = (v_0, . . . , v_n = v_0) with e_j = (v_j, v_{j+1}) ∈ E⃗,
  one has

  ⟨ ∑_{j=0}^{n−1} χ^{e_j} , i ⟩_r = 0.

2.2 Thomson’s principle

Lemma 2.9 Let i : E⃗ → R be an antisymmetric function satisfying Kirchhoff ’s cycle
law. Suppose that i satisfies Kirchhoff ’s node law on some set W ⊆ V :

∑_{v∈V: {v,w}∈E} i(w, v) = 0   for all w ∈ W.                      (2.4)

Then, there exists a function U : V → R which is harmonic on W such that i is the
current associated to U. The voltage U is unique up to an additive constant.
Proof: see exercises.
Recall that the underlying graph is finite. Let span{θ_1, . . . , θ_m} denote the linear
subspace of ℓ²₋(E) generated by the functions θ_1, . . . , θ_m ∈ ℓ²₋(E). We define

• the star space

  ⋆ = span{ ∑_{e∈E⃗: ě=v} c_e χ^e : v ∈ V }   and

• the cycle space

  ⃝ = span{ ∑_{j=0}^{n−1} χ^{e_j} : e_j = (v_j, v_{j+1}) ∈ E⃗, (v_0, . . . , v_n = v_0) is a cycle }.

Remark 2.10 (a) θ ∈ ⃝^⊥ ⇔ θ satisfies Kirchhoff ’s cycle law.

(b) θ ∈ ⋆^⊥ ⇔ θ satisfies Kirchhoff ’s node law.

Proof.
(a) θ ∈ ⃝^⊥ ⇔ for any cycle (v_0, . . . , v_n = v_0) with e_j = (v_j, v_{j+1}) one has

0 = ⟨ ∑_{j=0}^{n−1} χ^{e_j} , θ ⟩_r .

By Remark 2.8, this is Kirchhoff’s cycle law.

(b) θ ∈ ⋆^⊥ ⇔ for all v ∈ V one has

0 = ⟨ ∑_{e∈E⃗: ě=v} c_e χ^e , θ ⟩_r .

By Remark 2.8, this is Kirchhoff’s node law.

Lemma 2.11 ⋆ and ⃝ are orthogonal with respect to ⟨·, ·⟩_r and we can write ℓ²₋(E) as a
direct sum:

ℓ²₋(E) = ⋆ ⊕ ⃝.

Proof. Orthogonality. We show that the generating sets of ⋆ and ⃝ are orthogonal. Let
v ∈ V and let (v_0, . . . , v_n = v_0) be a cycle with e_j = (v_j, v_{j+1}) ∈ E⃗ for all j. Consider
θ_1 = ∑_{e∈E⃗: ě=v} c_e χ^e and θ_2 = ∑_{j=0}^{n−1} χ^{e_j}. Clearly,

⟨θ_1, θ_2⟩_r = ∑_{e∈E⃗: ě=v} ∑_{j=0}^{n−1} c_e ⟨χ^e, χ^{e_j}⟩_r .

For e, e_j ∈ E⃗, one has by (2.2)

c_e ⟨χ^e, χ^{e_j}⟩_r = c_e χ^{e_j}(e) r_e = χ^{e_j}(e) =  1 if e_j = e,   −1 if e_j = −e,   0 else.

If e_j is not incident to v, then c_e⟨χ^e, χ^{e_j}⟩_r = 0 for all e with ě = v. The edges of the
cycle incident to v come in pairs of the form e_j = (u, v), e_{j+1} = (v, w). In this case, we obtain

∑_{e∈E⃗: ě=v} c_e ⟨χ^e, χ^{e_j}⟩_r = −1,   ∑_{e∈E⃗: ě=v} c_e ⟨χ^e, χ^{e_{j+1}}⟩_r = 1,

so both contributions cancel. This shows that ⟨θ_1, θ_2⟩_r = 0.


Direct sum. To show that ℓ²₋(E) is the direct sum of ⋆ and ⃝, we need to show that

⋆^⊥ ∩ ⃝^⊥ = {0}.                                                  (2.5)

Let θ ∈ ⋆^⊥ ∩ ⃝^⊥. Since θ ∈ ⃝^⊥, θ satisfies Kirchhoff’s cycle law by Remark 2.10. By
Lemma 2.9 with W = ∅, there exists a function F : V → R such that

θ(e) = c_e dF(e)

for all e ∈ E⃗. Since θ ∈ ⋆^⊥, θ satisfies Kirchhoff’s node law by Remark 2.10:

d*θ(v) = 0 for all v ∈ V.

Consequently, for v ∈ V , one has

0 = d*(c dF)(v) = ∑_{e∈E⃗: ě=v} c_e dF(e) = ∑_{u∈V} c_{u,v}(F(v) − F(u)) = c_v F(v) − ∑_{u∈V} c_{u,v} F(u).

Hence,

F(v) = ∑_{u∈V} (c_{u,v}/c_v) F(u)

and F is harmonic. Since the graph G is finite, F attains its maximum on V . The graph
G is connected. So F is constant by the maximum principle. Hence, θ = c dF = 0. This
shows that (2.5) holds and thus finishes the proof.
Observe that

• dim ⋆ = |V | − 1 because ∑_{v∈V} ∑_{e∈E⃗: ě=v} c_e χ^e = 0 and this is the only dependence;

• dim ⃝ = |E| − |V | + 1;

• dim ℓ²₋(E) = |E| = dim ⋆ + dim ⃝.

Let
P⋆ : ℓ2− (E) → ⋆
be the orthogonal projection to ⋆. In particular, P⋆ ◦ P⋆ = P⋆ and for all θ ∈ ℓ2− (E) one
has
P⋆ θ ∈ ⋆, θ − P⋆ θ ∈ ⋆⊥ .

Theorem 2.12 (Thomson’s principle) Let G be a finite graph, let A, Z ⊆ V with
A ∩ Z = ∅, and let θ : E⃗ → R be a flow from A to Z. Then there is a unique current
i : E⃗ → R with d*i = d*θ, and one has

E(θ) > E(i) unless θ = i.

Proof. Existence: Given θ, define i = P⋆θ. Then, i ∈ ⋆ = ⃝^⊥. By Remark 2.10, i
satisfies Kirchhoff’s cycle law.
Since θ − i ∈ ⋆^⊥, θ − i satisfies Kirchhoff’s node law by Remark 2.10: for all v ∈ V

0 = d*(θ − i)(v) = d*θ(v) − d*i(v)   ⇒   d*i(v) = d*θ(v).

In particular, d*i(v) = d*θ(v) = 0 for all v ∈ V \ (A ∪ Z), since θ is a flow. By Lemma 2.9,
i must be a current.
Since i ∈ ⋆ and θ − i ∈ ⋆^⊥ are orthogonal, we obtain

E(θ) = ∥θ∥2r = ∥i + (θ − i)∥2r = ∥i∥2r + ∥θ − i∥2r > ∥i∥2r = E(i),

where the strict inequality holds for all θ ̸= i.


Uniqueness: Note that d∗ i = d∗ θ (⇔ d∗ (θ − i) = 0) is equivalent to θ − i ∈ ⋆⊥ = ⃝.
By Lemma 2.11, there is a unique decomposition θ = i + (θ − i) with i ∈ ⋆ and θ − i ∈ ⃝.
As was argued above, i must be a current.

2.3 Monotonicity
The following theorem states a nice monotonicity property. If one increases the conduc-
tance of some edges, then the effective conductance can only increase. Increasing the
conductance of some edges is the same as decreasing the resistance of these edges. The
effect is a decrease in the effective resistance.

Theorem 2.13 (Rayleigh’s monotonicity principle) Let G be a graph with two col-
lections of weights c = (ce )e∈E and c′ = (c′e )e∈E satisfying

ce ≤ c′e ∀e ∈ E.

(a) Suppose G is finite. Then, for A, Z ⊆ V with A ∩ Z = ∅, one has

Cc (A ↔ Z) ≤ Cc′ (A ↔ Z) or, equivalently Rc (A ↔ Z) ≥ Rc′ (A ↔ Z).

(b) Suppose G is infinite. Then, for all a ∈ V , one has

Cc (a ↔ ∞) ≤ Cc′ (a ↔ ∞) or, equivalently Rc (a ↔ ∞) ≥ Rc′ (a ↔ ∞).

In particular, if the random walk on (G, c) is transient, then the random walk on
(G, c′ ) is transient as well. If the random walk on (G, c′ ) is recurrent, then the
random walk on (G, c) is recurrent as well.

Proof.
(a) By Lemma 2.6, on the graph with conductances c there exists a unit flow i_c : E⃗ → R
from A to Z induced by a voltage U which is constant on A and Z such that

R_c(A ↔ Z) = E_{c^{−1}}(i_c).

Clearly,

E_{c^{−1}}(i_c) = ∑_{e∈E} i_c(e)²/c_e ≥ ∑_{e∈E} i_c(e)²/c′_e = E_{(c′)^{−1}}(i_c).        (2.6)

Let i_{c′} : E⃗ → R be a unit flow from A to Z on the graph with conductances c′
induced by a voltage U′ which is constant on A and Z such that

R_{c′}(A ↔ Z) = E_{(c′)^{−1}}(i_{c′}).

By construction, ic and ic′ satisfy Kirchhoff’s node law. Thus,

d∗ ic = 0 = d∗ ic′ on V \ (A ∪ Z).

All vertices in A are at the same voltage. Hence, we can identify them with one
vertex a. The same way, we can identify all vertices in Z to one vertex z. Since ic
and ic′ are unit flows,
d∗ ic (a) = 1 = d∗ ic′ (a).
By flow conservation (Lemma 2.4),

d∗ ic (z) = −d∗ ic (a) = −1

and the same holds for c′ . Thus, d∗ ic (z) = d∗ ic′ (z). Consequently,

d∗ ic = d∗ ic′ .

By Thomson’s principle,

E_{(c′)^{−1}}(i_c) ≥ E_{(c′)^{−1}}(i_{c′}).                         (2.7)

Combining (2.6) and (2.7), we obtain

R_c(A ↔ Z) = E_{c^{−1}}(i_c) ≥ E_{(c′)^{−1}}(i_{c′}) = R_{c′}(A ↔ Z).

(b) Exhaust G with finite graphs G_n = (V_n, E_n) with V_n ↑ V and set Z_n = V \ V_n. Let
G^W_n denote the associated graph with all vertices in Z_n identified to one vertex z_n.
By definition,

C_c(a ↔ ∞) = lim_{n→∞} C_c(a ↔ z_n, G^W_n).

Applying part (a) to the finite graphs G^W_n yields

C_c(a ↔ ∞) ≤ C_{c′}(a ↔ ∞).

The claim about the random walk follows from Theorem 1.29.

Corollary 2.14 (a) Removing an edge not incident to a decreases P(a → Z).

(b) Contracting an edge with endpoints in V \ (A ∪ Z), i.e. identifying its endpoints and
removing the resulting loop, increases the effective conductance between A and Z.

Proof.

(a) Removing the edge e is the same as setting c_e = 0. Thus, the claim follows from
Rayleigh’s monotonicity principle. Recall that

P(a → Z) = C(a ↔ Z)/c_a

and c_a does not change.

(b) Let G̃ = (Ṽ, Ẽ) with Ṽ = V \ {u}, Ẽ = E \ {ẽ} be the graph obtained from G by
contracting ẽ = {u, v} to v. Let i be the unit current flow from A to Z on G and let ĩ
denote its restriction to G̃. Then,

d*ĩ(v) = d*i(v) − i(v, u) + d*i(u) − i(u, v) = d*i(v) + d*i(u) = 0

and hence d*ĩ(w) = d*i(w) for all w ∈ V \ {u}. Consequently, ĩ is a unit flow from
A to Z. Hence, by Thomson’s principle, we have

R(A ↔ Z; G̃) ≤ E^{G̃}_{c^{−1}}(ĩ) = ∑_{e∈Ẽ} ĩ(e)²/c_e = ∑_{e∈E\{ẽ}} i(e)²/c_e ≤ E^G_{c^{−1}}(i) = R(A ↔ Z; G).

3 Recurrence and transience
3.1 Nash-Williams inequality
Definition 3.1 Let A, Z ⊆ V be disjoint. A set of edges F ⊆ E is said to separate A
and Z if any path with starting point in A and endpoint in Z must contain an edge in F .
The set F is called a cutset.

Theorem 3.2 (Nash-Williams inequality) Let G = (V, E) be a finite graph. If a, z ∈ V
with a ≠ z are separated by pairwise disjoint cutsets F_1, . . . , F_n, then

R(a ↔ z) ≥ ∑_{k=1}^{n} 1 / ( ∑_{e∈F_k} c_e ).

Proof. Let i : E⃗ → R be a unit current flow from a to z. By Lemma 2.6,

E(i) = R(a ↔ z).

Let F be a cutset separating a and z. Let

Z = {v ∈ V : v is an endpoint of an edge in F and v is separated from a by F},
K = {v ∈ V : v is not separated from a by F}.

Let G′ = (V′, E′) denote the subgraph of G with V′ = K ∪ Z. Then, i induces a unit flow
i′ from a to Z. By flow conservation (Lemma 2.4),

1 = d*i′(a) = − ∑_{z∈Z} d*i′(z) = − ∑_{z∈Z} ∑_{e∈E⃗′: ě=z} i(e).

If both endpoints of e are contained in Z, then both i(e) and i(−e) = −i(e) occur in the
last sum. Any e ∈ E′ with precisely one endpoint in Z must be in F . Hence, in the last
sum it suffices to sum over e ∈ F⃗ with precisely one endpoint in Z. Thus,

1 = − ∑_{z∈Z} ∑_{v∈V: {z,v}∈F, |{z,v}∩Z|=1} i(z, v)   ⇒   1 ≤ ∑_{e∈F} |i(e)|.

Using the Cauchy-Schwarz inequality, we obtain

1 ≤ ( ∑_{e∈F} √(c_e r_e) |i(e)| )² ≤ ( ∑_{e∈F} c_e ) ( ∑_{e∈F} r_e i(e)² ).

Hence,

∑_{e∈F} r_e i(e)² ≥ 1 / ( ∑_{e∈F} c_e ).

We have this bound for any F_k, 1 ≤ k ≤ n. Thus,

E(i) ≥ ∑_{k=1}^{n} ∑_{e∈F_k} r_e i(e)² ≥ ∑_{k=1}^{n} 1 / ( ∑_{e∈F_k} c_e ).

Definition 3.3 A set of edges F ⊆ E is said to separate a ∈ V and ∞ if any infinite
self-avoiding path with starting point a must contain an edge in F . The set F is called a
cutset.

Theorem 3.4 (Nash-Williams criterion) Let G = (V, E) be an infinite graph. Let
F_n, n ∈ N, be pairwise disjoint finite cutsets such that each F_n separates a from ∞.
Then,

R(a ↔ ∞) ≥ ∑_{n=1}^{∞} 1 / ( ∑_{e∈F_n} c_e ).

Proof. We approximate G by finite subgraphs G_n = (V_n, E_n), n ∈ N, such that V_n ↑ V
and ∪_{k=1}^{n} F_k ⊆ E_n. Identifying all vertices in V \ V_n with z_n, we know that

R(a ↔ ∞) = lim_{n→∞} R(a ↔ z_n).

The claim follows from the Nash-Williams inequality.
Suppose we have a partition V = ∪_{n=1}^{∞} W_n of the vertex set and cutsets with the
following properties:

• W_1 = {a},

• for each n ∈ N, the set F_n separates W_n from W_{n+1}.

For each n, identify all vertices in W_n with one vertex v_n. Then, we obtain a graph with
vertex set {v_n : n ∈ N}. The edge set consists of the parallel edges in F_n between v_n and
v_{n+1}. These parallel edges can be replaced by one edge with conductance ∑_{e∈F_n} c_e and
hence with resistance

1 / ( ∑_{e∈F_n} c_e ).

The whole graph then has effective resistance

∑_{n=1}^{∞} 1 / ( ∑_{e∈F_n} c_e ),

which is exactly the lower bound given in the Nash-Williams criterion.


Here is a nice application of the Nash-Williams criterion:

Theorem 3.5 (Polya’s theorem) For d ∈ {1, 2}, simple random walk on Zd is recur-
rent.

Proof. For n ≥ 1, let

F_n = {e = {u, v} ∈ E : ∥u∥_∞ = n, ∥v∥_∞ = n + 1}.

Then each F_n separates the origin from ∞. By the Nash-Williams criterion,

R(0 ↔ ∞) ≥ ∑_{n=1}^{∞} 1 / ( ∑_{e∈F_n} c_e ) = ∑_{n=1}^{∞} 1/|F_n| .

In dimension d = 1, we have |F_n| = 2 for all n and thus R(0 ↔ ∞) = ∞. For d = 2, we
find |F_n| = 4(2n + 1) and hence

R(0 ↔ ∞) ≥ ∑_{n=1}^{∞} 1/(4(2n + 1)) = ∞.
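The count |F_n| = 4(2n + 1) used above for d = 2 can be confirmed by brute-force enumeration; the helper below is an illustrative sketch, not part of the notes:

```python
def boundary_edge_count(n):
    """Number of edges {u, v} of Z^2 with ||u||_inf = n and ||v||_inf = n + 1."""
    count = 0
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            if max(abs(x), abs(y)) != n:
                continue  # u must lie on the inner sphere of radius n
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if max(abs(x + dx), abs(y + dy)) == n + 1:
                    count += 1
    return count
```

For n = 1, 2, 3 this returns 12, 20, 28, matching 4(2n + 1): each side contributes 2n + 1 outward edges, so the reciprocals 1/|F_n| sum like a harmonic series and diverge.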

3.2 Energy and transience


Theorem 3.6 Let G be a connected graph with a countable vertex set. The following are
equivalent:
(a) Random walk on G is transient.

(b) There is a unit flow on G of finite energy from every vertex to ∞.

(c) There is a unit flow on G of finite energy from some vertex to ∞.

Proof. “(b)⇒(c)”: trivial.
“(c)⇒(a)”: Let G_n = (V_n, E_n), n ∈ N, be finite subgraphs of G exhausting G in the sense
that V_n ↑ V . Let G^W_n = (V_n ∪ {z_n}, E^W_n) be the graph obtained from G by identifying all
vertices in V \ V_n to a single vertex z_n and then removing all loops and keeping multiple
edges. Fix a vertex a ∈ V . By definition,

R(a ↔ ∞) = lim_{n→∞} R(a ↔ z_n).

Assume n is large enough so that a ∈ V_n. Let i_n be the unit current flow in G^W_n from a to
z_n, and let U_n be the corresponding voltage. Then, by Lemma 2.6,

E(i_n) = R(a ↔ z_n).

Let θ be a unit flow in G from a to ∞ with E(θ) < ∞. Every edge in the wired graph
G^W_n can be identified with a unique edge in G. Using this identification, the restriction of
θ to G^W_n is a unit flow on G^W_n from a to z_n. Hence, Thomson’s principle implies

E(i_n) ≤ E(θ|_{G^W_n}) = (1/2) ∑_{e∈E⃗^W_n} θ(e)² r_e ≤ (1/2) ∑_{e∈E⃗} θ(e)² r_e = E(θ) < ∞.

We conclude
R(a ↔ ∞) = lim E(in ) ≤ E(θ) < ∞.
n→∞

Hence, the random walk is transient.


“(a)⇒(b)”: Suppose the random walk is transient. Then,

lim_{n→∞} E(i_n) = R(a ↔ ∞) < ∞.

Hence, there exists M > 0 such that

E(i_n) ≤ M for all n.

For v ∈ V , let

Y_n(v) = ∑_{k=0}^{∞} 1_{X_k=v} 1_{k<τ_{V\V_n}} = number of visits to v before hitting a point outside V_n,

Y(v) = ∑_{k=0}^{∞} 1_{X_k=v} = number of visits to v.

By the definition of the Green’s function and Theorem 1.25,

E_a[Y_n(v)] = G_{V\V_n}(a, v) = c_v U_n(v).

By monotone convergence,

E_a[Y(v)] = lim_{n→∞} E_a[Y_n(v)] = c_v lim_{n→∞} U_n(v).

We define

U(v) = lim_{n→∞} U_n(v).

Since the random walk is transient, E_a[Y(v)] < ∞. Hence,

U(v) < ∞.

We define

i = c dU = lim_{n→∞} c dU_n = lim_{n→∞} i_n.

Then, i is a unit flow from a to ∞ with

E(i) = E( lim_{n→∞} i_n ) ≤ lim_{n→∞} E(i_n) ≤ M.

Here we used Fatou’s lemma.


A simple path is a path which traverses every edge e ∈ E at most once. Let

Γ = {γ = (v_n)_{n∈N_0} : γ is a simple path starting at v_0 = 0}.

We identify a path γ ∈ Γ with its edge sequence e_n = (v_n, v_{n+1}), n ∈ N_0.

Lemma 3.7 For any probability measure µ on Γ,

θ(e) = ∑_{n=0}^{∞} ( µ(e_n = e) − µ(e_n = −e) ),   e ∈ E⃗,

is a unit flow from 0 to ∞.

Proof. Given γ ∈ Γ, we define for e ∈ E⃗

ψ(e) = ∑_{n=0}^{∞} χ^{e_n}(e) = ∑_{n=0}^{∞} ( 1_{e_n=e} − 1_{e_n=−e} ).

Since γ is a simple path, every edge e ∈ E⃗ is traversed at most once. Hence, |ψ(e)| ≤ 1.
In particular, ψ and θ are well-defined.
ψ is a unit flow from 0 to ∞. Since θ(e) = ∫ ψ(e) dµ, the claim follows.

Theorem 3.8 (Polya’s theorem) For d ≥ 3, simple random walk on Z^d is transient.

Proof. Since Z³ is a subgraph of Z^d for d ≥ 4, Rayleigh’s monotonicity principle yields

C(0 ↔ ∞, Z³) ≤ C(0 ↔ ∞, Z^d).

Since transience is equivalent to having a strictly positive effective conductance, it suffices
to prove transience for d = 3.
For x ∈ S² = {y ∈ R³ : ∥y∥₂ = 1}, let l(x) denote the half-line from 0 to infinity
through x. Let γ(x) ∈ Γ be a simple path from 0 to ∞ that stays within distance 4 of
l(x). Choose γ(x) measurably. Let X be uniformly distributed on S². Then, writing
(e^X_n)_{n∈N_0} for the random edge sequence of γ(X), we obtain from Lemma 3.7 the flow
θ from 0 to ∞,

θ(e) = ∑_{n=0}^{∞} ( P(e^X_n = e) − P(e^X_n = −e) ),   e ∈ E⃗.

Since γ(X) is a simple path,

θ(e) = P( ∪_{n=0}^{∞} {e^X_n = e} ) − P( ∪_{n=0}^{∞} {e^X_n = −e} ) = P(e ∈ γ(X)) − P(−e ∈ γ(X)).

Let e = (u, v) ∈ E⃗. Let m_e = (1/2)(u + v) denote the midpoint of the edge e, and let
B(x, r) = {y ∈ R³ : ∥x − y∥ < r} denote the ball of radius r around x; here ∥·∥ denotes
the Euclidean norm. Note that

P(e ∈ γ(X)) ≤ P(γ(X) passes through B(m_e, c₁))

for some constant c₁ > 0. Instead of picking X uniformly at random from S² we could
pick a point X′ uniformly from ∂B(0, ∥m_e∥) and define l(X) as the half-line from 0 to ∞
via X′. Hence, for some constant c₂ > 0 one has

P(e ∈ γ(X)) ≤ c₂/∥m_e∥².

Consequently,

|θ(e)| ≤ c₂/∥m_e∥².
The number of edges with ∥m_e∥ ∈ [n, n + 1) is bounded by c₃n². Hence, we can estimate
the energy of θ as follows:

2E(θ) = ∑_{e∈E⃗} θ(e)² ≤ ∑_{e∈E⃗} c₂²/∥m_e∥⁴ = ∑_{e∈E⃗: ∥m_e∥∈(0,1)} c₂²/∥m_e∥⁴ + ∑_{n=1}^{∞} ∑_{e∈E⃗: ∥m_e∥∈[n,n+1)} c₂²/∥m_e∥⁴
     ≤ c₄ + ∑_{n=1}^{∞} c₂²c₃n²/n⁴ < ∞

with a constant c₄. Thus, θ is a flow of finite energy from 0 to ∞. By Theorem 3.6, the
random walk is transient.

4 Percolation
The motivation to study percolation comes from porous materials. For instance, one
would like to find a model for porous materials and use it to understand whether water
can go through a piece of porous material.

4.1 The model


Let G = (V, E) be a locally finite graph. In this chapter, we assume that G has no self-
loops. In other words, we assume that {v} ̸∈ E for all v ∈ V . An important example of
a graph is

Ld = (Zd , E) with E = {{u, v} : u, v ∈ Zd , ∥u − v∥2 = 1}, d ∈ N.

In percolation theory, vertices are also called sites and edges are also called bonds.
Let p ∈ [0, 1]. Let Xep , e ∈ E, be i.i.d. Bernoulli random variables with

P(Xep = 1) = p and P(Xep = 0) = 1 − p.

We call an edge e ∈ E open if Xep = 1 and closed if Xep = 0.

Definition 4.1 Let Op = {e ∈ E : Xep = 1} be the set of open edges. We call (G, Op ) a
model of bond percolation.

Bond percolation models a porous material. You can think of a system of tubes. Let
e = {u, v}. If Xep = 1, the tube between u and v is open and hence it lets water flow
through it. On the other hand, if Xep = 0, the tube between u and v is blocked and no
water can flow between u and v.

Remark 4.2 We construct bond percolation on the following probability space:

• Ω = {0, 1}E

• F = product σ-algebra on Ω = smallest σ-algebra generated by finite cylinders

• Pp = product measure

• Xep : Ω → {0, 1} projection to the component indexed by the edge e.

Definition 4.3 (a) A path γ = (v0 , . . . , vn ) is called open if {vi , vi+1 } ∈ Op for all i.

(b) We say that u and v are connected by an open path and write u ↔p v if there exists
an open path from u to v.

(c) For v ∈ V we call


C p (v) = {u ∈ V : u ↔p v}
the open cluster of v.

Note that ↔p is an equivalence relation. The equivalence classes are the connected
components of Op . They are called open clusters.
In the following, we assume the graph G to be infinite, i.e. |V | = ∞. A central question
is the following:

For which values p ∈ [0, 1] does an infinite open cluster exist?

Of course, the cases p ∈ {0, 1} are trivial:

p=0 ⇒ Op = ∅ a.s.
p=1 ⇒ Op = E a.s.

Definition 4.4 We say that percolation occurs if there exists an infinite open cluster.

Lemma 4.5 For all v ∈ V , |C p (v)| is a random variable.

Proof. We write

|C^p(v)| = ∑_{w∈V} 1_{v↔_p w} .

Hence, it suffices to show that 1{v↔p w} is a random variable. Let

An = {there exists an open path from v to w of length ≤ n}.

Then,
1An ↑ 1{v↔p w} .
Thus, it suffices to show that 1An is a random variable. Let

B(n) = {u ∈ V : there exists a path from v to u of length ≤ n}

be the ball of radius n around the vertex v in G. Let

En = {e ∈ E : e ∩ B(n) ̸= ∅}.

Then, 1An depends only on Yn = (Xep )e∈En . More precisely, there exists a function g :
{0, 1}En → {0, 1} such that 1An = g(Yn ). In particular, this shows that 1An is a random
variable.

Definition 4.6 The percolation probability is defined by

ψ(p) = P_p(there exists an infinite open cluster) = P_p( ∪_{v∈V} {|C^p(v)| = ∞} ).

Lemma 4.7 ψ(p) ∈ {0, 1}.

Proof. Whether an infinite open cluster exists does not depend on the state of finitely
many edges. Therefore, the event {there exists an infinite open cluster} is contained in
the tail σ-algebra T generated by the random variables Xep , e ∈ E. (See probability
theory for details.) Since the Xep , e ∈ E, are independent, Kolmogorov’s 0-1-law implies
the claim.

Definition 4.8
θ(p) := Pp (|C p (0)| = ∞)
is the probability that 0 belongs to an infinite open cluster.

4.2 Percolation on regular trees


A cycle is a path with the same starting and endpoint. A connected graph which does
not contain any edge-self-avoiding cycle is called a tree. A rooted tree is a tree with one
distinguished vertex called the root.
For k ∈ N, the regular tree Tk with root 0 is a tree with degree(0) = k and

degree(v) = k + 1 for all v ̸= 0.

T1 is a half-line. T2 is called binary tree.

Theorem 4.9 For bond percolation on the regular tree T_k, one has

inf{p ∈ [0, 1] : θ(p) > 0} = 1/k .

Proof. Note that C^p(0) is the ancestral tree of a Galton-Watson branching process,
where every individual has, independently of all others, i children with probability

ρ_i = (k choose i) p^i (1 − p)^{k−i}   for i ∈ {0, 1, . . . , k}

and ρ_i = 0 otherwise. This is a binomial distribution with parameters k and p.
Let µ denote the expected number of children of one individual. Then, with Y ∼
Binomial(k, p) we have

µ = E[Y] = kp.

In Probability Theory we have learned the following criterion for survival of a Galton-
Watson branching process:

• µ = kp ≤ 1 ⇒ the Galton-Watson process dies out a.s.

• µ = kp > 1 ⇒ the Galton-Watson process has a strictly positive survival probability.

For bond percolation this yields

p ≤ 1/k ⇒ θ(p) = 0,
p > 1/k ⇒ θ(p) > 0.

This proves the claim.
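The dichotomy in the proof can be made quantitative. The extinction probability q of the Galton-Watson process is the smallest fixed point of the offspring generating function f(s) = (1 − p + ps)^k, and iterating f from 0 converges to it; the sketch below (with an ad hoc iteration count) computes θ(p) = 1 − q on T_k:

```python
def theta_tree(k, p, iters=5000):
    """theta(p) = P(root lies in an infinite open cluster) on the tree T_k.

    q solves q = f(q) with f(s) = (1 - p + p*s)**k, the p.g.f. of the
    Binomial(k, p) offspring law; iterating f from 0 gives the smallest root.
    """
    q = 0.0
    for _ in range(iters):
        q = (1.0 - p + p * q) ** k
    return 1.0 - q
```

For k = 2, solving q = (0.4 + 0.6q)² by hand at p = 0.6 gives q = 4/9, i.e. θ(0.6) = 5/9, while θ(p) vanishes for every p ≤ 1/2 = 1/k.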

4.3 Monotonicity
Question: How do θ(p) and ψ(p) behave as functions of p?
The following monotonicity is intuitively clear. It is a useful tool in percolation theory.

Theorem 4.10 The map θ : [0, 1] → [0, 1], p ↦ θ(p), is non-decreasing, i.e.

p < p′ ⇒ θ(p) ≤ θ(p′).

Proof. The proof uses a coupling: We construct bond percolation for all parameters
p ∈ [0, 1] simultaneously on the same probability space. For this purpose, let Ue , e ∈ E,
be i.i.d. random variables on a probability space (Ω, F, P) with Ue ∼ Uniform(0, 1). We
define
Xep : Ω → {0, 1}, Xep = 1{Ue ≤p} .
This is the desired coupling: For each p ∈ [0, 1], the random variables Xep , e ∈ E, are
independent because Xep is a function of Ue and the latter are independent. Furthermore,

P(Xep = 1) = P(Ue ≤ p) = p, P(Xep = 0) = P(Ue > p) = 1 − p.

Consider p′ ∈ [0, 1] with p < p′. Then {U_e ≤ p} ⊆ {U_e ≤ p′} and consequently

X^p_e = 1_{U_e ≤ p} ≤ 1_{U_e ≤ p′} = X^{p′}_e .

Consequently, C^{p′}(0) contains all edges of C^p(0): C^p(0) ⊆ C^{p′}(0). Hence,

θ(p) = P(|C^p(0)| = ∞) ≤ P(|C^{p′}(0)| = ∞) = θ(p′).
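The coupling in the proof is easy to simulate: draw one uniform variable per edge of a finite box and reuse it for every p; on each realization the open clusters of the origin are then nested in p. Box size, seed, and the parameter values below are arbitrary illustrative choices:

```python
import random
from collections import deque

def coupled_clusters(n=15, ps=(0.3, 0.5, 0.7), seed=1):
    """Open cluster of (0, 0) in an n x n box of Z^2, for several p at once."""
    rng = random.Random(seed)
    U = {}                          # one uniform per edge: the coupling
    for x in range(n):
        for y in range(n):
            if x + 1 < n:
                U[((x, y), (x + 1, y))] = rng.random()
            if y + 1 < n:
                U[((x, y), (x, y + 1))] = rng.random()

    def cluster(p):
        adj = {}
        for (u, v), ue in U.items():
            if ue <= p:             # edge open at level p iff U_e <= p
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, queue = {(0, 0)}, deque([(0, 0)])
        while queue:               # BFS along open edges
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    return [cluster(p) for p in ps]
```

Since {U_e ≤ p} ⊆ {U_e ≤ p′} for p < p′, the returned clusters are nested for every seed — exactly the pointwise monotonicity used in the proof.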

We now write P instead of P_p if no confusion arises. In the following, we consider
bond percolation on Z^d, d ≥ 1. In this case the model is translationally invariant and
consequently,

θ(p) = P(|C^p(0)| = ∞) = P(|C^p(v)| = ∞) for all v ∈ Z^d.

Theorem 4.11 For all p ∈ [0, 1] one has

ψ(p) = 0 if θ(p) = 0,   and   ψ(p) = 1 if θ(p) > 0.

Proof. Assume θ(p) = 0. Then, by σ-subadditivity, we obtain

ψ(p) = P( ∪_{v∈V} {|C^p(v)| = ∞} ) ≤ ∑_{v∈V} P(|C^p(v)| = ∞).

Since the percolation process on Z^d is translationally invariant, we obtain

P(|C^p(v)| = ∞) = P(|C^p(0)| = ∞) = θ(p) = 0

for all v ∈ V . Thus, we conclude ψ(p) = 0.
Assume θ(p) > 0. Then, we know

ψ(p) = P( ∪_{v∈V} {|C^p(v)| = ∞} ) ≥ P(|C^p(0)| = ∞) = θ(p) > 0.

By Lemma 4.7, ψ(p) ∈ {0, 1} and we conclude ψ(p) = 1.


Due to the monotonicity of the function θ, the following definition makes sense:

Definition 4.12 We define the critical value for percolation by

p_c = inf{p ∈ [0, 1] : θ(p) > 0} = sup{p ∈ [0, 1] : θ(p) = 0}
    = inf{p ∈ [0, 1] : ψ(p) = 1} = sup{p ∈ [0, 1] : ψ(p) = 0}.

Lemma 4.13 For the regular tree T_k, one has p_c = 1/k.

4.4 Bounds for the critical probability
Theorem 4.14 For Z¹ one has p_c = 1.

Proof. Let p < 1 and consider the events

A_n = {X^p_{{n,n+1}} = 0 = X^p_{{−n−1,−n}}},   n ∈ N_0.

Since the events A_n, n ∈ N_0, depend on disjoint edge sets, they are independent. Fur-
thermore,

∑_{n=0}^{∞} P(A_n) = ∑_{n=0}^{∞} (1 − p)² = ∞.

Hence, the second Borel-Cantelli lemma implies

P(A_n for infinitely many n ∈ N_0) = 1.

On the event {A_n for infinitely many n ∈ N_0}, the open cluster containing 0 is finite.
Hence,

θ(p) = 0 for all p < 1.

We conclude p_c = 1.
Alternatively, one can split Z into the two half-lines N_0 and −N and use the result p_c = 1
for the tree T_1.
In dimensions d ≥ 2, the situation is much more complicated.

Theorem 4.15 For Z^d with d ≥ 2, one has

p_c(d) ∈ [ 1/(2d − 1), 2/3 ].

Proof. Lower bound. For all n ∈ N, one has

θ(p) = P(|C^p(0)| = ∞) ≤ P(∃x ∈ C^p(0) with ∥x∥_∞ = n).

For m ∈ N, let

Γ_m = {γ self-avoiding path of length m starting in 0}.

Then, for all n ∈ N, one has

{∃x ∈ C^p(0) with ∥x∥_∞ = n}
⊆ {∃x ∈ Z^d with ∥x∥_∞ = n and ∃ open path from 0 to x in Γ_m for some m ≥ n}
⊆ {∃ open path in Γ_m for some m ≥ n}
= ∪_{m=n}^{∞} ∪_{γ∈Γ_m} {γ open}.

Consequently,

θ(p) ≤ ∑_{m=n}^{∞} ∑_{γ∈Γ_m} P(γ open).

Since P(γ open) = p^m for all γ ∈ Γ_m, we conclude

θ(p) ≤ ∑_{m=n}^{∞} |Γ_m| p^m .

To derive an upper bound for |Γ_m|, note that all paths γ ∈ Γ_m start in 0. For the first
step, there are 2d possibilities. Since γ is self-avoiding, it can never immediately reverse
its previous step. Hence, for every step (except the first) there are at most 2d − 1
possibilities. Hence,

|Γ_m| ≤ 2d(2d − 1)^{m−1} .

Inserting this yields

θ(p) ≤ ∑_{m=n}^{∞} 2d(2d − 1)^{m−1} p^m = (2d/(2d − 1)) ∑_{m=n}^{∞} ((2d − 1)p)^m .

Assume p < 1/(2d − 1). Then, the series converges and we conclude

θ(p) ≤ ( 2d / ((2d − 1)(1 − (2d − 1)p)) ) ((2d − 1)p)^n → 0 as n → ∞.

Hence, we have shown that

θ(p) = 0 for all p ∈ [0, 1/(2d − 1)).

This proves the lower bound

p_c(d) ≥ 1/(2d − 1).
Upper bound. We can consider Z^d as a subset of Z^d × {0} ⊆ Z^{d+1}. Hence, if θ(p) > 0 for
Z^d, then θ(p) > 0 for Z^{d+1}. Since p_c = inf{p ∈ [0, 1] : θ(p) > 0}, we obtain

p_c(d + 1) ≤ p_c(d).

Thus, it suffices to show

p_c(2) ≤ 2/3.

This is proved using a contour argument due to Peierls. Consider bond percolation on
Z². For N ∈ N, set

C_N = ∪_{i=0}^{N} C^p((i, 0)).

C_N is the set of points which are connected to {0, . . . , N} × {0} by an open path. By
translational invariance, we have

P(|C^p((i, 0))| = ∞) = P(|C^p((0, 0))| = ∞) = θ(p)

for all i. Hence,

θ(p) = (1/(N + 1)) ∑_{i=0}^{N} P(|C^p((i, 0))| = ∞)
     ≥ (1/(N + 1)) P( ∪_{i=0}^{N} {|C^p((i, 0))| = ∞} )   by σ-subadditivity
     = (1/(N + 1)) P(|C_N| = ∞).                            (4.1)

Consider the dual graph G̃ = (Ṽ, Ẽ) associated to G = (Z², E). It is defined by

Ṽ = (1/2, 1/2) + Z²,   Ẽ = {{u, v} : u, v ∈ Ṽ, ∥u − v∥₂ = 1}.

Suppose |C_N| < ∞. We define a circle to be a self-avoiding path in the dual graph with
the same starting and endpoint. For n ≥ 2N, we consider

Γ_n = {γ circle of length n surrounding {0, . . . , N} × {0}}.
Claim 1: |Γ_n| ≤ n3^n.
Consider γ ∈ Γ_n. Since it surrounds {0, . . . , N} × {0}, it must cross the x-axis.
Consider the right-most edge of γ crossing the x-axis. Its upper endpoint has coordinates
(m + 1/2, 1/2). Clearly m ≥ N because {0, . . . , N} × {0} is surrounded. Furthermore, since
γ has length n and the origin is surrounded, we have m ≤ n. Hence, there are at most n
possibilities to pick this point of γ.
The edge {(m + 1/2, 1/2), (m + 1/2, −1/2)} belongs to γ. If we add edges successively to
γ, we have at most 3 possibilities for any other edge. (Recall that we consider Z² and γ is
self-avoiding, so it cannot take the edge it has just crossed.) Hence, we obtain

|Γ_n| ≤ n3^n.
Claim 2: P(|C_N| = ∞) > 0 for all p > 2/3 and all N sufficiently large.
Every edge ẽ in the dual graph intersects a unique edge e ∈ E. We call ẽ open if e is
open. Otherwise we call ẽ closed.
A contour of C_N is a circle surrounding C_N with the property that the surrounded
area is minimal. Since C_N is open, every contour is closed. Hence, we obtain

P(|C_N| < ∞) ≤ P( ∪_{n=2N}^{∞} {∃γ ∈ Γ_n closed} )
             ≤ ∑_{n=2N}^{∞} P(∃γ ∈ Γ_n closed) = ∑_{n=2N}^{∞} |Γ_n|(1 − p)^n
because every circle in Γn has the same probability of being closed. Using Claim 1, we
obtain

P(|C_N| < ∞) ≤ ∑_{n=2N}^{∞} n3^n(1 − p)^n .

For p > 2/3, one has 3(1 − p) < 1 and hence the series converges. Consequently, in this
case, we conclude

lim_{N→∞} P(|C_N| < ∞) = 0.

In particular, for all N large enough, one has

P(|C_N| < ∞) < 1   ⇔   P(|C_N| = ∞) > 0.

Combining this with (4.1) we obtain

θ(p) ≥ (1/(N + 1)) P(|C_N| = ∞) > 0

for all p > 2/3. This shows that

p_c(2) = inf{p ∈ [0, 1] : θ(p) > 0} ≤ 2/3.
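The path-counting bound |Γ_m| ≤ 2d(2d − 1)^{m−1} from the lower-bound argument can be compared with an exact enumeration of self-avoiding paths for d = 2 and small m; the naive recursion below is an illustrative sketch:

```python
def count_saw(m):
    """Number of self-avoiding paths of length m in Z^2 starting at the origin."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def extend(path, remaining):
        if remaining == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in path:      # self-avoidance constraint
                total += extend(path + (nxt,), remaining - 1)
        return total

    return extend(((0, 0),), m)
```

For m = 1, . . . , 5 the exact counts are 4, 12, 36, 100, 284; the bound 4·3^{m−1} is attained for m ≤ 3 and becomes strict from m = 4 on, where the 8 unit-square loops are first excluded.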

In general, the value of p_c is known explicitly only for very few graphs. The following
theorem was proved by Kesten in 1980.

Theorem 4.16 For bond percolation on Z², one has p_c = 1/2 and θ(p_c) = 0.

4.5 Uniqueness of the infinite open cluster


The following theorem was proved by Burton and Keane in 1989; see [BK89].
Theorem 4.17 Consider bond percolation on Zd , d ≥ 2. Let p ∈ [0, 1] be such that
θ(p) = Pp (|C p (0)| = ∞) > 0. Then, one has
Pp (there exists a unique infinite open cluster) = 1.
Note that the event that there exists a unique infinite open cluster is not measurable
with respect to the tail σ-algebra generated by Xep , e ∈ E. Hence, Kolmogorov’s 0-1-law
is not applicable. In fact, we need to prove a different 0-1-law.
Proof of Theorem 4.17. Case p = 0: Then, θ(p) = 0 and there is nothing to show.
Case p = 1: Then, C p (0) = Zd and the claim is true.
Case p ∈ (0, 1): We work on the product space introduced in Remark 4.2. Let
N = number of infinite open clusters.
By Theorem 4.11, θ(p) > 0 implies ψ(p) = Pp (there exists an infinite open cluster) = 1.
Hence,
Pp (N ≥ 1) = 1.
The proof is split into several steps:

• Step 1: Pp (N = k) ∈ {0, 1} for all k ∈ N0 ∪ {∞}.

• Step 2: Pp (N = k) = 0 for all k ∈ N with k ≥ 2

• Step 3: Pp (N = ∞) = 0
Together the three steps imply the claim Pp (N = 1) = 1 of the theorem.

Lemma 4.18 Let A be an event in the product σ-algebra F. Then, for all ε > 0 there
exist a finite set F ⊆ E and an event B ∈ σ(Xep , e ∈ F ) such that

Pp (A∆B) < ε.

Here, A∆B = (A \ B) ∪ (B \ A) denotes the symmetric difference of A and B. Events B


as above are called finite dimensional cylinder sets.

Proof. The statement is a consequence of the following lemma.


Lemma 4.19 Let (F_n) be a filtration and F_∞ = σ(∪_{n=1}^{∞} F_n). Assume A ∈ F_∞. Then,
there is a sequence of events (A_n) with A_n ∈ F_n for all n such that I_{A_n} converges almost
surely and in L¹ to I_A. In particular, we have P(A∆A_n) → 0 for n → ∞. One such
sequence of events is given by

A_n := {E[I_A | F_n] > 1/2}.
Proof of Lemma 4.19. Consider the sequence (E[I_A | F_n]). By Theorem 5.34 in the
Probability Theory lecture notes, E[I_A | F_n] → E[I_A | F_∞] almost surely and in L¹. Since
A ∈ F_∞, E[I_A | F_∞] = I_A almost surely. Hence, for n → ∞,

E[I_A | F_n] → I_A a.s. and in L¹   ⇒   I_{{E[I_A|F_n] > 1/2}} → I_A a.s. and in L¹.

Lemma 4.18 follows by taking an enumeration (e_i) of the bonds and setting F_n :=
σ(X^p_{e_i}, 1 ≤ i ≤ n).
For v ∈ Z^d, let

T_v : Ω → Ω,   ω ↦ ω(v + ·)

denote the shift by v. In other words,

T_v ω({w, z}) = ω({v + w, v + z}) for all {w, z} ∈ E.

We call an event A ∈ F translation invariant if

Tv−1 (A) = A for all v ∈ Zd .

For example, the event that there exists an infinite open cluster is translation invariant.

Theorem 4.20 Any translation invariant event A ∈ F satisfies Pp (A) ∈ {0, 1} for all
p ∈ [0, 1].

Proof. Throughout the proof, p will be fixed. To simplify the notation we write P instead
of Pp .
Let A ∈ F be a translation invariant event and let ε > 0. By Lemma 4.18 there exists
a finite dimensional cylinder set Aε with

P(A∆Aε ) < ε.

Clearly, P is invariant under Tv , i.e.

P(Tv−1 (C)) = P(C)

for all C ∈ F. Consequently,

ε > P(A∆Aε ) = P(Tv−1 (A)∆Tv−1 (Aε )).

By assumption, Tv−1 (A) = A. We abbreviate

Bε := Tv−1 (Aε ).

By the above,

P(A∆Bε ) < ε.

We calculate

P(A∆(Aε ∩ Bε )) = P(A \ (Aε ∩ Bε )) + P((Aε ∩ Bε ) \ A)
≤ P(A \ Aε ) + P(A \ Bε ) + P(Bε \ A)
≤ P(A∆Aε ) + P(A∆Bε ) < 2ε.

By assumption, Aε depends only on finitely many edges, and the shifted event Bε depends on the correspondingly shifted set of edges. Thus, we can choose v with ∥v∥ so large that these two edge sets are disjoint, making Aε and Bε independent. Then, we obtain

|P(Aε )P (Bε ) − P(A)| =|P(Aε ∩ Bε ) − P(A)|.

Note that for any event C, one has

|P(C) − P(A)| = |P(C \ A) + P(C ∩ A) − P(A \ C) − P(A ∩ C)| ≤ P(A∆C).

Using this, we obtain

|P(Aε )P(Bε ) − P(A)| ≤P(A∆(Aε ∩ Bε )) < 2ε. (4.2)

On the other hand, one has

|P(A)² − P(Aε )P(Bε )|
= |(P(A) − P(Aε ) + P(Aε ))(P(A) − P(Bε ) + P(Bε )) − P(Aε )P(Bε )|
≤ |P(A) − P(Aε )| · |P(A) − P(Bε )| + P(Aε )|P(A) − P(Bε )| + |P(A) − P(Aε )|P(Bε ).

Using

|P(A) − P(Aε )| ≤ P(A∆Aε ) < ε, |P(A) − P(Bε )| ≤ P(A∆Bε ) < ε,

we conclude
|P(A)² − P(Aε )P(Bε )| ≤ ε² + 2ε. (4.3)
Combining (4.2) and (4.3), we obtain

|P(A)² − P(A)| ≤ ε² + 4ε.

Since ε > 0 was arbitrary, we conclude

P(A) = P(A)2 .

This shows that P(A) ∈ {0, 1}.

Corollary 4.21 (Step 1) Pp (N = k) ∈ {0, 1} for all k ∈ N0 ∪ {∞}.

Proof. Note that shifting a configuration does not change the number of clusters. Hence,
we can apply Theorem 4.20 with A = {N = k}.

Lemma 4.22 (Step 2) Pp (N = k) = 0 for all k ∈ N with k ≥ 2.

Proof. We do a proof by contradiction. Assume that

Pp (N = k) = 1 for some k ∈ N with k ≥ 2. (4.4)

Let F ⊆ E be a finite set. For ω ∈ Ω, define ω(1) to be the configuration obtained from ω by setting

ωe(1) = 1 if e ∈ F,   and   ωe(1) = ωe if e ∈ E \ F.
So in ω(1) all edges in F are declared open, while all other edges keep their state from ω.
Let
NF1 (ω) = N (ω (1) )
be the number of infinite open clusters in the configuration ω (1) . Then,

Pp (NF1 = k) = Pp (ω : N (ω (1) ) = k) = Pp (N = k|ωe = 1, ∀e ∈ F ) = 1. (4.5)

For L ∈ N, let

ML = number of infinite open clusters which intersect BL = {−L, . . . , L}d .

Take F = {e ∈ E : e ⊆ BL }. If ML ≥ 2 and all edges with both endpoints in BL are declared open, then all infinite open clusters intersecting BL merge into one, so the number of infinite open clusters strictly decreases. (Here we used N < ∞ a.s.) Since this cannot
be the case by (4.5), we conclude

Pp (2 ≤ ML < ∞) = 0 for all L.

We obtain

Pp (2 ≤ N < ∞) ≤ Pp ( ∪_{L=1}^{∞} {2 ≤ ML < ∞} ) ≤ Σ_{L=1}^{∞} Pp (2 ≤ ML < ∞) = 0.

Thus, Pp (2 ≤ N < ∞) = 0, which contradicts (4.4).

Definition 4.23 v ∈ Zd is called a trifurcation if the following hold:

(a) v is contained in an infinite open cluster;

(b) there are precisely 3 open edges incident to v;

(c) if we remove v and the 3 open edges incident to v, then the infinite open cluster
decomposes into 3 disjoint infinite open clusters.

For v ∈ Zd , let
Dv = {ω ∈ Ω : v is a trifurcation in ω}.
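Conditions (a) and (c) involve infinite clusters, so they cannot be checked on a computer directly; on a finite box a natural stand-in is to require that the three branches at v reach the boundary of the box. The following sketch samples a configuration on BL in d = 2 and counts vertices satisfying this finite-box proxy (the proxy itself, the box size, and p are our choices, not part of the notes):

```python
import random
from itertools import product

random.seed(0)
L, p, d = 6, 0.5, 2
box = set(product(range(-L, L + 1), repeat=d))

def neighbors(v):
    for i in range(d):
        for s in (-1, 1):
            w = v[:i] + (v[i] + s,) + v[i + 1:]
            if w in box:
                yield w

# sample a bond percolation configuration on the edges inside the box
omega = {}
for v in box:
    for w in neighbors(v):
        omega.setdefault(frozenset((v, w)), random.random() < p)

def branch(start, removed):
    """Open cluster of `start` in the box with the vertex `removed` deleted (BFS)."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in neighbors(u):
            if w != removed and w not in seen and omega[frozenset((u, w))]:
                seen.add(w)
                stack.append(w)
    return seen

def reaches_boundary(cluster):
    return any(max(abs(c) for c in v) == L for v in cluster)

def is_trifurcation_proxy(v):
    """Finite-box stand-in for Definition 4.23: 'belongs to an infinite
    cluster' is replaced by 'the branch reaches the boundary of B_L'."""
    open_nbrs = [w for w in neighbors(v) if omega[frozenset((v, w))]]
    if len(open_nbrs) != 3:                        # condition (b)
        return False
    branches = [branch(w, removed=v) for w in open_nbrs]
    disjoint = all(branches[i].isdisjoint(branches[j])   # condition (c), proxy
                   for i in range(3) for j in range(i + 1, 3))
    return disjoint and all(reaches_boundary(b) for b in branches)

interior = [v for v in box if max(abs(c) for c in v) < L]
trifurcations = [v for v in interior if is_trifurcation_proxy(v)]
print(len(trifurcations))
```

Rerunning with different seeds and larger L illustrates Lemma 4.25 qualitatively: once the configuration contains many large clusters, such vertices appear at a positive density in the bulk of the box.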

Lemma 4.24 If Pp (N = ∞) = 1, then Pp (D0 ) > 0.

Proof. Let
BL = {−L, . . . , L}d , EL = {e ∈ E : e ⊆ BL }.
Let

ML = number of infinite open clusters which intersect BL ,


ML0 (ω) = number of infinite open clusters intersecting BL in the configuration ω (0) ,
which is obtained from ω by declaring all edges in EL closed.

If ML0 (ω) ≥ 3, there exist x(ω), y(ω), z(ω) ∈ ∂BL = BL \ BL−1 belonging to different
infinite open clusters. Furthermore, there exist paths γx (ω), γy (ω), γz (ω) in BL which
connect 0 with x, y, z, respectively in such a way that the following hold:

• 0 is the only point that any two of these paths have in common;

• For v ∈ {x, y, z}, γv touches precisely one vertex in ∂BL .

For x, y, z ∈ V , we define

Jx,y,z = {all edges in γx , γy , γz are open, all other edges in EL are closed}.

One has

Pp (D0 ) = Pp (0 is a trifurcation) ≥ Pp ({ML0 ≥ 3} ∩ Jx,y,z )
= Pp (Jx,y,z |ML0 ≥ 3) Pp (ML0 ≥ 3). (4.6)

Since ML0 ≥ ML , we obtain

Pp (ML0 ≥ 3) ≥ Pp (ML ≥ 3).

Since the events {ML ≥ 3} are increasing in L, we obtain



lim_{L→∞} Pp (ML ≥ 3) = Pp ( ∪_{L=1}^{∞} {ML ≥ 3} ) ≥ Pp (N ≥ 3) = 1.

Hence, there exists L such that


Pp (ML0 ≥ 3) ≥ 1/2. (4.7)
Jx,y,z depends only on Xep with e ∈ EL , whereas ML0 depends only on Xep with e ∈ E \ EL .
Hence, Jx,y,z and ML0 are independent, and we obtain

Pp (Jx,y,z |ML0 ≥ 3) = Pp (Jx,y,z ) ≥ (min{p, 1 − p})^{|EL|} > 0. (4.8)

Inserting (4.7) and (4.8) into (4.6) yields the claim.

Lemma 4.25 If Pp (N = ∞) = 1, then there exists a constant c > 0 such that for all L,
one has the following bound for the expected number of trifurcations in the box BL−1 :
Ep [ Σ_{v∈BL−1} 1Dv ] ≥ cL^d.

Proof. Since Pp is translation invariant, we have Pp (Dv ) = Pp (D0 ) for all v. Hence, we
obtain

Ep [ Σ_{v∈BL−1} 1Dv ] = Σ_{v∈BL−1} Pp (Dv ) = |BL−1 | Pp (D0 ).

By Lemma 4.24, Pp (D0 ) > 0. Furthermore,

|BL−1 | = (2L − 1)d ≥ Ld .

This implies the claim with c = Pp (D0 ) > 0.

Definition 4.26 Let Y be a finite set with |Y | ≥ 3. A 3-partition Π = {P1 , P2 , P3 } of Y is a partition of Y into precisely 3 non-empty sets P1 , P2 , P3 . Two 3-partitions
Π = {P1 , P2 , P3 } and Π′ = {P1′ , P2′ , P3′ } are called compatible if their elements can be
ordered in such a way that

P1 ⊇ P2′ ∪ P3′ or P1′ ⊇ P2 ∪ P3 .

A family P of 3-partitions is called compatible if all Π, Π′ ∈ P with Π ̸= Π′ are compatible.

Example 4.27 Here are three 3-partitions of {1, 2, 3, 4}:

Π = {P1 , P2 , P3 }, P1 = {1, 2}, P2 = {3}, P3 = {4},


Π′ = {P1′ , P2′ , P3′ }, P1′ = {1}, P2′ = {2}, P3′ = {3, 4},
Π′′ = {P1′′ , P2′′ , P3′′ }, P1′′ = {1}, P2′′ = {3}, P3′′ = {2, 4}.

Π and Π′ are compatible because P1 ⊇ P1′ ∪ P2′ (after reordering the elements of Π′ ).


Π and Π′′ are not compatible; Π′ and Π′′ are not compatible.
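The compatibility condition can be checked mechanically. The following sketch (with our own encoding of 3-partitions as lists of sets) reproduces the three comparisons of Example 4.27:

```python
from itertools import combinations

def compatible(pi, rho):
    """Definition 4.26: two 3-partitions are compatible iff some block of
    one contains the union of two blocks of the other (after reordering)."""
    for a, b in ((pi, rho), (rho, pi)):
        for block in a:
            for q, r in combinations(b, 2):
                if block >= q | r:
                    return True
    return False

Pi     = [{1, 2}, {3}, {4}]
Pi_one = [{1}, {2}, {3, 4}]
Pi_two = [{1}, {3}, {2, 4}]

print(compatible(Pi, Pi_one))      # True:  {1,2} contains {1} ∪ {2}
print(compatible(Pi, Pi_two))      # False
print(compatible(Pi_one, Pi_two))  # False
```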

Lemma 4.28 Let P be a compatible family of 3-partitions of Y . One has

|P| ≤ |Y | − 2.

Proof. We use induction over |Y |.


Case |Y | = 3: Then |P| = 1 = |Y | − 2.
Induction step: Assume the claim is true for |Y | ≤ n. Let Y be a set with |Y | = n + 1
and let y ∈ Y . Let P be a compatible family of 3-partitions of Y . We can represent every
Π ∈ P as
Π = {P1 ∪ {y}, P2 , P3 }
with pairwise disjoint sets P1 , P2 , P3 ⊆ Y \ {y} with P2 ̸= ∅, P3 ̸= ∅, and P1 ∪ P2 ∪ P3 =
Y \ {y}. Using this representation, we define

P ′ = {Π ∈ P : P1 ̸= ∅}, P ′′ = P \ P ′ .

Then
P̃ ′ = {{P1 , P2 , P3 } : {P1 ∪ {y}, P2 , P3 } ∈ P ′ }
is a compatible family of 3-partitions of Y \ {y}. By induction assumption, we know

|P ′ | = |P̃ ′ | ≤ |Y \ {y}| − 2 = |Y | − 3.

Since |P| = |P ′ | + |P ′′ |, it remains to show |P ′′ | ≤ 1. We show this by contradiction. As-


sume P ′′ contains two different 3-partitions of Y , namely {{y}, P2 , P3 } and {{y}, Q2 , Q3 }.
By assumption, the two partitions are compatible. Note that y ̸∈ Pi , y ̸∈ Qi for i ∈ {2, 3}.

• {y} ⊇ {y} ∪ Qi is false because Qi ̸= ∅.

• {y} ⊇ Q2 ∪ Q3 is false because Q2 ∪ Q3 ̸= ∅ and y ∈
/ Q2 ∪ Q3 .

• P2 ⊇ {y} ∪ Qi is false because y ̸∈ P2 .

• P2 ⊇ Q2 ∪ Q3 is false because Q2 ∪ Q3 = Y \ {y} and P2 ∪ P3 = Y \ {y} with P3 ̸= ∅.

Thus, the two partitions are not compatible (the symmetric cases, with the roles of the two partitions exchanged, fail for the same reasons), which is a contradiction. Hence, |P ′′ | ≤ 1 and the claim follows.
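For small Y, the bound |P| ≤ |Y | − 2 can also be verified by brute force. The sketch below enumerates all 3-partitions of a 4-element set (there are six, each determined by which two elements share a block) and finds the largest pairwise-compatible family; the encoding and the restriction to |Y | = 4 are our own choices:

```python
from itertools import combinations

def compatible(pi, rho):
    """Definition 4.26, as a set comparison: some block of one
    3-partition contains the union of two blocks of the other."""
    for a, b in ((pi, rho), (rho, pi)):
        for block in a:
            for q, r in combinations(b, 2):
                if block >= q | r:
                    return True
    return False

Y = {1, 2, 3, 4}
# a 3-partition of a 4-set has exactly one 2-element block
partitions = []
for pair in combinations(sorted(Y), 2):
    rest = Y - set(pair)
    partitions.append([set(pair)] + [{x} for x in sorted(rest)])

largest = 0
for k in range(1, len(partitions) + 1):
    for family in combinations(partitions, k):
        if all(compatible(p, q) for p, q in combinations(family, 2)):
            largest = k
print(largest)  # prints: 2, matching |Y| - 2
```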

Lemma 4.29 (Step 3) Pp (N = ∞) = 0.

Proof. We do a proof by contradiction. Assume that Pp (N = ∞) > 0. Then, by
Corollary 4.21, Pp (N = ∞) = 1.
Let C be an infinite open cluster. Assume x ∈ C ∩ BL−1 is a trifurcation. If we remove
x, we obtain a partition Π(x) = {P1 , P2 , P3 } of C ∩ ∂BL as follows: Let P1 , P2 , P3 be the
sets of points v ∈ C ∩ ∂BL such that there is an open path from x to v starting with one
of the 3 open edges incident to x. Then, Π(x) has the following properties:

• Pi ̸= ∅ for 1 ≤ i ≤ 3.

• Each Pi is contained in the vertex set of an open connected subgraph of BL \ {x}.

• Pi ̸↔ Pj in BL \ {x} for i ̸= j, i.e. there is no open path in BL \ {x} leading from a point in Pi to a point in Pj .

• In particular, Π(x) is a 3-partition of C ∩ ∂BL .

Let x′ ̸= x be another trifurcation in C ∩ BL−1 .


Claim: Π(x) and Π(x′ ) are different and compatible.
In the picture, P1 (x) ∩ P1 (x′ ) ̸= ∅ and P1 (x′ ) ∩ P3 (x) ̸= ∅. Hence, Π(x) ̸= Π(x′ ).
In the picture, P1 (x′ ) ⊃ P1 (x) ∪ P3 (x). Hence, Π(x) and Π(x′ ) are compatible.
Let τ (C) be the number of trifurcations in C ∩ BL−1 . By Lemma 4.28,

τ (C) ≤ |C ∩ ∂BL | − 2 ≤ |C ∩ ∂BL |.

Summing both sides over all infinite open clusters intersecting BL yields


Σ_{v∈BL−1} 1Dv ≤ |∂BL |.

Taking expectations yields


Ep [ Σ_{v∈BL−1} 1Dv ] ≤ |∂BL | ≤ C L^{d−1}

with a constant C > 0. This contradicts Lemma 4.25, which states

Ep [ Σ_{v∈BL−1} 1Dv ] ≥ cL^d,

since cL^d > C L^{d−1} for L large enough.

References
[BK89] R. M. Burton and M. Keane. Density and uniqueness in percolation. Comm.
Math. Phys., 121(3):501–505, 1989.

[DS84] Peter G. Doyle and J. Laurie Snell. Random walks and electric networks, vol-
ume 22 of Carus Mathematical Monographs. Mathematical Association of Amer-
ica, Washington, DC, [Link] 1984.

[Gri10] Geoffrey Grimmett. Probability on graphs, volume 1 of Institute of Mathematical


Statistics Textbooks. Cambridge University Press, Cambridge, 2010. Random
processes on graphs and lattices.

[LP16] Russell Lyons and Yuval Peres. Probability on Trees and Networks. Cambridge
University Press, 2016. Available at [Link]
