Nonequilibrium Physics Overview
Sommersemester 2013
Contents

1 Introduction
  1.1 Nonequilibrium
  1.2 Description of nonequilibrium systems

2 Diffusion
  2.1 The diffusion equation
  2.2 Solution in one dimension
  2.3 Some properties of the solution
  2.4 Diffusion under external forces

3 Langevin equation
  3.1 Derivation of the Langevin equation
  3.2 Averages

4 Fokker-Planck equation I
  4.1 From Langevin to Fokker-Planck
    4.1.1 Stationary solution
  4.2 Reduction to diffusion equation
  4.3 Fokker-Planck equation with external field

5 Random Walk
  5.1 Symmetric random walk
  5.2 Asymmetric random walk
  5.3 Random walk on infinite lattice
  5.4 Random walk and detailed balance

6 Ratchets
  6.1 Feynman's ratchet and pawl
  6.2 Flashing ratchet
  6.3 Rocking ratchet

7 Universal properties of poker tournaments

8 Master equation
  8.1 The master equation in continuous and discrete time
    8.1.1 Master equation for continuous time
    8.1.2 Master equation for discrete time
  8.2 Derivation from Chapman-Kolmogorov equation
  8.3 Stationary state and detailed balance

9 Quantum formalism
  9.1 Master equation in quantum formalism
    9.1.1 An example: Asymmetric random walk
  9.2 Properties of the stochastic Hamiltonian
  9.3 Formal solution and averages
  9.4 Comparison stochastic dynamics vs. quantum mechanics
  9.5 Quantum formalism for discrete time dynamics

10 Local processes
  10.1 Processes with nearest-neighbour interactions
  10.2 Reaction-diffusion processes
  10.3 Matrix representation of stochastic Hamiltonians
    10.3.1 Definition of tensor product

11 Solution methods
  11.1 Exact methods
    11.1.1 Bethe Ansatz
    11.1.2 Matrix-product Ansatz
  11.2 Approximate methods
    11.2.1 Mean-field approximation
    11.2.2 Cluster approximation

12 Zero-Range Process
  12.1 Definition of the Zero-Range Process
  12.2 Solution of the Zero-Range Process

15 Vehicular traffic
  15.1 Empirical results
    15.1.1 Observables and data analysis
    15.1.2 Spontaneous traffic jams
  15.2 Models of highway traffic
  15.3 Nagel-Schreckenberg model
  15.4 Generalizations of the NaSch model

16 Biological transport
  16.1 Traffic on ant trails
  16.2 Intracellular transport

17 Pedestrian dynamics
  17.1 Empirical results
  17.2 Modeling approaches
  17.3 Applications
Chapter 1
Introduction
1.1 Nonequilibrium
The concept of thermal equilibrium is the central aspect of the course in Statistical Physics.
In principle one can distinguish thermal equilibrium of a system from the equilibrium of two
systems in contact.
But what is the precise definition of "equilibrium"? Usually it is postulated (and supported
by experience) that any isolated system¹ tends to reach an "equilibrium state". In this state
probability densities or density operators for the microstates become time-independent. This
implies that expectation values of physical quantities approach "equilibrium values" which do
not depend on time.
Systems in thermodynamic equilibrium can be characterized by a few macroscopic quantities,
which are constant in time. Fluctuations around averages are possible (see statistical physics,
Gibbs), but certain phenomena like Brownian motion are not correctly described by this.
For a large class of truly non-equilibrium systems no Hamiltonian (and thus no energy function)
can be defined. Instead they are defined in terms of transition rates between different configu-
rations. Such a system may attain a non-equilibrium stationary state or exhibit non-stationary
behavior for ever. Therefore systems far from equilibrium can be divided further into two sub-
classes: (a) systems for which Hamiltonian can be defined and the stable thermodynamic equilib-
rium exists, so that the system evolves spontaneously towards this equilibrium; and (b) systems
for which neither the Hamiltonian nor the Gibbsian equilibrium state exist. Systems of type (a)
can attain thermodynamic equilibrium state after sufficiently long time. But, a system of type (b)
may attain only a non-equilibrium steady-state (see Fig. 1.1.1).
Consider, for example, a window glass which is a supercooled liquid and is in a metastable state.
It can, in principle, attain the corresponding crystalline equilibrium structure after sufficiently
long time. But, in contrast, a conducting wire carrying an electrical current can, at best, attain
a non-equilibrium steady-state after the transient currents die down. Systems in the latter cate-
gory will be referred to as driven non-equilibrium system where, loosely speaking, an external
1 In fact almost all systems are not perfectly closed. What is meant are systems which are isolated as well as possible from the surrounding world.
“drive” maintains the corresponding current. A difference of chemical potential between the two
ends of a sample can drive a current of material particles. Similarly, an electric potential gradient
can drive an electrical current.
Systems which are close to equilibrium can often be treated by linear response theory. As an
example, consider a time-dependent field \vec{F}(t). The average deviation of an observable \vec{A} from
its equilibrium value is then given by

\langle \delta\vec{A}(t)\rangle = \int dt'\, K(t - t')\, \vec{F}(t')    (1.1.1)

C_A(t) = \frac{k_B T}{i\pi}\, P \int_{-\infty}^{\infty} d\omega\, \frac{\chi(\omega)}{\omega}\, \cos\omega t .    (1.1.3)
A special case are systems which evolve very slowly on an experimental timescale, e.g. glasses.

For a system in thermal equilibrium the probability of a configuration C is given by the Boltzmann distribution

P(C) = \frac{1}{Z}\, e^{-\beta E(C)}    (1.1.4)
with \beta = \frac{1}{k_B T}, where C specifies a certain configuration of the system. In mechanical systems,
C is the set of the positions and momenta of the particles, whereas in the case of Ising spins,
C = \{\sigma_1, \ldots, \sigma_N\} with \sigma_i = \pm 1. P(C) is the probability to find the system in state C and E(C)
is the corresponding energy. The partition function is

Z = \sum_{C} e^{-\beta E(C)} .    (1.1.5)
This is true for any system in thermal equilibrium which can exchange energy with a heat bath
of temperature T . So if we know the energy function we can (at least in principle) determine all
thermodynamical properties of the system.
An alternative point of view on any system would be not to specify its energies, but to look at
the transitions C → C' between different configurations. This can be done using the transition
rates² W(C → C'). In equilibrium the so-called detailed balance condition

W(C' \to C)\, e^{-\beta E(C')} = W(C \to C')\, e^{-\beta E(C)}    (1.1.6)

must hold, which we will derive later. It implies that the number of transitions, and thus the flux
or probability current, from C to C' and from C' to C are equal. This relation is also of some practical
importance in computer simulations of equilibrium systems (Monte-Carlo algorithms).
Now we can give a first pragmatic definition:
In this course we mainly consider systems which are far from equilibrium, especially systems
which maintain a current (typically a particle current). Such systems are generically driven by
their environment instead of being in equilibrium with it.
Nonequilibrium systems can be described within different general frameworks:
1. stochastic processes
2. kinetic theories
3. response theory
Approaches of type 1. assume that the process is stochastic. Either the process itself is stochastic
or stochastic terms (noise) are added to the (deterministic) fundamental equations. Such stochas-
tic theories are often used when we do not know the exact interactions. Therefore they appear in
many interdisciplinary applications, especially when human behavior is involved.
2 i.e. a transition probability per time
Kinetic theories try to derive nonequilibrium distribution functions (or density operators) from
first principles. They satisfy the so-called kinetic equations.
Response theories describe nonequilibrium systems in terms of equilibrium quantities, e.g. the
linear response theory described above. They usually start from first principles, e.g. the Liouville
equation.
Chapter 2
Diffusion
First we will look at diffusion, which will become a kind of reference system for us in the
following. We try to describe the approach to equilibrium.
In the same way as in electrodynamics¹ we can then derive the continuity equation as a local
expression for the conservation of particles:

\frac{\partial n}{\partial t} + \nabla \cdot \vec{j} = 0 .    (2.1.4)
If we now start at time t = 0 with an inhomogeneous density distribution of the particles, experi-
ence tells us that it will become homogeneous through diffusion after a long time (see Fig. 2.1.1).
1 By considering a finite volume V with surface F: the number of particles in V can only change if there is a current through F.
Figure 2.1.1: An inhomogeneous density distribution (left) will become homogeneous after a
long time (right) by diffusion.
From the observation of colloidal particles in a suspension, A.E. Fick concluded in 1855 that
origin of the current is the gradient of density distribution. This is summarized in Fick’s law:
~j(~r) = −D∇n(~r) . (2.1.5)
The constant of proportionality D is called diffusion coefficient. It has the dimension [D] =
(length)2 /time.
Inserting Fick’s law into the continuity equation yields the diffusion equation (or heat equa-
tion):
∂n(~r, t)
= D∆n(~r, t) , (2.1.6)
∂t
which relates the temporal change of the density distribution with its second spatial derivative.
For the one-dimensional diffusion equation

\frac{\partial n}{\partial t} = D\, \frac{\partial^2 n}{\partial x^2}    (2.2.2)
we make the factorizing Ansatz
n(x, t) = f (x)g(t) . (2.2.3)
After inserting into (2.2.2) we obtain
\frac{1}{g}\,\frac{\partial g}{\partial t} = \frac{D}{f}\,\frac{\partial^2 f}{\partial x^2}    (2.2.4)
where the left-hand side is a function of t only and the right-hand side of x only. Therefore both
sides have to be constant in order to be true for all values of x and t. We denote this constant for
convenience by −k²D. Then (2.2.4) is reduced to two independent equations
\frac{dg}{dt} = -k^2 D\, g ,    (2.2.5)

\frac{d^2 f}{dx^2} = -k^2 f .    (2.2.6)
Both equations are easily solved:
g(t) = A\, e^{-k^2 D t} ,    (2.2.7)

f(x) = B\, e^{ikx} .    (2.2.8)
We just remark here that we do not need to consider the full solution for f (x) here which would
also include the term e−ikx (see below).
The most general solution is then obtained by superposition of the solutions for all k:
n(x, t) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, a(k)\, e^{ikx - k^2 D t}    (2.2.9)
where, for convenience, we have combined the product of the integration constants into a single
function a(k)/2π. Here we see why the term e−ikx has not been included in the solution of f (x):
since we integrate over all k-values, including k < 0. Including the e−ikx -term would only lead
to a redefinition of a(k).
The solution has the form of a Fourier transformation. The Fourier transform of the initial con-
dition n(x) is denoted by n(k), i.e.
n(x) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\, n(k)\, e^{ikx} ,    (2.2.10)
This can be simplified by completing the square in the exponential function and using the Gauss
integral

\int_{-\infty}^{\infty} e^{-a^2 x^2}\, dx = \frac{\sqrt{\pi}}{a} .    (2.2.13)
Figure 2.2.1: Time evolution of the density profile n(x, t) due to diffusion for a delta-function
initial condition [1].
i.e. at time t = 0 all particles are located at x = 0. Then the solution simplifies to
n(x, t) = N\, \frac{e^{-x^2/(4Dt)}}{\sqrt{4\pi D t}}    (2.2.17)

which is also called the standard solution. It is a Gauss function of width ∼ \sqrt{Dt}, i.e. the width
increases as t^{1/2} (cf. Fig. 2.2.1).
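As a quick consistency check, the standard solution (2.2.17) can be compared with a direct numerical integration of the one-dimensional diffusion equation. The following minimal sketch (not part of the original notes; all parameter values are arbitrary illustrative choices) uses an explicit finite-difference scheme:

```python
import numpy as np

# Explicit finite-difference integration of dn/dt = D d^2n/dx^2 and comparison
# with the standard (Gaussian) solution (2.2.17), normalized to N = 1.
D, L, M = 1.0, 20.0, 401            # diffusion constant, box size, grid points
x = np.linspace(-L/2, L/2, M)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D                # stability requires D*dt/dx^2 <= 1/2

# start from a narrow Gaussian (an approximation of a delta peak at x = 0)
t = 0.01
n = np.exp(-x**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)

while t < 1.0:
    n[1:-1] += D * dt / dx**2 * (n[2:] - 2*n[1:-1] + n[:-2])
    t += dt

exact = np.exp(-x**2 / (4*D*t)) / np.sqrt(4*np.pi*D*t)
print("max deviation from the standard solution:", np.abs(n - exact).max())
```

The printed deviation stays small as long as the stability condition D∆t/∆x² ≤ 1/2 is respected.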
The solution (2.2.17) has several noteworthy properties:
1. Diffusion is infinitely fast, i.e. n(x, t = \epsilon) \neq n(x, t = 0) for all x and any \epsilon > 0.
2. The quantity

p(x, t) := \frac{n(x, t)}{N}    (2.3.1)

may be interpreted as the probability to find a particle in the interval [x, x + dx]. Obviously
(compare with (2.2.15)) it satisfies the necessary normalization condition

\int_{-\infty}^{\infty} p(x, t)\, dx = 1 .    (2.3.2)
where W also satisfies a diffusion equation and the definition of a Markov process (see
later). This will be explored further in Exercise 1.
We can also calculate averages using the probability density (2.3.1). After a short calculation
one finds

\langle x^2 \rangle = \int_{-\infty}^{\infty} x^2\, p(x, t)\, dx = 2Dt ,    (2.3.4)
Consider now a particle moving under the influence of an external force \vec{F} and friction, with the
equation of motion m\dot{\vec{v}} = \vec{F} - \alpha\vec{v}, where -\alpha\vec{v} is a force due to friction effects. The stationary solution of this equation of motion
is obtained by setting \dot{\vec{v}} = 0:

\vec{v}_B = \frac{\vec{F}}{\alpha} =: B\,\vec{F}    (2.4.2)

where we have introduced the mobility B = 1/\alpha.
For simplicity we again consider the one-dimensional case. The current has two contributions,
one coming from diffusion and the other from the mobility:
j = jD + jB (2.4.3)
where

j_D = -D\,\frac{\partial n}{\partial x} , \qquad j_B = n\, v_B = n B F .    (2.4.4)
The continuity equation reads

\frac{\partial n}{\partial t} = -\frac{\partial j}{\partial x} = -v_B\,\frac{\partial n}{\partial x} + D\,\frac{\partial^2 n}{\partial x^2} .    (2.4.5)
Its stationary solution (for \partial n/\partial t = 0) is given by

n(x) = n_0\, e^{v_B x / D} .    (2.4.6)
In thermal equilibrium the density follows the Boltzmann (barometric) form n(x) \propto e^{F x/(k_B T)} for a constant force F. By comparison with (2.4.6) we see that \frac{F}{k_B T} = \frac{v_B}{D}. In combination with the definition (2.4.2) of
the mobility this leads to the famous Einstein relation (1905)

D = k_B T\, B    (2.4.9)
which relates the diffusion coefficient to temperature. The temperature dependence indicates that
diffusion is related to thermal collisions.
Chapter 3
Langevin equation
Figure 3.1.1: Brownian motion: Erratic trajectory of a pollen grain in water [1].
Since we are looking for a statistical description we consider an ensemble of identical systems1 .
We split the velocity into two contributions,
v = hvi + v 0 (3.1.3)
where hvi is the ensemble average of the velocity and v 0 a fast changing contribution. Similarly
the force can be divided:
F = hF i + F 0 . (3.1.4)
Since hF i(v = 0) = 0 the expansion of hF i with respect to v yields to lowest order
hF i = −αhvi , (3.1.5)
where α can be interpreted as friction constant. The minus sign appears because hF i has the
tendency to reduce hvi (we must have limt→∞ hvi(t) = 0).
Now we can give the equation of motion for the slowly varying part:
mhv̇i = Fext + hF i = Fext − αhvi . (3.1.6)
Note that it is not simply given by mhv̇i = Fext because then the system would not reach the
equilibrium value hvi = 0 for Fext = 0.
If we include the fast-varying part we obtain the Langevin equation (1908)

m\dot{v} = F_{\text{ext}} - \alpha v + F'(t) .
3.2 Averages
In order to understand the Langevin equation in more detail we take averages \langle\cdots\rangle over many
particles. Position x and the random force \xi are statistically independent so that we have

\langle x(t)\,\xi(t)\rangle = \langle x(t)\rangle\,\langle\xi(t)\rangle = 0 ,    (3.2.1)
due to (3.1.9).
If we assume that the particles are in a thermal equilibrium state we can apply the equipartition
theorem
\frac{m}{2}\,\langle v^2\rangle = \frac{1}{2}\, kT    (3.2.2)
where we have assumed for simplicity that we are dealing with a one-dimensional problem. In
Exercise 2 we will show that then

\langle x v\rangle = \int_0^t e^{-\gamma(t - t')}\, kT\, dt' = \frac{kT}{\gamma}\left(1 - e^{-\gamma t}\right) .    (3.2.3)
and (by integration of this result)
\langle x^2\rangle = \frac{2kT}{\gamma}\left[t - \frac{1}{\gamma}\left(1 - e^{-\gamma t}\right)\right] .    (3.2.4)
Taking the long-time limit t → ∞ the mean-square displacement simplifies to

\langle x^2\rangle \approx \frac{2kT}{\gamma}\, t    (3.2.5)

so that \langle x^2\rangle \propto t as previously derived for diffusion. In the opposite (short-time) limit \gamma t \ll 1 the
mean-square displacement is \langle x^2\rangle \propto t^2, as for free thermal particles.
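These averages can be checked by a direct simulation of the Langevin equation. The sketch below is an illustration, not part of the notes: it sets m = 1 and assumes the white-noise strength \langle\xi(t)\xi(t')\rangle = 2\gamma kT\,\delta(t - t'), the choice for which equipartition \langle v^2\rangle = kT holds.

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation dv = -gamma*v*dt + dxi
# with m = 1 and noise variance 2*gamma*kT*dt per step (so that <v^2> = kT).
rng = np.random.default_rng(0)
gamma, kT = 1.0, 1.0
dt, nsteps, nparticles = 1e-3, 20000, 5000

v = rng.normal(0.0, np.sqrt(kT), nparticles)     # thermal initial velocities
x = np.zeros(nparticles)
for _ in range(nsteps):
    v += -gamma*v*dt + rng.normal(0.0, np.sqrt(2*gamma*kT*dt), nparticles)
    x += v*dt

t = nsteps*dt
exact = 2*kT/gamma * (t - (1 - np.exp(-gamma*t))/gamma)   # eq. (3.2.4)
print("<x^2> simulated:", (x**2).mean(), "   eq. (3.2.4):", exact)
print("long-time slope <x^2>/t:", (x**2).mean()/t, "   2kT/gamma =", 2*kT/gamma)
```

At long times the printed mean-square displacement approaches (2kT/γ)t, the diffusive behaviour of (3.2.5).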
In general the fluctuating force is characterized by its vanishing average (3.1.9) and its temporal
correlations

\langle\xi(t)\,\xi(t')\rangle = \varphi(t - t') .    (3.2.6)
The function φ is symmetric, i.e. φ(t) = φ(−t).
In a general Ornstein-Uhlenbeck process these correlations are only short-lived, i.e.
φ(t) ≈ 0 for t > tc . (3.2.7)
In this context tc is called correlation time.
Chapter 4
Fokker-Planck equation
We now change slightly our point of view. Instead of considering a description of a single
diffusing particle in terms of classical dynamics (plus a stochastic force) we look at an ensemble
of many particles (N ≫ 1). Based on the experience with many-particle systems a description in
terms of distribution functions seems to be promising.
Since we are interested in the time evolution of the probability density P(v, t) = \langle\delta(v - v(t))\rangle we start with a formal expansion in
the time increment \Delta t:

P(v, t + \Delta t) = \langle\delta(v - v(t) - \Delta v)\rangle = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,\frac{\partial^n}{\partial v^n}\,\langle(\Delta v)^n\,\delta(v - v(t))\rangle .    (4.1.3)
where we have used that −γv is a slowly varying variable, in contrast to ξ(t).
In order to derive a dynamical equation for P(v, t) we will now consider the averages
A_n := \langle(\Delta v)^n\,\delta(v - v(t))\rangle in (4.1.3) for all values of n.
• For n = 0 we have
A_0 = \langle\delta(v - v(t))\rangle = P(v, t)    (4.1.5)
according to the definition (4.1.2).
• For n = 1 we obtain

A_1 = \langle\Delta v\,\delta(v - v(t))\rangle = -\gamma\,\Delta t\,\langle v(t)\,\delta(v - v(t))\rangle + \int_t^{t+\Delta t}\langle\xi(t')\,\delta(v - v(t'))\rangle\,dt' .    (4.1.6)

The second term vanishes since \langle\xi(t)\rangle = 0.
Combining these results we find
where the form of the coefficient1 has been chosen for later convenience. Using a similar
argument (and e.g. (4.1.2)) as in the case n = 1 we find
A_2 = \int_t^{t+\Delta t}\! dt_1 \int_t^{t+\Delta t}\! dt_2\, \langle\xi(t_1)\xi(t_2)\rangle\, P(v, t) \;-\; 2\gamma\Delta t\,\langle v(t)\,\delta(v - v(t))\rangle \int_t^{t+\Delta t}\!\langle\xi(t')\rangle\, dt' \;+\; O\big((\Delta t)^2\big)    (4.1.11)
where the terms proportional to (∆t)2 , which are generated by calculating (∆v)2 , are
neglected. Since the average of the fluctuating force vanishes we finally obtain
• The terms with n ≥ 3 do not generate further contributions. Assuming that the
stochastic process is Gaussian one can show that they are of order O((\Delta t)^n).
1
corresponding at this point to the definition of D
Now we can collect the results for all the terms and determine the expansion (4.1.3). Since
\lim_{\Delta t \to 0} \frac{P(v, t + \Delta t) - P(v, t)}{\Delta t} = \frac{\partial P}{\partial t}    (4.1.13)

we then obtain the Fokker-Planck equation

\frac{\partial P}{\partial t} = \gamma\,\frac{\partial}{\partial v}\left(v + \gamma D\,\frac{\partial}{\partial v}\right) P(v, t) .    (4.1.14)
In order to reduce the Fokker-Planck equation (4.1.14) to a diffusion equation we introduce the rescaled variable

u := v\, e^{\gamma t} .    (4.2.1)
Then we have
P(v, t) = P(u\, e^{-\gamma t}, t) =: \tilde{P}(u, t)    (4.2.2)
so that we can consider the probability distribution to be a function of t and the rescaled variable
u (instead of v). The time evolution of this probability distribution is given by

\frac{\partial\tilde{P}}{\partial t} = \frac{\partial P}{\partial v}\,\frac{\partial v}{\partial t} + \frac{\partial P}{\partial t}
= -\gamma v\,\frac{\partial P}{\partial v} + \gamma P + \gamma v\,\frac{\partial P}{\partial v} + \gamma^2 D\,\frac{\partial^2 P}{\partial v^2}
= \gamma\tilde{P} + \gamma^2 D\, e^{2\gamma t}\,\frac{\partial^2\tilde{P}}{\partial u^2} ,    (4.2.3)

where we have used \partial v/\partial t = -\gamma v and the Fokker-Planck equation for P.
In the next step we introduce a new function a related to the rescaled probability distribution and
a rescaled time s by
\tilde{P} = a\, e^{\gamma t} \qquad\text{and}\qquad s = \frac{1}{2\gamma}\left(e^{2\gamma t} - 1\right) .    (4.2.4)
\frac{\partial a(u, s)}{\partial s} = \gamma^2 D\,\frac{\partial^2 a}{\partial u^2} .    (4.2.5)
Thus the function a satisfies a diffusion equation! Using the standard solution
a(u, s) = \frac{1}{\sqrt{4\pi s}}\, e^{-(u - u_0)^2/(4s)}    (4.2.6)
we finally obtain the probability distribution
P(v, t) = \sqrt{\frac{m}{2\pi kT\,(1 - e^{-2\gamma t})}}\;\exp\!\left[-\frac{m\,(v - v_0\, e^{-\gamma t})^2}{2kT\,(1 - e^{-2\gamma t})}\right]    (4.2.7)
for t > 0. This function describes the monotonic approach to the equilibrium distribution. It is
a Gaussian distribution with average \langle v(t)\rangle = v_0\, e^{-\gamma t} and variance

\sigma^2(t) = \gamma D\left(1 - e^{-2\gamma t}\right) .    (4.2.9)
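As an illustration (not from the notes), the Gaussian solution (4.2.7)-(4.2.9) can be checked by simulating the underlying velocity Langevin equation. The sketch assumes the noise correlation \langle\xi(t)\xi(t')\rangle = 2\gamma^2 D\,\delta(t - t'), which is the strength implicit in the Fokker-Planck equation (4.1.14); all parameter values are illustrative.

```python
import numpy as np

# Ornstein-Uhlenbeck velocity process corresponding to the Fokker-Planck
# equation (4.1.14): dv = -gamma*v dt + sqrt(2*gamma^2*D) dW.
# Check the mean v0*exp(-gamma*t) and the variance (4.2.9).
rng = np.random.default_rng(1)
gamma, D, v0 = 2.0, 0.5, 3.0
dt, nsteps, nsamples = 1e-3, 2000, 100000

v = np.full(nsamples, v0)
for _ in range(nsteps):
    v += -gamma*v*dt + np.sqrt(2*gamma**2*D*dt) * rng.normal(size=nsamples)

t = nsteps*dt
print("mean:", v.mean(), "  expected:", v0*np.exp(-gamma*t))
print("var :", v.var(),  "  expected:", gamma*D*(1 - np.exp(-2*gamma*t)))
```

For t ≫ 1/γ the printed variance approaches the stationary value γD (which equals kT/m in the notation of (4.2.7)).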
For a particle in an external force field F(x) the Langevin equation takes the form

\dot{v} + \gamma v = \frac{F(x)}{m} + f ,    (4.3.2)

and we can derive the corresponding Fokker-Planck equation along the lines of Sec. 4.1. It takes the
form

\frac{\partial P(x, v, t)}{\partial t} = \gamma\,\frac{\partial}{\partial v}\left(v + \gamma D\,\frac{\partial}{\partial v}\right)P(x, v, t) - v\,\frac{\partial P}{\partial x} - \frac{F(x)}{m}\,\frac{\partial P}{\partial v} .    (4.3.3)
The corresponding stationary solution is
P(x, v) = C\,\exp\!\left[-\frac{\tfrac{1}{2} m v^2 + V(x)}{kT}\right]    (4.3.4)
Chapter 5
Random Walk
So far we have introduced a microscopic description of diffusion based on ideas from classical
mechanics (forces!) with additional random fluctuations (due to random collisions). Next we
look at the problem from a slightly different perspective without referring to forces: the new
description does not explicitly refer to the origin of the observed motion at all.
Figure 5.1.1: The symmetric random walk: in each step the particle moves with probability 1/2
either to the left or the right neighbour site.
Next we ask the following question: if the particle starts at the origin n = 0, what is the proba-
bility P (m, N ) that it ends at site m after performing N steps?
First, one easily convinces oneself that in order to end on site m in N steps the particle has to
perform
N_+ = \frac{1}{2}(N + m)    (5.1.1)

steps to the right and

N_- = N - N_+ = \frac{1}{2}(N - m)    (5.1.2)
steps to the left. The order in which these steps are performed is arbitrary! Overall there are \binom{N}{N_+} different ways of arranging them.
Next we perform the continuum limit by introducing a lattice constant a which is the distance
between neighbouring lattice points. A lattice point m (located at ma) then represents a con-
tinuous space interval [x, x + dx] with the continuous coordinate x = ma. The corresponding
probability distribution is now¹

P(x, N)\, dx = P(m, N)\, \frac{dx}{a}    (5.1.8)

and therefore

P(x, N) = \frac{1}{\sqrt{2\pi N a^2}}\, e^{-x^2/(2Na^2)} .    (5.1.9)
We can introduce a time variable t by assuming that the particle performs N steps in time t, i.e.
\nu = N/t steps per unit time (sec). Then

P(x, N) = \frac{1}{\sqrt{4\pi D t}}\, e^{-x^2/(4Dt)} \qquad\text{where}\qquad D := \frac{\nu a^2}{2} = \frac{N a^2}{2t} .    (5.1.10)
1 according to the rules of transformation under a change of random variables
Thus indeed we recover the diffusive behaviour in the form of the standard solution (2.2.17). Note
the typical scaling behaviour (similarity law) of this solution. It is invariant under the transformation

t \to \alpha t \qquad\text{and}\qquad x \to \sqrt{\alpha}\, x    (5.1.11)

for arbitrary α. This implies that for diffusion the relevant quantity is x²/t, not x/t.
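The continuum limit can be illustrated by a small Monte Carlo experiment (an illustrative sketch, not part of the notes; one step per unit time is assumed so that ν = 1):

```python
import numpy as np

# Monte Carlo check of the continuum limit: after N steps of length a the
# displacement x is approximately Gaussian with <x^2> = N a^2 = 2 D t,
# where D = nu a^2 / 2 and nu = N/t (one step per unit time assumed here).
rng = np.random.default_rng(2)
N, a, nwalkers = 1000, 1.0, 100000

x = np.zeros(nwalkers)
for _ in range(N):
    x += a * (2 * rng.integers(0, 2, size=nwalkers) - 1)   # steps of +-a

nu = 1.0
t = N / nu
D = nu * a**2 / 2
print("<x^2> =", (x**2).mean(), "   2*D*t =", 2*D*t)
print("<x^4>/<x^2>^2 =", (x**4).mean() / (x**2).mean()**2, "  (Gaussian value: 3)")
```

The second printed ratio being close to 3 signals that the distribution is indeed approximately Gaussian.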
Figure 5.2.1: Asymmetric random walk with periodic boundary conditions. Here site L + 1 is
identified with site 1.
Again we are interested in the probability P (n, t) to find the particle at site n at time t. This
time we will choose a slightly different approach to determine this quantity which will become
our standard method in the following. This method starts from a dynamical equation for the
probability P (n, t) which has the form
\frac{\partial P(n, t)}{\partial t} = p\, P(n-1, t) + q\, P(n+1, t) - (p + q)\, P(n, t) .    (5.2.1)
This is called a master equation and can be considered as a kind of continuity equation for the
probabilities. Its origin is easy to understand. The left-hand side determines the change of P(n, t)
in an infinitesimal time interval dt. Such a change can occur for three different reasons:
1. A particle moves from site n − 1 to site n. This occurs with rate p and leads to an increase
of P (n, t).
2. A particle moves from site n + 1 to site n. This again results in an increase of P (n, t) and
occurs with rate q.
3. A particle moves from site n to site n−1 or n+1. This occurs with rate p+q and decreases
P (n, t).
So the right-hand side of (5.2.1) consists of gain terms (the first two terms) which increase P(n, t)
and a loss term (the last term) which leads to a decrease. This is the general basis for any master
equation and will be discussed later in more detail.
The master equation (5.2.1) for the asymmetric random walk with periodic boundary conditions
can be solved using the generating function technique. In the present case the generating
function takes the form of a Fourier series reflecting the spatial periodicity of the system. Details
of the solution will be discussed in Exercise 5. It turns out that P(n, t) is given by

P(n, t) = \sum_{k=1}^{L} e^{\lambda_k t}\, z_k^{-n}\, \tilde{P}(k, 0)    (5.2.2)
with
z_k = e^{2\pi i k/L} \qquad\text{and}\qquad \lambda_k = p\,(z_k - 1) + q\left(\frac{1}{z_k} - 1\right) \qquad (k = 1, \ldots, L) .    (5.2.3)
The stationary state is reached for t → ∞. In this case only the term with k = L in (5.2.2)
survives. The mode k = L corresponds to zL = 1 and λL = 0 according to (5.2.3) and for
t → ∞ we have
P(n, t) \to P(n) = \tilde{P}(L, 0) = \frac{1}{L} .    (5.2.5)
The probabilities in the stationary state become constant, i.e. independent of the position n. This
is not unexpected since it reflects the translational invariance of the problem.
The other eigenvalues λk determine the decay or relaxation times of mode k,
\tau_k := \frac{1}{|\mathrm{Re}(\lambda_k)|} ,    (5.2.6)
i.e. how quickly the stationary state is reached. For finite k and L ≫ 1 one finds

\lambda_k \simeq (p - q)\,\frac{2\pi i k}{L} - (p + q)\,\frac{2\pi^2 k^2}{L^2} .    (5.2.7)
The dynamical critical exponent z describes how the relaxation time increases with the system
size L. It is defined by
\tau_k \sim L^z .    (5.2.8)
This describes e.g. the phenomenon of critical slowing down in Monte Carlo simulations of
systems near a critical point (which you might know from the course on Computational Physics).
Near criticality the system needs a long time to reach the equilibrium state, especially for large
system sizes.
In our case we find that the dynamical critical exponent is given by
z = z_{\text{diffusion}} = 2 .    (5.2.9)
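The spectrum (5.2.3) is easy to verify numerically by diagonalizing the corresponding L × L rate matrix. The following sketch (illustrative values p = 0.7, q = 0.3, L = 12, not part of the notes) builds the matrix directly from the master equation (5.2.1):

```python
import numpy as np

# Rate matrix of the asymmetric random walk on a ring of L sites, built
# directly from the master equation (5.2.1), and comparison of its spectrum
# with the analytic eigenvalues (5.2.3).
p, q, L = 0.7, 0.3, 12
M = np.zeros((L, L))
for n in range(L):
    M[n, (n - 1) % L] += p        # gain from site n-1 (rate p)
    M[n, (n + 1) % L] += q        # gain from site n+1 (rate q)
    M[n, n] -= p + q              # loss term

numeric = np.linalg.eigvals(M)

k = np.arange(1, L + 1)
z = np.exp(2j * np.pi * k / L)
analytic = p * (z - 1) + q * (1 / z - 1)          # eq. (5.2.3)

# every analytic eigenvalue should coincide with one numerical eigenvalue
dist = np.abs(numeric[None, :] - analytic[:, None]).min(axis=1)
print("largest mismatch:", dist.max())
print("slowest relaxation time 1/|Re(lambda_1)|:", 1 / abs(analytic[0].real))
```

Doubling L in this sketch roughly quadruples the slowest relaxation time, in line with z = 2.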
For the random walk on the infinite lattice the master equation (5.2.1) can be solved with the help of the generating function

F(z, t) = \sum_{n=-\infty}^{\infty} \left(\frac{q}{p}\right)^{n/2} P(n, t)\, z^n ,    (5.3.1)

where the factor (q/p)^{n/2} has been introduced for later convenience. By taking the derivative and
using the master equation (5.2.1) one then finds

\frac{\partial F(z, t)}{\partial t} = \left[-(p + q) + \sqrt{pq}\left(z + \frac{1}{z}\right)\right] F(z, t)    (5.3.2)

and thus

F(z, t) = F(z, 0)\, \exp\big(-(p + q)\,t\big)\, \exp\!\left(\sqrt{pq}\left(z + \frac{1}{z}\right) t\right) .    (5.3.3)
The explicit form of the probabilities P(n, t) can be expressed by modified Bessel functions of
the first kind I_n(x) which are defined by their generating function

\exp\!\left[\frac{1}{2}\left(z + \frac{1}{z}\right) u\right] = \sum_{n=-\infty}^{\infty} I_n(u)\, z^n .    (5.3.4)
In terms of these Bessel functions the probability distribution takes the form

P(n, t) = e^{-(p+q)t} \sum_{l=-\infty}^{\infty} \left(\frac{p}{q}\right)^{(n-l)/2} P(l, 0)\, I_{n-l}\big(2\sqrt{pq}\, t\big) .    (5.3.5)
For the special case of a symmetric random walker (p = q = 1/2) starting at the origin n = 0
(i.e. P(n, 0) = \delta_{n,0}) one obtains the simple form

P(n, t) = e^{-t}\, I_n(t) .
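This compact result can be checked against a direct numerical integration of the master equation on a large but finite lattice (an illustrative sketch, not part of the notes; scipy's exponentially scaled Bessel function ive is used for the exact expression):

```python
import numpy as np
from scipy.special import ive   # ive(n, t) = exp(-t) * I_n(t) for t > 0

# Check P(n,t) = exp(-t) * I_n(t) for the symmetric walk (p = q = 1/2) started
# at the origin, against an Euler integration of the master equation (5.2.1)
# on a lattice large enough that the boundaries are never reached.
p = q = 0.5
L, t_final, dt = 201, 5.0, 1e-3            # sites -100..100, origin at index 100
P = np.zeros(L)
P[L // 2] = 1.0
for _ in range(int(t_final / dt)):
    P = P + dt * (p * np.roll(P, 1) + q * np.roll(P, -1) - (p + q) * P)

n = np.arange(L) - L // 2
exact = ive(np.abs(n), t_final)            # I_{-n} = I_n for integer n
print("max |P_numeric - P_exact| :", np.abs(P - exact).max())
```

The remaining deviation is the O(∆t) discretization error of the simple Euler scheme.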
We now ask whether the stationary state of the random walk satisfies the detailed balance condition

w(C \to C')\, P(C) = w(C' \to C)\, P(C')    (5.4.1)

where P(C) is the probability to find the system in state C (given by the Boltzmann distribution
for equilibrium systems) and w(C → C') the transition rates between different states.
We consider the asymmetric random walk on a periodic lattice because of the simple result
(5.2.5) for the probability distribution in the stationary state. It is independent of the position
(which defines the state C of the system), i.e.
P(C) = P(n) = \frac{1}{L} .    (5.4.2)
The transition rates are given by
w(C \to C') = w(n \to n \pm 1) = \begin{cases} p & \text{for } "+" \\ q & \text{for } "-" \end{cases}    (5.4.3)
Since P(C) is constant (and non-zero) it drops out of (5.4.1) and the remaining detailed balance
condition is

w(n \to n \pm 1) = w(n \pm 1 \to n) .    (5.4.4)

Due to (5.4.3) one side of this equation is equal to p, the other to q. Thus the random walk satisfies
the detailed balance condition only in the symmetric case p = q. In the asymmetric case p ≠ q
its stationary state is a nonequilibrium steady state.
Chapter 6
Ratchets
We have seen that diffusion is an isotropic process which cannot generate directed motion.
However, for biological systems directed motion is essential, e.g. in intracellular transport.
Is it possible to build a rectifier for diffusion which leads to a directed transport? The answer to
this question is the so-called ratchet or Brownian ratchet [3]. Before we discuss ratchets for
diffusion in more detail we have a brief look at the famous Feynman ratchet and pawl.
Figure 6.1.1: A ratchet consists of a wheel with asymmetric paddles. The pawl (2) allows its
motion only in one direction. Here clockwise motion is not possible. (from [1])
The full Feynman-Smoluchowski ratchet is shown in Fig. 6.1.2. The ratchet is kept in a heat
bath of temperature T2 and coupled to a paddle wheel in a different heat bath with temperature
T1. It appears as if it were possible to use this machine to extract useful work from heat in thermal
equilibrium and lift the weight m. This would be a violation of the Second Law of Thermodynamics.

1 You might find a ratchet also in your toolbox. It is a special kind of wrench.
The idea behind this is the following. The molecules in the heat baths undergo random Brownian
motion with a mean kinetic energy determined by the bath temperature. The device is assumed
to be small enough that even single collisions with molecules can turn the paddle. These
collisions tend to turn the paddle in both directions with the same probability. However, the
ratchet prevents the motion in one direction. Effectively this appears to lead to a turning of the
system in one direction, lifting the weight in the process.
Figure 6.1.2: The Feynman-Smoluchowski ratchet consists of a paddle wheel (in heat bath of
temperature T1 ) and a ratchet (in heat bath of temperature T2 ). At first sight it appears that this
system can extract useful work (lifting the weight m) from heat (random fluctuations) in a system
in thermal equilibrium. This would imply a violation of the Second Law of Thermodynamics.
(from [1])
Feynman's analysis shows that this is not true and so the Second Law is not violated. Without
going into the details, which can be found in [4], we just mention that one also has to consider
collisions of the molecules with the pawl. These will lift the pawl from time to time, allowing
motion in the "forbidden" direction. Effectively no net rotation arises if the heat baths are at the
same temperature T1 = T2.
Meanwhile there is even an experimental realization of the Feynman-Smoluchowski ratchet
where the paddle and the ratchet are realized by special molecules [5]. The results are in agree-
ment with the above analysis, i.e. no unidirectional motion was observed.
same temperature. The trapdoor is guarded by an (imaginary) demon. He allows only molecules
that are faster than average to move from A to B. So after some time the fast molecules are in
B and the slow ones in A. This implies that the temperature in B is larger than that in A. Since
(apparently) no work had to be spent this would be a violation of the Second Law.
The resolution of this problem is rather subtle. A proper analysis shows that it takes more ther-
modynamic work for the demon to determine the speed of the molecules (he needs some device
for measuring molecular speeds!) than exergy2 can be gained from the resulting temperature
difference.
Figure 6.1.3: Maxwell’s demon (green) opens the trapdoor between the compartments only for
fast molecules (red) moving from the left part to the right. After some time, the temperature in
B is larger than that in A. Without expenditure of work this would be a violation of the Second
Law. (from [1])
then generate an effective current to the left (in general: in the direction of the maximum which
is closer to the original position).
Figure 6.2.1: The flashing ratchet. (a) When the potential is switched on, particles move to
the minima of V(x); (b) after the potential is switched off (V(x, t) = 0) the particles start
diffusing symmetrically; (c) if the potential is switched on again the particles are captured in the
minima again; due to the asymmetry of V(x) more are captured in the minimum to the left than
in that to the right of the original position.
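The flashing-ratchet mechanism can be demonstrated with a simple overdamped Langevin simulation in a sawtooth potential that is periodically switched on and off. The sketch below is purely illustrative (unit mobility, arbitrary parameter values) and is not taken from the lecture notes:

```python
import numpy as np

# Overdamped Brownian particle (unit mobility) in a sawtooth potential of
# period 1 that is switched on and off periodically ("flashing ratchet").
rng = np.random.default_rng(3)
V0, alpha, kT = 10.0, 0.2, 0.5          # barrier height, asymmetry, temperature
dt, t_on, t_off = 2e-4, 1.0, 0.05       # time step, on- and off-intervals
nparticles, ncycles = 500, 20

def force(x):
    """-dV/dx for the sawtooth potential with minima at the integers."""
    u = x % 1.0
    return np.where(u < alpha, -V0 / alpha, V0 / (1.0 - alpha))

x = np.zeros(nparticles)
for _ in range(ncycles):
    for duration, on in ((t_on, True), (t_off, False)):
        for _ in range(int(duration / dt)):
            drift = force(x) * dt if on else 0.0
            x += drift + np.sqrt(2 * kT * dt) * rng.normal(size=nparticles)

print("mean displacement after", ncycles, "cycles:", x.mean())
```

With alpha < 1/2 the nearby potential maximum lies to the right of each minimum and the printed mean displacement comes out positive; for a symmetric potential (alpha = 0.5) it fluctuates around zero, consistent with the mechanism described above.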
tilting to the left (anti-clockwise) by the same angle the particle will move to the left. In this way
a current to the left is generated.
Note that the motion in the rocking ratchet is in the opposite direction of that in the flashing
ratchet: particles move in the direction of the maximum which is farther away from the original
position.
Figure 6.3.1: The rocking ratchet: a) neutral position; b) tilt clockwise: the particle moves to the
right; c) tilt anti-clockwise (same angle as in b)): the particle can not move. Periodic clockwise
- anti-clockwise (i.e. "rocking") tilting leads to a particle motion to the right.
Chapter 7
Universal properties of poker tournaments
In the following we will apply the theoretical framework developed so far to a non-physical
problem, namely the chip distribution in poker tournaments. Similar to equilibrium statistical
physics1 our main interest is in universal properties of this distribution, i.e. those properties
which do not depend on the details of the dynamics. This leads to an important simplification:
we do not need a very detailed model implementing all poker rules. Instead a simpler one which
captures the main features (which is usually the difficult part in modelling!) should be sufficient.
Figure 7.1.1: The initial and final chip distribution in a poker tournament (where player k is the
winner).
winner and has won all the chips. The initial and final chip distribution is shown in Fig. 7.1.1. It
looks just like the time-reversed version for the density distribution of a diffusion starting from a
δ-function initial condition (see also Fig. 2.1.1)! This is already a first indication that a suitable
modification of the methods developed for the description of diffusive processes could also be
applicable to the present problem!
(e.g. 40 − 60 − 100 − 150 − 200 − 300 − 500 − ... every 10-15 minutes). The growth rate t0
is an important quantity as it allows one to control the duration of the tournament: if t0 is small the
increase is faster and the tournament will be over more quickly. Typical values for the initial
blind b0 at the start of the tournament depend on the initial number of chips x0 and are of
the order x0/b0 ∼ 50 − 100.
Figure 7.2.1: The (simplified) bidding dynamics used in the model. Green numbers indicate the
order in which players put their bets. The red dot indicates the position of the blind which is a
forced minimum bet in order to make the game more interesting.
Since the value of a hand depends on the probability of its occurrence it is natural to represent
it by just a probability c ∈ [0, 1]. The full model for poker tournaments is then defined by the
following rules where t is the time since the beginning of the tournament:
1. The blind bets b(t) = b_0\, e^{t/t_0} .
2. All players receive a hand represented by a random number c which is equally distributed
in [0, 1].
3. At each table, according to the bidding scheme in Fig. 7.2.1, players bet b chips with
probability e(c) if 0 < c < c0 . This corresponds to a situation where the player considers
his hand not to be too good, with the critical hand value c0 being the boundary between
not-so-good and good hands. e(c) is an evaluation function which is expected to be monotonically
increasing: players will play good hands (i.e. larger c) more often than bad ones (i.e.
smaller c). The precise form3 of e(c) turns out to be of no importance for the universal
properties [2] we are interested in. Therefore we use
e(c) = c^n    (7.2.2)
where n is the number of players that have already placed bets, including the blind. There-
fore n ≥ 1. This makes sense because now e(c) can be interpreted as the probability that
c is highest among n + 1 random cards. It captures the fact that players are more careful
betting on bad hands if already some other players have placed bets before them.
4. The first player who has a hand with value larger than the critical hand value c0 (i.e. c > c0 )
goes all-in, i.e. bets all the chips he has left. Thus the probability that a player goes all-in
is given by q := 1 − c0 .
3
The determination of the optimal form of e(c) is of course a problem of great practical importance for any poker
player. It is extremely difficult for T > 2 and is studied within the field of game theory.
5. The following players, including the blind, can also go all-in if their c > c0 . Otherwise
they will pass. In case they have less chips than the first all-in player they only bet this
amount.
6. The highest c wins all the pot, i.e. the money bet by all the players in that game. In case
nobody has placed a bet, the blind receives his bet b back.
7. Players who have no chips left are eliminated from the tournament.
8. After each round players are redistributed in such a way that the total number of tables is
as small as possible, i.e. it is [N (t)/T ] + 1 where [.] denotes the integer part.
The model contains a number of parameters which we collect in the following table.
parameter interpretation
N0 initial number of players
N(t) number of players left at time t
x0 initial number of chips of each player
T maximum number of players at each table
b0 initial blind
t0 growth rate of blind
b(t) blind at time t
e(c) evaluation function
c0 critical hand value
q = 1 − c0 all-in probability
• only three types of bids are possible: pass, bid the value of the blind or go all-in, i.e. the
allowed bids for each player are 0, b(t) and x(t);
• there are no multiple bidding rounds, i.e. players can not increase their previous bids;
• it is not considered that for good players the evaluation function is time-dependent since
during a tournament they ”learn” to interpret the actions of their opponents and change
strategy accordingly;
• the element of "bluffing", i.e. placing high bets with a bad hand, is completely neglected in
the model.
and compare these results. The comparison with the empirical data will tell us if the assumptions
made in the derivation of the model are valid or not.
In each game the number of chips x(t) of a player changes according to

x(t + 1) = x(t) + \epsilon(t)\, b(t) ,    (7.3.1)

where \epsilon(t) can be interpreted as a win-loss function: for \epsilon(t) = -1 the player loses the blind
b(t), for \epsilon(t) = 1 he wins b(t). The average of \epsilon(t) is assumed to be \overline{\epsilon} = 0 since there is no
winning strategy in the mathematical sense for poker and all players are assumed to be equal.
The term \epsilon(t)\, b(t) in (7.3.1) can be seen in analogy to the random collisions of a Brownian
particle. However, here the strength of the collision term is time-dependent.
Since we neglect learning effects of the players, consecutive bids can be considered to be statis-
tically independent. Therefore we deal with a Markov process (without memory).
If a typical value of x, which is of the order of the average \overline{x}(t), is much larger than the
value b(t) of the blind, we can take the continuum limit (in time t) of (7.3.1) which leads to the
Langevin equation

\frac{dx}{dt} = \sigma\,\eta(t)\, b(t)    (7.3.2)

where \sigma^2 = \overline{\epsilon^2} and the noise correlations are \langle\eta(t)\eta(t')\rangle = \delta(t - t'). This describes the evolution of a generalized
Brownian particle (overdamped motion).
For the derivation of (7.3.2) we take the timestep in (7.3.1) to be \Delta t instead of 1 and then take
the limit \Delta t \to 0. Taking into account x(t + \Delta t) - x(t) \approx \frac{dx}{dt}\,\Delta t we arrive at (7.3.2) with
\sigma\,\eta(t)\,\Delta t = \epsilon(t).
The number P(x, t) of surviving players with x chips then evolves according to a Fokker-Planck
equation

\frac{\partial P}{\partial t} = \frac{\sigma^2 b^2(t)}{2}\,\frac{\partial^2 P}{\partial x^2}    (7.3.3)

4 Note that x(t) ≥ 0.
which can be derived in analogy to Sec. 4. The term ∂P/∂x is missing since the motion here is
overdamped. The Fokker-Planck equation (7.3.3) has to satisfy the absorbing boundary condition

P(x = 0, t) = 0    (7.3.4)

since only the surviving players are considered and eliminated players should have no
influence on the dynamics. In addition we have the initial condition

P(x, 0) = N_0\, \delta(x - x_0) .    (7.3.5)
The Fokker-Planck equation (7.3.3) can be solved, e.g. using the method of images, and its solution is given
by

P(x, t) = \frac{N_0}{\sqrt{2\pi\tau(t)}}\left[e^{-\frac{(x - x_0)^2}{2\tau(t)}} - e^{-\frac{(x + x_0)^2}{2\tau(t)}}\right]    (7.3.6)
where we have introduced

\tau(t) = \frac{1}{2}\,\sigma^2 b_0^2\, t_0\left(e^{2t/t_0} - 1\right) .    (7.3.7)
For large times t, i.e. \tau(t) \gg x_0^2, the distribution of chips becomes scale invariant, i.e. it can be written in
the form

P(x, t) = \frac{N(t)}{\overline{x}(t)}\, f\!\left(\frac{x}{\overline{x}(t)}\right)    (7.3.8)

where the density of surviving players is given by

\frac{N(t)}{N_0} = \frac{2x_0}{\sqrt{\pi t_0}\,\sigma b_0}\, e^{-t/t_0} .    (7.3.9)
This result has an interesting interpretation: according to (7.3.9) the decay constant for the number
of players is t0, i.e. the same as the growth rate of the blinds! Thus indeed t0 controls the length of the
tournament as we had originally claimed. Note that the equality of the decay constants for the
blind and the number of surviving players is a specific prediction of the model for q = 0 which can
be tested (in principle) with empirical data!
The total duration of a tournament t_f, obtained by taking N(t_f) = O(1) in (7.3.9), is

\frac{t_f}{t_0} \simeq \ln(N_0) - \frac{1}{2}\ln(t_0) + \ln\!\left(\frac{x_0}{b_0}\right) .    (7.3.10)
It only grows logarithmically with the number of players and the ratio x0 /b0 .
The average number of chips per surviving player is given by

\overline{x}(t) = x_0\,\frac{N_0}{N(t)}    (7.3.11)

since N_0 x_0 is the total number of chips. Explicitly one finds
\overline{x}(t) = \frac{\sqrt{\pi t_0}\,\sigma}{2}\, b(t) ,    (7.3.12)
i.e. the average number of chips is proportional to the blind. For t_0 \gg 1 one has \overline{x}(t)/b(t) \gg 1,
which was just the assumption made in the derivation of the continuum approach. This supports
the consistency of our analysis.
The normalized chip distribution

f(X) = \overline{x}(t)\,\frac{P(x, t)}{N(t)} \qquad\text{with}\qquad X = \frac{x}{\overline{x}(t)}    (7.3.13)

is given by

f(X) = \frac{\pi}{2}\, X\, e^{-\pi X^2/4} , \qquad F(X) = \int_0^X f(Y)\, dY = 1 - e^{-\pi X^2/4}    (7.3.14)
which is the well-known Wigner distribution occurring e.g. in quantum chaos or in the distribution
of nuclear energy levels. This form is truly universal as it is independent of all microscopic
parameters x_0, t_0, b_0, . . .
The results obtained from computer simulations of the full model show a very good agreement
with the prediction (7.3.14) by our approximate analytical theory, see Fig. 7.3.1.
Figure 7.3.1: Normalized chip distribution f (X) and its cumulative sum F (X) for q = 0 ob-
tained from computer simulations (full lines) with N0 = 10000, t0 = 2000, x0 /b0 = 100
averaged over 10000 tournaments. The distributions have been extracted at times for which
N(t)/N0 = 0.5, 0.3 and 0.1. For comparison, the Wigner distribution (7.3.14) of the approximate
analytical theory is shown (broken lines). It shows almost perfect agreement. (from [2])
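The scaling form (7.3.14) can also be reproduced with a few lines of Monte Carlo based directly on the single-player description (7.3.1) with q = 0 (every player independently wins or loses the blind with equal probability, players at zero chips are removed). This is an illustrative sketch with parameters similar to those of Fig. 7.3.1, not the simulation code used in [2]:

```python
import numpy as np

# Monte Carlo of the reduced q = 0 dynamics (7.3.1): in every game each
# surviving player independently wins or loses the blind b(t) = b0*exp(t/t0);
# players whose stack drops to zero (or below) are removed.
rng = np.random.default_rng(4)
N0, x0, b0, t0 = 5000, 100.0, 1.0, 2000.0

x = np.full(N0, x0)
t = 0.0
while x.size > N0 // 2:                   # run until half the field is gone
    x += b0 * np.exp(t / t0) * rng.choice([-1.0, 1.0], size=x.size)
    x = x[x > 0]
    t += 1.0

X = x / x.mean()
print("<X^2> =", (X**2).mean(), "    Wigner: 4/pi =", 4 / np.pi)
print("P(X < 1) =", (X < 1).mean(), "  Wigner: 1 - exp(-pi/4) =", 1 - np.exp(-np.pi / 4))
```

The printed moments should be close to the Wigner values once the blind has grown well beyond its initial value, i.e. once the scaling regime of (7.3.8) has been reached.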
all-in! If only one player goes all-in this is equivalent to betting just b(t) because he can not win
more than the blind from each other player.
We consider a fixed table with T players and assume q to be small. Then the probability for an
all-in process (i.e. at least two players going all-in) is

P_{\text{all-in}} = \frac{T(T - 1)}{2}\, q^2 + O(q^3) .    (7.3.15)
2
The factor q 2 is the probability that two players go all-in and T (T − 1)/2 the number of possible
pairs out of the T players. Processes where more than two players bet all their money are of
order q 3 and will be neglected here (since q is assumed to be small).
Each of the two all-in players wins with probability 1/2. Therefore the player with fewer chips
loses with probability 1/2 and gets eliminated from the tournament. Thus we can determine the
decay rate for the number of players due to all-in processes (averaged over all N/T tables):

\left.\frac{dN}{dt}\right|_{\text{all-in}} = -\frac{1}{2}\cdot\frac{N}{T}\cdot P_{\text{all-in}} =: -\frac{N}{t_{\text{all-in}}} \qquad\text{with}\qquad t_{\text{all-in}} = \frac{4}{q^2\,(T - 1)} .    (7.3.16)
Sire [2] argues that the optimal choice for t_{\text{all-in}} (and hence for q) is such that the decay due
to all-in processes is the same as t_0, i.e. the decay induced by chip fluctuations due to non-all-in
processes (which are of order b(t)). The total decay rate will be shown to remain equal to
t_0^{-1}. One then has t_{\text{all-in}} = 2t_0 since the inverse decay rates add up. The choice for t_{\text{all-in}} can be
understood as follows:
• For tall-in < 2t0 the game would be dominated by all-in processes. Then x(t) quickly
becomes large compared to b(t). In such a situation the first player to go all-in takes the
risk of being eliminated (if another player also goes all-in) just to win a negligible blind.
• For tall-in > 2t0 players with few chips will often go all-in because they have a good chance
to double their chips.
One would expect that real poker players, on average, self-adjust their q to the optimal value.
Thus q is no longer a free parameter of the model, but takes the value
q = \sqrt{\frac{2}{(T - 1)\, t_0}} .    (7.3.17)
We now look at the time evolution of the number P(x, t) of remaining players:

\frac{\partial P}{\partial t} = \frac{\sigma^2 b^2(t)}{2}\cdot\frac{\partial^2 P}{\partial x^2} + \frac{2}{t_0}\big(K(P) - P\big) ,    (7.3.18)

where the first term comes from non-all-in processes with pots of the order b(t) (see equation
(7.3.3)). The second term comes from the all-in processes; the factor 2/t_0 is the rate of
all-in processes involving the player considered. We have introduced the all-in kernel
K(P) := \frac{1}{4}\, P(x/2) \int_{x/2}^{\infty} \frac{P(y)}{N}\, dy + \frac{1}{2}\int_0^{x/2} P(x - y)\,\frac{P(y)}{N}\, dy + \frac{1}{2}\int_0^{\infty} P(x + y)\,\frac{P(y)}{N}\, dy .    (7.3.19)
The three terms have the following interpretation: the first term describes the doubling of chips
(x/2 → x) by winning against a player who has more chips (> x/2). The second term comes
from a win against a player with fewer chips, who then gets eliminated from the tournament.
Finally, the last term originates in processes where the player loses against a player who has
fewer chips. Note that the factor 1/N has to be included since it is assumed that the player only
has one opponent who goes all-in.
Integrating (7.3.19) over all x we find

\int K(P)\, dx = \frac{3}{4}    (7.3.20)
since the first two terms add up to 1/2 which is the probability to win in an all-in with two players.
Thus the probability to stay in the tournament after an all-in event is 3/4. We can also determine
the decay rate due to all-in processes. Since the rate of all-in processes is 2/t_0 it is given by

\left(1 - \frac{3}{4}\right)\frac{2}{t_0} = t_{\text{all-in}}^{-1} .    (7.3.21)
This is consistent with the assumption made above.
In order to find a solution for (7.3.19) we make the Ansatz (of scaling form)

P(x, t) = \frac{\lambda}{\hat{x}^2(t)}\, f\!\left(\frac{x}{\hat{x}(t)}\right) .    (7.3.22)

The integral of f is assumed to be normalized to 1, so that

N(t) = \frac{\lambda}{\hat{x}(t)} .    (7.3.23)
Inserting this Ansatz into (7.3.19) one obtains an integro-differential equation for which no
solution is known. However, the asymptotic behaviour for small and large X = x/\hat{x}(t) is:

f(X) \sim \frac{X}{2} \qquad\text{for } X \ll 1 ,    (7.3.24)

f(X) \sim 2\mu\, e^{-\mu X} \qquad\text{for } X \gg 1 ,    (7.3.25)

where \mu \approx 1.562. Thus the scaling function decays more slowly than in the case q = 0. The result is
in good agreement with computer simulations (Fig. 7.3.2).
Finally one can determine the average number L_{N_0} of chip leaders⁵ during a tournament with N_0
players. It is found that for both q = 0 and q > 0 it grows logarithmically,

L_{N_0} \propto \ln N_0 ,    (7.3.26)

which is a behaviour well known from other models, e.g. models for evolution.

5 The chip leader is the player with the largest number of chips.
Figure 7.3.2: Normalized chip distribution f (X) and its cumulative sum F (X) for q > 0 ob-
tained from computer simulations (full lines). The other parameters are the same as in Fig. 7.3.1.
For comparison, the numerical solution of the integro-differential equation for f (X) is shown
(dashed lines). The circles are data from 20 real poker tournaments. (from [2])
Figure 7.3.3: The average number of chip leaders LN0 grows logarithmically as a function of N0 .
Full symbols correspond to the case q = 0, open ones to q > 0. (from [2])
Chapter 8
Master equation
We have already seen that the master equation can be a useful tool to determine the time evolution
of systems governed by stochastic dynamics. In the following we will introduce the general
formalism and consider a few examples.
Figure 8.1.1: Interacting random walkers. Particles can move to the left and right neighbour sites, but
only if these are not occupied by another particle.
The time evolution of the probability P(n, t) is governed by the master equation

\frac{\partial P(n, t)}{\partial t} = \sum_{n'} w(n' \to n)\, P(n', t) - \sum_{n'} w(n \to n')\, P(n, t)    (8.1.1)
which is a rate equation taking into account the number of transitions per time between different
states n, n'. The interpretation of (8.1.1) is basically the same as that given for (5.2.1). The first
sum collects the gain terms, i.e. all transitions from states n' to the state n, whereas the second
sum contains the loss terms, i.e. transitions from n to some other state n'.
Master equations of the form (8.1.1) are valid for dynamical systems with a continuous time
variable t. This can not directly be realized in computer simulations which usually proceed in
discrete steps. Therefore continuous time dynamics is usually realized by the so-called random-
sequential update. Here in each step a site j is chosen randomly (with equal probability) and
then its state is updated according to the transition rules. Note that this usually requires the
update of a neighbouring site (the target site of a moving particle). This is repeated over and over.
The master equation (8.1.1) can be written in a more compact form by introducing the matrix M ,
sometimes called Markov matrix
M_{n,n'} = w(n' \to n) - \delta_{nn'} \sum_{n''} w(n \to n'') .    (8.1.2)
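In this notation the master equation takes the compact form \partial_t P = M P, whose formal solution is the matrix exponential P(t) = e^{Mt} P(0). A minimal numerical sketch for a three-state system with arbitrarily chosen rates (illustrative only, not from the notes):

```python
import numpy as np
from scipy.linalg import expm

# Markov matrix (8.1.2) for a small example with rates w[i, j] = w(j -> i),
# and time evolution of the probability vector via the matrix exponential.
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])          # off-diagonal transition rates (illustrative)
M = w - np.diag(w.sum(axis=0))           # subtract the total outflow on the diagonal

print("column sums (probability conservation):", M.sum(axis=0))

P0 = np.array([1.0, 0.0, 0.0])           # start in state 0
for t in (0.1, 1.0, 10.0):
    P = expm(M * t) @ P0
    print(f"t = {t:5.1f}:  P = {P},  sum = {P.sum():.6f}")
```

The vanishing column sums of M guarantee that the total probability stays equal to 1 at all times.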
For discrete time dynamics the master equation takes the form

P(n, t + \Delta t) = \sum_{n'} W(n' \to n)\, P(n', t) ,    (8.1.5)

where now W(n' → n) are no longer transition rates (with the dimension of an inverse time),
but dimensionless transition probabilities.
The interpretation of (8.1.5) is rather straightforward. The sum on the right-hand side runs over
all possible states n' at time t. The probability that the state is changed from n' to n in the
timestep t → t + ∆t is then proportional to the probability P(n', t) of finding the system in state
n' times the transition probability W(n' → n). Probability conservation implies

\sum_{n'} W(n \to n') = 1    (8.1.6)
since the state n has to evolve into some state during the timestep ∆t.
One would expect that in the limit of small timesteps ∆t → 0 one recovers the continuous time
master equation (8.1.1). This will be checked now explicitly.
First we expand
P(n, t + \Delta t) \approx P(n, t) + \Delta t\, \frac{\partial P(n, t)}{\partial t}    (8.1.7)
which we then use in
where in the second step we have used the discrete master equation (8.1.5) and the probability
conservation (8.1.6) (the term in [. . .] is equal to 1). By comparison with the continuous time
master equation (8.1.1) we find a relation between the transition rates in the continuous case and
the transition probabilities in the discrete case:
w(n' \to n) = \lim_{\Delta t \to 0} \frac{W(n' \to n)}{\Delta t} .    (8.1.9)
Discrete time dynamics is easy to realize in computer simulations through the so-called parallel
(or synchronous) update. In contrast to the random-sequential update the dynamical variables
are not updated in random order, but all at the same time2 .
2
Since a computer program usually works sequentially this requires some tricks when writing the computer code.
The quantity P1 (Cn , tn |Cn−1 , tn−1 ) is just the transition probability. Thus a Markov process is
fully determined by the functions P (Cn , t) and P1 (Cn , tn |Cn−1 , tn−1 ).
The Chapman-Kolmogorov equation3 is now given by
P_1(C_3, t_3|C_1, t_1) = \sum_{C_2} P_1(C_3, t_3|C_2, t_2)\, P_1(C_2, t_2|C_1, t_1) \qquad (t_1 < t_2 < t_3) .    (8.2.2)
It expresses the fact that a stochastic process starting in state C1 at time t1 reaches the state C3 at
time t3 via any one of the possible intermediate states C2 at time t2 (Fig. 8.2.1).
Figure 8.2.1: Illustration of the Chapman-Kolmogorov equation. The stochastic process starting
in state C1 at time t1 can reach the state C3 at a later time t3 via any of the possible intermediate
states C2 at time t2 .
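For a time-homogeneous process with Markov matrix M the transition probabilities are given by the matrix exponential, and (8.2.2) then reduces to the semigroup property of e^{Mt}. A small numerical check (illustrative sketch with arbitrary rates, not from the notes):

```python
import numpy as np
from scipy.linalg import expm

# Chapman-Kolmogorov check for a time-homogeneous Markov process: the
# transition matrix from t1 to t3 equals the product of the transition
# matrices t2 -> t3 and t1 -> t2 (equation (8.2.2), summed over C2).
rng = np.random.default_rng(5)
n = 4
w = rng.random((n, n))
np.fill_diagonal(w, 0.0)                      # arbitrary transition rates
M = w - np.diag(w.sum(axis=0))                # Markov matrix (8.1.2)

t1, t2, t3 = 0.0, 0.7, 1.9
direct  = expm(M * (t3 - t1))
chained = expm(M * (t3 - t2)) @ expm(M * (t2 - t1))
print("max |direct - chained| :", np.abs(direct - chained).max())
```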
one can derive the master equation from the Chapman-Kolmogorov equation. First we expand
for small ∆t
with some function w(C0 → C), the transition rate. The δ-function expresses the fact that the
probability to stay at the same state has to be 1 for ∆t = 0.
Equation (8.2.4) needs to be slightly corrected in order to satisfy the normalization condition
\sum_{C} P(C, C_0, t) = 1    (8.2.5)
which becomes clear when remembering that P (C, C0 , t) is actually the conditional probability
P1 (C, t|C0 , t0 ).
Equations (8.2.4) and (8.2.5) then imply that
with

\alpha(C_0) = \sum_{C} w(C_0 \to C) .    (8.2.7)
If we now insert (8.2.6) into the Chapman-Kolmogorov equation (8.2.2) (keeping (8.2.3) in mind!)
and taking the limit ∆t → 0 we obtain

\frac{\partial P}{\partial t}(C_3, C_1, t) = \sum_{C_2}\Big[w(C_2 \to C_3)\, P(C_2, C_1) - w(C_3 \to C_2)\, P(C_3, C_1)\Big] .    (8.2.8)
This condition implies that for all pairs of states n, n' the transitions from n' to n per unit time must
balance the transitions from n to n', i.e. there is no probability current flowing between these
states. The system then can not be distinguished from its time-reversed variant. Therefore it is
clear that not all stationary states satisfy detailed balance! We have already argued in Sec. 1.1
that it is the characteristic property of systems in equilibrium.
The detailed balance condition in the form (8.3.3) is not very practical! In order to check whether
it is satisfied or not one already needs to know the exact stationary state P(n). This problem
is solved by the Kolmogorov criterion. It states that a stochastic process fulfills the detailed
balance condition (8.3.3) iff for all cycles n_1 → n_2 → . . . → n_k → n_1 in configuration space
the transition rates satisfy

w(n_1 \to n_2)\, w(n_2 \to n_3)\cdots w(n_k \to n_1) = w(n_1 \to n_k)\, w(n_k \to n_{k-1})\cdots w(n_2 \to n_1) .    (8.3.4)

If detailed balance holds, then

P(n_2) = \frac{w(n_1 \to n_2)}{w(n_2 \to n_1)}\, P(n_1) .    (8.3.6)

Iterating this procedure yields

P(n_k) = \frac{w(n_1 \to n_2)\cdots w(n_{k-1} \to n_k)}{w(n_2 \to n_1)\cdots w(n_k \to n_{k-1})}\, P(n_1) .    (8.3.7)
w(n2 → n1 ) · · · w(nk → nk−1 )
Since the weight (probability) P (nk ) should be uniquely determined, it has to be independent of
the path taken from n1 to nk , i.e. especially we have
P(n_k) = \frac{w(n_1 \to n_k)}{w(n_k \to n_1)}\, P(n_1) .    (8.3.8)
Combining (8.3.7) and (8.3.8) gives the criterion (8.3.4).
5 Weight here means an unnormalized probability.
We have already argued that detailed balance is characteristic for systems in thermal equilib-
rium. Therefore it should be possible to show that the stationary P (nj ) have in fact the form of
Boltzmann factors. This requires the definition of a suitable energy function E(n):
E(n') - E(n) := \frac{1}{\beta}\, \ln\frac{w(n' \to n)}{w(n \to n')} .    (8.3.9)
Using (8.3.8) this indeed shows that the stationary probabilities follow a Boltzmann distribution
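Relation (8.3.9) is easily illustrated numerically: choose rates that satisfy detailed balance with respect to a given energy landscape (here Metropolis rates, one possible choice that is not prescribed by the notes) and check that the stationary state of the Markov matrix is the Boltzmann distribution and that the energy differences can be read off the rates:

```python
import numpy as np

# Rates chosen to satisfy detailed balance for given energies (Metropolis-type
# rates, one possible choice), plus a check of the stationary distribution.
rng = np.random.default_rng(6)
beta = 1.3
E = rng.random(5)                                   # arbitrary energy landscape

w = np.zeros((5, 5))                                # w[i, j] = w(j -> i)
for i in range(5):
    for j in range(5):
        if i != j:
            w[i, j] = min(1.0, np.exp(-beta * (E[i] - E[j])))   # Metropolis rate

M = w - np.diag(w.sum(axis=0))                      # Markov matrix (8.1.2)
vals, vecs = np.linalg.eig(M)
P_stat = np.real(vecs[:, np.argmin(np.abs(vals))])
P_stat /= P_stat.sum()

P_boltzmann = np.exp(-beta * E) / np.exp(-beta * E).sum()
print("stationary state :", P_stat)
print("Boltzmann        :", P_boltzmann)
# energy differences recovered from the rates via (8.3.9):
print("E[1]-E[0] from rates:", np.log(w[0, 1] / w[1, 0]) / beta, "  direct:", E[1] - E[0])
```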
In some situations where the strong condition (8.3.3) of detailed balance does not hold, the
weaker pairwise balance condition (sometimes called dynamical reversibility) might be fulfilled.
It requires that for each pair of states n_1 and n_2 a unique state n_3 exists so that

w(n_1 \to n_2)\, P(n_1) = w(n_2 \to n_3)\, P(n_2) .    (8.3.11)
Systems satisfying pairwise balance are not in equilibrium. However, due to the rather sim-
ple structure of the condition some general conclusion about such system can be drawn. The
differences between the conditions (8.3.3) and (8.3.11) are explained graphically in Fig. 8.3.1.
Consider the ten configurations in Fig. 8.3.1a) labelled by the integers 1, . . . , 10. If the only transitions
into and out of the configuration 1 are those shown in Fig. 8.3.1, then dP_1/dt = 0 provided
\sum_{j=2,\ldots,6} w(j \to 1)\, P_j = \sum_{j=7,\ldots,10} w(1 \to j)\, P_1. In other words, for any arbitrary configuration
i, stationarity of P_i is satisfied if merely the total probability current into the configuration i is
exactly balanced by the total probability current out of i.
In contrast, the equilibrium, which is a special stationary state, requires detailed balance (see
Fig. 8.3.1b): the net forward and reverse probability currents between the members of each pair
of configurations must balance exactly. Clearly this is a much stronger condition than what was
needed to guarantee a non-equilibrium stationary state. Finally, for many models pairwise
balance is also satisfied by the non-equilibrium stationary state (see Fig. 8.3.1c): the probability
current from configuration 1 to 2 is exactly balanced by that from 2 to 3 so that dP_2/dt = 0.
Similarly, the probability current from 2 to 3 is equal to that from 3 to 1, and so on.
Chapter 9
Quantum formalism
In the following we will develop the formalism for the master equation a little bit further. By in-
troducing a suitable notation certain similarities with quantum mechanics are revealed. This will
later allow methods originally developed for quantum systems to be applied to stochastic
systems as well.
The state of the system at time t is encoded in the probability vector
|P(t)⟩ = Σ_n P(n, t) |n⟩ ,     (9.1.1)
where the coefficients of the basis vectors |n⟩ are the probabilities to find the system in the corresponding basis state. As in quantum mechanics, it is sometimes useful to interpret (9.1.1) as a column vector
|P(t)⟩ = ( P(0,0,...,0), P(0,0,...,1), ... , P(1,1,...,1) )^T .     (9.1.2)
The matrix elements of the stochastic Hamiltonian H are given by
⟨n′|H|n⟩ = −w(n → n′) + δ_{n′,n} Σ_{m≠n} w(n → m) ,     (9.1.3)
i.e. up to an overall sign the stochastic Hamiltonian is exactly the Markov matrix introduced in
(8.1.2). The off-diagonal elements of H have to be non-positive and, as for the Markov matrix, the sum of the elements in each column is 0 due to probability conservation, i.e.
the stochastic Hamiltonian is also a stochastic matrix.
Using this notation, the master equation (8.1.1) can be rewritten as
∂/∂t |P(t)⟩ = −H |P(t)⟩ ,     (9.1.4)
which has the form of a Schrödinger equation in imaginary time (footnote 1).
Despite these similarities with standard quantum mechanics, one important difference has to be
emphasized: the stochastic Hamiltonian H is in general not hermitian, as can be seen from the
definition (9.1.3). This is a consequence of the fact that we are dealing with nonequilibrium
systems and will have important implications.
As an example consider the asymmetric random walk of a single particle on a ring of L sites, hopping to the right with rate p and to the left with rate q. Its stochastic Hamiltonian is
H = \begin{pmatrix}
p+q & -q & 0 & \cdots & -p \\
-p & p+q & -q & & 0 \\
0 & -p & \ddots & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -q \\
-q & 0 & \cdots & -p & p+q
\end{pmatrix}     (9.1.5)
As one can easily see, the stochastic Hamiltonian of the asymmetric random walk (p ≠ q) is not
hermitian.
Footnote 1: Replacing t → it brings it into the standard Schrödinger form.
The matrix (9.1.5) has a special form: its matrix elements are constant along each diagonal, i.e.
they depend only on the difference of the indices (a_{ij} = a_{i−j}). Such matrices are
called Toeplitz matrices. Since it is also cyclic, the stochastic Hamiltonian of the random walk
belongs to the class of circulant matrices.
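A minimal sketch of this matrix (with assumed example values for p, q and L) makes the stated properties explicit, namely vanishing column sums and non-hermiticity for p ≠ q:

```python
import numpy as np

# Stochastic Hamiltonian (9.1.5) of the asymmetric random walk on a ring of L sites.
L, p, q = 8, 0.7, 0.3                         # assumed example values
H = np.zeros((L, L))
for j in range(L):
    H[j, j] = p + q
    H[(j + 1) % L, j] = -p                    # hop j -> j+1 with rate p
    H[(j - 1) % L, j] = -q                    # hop j -> j-1 with rate q

print("column sums vanish:", np.allclose(H.sum(axis=0), 0.0))
print("H is hermitian:", np.allclose(H, H.conj().T))   # False for p != q
```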
In the stationary state the probability vector is time-independent, so that (9.1.4) reduces to
H |P_0⟩ = 0 ,     (9.2.1)
which implies that the stationary state |P_0⟩ is a (right) eigenvector of the stochastic Hamiltonian H
with eigenvalue 0.
Does such an eigenvector exist? To prove this we first define a (bra-) vector
⟨0| := Σ_n ⟨n| .     (9.2.2)
In matrix notation this vector corresponds to the row vector (1, 1, . . . , 1). It is now easily
checked that
⟨0| H = 0 ,     (9.2.3)
since the sum of each column of H is 0. In other words: the vector ⟨0| is a left eigenvector of the
stochastic Hamiltonian with eigenvalue 0. Thus a corresponding right eigenvector |P_0⟩ also has
to exist. Since generically H is not hermitian, these vectors are not conjugates of each other, i.e.
in general we have
|P_0⟩ ≠ (⟨0|)^† .     (9.2.4)
Thus indeed a stationary state exists.
If in addition some further conditions are satisfied one can even show that the stationary state is
unique. Then the system is ergodic which is a very useful property since e.g. ensemble averages
are the same as time averages in the stationary state. We do not give a full mathematical characterization here. The main requirement is that the configuration space cannot be decomposed
into subspaces between which no transitions are possible.
What can be said about the other eigenvalues of the stochastic Hamiltonian H? The special
properties of a stochastic matrix allow us to draw some rather general conclusions. These are
based on Gershgorin's theorem, which states that the eigenvalues λ_j of a complex n × n matrix
A = (a_{ij}) lie in the union of the disks
D_j = { z ∈ C : |z − a_{jj}| ≤ Σ_{l≠j} |a_{jl}| } ,     (9.2.5)
i.e. a disk of radius Σ_{l≠j} |a_{jl}| around a_{jj} (see Fig. 9.2.1).
For a stochastic matrix one has Σ_{l≠j} |a_{lj}| = −Σ_{l≠j} a_{lj} = a_{jj}, since all off-diagonal elements
are non-positive and the sum of each column is 0. Thus Gershgorin's theorem, applied to the columns (which is possible since A and A^t have the same eigenvalues, see footnote 2), tells us that the
eigenvalues lie within the disks D_j = {z ∈ C : |z − a_{jj}| ≤ a_{jj}}. In particular this means that
Re(λ_j) ≥ 0 .     (9.2.6)
This means that all eigenvalues of a stochastic matrix have a non-negative real part. We will later
see that this has implications for the physics since the real parts determine the relaxation
times (see also Sec. 5.2).
Thus we can summarize the general properties of the eigenvalue spectrum of a stochastic matrix:
the eigenvalue E_0 = 0 exists and all other eigenvalues E_j can be complex but have a non-negative
real part, Re(E_j) ≥ 0.
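These spectral properties can be checked numerically for an arbitrary (here randomly generated, purely illustrative) set of rates:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# Random transition rates (assumed for illustration), w[n, m] = rate n -> m.
w = rng.random((N, N))
np.fill_diagonal(w, 0.0)

# Stochastic Hamiltonian: H[m, n] = -w[n, m] for m != n, H[n, n] = sum_m w[n, m].
H = -w.T + np.diag(w.sum(axis=1))

ev = np.linalg.eigvals(H)
print("smallest real part (should be ~0):", ev.real.min())
print("all eigenvalues have Re >= 0:", np.all(ev.real >= -1e-12))
```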
In principle one can also show that the elements of the eigenvector to the eigenvalue E0 = 0 are
all non-negative so that they can be interpreted as probabilities. This will be done below for the
case of discrete time. The present case is then included in the limit of ∆t → 0.
Footnote 2: Note that A and A^t have the same eigenvalues.
of some observable A(n) at time t. E.g. A(n) = n_j would give the probability to find a particle
at site j if n_j ∈ {0,1} is the occupation number of site j.
Using the left eigenvector ⟨0| we can rewrite this expectation value in the quantum formalism as
⟨A⟩(t) = ⟨0| A |P(t)⟩ ,
where on the right-hand side A denotes the diagonal operator with entries A(n).
Note that this is different from quantum mechanics where averages are given by ⟨ψ|A|ψ⟩. In the
stochastic quantum formalism the elements of the vector |P(t)⟩ are already probabilities, whereas
in quantum mechanics the square of the modulus is interpreted as a probability density.
Expanding the initial state |P(0)⟩ = Σ_λ a_λ |P_λ⟩ in terms of the eigenvectors |P_λ⟩ with eigenvalues E_λ of H, the expectation values can be rewritten as
⟨A⟩(t) = ⟨0| A e^{−Ht} |P(0)⟩ = Σ_λ a_λ e^{−E_λ t} ⟨0| A |P_λ⟩ .     (9.3.4)
This shows that the behavior for large times t is governed by the low-lying excitations, i.e. the
eigenvalues with the smallest real parts Re(E_λ). These determine the leading relaxation times τ_λ
through
τ_λ^{−1} = Re(E_λ) ,     (9.3.5)
provided that the corresponding matrix element ⟨0| A |P_λ⟩ is non-vanishing.
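As a sketch of this relaxation behaviour, one can propagate an initial distribution with e^{−Ht} for the random-walk Hamiltonian (9.1.5) (parameter values assumed) and compare the decay towards the uniform stationary state with the time scale set by the smallest non-zero Re(E_λ):

```python
import numpy as np
from scipy.linalg import expm

# Asymmetric random walk Hamiltonian (assumed example parameters).
L, p, q = 6, 0.7, 0.3
H = np.zeros((L, L))
for j in range(L):
    H[j, j] = p + q
    H[(j + 1) % L, j] = -p
    H[(j - 1) % L, j] = -q

ev = np.linalg.eigvals(H)
gap = np.sort(ev.real)[1]          # smallest non-zero real part
print("leading relaxation time 1/Re(E_1) =", 1.0 / gap)

# Relaxation of an initially localized distribution towards the uniform state.
P0 = np.zeros(L); P0[0] = 1.0
for t in (1.0, 5.0, 20.0):
    Pt = expm(-H * t) @ P0
    print(f"t={t:5.1f}  distance from uniform state: {np.abs(Pt - 1/L).max():.3e}")
```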
For discrete time dynamics the evolution over one time step is generated by a transfer matrix T with matrix elements T_{n,n′} = W(n′ → n), instead of a stochastic Hamiltonian H as in the case of continuous time dynamics. Since the
matrix elements of T are probabilities, they are non-negative with T_{n,n′} ∈ [0, 1]. Furthermore
conservation of probability implies
Σ_n T_{n,n′} = Σ_n W(n′ → n) = 1 .     (9.5.2)
In terms of the transfer matrix the master equation takes the form
|P(t + 1)⟩ = T |P(t)⟩ .
The stationary state |P_0⟩ = lim_{t→∞} |P(t)⟩ corresponds to the (right) eigenvector |P_0⟩ of the
transfer matrix T with eigenvalue 1:
T |P_0⟩ = |P_0⟩ .
Similar arguments for the existence and uniqueness of the stationary state as in the case of continuous time apply. Now ⟨0| as defined in (9.2.2) is a left eigenvector of T with eigenvalue 1.
The corresponding right eigenvector |P_0⟩ is the stationary state. All other eigenvalues of T have
modulus ≤ 1.
Such statements can be proved using the Perron-Frobenius theorem. If A = (a_{ij}) is a real
N × N matrix with strictly positive entries a_{ij} > 0, then there is a real number λ_0 > 0 which
is an eigenvalue of A, and any other (possibly complex) eigenvalue λ is strictly smaller in modulus,
i.e. |λ| < λ_0. As a consequence the spectral radius of A is equal to λ_0. In compact form
the Perron-Frobenius theorem states that a real square matrix with positive entries has a unique
largest real eigenvalue with a corresponding eigenvector which has strictly positive components.
The theorem can be extended to the case a_{ij} ≥ 0, but then additional conditions have to be
satisfied, namely the matrix has to be irreducible.
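A short numerical sketch (with an arbitrary, randomly generated column-stochastic T as an assumption) illustrates the Perron-Frobenius statements: repeated application of T converges to the stationary state with eigenvalue 1, which is also the spectral radius:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
# Transfer matrix with T[n, m] = W(m -> n); columns normalized to 1
# (random transition probabilities, assumed for illustration).
T = rng.random((N, N))
T /= T.sum(axis=0)

# Power iteration: repeated application of T converges to the stationary state.
P = np.full(N, 1.0 / N)
for _ in range(1000):
    P = T @ P
print("stationary:", np.allclose(T @ P, P))
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(T))))   # = 1
```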
Chapter 10
Local processes
So far we have made no restrictions on the transition rates w(n → n′). In principle transitions
between arbitrary states n and n′ were allowed. In many applications this is not the case, e.g. for
interacting and non-interacting random walks where particles are allowed to move to neighbour-
ing sites only. In the following we will study this important case in more detail.
In the case of processes with nearest-neighbour interactions only, the stochastic Hamiltonian can
be written in the form
H = Σ_{j=1}^{L} h_{j,j+1} ,     (10.1.2)
i.e. as a sum of local Hamiltonians hj,j+1 with nearest-neighbour interactions that act non-
trivially only on sites j and j + 1. Here we have assumed periodic boundary conditions. Gen-
eralization to other cases is straightforward. Later we will often consider systems coupled to
Footnote 1: The terminology '2-state model' is used in two different ways in the literature. It can either refer to the number of states of a site or the number of states of the particle (if it has an internal degree of freedom).
Table 10.1: Definition and interpretation of the rates allowed in a 2-state stochastic process with
nearest-neighbour interactions. Branching and fusion are sometimes called coagulation (or coalescence) and decoagulation. The diagonal elements Γ^{ττ′}_{ττ′} are determined by the condition
Σ_{σ,σ′} Γ^{ττ′}_{σσ′} = 0.
boundary reservoirs. In this case the stochastic Hamiltonian has the form
H = h_1 + Σ_{j=1}^{L−1} h_{j,j+1} + h_L ,     (10.1.3)
where h1 and hL act only on the first and last site, respectively, and describe the coupling to the
corresponding particle reservoirs.
The local Hamiltonians are of the form
h_{j,j+1} = 1 ⊗ · · · ⊗ 1 ⊗ ĥ_{j,j+1} ⊗ 1 ⊗ · · · ⊗ 1 ,
where ĥ_{j,j+1} is an operator that "lives" in the product space of sites j and j + 1.
In a matrix representation the state space of a site j is two-dimensional and thus the state space
of a system of L sites is 2^L-dimensional. The stochastic Hamiltonian is then represented
by a 2^L × 2^L matrix whereas ĥ_{j,j+1} corresponds to a 4 × 4 matrix (footnote 3).
The tensor (Kronecker) product of two matrices A = (a_{ij}) and B is defined blockwise,
A ⊗ B = \begin{pmatrix} a_{11} B & a_{12} B & \cdots \\ a_{21} B & a_{22} B & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} .
Here a_{ij} B stands for the submatrix obtained by multiplying all elements of B by a_{ij}.
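As an illustration of this construction, the following sketch builds a 2^L × 2^L stochastic Hamiltonian from a local 4 × 4 matrix using the Kronecker product; the local matrix encodes, as an assumed example, only the TASEP hopping move 10 → 01 with rate p, and only the bulk terms of an open chain are included (cf. (10.1.3) without h_1 and h_L):

```python
import numpy as np

# Local stochastic Hamiltonian acting on two neighbouring sites, in the basis (00, 01, 10, 11).
# As an assumed example it contains only the TASEP hopping 10 -> 01 with rate p.
p = 1.0
h_local = np.zeros((4, 4))
h_local[2, 2] = p        # loss of probability from |10>
h_local[1, 2] = -p       # gain of probability in |01>

def embed(h, j, L):
    """1 ⊗ ... ⊗ h_{j,j+1} ⊗ ... ⊗ 1 acting on sites j and j+1 of an open chain."""
    return np.kron(np.kron(np.eye(2**j), h), np.eye(2**(L - j - 2)))

L = 4
H = sum(embed(h_local, j, L) for j in range(L - 1))
print("H has shape", H.shape)                                 # (2^L, 2^L)
print("column sums vanish:", np.allclose(H.sum(axis=0), 0))   # probability conservation
```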
Footnote 2: Bulk refers to the part of a system which is (far) away from the boundaries.
Footnote 3: Note that h_{j,j+1} would also correspond to a 2^L × 2^L matrix since it acts in the full state space. However, most of its elements are trivial.
Chapter 11
Solution methods
In the following we again consider for simplicity a 2-state model in one dimension with nearest-
neighbour interactions. We will be interested in systematic methods to solve the master equation
for such systems. However, as with many-body quantum systems, exact solutions are possible
only in a few cases, e.g. for special parameter values of models. In all other cases one has to
rely on approximate methods if one is interested in analytical results.
In the Bethe Ansatz the probabilities (or, more generally, the eigenvectors of the stochastic Hamiltonian) for N particles at positions x_1, . . . , x_N are written in the form
P(x_1, . . . , x_N) = Σ_{{σ_1,...,σ_N}} A_{{σ_1,...,σ_N}} ∏_{j=1}^{N} z_{σ_j}^{−x_j} .     (11.1.1)
The sum is over all permutations σ of the numbers (1, . . . , N) and the A_{{σ_1,...,σ_N}} are unknown
amplitudes. The structure of the Bethe Ansatz is similar to a Slater determinant, which corresponds to the case A_{{σ_1,...,σ_N}} = sign(σ).
The parameters z_j are also unknown. Sometimes they are expressed by (quasi-)momenta k_j such
that z_{σ_j}^{−x_j} = e^{i k_{σ_j} x_j}. They have to satisfy a system of N coupled nonlinear equations which arise
as consistency conditions when applying periodic (or other) boundary conditions.
The applicability of the Bethe Ansatz imposes serious restrictions on the form of the interactions
and therefore only a few models can be treated by this method. In particular, the Ansatz works only
in one dimension. Its application to stochastic systems has been reviewed e.g. in [8].
In the Matrix-Product Ansatz (MPA) the stationary probabilities of a system with open boundaries are written as
P(n_1, . . . , n_L) = (1/Z_L) ⟨W| ∏_{j=1}^{L} [ n_j D + (1 − n_j) E ] |V⟩ .     (11.1.2)
For periodic boundary conditions the MPA takes the translationally invariant form
P(n_1, . . . , n_L) = (1/Z_L) Tr( ∏_{j=1}^{L} [ n_j D + (1 − n_j) E ] ) .     (11.1.3)
In (11.1.2), (11.1.3) E and D are matrices, Z_L is a normalization, and ⟨W| and |V⟩ are vectors characterizing the boundary conditions.
The Ansatz (11.1.2) has a simple interpretation: it amounts to translating a configuration of
the system into a product of matrices. Starting with site 1, for each empty site (n_j = 0) a factor
E appears and for each occupied site (n_j = 1) a factor D. In this way one has e.g.
P(1, 1, 0, 0, 1, 0) ∼ D · D · E · E · D · E = D² E² D E .     (11.1.5)
The boundary conditions are taken into account by multiplication with suitable ⟨W| and |V⟩, i.e.
by taking a matrix element of this product.
The explicit form of the matrices E and D and the boundary vectors has to be determined from
the condition that (11.1.2) or (11.1.3) solves the master equation (8.1.1). This leads in general to
algebraic relations between the matrices E and D and the boundary vectors ⟨W| and |V⟩. We
will later consider this in more detail for some important cases.
One can show that the stationary states of one-dimensional stochastic processes are generically
of matrix-product form [7]. However, so far an explicit determination of the matrices from the
algebra has been possible in only a few cases.
In the simplest mean-field approximation (MFA) the stationary distribution is factorized into single-site distributions,
P(n_1, . . . , n_L) ≈ ∏_{j=1}^{L} P_1(n_j) .     (11.2.1)
Inserting this Ansatz into the master equation (footnote 1) simplifies the problem considerably and leads
to an equation which only involves the function P_1. For translationally invariant systems P_1 is
independent of the site j. The approximation amounts to expressing the transition rates w(n → n′)
through the single-site contributions P_1(n_j).
Alternatively one could write down the exact master equation for P_1. However, this requires
taking into account correlations with neighbouring sites, i.e. quantities like P_2(n_j, n_{j+1}) or
P_3(n_{j−1}, n_j, n_{j+1}) in the case of nearest-neighbour interactions. These larger P_n then have to be
expressed approximately as products of P_1, similar to (11.2.1).
Sometimes quantities of interest can be calculated in MFA without requiring formal calculations.
An example is the current in the TASEP introduced in Sec. 10.2 which is a model of interacting
random walks with unidirectional particle motion. In the stationary state the current has to be
independent of the position because the continuity equation
∂ρ/∂t + div J⃗ = 0     (11.2.2)
reduces to dJ/dx = 0. Formally the current between sites j and j + 1 is given by
J_{j,j+1} = p ⟨n_j (1 − n_{j+1})⟩ = p Σ_n n_j (1 − n_{j+1}) P(n) ,     (11.2.3)
because a configuration n only contributes to the current between these sites if j is occupied and
j + 1 is empty. If this is the case a current will flow, i.e. a particle moves from j to j + 1, with
probability p per unit time.
Since the MFA neglects correlations, the current is approximately given by
J ≈ p ⟨n_j⟩ ⟨1 − n_{j+1}⟩ = p ρ (1 − ρ) ,     (11.2.4)
where ρ = N/L is the particle density (i.e. the probability that a site is occupied) if there are
N particles on a lattice of L sites.
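This mean-field prediction can be tested with a simple Monte Carlo sketch of the TASEP on a ring with random-sequential updates (system size, density and rate are assumed example values); up to small finite-size corrections the measured current agrees with p ρ(1 − ρ):

```python
import numpy as np

rng = np.random.default_rng(2)
L, N, p, sweeps = 100, 40, 0.75, 4000         # assumed example values
n = np.zeros(L, dtype=int)
n[rng.choice(L, N, replace=False)] = 1

hops = 0
for sweep in range(sweeps):
    for _ in range(L):                        # one sweep = L random update attempts
        j = rng.integers(L)
        if n[j] == 1 and n[(j + 1) % L] == 0 and rng.random() < p:
            n[j], n[(j + 1) % L] = 0, 1
            if sweep >= sweeps // 2:          # measure only after relaxation
                hops += 1

J_sim = hops / (L * (sweeps - sweeps // 2))   # current per bond and unit time
rho = N / L
print("simulated J =", round(J_sim, 4), "  mean field p*rho*(1-rho) =", p * rho * (1 - rho))
```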
Note that often a configuration can be characterized by different variables. The main example
in this course is particle-hopping models. Instead of occupation numbers n_j = 0, 1 or particle
Footnote 1: In most applications, the MFA is used for the steady-state properties. But in principle also the full dynamics could be studied with a time-dependent P_1.
Figure 11.2.1: Graphical representation of the n−cluster approximation for a) n = 1 (i.e. mean-
field theory), b) n = 2, and c) n = 3 (more specifically the (3, 2)-cluster approximation). Each
circle corresponds to one factor in the factorization.
Chapter 12
Zero-Range Process
We now consider a particle hopping model which does not satisfy the exclusion principle. Here
an arbitrary number of particles is allowed on each site. However, it will turn out later that this
model contains particle hopping models with exclusion, like the TASEP, as special cases.
The ZRP is characterized by its ultra-local transition rules. Originally it was introduced in mathematics by Spitzer in 1970, and many general and rigorous results from mathematics are known for it.
In physics, the importance and relevance of the ZRP was only realized widely
almost 30 years later.
As in the TASEP, the zero-range process describes the dynamics of N (classical and indistin-
guishable) particles on a lattice of L sites. We will consider a one-dimensional lattice with periodic
boundary conditions in the following, but in principle many results can be generalized to any lat-
tice structure. In contrast to the TASEP, particles in the zero-range process are not subject to any
exclusion rule. Each lattice site j can be occupied by an arbitrary number nj of particles. Using
random-sequential dynamics, in each update step a particle is chosen at random, say on site j,
which is allowed to move to any other site l. The essential point is now that the corresponding
hopping rate wjl is independent of the state of the target site l, i.e. the number of particles present
at that site. It only depends on the number of particles nj on the starting site j. Therefore the
rate can conveniently be denoted by u(nj ).
This property motivates the name zero-range process (ZRP) since the range of interaction can
be interpreted to be zero. Obviously this only works if the number of particles allowed at each
site is not restricted. Any restriction would imply that the hopping rate cannot be independent
of the state of the target site since one has to check whether there are fewer particles than the
maximally allowed number. If this number is reached, the hopping rates from any other site to
this site vanish.
Figure 12.1.1: Dynamics of the zero-range process: Each site of the lattice can be occupied by an
arbitrary number of particles. The departure rate u(n) of a particle depends only on the number
n of particles at the departure site, not on that at the target site.
The stationary state of the ZRP factorizes into a product of single-site weights f(n_l),
P(n_1, . . . , n_L) = (1/Z_{L,N}) ∏_{l=1}^{L} f(n_l) .     (12.2.1)
The normalisation Z_{L,N}, which is sometimes called the (canonical) partition function, ensures that
the probabilities for all configurations containing N particles sum up to one:
Z_{L,N} = Σ_n ∏_{l=1}^{L} f(n_l) · δ( Σ_{l=1}^{L} n_l − N ) ,     (12.2.2)
where the Kronecker-δ guarantees that only configurations containing N particles are included.
In order to prove the factorisation property (12.2.1) and determine the factors f (nl ) we insert this
as an Ansatz into the stationary master equation:
Σ_{l=1}^{L} u(n_{l−1} + 1) P(· · · , n_{l−1} + 1, n_l − 1, · · · ) = Σ_{l=1}^{L} u(n_l) P(n) .     (12.2.3)
In the steady state the probability current due to hops into a particular configuration n (lhs
of (12.2.3)) balances the probability current due to hops out of the same configuration (rhs of
(12.2.3)). The choice u(0) = 0 takes into account that a site has to be occupied before a particle
can move away from it (footnote 1). Now we substitute the factorised probability (12.2.1) into the master
Footnote 1: It is also assumed that P(n_1, . . . , n_L) = 0 if at least one of the arguments is negative.
equation (12.2.3) and equate each term in the sum separately. After cancelling common factors
this leads to the conditions
u(n_{l−1} + 1) f(n_{l−1} + 1) f(n_l − 1) = u(n_l) f(n_{l−1}) f(n_l) ,     (12.2.4)
These conditions are satisfied by choosing
f(n) = ∏_{j=1}^{n} u(j)^{−1} ,   f(0) = 1 .     (12.2.7)
Thus we have shown that the factorisation (12.2.1) indeed holds if we choose the factors according to (12.2.7). This is also true for other types of update.
Alternatively one could also start with prescribing the stationary state by defining the function
f (n) and then determine the corresponding hopping rates through
u(n) = f(n − 1) / f(n) .     (12.2.8)
This shows that the ZRP gives a complete account of all the possible steady state behavior in
particle-conserving models with factorised steady state.
The solution for the case of discrete time dynamics can be obtained in an analogous way. The
stationary probabilities factorize as in (12.2.1) with the weight functions
f(n) = 1 − u(1)   for n = 0 ,
f(n) = [(1 − u(1)) / (1 − u(n))] ∏_{j=1}^{n} (1 − u(j))/u(j)   for n ≥ 1 .     (12.2.9)
Knowing the explicit form (12.2.1), (12.2.7) of the steady state probabilities one can calculate all
stationary properties of the ZRP, at least in principle.
One important quantity that helps to understand the structure of the stationary state is the proba-
bility p(n) that a given site contains n particles. Fixing the occupation of site j = 1 to be n one
has
p(n) = Σ_{n_2,...,n_L} P(n, n_2, . . . , n_L) · δ( Σ_{l=2}^{L} n_l − (N − n) ) ,     (12.2.10)
where the Kronecker-δ takes into account that the remaining sites contain only N − n particles.
Using the factorization (12.2.1) one finally obtains
p(n) = f(n) · Z_{L−1,N−n} / Z_{L,N} .     (12.2.11)
Note that this probability p(n) to find n particles at a given site is different from the single-site
weight f (n). This is a consequence of the fact that the constraint of fixed particle number induces
correlations between sites, although the steady state factorizes.
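The following sketch evaluates these formulas for an assumed example choice of rates, u(n) = 1 + b/n, computing the partition functions Z_{L,N} recursively and the single-site distribution p(n) from (12.2.11):

```python
import numpy as np
from functools import lru_cache

# ZRP with hopping rates u(n) = 1 + b/n (b, L, N are assumed example values).
b, L, N = 3.0, 10, 20

def u(n):
    return 1.0 + b / n

def f(n):
    # Single-site weights (12.2.7): product of inverse rates.
    w = 1.0
    for j in range(1, n + 1):
        w /= u(j)
    return w

@lru_cache(maxsize=None)
def Z(l, n):
    """Canonical partition function (12.2.2) by recursion over the number of sites."""
    if l == 0:
        return 1.0 if n == 0 else 0.0
    return sum(f(k) * Z(l - 1, n - k) for k in range(n + 1))

# Single-site occupation distribution (12.2.11):
pn = np.array([f(n) * Z(L - 1, N - n) / Z(L, N) for n in range(N + 1)])
print("p(n) sums to one:", np.isclose(pn.sum(), 1.0))
print("mean occupation <n> =", (np.arange(N + 1) * pn).sum(), " (should be N/L =", N / L, ")")
```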
Chapter 13
TASEP I: Periodic boundary conditions
The Totally Asymmetric Simple Exclusion Process (TASEP) introduced in Sec. 10.2 is a
particle-hopping model (see Fig. 13.0.1) which serves as a paradigm not only for many transport or traffic models, but is also of great importance for the general theory of nonequilibrium
processes. Here it plays a similar role as the Ising model for equilibrium statistical mechanics
and phase transitions. One of its attractive properties is the fact that many exact results can be
obtained. First we consider the simpler case of periodic boundary conditions which leads to a
translationally invariant stationary state.
Figure 13.0.1: Dynamics of the (totally) asymmetric simple exclusion process: particles move to
empty right neighbour sites with rate p.
Figure 13.1.1: Mapping of the TASEP onto the ZRP for L = 10 and N = 6. Particles in
the TASEP correspond to sites in the ZRP (indicated by numbers). Empty sites in the TASEP
become particles in the ZRP (indicated by letters). The arrows show a possible local transition
with probability u(1).
Thus we introduce a new dynamical variable dj which is just the number of (consecutive) empty
sites in front of particle j. In traffic engineering this quantity is often called headway. Obviously
it satisfies the conservation law
Σ_{j=1}^{N} d_j = L − N .     (13.1.1)
Using this mapping the TASEP with L sites and N particles is equivalent to a ZRP with N sites
and L − N particles. Note that the particles in the ZRP hop to the left when the particles in the
TASEP move to the right.
This mapping holds for time-continuous dynamics and for parallel dynamics as well. The hopping rates in
the ZRP are given by
u(dj ) = p for dj ≥ 1 . (13.1.2)
Using the exact results for the ZRP we know that the stationary probability distribution is of
product form (see (12.2.1)) with weight functions
f(n) = p^{−n}     (13.1.3)
for continuous time (i.e. random-sequential update) and
f(n) = 1 − p   for n = 0 ,
f(n) = ((1 − p)/p)^n   for n ≥ 1 ,     (13.1.4)
for discrete time (i.e. parallel update).
It turns out that for random-sequential dynamics already the simple mean-field approximation gives the exact result. Note that this MFA
applies to the occupation numbers as dynamical variables whereas by mapping to the ZRP the
headways become the relevant variables. So in this case there are no correlations between differ-
ent sites. In contrast, for discrete time dynamics the MFA is not exact. Instead it turns out that
the 2-cluster approximation gives the exact steady state. That means that for the discrete time
case there are correlations between neighbouring sites. This shows that the choice of the update
procedure (discrete vs. continuous time or random-sequential vs. parallel update) can have a
strong influence on the properties of the stationary state.
Due to translational invariance the stationary density is the same at every site,
ρ_j = ⟨n_j⟩ = N/L =: ρ ,     (13.2.1)
where ρ is usually called the global density.
Another interesting quantity is the so-called fundamental diagram which is probably the most
important observable in traffic engineering. The fundamental diagram gives the density depen-
dence of the flow, i.e. the function J(ρ). The current (or flow) J is related to the average velocity
v̄ of all particles by the hydrodynamic relation
J = ρv̄ . (13.2.2)
For the TASEP we have already seen (eq. (11.2.3)) that the current is determined by
J = p ⟨n_j (1 − n_{j+1})⟩ .     (13.2.3)
In terms of the headway variables this can be rewritten as
J = p ρ (1 − P(d = 0)) ,     (13.2.4)
where P(d) is the stationary probability distribution of the headways d_j. Eq. (13.2.4) is easy to
understand since a particle can only move if its headway is larger than 0, which happens with
probability 1 − P(d = 0); in that case it moves with rate p. Thus p(1 − P(d = 0)) is the
average velocity of a particle and the hydrodynamic relation (13.2.2) gives (13.2.4).
In the case of continuous time dynamics (random-sequential update) we have already seen that
the exact fundamental diagram is given by
J(ρ) = p ρ (1 − ρ) ,     (13.2.5)
since the MFA is exact in this case. For discrete time dynamics (parallel update) it can be shown that
the exact fundamental diagram is of the form
J(ρ, p) = (1/2) [ 1 − √(1 − 4 p ρ (1 − ρ)) ] .     (13.2.6)
Fig. 13.2.1 shows a comparison of this exact result with that of the mean-field approximation,
which gives the exact result for continuous time. One sees that the MFA strongly underestimates
the correct flow for the parallel update (Fig. 13.2.1). This shows the importance of correlations,
which can be described as a particle-hole attraction (see Exercise 14): in the case of parallel
dynamics the probability to find an empty site in front of a particle is increased compared to the
completely random case, i.e. P(1, 0) > P(1)P(0).
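A quick numerical comparison of (13.2.6) with the mean-field expression (which is exact only for random-sequential dynamics) makes this underestimation explicit; p is an assumed example value:

```python
import numpy as np

# Exact fundamental diagram (13.2.6) of the parallel-update TASEP vs. mean field.
p = 0.5
rho = np.linspace(0.0, 1.0, 11)
J_exact = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p * rho * (1.0 - rho)))
J_mf = p * rho * (1.0 - rho)
for r, je, jm in zip(rho, J_exact, J_mf):
    print(f"rho={r:.1f}  J_parallel={je:.4f}  J_mean-field={jm:.4f}")
print("maximal flow (parallel):", J_exact.max(), " reached at rho = 0.5")
```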
Figure 13.2.1: Flow-density relation for the TASEP with parallel dynamics. Left: Comparison
of the exact result and mean-field theory (broken line) for p = 0.5. The mean-field result is exact
for the case of random-sequential dynamics. Right: Comparison of exact results for p = 0.25,
p = 0.5 and p = 0.75 (from bottom to top).
The current does not change if all particles are replaced by empty cells (‘holes’) and vice versa
(particle-hole transformation). This is of course expected from the definition of the dynamics
since every time a particle moves to the right, a hole moves to the left.
Chapter 14
TASEP II: Boundary-induced phase transitions
We now study the TASEP with open boundary conditions by assuming that the first and the last
site are coupled to particle reservoirs (Fig. 14.0.1). If the first site is empty (n1 (t) = 0) a particle
is inserted with an input rate α. If the last site is occupied (nL (t) = 1) the particle is removed
with rate β. Due to this change of boundary conditions the system is no longer translationally
invariant. In addition, the total particle number is not conserved. This makes the theoretical
treatment much more difficult. However, we will see that the exact solution for the stationary
state can still be obtained for arbitrary values of the system parameters α, β and p. In the following we
will consider the case p = 1. As we have seen in Exercise 13 this is no restriction if we are
interested in the stationary state, since p can be changed by a rescaling of time.
The stationary state can again be written in matrix-product form (14.1.1), which has the same structure as (11.1.2), with C = D + E and normalization Z_L = ⟨W| C^L |V⟩. Inserting this Ansatz into the stationary master equation leads to the algebraic relations
D E = D + E ,
α ⟨W| E = ⟨W| ,     (14.1.6)
β D |V⟩ = |V⟩ .
One can easily convince oneself that these are necessary conditions: The relation DE ∝ C =
E + D is the simplest way to achieve a constant current
J = J_{j,j+1} = (1/Z_L) ⟨W| C^{j−1} D E C^{L−j−1} |V⟩ = (1/Z_L) ⟨W| C^{j−1} (D + E) C^{L−j−1} |V⟩
  = (1/Z_L) ⟨W| C^{L−1} |V⟩ = Z_{L−1} / Z_L ,     (14.1.7)
as required by the equation of continuity. Similar arguments apply to the boundary currents:
J_l = α(1 − ρ_1) = (α/Z_L) ⟨W| E C^{L−1} |V⟩ = (1/Z_L) ⟨W| C^{L−1} |V⟩ = Z_{L−1}/Z_L = J ,     (14.1.8)
J_r = β ρ_L = (β/Z_L) ⟨W| C^{L−1} D |V⟩ = (1/Z_L) ⟨W| C^{L−1} |V⟩ = Z_{L−1}/Z_L = J ,     (14.1.9)
where we have used the boundary equations from (14.1.6).
One can also show that the conditions (14.1.6) are sufficient for (14.1.1) to be a solution of the
stationary master equation, but this requires more complicated calculations.
In order to calculate expectation values one can use explicit representations for D, E, ⟨W|
and |V⟩. What is the form of these representations? First we assume that the matrices E and D
commute. Then we have
(1/α + 1/β) ⟨W|V⟩ = ⟨W| D + E |V⟩ = ⟨W| D E |V⟩ = ⟨W| E D |V⟩ = (1/(αβ)) ⟨W|V⟩ ,     (14.1.10)
where we have used (14.1.6) several times, so that we find the condition
α + β = 1     (14.1.11)
for which representations with commuting E and D exist. Then E and D can be chosen as
real numbers e, d (one-dimensional representations) and an explicit solution is given by (see
Exercise 15)
e = 1/α ,   d = 1/β .     (14.1.12)
The matrix-product Ansatz (14.1.1) then factorizes and takes the form of a mean-field approxi-
mation. The density profile ρj on the line (14.1.11) is flat, i.e. independent of j.
Next we consider the case where E and D do not commute. In that case the representations are
infinite-dimensional. This can be shown in the following way: first it is clear that no vector |X⟩
with E|X⟩ = |X⟩ can exist, since applying DE = D + E to such a vector would give D|X⟩ = D|X⟩ + |X⟩, i.e. |X⟩ = 0. Hence 1 is not an eigenvalue of E. In a finite-dimensional representation E − I would then be invertible (footnote 1) and DE = D + E could be solved for D,
D = E (E − I)^{−1} ,     (14.1.14)
which implies that D and E commute. This is a contradiction to our original assumption.
So we see that the algebraic relations (14.1.6) either have one-dimensional representations in the
case α + β = 1 or infinite-dimensional ones in all other cases. Some explicit representations are
given in Exercise 15.
Footnote 1: Here I denotes the unit matrix.
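On the line α + β = 1 the one-dimensional representation can be checked directly; the following sketch (with an assumed value of α) verifies the scalar relations (14.1.6) and the resulting factorized weights, which give the flat density profile ρ_j = α on a small chain:

```python
import numpy as np
from itertools import product

# Scalar representation (14.1.12) on the line alpha + beta = 1 (alpha is an assumed value).
alpha = 0.3
beta = 1.0 - alpha
e, d = 1.0 / alpha, 1.0 / beta

print("DE = D + E :", np.isclose(d * e, d + e))
print("alpha*e = 1:", np.isclose(alpha * e, 1.0), "  beta*d = 1:", np.isclose(beta * d, 1.0))

# Factorized stationary weights P(n_1,...,n_L) ~ prod_j [n_j d + (1 - n_j) e]:
L = 4
configs = list(product([0, 1], repeat=L))
weights = np.array([np.prod([d if n else e for n in c]) for c in configs])
weights /= weights.sum()
rho_1 = sum(w for c, w in zip(configs, weights) if c[0] == 1)
print("density at site 1:", round(rho_1, 4), " expected alpha =", alpha)
```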
Figure 14.2.1: Exact phase diagram for the TASEP in continuous time for p = 1. The two
high- and low-density sub-phases differ in the decay of the density profile to its bulk value from
the boundaries. On the thin dotted line α + β = 1 the mean-field approximation becomes
exact (corresponding to a one-dimensional representation of the matrix-product Ansatz) and
the density profile becomes flat.
In the low-density phase α < 1/2, β > α the current only depends on the input rate and is
independent of β. In the high-density phase β < 1/2, α > β it only depends on β and is
independent of α. Finally, in the maximal current phase α, β ≥ 1/2 it is independent of both
boundary rates. Here it reaches the maximum current 1/4 which can be realized in the periodic
system, i.e. the maximum of the fundamental diagram (footnote 2).
Why do we find three different phases? This can be understood by a simple argument. In the
stationary state the current through the system has to be the same everywhere due to the equation
Footnote 2: Remember that we have set p = 1.
Table 14.1: Characteristic properties of the phases in the TASEP with random-sequential dynamics (p = 1). It shows the current J, the bulk density ρ_{L/2}, the densities ρ_1, ρ_L at the chain ends,
and the asymptotic decay of the density profile near the left (input) and right (output) end of the
chain. For p ≠ 1 the current is given by p J(α/p, β/p).
of continuity. There are three different mechanisms which in principle may limit the current,
namely the efficiency of the input (corresponding to the low-density phase), the output (corre-
sponding to the high-density phase) and the transport in the bulk (corresponding to the maximal
current phase). The low-density phase can also be called the input-controlled phase, and the high-density phase the output-controlled phase. In contrast to these two boundary-controlled phases,
the maximal current phase is a bulk-controlled phase. Here the current through the system has
reached the maximum of the fundamental diagram of the corresponding ASEP with periodic
boundary conditions.
The density profile can also be calculated analytically. We already know that on the line α+β = 1
it is flat, i.e. ρj = const., independent of j. In general, the bulk density far away from both
boundaries is of special interest. One finds that, except for the maximum current phase, the
density profile becomes flat in the bulk and shows a dependence on j only in a region near the
boundaries. There the behaviour can be characterized by two independent length scales ξα and
ξβ determined by the input and output rate, respectively. Explicitly these length scales are given
by
ξ_σ^{−1} = −ln[ 4σ(1 − σ) ]   with σ = α, β .     (14.2.2)
There is a third relevant length scale ξ, given by
1/ξ = 1/ξ_α − 1/ξ_β .     (14.2.3)
In the maximal current phase the density profile does not become flat even in the bulk. Instead
an algebraic behaviour ρj ∝ j −1/2 is observed.
The main features of the different phases are summarized in Table 14.1.
TASEP with β = 1, but the phase diagram in Sec. 14.2 shows that this is a rather generic feature
of driven diffusive systems. In contrast to ”normal” equilibrium systems, the state in the bulk of
nonequilibrium systems can change qualitatively if the boundary conditions (here: the rates α,
β) are changed. Such transitions are called boundary-induced phase transitions.
In equilibrium this is not possible, at least for systems with short-ranged interactions. In a d-dimensional system the boundary contribution to the free energy F is of the order L^{d−1} where L
is the typical length of the system. The bulk contribution, on the other hand, is proportional to
the volume, i.e. of the order L^d. Therefore,
F = f_bulk L^d + f_boundary L^{d−1} + · · · ,
so that in the thermodynamic limit only the bulk terms contribute and the precise form of the
boundary conditions becomes irrelevant.
Returning to the TASEP, we have seen that the behavior of the density profiles is characterized
by three length scales ξα , ξβ and ξ. They describe how far the boundary conditions can be ”felt”
away from the boundary. If one of them becomes infinite, the boundaries influence the whole
system and a boundary-induced phase transition occurs (at least in principle). At the transition
from the high-density phase to the maximal current phase, ξβ diverges. Similarly at the boundary
between the low-density and maximal current phase, ξα is divergent. The density profiles, which
can be considered as a kind of order parameter, change in a continuous way when passing through
these phase boundaries so that the transition can be specified as a phase transition of second order.
At the boundary between the high- and low-density phases both ξ_α and ξ_β are finite, but ξ becomes
infinite. Here the density changes discontinuously when passing through the phase boundary and
the transition is of first order. The density profile on the boundary α = β ≤ 1/2 is linear with
ρ_1 = α and ρ_L = 1 − α. However, taking a snapshot of a simulation one observes a different
type of density profile, namely phase separation into a low-density region at the input end of the
system and a high-density region at the output end. These two regions are separated by a sharp
shock or domain wall (Fig. 14.3.1). This shock diffuses through the system, i.e. it performs
a symmetric random walk with average velocity ⟨v_S⟩ = 0. Averaging over all possible shock
positions then leads to the linear density profile described above.
Figure 14.3.1: Snapshot of the density profile of the TASEP on the boundary between the high-
and low-density phase (α = β ≤ 1/2). It shows phase separation into a LD and a HD region
separated by a sharp shock (domain wall) at xS .
According to the extremal (maximum/minimum) current principle, the stationary current of the open system is determined by the effective boundary densities ρ_l and ρ_r through
J = max_{ρ∈[ρ_r,ρ_l]} J(ρ)   for ρ_l > ρ_r ,        J = min_{ρ∈[ρ_l,ρ_r]} J(ρ)   for ρ_l < ρ_r ,
where J(ρ) is the fundamental diagram of the corresponding periodic system. The boundary densities are controlled by the reservoirs. For the TASEP we have ρ_l = α and
ρ_r = 1 − β.
The extremal principle can also be interpreted as steady state selection principle: the state
realized in an open system is selected by the boundary conditions from the steady states of the
periodic system.
As an example for the use of the extremal principle we consider the case where ρr < ρl <
1/2. Therefore J(ρ_l, ρ_r) = max_{ρ∈[ρ_r,ρ_l]} J(ρ). Since both densities are smaller than 1/2 we are
in the monotonically increasing part of the fundamental diagram and thus J(ρl , ρr ) = J(ρl ).
Expressing this by the boundary rates we finally have J(α, β) = J(α) = α(1 − α) which is
exactly the current in the low-density phase (see Table 14.1). This is indeed correct because
ρr < ρl < 1/2 corresponds to α < 1/2 and β > 1/2. Similar considerations can be made for
any other combination of ρr and ρl , e.g. ρr < 1/2 < ρl leads to a point in the maximal current
phase with J(α, β) = 1/4.
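Such considerations are easily automated; the sketch below is a rough implementation of the extremal principle for the TASEP with p = 1 (the (α, β) pairs are assumed example values) that returns the current and the phase:

```python
import numpy as np

def tasep_phase(alpha, beta):
    """Current and phase of the open TASEP (p = 1) from the extremal principle."""
    J = lambda r: r * (1.0 - r)                   # fundamental diagram of the ring
    rho_l, rho_r = alpha, 1.0 - beta              # reservoir densities
    rhos = np.linspace(min(rho_l, rho_r), max(rho_l, rho_r), 1001)
    current = J(rhos).max() if rho_l > rho_r else J(rhos).min()
    if alpha >= 0.5 and beta >= 0.5:
        phase = "maximal current"
    elif alpha < beta:
        phase = "low density"
    else:
        phase = "high density"
    return current, phase

# Example rates (assumed), reproducing the phases of Fig. 14.2.1 / Table 14.1:
for a, b in [(0.2, 0.7), (0.8, 0.3), (0.7, 0.9)]:
    print((a, b), "->", tasep_phase(a, b))
```

The printed currents reproduce α(1 − α), β(1 − β) and 1/4 in the low-density, high-density and maximal current phase, respectively.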
Even if the fundamental diagram is not known analytically (as it is for the TASEP), the
extremal principle is extremely useful. It allows one e.g. to predict how many different phases the
system with open boundary conditions will exhibit. This number of phases is closely related to
the number of local extrema in the fundamental diagram. Since the TASEP fundamental diagram
has only one local maximum, it shows three phases. In the exercises we will study the case of
three local extrema (two maxima separated by a local minimum). In this situation one finds seven
phases in the open system.
For the derivation of the extremal principle two velocities are relevant which both can be deter-
mined from the fundamental diagram of the periodic system. The first is the shock velocity
v_S = (J_2 − J_1) / (ρ_2 − ρ_1) ,     (14.5.1)
which is the velocity of a shock separating a region of density ρ1 with current J1 from a region of
density ρ2 and current J2 (Fig. 14.5.1). It can be derived from the equation of continuity applied
to the shock region. The effective current into this area is J2 − J1 . In time ∆t the particle number
near the shock changes by (J2 − J1 )∆t which leads to a motion of the shock. The change of
particle number due to the shock motion, on the other hand, is (ρ2 − ρ1 )vS ∆t. Combining these
two expressions then gives (14.5.1).
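For the TASEP fundamental diagram J(ρ) = ρ(1 − ρ) (p = 1) both the shock velocity (14.5.1) and the slope of the fundamental diagram (the collective velocity appearing in Figs. 14.5.2 and 14.5.3) can be evaluated directly; the densities below are assumed example values:

```python
import numpy as np

# TASEP fundamental diagram for p = 1.
J = lambda r: r * (1.0 - r)

rho1, rho2 = 0.2, 0.7                        # assumed densities on either side of the shock
v_shock = (J(rho2) - J(rho1)) / (rho2 - rho1)
print("shock velocity v_S =", v_shock)        # = 1 - rho1 - rho2

drho = 1e-6
v_coll = lambda r: (J(r + drho) - J(r - drho)) / (2 * drho)   # dJ/drho numerically
print("collective velocity at rho = 0.3:", round(v_coll(0.3), 4), " (= 1 - 2*rho)")
```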
Figure 14.5.1: Derivation of the shock velocity. If the current J2 is larger than J1 the shock will
move to the right (broken line). Applying the continuity equation to this area gives vS .
Figure 14.5.2: Illustration of shock velocity (left) and collective velocity (right).
Figure 14.5.3: Shock and collective velocity can be determined from the fundamental diagram.
vS is the slope of the secant line (blue) through (ρ1 , J1 ) and (ρ2 , J2 ). vc (ρ) is the slope of the
tangent (red) at (ρ, J(ρ)).
If v_S > 0 the shock will move to the right and finally reach the right boundary. The system is then left in a state of density ρ_1. On
the other hand, if vS < 0 the shock will move to the left and the system reaches the density ρ2 .
Although this argument is not exact (at least for discrete systems) it tells us that we can expect a
phase transition when vS = 0 where the two regions can coexist indicating a first order transition.
The collective velocity v_c = ∂J/∂ρ controls the stability of shocks. It is responsible for the second order
transitions into the maximal current phases. The reason is that it changes sign at density ρ = 1/2
(compare with (14.5.3)).
The collective velocity also allows one to understand why in a system with a fundamental diagram
of the form shown in Fig. 14.5.3 only upward shocks (where the density increases in the direction
of motion) like in Fig. 14.5.1 are stable. The shock motion corresponds to a transport of particles from the high-density region to the low-density region, leading to a new density profile as in
Fig. 14.5.4. The two regions move with the respective collective velocities. In the case of a downward
shock (in which the density decreases in the direction of motion) and a fundamental diagram like
Fig. 14.5.3 these velocities lead to a further transport of particles from the high- to the low-density
region and finally to the complete "melting" of the shock. One can easily convince oneself that for
an upward shock the situation is different. Here the two collective velocities point backwards
towards the shock location, which implies that such a "melting" perturbation of the shock will not
grow. Instead it leads to a healing of the shock fluctuation and its stabilization.
Figure 14.5.4: Downward shocks are unstable in TASEP-like systems. The shock moves by
”melting” (red) where the two regions move with the collective velocities vc (ρi ). For a downward
shock their behavior leads to a further melting of the shock.
Chapter 15
Vehicular traffic
Figure 15.1.1: Definition of the quantities used to describe the configuration of vehicles on a
one-dimensional road. The cars move from left to right and their label increases in driving
direction (downstream). The length of car j is ℓ_j, its distance-headway is d_j, and ∆x_j = x_{j+1} − x_j
is the difference of the vehicle positions x_j.
Some traffic engineers, however, still argue that spontaneous jam formation does not occur and
that jams are always induced by bottlenecks. Usually these are road inhomogeneities such as
ramps or sharp bends, or temporary disturbances through lane changes etc. Once a jam has
occurred due to such a (temporary) bottleneck it might not dissipate immediately causing most
drivers to wonder about the reason for the congestion.
Figure 15.1.2: Trajectories of individual vehicles on a single lane of a multi-lane highway. The
trajectories have been determined from aerial photographs which give the vehicle positions at
certain times (green horizontal lines). During the analyzed time interval the spontaneous
formation, propagation and dissolution of a jam has been observed. The diagonal red line
indicates the jam front which moves at constant speed opposite to the driving direction of the
vehicles. The discontinuous trajectories correspond to vehicles changing the lane.
In the NaSch model the speed v of each vehicle can take one of the vmax + 1 allowed integer values v = 0, 1, ..., vmax. Suppose xn and vn denote the position and
speed, respectively, of the n-th vehicle. Then dn = xn+1 − xn − 1 is the (spatial) headway of
the n-th vehicle at time t, i.e. the number of empty cells in front of this car (see Fig. 15.1.1). At
each time step t → t + 1, the arrangement of the N vehicles on a finite lattice of length L is
updated in parallel according to the following rules:
NaSch1: Acceleration. If vn < vmax , the speed of the n-th vehicle is increased by
one, but vn remains unaltered if vn = vmax , i.e.
vn → min(vn + 1, vmax )
NaSch2: Deceleration (due to other vehicles). If vn > dn , the speed of the n-th
vehicle is reduced to dn , i.e.,
vn → min(vn , dn )
NaSch3: Randomization. With probability p the speed of the n-th vehicle (if vn > 0) is
decreased by one, i.e.
vn → max(vn − 1, 0) with probability p
Figure 15.3.1: A typical configuration in the NaSch model. The number in the upper right corner
is the speed of the vehicle.
NaSch4: Vehicle movement. Each vehicle is moved forward according to its new
velocity determined in (NaSch1)–(NaSch3), i.e.
xn → xn + vn
The rules (NaSch1)–(NaSch3) can be combined into the single velocity update
vn(t + 1) = max( min(vn(t) + 1, dn(t), vmax) − ηn(t), 0 ) .
Here ηn(t) is a Boolean random variable which takes the value 1 with probability p and 0 with
probability 1 − p and accounts for the randomization step. The term given by the maximum is
the new velocity vn(t + 1). Note that the rules can be expressed solely in terms of the positions
xn by using dn(t) = xn+1(t) − xn(t) − 1 and vn(t) = xn(t) − xn(t − 1). This also shows that
the configuration at time t + 1 depends both on that at time t and that at t − 1.
The NaSch model is a minimal model in the sense that all the four steps are necessary to repro-
duce the basic features of real traffic. However, additional rules need to be formulated to capture
more complex situations. The rule (NaSch1) reflects the general tendency of the drivers to drive
as fast as possible, if allowed to do so, without crossing the maximum speed limit. The rule
(NaSch2) is intended to avoid collision between the vehicles. The randomization in (NaSch3)
takes into account the different behavioral patterns of the individual drivers, especially, nonde-
terministic acceleration as well as overreaction while slowing down; this is crucially important
for the spontaneous formation of traffic jams. Even changing the precise order of the steps of
the update rules stated above would change the properties of the model. E.g. after changing
the order of rules (NaSch2) and (NaSch3) there will be no overreactions at braking and thus no
spontaneous formation of jams.
The NaSch model may be regarded as a stochastic cellular automaton (CA). In the special case vmax = 1 it reduces to
the TASEP with parallel dynamics. The deterministic limit p = 0 is then equivalent to the CA
rule 184 in Wolfram's notation.
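A minimal implementation of the four update rules (a sketch with assumed parameter values, measuring the flow J = ρ⟨v⟩ on a ring) could look as follows:

```python
import numpy as np

# NaSch model with parallel dynamics on a ring (parameters are assumed example values).
rng = np.random.default_rng(3)
L, N, vmax, p, steps = 1000, 200, 5, 0.25, 2000

pos = np.sort(rng.choice(L, N, replace=False))   # ordered vehicle positions
vel = np.zeros(N, dtype=int)

flow = 0.0
for t in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L      # headway: empty cells in front of each car
    vel = np.minimum(vel + 1, vmax)              # NaSch1: acceleration
    vel = np.minimum(vel, gaps)                  # NaSch2: braking
    vel -= (rng.random(N) < p) & (vel > 0)       # NaSch3: randomization
    pos = (pos + vel) % L                        # NaSch4: movement
    if t >= steps // 2:                          # measure after relaxation
        flow += vel.sum() / L

print("density rho =", N / L, "  flow J =", flow / (steps - steps // 2))
```

Because the braking step guarantees vn ≤ dn, vehicles never overtake, so the cyclic ordering of the cars (and hence the headway computation) remains valid throughout the simulation.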
Why should the updating be done in parallel, rather than in a random-sequential manner, in traffic
models like the NaSch model? In contrast to a random-sequential update, a parallel update can
lead to a chain of overreactions. Suppose a vehicle slows down due to the randomization step.
If the density of vehicles is large enough this might force the following vehicle to brake as well
in the deceleration step. In addition, if p is larger than zero, it might brake even further due
to rule (NaSch3). Eventually this can lead to the stopping of a vehicle, thus creating a jam.
This mechanism of spontaneous jam formation is rather realistic and cannot be modeled by the
random-sequential update.
The update scheme of the NaSch model is illustrated with a simple example in Fig. 15.3.2.
Space-time diagrams showing the time evolutions of the NaSch model demonstrate that no jam
is present at sufficiently low densities, but spontaneous fluctuations give rise to traffic jams at
higher densities (Fig. 15.3.3(a)). From Fig. 15.3.3(b) it should be obvious that the intrinsic
stochasticity of the dynamics, arising from non-zero p, is essential for triggering the jams. For
a realistic description of highway traffic, the typical length of each cell should be about 7.5 m
which is the space occupied by a vehicle in a dense jam. When vmax = 5 each time step should
correspond to approximately 1 sec of real time which is of the order of the shortest relevant
timescale in real traffic, namely the reaction time of the drivers.
Figure 15.3.2: Step-by-step example for the application of the update rules. We have assumed
vmax = 2 and p = 1/3. Therefore on average one third of the cars qualifying will slow down in
the randomization step.
Figure 15.3.3: Typical space-time diagrams of the NaSch model with vmax = 5 and (a) p =
0.25, ρ = 0.20, (b) p = 0, ρ = 0.5. Each horizontal row of dots represents the instantaneous
positions of the vehicles moving towards the right, while the successive rows of dots represent the
positions of the same vehicles at the successive time steps.
Chapter 16
Biological transport
Chapter 17
Pedestrian dynamics
Bibliography
[2] C. Sire: Universal statistical properties of poker tournaments, J. Stat. Mech. (2007) P08013
(available from http://arxiv.org/abs/physics/0703122)
[3] P. Reimann: Brownian motors: noisy transport far from equilibrium, Phys. Rep. 361,
57 (2002) (available from http://arxiv.org/abs/cond-mat/0010237)
[4] R. Feynman: The Feynman Lectures on Physics, Vol. 1, Chapter 46 (Addison-Wesley, 1963)
[5] T.R. Kelly, I. Tellitu, J.P. Sestelo: In search of molecular ratchets, Angew. Chem. Int. Ed.
Engl. 36, 1866 (1997)
[6] N.G. van Kampen: Stochastic Processes in Physics and Chemistry (North Holland)
[8] G.M. Schütz: Exactly solvable models for many-body systems far from equilibrium, in
Phase Transitions and Critical Phenomena, Vol. 19, (Eds.: C. Domb and J. L. Lebowitz),
Academic Press, 2001
[9] R. Blythe, M.R. Evans: Nonequilibrium steady states of matrix product form: a solver’s
guide, J. Phys. A40, R333 (2007)
[10] P.L. Krapivsky, S. Redner, E. Ben-Naim: A Kinetic View of Statistical Physics (Cambridge
University Press, 2010)