MARKOV PROCESSES
Theorems and Problems
Evgenii B. Dynkin and Aleksandr A. Yushkevich
Moscow University
Translated from Russian by James S. Wood

PLENUM PRESS • NEW YORK • 1969

Professor Evgenii Borisovich Dynkin, who teaches at Moscow Uni-
versity, has written two earlier textbooks on Markov processes and
is also the author of a series of popular books on mathematics.
Aleksandr Adol'fovich Yushkevich holds an associate professorship
at Moscow University. His many publications include a series of
articles on Markov processes.
‘The original Russian text, published by Nauka Press in Moscow in
1967 as part of a series on Probability Theory and Mathematical
Statistics, has been corrected by the authors for the present edition.
E. B. Dynkin, A. A. Yushkevich
ТЕОРЕМЫ И ЗАДАЧИ О ПРОЦЕССАХ МАРКОВА
TEOREMY I ZADACHI O PROTSESSAKH MARKOVA
Library of Congress Catalog-Card Number 69-12529
© 1969 Plenum Press
A Division of Plenum Publishing Corporation
227 West 17 Street, New York, New York 10011
All rights reserved
‘No part of this publication may be reproduced in any
form without written permission from the publisher
Printed in the United States of America

Foreword
The concepts and methods of probability theory are finding a
growing number of applications in the natural sciences and engin-
eering, and they are making deeper and deeper inroads into the
various domains of mathematics itself. Access to these methods
is equally advantageous to mathematicians in diverse specializa-
tions, to physicists, and to engineers, and yet the elementary text-
books are only able to give a limited notion as to the present de-
velopment of the subject, while the monographs devoted to the lat-
est trends in the field are normally written for specialists and em-
ploy elaborate set-theoretic and analytic tools.
In order to grasp new mathematical concepts, one must be
aware of their power and visualize how they operate. This is best
initiated not with general theorems, but with specific problems.
The problems must be realistic and the situation typical, but not
cluttered with the kind of incidental technical difficulties that arise
in a more pedantic systematic framework.
The goal of the present book is to introduce the reader, follow-
ing the approach just indicated, to the latest findings in the
theory of Markov processes.
Markov processes comprise a class of stochastic processes
that has been thoroughly investigated and has enjoyed numerous appli-
cations. Those branches of the theory of Markov processes which
have already achieved classical status are presented in a few beau-
tifully written books (e.g., [1] and [2]). In recent years, however,
we find a growing number of new and significant trends, as well as
the discovery of new relationships between Markov processes and
mathematical analysis. These problems are comprehensively dis-
cussed in several monographs ([3]-[11]) which, however, are
scarcely geared for a beginning familiarization with the subject.
Yet the whole problem area is based mainly on transparent and
graphic ideas, the study of which provides a rich body of material
for conditioning one to the probabilistic way of thinking.
The book is divided into four chapters, each of which intro-
duces the reader to a definite problem area: potentials, harmonic
and excessive functions, and the limiting behavior of the paths
taken by a process (Chapt. I); the probabilistic solution of differ-
ential equations (Chapt. II); certain optimal control problems
(Chapt. III); and the probabilistic aspect of boundary problems in
analysis (Chapt. IV).
In the first chapter we consider the simplest Markov chain,
viz., a symmetric random walk on a lattice. It is explained that the
familiar concepts of a harmonic function, potential, capacitance,
and others from classical analysis have their analogs in this dis-
crete model and may be used for the solution of purely probabilis-
tic problems, such as the problem of the number of visits to a given
set. The foundation for this chapter is the work of Ito and
McKean [12].
It is shown in Chapt. II how probabilistic notions are used to
obtain analytical results. In particular, this approach is used to
prove the existence of a solution to the Dirichlet problem for the
Laplace equation in a broad class of domains.*
The connections between Markov processes and potentials
find an unexpected application in Chapt. III for the investigation
of the problem of the optimal stopping of a Markov process. The
source for this chapter is [13].
Many researchers lately have focused their attention on the
problem of the broadest classes of boundary conditions for differ-
ential and other equations. These problems are treated in Chapt.
IV from the probabilistic point of view. The analysis of an ulti-
*The relationship between probability theory and the Dirichlet problem was brought
to light long before the birth of the general theory of Markov processes (by H. B.
Phillips and N. Wiener in 1923; R. Courant, K. Friedrichs, and H. Lewy in 1928). This
idea was exhaustively developed in the work of A. Ya. Khinchin (1933) and I. G.
Petrovskii (1934). A formula expressing the solution of the Dirichlet problem in terms
of the trajectories of a Wiener process was derived by J. Doob (1954). However, Doob
applied it in a direction converse to our own, namely, for the derivation of the proper-
ties of paths from theorems of analysis.
mately simple model (birth and death processes) makes it possible
to work strictly with completely elementary devices. The pioneer
in the application of the probabilistic approach to boundary prob-
lems is Feller. He discusses birth and death processes in [14].
However, even though Feller is guided by probabilistic intuition, he
still introduces all his constructions in purely analytical form. Our
approach is based on a consideration of the properties of paths and
rests on the concept of the characteristic operator (a brief outline
of Chapt. IV is contained in [15]).
At the end of each chapter is a set of problems which go be-
yond the simple function of providing material for exercises in that
they supplement the text proper and present certain new informa-
tion. Thus, the problems in Chapt. II include a discussion of the
Martin boundary for a denumerable Markov chain.
So as not to impede the main flow of the probabilistic dis-
cussions, some of the ancillary analytical problems are treated in
the Appendix.
Besides the primary literature sources mentioned above, we
will have occasion throughout the book to refer to other works
(usually in the problems and examples).
All that is expected of the reader is an acquaintance with the
basic tenets of probability theory and classical analysis. Some of
the problems, however, require greater erudition. We have con-
sciously avoided references in the main text to measure theory and
measurability. The reader who has these concepts at his command
will have no difficulty in grasping the presentation at a more rigor-
ous set-theoretic level.
The book is basically the outgrowth of lectures given by the
first author at Moscow University in 1962-63 (the lectures were
recorded by M. B. Malyutov, S. A. Molchanov, and M. I. Freidlin).
This material was subsequently augmented and radically revised,
with the addition of problems to round out the content of the book.
The authors wish to thank I. L. Genis, who performed a tre-
mendous service in preparing the manuscript for publication.
January 22, 1966 E. B. Dynkin
A. A. Yushkevich

Contents
Chapter I. A Criterion of Recurrence
  1. Symmetric Random Walk
  2. The Transition Function
  3. Behavior of the Path of the Particle
  4. Harmonic Functions
  5. The Potential
  6. Excessive Functions
  7. Capacity
  8. The Recurrence Criterion
  9. Recurrence of a Set Situated on the Axis
  Problems

Chapter II. Probabilistic Solution of Certain Equations
  1. Definition of the Wiener Process
  2. Distribution at the Time of Exit from a Circle and Mean Exit Time
  3. The Markov and Strong Markov Properties
  4. Harmonicity of the Escape Probabilities
  5. Regular and Irregular Boundary Points
  6. The Zero-One Law; a Sufficient Criterion of Regularity
  7. The Dirichlet Problem
  8. Probabilistic Solution of the Poisson Equation
  9. Infinitesimal and Characteristic Operators
  Problems

Chapter III. The Optimal Stopping Problem
  1. The Problem of Optimal Choice
  2. Optimal Stopping Problem for a Markov Chain
  3. Excessive Functions
  4. The Value of a Game
  5. The Optimal Strategy
  6. Application to a Random Walk with Absorption and to the Optimal Choice Problem
  7. Optimal Stopping of a Wiener Process
  8. Proof of the Fundamental Property of Concave Functions
  Problems

Chapter IV. Boundary Conditions
  1. Introduction
  2. The Birth and Death Process
  3. The Canonical Scale and Escape Probabilities
  4. Repelling and Attracting Boundaries
  5. The Characteristic, Average Exit Time, and Velocity Measure
  6. Accessible and Inaccessible Boundaries
  7. Continuations of the Birth and Death Process: Statement of Problem
  8. The Jump Measure and Reflection Coefficient
  9. Absorption Coefficient; Inward Passability
  10. Boundary Conditions
  11. The Uniqueness Theorem
  Problems

Index

Chapter I
A Criterion of Recurrence
§1. Symmetric Random Walk
Consider a particle moving along the integer-valued points
0, ±1, ±2, ... on the x axis, executing unit jumps to the left or the
right at equal intervals. If in each such instance the probabilities
of jumping to the right or to the left are the same and equal to 1/2,
we say that the particle executes a symmetric random walk
on a line.
The points 0, +1,—1, ... that the particle can hit are called
states.
We propose to show that a particle with arbitrary initial posi-
tion will with probability one sooner or later enter any possible
state. Inasmuch as all states are clearly on an equal footing, it is
sufficient to show that a particle leaving any state will at some
time hit 0. We denote by π(x) the probability of hitting 0 from a
point x. Then π(0) = 1 and, according to the total-probability
formula,

    \pi(x) = \tfrac{1}{2}\,\pi(x-1) + \tfrac{1}{2}\,\pi(x+1)     (1)

for x ≠ 0. Consider the graph of the function π(x), x = 0, 1, 2, ..., k, ... .
Equation (1) means that any three neighboring points of this
graph lie on a single line. Consequently, all points of the graph of
the function π(x) for x ≥ 0 lie on one line. Inasmuch as π(0) = 1, this
line emanates from the point (0, 1). If π(x) were smaller than one
for some positive x, this line would have to intersect the x axis,
and π(x) would be negative for sufficiently large x. This is impos-
sible; hence π(x) = 1 for all x ≥ 0. Due to the symmetry of the ran-
dom walk, π(x) = 1 for x < 0 as well. Thus, for any initial state the
probability of attaining zero is equal to one.
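The argument above can be illustrated numerically. The following is a minimal Monte Carlo sketch (an addition for illustration, not part of the original text; all names are illustrative): it estimates the hitting probability π(x) by running many walks with a finite step budget. Because excursions can be very long, the estimates only approach one as the step budget grows.

import random

def hits_zero(x, max_steps):
    """Return True if a symmetric walk started at x reaches 0 within max_steps."""
    pos = x
    for _ in range(max_steps):
        if pos == 0:
            return True
        pos += random.choice((-1, 1))
    return pos == 0

def estimate_hitting_probability(x, max_steps=100_000, trials=2_000):
    """Monte Carlo estimate of pi(x), truncated at max_steps."""
    return sum(hits_zero(x, max_steps) for _ in range(trials)) / trials

if __name__ == "__main__":
    for x in (1, 5, 20):
        print(x, estimate_hitting_probability(x))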
A logical generalization of the random walk on a line is a
random walk on an l-dimensional integer-valued lattice H^l. This
lattice consists of points (vectors) of the form

    x = x_1 e_1 + \dots + x_l e_l,

where e_1, ..., e_l comprise the orthonormal basis of an l-dimen-
sional space, and the coordinates x_1, ..., x_l are arbitrary integers.
Increasing or decreasing one of the coordinates by one and leaving
the other coordinates unchanged, we obtain the 2l neighboring
lattice points to x (thus, in the two-dimensional case every point
of the lattice has four neighbors: one to the right, one to the left,
one above, and one below). At each step the particle has equal prob-
abilities 1/(2l) of passing to one of the neighboring states, regard-
less of its position at the preceding instant.
It turns out in the two-dimensional, as in the one-dimensional,
case that the particle, on leaving any point of the lattice, has a prob-
ability one of hitting any other point (see the problems at the end of
the chapter). For lattices of three or more dimensions, on the
other hand, the probability of reaching one state from another, as
we shall see, is less than one. The probability of reaching a cer-
tain set B rather than a single point can be either equal to or less
than one. We designate this probability π_B(x), where x is the ini-
tial state of the particle. We call the set B recurrent if π_B(x) = 1
for all points x of the lattice, and nonrecurrent (transient) if π_B(x) < 1
for at least one point x. In the present chapter we shall deduce a
criterion whereby it is possible to distinguish between recurrent
and nonrecurrent sets.
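A small simulation sketch (an addition for illustration, not from the book) makes the contrast between dimensions concrete: it estimates, within a finite step budget, the probability of ever hitting the origin from a neighboring point of the lattice for l = 1, 2, 3. The estimates for l = 1 and l = 2 creep toward one as the budget grows, while for l = 3 they stay visibly below one.

import random

def hits_origin(l, max_steps):
    """Symmetric walk on the l-dimensional lattice, started at the neighbor e_1 of the origin."""
    pos = [1] + [0] * (l - 1)
    for _ in range(max_steps):
        if all(c == 0 for c in pos):
            return True
        axis = random.randrange(l)            # choose a coordinate ...
        pos[axis] += random.choice((-1, 1))   # ... and change it by +-1
    return False

def estimate(l, max_steps=50_000, trials=2_000):
    return sum(hits_origin(l, max_steps) for _ in range(trials)) / trials

if __name__ == "__main__":
    for l in (1, 2, 3):
        print(l, estimate(l))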
§2. The Transition Function
We let x(0) represent the initial position of a particle execut-
ing a random walk and let x(n) (n=1, 2, 3, ...) represent its position
after n steps.
The probability of some event A connected with a random walk,
of course, depends on the point x from which the walk was initiated.
We designate this probability P_x{A} and represent the mathema-
tical expectation of a random variable ξ corresponding to the dis-
tribution P_x by the symbol M_x ξ.
Next we denote by p(n, x, y) the probability that a particle
leaving the point x will hit the point y after n steps:
    p(n, x, y) = P_x\{x(n) = y\}.
The function p(n, x, y) is an important characteristic of the random
walk and is called its transition function. Clearly, p(0, x,
x) = 1 and p(0, x, y) = 0 for x ≠ y. It is also clear that

    \sum_{y} p(n, x, y) = 1.*
The quantity

    p(n, x, B) = P_x\{x(n) \in B\} = \sum_{y \in B} p(n, x, y),

where B is some set in l-dimensional space, is called the transi-
tion probability from x to B in n steps.
An essential property of the random walk, and one that con-
tributes to its analysis, is the mutual independence of the jumps
ξ_k = x(k) − x(k−1) (k = 1, 2, ...). The vectors ξ_k are also independent
of the initial state of the particle, and they all have the same dis-
tribution. Specifically, each of the vectors ξ_k assumes with equal
probability every one of the values ±e_1, ..., ±e_l. Utilizing this fact,
we derive an appropriate integral representation for the transition
function p(n, x, y).
We denote by θ(x) a linear form assuming the value θ_k on the
vector e_k. This means that if x = x_1 e_1 + ... + x_l e_l, then θ(x) = θ_1 x_1 +
... + θ_l x_l. Consider the function

    F(\theta) = \sum_{y} p(n, x, y)\, e^{i\theta(y)} = M_x\, e^{i\theta(x(n))},     (2)
i.e., the characteristic function of the random vector x(n). (In fact,
the series in Eq. (2) contains only a finite number of nonzero terms,
*Here and elsewhere the symbol Σ_y indicates summation over all points of the lat-
tice H^l.
because after n steps the particle can visit not more than (2l)^n dif-
ferent states.) The transition function p(n, x, y) is easily expressed
in terms of the function F(θ). Thus, we let Q be the set of all li-
near forms θ(z) = θ_1 z_1 + ... + θ_l z_l with coefficients θ_1, ..., θ_l whose
absolute values do not exceed π. We multiply Eq. (2) by e^{-iθ(z)} (z is a
point of the lattice H^l) and integrate over Q. Since y and z are
vectors with integer -valued coordinates,
    \int_Q e^{i\theta(y) - i\theta(z)}\, d\theta
        = \int_{-\pi}^{\pi} \cdots \int_{-\pi}^{\pi}
          e^{i[\theta_1(y_1 - z_1) + \dots + \theta_l(y_l - z_l)]}\, d\theta_1 \cdots d\theta_l
        = \begin{cases} (2\pi)^l & \text{for } y = z, \\ 0 & \text{for } y \ne z, \end{cases}
so that, consequently,

    p(n, x, z) = \frac{1}{(2\pi)^l} \int_Q F(\theta)\, e^{-i\theta(z)}\, d\theta.     (3)
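As a numerical illustration of Eq. (3) (an added sketch, not part of the text), consider the one-dimensional case l = 1 with the walk started at x = 0. A single jump has characteristic function cos θ, and the jumps are independent, so F(θ) = cosⁿ θ; the code below compares the probabilities p(n, 0, z) obtained by iterating one-step transitions with the values recovered from the integral (3).

import numpy as np

def p_by_iteration(n):
    """Distribution of the walk after n steps, started at 0, as a dict z -> probability."""
    probs = {0: 1.0}
    for _ in range(n):
        nxt = {}
        for z, pr in probs.items():
            for step in (-1, 1):
                nxt[z + step] = nxt.get(z + step, 0.0) + 0.5 * pr
        probs = nxt
    return probs

def p_by_fourier(n, z, grid=20_001):
    """Numerical value of (1/2pi) * integral over [-pi, pi] of cos^n(theta) e^{-i theta z}."""
    theta = np.linspace(-np.pi, np.pi, grid)
    integrand = np.cos(theta) ** n * np.exp(-1j * theta * z)
    return np.trapz(integrand, theta).real / (2 * np.pi)

if __name__ == "__main__":
    n = 6
    direct = p_by_iteration(n)
    for z in range(-n, n + 1):
        print(z, round(direct.get(z, 0.0), 6), round(p_by_fourier(n, z), 6))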
Let us find the function F(θ). Inasmuch as

    x(n) = x + \xi_1 + \dots + \xi_n,
ε > 0 there exists a point y at which g(y) > M − ε. Re-
iterating the arguments of the preceding paragraph, we readily
show that if g is harmonic, then at any point y' neighboring upon
y the estimate g(y') ≥ M − 2lε holds. Hence, if M > 0, then it is possible
to pick a chain of points y_0, y_1 = y_0 + e_1, y_2 = y_1 + e_1, ..., y_n = y_{n-1} + e_1
for which the sum

    s = g(y_0) + g(y_1) + \dots + g(y_n)
is greater than any prespecified number N.
If now f is an arbitrary bounded harmonic function, the func-
tion g(x) = f(x + e_1) − f(x) is also harmonic and bounded. For it the
sum
    s = g(y_0) + g(y_1) + \dots + g(y_n) = f(y_n + e_1) - f(y_0)

does not exceed twice the upper bound of |f|. Therefore, the
exact upper bound of the function g cannot be positive. This im-
plies that for any x
    g(x) = f(x + e_1) - f(x) \le 0.
In the foregoing discussion it is admissible to replace the vector
e_1 with the vector −e_1. Consequently,

    f(x + e_1) = f(x).
It is proved analogously that f(x + e_k) = f(x) for any k.
An example of a harmonic function is the function π̄_B(x) ex-
pressing the probability, starting from x, of visiting the set B in-
finitely often. Indeed,

    P\bar{\pi}_B(x) = \sum_{y} p(1, x, y)\,\bar{\pi}_B(y)

is the probability, starting from x, of visiting B infinitely often after
the first step. Clearly, this probability is equal to π̄_B(x).
Inasmuch as the function π̄_B(x) is bounded, it is constant ac-
cording to the above proof. We will show that it is equal to one or
zero depending on whether the set B is recurrent or nonrecurrent. Let
the set B first be nonrecurrent. We denote by q(n, y) the probabil-
ity, starting from x, of visiting B for the first time in the nth step
and of occupying the state y at that time, and we denote by π_B(x),
as before, the probability, starting from x, of visiting B at some
time. Clearly,

    \pi_B(x) = \sum_{n=0}^{\infty} \sum_{y \in B} q(n, y).
In order to visit B infinitely often, the particle must visit B for the
first time in some step, then visit B infinitely often. Computing the
probability of this event according to the total probability formula,
we obtain
    \bar{\pi}_B = \bar{\pi}_B(x) = \sum_{n=0}^{\infty} \sum_{y \in B} q(n, y)\,\bar{\pi}_B(y) = \pi_B(x)\cdot\bar{\pi}_B,     (14)

where x is any point of the lattice. Inasmuch as B is nonrecurrent,
there exists an x at which π_B(x) < 1, so that π̄_B = 0.
If the set B is recurrent, then, clearly, the probability of the
event C_n = {The particle never visits B after the nth step} is equal
to zero for any n ≥ 0 and any initial state x. Therefore,

    1 - \bar{\pi}_B(x) = P_x\{\text{the particle visits } B \text{ only a finite number of times}\}
        = P_x\{C_0 \cup C_1 \cup C_2 \cup \dots\}
        \le P_x\{C_0\} + P_x\{C_1\} + P_x\{C_2\} + \dots = 0;

consequently, π̄_B = 1.
Consequently, there is another possible definition of re-
currence. A set B is recurrent if a particle starting
from any point of the lattice visits B infinitely
often with probability one. If, however, the prob-
ability of this event is less than one for some x,
it is equal to zero for all x, and B is nonrecurrent.
In concluding this section, we note that not only bounded har-
monic functions, but also harmonic functions bounded below (or
above), are constant on the entire lattice H^l (see the problems).
The class of harmonic functions unbounded above and below is con-
siderably broader. For example, any linear function of the coor-
dinates x_1, ..., x_l of a vector x satisfies the equation Pf = f, hence
it is harmonic.
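As a quick check of the last remark (a worked calculation added here, not verbatim from the text), take a linear function f(x) = c_1 x_1 + ... + c_l x_l. With P acting as the average over the 2l neighboring states,

    Pf(x) = \frac{1}{2l} \sum_{k=1}^{l} \bigl[ f(x + e_k) + f(x - e_k) \bigr]
          = \frac{1}{2l} \sum_{k=1}^{l} \bigl[ 2 f(x) + c_k - c_k \bigr] = f(x),

so every such function is indeed harmonic, and it is unbounded both above and below unless all the c_k vanish.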
§5. The Potential
The Laplace operator Δ ties in closely with the concept of
the Newtonian potential. Let a mass be distributed in three-di-
mensional space R³ with a density φ(y). According to the universal
gravitation law of Newton, this mass acts on a unit mass situated
at a point x with a force proportional to the gradient of the function
    f(x) = \int_{R^3} \frac{\varphi(y)}{\|x - y\|}\, dy,     (15)

where ‖x − y‖ indicates the distance between the points x and y.
The function f(x) is called the potential of the distribution φ(y). It
may also be interpreted as the potential of an electrostatic field
created by a charge distribution φ.
It turns out, given very mild constraints on the function φ,
that the potential f is the solution of the Poisson equation

    \Delta f(x) = -4\pi\,\varphi(x).     (16)
In complete analogy, the solution of Eq. (16) in an l-dimensional
space R^l (l ≥ 3) is the integral

    f(x) = b_l \int_{R^l} \frac{\varphi(y)}{\|x - y\|^{\,l-2}}\, dy,     (17)

where b_l is some positive constant. This integral is called the po-
tential of the distribution φ in the l-dimensional space.
In the discrete case Eq. (16) goes over to the equation
    \Delta f(x) = -\varphi(x),     (18)
where f and φ are functions on the lattice H^l. Let us consider the
operator
    G\varphi = \varphi + P\varphi + P^2\varphi + \dots + P^n\varphi + \dots,     (19)

where φ ≥ 0. Let

    f = G\varphi.     (20)

According to (19),

    P\,G\varphi = G\varphi - \varphi,

hence

    \Delta f = (P - E)f = (P - E)\,G\varphi = G\varphi - \varphi - G\varphi = -\varphi.
Consequently, the operator G is analogous to the integral operator
specified by Eq. (17). We therefore refer to the function Gφ as the
potential of the function φ [Eq. (20)].
The discrete potential has a straightforward probabilistic
interpretation. As a matter of fact,

    P^n\varphi(x) = \sum_{y} p(n, x, y)\,\varphi(y) = M_x\,\varphi(x(n)).     (21)
For n = 0 Eq. (21) reduces to the equation φ(x) = φ(x), while for n = 1
it reduces to the definition of the operator P [see Eq. (13)]. For all
other values of n Eq. (21) is proved by induction.
According to the total probability formula,
    p(n+1, x, y) = \sum_{z} p(1, x, z)\, p(n, z, y).

Assuming that Eq. (21) has already been proved for n, we deduce
    M_x\,\varphi(x(n+1)) = \sum_{y} p(n+1, x, y)\,\varphi(y)
        = \sum_{z} p(1, x, z) \Bigl[ \sum_{y} p(n, z, y)\,\varphi(y) \Bigr]
        = \sum_{z} p(1, x, z)\, P^n\varphi(z) = P^{n+1}\varphi(x),

i.e., that (21) is true for n + 1 as well.
It follows from (21) that
    G\varphi(x) = \sum_{n=0}^{\infty} M_x\,\varphi(x(n)) = M_x \sum_{n=0}^{\infty} \varphi(x(n)).     (22)
This equation leads to the following important interpretation of the
potential. Let every hit at the point y bring a payoff φ(y). Then
Gφ(x) is the mean value of the payoff obtained during a random walk
of a particle with initial point x.
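A worked special case may help fix ideas (an addition, not from the text): take the payoff φ equal to 1 at a single point y_0 and 0 elsewhere. Then, by (21) and (22),

    G\varphi(x) = \sum_{n=0}^{\infty} M_x\,\varphi(x(n)) = \sum_{n=0}^{\infty} P_x\{x(n) = y_0\} = \sum_{n=0}^{\infty} p(n, x, y_0),

i.e., the potential at x is the expected number of visits the particle pays to y_0; this is exactly the quantity denoted g(x, y_0) below.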
Using the notation
    g(x, y) = \sum_{n=0}^{\infty} p(n, x, y),
which was introduced in §3, we rewrite the expression for the po-
tential in the form
    G\varphi(x) = \sum_{y} g(x, y)\,\varphi(y).     (23)
As noted at the end of §3, for large ‖x − y‖

    g(x, y) \approx \frac{a}{\|x - y\|^{\,l-2}}.

Hence,

    G\varphi(x) = \sum_{y} g(x, y)\,\varphi(y) \approx \frac{a}{\|x\|^{\,l-2}} \sum_{y} \varphi(y)

for ‖x‖ → ∞ in every case when φ(y) is different from zero only
at a finite number of points. Consequently, for large ‖x‖ the dis-
crete potential behaves like the Newtonian potential (17).
We now show that if f = Gφ and τ is the time of
first visit of the particle to the set B, then

    f(x) - M_x f(x(\tau)) = M_x \sum_{n=0}^{\tau-1} \varphi(x(n))     (24)

[if the particle never visits B, we set τ = ∞, f(x(τ)) = 0].
Let

    f(x) = G\varphi(x) = M_x\,[\varphi(x(0)) + \varphi(x(1)) + \dots + \varphi(x(n)) + \dots].     (25)

Dividing the path of the particle into two parts, viz., the part before
the time τ and the part after the time τ, we write

    f(x) = M_x\,[\varphi(x(0)) + \dots + \varphi(x(\tau-1))]
         + M_x\,[\varphi(x(\tau)) + \varphi(x(\tau+1)) + \dots].     (26)
The first term in (26) clearly represents the average payoff during
the random walk prior to visiting B, while the second term repre-
sents the average payoff after the first visit to B. In order to ob-
tain (24) from (26), we need only verify that

    M_x\,[\varphi(x(\tau)) + \varphi(x(\tau+1)) + \dots] = M_x f(x(\tau)).
Making use of the probabilities q(n, y) = P_x{τ = n, x(n) = y}, we write

    M_x\,\varphi(x(\tau + k)) = \sum_{n,\,y} q(n, y)\, M_y\,\varphi(x(k)),
    \qquad \sum_{n} q(n, y) = P_x\{x(\tau) = y\},

where n varies from 0 to ∞, and y spans the values from B. Con-
sequently,

    M_x\,[\varphi(x(\tau)) + \varphi(x(\tau+1)) + \dots]
        = \sum_{n,\,y} q(n, y) \sum_{k=0}^{\infty} M_y\,\varphi(x(k))
        = \sum_{n,\,y} q(n, y)\, f(y)
        = \sum_{y} P_x\{x(\tau) = y\}\, f(y) = M_x f(x(\tau)).
§6. Excessive Functions
We recall that a function f(x) (x ∈ H^l) is called superhar-
monic if Pf ≤ f. Nonnegative superharmonic functions play an im-
portant part in the theory of Markov processes and are commonly
called excessive functions.
Inasmuch as Pf = f for a harmonic function, a harmonic func-
tion is excessive if it is nonnegative. Moreover, if f = Gφ (φ ≥ 0),
then

    f - Pf = (\varphi + P\varphi + P^2\varphi + \dots) - (P\varphi + P^2\varphi + P^3\varphi + \dots) = \varphi \ge 0.
Therefore, the potential of any nonnegative function is excessive.
We will show that any excessive function is equal
to the sum of a nonnegative harmonic function and
the potential of a nonnegative function (this result is
the discrete analog of the well-known Riesz theorem in the theory
of differential equations).
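Before the detailed argument, here is the idea in outline (a sketch added for orientation, not verbatim from the text): with φ = f − Pf ≥ 0, identity (27) below gives

    \varphi + P\varphi + \dots + P^{n-1}\varphi = f - P^n f \le f,

so the series Gφ converges; moreover P^{n+1} f = P^n(Pf) ≤ P^n f, so the nonnegative functions P^n f decrease to a limit h, which satisfies Ph = h, and letting n → ∞ in (27) yields f = Gφ + h.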
Let f be an excessive function. We set f − Pf = φ, noting that
φ ≥ 0, and write the obvious identity

    f = \varphi + P\varphi + \dots + P^{n-1}\varphi + P^n f.     (27)

It follows from the estimate

    \varphi + P\varphi + \dots + P^{n-1}\varphi = f - P^n f \le f

§7. Capacity

Capacity is one of the focal
concepts in the theory of the Laplace equation.
Starting with discrete potentials f = Gφ, we seek to develop
analogous formulations for functions defined on a lattice H^l. We
fix a subset B of such a lattice and investigate the class K_B of all
functions φ ≥ 0 equal to zero outside B and such that Gφ ≤ 1.
For the function f = Gφ, where φ ∈ K_B, Eq. (24) of §5 takes
the form

    f(x) = M_x f(x(\tau)),     (32)

where τ is the time of first visit of the particle to the set B. It
follows from the inequality f ≤ 1 that M_x f(x(τ)) ≤ P_x{τ < ∞} = π_B(x).
We deduce from Eq. (32), therefore, that

    f(x) \le \pi_B(x).     (33)
If the set B is nonrecurrent, then, as we saw in the preceding sec-
tion, π_B = Gφ_B, where φ_B = π_B − Pπ_B ∈ K_B. Consequently, π_B is
reasonably called the equilibrium potential, φ_B is known
as the equilibrium distribution, and the capacity of
the set B is defined by the formula

    C(B) = \sum_{y} \varphi_B(y).     (34)
For recurrent sets the concept of capacity is not introduced. We
recall that all finite sets are nonrecurrent.
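A simple worked example (added here; not in the text at this point): for a one-point set B = {x_0} on a lattice of dimension l ≥ 3, decomposing the visits to x_0 according to the first one gives g(x, x_0) = π_B(x) g(x_0, x_0), so the equilibrium potential is π_B(x) = g(x, x_0)/g(x_0, x_0), and φ_B = π_B − Pπ_B is concentrated at x_0 with

    \varphi_B(x_0) = 1 - \frac{g(x_0, x_0) - 1}{g(x_0, x_0)} = \frac{1}{g(x_0, x_0)},
    \qquad C(\{x_0\}) = \frac{1}{g(x_0, x_0)};

the larger the expected number of visits g(x_0, x_0) the walk pays to its starting point, the smaller the capacity of that point.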
We now set forth an extremal property of the equilibrium dis-
tribution φ_B, forming the discrete analog of the Gauss theorem in
Newtonian potential theory. Let a set B be nonrecurrent. We will
show that for any function φ ∈ K_B in this case

    \sum_{y} \varphi(y) \le \sum_{y} \varphi_B(y) = C(B).     (35)
The quantity Σ_y φ(y) is logically called the total charge
corresponding to the distribution φ. The inequality tells us that
the capacity of a nonrecurrent set B is definable
as the maximum total charge concentrated on B
whose potential does not exceed one.
For the proof of the relation (35) we introduce the abbre-
viated notation
    (f_1, f_2) = \sum_{y} f_1(y)\, f_2(y).
By virtue of the symmetry of the random walk, p(n, x, y)=p(n, y, x),
and, therefore, g(x, y) =g(y, x). Consequently,
    (Gf_1, f_2) = (f_1, Gf_2).
Utilizing the fact that π_B(x) = 1 for x ∈ B and that Gφ ≤ 1 for φ ∈ K_B,
we deduce that

    \sum_{y} \varphi(y) = (\varphi, \pi_B) = (\varphi, G\varphi_B) = (G\varphi, \varphi_B)
        \le (1, \varphi_B) = \sum_{y} \varphi_B(y) = C(B).
Fig. 1.

§8. The Recurrence Criterion
Inasmuch as any bounded set is nonrecurrent, the recurrence
of the set B does not depend on the structure of B inside any pre-
determined sphere. As it turns out, the recurrence of B depends
on how rapidly the number of points of B falling within a sphere of
radius r grows as r → ∞.
Let us consider an expanding sequence of spheres with cen-
ters at zero and radii r = 1, 2, 2², ..., 2^k, ..., which grow in
geometric progression. We denote by B_k that part of the set B
which falls between the kth and the (k+1)st spheres (more pre-
cisely, the set of those x from B for which 2^{k−1} < ‖x‖ ≤ 2^k; see Fig. 1).
The set B_k is finite, hence for it the capacity C(B_k) is defined. The
following criterion holds.
In order for the set B to be recurrent, it is
necessary and sufficient that the following series
diverge:
    \sum_{k} \frac{C(B_k)}{2^k}.     (36)
We first prove the necessity of this condition, i.e., we show
that convergence of the series (36) implies nonrecurrence of the
set B.
We first verify that, along with the series (36), the following
series also converges:
    \sum_{k} \pi_{B_k}(0).     (37)
We do this with the aid of the asymptotic estimate given at the end
of §3:
    g(x, y) \sim \frac{a}{\|x - y\|} \qquad (\|x - y\| \to \infty),     (38)

where a is the constant in Eq. (12). By virtue of (38), there exists an N > 0
such that the following inequality is fulfilled for y ∈ B_k, k > N:

    g(0, y) \le \frac{Q_1}{2^k}

(since ‖y‖ > 2^{k−1} for y ∈ B_k). Because π_{B_k} = Gφ_{B_k} and the total
charge of the equilibrium distribution φ_{B_k} is equal to C(B_k), we obtain

    \pi_{B_k}(0) \le \frac{Q_1\, C(B_k)}{2^k},

where Q_1 > 0 does not depend on y or k.
If a particle, on leaving y ∈ S_k, visits B_{2k}, then either the
event A_k or the event D_k = {The particle visits B_{2k} after hitting the
layer S_{k+1}} occurs. Therefore,

    \pi_{B_{2k}}(y) \le P_y\{A_k\} + P_y\{D_k\}.
It is clear that

    P_y\{D_k\} \le \max_{z \in S_{k+1}} \pi_{B_{2k}}(z)

[this inequality is formally proved by the introduction of the prob-
abilities q(n, z) of first hitting the layer S_{k+1} at the time n at the
point z; see the discussion at the end of §5]. Consequently,
    P_y\{A_k\} \ge \pi_{B_{2k}}(y) - \max_{z \in S_{k+1}} \pi_{B_{2k}}(z).     (42)
It remains for us to evaluate the function π_{B_{2k}}. Since this function
is the potential of the equilibrium distribution φ_{B_{2k}}, while the func-
tion φ_{B_{2k}} is equal to zero outside B_{2k} and the total charge of the
equilibrium distribution is equal to the capacity C(B_{2k}), it follows
from the inequality (42) that for y ∈ S_k

    P_y\{A_k\} \ge \sum_{u \in B_{2k}} g(y, u)\,\varphi_{B_{2k}}(u) - \max_{z \in S_{k+1}} \sum_{u \in B_{2k}} g(z, u)\,\varphi_{B_{2k}}(u)
        \ge C(B_{2k}) \Bigl[ \min_{u \in B_{2k}} g(y, u) - \max_{z \in S_{k+1},\, u \in B_{2k}} g(z, u) \Bigr].
Applying the asymptotic formula (38) here, we see that for suffi-
ciently large k and y ∈ S_k

    P_y\{A_k\} \ge C(B_{2k}) \Bigl( \frac{a}{r_k} - \frac{a}{R_k} \Bigr),

where r_k is the largest distance between the points y ∈ S_k and
u ∈ B_{2k}, and R_k is the least distance between z ∈ S_{k+1} and u ∈ B_{2k}.
It follows from the relative position of the sets S_k, B_{2k}, and S_{k+1}
that

    r_k \le 2^{2k} + 2^{2k} = 2 \cdot 2^{2k}, \qquad
    R_k \ge 2^{2k+2} - 2^{2k} = 3 \cdot 2^{2k}.

Consequently, for sufficiently large k

    P_y\{A_k\} \ge \frac{Q_2\, C(B_{2k})}{2^{2k}},
and the inequality (41) is thus proved.
Now that we have the inequality (41), it is no longer a diffi-
cult matter to prove the recurrence of the set B. We pick a num-
ber m such that the initial state x lies inside the layer S_m and the
inequality (41) is satisfied for all k ≥ m. We denote by τ_k the time
of first visit to the layer S_k. The opposite of the event A_k is the
event Ā_k = {During the period (τ_k, τ_{k+1}] the particle did not visit
B_{2k}}. It follows from the estimate (41) that, irrespective of the
values of τ_k and x(τ_k) or the nature of the motion prior to the time
τ_k, the probability of Ā_k is not greater than

    1 - \frac{Q_2\, C(B_{2k})}{2^{2k}}.

For any s, therefore,

    P_x\{\bar A_m \cap \bar A_{m+1} \cap \dots \cap \bar A_{m+s}\}
        \le \prod_{k=m}^{m+s} \Bigl( 1 - \frac{Q_2\, C(B_{2k})}{2^{2k}} \Bigr).
Indeed, let

    q_k(n, y) = P_x\{\tau_k = n,\ x(\tau_k) = y,\ \bar A_m \cap \dots \cap \bar A_{k-1}\}.

Then

    P_x\{\bar A_m \cap \dots \cap \bar A_k\} = \sum_{n,\,y} q_k(n, y)\, P_y\{\bar A_k\}
        \le \Bigl( 1 - \frac{Q_2\, C(B_{2k})}{2^{2k}} \Bigr) P_x\{\bar A_m \cap \dots \cap \bar A_{k-1}\}.
Passing to the limit as s → ∞ and bearing in mind that the
series Σ_k C(B_{2k})/2^{2k} diverges, we infer that

    P_x\{A_m \cup A_{m+1} \cup \dots \cup A_{m+s} \cup \dots\}
        = 1 - P_x\{\bar A_m \cap \bar A_{m+1} \cap \dots \cap \bar A_{m+s} \cap \dots\} = 1,

and hence, that the particle with probability one visits one of the
sets B_{2k} belonging to B. Thus, the set B must be recurrent.
§9. Recurrence of a Set Situated on the Axis
Making use of the recurrence criterion developed in the pre-
ceding section, we now try to imagine what recurrent and nonrecur-
rent sets of a three-dimensional lattice look like.
It is clear that any subset of a nonrecurrent set is also non-
recurrent, and that if a set contains a recurrent subset, the set it-
self is recurrent. Moreover, we know that any bounded set is non-
recurrent.
We denote the coordinates of the point x(n) by x(n), x)(n), and
x,(n). We will show that the coordinate plane x;=0 is a recurrent
set. Clearly, the value of x₃(n) varies according to the following
law: during unit time it increases or decreases by unity with prob-
abilities 1/6 each and keeps the same value with probability 2/3. The
probability that the value of x₃(n) will retain the same value k times
in succession is equal to (2/3)^k. It tends to zero as k → ∞, hence
the value of x₃(n) must change sooner or later. It is evident from
symmetry considerations that the first increment of the quantity
x₃(n) will be equal to either −1 or +1 with probability 1/2. There-
fore, the random variation law governing x₃(n) differs from the one-
dimensional symmetric random walk described at the beginning of
§1 only in the possibility of the particle becoming trapped for some
finite period of time in every state in which it happens to be situ-
ated. It is clear that such waiting times do not alter the probabil-
ities of attaining the value 0, but only affect the speed of motion,
not the configuration of the path. Inasmuch as the point 0 is ac-
cessible from any other point with probability one in a simple ran-
dom walk, x₃(n) reaches zero at some time. Thus, the coordinate
plane x₃ = 0 is a recurrent set.
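The reduction to a one-dimensional walk can also be checked by simulation. The sketch below (an illustrative addition, not from the book) runs the "lazy" walk that x₃(n) performs — staying put with probability 2/3 and moving ±1 with probability 1/6 each — and estimates, within a finite step budget, how often it reaches 0; the estimate tends to one as the budget grows.

import random

def lazy_walk_hits_zero(start, max_steps):
    """The law of x3(n): +1 or -1 with probability 1/6 each, unchanged with probability 2/3."""
    x3 = start
    for _ in range(max_steps):
        if x3 == 0:
            return True
        r = random.random()
        if r < 1 / 6:
            x3 += 1
        elif r < 1 / 3:
            x3 -= 1
        # otherwise (probability 2/3) x3 keeps its value
    return x3 == 0

def estimate(start=7, max_steps=200_000, trials=1_000):
    return sum(lazy_walk_hits_zero(start, max_steps) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(estimate())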