Introduction to Random Process
1 / 41
Outline
- Textbook: chapter 1 - Shreve I
- Introduction to random processes
- Classification of random processes
- Martingale property
- Stopping time
2 / 41
Table of Contents
Random/Stochastic processes
Martingale
Stopping time
3 / 41
Stochastic processes or Random process
- A stochastic process is a mathematical model of a probabilistic experiment that evolves in time and generates a sequence of numerical values.
- Each numerical value in the sequence is modeled by a random variable.
- A stochastic process is thus a collection of random variables.
4 / 41
Example
- the sequence of daily prices of a stock;
- the sequence of scores in a football game;
- the sequence of failure times of a machine;
- the sequence of hourly traffic loads at a node of a communication network;
- the sequence of radar measurements of the position of an airplane.
5 / 41
Random processes
A stochastic process is a collection of random variables (Xt)t∈I on a probability space (Ω, F, P) taking values in a set S.
- t: time
- I: the index set of the process
- S: the state space of the stochastic process
- For each fixed ω, Xt(ω) is a deterministic function of time, called the sample path (realization, trajectory, sample function).
- At each instant t, Xt is a random variable.
6 / 41
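The two viewpoints above can be made concrete with a short simulation. The Python sketch below (an illustration added here, not from the slides; the random walk and the function name `sample_path` are my own choices) treats the random seed as the outcome ω: fixing the seed gives one deterministic sample path, while fixing t and varying the seed gives samples of the random variable Xt.

```python
import random

def sample_path(n_steps, seed):
    """One realization (sample path) of a simple symmetric random walk."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += rng.choice([-1, 1])  # one +-1 step per unit of time
        path.append(x)
    return path

# Fixing omega (here: the seed) gives a deterministic function of time,
# the sample path t -> X_t(omega):
one_path = sample_path(10, seed=42)

# Fixing t = 5 and varying omega gives samples of the random variable X_5:
x5_samples = [sample_path(10, seed=s)[5] for s in range(100)]
```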
Example
A collection (S0, S1, S2) is a stochastic process; S0, S1, S2 are random variables.
Corresponding to the outcome ω = HH, we have the sample path (4, 8, 16).
7 / 41
Use random processes to
- model phenomena which evolve over time
- take dependence into account, e.g., how knowledge of the asset price up to today affects the behavior of the asset price tomorrow or in the future
- forecast future values
- evaluate risk
8 / 41
Example
Consider a binomial asset pricing model with
- S0 = 4
- p(H) = p(T) = 1/2
- u = 2, d = 1/2
Suppose we know that S1 = 2, S2 = 4, S3 = 8.
Then E(S4 | S1 = 2, S2 = 4, S3 = 8) is used to forecast the asset price at period 4.
9 / 41
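The one-step forecast above can be computed directly: since S4 depends on the path only through S3, the conditional expectation is p(H)·u·S3 + p(T)·d·S3. A minimal Python sketch (the function name and defaults are my own, chosen to match the model parameters on this slide):

```python
def expected_next_price(s, u=2.0, d=0.5, p_head=0.5):
    """E(S_{n+1} | S_n = s): the price moves to u*s with probability
    p(H) and to d*s with probability p(T) = 1 - p(H)."""
    return p_head * u * s + (1.0 - p_head) * d * s

# E(S4 | S1 = 2, S2 = 4, S3 = 8) depends on the path only through S3 = 8:
forecast = expected_next_price(8.0)  # 0.5 * 16 + 0.5 * 4 = 10.0
```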
Example - Autoregressive model AR(1)
Let Sn be the asset price at period n and rn = (Sn − Sn−1)/Sn−1 be the percentage return at period n.
The return at period n depends on the return at period n − 1 and a random noise εn:
rn = c + φ rn−1 + εn
where ε1, ε2, . . . are independent (unpredictable terms affecting the return).
10 / 41
- Assume that c = 3, φ = 1 and εn ∼ N(0, 1).
- Given that r0 = 3, r1 = 1, r2 = 4, r3 = −1, we have
r4 = 3 + (−1) + ε4 = 2 + ε4
- The conditional distribution of r4:
r4 | (r0 = 3, r1 = 1, r2 = 4, r3 = −1) ∼ N(2, 1)
- Forecast of the return at period 4:
E(r4 | r0 = 3, r1 = 1, r2 = 4, r3 = −1) = 2
- Risk that the return at period 4 is negative:
P(r4 < 0 | r0 = 3, r1 = 1, r2 = 4, r3 = −1) = P(X < 0)
where X ∼ N(2, 1).
11 / 41
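These two numbers can be reproduced without any statistics library: the forecast is c + φ·r3, and the risk is the standard normal CDF evaluated via the error function. A small Python sketch (added here for illustration; `normal_cdf` is my own helper name):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

c, phi, r3 = 3.0, 1.0, -1.0
forecast = c + phi * r3              # E(r4 | past) = 3 + (-1) = 2.0
risk = normal_cdf(0.0, mu=forecast)  # P(r4 < 0) = Phi(-2), about 2.3%
```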
Exercise - AR(2)
The return at period n depends on the two last returns, at periods n − 1 and n − 2, and a random noise εn:
rn = 1 + 0.5 rn−2 + 2 rn−1 + εn
where ε1, ε2, . . . are independent and normally distributed N(0, 1).
Given that r0 = 3, r1 = 1, r2 = 4, r3 = −1:
1. Forecast the return at period 5.
2. Evaluate the risk that the return at period 5 is less than −1.
12 / 41
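One way to check your answers to this exercise numerically is Monte Carlo: draw ε4 and ε5, propagate the recursion two steps, and average. The sketch below is my own checking aid (the function name and simulation size are arbitrary), not the intended analytical solution.

```python
import random

def simulate_r5(r2=4.0, r3=-1.0, n_sims=50_000, seed=0):
    """Monte Carlo draws of r5 given (r2, r3) under the AR(2) model
    r_n = 1 + 0.5*r_{n-2} + 2*r_{n-1} + eps_n, eps_n ~ N(0, 1).
    Returns (estimated E(r5 | past), estimated P(r5 < -1 | past))."""
    rng = random.Random(seed)
    mean_acc, risk_acc = 0.0, 0
    for _ in range(n_sims):
        r4 = 1.0 + 0.5 * r2 + 2.0 * r3 + rng.gauss(0.0, 1.0)
        r5 = 1.0 + 0.5 * r3 + 2.0 * r4 + rng.gauss(0.0, 1.0)
        mean_acc += r5
        risk_acc += (r5 < -1.0)
    return mean_acc / n_sims, risk_acc / n_sims

mean_r5, risk_r5 = simulate_r5()
```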
Classification of stochastic processes
X: I ×Ω→S
(t, ω) → Xt (ω)
- Based on the time index I:
  - Discrete: I is countable, e.g., {0, 1, 2, . . . }
  - Continuous: I is uncountable, e.g., [0, ∞), [0, 1]
- Based on the state space S:
  - Discrete: S is countable, e.g., {0, 1, 2, . . . }, {2^0, 2^{±1}, 2^{±2}, . . . }
  - Continuous: S is uncountable, e.g., [0, ∞), [0, 1]
13 / 41
Discrete time vs Continuous time
Discrete time process. Continuous time process.
14 / 41
Discrete state vs Continuous state
[Figure: sample paths of a discrete-state process and a continuous-state process]
15 / 41
Classification tree of random processes:
- Discrete time
  - Discrete state: Markov chain, random walk, binomial asset pricing
- Continuous time
  - Discrete state: Poisson process
  - Continuous state: Brownian motion, Itô integral, solutions of stochastic differential equations
16 / 41
Discrete-time continuous-state stochastic process
- An air-monitoring station in southern California records oxidant concentration levels in order to monitor smog pollution.
- Concentration levels are recorded every hour.
- Xi: the concentration level at the i-th hour.
- X0, X1, . . . is a discrete-time continuous-state stochastic process.
17 / 41
Some special random processes
                   Discrete time                   Continuous time
Discrete state     Binomial asset pricing model    Poisson process
                   Markov chain
                   Random walk
Continuous state                                   Brownian motion
                                                   Itô integral
                                                   Solution of SDE
18 / 41
How information affects stock prices
- The market price of a stock: the price at which a buyer and a seller agree to trade.
- Changes in the market price of a stock are caused by supply and demand.
- Supply increases: low expectation of the company's profitability.
- Demand increases: high expectation of the company's profitability.
- News changes the expectation of the company's profitability.
- Filtration: a time-evolving information structure used to study the random process of a stock price.
19 / 41
Filtered probability space
A filtered probability space is a quadruple (Ω, F, {Ft}, P):
- (Ω, F, P): a probability space
- Filtration {Ft}: a non-decreasing collection of σ-algebras {Ft ⊂ F}t≥0 such that Fs ⊂ Ft for all 0 ≤ s ≤ t.
- σ-algebra Ft: an "information set"
- The filtration works like a filter on the information flow, controlling how information propagates.
- Ft represents the information available up to time t.
- The condition Fs ⊂ Ft for all 0 ≤ s ≤ t ensures that the amount of information grows as time evolves and that no information is lost as time increases.
20 / 41
Example
Some important σ-algebras of subsets of Ω in Example 2:
1. F0 = {∅, Ω}: the trivial σ-algebra - it contains no information.
Knowing whether the outcome ω of the three tosses is in ∅ and whether it is in Ω tells you nothing about ω.
2.
F1 = {∅, Ω, {HHH, HHT, HTH, HTT}, {THH, THT, TTH, TTT}}
   = {∅, Ω, AH, AT}
where
AH = {HHH, HHT, HTH, HTT} = {H on the first toss}
AT = {THH, THT, TTH, TTT} = {T on the first toss}
F1: information about the first toss, or "information up to time 1". For example, you are told that the first toss is H and no more.
21 / 41
Example
3.
F2 = {∅, Ω, AHH, AHT, ATH, ATT, and all sets which can be built by taking unions of these}
where
AHH = {HHH, HHT} = {HH on the first two tosses}
AHT = {HTH, HTT} = {HT on the first two tosses}
ATH = {THH, THT} = {TH on the first two tosses}
ATT = {TTH, TTT} = {TT on the first two tosses}
F2: information about the first two tosses, or "information up to time 2".
4. F3 = the set of all subsets of Ω: "full information" about the outcome of all three tosses.
22 / 41
Natural filtration
Consider a stochastic process (Xt)t≥0 on a probability space (Ω, F, P).
- The filtration (Ft)t≥0 is called the natural filtration of the process (Xt)t≥0 if Ft = σ(Xs, 0 ≤ s ≤ t) for all t ≥ 0.
- Ft is the σ-algebra generated by the random variables Xs, 0 ≤ s ≤ t.
23 / 41
Adapted process
Consider a stochastic process (Xt)t≥0 on a filtered probability space (Ω, F, {Ft}, P).
- (Xt)t≥0 is adapted to the filtration if Xt is Ft-measurable for each t ≥ 0, i.e., σ(Xt) ⊂ Ft.
- Xt can be completely determined from the information available in Ft.
- For t > s, Xt need not be Fs-measurable, i.e., at time s, Xt is still considered unknown.
- The notion of adaptedness can be interpreted as the inability to have knowledge of future events.
24 / 41
Example
- Binomial asset pricing
- S = (S0, S1, S2, S3): stock price up to time period 3
- Filtration (Ft) = (F0, F1, F2, F3) with
F0 = {∅, Ω}
F1 = {∅, Ω, AH, AT} = σ(AH, AT)
F2 = σ(AHH, AHT, ATH, ATT)
F3 = σ(AHHH, AHHT, AHTH, AHTT, ATHH, ATHT, ATTH, ATTT)
- From the information about the first toss contained in F1, the value of the asset price S1 at time 1 is completely determined. So S1 is F1-measurable.
- Similarly, S2 is F2-measurable and S3 is F3-measurable.
- At any time t, St is Ft-measurable. So the price process S is adapted with respect to the filtration (Ft).
25 / 41
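The measurability claim on this slide can be checked by brute force: enumerate all eight outcomes of three tosses and observe that outcomes agreeing on the first toss always give the same S1. The Python sketch below is an illustration added here (the helper name `price_path` is my own); it uses the model parameters S0 = 4, u = 2, d = 1/2 from the earlier example.

```python
from itertools import product

def price_path(tosses, s0=4.0, u=2.0, d=0.5):
    """Stock prices (S0, S1, ..., Sn) along one sequence of tosses."""
    prices = [s0]
    for t in tosses:
        prices.append(prices[-1] * (u if t == "H" else d))
    return prices

# All 8 outcomes of three tosses. Outcomes that agree on the first toss
# give one and the same value of S1, so S1 is F1-measurable:
outcomes = ["".join(w) for w in product("HT", repeat=3)]
s1_by_first_toss = {first: {price_path(w)[1] for w in outcomes if w[0] == first}
                    for first in "HT"}
# s1_by_first_toss maps each first toss to a single possible S1 value.
```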
Table of Contents
Random/Stochastic processes
Martingale
Stopping time
26 / 41
Martingale
1. (Discrete time case) A (discrete time) stochastic process (Xn)n∈N is a martingale with respect to the filtration (Fn) if for all n:
- Xn is Fn-measurable;
- E(Xn+1 | Fn) = Xn.
2. (Continuous time case) A (continuous time) stochastic process (Xt)t≥0 is a martingale with respect to the filtration (Ft) if for all t ≥ s:
- Xt is Ft-measurable;
- E(Xt | Fs) = Xs.
27 / 41
Property
A martingale tends neither to go up nor to go down.
The expectation of a martingale is constant over time:
E(Xt) = E(E(Xt | F0)) = E(X0)
for all t.
28 / 41
Remark
- In order to verify
E(Xn+1 | Fn) = Xn
for a discrete time stochastic process:
- Represent Xn+1 as
Xn+1 = Un + Yn+1
or
Xn+1 = Un Yn+1
where Un is Fn-measurable and Yn+1 is independent of Fn.
29 / 41
Example
The binomial asset pricing model (Sn)n≥0 is a martingale with respect to (Fn)n≥0 when pu + qd = 1, where Fn is the σ-algebra generated by the first n toss results.
30 / 41
Proof
Sn is completely determined once the results of the first n tosses are known. Hence Sn is Fn-measurable for all n.
Denote
Xn = u if the nth toss is H, and Xn = d if the nth toss is T.
Then the (Xn)n are independent and identically distributed with
P(Xn = u) = p,  P(Xn = d) = q,
and
Sn+1 = Sn Xn+1.
Therefore
E(Sn+1 | Fn) = E(Sn Xn+1 | Fn)   [Sn depends on Fn; Xn+1 is independent of Fn]
             = Sn E(Xn+1 | Fn)   [take out what is known]
             = Sn E(Xn+1)        [independence]
The price process (Sk)k is a martingale if E(Xn+1) = 1, i.e., pu + qd = 1.
31 / 41
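With u = 2 and d = 1/2, the martingale condition pu + qd = 1 forces p = 1/3. The constant-expectation property E(Sn) = S0 can then be verified exactly by summing over all 2^n toss sequences. The Python sketch below is an added illustration (the function name `expected_price` is my own):

```python
from itertools import product

def expected_price(n, s0=4.0, u=2.0, d=0.5, p=1.0 / 3.0):
    """E(S_n), computed exactly by summing over all 2^n toss sequences.
    With pu + qd = 1 the result should equal s0 for every n."""
    q = 1.0 - p
    total = 0.0
    for tosses in product("HT", repeat=n):
        prob, s = 1.0, s0
        for t in tosses:
            prob *= p if t == "H" else q
            s *= u if t == "H" else d
        total += prob * s
    return total
```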
Example - Gambler
Consider a fair game in which the chances of winning and losing equal amounts are the same, i.e., if we denote by Xk the outcome of the k-th trial of the game, then E[Xk] = 0. Suppose that the initial wealth of the gambler is 0 and he is allowed to borrow as much as he needs at no extra cost to play. Then his total wealth after k trials is
Mk = X1 + · · · + Xk = ∑_{n=1}^{k} Xn
Denote Fk = σ(X1, . . . , Xk): the information set generated by the first k trials. Then Mk is Fk-measurable for all k, and
E(Mk+1 | Fk) = E(∑_{n=1}^{k+1} Xn | Fk)
32 / 41
Write
∑_{n=1}^{k+1} Xn = ∑_{n=1}^{k} Xn + Xk+1
where ∑_{n=1}^{k} Xn is Fk-measurable and Xk+1 is independent of Fk.
Hence
E(Mk+1 | Fk) = E(∑_{n=1}^{k+1} Xn | Fk) = ∑_{n=1}^{k} Xn + E(Xk+1 | Fk)
             = ∑_{n=1}^{k} Xn + E(Xk+1)   [E(Xk+1) = 0]
             = ∑_{n=1}^{k} Xn = Mk
So the wealth process (Mk)k≥0 is a martingale.
33 / 41
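A quick numerical sanity check of the conclusion above: since (Mk) is a martingale started at 0, E(Mk) should stay near 0 for every k. The Monte Carlo sketch below is an added illustration (the function name and simulation size are my own choices), using fair ±1 bets as a concrete instance of E[Xk] = 0.

```python
import random

def average_wealth(k, n_sims=20_000, seed=1):
    """Monte Carlo estimate of E(M_k) for M_k = X_1 + ... + X_k,
    with fair +-1 bets and initial wealth 0."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        total += sum(rng.choice([-1, 1]) for _ in range(k))
    return total / n_sims
```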
Practice - Double or nothing
The gambler starts with an initial wealth of 1 dollar and she
always bets all of her wealth on the head of a fair coin. If the
coin lands on a head, she doubles her wealth. Otherwise, she
goes broke. Let
Xn = 2 if the nth toss is a head, and Xn = 0 if the nth toss is a tail.
1. Determine a formula for her wealth process (Mk )k≥0
2. Is her wealth process a martingale?
34 / 41
Table of Contents
Random/Stochastic processes
Martingale
Stopping time
35 / 41
Stopping time
Let (Ft)t≥0 be a filtration on a probability space (Ω, F, P). A nonnegative random variable τ is called a stopping time with respect to (Ft)t≥0 if {τ ≤ t} ∈ Ft for all t ≥ 0.
Example
In the gambler's problem, the first time that the gambler's wealth reaches $8 is a stopping time.
36 / 41
Stopped process
Let (Xt)t≥0 be a random process and let T be a stopping time with respect to the filtration (Ft)t≥0. The stopped process X_t^T is given by
X_t^T(ω) = X_{min(t,T(ω))}(ω) = Xt(ω) if t < T(ω), and X_{T(ω)}(ω) if t ≥ T(ω).
37 / 41
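The definition above says the stopped process follows the original path until T and then stays frozen. A minimal Python sketch of this on a discrete-time path (an added illustration; the path values and the hitting level 2 are hypothetical choices):

```python
def stopped_path(path, stop_index):
    """Sample path of the stopped process X^T: follow `path` up to the
    stopping index T(omega), then stay frozen at the value X_T."""
    return [path[min(t, stop_index)] for t in range(len(path))]

# A hypothetical sample path and the first hitting time of level 2,
# which is a stopping time (decidable from the path observed so far):
path = [0, 1, 0, 1, 2, 1, 2, 3]
T = next(t for t, x in enumerate(path) if x == 2)
frozen = stopped_path(path, T)  # frozen at value 2 from time T onward
```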
Optional stopping theorem
Let (Xt)t≥0 be a martingale and let T be a stopping time with respect to the filtration (Ft)t≥0.
If T is bounded (T < c a.s. for some c), then the stopped process X_t^T is also a martingale.
Consequently,
E(X_t^T) = E(X0)
38 / 41
Example
Consider a fair game in which the chances of winning or losing $1 each round are equal. The gambler starts with an initial wealth of $3. He plays until he goes broke or his wealth reaches $10.
What is the probability that he wins the game?
39 / 41
Solution
- Mn: his wealth after round n,
Mn = 3 + X1 + · · · + Xn = Mn−1 + Xn
where the Xk are i.i.d. with P(Xk = 1) = P(Xk = −1) = 1/2.
- (Mn)n≥0 is a martingale because
E(Mn | Fn−1) = Mn−1 + E(Xn | Fn−1) = Mn−1 + E(Xn) = Mn−1
- Let τ be the time that the game stops; then τ is a stopping time.
- Applying the optional stopping theorem, we have
E(Mτ) = E(M0) = 3
40 / 41
- p: the probability that the gambler wins, i.e., he has $10 at time τ.
- The probability that he loses (has $0 at time τ) is 1 − p.
- Mτ = 10 with probability p, and Mτ = 0 with probability 1 − p.
- E(Mτ) = 10p + 0 · (1 − p) = 10p.
- But E(Mτ) = E(M0) = 3. So
10p = 3 ⇒ p = 0.3
41 / 41
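The answer p = 0.3 can be checked by direct simulation of the game: start at $3, bet fair ±1 until the wealth hits $0 or $10, and count the winning fraction. The Python sketch below is an added illustration of this check (the function name and simulation parameters are my own choices).

```python
import random

def win_probability(start=3, target=10, n_sims=20_000, seed=7):
    """Monte Carlo estimate of P(wealth reaches `target` before 0)
    for fair +-1 bets starting from `start` dollars."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        w = start
        while 0 < w < target:
            w += rng.choice([-1, 1])  # one fair round of the game
        wins += (w == target)
    return wins / n_sims
```

The estimate should land close to the theoretical value start/target = 3/10.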