Intermediate Quantitative Economics with Python
2 Linear Algebra 17
2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4 Solving Systems of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6 Further Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3 QR Decomposition 41
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 Matrix Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3 Gram-Schmidt process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 Some Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.6 Using QR Decomposition to Compute Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.7 𝑄𝑅 and PCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4 Circulant Matrices 49
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2 Constructing a Circulant Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 Connection to Permutation Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4 Examples with Python . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.5 Associated Permutation Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.6 Discrete Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.6 Full and Reduced SVD’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.7 Polar Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.8 Application: Principal Components Analysis (PCA) . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.9 Relationship of PCA to SVD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.10 PCA with Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.11 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
10 Two Meanings of Probability 181
10.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.2 Frequentist Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.3 Bayesian Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10.4 Role of a Conjugate Prior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
16.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
16.2 Privacy Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
16.3 Zoo of Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
16.4 Respondent’s Expected Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
16.5 Utilitarian View of Survey Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
16.6 Criticisms of Proposed Privacy Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
16.7 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
22 Samuelson Multiplier-Accelerator 395
22.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
22.2 Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
22.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
22.4 Stochastic Shocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
22.5 Government Spending . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
22.6 Wrapping Everything Into a Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
22.7 Using the LinearStateSpace Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
22.8 Pure Multiplier Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
22.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
V Search 495
27 Job Search I: The McCall Search Model 497
27.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
27.2 The McCall Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
27.3 Computing the Optimal Policy: Take 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
27.4 Computing an Optimal Policy: Take 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
27.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
28.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
28.5 Impact of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
28.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
35.3 Competitive Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
35.4 Market Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
35.5 Firm Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
35.6 Household Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
35.7 Computing a Competitive Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
35.8 Yield Curves and Hicks-Arrow Prices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
43.3 Solution Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
43.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
43.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
48.8 Sequels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
54.6 More Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
54.7 Distribution of Bayesian Decision Rule’s Time to Decide . . . . . . . . . . . . . . . . . . . . . . . . 954
54.8 Probability of Making Correct Decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
54.9 Distribution of Likelihood Ratios at Frequentist’s 𝑡 . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
IX LQ Control 963
55 LQ Control: Foundations 965
55.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
55.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
55.3 Optimality – Finite Horizon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
55.4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
55.5 Extensions and Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
55.6 Further Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
55.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
60.6 Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
60.7 Example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
60.8 Example 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
60.9 Example 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
60.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
XI Asset Pricing and Finance 1177
67 Asset Pricing: Finite State Models 1179
67.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
67.2 Pricing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
67.3 Prices in the Risk-Neutral Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
67.4 Risk Aversion and Asset Prices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
67.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
XIII Auctions 1299
73 First-Price and Second-Price Auctions 1301
73.1 First-Price Sealed-Bid Auction (FPSB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
73.2 Second-Price Sealed-Bid Auction (SPSB) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
73.3 Characterization of SPSB Auction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
73.4 Uniform Distribution of Private Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
73.5 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
73.6 First price sealed bid auction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
73.7 Second Price Sealed Bid Auction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
73.8 Python Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
73.9 Revenue Equivalence Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1306
73.10 Calculation of Bid Price in FPSB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1308
73.11 𝜒2 Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
73.12 5 Code Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1312
73.13 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
76 References 1367
Bibliography 1373
Index 1381
Intermediate Quantitative Economics with Python
Part I
1 Modeling COVID-19
Contents
• Modeling COVID-19
– Overview
– The SIR Model
– Implementation
– Experiments
– Ending Lockdown
1.1 Overview
This is a Python version of the code for analyzing the COVID-19 pandemic provided by Andrew Atkeson.
See, in particular
• NBER Working Paper No. 26867
• COVID-19 Working papers and code
The purpose of his notes is to introduce economists to quantitative modeling of infectious disease dynamics.
Dynamics are modeled using a standard SIR (Susceptible-Infected-Removed) model of disease spread.
The model dynamics are represented by a system of ordinary differential equations.
The main objective is to study the impact of suppression through social distancing on the spread of the infection.
The focus is on US outcomes but the parameters can be adjusted to study other countries.
We will use the following standard imports:
We will also use SciPy’s numerical routine odeint for solving differential equations.
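A minimal import block consistent with the names `np`, `plt` and `odeint` used in the rest of this lecture is:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
```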
This routine calls into compiled code from the FORTRAN library odepack.
1.2 The SIR Model

In the version of the SIR model we will analyze there are four states.
All individuals in the population are assumed to be in one of these four states.
The states are: susceptible (S), exposed (E), infected (I) and removed (R).
Comments:
• Those in state R have been infected and either recovered or died.
• Those who have recovered are assumed to have acquired immunity.
• Those in the exposed group are not yet infectious.
$$
\begin{aligned}
\dot{s}(t) & = -\beta(t) \, s(t) \, i(t) \\
\dot{e}(t) & = \beta(t) \, s(t) \, i(t) - \sigma e(t) \\
\dot{i}(t) & = \sigma e(t) - \gamma i(t)
\end{aligned}
\tag{1.1}
$$
In these equations,
• 𝛽(𝑡) is called the transmission rate (the rate at which individuals bump into others and expose them to the virus).
• 𝜎 is called the infection rate (the rate at which those who are exposed become infected)
• 𝛾 is called the recovery rate (the rate at which infected people recover or die).
• the dot symbol 𝑦 ̇ represents the time derivative 𝑑𝑦/𝑑𝑡.
We do not need to model the fraction 𝑟 of the population in state 𝑅 separately because the states form a partition.
In particular, the “removed” fraction of the population is 𝑟 = 1 − 𝑠 − 𝑒 − 𝑖.
We will also track 𝑐 = 𝑖 + 𝑟, which is the cumulative caseload (i.e., all those who have or have had the infection).
The system (1.1) can be written in vector form as

$$\dot{x} = F(x, t), \qquad x := (s, e, i)$$
1.2.2 Parameters
1.3 Implementation
pop_size = 3.3e8
γ = 1 / 18
σ = 1 / 5.2

def F(x, t, R0=1.6):
    """Time derivative of the state vector x = (s, e, i)."""
    s, e, i = x
    # New exposures of susceptibles
    β = R0(t) * γ if callable(R0) else R0 * γ
    ne = β * s * i
    # Time derivatives
    ds = - ne
    de = ne - σ * e
    di = σ * e - γ * i
    return ds, de, di
# initial conditions of s, e, i
i_0 = 1e-7
e_0 = 4 * i_0
s_0 = 1 - i_0 - e_0
x_0 = s_0, e_0, i_0
We solve for the time path numerically using odeint, at a sequence of dates t_vec.
def solve_path(R0, t_vec, x_init=x_0):
    """Solve for i(t) and c(t), given a time path for R0."""
    G = lambda x, t: F(x, t, R0)
    s_path, e_path, i_path = odeint(G, x_init, t_vec).transpose()
    c_path = 1 - s_path - e_path      # cumulative cases
    return i_path, c_path
1.4 Experiments
t_length = 550
grid_size = 1000
t_vec = np.linspace(0, t_length, grid_size)
R0_vals = np.linspace(1.6, 3.0, 6)
labels = [f'$R_0 = {r:.2f}$' for r in R0_vals]
i_paths, c_paths = [], []

for r in R0_vals:
    i_path, c_path = solve_path(r, t_vec)
    i_paths.append(i_path)
    c_paths.append(c_path)
def plot_paths(paths, labels, times=t_vec):
    fig, ax = plt.subplots()
    for path, label in zip(paths, labels):
        ax.plot(times, path, label=label)
    ax.legend(loc='upper left')
    plt.show()
plot_paths(i_paths, labels)
plot_paths(c_paths, labels)
Let’s look at a scenario where mitigation (e.g., social distancing) is successively imposed.
Here’s a specification for R0 as a function of time.
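The specification used below (via `R0_mitigating(t, η=η)`) is not spelled out at this point; a sketch consistent with that call signature has R0 decaying exponentially at rate η from an initial value r0 toward a long-run value r_bar — the default parameter values here are assumptions:

```python
import numpy as np

def R0_mitigating(t, r0=3, η=1, r_bar=1.6):
    "Transmission rate falling from r0 toward r_bar at exponential rate η."
    return r0 * np.exp(- η * t) + (1 - np.exp(- η * t)) * r_bar

# Alternative mitigation rates and plot labels (illustrative values)
η_vals = 1/5, 1/10, 1/20, 1/50, 1/100
labels = [fr'$\eta = {η:.2f}$' for η in η_vals]
```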
This is what the time path of R0 looks like at these alternative rates:
fig, ax = plt.subplots()
for η, label in zip(η_vals, labels):
    ax.plot(t_vec, R0_mitigating(t_vec, η=η), label=label)
ax.legend()
plt.show()
i_paths, c_paths = [], []

for η in η_vals:
    R0 = lambda t: R0_mitigating(t, η=η)
    i_path, c_path = solve_path(R0, t_vec)
    i_paths.append(i_path)
    c_paths.append(c_path)
plot_paths(i_paths, labels)
plot_paths(c_paths, labels)
1.5 Ending Lockdown

The following replicates additional results by Andrew Atkeson on the timing of lifting lockdown.
Consider these two mitigation scenarios:
1. 𝑅𝑡 = 0.5 for 30 days and then 𝑅𝑡 = 2 for the remaining 17 months. This corresponds to lifting lockdown in 30 days.
2. 𝑅𝑡 = 0.5 for 120 days and then 𝑅𝑡 = 2 for the remaining 14 months. This corresponds to lifting lockdown in 4 months.
The parameters considered here start the model with 25,000 active infections and 75,000 agents already exposed to the
virus and thus soon to be contagious.
# initial conditions
i_0 = 25_000 / pop_size
e_0 = 75_000 / pop_size
s_0 = 1 - i_0 - e_0
x_0 = s_0, e_0, i_0
R0_paths = (lambda t: 0.5 if t < 30 else 2,
            lambda t: 0.5 if t < 120 else 2)
i_paths, c_paths = [], []

for R0 in R0_paths:
    i_path, c_path = solve_path(R0, t_vec, x_init=x_0)
    i_paths.append(i_path)
    c_paths.append(c_path)
plot_paths(i_paths, labels)
ν = 0.01                  # Mortality rate
Pushing the peak of the curve further into the future may reduce cumulative deaths if a vaccine is found.
2 Linear Algebra
Contents
• Linear Algebra
– Overview
– Vectors
– Matrices
– Solving Systems of Equations
– Eigenvalues and Eigenvectors
– Further Topics
– Exercises
2.1 Overview
Linear algebra is one of the most useful branches of applied mathematics for economists to invest in.
For example, many applied problems in economics and finance require the solution of a linear system of equations, such
as
𝑦1 = 𝑎𝑥1 + 𝑏𝑥2
𝑦2 = 𝑐𝑥1 + 𝑑𝑥2
More generally, such a system has 𝑛 equations and 𝑘 unknowns, with coefficients 𝑎11 , … , 𝑎𝑛𝑘 . The objective here is to solve for the “unknowns” 𝑥1 , … , 𝑥𝑘 given the coefficients and 𝑦1 , … , 𝑦𝑛 .
When considering such problems, it is essential that we first consider at least some of the following questions
• Does a solution actually exist?
• Are there in fact many solutions, and if so how should we interpret them?
• If no solution exists, is there a best “approximate” solution?
• If a solution exists, how should we compute it?
2.2 Vectors
A vector of length 𝑛 is just a sequence (or array, or tuple) of 𝑛 numbers, which we write as 𝑥 = (𝑥1 , … , 𝑥𝑛 ) or 𝑥 =
[𝑥1 , … , 𝑥𝑛 ].
We will write these sequences either horizontally or vertically as we please.
(Later, when we wish to perform certain matrix operations, it will become necessary to distinguish between the two)
The set of all 𝑛-vectors is denoted by ℝ𝑛 .
For example, ℝ2 is the plane, and a vector in ℝ2 is just a point in the plane.
Traditionally, vectors are represented visually as arrows from the origin to the point.
The following figure represents three vectors in this manner
The two most common operators for vectors are addition and scalar multiplication, which we now describe.
As a matter of definition, when we add two vectors, we add them element-by-element
$$
x + y =
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} +
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} :=
\begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{bmatrix}
$$
Scalar multiplication is an operation that takes a number 𝛾 and a vector 𝑥 and produces
$$
\gamma x :=
\begin{bmatrix} \gamma x_1 \\ \gamma x_2 \\ \vdots \\ \gamma x_n \end{bmatrix}
$$
Scalar multiplication is illustrated in the next figure
fig, ax = plt.subplots(figsize=(10, 8))
x = np.array((2, 2))
scalars = (-2, 2)
for s in scalars:
v = s * x
ax.annotate('', xy=v, xytext=(0, 0),
arrowprops=dict(facecolor='red',
shrink=0,
alpha=0.5,
width=0.5))
ax.text(v[0] + 0.4, v[1] - 0.2, f'${s} x$', fontsize='16')
plt.show()
In Python, a vector can be represented as a list or tuple, such as x = (2, 4, 6), but is more commonly represented
as a NumPy array.
One advantage of NumPy arrays is that scalar multiplication and addition have very natural syntax
4 * x
x = np.ones(3)            # Vector of ones
y = np.array((2, 4, 6))
x @ y                     # Inner product of x and y

12.0

np.sqrt(x @ x)            # Norm of x, method one

1.7320508075688772

np.linalg.norm(x)         # Norm of x, method two

1.7320508075688772
2.2.3 Span
Given a set of vectors 𝐴 ∶= {𝑎1 , … , 𝑎𝑘 } in ℝ𝑛 , it’s natural to think about the new vectors we can create by performing
linear operations.
New vectors created in this manner are called linear combinations of 𝐴.
In particular, 𝑦 ∈ ℝ𝑛 is a linear combination of 𝐴 ∶= {𝑎1 , … , 𝑎𝑘 } if
In this context, the values 𝛽1 , … , 𝛽𝑘 are called the coefficients of the linear combination.
The set of linear combinations of 𝐴 is called the span of 𝐴.
The next figure shows the span of 𝐴 = {𝑎1 , 𝑎2 } in ℝ3 .
The span is a two-dimensional plane passing through these two points and the origin.
ax = plt.figure(figsize=(10, 8)).add_subplot(projection='3d')
α, β = 0.2, 0.1
gs = 3
z = np.linspace(x_min, x_max, gs)
x = np.zeros(gs)
y = np.zeros(gs)
ax.plot(x, y, z, 'k-', lw=2, alpha=0.5)
ax.plot(z, x, y, 'k-', lw=2, alpha=0.5)
ax.plot(y, z, x, 'k-', lw=2, alpha=0.5)
# Lines to vectors
for i in (0, 1):
x = (0, x_coords[i])
y = (0, y_coords[i])
z = (0, f(x_coords[i], y_coords[i]))
ax.plot(x, y, z, 'b-', lw=1.5, alpha=0.6)
2.2. Vectors 23
Intermediate Quantitative Economics with Python
Examples
If 𝐴 contains only one vector 𝑎1 ∈ ℝ2 , then its span is just the scalar multiples of 𝑎1 , which is the unique line passing
through both 𝑎1 and the origin.
If 𝐴 = {𝑒1 , 𝑒2 , 𝑒3 } consists of the canonical basis vectors of ℝ3 , that is
$$
e_1 := \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
e_2 := \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad
e_3 := \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
$$
then the span of 𝐴 is all of ℝ3 , because, for any 𝑥 = (𝑥1 , 𝑥2 , 𝑥3 ) ∈ ℝ3 , we can write
𝑥 = 𝑥1 𝑒1 + 𝑥2 𝑒2 + 𝑥3 𝑒3
As we’ll see, it’s often desirable to find families of vectors with relatively large span, so that many vectors can be described
by linear operators on a few vectors.
The condition we need for a set of vectors to have a large span is what’s called linear independence.
In particular, a collection of vectors 𝐴 ∶= {𝑎1 , … , 𝑎𝑘 } in ℝ𝑛 is said to be
• linearly dependent if some strict subset of 𝐴 has the same span as 𝐴.
• linearly independent if it is not linearly dependent.
Put differently, a set of vectors is linearly independent if no vector is redundant to the span and linearly dependent
otherwise.
To illustrate the idea, recall the figure that showed the span of vectors {𝑎1 , 𝑎2 } in ℝ3 as a plane through the origin.
If we take a third vector 𝑎3 and form the set {𝑎1 , 𝑎2 , 𝑎3 }, this set will be
• linearly dependent if 𝑎3 lies in the plane
• linearly independent otherwise
As another illustration of the concept, since ℝ𝑛 can be spanned by 𝑛 vectors (see the discussion of canonical basis vectors
above), any collection of 𝑚 > 𝑛 vectors in ℝ𝑛 must be linearly dependent.
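For instance, three vectors in ℝ2 are always linearly dependent, which we can confirm numerically — the particular vectors below are arbitrary:

```python
import numpy as np

# Three vectors in R^2, stacked as columns of a 2 x 3 matrix
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# The rank is bounded by the number of rows, so the three
# columns cannot be linearly independent
rank = np.linalg.matrix_rank(A)
print(rank < A.shape[1])   # True
```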
The following statements are equivalent to linear independence of 𝐴 ∶= {𝑎1 , … , 𝑎𝑘 } ⊂ ℝ𝑛
1. No vector in 𝐴 can be formed as a linear combination of the other elements.
2. If 𝛽1 𝑎1 + ⋯ + 𝛽𝑘 𝑎𝑘 = 0 for scalars 𝛽1 , … , 𝛽𝑘 , then 𝛽1 = ⋯ = 𝛽𝑘 = 0.
(The zero in the first expression is the origin of ℝ𝑛 )
Another nice thing about sets of linearly independent vectors is that each element in the span has a unique representation
as a linear combination of these vectors.
In other words, if 𝐴 ∶= {𝑎1 , … , 𝑎𝑘 } ⊂ ℝ𝑛 is linearly independent and
𝑦 = 𝛽1 𝑎1 + ⋯ + 𝛽𝑘 𝑎𝑘 , then no other coefficient sequence will produce the same vector 𝑦.
2.3 Matrices
Matrices are a neat way of organizing data for use in linear operations.
An 𝑛 × 𝑘 matrix is a rectangular array 𝐴 of numbers with 𝑛 rows and 𝑘 columns:
Often, the numbers in the matrix represent coefficients in a system of linear equations, as discussed at the start of this
lecture.
For obvious reasons, the matrix 𝐴 is also called a vector if either 𝑛 = 1 or 𝑘 = 1.
In the former case, 𝐴 is called a row vector, while in the latter it is called a column vector.
If 𝑛 = 𝑘, then 𝐴 is called square.
The matrix formed by replacing 𝑎𝑖𝑗 by 𝑎𝑗𝑖 for every 𝑖 and 𝑗 is called the transpose of 𝐴 and denoted 𝐴′ or 𝐴⊤ .
If 𝐴 = 𝐴′ , then 𝐴 is called symmetric.
For a square matrix 𝐴, the 𝑛 elements of the form 𝑎𝑖𝑖 for 𝑖 = 1, … , 𝑛 are called the principal diagonal.
𝐴 is called diagonal if the only nonzero entries are on the principal diagonal.
If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then 𝐴 is called the identity matrix
and denoted by 𝐼.
Just as was the case for vectors, a number of algebraic operations are defined for matrices.
Scalar multiplication and addition are immediate generalizations of the vector case:
$$
\gamma A = \gamma
\begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{bmatrix} :=
\begin{bmatrix} \gamma a_{11} & \cdots & \gamma a_{1k} \\ \vdots & & \vdots \\ \gamma a_{n1} & \cdots & \gamma a_{nk} \end{bmatrix}
$$

and

$$
A + B =
\begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} \end{bmatrix} +
\begin{bmatrix} b_{11} & \cdots & b_{1k} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nk} \end{bmatrix} :=
\begin{bmatrix} a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\ \vdots & & \vdots \\ a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \end{bmatrix}
$$
In the latter case, the matrices must have the same shape in order for the definition to make sense.
We also have a convention for multiplying two matrices.
The rule for matrix multiplication generalizes the idea of inner products discussed above and is designed to make multi-
plication play well with basic linear operations.
If 𝐴 and 𝐵 are two matrices, then their product 𝐴𝐵 is formed by taking as its 𝑖, 𝑗-th element the inner product of the 𝑖-th
row of 𝐴 and the 𝑗-th column of 𝐵.
There are many tutorials to help you visualize this operation, such as this one, or the discussion on the Wikipedia page.
If 𝐴 is 𝑛 × 𝑘 and 𝐵 is 𝑗 × 𝑚, then to multiply 𝐴 and 𝐵 we require 𝑘 = 𝑗, and the resulting matrix 𝐴𝐵 is 𝑛 × 𝑚.
As perhaps the most important special case, consider multiplying 𝑛 × 𝑘 matrix 𝐴 and 𝑘 × 1 column vector 𝑥.
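A small numerical sketch of this case, using arbitrary values: the product 𝐴𝑥 is exactly the linear combination of the columns of 𝐴 with coefficients from 𝑥.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3 x 2 matrix
x = np.array([10, 1])         # 2-vector

y = A @ x                                # matrix-vector product
combo = x[0] * A[:, 0] + x[1] * A[:, 1]  # same thing, column by column
print(y)   # [12 34 56]
```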
NumPy arrays are also used as matrices, and have fast, efficient functions and methods for all the standard matrix operations¹.
You can create them manually from tuples of tuples (or lists of lists) as follows
A = ((1, 2),
(3, 4))
type(A)
tuple
A = np.array(A)
type(A)
numpy.ndarray
A.shape
(2, 2)
The shape attribute is a tuple giving the number of rows and columns — see here for more discussion.
To get the transpose of A, use A.transpose() or, more simply, A.T.
There are many convenient functions for creating common matrices (matrices of zeros, ones, etc.) — see here.
Since operations are performed elementwise by default, scalar multiplication and addition have very natural syntax
A = np.identity(3)
B = np.ones((3, 3))
2 * A
¹ Although there is a specialized matrix data type defined in NumPy, it’s more standard to work with ordinary NumPy arrays. See this discussion.
A + B
Each 𝑛 × 𝑘 matrix 𝐴 can be identified with a function 𝑓(𝑥) = 𝐴𝑥 that maps 𝑥 ∈ ℝ𝑘 into 𝑦 = 𝐴𝑥 ∈ ℝ𝑛 .
These kinds of functions have a special property: they are linear.
A function 𝑓 ∶ ℝ𝑘 → ℝ𝑛 is called linear if, for all 𝑥, 𝑦 ∈ ℝ𝑘 and all scalars 𝛼, 𝛽, we have

$$f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)$$
You can check that this holds for the function 𝑓(𝑥) = 𝐴𝑥 + 𝑏 when 𝑏 is the zero vector and fails when 𝑏 is nonzero.
In fact, it’s known that 𝑓 is linear if and only if there exists a matrix 𝐴 such that 𝑓(𝑥) = 𝐴𝑥 for all 𝑥.
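A quick numerical check of the linearity property for 𝑓(𝑥) = 𝐴𝑥, with arbitrary test values:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
f = lambda x: A @ x

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
α, β = 0.7, -1.2

# Linearity: f(αx + βy) = α f(x) + β f(y)
lhs = f(α * x + β * y)
rhs = α * f(x) + β * f(y)
print(np.allclose(lhs, rhs))   # True
```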
2.4 Solving Systems of Equations

Consider the system of equations

$$y = A x \tag{2.3}$$
The problem we face is to determine a vector 𝑥 ∈ ℝ𝑘 that solves (2.3), taking 𝑦 and 𝐴 as given.
This is a special case of a more general problem: Find an 𝑥 such that 𝑦 = 𝑓(𝑥).
Given an arbitrary function 𝑓 and a 𝑦, is there always an 𝑥 such that 𝑦 = 𝑓(𝑥)?
If so, is it always unique?
The answer to both these questions is negative, as the next figure shows
def f(x):
return 0.6 * np.cos(4 * x) + 1.4
for ax in axes:
# Set the axes through the origin
for spine in ['left', 'bottom']:
ax.spines[spine].set_position('zero')
for spine in ['right', 'top']:
ax.spines[spine].set_color('none')
ax = axes[0]
ax = axes[1]
ybar = 2.6
ax.plot(x, x * 0 + ybar, 'k--', alpha=0.5)
ax.text(0.04, 0.91 * ybar, '$y$', fontsize=16)
plt.show()
In the first plot, there are multiple solutions, as the function is not one-to-one, while in the second there are no solutions,
since 𝑦 lies outside the range of 𝑓.
Can we impose conditions on 𝐴 in (2.3) that rule out these problems?
In this context, the most important thing to recognize about the expression 𝐴𝑥 is that it corresponds to a linear combination
of the columns of 𝐴.
In particular, if 𝑎1 , … , 𝑎𝑘 are the columns of 𝐴, then
𝐴𝑥 = 𝑥1 𝑎1 + ⋯ + 𝑥𝑘 𝑎𝑘
Indeed, it follows from our earlier discussion that if {𝑎1 , … , 𝑎𝑘 } are linearly independent and 𝑦 = 𝐴𝑥 = 𝑥1 𝑎1 +⋯+𝑥𝑘 𝑎𝑘 ,
then no 𝑧 ≠ 𝑥 satisfies 𝑦 = 𝐴𝑧.
Let’s discuss some more details, starting with the case where 𝐴 is 𝑛 × 𝑛.
This is the familiar case where the number of unknowns equals the number of equations.
For arbitrary 𝑦 ∈ ℝ𝑛 , we hope to find a unique 𝑥 ∈ ℝ𝑛 such that 𝑦 = 𝐴𝑥.
In view of the observations immediately above, if the columns of 𝐴 are linearly independent, then their span, and hence
the range of 𝑓(𝑥) = 𝐴𝑥, is all of ℝ𝑛 .
Hence there always exists an 𝑥 such that 𝑦 = 𝐴𝑥.
Moreover, the solution is unique.
In particular, the following are equivalent
1. The columns of 𝐴 are linearly independent.
2. For any 𝑦 ∈ ℝ𝑛 , the equation 𝑦 = 𝐴𝑥 has a unique solution.
The property of having linearly independent columns is sometimes expressed as having full column rank.
Inverse Matrices

If 𝐴 is square and its columns are linearly independent, there exists a matrix 𝐴−1 , called the inverse of 𝐴, satisfying 𝐴𝐴−1 = 𝐴−1 𝐴 = 𝐼 . In this case, the unique solution of 𝑦 = 𝐴𝑥 is 𝑥 = 𝐴−1 𝑦.
Determinants
Another quick comment about square matrices is that to every such matrix we assign a unique number called the deter-
minant of the matrix — you can find the expression for it here.
If the determinant of 𝐴 is not zero, then we say that 𝐴 is nonsingular.
Perhaps the most important fact about determinants is that 𝐴 is nonsingular if and only if 𝐴 is of full column rank.
This gives us a useful one-number summary of whether or not a square matrix can be inverted.
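As an illustration (the matrices here are made up for the example), a matrix with linearly dependent columns has determinant zero, while a full-column-rank matrix does not:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # columns independent, det = 1
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second column is twice the first, det = 0

print(np.linalg.det(A))   # 1.0 (up to rounding)
print(np.linalg.det(B))   # 0.0 (up to rounding)
```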
This is the 𝑛 × 𝑘 case with 𝑛 < 𝑘, so there are fewer equations than unknowns.
In this case there are either no solutions or infinitely many — in other words, uniqueness never holds.
For example, consider the case where 𝑘 = 3 and 𝑛 = 2.
Thus, the columns of 𝐴 consist of 3 vectors in ℝ2 .
This set can never be linearly independent, since it is possible to find two vectors that span ℝ2 .
(For example, use the canonical basis vectors)
It follows that one column is a linear combination of the other two.
For example, let’s say that 𝑎1 = 𝛼𝑎2 + 𝛽𝑎3 .
Then if 𝑦 = 𝐴𝑥 = 𝑥1 𝑎1 + 𝑥2 𝑎2 + 𝑥3 𝑎3 , we can also write

$$y = x_1 (\alpha a_2 + \beta a_3) + x_2 a_2 + x_3 a_3 = (x_1 \alpha + x_2) a_2 + (x_1 \beta + x_3) a_3$$

In other words, the same 𝑦 admits more than one representation, so any solution fails to be unique.
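The multiplicity of solutions is easy to exhibit numerically; in the hypothetical 2 × 3 system below, two different 𝑥 vectors reproduce the same 𝑦:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # 2 equations, 3 unknowns
y = np.array([2.0, 3.0])

# Two distinct solutions to y = A x
x1 = np.array([2.0, 3.0, 0.0])
x2 = np.array([1.0, 2.0, 1.0])
print(np.allclose(A @ x1, y), np.allclose(A @ x2, y))   # True True
```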
Here’s an illustration of how to solve linear equations with SciPy’s linalg submodule.
All of these routines are Python front ends to time-tested and highly optimized FORTRAN code
from scipy.linalg import inv, solve, det

A = np.array([[1, 2], [3, 4]])
y = np.ones((2, 1))       # Column vector
det(A)                    # Check A is nonsingular

-2.0

A_inv = inv(A); A_inv     # Compute the inverse

array([[-2. ,  1. ],
       [ 1.5, -0.5]])

x = A_inv @ y             # Solution
A @ x                     # Should equal y
array([[1.],
[1.]])
solve(A, y)               # Produces the same solution directly

array([[-1.],
       [ 1.]])
Observe how we can solve for 𝑥 = 𝐴−1 𝑦 either via inv(A) @ y or via solve(A, y).
The latter method uses a different algorithm (LU decomposition) that is numerically more stable, and hence should almost
always be preferred.
To obtain the least-squares solution 𝑥̂ = (𝐴′ 𝐴)−1 𝐴′ 𝑦, use scipy.linalg.lstsq(A, y).
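A quick check, on a made-up overdetermined system, that `lstsq` agrees with the normal-equations formula:

```python
import numpy as np
from scipy.linalg import lstsq

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])    # 3 equations, 2 unknowns
y = np.array([1.0, 2.0, 2.0])

x_hat, *_ = lstsq(A, y)                    # least-squares solution
x_ne = np.linalg.solve(A.T @ A, A.T @ y)   # (A'A)^{-1} A'y
print(np.allclose(x_hat, x_ne))   # True
```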
2.5 Eigenvalues and Eigenvectors

Let 𝐴 be an 𝑛 × 𝑛 square matrix. If 𝜆 is a scalar and 𝑣 is a nonzero vector in ℝ𝑛 such that

$$A v = \lambda v$$

then we say that 𝜆 is an eigenvalue of 𝐴, and 𝑣 is an eigenvector of 𝐴.
from scipy.linalg import eig

A = ((1, 2),
     (2, 1))
A = np.array(A)
evals, evecs = eig(A)
evecs = evecs[:, 0], evecs[:, 1]
plt.show()
The eigenvalue equation is equivalent to (𝐴 − 𝜆𝐼)𝑣 = 0, and this has a nonzero solution 𝑣 only when the columns of
𝐴 − 𝜆𝐼 are linearly dependent.
This in turn is equivalent to stating that the determinant is zero.
Hence to find all eigenvalues, we can look for 𝜆 such that the determinant of 𝐴 − 𝜆𝐼 is zero.
This problem can be expressed as one of solving for the roots of a polynomial in 𝜆 of degree 𝑛.
This in turn implies the existence of 𝑛 solutions in the complex plane, although some might be repeated.
Some nice facts about the eigenvalues of a square matrix 𝐴 are as follows
1. The determinant of 𝐴 equals the product of the eigenvalues.
2. The trace of 𝐴 (the sum of the elements on the principal diagonal) equals the sum of the eigenvalues.
3. If 𝐴 is symmetric, then all of its eigenvalues are real.
4. If 𝐴 is invertible and 𝜆1 , … , 𝜆𝑛 are its eigenvalues, then the eigenvalues of 𝐴−1 are 1/𝜆1 , … , 1/𝜆𝑛 .
A corollary of the first statement is that a matrix is invertible if and only if all its eigenvalues are nonzero.
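The first two facts are straightforward to check numerically for a small matrix (the values here are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
evals = np.linalg.eigvals(A)   # eigenvalues are 3 and -1

# Fact 1: det(A) equals the product of the eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(evals)))   # True
# Fact 2: trace(A) equals the sum of the eigenvalues
print(np.isclose(np.trace(A), np.sum(evals)))         # True
```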
Using SciPy, we can solve for the eigenvalues and eigenvectors of a matrix as follows
A = ((1, 2),
     (2, 1))
A = np.array(A)
evals, evecs = eig(A)
evecs
It is sometimes useful to consider the generalized eigenvalue problem, which, for given matrices 𝐴 and 𝐵, seeks generalized
eigenvalues 𝜆 and eigenvectors 𝑣 such that
𝐴𝑣 = 𝜆𝐵𝑣
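SciPy's `eig` handles this problem when passed a second matrix; a small sketch with illustrative matrices:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Solve A v = λ B v for the generalized eigenpairs
evals, evecs = eig(A, B)

# Check the first pair satisfies the defining equation
λ, v = evals[0], evecs[:, 0]
print(np.allclose(A @ v, λ * (B @ v)))   # True
```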
2.6 Further Topics

We round out our discussion by briefly mentioning several other important topics.
Matrix Norms

For a square matrix 𝐴, the norm induced by the Euclidean vector norm is

$$\| A \| := \max_{\| x \| = 1} \| A x \|$$

The norms on the right-hand side are ordinary vector norms, while the norm on the left-hand side is a matrix norm — in this case, the so-called spectral norm.
For example, for a square matrix 𝑆, the condition ‖𝑆‖ < 1 means that 𝑆 is contractive, in the sense that it pulls all vectors towards the origin².
Neumann’s Theorem

Suppose that 𝐴 is square and that ‖𝐴𝑘 ‖ < 1 for some 𝑘 ∈ ℕ. Then 𝐼 − 𝐴 is invertible and

$$(I - A)^{-1} = \sum_{j=0}^{\infty} A^j \tag{2.4}$$

Spectral Radius
A result known as Gelfand’s formula tells us that, for any square matrix 𝐴,

$$\rho(A) = \lim_{k \to \infty} \| A^k \|^{1/k}$$

Here 𝜌(𝐴) is the spectral radius, defined as max𝑖 |𝜆𝑖 |, where {𝜆𝑖 }𝑖 is the set of eigenvalues of 𝐴.
As a consequence of Gelfand’s formula, if all eigenvalues are strictly less than one in modulus, there exists a 𝑘 with
‖𝐴𝑘 ‖ < 1.
In which case (2.4) is valid.
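Assuming (2.4) refers to the Neumann series expansion of $(I - A)^{-1}$, we can illustrate both Gelfand's formula and the convergence of the series with a small example:

```python
import numpy as np

A = np.array([[0.4, 0.1],
              [0.3, 0.2]])

# spectral radius: the largest eigenvalue modulus
rho = np.max(np.abs(np.linalg.eigvals(A)))

# Gelfand's formula: ||A^k||^{1/k} converges to ρ(A)
k = 200
gelfand_k = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1 / k)
print(np.isclose(gelfand_k, rho, rtol=1e-2))  # True

# with ρ(A) < 1, the Neumann series converges to (I - A)^{-1}
S = sum(np.linalg.matrix_power(A, j) for j in range(200))
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))  # True
```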
2. $\frac{\partial Ax}{\partial x} = A'$

3. $\frac{\partial x'Ax}{\partial x} = (A + A')x$

4. $\frac{\partial y'Bz}{\partial y} = Bz$

5. $\frac{\partial y'Bz}{\partial B} = yz'$
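As a quick sanity check, rule 3 can be verified numerically with finite differences (the matrix and vector here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n))
x = rng.random(n)

def f(x):
    return x @ A @ x  # the quadratic form x'Ax

# central finite-difference gradient of f at x
eps = 1e-6
I = np.eye(n)
grad_fd = np.array([(f(x + eps * I[i]) - f(x - eps * I[i])) / (2 * eps)
                    for i in range(n)])

# matches the rule ∂(x'Ax)/∂x = (A + A')x
print(np.allclose(grad_fd, (A + A.T) @ x, atol=1e-6))  # True
```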
Exercise 2.7.1 below asks you to apply these formulas.
2.7 Exercises
Exercise 2.7.1
Let $x$ be a given $n \times 1$ vector and consider the problem

$$\max_{y, u} \; -y'Py - u'Qu$$

subject to

$$y = Ax + Bu$$
Here
• 𝑃 is an 𝑛 × 𝑛 matrix and 𝑄 is an 𝑚 × 𝑚 matrix
• 𝐴 is an 𝑛 × 𝑛 matrix and 𝐵 is an 𝑛 × 𝑚 matrix
• both 𝑃 and 𝑄 are symmetric and positive semidefinite
(What must the dimensions of 𝑦 and 𝑢 be to make this a well-posed problem?)
One way to solve the problem is to form the Lagrangian
ℒ = −𝑦′ 𝑃 𝑦 − 𝑢′ 𝑄𝑢 + 𝜆′ [𝐴𝑥 + 𝐵𝑢 − 𝑦]
As we will see, in economic contexts Lagrange multipliers often are shadow prices.
Note: If we don’t care about the Lagrange multipliers, we can substitute the constraint into the objective function, and
then just maximize −(𝐴𝑥 + 𝐵𝑢)′ 𝑃 (𝐴𝑥 + 𝐵𝑢) − 𝑢′ 𝑄𝑢 with respect to 𝑢. You can verify that this leads to the same
maximizer.
The problem is to maximize

$$-y'Py - u'Qu$$

s.t.

$$y = Ax + Bu$$
with primitives
• 𝑃 be a symmetric and positive semidefinite 𝑛 × 𝑛 matrix
• 𝑄 be a symmetric and positive semidefinite 𝑚 × 𝑚 matrix
• 𝐴 an 𝑛 × 𝑛 matrix
• 𝐵 an 𝑛 × 𝑚 matrix
The associated Lagrangian is:
𝐿 = −𝑦′ 𝑃 𝑦 − 𝑢′ 𝑄𝑢 + 𝜆′ [𝐴𝑥 + 𝐵𝑢 − 𝑦]
Step 1.
Differentiating the Lagrangian with respect to $y$ and setting the derivative equal to zero yields

$$\frac{\partial L}{\partial y} = -(P + P')y - \lambda = -2Py - \lambda = 0,$$

since $P$ is symmetric.
Accordingly, the first-order condition for maximizing L w.r.t. y implies
𝜆 = −2𝑃 𝑦
Step 2.
Differentiating the Lagrangian with respect to $u$ and setting the derivative equal to zero yields

$$\frac{\partial L}{\partial u} = -(Q + Q')u + B'\lambda = -2Qu + B'\lambda = 0$$
Substituting $\lambda = -2Py$ gives

$$Qu + B'Py = 0$$

Substituting the constraint $y = Ax + Bu$ into this equation gives

$$Qu + B'P(Ax + Bu) = 0$$
Intermediate Quantitative Economics with Python
(𝑄 + 𝐵′ 𝑃 𝐵)𝑢 + 𝐵′ 𝑃 𝐴𝑥 = 0
which is the first-order condition for maximizing 𝐿 w.r.t. 𝑢.
Thus, the optimal choice of $u$ must satisfy

$$u = -(Q + B'PB)^{-1}B'PAx,$$

which follows from the first-order conditions of the Lagrangian.
Step 3.
Rewriting our problem by substituting the constraint into the objective function, we get

$$v(x) = \max_u \left\{ -(Ax + Bu)'P(Ax + Bu) - u'Qu \right\}$$
Since we know the optimal choice of $u$ satisfies $u = Sx$, where $S = -(Q + B'PB)^{-1}B'PA$, then

$$-2u'B'PAx = -2x'S'B'PAx = 2x'A'PB(Q + B'PB)^{-1}B'PAx$$
Notice that the term (𝑄 + 𝐵′ 𝑃 𝐵)−1 is symmetric as both P and Q are symmetric.
Regarding the third term,

$$-u'(Q + B'PB)u = -x'A'PB(Q + B'PB)^{-1}B'PAx$$

Hence, the sum of the second and third terms is $x'A'PB(Q + B'PB)^{-1}B'PAx$.
This implies that

$$v(x) = -x'A'PAx + x'A'PB(Q + B'PB)^{-1}B'PAx = -x'\left[A'PA - A'PB(Q + B'PB)^{-1}B'PA\right]x$$

Therefore, the solution to the optimization problem is $v(x) = -x'\tilde{P}x$, where $\tilde{P} := A'PA - A'PB(Q + B'PB)^{-1}B'PA$.
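We can verify the first-order condition and the value function formula numerically; the primitives below are randomly generated, with $P$ and $Q$ built to be symmetric positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2

# random primitives; P and Q symmetric positive semidefinite
M = rng.random((n, n)); P = M @ M.T
M = rng.random((m, m)); Q = M @ M.T
A = rng.random((n, n))
B = rng.random((n, m))
x = rng.random(n)

# closed-form maximizer and the matrix P̃ derived above
u_star = -np.linalg.solve(Q + B.T @ P @ B, B.T @ P @ A @ x)
P_tilde = A.T @ P @ A - A.T @ P @ B @ np.linalg.inv(Q + B.T @ P @ B) @ B.T @ P @ A

def objective(u):
    y = A @ x + B @ u
    return -y @ P @ y - u @ Q @ u

# the first-order condition holds, and the value matches -x'P̃x
foc = (Q + B.T @ P @ B) @ u_star + B.T @ P @ A @ x
print(np.allclose(foc, 0))                              # True
print(np.isclose(objective(u_star), -x @ P_tilde @ x))  # True
```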
THREE
QR DECOMPOSITION
3.1 Overview
The QR decomposition (also called the QR factorization) of a matrix is a decomposition of a matrix into the product of
an orthogonal matrix and a triangular matrix.
A QR decomposition of a real matrix 𝐴 takes the form
𝐴 = 𝑄𝑅
where
• 𝑄 is an orthogonal matrix (so that 𝑄𝑇 𝑄 = 𝐼)
• 𝑅 is an upper triangular matrix
We'll use a Gram-Schmidt process to compute a QR decomposition.

Because doing so is so educational, we'll write our own Python code to do the job.
𝐴 = [ 𝑎1 𝑎2 ⋯ 𝑎𝑛 ]
Here (𝑎𝑗 ⋅ 𝑒𝑖 ) can be interpreted as the linear least squares regression coefficient of 𝑎𝑗 on 𝑒𝑖
• it is the inner product of $a_j$ and $e_i$ divided by the inner product of $e_i$ with itself, where $e_i \cdot e_i = 1$ by normalization
• this regression coefficient has an interpretation as being a covariance divided by a variance
It can be verified that
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}
\begin{bmatrix}
a_1 \cdot e_1 & a_2 \cdot e_1 & \cdots & a_n \cdot e_1 \\
0 & a_2 \cdot e_2 & \cdots & a_n \cdot e_2 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_n \cdot e_n
\end{bmatrix}$$
𝐴 = 𝑄𝑅
where
$$Q = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}$$
and
$$R = \begin{bmatrix}
a_1 \cdot e_1 & a_2 \cdot e_1 & \cdots & a_n \cdot e_1 \\
0 & a_2 \cdot e_2 & \cdots & a_n \cdot e_2 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_n \cdot e_n
\end{bmatrix}$$
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix} = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \end{bmatrix}
\begin{bmatrix}
a_1 \cdot e_1 & a_2 \cdot e_1 & \cdots & a_n \cdot e_1 & a_{n+1} \cdot e_1 & \cdots & a_m \cdot e_1 \\
0 & a_2 \cdot e_2 & \cdots & a_n \cdot e_2 & a_{n+1} \cdot e_2 & \cdots & a_m \cdot e_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & a_n \cdot e_n & a_{n+1} \cdot e_n & \cdots & a_m \cdot e_n
\end{bmatrix}$$
𝑎1 = (𝑎1 ⋅ 𝑒1 )𝑒1
𝑎2 = (𝑎2 ⋅ 𝑒1 )𝑒1 + (𝑎2 ⋅ 𝑒2 )𝑒2
⋮ ⋮
𝑎𝑛 = (𝑎𝑛 ⋅ 𝑒1 )𝑒1 + (𝑎𝑛 ⋅ 𝑒2 )𝑒2 + ⋯ + (𝑎𝑛 ⋅ 𝑒𝑛 )𝑒𝑛
𝑎𝑛+1 = (𝑎𝑛+1 ⋅ 𝑒1 )𝑒1 + (𝑎𝑛+1 ⋅ 𝑒2 )𝑒2 + ⋯ + (𝑎𝑛+1 ⋅ 𝑒𝑛 )𝑒𝑛
⋮ ⋮
𝑎𝑚 = (𝑎𝑚 ⋅ 𝑒1 )𝑒1 + (𝑎𝑚 ⋅ 𝑒2 )𝑒2 + ⋯ + (𝑎𝑚 ⋅ 𝑒𝑛 )𝑒𝑛
Now let’s write some homemade Python code to implement a QR decomposition by deploying the Gram-Schmidt process
described above.
import numpy as np
from scipy.linalg import qr
def QR_Decomposition(A):
    n, m = A.shape # get the shape of A

    Q = np.empty((n, n)) # initialize matrix Q
    u = np.empty((n, n)) # initialize matrix u

    u[:, 0] = A[:, 0]
    Q[:, 0] = u[:, 0] / np.linalg.norm(u[:, 0])

    for i in range(1, n):

        u[:, i] = A[:, i]
        for j in range(i):
            u[:, i] -= (A[:, i] @ Q[:, j]) * Q[:, j] # get each u vector

        Q[:, i] = u[:, i] / np.linalg.norm(u[:, i]) # normalize to get each e vector

    R = np.zeros((n, m))
    for i in range(n):
        for j in range(i, m):
            R[i, j] = A[:, j] @ Q[:, i]

    return Q, R
The preceding code is fine but can benefit from some further housekeeping.
We want to do this because later in this notebook we want to compare results from using our homemade code above with
the code for a QR that the Python scipy package delivers.
There can be sign differences between the 𝑄 and 𝑅 matrices produced by different numerical algorithms.
All of these are valid QR decompositions because of how the sign differences cancel out when we compute 𝑄𝑅.
However, to make the results from our homemade function and the QR module in scipy comparable, let’s require that
𝑄 have positive diagonal entries.
We do this by adjusting the signs of the columns in 𝑄 and the rows in 𝑅 appropriately.
To accomplish this we’ll define a pair of functions.
def diag_sign(A):
    "Compute the signs of the diagonal of matrix A"

    D = np.diag(np.sign(np.diag(A)))

    return D

def adjust_sign(Q, R):
    """
    Adjust the signs of the columns in Q and rows in R to
    impose positive diagonal of Q
    """

    D = diag_sign(Q)

    Q[:, :] = Q @ D
    R[:, :] = D @ R

    return Q, R
3.5 Example
A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

Q, R = adjust_sign(*QR_Decomposition(A))
Q_scipy, R_scipy = adjust_sign(*qr(A))

print('Our Q: \n', Q)
print('\n')
print('Scipy Q: \n', Q_scipy)
Our Q:
[[ 0.70710678 -0.40824829 -0.57735027]
[ 0.70710678 0.40824829 0.57735027]
[ 0. -0.81649658 0.57735027]]
Scipy Q:
[[ 0.70710678 -0.40824829 -0.57735027]
[ 0.70710678 0.40824829 0.57735027]
[ 0. -0.81649658 0.57735027]]
print('Our R: \n', R)
print('\n')
print('Scipy R: \n', R_scipy)
Our R:
[[ 1.41421356 0.70710678 0.70710678]
[ 0. -1.22474487 -0.40824829]
[ 0. 0. 1.15470054]]
Scipy R:
[[ 1.41421356 0.70710678 0.70710678]
[ 0. -1.22474487 -0.40824829]
[ 0. 0. 1.15470054]]
The above outcomes give us the good news that our homemade function agrees with what scipy produces.
Now let’s do a QR decomposition for a rectangular matrix 𝐴 that is 𝑛 × 𝑚 with 𝑚 > 𝑛.
A = np.array([[1, 3, 4], [2, 0, 9]])  # an n × m example with m > n

Q, R = adjust_sign(*QR_Decomposition(A))
Q, R
3.6 Using QR Decomposition to Compute Eigenvalues

Here is a function that uses successive QR decompositions to compute eigenvalues of a square matrix via the QR algorithm:

def QR_eigvals(A, tol=1e-12, maxiter=1000):
    "Find the eigenvalues of A using the QR algorithm."

    A_old = np.copy(A)
    A_new = np.copy(A)

    diff = np.inf
    i = 0
    while (diff > tol) and (i < maxiter):

        A_old[:, :] = A_new
        Q, R = QR_Decomposition(A_old)

        A_new[:, :] = R @ Q

        diff = np.abs(A_new - A_old).max()
        i += 1

    eigvals = np.diag(A_new)

    return eigvals
Now let’s try the code and compare the results with what scipy.linalg.eigvals gives us
Here goes
A = np.random.random((3, 3))

sorted(QR_eigvals(A))
sorted(np.linalg.eigvals(A))
3.7 QR and PCA

There are interesting connections between the 𝑄𝑅 decomposition and principal components analysis (PCA).
Here are some.
1. Let $X'$ be a $k \times n$ random matrix where the $j$th column is a random draw from $\mathcal{N}(\mu, \Sigma)$, where $\mu$ is a $k \times 1$ vector of means and $\Sigma$ is a $k \times k$ covariance matrix. We want $n >> k$ – this is an "econometrics example".
2. Form 𝑋 ′ = 𝑄𝑅 where 𝑄 is 𝑘 × 𝑘 and 𝑅 is 𝑘 × 𝑛.
3. Form the eigenvalues of $RR'$, i.e., we'll compute $RR' = \tilde{P}\Lambda\tilde{P}'$.
4. Form $X'X = Q\tilde{P}\Lambda\tilde{P}'Q'$ and compare it with the eigen decomposition $X'X = \hat{P}\hat{\Lambda}\hat{P}'$.
5. It will turn out that $\Lambda = \hat{\Lambda}$ and that $\hat{P} = Q\tilde{P}$.
Let’s verify conjecture 5 with some Python code.
Start by simulating a random (𝑛, 𝑘) matrix 𝑋.
k = 5
n = 1000

# simulate X with rows drawn from N(μ, Σ) for some arbitrary moments
𝜇, Σ = np.random.random(k), np.eye(k)
X = np.random.multivariate_normal(𝜇, Σ, size=n)
X.shape
(1000, 5)
Q, R = adjust_sign(*QR_Decomposition(X.T))
Q.shape, R.shape
RR = R @ R.T

𝜆, P_tilde = np.linalg.eigh(RR)
Λ = np.diag(𝜆)
We can also apply the eigen decomposition to $X'X = \hat{P}\hat{\Lambda}\hat{P}'$.
XX = X.T @ X
𝜆_hat, P = np.linalg.eigh(XX)
Λ_hat = np.diag(𝜆_hat)

𝜆, 𝜆_hat
QP_tilde = Q @ P_tilde

np.abs(QP_tilde - P).max()

3.344546861683284e-15

QPΛPQ = Q @ P_tilde @ Λ @ P_tilde.T @ Q.T

np.abs(QPΛPQ - XX).max()

5.002220859751105e-12
CHAPTER
FOUR
CIRCULANT MATRICES
4.1 Overview
import numpy as np
from numba import njit
import matplotlib.pyplot as plt
np.set_printoptions(precision=3, suppress=True)
An $N \times N$ circulant matrix is specified by its first row, say

$$\begin{bmatrix} c_0 & c_1 & c_2 & c_3 & c_4 & \cdots & c_{N-1} \end{bmatrix}.$$

After setting entries in the first row, the remaining rows of a circulant matrix are determined as follows:
$$C = \begin{bmatrix}
c_0 & c_1 & c_2 & c_3 & c_4 & \cdots & c_{N-1} \\
c_{N-1} & c_0 & c_1 & c_2 & c_3 & \cdots & c_{N-2} \\
c_{N-2} & c_{N-1} & c_0 & c_1 & c_2 & \cdots & c_{N-3} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
c_3 & c_4 & c_5 & c_6 & c_7 & \cdots & c_2 \\
c_2 & c_3 & c_4 & c_5 & c_6 & \cdots & c_1 \\
c_1 & c_2 & c_3 & c_4 & c_5 & \cdots & c_0
\end{bmatrix} \tag{4.1}$$
It is also possible to construct a circulant matrix by creating the transpose of the above matrix, in which case only the first
column needs to be specified.
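If we are willing to rely on SciPy rather than our own code, `scipy.linalg.circulant` builds a circulant matrix from its first column, i.e., the transpose convention just mentioned:

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([0., 1., 2., 3., 4.])

# scipy builds the circulant from its first *column*
C = circulant(c)

print((C[:, 0] == c).all())    # True: the first column is c
print((C.T[0, :] == c).all())  # True: the transpose has c as its first row
```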
Let’s write some Python code to generate a circulant matrix.
@njit
def construct_cirlulant(row):
    "Construct a circulant matrix whose first row is `row`."

    N = row.size

    C = np.empty((N, N))

    for i in range(N):
        C[i, i:] = row[:N-i]  # entries on and to the right of the diagonal
        C[i, :i] = row[N-i:]  # wrapped-around entries

    return C
𝑐 = [𝑐0 𝑐1 ⋯ 𝑐𝑁−1 ]
𝑎 = [𝑎0 𝑎1 ⋯ 𝑎𝑁−1 ]
𝑏 = 𝐶𝑇 𝑎
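Assuming $b = C^\top a$ is meant to be the circular convolution of $c$ and $a$, we can check the claim with a hand-built circulant matrix and the FFT:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
c = rng.random(N)
a = rng.random(N)

# circulant matrix with first row c: C[i, j] = c[(j - i) mod N]
i, j = np.indices((N, N))
C = c[(j - i) % N]

# b = C'a is the circular convolution of c and a,
# which can also be computed with the FFT
b = C.T @ a
b_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(a)).real

print(np.allclose(b, b_fft))  # True
```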
$$P = \begin{bmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & 0 & \cdots & 0
\end{bmatrix} \tag{4.3}$$
serves as a cyclic shift operator that, when applied to an 𝑁 × 1 vector ℎ, shifts entries in rows 2 through 𝑁 up one row
and shifts the entry in row 1 to row 𝑁 .
Eigenvalues of the cyclic shift permutation matrix 𝑃 defined in equation (4.3) can be computed by constructing
$$P - \lambda I = \begin{bmatrix}
-\lambda & 1 & 0 & 0 & \cdots & 0 \\
0 & -\lambda & 1 & 0 & \cdots & 0 \\
0 & 0 & -\lambda & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 \\
1 & 0 & 0 & 0 & \cdots & -\lambda
\end{bmatrix}$$
and solving

$$\det(P - \lambda I) = (-1)^N (\lambda^N - 1) = 0,$$

so the eigenvalues are the $N$th roots of unity.

Note also that $P$ is an orthogonal matrix:

$$PP' = I$$
@njit
def construct_P(N):

    P = np.zeros((N, N))

    for i in range(N-1):
        P[i, i+1] = 1
    P[-1, 0] = 1

    return P
P4 = construct_P(4)
P4
𝜆, Q = np.linalg.eig(P4)

for i in range(4):
    print(f'𝜆{i} = {𝜆[i]:.1f} \nvec{i} = {Q[i, :]}\n')

𝜆0 = -1.0+0.0j
vec0 = [-0.5+0.j 0. +0.5j 0. -0.5j -0.5+0.j ]

𝜆1 = 0.0+1.0j
vec1 = [ 0.5+0.j -0.5+0.j -0.5-0.j -0.5+0.j]

𝜆2 = 0.0-1.0j
vec2 = [-0.5+0.j 0. -0.5j 0. +0.5j -0.5+0.j ]

𝜆3 = 1.0+0.0j
vec3 = [ 0.5+0.j 0.5-0.j 0.5+0.j -0.5+0.j]
In graphs below, we shall portray eigenvalues of a shift permutation matrix in the complex plane.
These eigenvalues are uniformly distributed along the unit circle.
They are the $N$ roots of unity, meaning they are the $N$ numbers $z$ that solve $z^N = 1$, where $z$ is a complex number.

In particular, the $N$ roots of unity are

$$z = \exp\left(\frac{2\pi j k}{N}\right), \quad k = 0, \ldots, N-1$$

where $j$ denotes the purely imaginary unit number.
fig, ax = plt.subplots(2, 2, figsize=(10, 10))

for i, N in enumerate([3, 4, 6, 8]):

    row_i = i // 2
    col_i = i % 2

    P = construct_P(N)
    𝜆, Q = np.linalg.eig(P)

    for j in range(N):
        ax[row_i, col_i].scatter(𝜆[j].real, 𝜆[j].imag, c='b')

    ax[row_i, col_i].set_title(f'N = {N}')

plt.show()
𝐶 = 𝑐0 𝐼 + 𝑐1 𝑃 + 𝑐2 𝑃 2 + ⋯ + 𝑐𝑁−1 𝑃 𝑁−1 .
$$F_8 = \begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & w & w^2 & \cdots & w^7 \\
1 & w^2 & w^4 & \cdots & w^{14} \\
1 & w^3 & w^6 & \cdots & w^{21} \\
1 & w^4 & w^8 & \cdots & w^{28} \\
1 & w^5 & w^{10} & \cdots & w^{35} \\
1 & w^6 & w^{12} & \cdots & w^{42} \\
1 & w^7 & w^{14} & \cdots & w^{49}
\end{bmatrix}$$
The matrix $F_8$ defines a Discrete Fourier Transform.
To convert it into an orthogonal eigenvector matrix, we can simply normalize it by dividing every entry by $\sqrt{8}$.
• stare at the first column of 𝐹8 above to convince yourself of this fact
The eigenvalues corresponding to each eigenvector are {𝑤𝑗 }7𝑗=0 in order.
def construct_F(N):

    w = np.e ** (-complex(0, 2*np.pi/N))
    F = w ** np.outer(np.arange(N), np.arange(N))  # F[i, j] = w**(i*j)

    return F, w

F8, w = construct_F(8)
w

(0.7071067811865476-0.7071067811865475j)
F8
# normalize
Q8 = F8 / np.sqrt(8)
P8 = construct_P(8)

# verify that column j of Q8 is an eigenvector of P8 with eigenvalue w**j
diff_arr = [np.abs(P8 @ Q8[:, j] - w ** j * Q8[:, j]).max() for j in range(8)]
diff_arr
Next, we execute calculations to verify that the circulant matrix 𝐶 defined in equation (4.1) can be written as
$$C = c_0 I + c_1 P + \cdots + c_{N-1} P^{N-1}$$
c = np.random.random(8)
C8 = construct_cirlulant(c)
N = 8

C = np.zeros((N, N))
P = np.eye(N)

for i in range(N):
    C += c[i] * P
    P = P8 @ P
C8
Now let’s compute the difference between two circulant matrices that we have constructed in two different ways.
np.abs(C - C8).max()
0.0
The $k$th column of $Q_8$, an eigenvector of $P_8$ associated with eigenvalue $w^{k-1}$, is also an eigenvector of $C_8$, associated with eigenvalue $\sum_{h=0}^{7} c_h w^{h(k-1)}$.
𝜆_C8 = np.zeros(8, dtype=complex)

for j in range(8):
    for k in range(8):
        𝜆_C8[j] += c[k] * w ** (j * k)

𝜆_C8

# verify
for j in range(8):
    diff = C8 @ Q8[:, j] - 𝜆_C8[j] * Q8[:, j]
    print(diff)
The Discrete Fourier Transform (DFT) allows us to represent a discrete time sequence as a weighted sum of complex
sinusoids.
Consider a sequence of $N$ real numbers $\{x_j\}_{j=0}^{N-1}$.

The DFT maps $\{x_j\}_{j=0}^{N-1}$ into a sequence of complex numbers $\{X_k\}_{k=0}^{N-1}$, where

$$X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi \frac{kn}{N} i}$$
def DFT(x):
    "The discrete Fourier transform."

    N = len(x)
    w = np.e ** (-complex(0, 2*np.pi/N))

    X = np.zeros(N, dtype=complex)
    for k in range(N):
        for n in range(N):
            X[k] += x[n] * w ** (k * n)

    return X
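The double loop above is an $O(N^2)$ computation. A vectorized version of the same transform, written here with an explicit transform matrix, can be checked against `np.fft.fft`:

```python
import numpy as np

def dft_matrix_version(x):
    "O(N^2) DFT written with an explicit transform matrix."
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # W[k, n] = e^{-2πi kn/N}
    return W @ x

x = np.zeros(10)
x[0:2] = 1/2

print(np.allclose(dft_matrix_version(x), np.fft.fft(x)))  # True
```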
$$x_n = \begin{cases} 1/2 & n = 0, 1 \\ 0 & \text{otherwise} \end{cases}$$
x = np.zeros(10)
x[0:2] = 1/2
array([0.5, 0.5, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])
X = DFT(x)
We can plot magnitudes of a sequence of numbers and the associated discrete Fourier transform.
def plot_magnitude(x=None, X=None):

    data = []
    names = []
    xs = []
    if (x is not None):
        data.append(x)
        names.append('x')
        xs.append('n')
    if (X is not None):
        data.append(X)
        names.append('X')
        xs.append('j')

    num = len(data)
    for i in range(num):
        n = data[i].size
        plt.figure(figsize=(8, 3))
        plt.scatter(range(n), np.abs(data[i]))
        plt.vlines(range(n), 0, np.abs(data[i]), color='b')

        plt.xlabel(xs[i])
        plt.ylabel('magnitude')
        plt.title(names[i])
        plt.show()
plot_magnitude(x=x, X=X)
def inverse_transform(X):

    N = len(X)
    w = np.e ** (complex(0, 2*np.pi/N))

    x = np.zeros(N, dtype=complex)
    for n in range(N):
        for k in range(N):
            x[n] += X[k] * w ** (k * n) / N

    return x
inverse_transform(X)
array([ 0.5+0.j, 0.5-0.j, -0. -0.j, -0. -0.j, -0. -0.j, -0. -0.j,
-0. +0.j, -0. +0.j, -0. +0.j, -0. +0.j])
Another example is
$$x_n = 2 \cos\left(2\pi \frac{11}{40} n\right), \quad n = 0, 1, 2, \cdots, 19$$
Since $N = 20$, we cannot use an integer multiple of $\frac{1}{20}$ to represent the frequency $\frac{11}{40}$.

To handle this, we shall end up using all $N$ of the available frequencies in the DFT.

Since $\frac{11}{40}$ is in between $\frac{10}{40}$ and $\frac{12}{40}$ (each of which is an integer multiple of $\frac{1}{20}$), the complex coefficients in the DFT have their largest magnitudes at $k = 5, 6, 15, 16$, not just at a single frequency.
N = 20
x = np.empty(N)

for j in range(N):
    x[j] = 2 * np.cos(2 * np.pi * 11 * j / 40)

X = DFT(x)
plot_magnitude(x=x, X=X)
N = 20
x = np.empty(N)

for j in range(N):
    x[j] = 2 * np.cos(2 * np.pi * 10 * j / 40)

X = DFT(x)
plot_magnitude(x=x, X=X)
If we represent the discrete Fourier transform as a matrix, we discover that it equals the matrix 𝐹𝑁 of eigenvectors of the
permutation matrix 𝑃𝑁 .
We can use the example where $x_n = 2 \cos\left(2\pi \frac{11}{40} n\right), \; n = 0, 1, 2, \cdots, 19$ to illustrate this.
N = 20
x = np.empty(N)

for j in range(N):
    x[j] = 2 * np.cos(2 * np.pi * 11 * j / 40)

X = DFT(x)
X
Now let's evaluate the outcome of postmultiplying the eigenvector matrix $F_{20}$ by the vector $x$, a product that we claim should equal the Fourier transform of the sequence $\{x_n\}_{n=0}^{N-1}$.
F20, _ = construct_F(20)
F20 @ x
Similarly, the inverse DFT can be expressed with the inverse DFT matrix $F_{20}^{-1}$.
F20_inv = np.linalg.inv(F20)
F20_inv @ X
FIVE
SINGULAR VALUE DECOMPOSITION
5.1 Overview
The singular value decomposition (SVD) is a workhorse in applications of least squares projection that form foundations for many statistical and machine learning methods.
After defining the SVD, we’ll describe how it connects to
• four fundamental spaces of linear algebra
• under-determined and over-determined least squares regressions
• principal components analysis (PCA)
Like principal components analysis (PCA), DMD can be thought of as a data-reduction procedure that represents salient
patterns by projecting data onto a limited set of factors.
In a sequel to this lecture about Dynamic Mode Decompositions, we’ll describe how SVD’s provide ways rapidly to compute
reduced-order approximations to first-order Vector Autoregressions (VARs).
We’ll again use a singular value decomposition, but now to construct a dynamic mode decomposition (DMD)
𝑋 = 𝑈 Σ𝑉 ⊤ (5.1)
where
$$UU^\top = I, \quad U^\top U = I$$

$$VV^\top = I, \quad V^\top V = I$$
and
• 𝑈 is an 𝑚 × 𝑚 orthogonal matrix of left singular vectors of 𝑋
• Columns of 𝑈 are eigenvectors of 𝑋𝑋 ⊤
• 𝑉 is an 𝑛 × 𝑛 orthogonal matrix of right singular vectors of 𝑋
• Columns of 𝑉 are eigenvectors of 𝑋 ⊤ 𝑋
• Σ is an 𝑚 × 𝑛 matrix in which the first 𝑝 places on its main diagonal are positive numbers 𝜎1 , 𝜎2 , … , 𝜎𝑝 called
singular values; remaining entries of Σ are all zero
• The 𝑝 singular values are positive square roots of the eigenvalues of the 𝑚 × 𝑚 matrix 𝑋𝑋 ⊤ and also of the 𝑛 × 𝑛
matrix 𝑋 ⊤ 𝑋
• We adopt a convention that when $U$ is a complex valued matrix, $U^\top$ denotes the conjugate-transpose or Hermitian-transpose of $U$, meaning that $U_{ij}^\top$ is the complex conjugate of $U_{ji}$.
• Similarly, when 𝑉 is a complex valued matrix, 𝑉 ⊤ denotes the conjugate-transpose or Hermitian-transpose of
𝑉
The matrices $U$, $\Sigma$, $V$ entail linear transformations that reshape vectors in the following ways:
• multiplying vectors by the unitary matrices 𝑈 and 𝑉 rotates them, but leaves angles between vectors and lengths
of vectors unchanged.
• multiplying vectors by the diagonal matrix Σ leaves angles between vectors unchanged but rescales vectors.
Thus, representation (5.1) asserts that multiplying an 𝑛 × 1 vector 𝑦 by the 𝑚 × 𝑛 matrix 𝑋 amounts to performing the
following three multiplications of 𝑦 sequentially:
• rotating 𝑦 by computing 𝑉 ⊤ 𝑦
• rescaling 𝑉 ⊤ 𝑦 by multiplying it by Σ
• rotating Σ𝑉 ⊤ 𝑦 by multiplying it by 𝑈
This structure of the 𝑚 × 𝑛 matrix 𝑋 opens the door to constructing systems of data encoders and decoders.
Thus,
• 𝑉 ⊤ 𝑦 is an encoder
• Σ is an operator to be applied to the encoded data
• 𝑈 is a decoder to be applied to the output from applying operator Σ to the encoded data
We’ll apply this circle of ideas later in this lecture when we study Dynamic Mode Decomposition.
Road Ahead
What we have described above is called a full SVD.
In a full SVD, the shapes of 𝑈 , Σ, and 𝑉 are (𝑚, 𝑚), (𝑚, 𝑛), (𝑛, 𝑛), respectively.
Later we’ll also describe an economy or reduced SVD.
Before we study a reduced SVD we’ll say a little more about properties of a full SVD.
Let 𝒞 denote a column space, 𝒩 denote a null space, and ℛ denote a row space.
Let’s start by recalling the four fundamental subspaces of an 𝑚 × 𝑛 matrix 𝑋 of rank 𝑝.
• The column space of 𝑋, denoted 𝒞(𝑋), is the span of the columns of 𝑋, i.e., all vectors 𝑦 that can be written as
linear combinations of columns of 𝑋. Its dimension is 𝑝.
• The null space of 𝑋, denoted 𝒩(𝑋) consists of all vectors 𝑦 that satisfy 𝑋𝑦 = 0. Its dimension is 𝑛 − 𝑝.
• The row space of 𝑋, denoted ℛ(𝑋) is the column space of 𝑋 ⊤ . It consists of all vectors 𝑧 that can be written as
linear combinations of rows of 𝑋. Its dimension is 𝑝.
• The left null space of 𝑋, denoted 𝒩(𝑋 ⊤ ), consist of all vectors 𝑧 such that 𝑋 ⊤ 𝑧 = 0. Its dimension is 𝑚 − 𝑝.
For a full SVD of a matrix 𝑋, the matrix 𝑈 of left singular vectors and the matrix 𝑉 of right singular vectors contain
orthogonal bases for all four subspaces.
They form two pairs of orthogonal subspaces that we’ll describe now.
Let 𝑢𝑖 , 𝑖 = 1, … , 𝑚 be the 𝑚 column vectors of 𝑈 and let 𝑣𝑖 , 𝑖 = 1, … , 𝑛 be the 𝑛 column vectors of 𝑉 .
Let’s write the full SVD of X as
$$X = \begin{bmatrix} U_L & U_R \end{bmatrix} \begin{bmatrix} \Sigma_p & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_L & V_R \end{bmatrix}^\top \tag{5.2}$$
where Σ𝑝 is a 𝑝 × 𝑝 diagonal matrix with the 𝑝 singular values on the diagonal and
$$U_L = \begin{bmatrix} u_1 & \cdots & u_p \end{bmatrix}, \quad U_R = \begin{bmatrix} u_{p+1} & \cdots & u_m \end{bmatrix}$$

$$V_L = \begin{bmatrix} v_1 & \cdots & v_p \end{bmatrix}, \quad V_R = \begin{bmatrix} v_{p+1} & \cdots & v_n \end{bmatrix}$$
$$X \begin{bmatrix} V_L & V_R \end{bmatrix} = \begin{bmatrix} U_L & U_R \end{bmatrix} \begin{bmatrix} \Sigma_p & 0 \\ 0 & 0 \end{bmatrix}$$
or
$$\begin{aligned} XV_L &= U_L \Sigma_p \\ XV_R &= 0 \end{aligned} \tag{5.3}$$
or
$$\begin{aligned} Xv_i &= \sigma_i u_i, \quad i = 1, \ldots, p \\ Xv_i &= 0, \quad i = p+1, \ldots, n \end{aligned} \tag{5.4}$$
Equations (5.4) tell how the transformation 𝑋 maps a pair of orthonormal vectors 𝑣𝑖 , 𝑣𝑗 for 𝑖 and 𝑗 both less than or equal
to the rank 𝑝 of 𝑋 into a pair of orthonormal vectors 𝑢𝑖 , 𝑢𝑗 .
Notice how equations (5.6) assert that the transformation 𝑋 ⊤ maps a pair of distinct orthonormal vectors 𝑢𝑖 , 𝑢𝑗 for 𝑖 and
𝑗 both less than or equal to the rank 𝑝 of 𝑋 into a pair of distinct orthonormal vectors 𝑣𝑖 , 𝑣𝑗 .
Equations (5.5) assert that
ℛ(𝑋) ≡ 𝒞(𝑋 ⊤ ) = 𝒞(𝑉𝐿 )
𝒩(𝑋 ⊤ ) = 𝒞(𝑈𝑅 )
Thus, taken together, the systems of equations (5.3) and (5.5) describe the four fundamental subspaces of 𝑋 in the
following ways:
𝒞(𝑋) = 𝒞(𝑈𝐿 )
𝒩(𝑋 ⊤ ) = 𝒞(𝑈𝑅 )
ℛ(𝑋) ≡ 𝒞(𝑋 ⊤ ) = 𝒞(𝑉𝐿 ) (5.7)
𝒩(𝑋) = 𝒞(𝑉𝑅 )
Since 𝑈 and 𝑉 are both orthonormal matrices, collection (5.7) asserts that
• 𝑈𝐿 is an orthonormal basis for the column space of 𝑋
• 𝑈𝑅 is an orthonormal basis for the null space of 𝑋 ⊤
• 𝑉𝐿 is an orthonormal basis for the row space of 𝑋
• 𝑉𝑅 is an orthonormal basis for the null space of 𝑋
We have verified the four claims in (5.7) simply by performing the multiplications called for by the right side of (5.2) and
reading them.
The claims in (5.7) and the fact that 𝑈 and 𝑉 are both unitary (i.e., orthonormal) matrices imply that
• the column space of 𝑋 is orthogonal to the null space of 𝑋 ⊤
• the null space of 𝑋 is orthogonal to the row space of 𝑋
Sometimes these properties are described with the following two pairs of orthogonal complement subspaces:
• 𝒞(𝑋) is the orthogonal complement of 𝒩(𝑋 ⊤ )
• ℛ(𝑋) is the orthogonal complement of 𝒩(𝑋)
Let’s do an example.
import numpy as np
import numpy.linalg as LA
import matplotlib.pyplot as plt
np.set_printoptions(precision=2)
A = np.array([[1, 2, 3, 4, 5],
              [2, 3, 4, 5, 6],
              [3, 4, 5, 6, 7],
              [4, 5, 6, 7, 8],
              [5, 6, 7, 8, 9]])

U, S, V = LA.svd(A)
rank = LA.matrix_rank(A)
print("Rank of matrix:\n", rank)
print("S:\n", S)

# bases for the four fundamental subspaces, read off U and V
col_space, left_null_space = U[:, :rank], U[:, rank:]
row_space, null_space = V[:rank, :].T, V[rank:, :].T

print("U:\n", U)
print("Column space:\n", col_space)
print("Left null space:\n", left_null_space)
print("V.T:\n", V.T)
print("Row space:\n", row_space.T)
print("Right null space:\n", null_space.T)
Rank of matrix:
2
S:
[2.69e+01 1.86e+00 1.20e-15 2.24e-16 5.82e-17]
U:
[[-0.27 -0.73 0.63 -0.06 0.06]
[-0.35 -0.42 -0.69 -0.45 0.12]
[-0.43 -0.11 -0.24 0.85 0.12]
[-0.51 0.19 0.06 -0.1 -0.83]
[-0.59 0.5 0.25 -0.24 0.53]]
Column space:
[[-0.27 -0.35]
[ 0.73 0.42]
[ 0.32 -0.65]
[ 0.54 -0.39]
[-0.06 -0.35]]
Left null space:
$$\min_{X_r} ||X - X_r||$$

where $||\cdot||$ denotes a norm of a matrix $X$ and where $X_r$ belongs to the space of all rank $r$ matrices of dimension $m \times n$.
Three popular matrix norms of an 𝑚 × 𝑛 matrix 𝑋 can be expressed in terms of the singular values of 𝑋
• the spectral or $l^2$ norm $||X||_2 = \max_{||y|| \neq 0} \frac{||Xy||}{||y||} = \sigma_1$
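We can confirm the spectral-norm formula numerically; the second check below uses the Frobenius norm, presumably one of the other two norms intended here:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((5, 3))

sigma = np.linalg.svd(X, compute_uv=False)  # singular values, descending

# spectral norm equals the largest singular value
print(np.isclose(np.linalg.norm(X, 2), sigma[0]))  # True

# Frobenius norm equals the square root of the sum of squared singular values
print(np.isclose(np.linalg.norm(X, 'fro'), np.sqrt((sigma ** 2).sum())))  # True
```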
This is a very powerful theorem that says that we can take our $m \times n$ matrix $X$ that is not full rank, and we can best approximate it by a full rank $p \times p$ matrix through the SVD.
Moreover, if some of these 𝑝 singular values carry more information than others, and if we want to have the most amount
of information with the least amount of data, we can take 𝑟 leading singular values ordered by magnitude.
We’ll say more about this later when we present Principal Component Analysis.
You can read about the Eckart-Young theorem and some of its uses here.
We'll make use of this theorem when we discuss principal components analysis (PCA) and also dynamic mode decomposition (DMD).
Up to now we have described properties of a full SVD in which shapes of 𝑈 , Σ, and 𝑉 are (𝑚, 𝑚), (𝑚, 𝑛), (𝑛, 𝑛),
respectively.
There is an alternative bookkeeping convention called an economy or reduced SVD in which the shapes of 𝑈 , Σ and 𝑉
are different from what they are in a full SVD.
Thus, note that because we assume that 𝑋 has rank 𝑝, there are only 𝑝 nonzero singular values, where 𝑝 = rank(𝑋) ≤
min (𝑚, 𝑛).
A reduced SVD uses this fact to express 𝑈 , Σ, and 𝑉 as matrices with shapes (𝑚, 𝑝), (𝑝, 𝑝), (𝑛, 𝑝).
You can read about reduced and full SVD here https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html
For a full SVD,
$$UU^\top = I, \quad U^\top U = I$$

$$VV^\top = I, \quad V^\top V = I$$
import numpy as np
X = np.random.rand(5,2)
U, S, V = np.linalg.svd(X,full_matrices=True) # full SVD
Uhat, Shat, Vhat = np.linalg.svd(X,full_matrices=False) # economy SVD
print('U, S, V =')
U, S, V
U, S, V =
(array([[-0.48, 0.29],
[-0.3 , -0.1 ],
[-0.52, -0.76],
[-0.42, 0.57],
[-0.49, 0.09]]),
array([1.93, 0.69]),
array([[-0.52, -0.85],
[-0.85, 0.52]]))
rr = np.linalg.matrix_rank(X)
print(f'rank of X = {rr}')
rank of X = 2
Properties:
• Where 𝑈 is constructed via a full SVD, 𝑈 ⊤ 𝑈 = 𝐼𝑚×𝑚 and 𝑈 𝑈 ⊤ = 𝐼𝑚×𝑚
• Where 𝑈̂ is constructed via a reduced SVD, although 𝑈̂ ⊤ 𝑈̂ = 𝐼𝑝×𝑝 , it happens that 𝑈̂ 𝑈̂ ⊤ ≠ 𝐼𝑚×𝑚
We illustrate these properties for our example with the following code cells.
UTU = U.T@U
UUT = [email protected]
print('UUT, UTU = ')
UUT, UTU
UUT, UTU =
UhatUhatT = [email protected]
UhatTUhat = Uhat.T@Uhat
print('UhatUhatT, UhatTUhat= ')
UhatUhatT, UhatTUhat
UhatUhatT, UhatTUhat=
Remarks:
The cells above illustrate the application of the full_matrices=True and full_matrices=False options.
Using full_matrices=False returns a reduced singular value decomposition.
The full and reduced SVD’s both accurately decompose an 𝑚 × 𝑛 matrix 𝑋
When we study Dynamic Mode Decompositions below, it will be important for us to remember the preceding properties
of full and reduced SVD’s in such tall-skinny cases.
Now let’s turn to a short-fat case.
To illustrate this case, we’ll set 𝑚 = 2 < 5 = 𝑛 and compute both full and reduced SVD’s.
import numpy as np
X = np.random.rand(2,5)
U, S, V = np.linalg.svd(X,full_matrices=True) # full SVD
Uhat, Shat, Vhat = np.linalg.svd(X,full_matrices=False) # economy SVD
print('U, S, V = ')
U, S, V
U, S, V =
SShat=np.diag(Shat)
np.allclose(X, Uhat@SShat@Vhat)
True
Next we introduce a polar decomposition of $X$:

$$X = SQ$$
where
𝑆 = 𝑈 Σ𝑈 ⊤
𝑄 = 𝑈𝑉 ⊤
Here
• 𝑆 is an 𝑚 × 𝑚 symmetric matrix
• 𝑄 is an 𝑚 × 𝑛 orthogonal matrix
and in our reduced SVD
• 𝑈 is an 𝑚 × 𝑝 orthonormal matrix
• Σ is a 𝑝 × 𝑝 diagonal matrix
• 𝑉 is an 𝑛 × 𝑝 orthonormal matrix
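A quick numerical check that $X = SQ$, with $S$ and $Q$ built from a reduced SVD (the matrix below is random):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((3, 5))  # m = 3, n = 5

U, sig, VT = np.linalg.svd(X, full_matrices=False)

S = U @ np.diag(sig) @ U.T  # m x m symmetric
Q = U @ VT                  # m x n with orthonormal rows

print(np.allclose(X, S @ Q))  # True
```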
Let’s begin with a case in which 𝑛 >> 𝑚, so that we have many more individuals 𝑛 than attributes 𝑚.
The matrix 𝑋 is short and fat in an 𝑛 >> 𝑚 case as opposed to a tall and skinny case with 𝑚 >> 𝑛 to be discussed
later.
We regard 𝑋 as an 𝑚 × 𝑛 matrix of data:
𝑋 = [𝑋1 ∣ 𝑋2 ∣ ⋯ ∣ 𝑋𝑛 ]
where for $j = 1, \ldots, n$ the column vector $X_j = \begin{bmatrix} X_{1j} \\ X_{2j} \\ \vdots \\ X_{mj} \end{bmatrix}$ is a vector of observations on variables $\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}$.
In a time series setting, we would think of columns 𝑗 as indexing different times at which random variables are observed,
while rows index different random variables.
In a cross-section setting, we would think of columns 𝑗 as indexing different individuals for which random variables are
observed, while rows index different attributes.
As we have seen before, the SVD is a way to decompose a matrix into useful components, just like polar decomposition,
eigendecomposition, and many others.
PCA, on the other hand, is a method that builds on the SVD to analyze data. The goal is to apply a sequence of steps that use statistical tools to capture and visualize the most important patterns in the data.
Step 1: Standardize the data:
Because our data matrix may hold variables of different units and scales, we first need to standardize the data.
First by computing the average of each row of 𝑋.
$$\bar{X}_j = \frac{1}{m} \sum_{i=1}^{m} x_{i,j}$$
$$\bar{X} = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} \begin{bmatrix} \bar{X}_1 & \bar{X}_2 & \cdots & \bar{X}_n \end{bmatrix}$$
and subtract it from the original matrix to create a mean-centered matrix:
𝐵 = 𝑋 − 𝑋̄
Applying an SVD of $B$, $B = U\Sigma V^\top$, we have

$$B^\top B = V\Sigma^\top U^\top U\Sigma V^\top = V\Sigma^\top \Sigma V^\top$$

so that the covariance matrix is

$$C = \frac{1}{n} V\Sigma^\top \Sigma V^\top$$
We can then rearrange the columns in the matrices 𝑉 and Σ so that the singular values are in decreasing order.
Step 4: Select singular values, (optional) truncate the rest:
We can now decide how many singular values to pick, based on how much variance you want to retain. (e.g., retaining
95% of the total variance).
We can obtain the percentage by calculating the variance contained in the leading 𝑟 factors divided by the variance in
total:
$$\frac{\sum_{i=1}^{r} \sigma_i^2}{\sum_{i=1}^{p} \sigma_i^2}$$
Step 5: Transform the data:

$$T = BV = U\Sigma V^\top V = U\Sigma$$
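The five steps can be sketched in a few lines; the data matrix below is simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 4, 200
X = rng.standard_normal((m, n)) * np.array([[5.], [3.], [1.], [0.5]])

# Step 1: de-mean each row (variable)
B = X - X.mean(axis=1, keepdims=True)

# Steps 2-3: SVD of B (numpy returns singular values in decreasing order)
U, sig, VT = np.linalg.svd(B, full_matrices=False)

# Step 4: share of total variance captured by the leading r singular values
r = 2
explained = (sig[:r] ** 2).sum() / (sig ** 2).sum()

# Step 5: the principal components T = BV equal UΣ
T = B @ VT.T
print(np.allclose(T, U @ np.diag(sig)))  # True
```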
To relate an SVD to a PCA of data set 𝑋, first construct the SVD of the data matrix 𝑋:
Let’s assume that sample means of all variables are zero, so we don’t need to standardize our matrix.
where
$$V^\top = \begin{bmatrix} V_1^\top \\ V_2^\top \\ \vdots \\ V_n^\top \end{bmatrix}$$
In equation (5.9), each of the 𝑚 × 𝑛 matrices 𝑈𝑗 𝑉𝑗⊤ is evidently of rank 1.
Thus, we have

$$X = \sum_{j=1}^{p} \sigma_j U_j V_j^\top \tag{5.10}$$
Here is how we would interpret the objects in the matrix equation (5.10) in a time series context:
• for each $k = 1, \ldots, n$, the object $\{V_{kj}\}_{j=1}^{n}$ is a time series for the $k$th principal component

• $U_j = \begin{bmatrix} U_{1k} \\ U_{2k} \\ \vdots \\ U_{mk} \end{bmatrix}$, $k = 1, \ldots, m$ is a vector of loadings of variables $X_i$ on the $k$th principal component, $i = 1, \ldots, m$

• $\sigma_k$ for each $k = 1, \ldots, p$ is the strength of the $k$th principal component, where strength means contribution to the overall covariance of $X$.
Now define the matrix

$$\Omega = XX^\top$$

and its eigen decomposition

$$\Omega = P\Lambda P^\top$$
Here
• 𝑃 is 𝑚 × 𝑚 matrix of eigenvectors of Ω
• Λ is a diagonal matrix of eigenvalues of Ω
We can then represent 𝑋 as
𝑋 = 𝑃𝜖
where
𝜖 = 𝑃 −1 𝑋
and
𝜖𝜖⊤ = Λ.
𝑋𝑋 ⊤ = 𝑃 Λ𝑃 ⊤ . (5.11)
$$X = \begin{bmatrix} X_1 | X_2 | \ldots | X_m \end{bmatrix} = \begin{bmatrix} P_1 | P_2 | \ldots | P_m \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_m \end{bmatrix} = P_1 \epsilon_1 + P_2 \epsilon_2 + \ldots + P_m \epsilon_m$$
To reconcile the preceding representation with the PCA that we had obtained earlier through the SVD, we first note that $\epsilon_j \epsilon_j^\top = \lambda_j \equiv \sigma_j^2$.

Now define $\tilde{\epsilon}_j = \frac{\epsilon_j}{\sqrt{\lambda_j}}$, which implies that $\tilde{\epsilon}_j \tilde{\epsilon}_j^\top = 1$.
Therefore
$$X = \sqrt{\lambda_1} P_1 \tilde{\epsilon}_1 + \sqrt{\lambda_2} P_2 \tilde{\epsilon}_2 + \ldots + \sqrt{\lambda_m} P_m \tilde{\epsilon}_m = \sigma_1 P_1 \tilde{\epsilon}_1 + \sigma_2 P_2 \tilde{\epsilon}_2 + \ldots + \sigma_m P_m \tilde{\epsilon}_m,$$
5.11 Connections
To pull things together, it is useful to assemble and compare some formulas presented above.
First, consider an SVD of an 𝑚 × 𝑛 matrix:
𝑋 = 𝑈 Σ𝑉 ⊤
Compute:
𝑋𝑋 ⊤ = 𝑈 Σ𝑉 ⊤ 𝑉 Σ⊤ 𝑈 ⊤
≡ 𝑈 ΣΣ⊤ 𝑈 ⊤ (5.12)
≡ 𝑈 Λ𝑈 ⊤
𝑋 ⊤ 𝑋 = 𝑉 Σ⊤ 𝑈 ⊤ 𝑈 Σ𝑉 ⊤
= 𝑉 Σ⊤ Σ𝑉 ⊤
𝑋𝑋 ⊤ = 𝑃 Λ𝑃 ⊤
𝑋𝑋 ⊤ = 𝑈 ΣΣ⊤ 𝑈 ⊤
𝑋 = 𝑃 𝜖 = 𝑈 Σ𝑉 ⊤
It follows that
𝑈 ⊤ 𝑋 = Σ𝑉 ⊤ = 𝜖
𝜖𝜖⊤ = Σ𝑉 ⊤ 𝑉 Σ⊤ = ΣΣ⊤ = Λ,
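These connections can be verified numerically on a small simulated data matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 8
X = rng.standard_normal((m, n))
X = X - X.mean(axis=1, keepdims=True)  # de-mean so no standardization is needed

U, sig, VT = np.linalg.svd(X, full_matrices=False)

# eigenvalues of XX' are the squared singular values
lam = np.linalg.eigvalsh(X @ X.T)
print(np.allclose(np.sort(lam)[::-1], sig ** 2))  # True

# ε = U'X = ΣV' and εε' = Λ
eps = U.T @ X
print(np.allclose(eps, np.diag(sig) @ VT))          # True
print(np.allclose(eps @ eps.T, np.diag(sig ** 2)))  # True
```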
class DecomAnalysis:
    """
    A class for conducting PCA and SVD.

    X: data matrix
    r_component: chosen rank for best approximation
    """

    def __init__(self, X, r_component=None):

        self.X = X

        self.Ω = (X @ X.T)

        self.m, self.n = X.shape
        self.r = LA.matrix_rank(X)

        if r_component:
            self.r_component = r_component
        else:
            self.r_component = self.m

    def pca(self):

        λ, P = LA.eigh(self.Ω)  # columns of P are eigenvectors

        # sort by eigenvalues
        ind = sorted(range(λ.size), key=lambda i: λ[i], reverse=True)
        self.λ = λ[ind]
        P = P[:, ind]
        self.P = P @ diag_sign(P)

        self.Λ = np.diag(self.λ)

        self.explained_ratio_pca = np.cumsum(self.λ) / self.λ.sum()

        # compute the m by n matrix of principal components
        self.ε = self.P.T @ self.X

        P = self.P[:, :self.r_component]
        ε = self.ε[:self.r_component, :]

        # transform data
        self.X_pca = P @ ε

    def svd(self):

        U, σ, VT = LA.svd(self.X)

        # sort by eigenvalues
        d = min(self.m, self.n)
        ind = sorted(range(d), key=lambda i: σ[i], reverse=True)

        self.σ = σ[ind]
        U = U[:, ind]
        D = diag_sign(U)
        self.U = U @ D
        VT[:d, :] = D @ VT[ind, :]
        self.VT = VT

        self.Σ = np.zeros((self.m, self.n))
        self.Σ[:d, :d] = np.diag(self.σ)

        σ_sq = self.σ ** 2
        self.explained_ratio_svd = np.cumsum(σ_sq) / σ_sq.sum()

        # slice matrices by the number of components
        U = self.U[:, :self.r_component]
        Σ = self.Σ[:self.r_component, :self.r_component]
        VT = self.VT[:self.r_component, :]

        # transform data
        self.X_svd = U @ Σ @ VT

    def fit(self, r_component):

        # pca
        P = self.P[:, :r_component]
        ε = self.ε[:r_component, :]

        # transform data
        self.X_pca = P @ ε

        # svd
        U = self.U[:, :r_component]
        Σ = self.Σ[:r_component, :r_component]
        VT = self.VT[:r_component, :]

        # transform data
        self.X_svd = U @ Σ @ VT
def diag_sign(A):
    "Compute the signs of the diagonal of matrix A"

    D = np.diag(np.sign(np.diag(A)))

    return D
We also define a function that prints out information so that we can compare decompositions obtained by different algorithms.
def compare_pca_svd(da):
    """
    Compare the outcomes of PCA and SVD.
    """

    da.pca()
    da.svd()

    # loading matrices
    fig, axs = plt.subplots(1, 2, figsize=(14, 5))
    plt.suptitle('loadings')
    axs[0].plot(da.P.T)
    axs[0].set_title('P')
    axs[0].set_xlabel('m')
    axs[1].plot(da.U.T)
    axs[1].set_title('U')
    axs[1].set_xlabel('m')
    plt.show()

    # principal components
    fig, axs = plt.subplots(1, 2, figsize=(14, 5))
    plt.suptitle('principal components')
    axs[0].plot(da.ε.T)
    axs[0].set_title('ε')
    axs[0].set_xlabel('n')
    axs[1].plot(da.VT[:da.r, :].T * np.sqrt(da.λ))
    axs[1].set_title(r'$V^\top *\sqrt{\lambda}$')
    axs[1].set_xlabel('n')
    plt.show()
5.12 Exercises
Exercise 5.12.1
In Ordinary Least Squares (OLS), we learn to compute $\hat{\beta} = (X^\top X)^{-1}X^\top y$, but there are cases, such as when we have collinearity or an underdetermined system (a short fat matrix), where this formula fails.

In these cases, the $(X^\top X)$ matrix is not invertible (its determinant is zero) or ill-conditioned (its determinant is very close to zero).
What we can do instead is to create what is called a pseudoinverse, a full rank approximation of the inverted matrix so
we can compute 𝛽 ̂ with it.
Thinking in terms of the Eckart-Young theorem, build the pseudoinverse matrix $X^+$ and use it to compute $\hat{\beta}$.
𝑋 = 𝑈 Σ𝑉 ⊤
Pseudo-inverting $X$, we have:
𝑋 + = 𝑉 Σ+ 𝑈 ⊤
where:
$$\Sigma^+ = \begin{bmatrix}
\frac{1}{\sigma_1} & 0 & \cdots & 0 & 0 \\
0 & \frac{1}{\sigma_2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \frac{1}{\sigma_p} & 0 \\
0 & 0 & \cdots & 0 & 0
\end{bmatrix}$$
and finally:
$$\hat{\beta} = X^+ y = V\Sigma^+ U^\top y$$
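Here is a sketch of this construction on a made-up short fat matrix, checked against `np.linalg.pinv`:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.random((3, 5))  # short fat: X'X is 5 x 5 but singular
y = rng.random(3)

# X+ = V Σ+ U' built from a reduced SVD
U, sig, VT = np.linalg.svd(X, full_matrices=False)
X_plus = VT.T @ np.diag(1 / sig) @ U.T

beta_hat = X_plus @ y

print(np.allclose(X_plus, np.linalg.pinv(X)))  # True
```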
For an example PCA applied to analyzing the structure of intelligence tests see this lecture Multivariable Normal Distri-
bution.
Look at parts of that lecture that describe and illustrate the classic factor analysis model.
As mentioned earlier, in a sequel to this lecture about Dynamic Mode Decompositions, we'll describe how SVDs provide ways to rapidly compute reduced-order approximations to first-order Vector Autoregressions (VARs).
CHAPTER SIX
This lecture applies computational methods that we learned about in this lecture Singular Value Decomposition to
• first-order vector autoregressions (VARs)
• dynamic mode decompositions (DMDs)
• connections between DMDs and first-order VARs
where 𝜖𝑡+1 is the time 𝑡 + 1 component of a sequence of i.i.d. 𝑚 × 1 random vectors with mean vector zero and identity
covariance matrix and where the 𝑚 × 1 vector 𝑋𝑡 is
$$X_t = \begin{bmatrix} X_{1,t} & X_{2,t} & \cdots & X_{m,t} \end{bmatrix}^\top \tag{6.2}$$

and where $\cdot^\top$ again denotes complex transposition and $X_{i,t}$ is variable $i$ at time $t$.
We want to fit equation (6.1).
Our data are organized in an 𝑚 × (𝑛 + 1) matrix 𝑋̃
𝑋̃ = [𝑋1 ∣ 𝑋2 ∣ ⋯ ∣ 𝑋𝑛 ∣ 𝑋𝑛+1 ]
𝑋 = [𝑋1 ∣ 𝑋2 ∣ ⋯ ∣ 𝑋𝑛 ]
and
𝑋 ′ = [𝑋2 ∣ 𝑋3 ∣ ⋯ ∣ 𝑋𝑛+1 ]
Here ′ is part of the name of the matrix 𝑋 ′ and does not indicate matrix transposition.
𝐴̂ = 𝑋′𝑋+ (6.3)
𝑋 + = 𝑋 ⊤ (𝑋𝑋 ⊤ )−1
𝑋 + = (𝑋 ⊤ 𝑋)−1 𝑋 ⊤
𝐴 ̂ = 𝑋 ′ (𝑋 ⊤ 𝑋)−1 𝑋 ⊤ (6.5)
$$\hat{A} X = X'$$
𝐴̂ = 𝑋′𝑋+ (6.7)
𝑋 = 𝑈 Σ𝑉 ⊤ (6.8)
where we remind ourselves that for a reduced SVD, 𝑋 is an 𝑚 × 𝑛 matrix of data, 𝑈 is an 𝑚 × 𝑝 matrix, Σ is a 𝑝 × 𝑝
matrix, and 𝑉 is an 𝑛 × 𝑝 matrix.
We can efficiently construct the pertinent pseudo-inverse 𝑋 + by recognizing the following string of equalities.
$$
\begin{aligned}
X^+ &= (X^\top X)^{-1} X^\top \\
&= (V \Sigma U^\top U \Sigma V^\top)^{-1} V \Sigma U^\top \\
&= (V \Sigma \Sigma V^\top)^{-1} V \Sigma U^\top \\
&= V \Sigma^{-1} \Sigma^{-1} V^\top V \Sigma U^\top \\
&= V \Sigma^{-1} U^\top
\end{aligned} \tag{6.9}
$$
(Since we are in the 𝑚 >> 𝑛 case in which 𝑉 ⊤ 𝑉 = 𝐼𝑝×𝑝 in a reduced SVD, we can use the preceding string of equalities
for a reduced SVD as well as for a full SVD.)
Thus, we shall construct a pseudo-inverse 𝑋 + of 𝑋 by using a singular value decomposition of 𝑋 in equation (6.8) to
compute
𝑋 + = 𝑉 Σ−1 𝑈 ⊤ (6.10)
where the matrix Σ−1 is constructed by replacing each non-zero element of Σ with 𝜎𝑗−1 .
We can use formula (6.10) together with formula (6.7) to compute the matrix 𝐴 ̂ of regression coefficients.
Thus, our estimator 𝐴 ̂ = 𝑋 ′ 𝑋 + of the 𝑚 × 𝑚 matrix of coefficients 𝐴 is
𝐴 ̂ = 𝑋 ′ 𝑉 Σ−1 𝑈 ⊤ (6.11)
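As a sketch (with simulated data and an illustrative transition matrix of our own choosing, not from the lecture), the estimator (6.11) can be computed directly from a reduced SVD:

```python
import numpy as np

np.random.seed(1)
m, n = 3, 50
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.1, 0.0, 0.7]])

# simulate X_{t+1} = A X_t + small noise and form the shifted data matrices
X_tilde = np.empty((m, n + 1))
X_tilde[:, 0] = np.random.randn(m)
for t in range(n):
    X_tilde[:, t + 1] = A_true @ X_tilde[:, t] + 0.01 * np.random.randn(m)
X, X_prime = X_tilde[:, :-1], X_tilde[:, 1:]

# Â = X' V Σ⁻¹ Uᵀ, built from the reduced SVD of X  (equation (6.11))
U, σ, VT = np.linalg.svd(X, full_matrices=False)
A_hat = X_prime @ VT.T @ np.diag(1/σ) @ U.T
```

The estimator coincides with `X_prime @ np.linalg.pinv(X)` and, with this much data and little noise, recovers the transition matrix closely.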
We turn to the 𝑚 >> 𝑛 tall and skinny case associated with Dynamic Mode Decomposition.
Here an $m \times (n + 1)$ data matrix $\tilde X$ contains many more attributes (or variables) $m$ than time periods $n + 1$.
Dynamic mode decomposition was introduced by [Schmid, 2010].

You can read more about Dynamic Mode Decomposition in [Kutz et al., 2016] and [Brunton and Kutz, 2019] (section 7.2).
Dynamic Mode Decomposition (DMD) computes a rank 𝑟 < 𝑝 approximation to the least squares regression coefficients
𝐴 ̂ described by formula (6.11).
We’ll build up gradually to a formulation that is useful in applications.
We’ll do this by describing three alternative representations of our first-order linear dynamic system, i.e., our vector
autoregression.
Guide to three representations: In practice, we’ll mainly be interested in Representation 3.
We use the first two representations to present some useful intermediate steps that help us to appreciate what is under the
hood of Representation 3.
In applications, we’ll use only a small subset of DMD modes to approximate dynamics.
We use such a small subset of DMD modes to construct a reduced-rank approximation to 𝐴.
To do that, we'll want to use the reduced SVDs affiliated with Representation 3, not the full SVDs affiliated with Representations 1 and 2.
Guide to impatient reader: In our applications, we’ll be using Representation 3.
You might want to skip the stage-setting representations 1 and 2 on first reading.
6.3 Representation 1
𝑏̃𝑡 = 𝑈 ⊤ 𝑋𝑡 . (6.12)
𝑋𝑡 = 𝑈 𝑏̃𝑡 (6.13)
$$\tilde A = U^\top \hat A U \tag{6.14}$$

$$\hat A = U \tilde A U^\top$$

$$\tilde b_{t+1} = \tilde A \tilde b_t$$
To construct forecasts $\overline X_t$ of future values of $X_t$ conditional on $X_1$, we can apply decoders (i.e., rotators) to both sides of this equation and deduce

$$\overline X_{t+1} = U \tilde A^t U^\top X_1$$
6.4 Representation 2
𝐴 ̃ = 𝑊 Λ𝑊 −1 (6.15)
where Λ is a diagonal matrix of eigenvalues and 𝑊 is an 𝑚 × 𝑚 matrix whose columns are eigenvectors corresponding
to rows (eigenvalues) in Λ.
When 𝑈 𝑈 ⊤ = 𝐼𝑚×𝑚 , as is true with a full SVD of 𝑋, it follows that
$$\hat A = U \tilde A U^\top = U W \Lambda W^{-1} U^\top \tag{6.16}$$
According to equation (6.16), the diagonal matrix Λ contains eigenvalues of 𝐴 ̂ and corresponding eigenvectors of 𝐴 ̂ are
columns of the matrix 𝑈 𝑊 .
It follows that the systematic (i.e., not random) parts of the 𝑋𝑡 dynamics captured by our first-order vector autoregressions
are described by
𝑋𝑡+1 = 𝑈 𝑊 Λ𝑊 −1 𝑈 ⊤ 𝑋𝑡
𝑊 −1 𝑈 ⊤ 𝑋𝑡+1 = Λ𝑊 −1 𝑈 ⊤ 𝑋𝑡
or
𝑏̂𝑡+1 = Λ𝑏̂𝑡
where our encoder is
𝑏̂𝑡 = 𝑊 −1 𝑈 ⊤ 𝑋𝑡
and our decoder is
𝑋𝑡 = 𝑈 𝑊 𝑏̂𝑡
We can use this representation to construct a predictor $\overline X_{t+1}$ of $X_{t+1}$ conditional on $X_1$ via:

$$\overline X_{t+1} = U W \Lambda^t W^{-1} U^\top X_1 \tag{6.17}$$
$$\Phi_s = U W \tag{6.18}$$

$$\Phi_s^+ = W^{-1} U^\top \tag{6.19}$$

$$\overline X_{t+1} = \Phi_s \Lambda^t \Phi_s^+ X_1 \tag{6.20}$$

$$\Phi_s^+ = (\Phi_s^\top \Phi_s)^{-1} \Phi_s^\top$$

$$\hat b = \Phi_s^+ X$$
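The following sketch (on randomly generated stand-in data of our own, not the lecture's) verifies numerically that with a full SVD, columns of $\Phi_s = UW$ are eigenvectors of $\hat A$:

```python
import numpy as np

np.random.seed(2)
m, n = 4, 60
X = np.random.randn(m, n)        # stand-in data matrices
X_prime = np.random.randn(m, n)

# full SVD: U is m×m, so U Uᵀ = Uᵀ U = I
U, σ, VT = np.linalg.svd(X, full_matrices=True)
A_hat = X_prime @ np.linalg.pinv(X)

# Ã = Uᵀ Â U and its eigendecomposition Ã = W Λ W⁻¹
A_tilde = U.T @ A_hat @ U
Λ, W = np.linalg.eig(A_tilde)

# eigenvectors of Â are the columns of Φ_s = U W   (equation (6.16))
Φ_s = U @ W
```

Because $U$ is orthogonal here, $\hat A \Phi_s = U \tilde A W = U W \Lambda = \Phi_s \Lambda$ holds exactly, up to floating point.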
6.5 Representation 3
Departing from the procedures used to construct Representations 1 and 2, each of which deployed a full SVD, we now
use a reduced SVD.
Again, we let 𝑝 ≤ min(𝑚, 𝑛) be the rank of 𝑋.
Construct a reduced SVD
𝑋 = 𝑈̃ Σ̃ 𝑉 ̃ ⊤ ,
𝐴 ̂ = 𝑋 ′ 𝑉 ̃ Σ̃ −1 𝑈̃ ⊤ (6.21)
$$\tilde A = \tilde U^\top \hat A \tilde U \tag{6.22}$$

$$\tilde A = \tilde U^\top \hat A \tilde U = \tilde U^\top X' \tilde V \tilde\Sigma^{-1} \tilde U^\top \tilde U = \tilde U^\top X' \tilde V \tilde\Sigma^{-1} \tag{6.23}$$
Next, we’ll just compute the regression coefficients in a projection of 𝐴 ̂ on 𝑈̃ using a standard least-squares formula
$$\hat A \neq \tilde U \tilde A \tilde U^\top,$$
𝐴 ̃ = 𝑊̃ Λ𝑊̃ −1 (6.24)
where Λ is a diagonal matrix of 𝑝 eigenvalues and the columns of 𝑊̃ are corresponding eigenvectors.
Mimicking our procedure in Representation 2, we cross our fingers and compute an 𝑚 × 𝑝 matrix
Φ̃ 𝑠 = 𝑈̃ 𝑊̃ (6.25)
$$
\begin{aligned}
\hat A \tilde\Phi_s &= (X' \tilde V \tilde\Sigma^{-1} \tilde U^\top)(\tilde U \tilde W) \\
&= X' \tilde V \tilde\Sigma^{-1} \tilde W \\
&\neq (\tilde U \tilde W) \Lambda \\
&= \tilde\Phi_s \Lambda
\end{aligned}
$$

That $\hat A \tilde\Phi_s \neq \tilde\Phi_s \Lambda$ means that, unlike the corresponding situation in Representation 2, columns of $\tilde\Phi_s = \tilde U \tilde W$ are not eigenvectors of $\hat A$ corresponding to eigenvalues on the diagonal of matrix $\Lambda$.
An Approach That Works
Continuing our quest for eigenvectors of 𝐴 ̂ that we can compute with a reduced SVD, let’s define an 𝑚 × 𝑝 matrix Φ as
$$\Phi \equiv \hat A \tilde\Phi_s = X' \tilde V \tilde\Sigma^{-1} \tilde W \tag{6.26}$$

It turns out that

$$
\begin{aligned}
\hat A \Phi &= (X' \tilde V \tilde\Sigma^{-1} \tilde U^\top)(X' \tilde V \tilde\Sigma^{-1} \tilde W) \\
&= X' \tilde V \tilde\Sigma^{-1} \tilde A \tilde W \\
&= X' \tilde V \tilde\Sigma^{-1} \tilde W \Lambda \\
&= \Phi \Lambda
\end{aligned}
$$

so that

$$\hat A \Phi = \Phi \Lambda. \tag{6.27}$$
Let $\phi_i$ be the $i$th column of $\Phi$ and $\lambda_i$ be the corresponding $i$th eigenvalue of $\tilde A$ from decomposition (6.24).
Equating the 𝑚 × 1 vectors that appear on the two sides of equation (6.27) gives
$$\hat A \phi_i = \lambda_i \phi_i .$$
This equation confirms that 𝜙𝑖 is an eigenvector of 𝐴 ̂ that corresponds to eigenvalue 𝜆𝑖 of both 𝐴 ̃ and 𝐴.̂
This concludes the proof.
Also see [Brunton and Kutz, 2022] (p. 238).
𝐴 ̂ = ΦΛΦ+ . (6.28)
𝑏̌𝑡+1 = Λ𝑏̌𝑡
where
𝑏̌𝑡 = Φ+ 𝑋𝑡 (6.29)
Since the 𝑚 × 𝑝 matrix Φ has 𝑝 linearly independent columns, the generalized inverse of Φ is
Φ+ = (Φ⊤ Φ)−1 Φ⊤
and so

$$\check b = (\Phi^\top \Phi)^{-1} \Phi^\top X \tag{6.30}$$
The 𝑝 × 𝑛 matrix 𝑏̌ is recognizable as a matrix of least squares regression coefficients of the 𝑚 × 𝑛 matrix 𝑋 on the 𝑚 × 𝑝
matrix Φ and consequently
𝑋̌ = Φ𝑏̌ (6.31)
𝑋 = 𝑋̌ + 𝜖
or
𝑋 = Φ 𝑏̌ + 𝜖 (6.32)
where $\epsilon$ is an $m \times n$ matrix of least squares errors satisfying the least squares orthogonality conditions $\epsilon^\top \Phi = 0$ or $\Phi^\top \epsilon = 0$.
6.5.2 An Approximation
We now describe a way to approximate the 𝑝 × 1 vector 𝑏̌𝑡 instead of using formula (6.29).
In particular, the following argument adapted from [Brunton and Kutz, 2022] (page 240) provides a computationally
efficient way to approximate 𝑏̌𝑡 .
For convenience, we’ll apply the method at time 𝑡 = 1.
For 𝑡 = 1, from equation (6.32) we have
𝑋̌ 1 = Φ𝑏̌1 (6.34)
𝑈 𝑏̃1 = 𝑋 ′ 𝑉 ̃ Σ̃ −1 𝑊̃ 𝑏̌1 + 𝜖1
$$\tilde b_1 = U^\top X' \tilde V \tilde\Sigma^{-1} \tilde W \check b_1 + U^\top \epsilon_1$$
Replacing the error term 𝑈 ⊤ 𝜖1 by zero, and replacing 𝑈 from a full SVD of 𝑋 with 𝑈̃ from a reduced SVD, we obtain
an approximation 𝑏̂1 to 𝑏̃1 :
$$\hat b_1 = \tilde U^\top X' \tilde V \tilde\Sigma^{-1} \tilde W \check b_1$$

Recognizing from (6.23) that $\tilde U^\top X' \tilde V \tilde\Sigma^{-1} = \tilde A$ and that $\tilde A \tilde W = \tilde W \Lambda$, we obtain

$$\hat b_1 = \tilde W \Lambda \check b_1$$

Consequently,

$$\check b_1 = (\tilde W \Lambda)^{-1} \hat b_1 = (\tilde W \Lambda)^{-1} \tilde U^\top X_1 \tag{6.35}$$

which is a computationally efficient approximation to the following instance of equation (6.29) for the initial vector $\check b_1$:
𝑏̌1 = Φ+ 𝑋1 (6.36)
(To highlight that (6.35) is an approximation, users of DMD sometimes call components of basis vector 𝑏̌𝑡 = Φ+ 𝑋𝑡 the
exact DMD modes and components of 𝑏̂𝑡 = (𝑊̃ Λ)−1 𝑈̃ ⊤ 𝑋𝑡 the approximate modes.)
Conditional on $X_t$, we can compute a decoded $\check X_{t+j}$, $j = 1, 2, \ldots$ from the exact modes via

$$\check X_{t+j} = \Phi \Lambda^j \Phi^+ X_t$$
In applications, we’ll actually use only a few modes, often three or less.
Some of the preceding formulas assume that we have retained all 𝑝 modes associated with singular values of 𝑋.
We can adjust our formulas to describe a situation in which we instead retain only the 𝑟 < 𝑝 largest singular values.
In that case, we simply replace Σ̃ with the appropriate 𝑟 × 𝑟 matrix of singular values, 𝑈̃ with the 𝑚 × 𝑟 matrix whose
columns correspond to the 𝑟 largest singular values, and 𝑉 ̃ with the 𝑛 × 𝑟 matrix whose columns correspond to the 𝑟
largest singular values.
Counterparts of all of the salient formulas above then apply.
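Putting the pieces of Representation 3 together, a minimal DMD sketch (the function name and the random stand-in data are ours, not the lecture's) might look like:

```python
import numpy as np

def dmd(X, X_prime, r):
    """Rank-r dynamic mode decomposition of the data pair (X, X')."""
    U, σ, VT = np.linalg.svd(X, full_matrices=False)
    # retain only the r largest singular values
    Ur, σr, VTr = U[:, :r], σ[:r], VT[:r, :]
    # Ã = Ũᵀ X' Ṽ Σ̃⁻¹   (equation (6.23))
    A_tilde = Ur.T @ X_prime @ VTr.T @ np.diag(1/σr)
    Λ, W = np.linalg.eig(A_tilde)
    # Φ = X' Ṽ Σ̃⁻¹ W̃   (equation (6.26)); its columns are eigenvectors of Â
    Φ = X_prime @ VTr.T @ np.diag(1/σr) @ W
    return Λ, Φ

# tall-skinny case: many attributes m, few time periods n
np.random.seed(3)
m, n, r = 200, 10, 3
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]
Λ, Φ = dmd(X, X_prime, r)
```

By construction, the columns of $\Phi$ are eigenvectors of the rank-$r$ estimator $X' \tilde V \tilde\Sigma^{-1} \tilde U^\top$ with eigenvalues $\Lambda$, which can be checked numerically.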
CHAPTER SEVEN
Contents
See also:
GPU: A version of this lecture which makes use of jax to run the code on a GPU is available here
7.1 Overview
Many economic problems involve finding fixed points or zeros (sometimes called “roots”) of functions.
For example, in a simple supply and demand model, an equilibrium price is one that makes excess demand zero.
In other words, an equilibrium is a zero of the excess demand function.
There are various computational techniques for solving for fixed points and zeros.
In this lecture we study an important gradient-based technique called Newton’s method.
Newton’s method does not always work but, in situations where it does, convergence is often fast when compared to other
methods.
The lecture will apply Newton’s method in one-dimensional and multi-dimensional settings to solve fixed-point and zero-
finding problems.
• When finding the fixed point of a function 𝑓, Newton’s method updates an existing guess of the fixed point by
solving for the fixed point of a linear approximation to the function 𝑓.
• When finding the zero of a function 𝑓, Newton’s method updates an existing guess by solving for the zero of a
linear approximation to the function 𝑓.
To build intuition, we first consider an easy, one-dimensional fixed point problem where we know the solution and solve
it using both successive approximation and Newton’s method.
Then we apply Newton's method to multi-dimensional settings to solve for market equilibria with multiple goods.
At the end of the lecture we leverage the power of automatic differentiation in autograd to solve a very high-dimensional equilibrium problem.
In this section we solve for the fixed point of the law of motion for capital in the setting of the Solow growth model.
We will inspect the fixed point visually, solve it by successive approximation, and then apply Newton’s method to achieve
faster convergence.
In the Solow growth model, assuming Cobb-Douglas production technology and zero population growth, the law of motion for capital is

$$k_{t+1} = g(k_t) \quad \text{where} \quad g(k) := sAk^{\alpha} + (1 - \delta)k$$
Here
• 𝑘𝑡 is capital stock per worker,
• 𝐴, 𝛼 > 0 are production parameters, 𝛼 < 1
• 𝑠 > 0 is a savings rate, and
• 𝛿 ∈ (0, 1) is a rate of depreciation
In this example, we wish to calculate the unique strictly positive fixed point of 𝑔, the law of motion for capital.
In other words, we seek a 𝑘∗ > 0 such that 𝑔(𝑘∗ ) = 𝑘∗ .
• such a 𝑘∗ is called a steady state, since 𝑘𝑡 = 𝑘∗ implies 𝑘𝑡+1 = 𝑘∗ .
Using pencil and paper to solve 𝑔(𝑘) = 𝑘, you will be able to confirm that
$$k^* = \left(\frac{sA}{\delta}\right)^{1/(1-\alpha)}$$
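A quick numerical check of this closed form, using parameter values that reproduce the value of `k_star` printed below (our assumption for the defaults of `create_solow_params`):

```python
import numpy as np

A, s, α, δ = 2.0, 0.3, 0.3, 0.4

def g(k):
    """Solow law of motion: k_{t+1} = s A k_t**α + (1 - δ) k_t."""
    return s * A * k**α + (1 - δ) * k

# closed-form steady state k* = (s A / δ)**(1 / (1 - α))
k_star = (s * A / δ)**(1 / (1 - α))
```

Plugging `k_star` back into `g` confirms that it is indeed a fixed point.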
7.2.2 Implementation
Let's store our parameters in a namedtuple to help us keep our code clean and concise.
The next two functions implement the law of motion (7.2.1) and store the true fixed point 𝑘∗ .
def exact_fixed_point(params):
A, s, α, δ = params
return ((s * A) / δ)**(1/(1 - α))
ax.set_yticks((0, 1, 2, 3))
ax.set_yticklabels((0.0, 1.0, 2.0, 3.0), fontsize=fontsize)
ax.set_ylim(0, 3)
ax.set_xlabel("$k_t$", fontsize=fontsize)
ax.set_ylabel("$k_{t+1}$", fontsize=fontsize)
params = create_solow_params()
fig, ax = plt.subplots(figsize=(8, 8))
plot_45(params, ax)
plt.show()
Successive Approximation
params = create_solow_params()
k_0 = 0.25
k_series = compute_iterates(k_0, g, params)
k_star = exact_fixed_point(params)
fig, ax = plt.subplots()
ax.plot(k_series, 'o')
ax.plot([k_star] * len(k_series), 'k--')
ax.set_ylim(0, 3)
plt.show()
1.7846741842265788
k_star
1.7846741842265788
Newton’s Method
In general, when applying Newton’s fixed point method to some function 𝑔, we start with a guess 𝑥0 of the fixed point
and then update by solving for the fixed point of a tangent line at 𝑥0 .
To begin with, we recall that the first-order approximation of 𝑔 at 𝑥0 (i.e., the first order Taylor approximation of 𝑔 at 𝑥0 )
is the function
$$\hat g(x) \approx g(x_0) + g'(x_0)(x - x_0) \tag{7.2}$$
def plot_trajectories(params,
k0_a=0.8, # first initial condition
k0_b=3.1, # second initial condition
n=20, # length of time series
fs=14): # fontsize
for ax in axes:
ax.plot(k_star * np.ones(n), "k--")
ax.legend(fontsize=fs, frameon=False)
ax.set_ylim(0.6, 3.2)
ax.set_yticks((k_star,))
ax.set_yticklabels(("$k^*$",), fontsize=fs)
ax.set_xticks(np.linspace(0, 19, 20))
plt.show()
params = create_solow_params()
plot_trajectories(params)
We can see that Newton’s method converges faster than successive approximation.
Let’s suppose we want to find an 𝑥 such that 𝑓(𝑥) = 0 for some smooth function 𝑓 mapping real numbers to real numbers.
Suppose we have a guess 𝑥0 and we want to update it to a new point 𝑥1 .
As a first step, we take the first-order approximation of 𝑓 around 𝑥0 :
$$\hat f(x) \approx f(x_0) + f'(x_0)(x - x_0)$$

$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}, \quad x_0 \text{ given}$$
Generalizing the formula above, for one-dimensional zero-finding problems, Newton’s method iterates on
$$x_{t+1} = x_t - \frac{f(x_t)}{f'(x_t)}, \quad x_0 \text{ given} \tag{7.5}$$
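The newton routine used later in this lecture is not shown in this excerpt; a minimal sketch implementing iteration (7.5) might look like this (the signature is our assumption):

```python
def newton(f, Df, x_0, tol=1e-7, max_iter=100):
    "Iterate x_{t+1} = x_t - f(x_t) / f'(x_t) until the step is below tol."
    x = x_0
    for _ in range(max_iter):
        step = f(x) / Df(x)
        x = x - step
        if abs(step) < tol:
            return x
    raise Exception('Max iteration reached without convergence')

# zero-finding version of the Solow problem: f(k) = g(k) - k
A, s, α, δ = 2.0, 0.3, 0.3, 0.4
k_star_newton = newton(f=lambda k: s * A * k**α - δ * k,
                       Df=lambda k: α * s * A * k**(α - 1) - δ,
                       x_0=0.8)
```

Starting from `x_0=0.8`, the iterates converge quadratically to the analytical steady state.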
# NB: the enclosing function header did not survive extraction; a plausible
# signature is `def iterate_map(q, x, tol, max_iter):`
error = tol + 1
n = 0
while error > tol:
    n += 1
    if n > max_iter:
        raise Exception('Max iteration reached without convergence')
    y = q(x)
    error = np.abs(x - y)
    x = y
    print(f'iteration {n}, error = {error:.5f}')
return x
Numerous libraries implement Newton’s method in one dimension, including SciPy, so the code is just for illustrative
purposes.
(That said, when we want to apply Newton's method using techniques such as automatic differentiation or GPU acceleration, it will be helpful to know how to implement Newton's method ourselves.)
Now consider again the Solow fixed-point calculation, where we solve for 𝑘 satisfying 𝑔(𝑘) = 𝑘.
We can convert this to a zero-finding problem by setting $f(x) := g(x) - x$.
Any zero of 𝑓 is clearly a fixed point of 𝑔.
Let’s apply this idea to the Solow problem
params = create_solow_params()
k_star_approx_newton = newton(f=lambda x: g(x, params) - x,
Df=lambda x: Dg(x, params) - 1,
x_0=0.8)
k_star_approx_newton
1.7846741842265788
The result confirms the descent we saw in the graphs above: a very accurate result is reached with only 5 iterations.
In this section, we introduce a two-good problem, present a visualization of the problem, and solve for the equilibrium of
the two-good market using both a zero finder in SciPy and Newton’s method.
We then expand the idea to a larger market with 5,000 goods and compare the performance of the two methods again.
We will see a significant performance gain when using Newton's method.
A Graphical Exploration
Since our problem is only two-dimensional, we can use graphical analysis to visualize and help understand the problem.
Our first step is to define the excess demand function
$$e(p) = \begin{pmatrix} e_0(p) \\ e_1(p) \end{pmatrix}$$
The function below calculates the excess demand for given parameters
$$A = \begin{pmatrix} 0.5 & 0.4 \\ 0.8 & 0.2 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \text{and} \quad c = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
A = np.array([
[0.5, 0.4],
[0.8, 0.2]
])
b = np.ones(2)
c = np.ones(2)
Next we plot the two functions 𝑒0 and 𝑒1 on a grid of (𝑝0 , 𝑝1 ) values, using contour surfaces and lines.
We will use the following function to build the contour plots
if surface:
cs1 = ax.contourf(p_grid, p_grid, z.T, alpha=0.5)
plt.colorbar(cs1, ax=ax, format="%.6f")
fig, ax = plt.subplots()
plot_excess_demand(ax, good=0)
plt.show()
fig, ax = plt.subplots()
plot_excess_demand(ax, good=1)
plt.show()
We see the black contour line of zero, which tells us when 𝑒𝑖 (𝑝) = 0.
For a price vector 𝑝 such that 𝑒𝑖 (𝑝) = 0 we know that good 𝑖 is in equilibrium (demand equals supply).
If these two contour lines cross at some price vector 𝑝∗ , then 𝑝∗ is an equilibrium price vector.
init_p = np.ones(2)
%%time
solution = root(lambda p: e(p, A, b, c), init_p, method='hybr')
p = solution.x
p
array([1.57080182, 1.46928838])
This looks close to our guess from observing the figure. We can plug it back into 𝑒 to test that 𝑒(𝑝) ≈ 0:
np.max(np.abs(e(p, A, b, c)))
2.0383694732117874e-13
In many cases, for zero-finding algorithms applied to smooth functions, supplying the Jacobian of the function leads to
better convergence properties.
Here we manually calculate the elements of the Jacobian
$$J(p) = \begin{pmatrix} \frac{\partial e_0}{\partial p_0}(p) & \frac{\partial e_0}{\partial p_1}(p) \\[4pt] \frac{\partial e_1}{\partial p_0}(p) & \frac{\partial e_1}{\partial p_1}(p) \end{pmatrix}$$
%%time
solution = root(lambda p: e(p, A, b, c),
init_p,
jac=lambda p: jacobian_e(p, A, b, c),
method='hybr')
Now the solution is even more accurate (although, in this low-dimensional problem, the difference is quite small):
p = solution.x
np.max(np.abs(e(p, A, b, c)))
1.3322676295501878e-15
Now let’s use Newton’s method to compute the equilibrium price using the multivariate version of Newton’s method
The iteration starts from some initial guess of the price vector 𝑝0 .
Here, instead of coding the Jacobian by hand, we use the jacobian() function in the autograd library to auto-differentiate and calculate the Jacobian.
With only slight modification, we can generalize our previous attempt to multi-dimensional problems
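A multivariate sketch under our own assumptions (a finite-difference Jacobian in place of autograd, and a toy system rather than the market model) is:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-7):
    """Forward-difference approximation to the Jacobian of f at x."""
    fx = f(x)
    J = np.empty((len(fx), len(x)))
    for j in range(len(x)):
        x_h = x.copy()
        x_h[j] += h
        J[:, j] = (f(x_h) - fx) / h
    return J

def newton(f, x_0, tol=1e-10, max_iter=50):
    """Multivariate Newton: solve J(x_t) d = -f(x_t), update x_{t+1} = x_t + d."""
    x = np.array(x_0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(jacobian_fd(f, x), -f(x))
        x = x + d
        if np.max(np.abs(d)) < tol:
            return x
    raise Exception('Max iteration reached without convergence')

# toy nonlinear system with a root at (1, 2)
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
root = newton(f, np.array([1.5, 1.5]))
```

Each iteration linearizes $f$ around the current guess and jumps to the zero of that linear approximation, exactly as in the one-dimensional case.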
%%time
p = newton(lambda p: e(p, A, b, c), init_p)
CPU times: user 4.86 ms, sys: 394 µs, total: 5.25 ms
Wall time: 3.62 ms
np.max(np.abs(e(p, A, b, c)))
1.4632739464559563e-13
dim = 3000
np.random.seed(123)
# Set up b and c
b = np.ones(dim)
c = np.ones(dim)
init_p = np.ones(dim)
%%time
p = newton(lambda p: e(p, A, b, c), init_p)
CPU times: user 2min 9s, sys: 1.76 s, total: 2min 11s
Wall time: 32.9 s
np.max(np.abs(e(p, A, b, c)))
6.661338147750939e-16
With the same tolerance, we compare the runtime and accuracy of Newton’s method to SciPy’s root function
%%time
solution = root(lambda p: e(p, A, b, c),
init_p,
jac=lambda p: jacobian(e)(p, A, b, c),
method='hybr',
tol=1e-5)
CPU times: user 1min 20s, sys: 618 ms, total: 1min 21s
Wall time: 42.7 s
p = solution.x
np.max(np.abs(e(p, A, b, c)))
8.295585953721485e-07
7.5 Exercises
Exercise 7.5.1
Consider a three-dimensional extension of the Solow fixed point problem with
$$A = \begin{pmatrix} 2 & 3 & 3 \\ 2 & 4 & 2 \\ 1 & 5 & 1 \end{pmatrix}, \qquad s = 0.2, \quad \alpha = 0.5, \quad \delta = 0.8$$
As before the law of motion is
$$k^1_0 = (1, 1, 1), \qquad k^2_0 = (3, 5, 5), \qquad k^3_0 = (50, 50, 50)$$
Hint:
• The computation of the fixed point is equivalent to computing 𝑘∗ such that 𝑓(𝑘∗ ) − 𝑘∗ = 0.
• If you are unsure about your solution, you can start with the solved example:
$$A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$
with 𝑠 = 0.3, 𝛼 = 0.3, and 𝛿 = 0.4 and starting value:
𝑘0 = (1, 1, 1)
s = 0.2
α = 0.5
δ = 0.8
initLs = [np.ones(3),
np.array([3.0, 5.0, 5.0]),
np.repeat(50.0, 3)]
Then define the multivariate version of the law of motion (7.2.1)
Let’s run through each starting value and see the output
attempt = 1
for init in initLs:
print(f'Attempt {attempt}: Starting value is {init} \n')
%time k = newton(lambda k: multivariate_solow(k) - k, \
init)
print('-'*64)
attempt += 1
We find that the results are invariant to the starting values, because this problem has a unique, well-defined fixed point.

But the number of iterations it takes to converge depends on the starting values.

Let's substitute the output back into the formula to check our last result
multivariate_solow(k) - k
s = 0.3
α = 0.3
δ = 0.4
init = np.repeat(1.0, 3)
The result is very close to the ground truth but still slightly different.
Exercise 7.5.2
In this exercise, let’s try different initial values and check how Newton’s method responds to different starting points.
Let’s define a three-good problem with the following default values:
$$A = \begin{pmatrix} 0.2 & 0.1 & 0.7 \\ 0.3 & 0.2 & 0.5 \\ 0.1 & 0.8 & 0.1 \end{pmatrix}, \qquad b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \quad \text{and} \quad c = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
For this exercise, use the following extreme price vectors as initial values:
$$p^1_0 = (5, 5, 5), \qquad p^2_0 = (1, 1, 1), \qquad p^3_0 = (4.5, 0.1, 4)$$
Set the tolerance to 0.0 for more accurate output.
A = np.array([
[0.2, 0.1, 0.7],
[0.3, 0.2, 0.5],
[0.1, 0.8, 0.1]
])
Let’s run through each initial guess and check the output
attempt = 1
for init in initLs:
print(f'Attempt {attempt}: Starting value is {init} \n')
%time p = newton(lambda p: e(p, A, b, c), \
init, \
tol=1e-15, \
max_iter=15)
print('-'*64)
attempt += 1
/opt/conda/envs/quantecon/lib/python3.11/site-packages/autograd/tracer.py:48: RuntimeWarning: invalid value encountered in sqrt
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
File <timed exec>:1
----------------------------------------------------------------
Attempt 2: Starting value is [1. 1. 1.]
We find that Newton's method may fail for some starting values.
Sometimes it may take a few initial guesses to achieve convergence.
Substitute the result back into the formula to check our result
e(p, A, b, c)
CHAPTER EIGHT: Elementary Statistics
This lecture uses matrix algebra to illustrate some basic ideas about probability theory.
After providing somewhat informal definitions of the underlying objects, we’ll use matrices and vectors to describe
probability distributions.
The concepts that we'll be studying include
• a joint probability distribution
• marginal distributions associated with a given joint distribution
• conditional probability distributions
• statistical independence of two random variables
• joint distributions associated with a prescribed set of marginal distributions
– couplings
– copulas
• the probability distribution of a sum of two independent random variables
– convolution of marginal distributions
• parameters that define a probability distribution
• sufficient statistics as data summaries
We'll use a matrix to represent a bivariate probability distribution and a vector to represent a univariate probability distribution.
In addition to what’s in Anaconda, this lecture will need the following libraries:
import numpy as np
import matplotlib.pyplot as plt
import prettytable as pt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib_inline.backend_inline import set_matplotlib_formats
set_matplotlib_formats('retina')
We’ll briefly define what we mean by a probability space, a probability measure, and a random variable.
For most of this lecture, we sweep these objects into the background, but they are there underlying the other objects that
we’ll mainly focus on.
Let $\Omega$ be a set of possible underlying outcomes and let $\omega \in \Omega$ be a particular underlying outcome.
Let 𝒢 ⊂ Ω be a subset of Ω.
Let ℱ be a collection of such subsets 𝒢 ⊂ Ω.
The pair Ω, ℱ forms our probability space on which we want to put a probability measure.
A probability measure 𝜇 maps a set of possible underlying outcomes 𝒢 ∈ ℱ into a scalar number between 0 and 1
• this is the “probability” that 𝑋 belongs to 𝐴, denoted by Prob{𝑋 ∈ 𝐴}.
A random variable 𝑋(𝜔) is a function of the underlying outcome 𝜔 ∈ Ω.
The random variable 𝑋(𝜔) has a probability distribution that is induced by the underlying probability measure 𝜇 and
the function 𝑋(𝜔):
Before diving in, we’ll say a few words about what probability theory means and how it connects to statistics.
We also touch on these topics in the quantecon lectures https://python.quantecon.org/prob_meaning.html and https://
python.quantecon.org/navy_captain.html.
For much of this lecture we’ll be discussing fixed “population” probabilities.
These are purely mathematical objects.
To appreciate how statisticians connect probabilities to data, the key is to understand the following concepts:
• A single draw from a probability distribution
• Repeated independently and identically distributed (i.i.d.) draws of “samples” or “realizations” from the same
probability distribution
• A statistic defined as a function of a sequence of samples
• An empirical distribution or histogram (a binned empirical distribution) that records observed relative fre-
quencies
• The idea that a population probability distribution is what we anticipate relative frequencies will be in a long
sequence of i.i.d. draws. Here the following mathematical machinery makes precise what is meant by anticipated
relative frequencies
– Law of Large Numbers (LLN)
– Central Limit Theorem (CLT)
Scalar example
Let $X$ be a scalar random variable that takes on the $I$ possible values $0, 1, 2, \ldots, I-1$ with probabilities

$$\text{Prob}(X = i) = f_i,$$

where

$$f_i \geq 0, \quad \sum_i f_i = 1.$$
We sometimes write

$$X \sim \{f_i\}_{i=0}^{I-1}$$

as a short-hand way of saying that the random variable $X$ is described by the probability distribution $\{f_i\}_{i=0}^{I-1}$.
$$
\begin{aligned}
N_i &= \text{number of times } X = i, \\
N &= \sum_{i=0}^{I-1} N_i \quad \text{total number of draws}, \\
\tilde f_i &= \frac{N_i}{N} \sim \text{frequency of draws for which } X = i
\end{aligned}
$$
Key ideas that justify connecting probability theory with statistics are laws of large numbers and central limit theorems
LLN:
• A Law of Large Numbers (LLN) states that 𝑓𝑖̃ → 𝑓𝑖 as 𝑁 → ∞
CLT:
• A Central Limit Theorem (CLT) describes a rate at which 𝑓𝑖̃ → 𝑓𝑖
Remarks
• For “frequentist” statisticians, anticipated relative frequency is all that a probability distribution means.
• But for a Bayesian it means something more or different.
A probability distribution Prob(𝑋 ∈ 𝐴) can be described by its cumulative distribution function (CDF)
Sometimes, but not always, a random variable can also be described by a density function $f(x)$ that is related to its CDF by
$$\text{Prob}\{X \in B\} = \int_{t \in B} f(t) \, dt$$

$$F(x) = \int_{-\infty}^{x} f(t) \, dt$$
Here 𝐵 is a set of possible 𝑋’s whose probability we want to compute.
When a probability density exists, a probability distribution can be characterized either by its CDF or by its density.
For a discrete-valued random variable
• the number of possible values of 𝑋 is finite or countably infinite
• we replace a density with a probability mass function, a non-negative sequence that sums to one
• we replace integration with summation in the formula like (8.1) that relates a CDF to a probability mass function
In this lecture, we mostly discuss discrete random variables.
Doing this enables us to confine our tool set basically to linear algebra.
Later we’ll briefly discuss how to approximate a continuous random variable with a discrete random variable.
We’ll devote most of this lecture to discrete-valued random variables, but we’ll say a few things about continuous-valued
random variables.
$$f = \begin{bmatrix} f_0 \\ f_1 \\ \vdots \\ f_{I-1} \end{bmatrix} \tag{8.2}$$

for which $f_i \in [0, 1]$ for each $i$ and $\sum_{i=0}^{I-1} f_i = 1$.
This vector defines a probability mass function.
The distribution (8.2) has parameters $\{f_i\}_{i=0,1,\ldots,I-2}$ since $f_{I-1} = 1 - \sum_{i=0}^{I-2} f_i$.
These parameters pin down the shape of the distribution.
(Sometimes 𝐼 = ∞.)
Such a “non-parametric” distribution has as many “parameters” as there are possible values of the random variable.
We often work with special distributions that are characterized by a small number of parameters.
𝑓𝑖 = 𝑔(𝑖; 𝜃)
Let $X$ be a continuous random variable that takes values $X \in \tilde X \equiv [X_L, X_U]$ whose distributions have parameters $\theta$.

$$\text{Prob}\{X \in \tilde X\} = 1$$
𝑋 ∈ {0, … , 𝐼 − 1}
𝑌 ∈ {0, … , 𝐽 − 1}
Then their joint distribution is described by a matrix
𝑓𝑖𝑗 = Prob{𝑋 = 𝑖, 𝑌 = 𝑗} ≥ 0
where

$$\sum_i \sum_j f_{ij} = 1$$
$$\text{Prob}\{Y = j\} = \sum_{i=0}^{I-1} f_{ij} = \nu_j, \quad j = 0, \ldots, J-1$$
$$F = \begin{bmatrix} .25 & .1 \\ .15 & .5 \end{bmatrix} \tag{8.3}$$
Digression: If two random variables 𝑋, 𝑌 are continuous and have joint density 𝑓(𝑥, 𝑦), then marginal distributions can
be computed by
$$\text{Prob}\{A \mid B\} = \frac{\text{Prob}\{A \cap B\}}{\text{Prob}\{B\}}$$

$$\text{Prob}\{X = i \mid Y = j\} = \frac{f_{ij}}{\sum_i f_{ij}} = \frac{\text{Prob}\{X = i, Y = j\}}{\text{Prob}\{Y = j\}}$$
where 𝑖 = 0, … , 𝐼 − 1, 𝑗 = 0, … , 𝐽 − 1.
Note that

$$\sum_i \text{Prob}\{X = i \mid Y = j\} = \frac{\sum_i f_{ij}}{\sum_i f_{ij}} = 1$$
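These marginal and conditional operations are easy to verify numerically; a sketch using the joint distribution (8.3):

```python
import numpy as np

# joint distribution from (8.3)
F = np.array([[0.25, 0.10],
              [0.15, 0.50]])

# marginals: sum over the other index
marginal_X = F.sum(axis=1)   # Prob{X = i}
marginal_Y = F.sum(axis=0)   # Prob{Y = j}

# conditional of X given Y = j: divide each column by its sum
cond_X_given_Y = F / F.sum(axis=0, keepdims=True)
```

Each column of `cond_X_given_Y` sums to one, matching the identity just displayed.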
Prob{𝑋 = 𝑖, 𝑌 = 𝑗} = 𝑓𝑖 𝑔𝑗
where
$$\text{Prob}\{X = i\} = f_i \geq 0, \quad \sum_i f_i = 1$$

$$\text{Prob}\{Y = j\} = g_j \geq 0, \quad \sum_j g_j = 1$$
$$\mu_X \equiv \mathbb{E}[X] = \sum_k k \, \text{Prob}\{X = k\}$$

$$\sigma_X^2 \equiv \mathbb{D}[X] = \sum_k (k - \mathbb{E}[X])^2 \, \text{Prob}\{X = k\}$$
A continuous random variable having density $f_X(x)$ has mean and variance

$$\mu_X \equiv \mathbb{E}[X] = \int_{-\infty}^{\infty} x f_X(x) \, dx$$

$$\sigma_X^2 \equiv \mathbb{D}[X] = \mathbb{E}\left[(X - \mu_X)^2\right] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x) \, dx$$
Suppose we have at our disposal a pseudo-random number generator that draws a uniform random variable, i.e., one with probability distribution

$$\text{Prob}\{\tilde X = i\} = \frac{1}{I}, \quad i = 0, \ldots, I-1$$
How can we transform $\tilde X$ to get a random variable $X$ for which $\text{Prob}\{X = i\} = f_i$, $i = 0, \ldots, I-1$, where $f_i$ is an arbitrary discrete probability distribution on $i = 0, 1, \ldots, I-1$?
The key tool is the inverse of a cumulative distribution function (CDF).
Observe that the CDF of a distribution is monotone and non-decreasing, taking values between 0 and 1.
We can draw a sample of a random variable 𝑋 with a known CDF as follows:
• draw a random variable 𝑢 from a uniform distribution on [0, 1]
• pass the sample value of 𝑢 into the “inverse” target CDF for 𝑋
• 𝑋 has the target CDF
Thus, knowing the “inverse” CDF of a distribution is enough to simulate from this distribution.
Note: The “inverse” CDF needs to exist for this method to work.
$$X = F^{-1}(U),$$

so that

$$\text{Prob}\{X \le x\} = \text{Prob}\{F^{-1}(U) \le x\} = \text{Prob}\{U \le F(x)\} = F(x),$$

where the last equality occurs because $U$ is distributed uniformly on $[0, 1]$ while $F(x)$ is a constant given $x$ that also lies on $[0, 1]$.
Let’s use numpy to compute some examples.
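For a discrete distribution, the "inverse" CDF step can be implemented with np.searchsorted on the cumulative sum; a sketch with an arbitrary pmf of our own choosing:

```python
import numpy as np

np.random.seed(0)
f = np.array([0.2, 0.5, 0.3])   # an arbitrary pmf on {0, 1, 2}
cdf = np.cumsum(f)

# inverse-CDF draw: X = min{i : F(i) ≥ u}
n = 1_000_000
u = np.random.rand(n)
x = np.minimum(np.searchsorted(cdf, u), len(f) - 1)  # clip guards a float edge at cdf[-1]

# empirical frequencies should approximate the pmf
freq = np.bincount(x, minlength=len(f)) / n
```

With a million draws, the empirical frequencies match the pmf to within a small sampling error.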
Example: A continuous geometric (exponential) distribution
𝑓(𝑥) = 𝜆𝑒−𝜆𝑥
Its CDF is

$$F(x) = \int_0^x \lambda e^{-\lambda t} \, dt = 1 - e^{-\lambda x}$$
n, λ = 1_000_000, 0.3

# draw uniform samples, then transform via the inverse CDF
u = np.random.rand(n)
x = -np.log(1-u)/λ
Geometric distribution
Let $X$ be distributed geometrically, that is

$$\text{Prob}(X = i) = (1 - \lambda) \lambda^i, \quad i = 0, 1, \ldots$$

Its CDF is

$$F_i = \text{Prob}(X \le i) = \sum_{j=0}^{i} (1-\lambda)\lambda^j = (1-\lambda)\left[\frac{1 - \lambda^{i+1}}{1 - \lambda}\right] = 1 - \lambda^{i+1} = F(X)$$
Again, let 𝑈̃ follow a uniform distribution and we want to find 𝑋 such that 𝐹 (𝑋) = 𝑈̃ .
Let’s deduce the distribution of 𝑋 from
$$
\begin{aligned}
\tilde U &= F(X) = 1 - \lambda^{x+1} \\
1 - \tilde U &= \lambda^{x+1} \\
\log(1 - \tilde U) &= (x + 1) \log \lambda \\
\frac{\log(1 - \tilde U)}{\log \lambda} &= x + 1 \\
\frac{\log(1 - \tilde U)}{\log \lambda} - 1 &= x
\end{aligned}
$$

$$x = \left\lceil \frac{\log(1 - \tilde U)}{\log \lambda} - 1 \right\rceil$$
n, λ = 1_000_000, 0.8

# draw uniform samples, then transform
u = np.random.rand(n)
x = np.ceil(np.log(1-u)/np.log(λ) - 1)
np.random.geometric(1-λ, n).max()
64
np.log(0.4)/np.log(0.3)
0.7610560044063083
Let’s write some Python code to compute means and variances of some univariate random variables.
We’ll use our code to
• compute population means and variances from the probability distribution
• generate a sample of 𝑁 independently and identically distributed draws and compute sample means and variances
• compare population and sample means and variances
$$\text{Prob}(X = k) = (1 - p)^{k-1} p, \quad k = 1, 2, \ldots$$

$$\implies \quad \mathbb{E}(X) = \frac{1}{p}, \qquad \mathbb{D}(X) = \frac{1-p}{p^2}$$
We draw observations from the distribution and compare the sample mean and variance with the theoretical results.
# specify parameters
p, n = 0.3, 1_000_000
print("The sample mean is: ", μ_hat, "\nThe sample variance is: ", σ2_hat)
The Newcomb–Benford law fits many data sets, e.g., reports of incomes to tax authorities, in which the leading digit is
more likely to be small than large.
See https://en.wikipedia.org/wiki/Benford’s_law
A Benford probability distribution is
$$\text{Prob}\{X = d\} = \log_{10}(d+1) - \log_{10}(d) = \log_{10}\left(1 + \frac{1}{d}\right)$$
where 𝑑 ∈ {1, 2, ⋯ , 9} can be thought of as a first digit in a sequence of digits.
This is a well defined discrete distribution since we can verify that probabilities are nonnegative and sum to 1.
$$\log_{10}\left(1 + \frac{1}{d}\right) \geq 0, \qquad \sum_{d=1}^{9} \log_{10}\left(1 + \frac{1}{d}\right) = 1$$
We verify the above and compute the mean and variance using numpy.
# pmf and support (the construction of Benford_pmf did not survive
# extraction; a plausible version is shown)
Benford_pmf = np.array([np.log10(1 + 1/d) for d in range(1, 10)])
k = np.arange(1, 10)
# mean
mean = np.sum(Benford_pmf * k)
# variance
var = np.sum((k - mean)**2 * Benford_pmf)
# verify sum to 1
print(np.sum(Benford_pmf))
print(mean)
print(var)
0.9999999999999999
3.440236967123206
6.056512631375667
# plot distribution
plt.plot(range(1,10), Benford_pmf, 'o')
plt.title('Benford\'s distribution')
plt.show()
print("The sample mean is: ", μ_hat, "\nThe sample variance is: ", σ2_hat)
print("\nThe population mean is: ", r*(1-p)/p)
print("The population variance is: ", r*(1-p)/p**2)
We write
𝑋 ∼ 𝑁 (𝜇, 𝜎2 )
# specify parameters
μ, σ = 0, 0.1
# compare
print(abs(μ - μ_hat) < 1e-3)
print(abs(σ - σ_hat) < 1e-3)
True
True
𝑋 ∼ 𝑈 [𝑎, 𝑏]
$$f(x) = \begin{cases} \frac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases}$$
The population mean and variance are
$$\mathbb{E}(X) = \frac{a + b}{2}, \qquad \mathbb{V}(X) = \frac{(b - a)^2}{12}$$
# specify parameters
a, b = 10, 20
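The rest of this code cell did not survive; a minimal sketch that completes the comparison, reusing the variable names from the earlier cells:

```python
import numpy as np

a, b = 10, 20
n = 1_000_000

# sample moments
x = np.random.uniform(a, b, n)
μ_hat, σ2_hat = np.mean(x), np.var(x)

# population moments
μ, σ2 = (a + b) / 2, (b - a)**2 / 12
print(μ_hat, σ2_hat)
print(μ, σ2)
```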
$$P(X = 0) = 0.95, \qquad P(300 \le X \le 400) = \int_{300}^{400} f(x)\, dx = 0.05, \qquad f(x) = 0.0005 \text{ for } 300 \le x \le 400$$
Let’s start by generating a random sample and computing sample moments.
x = np.random.rand(1_000_000)
# x[x > 0.95] = 100*x[x > 0.95]+300
x[x > 0.95] = 100*np.random.rand(len(x[x > 0.95]))+300
x[x <= 0.95] = 0
μ_hat = np.mean(x)
σ2_hat = np.var(x)
print("The sample mean is: ", μ_hat, "\nThe sample variance is: ", σ2_hat)
$$
\begin{aligned}
\sigma^2 &= 0.95 \times (0 - 17.5)^2 + \int_{300}^{400} (x - 17.5)^2 f(x)\, dx \\
&= 0.95 \times 17.5^2 + 0.0005 \int_{300}^{400} (x - 17.5)^2\, dx \\
&= 0.95 \times 17.5^2 + 0.0005 \times \frac{1}{3}(x - 17.5)^3 \Big|_{300}^{400}
\end{aligned}
$$
mean: 17.5
variance: 5860.416666666666
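The integrals above can be evaluated directly in Python; a small check of the reported numbers, assuming, as above, the density 0.0005 on [300, 400]:

```python
# population mean: 0.95 * 0 + ∫ x f(x) dx over [300, 400]
μ = 0.0005 * (400**2 - 300**2) / 2

# population variance, following the decomposition above
σ2 = 0.95 * μ**2 + 0.0005 * ((400 - μ)**3 - (300 - μ)**3) / 3

print(μ, σ2)
```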
Let’s use matrices to represent a joint distribution, conditional distribution, marginal distribution, and the mean and
variance of a bivariate random variable.
The table below illustrates a probability distribution for a bivariate random variable.
$$F = [f_{ij}] = \begin{bmatrix} 0.3 & 0.2 \\ 0.1 & 0.4 \end{bmatrix}$$
$$\operatorname{Prob}(X = i) = \sum_j f_{ij} = u_i, \qquad \operatorname{Prob}(Y = j) = \sum_i f_{ij} = v_j$$
Below we draw some samples to confirm that the “sampling” distribution agrees well with the “population” distribution.
Sample results:
# specify parameters
xs = np.array([0, 1])
ys = np.array([10, 20])
f = np.array([[0.3, 0.2], [0.1, 0.4]])
f_cum = np.cumsum(f)
[[ 1. 1. 0. ... 1. 0. 0.]
[20. 20. 20. ... 20. 20. 10.]]
Here, we use exactly the inverse CDF technique to generate samples from the joint distribution $F$.
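A sketch of that inverse-CDF step: draw uniforms, locate each draw in the cumulative pmf of the flattened 2 × 2 table, and map the flat index back to an (x, y) pair (the variable names follow the cell above):

```python
import numpy as np

xs = np.array([0, 1])
ys = np.array([10, 20])
f = np.array([[0.3, 0.2], [0.1, 0.4]])
f_cum = np.cumsum(f)

n = 1_000_000
u = np.random.rand(n)
idx = np.searchsorted(f_cum, u)        # flat index into the 2x2 pmf
x = np.vstack([xs[idx // 2], ys[idx % 2]])
print(x)
```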
# marginal distribution
xp = np.sum(x[0, :] == xs[0])/1_000_000
yp = np.sum(x[1, :] == ys[0])/1_000_000
# print output
print("marginal distribution for x")
xmtb = pt.PrettyTable()
xmtb.field_names = ['x_value', 'x_prob']
xmtb.add_row([xs[0], xp])
xmtb.add_row([xs[1], 1-xp])
print(xmtb)
# conditional distributions
xc1 = x[0, x[1, :] == ys[0]]
xc2 = x[0, x[1, :] == ys[1]]
yc1 = x[1, x[0, :] == xs[0]]
yc2 = x[1, x[0, :] == xs[1]]
xc1p = np.sum(xc1 == xs[0]) / len(xc1)
xc2p = np.sum(xc2 == xs[0]) / len(xc2)
# print output
print("conditional distribution for x")
xctb = pt.PrettyTable()
xctb.field_names = ['y_value', 'prob(x=0)', 'prob(x=1)']
xctb.add_row([ys[0], xc1p, 1-xc1p])
xctb.add_row([ys[1], xc2p, 1-xc2p])
print(xctb)
Let’s calculate population marginal and conditional probabilities using matrix algebra.
$$
\begin{array}{c|cc|c}
 & y_1 & y_2 & x \\
\hline
x_1 & 0.3 & 0.2 & 0.5 \\
x_2 & 0.1 & 0.4 & 0.5 \\
\hline
y & 0.4 & 0.6 & 1 \\
\end{array}
$$
(1) Marginal distribution:
$$
\begin{array}{c|cc}
\text{variable} & \text{value}_1 & \text{value}_2 \\
\hline
x & 0.5 & 0.5 \\
y & 0.4 & 0.6 \\
\end{array}
$$
(2) Conditional distribution:
$$
\begin{array}{c|cc}
 & x_1 & x_2 \\
\hline
y = y_1 & \frac{0.3}{0.4} = 0.75 & \frac{0.1}{0.4} = 0.25 \\
y = y_2 & \frac{0.2}{0.6} \approx 0.33 & \frac{0.4}{0.6} \approx 0.67 \\
\end{array}
$$
$$
\begin{array}{c|cc}
 & y_1 & y_2 \\
\hline
x = x_1 & \frac{0.3}{0.5} = 0.6 & \frac{0.2}{0.5} = 0.4 \\
x = x_2 & \frac{0.1}{0.5} = 0.2 & \frac{0.4}{0.5} = 0.8 \\
\end{array}
$$
These population objects closely resemble sample counterparts computed above.
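The same population objects can be computed with a few lines of matrix algebra (a sketch):

```python
import numpy as np

F = np.array([[0.3, 0.2], [0.1, 0.4]])

u = F.sum(axis=1)                 # marginal for x: [0.5, 0.5]
v = F.sum(axis=0)                 # marginal for y: [0.4, 0.6]
cond_x_given_y = F / v            # column j holds Prob(X = i | Y = y_j)
cond_y_given_x = F / u[:, None]   # row i holds Prob(Y = j | X = x_i)

print(cond_x_given_y)
print(cond_y_given_x)
```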
Let’s wrap some of the functions we have used in a Python class for a general discrete bivariate joint distribution.
class discrete_bijoint:

    def __init__(self, f, xs, ys):
        '''initialization: joint pmf f over values xs (rows) and ys (columns)'''
        self.f, self.xs, self.ys = f, np.array(xs), np.array(ys)
        self.f_cum = np.cumsum(f)

    def joint_tb(self):
        '''print the joint distribution table'''
        xs = self.xs
        ys = self.ys
        f = self.f
        jtb = pt.PrettyTable()
        jtb.field_names = ['x_value/y_value', *ys, 'marginal sum for x']
        for i in range(len(xs)):
            jtb.add_row([xs[i], *f[i, :], np.sum(f[i, :])])
        jtb.add_row(['marginal_sum for y', *np.sum(f, 0), np.sum(f)])
        print("\nThe joint probability distribution for x and y\n", jtb)
        self.jtb = jtb

    def draw(self, n):
        '''draw n samples via the inverse CDF technique'''
        u = np.random.rand(n)
        idx = np.searchsorted(self.f_cum, u)
        self.x = np.vstack([self.xs[idx // len(self.ys)],
                            self.ys[idx % len(self.ys)]])
        self.n = n

    def marg_dist(self):
        '''marginal distribution'''
        x = self.x
        xs = self.xs
        ys = self.ys
        n = self.n
        xmp = [np.sum(x[0, :] == xs[i])/n for i in range(len(xs))]
        ymp = [np.sum(x[1, :] == ys[i])/n for i in range(len(ys))]

        # print output
        xmtb = pt.PrettyTable()
        ymtb = pt.PrettyTable()
        xmtb.field_names = ['x_value', 'x_prob']
        ymtb.field_names = ['y_value', 'y_prob']
        for i in range(max(len(xs), len(ys))):
            if i < len(xs):
                xmtb.add_row([xs[i], xmp[i]])
            if i < len(ys):
                ymtb.add_row([ys[i], ymp[i]])
        xmtb.add_row(['sum', np.sum(xmp)])
        ymtb.add_row(['sum', np.sum(ymp)])
        print("\nmarginal distribution for x\n", xmtb)
        print("\nmarginal distribution for y\n", ymtb)

        self.xmp = xmp
        self.ymp = ymp

    def cond_dist(self):
        '''conditional distribution'''
        x = self.x
        xs = self.xs
        ys = self.ys
        n = self.n
        xcp = np.empty([len(ys), len(xs)])
        ycp = np.empty([len(xs), len(ys)])
        for i in range(max(len(ys), len(xs))):
            if i < len(ys):
                xi = x[0, x[1, :] == ys[i]]
                idx = xi.reshape(len(xi), 1) == xs.reshape(1, len(xs))
                xcp[i, :] = np.sum(idx, 0)/len(xi)
            if i < len(xs):
                yi = x[1, x[0, :] == xs[i]]
                idy = yi.reshape(len(yi), 1) == ys.reshape(1, len(ys))
                ycp[i, :] = np.sum(idy, 0)/len(yi)

        # print output
        xctb = pt.PrettyTable()
        yctb = pt.PrettyTable()
        xctb.field_names = ['x_value', *xs, 'sum']
        yctb.field_names = ['y_value', *ys, 'sum']
        for i in range(max(len(xs), len(ys))):
            if i < len(ys):
                xctb.add_row([ys[i], *xcp[i], np.sum(xcp[i])])
            if i < len(xs):
                yctb.add_row([xs[i], *ycp[i], np.sum(ycp[i])])
        print("\nconditional distribution for x\n", xctb)
        print("\nconditional distribution for y\n", yctb)

        self.xcp = xcp
        self.ycp = ycp
# joint
d = discrete_bijoint(f, xs, ys)
d.joint_tb()
# sample marginal
d.draw(1_000_000)
d.marg_dist()
# sample conditional
d.cond_dist()
Example 2
d_new.draw(1_000_000)
d_new.marg_dist()
d_new.cond_dist()
$$f(x, y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left[-\frac{1}{2(1-\rho^2)}\left(\frac{(x-\mu_1)^2}{\sigma_1^2} - \frac{2\rho(x-\mu_1)(y-\mu_2)}{\sigma_1\sigma_2} + \frac{(y-\mu_2)^2}{\sigma_2^2}\right)\right]$$
We start with a bivariate normal distribution pinned down by
$$\mu = \begin{bmatrix} 0 \\ 5 \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} 5 & .2 \\ .2 & 1 \end{bmatrix}$$
μ1 = 0
μ2 = 5
σ1 = np.sqrt(5)
σ2 = np.sqrt(1)
ρ = .2 / np.sqrt(5 * 1)
Joint Distribution
Let’s plot the population joint density.
# %matplotlib notebook
fig = plt.figure()
ax = plt.axes(projection='3d')
# %matplotlib notebook
fig = plt.figure()
ax = plt.axes(projection='3d')
Next we can simulate from a built-in numpy function and calculate a sample marginal distribution from the sample mean
and variance.
μ = np.array([0, 5])
Σ = np.array([[5, .2], [.2, 1]])
n = 1_000_000

data = np.random.multivariate_normal(μ, Σ, n)
x = data[:, 0]
y = data[:, 1]
Marginal distribution
-0.0009410653678662386 2.237337853596715
4.999005281178264 1.0003086878642835
Conditional distribution
The population conditional distribution is
$$[X \,|\, Y = y] \sim N\left[\mu_X + \rho\sigma_X \frac{y - \mu_Y}{\sigma_Y},\; \sigma_X^2(1 - \rho^2)\right]$$

$$[Y \,|\, X = x] \sim N\left[\mu_Y + \rho\sigma_Y \frac{x - \mu_X}{\sigma_X},\; \sigma_Y^2(1 - \rho^2)\right]$$
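A quick check of the first formula against the parameters used above (a sketch):

```python
import numpy as np

μ1, μ2 = 0.0, 5.0
σ1, σ2 = np.sqrt(5), 1.0
ρ = 0.2 / (σ1 * σ2)

def cond_x_given_y(y):
    """Mean and variance of [X | Y = y] for the bivariate normal above."""
    mean = μ1 + ρ * σ1 * (y - μ2) / σ2
    var = σ1**2 * (1 - ρ**2)
    return mean, var

print(cond_x_given_y(0))   # (-1.0, 4.96)
```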
Let’s approximate the joint density by discretizing and mapping the approximating joint density into a matrix.
We can compute the discretized marginal density by just using matrix algebra and noting that
𝑓𝑖𝑗
Prob{𝑋 = 𝑖|𝑌 = 𝑗} =
∑𝑖 𝑓𝑖𝑗
Fix 𝑦 = 0.
𝑓𝑖𝑗
𝔼 [𝑋|𝑌 = 𝑗] = ∑ 𝑖𝑃 𝑟𝑜𝑏{𝑋 = 𝑖|𝑌 = 𝑗} = ∑ 𝑖
𝑖 𝑖
∑𝑖 𝑓𝑖𝑗
𝑓𝑖𝑗
2
𝔻 [𝑋|𝑌 = 𝑗] = ∑ (𝑖 − 𝜇𝑋|𝑌 =𝑗 )
𝑖
∑ 𝑓
𝑖 𝑖𝑗
Let’s draw from a normal distribution with above mean and variance and check how accurate our approximation is.
# discretized conditional mean and standard deviation,
# where z holds the discretized conditional pmf over the grid x
μx = np.dot(x, z)
σx = np.sqrt(np.dot((x - μx)**2, z))
# sample
zz = np.random.normal(μx, σx, 1_000_000)
plt.hist(zz, bins=300, density=True, alpha=0.3, range=[-10, 10])
plt.show()
Fix 𝑥 = 1.
# discretized conditional mean and standard deviation,
# where z now holds the conditional pmf over the grid y given x = 1
μy = np.dot(y, z)
σy = np.sqrt(np.dot((y - μy)**2, z))

# sample
zz = np.random.normal(μy, σy, 1_000_000)
plt.hist(zz, bins=100, density=True, alpha=0.3)
plt.show()
We compare with the analytically computed parameters and note that they are close.
print(μx, σx)
print(μ1 + ρ * σ1 * (0 - μ2) / σ2, np.sqrt(σ1**2 * (1 - ρ**2)))
print(μy, σy)
print(μ2 + ρ * σ2 * (1 - μ1) / σ1, np.sqrt(σ2**2 * (1 - ρ**2)))
-0.9997518414498433 2.22658413316977
-1.0 2.227105745132009
5.039999456960768 0.9959851265795597
5.04 0.9959919678390986
Let $X$, $Y$ be two independent discrete random variables that take values in $\bar{X}$, $\bar{Y}$, respectively.

Define a new random variable $Z = X + Y$.

Evidently, $Z$ takes values in $\bar{Z}$ defined as follows:
Thus, we have:
$$h_k = \sum_{i=0}^{k} f_i g_{k-i} \equiv f * g$$
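This convolution is exactly what `np.convolve` computes; a small illustration with hypothetical pmfs `f` and `g`:

```python
import numpy as np

f = np.array([0.5, 0.5])        # pmf of X on {0, 1}
g = np.array([0.2, 0.3, 0.5])   # pmf of Y on {0, 1, 2}

h = np.convolve(f, g)           # pmf of Z = X + Y on {0, 1, 2, 3}
print(h, h.sum())               # [0.1 0.25 0.4 0.25] 1.0
```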
$$\operatorname{Prob}\{X = i, Y = j\} = \rho_{ij}$$

where $i = 0, \ldots, I - 1$; $j = 0, \ldots, J - 1$ and

$$\sum_i \sum_j \rho_{ij} = 1, \qquad \rho_{ij} \ge 0.$$
where
$$\begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}$$
8.19 Coupling
$$f_{ij} = \operatorname{Prob}\{X = i, Y = j\}, \qquad i = 0, \cdots, I - 1, \quad j = 0, \cdots, J - 1$$

stacked to an $I \times J$ matrix, e.g., for $I = J = 2$

$$\begin{bmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{bmatrix}$$
From the joint distribution, we have shown above that we obtain unique marginal distributions.
Now we’ll try to go in a reverse direction.
We’ll find that from two marginal distributions we can usually construct more than one joint distribution that verifies these marginals.
Each of these joint distributions is called a coupling of the two marginal distributions.
Let’s start with marginal distributions
$$\operatorname{Prob}\{X = i\} = \sum_j f_{ij} = \mu_i, \qquad i = 0, \cdots, I - 1$$

$$\operatorname{Prob}\{Y = j\} = \sum_i f_{ij} = \nu_j, \qquad j = 0, \cdots, J - 1$$
Given two marginal distributions, $\mu$ for $X$ and $\nu$ for $Y$, a joint distribution $f_{ij}$ with these marginals is said to be a coupling of $\mu$ and $\nu$.
Example:
Consider the following bivariate example.
$$
\begin{aligned}
\operatorname{Prob}\{X = 0\} &= 1 - q = \mu_0 \\
\operatorname{Prob}\{X = 1\} &= q = \mu_1 \\
\operatorname{Prob}\{Y = 0\} &= 1 - r = \nu_0 \\
\operatorname{Prob}\{Y = 1\} &= r = \nu_1
\end{aligned}
$$

where $0 \le q < r \le 1$.

One possible joint distribution, induced by independence of $X$ and $Y$, is

$$f_{ij} = \begin{bmatrix} (1 - q)(1 - r) & (1 - q)r \\ q(1 - r) & qr \end{bmatrix}$$
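The independence coupling is just `np.outer` of the two marginals; a sketch verifying its marginals for the values of q and r used below:

```python
import numpy as np

q, r = 0.4, 0.7
f = np.outer([1 - q, q], [1 - r, r])   # independence coupling

print(f)
print(f.sum(axis=1))   # marginal for X: [1-q, q]
print(f.sum(axis=0))   # marginal for Y: [1-r, r]
```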
# define marginal parameters
mu = np.array([0.6, 0.4])
nu = np.array([0.3, 0.7])
q, r = mu[1], nu[1]
# number of draws
draws = 1_000_000
# draw from each marginal and estimate Prob(X = 1), Prob(Y = 1)
q_hat = np.mean(np.random.rand(draws) < q)
r_hat = np.mean(np.random.rand(draws) < r)
# print output
print("distribution for x")
xmtb = pt.PrettyTable()
xmtb.field_names = ['x_value', 'x_prob']
xmtb.add_row([0, 1-q_hat])
xmtb.add_row([1, q_hat])
print(xmtb)
distribution for x
+---------+--------------------+
| x_value | x_prob |
+---------+--------------------+
| 0 | 0.6006279999999999 |
| 1 | 0.399372 |
+---------+--------------------+
distribution for y
+---------+----------+
| y_value | y_prob |
+---------+----------+
| 0 | 0.300752 |
| 1 | 0.699248 |
+---------+----------+
Let’s now take our two marginal distributions, one for 𝑋, the other for 𝑌 , and construct two distinct couplings.
For the first joint distribution:
Prob(𝑋 = 𝑖, 𝑌 = 𝑗) = 𝑓𝑖𝑗
where
$$[f_{ij}] = \begin{bmatrix} 0.18 & 0.42 \\ 0.12 & 0.28 \end{bmatrix}$$
Let’s use Python to construct this joint distribution and then verify that its marginal distributions are what we want.
# define parameters
f1 = np.array([[0.18, 0.42], [0.12, 0.28]])
f1_cum = np.cumsum(f1)
# number of draws
draws1 = 1_000_000
# sample via the inverse CDF on the flattened pmf
idx1 = np.searchsorted(f1_cum, np.random.rand(draws1))
c1_q_hat = np.mean(idx1 // 2 == 1)  # estimate of Prob(X = 1)
c1_r_hat = np.mean(idx1 % 2 == 1)   # estimate of Prob(Y = 1)
# print output
print("marginal distribution for x")
c1_x_mtb = pt.PrettyTable()
c1_x_mtb.field_names = ['c1_x_value', 'c1_x_prob']
c1_x_mtb.add_row([0, 1-c1_q_hat])
c1_x_mtb.add_row([1, c1_q_hat])
print(c1_x_mtb)
+------------+-----------+
| c1_x_value | c1_x_prob |
+------------+-----------+
| 0 | 0.600077 |
| 1 | 0.399923 |
+------------+-----------+
marginal distribution for y
+------------+---------------------+
| c1_y_value | c1_y_prob |
+------------+---------------------+
| 0 | 0.30001999999999995 |
| 1 | 0.69998 |
+------------+---------------------+
Now, let’s construct another joint distribution that is also a coupling of 𝑋 and 𝑌
$$[f_{ij}] = \begin{bmatrix} 0.3 & 0.3 \\ 0 & 0.4 \end{bmatrix}$$
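Before sampling, the claim can be verified directly: both candidate joint distributions have row sums [0.6, 0.4] and column sums [0.3, 0.7] (a sketch):

```python
import numpy as np

f1 = np.array([[0.18, 0.42], [0.12, 0.28]])
f2 = np.array([[0.3, 0.3], [0.0, 0.4]])

for f in (f1, f2):
    print(f.sum(axis=1), f.sum(axis=0))   # identical marginals
```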
# define parameters
f2 = np.array([[0.3, 0.3], [0, 0.4]])
f2_cum = np.cumsum(f2)
# number of draws
draws2 = 1_000_000
# sample via the inverse CDF on the flattened pmf
idx2 = np.searchsorted(f2_cum, np.random.rand(draws2))
c2_q_hat = np.mean(idx2 // 2 == 1)  # estimate of Prob(X = 1)
c2_r_hat = np.mean(idx2 % 2 == 1)   # estimate of Prob(Y = 1)
# print output
print("marginal distribution for x")
c2_x_mtb = pt.PrettyTable()
c2_x_mtb.field_names = ['c2_x_value', 'c2_x_prob']
c2_x_mtb.add_row([0, 1-c2_q_hat])
c2_x_mtb.add_row([1, c2_q_hat])
print(c2_x_mtb)
+------------+-----------+
| c2_x_value | c2_x_prob |
+------------+-----------+
| 0 | 0.600538 |
| 1 | 0.399462 |
+------------+-----------+
marginal distribution for y
+------------+---------------------+
| c2_y_value | c2_y_prob |
+------------+---------------------+
| 0 | 0.29983000000000004 |
| 1 | 0.70017 |
+------------+---------------------+
We have verified that both joint distributions, 𝑐1 and 𝑐2 , have identical marginal distributions of 𝑋 and 𝑌 , respectively.
So they are both couplings of 𝑋 and 𝑌 .
Remark:
• This is a key formula for a theory of optimally predicting a time series.
NINE
Contents
9.1 Overview
This lecture illustrates two of the most important theorems of probability and statistics: The law of large numbers (LLN)
and the central limit theorem (CLT).
These beautiful theorems lie behind many of the most fundamental results in econometrics and quantitative economic
modeling.
The lecture is based around simulations that show the LLN and CLT in action.
We also demonstrate how the LLN and CLT break down when the assumptions they are based on do not hold.
In addition, we examine several useful extensions of the classical theorems, such as
• The delta method, for smooth functions of random variables, and
• the multivariate case.
Some of these extensions are presented as exercises.
We’ll need the following imports:
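A plausible import cell, covering the functions used in this lecture (an assumption, not the original cell):

```python
import random
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import (t, beta, lognorm, expon, gamma, uniform,
                         norm, chi2, gaussian_kde)
from scipy.linalg import inv, sqrtm
```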
Intermediate Quantitative Economics with Python
9.2 Relationships
9.3 LLN
We begin with the law of large numbers, which tells us when sample averages will converge to their population means.
The classical law of large numbers concerns independent and identically distributed (IID) random variables.
Here is the strongest version of the classical LLN, known as Kolmogorov’s strong law.
Let 𝑋1 , … , 𝑋𝑛 be independent and identically distributed scalar random variables, with common distribution 𝐹 .
When it exists, let 𝜇 denote the common mean of this sample:
𝜇 ∶= 𝔼𝑋 = ∫ 𝑥𝐹 (𝑑𝑥)
In addition, let
$$\bar{X}_n := \frac{1}{n} \sum_{i=1}^{n} X_i$$

Kolmogorov’s strong law states that if $\mathbb{E}|X|$ is finite, then

$$\mathbb{P}\left\{\bar{X}_n \to \mu \text{ as } n \to \infty\right\} = 1 \tag{9.1}$$
9.3.2 Proof
The proof of Kolmogorov’s strong law is nontrivial – see, for example, theorem 8.3.5 of [Dudley, 2002].
On the other hand, we can prove a weaker version of the LLN very easily and still get most of the intuition.
The version we prove is as follows: If $X_1, \ldots, X_n$ is IID with $\mathbb{E}X_i^2 < \infty$, then, for any $\epsilon > 0$, we have

$$\mathbb{P}\left\{|\bar{X}_n - \mu| \ge \epsilon\right\} \to 0 \quad \text{as} \quad n \to \infty \tag{9.2}$$

(This version is weaker because we claim only convergence in probability rather than almost sure convergence, and assume a finite second moment.)
To see that this is so, fix 𝜖 > 0, and let 𝜎2 be the variance of each 𝑋𝑖 .
Recall the Chebyshev inequality, which tells us that
$$\mathbb{P}\left\{|\bar{X}_n - \mu| \ge \epsilon\right\} \le \frac{\mathbb{E}[(\bar{X}_n - \mu)^2]}{\epsilon^2} \tag{9.3}$$
Now observe that
$$
\begin{aligned}
\mathbb{E}[(\bar{X}_n - \mu)^2] &= \mathbb{E}\left\{\left[\frac{1}{n}\sum_{i=1}^{n}(X_i - \mu)\right]^2\right\} \\
&= \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} \mathbb{E}(X_i - \mu)(X_j - \mu) \\
&= \frac{1}{n^2} \sum_{i=1}^{n} \mathbb{E}(X_i - \mu)^2 \\
&= \frac{\sigma^2}{n}
\end{aligned}
$$
Here the crucial step is at the third equality, which follows from independence.
Independence means that if 𝑖 ≠ 𝑗, then the covariance term 𝔼(𝑋𝑖 − 𝜇)(𝑋𝑗 − 𝜇) drops out.
As a result, 𝑛2 − 𝑛 terms vanish, leading us to a final expression that goes to zero in 𝑛.
Combining our last result with (9.3), we come to the estimate
$$\mathbb{P}\left\{|\bar{X}_n - \mu| \ge \epsilon\right\} \le \frac{\sigma^2}{n\epsilon^2} \tag{9.4}$$
The claim in (9.2) is now clear.
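The bound in (9.4) can be eyeballed by simulation; a sketch with Uniform(0, 1) draws:

```python
import numpy as np

# check P{|X̄_n − μ| ≥ ε} ≤ σ²/(n ε²) for Uniform(0, 1)
n, eps, reps = 100, 0.1, 10_000
μ, σ2 = 0.5, 1 / 12

sample_means = np.random.rand(reps, n).mean(axis=1)
freq = np.mean(np.abs(sample_means - μ) >= eps)
bound = σ2 / (n * eps**2)
print(freq, bound)   # the empirical frequency sits below the bound
```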
Of course, if the sequence 𝑋1 , … , 𝑋𝑛 is correlated, then the cross-product terms 𝔼(𝑋𝑖 − 𝜇)(𝑋𝑗 − 𝜇) are not necessarily
zero.
While this doesn’t mean that the same line of argument is impossible, it does mean that if we want a similar result then
the covariances should be “almost zero” for “most” of these terms.
In a long sequence, this would be true if, for example, 𝔼(𝑋𝑖 − 𝜇)(𝑋𝑗 − 𝜇) approached zero when the difference between
𝑖 and 𝑗 became large.
In other words, the LLN can still work if the sequence 𝑋1 , … , 𝑋𝑛 has a kind of “asymptotic independence”, in the sense
that correlation falls to zero as variables become further apart in the sequence.
This idea is very important in time series analysis, and we’ll come across it again soon enough.
9.3.3 Illustration
Let’s now illustrate the classical IID law of large numbers using simulation.
In particular, we aim to generate some sequences of IID random variables and plot the evolution of 𝑋̄ 𝑛 as 𝑛 increases.
Below is a figure that does just this (as usual, you can click on it to expand it).
It shows IID observations from three different distributions and plots 𝑋̄ 𝑛 against 𝑛 in each case.
The dots represent the underlying observations 𝑋𝑖 for 𝑖 = 1, … , 100.
In each of the three cases, convergence of 𝑋̄ 𝑛 to 𝜇 occurs as predicted
n = 100

# `distributions` is a dict mapping names to scipy.stats frozen
# distributions, assembled in an earlier cell (three entries are used here)
fig, axes = plt.subplots(3, 1, figsize=(10, 10))
legend_args = {'ncol': 2, 'bbox_to_anchor': (1, 1), 'loc': 'upper right'}

for ax in axes:
    # Choose a randomly selected distribution
    name = random.choice(list(distributions.keys()))
    distribution = distributions.pop(name)

    # Generate n draws and compute the running sample mean
    data = distribution.rvs(n)
    sample_mean = np.cumsum(data) / np.arange(1, n + 1)

    # Plot
    ax.plot(list(range(n)), data, 'o', color='grey', alpha=0.5)
    axlabel = '$\\bar{X}_n$ for $X_i \\sim$' + name
    ax.plot(list(range(n)), sample_mean, 'g-', lw=3, alpha=0.6, label=axlabel)
    m = distribution.mean()
    ax.plot(list(range(n)), [m] * n, 'k--', lw=1.5, label='$\\mu$')
    ax.vlines(list(range(n)), m, data, lw=0.2)
    ax.legend(**legend_args, fontsize=12)

plt.show()
The three distributions are chosen at random from a selection stored in the dictionary distributions.
9.4 CLT
Next, we turn to the central limit theorem, which tells us about the distribution of the deviation between sample averages
and population means.
The central limit theorem is one of the most remarkable results in all of mathematics.
In the classical IID setting, it tells us the following:
If the sequence 𝑋1 , … , 𝑋𝑛 is IID, with common mean 𝜇 and common variance 𝜎2 ∈ (0, ∞), then
$$\sqrt{n}\left(\bar{X}_n - \mu\right) \overset{d}{\to} N(0, \sigma^2) \quad \text{as} \quad n \to \infty \tag{9.5}$$

Here $\overset{d}{\to} N(0, \sigma^2)$ indicates convergence in distribution to a centered (i.e., zero mean) normal with standard deviation $\sigma$.
9.4.2 Intuition
The striking implication of the CLT is that for any distribution with finite second moment, the simple operation of adding
independent copies always leads to a Gaussian curve.
A relatively simple proof of the central limit theorem can be obtained by working with characteristic functions (see, e.g.,
theorem 9.5.6 of [Dudley, 2002]).
The proof is elegant but almost anticlimactic, and it provides surprisingly little intuition.
In fact, all of the proofs of the CLT that we know are similar in this respect.
Why does adding independent copies produce a bell-shaped distribution?
Part of the answer can be obtained by investigating the addition of independent Bernoulli random variables.
In particular, let 𝑋𝑖 be binary, with ℙ{𝑋𝑖 = 0} = ℙ{𝑋𝑖 = 1} = 0.5, and let 𝑋1 , … , 𝑋𝑛 be independent.
Think of $X_i = 1$ as a “success”, so that $Y_n = \sum_{i=1}^{n} X_i$ is the number of successes in $n$ trials.
The next figure plots the probability mass function of 𝑌𝑛 for 𝑛 = 1, 2, 4, 8
plt.show()
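The plotting code was not reproduced here; a minimal sketch of such a figure, using `scipy.stats.binom` with fair-coin probability 0.5:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for ax, n in zip(axes.flatten(), (1, 2, 4, 8)):
    ks = np.arange(n + 1)
    ax.bar(ks, binom.pmf(ks, n, 0.5), alpha=0.6)
    ax.set_title(f'$n = {n}$')
fig.tight_layout()
plt.show()
```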
When 𝑛 = 1, the distribution is flat — one success or no successes have the same probability.
When 𝑛 = 2 we can either have 0, 1 or 2 successes.
Notice the peak in probability mass at the mid-point 𝑘 = 1.
The reason is that there are more ways to get 1 success (“fail then succeed” or “succeed then fail”) than to get zero or two
successes.
Moreover, the two trials are independent, so the outcomes “fail then succeed” and “succeed then fail” are just as likely as
the outcomes “fail then fail” and “succeed then succeed”.
(If there was positive correlation, say, then “succeed then fail” would be less likely than “succeed then succeed”)
Here, already we have the essence of the CLT: addition under independence leads probability mass to pile up in the middle
and thin out at the tails.
For 𝑛 = 4 and 𝑛 = 8 we again get a peak at the “middle” value (halfway between the minimum and the maximum
possible value).
The intuition is the same — there are simply more ways to get these middle outcomes.
If we continue, the bell-shaped curve becomes even more pronounced.
We are witnessing the binomial approximation of the normal distribution.
9.4.3 Simulation 1
Since the CLT seems almost magical, running simulations that verify its implications is one good way to build intuition.
To this end, we now perform the following simulation
1. Choose an arbitrary distribution 𝐹 for the underlying observations 𝑋𝑖 .
2. Generate independent draws of $Y_n := \sqrt{n}(\bar{X}_n - \mu)$.
3. Use these draws to compute some measure of their distribution — such as a histogram.
4. Compare the latter to 𝑁 (0, 𝜎2 ).
Here’s some code that does exactly this for the exponential distribution 𝐹 (𝑥) = 1 − 𝑒−𝜆𝑥 .
(Please experiment with other choices of 𝐹 , but remember that, to conform with the conditions of the CLT, the distribution
must have a finite second moment.)
# Set parameters
n = 250                  # Choice of n
k = 100000               # Number of draws of Y_n
distribution = expon(2)  # Exponential distribution, λ = 1/2
μ, s = distribution.mean(), distribution.std()

# Generate draws of Y_n = sqrt(n) (X̄_n - μ)
data = distribution.rvs((k, n))
Y = np.sqrt(n) * (data.mean(axis=1) - μ)
# Plot
fig, ax = plt.subplots(figsize=(10, 6))
xmin, xmax = -3 * s, 3 * s
ax.set_xlim(xmin, xmax)
ax.hist(Y, bins=60, alpha=0.5, density=True)
xgrid = np.linspace(xmin, xmax, 200)
ax.plot(xgrid, norm.pdf(xgrid, scale=s), 'k-', lw=2, label=r'$N(0, \sigma^2)$')
ax.legend()
plt.show()
Notice the absence of for loops — every operation is vectorized, meaning that the major calculations are all shifted to
highly optimized C code.
The fit to the normal density is already tight and can be further improved by increasing n.
You can also experiment with other specifications of 𝐹 .
9.4.4 Simulation 2
Our next simulation is somewhat like the first, except that we aim to track the distribution of $Y_n := \sqrt{n}(\bar{X}_n - \mu)$ as $n$ increases.
In the simulation, we’ll be working with random variables having 𝜇 = 0.
Thus, when 𝑛 = 1, we have 𝑌1 = 𝑋1 , so the first distribution is just the distribution of the underlying random variable.
For $n = 2$, the distribution of $Y_2$ is that of $(X_1 + X_2)/\sqrt{2}$, and so on.
What we expect is that, regardless of the distribution of the underlying random variable, the distribution of 𝑌𝑛 will smooth
out into a bell-shaped curve.
The next figure shows this process for 𝑋𝑖 ∼ 𝑓, where 𝑓 was specified as the convex combination of three different beta
densities.
(Taking a convex combination is an easy way to produce an irregular shape for 𝑓.)
In the figure, the closest density is that of 𝑌1 , while the furthest is that of 𝑌5
beta_dist = beta(2, 2)

def gen_x_draws(k):
    """
    Returns a flat array containing k independent draws from the
    distribution of X, the underlying random variable. This distribution
    is itself a convex combination of three beta distributions.
    """
    bdraws = beta_dist.rvs((3, k))
    # Transform rows, so that each represents a different distribution
    bdraws[0, :] -= 0.5
    bdraws[1, :] += 0.6
    bdraws[2, :] -= 1.1
    # Set X[i] = bdraws[j, i], where j is a random draw from {0, 1, 2}
    js = np.random.randint(0, 3, size=k)
    X = bdraws[js, np.arange(k)]
    # Rescale, so that the random variable is zero mean
    return (X - X.mean()) / X.std()

nmax = 5
reps = 100000
ns = list(range(1, nmax + 1))

# Form a matrix Z whose columns are reps independent draws of X
Z = np.empty((reps, nmax))
for i in range(nmax):
    Z[:, i] = gen_x_draws(reps)
# Take cumulative sums across columns, then scale column j by 1/sqrt(j)
S = Z.cumsum(axis=1)
Y = (1 / np.sqrt(ns)) * S
# Plot
ax = plt.figure(figsize = (10, 6)).add_subplot(projection='3d')
a, b = -3, 3
gs = 100
xs = np.linspace(a, b, gs)
# Build verts
greys = np.linspace(0.3, 0.7, nmax)
verts = []
for n in ns:
density = gaussian_kde(Y[:, n-1])
ys = density(xs)
verts.append(list(zip(xs, ys)))
The law of large numbers and central limit theorem work just as nicely in multidimensional settings.
To state the results, let’s recall some elementary facts about random vectors.
A random vector X is just a sequence of 𝑘 random variables (𝑋1 , … , 𝑋𝑘 ).
Each realization of X is an element of ℝ𝑘 .
A collection of random vectors $\mathbf{X}_1, \ldots, \mathbf{X}_n$ is called independent if, given any $n$ vectors $\mathbf{x}_1, \ldots, \mathbf{x}_n$ in $\mathbb{R}^k$, we have

$$\mathbb{P}\{\mathbf{X}_1 \le \mathbf{x}_1, \ldots, \mathbf{X}_n \le \mathbf{x}_n\} = \mathbb{P}\{\mathbf{X}_1 \le \mathbf{x}_1\} \times \cdots \times \mathbb{P}\{\mathbf{X}_n \le \mathbf{x}_n\}$$
$$\mathbb{E}[\mathbf{X}] := \begin{pmatrix} \mathbb{E}[X_1] \\ \mathbb{E}[X_2] \\ \vdots \\ \mathbb{E}[X_k] \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix} =: \mu$$
$$\bar{\mathbf{X}}_n := \frac{1}{n} \sum_{i=1}^{n} \mathbf{X}_i$$

The LLN in this setting states that

$$\mathbb{P}\left\{\bar{\mathbf{X}}_n \to \mu \text{ as } n \to \infty\right\} = 1 \tag{9.6}$$
9.5 Exercises
Exercise 9.5.1
One very useful consequence of the central limit theorem is as follows.
Assume the conditions of the CLT as stated above.
If 𝑔 ∶ ℝ → ℝ is differentiable at 𝜇 and 𝑔′ (𝜇) ≠ 0, then
$$\sqrt{n}\left\{g(\bar{X}_n) - g(\mu)\right\} \overset{d}{\to} N(0, g'(\mu)^2 \sigma^2) \quad \text{as} \quad n \to \infty \tag{9.8}$$
This theorem is used frequently in statistics to obtain the asymptotic distribution of estimators — many of which can be
expressed as functions of sample means.
(These kinds of results are often said to use the “delta method”.)
"""
Illustrates the delta method, a consequence of the central limit theorem.
"""
# Set parameters
n = 250
replications = 100000
distribution = uniform(loc=0, scale=(np.pi / 2))
μ, s = distribution.mean(), distribution.std()
g = np.sin
g_prime = np.cos

# Generate observations of sqrt{n} (g(X̄_n) - g(μ))
data = distribution.rvs((replications, n))
error_obs = np.sqrt(n) * (g(data.mean(axis=1)) - g(μ))
# Plot
asymptotic_sd = g_prime(μ) * s
fig, ax = plt.subplots(figsize=(10, 6))
xmin = -3 * g_prime(μ) * s
xmax = -xmin
ax.set_xlim(xmin, xmax)
ax.hist(error_obs, bins=60, alpha=0.5, density=True)
xgrid = np.linspace(xmin, xmax, 200)
lb = r"$N(0, g'(\mu)^2 \sigma^2)$"
ax.plot(xgrid, norm.pdf(xgrid, scale=asymptotic_sd), 'k-', lw=2, label=lb)
ax.legend()
plt.show()
What happens when you replace [0, 𝜋/2] with [0, 𝜋]?
In this case, the mean 𝜇 of this distribution is 𝜋/2, and since 𝑔′ = cos, we have 𝑔′ (𝜇) = 0.
Hence the conditions of the delta theorem are not satisfied.
Exercise 9.5.2
Here’s a result that’s often used in developing statistical tests, and is connected to the multivariate central limit theorem.
If you study econometric theory, you will see this result used again and again.
Assume the setting of the multivariate CLT discussed above, so that
1. X1 , … , X𝑛 is a sequence of IID random vectors, each taking values in ℝ𝑘 .
2. 𝜇 ∶= 𝔼[X𝑖 ], and Σ is the variance-covariance matrix of X𝑖 .
3. The convergence
$$\sqrt{n}\left(\bar{\mathbf{X}}_n - \mu\right) \overset{d}{\to} N(\mathbf{0}, \Sigma) \tag{9.9}$$
is valid.
In a statistical setting, one often wants the right-hand side to be standard normal so that confidence intervals are easily
computed.
This normalization can be achieved on the basis of three observations.
First, if X is a random vector in ℝ𝑘 and A is constant and 𝑘 × 𝑘, then
Var[AX] = A Var[X]A′
Second, by the continuous mapping theorem, if $\mathbf{Z}_n \overset{d}{\to} \mathbf{Z}$ in $\mathbb{R}^k$ and $\mathbf{A}$ is constant and $k \times k$, then

$$\mathbf{A}\mathbf{Z}_n \overset{d}{\to} \mathbf{A}\mathbf{Z}$$
Third, if S is a 𝑘 × 𝑘 symmetric positive definite matrix, then there exists a symmetric positive definite matrix Q, called
the inverse square root of S, such that
QSQ′ = I
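A sketch of this third observation with a small symmetric positive definite matrix:

```python
import numpy as np
from scipy.linalg import sqrtm
from numpy.linalg import inv

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Q = inv(sqrtm(S))        # inverse square root of S

print(Q @ S @ Q.T)       # ≈ identity
```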
Applying the continuous mapping theorem one more time tells us that

$$\|\mathbf{Z}_n\|^2 \overset{d}{\to} \|\mathbf{Z}\|^2$$
$$\mathbf{X}_i := \begin{pmatrix} W_i \\ U_i + W_i \end{pmatrix}$$
where
• each 𝑊𝑖 is an IID draw from the uniform distribution on [−1, 1].
• each 𝑈𝑖 is an IID draw from the uniform distribution on [−2, 2].
• 𝑈𝑖 and 𝑊𝑖 are independent of each other.
Hint:
1. scipy.linalg.sqrtm(A) computes the square root of A. You still need to invert it.
2. You should be able to work out Σ from the preceding information.
Since linear combinations of normal random variables are normal, the vector QY is also normal.
Its mean is clearly $\mathbf{0}$, and its variance-covariance matrix is $\operatorname{Var}[\mathbf{Q}\mathbf{Y}] = \mathbf{Q}\Sigma\mathbf{Q}' = \mathbf{I}$.

In conclusion, $\mathbf{Q}\mathbf{Y}_n \overset{d}{\to} \mathbf{Q}\mathbf{Y} \sim N(\mathbf{0}, \mathbf{I})$, which is what we aimed to show.
Now we turn to the simulation exercise.
Our solution is as follows
# Set parameters
n = 250
replications = 50000
dw = uniform(loc=-1, scale=2)  # Uniform(-1, 1)
du = uniform(loc=-2, scale=4)  # Uniform(-2, 2)
sw, su = dw.std(), du.std()
vw, vu = sw**2, su**2
Σ = ((vw, vw), (vw, vw + vu))
Σ = np.array(Σ)

# Compute Σ^{-1/2}
Q = inv(sqrtm(Σ))

# Generate draws of Y_n := sqrt{n}(X̄_n - μ); here μ = 0
W = dw.rvs((replications, n))
U = du.rvs((replications, n))
Yn = np.sqrt(n) * np.vstack([W.mean(axis=1), (U + W).mean(axis=1)])

# Standardize and take squared norms
chisq_obs = np.sum((Q @ Yn)**2, axis=0)
# Plot
fig, ax = plt.subplots(figsize=(10, 6))
xmax = 8
ax.set_xlim(0, xmax)
xgrid = np.linspace(0, xmax, 200)
lb = "Chi-squared with 2 degrees of freedom"
ax.plot(xgrid, chi2.pdf(xgrid, 2), 'k-', lw=2, label=lb)
ax.legend()
ax.hist(chisq_obs, bins=50, density=True)
plt.show()
TEN
10.1 Overview
https://youtu.be/8JIe_cz6qGA
After you watch that video, please watch the following video on the Bayesian approach to constructing coverage intervals
https://youtu.be/Pahyv9i_X2k
After you are familiar with the material in these videos, this lecture uses the Socratic method to help consolidate your understanding of the different questions that are answered by
• a frequentist confidence interval
• a Bayesian coverage interval
We do this by inviting you to write some Python code.
It would be especially useful if you tried doing this after each question that we pose for you, before proceeding to read
the rest of the lecture.
We provide our own answers as the lecture unfolds, but you’ll learn more if you try writing your own code before reading
and running ours.
Code for answering questions:
In addition to what’s in Anaconda, this lecture will deploy the following library:

!pip install prettytable
import numpy as np
import pandas as pd
import prettytable as pt
import matplotlib.pyplot as plt
from scipy.stats import binom
import scipy.stats as st
Empowered with these Python tools, we’ll now explore the two meanings described above.
$$\operatorname{Prob}(X = k \,|\, \theta) = \frac{n!}{k!(n - k)!} \theta^k (1 - \theta)^{n-k}$$
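As a sanity check, this pmf matches `scipy.stats.binom`; a sketch using the parameter values consistent with the table below:

```python
from math import comb
from scipy.stats import binom

n, θ, k = 20, 0.7, 14
manual = comb(n, k) * θ**k * (1 - θ)**(n - k)
print(manual, binom.pmf(k, n, θ))   # the two agree
```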
Exercise 10.2.1
1. Please write a Python class to compute 𝑓𝑘𝐼
2. Please use your code to compute 𝑓𝑘𝐼 , 𝑘 = 0, … , 𝑛 and compare them to Prob(𝑋 = 𝑘|𝜃) for various values of 𝜃, 𝑛
and 𝐼
3. With the Law of Large numbers in mind, use your code to say something
class frequentist:

    def __init__(self, θ, n, I):
        '''
        initialization
        -----------------
        parameters:
        θ : probability that one toss of a coin will be a head with Y = 1
        n : number of independent flips in each independent sequence of draws
        I : number of independent sequences of draws
        '''
        self.θ, self.n, self.I = θ, n, I

    def binomial(self, k):
        '''compute the theoretical probability for a specific input k'''
        θ, n = self.θ, self.n
        self.k = k
        self.P = binom.pmf(k, n, θ)

    def draw(self):
        '''draw I independent sequences of n flips'''
        θ, n, I = self.θ, self.n, self.I
        sample = np.random.rand(I, n)
        self.Y = (sample < θ).astype(int)

    def compute_fk(self, kk):
        '''compute f_kI: the fraction of sequences with exactly kk heads'''
        Y, I = self.Y, self.I
        K = np.sum(Y, 1)
        f_kI = np.sum(K == kk) / I
        self.f_kI = f_kI
        self.kk = kk

    def compare(self):
        '''compare theoretical probabilities with frequentist estimates'''
        n = self.n
        comp = pt.PrettyTable()
        comp.field_names = ['k', 'Theoretical', 'Frequentist']
        self.draw()
        for i in range(n):
            self.binomial(i+1)
            self.compute_fk(i+1)
            comp.add_row([i+1, self.P, self.f_kI])
        print(comp)
θ, n, I = 0.7, 20, 1_000_000

freq = frequentist(θ, n, I)
freq.compare()
+----+------------------------+-------------+
| k | Theoretical | Frequentist |
+----+------------------------+-------------+
| 1 | 1.6271660538000033e-09 | 0.0 |
| 2 | 3.606884752589999e-08 | 0.0 |
| 3 | 5.04963865362601e-07 | 2e-06 |
| 4 | 5.007558331512455e-06 | 3e-06 |
| 5 | 3.7389768875293014e-05 | 4.9e-05 |
| 6 | 0.00021810698510587546 | 0.000211 |
| 7 | 0.001017832597160754 | 0.001035 |
| 8 | 0.003859281930901185 | 0.003907 |
| 9 | 0.012006654896137007 | 0.011892 |
| 10 | 0.030817080900085007 | 0.03103 |
| 11 | 0.06536956554563476 | 0.065302 |
| 12 | 0.11439673970486108 | 0.11459 |
| 13 | 0.1642619852172365 | 0.164278 |
| 14 | 0.19163898275344252 | 0.191064 |
| 15 | 0.17886305056987967 | 0.179323 |
| 16 | 0.1304209743738704 | 0.130184 |
| 17 | 0.07160367220526209 | 0.071683 |
| 18 | 0.027845872524268643 | 0.027709 |
| 19 | 0.006839337111223895 | 0.006971 |
| 20 | 0.0007979226629761189 | 0.000767 |
+----+------------------------+-------------+
From the table above, can you see the law of large numbers at work?
From the above graphs, we can see that 𝐼, the number of independent sequences, plays an important role.
When 𝐼 becomes larger, the difference between theoretical probability and frequentist estimate becomes smaller.
Also, as long as 𝐼 is large enough, changing 𝜃 or 𝑛 does not substantially change the accuracy of the observed fraction as
an approximation of 𝜃.
The Law of Large Numbers is at work here.
For each independent sequence $i$, the indicator $\rho_{k,i}$ of the event $\{X_i = k\}$ is a binary random variable; the $\rho_{k,i}$, $i = 1, 2, \ldots, I$, form an i.i.d. sequence with mean

$$E[\rho_{k,i}] = \operatorname{Prob}(X = k \,|\, \theta) = \frac{n!}{k!(n - k)!} \theta^k (1 - \theta)^{n-k}$$

and variance $\operatorname{Prob}(X = k \,|\, \theta)\left(1 - \operatorname{Prob}(X = k \,|\, \theta)\right)$, so by the Law of Large Numbers the sample fraction $f_k^I$ converges to this mean as $I$ goes to infinity.
$$P(\theta) = \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}$$

where $B(\alpha, \beta)$ is the beta function, so that $P(\theta)$ is a beta distribution with parameters $\alpha, \beta$.
Exercise 10.3.1
a) Please write down the likelihood function for a sample of length 𝑛 from a binomial distribution with parameter 𝜃.
b) Please write down the posterior distribution for 𝜃 after observing one flip of the coin.
c) Now pretend that the true value of 𝜃 = .4 and that someone who doesn’t know this has a beta prior distribution with
parameters with 𝛽 = 𝛼 = .5. Please write a Python class to simulate this person’s personal posterior distribution for 𝜃
for a single sequence of 𝑛 draws.
d) Please plot the posterior distribution for 𝜃 as a function of 𝜃 as 𝑛 grows as 1, 2, ….
e) For various 𝑛’s, please describe and compute a Bayesian coverage interval for the interval [.45, .55].
f) Please tell what question a Bayesian coverage interval answers.
g) Please compute the Posterior probabililty that 𝜃 ∈ [.45, .55] for various values of sample size 𝑛.
h) Please use your Python class to study what happens to the posterior distribution as 𝑛 → +∞, again assuming that the
true value of 𝜃 = .4, though it is unknown to the person doing the updating via Bayes’ Law.
b) Please write the posterior distribution for 𝜃 after observing one flip of our coin.
The prior distribution is

$$\operatorname{Prob}(\theta) = \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}$$

We can derive the posterior distribution for $\theta$ via

$$
\begin{aligned}
\operatorname{Prob}(\theta \,|\, Y) &= \frac{\operatorname{Prob}(Y \,|\, \theta)\operatorname{Prob}(\theta)}{\operatorname{Prob}(Y)} \\
&= \frac{\operatorname{Prob}(Y \,|\, \theta)\operatorname{Prob}(\theta)}{\int_0^1 \operatorname{Prob}(Y \,|\, \theta)\operatorname{Prob}(\theta)\, d\theta} \\
&= \frac{\theta^Y (1 - \theta)^{1 - Y} \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}}{\int_0^1 \theta^Y (1 - \theta)^{1 - Y} \frac{\theta^{\alpha - 1}(1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}\, d\theta} \\
&= \frac{\theta^{Y + \alpha - 1}(1 - \theta)^{1 - Y + \beta - 1}}{\int_0^1 \theta^{Y + \alpha - 1}(1 - \theta)^{1 - Y + \beta - 1}\, d\theta}
\end{aligned}
$$

which means that

$$\operatorname{Prob}(\theta \,|\, Y) \sim \text{Beta}(\alpha + Y, \beta + (1 - Y))$$
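The conjugate update can be coded in two lines; a sketch with hypothetical flips:

```python
from scipy.stats import beta

α0, β0 = 0.5, 0.5        # prior parameters
flips = [1, 0, 1, 1]     # hypothetical observed coin flips

posterior = beta(α0 + sum(flips), β0 + len(flips) - sum(flips))
print(posterior.mean())  # (α0 + heads) / (α0 + β0 + n) = 3.5 / 5
```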
Now please pretend that the true value of 𝜃 = .4 and that someone who doesn’t know this has a beta prior with 𝛽 = 𝛼 = .5.
c) Now pretend that the true value of 𝜃 = .4 and that someone who doesn’t know this has a beta prior distribution with
parameters with 𝛽 = 𝛼 = .5. Please write a Python class to simulate this person’s personal posterior distribution for 𝜃
for a single sequence of 𝑛 draws.
class Bayesian:

    def __init__(self, θ=0.4, n=1_000_000, α=0.5, β=0.5):
        """
        Parameters
        ----------
        θ : float, ranging from [0, 1].
            probability that one toss of a coin generates a head with Y = 1
        n : int.
            number of independent flips in an independent sequence of draws
        α, β : float.
            parameters of the prior beta distribution
        """
        self.θ, self.n, self.α, self.β = θ, n, α, β
        self.prior = st.beta(α, β)

    def draw(self):
        """
        simulate a single sequence of draws of length n, given probability θ
        """
        array = np.random.rand(self.n)
        self.draws = (array < self.θ).astype(int)

    def form_single_posterior(self, step_num):
        """
        form a posterior distribution after observing the first step_num draws

        Parameters
        ----------
        step_num: int.
            number of steps observed to form a posterior distribution

        Returns
        ------
        the posterior distribution for sake of plotting in the subsequent steps
        """
        heads_num = self.draws[:step_num].sum()
        tails_num = step_num - heads_num
        return st.beta(self.α + heads_num, self.β + tails_num)
def form_posterior_series(self,num_obs_list):
"""
form a series of posterior distributions that form after observing different␣
↪number of draws.
Parameters
----------
num_obs_list: a list of int.
a list of the number of observations used to form a series of␣
↪posterior distributions.
"""
self.posterior_list = []
for num in num_obs_list:
self.posterior_list.append(self.form_single_posterior(num))
Bay_stat = Bayesian()
Bay_stat.draw()
num_list = [1, 2, 3, 4, 5, 10, 20, 30, 50, 70, 100, 300, 500, 1000]  # this line for finite n
Bay_stat.form_posterior_series(num_list)
ax.legend(fontsize=11)
plt.show()
e) For various 𝑛’s, please describe and compute .05 and .95 quantiles for posterior probabilities.
interval_df = pd.DataFrame()
           1         2         3        4         5        10        20
lower  0.228520  0.097308  0.062413  0.16528  0.260634  0.347322  0.280091
upper  0.998457  0.902692  0.764466  0.83472  0.872224  0.814884  0.629953
As 𝑛 increases, we can see that Bayesian coverage intervals narrow and move toward 0.4.
f) Please tell what question a Bayesian coverage interval answers.
The Bayesian coverage interval gives the range of 𝜃 that corresponds to the [𝑝1 , 𝑝2 ] quantiles of the cumulative distribution function (CDF) of the posterior distribution.
To construct the coverage interval we first compute a posterior distribution of the unknown parameter 𝜃.
If the CDF is 𝐹 (𝜃), then the Bayesian coverage interval [𝑎, 𝑏] for the interval [𝑝1 , 𝑝2 ] is described by
𝐹 (𝑎) = 𝑝1 , 𝐹 (𝑏) = 𝑝2
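For instance, here is a sketch of how such an interval can be computed by inverting the posterior CDF with SciPy's `ppf`; the counts `n = 100` and `heads = 40` are hypothetical illustrative values, not from the simulation above:

```python
import scipy.stats as st

α, β = 0.5, 0.5
n, heads = 100, 40          # hypothetical counts: 40 heads in 100 flips
posterior = st.beta(α + heads, β + n - heads)

# invert the posterior CDF at the requested quantiles: F(a) = .05, F(b) = .95
a, b = posterior.ppf(0.05), posterior.ppf(0.95)
print(f"90% Bayesian coverage interval: [{a:.3f}, {b:.3f}]")
```

By construction, exactly 90% of the posterior probability mass lies between 𝑎 and 𝑏.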
g) Please compute the posterior probability that 𝜃 ∈ [.45, .55] for various values of sample size 𝑛.
fontsize=13)
ax.set_xticks(np.arange(0, len(posterior_prob_list), 3))
ax.set_xticklabels(num_list[::3])
ax.set_xlabel('Number of Observations', fontsize=11)
plt.show()
Notice that in the graph above the posterior probability that 𝜃 ∈ [.45, .55] typically exhibits a hump shape as 𝑛 increases.
Two opposing forces are at work.
The first force is that the individual adjusts his belief as he observes new outcomes, so his posterior probability distribution becomes more and more realistic, which explains the rise of the posterior probability.
However, [.45, .55] actually excludes the true 𝜃 = .4 that generates the data.
As a result, the posterior probability drops as larger and larger samples refine his posterior probability distribution of 𝜃.
The descent seems precipitous only because of the scale of the graph that has the number of observations increasing
disproportionately.
When the number of observations becomes large enough, our Bayesian becomes so confident about 𝜃 that he considers
𝜃 ∈ [.45, .55] very unlikely.
That is why we see a nearly horizontal line when the number of observations exceeds 500.
h) Please use your Python class to study what happens to the posterior distribution as 𝑛 → +∞, again assuming that the
true value of 𝜃 = .4, though it is unknown to the person doing the updating via Bayes’ Law.
Using the Python class we made above, we can see the evolution of posterior distributions as 𝑛 approaches infinity.
ax.legend(fontsize=11)
plt.show()
As 𝑛 increases, we can see that the probability density functions concentrate on 0.4, the true value of 𝜃.
Here the posterior mean converges to 0.4 while the posterior standard deviation converges to 0 from above.
To show this, we compute the means and variances statistics of the posterior distributions.
ax[0].plot(mean_list)
ax[0].set_title('Mean Values of Posterior Distribution', fontsize=13)
ax[0].set_xticks(np.arange(0, len(mean_list), 3))
ax[0].set_xticklabels(num_list[::3])
ax[0].set_xlabel('Number of Observations', fontsize=11)
ax[1].plot(std_list)
ax[1].set_title('Standard Deviations of Posterior Distribution', fontsize=13)
ax[1].set_xticks(np.arange(0, len(std_list), 3))
ax[1].set_xticklabels(num_list[::3])
ax[1].set_xlabel('Number of Observations', fontsize=11)
plt.show()
After observing 𝑘 heads in 𝑁 flips, the posterior distribution is

$$
\begin{aligned}
\textrm{Prob}(\theta|k) &= \frac{\binom{N}{k} \theta^k (1-\theta)^{N-k} \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}}{\int_0^1 \binom{N}{k} \theta^k (1-\theta)^{N-k} \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{B(\alpha,\beta)}\, d\theta} \\
&= \frac{\theta^{\alpha+k-1} (1-\theta)^{\beta+N-k-1}}{\int_0^1 \theta^{\alpha+k-1} (1-\theta)^{\beta+N-k-1}\, d\theta} \\
&= \textrm{Beta}(\alpha + k, \beta + N - k)
\end{aligned}
$$
A beta distribution with parameters 𝛼 and 𝛽 has the following mean and variance.

The mean is $\frac{\alpha}{\alpha+\beta}$ and the variance is $\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$.
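We can confirm both formulas against `scipy.stats.beta.stats` for arbitrary illustrative parameter values (a quick check of our own):

```python
from scipy import stats

α, β = 3., 5.  # arbitrary illustrative parameter values
mean, var = stats.beta.stats(α, β, moments='mv')

print(mean, var)  # mean = 3/8, var = 15/576
```

Both numbers agree with the closed-form expressions above.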
ax.legend(fontsize=11)
plt.show()
After observing a large number of outcomes, the posterior distribution collapses around 0.4.
Thus, the Bayesian statistician comes to believe that 𝜃 is near .4.
As shown in the figure above, as the number of observations grows, the Bayesian coverage intervals (BCIs) become
narrower and narrower around 0.4.
However, if you take a closer look, you will find that the centers of the BCIs are not exactly 0.4, due to the persistent
influence of the prior distribution and the randomness of the simulation path.
We have made assumptions that link functional forms of our likelihood function and our prior in a way that has eased our
calculations considerably.
In particular, our assumptions that the likelihood function is binomial and that the prior distribution is a beta distribution
have the consequence that the posterior distribution implied by Bayes’ Law is also a beta distribution.
So posterior and prior are both beta distributions, albeit ones with different parameters.
When a likelihood function and prior fit together like hand and glove in this way, we can say that the prior and posterior
are conjugate distributions.
In this situation, we also sometimes say that we have conjugate prior for the likelihood function Prob(𝑋|𝜃).
Typically, the functional form of the likelihood function determines the functional form of a conjugate prior.
A natural question to ask is why should a person’s personal prior about a parameter 𝜃 be restricted to be described by a
conjugate prior?
Why not some other functional form that more sincerely describes the person’s beliefs?
To be argumentative, one could ask, why should the form of the likelihood function have anything to say about my personal
beliefs about 𝜃?
A dignified response to that question is, well, it shouldn’t, but if you want to compute a posterior easily you’ll just be
happier if your prior is conjugate to your likelihood.
Otherwise, your posterior won’t have a convenient analytical form and you’ll be in the situation of wanting to apply the
Markov chain Monte Carlo techniques deployed in this quantecon lecture.
We also apply these powerful methods to approximating Bayesian posteriors for non-conjugate priors in this quantecon
lecture and this quantecon lecture
ELEVEN

MULTIVARIATE HYPERGEOMETRIC DISTRIBUTION

Contents
11.1 Overview
This lecture describes how an administrator deployed a multivariate hypergeometric distribution in order to assess the fairness of a procedure for awarding research grants.
In the lecture we’ll learn about
• properties of the multivariate hypergeometric distribution
• first and second moments of a multivariate hypergeometric distribution
• using a Monte Carlo simulation of a multivariate normal distribution to evaluate the quality of a normal approximation
• the administrator’s problem and why the multivariate hypergeometric distribution is the right tool
Intermediate Quantitative Economics with Python
To evaluate whether the selection procedure is color blind the administrator wants to study whether the particular realization of 𝑋 drawn can plausibly be said to be a random draw from the probability distribution that is implied by the color blind hypothesis.
The appropriate probability distribution is the one described here.
Let’s now instantiate the administrator’s problem, while continuing to use the colored balls metaphor.
The administrator has an urn with 𝑁 = 238 balls.
157 balls are blue, 11 balls are green, 46 balls are yellow, and 24 balls are black.
So (𝐾1 , 𝐾2 , 𝐾3 , 𝐾4 ) = (157, 11, 46, 24) and 𝑐 = 4.
15 balls are drawn without replacement.
So 𝑛 = 15.
The administrator wants to know the probability distribution of outcomes
$$
X = \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_4 \end{bmatrix}.
$$
In particular, he wants to know whether a particular outcome - in the form of a 4 × 1 vector of integers recording the numbers of blue, green, yellow, and black balls, respectively - contains evidence against the hypothesis that the selection process is fair, which here means color blind, with draws truly random without replacement from the population of 𝑁 balls.
The right tool for the administrator’s job is the multivariate hypergeometric distribution.
$$
\Pr\{X_i = k_i \;\forall i\} = \frac{\prod_{i=1}^{c} \binom{K_i}{k_i}}{\binom{N}{n}}
$$
Mean:

$$
E(X_i) = n \frac{K_i}{N}
$$

Variances and covariances:

$$
\textrm{Var}(X_i) = n \frac{N-n}{N-1} \frac{K_i}{N} \left(1 - \frac{K_i}{N}\right)
$$

$$
\textrm{Cov}(X_i, X_j) = -n \frac{N-n}{N-1} \frac{K_i}{N} \frac{K_j}{N}
$$
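These formulas are also implemented in SciPy as `scipy.stats.multivariate_hypergeom` (available since SciPy 1.4); as a check of our own, we can compare the mean and covariance expressions above with SciPy's for the administrator's urn:

```python
import numpy as np
from scipy.stats import multivariate_hypergeom

K = np.array([157, 11, 46, 24])   # number of balls of each color
N, n, c = K.sum(), 15, len(K)     # population size, draws, number of colors

# mean and covariance from the formulas above
μ = n * K / N
Σ = np.empty((c, c))
for i in range(c):
    for j in range(c):
        if i == j:
            Σ[i, j] = n * (N - n) / (N - 1) * K[i] / N * (1 - K[i] / N)
        else:
            Σ[i, j] = -n * (N - n) / (N - 1) * K[i] / N * K[j] / N

print(np.allclose(μ, multivariate_hypergeom.mean(K, n)))  # True
print(np.allclose(Σ, multivariate_hypergeom.cov(K, n)))   # True
```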
To do our work for us, we’ll write an Urn class.
# note: relies on `comb` from scipy.special (binomial coefficient)
class Urn:

    def __init__(self, K_arr):
        """
        Initialization given the number of each type i object in the urn.

        Parameters
        ----------
        K_arr: ndarray(int)
            number of each type i object.
        """
        self.K_arr = np.array(K_arr)
        self.N = np.sum(K_arr)
        self.c = len(K_arr)

    def pmf(self, k_arr):
        """
        Probability mass function.

        Parameters
        ----------
        k_arr: ndarray(int)
            number of observed successes of each object.
        """
        K_arr, N = self.K_arr, self.N

        k_arr = np.atleast_2d(k_arr)
        n = np.sum(k_arr, 1)

        num = np.prod(comb(K_arr, k_arr), 1)
        denom = comb(N, n)

        pr = num / denom
        return pr

    def moments(self, n):
        """
        Compute the mean and variance-covariance matrix for drawing n balls.

        Parameters
        ----------
        n: int
            number of draws.
        """
        K_arr, N, c = self.K_arr, self.N, self.c

        # mean
        μ = n * K_arr / N

        # variance-covariance matrix
        Σ = np.full((c, c), n * (N - n) / (N - 1) / N ** 2)
        for i in range(c):
            Σ[i, i] *= K_arr[i] * (N - K_arr[i])
            for j in range(i+1, c):
                Σ[i, j] *= - K_arr[i] * K_arr[j]
                Σ[j, i] = Σ[i, j]

        return μ, Σ

    def simulate(self, n, size=1, seed=None):
        """
        Simulate drawing n balls without replacement.
        """
        K_arr = self.K_arr

        gen = np.random.Generator(np.random.PCG64(seed))
        sample = gen.multivariate_hypergeometric(K_arr, n, size=size)

        return sample
11.3 Usage
$$
P(2 \text{ black}, 2 \text{ white}, 2 \text{ red}) = \frac{\binom{5}{2}\binom{10}{2}\binom{15}{2}}{\binom{30}{6}} = 0.079575596816976
$$
Now use the Urn Class method pmf to compute the probability of the outcome 𝑋 = (2 2 2)
array([0.0795756])
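The same number can be computed directly from the combinatorial formula with `math.comb`; this small check (ours, not the lecture's) uses the urn 𝐾 = (5, 10, 15) implied by the example:

```python
from math import comb

# urn with 5 black, 10 white, and 15 red balls; 6 balls drawn without replacement
p = comb(5, 2) * comb(10, 2) * comb(15, 2) / comb(30, 6)
print(p)  # ≈ 0.0795756
```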
We can use the code to compute probabilities of a list of possible outcomes by constructing a 2-dimensional array k_arr
and pmf will return an array of probabilities for observing each case.
array([0.0795756, 0.1061008])
n = 6
μ, Σ = urn.moments(n)
k_arr = [10, 1, 4, 0]
urn.pmf(k_arr)
array([0.01547738])
We can compute probabilities of three possible outcomes by constructing a 2-dimensional array k_arr with three rows and utilizing the method pmf of the Urn class.
n = 6 # number of draws
μ, Σ = urn.moments(n)
# mean
μ
# variance-covariance matrix
Σ
We can simulate a large sample and verify that sample means and covariances closely approximate the population means
and covariances.
size = 10_000_000
sample = urn.simulate(n, size=size)
# mean
np.mean(sample, 0)
Evidently, the sample means and covariances approximate their population counterparts well.
To judge the quality of a multivariate normal approximation to the multivariate hypergeometric distribution, we draw
a large sample from a multivariate normal distribution with the mean vector and covariance matrix for the correspond-
ing multivariate hypergeometric distribution and compare the simulated distribution with the population multivariate
hypergeometric distribution.
x_μ = x - μ_x
y_μ = y - μ_y
@njit
def count(vec1, vec2, n):
    size = vec1.shape[0]

    count_mat = np.zeros((n, n))
    for i in range(size):
        count_mat[vec1[i], vec2[i]] += 1

    return count_mat
c = urn.c
fig, axs = plt.subplots(c, c, figsize=(14, 14))
for i in range(c):
    axs[i, i].hist(sample[:, i], bins=np.arange(0, n, 1), alpha=0.5, density=True,
                   label='hypergeom')
    axs[i, i].legend()
    axs[i, i].set_title('$k_{' +str(i+1) +'}$')

    for j in range(c):
        if i == j:
            continue
plt.show()
The diagonal graphs plot the marginal distributions of 𝑘𝑖 for each 𝑖 using histograms.
Note the substantial differences between the hypergeometric distribution and the approximating normal distribution.
The off-diagonal graphs plot the empirical joint distribution of 𝑘𝑖 and 𝑘𝑗 for each pair (𝑖, 𝑗).
The darker the blue, the more data points are contained in the corresponding cell. (Note that 𝑘𝑖 is on the x-axis and 𝑘𝑗 is
on the y-axis).
The contour maps plot the bivariate Gaussian density function of (𝑘𝑖 , 𝑘𝑗 ) with the population mean and covariance given
by slices of 𝜇 and Σ that we computed above.
Let’s also test the normality for each 𝑘𝑖 using scipy.stats.normaltest that implements D’Agostino and Pearson’s
test that combines skew and kurtosis to form an omnibus test of normality.
The null hypothesis is that the sample follows a normal distribution.
normaltest returns an array of p-values associated with tests for each 𝑘𝑖 sample.
test_multihyper = normaltest(sample)
test_multihyper.pvalue
As we can see, all the p-values are almost 0 and the null hypothesis is soundly rejected.
By contrast, the sample drawn from a normal distribution does not lead to rejection of the null hypothesis.
test_normal = normaltest(sample_normal)
test_normal.pvalue
The lesson to take away from this is that the normal approximation is imperfect.
TWELVE

MULTIVARIATE NORMAL DISTRIBUTION

Contents
12.1 Overview
This lecture describes a workhorse in probability theory, statistics, and economics, namely, the multivariate normal
distribution.
In this lecture, you will learn formulas for
• the joint distribution of a random vector 𝑥 of length 𝑁
• marginal distributions for all subvectors of 𝑥
• conditional distributions for subvectors of 𝑥 conditional on other subvectors of 𝑥
We will use the multivariate normal distribution to formulate some useful models:
This lecture defines a Python class MultivariateNormal to be used to generate marginal and conditional distri-
butions associated with a multivariate normal distribution.
For a multivariate normal distribution it is very convenient that
• conditional expectations equal linear least squares projections
• conditional distributions are characterized by multivariate linear regressions
We apply our Python class to some examples.
We use the following imports:
@njit
def f(z, μ, Σ):
    """
    The density function of multivariate normal distribution.

    Parameters
    ---------------
    z: ndarray(float, dim=2)
        random vector, N by 1
    μ: ndarray(float, dim=1 or 2)
        the mean of z, N by 1
    Σ: ndarray(float, dim=2)
        the covariance matrix of z, N by N
    """
    z = np.atleast_2d(z)
    μ = np.atleast_2d(μ)
    Σ = np.atleast_2d(Σ)

    N = z.size

    temp1 = np.linalg.det(Σ) ** (-1/2)
    temp2 = np.exp(-.5 * (z - μ).T @ np.linalg.inv(Σ) @ (z - μ))

    return (2 * np.pi) ** (-N/2) * temp1 * temp2
$$
z = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix},
$$

where

$$
\beta = \Sigma_{12} \Sigma_{22}^{-1}
$$
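To make these formulas concrete, here is a sketch of our own that applies the partitioned-normal formulas by hand to the 2 × 2 example used below, with 𝜇 = [.5, 1] and Σ = [[1, .5], [.5, 1]]:

```python
import numpy as np

μ = np.array([.5, 1.])
Σ = np.array([[1., .5], [.5, 1.]])
k = 1  # partition point: z1 = z[:k], z2 = z[k:]

Σ11, Σ12 = Σ[:k, :k], Σ[:k, k:]
Σ21, Σ22 = Σ[k:, :k], Σ[k:, k:]

# population regression coefficient of z1 on z2
β = Σ12 @ np.linalg.inv(Σ22)

# conditional distribution of z1 given z2 = 5
z2 = np.array([5.])
μ_hat = μ[:k] + β @ (z2 - μ[k:])
Σ_hat = Σ11 - β @ Σ21

print(μ_hat, Σ_hat)  # [2.5] [[0.75]]
```

These two numbers reappear later in the lecture as the conditional mean and variance of 𝑧1 given 𝑧2 = 5.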
class MultivariateNormal:
"""
Class of multivariate normal distribution.
Arguments
---------
μ, Σ:
see parameters
μs: list(ndarray(float, dim=1))
list of mean vectors μ1 and μ2 in order
Σs: list(list(ndarray(float, dim=2)))
2 dimensional list of covariance matrices
Σ11, Σ12, Σ21, Σ22 in order
βs: list(ndarray(float, dim=1))
list of regression coefficients β1 and β2 in order
"""
    def __init__(self, μ, Σ):
        self.μ, self.Σ = np.array(μ), np.atleast_2d(Σ)

    def partition(self, k):
        """
        Given k, partition z into z1 (size k) and z2 (size N-k), and
        compute the regression coefficients β1 and β2.
        """
        μ, Σ = self.μ, self.Σ
        self.μs = [μ[:k], μ[k:]]
        self.Σs = [[Σ[:k, :k], Σ[:k, k:]],
                   [Σ[k:, :k], Σ[k:, k:]]]

        self.βs = [self.Σs[0][1] @ np.linalg.inv(self.Σs[1][1]),
                   self.Σs[1][0] @ np.linalg.inv(self.Σs[0][0])]

    def cond_dist(self, ind, z):
        """
        Compute the conditional distribution of z1 given z2, or reversely.

        Returns
        ---------
        μ_hat: ndarray(float, ndim=1)
            The conditional mean of z1 or z2.
        Σ_hat: ndarray(float, ndim=2)
            The conditional covariance matrix of z1 or z2.
        """
        β = self.βs[ind]
        μs, Σs = self.μs, self.Σs

        μ_hat = μs[ind] + β @ (z - μs[1-ind])
        Σ_hat = Σs[ind][ind] - β @ Σs[1-ind][ind]

        return μ_hat, Σ_hat
μ = np.array([.5, 1.])
Σ = np.array([[1., .5], [.5 ,1.]])
k = 1 # choose partition
(array([[0.5]]), array([[0.5]]))
Let’s illustrate the fact that you can regress anything on anything else.
We have computed everything we need to compute two regression lines, one of 𝑧2 on 𝑧1 , the other of 𝑧1 on 𝑧2 .
We’ll represent these regressions as
𝑧1 = 𝑎 1 + 𝑏 1 𝑧2 + 𝜖 1
and
𝑧2 = 𝑎 2 + 𝑏 2 𝑧1 + 𝜖 2
𝐸𝜖1 𝑧2 = 0
and
𝐸𝜖2 𝑧1 = 0
Let’s compute 𝑎1 , 𝑎2 , 𝑏1 , 𝑏2 .
beta = multi_normal.βs
a1 = μ[0] - beta[0]*μ[1]
b1 = beta[0]
a2 = μ[1] - beta[1]*μ[0]
b2 = beta[1]
a1 = [[0.]]
b1 = [[0.5]]
a2 = [[0.75]]
b2 = [[0.5]]
Now let’s plot the two regression lines and stare at them.
z2 = np.linspace(-4,4,100)
a1 = np.squeeze(a1)
b1 = np.squeeze(b1)
a2 = np.squeeze(a2)
b2 = np.squeeze(b2)
z1 = b1*z2 + a1
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(1, 1, 1)
ax.set(xlim=(-4, 4), ylim=(-4, 4))
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_color('none')
(continues on next page)
a1 = 0.0
b1 = 0.5
-a2/b2 = -1.5
1/b2 = 2.0
We can use these regression lines or our code to compute conditional expectations.
Let’s compute the mean and variance of the distribution of 𝑧2 conditional on 𝑧1 = 5.
After that we’ll reverse what are on the left and right sides of the regression.
Now let’s compute the mean and variance of the distribution of 𝑧1 conditional on 𝑧2 = 5.
Let’s compare the preceding population mean and variance with outcomes from drawing a large sample and then regressing
𝑧1 − 𝜇1 on 𝑧2 − 𝜇2 .
We know that
𝑧1 − 𝜇1 = 𝛽 (𝑧2 − 𝜇2 ) + 𝜖,
We anticipate that for larger and larger sample sizes, estimated OLS coefficients will converge to 𝛽 and the estimated
variance of 𝜖 will converge to Σ̂ 1 .
# OLS regression
μ1, μ2 = multi_normal.μs
results = sm.OLS(z1_data - μ1, z2_data - μ2).fit()
Let’s compare the preceding population 𝛽 with the OLS sample estimate on 𝑧2 − 𝜇2
multi_normal.βs[0], results.params
(array([[0.5]]), array([0.50068711]))
Let’s compare our population Σ̂ 1 with the degrees-of-freedom adjusted estimate of the variance of 𝜖
(array([[0.75]]), 0.7504621422788655)
(array([2.5]), array([2.50274842]))
Thus, in each case, for our very large sample size, the sample analogues closely approximate their population counterparts.
A Law of Large Numbers explains why sample analogues approximate population objects.
μ = np.random.random(3)
C = np.random.random((3, 3))
Σ = C @ C.T # positive semi-definite
multi_normal = MultivariateNormal(μ, Σ)
μ, Σ
k = 1
multi_normal.partition(k)
Let's compute the distribution of $z_1$ conditional on $z_2 = \begin{bmatrix} 2 \\ 5 \end{bmatrix}$.
ind = 0
z2 = np.array([2., 5.])
n = 1_000_000
data = np.random.multivariate_normal(μ, Σ, size=n)
z1_data = data[:, :k]
z2_data = data[:, k:]
μ1, μ2 = multi_normal.μs
results = sm.OLS(z1_data - μ1, z2_data - μ2).fit()
As above, we compare population and sample regression coefficients, the conditional covariance matrix, and the condi-
tional mean vector in that order.
multi_normal.βs[0], results.params
(array([[0.00013182]]), 0.0001318492235146378)
(array([2.41090846]), array([2.41088967]))
Once again, sample analogues do a good job of approximating their populations counterparts.
Let’s move closer to a real-life example, namely, inferring a one-dimensional measure of intelligence called IQ from a list
of test scores.
The 𝑖th test score 𝑦𝑖 equals the sum of an unknown scalar IQ 𝜃 and a random variable 𝑤𝑖 .
𝑦𝑖 = 𝜃 + 𝜎𝑦 𝑤𝑖 , 𝑖 = 1, … , 𝑛
The distribution of IQ’s for a cross-section of people is a normal random variable described by
𝜃 = 𝜇𝜃 + 𝜎𝜃 𝑤𝑛+1 .
$$
w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \\ w_{n+1} \end{bmatrix} \sim N(0, I_{n+1})
$$
The following system describes the (𝑛 + 1) × 1 random vector 𝑋 that interests us:
$$
X = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \\ \theta \end{bmatrix}
= \begin{bmatrix} \mu_\theta \\ \mu_\theta \\ \vdots \\ \mu_\theta \\ \mu_\theta \end{bmatrix}
+ \begin{bmatrix}
\sigma_y & 0 & \cdots & 0 & \sigma_\theta \\
0 & \sigma_y & \cdots & 0 & \sigma_\theta \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & \sigma_y & \sigma_\theta \\
0 & 0 & \cdots & 0 & \sigma_\theta
\end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \\ w_{n+1} \end{bmatrix},
$$

or equivalently,

$$
X = \mu_\theta \mathbf{1}_{n+1} + D w
$$

where $X = \begin{bmatrix} y \\ \theta \end{bmatrix}$, $\mathbf{1}_{n+1}$ is a vector of 1s of size $n+1$, and $D$ is an $n+1$ by $n+1$ matrix.
Let’s define a Python function that constructs the mean 𝜇 and covariance matrix Σ of the random vector 𝑋 that we know
is governed by a multivariate normal distribution.
As arguments, the function takes the number of tests 𝑛, the mean 𝜇𝜃 and the standard deviation 𝜎𝜃 of the IQ distribution,
and the standard deviation of the randomness in test scores 𝜎𝑦 .
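A minimal sketch of such a function follows; the name `construct_moments_IQ` and the decision to also return `D` are our assumptions about the omitted code, but the construction of `D` comes directly from the system displayed above:

```python
import numpy as np

def construct_moments_IQ(n, μθ, σθ, σy):
    # X = μθ * 1_{n+1} + D w: the first n rows of D generate the test scores
    # y_i = μθ + σy w_i + σθ w_{n+1}; the last row generates θ itself
    μ_IQ = np.full(n + 1, μθ)

    D = np.zeros((n + 1, n + 1))
    D[:n, :n] = σy * np.eye(n)   # idiosyncratic test noise
    D[:, n] = σθ                 # the common IQ shock w_{n+1} enters every row

    Σ_IQ = D @ D.T
    return μ_IQ, Σ_IQ, D

μ_IQ, Σ_IQ, D = construct_moments_IQ(50, 100., 10., 10.)
```

With these parameters, each test score has variance $\sigma_y^2 + \sigma_\theta^2 = 200$, any two test scores have covariance $\sigma_\theta^2 = 100$, and $\theta$ itself has variance $\sigma_\theta^2 = 100$.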
n = 50
μθ, σθ, σy = 100., 10., 10.
(array([100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100.,
100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100.,
100., 100., 100., 100., 100., 100., 100., 100., 100., 100., 100.,
We can now use our MultivariateNormal class to construct an instance, then partition the mean vector and co-
variance matrix as we wish.
We want to regress IQ, the random variable 𝜃 (what we don’t know), on the vector 𝑦 of test scores (what we do know).
We choose k=n so that 𝑧1 = 𝑦 and 𝑧2 = 𝜃.
k = n
multi_normal_IQ.partition(k)
Using the generator multivariate_normal, we can make one draw of the random vector from our distribution and
then compute the distribution of 𝜃 conditional on our test scores.
Let’s do that and then print out some pertinent quantities.
x = np.random.multivariate_normal(μ_IQ, Σ_IQ)
y = x[:-1] # test scores
θ = x[-1] # IQ
103.64946988092446
The method cond_dist takes test scores 𝑦 as input and returns the conditional normal distribution of the IQ 𝜃.
In the following code, ind sets the variables on the right side of the regression.
Given the way we have defined the vector 𝑋, we want to set ind=1 in order to make 𝜃 the left side variable in the
population regression.
ind = 1
multi_normal_IQ.cond_dist(ind, y)
(array([106.80818783]), array([[1.96078431]]))
The first number is the conditional mean 𝜇𝜃̂ and the second is the conditional variance Σ̂ 𝜃 .
How do additional test scores affect our inferences?
To shed light on this, we compute a sequence of conditional distributions of 𝜃 by varying the number of test scores in the
conditioning set from 1 to 𝑛.
We’ll make a pretty graph showing how our judgment of the person’s IQ change as more test results come in.
plt.show()
The solid blue line in the plot above shows 𝜇𝜃̂ as a function of the number of test scores that we have recorded and
conditioned on.
The blue area shows the span that comes from adding or subtracting 1.96𝜎̂𝜃 from 𝜇𝜃̂ .
Therefore, 95% of the probability mass of the conditional distribution falls in this range.
The value of the random 𝜃 that we drew is shown by the black dotted line.
As more and more test scores come in, our estimate of the person's 𝜃 becomes more and more reliable.
By staring at the changes in the conditional distributions, we see that adding more test scores makes 𝜃 ̂ settle down and
approach 𝜃.
Thus, each 𝑦𝑖 adds information about 𝜃.
If we were to drive the number of tests $n \to +\infty$, the conditional standard deviation $\hat{\sigma}_\theta$ would converge to $0$ at rate $\frac{1}{n^{.5}}$.
Σ ≡ 𝐷𝐷′ = 𝐶𝐶 ′
and
𝐸𝜖𝜖′ = 𝐼.
It follows that
𝜖 ∼ 𝑁 (0, 𝐼).
Let $G = C^{-1}$ and define

$$
\epsilon = G(X - \mu_\theta \mathbf{1}_{n+1})
$$
This formula confirms that the orthonormal vector 𝜖 contains the same information as the non-orthogonal vector
(𝑋 − 𝜇𝜃 1𝑛+1 ).
We can say that 𝜖 is an orthogonal basis for (𝑋 − 𝜇𝜃 1𝑛+1 ).
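A small numerical check (our own sketch) verifies this orthogonalization: with $C$ the Cholesky factor of $\Sigma$ and $G = C^{-1}$, the covariance matrix of $\epsilon = G(X - \mu_\theta \mathbf{1}_{n+1})$ is the identity:

```python
import numpy as np

# any symmetric positive definite Σ works for this check
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Σ = A @ A.T + 4 * np.eye(4)

C = np.linalg.cholesky(Σ)   # Σ = C C'
G = np.linalg.inv(C)

# cov(ε) = G Σ G' = C⁻¹ (C C') C⁻ᵀ = I
cov_ε = G @ Σ @ G.T
print(np.allclose(cov_ε, np.eye(4)))  # True
```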
Let 𝑐𝑖 be the 𝑖th element in the last row of 𝐶.
Then we can write
The mutual orthogonality of the 𝜖𝑖 ’s provides us with an informative way to interpret them in light of equation (12.1).
Thus, relative to what is known from tests 𝑖 = 1, … , 𝑛 − 1, 𝑐𝑖 𝜖𝑖 is the amount of new information about 𝜃 brought by
the test number 𝑖.
Here new information means surprise or what could not be predicted from earlier information.
Formula (12.1) also provides us with an enlightening way to express conditional means and conditional variances that we
computed earlier.
In particular,
𝐸 [𝜃 ∣ 𝑦1 , … , 𝑦𝑘 ] = 𝜇𝜃 + 𝑐1 𝜖1 + ⋯ + 𝑐𝑘 𝜖𝑘
and
$$
\textrm{Var}(\theta \mid y_1, \dots, y_k) = c_{k+1}^2 + c_{k+2}^2 + \cdots + c_{n+1}^2.
$$
C = np.linalg.cholesky(Σ_IQ)
G = np.linalg.inv(C)
ε = G @ (x - μθ)
cε = C[n, :] * ε
To confirm that these formulas give the same answers that we computed earlier, we can compare the means and variances
of 𝜃 conditional on {𝑦𝑖 }𝑘𝑖=1 with what we obtained above using the formulas implemented in the class Multivari-
ateNormal built on our original representation of conditional distributions for multivariate normal distributions.
# conditional mean
np.max(np.abs(μθ_hat_arr - μθ_hat_arr_C)) < 1e-10
True
# conditional variance
np.max(np.abs(Σθ_hat_arr - Σθ_hat_arr_C)) < 1e-10
True
Evidently, the Cholesky factorization automatically computes the population regression coefficients and associated statistics that are produced by our MultivariateNormal class.
The Cholesky factorization computes these things recursively.
Indeed, in formula (12.1),
• the random variable 𝑐𝑖 𝜖𝑖 is information about 𝜃 that is not contained by the information in 𝜖1 , 𝜖2 , … , 𝜖𝑖−1
• the coefficient 𝑐𝑖 is the simple population regression coefficient of 𝜃 − 𝜇𝜃 on 𝜖𝑖
When 𝑛 = 2, we assume that outcomes are draws from a multivariate normal distribution with representation
$$
X = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ \theta \\ \eta \end{bmatrix}
= \begin{bmatrix} \mu_\theta \\ \mu_\theta \\ \mu_\eta \\ \mu_\eta \\ \mu_\theta \\ \mu_\eta \end{bmatrix}
+ \begin{bmatrix}
\sigma_y & 0 & 0 & 0 & \sigma_\theta & 0 \\
0 & \sigma_y & 0 & 0 & \sigma_\theta & 0 \\
0 & 0 & \sigma_y & 0 & 0 & \sigma_\eta \\
0 & 0 & 0 & \sigma_y & 0 & \sigma_\eta \\
0 & 0 & 0 & 0 & \sigma_\theta & 0 \\
0 & 0 & 0 & 0 & 0 & \sigma_\eta
\end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ w_4 \\ w_5 \\ w_6 \end{bmatrix}
$$

where $w = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_6 \end{bmatrix}$ is a standard normal random vector.
We construct a Python function construct_moments_IQ2d to construct the mean vector and covariance matrix of
the joint normal distribution.
μ_IQ2d = np.empty(2*(n+1))
μ_IQ2d[:n] = μθ
μ_IQ2d[2*n] = μθ
μ_IQ2d[n:2*n] = μη
μ_IQ2d[2*n+1] = μη
n = 2
# mean and variance of θ, η, and y
μθ, σθ, μη, ση, σy = 100., 10., 100., 10, 10
(83.26886447129678, 112.92159885842455)
Now let's compute distributions of 𝜃 and 𝜂 separately conditional on various subsets of test scores.
It will be fun to compare outcomes with the help of an auxiliary function cond_dist_IQ2d that we now construct.
def cond_dist_IQ2d(μ, Σ, data):
    n = len(μ)

    multi_normal = MultivariateNormal(μ, Σ)
    multi_normal.partition(n-1)
    μ_hat, Σ_hat = multi_normal.cond_dist(1, data)

    return μ_hat, Σ_hat
for indices, IQ, conditions in [([*range(2*n), 2*n], 'θ', 'y1, y2, y3, y4'),
([*range(n), 2*n], 'θ', 'y1, y2'),
([*range(n, 2*n), 2*n], 'θ', 'y3, y4'),
([*range(2*n), 2*n+1], 'η', 'y1, y2, y3, y4'),
([*range(n), 2*n+1], 'η', 'y1, y2'),
([*range(n, 2*n), 2*n+1], 'η', 'y3, y4')]:
The mean and variance of θ conditional on y1, y2, y3, y4 are 85.62 and 33.33 respectively
The mean and variance of θ conditional on y1, y2 are 85.62 and 33.33 respectively
The mean and variance of θ conditional on y3, y4 are 100.00 and 100.00 respectively
The mean and variance of η conditional on y1, y2, y3, y4 are 105.80 and 33.33 respectively
The mean and variance of η conditional on y1, y2 are 100.00 and 100.00 respectively
The mean and variance of η conditional on y3, y4 are 105.80 and 33.33 respectively
Evidently, math tests provide no information about 𝜂 and language tests provide no information about 𝜃.
We can use the multivariate normal distribution and a little matrix algebra to present foundations of univariate linear time
series analysis.
Let 𝑥𝑡 , 𝑦𝑡 , 𝑣𝑡 , 𝑤𝑡+1 each be scalars for 𝑡 ≥ 0.
Consider the following model:
𝑥0 ∼ 𝑁 (0, 𝜎02 )
𝑥𝑡+1 = 𝑎𝑥𝑡 + 𝑏𝑤𝑡+1 , 𝑤𝑡+1 ∼ 𝑁 (0, 1) , 𝑡 ≥ 0
𝑦𝑡 = 𝑐𝑥𝑡 + 𝑑𝑣𝑡 , 𝑣𝑡 ∼ 𝑁 (0, 1) , 𝑡 ≥ 0
$$
X = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_T \end{bmatrix}
$$
and the covariance matrix Σ𝑥 can be constructed using the moments we have computed above.
Similarly, we can define
$$
Y = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_T \end{bmatrix}, \quad
v = \begin{bmatrix} v_0 \\ v_1 \\ \vdots \\ v_T \end{bmatrix}
$$
and therefore

$$
Y = CX + Dv
$$

where $C$ and $D$ are both diagonal matrices with constants $c$ and $d$ as their diagonal entries respectively.
Consequently, the covariance matrix of 𝑌 is
Σ𝑦 = 𝐸𝑌 𝑌 ′ = 𝐶Σ𝑥 𝐶 ′ + 𝐷𝐷′
Define

$$
Z = \begin{bmatrix} X \\ Y \end{bmatrix}
$$

and

$$
\Sigma_z = EZZ' = \begin{bmatrix} \Sigma_x & \Sigma_x C' \\ C \Sigma_x & \Sigma_y \end{bmatrix}
$$
Thus, the stacked sequences {𝑥𝑡 }𝑇𝑡=0 and {𝑦𝑡 }𝑇𝑡=0 jointly follow the multivariate normal distribution 𝑁 (0, Σ𝑧 ).
Σx = np.empty((T+1, T+1))

Σx[0, 0] = σ0 ** 2
for i in range(T):
    Σx[i+1, i+1] = a ** 2 * Σx[i, i] + b ** 2
    Σx[i, i+1:] = Σx[i, i] * a ** np.arange(1, T+1-i)
    Σx[i+1:, i] = Σx[i, i+1:]

Σx
Σy = C @ Σx @ C.T + D @ D.T
Σz[:T+1, :T+1] = Σx
Σz[:T+1, T+1:] = Σx @ C.T
Σz[T+1:, :T+1] = C @ Σx
Σz[T+1:, T+1:] = Σy
Σz
z = np.random.multivariate_normal(μz, Σz)
x = z[:T+1]
y = z[T+1:]
print("X = ", x)
print("Y = ", y)
print(" E [ X | Y] = ", )
multi_normal_ex1.cond_dist(0, y)
t = 3
sub_Σz
sub_y = y[:t]
multi_normal_ex2.cond_dist(0, sub_y)
(array([1.76190901]), array([[1.00201996]]))
t = 3
j = 2
sub_μz = np.zeros(t-j+2)
sub_Σz = np.empty((t-j+2, t-j+2))
sub_Σz
sub_y = y[:t-j+1]
multi_normal_ex3.cond_dist(0, sub_y)
(array([0.29476547]), array([[1.81413617]]))
𝜖 = 𝐻 −1 𝑌 .
H = np.linalg.cholesky(Σy)
array([[1.00124922, 0. , 0. , 0. ],
[0.8988771 , 1.00225743, 0. , 0. ],
[0.80898939, 0.89978675, 1.00225743, 0. ],
[0.72809046, 0.80980808, 0.89978676, 1.00225743]])
ε = np.linalg.inv(H) @ y
This example is an instance of what is known as a Wold representation in time series analysis.
$$
\Sigma_b = \begin{bmatrix} C \tilde{\Sigma}_y C' & \mathbf{0}_{2 \times (N-2)} \\ \mathbf{0}_{(N-2) \times 2} & \mathbf{0}_{(N-2) \times (N-2)} \end{bmatrix}, \quad
C = \begin{bmatrix} \alpha_2 & \alpha_1 \\ 0 & \alpha_2 \end{bmatrix}
$$

$$
\Sigma_u = \begin{bmatrix}
\sigma_u^2 & 0 & \cdots & 0 \\
0 & \sigma_u^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma_u^2
\end{bmatrix}
$$
# set parameters
T = 80
T = 160

# coefficients of the second order difference equation
α0 = 10
α1 = 1.53
α2 = -.9

# variance of u
σu = 1.
σu = 10.

# construct A
A = np.zeros((T, T))

for i in range(T):
    A[i, i] = 1

    if i-1 >= 0:
        A[i, i-1] = -α1

    if i-2 >= 0:
        A[i, i-2] = -α2
A_inv = np.linalg.inv(A)
μy = A_inv @ μb
Σb = np.zeros((T, T))
Let

$$
p_t = \sum_{j=0}^{T-t} \beta^j y_{t+j}
$$
Form

$$
\underbrace{\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ \vdots \\ p_T \end{bmatrix}}_{\equiv p}
= \underbrace{\begin{bmatrix}
1 & \beta & \beta^2 & \cdots & \beta^{T-1} \\
0 & 1 & \beta & \cdots & \beta^{T-2} \\
0 & 0 & 1 & \cdots & \beta^{T-3} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix}}_{\equiv B}
\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_T \end{bmatrix}
$$
we have
𝜇𝑝 = 𝐵𝜇𝑦
Σ𝑝 = 𝐵Σ𝑦 𝐵′
β = .96
# construct B
B = np.zeros((T, T))
for i in range(T):
    B[i, i:] = β ** np.arange(0, T-i)
Denote

$$
z = \begin{bmatrix} y \\ p \end{bmatrix}
= \underbrace{\begin{bmatrix} I \\ B \end{bmatrix}}_{\equiv D} y
$$
Thus, {𝑦𝑡 }𝑇𝑡=1 and {𝑝𝑡 }𝑇𝑡=1 jointly follow the multivariate normal distribution 𝑁 (𝜇𝑧 , Σ𝑧 ), where
𝜇𝑧 = 𝐷𝜇𝑦
Σ𝑧 = 𝐷Σ𝑦 𝐷′
D = np.vstack([np.eye(T), B])
μz = D @ μy
Σz = D @ Σy @ D.T
We can simulate paths of 𝑦𝑡 and 𝑝𝑡 and compute the conditional mean 𝐸 [𝑝𝑡 ∣ 𝑦𝑡−1 , 𝑦𝑡 ] using the MultivariateNor-
mal class.
z = np.random.multivariate_normal(μz, Σz)
y, p = z[:T], z[T:]
cond_Ep = np.empty(T-1)
sub_μ = np.empty(3)
sub_Σ = np.empty((3, 3))
for t in range(2, T+1):
sub_μ[:] = μz[[t-2, t-1, T-1+t]]
sub_Σ[:, :] = Σz[[t-2, t-1, T-1+t], :][:, [t-2, t-1, T-1+t]]
plt.xlabel('t')
plt.legend(loc=1)
plt.show()
In the above graph, the green line is what the price of the stock would be if people had perfect foresight about the path of dividends, while the other line is the conditional expectation 𝐸𝑝𝑡 |𝑦𝑡 , 𝑦𝑡−1 , which is what the price would be if people did not have perfect foresight but were optimally predicting future dividends on the basis of the information 𝑦𝑡 , 𝑦𝑡−1 at time 𝑡.
Assume that 𝑥0 is an 𝑛 × 1 random vector and that 𝑦0 is a 𝑝 × 1 random vector determined by the observation equation
$$
\mu = \begin{bmatrix} \hat{x}_0 \\ G \hat{x}_0 \end{bmatrix}, \quad
\Sigma = \begin{bmatrix} \Sigma_0 & \Sigma_0 G' \\ G \Sigma_0 & G \Sigma_0 G' + R \end{bmatrix}
$$
By applying an appropriate instance of the above formulas for the mean vector 𝜇1̂ and covariance matrix Σ̂ 11 of 𝑧1
conditional on 𝑧2 , we find that the probability distribution of 𝑥0 conditional on 𝑦0 is 𝒩(𝑥0̃ , Σ̃ 0 ) where
𝛽0 = Σ0 𝐺′ (𝐺Σ0 𝐺′ + 𝑅)−1
𝑥0̃ = 𝑥0̂ + 𝛽0 (𝑦0 − 𝐺𝑥0̂ )
Σ̃ 0 = Σ0 − Σ0 𝐺′ (𝐺Σ0 𝐺′ + 𝑅)−1 𝐺Σ0
We can express our finding that the probability distribution of 𝑥0 conditional on 𝑦0 is 𝒩(𝑥0̃ , Σ̃ 0 ) by representing 𝑥0 as
𝑥0 = 𝑥0̃ + 𝜁0 (12.2)
where 𝜁0 is a Gaussian random vector that is orthogonal to 𝑥0̃ and 𝑦0 and that has mean vector 0 and conditional covariance
matrix 𝐸[𝜁0 𝜁0′ |𝑦0 ] = Σ̃ 0 .
Now suppose that we are in a time series setting and that we have the one-step state transition equation
𝑥1 = 𝐴(𝑥0̃ + 𝜁0 ) + 𝐶𝑤1
It follows that
and that the corresponding conditional covariance matrix 𝐸(𝑥1 − 𝐸𝑥1 |𝑦0 )(𝑥1 − 𝐸𝑥1 |𝑦0 )′ ≡ Σ1 is
Σ1 = 𝐴Σ̃ 0 𝐴′ + 𝐶𝐶 ′
where as before 𝑥0 ∼ 𝒩(𝑥0̂ , Σ0 ), 𝑤𝑡+1 is the 𝑡 + 1th component of an i.i.d. stochastic process distributed as 𝑤𝑡+1 ∼
𝒩(0, 𝐼), and 𝑣𝑡 is the 𝑡th component of an i.i.d. process distributed as 𝑣𝑡 ∼ 𝒩(0, 𝑅) and the {𝑤𝑡+1 }∞ ∞
𝑡=0 and {𝑣𝑡 }𝑡=0
processes are orthogonal at all pairs of dates.
The logic and formulas that we applied above imply that the probability distribution of 𝑥𝑡 conditional on 𝑦0 , 𝑦1 , … , 𝑦𝑡−1 =
𝑦𝑡−1 is
where $\{\tilde{x}_t, \tilde{\Sigma}_t\}_{t=1}^{\infty}$ can be computed by iterating on the following equations starting from $t = 1$ and initial conditions for $\tilde{x}_0, \tilde{\Sigma}_0$ computed as we have above:
$$
\begin{aligned}
\Sigma_t &= A \tilde{\Sigma}_{t-1} A' + CC' \\
\hat{x}_t &= A \tilde{x}_{t-1} \\
\beta_t &= \Sigma_t G' (G \Sigma_t G' + R)^{-1} \\
\tilde{x}_t &= \hat{x}_t + \beta_t (y_t - G \hat{x}_t) \\
\tilde{\Sigma}_t &= \Sigma_t - \Sigma_t G' (G \Sigma_t G' + R)^{-1} G \Sigma_t
\end{aligned}
$$
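The five equations above can be sketched as a single filtering-and-forecasting step; the function name `kalman_step` is ours, and this is a bare-bones illustration rather than the lecture's implementation:

```python
import numpy as np

def kalman_step(x_tilde, Σ_tilde, y, A, C, G, R):
    # forecast: push yesterday's filtered estimate through the transition
    Σ = A @ Σ_tilde @ A.T + C @ C.T
    x_hat = A @ x_tilde

    # update: regress the state on today's observation surprise y - G x_hat
    β = Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)
    x_tilde_new = x_hat + β @ (y - G @ x_hat)
    Σ_tilde_new = Σ - β @ G @ Σ

    return x_tilde_new, Σ_tilde_new

# scalar illustration: A = C = G = R = 1, starting from x̃0 = 0, Σ̃0 = 1, y1 = 1
I1 = np.eye(1)
x1, Σ1 = kalman_step(np.array([0.]), I1, np.array([1.]), I1, I1, I1, I1)
print(x1, Σ1)  # [0.66666667] [[0.66666667]]
```

With these scalar inputs the forecast variance is $\Sigma_1 = 2$, the gain is $\beta_1 = 2/3$, and the filtered estimate and variance both equal $2/3$.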
If we shift the first equation forward one period and then substitute the expression for Σ̃ 𝑡 on the right side of the fifth
equation into it we obtain
This is a matrix Riccati difference equation that is closely related to another matrix Riccati difference equation that appears
in a quantecon lecture on the basics of linear quadratic control theory.
Stare at the two preceding equations for a moment or two: the first is a matrix difference equation for a conditional covariance matrix, while the second is a matrix difference equation in the matrix that appears in a quadratic form for an intertemporal cost or value function.
Although the two equations are not identical, they display striking family resemblances.
• the first equation tells dynamics that work forward in time
• the second equation tells dynamics that work backward in time
• while many of the terms are similar, one equation seems to apply matrix transformations to some matrices that play
similar roles in the other equation
The family resemblences of these two equations reflects a transcendent duality that prevails between control theory and
filtering theory.
12.12.3 An example
G = np.array([[1., 3.]])
R = np.array([[1.]])

# x0_hat, Σ0, and the MultivariateNormal class are defined earlier in the lecture
μ = np.hstack([x0_hat, G @ x0_hat])
Σ = np.block([[Σ0, Σ0 @ G.T], [G @ Σ0, G @ Σ0 @ G.T + R]])

# construct the joint distribution of (x0, y0) and partition it
multi_normal = MultivariateNormal(μ, Σ)
multi_normal.partition(2)

# the observation of y
y0 = 2.3

# conditional distribution of x0
μ1_hat, Σ11 = multi_normal.cond_dist(0, y0)
μ1_hat, Σ11
(array([-0.078125, 0.803125]),
array([[ 0.72098214, -0.203125 ],
[-0.403125 , 0.228125 ]]))
Here is code for solving a dynamic filtering problem by iterating on our equations, followed by an example.
p, n = G.shape
T = len(y_seq)
x_hat_seq = np.empty((T+1, n))
Σ_hat_seq = np.empty((T+1, n, n))

x_hat_seq[0] = x0_hat
Σ_hat_seq[0] = Σ0

for t in range(T):
    xt_hat = x_hat_seq[t]
    Σt = Σ_hat_seq[t]
    μ = np.hstack([xt_hat, G @ xt_hat])
    Σ = np.block([[Σt, Σt @ G.T], [G @ Σt, G @ Σt @ G.T + R]])

    # filtering
    multi_normal = MultivariateNormal(μ, Σ)
    multi_normal.partition(n)
    x_tilde, Σ_tilde = multi_normal.cond_dist(0, y_seq[t])

    # forecasting
    x_hat_seq[t+1] = A @ x_tilde
    Σ_hat_seq[t+1] = C @ C.T + A @ Σ_tilde @ A.T
(array([[0. , 1. ],
[0.1215625 , 0.24875 ],
[0.18680212, 0.06904689],
[0.75576875, 0.05558463]]),
array([[[1. , 0.5 ],
[0.3 , 2. ]],
[[4.12874554, 1.95523214],
[1.92123214, 1.04592857]],
[[4.08198663, 1.99218488],
[1.98640488, 1.00886423]],
[[4.06457628, 2.00041999],
[1.99943739, 1.00275526]]]))
The iterative algorithm just described is a version of the celebrated Kalman filter.
We describe the Kalman filter and some applications of it in A First Look at the Kalman Filter.
The factor analysis model widely used in psychology and other fields can be represented as
𝑌 = Λ𝑓 + 𝑈
where
1. $Y$ is an $n \times 1$ random vector,
2. $\Lambda$ is an $n \times k$ coefficient matrix,
3. $f$ is a $k \times 1$ random vector with $Eff' = I$,
4. $U$ is an $n \times 1$ random vector with $EUU' = D$, a diagonal matrix, and $U \perp f$ (i.e., $EUf' = 0$), and
5. it is presumed that $k$ is small relative to $n$; often $k$ is only 1 or 2, as in our IQ examples.
This implies that
Σ𝑦 = 𝐸𝑌 𝑌 ′ = ΛΛ′ + 𝐷
𝐸𝑌 𝑓 ′ = Λ
𝐸𝑓𝑌 ′ = Λ′
Thus, the covariance matrix $\Sigma_Y$ is the sum of a diagonal matrix $D$ and a positive semi-definite matrix $\Lambda\Lambda'$ of rank $k$.
This means that all covariances among the $n$ components of the $Y$ vector are intermediated by their common dependencies on the $k < n$ factors.
Form
$$ Z = \begin{pmatrix} f \\ Y \end{pmatrix} $$
The covariance matrix of $Z$ is
$$ \Sigma_z = EZZ' = \begin{pmatrix} I & \Lambda' \\ \Lambda & \Lambda\Lambda' + D \end{pmatrix} $$
In the following, we first construct the mean vector and the covariance matrix for the case where 𝑁 = 10 and 𝑘 = 2.
N = 10
k = 2
where the first half of the first column of $\Lambda$ is filled with 1s and the second half with 0s, and symmetrically for the second column.
𝐷 is a diagonal matrix with parameter 𝜎𝑢2 on the diagonal.
Λ = np.zeros((N, k))
Λ[:N//2, 0] = 1
Λ[N//2:, 1] = 1
σu = .5
D = np.eye(N) * σu ** 2
# compute Σy
Σy = Λ @ Λ.T + D
We can now construct the mean vector and the covariance matrix for $Z$.
μz = np.zeros(k+N)

Σz = np.empty((k+N, k+N))
Σz[:k, :k] = np.eye(k)
Σz[:k, k:] = Λ.T
Σz[k:, :k] = Λ
Σz[k:, k:] = Σy

multi_normal_factor = MultivariateNormal(μz, Σz)
multi_normal_factor.partition(k)

z = np.random.multivariate_normal(μz, Σz)
f = z[:k]
y = z[k:]
Let’s compute the conditional distribution of the hidden factor 𝑓 on the observations 𝑌 , namely, 𝑓 ∣ 𝑌 = 𝑦.
multi_normal_factor.cond_dist(0, y)
(array([-0.30191322, 1.22653669]),
array([[0.04761905, 0. ],
[0. , 0.04761905]]))
B = Λ.T @ np.linalg.inv(Σy)
B @ y
array([-0.30191322, 1.22653669])
multi_normal_factor.cond_dist(1, f)
Λ @ f
To learn about Principal Components Analysis (PCA), please see this lecture Singular Value Decompositions.
For fun, let’s apply a PCA decomposition to a covariance matrix Σ𝑦 that in fact is governed by our factor-analytic model.
Technically, this means that the PCA model is misspecified. (Can you explain why?)
Nevertheless, this exercise will let us study how well the first two principal components from a PCA can approximate the
conditional expectations 𝐸𝑓𝑖 |𝑌 for our two factors 𝑓𝑖 , 𝑖 = 1, 2 for the factor analytic model that we have assumed truly
governs the data on 𝑌 we have generated.
So we compute the PCA decomposition
$$ \Sigma_y = P\tilde{\Lambda}P' $$
where $\tilde{\Lambda}$ is a diagonal matrix of eigenvalues.
Then form
$$ Y = P\epsilon $$
and
$$ \epsilon = P'Y $$
Note that we will arrange the eigenvectors in 𝑃 in the descending order of eigenvalues.
λ_tilde, P = np.linalg.eigh(Σy)

# arrange the eigenvalues (and corresponding eigenvectors) in descending order
ind = sorted(range(N), key=lambda i: λ_tilde[i], reverse=True)
P = P[:, ind]
λ_tilde = λ_tilde[ind]
Λ_tilde = np.diag(λ_tilde)
print('λ_tilde =', λ_tilde)
λ_tilde = [5.25 5.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25]
4.440892098500626e-16
array([[1.25, 1. , 1. , 1. , 1. , 0. , 0. , 0. , 0. , 0. ],
[1. , 1.25, 1. , 1. , 1. , 0. , 0. , 0. , 0. , 0. ],
[1. , 1. , 1.25, 1. , 1. , 0. , 0. , 0. , 0. , 0. ],
[1. , 1. , 1. , 1.25, 1. , 0. , 0. , 0. , 0. , 0. ],
[1. , 1. , 1. , 1. , 1.25, 0. , 0. , 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 0. , 1.25, 1. , 1. , 1. , 1. ],
[0. , 0. , 0. , 0. , 0. , 1. , 1.25, 1. , 1. , 1. ],
[0. , 0. , 0. , 0. , 0. , 1. , 1. , 1.25, 1. , 1. ],
[0. , 0. , 0. , 0. , 0. , 1. , 1. , 1. , 1.25, 1. ],
[0. , 0. , 0. , 0. , 0. , 1. , 1. , 1. , 1. , 1.25]])
ε = P.T @ y
print("ε = ", ε)
print('f = ', f)
f = [-0.1949429 1.36894286]
Below we plot several objects against the observation number:
• the $N$ values of $y$
• the $N$ values of the principal components $\epsilon$
• the value of the first factor $f_1$ plotted only for the first $N/2$ observations of $y$ for which it receives a non-zero loading in $\Lambda$
• the value of the second factor $f_2$ plotted only for the final $N/2$ observations for which it receives a non-zero loading in $\Lambda$
plt.scatter(range(N), y, label='y')
plt.scatter(range(N), ε, label=r'$\epsilon$')
plt.hlines(f[0], 0, N//2-1, ls='--', label='$f_{1}$')
plt.hlines(f[1], N//2, N-1, ls='-.', label='$f_{2}$')
plt.legend()
plt.show()
ε[:2]
array([-0.30191322, 1.22653669])
The fraction of variance in 𝑦𝑡 explained by the first two principal components can be computed as below.
λ_tilde[:2].sum() / λ_tilde.sum()
0.84
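Several of the computations above are shown only through their outputs; as a self-contained sketch, the following rebuilds the $\Sigma_y$ of this example, sorts its eigenvalues in descending order, reproduces the 0.84 variance fraction, and forms the projection of a draw of $y$ on the first two principal components (the variable names mirror the lecture's, but the snippet stands alone):

```python
import numpy as np

# rebuild Σy for N = 10, k = 2, σu = 0.5 as above
N, k, σu = 10, 2, 0.5
Λ = np.zeros((N, k))
Λ[:N//2, 0] = 1
Λ[N//2:, 1] = 1
Σy = Λ @ Λ.T + np.eye(N) * σu**2

# eigendecomposition with eigenvalues sorted in descending order
λ_tilde, P = np.linalg.eigh(Σy)
ind = np.argsort(λ_tilde)[::-1]
P, λ_tilde = P[:, ind], λ_tilde[ind]

# fraction of variance captured by the first two principal components
frac = λ_tilde[:2].sum() / λ_tilde.sum()
print(frac)   # ≈ 0.84, i.e. 10.5 / 12.5

# draw a y and project it on the first two principal components
rng = np.random.default_rng(1)
y = rng.multivariate_normal(np.zeros(N), Σy)
ε = P.T @ y
y_hat = P[:, :2] @ ε[:2]
```

The two large eigenvalues equal $5.25$ each, matching the λ_tilde output printed above.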
Compute
$$ \hat{Y} = P_1\epsilon_1 + P_2\epsilon_2 $$
where $P_j$ is the $j$th column of $P$ and $\epsilon_j$ is the $j$th component of $\epsilon$.
In this example, it turns out that the projection $\hat{Y}$ of $Y$ on the first two principal components does a good job of approximating $Ef \mid y$.
We confirm this in the following plot of $f$, $Ey \mid f$, $Ef \mid y$, and $\hat{y}$, each plotted against the observation number.
plt.scatter(range(N), Λ @ f, label='$Ey|f$')
plt.scatter(range(N), y_hat, label=r'$\hat{y}$')
plt.hlines(f[0], 0, N//2-1, ls='--', label='$f_{1}$')
plt.hlines(f[1], N//2, N-1, ls='-.', label='$f_{2}$')
Efy = B @ y
plt.hlines(Efy[0], 0, N//2-1, ls='--', color='b', label='$Ef_{1}|y$')
plt.hlines(Efy[1], N//2, N-1, ls='-.', color='b', label='$Ef_{2}|y$')
plt.legend()
plt.show()
The covariance matrix of $\hat{Y}$ can be computed by first constructing the covariance matrix of $\epsilon$ and then using the upper left block corresponding to $\epsilon_1$ and $\epsilon_2$.
Σy_hat =
[[1.05 1.05 1.05 1.05 1.05 0. 0. 0. 0. 0. ]
[1.05 1.05 1.05 1.05 1.05 0. 0. 0. 0. 0. ]
[1.05 1.05 1.05 1.05 1.05 0. 0. 0. 0. 0. ]
[1.05 1.05 1.05 1.05 1.05 0. 0. 0. 0. 0. ]
[1.05 1.05 1.05 1.05 1.05 0. 0. 0. 0. 0. ]
[0. 0. 0. 0. 0. 1.05 1.05 1.05 1.05 1.05]
[0. 0. 0. 0. 0. 1.05 1.05 1.05 1.05 1.05]
[0. 0. 0. 0. 0. 1.05 1.05 1.05 1.05 1.05]
[0. 0. 0. 0. 0. 1.05 1.05 1.05 1.05 1.05]
[0. 0. 0. 0. 0. 1.05 1.05 1.05 1.05 1.05]]
THIRTEEN
13.1 Overview
This lecture puts elementary tools to work to approximate probability distributions of the annual failure rates of a system
consisting of a number of critical parts.
We’ll use log normal distributions to approximate probability distributions of critical component parts.
To approximate the probability distribution of the sum of $n$ log normal random variables that describes the failure rate of the entire system, we'll compute the convolution of those $n$ log normal probability distributions.
We’ll use the following concepts and tools:
• log normal distributions
• the convolution theorem that describes the probability distribution of a sum of independent random variables
• fault tree analysis for approximating a failure rate of a multi-component system
• a hierarchical probability model for describing uncertain probabilities
• Fourier transforms and inverse Fourier transforms as efficient ways of computing convolutions of sequences
For more about Fourier transforms see this quantecon lecture Circulant Matrices as well as these lectures Covariance Stationary Processes and Estimation of Spectra.
El-Shanawany, Ardron, and Walker [El-Shanawany et al., 2018] and Greenfield and Sargent [Greenfield and Sargent,
1993] used some of the methods described here to approximate probabilities of failures of safety systems in nuclear
facilities.
These methods respond to some of the recommendations made by Apostolakis [Apostolakis, 1990] for constructing
procedures for quantifying uncertainty about the reliability of a safety system.
We’ll start by bringing in some Python machinery.
Intermediate Quantitative Economics with Python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import fftconvolve
from tabulate import tabulate
import time
np.set_printoptions(precision=3, suppress=True)
If a random variable $x$ follows a normal distribution with mean $\mu$ and variance $\sigma^2$, then its exponential, say $y = \exp(x)$, follows a log normal distribution with parameters $\mu, \sigma^2$.
Notice that we said parameters and not mean and variance $\mu, \sigma^2$.
• $\mu$ and $\sigma^2$ are the mean and variance of $x = \log(y)$
• they are not the mean and variance of $y$
• instead, the mean of $y$ is $e^{\mu + \frac{1}{2}\sigma^2}$ and the variance of $y$ is $(e^{\sigma^2} - 1)e^{2\mu + \sigma^2}$
A log normal random variable 𝑦 is nonnegative.
The density for a log normal random variate $y$ is
$$ f(y) = \frac{1}{y\sigma\sqrt{2\pi}} \exp\left( \frac{-(\log y - \mu)^2}{2\sigma^2} \right) $$
for $y \geq 0$.
Important features of a log normal random variable are
$$
\begin{aligned}
\text{mean:} &\quad e^{\mu + \frac{1}{2}\sigma^2} \\
\text{variance:} &\quad (e^{\sigma^2} - 1)e^{2\mu + \sigma^2} \\
\text{median:} &\quad e^{\mu} \\
\text{mode:} &\quad e^{\mu - \sigma^2} \\
\text{.95 quantile:} &\quad e^{\mu + 1.645\sigma} \\
\text{.95-.05 quantile ratio:} &\quad e^{3.29\sigma}
\end{aligned}
$$
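These formulas can be verified numerically; a quick sketch checks them against scipy.stats.lognorm, which parameterizes a log normal by s = σ and scale = e^μ (the particular μ, σ values below are arbitrary):

```python
import numpy as np
from scipy.stats import lognorm, norm

μ, σ = 0.5, 0.8   # arbitrary illustrative parameters
dist = lognorm(s=σ, scale=np.exp(μ))

assert np.isclose(dist.mean(), np.exp(μ + σ**2 / 2))
assert np.isclose(dist.var(), (np.exp(σ**2) - 1) * np.exp(2*μ + σ**2))
assert np.isclose(dist.median(), np.exp(μ))

# .95 quantile uses z = Φ^{-1}(.95) ≈ 1.645
z = norm.ppf(0.95)
assert np.isclose(dist.ppf(0.95), np.exp(μ + z * σ))

# the .95-.05 quantile ratio is e^{2zσ} ≈ e^{3.29σ}
assert np.isclose(dist.ppf(0.95) / dist.ppf(0.05), np.exp(2 * z * σ))
```

Because the two quantiles sit symmetrically around the median on a log scale, their ratio depends only on $\sigma$, not on $\mu$.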
Recall the following stability property of two independent normally distributed random variables:
If 𝑥1 is normal with mean 𝜇1 and variance 𝜎12 and 𝑥2 is independent of 𝑥1 and normal with mean 𝜇2 and variance 𝜎22 ,
then 𝑥1 + 𝑥2 is normally distributed with mean 𝜇1 + 𝜇2 and variance 𝜎12 + 𝜎22 .
Independent log normal distributions have a different stability property.
The product of independent log normal random variables is also log normal.
In particular, if 𝑦1 is log normal with parameters (𝜇1 , 𝜎12 ) and 𝑦2 is log normal with parameters (𝜇2 , 𝜎22 ), then the product
𝑦1 𝑦2 is log normal with parameters (𝜇1 + 𝜇2 , 𝜎12 + 𝜎22 ).
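A quick Monte Carlo sketch can illustrate this stability property: the log of the product $y_1 y_2$ is normal with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$ (the parameter values below are made up for illustration):

```python
import numpy as np

mu1, sigma1 = 0.2, 0.5    # illustrative parameter values
mu2, sigma2 = -0.1, 0.8

rng = np.random.default_rng(0)
n = 1_000_000
y1 = rng.lognormal(mu1, sigma1, n)
y2 = rng.lognormal(mu2, sigma2, n)

log_prod = np.log(y1 * y2)
print(log_prod.mean())   # ≈ mu1 + mu2 = 0.1
print(log_prod.var())    # ≈ sigma1**2 + sigma2**2 = 0.89
```

Running the same check on log(y1 + y2) would show that the sum's log is not normal, which is the point of the note that follows.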
Note: While the product of two log normal random variables is log normal, the sum of two log normal random variables is not log normal.
This observation sets the stage for the challenge that confronts us in this lecture, namely, to approximate probability distributions of sums of independent log normal random variables.
To compute the probability distribution of the sum of two log normal random variables, we can use the following convolution property of a probability distribution that is a sum of independent random variables: if $X$ and $Y$ are independent discrete random variables with probability mass functions $f$ and $g$, then the probability mass function $h$ of $Z = X + Y$ is the convolution
$$ h_m = \sum_{n} f_{m-n} \, g_n $$
We can use this formula to compute a discretized version of the probability distribution of the sum of two random variables, one with probability mass function $f$, the other with probability mass function $g$.
Before applying the convolution property to sums of log normal distributions, let’s practice on some simple discrete
distributions.
To take one example, let’s consider the following two probability distributions
𝑓𝑗 = Prob(𝑋 = 𝑗), 𝑗 = 0, 1
and
𝑔𝑗 = Prob(𝑌 = 𝑗), 𝑗 = 0, 1, 2, 3
and
ℎ𝑗 = Prob(𝑍 ≡ 𝑋 + 𝑌 = 𝑗), 𝑗 = 0, 1, 2, 3, 4
ℎ=𝑓 ∗𝑔 =𝑔∗𝑓
f = [.75, .25]
g = [0., .6, 0., .4]
h = np.convolve(f,g)
hf = fftconvolve(f,g)
A little later we'll explain some advantages that come from using scipy.signal.fftconvolve rather than numpy.convolve.
They provide the same answers but scipy.signal.fftconvolve is much faster.
That's why we rely on it later in this lecture.
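To see the convolution at work on the f and g just defined, here is a quick check against a hand computation (for example, $h_1 = f_0 g_1 = .75 \times .6 = .45$):

```python
import numpy as np

f = [.75, .25]          # Prob(X = j), j = 0, 1
g = [0., .6, 0., .4]    # Prob(Y = j), j = 0, 1, 2, 3

h = np.convolve(f, g)   # Prob(Z = X + Y = j), j = 0, 1, 2, 3, 4
print(h)                # → [0, 0.45, 0.15, 0.3, 0.1] (up to floating point)

# a valid probability mass function: nonnegative and sums to 1
assert np.all(h >= 0) and np.isclose(h.sum(), 1.0)
```

Each entry h[j] sums the products f[i]·g[j-i] over all ways that X + Y can equal j.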
We’ll construct an example to verify that discretized distributions can do a good job of approximating samples drawn
from underlying continuous distributions.
We’ll start by generating samples of size 25000 of three independent log normal random variates as well as pairwise and
triple-wise sums.
Then we’ll plot histograms and compare them with convolutions of appropriate discretized log normal distributions.
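The sampling step itself is not shown in this excerpt. Here is a sketch using mu = 5, sigma = 1 for each variate; these values are assumptions, chosen so that the theoretical mean $e^{\mu + \frac{1}{2}\sigma^2} = e^{5.5} \approx 244.69$ matches the value reported below:

```python
import numpy as np

# assumed parameter values for the three log normal variates
mu1, sigma1 = 5., 1.
mu2, sigma2 = 5., 1.
mu3, sigma3 = 5., 1.

# draw samples of size 25000
n = 25_000
rng = np.random.default_rng(0)
s1 = rng.lognormal(mu1, sigma1, n)
s2 = rng.lognormal(mu2, sigma2, n)
s3 = rng.lognormal(mu3, sigma3, n)
```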
## create sums of two and three log normal random variates ssum2 = s1 + s2 and ssum3 = s1 + s2 + s3
ssum2 = s1 + s2
ssum3 = s1 + s2 + s3

samp_mean2 = np.mean(s2)
pop_mean2 = np.exp(mu2 + (sigma2**2)/2)
Here are helper functions that create a discretized version of a log normal probability density function.
def p_log_normal(x,μ,σ):
    p = 1 / (σ*x*np.sqrt(2*np.pi)) * np.exp(-1/2*((np.log(x) - μ)/σ)**2)
    return p

def pdf_seq(μ,σ,I,m):
    x = np.arange(1e-7,I,m)
    p_array = p_log_normal(x,μ,σ)
    p_array_norm = p_array/np.sum(p_array)
    return p_array,p_array_norm,x
Now we shall set a grid length $I$ and a grid increment size $m$ for our discretizations.
Note: We set 𝐼 equal to a power of two because we want to be free to use a Fast Fourier Transform to compute a
convolution of two sequences (discrete distributions).
Setting the power $p$ to 15 rather than 12, for example, improves how well the discretized probability mass function approximates the original continuous probability density function being studied.
p=15
I = 2**p # Truncation value
m = .1 # increment size
p1,p1_norm,x = pdf_seq(mu1,sigma1,I,m)
## compute number of points to evaluate the probability mass function
NT = x.size
plt.figure(figsize = (8,8))
plt.subplot(2,1,1)
plt.plot(x[:int(NT)],p1[:int(NT)],label = '')
plt.xlim(0,2500)
count, bins, ignored = plt.hist(s1, 1000, density=True, align='mid')
plt.show()
# Compute mean from discretized pdf and compare with the theoretical value
mean= np.sum(np.multiply(x[:NT],p1_norm[:NT]))
meantheory = np.exp(mu1+.5*sigma1**2)
mean, meantheory
(244.69059898302908, 244.69193226422038)
Now let's use the convolution theorem to compute the probability distribution of a sum of the two log normal random variables we have parameterized above.
We'll also compute the probability distribution of a sum of the three log normal random variables constructed above.
Before we do these things, we shall explain our choice of Python algorithm to compute a convolution of two sequences.
Because the sequences that we convolve are long, we use the scipy.signal.fftconvolve function rather than the numpy.convolve function.
These two functions give virtually equivalent answers but for long sequences scipy.signal.fftconvolve is much
faster.
The program scipy.signal.fftconvolve uses fast Fourier transforms and their inverses to calculate convolu-
tions.
Let’s define the Fourier transform and the inverse Fourier transform.
The Fourier transform of a sequence $\{x_t\}_{t=0}^{T-1}$ is a sequence of complex numbers $\{x(\omega_j)\}_{j=0}^{T-1}$ given by
$$ x(\omega_j) = \sum_{t=0}^{T-1} x_t \exp(-i\omega_j t) \tag{13.1} $$
where $\omega_j = \frac{2\pi j}{T}$ for $j = 0, 1, \dots, T-1$.
The inverse Fourier transform of the sequence $\{x(\omega_j)\}_{j=0}^{T-1}$ is
$$ x_t = T^{-1} \sum_{j=0}^{T-1} x(\omega_j) \exp(i\omega_j t) \tag{13.2} $$
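As a sanity check on definitions (13.1) and (13.2), the following short sketch verifies that a direct implementation matches numpy.fft.fft (which uses the same sign convention) and that the inverse transform recovers the original sequence:

```python
import numpy as np

T = 8
rng = np.random.default_rng(42)
x = rng.standard_normal(T)

# direct implementation of (13.1)
t = np.arange(T)
omega = 2 * np.pi * np.arange(T) / T
X = np.array([np.sum(x * np.exp(-1j * w * t)) for w in omega])
assert np.allclose(X, np.fft.fft(x))

# inverse transform (13.2) recovers x
x_rec = np.array([np.sum(X * np.exp(1j * omega * s)) for s in t]) / T
assert np.allclose(x_rec.real, x)
```

The fast Fourier transform computes exactly the sums in (13.1), but in $O(T \log T)$ rather than $O(T^2)$ operations, which is the source of fftconvolve's speed advantage.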
p1,p1_norm,x = pdf_seq(mu1,sigma1,I,m)
p2,p2_norm,x = pdf_seq(mu2,sigma2,I,m)
p3,p3_norm,x = pdf_seq(mu3,sigma3,I,m)

tic = time.perf_counter()
c1 = np.convolve(p1_norm,p2_norm)
c2 = np.convolve(c1,p3_norm)
toc = time.perf_counter()
tdiff1 = toc - tic

tic = time.perf_counter()
c1f = fftconvolve(p1_norm,p2_norm)
c2f = fftconvolve(c1f,p3_norm)
toc = time.perf_counter()
tdiff2 = toc - tic

print("time with np.convolve = ", tdiff1, "; time with fftconvolve = ", tdiff2)
The fast Fourier transform is two orders of magnitude faster than numpy.convolve.
Now let’s plot our computed probability mass function approximation for the sum of two log normal random variables
against the histogram of the sample that we formed above.
NT= np.size(x)
plt.figure(figsize = (8,8))
plt.subplot(2,1,1)
plt.plot(x[:int(NT)],c1f[:int(NT)]/m,label = '')
plt.xlim(0,5000)
plt.show()
NT= np.size(x)
plt.figure(figsize = (8,8))
plt.subplot(2,1,1)
plt.plot(x[:int(NT)],c2f[:int(NT)]/m,label = '')
plt.xlim(0,5000)
plt.show()
(489.3810974093853, 489.38386452844077)
(734.0714863312272, 734.0757967926611)
We shall soon apply the convolution theorem to compute the probability of a top event in a failure tree analysis.
Before applying the convolution theorem, we first describe the model that connects constituent events to the top event whose failure rate we seek to quantify.
The model is an example of the widely used failure tree analysis described by El-Shanawany, Ardron, and Walker
[El-Shanawany et al., 2018].
To construct the statistical model, we repeatedly use what is called the rare event approximation.
We want to compute the probability of an event $A \cup B$.
• the union 𝐴 ∪ 𝐵 is the event that 𝐴 OR 𝐵 occurs
A law of probability tells us that 𝐴 OR 𝐵 occurs with probability
𝑃 (𝐴 ∪ 𝐵) = 𝑃 (𝐴) + 𝑃 (𝐵) − 𝑃 (𝐴 ∩ 𝐵)
where the intersection 𝐴 ∩ 𝐵 is the event that 𝐴 AND 𝐵 both occur and the union 𝐴 ∪ 𝐵 is the event that 𝐴 OR 𝐵
occurs.
If 𝐴 and 𝐵 are independent, then
𝑃 (𝐴 ∩ 𝐵) = 𝑃 (𝐴)𝑃 (𝐵)
If 𝑃 (𝐴) and 𝑃 (𝐵) are both small, then 𝑃 (𝐴)𝑃 (𝐵) is even smaller.
The rare event approximation is
𝑃 (𝐴 ∪ 𝐵) ≈ 𝑃 (𝐴) + 𝑃 (𝐵)
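A tiny numerical check shows the size of the rare event approximation error for independent events (the probability values are made up for illustration):

```python
# independent events A and B with small, made-up probabilities
PA, PB = 1e-4, 2e-4

exact = PA + PB - PA * PB   # P(A ∪ B) under independence
approx = PA + PB            # rare event approximation

# the approximation error is the second-order term P(A)P(B) = 2e-8
print(approx - exact)
```

Because the error is the product of two small probabilities, it is negligible relative to either probability alone, which is what justifies the approximation for highly reliable components.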
13.7 Application
A system has been designed with the feature that a system failure occurs when any of its $n$ critical components fails.
The failure probability 𝑃 (𝐴𝑖 ) of each event 𝐴𝑖 is small.
We assume that failures of the components are statistically independent random variables.
We repeatedly apply a rare event approximation to obtain the following formula for the probability of a system failure:
$$ P(F) \approx P(A_1) + P(A_2) + \cdots + P(A_n) $$
or
$$ P(F) \approx \sum_{i=1}^{n} P(A_i) \tag{13.3} $$
Probabilities for each event are recorded as failure rates per year.
Now we come to the problem that really interests us, following [El-Shanawany et al., 2018] and Greenfield and Sargent
[Greenfield and Sargent, 1993] in the spirit of Apostolakis [Apostolakis, 1990].
The constituent probabilities or failure rates 𝑃 (𝐴𝑖 ) are not known a priori and have to be estimated.
We address this problem by specifying probabilities of probabilities that capture one notion of not knowing the con-
stituent probabilities that are inputs into a failure tree analysis.
Thus, we assume that a system analyst is uncertain about the failure rates 𝑃 (𝐴𝑖 ), 𝑖 = 1, … , 𝑛 for components of a system.
The analyst copes with this situation by regarding the system's failure probability $P(F)$ and each of the component probabilities $P(A_i)$ as random variables.
• dispersions of the probability distribution of $P(A_i)$ characterize the analyst's uncertainty about the failure probability $P(A_i)$
• the dispersion of the implied probability distribution of $P(F)$ characterizes his uncertainty about the probability of a system's failure.
This leads to what is sometimes called a hierarchical model in which the analyst has probabilities about the probabilities
𝑃 (𝐴𝑖 ).
The analyst formalizes his uncertainty by assuming that
• the failure probability 𝑃 (𝐴𝑖 ) is itself a log normal random variable with parameters (𝜇𝑖 , 𝜎𝑖 ).
• failure rates 𝑃 (𝐴𝑖 ) and 𝑃 (𝐴𝑗 ) are statistically independent for all pairs with 𝑖 ≠ 𝑗.
The analyst calibrates the parameters (𝜇𝑖 , 𝜎𝑖 ) for the failure events 𝑖 = 1, … , 𝑛 by reading reliability studies in engineering
papers that have studied historical failure rates of components that are as similar as possible to the components being used
in the system under study.
The analyst assumes that such information about the observed dispersion of annual failure rates, or times to failure, can
inform him of what to expect about parts’ performances in his system.
The analyst assumes that the random variables 𝑃 (𝐴𝑖 ) are statistically mutually independent.
The analyst wants to approximate a probability mass function and cumulative distribution function of the system's failure probability $P(F)$.
• We say probability mass function because of how we discretize each random variable, as described earlier.
The analyst calculates the probability mass function for the top event 𝐹 , i.e., a system failure, by repeatedly applying
the convolution theorem to compute the probability distribution of a sum of independent log normal random variables, as
described in equation (13.3).
Note: Because the failure rates are all very small, log normal distributions with the above parameter values actually describe $P(A_i)$ times $10^{-9}$.
So the probabilities that we'll put on the $x$ axis of the probability mass function and associated cumulative distribution function should be multiplied by $10^{-9}$.
To extract a table that summarizes computed quantiles, we’ll use a helper function
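The helper's definition is not shown in this excerpt; here is a minimal sketch of a find_nearest function with the behavior the calls below require, namely returning the index of the array entry closest to a given value:

```python
import numpy as np

def find_nearest(array, value):
    # index of the entry of `array` closest to `value`
    array = np.asarray(array)
    return np.abs(array - value).argmin()
```

With d13 a cumulative distribution function evaluated on the grid, find_nearest(d13, 0.5) then locates the grid index of the median.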
p=15
I = 2**p # Truncation value
m = .05 # increment size
p1,p1_norm,x = pdf_seq(mu1,sigma1,I,m)
p2,p2_norm,x = pdf_seq(mu2,sigma2,I,m)
p3,p3_norm,x = pdf_seq(mu3,sigma3,I,m)
p4,p4_norm,x = pdf_seq(mu4,sigma4,I,m)
p5,p5_norm,x = pdf_seq(mu5,sigma5,I,m)
p6,p6_norm,x = pdf_seq(mu6,sigma6,I,m)
p7,p7_norm,x = pdf_seq(mu7,sigma7,I,m)
p8,p8_norm,x = pdf_seq(mu8,sigma8,I,m)
p9,p9_norm,x = pdf_seq(mu9,sigma9,I,m)
p10,p10_norm,x = pdf_seq(mu10,sigma10,I,m)
p11,p11_norm,x = pdf_seq(mu11,sigma11,I,m)
p12,p12_norm,x = pdf_seq(mu12,sigma12,I,m)
p13,p13_norm,x = pdf_seq(mu13,sigma13,I,m)
p14,p14_norm,x = pdf_seq(mu14,sigma14,I,m)
c1 = fftconvolve(p1_norm,p2_norm)
c2 = fftconvolve(c1,p3_norm)
c3 = fftconvolve(c2,p4_norm)
c4 = fftconvolve(c3,p5_norm)
c5 = fftconvolve(c4,p6_norm)
c6 = fftconvolve(c5,p7_norm)
c7 = fftconvolve(c6,p8_norm)
c8 = fftconvolve(c7,p9_norm)
c9 = fftconvolve(c8,p10_norm)
c10 = fftconvolve(c9,p11_norm)
c11 = fftconvolve(c10,p12_norm)
c12 = fftconvolve(c11,p13_norm)
c13 = fftconvolve(c12,p14_norm)
toc = time.perf_counter()
d13 = np.cumsum(c13)
Nx=int(1400)
plt.figure()
plt.plot(x[0:int(Nx/m)], d13[0:int(Nx/m)])
plt.hlines(0.5, min(x), Nx, linestyles='dotted', colors='black')
plt.hlines(0.9, min(x), Nx, linestyles='dotted', colors='black')
plt.hlines(0.95, min(x), Nx, linestyles='dotted', colors='black')
plt.hlines(0.1, min(x), Nx, linestyles='dotted', colors='black')
plt.hlines(0.05, min(x), Nx, linestyles='dotted', colors='black')
plt.ylim(0,1)
plt.xlim(0,Nx)
plt.xlabel("$x10^{-9}$",loc = "right")
plt.show()
x_1 = x[find_nearest(d13,0.01)]
x_5 = x[find_nearest(d13,0.05)]
x_10 = x[find_nearest(d13,0.1)]
x_50 = x[find_nearest(d13,0.50)]
x_66 = x[find_nearest(d13,0.665)]
x_85 = x[find_nearest(d13,0.85)]
x_90 = x[find_nearest(d13,0.90)]
x_95 = x[find_nearest(d13,0.95)]
x_99 = x[find_nearest(d13,0.99)]
x_9978 = x[find_nearest(d13,0.9978)]
print(tabulate([
    ['1%', f"{x_1}"],
    ['5%', f"{x_5}"],
    ['10%', f"{x_10}"],
    ['50%', f"{x_50}"],
    ['66.5%', f"{x_66}"],
    ['85%', f"{x_85}"],
    ['90%', f"{x_90}"],
    ['95%', f"{x_95}"],
    ['99%', f"{x_99}"],
    ['99.78%', f"{x_9978}"]],
    headers=['Percentile', 'x * 1e-9']))
Percentile x * 1e-9
------------ ----------
1% 76.15
5% 106.5
10% 128.2
50% 260.55
66.5% 338.55
85% 509.4
90% 608.8
95% 807.6
99% 1470.2
99.78% 2474.85
The above table agrees closely with column 2 of Table 11 on p. 28 of [Greenfield and Sargent, 1993].
Discrepancies are probably due to slight differences in the number of digits retained in inputting 𝜇𝑖 , 𝜎𝑖 , 𝑖 = 1, … , 14 and
in the number of points deployed in the discretizations.
FOURTEEN
Note: If you are running this on Google Colab the above cell will present an error. This is because Google Colab doesn't use Anaconda to manage the Python packages. However, this lecture will still execute as Google Colab has plotly installed.
14.1 Overview
We want to approximate a function
$$ y = f(x) $$
Common activation functions include the rectified linear unit (ReLU) function
$$ h(z) = \max(0, z) $$
and the identity function
$$ h(z) = z $$
As activation functions below, we'll use the sigmoid function for layers 1 to $N-1$ and the identity function for layer $N$.
To approximate a function $f(x)$ we construct $\hat{f}(x)$ by proceeding as follows.
Let
$$ l_i(x) = w_i x + b_i $$
Then
$$ f(x) \approx \hat{f}(x) = h_N \circ l_N \circ h_{N-1} \circ l_{N-1} \circ \cdots \circ h_1 \circ l_1(x) $$
We now consider a neural network like the one described above with width 1, depth $N$, and activation functions $h_i$ for $1 \leqslant i \leqslant N$ that map $\mathbb{R}$ into itself.
Let $\{(w_i, b_i)\}_{i=1}^{N}$ denote a sequence of weights and biases.
As mentioned above, for a given input $x_1$, our approximating function $\hat{f}$ evaluated at $x_1$ equals the "output" $x_{N+1}$ from our network, which can be computed by iterating on
$$ x_{i+1} = h_i(w_i x_i + b_i) \tag{14.1} $$
For a given prediction $\hat{y}(x)$ and target $y = f(x)$, consider the loss function
$$ \mathcal{L}(\hat{y}, y)(x) = \frac{1}{2}(\hat{y} - y)^2(x) $$
This criterion is a function of the parameters $\{(w_i, b_i)\}_{i=1}^{N}$ and the point $x$.
We're interested in solving the following problem:
$$ \min_{\{(w_i, b_i)\}_{i=1}^{N}} \int \mathcal{L}\left(\hat{f}(x), f(x)\right)(x) \, d\mu(x) $$
where $\mu(x)$ is some measure of points $x \in \mathbb{R}$ over which we want a good approximation $\hat{f}(x)$ to $f(x)$.
Stack weights and biases into a vector of parameters $p$:
$$ p = \begin{bmatrix} w_1 \\ b_1 \\ w_2 \\ b_2 \\ \vdots \\ w_N \\ b_N \end{bmatrix} $$
Applying a “poor man’s version” of a stochastic gradient descent algorithm for finding a zero of a function leads to the
following update rule for parameters:
$$ p_{k+1} = p_k - \alpha \frac{d\mathcal{L}}{dx_{N+1}} \frac{dx_{N+1}}{dp_k} \tag{14.2} $$
where $\frac{d\mathcal{L}}{dx_{N+1}} = -(x_{N+1} - y)$ and $\alpha > 0$ is a step size.
(See this and this to gather insights about how stochastic gradient descent relates to Newton’s method.)
To implement one step of this parameter update rule, we want the vector of derivatives $\frac{dx_{N+1}}{dp_k}$.
In the neural network literature, this step is accomplished by what is known as back propagation.
Thanks to properties of
• the chain and product rules for differentiation from differential calculus, and
• lower triangular matrices
back propagation can actually be accomplished in one step by inverting a lower triangular matrix and performing a matrix multiplication.
https://youtu.be/rZS2LGiurKY
Here goes.
Define the derivative of $h(z)$ with respect to $z$ evaluated at $z = z_i$ as $\delta_i$:
$$ \delta_i = \frac{d}{dz} h(z) \Big|_{z = z_i} $$
or
$$ \delta_i = h'(w_i x_i + b_i) $$
Repeated application of the chain rule and product rule to our recursion (14.1) allows us to obtain:
$$
\underbrace{\begin{pmatrix} dx_2 \\ dx_3 \\ \vdots \\ dx_{N+1} \end{pmatrix}}_{dx}
= \underbrace{\begin{pmatrix}
\delta_1 x_1 & \delta_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \delta_2 x_2 & \delta_2 & \cdots & 0 & 0 \\
\vdots & \vdots & & & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & \delta_N x_N & \delta_N
\end{pmatrix}}_{D}
\underbrace{\begin{pmatrix} dw_1 \\ db_1 \\ \vdots \\ dw_N \\ db_N \end{pmatrix}}_{dp}
+ \underbrace{\begin{pmatrix}
0 & 0 & \cdots & 0 & 0 \\
\delta_2 w_2 & 0 & \cdots & 0 & 0 \\
\vdots & \ddots & & \vdots & \vdots \\
0 & 0 & \cdots & \delta_N w_N & 0
\end{pmatrix}}_{L}
\underbrace{\begin{pmatrix} dx_2 \\ \vdots \\ dx_{N+1} \end{pmatrix}}_{dx}
$$
or
$$ dx = D \, dp + L \, dx $$
which implies
$$ dx = (I - L)^{-1} D \, dp $$
which in turn implies
$$
\begin{pmatrix} dx_{N+1}/dw_1 \\ dx_{N+1}/db_1 \\ \vdots \\ dx_{N+1}/dw_N \\ dx_{N+1}/db_N \end{pmatrix}
= e_N (I - L)^{-1} D
$$
where $e_N$ is the $N$th unit vector, so that the right side is the last row of $(I - L)^{-1} D$.
We can then solve the above problem by applying our update for $p$ multiple times for a collection of input-output pairs $\{(x_1^i, y^i)\}_{i=1}^{M}$ that we'll call our "training set".
Choosing a training set amounts to a choice of measure 𝜇 in the above formulation of our function approximation problem
as a minimization problem.
In this spirit, we shall use a uniform grid of, say, 50 or 200 points.
There are many possible approaches to the minimization problem posed above:
• batch gradient descent in which you use an average gradient over the training set
• stochastic gradient descent in which you sample points randomly and use individual gradients
• something in-between (so-called “mini-batch gradient descent”)
The update rule (14.2) described above amounts to a stochastic gradient descent algorithm.
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
    keys = random.split(key, len(sizes))
    return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
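init_network_params relies on a helper random_layer_params that does not appear in this excerpt; here is a sketch in the style of the JAX documentation's fully-connected network example (the scale default is an assumption):

```python
from jax import random

def random_layer_params(m, n, key, scale=1e-2):
    # draw a random (n, m) weight matrix and a length-n bias vector
    w_key, b_key = random.split(key)
    return (scale * random.normal(w_key, (n, m)),
            scale * random.normal(b_key, (n,)))
```

With layer sizes [1, 1, 1, 1] as below, each layer's weight is a 1×1 matrix and each bias a length-1 vector, matching the w[0, 0] and b[0] indexing used elsewhere in this lecture.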
h = jax.nn.sigmoid

def compute_xδw_seq(params, x):
    # sequences to store x_i, δ_i, w_i, b_i along the network
    N = len(params)
    xs, δ = jnp.empty(N + 1), jnp.empty(N)
    ws, bs = jnp.empty(N), jnp.empty(N)

    xs = xs.at[0].set(x)
    for i, (w, b) in enumerate(params[:-1]):
        output = w * xs[i] + b
        activation = h(output[0, 0])
        # Store elements
        δ = δ.at[i].set(grad(h)(output[0, 0]))
        ws = ws.at[i].set(w[0, 0])
        bs = bs.at[i].set(b[0])
        xs = xs.at[i+1].set(activation)

    # the final layer applies the identity activation
    final_w, final_b = params[-1]
    preds = final_w * xs[N-1] + final_b
    # Store elements
    δ = δ.at[-1].set(1.)
    ws = ws.at[-1].set(final_w[0, 0])
    bs = bs.at[-1].set(final_b[0])
    xs = xs.at[-1].set(preds[0, 0])

    return xs, δ, ws, bs

def loss(params, x, y):
    xs, δ, ws, bs = compute_xδw_seq(params, x)
    preds = xs[-1]
    return 1 / 2 * (y - preds) ** 2
# Parameters
N = 3 # Number of layers
layer_sizes = [1, ] * (N + 1)
param_scale = 0.1
step_size = 0.01
params = init_network_params(layer_sizes, random.PRNGKey(1))
x = 5
y = 3
xs, δ, ws, bs = compute_xδw_seq(params, x)
Array(0., dtype=float32)
# Check that the gradient of the loss is the same for both approaches
jnp.max(jnp.abs(-(y - xs[-1]) * dxs_la[-1] - grad_loss_ad))
Array(1.4901161e-08, dtype=float32)
@jit
def update_ad(params, x, y):
    grads = grad(loss)(params, x, y)
    return [(w - step_size * dw, b - step_size * db)
            for (w, b), (dw, db) in zip(params, grads)]
@jit
def update_la(params, x, y):
    xs, δ, ws, bs = compute_xδw_seq(params, x)
    N = len(params)
    L = jnp.diag(δ * ws, k=-1)
    L = L[1:, 1:]
update_ad(params, x, y)
14.6 Example 1
𝑓 (𝑥) = −3𝑥 + 2
on [0.5, 3].
We use a uniform grid of 200 points and update the parameters for each point on the grid 300 times.
ℎ𝑖 is the sigmoid activation function for all layers except the final one for which we use the identity function and 𝑁 = 3.
Weights are initialized randomly.
def f(x):
    return -3 * x + 2
M = 200
grid = jnp.linspace(0.5, 3, num=M)
f_val = f(grid)
indices = jnp.arange(M)
key = random.PRNGKey(0)
return params
# Parameters
N = 3 # Number of layers
layer_sizes = [1, ] * (N + 1)
params_ex1 = init_network_params(layer_sizes, key)
%%time
params_ex1 = train(params_ex1, grid, f_val, key, num_epochs=500)
fig = go.Figure()
fig.add_trace(go.Scatter(x=grid, y=f_val, name=r'$-3x+2$'))
fig.add_trace(go.Scatter(x=grid, y=predictions, name='Approximation'))
It is fun to think about how deepening the neural net for the above example affects the quality of approximation.
• If the network is too deep, you’ll run into the vanishing gradient problem
• Other parameters such as the step size and the number of epochs can be as important or more important than the
number of layers in the situation considered in this lecture.
• Indeed, since 𝑓 is a linear function of 𝑥, a one-layer network with the identity map as an activation would probably
work best.
14.8 Example 2
def f(x):
    return jnp.log(x)
# Parameters
N = 1 # Number of layers
layer_sizes = [1, ] * (N + 1)
params_ex2_1 = init_network_params(layer_sizes, key)
# Parameters
N = 2 # Number of layers
layer_sizes = [1, ] * (N + 1)
params_ex2_2 = init_network_params(layer_sizes, key)
# Parameters
N = 3 # Number of layers
layer_sizes = [1, ] * (N + 1)
params_ex2_3 = init_network_params(layer_sizes, key)
fig = go.Figure()
fig.add_trace(go.Scatter(x=grid, y=f_val, name=r'$\log{x}$'))
fig.add_trace(go.Scatter(x=grid, y=predictions_1, name='One-layer neural network'))
fig.add_trace(go.Scatter(x=grid, y=predictions_2, name='Two-layer neural network'))
fig.add_trace(go.Scatter(x=grid, y=predictions_3, name='Three-layer neural network'))
cpu
Note: Cloud Environment: This lecture site is built in a server environment that doesn't have access to a gpu. If you run this lecture locally, this lets you know where your code is being executed, either via the cpu or the gpu.
FIFTEEN
15.1 Overview
Social stigmas can inhibit people from confessing potentially embarrassing activities or opinions.
When people are reluctant to participate in a sample survey about personally sensitive issues, they might decline to participate, and even if they do participate, they might choose to provide incorrect answers to sensitive questions.
These problems induce selection biases that present challenges to interpreting and designing surveys.
To illustrate how social scientists have thought about estimating the prevalence of such embarrassing activities and opin-
ions, this lecture describes a classic approach of S. L. Warner [Warner, 1965].
Warner used elementary probability to construct a way to protect the privacy of individual respondents to surveys while
still estimating the fraction of a collection of individuals who have a socially stigmatized characteristic or who engage in
a socially stigmatized activity.
Warner’s idea was to add noise between the respondent’s answer and the signal about that answer that the survey maker
ultimately receives.
Knowing about the structure of the noise assures the respondent that the survey maker does not observe his answer.
Statistical properties of the noise injection procedure provide the respondent plausible deniability.
Related ideas underlie modern differential privacy systems.
(See https://en.wikipedia.org/wiki/Differential_privacy)
import numpy as np
import pandas as pd
• Prepare a random spinner that with 𝑝 probability points to the Letter A and with (1 − 𝑝) probability points to the
Letter B.
• Each subject spins a random spinner and sees an outcome (A or B) that the interviewer does not observe.
• The subject states whether he belongs to the group to which the spinner points.
• If the spinner points to the group to which the subject belongs, the subject reports “yes”; otherwise he reports “no”.
• The subject answers the question truthfully.
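The spinner procedure just described is easy to simulate. The sketch below is ours, not part of the lecture; the function name `warner_survey` and its parameter values are hypothetical, and only NumPy is assumed.

```python
import numpy as np

def warner_survey(pi, p, n, seed=0):
    """Simulate n truthful responses under Warner's spinner design (our sketch)."""
    rng = np.random.default_rng(seed)
    in_A = rng.random(n) < pi       # hidden membership in Group A
    spin_A = rng.random(n) < p      # spinner points to Letter A with probability p
    # a truthful subject says "yes" (1) iff the spinner points to his own group
    return (spin_A == in_A).astype(int)

responses = warner_survey(pi=0.3, p=0.7, n=100_000)
# Pr(yes) = pi*p + (1 - pi)*(1 - p) = 0.42 for these parameters
print(responses.mean())
```

The interviewer sees only the "yes"/"no" stream; whether any given "yes" reflects membership in A or in B depends on the unobserved spin, which is what provides the plausible deniability discussed above.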
Warner constructed a maximum likelihood estimator of the proportion of the population in set A.
Let
• 𝜋 : True probability of A in the population
• 𝑝 : Probability that the spinner points to A
• $X_i = \begin{cases} 1, & \text{if the } i\text{th subject says yes} \\ 0, & \text{if the } i\text{th subject says no} \end{cases}$
Index the sample set so that the first 𝑛1 report “yes”, while the second 𝑛 − 𝑛1 report “no”.
The likelihood function of a sample set is
$$L = \left[ \pi p + (1-\pi)(1-p) \right]^{n_1} \left[ (1-\pi) p + \pi (1-p) \right]^{n - n_1} \tag{15.1}$$
The log of the likelihood function is:
log(𝐿) = 𝑛1 log [𝜋𝑝 + (1 − 𝜋)(1 − 𝑝)] + (𝑛 − 𝑛1 ) log [(1 − 𝜋)𝑝 + 𝜋(1 − 𝑝)] (15.2)
The first-order necessary condition for maximizing the log likelihood function with respect to 𝜋 is:
$$\frac{(n - n_1)(2p - 1)}{(1-\pi)p + \pi(1-p)} = \frac{n_1 (2p - 1)}{\pi p + (1-\pi)(1-p)}$$
or
$$\pi p + (1-\pi)(1-p) = \frac{n_1}{n} \tag{15.3}$$
If $p \neq \frac{1}{2}$, then the maximum likelihood estimator (MLE) of $\pi$ is:

$$\hat{\pi} = \frac{p - 1}{2p - 1} + \frac{n_1}{(2p - 1) n} \tag{15.4}$$
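As a quick sanity check (ours, not the lecture's), we can maximize the log likelihood (15.2) numerically and confirm that the maximizer matches the closed form (15.4); the sample counts below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

p, n, n1 = 0.7, 1000, 420   # hypothetical sample: 420 "yes" answers out of 1000

def neg_log_L(pi):
    # negative of the log likelihood (15.2)
    return -(n1 * np.log(pi * p + (1 - pi) * (1 - p))
             + (n - n1) * np.log((1 - pi) * p + pi * (1 - p)))

numerical = minimize_scalar(neg_log_L, bounds=(0.01, 0.99), method='bounded').x
closed_form = (p - 1) / (2 * p - 1) + n1 / ((2 * p - 1) * n)
print(numerical, closed_form)   # both approximately 0.3
```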
We compute the mean and variance of the MLE estimator $\hat{\pi}$ to be:

$$\mathbb{E}(\hat{\pi}) = \frac{1}{2p - 1} \left[ p - 1 + \frac{1}{n} \sum_{i=1}^{n} \mathbb{E} X_i \right] = \frac{1}{2p - 1} \left[ p - 1 + \pi p + (1-\pi)(1-p) \right] = \pi \tag{15.5}$$
and

$$\begin{aligned}
\mathrm{Var}(\hat{\pi}) &= \frac{n \, \mathrm{Var}(X_i)}{(2p-1)^2 n^2} \\
&= \frac{\left[ \pi p + (1-\pi)(1-p) \right] \left[ (1-\pi)p + \pi(1-p) \right]}{(2p-1)^2 n} \\
&= \frac{\frac{1}{4} + (2p^2 - 2p + \frac{1}{2})(-2\pi^2 + 2\pi - \frac{1}{2})}{(2p-1)^2 n} \\
&= \frac{1}{n} \left[ \frac{1}{16 (p - \frac{1}{2})^2} - \left( \pi - \frac{1}{2} \right)^2 \right]
\end{aligned} \tag{15.6}$$
Equation (15.5) indicates that $\hat{\pi}$ is an unbiased estimator of $\pi$, while equation (15.6) tells us the variance of the estimator.
To compute a confidence interval, first rewrite (15.6) as:

$$\mathrm{Var}(\hat{\pi}) = \frac{\frac{1}{4} - (\pi - \frac{1}{2})^2}{n} + \frac{\frac{1}{16(p - \frac{1}{2})^2} - \frac{1}{4}}{n} \tag{15.7}$$
This equation indicates that the variance of 𝜋̂ can be represented as a sum of the variance due to sampling plus the variance
due to the random device.
From the expressions above we can find that:
• When $p$ is $\frac{1}{2}$, expression (15.1) degenerates to a constant.
• When $p$ is 1 or 0, the randomized estimate degenerates to an estimator without randomized sampling.
We shall only discuss situations in which $p \in (\frac{1}{2}, 1)$
(a situation in which $p \in (0, \frac{1}{2})$ is symmetric).
From expressions (15.5) and (15.7) we can deduce that:
• The MSE of 𝜋̂ decreases as 𝑝 increases.
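Since $\hat{\pi}$ is unbiased, its MSE equals its variance (15.6), so the claim can be checked by evaluating (15.6) on a grid of $p$ values. The helper name `var_pi_hat` and the parameter values are ours, chosen only for illustration.

```python
def var_pi_hat(pi, p, n):
    # variance of the Warner MLE, equation (15.6)
    return (1 / (16 * (p - 0.5)**2) - (pi - 0.5)**2) / n

# variance (= MSE) falls monotonically as p rises toward 1
for p in [0.6, 0.7, 0.8, 0.9]:
    print(p, var_pi_hat(0.3, p, 1000))
```

Of course, larger $p$ also means less privacy protection, which is the tension explored in the next chapter.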
Let’s compare the preceding randomized-response method with a stylized non-randomized response method.
In our non-randomized response method, we suppose that:
• Members of Group A tell the truth with probability $T_a$ while members of Group B tell the truth with probability $T_b$
• $Y_i$ is 1 or 0 according to whether the sample’s $i$th member’s report is in Group A or not.
Then we can estimate 𝜋 as:
$$\hat{\pi} = \frac{\sum_{i=1}^{n} Y_i}{n} \tag{15.8}$$
We calculate the expectation, bias, and variance of the estimator to be:

$$\mathbb{E}(\hat{\pi}) = \pi T_a + (1-\pi)(1-T_b) \tag{15.9}$$

$$\mathrm{Bias}(\hat{\pi}) = \mathbb{E}(\hat{\pi} - \pi) = \pi \left[ T_a + T_b - 2 \right] + \left[ 1 - T_b \right] \tag{15.10}$$

$$\mathrm{Var}(\hat{\pi}) = \frac{\left[ \pi T_a + (1-\pi)(1-T_b) \right] \left[ 1 - \pi T_a - (1-\pi)(1-T_b) \right]}{n} \tag{15.11}$$
It is useful to define a

$$\text{MSE Ratio} = \frac{\text{Mean Square Error Randomized}}{\text{Mean Square Error Regular}}$$
We can compute MSE Ratios for different survey designs associated with different parameter values.
The following Python code computes objects we want to stare at in order to make comparisons under different values of
𝜋𝐴 and 𝑛:
class Comparison:
    def __init__(self, A, n):
        self.A = A
        self.n = n
        TaTb = np.array([[0.95, 1], [0.9, 1], [0.7, 1],
                         [0.5, 1], [1, 0.95], [1, 0.9],
                         [1, 0.7], [1, 0.5], [0.95, 0.95],
                         [0.9, 0.9], [0.7, 0.7], [0.5, 0.5]])
        self.p_arr = np.array([0.6, 0.7, 0.8, 0.9])
        self.p_map = dict(zip(self.p_arr,
                              [f"MSE Ratio: p = {x}" for x in self.p_arr]))
        self.template = pd.DataFrame(columns=self.p_arr)
        self.template[['T_a', 'T_b']] = TaTb
        self.template['Bias'] = None

    def theoretical(self):
        A = self.A
        n = self.n
        df = self.template.copy()
        df['Bias'] = A * (df['T_a'] + df['T_b'] - 2) + (1 - df['T_b'])
        for p in self.p_arr:
            df[p] = (1 / (16 * (p - 1/2)**2) - (A - 1/2)**2) / n / \
                    (df['Bias']**2 + ((A * df['T_a'] + (1 - A) * (1 - df['T_b']))
                     * (1 - A * df['T_a'] - (1 - A) * (1 - df['T_b'])) / n))
            df[p] = df[p].round(2)
        df = df.set_index(["T_a", "T_b", "Bias"]).rename(columns=self.p_map)
        return df
• 𝑛 = 1000
We can generate MSE Ratios theoretically using the above formulas.
We can also perform Monte Carlo simulations of an MSE Ratio.
df1_mc = cp1.MCsimulation()
df1_mc
df2_mc = cp2.MCsimulation()
df2_mc
We can also revisit a calculation in the concluding section of Warner [Warner, 1965] in which
• 𝜋𝐴 = 0.6
• 𝑛 = 2000
We use the code
df3_mc = cp3.MCsimulation()
df3_mc
Evidently, as $n$ increases, the randomized response method performs better in more situations.
CHAPTER
SIXTEEN
16.1 Overview
This QuantEcon lecture describes randomized response surveys in the tradition of Warner [Warner, 1965] that are designed
to protect respondents’ privacy.
Lars Ljungqvist [Ljungqvist, 1993] analyzed how a respondent’s decision about whether to answer truthfully depends on
expected utility.
The lecture tells how Ljungqvist used his framework to shed light on alternative randomized response survey techniques
proposed, for example, by [Lanke, 1975], [Lanke, 1976], [Leysieffer and Warner, 1976], [Anderson, 1976], [Fligner et
al., 1977], [Greenberg et al., 1977], [Greenberg et al., 1969].
We consider randomized response models with only two possible answers, “yes” and “no.”
The design determines probabilities
$$\Pr(\text{yes}|A) = 1 - \Pr(\text{no}|A)$$
$$\Pr(\text{yes}|A') = 1 - \Pr(\text{no}|A')$$
These design probabilities in turn can be used to compute the conditional probability of belonging to the sensitive group
𝐴 for a given response, say 𝑟:
$$\Pr(A|r) = \frac{\pi_A \Pr(r|A)}{\pi_A \Pr(r|A) + (1 - \pi_A) \Pr(r|A')} \tag{16.1}$$
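For instance (our illustration, not the lecture's), in Warner's design $\Pr(\text{yes}|A) = p$ and $\Pr(\text{yes}|A') = 1 - p$, so (16.1) can be evaluated directly; the function name and numbers below are hypothetical.

```python
def pr_A_given_r(pi_A, pr_r_A, pr_r_not_A):
    # Bayes' rule, equation (16.1)
    return pi_A * pr_r_A / (pi_A * pr_r_A + (1 - pi_A) * pr_r_not_A)

pi_A, p = 0.3, 0.7
pr_A_yes = pr_A_given_r(pi_A, p, 1 - p)   # posterior after a "yes"
pr_A_no = pr_A_given_r(pi_A, 1 - p, p)    # posterior after a "no"
print(pr_A_yes, pr_A_no)
```

A "yes" raises the interviewer's posterior probability of membership in $A$ above the prior $\pi_A$, while a "no" lowers it; how far apart these two posteriors are is exactly what the jeopardy measures below try to capture.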
Pr(yes|𝐴) = 1
Pr(𝐴|no) = 0
16.3.2 Lanke (1976)
Lanke (1975) [Lanke, 1975] argued that “it is membership in Group A that people may want to hide, not membership in
the complementary Group A’.”
For that reason, Lanke (1976) [Lanke, 1976] argued that an appropriate measure of protection is to minimize
Holding this measure constant, he explained under what conditions the smallest variance of the estimate was achieved
with the unrelated question model or Warner’s (1965) original model.
Fligner, Policello, and Singh [Fligner et al., 1977] reached a similar conclusion as Lanke (1976).
They measured “private protection” as
$$\frac{1 - \max \{ \Pr(A|\text{yes}), \Pr(A|\text{no}) \}}{1 - \pi_A} \tag{16.6}$$
Similarly, the hazard for an individual who does not belong to 𝐴 would be
$$\Pr(\text{yes}|A') \times \Pr(A|\text{yes}) + \Pr(\text{no}|A') \times \Pr(A|\text{no}) \tag{16.8}$$
Greenberg et al. (1977) also considered an alternative related measure of hazard that “is likely to be closer to the actual
concern felt by a respondent.”
The “limited hazard” for an individual in $A$ and $A'$ is

$$\Pr(\text{yes}|A) \times \Pr(A|\text{yes}) \tag{16.9}$$

and

$$\Pr(\text{yes}|A') \times \Pr(A|\text{yes}) \tag{16.10}$$
This measure is just the first term in (16.7), i.e., the probability that an individual answers “yes” and is perceived to belong
to 𝐴.
Key assumptions that underlie a randomized response technique for estimating the fraction of a population that belongs
to 𝐴 are:
• Assumption 1: Respondents feel discomfort from being thought of as belonging to 𝐴.
• Assumption 2: Respondents prefer to answer questions truthfully rather than to lie, so long as the cost of doing so is not too high. The cost is taken to be the discomfort in Assumption 1.
Let 𝑟𝑖 denote individual 𝑖’s response to the randomized question.
𝑟𝑖 can only take values “yes” or “no”.
For a given design of a randomized response interview and a given belief about the fraction of the population that belongs
to 𝐴, the respondent’s answer is associated with a conditional probability Pr(𝐴|𝑟𝑖 ) that the individual belongs to 𝐴.
Given 𝑟𝑖 and complete privacy, the individual’s utility is higher if 𝑟𝑖 represents a truthful answer rather than a lie.
In terms of a respondent’s expected utility as a function of $\Pr(A|r_i)$ and $r_i$:

$$\frac{\partial U_i(\Pr(A|r_i), \phi_i)}{\partial \Pr(A|r_i)} < 0, \quad \text{for } \phi_i \in \{\text{truth}, \text{lie}\} \tag{16.11}$$
and
If the correct answer is “no”, individual 𝑖 would volunteer the correct answer only if
Assume that
import matplotlib.pyplot as plt

x1 = np.arange(0, 1, 0.001)
y1 = x1 - 0.4
x2 = np.arange(0.4**2, 1, 0.001)
y2 = (pow(x2, 0.5) - 0.4)**2
x3 = np.arange(0.4**0.5, 1, 0.001)
y3 = pow(x3**2 - 0.4, 0.5)

plt.figure(figsize=(12, 10))
plt.plot(x1, y1, 'r-',
         label=r'Truth Border of: $U_i(Pr(A|r_i),\phi_i)=-Pr(A|r_i)+f(\phi_i)$')
plt.xlabel('Pr(A|yes)')
plt.ylabel('Pr(A|no)')
plt.text(0.42, 0.3, "Truth Telling", fontdict={'size': 28, 'style': 'italic'})
plt.text(0.8, 0.1, "Lying", fontdict={'size': 28, 'style': 'italic'})
plt.legend(loc=0, fontsize='large')
plt.title('Figure 1.1')
plt.show()
and plot the “truth telling” and “lying area” of individual 𝑖 in Figure 1.2:
x1 = np.arange(0, 1, 0.001)
y1 = x1 - 0.4
z1 = x1
z2 = 0

plt.figure(figsize=(12, 10))
plt.plot(x1, y1, 'r-',
         label=r'Truth Border of: $U_i(Pr(A|r_i),\phi_i)=-Pr(A|r_i)+f(\phi_i)$')
plt.xlabel('Pr(A|yes)')
plt.ylabel('Pr(A|no)')
# shade the two regions, using z1 and z2 defined above
plt.fill_between(x1, y1, z1, facecolor='blue', alpha=0.05, label='Truth Telling')
plt.fill_between(x1, z2, y1, facecolor='green', alpha=0.05, label='Lying')
plt.legend(loc=0, fontsize='large')
plt.title('Figure 1.2')
plt.show()
A statistician’s objective is
• to find a randomized response survey design that minimizes the bias and the variance of the estimator.
Given a design that ensures truthful answers by all respondents, Anderson (1976, Theorem 1) [Anderson, 1976] showed that the minimum variance estimate in the two-response model has variance

$$V(\Pr(A|\text{yes}), \Pr(A|\text{no})) = \frac{\pi_A^2 (1 - \pi_A)^2}{n} \times \frac{1}{\Pr(A|\text{yes}) - \pi_A} \times \frac{1}{\pi_A - \Pr(A|\text{no})} \tag{16.17}$$

$$\left. \frac{d \Pr(A|\text{no})}{d \Pr(A|\text{yes})} \right|_{\text{constant variance}} = \frac{\pi_A - \Pr(A|\text{no})}{\Pr(A|\text{yes}) - \pi_A} > 0 \tag{16.18}$$
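To get a feel for (16.17), it can be evaluated at a few hypothetical posteriors (the function name and all numbers below are our own illustration; the formula requires $\Pr(A|\text{yes}) > \pi_A > \Pr(A|\text{no})$).

```python
def anderson_variance(pr_yes, pr_no, pi_A, n):
    # minimum variance from Anderson (1976), equation (16.17)
    # valid only when pr_yes > pi_A > pr_no
    return (pi_A**2 * (1 - pi_A)**2 / n) / ((pr_yes - pi_A) * (pi_A - pr_no))

v = anderson_variance(pr_yes=0.5, pr_no=0.15, pi_A=0.3, n=100)
print(v)
# spreading the posteriors further apart lowers the attainable variance
print(anderson_variance(pr_yes=0.6, pr_no=0.15, pi_A=0.3, n=100))
```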
class Iso_Variance:
    def __init__(self, pi, n):
        self.pi = pi
        self.n = n

    def plotting_iso_variance_curve(self):
        pi = self.pi
        n = self.n

        # grids and iso-variance levels (as also used in the later figures)
        nv = [0.27, 0.34, 0.49, 0.74, 0.92, 1.1, 1.47, 2.94, 14.7]
        x = np.arange(0, 1, 0.001)
        x0 = np.arange(pi, 1, 0.001)
        x2 = np.arange(0, pi, 0.001)
        y1 = [pi for i in x0]
        y2 = [pi for i in x2]

        plt.figure(figsize=(12, 10))
        # y0 is the Warner-design curve, constructed in a portion of the
        # listing not shown in this excerpt
        plt.plot(x0, y0, 'm-', label='Warner')
        plt.plot(x, x, 'c:', linewidth=2)
        plt.plot(x0, y1, 'c:', linewidth=2)
        plt.plot(y2, x2, 'c:', linewidth=2)
        for i in range(len(nv)):
            y = pi - (pi**2 * (1 - pi)**2) / (n * (nv[i] / n) * (x0 - pi + 1e-8))
            plt.plot(x0, y, 'k--', alpha=1 - 0.07 * i, label=f'V{i+1}')
        plt.xlim([0, 1])
        plt.ylim([0, 0.5])
        plt.xlabel('Pr(A|yes)')
A point on an iso-variance curve can be attained with the unrelated question design.
We now focus on finding an “optimal survey design” that
• Minimizes the variance of the estimator subject to privacy restrictions.
To obtain an optimal design, we first superimpose all individuals’ truth borders on the iso-variance mapping.
To construct an optimal design
• The statistician should find the intersection of areas above all truth borders; that is, the set of conditional probabilities
ensuring truthful answers from all respondents.
• The point where this set touches the lowest possible iso-variance curve determines an optimal survey design.
Consequently, a minimum variance unbiased estimator is pinned down by the individual who is least willing to volunteer a truthful answer.
Here are some comments about the model design:
• An individual’s decision of whether or not to answer truthfully depends on his or her belief about other respondents’
behavior, because this determines the individual’s calculation of Pr(𝐴|yes) and Pr(𝐴|no).
• An equilibrium of the optimal design model is a Nash equilibrium of a noncooperative game.
• Assumption (16.12) is sufficient to guarantee existence of an optimal model design. By choosing Pr(𝐴|yes) and
Pr(𝐴|no) sufficiently close to each other, all respondents will find it optimal to answer truthfully. The closer are
these probabilities, the higher the variance of the estimator becomes.
• If respondents experience a large enough increase in expected utility from telling the truth, then there is no need to
use a randomized response model. The smallest possible variance of the estimate is then obtained at Pr(𝐴|yes) = 1
and Pr(𝐴|no) = 0 ; that is, when respondents answer truthfully to direct questioning.
• A more general design problem would be to minimize some weighted sum of the estimator’s variance and bias. It
would be optimal to accept some lies from the most “reluctant” respondents.
Following Lanke’s suggestion, the statistician should find the highest possible Pr(𝐴|yes) consistent with truth telling while
Pr(𝐴|no) is fixed at 0. The variance is then minimized at point 𝑋 in Figure 3.
However, we can see that in Figure 3, point 𝑍 offers a smaller variance that still allows cooperation of the respondents,
and it is achievable following our discussion of the truth border in Part III:
pi = 0.3
n = 100
nv = [0.27, 0.34, 0.49, 0.74, 0.92, 1.1, 1.47, 2.94, 14.7]
x = np.arange(0, 1, 0.001)
y = x - 0.4
z = x
x0 = np.arange(pi, 1, 0.001)
x2 = np.arange(0, pi, 0.001)
y1 = [pi for i in x0]
y2 = [pi for i in x2]
plt.figure(figsize=(12, 10))
plt.plot(x, x, 'c:', linewidth=2)
plt.plot(x0, y1, 'c:', linewidth=2)
plt.plot(y2, x2, 'c:', linewidth=2)
plt.plot(x, y, 'r-', label='Truth Border')
plt.fill_between(x, y, z, facecolor='blue', alpha=0.05, label='truth telling')
plt.fill_between(x, 0, y, facecolor='green', alpha=0.05, label='lying')
for i in range(len(nv)):
    y = pi - (pi**2 * (1 - pi)**2) / (n * (nv[i] / n) * (x0 - pi + 1e-8))
    plt.plot(x0, y, 'k--', alpha=1 - 0.07 * i, label=f'V{i+1}')
Leysieffer and Warner (1976) recommend a two-dimensional measure of jeopardy that reduces to a single dimension when there is “no jeopardy in a ‘no’ answer”, which means that
Pr(yes|𝐴) = 1
and
Pr(𝐴|no) = 0
Pr(𝐴|no) = 0
Here the gain from lying is too high for someone to volunteer a “yes” answer.
This means that
def f(x):
    if x < 0.16:
        return 0
    else:
        return (pow(x, 0.5) - 0.4)**2

pi = 0.3
n = 100
nv = [0.27, 0.34, 0.49, 0.74, 0.92, 1.1, 1.47, 2.94, 14.7]
x = np.arange(0, 1, 0.001)
y = [f(i) for i in x]
z = x
x0 = np.arange(pi, 1, 0.001)
x2 = np.arange(0, pi, 0.001)
y1 = [pi for i in x0]
y2 = [pi for i in x2]
x3 = np.arange(0.16, 1, 0.001)
y3 = (pow(x3, 0.5) - 0.4)**2

plt.figure(figsize=(12, 10))
plt.plot(x, x, 'c:', linewidth=2)
plt.plot(x0, y1, 'c:', linewidth=2)
plt.plot(y2, x2, 'c:', linewidth=2)
plt.plot(x3, y3, 'b-', label='Truth Border')
plt.fill_between(x, y, z, facecolor='blue', alpha=0.05, label='Truth telling')
plt.fill_between(x3, 0, y3, facecolor='green', alpha=0.05, label='Lying')
for i in range(len(nv)):
    y = pi - (pi**2 * (1 - pi)**2) / (n * (nv[i] / n) * (x0 - pi + 1e-8))
    plt.plot(x0, y, 'k--', alpha=1 - 0.07 * i, label=f'V{i+1}')
plt.scatter(0.61, 0.146, c='r', marker='*', label='Z', s=150)
plt.xlim([0, 1])
plt.ylim([0, 0.5])
They also considered an alternative related measure of hazard that they said “is likely to be closer to the actual concern
felt by a respondent.”
Their “limited hazard” for an individual in $A$ and $A'$ is

$$\Pr(\text{yes}|A) \times \Pr(A|\text{yes}) \tag{16.23}$$

and

$$\Pr(\text{yes}|A') \times \Pr(A|\text{yes}) \tag{16.24}$$
According to Greenberg et al. (1977), a respondent commits himself or herself to answer truthfully on the basis of a
probability in (16.21) or (16.23) before randomly selecting the question to be answered.
Suppose that the appropriate privacy measure is captured by the notion of “limited hazard” in (16.23) and (16.24).
Consider an unrelated question model where the unrelated question is replaced by the instruction “Say the word ‘no’”,
which implies that
Pr(𝐴|yes) = 1
Linear Programming
CHAPTER
SEVENTEEN
OPTIMAL TRANSPORT
17.1 Overview
The transportation or optimal transport problem is interesting both because of its many applications and because of
its important role in the history of economic theory.
In this lecture, we describe the problem, tell how linear programming is a key tool for solving it, and then provide some
examples.
We will provide other applications in followup lectures.
The optimal transport problem was studied in early work about linear programming, as summarized for example by
[Dorfman et al., 1958]. A modern reference about applications in economics is [Galichon, 2016].
Below, we show how to solve the optimal transport problem using several implementations of linear programming, in-
cluding, in order,
1. the linprog solver from SciPy,
2. the linprog_simplex solver from QuantEcon and
3. the simplex-based solvers included in the Python Optimal Transport package.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import linprog
from quantecon.optimize.linprog_simplex import linprog_simplex
import ot
from scipy.stats import betabinom
import networkx as nx
Summing the 𝑞𝑗 ’s across all 𝑗’s and the 𝑝𝑖 ’s across all 𝑖’s indicates that the total capacity of all the factories equals total
requirements at all locations:
$$\sum_{j=1}^{n} q_j = \sum_{j=1}^{n} \sum_{i=1}^{m} x_{ij} = \sum_{i=1}^{m} \sum_{j=1}^{n} x_{ij} = \sum_{i=1}^{m} p_i \tag{17.2}$$
The presence of the restrictions in (17.2) will be the source of one redundancy in the complete set of restrictions that we
describe below.
More about this later.
In this section we discuss using standard linear programming solvers to tackle the optimal transport problem.
$$\begin{aligned}
\min_{X} \ & \mathrm{tr}(C' X) \\
\text{subject to } \ & X \mathbf{1}_n = p \\
& X' \mathbf{1}_m = q \\
& X \geq 0
\end{aligned}$$
We can convert the matrix 𝑋 into a vector by stacking all of its columns into a column vector.
Doing this is called vectorization, an operation that we denote vec(𝑋).
Similarly, we convert the matrix 𝐶 into an 𝑚𝑛-dimensional vector vec(𝐶).
The objective function can be expressed as the inner product between vec(𝐶) and vec(𝑋):
vec(𝐶)′ ⋅ vec(𝑋).
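This identity, $\mathrm{tr}(C'X) = \mathrm{vec}(C)' \mathrm{vec}(X)$, is easy to verify numerically (a check of our own, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((3, 5))
X = rng.random((3, 5))

lhs = np.trace(C.T @ X)                                      # tr(C'X)
rhs = C.reshape(-1, order='F') @ X.reshape(-1, order='F')    # vec(C) . vec(X)
print(np.isclose(lhs, rhs))
```

Both sides simply compute $\sum_{i,j} c_{ij} x_{ij}$, which is why the objective survives vectorization unchanged.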
To express the constraints in terms of vec(𝑋), we use a Kronecker product denoted by ⊗ and defined as follows.
Suppose 𝐴 is an 𝑚 × 𝑠 matrix with entries (𝑎𝑖𝑗 ) and that 𝐵 is an 𝑛 × 𝑡 matrix.
The Kronecker product of $A$ and $B$ is defined, in block matrix form, by

$$A \otimes B = \begin{pmatrix} a_{11} B & a_{12} B & \cdots & a_{1s} B \\ \vdots & \vdots & & \vdots \\ a_{m1} B & a_{m2} B & \cdots & a_{ms} B \end{pmatrix}$$

$A \otimes B$ is an $mn \times st$ matrix.
It has the property that for any 𝑚 × 𝑛 matrix 𝑋
(1′𝑛 ⊗ I𝑚 ) vec(𝑋) = 𝑝.
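Here $(\mathbf{1}'_n \otimes \mathbf{I}_m)$ is the $m \times mn$ block matrix $[\mathbf{I}_m \ \mathbf{I}_m \ \cdots \ \mathbf{I}_m]$, so applying it to $\mathrm{vec}(X)$ sums the columns of $X$, reproducing the constraint $X \mathbf{1}_n = p$. A quick check (ours, not in the lecture):

```python
import numpy as np

m, n = 3, 5
rng = np.random.default_rng(1)
X = rng.random((m, n))

# (1'_n kron I_m) vec(X), with vec() taken column-major
lhs = np.kron(np.ones((1, n)), np.identity(m)) @ X.reshape(-1, order='F')
rhs = X.sum(axis=1)   # the row sums, i.e. X @ 1_n
print(np.allclose(lhs, rhs))
```

The second constraint block, $(\mathbf{I}_n \otimes \mathbf{1}'_m) \mathrm{vec}(X) = q$, works analogously by summing within each column of $X$.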
With $z := \mathrm{vec}(X)$, our problem can now be expressed in terms of an $mn$-dimensional vector of decision variables:

$$\begin{aligned}
\min_{z} \ & \mathrm{vec}(C)' z \\
\text{subject to } \ & A z = b \\
& z \geq 0
\end{aligned} \tag{17.4}$$

where

$$A = \begin{pmatrix} \mathbf{1}'_n \otimes \mathbf{I}_m \\ \mathbf{I}_n \otimes \mathbf{1}'_m \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} p \\ q \end{pmatrix}$$
17.3.2 An Application
We now provide an example that takes the form (17.4) that we’ll solve by deploying the function linprog.
The table below provides numbers for the requirements vector 𝑞, the capacity vector 𝑝, and entries 𝑐𝑖𝑗 of the cost-of-
shipping matrix 𝐶.
The numbers in the above table tell us to set 𝑚 = 3, 𝑛 = 5, and construct the following objects:
$$p = \begin{pmatrix} 50 \\ 100 \\ 150 \end{pmatrix}, \quad q = \begin{pmatrix} 25 \\ 115 \\ 60 \\ 30 \\ 70 \end{pmatrix} \quad \text{and} \quad C = \begin{pmatrix} 10 & 15 & 20 & 20 & 40 \\ 20 & 40 & 15 & 30 & 30 \\ 30 & 35 & 40 & 55 & 25 \end{pmatrix}$$
Let’s write Python code that sets up the problem and solves it.
# Define parameters
m = 3
n = 5
# Data from the table above
p = np.array([50, 100, 150])
q = np.array([25, 115, 60, 30, 70])
C = np.array([[10, 15, 20, 20, 40],
              [20, 40, 15, 30, 30],
              [30, 35, 40, 55, 25]])
# Vectorize matrix C
C_vec = C.reshape((m*n, 1), order='F')
# Construct matrix A via Kronecker products, and vector b
A = np.vstack([np.kron(np.ones((1, n)), np.identity(m)),
               np.kron(np.identity(n), np.ones((1, m)))])
b = np.hstack([p, q])
# Solve the primal problem
res = linprog(C_vec, A_eq=A, b_eq=b)
# Print results
print("message:", res.message)
print("nit:", res.nit)
print("fun:", res.fun)
print("z:", res.x)
print("X:", res.x.reshape((m, n), order='F'))
Notice how, in the line C_vec = C.reshape((m*n, 1), order='F'), we are careful to vectorize using the
flag order='F'.
This is consistent with converting 𝐶 into a vector by stacking all of its columns into a column vector.
Here 'F' stands for “Fortran”, and we are using Fortran style column-major order.
(For an alternative approach, using Python’s default row-major ordering, see this lecture by Alfred Galichon.)
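A tiny illustration of the difference between the two orderings (ours, with a made-up matrix):

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4],
              [5, 6]])

# column-major (Fortran) order stacks columns on top of each other
print(M.reshape(-1, order='F'))   # [1 3 5 2 4 6]

# the default row-major (C) order concatenates the rows instead
print(M.reshape(-1, order='C'))   # [1 2 3 4 5 6]
```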
Interpreting the warning:
The above warning message from SciPy points out that A is not full rank.
This indicates that the linear program has been set up to include one or more redundant constraints.
Here, the source of the redundancy is the structure of restrictions (17.2).
Let’s explore this further by printing out 𝐴 and staring at it.
array([[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.],
[0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0., 1.],
[1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1.]])
The singularity of 𝐴 reflects that the first three constraints and the last five constraints both require that “total requirements
equal total capacities” expressed in (17.2).
One equality constraint here is redundant.
Below we drop one of the equality constraints, and use only 7 of them.
After doing this, we attain the same minimized cost.
However, we find a different transportation plan.
Though it is a different plan, it attains the same cost!
Evidently, it is slightly quicker to work with the system that removed a redundant constraint.
Let’s drill down and do some more calculations to help us understand whether or not our finding two different optimal
transport plans reflects our having dropped a redundant equality constraint.
Hint
It will turn out that dropping a redundant equality isn’t really what mattered.
To verify our hint, we shall simply use all of the original equality constraints (including a redundant one), but we’ll just
shuffle the order of the constraints.
arr = np.arange(m+n)
sol_found = []
cost = []

# loop reconstructed from context: shuffle the constraint order repeatedly
# and record each distinct optimal solution that the solver returns
for i in range(100):
    np.random.shuffle(arr)
    res_shuffle = linprog(C_vec, A_eq=A[arr], b_eq=b[arr])
    sol = tuple(np.round(res_shuffle.x, 6))
    if sol not in sol_found:
        sol_found.append(sol)
        cost.append(res_shuffle.fun)

for i in range(len(sol_found)):
    print(f"transportation plan {i}: ", sol_found[i])
    print(f"     minimized cost {i}: ", cost[i])
transportation plan 0: (0.0, 10.0, 15.0, 50.0, 0.0, 65.0, 0.0, 60.0, 0.0, 0.0, 30.
↪0, 0.0, 0.0, 0.0, 70.0)
Ah hah! As you can see, putting constraints in different orders in this case uncovers two optimal transportation plans that
achieve the same minimized cost.
These are the same two plans computed earlier.
Next, we show that leaving out the first constraint “accidentally” leads to the initial plan that we computed.
res.x
array([ 0., 10., 15., 50., 0., 65., 0., 60., 0., 0., 30., 0., 0.,
0., 70.])
Here the matrix $X$ contains entries $x_{ij}$ that tell the amounts shipped from factory $i = 1, 2, 3$ to location $j = 1, 2, \ldots, 5$.
The vector 𝑧 evidently equals vec(𝑋).
The minimized cost from the optimal transport plan is given by the fun variable.
We can also solve optimal transportation problems using a powerful tool from QuantEcon, namely, quantecon.
optimize.linprog_simplex.
While this routine uses the same simplex algorithm as scipy.optimize.linprog, the code is accelerated by using
a just-in-time compiler shipped in the numba library.
As you will see very soon, relative to scipy.optimize.linprog, the time required to solve an optimal transportation problem can be reduced significantly.
# Equality constraints
A_eq = np.zeros((m+n, m*n))
for i in range(m):
for j in range(n):
A_eq[i, i*n+j] = 1
A_eq[m+j, i*n+j] = 1
Since the two LP solvers use the same simplex algorithm, we expect to get exactly the same solutions.
# scipy.optimize.linprog
%time res = linprog(C_vec, A_eq=A[:-1, :], b_eq=b[:-1])
CPU times: user 2.74 ms, sys: 203 µs, total: 2.95 ms
Wall time: 2.04 ms
# quantecon.optimize.linprog_simplex
%time out = linprog_simplex(-c, A_eq=A_eq, b_eq=b_eq)
Let $u$, $v$ denote vectors of dual decision variables with entries $(u_i)$, $(v_j)$.
The dual to minimization problem (17.1) is the maximization problem:

$$\begin{aligned}
\max_{u_i, v_j} \ & \sum_{i=1}^{m} p_i u_i + \sum_{j=1}^{n} q_j v_j \\
\text{subject to } \ & u_i + v_j \leq c_{ij}, \quad i = 1, 2, \ldots, m; \ j = 1, 2, \ldots, n
\end{aligned} \tag{17.5}$$
$$\begin{aligned}
\max_{u, v} \ & p' u + q' v \\
\text{subject to } \ & A' \begin{pmatrix} u \\ v \end{pmatrix} \leq \mathrm{vec}(C)
\end{aligned} \tag{17.6}$$
For the same numerical example described above, let’s solve the dual problem.
# Print results
print("message:", res_dual.message)
print("nit:", res_dual.nit)
print("fun:", res_dual.fun)
print("u:", res_dual.x[:m])
print("v:", res_dual.x[-n:])
And the shadow prices computed by the two programs are identical.
res_dual_qe.x
res_dual.x
SimplexResult(x=array([ 5., 15., 25.,  5., 10.,  0., 15.,  0.]), lambd=array([ 0., 35.,  0., 15.,  0., 25.,  0., 60., 15.,  0.,  0., 80.,  0.,  0., 70.]), fun=7225.0, success=True, status=0, num_iter=24)
message:
Optimization terminated successfully. (HiGHS Status 7: Optimal)
success:
True
status:
0
fun:
-7225.0
x:
[ 5.000e+00 1.500e+01 2.500e+01 5.000e+00 1.000e+01
-0.000e+00 1.500e+01]
nit: 9
lower: residual: [ inf inf inf inf
inf inf inf]
marginals: [ 0.000e+00 0.000e+00 0.000e+00 0.000e+00
0.000e+00 0.000e+00 0.000e+00]
upper: residual: [ inf inf inf inf
inf inf inf]
marginals: [ 0.000e+00 0.000e+00 0.000e+00 0.000e+00
0.000e+00 0.000e+00 0.000e+00]
eqlin: residual: []
marginals: []
ineqlin: residual: [ 0.000e+00 0.000e+00 ... 1.500e+01
0.000e+00]
marginals: [-0.000e+00 -1.000e+01 ... -0.000e+00
-7.000e+01]
mip_node_count: 0
mip_dual_bound: 0.0
mip_gap: 0.0
By strong duality (please see the lecture on Linear Programming), we know that:
$$\sum_{i=1}^{m} \sum_{j=1}^{n} c_{ij} x_{ij} = \sum_{i=1}^{m} p_i u_i + \sum_{j=1}^{n} q_j v_j$$
One more unit of capacity in factory $i$, i.e., an increase in $p_i$, results in $u_i$ additional transportation costs.
Thus, 𝑢𝑖 describes the cost of shipping one unit from factory 𝑖.
Call this the ship-out cost of one unit shipped from factory 𝑖.
Similarly, 𝑣𝑗 is the cost of shipping one unit to location 𝑗.
Call this the ship-in cost of one unit to location 𝑗.
Strong duality implies that total transportation costs equal total ship-out costs plus total ship-in costs.
It is reasonable that, for one unit of a product, ship-out cost 𝑢𝑖 plus ship-in cost 𝑣𝑗 should equal transportation cost 𝑐𝑖𝑗 .
This equality is assured by complementary slackness conditions that state that whenever 𝑥𝑖𝑗 > 0, meaning that there
are positive shipments from factory 𝑖 to location 𝑗, it must be true that 𝑢𝑖 + 𝑣𝑗 = 𝑐𝑖𝑗 .
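Both strong duality and complementary slackness can be confirmed on a tiny transport problem (our own example with made-up numbers; the setup mirrors the vectorization above).

```python
import numpy as np
from scipy.optimize import linprog

# a tiny balanced transport problem (hypothetical numbers)
p = np.array([2.0, 3.0])                  # factory capacities
q = np.array([1.0, 4.0])                  # location requirements
C = np.array([[1.0, 2.0],
              [3.0, 1.0]])                # shipping costs c_ij
m, n = C.shape

A = np.vstack([np.kron(np.ones((1, n)), np.identity(m)),
               np.kron(np.identity(n), np.ones((1, m)))])
b = np.hstack([p, q])
C_vec = C.reshape(-1, order='F')

# primal: min vec(C)'z  s.t.  Az = b, z >= 0  (drop one redundant constraint)
primal = linprog(C_vec, A_eq=A[:-1], b_eq=b[:-1])
X = primal.x.reshape((m, n), order='F')

# dual: max p'u + q'v  s.t.  u_i + v_j <= c_ij, with u and v free
dual = linprog(-b, A_ub=A.T, b_ub=C_vec, bounds=(None, None))
u, v = dual.x[:m], dual.x[m:]

print(primal.fun, -dual.fun)              # equal, by strong duality

# complementary slackness: x_ij > 0 implies u_i + v_j = c_ij
for i in range(m):
    for j in range(n):
        if X[i, j] > 1e-9:
            assert abs(u[i] + v[j] - C[i, j]) < 1e-7
```

Since linprog minimizes, the dual objective is passed as `-b` and its optimal value negated; the inner assertions pass because every route actually used is priced exactly at its ship-out plus ship-in cost.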
There is an excellent Python package for optimal transport that simplifies some of the steps we took above.
In particular, the package takes care of the vectorization steps before passing the data out to a linear programming routine.
(That said, the discussion provided above on vectorization remains important, since we want to understand what happens
under the hood.)
The following line of code solves the example application discussed above using linear programming.
X = ot.emd(p, q, C)
X

The call emits a warning when p, q, and C hold integers: the inputs are cast, possibly with a loss of precision. Passing floating-point inputs avoids the warning:

X = ot.emd(p, q, C)
Sure enough, we have the same solution and the same cost
total_cost = np.sum(X * C)
total_cost
7225
Now let’s try using the same package on a slightly larger application.
The application has the same interpretation as above but we will also give each node (i.e., vertex) a location in the plane.
This will allow us to plot the resulting transport plan as edges in a graph.
The following class defines a node by
• its location (𝑥, 𝑦) ∈ ℝ2 ,
• its group (factory or location, denoted by p or q) and
• its mass (e.g., 𝑝𝑖 or 𝑞𝑗 ).
class Node:

    def __init__(self, x, y, mass, group, name):
        # the parameter list is reconstructed from the attributes set below
        self.x, self.y = x, y
        self.mass, self.group = mass, group
        self.name = name
Next we write a function that repeatedly calls the class above to build instances.
It allocates to the nodes it creates their location, mass, and group.
Locations are assigned randomly.
def build_nodes_of_one_type(group, n, seed):
    # the signature and the append/naming lines are reconstructed from context
    nodes = []
    np.random.seed(seed)
    for i in range(n):
        if group == 'p':
            m = 1/n
            x = np.random.uniform(-2, 2)
            y = np.random.uniform(-2, 2)
        else:
            m = betabinom.pmf(i, n-1, 2, 2)
            x = 0.6 * np.random.uniform(-1.5, 1.5)
            y = 0.6 * np.random.uniform(-1.5, 1.5)
        nodes.append(Node(x, y, m, group, group + str(i)))
    return nodes
Now we build two lists of nodes, each one containing one type (factories or locations)
n_p = 32
n_q = 32
For the cost matrix 𝐶, we use the Euclidean distance between each factory and location.
c = np.empty((n_p, n_q))
for i in range(n_p):
for j in range(n_q):
x0, y0 = p_list[i].x, p_list[i].y
x1, y1 = q_list[j].x, q_list[j].y
c[i, j] = np.sqrt((x0-x1)**2 + (y0-y1)**2)
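The double loop above can also be written with NumPy broadcasting, which is equivalent and typically much faster (our suggestion; the coordinate arrays below are stand-ins for the node locations):

```python
import numpy as np

rng = np.random.default_rng(0)
p_xy = rng.uniform(-2, 2, size=(32, 2))       # stand-in factory coordinates
q_xy = rng.uniform(-1, 1, size=(32, 2))       # stand-in location coordinates

# pairwise Euclidean distances: broadcast (32, 1, 2) against (1, 32, 2)
c_fast = np.sqrt(((p_xy[:, None, :] - q_xy[None, :, :])**2).sum(axis=2))

# the same result as the explicit double loop
c_loop = np.empty((32, 32))
for i in range(32):
    for j in range(32):
        c_loop[i, j] = np.sqrt(((p_xy[i] - q_xy[j])**2).sum())
print(np.allclose(c_fast, c_loop))
```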
g = nx.DiGraph()
g.add_nodes_from([p.name for p in p_list])
g.add_nodes_from([q.name for q in q_list])
for i in range(n_p):
for j in range(n_q):
if pi[i, j] > 0:
g.add_edge(p_list[i].name, q_list[j].name, weight=pi[i, j])
node_pos_dict={}
for p in p_list:
node_pos_dict[p.name] = (p.x, p.y)
for q in q_list:
node_pos_dict[q.name] = (q.x, q.y)
node_color_list = []
node_size_list = []
scale = 8_000
for p in p_list:
node_color_list.append('blue')
node_size_list.append(p.mass * scale)
for q in q_list:
node_color_list.append('red')
node_size_list.append(q.mass * scale)
nx.draw_networkx_nodes(g,
node_pos_dict,
node_color=node_color_list,
node_size=node_size_list,
edgecolors='grey',
linewidths=1,
alpha=0.5,
ax=ax)
nx.draw_networkx_edges(g,
node_pos_dict,
arrows=True,
connectionstyle='arc3,rad=0.1',
alpha=0.6)
plt.show()
CHAPTER
EIGHTEEN
Contents
This lecture uses the class Neumann to calculate key objects of a linear growth model of John von Neumann [von
Neumann, 1937] that was generalized by Kemeny, Morgenstern and Thompson [Kemeny et al., 1956].
Objects of interest are the maximal expansion rate (𝛼), the interest factor (𝛽), the optimal intensities (𝑥), and prices (𝑝).
In addition to watching how the towering mind of John von Neumann formulated an equilibrium model of price and
quantity vectors in balanced growth, this lecture shows how fruitfully to employ the following important tools:
• a zero-sum two-player game
• linear programming
• the Perron-Frobenius theorem
We’ll begin with some imports:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, linprog
from textwrap import dedent
np.set_printoptions(precision=2)
class Neumann(object):
"""
This class describes the Generalized von Neumann growth model as it was
discussed in Kemeny et al. (1956, ECTA) and Gale (1960, Chapter 9.5):
Parameters
----------
A : array_like or scalar(float)
Part of the state transition equation. It should be `n x n`
B : array_like or scalar(float)
Part of the state transition equation. It should be `n x k`
"""
def __repr__(self):
return self.__str__()
def __str__(self):
me = """
Generalized von Neumann expanding model:
- number of goods : {n}
- number of activities : {m}
Assumptions:
- AI: every column of B has a positive entry : {AI}
- AII: every row of A has a positive entry : {AII}
"""
# Irreducible : {irr}
return dedent(me.format(n=self.n, m=self.m,
AI=self.AI, AII=self.AII))
def bounds(self):
"""
Calculate the trivial upper and lower bounds for alpha (expansion rate)
and beta (interest factor). See the proof of Theorem 9.8 in Gale (1960)
"""
n, m = self.n, self.m
A, B = self.A, self.B
return LB, UB
M(gamma) = B - gamma * A
Outputs:
--------
value: scalar
value of the zero-sum game
strategy: vector
if dual = False, it is the intensity vector,
if dual = True, it is the price vector
"""
if dual == False:
# Solve the primal LP (for details see the description)
# (1) Define the problem for v as a maximization (linprog minimizes)
c = np.hstack([np.zeros(m), -1])
else:
# Solve the dual LP (for details see the description)
# (1) Define the problem for v as a maximization (linprog minimizes)
c = np.hstack([np.zeros(n), 1])
if res.status != 0:
print(res.message)
Outputs:
--------
alpha: scalar
optimal expansion rate
"""
LB, UB = self.bounds()
γ = (LB + UB) / 2
ZS = self.zerosum(γ=γ)
V = ZS[0] # value of the game with γ
if V >= 0:
LB = γ
else:
UB = γ
return γ, x, p
Outputs:
--------
beta: scalar
optimal interest rate
"""
LB, UB = self.bounds()
if V > 0:
LB = γ
else:
UB = γ
return γ, x, p
18.1 Notation
$$b_{\cdot,j} > 0 \quad \forall j = 1, 2, \ldots, n$$
$$a_{i,\cdot} > 0 \quad \forall i = 1, 2, \ldots, m$$
B1 = np.array([[1, 0, 0, 0],
[0, 0, 2, 0],
[0, 1, 0, 1]])
B2 = np.array([[1, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 1, 0, 1]])
The following code sets up our first Neumann economy or Neumann instance
n1 = Neumann(A1, B1)
n1
Assumptions:
- AI: every column of B has a positive entry : True
- AII: every row of A has a positive entry : True
n2 = Neumann(A2, B2)
n2
Assumptions:
- AI: every column of B has a positive entry : True
- AII: every row of A has a positive entry : True
Attach a time index 𝑡 to the preceding objects, regard an economy as a dynamic system, and study sequences
An interesting special case holds the technology process constant and investigates the dynamics of quantities and prices
only.
Accordingly, in the rest of this lecture, we assume that (𝐴𝑡 , 𝐵𝑡 ) = (𝐴, 𝐵) for all 𝑡 ≥ 0.
A crucial element of the dynamic interpretation involves the timing of production.
We assume that production (consumption of inputs) takes place in period $t$, while the consequent output materializes in period $t+1$, i.e., consumption of $x_t^T A$ in period $t$ results in $x_t^T B$ amounts of output in period $t+1$. These timing conventions give rise to the feasibility condition

$$x_t^T B \geq x_{t+1}^T A \quad \forall t \geq 1$$
which asserts that no more goods can be used today than were produced yesterday.
Accordingly, $Ap_t$ tells the costs of production in period $t$ and $Bp_t$ tells revenues in period $t+1$.
$$x_{t+1} \, ./ \, x_t = \alpha, \quad \forall t \geq 0$$

where $./$ denotes elementwise division of one vector by another.
With balanced growth, the law of motion of $x$ is evidently $x_{t+1} = \alpha x_t$, and so we can rewrite the feasibility constraint as

$$x_t^T B \geq \alpha x_t^T A \quad \forall t$$
In the same spirit, define 𝛽 ∈ ℝ as the interest factor per unit of time.
We assume that it is always possible to earn a gross return equal to the constant interest factor 𝛽 by investing “outside the
model”.
Under this assumption about outside investment opportunities, a no-arbitrage condition gives rise to the following (no
profit) restriction on the price sequence:
$$\beta A p_t \geq B p_t \quad \forall t$$
This says that production cannot yield a return greater than that offered by the outside investment opportunity (here we
compare values in period 𝑡 + 1).
The balanced growth assumption allows us to drop time subscripts and conduct an analysis purely in terms of a time-
invariant growth rate 𝛼 and interest factor 𝛽.
18.4 Duality
Two problems are connected by a remarkable dual relationship between technological and valuation characteristics of the
economy:
Definition: The technological expansion problem (TEP) for the economy (𝐴, 𝐵) is to find a semi-positive 𝑚-vector 𝑥 > 0
and a number 𝛼 ∈ ℝ that satisfy
$$\max_{\alpha} \; \alpha \quad \text{s.t.} \quad x^T B \geq \alpha x^T A$$
Theorem 9.3 of David Gale’s book [Gale, 1989] asserts that if Assumptions I and II are both satisfied, then a maximum
value of 𝛼 exists and that it is positive.
The maximal value is called the technological expansion rate and is denoted by 𝛼0 . The associated intensity vector 𝑥0 is
the optimal intensity vector.
Definition: The economic expansion problem (EEP) for (𝐴, 𝐵) is to find a semi-positive 𝑛-vector 𝑝 > 0 and a number
𝛽 ∈ ℝ that satisfy
$$\min_{\beta} \; \beta \quad \text{s.t.} \quad B p \leq \beta A p$$
Assumptions I and II imply existence of a minimum value 𝛽0 > 0 called the economic expansion rate.
The corresponding price vector 𝑝0 is the optimal price vector.
Because the criterion functions in the technological expansion problem and the economical expansion problem are both
linearly homogeneous, the optimality of 𝑥0 and 𝑝0 are defined only up to a positive scale factor.
For convenience (and to emphasize a close connection to zero-sum games), we normalize both vectors 𝑥0 and 𝑝0 to have
unit length.
A standard duality argument (see Lemma 9.4 in [Gale, 1989]) implies that under Assumptions I and II, $\beta_0 \leq \alpha_0$.
But to deduce that 𝛽0 ≥ 𝛼0 , Assumptions I and II are not sufficient.
Therefore, von Neumann [von Neumann, 1937] went on to prove the following remarkable “duality” result that connects
TEP and EEP.
Theorem 1 (von Neumann): If the economy $(A, B)$ satisfies Assumptions I and II, then there exists a triple $(\gamma^*, x_0, p_0)$, where $\gamma^* \in [\beta_0, \alpha_0] \subset \mathbb{R}$, $x_0 > 0$ is an $m$-vector, $p_0 > 0$ is an $n$-vector, and the following hold true

$$x_0^T B \geq \gamma^* x_0^T A$$
$$B p_0 \leq \gamma^* A p_0$$
$$x_0^T (B - \gamma^* A) p_0 = 0$$
Note: Proof (Sketch): Assumptions I and II imply that there exist $(\alpha_0, x_0)$ and $(\beta_0, p_0)$ that solve the TEP and EEP, respectively. If $\gamma^* > \alpha_0$, then by definition of $\alpha_0$, there cannot exist a semi-positive $x$ that satisfies $x^T B \geq \gamma^* x^T A$. Similarly, if $\gamma^* < \beta_0$, there is no semi-positive $p$ for which $Bp \leq \gamma^* A p$. Let $\gamma^* \in [\beta_0, \alpha_0]$; then $x_0^T B \geq \alpha_0 x_0^T A \geq \gamma^* x_0^T A$. Moreover, $B p_0 \leq \beta_0 A p_0 \leq \gamma^* A p_0$. These two inequalities imply $x_0^T (B - \gamma^* A) p_0 = 0$.
Here the constant 𝛾 ∗ is both an expansion factor and an interest factor (not necessarily optimal).
We have already encountered and discussed the first two inequalities that represent feasibility and no-profit conditions.
Moreover, the equality 𝑥𝑇0 (𝐵 − 𝛾 ∗ 𝐴) 𝑝0 = 0 concisely expresses the requirements that if any good grows at a rate larger
than 𝛾 ∗ (i.e., if it is oversupplied), then its price must be zero; and that if any activity provides negative profit, it must be
unused.
Therefore, the conditions stated in Theorem I encode all equilibrium conditions.
So Theorem I essentially states that under Assumptions I and II there always exists an equilibrium (𝛾 ∗ , 𝑥0 , 𝑝0 ) with
balanced growth.
Note that Theorem I is silent about uniqueness of the equilibrium. In fact, it does not rule out (trivial) cases with 𝑥𝑇0 𝐵𝑝0 =
0 so that nothing of value is produced.
To exclude such uninteresting cases, Kemeny, Morgenstern and Thompson [Kemeny et al., 1956] add an extra requirement: $x_0^T B p_0 > 0$.
To compute the equilibrium $(\gamma^*, x_0, p_0)$, we follow the algorithm proposed by Hamburger, Thompson and Weil (1967), building on the key insight that an equilibrium (with balanced growth) can be found by solving a particular two-player zero-sum game. First, we introduce some notation.
Consider the 𝑚 × 𝑛 matrix 𝐶 as a payoff matrix, with the entries representing payoffs from the minimizing column
player to the maximizing row player and assume that the players can use mixed strategies. Thus,
• the row player chooses the $m$-vector $x > 0$ subject to $\iota_m^T x = 1$
• the column player chooses the $n$-vector $p > 0$ subject to $\iota_n^T p = 1$.
Definition: The 𝑚 × 𝑛 matrix game 𝐶 has the solution (𝑥∗ , 𝑝∗ , 𝑉 (𝐶)) in mixed strategies if
Nash equilibria of a finite two-player zero-sum game solve a linear programming problem.
To see this, we introduce the following notation
• For a fixed $x$, let $v$ be the value of the minimization problem: $v \equiv \min_p x^T C p = \min_j x^T C e_j$
• For a fixed $p$, let $u$ be the value of the maximization problem: $u \equiv \max_x x^T C p = \max_i (e_i)^T C p$
Then the max-min problem (the game from the maximizing player’s point of view) can be written as the primal LP
$$V(C) = \max_{x, v} \; v \quad \text{s.t.} \quad v \iota_n^T \leq x^T C, \quad x \geq 0, \quad \iota_m^T x = 1$$
while the min-max problem (the game from the minimizing player’s point of view) is the dual LP
$$V(C) = \min_{p, u} \; u \quad \text{s.t.} \quad u \iota_m \geq C p, \quad p \geq 0, \quad \iota_n^T p = 1$$
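As a concrete sketch, the primal LP above can be handed to scipy.optimize.linprog (the helper name game_value and the matching-pennies test matrix are illustrative choices, not from the lecture):

```python
import numpy as np
from scipy.optimize import linprog

def game_value(C):
    """Value V(C) of the zero-sum game C via the primal LP:
    max v  s.t.  v ι_n' <= x'C,  x >= 0,  ι_m'x = 1."""
    m, n = C.shape
    # decision variables z = (x_1, ..., x_m, v); linprog minimizes, so use -v
    c = np.hstack([np.zeros(m), -1.0])
    # column-wise constraints: v - (x'C)_j <= 0 for every column j
    A_ub = np.hstack([-C.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # x is a probability vector
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

# matching pennies has value zero under mixed strategies
print(game_value(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```

The dual LP delivers the same value with the column player's strategy, mirroring the structure of the zerosum method's `dual` flag.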
Hamburger, Thompson and Weil [Hamburger et al., 1967] view the input-output pair of the economy as payoff matrices
of two-player zero-sum games.
Using this interpretation, they restate Assumption I and II as follows
In order to (re)state Theorem I in terms of a particular two-player zero-sum game, we define a matrix for 𝛾 ∈ ℝ
$$M(\gamma) \equiv B - \gamma A$$
For fixed 𝛾, treating 𝑀 (𝛾) as a matrix game, calculating the solution of the game implies
• If $\gamma > \alpha_0$, then for all $x > 0$ there exists $j \in \{1, \dots, n\}$ such that $[x^T M(\gamma)]_j < 0$, implying that $V(M(\gamma)) < 0$.
• If $\gamma < \beta_0$, then for all $p > 0$ there exists $i \in \{1, \dots, m\}$ such that $[M(\gamma) p]_i > 0$, implying that $V(M(\gamma)) > 0$.
• If $\gamma \in \{\beta_0, \alpha_0\}$, then (by Theorem I) the optimal intensity and price vectors $x_0$ and $p_0$ satisfy $x_0^T M(\gamma) \geq 0^T$ and $M(\gamma) p_0 \leq 0$.
That is, (𝑥0 , 𝑝0 , 0) is a solution of the game 𝑀 (𝛾) so that 𝑉 (𝑀 (𝛽0 )) = 𝑉 (𝑀 (𝛼0 )) = 0.
• If 𝛽0 < 𝛼0 and 𝛾 ∈ (𝛽0 , 𝛼0 ), then 𝑉 (𝑀 (𝛾)) = 0.
Moreover, if 𝑥′ is optimal for the maximizing player in 𝑀 (𝛾 ′ ) for 𝛾 ′ ∈ (𝛽0 , 𝛼0 ) and 𝑝″ is optimal for the minimizing
player in 𝑀 (𝛾 ″ ) where 𝛾 ″ ∈ (𝛽0 , 𝛾 ′ ), then (𝑥′ , 𝑝″ , 0) is a solution for 𝑀 (𝛾) ∀𝛾 ∈ (𝛾 ″ , 𝛾 ′ ).
Proof (Sketch): If $x'$ is optimal for the maximizing player in the game $M(\gamma')$, then $(x')^T M(\gamma') \geq 0^T$, and so for all $\gamma < \gamma'$

$$(x')^T M(\gamma) = (x')^T M(\gamma') + (\gamma' - \gamma)(x')^T A \geq 0^T,$$

hence $V(M(\gamma)) \geq 0$. If $p''$ is optimal for the minimizing player in the game $M(\gamma'')$, then $M(\gamma'') p'' \leq 0$, and so for all $\gamma > \gamma''$

$$M(\gamma) p'' = M(\gamma'') p'' + (\gamma'' - \gamma) A p'' \leq 0,$$

hence $V(M(\gamma)) \leq 0$.
It is clear from the above argument that 𝛽0 , 𝛼0 are the minimal and maximal 𝛾 for which 𝑉 (𝑀 (𝛾)) = 0.
Furthermore, Hamburger et al. [Hamburger et al., 1967] show that the function 𝛾 ↦ 𝑉 (𝑀 (𝛾)) is continuous and
nonincreasing in 𝛾.
This suggests an algorithm to compute (𝛼0 , 𝑥0 ) and (𝛽0 , 𝑝0 ) for a given input-output pair (𝐴, 𝐵).
18.5.2 Algorithm
Hamburger, Thompson and Weil [Hamburger et al., 1967] propose a simple bisection algorithm to find the minimal and
maximal roots (i.e. 𝛽0 and 𝛼0 ) of the function 𝛾 ↦ 𝑉 (𝑀 (𝛾)).
Step 1
First, notice that we can easily find trivial upper and lower bounds for 𝛼0 and 𝛽0 .
• TEP requires that $x^T(B - \alpha A) \geq 0^T$ and $x > 0$, so if $\alpha$ is so large that $\max_i \{[(B - \alpha A)\iota_n]_i\} < 0$, then TEP ceases to have a solution. Accordingly, let UB be the $\alpha^*$ that solves $\max_i \{[(B - \alpha^* A)\iota_n]_i\} = 0$.
• Similar to the upper bound, if $\beta$ is so low that $\min_j \{[\iota_m^T (B - \beta A)]_j\} > 0$, then the EEP has no solution, and so we can define LB as the $\beta^*$ that solves $\min_j \{[\iota_m^T (B - \beta^* A)]_j\} = 0$.
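Because the maxima and minima above involve row and column sums, these bounds reduce to simple ratios; here is a minimal sketch (the helper name trivial_bounds and the test matrices are illustrative, and strictly positive row and column sums of A are assumed):

```python
import numpy as np

def trivial_bounds(A, B):
    # UB is the α* solving max_i {[(B - α*A) ι_n]_i} = 0,
    # i.e. the largest ratio of a row sum of B to the same row sum of A
    UB = float(np.max(B.sum(axis=1) / A.sum(axis=1)))
    # LB is the β* solving min_j {[ι_m' (B - β*A)]_j} = 0,
    # i.e. the smallest ratio of column sums
    LB = float(np.min(B.sum(axis=0) / A.sum(axis=0)))
    return LB, UB

A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 3.0]])
print(trivial_bounds(A, B))   # → (2.0, 4.0)
```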
The bounds method calculates these trivial bounds for us
n1.bounds()
(1.0, 2.0)
Step 2
Compute 𝛼0 and 𝛽0
• Finding 𝛼0
1. Fix $\gamma = \frac{UB + LB}{2}$ and compute the solution of the two-player zero-sum game associated with $M(\gamma)$. We can use either the primal or the dual LP problem.
2. If 𝑉 (𝑀 (𝛾)) ≥ 0, then set 𝐿𝐵 = 𝛾, otherwise let 𝑈 𝐵 = 𝛾.
3. Iterate on 1. and 2. until |𝑈 𝐵 − 𝐿𝐵| < 𝜖.
• Finding 𝛽0
1. Fix $\gamma = \frac{UB + LB}{2}$ and compute the solution of the two-player zero-sum game associated with $M(\gamma)$. We can use either the primal or the dual LP problem.
2. If 𝑉 (𝑀 (𝛾)) > 0, then set 𝐿𝐵 = 𝛾, otherwise let 𝑈 𝐵 = 𝛾.
3. Iterate on 1. and 2. until |𝑈 𝐵 − 𝐿𝐵| < 𝜖.
• Existence: Since 𝑉 (𝑀 (𝐿𝐵)) > 0 and 𝑉 (𝑀 (𝑈 𝐵)) < 0 and 𝑉 (𝑀 (⋅)) is a continuous, nonincreasing function,
there is at least one 𝛾 ∈ [𝐿𝐵, 𝑈 𝐵], s.t. 𝑉 (𝑀 (𝛾)) = 0.
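The bisection loop for α0 can be sketched as follows (a hedged sketch: solve_game_value computes V(M(γ)) via the primal LP with linprog, and the test economy A = I, B = 2I, for which α0 = 2, is an illustrative choice):

```python
import numpy as np
from scipy.optimize import linprog

def solve_game_value(M):
    # value of the zero-sum game with payoff matrix M via the primal LP
    m, n = M.shape
    c = np.hstack([np.zeros(m), -1.0])                  # maximize v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])           # v <= (x'M)_j for all j
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=np.ones(1), bounds=bounds)
    return res.x[-1]

def expansion_rate(A, B, LB, UB, tol=1e-8):
    # bisection for the maximal root α0 of γ ↦ V(M(γ))
    while UB - LB > tol:
        γ = (LB + UB) / 2
        if solve_game_value(B - γ * A) >= 0:
            LB = γ
        else:
            UB = γ
    return γ

A, B = np.eye(2), 2 * np.eye(2)
print(expansion_rate(A, B, LB=1.0, UB=3.0))
```

Replacing the test `>= 0` with `> 0` gives the analogous loop for β0.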
The zerosum method calculates the value and optimal strategies associated with a given 𝛾.
γ = 2
numb_grid = 100
γ_grid = np.linspace(0.4, 2.1, numb_grid)
value_ex1_grid = np.asarray([n1.zerosum(γ=γ_grid[i])[0]
                             for i in range(numb_grid)])
value_ex2_grid = np.asarray([n2.zerosum(γ=γ_grid[i])[0]
                             for i in range(numb_grid)])
# plot V(M(γ)) for both examples (plotting code reconstructed)
fig, ax = plt.subplots()
ax.plot(γ_grid, value_ex1_grid, label='Example 1')
ax.plot(γ_grid, value_ex2_grid, label='Example 2')
ax.axhline(0, c='k', lw=1)
ax.legend()
plt.show()
The expansion method implements the bisection algorithm for 𝛼0 (and uses the primal LP problem for 𝑥0 )
α_0, x, p = n1.expansion()
print(f'α_0 = {α_0}')
print(f'x_0 = {x}')
print(f'The corresponding p from the dual = {p}')
α_0 = 1.2599210478365421
x_0 = [0.33 0.26 0.41]
The interest method implements the bisection algorithm for 𝛽0 (and uses the dual LP problem for 𝑝0 )
β_0, x, p = n1.interest()
print(f'β_0 = {β_0}')
print(f'p_0 = {p}')
print(f'The corresponding x from the primal = {x}')
β_0 = 1.2599210478365421
p_0 = [0.41 0.33 0.26 0. ]
The corresponding x from the primal = [0.33 0.26 0.41]
Of course, when 𝛾 ∗ is unique, it is irrelevant which one of the two methods we use – both work.
In particular, as will be shown below, in case of an irreducible $(A, B)$ (as in Example 1), the maximal and minimal roots of $V(M(\gamma))$ necessarily coincide, implying a "full duality" result, i.e., $\alpha_0 = \beta_0 = \gamma^*$, so that the expansion (and interest) rate $\gamma^*$ is unique.
As an illustration, compute first the maximal and minimal roots of 𝑉 (𝑀 (⋅)) for our Example 2 that has a reducible
input-output pair (𝐴, 𝐵)
α_0, x, p = n2.expansion()
print(f'α_0 = {α_0}')
print(f'x_0 = {x}')
print(f'The corresponding p from the dual = {p}')
α_0 = 1.259921052493155
x_0 = [5.27e-10 0.00e+00 3.27e-01 2.60e-01 4.13e-01]
The corresponding p from the dual = [0. 0.21 0.33 0.26 0.21 0. ]
β_0, x, p = n2.interest()
print(f'β_0 = {β_0}')
print(f'p_0 = {p}')
print(f'The corresponding x from the primal = {x}')
β_0 = 1.0000000009313226
p_0 = [ 5.00e-01 5.00e-01 -1.55e-09 -1.24e-09 -9.31e-10 0.00e+00]
The corresponding x from the primal = [-0. 0. 0.25 0.25 0.5 ]
As we can see, with a reducible $(A, B)$, the roots found by the bisection algorithms might differ, so there might be multiple $\gamma^*$ that make the value of the game with $M(\gamma^*)$ zero (see the figure above).
Indeed, although the von Neumann theorem assures existence of the equilibrium, Assumptions I and II are not sufficient
for uniqueness. Nonetheless, Kemeny et al. [Kemeny et al., 1956] show that there are at most finitely many economic solutions, meaning
that there are only finitely many 𝛾 ∗ that satisfy 𝑉 (𝑀 (𝛾 ∗ )) = 0 and 𝑥𝑇0 𝐵𝑝0 > 0 and that for each such 𝛾𝑖∗ , there is a
self-contained part of the economy (a sub-economy) that in equilibrium can expand independently with the expansion
coefficient 𝛾𝑖∗ .
The following theorem (see Theorem 9.10. in Gale [Gale, 1989]) asserts that imposing irreducibility is sufficient for
uniqueness of (𝛾 ∗ , 𝑥0 , 𝑝0 ).
Theorem II: Adopt the conditions of Theorem 1. If the economy (𝐴, 𝐵) is irreducible, then 𝛾 ∗ = 𝛼0 = 𝛽0 .
There is a special class of economies $(A, B)$ that allows us to simplify the solution method significantly by invoking the powerful Perron-Frobenius theorem for non-negative matrices.
Definition: We call an economy simple if it satisfies
• 𝑛=𝑚
• Each activity produces exactly one good
• Each good is produced by one and only one activity.
These assumptions imply that 𝐵 = 𝐼𝑛 , i.e., that 𝐵 can be written as an identity matrix (possibly after reshuffling its rows
and columns).
The simple model has the following special property (Theorem 9.11. in Gale [Gale, 1989]): if 𝑥0 and 𝛼0 > 0 solve the
TEP with (𝐴, 𝐼𝑛 ), then
$$x_0^T = \alpha_0 x_0^T A \quad \Leftrightarrow \quad x_0^T A = \left(\frac{1}{\alpha_0}\right) x_0^T$$
The latter shows that 1/𝛼0 is a positive eigenvalue of 𝐴 and 𝑥0 is the corresponding non-negative left eigenvector.
The classic result of Perron and Frobenius implies that a non-negative matrix has a non-negative eigenvalue-eigenvector
pair.
Moreover, if 𝐴 is irreducible, then the optimal intensity vector 𝑥0 is positive and unique up to multiplication by a positive
scalar.
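For an illustrative 2 × 2 input matrix A of our choosing (not from the lecture), α0 and x0 can be read off the dominant eigenpair:

```python
import numpy as np

# Simple economy (B = I): 1/α0 is the Perron-Frobenius (dominant)
# eigenvalue of A and x0 the associated left eigenvector
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
eigvals, eigvecs = np.linalg.eig(A.T)    # eigenvectors of A' are left eigenvectors of A
i = np.argmax(eigvals.real)
r_A = eigvals[i].real                    # spectral radius of A (here 0.5)
x0 = np.abs(eigvecs[:, i].real)
x0 = x0 / x0.sum()                       # normalize the optimal intensity vector
α0 = 1 / r_A
print(α0, x0)
```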
Suppose that 𝐴 is reducible with 𝑘 irreducible subsets 𝑆1 , … , 𝑆𝑘 . Let 𝐴𝑖 be the submatrix corresponding to 𝑆𝑖 and let
𝛼𝑖 and 𝛽𝑖 be the associated expansion and interest factors, respectively. Then we have
Introduction to Dynamics
CHAPTER
NINETEEN

FINITE MARKOV CHAINS
Contents
In addition to what’s in Anaconda, this lecture will need the following libraries:
19.1 Overview
Markov chains are one of the most useful classes of stochastic processes, being
• simple, flexible and supported by many elegant theoretical results
• valuable for building intuition about random dynamic models
• central to quantitative modeling in their own right
You will find them in many of the workhorse models of economics and finance.
In this lecture, we review some of the theory of Markov chains.
We will also introduce some of the high-quality routines for working with Markov chains available in QuantEcon.py.
Prerequisite knowledge is basic probability and linear algebra.
Let’s start with some standard imports:
19.2 Definitions
In other words, knowing the current state is enough to know probabilities for future states.
In particular, the dynamics of a Markov chain are fully determined by the set of values
By construction,
• 𝑃 (𝑥, 𝑦) is the probability of going from 𝑥 to 𝑦 in one unit of time (one step)
• 𝑃 (𝑥, ⋅) is the conditional distribution of 𝑋𝑡+1 given 𝑋𝑡 = 𝑥
We can view 𝑃 as a stochastic matrix where
𝑃𝑖𝑗 = 𝑃 (𝑥𝑖 , 𝑥𝑗 ) 1 ≤ 𝑖, 𝑗 ≤ 𝑛
Going the other way, if we take a stochastic matrix 𝑃 , we can generate a Markov chain {𝑋𝑡 } as follows:
(Footnote 1) Hint: First show that if $P$ and $Q$ are stochastic matrices then so is their product — to check the row sums, try postmultiplying by a column vector of ones.
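Following the hint, a quick numerical check with two arbitrary stochastic matrices:

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
Q = np.array([[0.9, 0.1],
              [0.5, 0.5]])
ones = np.ones(2)
# (PQ)1 = P(Q1) = P1 = 1, so the product also has unit row sums
print((P @ Q) @ ones)   # → [1. 1.]
```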
19.2.3 Example 1
Consider a worker who, at any given time 𝑡, is either unemployed (state 0) or employed (state 1).
Suppose that, over a one month period,
1. An unemployed worker finds a job with probability 𝛼 ∈ (0, 1).
2. An employed worker loses her job and becomes unemployed with probability 𝛽 ∈ (0, 1).
In terms of a Markov model, we have
• 𝑆 = {0, 1}
• 𝑃 (0, 1) = 𝛼 and 𝑃 (1, 0) = 𝛽
We can write out the transition probabilities in matrix form as
$$P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix} \tag{19.3}$$
Once we have the values 𝛼 and 𝛽, we can address a range of questions, such as
• What is the average duration of unemployment?
• Over the long-run, what fraction of time does a worker find herself unemployed?
• Conditional on employment, what is the probability of becoming unemployed at least once over the next 12 months?
We’ll cover such applications below.
19.2.4 Example 2
From US unemployment data, Hamilton [Hamilton, 2005] estimated the stochastic matrix
$$P = \begin{pmatrix} 0.971 & 0.029 & 0 \\ 0.145 & 0.778 & 0.077 \\ 0 & 0.508 & 0.492 \end{pmatrix}$$
where
• the frequency is monthly
• the first state represents “normal growth”
• the second state represents “mild recession”
• the third state represents “severe recession”
For example, the matrix tells us that when the state is normal growth, the state will again be normal growth next month
with probability 0.97.
In general, large values on the main diagonal indicate persistence in the process {𝑋𝑡 }.
This Markov process can also be represented as a directed graph, with edges labeled by transition probabilities
Here “ng” is normal growth, “mr” is mild recession, etc.
19.3 Simulation
One natural way to answer questions about Markov chains is to simulate them.
(To approximate the probability of event 𝐸, we can simulate many times and count the fraction of times that 𝐸 occurs).
Nice functionality for simulating Markov chains exists in QuantEcon.py.
• Efficient, bundled with lots of other useful routines for handling Markov chains.
However, it’s also a good exercise to roll our own routines — let’s do that first and then come back to the methods in
QuantEcon.py.
In these exercises, we'll take the state space to be $S = \{0, \dots, n-1\}$.
To simulate a Markov chain, we need its stochastic matrix 𝑃 and a marginal probability distribution 𝜓 from which to
draw a realization of 𝑋0 .
The Markov chain is then constructed as discussed above. To repeat:
1. At time 𝑡 = 0, draw a realization of 𝑋0 from 𝜓.
2. At each subsequent time 𝑡, draw a realization of the new state 𝑋𝑡+1 from 𝑃 (𝑋𝑡 , ⋅).
To implement this simulation procedure, we need a method for generating draws from a discrete distribution.
For this task, we’ll use random.draw from QuantEcon, which works as follows:
array([1, 0, 1, 1, 1])
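The mechanics behind such draws can be sketched in plain NumPy (this mirrors what qe.random.draw does; the distribution ψ below is an illustrative choice, and the lecture's elided cell may differ in detail):

```python
import numpy as np

ψ = np.array([0.3, 0.7])          # an illustrative distribution over {0, 1}
cdf = np.cumsum(ψ)                # cumulative distribution: (0.3, 1.0)
u = np.random.uniform(size=5)
draws = np.searchsorted(cdf, u)   # index of the first cdf value above each u
print(draws)                      # five draws from {0, 1}
```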
We’ll write our code as a function that accepts the following three arguments
• A stochastic matrix P
• An initial state init
• A positive integer sample_size representing the length of the time series the function should return
def mc_sample_path(P, init=0, sample_size=1_000):
    # set up
    P = np.asarray(P)
    X = np.empty(sample_size, dtype=int)
    # convert each row of P into a cumulative distribution function
    P_dist = [np.cumsum(P[i, :]) for i in range(len(P))]
    # simulate
    X[0] = init
    for t in range(sample_size - 1):
        X[t+1] = qe.random.draw(P_dist[X[t]])
    return X
P = [[0.4, 0.6],
     [0.2, 0.8]]
As we’ll see later, for a long series drawn from P, the fraction of the sample that takes value 0 will be about 0.25.
Moreover, this is true, regardless of the initial distribution from which 𝑋0 is drawn.
The following code illustrates this
0.25041
You can try changing the initial distribution to confirm that the output is always close to 0.25, at least for the P matrix
above.
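Why 0.25? The long-run fraction of time spent in state 0 equals the stationary probability of state 0, which we can compute directly (a quick NumPy-only check, not the lecture's code):

```python
import numpy as np

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
# solve ψ = ψP: ψ is the left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
ψ_star = np.real(vecs[:, np.argmax(np.isclose(vals, 1))])
ψ_star = ψ_star / ψ_star.sum()     # normalize to a probability vector
print(ψ_star)                      # → approximately [0.25, 0.75]
```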
As discussed above, QuantEcon.py has routines for handling Markov chains, including simulation.
Here’s an illustration using the same P as the preceding example
mc = qe.MarkovChain(P)
X = mc.simulate(ts_length=1_000_000)
np.mean(X == 0)
0.249516
mc.simulate(ts_length=4, init='unemployed')
If we want to see indices rather than state values as outputs, we can use
mc.simulate_indices(ts_length=4)
array([1, 1, 1, 1])
Suppose that
1. {𝑋𝑡 } is a Markov chain with stochastic matrix 𝑃
2. the marginal distribution of 𝑋𝑡 is known to be 𝜓𝑡
What then is the marginal distribution of 𝑋𝑡+1 , or, more generally, of 𝑋𝑡+𝑚 ?
To answer this, we let 𝜓𝑡 be the marginal distribution of 𝑋𝑡 for 𝑡 = 0, 1, 2, ….
Our first aim is to find 𝜓𝑡+1 given 𝜓𝑡 and 𝑃 .
To begin, pick any 𝑦 ∈ 𝑆.
Using the law of total probability, we can decompose the probability that 𝑋𝑡+1 = 𝑦 as follows:
In words, to get the probability of being at 𝑦 tomorrow, we account for all ways this can happen and sum their probabilities.
Rewriting this statement in terms of marginal and conditional probabilities gives $\psi_{t+1}(y) = \sum_{x \in S} P(x, y) \psi_t(x)$ for every $y \in S$, or, in matrix notation,

$$\psi_{t+1} = \psi_t P \tag{19.4}$$
$$X_0 \sim \psi_0 \implies X_m \sim \psi_0 P^m \tag{19.5}$$

$$X_t \sim \psi_t \implies X_{t+m} \sim \psi_t P^m \tag{19.6}$$
We know that the probability of transitioning from 𝑥 to 𝑦 in one step is 𝑃 (𝑥, 𝑦).
It turns out that the probability of transitioning from 𝑥 to 𝑦 in 𝑚 steps is 𝑃 𝑚 (𝑥, 𝑦), the (𝑥, 𝑦)-th element of the 𝑚-th
power of 𝑃 .
To see why, consider again (19.6), but now with a $\psi_t$ that puts all probability on state $x$, so that $\psi_t$ is a row vector with
• 1 in the $x$-th position and zero elsewhere
Inserting this into (19.6), we see that, conditional on 𝑋𝑡 = 𝑥, the distribution of 𝑋𝑡+𝑚 is the 𝑥-th row of 𝑃 𝑚 .
In particular
Recall the stochastic matrix 𝑃 for recession and growth considered above.
Suppose that the current state is unknown — perhaps statistics are available only at the end of the current month.
We guess that the probability that the economy is in state 𝑥 is 𝜓(𝑥).
The probability of being in recession (either mild or severe) in 6 months time is given by the inner product
$$\psi P^6 \cdot \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}$$
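As a concrete sketch (the current-state guess ψ below is an arbitrary choice of ours):

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.0],
              [0.145, 0.778, 0.077],
              [0.0,   0.508, 0.492]])
ψ = np.array([0.2, 0.4, 0.4])          # illustrative guess for the current state
e_rec = np.array([0.0, 1.0, 1.0])      # indicator of mild or severe recession
prob = ψ @ np.linalg.matrix_power(P, 6) @ e_rec
print(prob)
```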
The marginal distributions we have been studying can be viewed either as probabilities or as cross-sectional frequencies
that a Law of Large Numbers leads us to anticipate for large samples.
To illustrate, recall our model of employment/unemployment dynamics for a given worker discussed above.
Consider a large population of workers, each of whose lifetime experience is described by the specified dynamics, with
each worker’s outcomes being realizations of processes that are statistically independent of all other workers’ processes.
Let 𝜓 be the current cross-sectional distribution over {0, 1}.
The cross-sectional distribution records fractions of workers employed and unemployed at a given moment.
• For example, 𝜓(0) is the unemployment rate.
What will the cross-sectional distribution be in 10 periods hence?
The answer is 𝜓𝑃 10 , where 𝑃 is the stochastic matrix in (19.3).
This is because each worker's state evolves according to $P$, so $\psi P^{10}$ is a marginal distribution for a single randomly selected worker.
But when the sample is large, outcomes and probabilities are roughly equal (by an application of the Law of Large
Numbers).
So for a very large (tending to infinite) population, 𝜓𝑃 10 also represents fractions of workers in each state.
This is exactly the cross-sectional distribution.
Irreducibility and aperiodicity are central concepts of modern Markov chain theory.
Let’s see what they’re about.
19.5.1 Irreducibility
We can translate this into a stochastic matrix, putting zeros where there’s no edge between nodes
$$P := \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0.4 & 0.4 & 0.2 \\ 0.1 & 0.1 & 0.8 \end{pmatrix}$$
It’s clear from the graph that this stochastic matrix is irreducible: we can eventually reach any state from any other state.
We can also test this using QuantEcon.py’s MarkovChain class
True
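A NumPy-only check of irreducibility is also possible (a sketch — the lecture relies on mc.is_irreducible; the criterion used here is the standard fact that a nonnegative n × n matrix P is irreducible iff every entry of (I + P)^(n−1) is strictly positive):

```python
import numpy as np

def is_irreducible(P):
    # P is irreducible  ⟺  (I + P)^(n-1) has all entries strictly positive
    P = np.asarray(P)
    n = P.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(n) + P, n - 1) > 0))

P = [[0.9, 0.1, 0.0],
     [0.4, 0.4, 0.2],
     [0.1, 0.1, 0.8]]
print(is_irreducible(P))   # → True
```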
Here’s a more pessimistic scenario in which poor people remain poor forever
This stochastic matrix is not irreducible, since, for example, rich is not accessible from poor.
Let’s confirm this
False
mc.communication_classes
It might be clear to you already that irreducibility is going to be important in terms of long run outcomes.
For example, poverty is a life sentence in the second graph but not the first.
We’ll come back to this a bit later.
19.5.2 Aperiodicity
Loosely speaking, a Markov chain is called periodic if it cycles in a predictable way, and aperiodic otherwise.
Here’s a trivial example with three states
P = [[0, 1, 0],
[0, 0, 1],
[1, 0, 0]]
mc = qe.MarkovChain(P)
mc.period
More formally, the period of a state $x$ is the greatest common divisor of the set of integers $D(x) := \{j \geq 1 : P^j(x, x) > 0\}$
In the last example, 𝐷(𝑥) = {3, 6, 9, …} for every state 𝑥, so the period is 3.
A stochastic matrix is called aperiodic if the period of every state is 1, and periodic otherwise.
For example, the stochastic matrix associated with the transition probabilities below is periodic because, for example,
state 𝑎 has period 2
We can confirm that the stochastic matrix is periodic with the following code
mc = qe.MarkovChain(P)
mc.period
mc.is_aperiodic
False
As seen in (19.4), we can shift a marginal distribution forward one unit of time via postmultiplication by 𝑃 .
Some distributions are invariant under this updating process — for example,
P = np.array([[0.4, 0.6],
[0.2, 0.8]])
ψ = (0.25, 0.75)
ψ @ P
array([0.25, 0.75])
19.6.1 Example
Recall our model of the employment/unemployment dynamics of a particular worker discussed above.
Assuming 𝛼 ∈ (0, 1) and 𝛽 ∈ (0, 1), the uniform ergodicity condition is satisfied.
Let 𝜓∗ = (𝑝, 1 − 𝑝) be the stationary distribution, so that 𝑝 corresponds to unemployment (state 0).
Using 𝜓∗ = 𝜓∗ 𝑃 and a bit of algebra yields
$$p = \frac{\beta}{\alpha + \beta}$$
This is, in some sense, a steady state probability of unemployment — more about the interpretation of this below.
Not surprisingly it tends to zero as 𝛽 → 0, and to one as 𝛼 → 0.
As discussed above, a particular Markov matrix 𝑃 can have many stationary distributions.
That is, there can be many row vectors 𝜓 such that 𝜓 = 𝜓𝑃 .
In fact if $P$ has two distinct stationary distributions $\psi_1, \psi_2$ then it has infinitely many, since in this case, as you can verify, $\psi_3 := \lambda \psi_1 + (1 - \lambda) \psi_2$ is also a stationary distribution for any $\lambda \in [0, 1]$.
𝜓(𝐼𝑛 − 𝑃 ) = 0 (19.7)
P = [[0.4, 0.6],
[0.2, 0.8]]
mc = qe.MarkovChain(P)
mc.stationary_distributions # Show all stationary distributions
array([[0.25, 0.75]])
Part 2 of the Markov chain convergence theorem stated above tells us that the marginal distribution of 𝑋𝑡 converges to
the stationary distribution regardless of where we begin.
This adds considerable authority to our interpretation of 𝜓∗ as a stochastic steady state.
The convergence in the theorem is illustrated in the next figure
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# mark the stationary distribution ψ* with a black dot
ax.scatter(ψ_star[0], ψ_star[1], ψ_star[2], c='k', s=60)
plt.show()
Here
• 𝑃 is the stochastic matrix for recession and growth considered above.
• The highest red dot is an arbitrarily chosen initial marginal probability distribution 𝜓, represented as a vector in
ℝ3 .
• The other red dots are the marginal distributions 𝜓𝑃 𝑡 for 𝑡 = 1, 2, ….
• The black dot is 𝜓∗ .
You might like to try experimenting with different initial conditions.
19.7 Ergodicity
$$\frac{1}{m} \sum_{t=1}^{m} \mathbf{1}\{X_t = x\} \to \psi^*(x) \quad \text{as } m \to \infty \tag{19.8}$$
Here
• 1{𝑋𝑡 = 𝑥} = 1 if 𝑋𝑡 = 𝑥 and zero otherwise
• convergence is with probability one
• the result does not depend on the marginal distribution of 𝑋0
The result tells us that the fraction of time the chain spends at state 𝑥 converges to 𝜓∗ (𝑥) as time goes to infinity.
This gives us another way to interpret the stationary distribution — provided that the convergence result in (19.8) is valid.
The convergence asserted in (19.8) is a special case of a law of large numbers result for Markov chains — see EDTC,
section 4.3.4 for some additional information.
19.7.1 Example
$$p = \frac{\beta}{\alpha + \beta}$$
In the cross-sectional interpretation, this is the fraction of people unemployed.
In view of our latest (ergodicity) result, it is also the fraction of time that a single worker can expect to spend unemployed.
Thus, in the long-run, cross-sectional averages for a population and time-series averages for a given person coincide.
This is one aspect of the concept of ergodicity.
𝔼[ℎ(𝑋𝑡 )] (19.9)
𝔼[ℎ(𝑋𝑡+𝑘 ) ∣ 𝑋𝑡 = 𝑥] (19.10)
where
• {𝑋𝑡 } is a Markov chain generated by 𝑛 × 𝑛 stochastic matrix 𝑃
• ℎ is a given function, which, in terms of matrix algebra, we’ll think of as the column vector
$$h = \begin{pmatrix} h(x_1) \\ \vdots \\ h(x_n) \end{pmatrix}$$
Computing the unconditional expectation (19.9) is easy.
We just sum over the marginal distribution of 𝑋𝑡 to get
$$\mathbb{E}[h(X_t)] = \psi P^t h$$

where $\psi$ is the distribution of $X_0$.
For the conditional expectation (19.10), we need to sum over the conditional distribution of $X_{t+k}$ given $X_t = x$, which yields

$$\mathbb{E}[h(X_{t+k}) \mid X_t = x] = (P^k h)(x) \tag{19.11}$$

The law of iterated expectations states that $\mathbb{E}\left[\mathbb{E}[h(X_{t+k}) \mid X_t = x]\right] = \mathbb{E}[h(X_{t+k})]$, where the outer $\mathbb{E}$ on the left side is an unconditional expectation taken with respect to the marginal distribution $\psi_t$ of $X_t$ (again see equation (19.6)).
To verify the law of iterated expectations, use equation (19.11) to substitute $(P^k h)(x)$ for $\mathbb{E}[h(X_{t+k}) \mid X_t = x]$, and write

$$\mathbb{E}\left[\mathbb{E}[h(X_{t+k}) \mid X_t = x]\right] = \psi_t P^k h,$$

which equals $\psi_{t+k} h = \mathbb{E}[h(X_{t+k})]$ by (19.6).
Sometimes we want to compute the mathematical expectation of a geometric sum, such as $\sum_t \beta^t h(X_t)$.
In view of the preceding discussion, this is
$$\mathbb{E}\left[\sum_{j=0}^{\infty} \beta^j h(X_{t+j}) \,\Big|\, X_t = x\right] = [(I - \beta P)^{-1} h](x)$$
where
$$(I - \beta P)^{-1} = I + \beta P + \beta^2 P^2 + \cdots$$
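For example, with an illustrative payoff h of our choosing (one when unemployed, zero when employed) and β = 0.96, the expected discounted sum for each initial state is the solution of a linear system:

```python
import numpy as np

β = 0.96
P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
h = np.array([1.0, 0.0])     # illustrative payoff: 1 in state 0, 0 in state 1
# v = (I - βP)^{-1} h, computed by solving (I - βP) v = h
v = np.linalg.solve(np.eye(2) - β * P, h)
print(v)
```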
19.9 Exercises
Exercise 19.9.1
According to the discussion above, if a worker’s employment dynamics obey the stochastic matrix
$$P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}$$
with 𝛼 ∈ (0, 1) and 𝛽 ∈ (0, 1), then, in the long-run, the fraction of time spent unemployed will be
$$p := \frac{\beta}{\alpha + \beta}$$
In other words, if $\{X_t\}$ represents the Markov chain for employment, then $\bar{X}_m \to p$ as $m \to \infty$, where

$$\bar{X}_m := \frac{1}{m} \sum_{t=1}^{m} \mathbf{1}\{X_t = 0\}$$
This exercise asks you to illustrate convergence by computing 𝑋̄ 𝑚 for large 𝑚 and checking that it is close to 𝑝.
You will see that this statement is true regardless of the choice of initial condition or the values of 𝛼, 𝛽, provided both lie
in (0, 1).
α = β = 0.1
N = 10000
p = β / (α + β)
mc = qe.MarkovChain([[1 - α, α], [β, 1 - β]])
# running fraction of time in state 0, compared with p
X_bar = np.cumsum(mc.simulate(N) == 0) / np.arange(1, N + 1)
fig, ax = plt.subplots()
ax.plot(X_bar - p, label=r'$\bar X_m - p$')
ax.axhline(0, c='k', ls='--')
ax.legend(loc='upper right')
plt.show()
Exercise 19.9.2
A topic of interest for economics and many other disciplines is ranking.
Let’s now consider one of the most practical and important ranking problems — the rank assigned to web pages by search
engines.
(Although the problem is motivated from outside of economics, there is in fact a deep connection between search ranking
systems and prices in certain competitive equilibria — see [Du et al., 2013].)
To understand the issue, consider the set of results returned by a query to a web search engine.
For the user, it is desirable to
1. receive a large set of accurate matches
2. have the matches returned in order, where the order corresponds to some measure of “importance”
Ranking according to a measure of importance is the problem we now consider.
The methodology developed to solve this problem by Google founders Larry Page and Sergey Brin is known as PageRank.
To illustrate the idea, consider the following diagram
Imagine that this is a miniature version of the WWW, with
• each node representing a web page
• each arrow representing the existence of a link from one page to another
Now let’s think about which pages are likely to be important, in the sense of being valuable to a search engine user.
One possible criterion for the importance of a page is the number of inbound links — an indication of popularity.
By this measure, m and j are the most important pages, with 5 inbound links each.
However, what if the pages linking to m, say, are not themselves important?
Thinking this way, it seems appropriate to weight the inbound nodes by relative importance.
The PageRank algorithm does precisely this.
A slightly simplified presentation that captures the basic idea is as follows.
Letting 𝑗 be (the integer index of) a typical page and 𝑟𝑗 be its ranking, we set
$$r_j = \sum_{i \in L_j} \frac{r_i}{\ell_i}$$
where
• ℓ𝑖 is the total number of outbound links from 𝑖
• 𝐿𝑗 is the set of all pages 𝑖 such that 𝑖 has a link to 𝑗
This is a measure of the number of inbound links, weighted by their own ranking (and normalized by 1/ℓ𝑖 ).
There is, however, another interpretation, and it brings us back to Markov chains.
Let 𝑃 be the matrix given by 𝑃 (𝑖, 𝑗) = 1{𝑖 → 𝑗}/ℓ𝑖 where 1{𝑖 → 𝑗} = 1 if 𝑖 has a link to 𝑗 and zero otherwise.
The matrix 𝑃 is a stochastic matrix provided that each page has at least one link.
With this definition of 𝑃 we have
$$r_j = \sum_{i \in L_j} \frac{r_i}{\ell_i} = \sum_{\text{all } i} \mathbf{1}\{i \to j\} \frac{r_i}{\ell_i} = \sum_{\text{all } i} P(i, j) r_i$$
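The last equality says that the ranking vector r is a stationary distribution of P, which can be computed by power iteration; here is a sketch on a tiny hypothetical three-page web (the adjacency matrix is our invention, not the exercise's graph):

```python
import numpy as np

# hypothetical links: adj[i, j] = 1 if page i links to page j
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 1.0],
                [1.0, 0.0, 0.0]])
P = adj / adj.sum(axis=1, keepdims=True)   # P(i, j) = 1{i -> j} / ℓ_i
r = np.full(3, 1/3)
for _ in range(200):                       # power iteration: r ← rP
    r = r @ P
print(r)                                   # ranking vector, sums to one
```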
%%file web_graph_data.txt
a -> d;
a -> f;
b -> j;
b -> k;
b -> m;
c -> c;
c -> g;
c -> j;
c -> m;
d -> f;
d -> h;
d -> k;
e -> d;
e -> h;
e -> l;
f -> a;
f -> b;
f -> j;
f -> l;
g -> b;
g -> j;
h -> d;
h -> g;
h -> l;
h -> m;
i -> g;
i -> h;
i -> n;
j -> e;
j -> i;
j -> k;
k -> n;
Overwriting web_graph_data.txt
To parse this file and extract the relevant information, you can use regular expressions.
The following code snippet provides a hint as to how you can go about this
import re
re.findall(r'\w', 'x +++ y ****** z')  # \w matches alphanumerics
When you solve for the ranking, you will find that the highest ranked node is in fact g, while the lowest is a.
"""
Return list of pages, ordered by rank
"""
import re
from operator import itemgetter
infile = 'web_graph_data.txt'
alphabet = 'abcdefghijklmnopqrstuvwxyz'
Rankings
***
g: 0.1607
j: 0.1594
m: 0.1195
n: 0.1088
k: 0.09106
b: 0.08326
e: 0.05312
i: 0.05312
c: 0.04834
h: 0.0456
l: 0.03202
d: 0.03056
f: 0.01164
a: 0.002911
Exercise 19.9.3
In numerical work, it is sometimes convenient to replace a continuous model with a discrete one.
In particular, Markov chains are routinely generated as discrete approximations to AR(1) processes of the form

$$y_{t+1} = \rho y_t + u_{t+1}$$

where $\{u_t\}$ is IID and $N(0, \sigma_u^2)$, so that the stationary variance of $\{y_t\}$ is

$$\sigma_y^2 := \frac{\sigma_u^2}{1 - \rho^2}$$
Tauchen’s method [Tauchen, 1986] is the most common method for approximating this continuous state process with a
finite state Markov chain.
A routine for this already exists in QuantEcon.py but let’s write our own version as an exercise.
As a first step, we choose
• 𝑛, the number of states for the discrete approximation
• 𝑚, an integer that parameterizes the width of the state space
Next, we create a state space $\{x_0, \ldots, x_{n-1}\} \subset \mathbb{R}$ and a stochastic $n \times n$ matrix $P$ such that
• $x_0 = -m\,\sigma_y$
• $x_{n-1} = m\,\sigma_y$
• $x_{i+1} = x_i + d$ where $d := (x_{n-1} - x_0)/(n-1)$
Then, letting $F$ be the standard normal cumulative distribution function, the transition probabilities are built row by row:
1. If $j = 0$, then set $P(i, 0) = F((x_0 - \rho x_i + d/2)/\sigma_u)$
2. If $j = n - 1$, then set $P(i, n-1) = 1 - F((x_{n-1} - \rho x_i - d/2)/\sigma_u)$
3. Otherwise, set $P(i, j) = F((x_j - \rho x_i + d/2)/\sigma_u) - F((x_j - \rho x_i - d/2)/\sigma_u)$
The exercise is to write a function approx_markov(rho, sigma_u, m=3, n=7) that returns $\{x_0, \ldots, x_{n-1}\} \subset \mathbb{R}$ and an $n \times n$ matrix $P$ as described above.
• Even better, write a function that returns an instance of QuantEcon.py’s MarkovChain class.
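Here is a sketch of such a function, using the standard Tauchen formulas with scipy's normal CDF. This is one possible implementation written for the exercise, not the QuantEcon.py routine itself:

```python
import numpy as np
from scipy.stats import norm

def approx_markov(rho, sigma_u, m=3, n=7):
    """Tauchen discretization of y' = rho*y + u, u ~ N(0, sigma_u**2)."""
    F = norm().cdf
    sigma_y = np.sqrt(sigma_u**2 / (1 - rho**2))   # stationary std dev
    x = np.linspace(-m * sigma_y, m * sigma_y, n)  # evenly spaced states
    d = x[1] - x[0]                                # step size
    P = np.empty((n, n))
    for i in range(n):
        # Mass of rho*x[i] + shock landing in the interval around each x[j]
        P[i, 0] = F((x[0] - rho * x[i] + d / 2) / sigma_u)
        P[i, n - 1] = 1 - F((x[n - 1] - rho * x[i] - d / 2) / sigma_u)
        for j in range(1, n - 1):
            z = x[j] - rho * x[i]
            P[i, j] = F((z + d / 2) / sigma_u) - F((z - d / 2) / sigma_u)
    return x, P
```

The rows of $P$ sum to one by construction, since the middle terms telescope against the two endpoint probabilities.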
TWENTY
INVENTORY DYNAMICS
Contents
• Inventory Dynamics
– Overview
– Sample Paths
– Marginal Distributions
– Exercises
20.1 Overview
In this lecture we will study the time path of inventories for firms that follow so-called s-S inventory dynamics.
Such firms
1. wait until inventory falls below some level 𝑠 and then
2. order sufficient quantities to bring their inventory back up to capacity 𝑆.
These kinds of policies are common in practice and also optimal in certain circumstances.
A review of early literature and some macroeconomic implications can be found in [Caplin, 1985].
Here our main aim is to learn more about simulation, time series and Markov dynamics.
While our Markov environment and many of the concepts we consider are related to those found in our lecture on finite
Markov chains, the state space is a continuum in the current application.
Let’s start with some imports
Intermediate Quantitative Economics with Python
$$X_{t+1} = \begin{cases} (S - D_{t+1})^+ & \text{if } X_t \le s \\ (X_t - D_{t+1})^+ & \text{if } X_t > s \end{cases}$$

where $a^+ := \max(a, 0)$ and the demand sequence satisfies

$$D_t = \exp(\mu + \sigma Z_t)$$
where 𝜇 and 𝜎 are parameters and {𝑍𝑡 } is IID and standard normal.
Here’s a class that stores parameters and generates time paths for inventory.
import numpy as np
from numba import float64
from numba.experimental import jitclass

firm_data = [
    ('s', float64),      # restock trigger level
    ('S', float64),      # capacity
    ('mu', float64),     # shock location parameter
    ('sigma', float64)   # shock scale parameter
]

@jitclass(firm_data)
class Firm:

    def __init__(self, s=10, S=100, mu=1.0, sigma=0.5):
        self.s, self.S, self.mu, self.sigma = s, S, mu, sigma

    def update(self, x):
        "Update the inventory state from one period to the next."
        Z = np.random.randn()
        D = np.exp(self.mu + self.sigma * Z)
        if x <= self.s:
            return max(self.S - D, 0)
        else:
            return max(x - D, 0)

    def sim_inventory_path(self, x_init, sim_length):
        X = np.empty(sim_length)
        X[0] = x_init
        for t in range(sim_length - 1):
            X[t + 1] = self.update(X[t])
        return X
firm = Firm()
s, S = firm.s, firm.S
sim_length = 100
x_init = 50
X = firm.sim_inventory_path(x_init, sim_length)
fig, ax = plt.subplots()
bbox = (0., 1.02, 1., .102)
legend_args = {'ncol': 3,
'bbox_to_anchor': bbox,
'loc': 3,
'mode': 'expand'}
ax.plot(X, label="inventory")
ax.plot(np.full(sim_length, s), 'k--', label="$s$")
ax.plot(np.full(sim_length, S), 'k-', label="$S$")
ax.set_ylim(0, S+10)
ax.set_xlabel("time")
ax.legend(**legend_args)
plt.show()
Now let’s simulate multiple paths in order to build a more complete picture of the probabilities of different outcomes:
sim_length = 200
fig, ax = plt.subplots()
# Overlay many simulated paths to see the range of outcomes
for i in range(400):
    ax.plot(firm.sim_inventory_path(x_init, sim_length), 'b-', lw=0.2, alpha=0.5)
plt.show()
T = 50
M = 200  # Number of draws

ymin, ymax = 0, S + 10

fig, axes = plt.subplots(1, 2, figsize=(10, 5.2))

for ax in axes:
    ax.grid(alpha=0.4)

ax = axes[0]

ax.set_ylim(ymin, ymax)
ax.set_ylabel('$X_t$', fontsize=16)
ax.vlines((T,), -1.5, 1.5)
ax.set_xticks((T,))
sample = np.empty(M)
for m in range(M):
X = firm.sim_inventory_path(x_init, 2 * T)
ax.plot(X, 'b-', lw=1, alpha=0.5)
ax.plot((T,), (X[T+1],), 'ko', alpha=0.5)
sample[m] = X[T+1]
axes[1].set_ylim(ymin, ymax)
axes[1].hist(sample,
bins=16,
density=True,
orientation='horizontal',
histtype='bar',
alpha=0.5)
plt.show()
T = 50
M = 50_000
fig, ax = plt.subplots()
sample = np.empty(M)
for m in range(M):
X = firm.sim_inventory_path(x_init, T+1)
sample[m] = X[T]
ax.hist(sample, bins=16, density=True, alpha=0.5)
plt.show()
fig, ax = plt.subplots()
plot_kde(sample, ax)
plt.show()
The allocation of probability mass is similar to what was shown by the histogram just above.
20.4 Exercises
Exercise 20.4.1
This model is asymptotically stationary, with a unique stationary distribution.
(See the discussion of stationarity in our lecture on AR(1) processes for background — the fundamental concepts are the
same.)
In particular, the sequence of marginal distributions {𝜓𝑡 } is converging to a unique limiting distribution that does not
depend on initial conditions.
Although we will not prove this here, we can investigate it using simulation.
Your task is to generate and plot the sequence {𝜓𝑡 } at times 𝑡 = 10, 50, 250, 500, 750 based on the discussion above.
(The kernel density estimator is probably the best way to present each distribution.)
You should see convergence, in the sense that differences between successive distributions are getting smaller.
Try different initial conditions to verify that, in the long run, the distribution is invariant across initial conditions.
@njit(parallel=True)
def shift_firms_forward(current_inventory_levels, num_periods):
    # s, S, mu, sigma are read as globals inside the jitted function
    num_firms = len(current_inventory_levels)
    new_inventory_levels = np.empty(num_firms)
    for f in prange(num_firms):
        x = current_inventory_levels[f]
        for t in range(num_periods):
            Z = np.random.randn()
            D = np.exp(mu + sigma * Z)
            if x <= s:
                x = max(S - D, 0)
            else:
                x = max(x - D, 0)
        new_inventory_levels[f] = x
    return new_inventory_levels
x_init = 50
num_firms = 50_000
s, S, mu, sigma = firm.s, firm.S, firm.mu, firm.sigma
sample_dates = 0, 10, 50, 250, 500, 750
first_diffs = np.diff(sample_dates)
fig, ax = plt.subplots()
X = np.full(num_firms, x_init)
current_date = 0
for d in first_diffs:
X = shift_firms_forward(X, d)
current_date += d
plot_kde(X, ax, label=f't = {current_date}')
ax.set_xlabel('inventory')
ax.set_ylabel('probability')
ax.legend()
plt.show()
Exercise 20.4.2
Using simulation, calculate the probability that firms that start with 𝑋0 = 70 need to order twice or more in the first 50
periods.
You will need a large sample size to get an accurate reading.
@njit(parallel=True)
def compute_freq(sim_length=50, x_init=70, num_firms=1_000_000):
    firm_counter = 0  # Records number of firms that restock 2 or more times
    for m in prange(num_firms):
        x = x_init
        restock_counter = 0  # Will record number of restocks for firm m
        for t in range(sim_length):
            Z = np.random.randn()
            D = np.exp(mu + sigma * Z)
            if x <= s:
                x = max(S - D, 0)
                restock_counter += 1
            else:
                x = max(x - D, 0)
        if restock_counter > 1:
            firm_counter += 1
    return firm_counter / num_firms
Note the time the routine takes to run, as well as the output.
%%time
freq = compute_freq()
print(f"Frequency of at least two stock outs = {freq}")
Try switching the parallel flag to False in the jitted function above.
Depending on your system, the difference can be substantial.
(On our desktop machine, the speed up is by a factor of 5.)
TWENTYONE

LINEAR STATE SPACE MODELS
Contents
“We may regard the present state of the universe as the effect of its past and the cause of its future” – Marquis
de Laplace
In addition to what’s in Anaconda, this lecture will need the following libraries:
21.1 Overview
21.2.1 Primitives
We’ve made the common assumption that the shocks are independent standardized normal vectors.
But some of what we say will be valid under the assumption that {𝑤𝑡+1 } is a martingale difference sequence.
A martingale difference sequence is a sequence that is zero mean when conditioned on past information.
In the present case, since $\{x_t\}$ is our state sequence, this means that it satisfies

$$\mathbb{E}[w_{t+1} \mid x_t, x_{t-1}, \ldots] = 0$$

This is a weaker condition than that $\{w_t\}$ is IID with $w_{t+1} \sim N(0, I)$.
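To see that the martingale difference property is strictly weaker than IID, consider the hypothetical process $w_{t+1} = z_t z_{t+1}$ with $\{z_t\}$ IID standard normal: its mean conditional on the past is $z_t\,\mathbb{E}[z_{t+1}] = 0$, yet its conditional variance $z_t^2$ varies with the past, so the sequence is not IID. A quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(100_001)

# w_{t+1} = z_t * z_{t+1}: a martingale difference sequence that is not IID,
# since its conditional variance z_t**2 depends on the past
w = z[:-1] * z[1:]

print(w.mean())  # sample mean close to zero
```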
21.2.2 Examples
By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state space model.
The following examples help to highlight this point.
They also illustrate the wise dictum finding the state is an art.
$$x_t = \begin{bmatrix} 1 \\ y_t \\ y_{t-1} \end{bmatrix} \qquad A = \begin{bmatrix} 1 & 0 & 0 \\ \phi_0 & \phi_1 & \phi_2 \\ 0 & 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}$$
You can confirm that under these definitions, the linear state space system (21.1) and the second-order difference equation agree.
The next figure shows the dynamics of this process when 𝜙0 = 1.1, 𝜙1 = 0.8, 𝜙2 = −0.8, 𝑦0 = 𝑦−1 = 1.
def plot_lss(A,
C,
G,
n=3,
ts_length=50):
ar = LinearStateSpace(A, C, G, mu_0=np.ones(n))
x, y = ar.simulate(ts_length)
fig, ax = plt.subplots()
y = y.flatten()
ax.plot(y, 'b-', lw=2, alpha=0.7)
ax.grid()
ax.set_xlabel('time', fontsize=12)
ax.set_ylabel('$y_t$', fontsize=12)
plt.show()
ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8

A = [[1,     0,     0  ],
     [ϕ_0,   ϕ_1,   ϕ_2],
     [0,     1,     0  ]]
C = np.zeros((3, 1))
G = [0, 1, 0]

plot_lss(A, C, G)
$$A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$$
The matrix 𝐴 has the form of the companion matrix to the vector [𝜙1 𝜙2 𝜙3 𝜙4 ].
The next figure shows the dynamics of this process when
C_1 = [[σ],
[0],
[0],
[0]]
G_1 = [1, 0, 0, 0]
Vector Autoregressions
$$x_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix} \qquad A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix} \qquad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} I & 0 & 0 & 0 \end{bmatrix}$$
Seasonals
$$A = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
It is easy to check that 𝐴4 = 𝐼, which implies that 𝑥𝑡 is strictly periodic with period 4:1
𝑥𝑡+4 = 𝑥𝑡
Such an 𝑥𝑡 process can be used to model deterministic seasonals in quarterly time series.
The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.
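The claim $A^4 = I$ is easy to verify numerically, along with the eigenvalue fact noted in the footnote (all eigenvalues lie on the unit circle):

```python
import numpy as np

# Seasonal transition matrix: a cyclic shift of the four quarters
A = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

# A^4 = I, so any x_t repeats with period 4
A4 = np.linalg.matrix_power(A, 4)
print(A4)
```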
Time Trends
$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} a & b \end{bmatrix} \tag{21.3}$$

and starting at initial condition $x_0 = \begin{bmatrix} 0 & 1 \end{bmatrix}'$.
In fact, it’s possible to use the state-space system to represent polynomial trends of any order.
For instance, we can represent the model 𝑦𝑡 = 𝑎𝑡2 + 𝑏𝑡 + 𝑐 in the linear state space form by taking
$$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 2a & a+b & c \end{bmatrix}$$

and starting at initial condition $x_0 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}'$.
It follows that

$$A^t = \begin{bmatrix} 1 & t & t(t-1)/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix}$$

Then $x_t' = \begin{bmatrix} t(t-1)/2 & t & 1 \end{bmatrix}$. You can now confirm that $y_t = G x_t$ has the correct form.
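These closed forms are easy to confirm numerically; here is a quick check, using hypothetical trend coefficients $a, b, c$:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
a, b, c = 2.0, 3.0, 4.0        # hypothetical trend coefficients
G = np.array([2 * a, a + b, c])
x0 = np.array([0.0, 0.0, 1.0])

t = 5
At = np.linalg.matrix_power(A, t)
# Closed form for A^t in the quadratic-trend example
At_closed = np.array([[1, t, t * (t - 1) / 2],
                      [0, 1, t],
                      [0, 0, 1]])

x_t = At @ x0   # equals [t(t-1)/2, t, 1]
y_t = G @ x_t   # equals a t^2 + b t + c
```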
1 The eigenvalues of 𝐴 are (1, −1, 𝑖, −𝑖).
A nonrecursive expression for $x_t$ as a function of $x_0, w_1, w_2, \ldots, w_t$ can be found by using (21.1) repeatedly to obtain

$$\begin{aligned} x_t &= A x_{t-1} + C w_t \\ &= A^2 x_{t-2} + A C w_{t-1} + C w_t \\ &\;\;\vdots \\ &= \sum_{j=0}^{t-1} A^j C w_{t-j} + A^t x_0 \end{aligned}$$
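As a quick sanity check, the recursive and nonrecursive (moving average) expressions agree numerically; the stable $A$, $C$ and shock draws below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([1.0, 0.5])    # treat C as a vector; shocks are scalar
x0 = np.array([1.0, -1.0])
t = 20
w = rng.standard_normal(t)  # w[s-1] plays the role of w_s

# Recursive computation: x_s = A x_{s-1} + C w_s
x = x0.copy()
for s in range(t):
    x = A @ x + C * w[s]

# Nonrecursive: x_t = sum_{j=0}^{t-1} A^j C w_{t-j} + A^t x_0
x_ma = np.linalg.matrix_power(A, t) @ x0
for j in range(t):
    x_ma = x_ma + np.linalg.matrix_power(A, j) @ C * w[t - 1 - j]
```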
$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

You will be able to show that $A^t = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$ and $A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}'$.

Substituting into the moving average representation (21.4), we obtain

$$x_{1t} = \sum_{j=0}^{t-1} w_{t-j} + \begin{bmatrix} 1 & t \end{bmatrix} x_0$$
Using (21.1), it’s easy to obtain expressions for the (unconditional) means of 𝑥𝑡 and 𝑦𝑡 .
We’ll explain what unconditional and conditional mean soon.
Letting 𝜇𝑡 ∶= 𝔼[𝑥𝑡 ] and using linearity of expectations, we find that
21.3.2 Distributions
In general, knowing the mean and variance-covariance matrix of a random vector is not quite as good as knowing the full
distribution.
However, there are some situations where these moments alone tell us all we need to know.
These are situations in which the mean vector and covariance matrix are all of the parameters that pin down the population
distribution.
One such situation is when the vector in question is Gaussian (i.e., normally distributed).
This is the case here, given
1. our Gaussian assumptions on the primitives
2. the fact that normality is preserved under linear operations
In fact, it’s well-known that
In particular, given our Gaussian assumptions on the primitives and the linearity of (21.1), we can see immediately that both $x_t$ and $y_t$ are Gaussian for all $t \ge 0$.
Since 𝑥𝑡 is Gaussian, to find the distribution, all we need to do is find its mean and variance-covariance matrix.
But in fact we’ve already done this, in (21.4) and (21.5).
Letting 𝜇𝑡 and Σ𝑡 be as defined by these equations, we have
𝑥𝑡 ∼ 𝑁 (𝜇𝑡 , Σ𝑡 ) (21.9)
def cross_section_plot(A,
C,
G,
T=20, # Set the time
ymin=-0.8,
ymax=1.25,
sample_size = 20, # 20 observations/simulations
n=4): # The number of dimensions for the initial x0
ar = LinearStateSpace(A, C, G, mu_0=np.ones(n))
for ax in axes:
ax.grid(alpha=0.4)
ax.set_ylim(ymin, ymax)
ax = axes[0]
ax.set_ylim(ymin, ymax)
ax.set_ylabel('$y_t$', fontsize=12)
ax.set_xlabel('time', fontsize=12)
ax.vlines((T,), -1.5, 1.5)
ax.set_xticks((T,))
ax.set_xticklabels(('$T$',))
sample = []
for i in range(sample_size):
rcolor = random.choice(('c', 'g', 'b', 'k'))
x, y = ar.simulate(ts_length=T+15)
y = y.flatten()
ax.plot(y, color=rcolor, lw=1, alpha=0.5)
ax.plot((T,), (y[T],), 'ko', alpha=0.5)
sample.append(y[T])
y = y.flatten()
axes[1].set_ylim(ymin, ymax)
axes[1].set_ylabel('$y_t$', fontsize=12)
axes[1].set_xlabel('relative frequency', fontsize=12)
axes[1].hist(sample, bins=16, density=True, orientation='horizontal', alpha=0.5)
plt.show()
G_2 = [1, 0, 0, 0]
In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our
sample of 20 𝑦𝑇 ’s.
Here is another figure, this time with 100 observations
t = 100
cross_section_plot(A_2, C_2, G_2, T=t)
Let’s now try with 500,000 observations, showing only the histogram (without rotation)
T = 100
ymin=-0.8
ymax=1.25
Ensemble Means
$$\bar y_T := \frac{1}{I} \sum_{i=1}^{I} y_T^i$$
approximates the expectation 𝔼[𝑦𝑇 ] = 𝐺𝜇𝑇 (as implied by the law of large numbers).
Here’s a simulation comparing the ensemble averages and population means at time points 𝑡 = 0, … , 50.
The parameters are the same as for the preceding figures, and the sample size is relatively small (𝐼 = 20).
I = 20
T = 50
ymin = -0.5
ymax = 1.15
fig, ax = plt.subplots()
ensemble_mean = np.zeros(T)
for i in range(I):
x, y = ar.simulate(ts_length=T)
y = y.flatten()
ax.plot(y, 'c-', lw=0.8, alpha=0.5)
ensemble_mean = ensemble_mean + y
ensemble_mean = ensemble_mean / I
ax.plot(ensemble_mean, color='b', lw=2, alpha=0.8, label='$\\bar y_t$')
m = ar.moment_sequence()
population_means = []
for t in range(T):
μ_x, μ_y, Σ_x, Σ_y = next(m)
population_means.append(float(μ_y))
$$\bar x_T := \frac{1}{I} \sum_{i=1}^{I} x_T^i \to \mu_T \qquad (I \to \infty)$$

$$\frac{1}{I} \sum_{i=1}^{I} (x_T^i - \bar x_T)(x_T^i - \bar x_T)' \to \Sigma_T \qquad (I \to \infty)$$
𝑝(𝑥𝑡+1 | 𝑥𝑡 ) = 𝑁 (𝐴𝑥𝑡 , 𝐶𝐶 ′ )
Autocovariance Functions
Σ𝑡+𝑗,𝑡 = 𝐴𝑗 Σ𝑡 (21.12)
Notice that Σ𝑡+𝑗,𝑡 in general depends on both 𝑗, the gap between the two dates, and 𝑡, the earlier date.
Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space models.
Let’s start with the intuition.
Let’s look at some more time series from the same model that we analyzed above.
This picture shows cross-sectional distributions for 𝑦 at times 𝑇 , 𝑇 ′ , 𝑇 ″
def cross_plot(A,
C,
G,
steady_state='False',
T0 = 10,
T1 = 50,
T2 = 75,
T4 = 100):
ar = LinearStateSpace(A, C, G, mu_0=np.ones(4))
if steady_state == 'True':
μ_x, μ_y, Σ_x, Σ_y, Σ_yx = ar.stationary_distributions()
ar_state = LinearStateSpace(A, C, G, mu_0=μ_x, Sigma_0=Σ_x)
if steady_state == 'True':
x, y = ar_state.simulate(ts_length=T4)
else:
x, y = ar.simulate(ts_length=T4)
y = y.flatten()
ax.plot(y, color=rcolor, lw=0.8, alpha=0.5)
ax.plot((T0, T1, T2), (y[T0], y[T1], y[T2],), 'ko', alpha=0.5)
plt.show()
Note how the time series “settle down” in the sense that the distributions at 𝑇 ′ and 𝑇 ″ are relatively similar to each other
— but unlike the distribution at 𝑇 .
Apparently, the distributions of 𝑦𝑡 converge to a fixed long-run distribution as 𝑡 → ∞.
When such a distribution exists it is called a stationary distribution.
Since
1. in the present case, all distributions are Gaussian
2. a Gaussian distribution is pinned down by its mean and variance-covariance matrix
𝜓∞ = 𝑁 (𝜇∞ , Σ∞ )
Let’s see what happens to the preceding figure if we start 𝑥0 at the stationary distribution.
Now the differences in the observed distributions at 𝑇 , 𝑇 ′ and 𝑇 ″ come entirely from random fluctuations due to the
finite sample size.
By
• our choosing 𝑥0 ∼ 𝑁 (𝜇∞ , Σ∞ )
• the definitions of 𝜇∞ and Σ∞ as fixed points of (21.4) and (21.5) respectively
we’ve ensured that
Moreover, in view of (21.12), the autocovariance function takes the form Σ𝑡+𝑗,𝑡 = 𝐴𝑗 Σ∞ , which depends on 𝑗 but not
on 𝑡.
This motivates the following definition.
A process {𝑥𝑡 } is said to be covariance stationary if
• both 𝜇𝑡 and Σ𝑡 are constant in 𝑡
• Σ𝑡+𝑗,𝑡 depends on the time gap 𝑗 but not on time 𝑡
In our setting, {𝑥𝑡 } will be covariance stationary if 𝜇0 , Σ0 , 𝐴, 𝐶 assume values that imply that none of 𝜇𝑡 , Σ𝑡 , Σ𝑡+𝑗,𝑡
depends on 𝑡.
The difference equation 𝜇𝑡+1 = 𝐴𝜇𝑡 is known to have unique fixed point 𝜇∞ = 0 if all eigenvalues of 𝐴 have moduli
strictly less than unity.
That is, if (np.absolute(np.linalg.eigvals(A)) < 1).all() == True.
The difference equation (21.5) also has a unique fixed point in this case, and, moreover,

$$\mu_t \to \mu_\infty = 0 \quad \text{and} \quad \Sigma_t \to \Sigma_\infty \quad \text{as} \quad t \to \infty$$
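Both the eigenvalue check and the fixed point $\Sigma_\infty$ can be computed directly: $\Sigma_\infty$ solves the discrete Lyapunov equation $\Sigma = A \Sigma A' + CC'$, which scipy can handle. A sketch with hypothetical stable matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
C = np.array([[1.0],
              [0.3]])

# All eigenvalues strictly inside the unit circle
print((np.abs(np.linalg.eigvals(A)) < 1).all())

# Fixed point of Σ = A Σ A' + C C'
Sigma_inf = solve_discrete_lyapunov(A, C @ C.T)
print(Sigma_inf)
```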
$$A = \begin{bmatrix} A_1 & a \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} C_1 \\ 0 \end{bmatrix}$$

where
• $A_1$ is an $(n-1) \times (n-1)$ matrix
• $a$ is an $(n-1) \times 1$ column vector

Let $x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}'$ where $x_{1t}$ is $(n-1) \times 1$.
It follows that
Let 𝜇1𝑡 = 𝔼[𝑥1𝑡 ] and take expectations on both sides of this expression to get
Assume now that the moduli of the eigenvalues of 𝐴1 are all strictly less than one.
Then (21.13) has a unique stationary solution, namely,
𝜇1∞ = (𝐼 − 𝐴1 )−1 𝑎
The stationary value of $\mu_t$ itself is then $\mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}'$.
The stationary values of $\Sigma_t$ and $\Sigma_{t+j,t}$ satisfy

$$\begin{aligned} \Sigma_\infty &= A \Sigma_\infty A' + C C' \\ \Sigma_{t+j,t} &= A^j \Sigma_\infty \end{aligned} \tag{21.14}$$
Notice that here Σ𝑡+𝑗,𝑡 depends on the time gap 𝑗 but not on calendar time 𝑡.
In conclusion, if
• 𝑥0 ∼ 𝑁 (𝜇∞ , Σ∞ ) and
• the moduli of the eigenvalues of 𝐴1 are all strictly less than unity
then the {𝑥𝑡 } process is covariance stationary, with constant state component.
Note: If the eigenvalues of 𝐴1 are less than unity in modulus, then (a) starting from any initial value, the mean and
variance-covariance matrix both converge to their stationary values; and (b) iterations on (21.5) converge to the fixed
point of the discrete Lyapunov equation in the first line of (21.14).
21.4.5 Ergodicity
Ensemble averages across simulations are interesting theoretically, but in real life, we usually observe only a single real-
ization {𝑥𝑡 , 𝑦𝑡 }𝑇𝑡=0 .
So now let’s take a single realization and form the time-series averages
$$\bar x := \frac{1}{T} \sum_{t=1}^{T} x_t \quad \text{and} \quad \bar y := \frac{1}{T} \sum_{t=1}^{T} y_t$$
Do these time series averages converge to something interpretable in terms of our basic state-space representation?
The answer depends on something called ergodicity.
Ergodicity is the property that time series and ensemble averages coincide.
More formally, ergodicity implies that time series sample averages converge to their expectation under the stationary
distribution.
In particular,
• $\frac{1}{T} \sum_{t=1}^{T} x_t \to \mu_\infty$
• $\frac{1}{T} \sum_{t=1}^{T} (x_t - \bar x_T)(x_t - \bar x_T)' \to \Sigma_\infty$
• $\frac{1}{T} \sum_{t=1}^{T} (x_{t+j} - \bar x_T)(x_t - \bar x_T)' \to A^j \Sigma_\infty$
In our linear Gaussian setting, any covariance stationary process is also ergodic.
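As an illustration, for a scalar stable AR(1) (a hypothetical special case of the state equation), the time averages from one long realization should approach the stationary moments $\mu_\infty = 0$ and $\Sigma_\infty = c^2/(1 - a^2)$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar stable AR(1): x_{t+1} = a x_t + c w_{t+1}
a, c = 0.9, 1.0
T = 200_000
x = np.empty(T)
x[0] = 0.0
for t in range(T - 1):
    x[t + 1] = a * x[t] + c * rng.standard_normal()

# Time averages versus μ∞ = 0 and Σ∞ = c² / (1 - a²)
time_mean = x.mean()
time_var = x.var()
print(time_mean, time_var, c**2 / (1 - a**2))
```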
In some settings, the observation equation 𝑦𝑡 = 𝐺𝑥𝑡 is modified to include an error term.
Often this error term represents the idea that the true state can only be observed imperfectly.
To include an error term in the observation we introduce
• An IID sequence of ℓ × 1 random vectors 𝑣𝑡 ∼ 𝑁 (0, 𝐼).
• A 𝑘 × ℓ matrix 𝐻.
and extend the linear state-space system to
𝑦𝑡 ∼ 𝑁 (𝐺𝜇𝑡 , 𝐺Σ𝑡 𝐺′ + 𝐻𝐻 ′ )
21.6 Prediction
The theory of prediction for linear state space systems is elegant and simple.
The right-hand side follows from 𝑥𝑡+1 = 𝐴𝑥𝑡 + 𝐶𝑤𝑡+1 and the fact that 𝑤𝑡+1 is zero mean and independent of
𝑥𝑡 , 𝑥𝑡−1 , … , 𝑥0 .
That 𝔼𝑡 [𝑥𝑡+1 ] = 𝔼[𝑥𝑡+1 ∣ 𝑥𝑡 ] is an implication of {𝑥𝑡 } having the Markov property.
The one-step-ahead forecast error is
More generally, we’d like to compute the 𝑗-step ahead forecasts 𝔼𝑡 [𝑥𝑡+𝑗 ] and 𝔼𝑡 [𝑦𝑡+𝑗 ].
With a bit of algebra, we obtain
In view of the IID property, current and past state values provide no information about future values of the shock.
Hence 𝔼𝑡 [𝑤𝑡+𝑘 ] = 𝔼[𝑤𝑡+𝑘 ] = 0.
It now follows from linearity of expectations that the 𝑗-step ahead forecast of 𝑥 is
𝔼𝑡 [𝑥𝑡+𝑗 ] = 𝐴𝑗 𝑥𝑡
It is useful to obtain the covariance matrix of the vector of 𝑗-step-ahead prediction errors
$$x_{t+j} - \mathbb{E}_t[x_{t+j}] = \sum_{s=0}^{j-1} A^s C w_{t-s+j} \tag{21.16}$$

Evidently,

$$V_j := \mathbb{E}_t\left[(x_{t+j} - \mathbb{E}_t[x_{t+j}])(x_{t+j} - \mathbb{E}_t[x_{t+j}])'\right] = \sum_{k=0}^{j-1} A^k C C' (A^k)' \tag{21.17}$$
𝑉𝑗 is the conditional covariance matrix of the errors in forecasting 𝑥𝑡+𝑗 , conditioned on time 𝑡 information 𝑥𝑡 .
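The sum in (21.17) can equivalently be computed by iterating $V_{k+1} = A V_k A' + CC'$ from $V_0 = 0$, which is also the iteration whose fixed point gives the limit below. A sketch with hypothetical matrices:

```python
import numpy as np

A = np.array([[0.9, 0.0],
              [0.2, 0.7]])
C = np.array([[1.0],
              [0.5]])

def forecast_error_cov(j):
    """V_j = sum_{k=0}^{j-1} A^k C C' (A^k)', computed via the
    recursion V_{k+1} = A V_k A' + C C' with V_0 = 0."""
    V = np.zeros((2, 2))
    for _ in range(j):
        V = A @ V @ A.T + C @ C.T
    return V

V1 = forecast_error_cov(1)  # equals C C'
V5 = forecast_error_cov(5)
```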
Under particular conditions, 𝑉𝑗 converges to
𝑉∞ = 𝐶𝐶 ′ + 𝐴𝑉∞ 𝐴′ (21.19)
21.7 Code
Our preceding simulations and calculations are based on code in the file lss.py from the QuantEcon.py package.
The code implements a class for handling linear state space models (simulations, calculating moments, etc.).
One Python construct you might not be familiar with is the use of a generator function in the method moment_sequence().
Go back and read the relevant documentation if you’ve forgotten how generator functions work.
Examples of usage are given in the solutions to the exercises.
21.8 Exercises
Exercise 21.8.1
In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear
state-space system (21.1).
We want the following objects
• Forecast of a geometric sum of future $x$'s, or $\mathbb{E}_t\left[\sum_{j=0}^{\infty} \beta^j x_{t+j}\right]$.
• Forecast of a geometric sum of future $y$'s, or $\mathbb{E}_t\left[\sum_{j=0}^{\infty} \beta^j y_{t+j}\right]$.
These objects are important components of some famous and interesting dynamic models.
For example,
• if $\{y_t\}$ is a stream of dividends, then $\mathbb{E}\left[\sum_{j=0}^{\infty} \beta^j y_{t+j} \mid x_t\right]$ is a model of a stock price
• if $\{y_t\}$ is the money supply, then $\mathbb{E}\left[\sum_{j=0}^{\infty} \beta^j y_{t+j} \mid x_t\right]$ is a model of the price level
Show that:

$$\mathbb{E}_t\left[\sum_{j=0}^{\infty} \beta^j x_{t+j}\right] = [I - \beta A]^{-1} x_t$$

and

$$\mathbb{E}_t\left[\sum_{j=0}^{\infty} \beta^j y_{t+j}\right] = G[I - \beta A]^{-1} x_t$$

what amounts to verifying that

$$\mathbb{E}_t\left[\sum_{j=0}^{\infty} \beta^j y_{t+j}\right] = G[I + \beta A + \beta^2 A^2 + \cdots] x_t = G[I - \beta A]^{-1} x_t$$
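These formulas can be sanity-checked numerically by comparing the closed form with a truncated geometric sum; the $A$, $G$, $x_t$ and $\beta$ below are hypothetical:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
G = np.array([[1.0, 0.0]])
x_t = np.array([2.0, -1.0])
β = 0.95

# Closed form: E_t Σ β^j x_{t+j} = (I - βA)^{-1} x_t
closed = np.linalg.solve(np.eye(2) - β * A, x_t)

# Truncated sum Σ_{j<J} β^j A^j x_t for large J
approx = np.zeros(2)
term = x_t.copy()
for j in range(2000):
    approx += term
    term = β * A @ term
```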
TWENTYTWO
SAMUELSON MULTIPLIER-ACCELERATOR
Contents
• Samuelson Multiplier-Accelerator
– Overview
– Details
– Implementation
– Stochastic Shocks
– Government Spending
– Wrapping Everything Into a Class
– Using the LinearStateSpace Class
– Pure Multiplier Model
– Summary
In addition to what’s in Anaconda, this lecture will need the following libraries:
22.1 Overview
This lecture creates non-stochastic and stochastic versions of Paul Samuelson’s celebrated multiplier accelerator model
[Samuelson, 1939].
In doing so, we extend the example of the Solow model class in our second OOP lecture.
Our objectives are to
• provide a more detailed example of OOP and classes
• review a famous model
• review linear difference equations, both deterministic and stochastic
Let’s start with some standard imports:
We’ll also use the following for various tasks described below:
Samuelson used a second-order linear difference equation to represent a model of national output based on three compo-
nents:
• a national output identity asserting that national output or national income is the sum of consumption plus investment
plus government purchases.
• a Keynesian consumption function asserting that consumption at time 𝑡 is equal to a constant times national output
at time 𝑡 − 1.
• an investment accelerator asserting that investment at time 𝑡 equals a constant called the accelerator coefficient times
the difference in output between period 𝑡 − 1 and 𝑡 − 2.
Consumption plus investment plus government purchases constitute aggregate demand, which automatically calls forth an
equal amount of aggregate supply.
(To read about linear difference equations see here or chapter IX of [Sargent, 1987].)
Samuelson used the model to analyze how particular values of the marginal propensity to consume and the accelerator
coefficient might give rise to transient business cycles in national output.
Possible dynamic properties include
• smooth convergence to a constant level of output
• damped business cycles that eventually converge to a constant level of output
• persistent business cycles that neither dampen nor explode
Later we present an extension that adds a random shock to the right side of the national income identity representing
random fluctuations in aggregate demand.
This modification makes national output become governed by a second-order stochastic linear difference equation that,
with appropriate parameter values, gives rise to recurrent irregular business cycles.
(To read about stochastic linear difference equations see chapter XI of [Sargent, 1987].)
22.2 Details
We create a random or stochastic version of the model by adding a random process of shocks or disturbances {𝜎𝜖𝑡 }
to the right side of equation (22.4), leading to the second-order scalar linear stochastic difference equation:
𝑌𝑡 = 𝜌1 𝑌𝑡−1 + 𝜌2 𝑌𝑡−2
or
To discover the properties of the solution of (22.6), it is useful first to form the characteristic polynomial for (22.6):
𝑧 2 − 𝜌1 𝑧 − 𝜌 2 (22.7)
𝑧 2 − 𝜌1 𝑧 − 𝜌2 = (𝑧 − 𝜆1 )(𝑧 − 𝜆2 ) = 0 (22.8)
𝜆1 = 𝑟𝑒𝑖𝜔 , 𝜆2 = 𝑟𝑒−𝑖𝜔
where 𝑟 is the amplitude of the complex number and 𝜔 is its angle or phase.
These can also be represented as
$$\lambda_1 = r(\cos(\omega) + i \sin(\omega)) \qquad \lambda_2 = r(\cos(\omega) - i \sin(\omega))$$
(To read about the polar form, see here)
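Numerically, the roots and their polar coordinates can be obtained with numpy and cmath; for a complex pair, note that $r^2 = \lambda_1 \lambda_2 = -\rho_2$. The values of $\rho_1, \rho_2$ below are illustrative, chosen so that $\rho_1^2 + 4\rho_2 < 0$:

```python
import cmath
import numpy as np

ρ1, ρ2 = 0.9, -0.5  # illustrative values with complex roots

# Roots of the characteristic polynomial z² - ρ1 z - ρ2
λ1, λ2 = np.roots([1, -ρ1, -ρ2])

# Polar form: modulus r and phase ω
r, ω = cmath.polar(λ1)
print(r, ω)
```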
Given initial conditions 𝑌−1 , 𝑌−2 , we want to generate a solution of the difference equation (22.6).
It can be represented as
𝑌𝑡 = 𝜆𝑡1 𝑐1 + 𝜆𝑡2 𝑐2
where 𝑐1 and 𝑐2 are constants that depend on the two initial conditions and on 𝜌1 , 𝜌2 .
When the roots are complex, it is useful to pursue the following calculations.
Notice that
$$\begin{aligned} Y_t &= c_1 (re^{i\omega})^t + c_2 (re^{-i\omega})^t \\ &= c_1 r^t e^{i\omega t} + c_2 r^t e^{-i\omega t} \\ &= c_1 r^t [\cos(\omega t) + i \sin(\omega t)] + c_2 r^t [\cos(\omega t) - i \sin(\omega t)] \\ &= (c_1 + c_2) r^t \cos(\omega t) + i (c_1 - c_2) r^t \sin(\omega t) \end{aligned}$$
The only way that 𝑌𝑡 can be a real number for each 𝑡 is if 𝑐1 + 𝑐2 is a real number and 𝑐1 − 𝑐2 is an imaginary number.
This happens only when 𝑐1 and 𝑐2 are complex conjugates, in which case they can be written in the polar forms
𝑐1 = 𝑣𝑒𝑖𝜃 , 𝑐2 = 𝑣𝑒−𝑖𝜃
So we can write
$$\begin{aligned} Y_t &= v e^{i\theta} r^t e^{i\omega t} + v e^{-i\theta} r^t e^{-i\omega t} \\ &= v r^t [e^{i(\omega t + \theta)} + e^{-i(\omega t + \theta)}] \\ &= 2 v r^t \cos(\omega t + \theta) \end{aligned}$$
where 𝑣 and 𝜃 are constants that must be chosen to satisfy initial conditions for 𝑌−1 , 𝑌−2 .
This formula shows that when the roots are complex, $Y_t$ displays oscillations with period $\check p = \frac{2\pi}{\omega}$ and damping factor $r$.

We say that $\check p$ is the period because in that amount of time the cosine wave $\cos(\omega t + \theta)$ goes through exactly one complete cycle.

(Draw a cosine function to convince yourself of this.)
Remark: Following [Samuelson, 1939], we want to choose the parameters 𝑎, 𝑏 of the model so that the absolute values
(of the possibly complex) roots 𝜆1 , 𝜆2 of the characteristic polynomial are both strictly less than one:
Remark: When both roots 𝜆1 , 𝜆2 of the characteristic polynomial have absolute values strictly less than one, the absolute
value of the larger one governs the rate of convergence to the steady state of the non stochastic version of the model.
We use the Samuelson multiplier-accelerator model as a vehicle for teaching how we can gradually add more features to
the class.
We want to have a method in the class that automatically generates a simulation, either non-stochastic (𝜎 = 0) or stochastic
(𝜎 > 0).
We also show how to map the Samuelson model into a simple instance of the LinearStateSpace class described
here.
We can use a LinearStateSpace instance to do various things that we did above with our homemade function and
class.
Among other things, we show by example that the eigenvalues of the matrix 𝐴 that we use to form the instance of the
LinearStateSpace class for the Samuelson model equal the roots of the characteristic polynomial (22.7) for the
Samuelson multiplier accelerator model.
Here is the formula for the matrix 𝐴 in the linear state space system in the case that government expenditures are a
constant 𝐺:
$$A = \begin{bmatrix} 1 & 0 & 0 \\ \gamma + G & \rho_1 & \rho_2 \\ 0 & 1 & 0 \end{bmatrix}$$
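We can preview the eigenvalue claim right away: for this $A$, the characteristic polynomial factors as $(1 - \lambda)(\lambda^2 - \rho_1 \lambda - \rho_2)$, so the eigenvalues are the two roots of (22.7) plus a unit eigenvalue contributed by the constant. A quick check, using the values $\rho_1 = 1.42$, $\rho_2 = -0.5$ that appear in the example below:

```python
import numpy as np

a, b = 0.92, 0.5   # so that ρ1 = a + b = 1.42, ρ2 = -b = -0.5
γ, G = 10, 0       # constant and government spending
ρ1, ρ2 = a + b, -b

A = np.array([[1,      0,  0],
              [γ + G, ρ1, ρ2],
              [0,      1,  0]])

# Eigenvalues of A: the unit eigenvalue from the constant plus the
# two roots of the characteristic polynomial z² - ρ1 z - ρ2
eigs = np.linalg.eigvals(A)
roots = np.roots([1, -ρ1, -ρ2])
print(eigs, roots)
```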
22.3 Implementation
We’ll start by drawing an informative graph from page 189 of [Sargent, 1987]
def param_plot():
# Set axis
xmin, ymin = -3, -2
xmax, ymax = -xmin, -ymin
plt.axis([xmin, xmax, ymin, ymax])
return fig
param_plot()
plt.show()
The graph portrays regions in which the (𝜆1 , 𝜆2 ) root pairs implied by the (𝜌1 = (𝑎 + 𝑏), 𝜌2 = −𝑏) difference equation
parameter pairs in the Samuelson model are such that:
• (𝜆1 , 𝜆2 ) are complex with modulus less than 1 - in this case, the {𝑌𝑡 } sequence displays damped oscillations.
• (𝜆1 , 𝜆2 ) are both real, but one is strictly greater than 1 - this leads to explosive growth.
• (𝜆1 , 𝜆2 ) are both real, but one is strictly less than −1 - this leads to explosive oscillations.
• (𝜆1 , 𝜆2 ) are both real and both are less than 1 in absolute value - in this case, there is smooth convergence to the
steady state without damped cycles.
Later we’ll present the graph with a red mark showing the particular point implied by the setting of (𝑎, 𝑏).
def categorize_solution(ρ1, ρ2):
    """Categorize the solution by examining the roots of the
    characteristic polynomial."""

    discriminant = ρ1 ** 2 + 4 * ρ2
    if ρ2 > 1 + ρ1 or ρ2 < -1:
        print('Explosive oscillations')
    elif ρ1 + ρ2 > 1:
        print('Explosive growth')
    elif discriminant < 0:
        print('Roots are complex with modulus less than one; '
              'therefore damped oscillations')
    else:
        print('Roots are real and absolute values are less than one; '
              'therefore get smooth convergence to a steady state')
categorize_solution(1.3, -.4)
Roots are real and absolute values are less than one; therefore get smooth␣
↪convergence to a steady state
def plot_y(function=None):
plt.subplots(figsize=(10, 6))
plt.plot(function)
plt.xlabel('Time $t$')
plt.ylabel('$Y_t$', rotation=0)
plt.grid()
plt.show()
The following function calculates roots of the characteristic polynomial using high school algebra.
(We’ll calculate the roots in other ways later)
The function also plots a 𝑌𝑡 starting from initial conditions that we set
roots = []
ρ1 = α + β
ρ2 = -β
discriminant = ρ1 ** 2 + 4 * ρ2
if discriminant == 0:
roots.append(-ρ1 / 2)
print('Single real root: ')
print(''.join(str(roots)))
elif discriminant > 0:
roots.append((-ρ1 + sqrt(discriminant).real) / 2)
roots.append((-ρ1 - sqrt(discriminant).real) / 2)
print('Two real roots: ')
print(''.join(str(roots)))
else:
roots.append((-ρ1 + sqrt(discriminant)) / 2)
roots.append((-ρ1 - sqrt(discriminant)) / 2)
print('Two complex roots: ')
print(''.join(str(roots)))
return y_t
plot_y(y_nonstochastic())
ρ_1 is 1.42
ρ_2 is -0.5
Two real roots:
[-0.6459687576256715, -0.7740312423743284]
Absolute values of roots are less than one
The next cell writes code that takes as inputs the modulus 𝑟 and phase 𝜙 of a conjugate pair of complex numbers in polar
form
𝜆1 = 𝑟 exp(𝑖𝜙), 𝜆2 = 𝑟 exp(−𝑖𝜙)
• The code assumes that these two complex numbers are the roots of the characteristic polynomial
• It then reverse-engineers (𝑎, 𝑏) and (𝜌1 , 𝜌2 ), pairs that would generate those roots
r = .95
period = 10 # Length of cycle in units of time
ϕ = 2 * math.pi/period
a, b = (0.6346322893124001+0j), (0.9024999999999999-0j)
ρ1, ρ2 = (1.5371322893124+0j), (-0.9024999999999999+0j)
ρ1 = ρ1.real
ρ2 = ρ2.real
ρ1, ρ2
(1.5371322893124, -0.9024999999999999)
Here we’ll use numpy to compute the roots of the characteristic polynomial
p1 = cmath.polar(r1)
p2 = cmath.polar(r2)
r, ϕ = 0.95, 0.6283185307179586
p1, p2 = (0.95, 0.6283185307179586), (0.95, -0.6283185307179586)
a, b = (0.6346322893124001+0j), (0.9024999999999999-0j)
ρ1, ρ2 = 1.5371322893124, -0.9024999999999999
# Useful constants
ρ1 = α + β
ρ2 = -β
categorize_solution(ρ1, ρ2)
return y_t
plot_y(y_nonstochastic())
Roots are complex with modulus less than one; therefore damped oscillations
Roots are [0.85+0.27838822j 0.85-0.27838822j]
Roots are complex
Roots are less than one
a, b = 0.6180339887498949, 1.0
Roots are complex with modulus less than one; therefore damped oscillations
Roots are [0.80901699+0.58778525j 0.80901699-0.58778525j]
Roots are complex
Roots are not less than one
We can also use sympy to compute analytic formulas for the roots
init_printing()
r1 = Symbol("ρ_1")
r2 = Symbol("ρ_2")
z = Symbol("z")
a = Symbol("α")
b = Symbol("β")
r1 = a + b
r2 = -b
Now we’ll construct some code to simulate the stochastic version of the model that emerges when we add a random shock
process to aggregate demand
# Useful constants
ρ1 = α + β
ρ2 = -β
# Categorize solution
categorize_solution(ρ1, ρ2)
# Generate shocks
ϵ = np.random.normal(0, 1, n)
return y_t
plot_y(y_stochastic())
Roots are real and absolute values are less than one; therefore get smooth␣
↪convergence to a steady state
[0.7236068 0.2763932]
Let’s do a simulation in which there are shocks and the characteristic polynomial has complex roots
r = .97
a, b = 0.6285929690873979, 0.9409000000000001
Roots are complex with modulus less than one; therefore damped oscillations
[0.78474648+0.57015169j 0.78474648-0.57015169j]
Roots are complex
Roots are less than one
This function computes a response to either a permanent or one-off increase in government expenditures
def y_stochastic_g(y_0=20,
y_1=20,
α=0.8,
β=0.2,
γ=10,
n=100,
σ=2,
g=0,
g_t=0,
duration='permanent'):
# Useful constants
ρ1 = α + β
ρ2 = -β
# Categorize solution
categorize_solution(ρ1, ρ2)
# Generate shocks
ϵ = np.random.normal(0, 1, n)
# Stochastic
else:
ϵ = np.random.normal(0, 1, n)
return ρ1 * x[t - 1] + ρ2 * x[t - 2] + γ + g + σ * ϵ[t]
# No government spending
if g == 0:
y_t.append(transition(y_t, t))
Roots are real and absolute values are less than one; therefore get smooth convergence to a steady state
[0.7236068 0.2763932]
Roots are real
Roots are less than one
We can also see the response to a one time jump in government expenditures
Roots are real and absolute values are less than one; therefore get smooth convergence to a steady state
[0.7236068 0.2763932]
Roots are real
Roots are less than one
class Samuelson():
.. math::
Parameters
----------
y_0 : scalar
Initial condition for Y_0
y_1 : scalar
Initial condition for Y_1
α : scalar
Marginal propensity to consume
β : scalar
Accelerator coefficient
n : int
    Number of iterations
"""
def __init__(self,
y_0=100,
y_1=50,
α=1.3,
β=0.2,
γ=10,
n=100,
σ=0,
g=0,
g_t=0,
duration=None):
def root_type(self):
if all(isinstance(root, complex) for root in self.roots):
return 'Complex conjugate'
elif len(self.roots) > 1:
return 'Double real'
else:
return 'Single real'
def root_less_than_one(self):
    if all(abs(root) < 1 for root in self.roots):
        return True
    return False
def solution_type(self):
ρ1, ρ2 = self.ρ1, self.ρ2
discriminant = ρ1 ** 2 + 4 * ρ2
if ρ2 >= 1 + ρ1 or ρ2 <= -1:
return 'Explosive oscillations'
elif ρ1 + ρ2 >= 1:
return 'Explosive growth'
elif discriminant < 0:
return 'Damped oscillations'
else:
    return 'Steady state'
# Stochastic
else:
ϵ = np.random.normal(0, 1, self.n)
return self.ρ1 * x[t - 1] + self.ρ2 * x[t - 2] + self.γ + g \
+ self.σ * ϵ[t]
def generate_series(self):
# No government spending
if self.g == 0:
y_t.append(self._transition(y_t, t))
def summary(self):
print('Summary\n' + '-' * 50)
print(f'Root type: {self.root_type()}')
print(f'Solution type: {self.solution_type()}')
print(f'Roots: {str(self.roots)}')
if self.root_less_than_one():
print('Absolute value of roots is less than one')
else:
print('Absolute value of roots is not less than one')
if self.σ > 0:
print('Stochastic series with σ = ' + str(self.σ))
else:
print('Non-stochastic series')
if self.g != 0:
print('Government spending equal to ' + str(self.g))
if self.duration is not None:
print(self.duration.capitalize() +
' government spending shock at t = ' + str(self.g_t))
def plot(self):
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(self.generate_series())
ax.set(xlabel='Iteration', xlim=(0, self.n))
ax.set_ylabel('$Y_t$', rotation=0)
ax.grid()
return fig
def param_plot(self):
fig = param_plot()
ax = fig.gca()
else:
label = rf'$\lambda_{i+1} = {sam.roots[i].real:.2f}$'
ax.scatter(0, 0, 0, label=label) # dummy to add to legend
plt.legend(fontsize=12, loc=3)
return fig
Summary
--------------------------------------------------
Root type: Complex conjugate
Solution type: Damped oscillations
Roots: [0.65+0.27838822j 0.65-0.27838822j]
Absolute value of roots is less than one
Stochastic series with σ = 2
Government spending equal to 10
Permanent government spending shock at t = 20
sam.plot()
plt.show()
We’ll use our graph to show where the roots lie and how their location is consistent with the behavior of the path just
graphed.
The red + sign shows the location of the roots
sam.param_plot()
plt.show()
It turns out that we can use the QuantEcon.py LinearStateSpace class to do much of the work that we have done from
scratch above.
Here is how we map the Samuelson model into an instance of a LinearStateSpace class
A = [[1, 0, 0],
[γ + g, ρ1, ρ2],
[0, 1, 0]]
x, y = sam_t.simulate(ts_length=n)
axes[-1].set_xlabel('Iteration')
plt.show()
Let’s plot impulse response functions for the instance of the Samuelson model using a method in the LinearStateSpace class
imres = sam_t.impulse_response()
imres = np.asarray(imres)
y1 = imres[:, :, 0]
y2 = imres[:, :, 1]
y1.shape
(2, 6, 1)
Now let’s compute the zeros of the characteristic polynomial by simply calculating the eigenvalues of 𝐴
A = np.asarray(A)
w, v = np.linalg.eig(A)
print(w)
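As a self-contained check (using the parameter values α = 0.8, β = 0.2 from the stochastic simulation above, which is an assumption), the eigenvalues of 𝐴 are the unit eigenvalue contributed by the constant plus the two roots of the characteristic polynomial:

```python
import numpy as np

α, β, γ, g = 0.8, 0.2, 10, 0       # sample parameters (assumed)
ρ1, ρ2 = α + β, -β
A = np.array([[1,     0,   0],
              [γ + g, ρ1,  ρ2],
              [0,     1,   0]])
eigs = np.linalg.eigvals(A)
print(np.sort(eigs))               # 0.2763932, 0.7236068 and 1.0
```

Expanding det(𝐴 − λ𝐼) along the first row gives (1 − λ)(λ² − ρ₁λ − ρ₂), so the eigenvalues do not depend on γ + g.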
We could also create a subclass of LinearStateSpace (inheriting all its methods and attributes) to add more functions
to use
class SamuelsonLSS(LinearStateSpace):
"""
This subclass creates a Samuelson multiplier-accelerator model
as a linear state space system.
"""
def __init__(self,
y_0=100,
y_1=50,
α=0.8,
β=0.9,
γ=10,
σ=1,
g=10):
self.α, self.β = α, β
self.y_0, self.y_1, self.g = y_0, y_1, g
self.γ, self.σ = γ, σ
self.ρ1 = α + β
self.ρ2 = -β
x, y = self.simulate(ts_length)
axes[-1].set_xlabel('Iteration')
return fig
x, y = self.impulse_response(j)
return fig
22.7.3 Illustrations
samlss = SamuelsonLSS()
samlss.plot_simulation(100, stationary=False)
plt.show()
samlss.plot_simulation(100, stationary=True)
plt.show()
samlss.plot_irf(100)
plt.show()
samlss.multipliers()
Let’s shut down the accelerator by setting 𝑏 = 0 to get a pure multiplier model
• the absence of cycles gives an idea about why Samuelson included the accelerator
pure_multiplier.plot_simulation()
pure_multiplier.plot_simulation()
pure_multiplier.plot_irf(100)
22.9 Summary
In this lecture, we wrote functions and classes to represent non-stochastic and stochastic versions of the Samuelson (1939)
multiplier-accelerator model, described in [Samuelson, 1939].
We saw that different parameter values led to different output paths, which could either be stationary, explosive, or
oscillating.
We also were able to represent the model using the QuantEcon.py LinearStateSpace class.
TWENTYTHREE

KESTEN PROCESSES AND FIRM DYNAMICS
Contents
In addition to what’s in Anaconda, this lecture will need the following libraries:
23.1 Overview
The following two lines are only added to avoid a FutureWarning caused by compatibility issues between pandas and
matplotlib.
Intermediate Quantitative Economics with Python
Additional technical background related to this lecture can be found in the monograph of [Buraczewski et al., 2016].
The GARCH model is common in financial applications, where time series such as asset returns exhibit time varying
volatility.
For example, consider the following plot of daily returns on the Nasdaq Composite Index for the period 1st January 2006
to 1st November 2019.
import yfinance as yf

s = yf.download('^IXIC', '2006-1-1', '2019-11-1')['Adj Close']  # ticker and column assumed
r = s.pct_change()
fig, ax = plt.subplots()
ax.plot(r, alpha=0.7)
ax.set_ylabel('returns', fontsize=12)
ax.set_xlabel('date', fontsize=12)
plt.show()
Notice how the series exhibits bursts of volatility (high variance) and then settles down again.
GARCH models can replicate this feature.
The GARCH(1, 1) volatility process takes the form
𝜎_{𝑡+1}² = 𝛼₀ + 𝜎_𝑡² (𝛼₁ 𝜉_{𝑡+1}² + 𝛽)    (23.2)
where {𝜉𝑡 } is IID with 𝔼𝜉𝑡2 = 1 and all parameters are positive.
Returns on a given asset are then modeled as
𝑟𝑡 = 𝜎𝑡 𝜁𝑡 (23.3)
Suppose that a given household saves a fixed fraction 𝑠 of its current wealth in every period.
The household earns labor income 𝑦𝑡 at the start of time 𝑡.
Wealth then evolves according to

𝑤_{𝑡+1} = 𝑅_{𝑡+1} 𝑠 𝑤_𝑡 + 𝑦_{𝑡+1}    (23.4)

where 𝑅_𝑡 is the gross rate of return on assets.
23.2.3 Stationarity
In earlier lectures, such as the one on AR(1) processes, we introduced the notion of a stationary distribution.
In the present context, we can define a stationary distribution as follows:
The distribution 𝐹 ∗ on ℝ is called stationary for the Kesten process (23.1) if
In other words, if the current state 𝑋𝑡 has distribution 𝐹 ∗ , then so does the next period state 𝑋𝑡+1 .
We can write this alternatively as
The left hand side is the distribution of the next period state when the current state is drawn from 𝐹 ∗ .
The equality in (23.6) states that this distribution is unchanged.
There is an important cross-sectional interpretation of stationary distributions, discussed previously but worth repeating
here.
Suppose, for example, that we are interested in the wealth distribution — that is, the current distribution of wealth across
households in a given country.
Suppose further that
• the wealth of each household evolves independently according to (23.4),
• 𝐹 ∗ is a stationary distribution for this stochastic process and
• there are many households.
Then 𝐹 ∗ is a steady state for the cross-sectional wealth distribution in this country.
In other words, if 𝐹 ∗ is the current wealth distribution then it will remain so in subsequent periods, ceteris paribus.
To see this, suppose that 𝐹 ∗ is the current wealth distribution.
What is the fraction of households with wealth less than 𝑦 next period?
To obtain this, we sum the probability that wealth is less than 𝑦 tomorrow, given that current wealth is 𝑤, weighted by the
fraction of households with wealth 𝑤.
Noting that the fraction of households with wealth in interval 𝑑𝑤 is 𝐹 ∗ (𝑑𝑤), we get
By the definition of stationarity and the assumption that 𝐹 ∗ is stationary for the wealth process, this is just 𝐹 ∗ (𝑦).
Hence the fraction of households with wealth in [0, 𝑦] is the same next period as it is this period.
Since 𝑦 was chosen arbitrarily, the distribution is unchanged.
The Kesten process 𝑋𝑡+1 = 𝑎𝑡+1 𝑋𝑡 + 𝜂𝑡+1 does not always have a stationary distribution.
For example, if 𝑎𝑡 ≡ 𝜂𝑡 ≡ 1 for all 𝑡, then 𝑋𝑡 = 𝑋0 + 𝑡, which diverges to infinity.
To prevent this kind of divergence, we require that {𝑎𝑡 } is strictly less than 1 most of the time.
In particular, if
Under certain conditions, the stationary distribution of a Kesten process has a Pareto tail.
(See our earlier lecture on heavy-tailed distributions for background.)
This fact is significant for economics because of the prevalence of Pareto-tailed distributions.
To state the conditions under which the stationary distribution of a Kesten process has a Pareto tail, we first recall that a
random variable is called nonarithmetic if its distribution is not concentrated on {… , −2𝑡, −𝑡, 0, 𝑡, 2𝑡, …} for any 𝑡 ≥ 0.
For example, any random variable with a density is nonarithmetic.
The famous Kesten–Goldie Theorem (see, e.g., [Buraczewski et al., 2016], theorem 2.4.4) states that if
1. the stationarity conditions in (23.7) hold,
2. the random variable 𝑎𝑡 is positive with probability one and nonarithmetic,
3. ℙ{𝑎𝑡 𝑥 + 𝜂𝑡 = 𝑥} < 1 for all 𝑥 ∈ ℝ+ and
4. there exists a positive constant 𝛼 such that
𝔼[𝑎_𝑡^𝛼] = 1,    𝔼[𝜂_𝑡^𝛼] < ∞,    and    𝔼[𝑎_𝑡^{𝛼+1}] < ∞
then the stationary distribution of the Kesten process has a Pareto tail with tail index 𝛼.
More precisely, if 𝐹 ∗ is the unique stationary distribution and 𝑋 ∗ ∼ 𝐹 ∗ , then
23.3.2 Intuition
The first condition implies that the distribution of 𝑎𝑡 has a large amount of probability mass below 1.
The second condition implies that the distribution of 𝑎𝑡 has at least some probability mass at or above 1.
The first condition gives us existence of the stationary distribution.
The second condition means that the current state can be expanded by 𝑎_𝑡.
If this occurs for several consecutive periods, the effects compound each other, since 𝑎_𝑡 is multiplicative.
This leads to spikes in the time series, which fill out the extreme right hand tail of the distribution.
The spikes in the time series are visible in the following simulation, which generates 10 paths when 𝑎_𝑡 and 𝑏_𝑡 are lognormal.
μ = -0.5
σ = 1.0
def kesten_ts(ts_length=100):
x = np.zeros(ts_length)
for t in range(ts_length-1):
a = np.exp(μ + σ * np.random.randn())
b = np.exp(np.random.randn())
x[t+1] = a * x[t] + b
return x
fig, ax = plt.subplots()
num_paths = 10
np.random.seed(12)
for i in range(num_paths):
ax.plot(kesten_ts())
ax.set(xlabel='time', ylabel='$X_t$')
plt.show()
As noted in our lecture on heavy tails, for common measures of firm size such as revenue or employment, the US firm
size distribution exhibits a Pareto tail (see, e.g., [Axtell, 2001], [Gabaix, 2016]).
Let us try to explain this rather striking fact using the Kesten–Goldie Theorem.
It was postulated many years ago by Robert Gibrat [Gibrat, 1931] that firm size evolves according to a simple rule whereby
size next period is proportional to current size.
This is now known as Gibrat’s law of proportional growth.
We can express this idea by stating that a suitably defined measure 𝑠𝑡 of firm size obeys
𝑠_{𝑡+1} / 𝑠_𝑡 = 𝑎_{𝑡+1}    (23.8)
where {𝑎𝑡 } and {𝑏𝑡 } are both IID and independent of each other.
In the exercises you are asked to show that (23.9) is more consistent with the empirical findings presented above than
Gibrat’s law in (23.8).
23.5 Exercises
Exercise 23.5.1
Simulate and plot 15 years of daily returns (consider each year as having 250 working days) using the GARCH(1, 1)
process in (23.2)–(23.3).
Take 𝜉𝑡 and 𝜁𝑡 to be independent and standard normal.
Set 𝛼0 = 0.00001, 𝛼1 = 0.1, 𝛽 = 0.9 and 𝜎0 = 0.
Compare visually with the Nasdaq Composite Index returns shown above.
While the time path differs, you should see bursts of high volatility.
α_0 = 1e-5
α_1 = 0.1
β = 0.9
years = 15
days = years * 250
def garch_ts(ts_length=days):
σ2 = 0
r = np.zeros(ts_length)
for t in range(ts_length-1):
ξ = np.random.randn()
σ2 = α_0 + σ2 * (α_1 * ξ**2 + β)
r[t] = np.sqrt(σ2) * np.random.randn()
return r
fig, ax = plt.subplots()
ax.plot(garch_ts(), alpha=0.7)
ax.set(xlabel='time', ylabel='$r_t$')
plt.show()
Exercise 23.5.2
In our discussion of firm dynamics, it was claimed that (23.9) is more consistent with the empirical literature than Gibrat’s
law in (23.8).
(The empirical literature was reviewed immediately above (23.9).)
In what sense is this true (or false)?
𝔼𝑎 + 𝔼𝑏/𝑠    and    𝕍𝑎 + 𝕍𝑏/𝑠²
Both of these decline with firm size 𝑠, consistent with the data.
Moreover, the law of motion (23.10) clearly approaches Gibrat’s law (23.8) as 𝑠𝑡 gets large.
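A quick numerical illustration of this decline, using illustrative moment values (𝔼a, 𝕍a, 𝔼b, 𝕍b below are assumptions, not estimates):

```python
Ea, Va = 1.0, 0.1      # assumed moments of a_t
Eb, Vb = 1.0, 1.0      # assumed moments of b_t

for s in (1, 10, 100):
    mean = Ea + Eb / s         # conditional mean of the growth rate
    var = Va + Vb / s**2       # conditional variance of the growth rate
    print(s, mean, var)
```

Both columns fall toward the Gibrat limits 𝔼a and 𝕍a as firm size s grows.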
Exercise 23.5.3
Consider an arbitrary Kesten process as given in (23.1).
Suppose that {𝑎𝑡 } is lognormal with parameters (𝜇, 𝜎).
In other words, each 𝑎𝑡 has the same distribution as exp(𝜇 + 𝜎𝑍) when 𝑍 is standard normal.
Suppose further that 𝔼𝜂𝑡𝑟 < ∞ for every 𝑟 > 0, as would be the case if, say, 𝜂𝑡 is also lognormal.
Show that the conditions of the Kesten–Goldie theorem are satisfied if and only if 𝜇 < 0.
Obtain the value of 𝛼 that makes the Kesten–Goldie conditions hold.
𝔼 ln 𝑎𝑡 = 𝔼(𝜇 + 𝜎𝑍) = 𝜇,
and since 𝜂𝑡 has finite moments of all orders, the stationarity condition holds if and only if 𝜇 < 0.
Given the properties of the lognormal distribution (which has finite moments of all orders), the only other condition in doubt is existence of a positive constant 𝛼 such that 𝔼[𝑎_𝑡^𝛼] = 1.
exp(𝛼𝜇 + 𝛼²𝜎²/2) = 1.
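Setting the exponent to zero and excluding the trivial root 𝛼 = 0 gives 𝛼 = −2𝜇/𝜎². With the parameter values used in the simulation earlier (𝜇 = −0.5, 𝜎 = 1.0), a quick check that 𝔼[𝑎_𝑡^𝛼] = 1 at this 𝛼:

```python
import math

μ, σ = -0.5, 1.0                 # lognormal parameters from the simulation above
α = -2 * μ / σ**2                # implied tail index; here α = 1.0
# For a_t = exp(μ + σZ), E[a_t^α] = exp(αμ + α²σ²/2)
moment = math.exp(α * μ + α**2 * σ**2 / 2)
print(α, moment)                 # → 1.0 1.0
```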
Exercise 23.5.4
One unrealistic aspect of the firm dynamics specified in (23.9) is that it ignores entry and exit.
In any given period and in any given market, we observe significant numbers of firms entering and exiting the market.
Empirical discussion of this can be found in a famous paper by Hugo Hopenhayn [Hopenhayn, 1992].
In the same paper, Hopenhayn builds a model of entry and exit that incorporates profit maximization by firms and market
clearing quantities, wages and prices.
In his model, a stationary equilibrium occurs when the number of entrants equals the number of exiting firms.
In this setting, firm dynamics can be expressed as
𝑠_{𝑡+1} = 𝑒_{𝑡+1} 𝟙{𝑠_𝑡 < 𝑠̄} + (𝑎_{𝑡+1} 𝑠_𝑡 + 𝑏_{𝑡+1}) 𝟙{𝑠_𝑡 ≥ 𝑠̄}    (23.11)
Here
• the state variable 𝑠𝑡 represents productivity (which is a proxy for output and hence firm size),
• the IID sequence {𝑒𝑡 } is thought of as a productivity draw for a new entrant and
• the variable 𝑠 ̄ is a threshold value that we take as given, although it is determined endogenously in Hopenhayn’s
model.
The idea behind (23.11) is that firms stay in the market as long as their productivity 𝑠𝑡 remains at or above 𝑠.̄
• In this case, their productivity updates according to (23.9).
Firms choose to exit when their productivity 𝑠𝑡 falls below 𝑠.̄
• In this case, they are replaced by a new firm with productivity 𝑒𝑡+1 .
What can we say about dynamics?
Although (23.11) is not a Kesten process, it does update in the same way as a Kesten process when 𝑠𝑡 is large.
So perhaps its stationary distribution still has Pareto tails?
Your task is to investigate this question via simulation and rank-size plots.
The approach will be to
1. generate 𝑀 draws of 𝑠𝑇 when 𝑀 and 𝑇 are large and
2. plot the largest 1,000 of the resulting draws in a rank-size plot.
(The distribution of 𝑠𝑇 will be close to the stationary distribution when 𝑇 is large.)
In the simulation, assume that
• each of 𝑎𝑡 , 𝑏𝑡 and 𝑒𝑡 is lognormal,
• the parameters are
@njit(parallel=True)
def generate_draws(μ_a=-0.5,
                   σ_a=0.1,
                   μ_b=0.0,
                   σ_b=0.5,
                   μ_e=0.0,
                   σ_e=0.5,
                   s_bar=1.0,
                   T=500,
                   M=1_000_000,   # number of draws (assumed value)
                   s_init=1.0):   # initial condition (assumed value)
    draws = np.empty(M)
    for m in prange(M):
        s = s_init
        for t in range(T):
            # Firms below the threshold exit and are replaced by entrants
            if s < s_bar:
                new_s = np.exp(μ_e + σ_e * randn())
            else:
                a = np.exp(μ_a + σ_a * randn())
                b = np.exp(μ_b + σ_b * randn())
                new_s = a * s + b
            s = new_s
        draws[m] = s
    return draws
data = generate_draws()
fig, ax = plt.subplots()
plt.show()
TWENTYFOUR

WEALTH DISTRIBUTION DYNAMICS
Contents
See also:
A version of this lecture using a GPU is available here
In addition to what’s in Anaconda, this lecture will need the following libraries:
24.1 Overview
One question of interest is whether or not we can replicate Pareto tails from a relatively simple model.
The evolution of wealth for any given household depends on their savings behavior.
Modeling such behavior will form an important part of this lecture series.
However, in this particular lecture, we will be content with rather ad hoc (but plausible) savings rules.
We do this to more easily explore the implications of different specifications of income dynamics and investment returns.
At the same time, all of the techniques discussed here can be plugged into models that use optimization to obtain savings
rules.
We will use the following imports.
n = 10_000
sample = np.exp(np.random.randn(n))      # lognormal sample (parameters assumed)
f_vals, l_vals = qe.lorenz_curve(sample)

fig, ax = plt.subplots()
ax.plot(f_vals, l_vals, label='Lorenz curve, lognormal sample')
ax.plot(f_vals, f_vals, label='Lorenz curve, equality')
ax.legend()
plt.show()
This curve can be understood as follows: if point (𝑥, 𝑦) lies on the curve, it means that, collectively, the bottom (100𝑥)%
of the population holds (100𝑦)% of the wealth.
The “equality” line is the 45 degree line (which might not be exactly 45 degrees in the figure, depending on the aspect
ratio).
A sample that produces this line exhibits perfect equality.
The other line in the figure is the Lorenz curve for the lognormal sample, which deviates significantly from perfect equality.
For example, the bottom 80% of the population holds around 40% of total wealth.
Here is another example, which shows how the Lorenz curve shifts as the underlying distribution changes.
We generate 10,000 observations using the Pareto distribution with a range of parameters, and then compute the Lorenz
curve corresponding to each set of observations.
You can see that, as the tail parameter of the Pareto distribution increases, inequality decreases.
This is to be expected, because a higher tail index implies less weight in the tail of the Pareto distribution.
The definition and interpretation of the Gini coefficient can be found on the corresponding Wikipedia page.
A value of 0 indicates perfect equality (corresponding to the case where the Lorenz curve matches the 45 degree line) and a value of 1 indicates complete inequality (all wealth held by the richest household).
The QuantEcon.py library contains a function to calculate the Gini coefficient.
We can test it on the Weibull distribution with parameter 𝑎, where the Gini coefficient is known to be
𝐺 = 1 − 2−1/𝑎
Let’s see if the Gini coefficient computed from a simulated sample matches this at each fixed value of 𝑎.
a_vals = np.linspace(1, 3, 25)    # grid of Weibull parameters (values assumed)
n = 10_000                        # sample size (assumed)
ginis = []
ginis_theoretical = []
fig, ax = plt.subplots()
for a in a_vals:
y = np.random.weibull(a, size=n)
ginis.append(qe.gini_coefficient(y))
ginis_theoretical.append(1 - 2**(-1/a))
ax.plot(a_vals, ginis, label='estimated gini coefficient')
ax.plot(a_vals, ginis_theoretical, label='theoretical gini coefficient')
ax.legend()
ax.set_xlabel("Weibull parameter $a$")
ax.set_ylabel("Gini coefficient")
plt.show()
where
• 𝑤𝑡 is wealth at time 𝑡 for a given household,
• 𝑟𝑡 is the rate of return of financial assets,
• 𝑦𝑡 is current non-financial (e.g., labor) income and
• 𝑠(𝑤𝑡 ) is current wealth net of consumption
Letting {𝑧𝑡 } be a correlated state process of the form
𝑅𝑡 ∶= 1 + 𝑟𝑡 = 𝑐𝑟 exp(𝑧𝑡 ) + exp(𝜇𝑟 + 𝜎𝑟 𝜉𝑡 )
and
𝑦𝑡 = 𝑐𝑦 exp(𝑧𝑡 ) + exp(𝜇𝑦 + 𝜎𝑦 𝜁𝑡 )
𝑠(𝑤) = 𝑠₀ 𝑤 ⋅ 𝟙{𝑤 ≥ 𝑤̂}    (24.2)
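As a one-line sketch, the rule (24.2) says households save a fixed fraction 𝑠₀ of wealth only at or above the threshold 𝑤̂ (the parameter values below mirror the class defaults further down):

```python
def savings(w, s_0=0.75, w_hat=1.0):
    "Savings rule (24.2): save fraction s_0 of wealth at or above threshold w_hat."
    return s_0 * w if w >= w_hat else 0.0

print(savings(0.5), savings(2.0))   # → 0.0 1.5
```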
24.4 Implementation
wealth_dynamics_data = [
('w_hat', float64), # savings parameter
('s_0', float64), # savings parameter
('c_y', float64), # labor income parameter
('μ_y', float64), # labor income parameter
('σ_y', float64), # labor income parameter
('c_r', float64), # rate of return parameter
('μ_r', float64), # rate of return parameter
('σ_r', float64), # rate of return parameter
('a', float64), # aggregate shock parameter
('b', float64), # aggregate shock parameter
('σ_z', float64), # aggregate shock parameter
('z_mean', float64), # mean of z process
('z_var', float64), # variance of z process
('y_mean', float64), # mean of y process
('R_mean', float64) # mean of R process
]
Here’s a class that stores instance data and implements methods that update the aggregate state and household wealth.
@jitclass(wealth_dynamics_data)
class WealthDynamics:
def __init__(self,
w_hat=1.0,
s_0=0.75,
c_y=1.0,
μ_y=1.0,
σ_y=0.2,
c_r=0.05,
μ_r=0.1,
σ_r=0.5,
a=0.5,
b=0.0,
σ_z=0.1):
def parameters(self):
"""
Collect and return parameters.
"""
parameters = (self.w_hat, self.s_0,
self.c_y, self.μ_y, self.σ_y,
self.c_r, self.μ_r, self.σ_r,
self.a, self.b, self.σ_z)
return parameters
    def update_states(self, w, z):
        """
        Update one period, given current wealth w and persistent state z.
        """
        # Simplify names
params = self.parameters()
w_hat, s_0, c_y, μ_y, σ_y, c_r, μ_r, σ_r, a, b, σ_z = params
zp = a * z + b + σ_z * np.random.randn()
# Update wealth
y = c_y * np.exp(zp) + np.exp(μ_y + σ_y * np.random.randn())
wp = y
if w >= w_hat:
R = c_r * np.exp(zp) + np.exp(μ_r + σ_r * np.random.randn())
wp += R * s_0 * w
return wp, zp
Here’s a function to simulate the time series of wealth for individual households.
@njit
def wealth_time_series(wdy, w_0, n):
"""
Generate a single time series of length n for wealth given
initial value w_0.
The initial persistent state z_0 for each household is drawn from
the stationary distribution of the AR(1) process.
"""
z = wdy.z_mean + np.sqrt(wdy.z_var) * np.random.randn()
w = np.empty(n)
w[0] = w_0
for t in range(n-1):
w[t+1], z = wdy.update_states(w[t], z)
return w
@njit(parallel=True)
def update_cross_section(wdy, w_distribution, shift_length=500):
    """
    Shifts a cross-section of households forward in time
    """
    new_distribution = np.empty_like(w_distribution)
    for i in prange(len(new_distribution)):
        z = wdy.z_mean + np.sqrt(wdy.z_var) * np.random.randn()
        w = w_distribution[i]
        for t in range(shift_length-1):
            w, z = wdy.update_states(w, z)
        new_distribution[i] = w
    return new_distribution
Parallelization is very effective in the function above because the time path of each household can be calculated independently once the path for the aggregate state is known.
24.5 Applications
Let’s try simulating the model at different parameter values and investigate the implications for the wealth distribution.
wdy = WealthDynamics()
ts_length = 200
w = wealth_time_series(wdy, wdy.y_mean, ts_length)
fig, ax = plt.subplots()
ax.plot(w)
plt.show()
Now we investigate how the Lorenz curves associated with the wealth distribution change as return to savings varies.
The code below plots Lorenz curves for three different values of 𝜇𝑟 .
If you are running this yourself, note that it will take one or two minutes to execute.
This is unavoidable because we are executing a CPU intensive task.
In fact the code, which is JIT compiled and parallelized, runs extremely fast relative to the number of computations.
%%time
fig, ax = plt.subplots()
μ_r_vals = (0.0, 0.025, 0.05)
gini_vals = []
CPU times: user 1min 29s, sys: 96.4 ms, total: 1min 29s
Wall time: 12.3 s
The Lorenz curve shifts downwards as returns on financial income rise, indicating a rise in inequality.
We will look at this again via the Gini coefficient immediately below, but first consider the following image of our system
resources when the code above is executing:
Since the code is both efficiently JIT compiled and fully parallelized, it’s close to impossible to make this sequence of
tasks run faster without changing hardware.
Now let’s check the Gini coefficient.
fig, ax = plt.subplots()
ax.plot(μ_r_vals, gini_vals, label='gini coefficient')
ax.set_xlabel(r"$\mu_r$")
ax.legend()
plt.show()
Once again, we see that inequality increases as returns on financial income rise.
Let’s finish this section by investigating what happens when we change the volatility term 𝜎𝑟 in financial returns.
%%time
fig, ax = plt.subplots()
σ_r_vals = (0.35, 0.45, 0.52)
gini_vals = []
CPU times: user 1min 28s, sys: 23.3 ms, total: 1min 28s
Wall time: 11.4 s
We see that greater volatility has the effect of increasing inequality in this model.
24.6 Exercises
Exercise 24.6.1
For a wealth or income distribution with Pareto tail, a higher tail index suggests lower inequality.
Indeed, it is possible to prove that the Gini coefficient of the Pareto distribution with tail index 𝑎 is 1/(2𝑎 − 1).
To the extent that you can, confirm this by simulation.
In particular, generate a plot of the Gini coefficient against the tail index using both the theoretical value just given and
the value computed from a sample via qe.gini_coefficient.
For the values of the tail index, use a_vals = np.linspace(1, 10, 25).
Use sample of size 1,000 for each 𝑎 and the sampling method for generating Pareto draws employed in the discussion of
Lorenz curves for the Pareto distribution.
To the extent that you can, interpret the monotone relationship between the Gini index and 𝑎.
In general, for a Pareto distribution, a higher tail index implies less weight in the right hand tail.
This means less extreme values for wealth and hence more equality.
More equality translates to a lower Gini index.
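The closed form itself makes the monotonicity plain — a sketch evaluating 𝐺 = 1/(2𝑎 − 1) at a few tail indexes:

```python
def pareto_gini(a):
    "Gini coefficient of a Pareto distribution with tail index a > 1/2."
    return 1 / (2 * a - 1)

for a in (1, 2, 5, 10):
    print(a, pareto_gini(a))    # Gini falls as the tail index rises
```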
Exercise 24.6.2
The wealth process (24.1) is similar to a Kesten process.
This is because, according to (24.2), savings is constant for all wealth levels above 𝑤.̂
When savings is constant, the wealth process has the same quasi-linear structure as a Kesten process, with multiplicative
and additive shocks.
The Kesten–Goldie theorem tells us that Kesten processes have Pareto tails under a range of parameterizations.
The theorem does not directly apply here, since savings is not always constant and since the multiplicative and additive
terms in (24.1) are not IID.
At the same time, given the similarities, perhaps Pareto tails will arise.
To test this, run a simulation that generates a cross-section of wealth and generate a rank-size plot.
If you like, you can use the function rank_size from the quantecon library (documentation here).
In viewing the plot, remember that Pareto tails generate a straight line. Is this what you see?
For sample size and initial conditions, use
num_households = 250_000
T = 500 # shift forward T periods
ψ_0 = np.full(num_households, wdy.y_mean) # initial distribution
z_0 = wdy.z_mean
num_households = 250_000
T = 500 # how far to shift forward in time
wdy = WealthDynamics()
ψ_0 = np.full(num_households, wdy.y_mean)
z_0 = wdy.z_mean
fig, ax = plt.subplots()
plt.show()
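If you prefer not to depend on the quantecon helper, rank_size can be hand-rolled in a few lines (a sketch; here applied to a Pareto sample rather than the wealth cross-section):

```python
import numpy as np

def rank_size(data, c=1.0):
    "Return (rank, size) for the largest 100·c percent of the sample, sorted descending."
    w = -np.sort(-np.asarray(data))       # sort descending
    n = int(len(w) * c)
    rank_data = np.arange(1, n + 1)
    return rank_data, w[:n]

np.random.seed(0)
sample = np.random.pareto(2.0, size=10_000) + 1   # Pareto with tail index 2
rank_data, size_data = rank_size(sample, c=0.1)
# On log-log axes, (rank, size) is approximately linear for Pareto tails
```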
TWENTYFIVE

A FIRST LOOK AT THE KALMAN FILTER
Contents
In addition to what’s in Anaconda, this lecture will need the following libraries:
25.1 Overview
This lecture provides a simple and intuitive introduction to the Kalman filter, for those who either
• have heard of the Kalman filter but don’t know how it works, or
• know the Kalman filter equations, but don’t know where they come from
For additional (more advanced) reading on the Kalman filter, see
• [Ljungqvist and Sargent, 2018], section 2.7
• [Anderson and Moore, 2005]
The second reference presents a comprehensive treatment of the Kalman filter.
Required knowledge: Familiarity with matrix manipulations, multivariate normal distributions, covariance matrices, etc.
We’ll need the following imports:
The Kalman filter has many applications in economics, but for now let’s pretend that we are rocket scientists.
A missile has been launched from country Y and our mission is to track it.
Let 𝑥 ∈ ℝ2 denote the current location of the missile—a pair indicating latitude-longitude coordinates on a map.
At the present moment in time, the precise location 𝑥 is unknown, but we do have some beliefs about 𝑥.
One way to summarize our knowledge is a point prediction 𝑥̂
• But what if the President wants to know the probability that the missile is currently over the Sea of Japan?
• Then it is better to summarize our initial beliefs with a bivariate probability density 𝑝
– ∫𝐸 𝑝(𝑥)𝑑𝑥 indicates the probability that we attach to the missile being in region 𝐸.
The density 𝑝 is called our prior for the random variable 𝑥.
To keep things tractable in our example, we assume that our prior is Gaussian.
In particular, we take
𝑝 = 𝑁 (𝑥,̂ Σ) (25.1)
where 𝑥̂ is the mean of the distribution and Σ is a 2 × 2 covariance matrix. In our simulations, we will suppose that

𝑥̂ = ( 0.2
     −0.2 ),    Σ = ( 0.4   0.3
                      0.3   0.45 )    (25.2)
This density 𝑝(𝑥) is shown below as a contour map, with the center of the red ellipse being equal to 𝑥.̂
Parameters
----------
x : array_like(float)
Random variable
y : array_like(float)
Random variable
σ_x : array_like(float)
Standard deviation of random variable x
σ_y : array_like(float)
Standard deviation of random variable y
μ_x : scalar(float)
Mean value of random variable x
μ_y : scalar(float)
Mean value of random variable y
σ_xy : array_like(float)
Covariance of random variables x and y
"""
x_μ = x - μ_x
y_μ = y - μ_y
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
plt.show()
We are now presented with some good news and some bad news.
The good news is that the missile has been located by our sensors, which report that the current location is 𝑦 = (2.3, −1.9).
The next figure shows the original prior 𝑝(𝑥) and the new reported location 𝑦
Z = gen_gaussian_plot_vals(x_hat, Σ)
ax.contourf(X, Y, Z, 6, alpha=0.6, cmap=cm.jet)
cs = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs, inline=1, fontsize=10)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
Here 𝐺 and 𝑅 are 2 × 2 matrices with 𝑅 positive definite. Both are assumed known, and the noise term 𝑣 is assumed to
be independent of 𝑥.
How then should we combine our prior 𝑝(𝑥) = 𝑁 (𝑥,̂ Σ) and this new information 𝑦 to improve our understanding of the
location of the missile?
As you may have guessed, the answer is to use Bayes’ theorem, which tells us to update our prior 𝑝(𝑥) to 𝑝(𝑥 | 𝑦) via
𝑝(𝑥 | 𝑦) = 𝑝(𝑦 | 𝑥) 𝑝(𝑥) / 𝑝(𝑦)
where 𝑝(𝑦) = ∫ 𝑝(𝑦 | 𝑥) 𝑝(𝑥)𝑑𝑥.
In solving for 𝑝(𝑥 | 𝑦), we observe that
• 𝑝(𝑥) = 𝑁 (𝑥,̂ Σ).
• In view of (25.3), the conditional density 𝑝(𝑦 | 𝑥) is 𝑁 (𝐺𝑥, 𝑅).
• 𝑝(𝑦) does not depend on 𝑥, and enters into the calculations only as a normalizing constant.
Because we are in a linear and Gaussian framework, the updated density can be computed by calculating population linear
regressions.
In particular, the solution is known¹ to be
𝑝(𝑥 | 𝑦) = 𝑁 (𝑥𝐹̂ , Σ𝐹 )
where
𝑥𝐹̂ ∶= 𝑥̂ + Σ𝐺′ (𝐺Σ𝐺′ + 𝑅)−1 (𝑦 − 𝐺𝑥)̂ and Σ𝐹 ∶= Σ − Σ𝐺′ (𝐺Σ𝐺′ + 𝑅)−1 𝐺Σ (25.4)
Here Σ𝐺′ (𝐺Σ𝐺′ + 𝑅)−1 is the matrix of population regression coefficients of the hidden object 𝑥 − 𝑥̂ on the surprise
𝑦 − 𝐺𝑥.̂
This new density 𝑝(𝑥 | 𝑦) = 𝑁 (𝑥𝐹̂ , Σ𝐹 ) is shown in the next figure via contour lines and the color map.
The original density is left in as contour lines for comparison
Z = gen_gaussian_plot_vals(x_hat, Σ)
cs1 = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs1, inline=1, fontsize=10)
M = Σ * G.T * linalg.inv(G * Σ * G.T + R)
x_hat_F = x_hat + M * (y - G * x_hat)
Σ_F = Σ - M * G * Σ
new_Z = gen_gaussian_plot_vals(x_hat_F, Σ_F)
cs2 = ax.contour(X, Y, new_Z, 6, colors="black")
ax.clabel(cs2, inline=1, fontsize=10)
ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet)
ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black")
plt.show()
Our new density twists the prior $p(x)$ in a direction determined by the new information $y - G\hat{x}$.
In generating the figure, we set $G$ to the identity matrix and $R = 0.5\Sigma$ for $\Sigma$ defined in (25.2).
Let’s suppose that we have one, and that it’s linear and Gaussian. In particular,
$$
x_{t+1} = A x_t + w_{t+1}, \quad \text{where } w_t \sim N(0, Q)
\tag{25.5}
$$
Our aim is to combine this law of motion and our current distribution $p(x \mid y) = N(\hat{x}^F, \Sigma^F)$ to come up with a new predictive distribution for the location in one unit of time.
In view of (25.5), all we have to do is introduce a random vector $x^F \sim N(\hat{x}^F, \Sigma^F)$ and work out the distribution of $A x^F + w$ where $w$ is independent of $x^F$ and has distribution $N(0, Q)$.
Since linear combinations of Gaussians are Gaussian, 𝐴𝑥𝐹 + 𝑤 is Gaussian.
Elementary calculations and the expressions in (25.4) tell us that
$$
\mathbb{E}[A x^F + w] = A \hat{x}^F = A\hat{x} + A\Sigma G' (G \Sigma G' + R)^{-1} (y - G\hat{x})
$$
and
$$
\operatorname{Var}[A x^F + w] = A \Sigma^F A' + Q = A \Sigma A' - A\Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q
$$
The matrix $A\Sigma G' (G \Sigma G' + R)^{-1}$ is often written as $K_\Sigma$ and called the Kalman gain.
• The subscript $\Sigma$ has been added to remind us that $K_\Sigma$ depends on $\Sigma$, but not on $y$ or $\hat{x}$.
Using this notation, we can summarize our results as follows.
Our updated prediction is the density $N(\hat{x}_{new}, \Sigma_{new})$ where
$$
\begin{aligned}
\hat{x}_{new} &:= A\hat{x} + K_\Sigma (y - G\hat{x}) \\
\Sigma_{new} &:= A \Sigma A' - K_\Sigma G \Sigma A' + Q
\end{aligned}
$$
$$
A = \begin{pmatrix} 1.2 & 0.0 \\ 0.0 & -0.2 \end{pmatrix}, \qquad Q = 0.3\,\Sigma
$$
# Density 1
Z = gen_gaussian_plot_vals(x_hat, Σ)
cs1 = ax.contour(X, Y, Z, 6, colors="black")
ax.clabel(cs1, inline=1, fontsize=10)

# Density 2
M = Σ @ G.T @ linalg.inv(G @ Σ @ G.T + R)
x_hat_F = x_hat + M @ (y - G @ x_hat)
Σ_F = Σ - M @ G @ Σ
Z_F = gen_gaussian_plot_vals(x_hat_F, Σ_F)
cs2 = ax.contour(X, Y, Z_F, 6, colors="black")
ax.clabel(cs2, inline=1, fontsize=10)

# Density 3
new_x_hat = A @ x_hat_F
new_Σ = A @ Σ_F @ A.T + Q
new_Z = gen_gaussian_plot_vals(new_x_hat, new_Σ)
cs3 = ax.contour(X, Y, new_Z, 6, colors="black")
plt.show()
$$
\begin{aligned}
\hat{x}_{t+1} &= A\hat{x}_t + K_{\Sigma_t} (y_t - G\hat{x}_t) \\
\Sigma_{t+1} &= A \Sigma_t A' - K_{\Sigma_t} G \Sigma_t A' + Q
\end{aligned}
$$
These are the standard dynamic equations for the Kalman filter (see, for example, [Ljungqvist and Sargent, 2018], page
58).
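As a quick sanity check of these recursions, one step of the filter can be written directly in NumPy. This is only a sketch, not the lecture's own implementation, and the 1 × 1 system at the bottom is made up for illustration:

```python
import numpy as np

def kalman_step(x_hat, Σ, y, A, G, Q, R):
    """One iteration of the Kalman filter recursion above."""
    # Kalman gain K_Σ = A Σ G' (G Σ G' + R)^{-1}
    K = A @ Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)
    x_hat_new = A @ x_hat + K @ (y - G @ x_hat)
    Σ_new = A @ Σ @ A.T - K @ G @ Σ @ A.T + Q
    return x_hat_new, Σ_new

# Hypothetical scalar (1x1) system
A = np.array([[1.0]]); G = np.array([[1.0]])
Q = np.array([[0.3]]); R = np.array([[0.5]])
x_hat, Σ = np.array([[0.0]]), np.array([[1.0]])
x_hat, Σ = kalman_step(x_hat, Σ, np.array([[1.0]]), A, G, Q, R)
```

With these numbers the gain is $1/(1 + 0.5) = 2/3$, so the updated mean moves two-thirds of the way toward the observation.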
25.3 Convergence
A sufficient (but not necessary) condition is that all the eigenvalues 𝜆𝑖 of 𝐴 satisfy |𝜆𝑖 | < 1 (cf. e.g., [Anderson and
Moore, 2005], p. 77).
(This strong condition assures that the unconditional distribution of 𝑥𝑡 converges as 𝑡 → +∞.)
In this case, for any initial choice of Σ0 that is both non-negative and symmetric, the sequence {Σ𝑡 } in (25.6) converges
to a non-negative symmetric matrix Σ that solves (25.7).
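One informal way to verify this convergence numerically is to iterate the recursion for $\{\Sigma_t\}$ until successive iterates stop changing. The following sketch does this for a made-up scalar system whose $A$ has eigenvalue modulus below 1:

```python
import numpy as np

def stationary_sigma(A, G, Q, R, Σ0, tol=1e-10, max_iter=10_000):
    """Iterate the recursion for Σ_t until successive iterates converge."""
    Σ = Σ0
    for _ in range(max_iter):
        K = A @ Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)
        Σ_new = A @ Σ @ A.T - K @ G @ Σ @ A.T + Q
        if np.max(np.abs(Σ_new - Σ)) < tol:
            return Σ_new
        Σ = Σ_new
    return Σ

A = np.array([[0.9]])    # |eigenvalue| < 1, so the sufficient condition holds
G = np.array([[1.0]])
Q = np.array([[0.3]]); R = np.array([[0.5]])
Σ_star = stationary_sigma(A, G, Q, R, np.array([[1.0]]))
```

The returned matrix should (approximately) satisfy the fixed-point equation obtained by dropping time subscripts from the recursion.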
25.4 Implementation
The class Kalman from the QuantEcon.py package implements the Kalman filter
• Instance data consists of:
– the moments (𝑥𝑡̂ , Σ𝑡 ) of the current prior.
– An instance of the LinearStateSpace class from QuantEcon.py.
The latter represents a linear state space model of the form
$$
Q := CC' \quad \text{and} \quad R := HH'
$$
• The class Kalman from the QuantEcon.py package has a number of methods, some of which we will wait to use until we study more advanced applications in subsequent lectures.
• Methods pertinent for this lecture are:
– prior_to_filtered, which updates $(\hat{x}_t, \Sigma_t)$ to $(\hat{x}_t^F, \Sigma_t^F)$
– filtered_to_forecast, which updates the filtering distribution to the predictive distribution – which becomes the new prior $(\hat{x}_{t+1}, \Sigma_{t+1})$
– update, which combines the last two methods
– stationary_values, which computes the solution to (25.7) and the corresponding (stationary) Kalman gain
You can view the program on GitHub.
25.5 Exercises
Exercise 25.5.1
Consider the following simple application of the Kalman filter, loosely based on [Ljungqvist and Sargent, 2018], section
2.9.2.
Suppose that
• all variables are scalars
• the hidden state {𝑥𝑡 } is in fact constant, equal to some 𝜃 ∈ ℝ unknown to the modeler
# Parameters
θ = 10  # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)

# Prior (the values x_hat_0 = 8, Σ_0 = 1 also appear in the next exercise)
x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)

# Draw observations and plot N successive predictive densities
N = 5
x, y = ss.simulate(N)
y = y.flatten()

# Set up plot
fig, ax = plt.subplots(figsize=(10,8))
xgrid = np.linspace(θ - 5, θ + 2, 200)

for i in range(N):
    # Record the current predicted mean and variance
    m, v = kalman.x_hat.item(), kalman.Sigma.item()
    # Plot, update filter
    ax.plot(xgrid, norm.pdf(xgrid, loc=m, scale=np.sqrt(v)), label=f'$t={i}$')
    kalman.update(y[i])

ax.legend(loc='upper left')
plt.show()
Exercise 25.5.2
The preceding figure gives some support to the idea that probability mass converges to 𝜃.
To get a better idea, choose a small 𝜖 > 0 and calculate
$$
z_t := 1 - \int_{\theta - \epsilon}^{\theta + \epsilon} p_t(x)\, dx
$$
for 𝑡 = 0, 1, 2, … , 𝑇 .
Plot $z_t$ against $t$, setting $\epsilon = 0.1$ and $T = 600$.
Your figure should show the error declining erratically, something like this
ϵ = 0.1
θ = 10 # Constant value of state x_t
A, C, G, H = 1, 0, 1, 1
ss = LinearStateSpace(A, C, G, H, mu_0=θ)
x_hat_0, Σ_0 = 8, 1
kalman = Kalman(ss, x_hat_0, Σ_0)
T = 600
z = np.empty(T)
x, y = ss.simulate(T)
y = y.flatten()
for t in range(T):
    # Record the current predicted mean and variance
    m, v = kalman.x_hat.item(), kalman.Sigma.item()
    # Probability mass outside the interval (θ - ϵ, θ + ϵ)
    z[t] = 1 - (norm.cdf(θ + ϵ, loc=m, scale=np.sqrt(v))
                - norm.cdf(θ - ϵ, loc=m, scale=np.sqrt(v)))
    kalman.update(y[t])

fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(range(T), z)
plt.show()
Exercise 25.5.3
As discussed above, if the shock sequence {𝑤𝑡 } is not degenerate, then it is not in general possible to predict 𝑥𝑡 without
error at time 𝑡 − 1 (and this would be the case even if we could observe 𝑥𝑡−1 ).
Let’s now compare the prediction 𝑥𝑡̂ made by the Kalman filter against a competitor who is allowed to observe 𝑥𝑡−1 .
This competitor will use the conditional expectation 𝔼[𝑥𝑡 | 𝑥𝑡−1 ], which in this case is 𝐴𝑥𝑡−1 .
The conditional expectation is known to be the optimal prediction method in terms of minimizing mean squared error.
(More precisely, the minimizer of $\mathbb{E}\, \|x_t - g(x_{t-1})\|^2$ with respect to $g$ is $g^*(x_{t-1}) := \mathbb{E}[x_t \mid x_{t-1}]$.)
Thus we are comparing the Kalman filter against a competitor who has more information (in the sense of being able to
observe the latent state) and behaves optimally in terms of minimizing squared error.
Our horse race will be assessed in terms of squared error.
In particular, your task is to generate a graph plotting observations of both $\|x_t - A x_{t-1}\|^2$ and $\|x_t - \hat{x}_t\|^2$ against $t$ for $t = 1, \ldots, 50$.
For the parameters, set 𝐺 = 𝐼, 𝑅 = 0.5𝐼 and 𝑄 = 0.3𝐼, where 𝐼 is the 2 × 2 identity.
Set
$$
A = \begin{pmatrix} 0.5 & 0.4 \\ 0.6 & 0.3 \end{pmatrix}
$$
Observe how, after an initial learning period, the Kalman filter performs quite well, even relative to the competitor who
predicts optimally with knowledge of the latent state.
# Define A, C, G, H
G = np.identity(2)
H = np.sqrt(0.5) * np.identity(2)
A = [[0.5, 0.4],
     [0.6, 0.3]]
C = np.sqrt(0.3) * np.identity(2)

# Set up the state space model and the Kalman filter
ss = LinearStateSpace(A, C, G, H)
x_hat_0 = np.zeros((2, 1))
Σ_0 = np.identity(2)
kn = Kalman(ss, x_hat_0, Σ_0)

# Print eigenvalues of A
print("Eigenvalues of A:")
print(eigvals(A))

# Print stationary Σ
S, K = kn.stationary_values()
print("Stationary prediction error variance:")
print(S)

# Simulate, recording both squared prediction errors
T = 50
x, y = ss.simulate(T)
e1 = np.empty(T-1)   # Kalman filter error
e2 = np.empty(T-1)   # conditional expectation error
for t in range(1, T):
    kn.update(y[:, t-1])
    e1[t-1] = np.sum((x[:, t] - kn.x_hat.flatten())**2)
    e2[t-1] = np.sum((x[:, t] - np.array(A) @ x[:, t-1])**2)

fig, ax = plt.subplots(figsize=(9,6))
ax.plot(range(1, T), e1, 'k-', lw=2, alpha=0.6,
        label='Kalman filter error')
ax.plot(range(1, T), e2, 'g-', lw=2, alpha=0.6,
        label='Conditional expectation error')
ax.legend()
plt.show()
Eigenvalues of A:
[ 0.9+0.j -0.1+0.j]
Stationary prediction error variance:
[[0.40329108 0.1050718 ]
[0.1050718 0.41061709]]
Exercise 25.5.4
Try varying the coefficient 0.3 in 𝑄 = 0.3𝐼 up and down.
Observe how the diagonal values in the stationary solution Σ (see (25.7)) increase and decrease in line with this coefficient.
The interpretation is that more randomness in the law of motion for 𝑥𝑡 causes more (permanent) uncertainty in prediction.
TWENTYSIX
In this quantecon lecture A First Look at the Kalman filter, we used a Kalman filter to estimate locations of a rocket.
In this lecture, we’ll use the Kalman filter to infer a worker’s human capital and the effort that the worker devotes to
accumulating human capital, neither of which the firm observes directly.
The firm learns about those things only by observing a history of the output that the worker generates for the firm, and
from understanding how that output depends on the worker’s human capital and how human capital evolves as a function
of the worker’s effort.
We’ll posit a rule that expresses how much the firm pays the worker each period as a function of the firm’s information each period.
In addition to what’s in Anaconda, this lecture will need the following libraries:
To conduct simulations, we bring in these imports, as in A First Look at the Kalman filter.
Intermediate Quantitative Economics with Python
Here
• ℎ𝑡 is the logarithm of human capital at time 𝑡
• 𝑢𝑡 is the logarithm of the worker’s effort at accumulating human capital at 𝑡
• 𝑦𝑡 is the logarithm of the worker’s output at time 𝑡
• $h_0 \sim \mathcal{N}(\hat{h}_0, \sigma_{h,0})$
• $u_0 \sim \mathcal{N}(\hat{u}_0, \sigma_{u,0})$
Based on information about the worker that the firm has at time 𝑡 ≥ 1, the firm pays the worker log wage
and at time 0 pays the worker a log wage equal to the unconditional mean of 𝑦0 :
$$
w_0 = g \hat{h}_0
$$
In using this payment rule, the firm is taking into account that the worker’s log output today is partly due to the random
component 𝑣𝑡 that comes entirely from luck, and that is assumed to be independent of ℎ𝑡 and 𝑢𝑡 .
$$
\begin{bmatrix} h_{t+1} \\ u_{t+1} \end{bmatrix}
= \begin{bmatrix} \alpha & \beta \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} h_t \\ u_t \end{bmatrix}
+ \begin{bmatrix} c \\ 0 \end{bmatrix} w_{t+1}
$$
$$
y_t = \begin{bmatrix} g & 0 \end{bmatrix} \begin{bmatrix} h_t \\ u_t \end{bmatrix} + v_t
$$
where
$$
x_t = \begin{bmatrix} h_t \\ u_t \end{bmatrix}, \quad
\hat{x}_0 = \begin{bmatrix} \hat{h}_0 \\ \hat{u}_0 \end{bmatrix}, \quad
\Sigma_0 = \begin{bmatrix} \sigma_{h,0} & 0 \\ 0 & \sigma_{u,0} \end{bmatrix}
$$
To compute the firm’s wage setting policy, we first create a namedtuple to store the parameters of the model
WorkerModel = namedtuple("WorkerModel",
                         ('A', 'C', 'G', 'R', 'xhat_0', 'Σ_0'))

def create_worker(α=.8, β=.2, c=.2, R=.5, g=1.0,
                  hhat_0=4, uhat_0=4, σ_h=4, σ_u=4):

    A = np.array([[α, β],
                  [0, 1]])
    C = np.array([[c],
                  [0]])
    G = np.array([g, 1])

    # Prior mean and covariance for the initial hidden state
    xhat_0 = np.array([[hhat_0],
                       [uhat_0]])
    Σ_0 = np.array([[σ_h, 0],
                    [0, σ_u]])

    return WorkerModel(A=A, C=C, G=G, R=R, xhat_0=xhat_0, Σ_0=Σ_0)
Please note how the WorkerModel namedtuple creates all of the objects required to compute an associated state-space
representation (26.2).
This is handy, because in order to simulate a history $\{y_t, h_t\}$ for a worker, we’ll want to form a state space system for him/her by using the LinearStateSpace class.
T = 100
x, y = ss.simulate(T)
y = y.flatten()
Next, to compute the firm’s policy for setting the log wage based on the information it has about the worker, we use the
Kalman filter described in this quantecon lecture A First Look at the Kalman filter.
In particular, we want to compute all of the objects in an “innovation representation”.
We have all the objects in hand required to form an innovations representation for the output process {𝑦𝑡 }𝑇𝑡=0 for a worker.
Let’s code that up now.
$$
\begin{aligned}
\hat{x}_{t+1} &= A\hat{x}_t + K_t a_t \\
y_t &= G\hat{x}_t + a_t
\end{aligned}
$$
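As a rough sketch of what such a computation involves, the gain sequence $\{K_t\}$ and innovations $\{a_t\}$ can be built with plain NumPy. The parameter values below are hypothetical, chosen only to exercise the shapes:

```python
import numpy as np

def innovations_rep(y_seq, A, G, C, R, x_hat_0, Σ_0):
    """Compute Kalman gains K_t and innovations a_t = y_t - G x̂_t."""
    Q = C @ C.T
    x_hat, Σ = x_hat_0, Σ_0
    K_seq, a_seq = [], []
    for y_t in y_seq:
        a = y_t - G @ x_hat                               # innovation
        K = A @ Σ @ G.T @ np.linalg.inv(G @ Σ @ G.T + R)  # gain
        x_hat = A @ x_hat + K @ a
        Σ = A @ Σ @ A.T - K @ G @ Σ @ A.T + Q
        K_seq.append(K)
        a_seq.append(a)
    return K_seq, a_seq

# Hypothetical 2-state, 1-observation system
A = np.array([[0.8, 0.2], [0.0, 1.0]])
C = np.array([[0.2], [0.0]])
G = np.array([[1.0, 1.0]])
R = np.array([[0.5]])
x_hat_0 = np.array([[4.0], [4.0]])
Σ_0 = 4 * np.eye(2)
y_seq = [np.array([[8.1]]), np.array([[7.9]]), np.array([[8.3]])]
K_seq, a_seq = innovations_rep(y_seq, A, G, C, R, x_hat_0, Σ_0)
```

The lecture itself uses the Kalman class for this; the sketch only shows the bookkeeping behind it.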
We can watch as the firm’s inference $E[u_0 \mid y^{t-1}]$ of the worker’s work ethic converges toward the hidden $u_0$, which is not directly observed by the firm.
fig, ax = plt.subplots(1, 2)
ax[1].plot(u_hat_t, label=r'$E[u_t|y^{t-1}]$')
ax[1].axhline(y=u_0, color='grey',
linestyle='dashed', label=fr'$u_0={u_0:.2f}$')
ax[1].set_xlabel('Time')
ax[1].set_ylabel(r'$E[u_t|y^{t-1}]$')
ax[1].set_title('Inferred work ethic over time')
ax[1].legend()
fig.tight_layout()
plt.show()
Let’s look at Σ0 and Σ𝑇 in order to see how much the firm learns about the hidden state during the horizon we have set.
print(Σ_t[:, :, 0])
[[4. 0.]
[0. 4.]]
print(Σ_t[:, :, -1])
[[0.08805027 0.00100377]
[0.00100377 0.00398351]]
Evidently, entries in the conditional covariance matrix become smaller over time.
It is enlightening to portray how conditional covariance matrices $\Sigma_t$ evolve by plotting confidence ellipsoids around $E[x_t \mid y^{t-1}]$ at various $t$’s.
plt.tight_layout()
plt.show()
Note how the accumulation of evidence 𝑦𝑡 affects the shape of the confidence ellipsoid as sample size 𝑡 grows.
Now let’s use our code to set the hidden state 𝑥0 to a particular vector in order to watch how a firm learns starting from
some 𝑥0 we are interested in.
For example, let’s say ℎ0 = 0 and 𝑢0 = 4.
Here is one way to do this.
T = 100
x, y = ss_example.simulate(T)
y = y.flatten()
h_0 = 0.0
u_0 = 4.0
Another way to accomplish the same goal is to use the following code.
T = 100
x, y = ss_example.simulate(T)
y = y.flatten()
h_0 = 0.0
u_0 = 4.0
For this worker, let’s generate a plot like the one above.
# First we compute the Kalman filter with initial xhat_0 and Σ_0
kalman = Kalman(ss, xhat_0, Σ_0)
Σ_t = []
y_hat_t = np.zeros(T-1)
u_hat_t = np.zeros(T-1)
ax[1].plot(u_hat_t, label=r'$E[u_t|y^{t-1}]$')
ax[1].axhline(y=u_0, color='grey',
linestyle='dashed', label=fr'$u_0={u_0:.2f}$')
ax[1].set_xlabel('Time')
ax[1].set_ylabel(r'$E[u_t|y^{t-1}]$')
ax[1].set_title('Inferred work ethic over time')
fig.tight_layout()
plt.show()
More generally, we can change some or all of the parameters defining a worker in our create_worker namedtuple.
Here is an example.
# We can set these parameters when creating a worker -- just like classes!
hard_working_worker = create_worker(α=.4, β=.8,
hhat_0=7.0, uhat_0=100, σ_h=2.5, σ_u=3.2)
print(hard_working_worker)
WorkerModel(A=array([[0.4, 0.8],
[0. , 1. ]]), C=array([[0.2],
[0. ]]), G=array([1., 1.]), R=0.5, xhat_0=array([[ 7.],
[100.]]), Σ_0=array([[2.5, 0. ],
[0. , 3.2]]))
We can also simulate the system for 𝑇 = 50 periods for different workers.
The difference between the inferred work ethics and true work ethics converges to 0 over time.
This shows that the filter is gradually teaching the worker and firm about the worker’s effort.
num_workers = 3
T = 50
fig, ax = plt.subplots(figsize=(7, 7))
for i in range(num_workers):
worker = create_worker(uhat_0=4+2*i)
simulate_workers(worker, T, ax)
ax.set_ylim(ymin=-2, ymax=2)
plt.show()
T = 50
fig, ax = plt.subplots(figsize=(7, 7))
# We can also use exact u_0=1 and h_0=2 for all workers
# These two lines set u_0=1 and h_0=2 for all workers
mu_0 = np.array([[1],
[2]])
Sigma_0 = np.zeros((2,2))
T = 50
fig, ax = plt.subplots(figsize=(7, 7))
mu_0_1 = np.array([[1],
[100]])
mu_0_2 = np.array([[1],
[30]])
Sigma_0 = np.zeros((2,2))
uhat_0s = 100
αs = 0.5
βs = 0.3
We can do lots of enlightening experiments by creating new types of workers and letting the firm learn about their hidden
(to the firm) states by observing just their output histories.
CHAPTER
TWENTYSEVEN
“Questioning a McCall worker is like having a conversation with an out-of-work friend: ‘Maybe you are
setting your sights too high’, or ‘Why did you quit your old job before you had a new one lined up?’ This is
real social science: an attempt to model, to understand, human behavior by visualizing the situation people
find themselves in, the options they face and the pros and cons as they themselves see them.” – Robert E.
Lucas, Jr.
In addition to what’s in Anaconda, this lecture will need the following libraries:
27.1 Overview
The McCall search model [McCall, 1970] helped transform economists’ way of thinking about labor markets.
To clarify notions such as “involuntary” unemployment, McCall modeled the decision problem of an unemployed worker
in terms of factors including
• current and likely future wages
• impatience
• unemployment compensation
To solve the decision problem McCall used dynamic programming.
Here we set up McCall’s model and use dynamic programming to analyze it.
As we’ll see, McCall’s model is not only interesting in its own right but also an excellent vehicle for learning dynamic
programming.
27.2.1 A Trade-Off
In order to optimally trade-off current and future rewards, we need to think about two things:
1. the current payoffs we get from different choices
2. the different states that those choices will lead to in next period
To weigh these two aspects of the decision problem, we need to assign values to states.
To this end, let $v^*(w)$ be the total lifetime value accruing to an unemployed worker who enters the current period unemployed when the wage is $w \in \mathbb{W}$.
In particular, the agent has wage offer 𝑤 in hand.
More precisely, 𝑣∗ (𝑤) denotes the value of the objective function (28.1) when an agent in this situation makes optimal
decisions now and at all future points in time.
Of course 𝑣∗ (𝑤) is not trivial to calculate because we don’t yet know what decisions are optimal and what aren’t!
But think of $v^*$ as a function that assigns to each possible wage $w$ the maximal lifetime value that can be obtained with that offer in hand.
A crucial observation is that this function 𝑣∗ must satisfy the recursion
$$
v^*(w) = \max \left\{ \frac{w}{1-\beta},\; c + \beta \sum_{w' \in \mathbb{W}} v^*(w') q(w') \right\}
\tag{27.1}
$$
Suppose for now that we are able to solve (27.1) for the unknown function 𝑣∗ .
Once we have this function in hand we can behave optimally (i.e., make the right choice between accept and reject).
All we have to do is select the maximal choice on the right-hand side of (27.1).
The optimal action is best thought of as a policy, which is, in general, a map from states to actions.
Given any 𝑤, we can read off the corresponding best choice (accept or reject) by picking the max on the right-hand side
of (27.1).
Thus, we have a map from ℝ to {0, 1}, with 1 meaning accept and 0 meaning reject.
$$
\sigma(w) := \mathbf{1} \left\{ \frac{w}{1-\beta} \ge c + \beta \sum_{w' \in \mathbb{W}} v^*(w') q(w') \right\}
$$
$$
\sigma(w) := \mathbf{1}\{w \ge \bar{w}\}
$$
where
$$
\bar{w} := (1-\beta) \left\{ c + \beta \sum_{w' \in \mathbb{W}} v^*(w') q(w') \right\}
\tag{27.2}
$$
Here 𝑤̄ (called the reservation wage) is a constant depending on 𝛽, 𝑐 and the wage distribution.
The agent should accept if and only if the current wage offer exceeds the reservation wage.
In view of (27.2), we can compute this reservation wage if we can compute the value function.
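Once an approximation to $v^*$ on the wage grid is available, the reservation wage in (27.2) is just a single weighted sum. A minimal sketch (the array names are placeholders, not the lecture's own code):

```python
import numpy as np

def reservation_wage(v_star, q, c, β):
    """w̄ = (1-β){c + β Σ_{w'} v*(w') q(w')}, as in (27.2)."""
    return (1 - β) * (c + β * np.sum(v_star * q))

# Toy inputs: two wage states with v* values 10 and 20, equal probabilities
w_bar = reservation_wage(np.array([10.0, 20.0]), np.array([0.5, 0.5]),
                         c=1.0, β=0.9)
```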
To put the above ideas into action, we need to compute the value function at each possible state 𝑤 ∈ 𝕎.
To simplify notation, let’s write $w(i)$ for the $i$-th wage value and $q(i)$ for its probability, for $i = 1, \ldots, n$.
The Bellman equation then becomes
$$
v^*(i) = \max \left\{ \frac{w(i)}{1-\beta},\; c + \beta \sum_{1 \le j \le n} v^*(j) q(j) \right\}
\quad \text{for } i = 1, \ldots, n
\tag{27.3}
$$
Step 1: pick an arbitrary initial guess $v \in \mathbb{R}^n$.
Step 2: compute a new vector $v' \in \mathbb{R}^n$ via
$$
v'(i) = \max \left\{ \frac{w(i)}{1-\beta},\; c + \beta \sum_{1 \le j \le n} v(j) q(j) \right\}
\quad \text{for } i = 1, \ldots, n
\tag{27.4}
$$
Step 3: calculate a measure of a discrepancy between 𝑣 and 𝑣′ , such as max𝑖 |𝑣(𝑖) − 𝑣′ (𝑖)|.
Step 4: if the deviation is larger than some fixed tolerance, set 𝑣 = 𝑣′ and go to step 2, else continue.
Step 5: return 𝑣.
For a small tolerance, the returned function 𝑣 is a close approximation to the value function 𝑣∗ .
The theory below elaborates on this point.
$$
(Tv)(i) = \max \left\{ \frac{w(i)}{1-\beta},\; c + \beta \sum_{1 \le j \le n} v(j) q(j) \right\}
\quad \text{for } i = 1, \ldots, n
\tag{27.5}
$$
(A new vector 𝑇 𝑣 is obtained from given vector 𝑣 by evaluating the r.h.s. at each 𝑖.)
The element 𝑣𝑘 in the sequence {𝑣𝑘 } of successive approximations corresponds to 𝑇 𝑘 𝑣.
• This is 𝑇 applied 𝑘 times, starting at the initial guess 𝑣
One can show that the conditions of the Banach fixed point theorem are satisfied by 𝑇 on ℝ𝑛 .
One implication is that 𝑇 has a unique fixed point in ℝ𝑛 .
• That is, a unique vector $\bar{v}$ such that $T\bar{v} = \bar{v}$.
Moreover, it’s immediate from the definition of 𝑇 that this fixed point is 𝑣∗ .
A second implication of the Banach contraction mapping theorem is that {𝑇 𝑘 𝑣} converges to the fixed point 𝑣∗ regardless
of 𝑣.
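The successive-approximation scheme can be sketched in plain NumPy, as a simplified stand-in for the jitted class used in the implementation below. The wage grid and offer distribution here are made up:

```python
import numpy as np

def solve_mccall(w, q, c=25.0, β=0.99, tol=1e-6, max_iter=5000):
    """Iterate v' = Tv until the sup-norm change falls below tol."""
    v = w / (1 - β)                    # initial guess: accept every offer
    for _ in range(max_iter):
        # The continuation value is the same scalar at every state i
        cont = c + β * np.sum(v * q)
        v_new = np.maximum(w / (1 - β), cont)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

w = np.linspace(10, 60, 50)            # hypothetical wage grid
q = np.ones(50) / 50                   # uniform offer distribution
v_star = solve_mccall(w, q)
```

Because $T$ is a contraction with modulus $\beta$, the loop converges from any initial guess, though with $\beta = 0.99$ it needs a few thousand iterations.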
27.3.3 Implementation
Our default for 𝑞, the distribution of the state process, will be Beta-binomial.
fig, ax = plt.subplots()
ax.plot(w_default, q_default, '-o', label='$q(w(i))$')
ax.set_xlabel('wages')
ax.set_ylabel('probabilities')
plt.show()
mccall_data = [
('c', float64), # unemployment compensation
('β', float64), # discount factor
('w', float64[:]), # array of wage values, w[i] = wage at state i
('q', float64[:]) # array of probabilities
]
Here’s a class that stores the data and computes the values of state-action pairs, i.e. the value in the maximum bracket on
the right hand side of the Bellman equation (27.4), given the current state and an arbitrary feasible action.
Default parameter values are embedded in the class.
@jitclass(mccall_data)
class McCallModel:

    def __init__(self, c=25, β=0.99, w=w_default, q=q_default):
        self.c, self.β = c, β
        self.w, self.q = w, q

    def state_action_values(self, i, v):
        # Values of accepting and of rejecting offer w[i], given guess v
        return np.array([self.w[i] / (1 - self.β),
                         self.c + self.β * np.sum(v * self.q)])
Based on these defaults, let’s try plotting the first few approximate value functions in the sequence {𝑇 𝑘 𝑣}.
We will start from guess 𝑣 given by 𝑣(𝑖) = 𝑤(𝑖)/(1 − 𝛽), which is the value of accepting at every given wage.
Here’s a function to implement this:
def plot_value_function_seq(mcm, ax, num_plots=6):
    """
    Plot a sequence of value function iterates on the axis ax.
    """
    n = len(mcm.w)
    v = mcm.w / (1 - mcm.β)
    v_next = np.empty_like(v)
    for i in range(num_plots):
        ax.plot(mcm.w, v, '-', alpha=0.4, label=f"iterate {i}")
        # Update guess
        for j in range(n):
            v_next[j] = np.max(mcm.state_action_values(j, v))
        v[:] = v_next  # copy contents into v
    ax.legend(loc='lower right')
Now let’s create an instance of McCallModel and watch iterations 𝑇 𝑘 𝑣 converge from below:
mcm = McCallModel()
fig, ax = plt.subplots()
ax.set_xlabel('wage')
ax.set_ylabel('value')
plot_value_function_seq(mcm, ax)
plt.show()
You can see that convergence is occurring: successive iterates are getting closer together.
Here’s a more serious iteration effort to compute the limit, which continues until measured deviation between successive
iterates is below tol.
Once we obtain a good approximation to the limit, we will use it to calculate the reservation wage.
We’ll be using JIT compilation via Numba to turbocharge our loops.
@jit(nopython=True)
def compute_reservation_wage(mcm,
                             max_iter=500,
                             tol=1e-6):

    # Simplify names
    c, β, w, q = mcm.c, mcm.β, mcm.w, mcm.q

    n = len(w)
    v = w / (1 - β)          # initial guess
    v_next = np.empty_like(v)
    i = 0
    error = tol + 1
    while i < max_iter and error > tol:
        for j in range(n):
            v_next[j] = np.max(mcm.state_action_values(j, v))
        error = np.max(np.abs(v_next - v))
        i += 1
        v[:] = v_next

    # Reservation wage from (27.2)
    return (1 - β) * (c + β * np.sum(v * q))
compute_reservation_wage(mcm)
47.316499710024964
Now that we know how to compute the reservation wage, let’s see how it varies with parameters.
In particular, let’s look at what happens when we change 𝛽 and 𝑐.
grid_size = 25
R = np.empty((grid_size, grid_size))

c_vals = np.linspace(10.0, 30.0, grid_size)   # grid for unemployment compensation
β_vals = np.linspace(0.9, 0.99, grid_size)    # grid for discount factor

for i, c in enumerate(c_vals):
    for j, β in enumerate(β_vals):
        mcm = McCallModel(c=c, β=β)
        R[i, j] = compute_reservation_wage(mcm)
fig, ax = plt.subplots()
ax.set_title("reservation wage")
ax.set_xlabel("$c$", fontsize=16)
ax.set_ylabel("$β$", fontsize=16)
ax.ticklabel_format(useOffset=False)
plt.show()
As expected, the reservation wage increases both with patience and with unemployment compensation.
The approach to dynamic programming just described is standard and broadly applicable.
But for our McCall search model there’s also an easier way that circumvents the need to compute the value function.
Let $h$ denote the continuation value:
$$
h := c + \beta \sum_{s' \in \mathbb{S}} v^*(s') q(s')
\tag{27.6}
$$
The Bellman equation can then be written as
$$
v^*(s') = \max \left\{ \frac{w(s')}{1-\beta},\; h \right\}
$$
Substituting this last equation into (27.6) yields
$$
h = c + \beta \sum_{s' \in \mathbb{S}} \max \left\{ \frac{w(s')}{1-\beta},\; h \right\} q(s')
\tag{27.7}
$$
This is a nonlinear equation in the single scalar $h$, which we can solve by iterating on
$$
h' = c + \beta \sum_{s' \in \mathbb{S}} \max \left\{ \frac{w(s')}{1-\beta},\; h \right\} q(s')
\tag{27.8}
$$
@jit(nopython=True)
def compute_reservation_wage_two(mcm,
                                 max_iter=500,
                                 tol=1e-5):

    # Simplify names
    c, β, w, q = mcm.c, mcm.β, mcm.w, mcm.q

    # == First compute h == #
    h = np.sum(w * q) / (1 - β)  # initial guess
    i = 0
    error = tol + 1
    while i < max_iter and error > tol:
        s = np.maximum(w / (1 - β), h)
        h_next = c + β * np.sum(s * q)
        error = np.abs(h_next - h)
        i += 1
        h = h_next

    return (1 - β) * h
27.5 Exercises
Exercise 27.5.1
Compute the average duration of unemployment when 𝛽 = 0.99 and 𝑐 takes the following values
c_vals = np.linspace(10, 40, 25)
That is, start the agent off as unemployed, compute their reservation wage given the parameters, and then simulate to see
how long it takes to accept.
Repeat a large number of times and take the average.
Plot mean unemployment duration as a function of 𝑐 in c_vals.
cdf = np.cumsum(q_default)
@jit(nopython=True)
def compute_stopping_time(w_bar, seed=1234):
np.random.seed(seed)
t = 1
while True:
# Generate a wage draw
w = w_default[qe.random.draw(cdf)]
# Stop when the draw is above the reservation wage
if w >= w_bar:
stopping_time = t
break
else:
t += 1
return stopping_time
@jit(nopython=True)
def compute_mean_stopping_time(w_bar, num_reps=100000):
    obs = np.empty(num_reps)
    for i in range(num_reps):
        obs[i] = compute_stopping_time(w_bar, seed=i)
    return obs.mean()
fig, ax = plt.subplots()
plt.show()
Exercise 27.5.2
The purpose of this exercise is to show how to replace the discrete wage offer distribution used above with a continuous
distribution.
This is a significant topic because many convenient distributions are continuous (i.e., have a density).
Fortunately, the theory changes little in our simple model.
Recall that ℎ in (27.6) denotes the value of not accepting a job in this period but then behaving optimally in all subsequent
periods:
To shift to a continuous offer distribution, we can replace (27.6) by
$$
h = c + \beta \int \max \left\{ \frac{w(s')}{1-\beta},\; h \right\} q(s')\, ds'
\tag{27.10}
$$
The aim is to solve this nonlinear equation by iteration, and from it obtain the reservation wage.
Try to carry this out, setting
• the state sequence {𝑠𝑡 } to be IID and standard normal and
• the wage function to be 𝑤(𝑠) = exp(𝜇 + 𝜎𝑠).
You will need to implement a new version of the McCallModel class that assumes a lognormal wage distribution.
Calculate the integral by Monte Carlo, by averaging over a large number of wage draws.
For default parameters, use c=25, β=0.99, σ=0.5, μ=2.5.
Once your code is working, investigate how the reservation wage changes with 𝑐 and 𝛽.
mccall_data_continuous = [
('c', float64), # unemployment compensation
('β', float64), # discount factor
('σ', float64), # scale parameter in lognormal distribution
('μ', float64), # location parameter in lognormal distribution
('w_draws', float64[:]) # draws of wages for Monte Carlo
]
@jitclass(mccall_data_continuous)
class McCallModelContinuous:

    def __init__(self, c=25, β=0.99, σ=0.5, μ=2.5, mc_size=1000):
        self.c, self.β, self.σ, self.μ = c, β, σ, μ
        # Monte Carlo draws of w(s) = exp(μ + σ s), with s standard normal
        self.w_draws = np.exp(μ + σ * np.random.randn(mc_size))

@jit(nopython=True)
def compute_reservation_wage_continuous(mcmc, max_iter=500, tol=1e-5):

    c, β, w_draws = mcmc.c, mcmc.β, mcmc.w_draws

    h = np.mean(w_draws) / (1 - β)  # initial guess
    i, error = 0, tol + 1
    while i < max_iter and error > tol:
        # Monte Carlo average approximates the integral in (27.10)
        h_next = c + β * np.mean(np.maximum(w_draws / (1 - β), h))
        error, i, h = np.abs(h_next - h), i + 1, h_next

    return (1 - β) * h
grid_size = 25
R = np.empty((grid_size, grid_size))

c_vals = np.linspace(10.0, 30.0, grid_size)
β_vals = np.linspace(0.9, 0.99, grid_size)

for i, c in enumerate(c_vals):
    for j, β in enumerate(β_vals):
        mcmc = McCallModelContinuous(c=c, β=β)
        R[i, j] = compute_reservation_wage_continuous(mcmc)
fig, ax = plt.subplots()
ax.set_title("reservation wage")
ax.set_xlabel("$c$", fontsize=16)
ax.set_ylabel("$β$", fontsize=16)
ax.ticklabel_format(useOffset=False)
plt.show()
TWENTYEIGHT
In addition to what’s in Anaconda, this lecture will need the following libraries:
28.1 Overview
Previously we looked at the McCall job search model [McCall, 1970] as a way of understanding unemployment and
worker decisions.
One unrealistic feature of the model is that every job is permanent.
In this lecture, we extend the McCall model by introducing job separation.
Once separation enters the picture, the agent comes to view
• the loss of a job as a capital loss, and
• a spell of unemployment as an investment in searching for an acceptable job
The other minor addition is that a utility function will be included to make worker preferences slightly more sophisticated.
We’ll need the following imports
At this stage the only difference from the baseline model is that we’ve added some flexibility to preferences by introducing
a utility function 𝑢.
It satisfies 𝑢′ > 0 and 𝑢″ < 0.
For now we will drop the separation of state process and wage process that we maintained for the baseline model.
In particular, we simply suppose that wage offers {𝑤𝑡 } are IID with common distribution 𝑞.
The set of possible wage values is denoted by 𝕎.
(Later we will go back to having a separate state process {𝑠𝑡 } driving random outcomes, since this formulation is usually
convenient in more sophisticated models.)
If currently unemployed, the worker either accepts or rejects the current offer 𝑤𝑡 .
If he accepts, then he begins work immediately at wage 𝑤𝑡 .
If he rejects, then he receives unemployment compensation 𝑐.
The process then repeats.
Note: We do not allow for job search while employed—this topic is taken up in a later lecture.
We drop time subscripts in what follows and primes denote next period values.
Let
• 𝑣(𝑤𝑒 ) be total lifetime value accruing to a worker who enters the current period employed with existing wage 𝑤𝑒
• $h(w)$ be the total lifetime value accruing to a worker who enters the current period unemployed and receives wage offer $w$.
Here value means the value of the objective function (28.1) when the worker makes optimal decisions at all future points
in time.
Our first aim is to obtain these functions.
Suppose for now that the worker can calculate the functions 𝑣 and ℎ and use them in his decision making.
Then $v$ and $h$ should satisfy
$$
v(w) = u(w) + \beta \left[ (1-\alpha) v(w) + \alpha \sum_{w' \in \mathbb{W}} h(w') q(w') \right]
$$
and
$$
h(w) = \max \left\{ v(w),\; u(c) + \beta \sum_{w' \in \mathbb{W}} h(w') q(w') \right\}
$$
Rather than jumping straight into solving these equations, let’s see if we can simplify them somewhat.
(This process will be analogous to our second pass at the plain vanilla McCall model, where we simplified the Bellman
equation.)
First, let
$$
d := \sum_{w' \in \mathbb{W}} h(w') q(w')
$$
We’ll use the same iterative approach to solving the Bellman equations that we adopted in the first job search lecture.
Here this amounts to
1. make guesses for 𝑑 and 𝑣
2. plug these guesses into the right-hand sides of (28.5) and (28.6)
3. update the left-hand sides from this rule and then repeat
In other words, we are iterating using the rules
$$
d' = \sum_{w' \in \mathbb{W}} \max \left\{ v(w'),\; u(c) + \beta d \right\} q(w')
$$
$$
v'(w) = u(w) + \beta \left[ (1-\alpha) v(w) + \alpha d \right]
$$
28.4 Implementation
@njit
def u(c, σ=2.0):
return (c**(1 - σ) - 1) / (1 - σ)
Also, here’s a default wage distribution, based around the BetaBinomial distribution:
Here’s our jitted class for the McCall model with separation.
mccall_data = [
('α', float64), # job separation rate
('β', float64), # discount factor
('c', float64), # unemployment compensation
('w', float64[:]), # list of wage values
('q', float64[:]) # pmf of random variable w
]
@jitclass(mccall_data)
class McCallModel:
    """
    Stores the parameters and functions associated with a given model.
    """

    def __init__(self, α=0.2, β=0.98, c=6.0, w=w_default, q=q_default):
        self.α, self.β, self.c, self.w, self.q = α, β, c, w, q

@njit
def update(mcm, v, d):
    " One update of the iteration rules, given current guesses v and d. "

    α, β, c, w, q = mcm.α, mcm.β, mcm.c, mcm.w, mcm.q

    v_new = np.empty_like(v)
    for i in range(len(w)):
        v_new[i] = u(w[i]) + β * ((1 - α) * v[i] + α * d)
    d_new = np.sum(np.maximum(v, u(c) + β * d) * q)

    return v_new, d_new
Now we iterate until successive realizations are closer together than some small tolerance level.
We then return the current iterate as an approximate solution.
@njit
def solve_model(mcm, tol=1e-5, max_iter=2000):
    """
    Iterates to convergence on the Bellman equations
    """
    v, d = np.ones_like(mcm.w), 1.0   # initial guesses
    i, error = 0, tol + 1
    while error > tol and i < max_iter:
        v_new, d_new = update(mcm, v, d)
        error = max(np.max(np.abs(v_new - v)), np.abs(d_new - d))
        v, d, i = v_new, d_new, i + 1

    return v, d
mcm = McCallModel()
v, d = solve_model(mcm)
h = u(mcm.c) + mcm.β * d
fig, ax = plt.subplots()
plt.show()
The value 𝑣 is increasing because higher 𝑤 generates a higher wage flow conditional on staying employed.
Here’s a function compute_reservation_wage that takes an instance of McCallModel and returns the associated reservation wage.
@njit
def compute_reservation_wage(mcm):
    """
    Computes the reservation wage of an instance of the McCall model
    by finding the smallest w such that v(w) >= h.
    """

    v, d = solve_model(mcm)
    h = u(mcm.c) + mcm.β * d

    i = np.searchsorted(v, h, side='right')
    w_bar = mcm.w[i]

    return w_bar
Next we will investigate how the reservation wage varies with parameters.
In each instance below, we’ll show you a figure and then ask you to reproduce it in the exercises.
As expected, higher unemployment compensation causes the worker to hold out for higher wages.
In effect, the cost of continuing job search is reduced.
Finally, let’s look at how 𝑤̄ varies with the job separation rate 𝛼.
Higher 𝛼 translates to a greater chance that a worker will face termination in each period once employed.
Once more, the results are in line with our intuition.
If the separation rate is high, then the benefit of holding out for a higher wage falls.
Hence the reservation wage is lower.
28.6 Exercises
Exercise 28.6.1
Reproduce all the reservation wage figures shown above.
Regarding the values on the horizontal axis, use
grid_size = 25
c_vals = np.linspace(2, 12, grid_size) # unemployment compensation
beta_vals = np.linspace(0.8, 0.99, grid_size) # discount factors
alpha_vals = np.linspace(0.05, 0.5, grid_size) # separation rate
mcm = McCallModel()

w_bar_vals = np.empty_like(c_vals)

fig, ax = plt.subplots()

for i, c in enumerate(c_vals):
    mcm.c = c
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set(xlabel='unemployment compensation',
       ylabel='reservation wage')
ax.plot(c_vals, w_bar_vals, label=r'$\bar w$ as a function of $c$')
ax.legend()
plt.show()
mcm = McCallModel()

fig, ax = plt.subplots()

for i, β in enumerate(beta_vals):
    mcm.β = β
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set(xlabel='discount factor', ylabel='reservation wage')
ax.plot(beta_vals, w_bar_vals, label=r'$\bar w$ as a function of $\beta$')
ax.legend()
plt.show()
mcm = McCallModel()

fig, ax = plt.subplots()

for i, α in enumerate(alpha_vals):
    mcm.α = α
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set(xlabel='separation rate', ylabel='reservation wage')
ax.plot(alpha_vals, w_bar_vals, label=r'$\bar w$ as a function of $\alpha$')
ax.legend()
plt.show()
CHAPTER
TWENTYNINE

JOB SEARCH III: FITTED VALUE FUNCTION ITERATION
Contents
29.1 Overview
In this lecture we again study the McCall job search model with separation, but now with a continuous wage distribution.
While we already considered continuous wage distributions briefly in the exercises of the first job search lecture, the change
was relatively trivial in that case.
This is because we were able to reduce the problem to solving for a single scalar value (the continuation value).
Here, with separation, the change is less trivial, since a continuous wage distribution leads to an uncountably infinite state
space.
The infinite state space leads to additional challenges, particularly when it comes to applying value function iteration (VFI).
These challenges will lead us to modify VFI by adding an interpolation step.
The combination of VFI and this interpolation step is called fitted value function iteration (fitted VFI).
Fitted VFI is very common in practice, so we will take some time to work through the details.
We will use the following imports:
Intermediate Quantitative Economics with Python
29.2 The Algorithm
The model is the same as the McCall model with job separation we studied before, except that the wage offer distribution is continuous.
We are going to start with the two Bellman equations we obtained for the model with job separation after a simplifying
transformation.
Modified to accommodate continuous wage draws, they take the following form:

𝑣(𝑤) = 𝑢(𝑤) + 𝛽 [(1 − 𝛼)𝑣(𝑤) + 𝛼𝑑]

and

𝑑 = ∫ max { 𝑣(𝑤′ ), 𝑢(𝑐) + 𝛽𝑑 } 𝑞(𝑤′ ) 𝑑𝑤′

where 𝑞 is the density of the wage offer distribution.
526 Chapter 29. Job Search III: Fitted Value Function Iteration
The procedure is:
1. Begin with an array v representing the values of an initial guess of the value function on some grid points {𝑤𝑖 }.
2. Build a function 𝑣 on the state space ℝ+ by interpolation or approximation, based on v and {𝑤𝑖 }.
3. Obtain and record the samples of the updated function 𝑣′ (𝑤𝑖 ) on each grid point 𝑤𝑖 .
4. Unless some stopping condition is satisfied, take this as the new array and go to step 1.
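The loop just described can be sketched in a few lines. The update rule T below is a toy contraction chosen only to make the sketch self-contained — it is not the McCall Bellman operator:

```python
import numpy as np

def fitted_vfi(T, w_grid, v_init, tol=1e-6, max_iter=500):
    v_vals = v_init
    for _ in range(max_iter):
        # Step 2: build a function on the state space by linear interpolation
        v = lambda x, vv=v_vals: np.interp(x, w_grid, vv)
        # Step 3: record the updated function on each grid point
        v_new = np.array([T(v, w) for w in w_grid])
        # Step 4: stop when successive arrays are close
        if np.max(np.abs(v_new - v_vals)) < tol:
            return v_new
        v_vals = v_new
    return v_vals

# Toy operator with fixed point v(w) = w on [0, 1]
w_grid = np.linspace(0, 1, 11)
v_star = fitted_vfi(lambda v, w: max(w, 0.9 * v(w)), w_grid, np.zeros(11))
```

The same skeleton carries over to the model below once T is replaced by the true Bellman update.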
How should we go about step 2?
This is a problem of function approximation, and there are many ways to approach it.
What’s important here is that the function approximation scheme must not only produce a good approximation to each 𝑣,
but also combine well with the broader iteration algorithm described above.
One good choice in both respects is continuous piecewise linear interpolation.
This method
1. combines well with value function iteration (see, e.g., [Gordon, 1995] or [Stachurski, 2008]) and
2. preserves useful shape properties such as monotonicity and concavity/convexity.
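The second property is easy to check numerically: if the interpolation data are increasing, the np.interp interpolant is increasing too. A quick sanity check:

```python
import numpy as np

# Piecewise linear interpolation of increasing data yields an increasing
# interpolant on any evaluation grid.
x_pts = np.linspace(0, 1, 6)
y_pts = x_pts**2          # increasing (and convex) data
x_fine = np.linspace(0, 1, 200)
y_fine = np.interp(x_fine, x_pts, y_pts)
assert np.all(np.diff(y_fine) >= 0)  # monotone on the fine grid
```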
Linear interpolation will be implemented using numpy.interp.
The next figure illustrates piecewise linear interpolation of an arbitrary function on grid points 0, 0.2, 0.4, 0.6, 0.8, 1.
def f(x):
    y1 = 2 * np.cos(6 * x) + np.sin(14 * x)
    return y1 + 2.5

c_grid = np.linspace(0, 1, 6)
f_grid = np.linspace(0, 1, 150)

def Af(x):
    return np.interp(x, c_grid, f(c_grid))

fig, ax = plt.subplots()

ax.plot(f_grid, f(f_grid), label='true function')
ax.plot(f_grid, Af(f_grid), label='linear approximation')
ax.vlines(c_grid, c_grid * 0, f(c_grid), linestyle='dashed', alpha=0.5)
ax.legend(loc="upper center")
ax.set(xlim=(0, 1), ylim=(0, 6))
plt.show()
29.3 Implementation
The first step is to build a jitted class for the McCall model with separation and a continuous wage offer distribution.
We will take the utility function to be the log function for this application, with 𝑢(𝑐) = ln 𝑐.
We will adopt the lognormal distribution for wages, with 𝑤 = exp(𝜇 + 𝜎𝑧) where 𝑧 is standard normal and 𝜇, 𝜎 are parameters.
@njit
def lognormal_draws(n=1000, μ=2.5, σ=0.5, seed=1234):
    np.random.seed(seed)
    z = np.random.randn(n)
    w_draws = np.exp(μ + σ * z)
    return w_draws
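As a quick sanity check on these draws, the mean of the lognormal distribution is exp(𝜇 + 𝜎²/2), which the sample mean should approximate. The check below is plain NumPy and independent of the jitted code above:

```python
import numpy as np

# With μ = 2.5 and σ = 0.5, the lognormal mean is exp(μ + σ²/2) ≈ 13.8
μ, σ = 2.5, 0.5
rng = np.random.default_rng(1234)
w = np.exp(μ + σ * rng.standard_normal(200_000))

sample_mean = w.mean()
exact_mean = np.exp(μ + σ**2 / 2)
assert abs(sample_mean - exact_mean) < 0.1
```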
mccall_data_continuous = [
('c', float64), # unemployment compensation
('α', float64), # job separation rate
('β', float64), # discount factor
('w_grid', float64[:]), # grid of points for fitted VFI
('w_draws', float64[:]) # draws of wages for Monte Carlo
]
@jitclass(mccall_data_continuous)
class McCallModelContinuous:

    def __init__(self,
                 c=1,
                 α=0.1,
                 β=0.96,
                 grid_min=1e-10,
                 grid_max=5,
                 grid_size=100,
                 w_draws=lognormal_draws()):

        self.c, self.α, self.β = c, α, β
        self.w_grid = np.linspace(grid_min, grid_max, grid_size)
        self.w_draws = w_draws

    def update(self, v, d):

        # Simplify names
        c, α, β = self.c, self.α, self.β
        w = self.w_grid
        u = lambda x: np.log(x)

        # Interpolate array represented value function
        vf = lambda x: np.interp(x, w, v)

        # Update d using Monte Carlo to evaluate integral
        d_new = np.mean(np.maximum(vf(self.w_draws), u(c) + β * d))

        # Update v
        v_new = u(w) + β * ((1 - α) * v + α * d)

        return v_new, d_new
@njit
def solve_model(mcm, tol=1e-5, max_iter=2000):
    """
    Iterates to convergence on the Bellman equations
    """
    v = np.ones_like(mcm.w_grid)   # Initial guess of v
    d = 1.0                        # Initial guess of d
    i = 0
    error = tol + 1

    while error > tol and i < max_iter:
        v_new, d_new = mcm.update(v, d)
        error = max(np.max(np.abs(v_new - v)), np.abs(d_new - d))
        v, d = v_new, d_new
        i += 1

    return v, d
@njit
def compute_reservation_wage(mcm):
    """
    Computes the reservation wage of an instance of the McCall model
    by finding the smallest w such that v(w) >= h.

    If no such w exists, then w_bar is set to np.inf.
    """
    u = lambda x: np.log(x)

    v, d = solve_model(mcm)
    h = u(mcm.c) + mcm.β * d

    w_bar = np.inf
    for i, wage in enumerate(mcm.w_grid):
        if v[i] > h:
            w_bar = wage
            break

    return w_bar
The exercises ask you to explore the solution and how it changes with parameters.
29.4 Exercises
Exercise 29.4.1
Use the code above to explore what happens to the reservation wage when the wage parameter 𝜇 changes.
Use the default parameters and 𝜇 in mu_vals = np.linspace(0.0, 2.0, 15).
Is the impact on the reservation wage as you expected?
mcm = McCallModelContinuous()

mu_vals = np.linspace(0.0, 2.0, 15)
w_bar_vals = np.empty_like(mu_vals)

fig, ax = plt.subplots()

for i, m in enumerate(mu_vals):
    mcm.w_draws = lognormal_draws(μ=m)
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set(xlabel='mean', ylabel='reservation wage')
ax.plot(mu_vals, w_bar_vals, label=r'$\bar w$ as a function of $\mu$')
ax.legend()
plt.show()
Not surprisingly, the agent is more inclined to wait when the distribution of offers shifts to the right.
Exercise 29.4.2
Let us now consider how the agent responds to an increase in volatility.
To try to understand this, compute the reservation wage when the wage offer distribution is uniform on (𝑚 − 𝑠, 𝑚 + 𝑠)
and 𝑠 varies.
The idea here is that we are holding the mean constant and spreading the support.
(This is a form of mean-preserving spread.)
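Before running the experiment, it is worth confirming the mean-preserving claim: the uniform distribution on (𝑚 − 𝑠, 𝑚 + 𝑠) has mean 𝑚 for every 𝑠, while its variance 𝑠²/3 grows with 𝑠.

```python
import numpy as np

# Uniform(m - s, m + s): the mean is m regardless of s, the variance is s²/3
m = 2.0
for s in (1.0, 1.5, 2.0):
    a, b = m - s, m + s
    assert (a + b) / 2 == m                        # mean unchanged
    assert np.isclose((b - a)**2 / 12, s**2 / 3)   # spread grows with s
```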
Use s_vals = np.linspace(1.0, 2.0, 15) and m = 2.0.
State how you expect the reservation wage to vary with 𝑠.
mcm = McCallModelContinuous()
s_vals = np.linspace(1.0, 2.0, 15)
m = 2.0
w_bar_vals = np.empty_like(s_vals)
fig, ax = plt.subplots()

for i, s in enumerate(s_vals):
    a, b = m - s, m + s
    mcm.w_draws = np.random.uniform(low=a, high=b, size=10_000)
    w_bar = compute_reservation_wage(mcm)
    w_bar_vals[i] = w_bar

ax.set(xlabel='volatility', ylabel='reservation wage')
ax.plot(s_vals, w_bar_vals, label=r'$\bar w$ as a function of wage volatility')
ax.legend()
plt.show()
CHAPTER
THIRTY

JOB SEARCH IV: CORRELATED WAGE OFFERS
Contents
In addition to what’s in Anaconda, this lecture will need the following libraries:
30.1 Overview
In this lecture we solve a McCall style job search model with persistent and transitory components to wages.
In other words, we relax the unrealistic assumption that randomness in wages is independent over time.
At the same time, we will go back to assuming that jobs are permanent and no separation occurs.
This is to keep the model relatively simple as we study the impact of correlation.
We will use the following imports:
30.2 The Model
Wages at each point in time are given by

𝑤𝑡 = exp(𝑧𝑡 ) + 𝑦𝑡

where

𝑦𝑡 = exp(𝜇 + 𝑠𝜁𝑡 )  and  𝑧𝑡+1 = 𝑑 + 𝜌𝑧𝑡 + 𝜎𝜖𝑡+1

Here {𝜁𝑡 } and {𝜖𝑡 } are both IID and standard normal.
Here {𝑦𝑡 } is a transitory component and {𝑧𝑡 } is persistent.
As before, the worker can either
1. accept an offer and work permanently at that wage, or
2. take unemployment compensation 𝑐 and wait till next period.
The value function satisfies the Bellman equation

𝑣∗ (𝑤, 𝑧) = max { 𝑢(𝑤)/(1 − 𝛽), 𝑢(𝑐) + 𝛽 𝔼𝑧 𝑣∗ (𝑤′ , 𝑧 ′ ) }

In this expression, 𝑢 is a utility function and 𝔼𝑧 is expectation of next period variables given current 𝑧.
The variable 𝑧 enters as a state in the Bellman equation because its current value helps predict future wages.
30.2.1 A Simplification
There is a way that we can reduce dimensionality in this problem, which greatly accelerates computation.
To start, let 𝑓 ∗ be the continuation value function, defined by

𝑓 ∗ (𝑧) ∶= 𝑢(𝑐) + 𝛽 𝔼𝑧 𝑣∗ (𝑤′ , 𝑧 ′ )

The value function can then be written as

𝑣∗ (𝑤, 𝑧) = max { 𝑢(𝑤)/(1 − 𝛽), 𝑓 ∗ (𝑧) }
Combining the last two expressions, we see that the continuation value function satisfies

𝑓 ∗ (𝑧) = 𝑢(𝑐) + 𝛽 𝔼𝑧 max { 𝑢(𝑤′ )/(1 − 𝛽), 𝑓 ∗ (𝑧 ′ ) }

We will solve this functional equation for 𝑓 ∗ by introducing the operator

𝑄𝑓(𝑧) = 𝑢(𝑐) + 𝛽 𝔼𝑧 max { 𝑢(𝑤′ )/(1 − 𝛽), 𝑓(𝑧 ′ ) }

By construction, 𝑓 ∗ is a fixed point of 𝑄.
Once we have 𝑓 ∗ , we can solve the search problem by stopping when the reward for accepting exceeds the continuation
value, or
𝑢(𝑤)/(1 − 𝛽) ≥ 𝑓 ∗ (𝑧)
For utility we take 𝑢(𝑐) = ln(𝑐).
The reservation wage is the wage where equality holds in the last expression.
That is,
𝑤̄(𝑧) ∶= exp(𝑓 ∗ (𝑧)(1 − 𝛽))   (30.1)
Our main aim is to solve for the reservation rule and study its properties and implications.
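To see where (30.1) comes from, substitute 𝑢(𝑤) = ln 𝑤 into the indifference condition and solve for the wage:

```latex
\frac{\ln \bar w(z)}{1 - \beta} = f^*(z)
\;\;\Longrightarrow\;\;
\ln \bar w(z) = f^*(z)(1 - \beta)
\;\;\Longrightarrow\;\;
\bar w(z) = \exp\bigl(f^*(z)(1 - \beta)\bigr)
```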
30.3 Implementation
job_search_data = [
('μ', float64), # transient shock log mean
('s', float64), # transient shock log variance
('d', float64), # shift coefficient of persistent state
('ρ', float64), # correlation coefficient of persistent state
('σ', float64), # state volatility
('β', float64), # discount factor
('c', float64), # unemployment compensation
('z_grid', float64[:]), # grid over the state space
('e_draws', float64[:,:]) # Monte Carlo draws for integration
]
Here’s a class that stores the data and the right hand side of the Bellman equation.
Default parameter values are embedded in the class.
@jitclass(job_search_data)
class JobSearch:

    def __init__(self,
                 μ=0.0,        # transient shock log mean
                 s=1.0,        # transient shock log variance
                 d=0.0,        # shift coefficient of persistent state
                 ρ=0.9,        # correlation coefficient of persistent state
                 σ=0.1,        # state volatility
                 β=0.98,       # discount factor
                 c=5,          # unemployment compensation
                 mc_size=1000,
                 grid_size=100):

        self.μ, self.s, self.d = μ, s, d
        self.ρ, self.σ, self.β, self.c = ρ, σ, β, c

        # Draw and store the shocks used for Monte Carlo integration
        np.random.seed(1234)
        self.e_draws = np.random.randn(2, mc_size)

        # Set up grid
        z_mean = d / (1 - ρ)
        z_sd = σ / np.sqrt(1 - ρ**2)
        k = 3  # std devs from mean
        a, b = z_mean - k * z_sd, z_mean + k * z_sd
        self.z_grid = np.linspace(a, b, grid_size)

    def parameters(self):
        """
        Return all parameters as a tuple.
        """
        return self.μ, self.s, self.d, \
               self.ρ, self.σ, self.β, self.c
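The grid bounds use the stationary mean 𝑑/(1 − 𝜌) and standard deviation 𝜎/√(1 − 𝜌²) of the AR(1) state. These can be verified by simulation — the check below is standalone NumPy, independent of the jitted class:

```python
import numpy as np

# Stationary moments of z' = d + ρ z + σ e with e standard normal:
# mean d / (1 - ρ), standard deviation σ / sqrt(1 - ρ²)
d, ρ, σ = 0.0, 0.9, 0.1
rng = np.random.default_rng(0)
z, draws = 0.0, np.empty(200_000)
for t in range(200_000):
    z = d + ρ * z + σ * rng.standard_normal()
    draws[t] = z

assert abs(draws.mean() - d / (1 - ρ)) < 0.01
assert abs(draws.std() - σ / np.sqrt(1 - ρ**2)) < 0.01
```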
@njit(parallel=True)
def Q(js, f_in, f_out):
    """
    Apply the operator Q.

    * js is an instance of JobSearch
    * f_in and f_out are arrays that represent f and Qf respectively
    """
    μ, s, d, ρ, σ, β, c = js.parameters()
    M = js.e_draws.shape[1]

    for i in prange(len(js.z_grid)):
        z = js.z_grid[i]
        expectation = 0.0
        for m in range(M):
            e1, e2 = js.e_draws[:, m]
            z_next = d + ρ * z + σ * e1
            go_val = np.interp(z_next, js.z_grid, f_in)  # f(z')
            y_next = np.exp(μ + s * e2)                  # y' draw
            w_next = np.exp(z_next) + y_next             # w' draw
            stop_val = np.log(w_next) / (1 - β)
            expectation += max(stop_val, go_val)
        expectation = expectation / M
        f_out[i] = np.log(c) + β * expectation
def compute_fixed_point(js,
                        tol=1e-4,
                        max_iter=1000,
                        verbose=True,
                        print_skip=25):

    f_init = np.full(len(js.z_grid), np.log(js.c))  # Initial condition
    f_out = np.empty_like(f_init)

    # Set up loop
    f_in = f_init
    i = 0
    error = tol + 1

    while i < max_iter and error > tol:
        Q(js, f_in, f_out)
        error = np.max(np.abs(f_in - f_out))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        f_in[:] = f_out

    return f_out
js = JobSearch()
qe.tic()
f_star = compute_fixed_point(js, verbose=True)
qe.toc()
6.438979864120483
Next we will compute and plot the reservation wage function defined in (30.1).

res_wage_function = np.exp(f_star * (1 - js.β))

fig, ax = plt.subplots()
ax.plot(js.z_grid, res_wage_function, label="reservation wage given $z$")
ax.set(xlabel="$z$", ylabel="wage")
ax.legend()
plt.show()
c_vals = 1, 2, 3

fig, ax = plt.subplots()

for c in c_vals:
    js = JobSearch(c=c)
    f_star = compute_fixed_point(js, verbose=False)
    res_wage_function = np.exp(f_star * (1 - js.β))
    ax.plot(js.z_grid, res_wage_function, label=rf"$\bar w$ at $c = {c}$")

ax.set(xlabel="$z$", ylabel="wage")
ax.legend()
plt.show()
As expected, higher unemployment compensation shifts the reservation wage up at all state values.
30.4 Unemployment Duration
Next we study how mean unemployment duration varies with unemployment compensation.
For simplicity we’ll fix the initial state at 𝑧𝑡 = 0.
def compute_unemployment_duration(js, seed=1234):

    f_star = compute_fixed_point(js, verbose=False)
    μ, s, d, ρ, σ, β, c = js.parameters()
    z_grid = js.z_grid
    np.random.seed(seed)

    @njit
    def f_star_function(z):
        return np.interp(z, z_grid, f_star)

    @njit
    def draw_tau(t_max=10_000):
        z = 0
        t = 0

        unemployed = True
        while unemployed and t < t_max:
            # draw current wage
            y = np.exp(μ + s * np.random.randn())
            w = np.exp(z) + y
            res_wage = np.exp(f_star_function(z) * (1 - β))
            # if optimal to stop, record t
            if w >= res_wage:
                unemployed = False
                τ = t
            # else increment data and state
            else:
                z = ρ * z + d + σ * np.random.randn()
                t += 1

        return τ

    @njit(parallel=True)
    def compute_expected_tau(num_reps=100_000):
        sum_value = 0
        for i in prange(num_reps):
            sum_value += draw_tau()
        return sum_value / num_reps

    return compute_expected_tau()
Let’s test this out with some possible values for unemployment compensation.

c_vals = np.linspace(1.0, 10.0, 8)   # illustrative grid of compensation values
durations = np.empty_like(c_vals)
for i, c in enumerate(c_vals):
    js = JobSearch(c=c)
    durations[i] = compute_unemployment_duration(js)

fig, ax = plt.subplots()
ax.plot(c_vals, durations)
ax.set_xlabel("unemployment compensation")
ax.set_ylabel("mean unemployment duration")
plt.show()
30.5 Exercises
Exercise 30.5.1
Investigate how mean unemployment duration varies with the discount factor 𝛽.
• What is your prior expectation?
• Do your results match up?
beta_vals = np.linspace(0.94, 0.99, 8)   # illustrative grid of discount factors
durations = np.empty_like(beta_vals)
for i, β in enumerate(beta_vals):
    js = JobSearch(β=β)
    durations[i] = compute_unemployment_duration(js)

fig, ax = plt.subplots()
ax.plot(beta_vals, durations)
ax.set_xlabel(r"$\beta$")
ax.set_ylabel("mean unemployment duration")
plt.show()
The figure shows that more patient individuals tend to wait longer before accepting an offer.
CHAPTER
THIRTYONE

MODELING CAREER CHOICE
Contents
In addition to what’s in Anaconda, this lecture will need the following libraries:
31.1 Overview
• Career and job within career both chosen to maximize expected discounted wage flow.
• Infinite horizon dynamic programming with two state variables.
31.2 Model
Each period the worker chooses among three options: stay put in the current job and career, take a new job in the same career, or take a new life (new job and new career).
The value function 𝑣(𝜃, 𝜖) of a worker with career quality 𝜃 and job quality 𝜖 satisfies

𝑣(𝜃, 𝜖) = max{𝐼, 𝐼𝐼, 𝐼𝐼𝐼}   (31.2)

where

𝐼 = 𝜃 + 𝜖 + 𝛽𝑣(𝜃, 𝜖)

𝐼𝐼 = 𝜃 + ∫ 𝜖′ 𝐺(𝑑𝜖′ ) + 𝛽 ∫ 𝑣(𝜃, 𝜖′ ) 𝐺(𝑑𝜖′ )

𝐼𝐼𝐼 = ∫ 𝜃′ 𝐹(𝑑𝜃′ ) + ∫ 𝜖′ 𝐺(𝑑𝜖′ ) + 𝛽 ∫ ∫ 𝑣(𝜃′ , 𝜖′ ) 𝐹(𝑑𝜃′ ) 𝐺(𝑑𝜖′ )

Evidently 𝐼, 𝐼𝐼 and 𝐼𝐼𝐼 correspond to “stay put”, “new job” and “new life”, respectively.
31.2.1 Parameterization
As in [Ljungqvist and Sargent, 2018], section 6.5, we will focus on a discrete version of the model, parameterized as
follows:
• both 𝜃 and 𝜖 take values in the set np.linspace(0, B, grid_size) — an even grid of points between 0
and 𝐵 inclusive
• grid_size = 50
• B = 5
• β = 0.95
The distributions 𝐹 and 𝐺 are discrete distributions generating draws from the grid points np.linspace(0, B, grid_size).
A very useful family of discrete distributions is the Beta-binomial family, with probability mass function
𝑝(𝑘 | 𝑛, 𝑎, 𝑏) = C(𝑛, 𝑘) 𝐵(𝑘 + 𝑎, 𝑛 − 𝑘 + 𝑏) / 𝐵(𝑎, 𝑏),   𝑘 = 0, … , 𝑛

where C(𝑛, 𝑘) is the binomial coefficient and 𝐵 is the beta function.
Interpretation:
• draw 𝑞 from a Beta distribution with shape parameters (𝑎, 𝑏)
• run 𝑛 independent binary trials, each with success probability 𝑞
• 𝑝(𝑘 | 𝑛, 𝑎, 𝑏) is the probability of 𝑘 successes in these 𝑛 trials
Nice properties:
• very flexible class of distributions, including uniform, symmetric unimodal, etc.
• only three parameters
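The figure code below uses a helper gen_probs that is not defined in this section; a sketch using SciPy's binom and beta special functions (an assumption — the lecture may construct it differently) is:

```python
import numpy as np
from scipy.special import binom, beta

def gen_probs(n, a, b):
    # Beta-binomial pmf p(k | n, a, b) for k = 0, ..., n
    probs = np.empty(n + 1)
    for k in range(n + 1):
        probs[k] = binom(n, k) * beta(k + a, n - k + b) / beta(a, b)
    return probs
```

With a = b = 1 the distribution is uniform over {0, …, n}, matching the default parameterization of 𝐹 and 𝐺.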
Here’s a figure showing the effect on the pmf of different shape parameters when 𝑛 = 50.
n = 50
a_vals = [0.5, 1, 100]
b_vals = [0.5, 1, 100]

fig, ax = plt.subplots(figsize=(10, 6))
for a, b in zip(a_vals, b_vals):
    ab_label = f'$a = {a:.1f}$, $b = {b:.1f}$'
    ax.plot(list(range(0, n+1)), gen_probs(n, a, b), '-o', label=ab_label)
ax.legend()
plt.show()
31.3 Implementation
We will first create a class CareerWorkerProblem which will hold the default parameterizations of the model and
an initial guess for the value function.
class CareerWorkerProblem:

    def __init__(self,
                 B=5.0,          # Upper bound
                 β=0.95,         # Discount factor
                 grid_size=50,   # Grid size
                 F_a=1,
                 F_b=1,
                 G_a=1,
                 G_b=1):

        self.β, self.grid_size, self.B = β, grid_size, B

        self.θ = np.linspace(0, B, grid_size)  # Set of θ values
        self.ϵ = np.linspace(0, B, grid_size)  # Set of ϵ values

        # BetaBinomial comes from quantecon.distributions
        self.F_probs = BetaBinomial(grid_size - 1, F_a, F_b).pdf()
        self.G_probs = BetaBinomial(grid_size - 1, G_a, G_b).pdf()
        self.F_mean = self.θ @ self.F_probs
        self.G_mean = self.ϵ @ self.G_probs
The following function takes an instance of CareerWorkerProblem and returns the corresponding Bellman operator
𝑇 and the greedy policy function.
In this model, 𝑇 is defined by 𝑇 𝑣(𝜃, 𝜖) = max{𝐼, 𝐼𝐼, 𝐼𝐼𝐼}, where 𝐼, 𝐼𝐼 and 𝐼𝐼𝐼 are as given in (31.2).
def operator_factory(cw, parallel_flag=True):
    """
    Returns jitted versions of the Bellman operator and the
    greedy policy function

    cw is an instance of ``CareerWorkerProblem``
    """
    θ, ϵ, β = cw.θ, cw.ϵ, cw.β
    F_probs, G_probs = cw.F_probs, cw.G_probs
    F_mean, G_mean = cw.F_mean, cw.G_mean

    @njit(parallel=parallel_flag)
    def T(v):
        "The Bellman operator"
        v_new = np.empty_like(v)
        for i in prange(len(v)):
            for j in prange(len(v)):
                v1 = θ[i] + ϵ[j] + β * v[i, j]                    # Stay put
                v2 = θ[i] + G_mean + β * v[i, :] @ G_probs        # New job
                v3 = G_mean + F_mean + β * F_probs @ v @ G_probs  # New life
                v_new[i, j] = max(v1, v2, v3)
        return v_new

    @njit
    def get_greedy(v):
        "Computes the v-greedy policy"
        σ = np.empty(v.shape)
        for i in range(len(v)):
            for j in range(len(v)):
                v1 = θ[i] + ϵ[j] + β * v[i, j]
                v2 = θ[i] + G_mean + β * v[i, :] @ G_probs
                v3 = G_mean + F_mean + β * F_probs @ v @ G_probs
                if v1 > max(v2, v3):
                    action = 1
                elif v2 > max(v1, v3):
                    action = 2
                else:
                    action = 3
                σ[i, j] = action
        return σ

    return T, get_greedy
Lastly, solve_model will take an instance of CareerWorkerProblem and iterate using the Bellman operator to
find the fixed point of the Bellman equation.
def solve_model(cw,
                use_parallel=True,
                tol=1e-4,
                max_iter=1000,
                verbose=True,
                print_skip=25):

    T, _ = operator_factory(cw, parallel_flag=use_parallel)

    # Set up loop
    v = np.full((cw.grid_size, cw.grid_size), 100.)  # Initial guess
    i = 0
    error = tol + 1

    while i < max_iter and error > tol:
        v_new = T(v)
        error = np.max(np.abs(v - v_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        v = v_new

    if i == max_iter and verbose:
        print("Failed to converge!")
    elif verbose:
        print(f"\nConverged in {i} iterations.")

    return v_new
cw = CareerWorkerProblem()
T, get_greedy = operator_factory(cw)
v_star = solve_model(cw, verbose=False)
greedy_star = get_greedy(v_star)
Interpretation:
• If both job and career are poor or mediocre, the worker will experiment with a new job and new career.
• If career is sufficiently good, the worker will hold it and experiment with new jobs until a sufficiently good one is
found.
• If both job and career are good, the worker will stay put.
Notice that the worker will always hold on to a sufficiently good career, but not necessarily hold on to even the best paying
job.
The reason is that high lifetime wages require both variables to be large, and the worker cannot change careers without
changing jobs.
• Sometimes a good job must be sacrificed in order to change to a better career.
31.4 Exercises
Exercise 31.4.1
Using the default parameterization in the class CareerWorkerProblem, generate and plot typical sample paths for 𝜃
and 𝜖 when the worker follows the optimal policy.
In particular, modulo randomness, reproduce the following figure (where the horizontal axis represents time)
Hint: To generate the draws from the distributions 𝐹 and 𝐺, use quantecon.random.draw().
F = np.cumsum(cw.F_probs)
G = np.cumsum(cw.G_probs)
v_star = solve_model(cw, verbose=False)
T, get_greedy = operator_factory(cw)
greedy_star = get_greedy(v_star)

def gen_path(optimal_policy, F, G, t=20):
    i = j = 0
    θ_index, ϵ_index = [], []
    for _ in range(t):
        if optimal_policy[i, j] == 2:    # New job
            j = qe.random.draw(G)
        elif optimal_policy[i, j] == 3:  # New life
            i, j = qe.random.draw(F), qe.random.draw(G)
        θ_index.append(i)
        ϵ_index.append(j)
    return cw.θ[θ_index], cw.ϵ[ϵ_index]

fig, ax = plt.subplots()
θ_path, ϵ_path = gen_path(greedy_star, F, G)
ax.plot(θ_path, label=r'$\theta$')
ax.plot(ϵ_path, label=r'$\epsilon$')
plt.legend()
plt.show()
Exercise 31.4.2
Let’s now consider how long it takes for the worker to settle down to a permanent job, given a starting point of (𝜃, 𝜖) =
(0, 0).
In other words, we want to study the distribution of the random variable
𝑇 ∗ ∶= the first point in time from which the worker's job no longer changes
Evidently, the worker’s job becomes permanent if and only if (𝜃𝑡 , 𝜖𝑡 ) enters the “stay put” region of (𝜃, 𝜖) space.
Letting 𝑆 denote this region, 𝑇 ∗ can be expressed as the first passage time to 𝑆 under the optimal policy:
𝑇 ∗ ∶= inf{𝑡 ≥ 0 | (𝜃𝑡 , 𝜖𝑡 ) ∈ 𝑆}
Collect 25,000 draws of this random variable and compute the median (which should be about 7).
Repeat the exercise with 𝛽 = 0.99 and interpret the change.
cw = CareerWorkerProblem()
F = np.cumsum(cw.F_probs)
G = np.cumsum(cw.G_probs)
T, get_greedy = operator_factory(cw)
v_star = solve_model(cw, verbose=False)
greedy_star = get_greedy(v_star)
@njit
def passage_time(optimal_policy, F, G):
    t = 0
    i = j = 0
    while True:
        if optimal_policy[i, j] == 1:    # Stay put
            return t
        elif optimal_policy[i, j] == 2:  # New job
            j = qe.random.draw(G)
        else:                            # New life
            i, j = qe.random.draw(F), qe.random.draw(G)
        t += 1

@njit(parallel=True)
def median_time(optimal_policy, F, G, M=25000):
    samples = np.empty(M)
    for i in prange(M):
        samples[i] = passage_time(optimal_policy, F, G)
    return np.median(samples)

median_time(greedy_star, F, G)
7.0
To compute the median with 𝛽 = 0.99 instead of the default value 𝛽 = 0.95, replace cw = CareerWorkerProblem() with cw = CareerWorkerProblem(β=0.99).
The medians are subject to randomness but should be about 7 and 14 respectively.
Not surprisingly, more patient workers will wait longer to settle down to their final job.
Exercise 31.4.3
Set the parameterization to G_a = G_b = 100 and generate a new optimal policy figure – interpret.
cw = CareerWorkerProblem(G_a=100, G_b=100)
T, get_greedy = operator_factory(cw)
v_star = solve_model(cw, verbose=False)
greedy_star = get_greedy(v_star)
In the new figure, you see that the region in which the worker stays put has grown, because the distribution of 𝜖 has become more concentrated around the mean, making high-paying jobs less likely.
CHAPTER
THIRTYTWO

JOB SEARCH V: ON-THE-JOB SEARCH
Contents
32.1 Overview
32.2 Model
Let 𝑥𝑡 denote the time-𝑡 job-specific human capital of a worker employed at a given firm and let 𝑤𝑡 denote current wages.
Let 𝑤𝑡 = 𝑥𝑡 (1 − 𝑠𝑡 − 𝜙𝑡 ), where
• 𝜙𝑡 is investment in job-specific human capital for the current role and
• 𝑠𝑡 is search effort, devoted to obtaining new offers from other firms.
For as long as the worker remains in the current job, evolution of {𝑥𝑡 } is given by 𝑥𝑡+1 = 𝑔(𝑥𝑡 , 𝜙𝑡 ).
When search effort at 𝑡 is 𝑠𝑡 , the worker receives a new job offer with probability 𝜋(𝑠𝑡 ) ∈ [0, 1].
The value of the offer, measured in job-specific human capital, is 𝑢𝑡+1 , where {𝑢𝑡 } is IID with common distribution 𝑓.
The worker can reject the current offer and continue with existing job.
Hence 𝑥𝑡+1 = 𝑢𝑡+1 if he/she accepts and 𝑥𝑡+1 = 𝑔(𝑥𝑡 , 𝜙𝑡 ) otherwise.
Let 𝑏𝑡+1 ∈ {0, 1} be a binary random variable, where 𝑏𝑡+1 = 1 indicates that the worker receives an offer at the end of
time 𝑡.
We can write

𝑥𝑡+1 = (1 − 𝑏𝑡+1 ) 𝑔(𝑥𝑡 , 𝜙𝑡 ) + 𝑏𝑡+1 max{𝑔(𝑥𝑡 , 𝜙𝑡 ), 𝑢𝑡+1 }   (32.1)

Agent’s objective: maximize expected discounted sum of wages via controls {𝑠𝑡 } and {𝜙𝑡 }.
Taking the expectation of 𝑣(𝑥𝑡+1 ) and using (32.1), the Bellman equation for this problem can be written as
𝑣(𝑥) = max_{𝑠+𝜙≤1} {𝑥(1 − 𝑠 − 𝜙) + 𝛽(1 − 𝜋(𝑠))𝑣[𝑔(𝑥, 𝜙)] + 𝛽𝜋(𝑠) ∫ 𝑣[𝑔(𝑥, 𝜙) ∨ 𝑢]𝑓(𝑑𝑢)}   (32.2)
32.2.1 Parameterization
In the implementation below, we will focus on the parameterization

𝑔(𝑥, 𝜙) = 𝐴(𝑥𝜙)^𝛼  with 𝐴 = 1.4 and 𝛼 = 0.6

together with 𝜋(𝑠) = √𝑠 and 𝑓 taken to be the Beta(2, 2) distribution, which has support (0, 1) and mean 0.5.
32.2.2 Back-of-the-Envelope Calculations
Before we solve the model, let’s make some quick calculations that provide intuition on what the solution should look like.
To begin, observe that the worker has two instruments to build capital and hence wages:
1. invest in capital specific to the current job via 𝜙
2. search for a new job with better job-specific capital match via 𝑠
Since wages are 𝑥(1 − 𝑠 − 𝜙), marginal cost of investment via either 𝜙 or 𝑠 is identical.
Our risk-neutral worker should focus on whatever instrument has the highest expected return.
The relative expected return will depend on 𝑥.
For example, suppose first that 𝑥 = 0.05
• If 𝑠 = 1 and 𝜙 = 0, then since 𝑔(𝑥, 𝜙) = 0, taking expectations of (32.1) gives expected next period capital equal
to 𝜋(𝑠)𝔼𝑢 = 𝔼𝑢 = 0.5.
• If 𝑠 = 0 and 𝜙 = 1, then next period capital is 𝑔(𝑥, 𝜙) = 𝑔(0.05, 1) ≈ 0.23.
Both rates of return are good, but the return from search is better.
Next, suppose that 𝑥 = 0.4
• If 𝑠 = 1 and 𝜙 = 0, then expected next period capital is again 0.5
• If 𝑠 = 0 and 𝜙 = 1, then 𝑔(𝑥, 𝜙) = 𝑔(0.4, 1) ≈ 0.8
Return from investment via 𝜙 dominates expected return from search.
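These back-of-the-envelope numbers come from the transition function 𝑔(𝑥, 𝜙) = 𝐴(𝑥𝜙)^𝛼 with the class defaults 𝐴 = 1.4 and 𝛼 = 0.6. A quick numerical check:

```python
# Check of g(x, φ) = A (x φ)**α at the values used in the text
A, α = 1.4, 0.6
g = lambda x, ϕ: A * (x * ϕ)**α

assert abs(g(0.05, 1) - 0.23) < 0.01   # small x: search dominates
assert abs(g(0.4, 1) - 0.8) < 0.02     # larger x: investment dominates
```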
Combining these observations gives us two informal predictions:
1. At any given state 𝑥, the two controls 𝜙 and 𝑠 will function primarily as substitutes — worker will focus on whichever
instrument has the higher expected return.
2. For sufficiently small 𝑥, search will be preferable to investment in job-specific human capital. For larger 𝑥, the
reverse will be true.
Now let’s turn to implementation, and see if we can match our predictions.
32.3 Implementation
We will set up a class JVWorker that holds the parameters of the model described above
class JVWorker:
    r"""
    A Jovanovic-type model of employment with on-the-job search.
    """

    def __init__(self,
                 A=1.4,
                 α=0.6,
                 β=0.96,         # Discount factor
                 π=np.sqrt,      # Search effort function
                 a=2,            # Parameter of f
                 b=2,            # Parameter of f
                 grid_size=50,
                 mc_size=100,
                 ɛ=1e-4):

        self.A, self.α, self.β, self.π = A, α, β, π
        self.mc_size, self.ɛ = mc_size, ɛ

        self.g = njit(lambda x, ϕ: A * (x * ϕ)**α)  # Transition function
        self.f_rvs = np.random.beta(a, b, mc_size)  # Monte Carlo draws from f

        # Max of grid is the max of a large quantile value for f and the
        # fixed point y = g(y, 1)
        grid_max = max(A**(1 / (1 - α)), 1.1)

        # Human capital
        self.x_grid = np.linspace(ɛ, grid_max, grid_size)
The function operator_factory takes an instance of this class and returns a jitted version of the Bellman operator
T, i.e.
𝑇 𝑣(𝑥) = max_{𝑠+𝜙≤1} 𝑤(𝑠, 𝜙)

where

𝑤(𝑠, 𝜙) ∶= 𝑥(1 − 𝑠 − 𝜙) + 𝛽(1 − 𝜋(𝑠))𝑣[𝑔(𝑥, 𝜙)] + 𝛽𝜋(𝑠) ∫ 𝑣[𝑔(𝑥, 𝜙) ∨ 𝑢]𝑓(𝑑𝑢)   (32.3)
When we represent 𝑣, it will be with a NumPy array v giving values on grid x_grid.
But to evaluate the right-hand side of (32.3), we need a function, so we replace the arrays v and x_grid with a function
v_func that gives linear interpolation of v on x_grid.
Inside the for loop, for each x in the grid over the state space, we set up the function 𝑤(𝑧) = 𝑤(𝑠, 𝜙) defined in (32.3).
The function is maximized over all feasible (𝑠, 𝜙) pairs.
Another function, get_greedy returns the optimal choice of 𝑠 and 𝜙 at each 𝑥, given a value function.
def operator_factory(jv, parallel_flag=True):
    """
    Returns a jitted version of the Bellman operator T

    jv is an instance of JVWorker
    """
    π, β = jv.π, jv.β
    x_grid, ɛ, mc_size = jv.x_grid, jv.ɛ, jv.mc_size
    f_rvs, g = jv.f_rvs, jv.g

    @njit
    def state_action_values(z, x, v):
        s, ϕ = z
        v_func = lambda x: np.interp(x, x_grid, v)

        integral = 0
        for m in range(mc_size):
            u = f_rvs[m]
            integral += v_func(max(g(x, ϕ), u))
        integral = integral / mc_size

        q = π(s) * integral + (1 - π(s)) * v_func(g(x, ϕ))
        return x * (1 - ϕ - s) + β * q
    @njit(parallel=parallel_flag)
    def T(v):
        v_new = np.empty_like(v)
        for i in prange(len(x_grid)):
            x = x_grid[i]
            # Search on a grid
            search_grid = np.linspace(ɛ, 1, 15)
            max_val = -1
            for s in search_grid:
                for ϕ in search_grid:
                    current_val = (state_action_values((s, ϕ), x, v)
                                   if s + ϕ <= 1 else -1)
                    if current_val > max_val:
                        max_val = current_val
            v_new[i] = max_val

        return v_new
    @njit
    def get_greedy(v):
        """
        Computes the v-greedy policy of a given function v
        """
        s_policy, ϕ_policy = np.empty_like(v), np.empty_like(v)

        for i in range(len(x_grid)):
            x = x_grid[i]
            # Search on a grid
            search_grid = np.linspace(ɛ, 1, 15)
            max_val = -1
            for s in search_grid:
                for ϕ in search_grid:
                    current_val = (state_action_values((s, ϕ), x, v)
                                   if s + ϕ <= 1 else -1)
                    if current_val > max_val:
                        max_val = current_val
                        max_s, max_ϕ = s, ϕ
            s_policy[i], ϕ_policy[i] = max_s, max_ϕ

        return s_policy, ϕ_policy

    return T, get_greedy
To solve the model, we will write a function that uses the Bellman operator and iterates to find a fixed point.
def solve_model(jv,
                use_parallel=True,
                tol=1e-4,
                max_iter=1000,
                verbose=True,
                print_skip=25):
    """
    Solves the model by value function iteration

    * jv is an instance of JVWorker
    """
    T, _ = operator_factory(jv, parallel_flag=use_parallel)

    # Set up loop
    v = jv.x_grid * 0.5  # Initial condition
    i = 0
    error = tol + 1

    while i < max_iter and error > tol:
        v_new = T(v)
        error = np.max(np.abs(v - v_new))
        i += 1
        if verbose and i % print_skip == 0:
            print(f"Error at iteration {i} is {error}.")
        v = v_new

    if i == max_iter and verbose:
        print("Failed to converge!")
    elif verbose:
        print(f"\nConverged in {i} iterations.")

    return v_new
Let’s generate the optimal policies and see what they look like.
jv = JVWorker()
T, get_greedy = operator_factory(jv)
v_star = solve_model(jv)
s_star, ϕ_star = get_greedy(v_star)