18.650 Fundamentals of Statistics
Roger Jin
Spring 2019
Contents
1 February 5, Introduction
2 February 12, Gaussian Mixtures
3 February 14, Maximum Likelihood
  3.1 Diffusion
4 Sampling Distribution
5 Appendix
  5.1 Useful R Commands
1 February 5, Introduction
1. A parameter we don't know is always denoted by a Greek letter, precisely because we don't know it.
2. No models are true, but some models are useful.
2 February 12, Gaussian Mixtures
Suppose that we have X ∼ N(θ₁, σ₁²) and Y ∼ N(θ₂, σ₂²). Then we can use linearity of expectation to get the moments of a mixture of these two distributions.
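For concreteness, here are the first two moments of the mixture (the mixing weight p and the name Z are my notation, not from lecture): if Z is drawn from X with probability p and from Y with probability 1 − p, then
\[
  E[Z] = p\,\theta_1 + (1 - p)\,\theta_2,
  \qquad
  E[Z^2] = p\,(\theta_1^2 + \sigma_1^2) + (1 - p)\,(\theta_2^2 + \sigma_2^2),
\]
and Var(Z) = E[Z²] − (E[Z])².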
3 February 14, Maximum Likelihood
3.1 Diffusion
We can model the evolution of the price of an asset as
\[
  \frac{dP_t}{P_t} = \mu\,dt + \sigma\,dW_t,
\]
where W is a standard Brownian motion (the driving noise of the diffusion). Then we can show that
\[
  W_t \mid W_0 = 0 \;\sim\; N(0, t).
\]
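As an illustrative sketch (not from lecture), one can simulate a discretized path of this SDE in R with an Euler–Maruyama scheme; the drift, volatility, and initial price below are made-up values:

set.seed(1)
n <- 1000; tmax <- 1; dt <- tmax / n
mu <- 0.05; sigma <- 0.2                    # made-up drift and volatility
dW <- rnorm(n, mean = 0, sd = sqrt(dt))     # Brownian increments over each step
P <- numeric(n + 1); P[1] <- 100            # made-up initial price
for (i in 1:n) P[i + 1] <- P[i] * (1 + mu * dt + sigma * dW[i])
plot(seq(0, tmax, by = dt), P, type = "l", xlab = "t", ylab = "P_t")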
Also, apparently a Laplace distribution is a Gaussian mixture in which the variances are themselves exponentially distributed.
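A quick R check of this claim (my sketch; the mixing rate 1 is arbitrary, and the Laplace scale 1/√2 follows from matching characteristic functions):

set.seed(2)
V <- rexp(1e5, rate = 1)                    # exponentially distributed variances
X <- rnorm(1e5, mean = 0, sd = sqrt(V))     # draw N(0, V) given each variance V
hist(X, breaks = 100, freq = FALSE)
curve(exp(-sqrt(2) * abs(x)) / sqrt(2), add = TRUE, col = "red")  # Laplace(0, 1/sqrt(2)) density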
4 Sampling Distribution
The sampling distribution is the distribution of the estimator if the experiment were repeated many times.
Lecture 4, Slide 30: without the multiplicative factor of √n, the distribution converges to a point mass at 0.
Suppose we have X₁, …, Xₙ with mean θ and variance σ². If we consider the sample mean
\[
  \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i,
\]
then
\[
  \operatorname{Var}(\bar{X}) = \frac{\sigma^2}{n}.
\]
Applying Chebyshev's inequality, we have
\[
  P\left( |\bar{X} - \theta| > k \right) \le \frac{\operatorname{Var}(\bar{X})}{k^2} = \frac{\sigma^2}{n k^2},
\]
which tends to 0 as n → ∞. Multiplying by √n scales the variance up by a factor of n, so Var(√n (X̄ − θ)) = σ² and the distribution no longer collapses to 0. The limit is in fact normal: by the Central Limit Theorem, √n (X̄ − θ) converges in distribution to N(0, σ²).
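To see the normality empirically, a short R sketch (mine, not from the slides): simulate the sampling distribution of √n (X̄ − θ) for Exp(1) data, where θ = σ² = 1, and compare it with the N(0, 1) limit:

set.seed(3)
n <- 100; reps <- 10000
z <- replicate(reps, sqrt(n) * (mean(rexp(n, rate = 1)) - 1))  # theta = 1 for Exp(1)
hist(z, breaks = 50, freq = FALSE)
curve(dnorm(x), add = TRUE, col = "red")    # N(0, sigma^2) limit, with sigma^2 = 1 here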
Theorem 4.1 (MSE)
\[
  \mathrm{MSE}(\hat{\theta})
  = E\left[ \left( \hat{\theta} - \theta \right)^2 \right]
  = \operatorname{Var}(\hat{\theta}) + \left( E[\hat{\theta}] - \theta \right)^2 .
\]
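To see why, add and subtract E[θ̂] inside the square; the cross term vanishes because E[θ̂ − E[θ̂]] = 0:
\[
  E\left[ (\hat{\theta} - \theta)^2 \right]
  = E\left[ (\hat{\theta} - E[\hat{\theta}])^2 \right]
  + 2\, \underbrace{E\left[ \hat{\theta} - E[\hat{\theta}] \right]}_{=0} \left( E[\hat{\theta}] - \theta \right)
  + \left( E[\hat{\theta}] - \theta \right)^2 .
\]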
5 Appendix
5.1 Useful R Commands
1. str(A): displays the internal structure of the object A (type, dimensions, first few values).
2. ggplot(): initializes a plot in the ggplot2 package; layers such as geom_point() are added with +.
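A minimal usage example of both commands (using the built-in mtcars data frame for illustration):

library(ggplot2)
str(mtcars)                                 # structure of the built-in mtcars data frame
ggplot(mtcars, aes(x = wt, y = mpg)) +      # car weight vs. miles per gallon
  geom_point()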