Introductory Econometrics (ECO-2400): Lecture 3
Prachi Singh & Partha Bandopadhyay
Ashoka University
January 26, 2021
Introduction
Topics in Chapter 2
1. Simple Regression Model (Section 2.1, JW)
2. Ordinary Least Squares Estimates (Section 2.2, JW)
3. Properties of OLS (Section 2.3, JW)
4. Units of Measurement & Functional Form (Section 2.4, JW)
5. Expected Values & Variances of OLS Estimators (Section 2.5, JW)
6. Regression Through the Origin (Section 2.6, JW)
What do we need to know about an estimator?
Unbiased: the estimator's expected value equals the population parameter, i.e.
$$E(\hat{\beta}_1) = \beta_1$$
Efficient: minimum variance among comparable estimators.
We need a few assumptions to show that our estimator is unbiased.
These assumptions are called the GAUSS-MARKOV assumptions.
Section 2.5
Assumptions
1. Linear in parameters, with population model $y = \beta_0 + \beta_1 x + u$.
2. Random sample of size $n$, $\{(x_i, y_i) : i = 1, 2, \ldots, n\}$. We have: $y_i = \beta_0 + \beta_1 x_i + u_i$.
3. Sample variance of the independent variable $x$ is non-zero.
4. Zero conditional mean: $E(u|x) = 0$.
5. The error $u$ has the same variance given any value of the explanatory variable: $Var(u|x) = \sigma^2$.

A simulated data-generating process satisfying these assumptions is sketched below.
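To make the assumptions concrete, here is a minimal simulation sketch. The parameter values ($\beta_0 = 1$, $\beta_1 = 2$, $\sigma = 0.5$) and sample size are arbitrary illustrative assumptions, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative population parameters
beta0, beta1, sigma = 1.0, 2.0, 0.5
n = 200

x = rng.uniform(0, 10, size=n)    # x varies, so its sample variance is non-zero (Assumption 3)
u = rng.normal(0, sigma, size=n)  # drawn independently of x: E(u|x) = 0, Var(u|x) = sigma^2
y = beta0 + beta1 * x + u         # linear in parameters (Assumption 1)
```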
How do we sample?
Always remember and try to be as clear as possible about:
a) The meaning of conditionality.
This was used in describing the PRF, in deriving estimates from a sample, etc. All expressions that use conditionality should be absolutely clear to you, e.g. $E(u|x)$ or $E(y|x)$.
b) Random sampling.
How do you collect samples? A minimal sketch of drawing a random sample is shown below.
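As one concrete illustration of random sampling, this sketch draws a simple random sample of $n$ pairs from a finite population of $(x, y)$ pairs. The population here is hypothetical, built the same way as the simulated data above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population of (x, y) pairs (assumed values, as before)
N = 10_000
x_pop = rng.uniform(0, 10, size=N)
y_pop = 1.0 + 2.0 * x_pop + rng.normal(0, 0.5, size=N)

# Simple random sampling: every subset of size n is equally likely
n = 200
idx = rng.choice(N, size=n, replace=False)
x_sample, y_sample = x_pop[idx], y_pop[idx]
```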
[Figure: Describing sampling]
OLS estimators are unbiased estimators
Proof:
Recollect (from Lecture 2):

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \tag{1}$$

This can also be written as:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})\, y_i}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \tag{2}$$

Total Sum of Squares of $x$: $SST_x = \sum_{i=1}^{n}(x_i - \bar{x})^2$.

And for a sample we can write $y_i = \beta_0 + \beta_1 x_i + u_i$. Hence we get:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i)}{SST_x} \tag{3}$$
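As a quick numerical sanity check, here is a minimal sketch (simulated data with assumed parameter values $\beta_0 = 1$, $\beta_1 = 2$, not from the lecture) confirming that formulas (1) and (2) give the same estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=n)

xbar, ybar = x.mean(), y.mean()
sst_x = ((x - xbar) ** 2).sum()

beta1_eq1 = ((x - xbar) * (y - ybar)).sum() / sst_x  # formula (1)
beta1_eq2 = ((x - xbar) * y).sum() / sst_x           # formula (2)
assert np.isclose(beta1_eq1, beta1_eq2)              # identical up to rounding
```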
Proof Continued:
Let's expand the numerator:

$$\sum_{i=1}^{n}(x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i) = \beta_0 \sum_{i=1}^{n}(x_i - \bar{x}) + \beta_1 \sum_{i=1}^{n} x_i(x_i - \bar{x}) + \sum_{i=1}^{n} u_i(x_i - \bar{x}) \tag{4}$$

Now let's focus on each of these terms:

Term 1: $\sum_{i=1}^{n}(x_i - \bar{x}) = 0$; why? (Deviations from the sample mean always sum to zero.)

Term 2: Recollect, or see Appendix A: $\sum_{i=1}^{n} x_i(x_i - \bar{x}) = \sum_{i=1}^{n}(x_i - \bar{x})^2 = SST_x$

Term 3: $\sum_{i=1}^{n} u_i(x_i - \bar{x})$

Substituting (4) into equation (3), the $\beta_1 SST_x$ term cancels against the denominator, and we get:

$$\hat{\beta}_1 = \beta_1 + \frac{\sum_{i=1}^{n} u_i(x_i - \bar{x})}{SST_x} \tag{5}$$
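Identity (5) can be checked numerically. The sketch below (same assumed simulation as before) relies on knowing the true errors $u_i$, which is only possible in a simulation:

```python
import numpy as np

rng = np.random.default_rng(2)                # same draw as the previous sketch
beta0, beta1, n = 1.0, 2.0, 200
x = rng.uniform(0, 10, size=n)
u = rng.normal(0, 0.5, size=n)
y = beta0 + beta1 * x + u

xbar = x.mean()
sst_x = ((x - xbar) ** 2).sum()
beta1_hat = ((x - xbar) * y).sum() / sst_x    # formula (2)
rhs = beta1 + (u * (x - xbar)).sum() / sst_x  # right-hand side of eq. (5)
assert np.isclose(beta1_hat, rhs)
```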
Proof Continued:
Taking expectation of equation (5). Remember that everything is conditioned on the $x$ values even if it is not explicitly stated:

$$E(\hat{\beta}_1) = E\left(\beta_1 + \frac{\sum_{i=1}^{n} u_i(x_i - \bar{x})}{SST_x}\right) \tag{6}$$

$$\implies E(\hat{\beta}_1) = E(\beta_1) + \frac{1}{SST_x}\, E\left(\sum_{i=1}^{n} u_i(x_i - \bar{x})\right)$$

$$\implies E(\hat{\beta}_1|x) = \beta_1 + \frac{1}{SST_x} \sum_{i=1}^{n}(x_i - \bar{x})\, E(u_i)$$

Now we know $E(u_i) = 0$ (from the zero conditional mean assumption), so we get:

$$E(\hat{\beta}_1) = \beta_1$$
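Unbiasedness can be illustrated with a Monte Carlo sketch (assumed parameter values as before): the average of $\hat{\beta}_1$ across many simulated random samples should be very close to $\beta_1$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, sigma, n = 1.0, 2.0, 0.5, 200

estimates = []
for _ in range(5000):                # 5000 simulated random samples
    x = rng.uniform(0, 10, size=n)
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=n)
    xbar = x.mean()
    b1 = ((x - xbar) * y).sum() / ((x - xbar) ** 2).sum()
    estimates.append(b1)

print(np.mean(estimates))            # close to beta1 = 2.0
```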
Proof Continued:
From equation 12 (see Lecture 2), we have:

$$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x} \tag{7}$$

We also have $y_i = \beta_0 + \beta_1 x_i + u_i$. Taking the average: $\bar{y} = \beta_0 + \beta_1 \bar{x} + \bar{u}$.

So we get:

$$\hat{\beta}_0 = \beta_0 + \beta_1 \bar{x} + \bar{u} - \hat{\beta}_1 \bar{x} \tag{8}$$

$$\implies \hat{\beta}_0 = \beta_0 + \bar{x}(\beta_1 - \hat{\beta}_1) + \bar{u}$$

Taking expectation on both sides, we get:

$$E(\hat{\beta}_0) = E(\beta_0) + \bar{x}\, E(\beta_1 - \hat{\beta}_1) + E(\bar{u})$$

The second and third terms are zero. Why? (We just showed $E(\hat{\beta}_1) = \beta_1$, and $E(\bar{u}) = 0$ since each $E(u_i) = 0$.)

So we get:

$$E(\hat{\beta}_0) = \beta_0$$

HENCE PROVED. A Monte Carlo check for $\hat{\beta}_0$ is sketched below.
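The same Monte Carlo setup (same assumed values as before) can be used to verify $E(\hat{\beta}_0) = \beta_0$:

```python
import numpy as np

rng = np.random.default_rng(4)
beta0, beta1, sigma, n = 1.0, 2.0, 0.5, 200

b0_estimates = []
for _ in range(5000):
    x = rng.uniform(0, 10, size=n)
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=n)
    xbar = x.mean()
    b1 = ((x - xbar) * y).sum() / ((x - xbar) ** 2).sum()
    b0_estimates.append(y.mean() - b1 * xbar)   # eq. (7)

print(np.mean(b0_estimates))                    # close to beta0 = 1.0
```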
PLEASE TAKE A BREAK AND THEN LISTEN TO THE REST OF THE LECTURE.
Variance of OLS estimators
Assumption 5 (Homoskedasticity): the error $u$ has the same variance given any value of the explanatory variable:

$$Var(u|x) = \sigma^2$$

Since $Var(u|x) = E(u^2|x) - (E(u|x))^2$ and, by Assumption 4, $E(u|x) = 0$, this is equivalent to:

$$E(u^2|x) = \sigma^2$$

Assumption 5 written for $y$ instead of $u$ (using the PRF $E(y|x) = \beta_0 + \beta_1 x$):

$$Var(y|x) = \sigma^2$$

A simulation contrasting homoskedastic and heteroskedastic errors is sketched below.
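Here is a minimal simulation sketch contrasting the two cases. The heteroskedastic form, with the standard deviation of $u$ proportional to $x$, is an arbitrary illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(1, 10, size=n)

u_homo = rng.normal(0, 0.5, size=n)   # Var(u|x) = 0.25 for every x (homoskedastic)
u_hetero = rng.normal(0, 0.1 * x)     # Var(u|x) grows with x (heteroskedastic)

y_homo = 1.0 + 2.0 * x + u_homo
y_hetero = 1.0 + 2.0 * x + u_hetero
```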
[Figure: Visualizing homoskedasticity]
[Figure: Visualizing heteroskedasticity]
Proof for the Variance of $\hat{\beta}_1$
$$\hat{\beta}_1 = \beta_1 + \frac{\sum_{i=1}^{n} u_i(x_i - \bar{x})}{SST_x} \tag{9}$$

Conditional on the $x$ values, $\beta_1$ is a constant and $SST_x$ is fixed, so:

$$Var(\hat{\beta}_1) = Var(\beta_1) + \frac{1}{SST_x^2}\, Var\left[\sum_{i=1}^{n} u_i(x_i - \bar{x})\right]$$

$$\implies Var(\hat{\beta}_1) = 0 + \frac{1}{SST_x^2}\, Var\left[\sum_{i=1}^{n} u_i(x_i - \bar{x})\right]$$

By random sampling, the $u_i$ are independent across $i$, so the variance of the sum is the sum of the variances:

$$\implies Var(\hat{\beta}_1) = 0 + \frac{1}{SST_x^2} \sum_{i=1}^{n}(x_i - \bar{x})^2\, Var(u_i)$$

Since $Var(u_i) = \sigma^2$ for all $i$, we get:

$$Var(\hat{\beta}_1) = \frac{\sigma^2}{SST_x^2}\,[SST_x] = \frac{\sigma^2}{SST_x}$$

DS: Cover $Var(\hat{\beta}_0)$. A Monte Carlo check of the variance formula is sketched below.
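A minimal Monte Carlo sketch (assumed values as before; the $x$ values are held fixed across replications, since the variance is conditional on $x$):

```python
import numpy as np

rng = np.random.default_rng(6)
beta0, beta1, sigma, n = 1.0, 2.0, 0.5, 200

x = rng.uniform(0, 10, size=n)       # fixed across replications
xbar = x.mean()
sst_x = ((x - xbar) ** 2).sum()

b1 = []
for _ in range(10000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=n)
    b1.append(((x - xbar) * y).sum() / sst_x)

print(np.var(b1))                    # simulated Var(beta1_hat)
print(sigma**2 / sst_x)              # theoretical sigma^2 / SST_x; should be close
```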
Homework:
Read pages 55 and 56 (related to the error variance).