Introduction to Econometrics (3rd Updated Edition)

by

James H. Stock and Mark W. Watson

Solutions to End-of-Chapter Exercises: Chapter 17*

(This version August 17, 2014)

*Limited distribution: For Instructors Only. Answers to all odd-numbered questions are provided to students on the textbook website. If you find errors in the solutions, please pass them along to us at [email protected].

©2015 Pearson Education, Inc.


 

17.1. (a) Suppose there are n observations. Let b1 be an arbitrary estimator of β1. Given the estimator b1, the sum of squared errors for the given regression model is

$$\sum_{i=1}^{n} (Y_i - b_1 X_i)^2.$$

$\hat{\beta}_1^{RLS}$, the restricted least squares estimator of β1, minimizes the sum of squared errors. That is, $\hat{\beta}_1^{RLS}$ satisfies the first-order condition for the minimization, which requires that the derivative of the sum of squared errors with respect to b1 equal zero:

$$\sum_{i=1}^{n} 2(Y_i - b_1 X_i)(-X_i) = 0.$$

Solving for b1 from the first-order condition leads to the restricted least squares estimator

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2}.$$
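As a quick numerical check of this closed-form expression (a sketch added for this solutions set, not part of the printed answer), the estimator can be computed directly from simulated data; the sample size, the seed, and the true value β1 = 2 are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the restricted model Y_i = beta1 * X_i + u_i (no intercept).
# n = 10_000 and beta1 = 2.0 are assumed illustration values.
n, beta1 = 10_000, 2.0
X = rng.normal(loc=1.0, scale=1.0, size=n)
u = rng.normal(size=n)
Y = beta1 * X + u

# Restricted least squares estimator from part (a):
beta1_rls = np.sum(X * Y) / np.sum(X**2)
print(f"beta1_RLS = {beta1_rls:.4f}")  # close to 2.0 in large samples
```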

(b) We show first that $\hat{\beta}_1^{RLS}$ is unbiased. We can represent the restricted least squares estimator $\hat{\beta}_1^{RLS}$ in terms of the regressors and errors:

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2} = \frac{\sum_{i=1}^{n} X_i (\beta_1 X_i + u_i)}{\sum_{i=1}^{n} X_i^2} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$

Thus

$$E(\hat{\beta}_1^{RLS}) = \beta_1 + E\left(\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}\right) = \beta_1 + E\left[\frac{\sum_{i=1}^{n} X_i E(u_i | X_1, \dots, X_n)}{\sum_{i=1}^{n} X_i^2}\right] = \beta_1,$$

where the second equality follows by using the law of iterated expectations, and the third equality follows from

$$\frac{\sum_{i=1}^{n} X_i E(u_i | X_1, \dots, X_n)}{\sum_{i=1}^{n} X_i^2} = 0$$


because the observations are i.i.d. and E(ui | Xi) = 0. (Note that E(ui | X1, …, Xn) = E(ui | Xi) because the observations are i.i.d.)

Under assumptions 1−3 of Key Concept 17.1, $\hat{\beta}_1^{RLS}$ is asymptotically normally distributed. The large-sample normal approximation to the limiting distribution of $\hat{\beta}_1^{RLS}$ follows from considering

$$\hat{\beta}_1^{RLS} - \beta_1 = \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} = \frac{\frac{1}{n}\sum_{i=1}^{n} X_i u_i}{\frac{1}{n}\sum_{i=1}^{n} X_i^2}.$$

Consider first the numerator, which is the sample average of vi = Xiui. By assumption 1 of Key Concept 17.1, vi has mean zero: $E(X_i u_i) = E[X_i E(u_i | X_i)] = 0$. By assumption 2, vi is i.i.d. By assumption 3, var(vi) is finite. Let $\bar{v} = \frac{1}{n}\sum_{i=1}^{n} X_i u_i$; then $\sigma_{\bar{v}}^2 = \sigma_v^2 / n$. Using the central limit theorem, the sample average satisfies

$$\bar{v}/\sigma_{\bar{v}} = \frac{1}{\sigma_v \sqrt{n}} \sum_{i=1}^{n} v_i \xrightarrow{d} N(0, 1)$$

or

$$\frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i u_i \xrightarrow{d} N(0, \sigma_v^2).$$

For the denominator, $X_i^2$ is i.i.d. with finite variance (because X has a finite fourth moment), so that by the law of large numbers

$$\frac{1}{n} \sum_{i=1}^{n} X_i^2 \xrightarrow{p} E(X^2).$$

Combining the results on the numerator and the denominator and applying Slutsky's theorem leads to


$$\sqrt{n}(\hat{\beta}_1^{RLS} - \beta_1) = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i u_i}{\frac{1}{n} \sum_{i=1}^{n} X_i^2} \xrightarrow{d} N\left(0, \frac{\mathrm{var}(X_i u_i)}{[E(X^2)]^2}\right).$$

(c) $\hat{\beta}_1^{RLS}$ is a linear estimator:

$$\hat{\beta}_1^{RLS} = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2} = \sum_{i=1}^{n} a_i Y_i, \quad \text{where } a_i = \frac{X_i}{\sum_{i=1}^{n} X_i^2}.$$

The weight ai (i = 1,…, n) depends on X1,…, Xn but not on Y1,…, Yn.

Thus

$$\hat{\beta}_1^{RLS} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$

$\hat{\beta}_1^{RLS}$ is conditionally unbiased because

$$\begin{aligned}
E(\hat{\beta}_1^{RLS} | X_1, \dots, X_n) &= E\left(\beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \dots, X_n\right) \\
&= \beta_1 + E\left(\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \dots, X_n\right) \\
&= \beta_1.
\end{aligned}$$

The final equality used the fact that

$$E\left(\frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \dots, X_n\right) = \frac{\sum_{i=1}^{n} X_i E(u_i | X_1, \dots, X_n)}{\sum_{i=1}^{n} X_i^2} = 0$$

because the observations are i.i.d. and E(ui | Xi) = 0.


(d) The conditional variance of $\hat{\beta}_1^{RLS}$, given X1,…, Xn, is

$$\begin{aligned}
\mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n) &= \mathrm{var}\left(\beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2} \,\Big|\, X_1, \dots, X_n\right) \\
&= \frac{\sum_{i=1}^{n} X_i^2 \,\mathrm{var}(u_i | X_1, \dots, X_n)}{(\sum_{i=1}^{n} X_i^2)^2} \\
&= \frac{\sum_{i=1}^{n} X_i^2 \sigma_u^2}{(\sum_{i=1}^{n} X_i^2)^2} \\
&= \frac{\sigma_u^2}{\sum_{i=1}^{n} X_i^2}.
\end{aligned}$$

(e) The conditional variance of the OLS estimator $\hat{\beta}_1$ is

$$\mathrm{var}(\hat{\beta}_1 | X_1, \dots, X_n) = \frac{\sigma_u^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2}.$$

Since

$$\sum_{i=1}^{n} (X_i - \bar{X})^2 = \sum_{i=1}^{n} X_i^2 - 2\bar{X} \sum_{i=1}^{n} X_i + n\bar{X}^2 = \sum_{i=1}^{n} X_i^2 - n\bar{X}^2 < \sum_{i=1}^{n} X_i^2,$$

the OLS estimator has a larger conditional variance:

$$\mathrm{var}(\hat{\beta}_1 | X_1, \dots, X_n) > \mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n).$$

The restricted least squares estimator $\hat{\beta}_1^{RLS}$ is more efficient.
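The variance ranking can be checked by Monte Carlo (an illustrative sketch, not part of the printed answer; the design below, with µX = 1 so that nX̄² is non-negligible, and the sample sizes and seed, are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta1, reps = 50, 2.0, 20_000
X = rng.normal(loc=1.0, scale=1.0, size=n)  # hold the X's fixed: we condition on them

rls = np.empty(reps)
ols = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=n)                  # sigma_u^2 = 1
    Y = beta1 * X + u
    rls[r] = np.sum(X * Y) / np.sum(X**2)
    Xd, Yd = X - X.mean(), Y - Y.mean()
    ols[r] = np.sum(Xd * Yd) / np.sum(Xd**2)

# Compare simulated variances with the formulas from parts (d) and (e).
print(rls.var(), 1 / np.sum(X**2))          # sigma_u^2 / sum X_i^2
print(ols.var(), 1 / np.sum((X - X.mean())**2))
```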

(f) Under assumption 5 of Key Concept 17.1, conditional on X1,…, Xn, $\hat{\beta}_1^{RLS}$ is normally distributed since it is a weighted average of the normally distributed variables ui:

$$\hat{\beta}_1^{RLS} = \beta_1 + \frac{\sum_{i=1}^{n} X_i u_i}{\sum_{i=1}^{n} X_i^2}.$$


Using the conditional mean and conditional variance of $\hat{\beta}_1^{RLS}$ derived in parts (c) and (d) respectively, the sampling distribution of $\hat{\beta}_1^{RLS}$, conditional on X1,…, Xn, is

$$\hat{\beta}_1^{RLS} \sim N\left(\beta_1, \frac{\sigma_u^2}{\sum_{i=1}^{n} X_i^2}\right).$$

(g) The estimator is

$$\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} Y_i}{\sum_{i=1}^{n} X_i} = \frac{\sum_{i=1}^{n} (\beta_1 X_i + u_i)}{\sum_{i=1}^{n} X_i} = \beta_1 + \frac{\sum_{i=1}^{n} u_i}{\sum_{i=1}^{n} X_i}.$$

The conditional variance is

$$\begin{aligned}
\mathrm{var}(\tilde{\beta}_1 | X_1, \dots, X_n) &= \mathrm{var}\left(\beta_1 + \frac{\sum_{i=1}^{n} u_i}{\sum_{i=1}^{n} X_i} \,\Big|\, X_1, \dots, X_n\right) \\
&= \frac{\sum_{i=1}^{n} \mathrm{var}(u_i | X_1, \dots, X_n)}{(\sum_{i=1}^{n} X_i)^2} \\
&= \frac{n\sigma_u^2}{(\sum_{i=1}^{n} X_i)^2}.
\end{aligned}$$

The difference in the conditional variances of $\tilde{\beta}_1$ and $\hat{\beta}_1^{RLS}$ is

$$\mathrm{var}(\tilde{\beta}_1 | X_1, \dots, X_n) - \mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n) = \frac{n\sigma_u^2}{(\sum_{i=1}^{n} X_i)^2} - \frac{\sigma_u^2}{\sum_{i=1}^{n} X_i^2}.$$

In order to prove $\mathrm{var}(\tilde{\beta}_1 | X_1, \dots, X_n) \geq \mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n)$, we need to show

$$\frac{n}{(\sum_{i=1}^{n} X_i)^2} \geq \frac{1}{\sum_{i=1}^{n} X_i^2}$$

or equivalently


$$n \sum_{i=1}^{n} X_i^2 \geq \left(\sum_{i=1}^{n} X_i\right)^2.$$

This inequality comes directly from applying the Cauchy-Schwarz inequality

$$\left[\sum_{i=1}^{n} (a_i \cdot b_i)\right]^2 \leq \sum_{i=1}^{n} a_i^2 \cdot \sum_{i=1}^{n} b_i^2,$$

which implies

$$\left(\sum_{i=1}^{n} X_i\right)^2 = \left(\sum_{i=1}^{n} 1 \cdot X_i\right)^2 \leq \sum_{i=1}^{n} 1^2 \cdot \sum_{i=1}^{n} X_i^2 = n \sum_{i=1}^{n} X_i^2.$$

That is, $n \sum_{i=1}^{n} X_i^2 \geq (\sum_{i=1}^{n} X_i)^2$, or $\mathrm{var}(\tilde{\beta}_1 | X_1, \dots, X_n) \geq \mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n)$.

Note: because $\tilde{\beta}_1$ is linear and conditionally unbiased, the result $\mathrm{var}(\tilde{\beta}_1 | X_1, \dots, X_n) \geq \mathrm{var}(\hat{\beta}_1^{RLS} | X_1, \dots, X_n)$ follows directly from the Gauss-Markov theorem.


17.2. The sample covariance is

$$\begin{aligned}
s_{XY} &= \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) \\
&= \frac{1}{n-1} \sum_{i=1}^{n} [(X_i - \mu_X) - (\bar{X} - \mu_X)][(Y_i - \mu_Y) - (\bar{Y} - \mu_Y)] \\
&= \frac{1}{n-1} \Bigg\{ \sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) - \sum_{i=1}^{n} (\bar{X} - \mu_X)(Y_i - \mu_Y) \\
&\qquad - \sum_{i=1}^{n} (X_i - \mu_X)(\bar{Y} - \mu_Y) + \sum_{i=1}^{n} (\bar{X} - \mu_X)(\bar{Y} - \mu_Y) \Bigg\} \\
&= \frac{n}{n-1} \left[ \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) \right] - \frac{n}{n-1} (\bar{X} - \mu_X)(\bar{Y} - \mu_Y)
\end{aligned}$$

where the final equality follows from the definition of $\bar{X}$ and $\bar{Y}$, which implies that $\sum_{i=1}^{n} (X_i - \mu_X) = n(\bar{X} - \mu_X)$ and $\sum_{i=1}^{n} (Y_i - \mu_Y) = n(\bar{Y} - \mu_Y)$, and by collecting terms.

We apply the law of large numbers to sXY to check its convergence in probability. It is easy to see that the second term converges in probability to zero because $\bar{X} \xrightarrow{p} \mu_X$ and $\bar{Y} \xrightarrow{p} \mu_Y$, so $(\bar{X} - \mu_X)(\bar{Y} - \mu_Y) \xrightarrow{p} 0$ by Slutsky's theorem. Let's look at the first term. Since (Xi, Yi) are i.i.d., the random sequence (Xi − µX)(Yi − µY) is i.i.d. By the definition of covariance, we have $E[(X_i - \mu_X)(Y_i - \mu_Y)] = \sigma_{XY}$. To apply the law of large numbers to the first term, we need to have

$$\mathrm{var}[(X_i - \mu_X)(Y_i - \mu_Y)] < \infty,$$

which is satisfied since

$$\mathrm{var}[(X_i - \mu_X)(Y_i - \mu_Y)] \leq E[(X_i - \mu_X)^2 (Y_i - \mu_Y)^2] \leq \sqrt{E[(X_i - \mu_X)^4] \, E[(Y_i - \mu_Y)^4]} < \infty.$$

The second inequality follows by applying the Cauchy-Schwarz inequality, and finiteness follows because of the finite fourth moments of (Xi, Yi). Applying the law of large numbers, we have


$$\frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_X)(Y_i - \mu_Y) \xrightarrow{p} E[(X_i - \mu_X)(Y_i - \mu_Y)] = \sigma_{XY}.$$

Also, $\frac{n}{n-1} \to 1$, so the first term for sXY converges in probability to σXY. Combining the results on the two terms for sXY, we have $s_{XY} \xrightarrow{p} \sigma_{XY}$.
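A short simulation illustrates this convergence (a sketch; the bivariate normal design with σXY = 0.5 and the grid of sample sizes are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# (X_i, Y_i) i.i.d. bivariate normal with cov(X, Y) = 0.5 by construction.
sigma_xy = 0.5
cov = np.array([[1.0, sigma_xy], [sigma_xy, 1.0]])
for n in (100, 10_000, 1_000_000):
    draws = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    s_xy = np.cov(draws[:, 0], draws[:, 1])[0, 1]  # sample covariance (divides by n - 1)
    print(n, s_xy)  # approaches 0.5 as n grows
```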


17.3. (a) Using Equation (17.19), we have

$$\begin{aligned}
\sqrt{n}(\hat{\beta}_1 - \beta_1) &= \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} [(X_i - \mu_X) - (\bar{X} - \mu_X)] u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} \\
&= \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu_X) u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X) \frac{1}{\sqrt{n}} \sum_{i=1}^{n} u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} \\
&= \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X) \frac{1}{\sqrt{n}} \sum_{i=1}^{n} u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2}
\end{aligned}$$

by defining vi = (Xi − µX)ui.

(b) The random variables u1,…, un are i.i.d. with mean µu = 0 and variance $0 < \sigma_u^2 < \infty$. By the central limit theorem,

$$\frac{\sqrt{n}(\bar{u} - \mu_u)}{\sigma_u} = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} u_i}{\sigma_u} \xrightarrow{d} N(0, 1).$$

The law of large numbers implies $\bar{X} \xrightarrow{p} \mu_X$, or $\bar{X} - \mu_X \xrightarrow{p} 0$. By the consistency of the sample variance, $\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$ converges in probability to the population variance, var(Xi), which is finite and non-zero. The result then follows from Slutsky's theorem.

(c) The random variable vi = (Xi − µX)ui has finite variance:

$$\mathrm{var}(v_i) \leq E[(X_i - \mu_X)^2 u_i^2] \leq \sqrt{E[(X_i - \mu_X)^4] \, E[u_i^4]} < \infty.$$

The first inequality holds because var(vi) ≤ E(vi²); the second inequality follows by applying the Cauchy-Schwarz inequality; and finiteness follows because of the finite fourth moments of (Xi, ui). The finite variance, along with the fact that vi has mean zero (by assumption 1 of Key Concept 17.1) and that vi is i.i.d. (by assumption 2), implies that the sample average $\bar{v}$ satisfies the requirements of the central limit theorem. Thus,


$$\frac{\bar{v}}{\sigma_{\bar{v}}} = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i}{\sigma_v}$$

satisfies the central limit theorem.

(d) Applying the central limit theorem, we have

$$\frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i}{\sigma_v} \xrightarrow{d} N(0, 1).$$

Because the sample variance is a consistent estimator of the population variance, we have

$$\frac{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2}{\mathrm{var}(X_i)} \xrightarrow{p} 1.$$

Using Slutsky's theorem,

$$\frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i / \sigma_v}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2 / \sigma_X^2} \xrightarrow{d} N(0, 1),$$

or equivalently

$$\frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} \xrightarrow{d} N\left(0, \frac{\mathrm{var}(v_i)}{[\mathrm{var}(X_i)]^2}\right).$$

Thus

$$\sqrt{n}(\hat{\beta}_1 - \beta_1) = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} v_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} - \frac{(\bar{X} - \mu_X) \frac{1}{\sqrt{n}} \sum_{i=1}^{n} u_i}{\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2} \xrightarrow{d} N\left(0, \frac{\mathrm{var}(v_i)}{[\mathrm{var}(X_i)]^2}\right)$$

since the second term for $\sqrt{n}(\hat{\beta}_1 - \beta_1)$ converges in probability to zero, as shown in part (b).
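The limit can also be seen numerically (a sketch; the heteroskedastic design below satisfies E(ui | Xi) = 0 but is otherwise an arbitrary assumption): the Monte Carlo variance of √n(β̂1 − β1) should approximate var(vi)/[var(Xi)]².

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta0, beta1, reps = 500, 1.0, 2.0, 5_000

stats = np.empty(reps)
for r in range(reps):
    X = rng.normal(loc=1.0, size=n)
    u = rng.normal(size=n) * (1 + 0.5 * np.abs(X))  # E(u | X) = 0, heteroskedastic
    Y = beta0 + beta1 * X + u
    Xd = X - X.mean()
    b1 = np.sum(Xd * (Y - Y.mean())) / np.sum(Xd**2)
    stats[r] = np.sqrt(n) * (b1 - beta1)

# Approximate var(v_i)/[var(X_i)]^2 with v_i = (X_i - mu_X) u_i by simulation.
X = rng.normal(loc=1.0, size=1_000_000)
u = rng.normal(size=1_000_000) * (1 + 0.5 * np.abs(X))
v = (X - 1.0) * u
print(stats.var(), v.var() / X.var() ** 2)  # the two numbers should be close
```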


17.4. (a) Write $\hat{\beta}_1 - \beta_1 = a_n S_n$ where $a_n = 1/\sqrt{n}$ and $S_n = \sqrt{n}(\hat{\beta}_1 - \beta_1)$. Now $a_n \to 0$ and $S_n \xrightarrow{d} S$, where S is distributed N(0, a²). By Slutsky's theorem, $a_n S_n \xrightarrow{d} 0 \times S = 0$. Thus $\Pr(|\hat{\beta}_1 - \beta_1| > \delta) \to 0$ for any δ > 0, so that $\hat{\beta}_1 - \beta_1 \xrightarrow{p} 0$ and $\hat{\beta}_1$ is consistent.

(b) We have (i) $s_u^2 / \sigma_u^2 \xrightarrow{p} 1$ and (ii) $g(x) = \sqrt{x}$ is a continuous function; thus, from the continuous mapping theorem,

$$\sqrt{\frac{s_u^2}{\sigma_u^2}} = \frac{s_u}{\sigma_u} \xrightarrow{p} 1.$$


17.5. Because $E(W^4) = [E(W^2)]^2 + \mathrm{var}(W^2)$, $[E(W^2)]^2 \leq E(W^4) < \infty$. Thus $E(W^2) < \infty$.


17.6. Using the law of iterated expectations, we have

$$E(\hat{\beta}_1) = E[E(\hat{\beta}_1 | X_1, \dots, X_n)] = E(\beta_1) = \beta_1.$$


17.7. (a) The joint probability distribution function of ui, uj, Xi, Xj is f(ui, uj, Xi, Xj). The conditional probability distribution function of ui and Xi given uj and Xj is f(ui, Xi | uj, Xj). Since (ui, Xi), i = 1,…, n, are i.i.d., f(ui, Xi | uj, Xj) = f(ui, Xi). By the definition of the conditional probability distribution function, we have

$$f(u_i, u_j, X_i, X_j) = f(u_i, X_i | u_j, X_j) f(u_j, X_j) = f(u_i, X_i) f(u_j, X_j).$$

(b) The conditional probability distribution function of ui and uj given Xi and Xj equals

$$f(u_i, u_j | X_i, X_j) = \frac{f(u_i, u_j, X_i, X_j)}{f(X_i, X_j)} = \frac{f(u_i, X_i) f(u_j, X_j)}{f(X_i) f(X_j)} = f(u_i | X_i) f(u_j | X_j).$$

The first and third equalities used the definition of the conditional probability distribution function. The second equality used the conclusion from part (a) and the independence between Xi and Xj. Substituting

$$f(u_i, u_j | X_i, X_j) = f(u_i | X_i) f(u_j | X_j)$$

into the definition of the conditional expectation, we have

$$\begin{aligned}
E(u_i u_j | X_i, X_j) &= \int\!\!\int u_i u_j f(u_i, u_j | X_i, X_j) \, du_i \, du_j \\
&= \int\!\!\int u_i u_j f(u_i | X_i) f(u_j | X_j) \, du_i \, du_j \\
&= \int u_i f(u_i | X_i) \, du_i \int u_j f(u_j | X_j) \, du_j \\
&= E(u_i | X_i) E(u_j | X_j).
\end{aligned}$$

(c) Let Q = (X1, X2,…, Xi – 1, Xi + 1,…, Xn), so that f (ui|X1,…, Xn) = f (ui |Xi, Q). Write


$$f(u_i | X_i, Q) = \frac{f(u_i, X_i, Q)}{f(X_i, Q)} = \frac{f(u_i, X_i) f(Q)}{f(X_i) f(Q)} = \frac{f(u_i, X_i)}{f(X_i)} = f(u_i | X_i)$$

where the first equality uses the definition of the conditional density, the second uses the fact that (ui, Xi) and Q are independent, and the final equality uses the definition of the conditional density. The result then follows directly.

(d) An argument like that used in (c) implies

$$f(u_i, u_j | X_1, \dots, X_n) = f(u_i, u_j | X_i, X_j)$$

and the result then follows from part (b).


17.8. (a) Because the errors are heteroskedastic, the Gauss-Markov theorem does not apply. The OLS estimator of β1 is not BLUE.

(b) We obtain the BLUE estimator of β1 by applying OLS to the weighted regression

$$\tilde{Y}_i = \beta_0 \tilde{X}_{0i} + \beta_1 \tilde{X}_{1i} + \tilde{u}_i$$

where

$$\tilde{Y}_i = \frac{Y_i}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \tilde{X}_{0i} = \frac{1}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \tilde{X}_{1i} = \frac{X_i}{\sqrt{\theta_0 + \theta_1 |X_i|}}, \quad \text{and} \quad \tilde{u}_i = \frac{u_i}{\sqrt{\theta_0 + \theta_1 |X_i|}}.$$

(c) Using equations (17.2) and (17.19), we know the OLS estimator, $\hat{\beta}_1$, is

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \beta_1 + \frac{\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2}.$$

As a weighted average of the normally distributed variables ui, $\hat{\beta}_1$ is normally distributed with mean $E(\hat{\beta}_1) = \beta_1$. The conditional variance of $\hat{\beta}_1$, given X1,…, Xn, is

$$\begin{aligned}
\mathrm{var}(\hat{\beta}_1 | X_1, \dots, X_n) &= \mathrm{var}\left(\beta_1 + \frac{\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2} \,\Big|\, X_1, \dots, X_n\right) \\
&= \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2 \,\mathrm{var}(u_i | X_1, \dots, X_n)}{[\sum_{i=1}^{n} (X_i - \bar{X})^2]^2} \\
&= \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2 \,\mathrm{var}(u_i | X_i)}{[\sum_{i=1}^{n} (X_i - \bar{X})^2]^2} \\
&= \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2 (\theta_0 + \theta_1 |X_i|)}{[\sum_{i=1}^{n} (X_i - \bar{X})^2]^2}.
\end{aligned}$$

Thus the exact sampling distribution of the OLS estimator, $\hat{\beta}_1$, conditional on X1,…, Xn, is


$$\hat{\beta}_1 \sim N\left(\beta_1, \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2 (\theta_0 + \theta_1 |X_i|)}{[\sum_{i=1}^{n} (X_i - \bar{X})^2]^2}\right).$$

(d) The weighted least squares (WLS) estimators, $\hat{\beta}_0^{WLS}$ and $\hat{\beta}_1^{WLS}$, are solutions to

$$\min_{b_0, b_1} \sum_{i=1}^{n} (\tilde{Y}_i - b_0 \tilde{X}_{0i} - b_1 \tilde{X}_{1i})^2,$$

the minimization of the sum of squared errors of the weighted regression. The first-order conditions of the minimization with respect to b0 and b1 are

$$\sum_{i=1}^{n} 2(\tilde{Y}_i - b_0 \tilde{X}_{0i} - b_1 \tilde{X}_{1i})(-\tilde{X}_{0i}) = 0,$$
$$\sum_{i=1}^{n} 2(\tilde{Y}_i - b_0 \tilde{X}_{0i} - b_1 \tilde{X}_{1i})(-\tilde{X}_{1i}) = 0.$$

Solving for b1 gives the WLS estimator

$$\hat{\beta}_1^{WLS} = \frac{-Q_{01} S_0 + Q_{00} S_1}{Q_{00} Q_{11} - Q_{01}^2}$$

where

$$Q_{00} = \sum_{i=1}^{n} \tilde{X}_{0i} \tilde{X}_{0i}, \quad Q_{01} = \sum_{i=1}^{n} \tilde{X}_{0i} \tilde{X}_{1i}, \quad Q_{11} = \sum_{i=1}^{n} \tilde{X}_{1i} \tilde{X}_{1i}, \quad S_0 = \sum_{i=1}^{n} \tilde{X}_{0i} \tilde{Y}_i, \quad \text{and} \quad S_1 = \sum_{i=1}^{n} \tilde{X}_{1i} \tilde{Y}_i.$$

Substituting $\tilde{Y}_i = \beta_0 \tilde{X}_{0i} + \beta_1 \tilde{X}_{1i} + \tilde{u}_i$ yields

$$\hat{\beta}_1^{WLS} = \beta_1 + \frac{-Q_{01} Z_0 + Q_{00} Z_1}{Q_{00} Q_{11} - Q_{01}^2}$$

where $Z_0 = \sum_{i=1}^{n} \tilde{X}_{0i} \tilde{u}_i$ and $Z_1 = \sum_{i=1}^{n} \tilde{X}_{1i} \tilde{u}_i$, or

$$\hat{\beta}_1^{WLS} - \beta_1 = \frac{\sum_{i=1}^{n} (Q_{00} \tilde{X}_{1i} - Q_{01} \tilde{X}_{0i}) \tilde{u}_i}{Q_{00} Q_{11} - Q_{01}^2}.$$

From this we see that the distribution of $\hat{\beta}_1^{WLS} | X_1, \dots, X_n$ is $N(\beta_1, \sigma^2_{\hat{\beta}_1^{WLS}})$, where


$$\begin{aligned}
\sigma^2_{\hat{\beta}_1^{WLS}} &= \frac{\sigma_{\tilde{u}}^2 \sum_{i=1}^{n} (Q_{00} \tilde{X}_{1i} - Q_{01} \tilde{X}_{0i})^2}{(Q_{00} Q_{11} - Q_{01}^2)^2} \\
&= \frac{Q_{00}^2 Q_{11} + Q_{01}^2 Q_{00} - 2 Q_{00} Q_{01}^2}{(Q_{00} Q_{11} - Q_{01}^2)^2} \\
&= \frac{Q_{00}}{Q_{00} Q_{11} - Q_{01}^2}
\end{aligned}$$

where the first equality uses the fact that the observations are independent, the second uses $\sigma_{\tilde{u}}^2 = 1$ and the definitions of Q00, Q11, and Q01, and the third is an algebraic simplification.
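In practice the WLS estimator of part (b) can be computed as OLS on the weighted variables; a minimal sketch follows (the values of θ0, θ1, β0, β1, the sample size, and the seed are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Heteroskedastic design: var(u_i | X_i) = theta0 + theta1 * |X_i|.
n, theta0, theta1, beta0, beta1 = 10_000, 1.0, 2.0, 0.5, 2.0
X = rng.normal(size=n)
u = rng.normal(size=n) * np.sqrt(theta0 + theta1 * np.abs(X))
Y = beta0 + beta1 * X + u

# Weight each observation by 1 / sqrt(theta0 + theta1 * |X_i|) and regress
# Y~ on X0~ and X1~ (with no additional intercept), as in part (b).
w = 1.0 / np.sqrt(theta0 + theta1 * np.abs(X))
regressors = np.column_stack([w, w * X])  # X0~ = w, X1~ = w * X
coef, *_ = np.linalg.lstsq(regressors, w * Y, rcond=None)
print(coef)  # approximately [0.5, 2.0]
```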


17.9. We need to prove

$$\frac{1}{n} \sum_{i=1}^{n} [(X_i - \bar{X})^2 \hat{u}_i^2 - (X_i - \mu_X)^2 u_i^2] \xrightarrow{p} 0.$$

Using the identity $\bar{X} = \mu_X + (\bar{X} - \mu_X)$,

$$\begin{aligned}
\frac{1}{n} \sum_{i=1}^{n} [(X_i - \bar{X})^2 \hat{u}_i^2 - (X_i - \mu_X)^2 u_i^2] &= (\bar{X} - \mu_X)^2 \frac{1}{n} \sum_{i=1}^{n} \hat{u}_i^2 \\
&\quad - 2(\bar{X} - \mu_X) \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_X) \hat{u}_i^2 \\
&\quad + \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_X)^2 (\hat{u}_i^2 - u_i^2).
\end{aligned}$$

The definition of $\hat{u}_i$ implies

$$\begin{aligned}
\hat{u}_i^2 &= u_i^2 + (\hat{\beta}_0 - \beta_0)^2 + (\hat{\beta}_1 - \beta_1)^2 X_i^2 - 2 u_i (\hat{\beta}_0 - \beta_0) \\
&\quad - 2 u_i (\hat{\beta}_1 - \beta_1) X_i + 2 (\hat{\beta}_0 - \beta_0)(\hat{\beta}_1 - \beta_1) X_i.
\end{aligned}$$

Substituting this into the expression for $\frac{1}{n} \sum_{i=1}^{n} [(X_i - \bar{X})^2 \hat{u}_i^2 - (X_i - \mu_X)^2 u_i^2]$ yields a series of terms, each of which can be written as $a_n b_n$ where $a_n \xrightarrow{p} 0$ and $b_n = \frac{1}{n} \sum_{i=1}^{n} X_i^r u_i^s$, where r and s are integers. For example, $a_n = (\bar{X} - \mu_X)$, $a_n = (\hat{\beta}_1 - \beta_1)$, and so forth. The result then follows from Slutsky's theorem if $\frac{1}{n} \sum_{i=1}^{n} X_i^r u_i^s \xrightarrow{p} d$ where d is a finite constant. Let $w_i = X_i^r u_i^s$ and note that wi is i.i.d. The law of large numbers can then be used for the desired result if $E(w_i^2) < \infty$. There are two cases that need to be addressed. In the first, both r and s are non-zero. In this case write

$$E(w_i^2) = E(X_i^{2r} u_i^{2s}) \leq \sqrt{E(X_i^{4r}) \, E(u_i^{4s})}$$

and this term is finite if r and s are less than 2. Inspection of the terms shows that this is true. In the second case, either r = 0 or s = 0. In this case the result follows directly if the non-zero exponent (r or s) is less than 4. Inspection of the terms shows that this is true.


17.10. Using (17.43) with $W = \hat{\theta} - \theta$ implies

$$\Pr(|\hat{\theta} - \theta| \geq \delta) \leq \frac{E[(\hat{\theta} - \theta)^2]}{\delta^2}.$$

Since $E[(\hat{\theta} - \theta)^2] \to 0$, $\Pr(|\hat{\theta} - \theta| \geq \delta) \to 0$, so that $\hat{\theta} - \theta \xrightarrow{p} 0$.


17.11. Note: in early printings of the third edition there was a typographical error in the expression for µY|X. The correct expression is $\mu_{Y|X} = \mu_Y + (\sigma_{XY}/\sigma_X^2)(x - \mu_X)$.

(a) Using the hint and equation (17.38),

$$f_{Y|X=x}(y) = \frac{1}{\sigma_Y \sqrt{2\pi(1 - \rho_{XY}^2)}} \exp\left( \frac{-1}{2(1 - \rho_{XY}^2)} \left[ \left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho_{XY} \left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2 \right] + \frac{1}{2} \left(\frac{x - \mu_X}{\sigma_X}\right)^2 \right).$$

Simplifying yields the desired expression.

(b) The result follows by noting that fY|X=x(y) is a normal density (see equation (17.36)) with $\mu = \mu_{Y|X}$ and $\sigma^2 = \sigma_{Y|X}^2$.

(c) Let $b = \sigma_{XY}/\sigma_X^2$ and $a = \mu_Y - b\mu_X$.
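A simulation check of parts (a)-(c) (a sketch; all parameter values, the conditioning window, and the seed are assumptions): conditioning on X near a point x, the mean and variance of Y should match µY|X and σ²Y|X.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed parameters for illustration.
mu_x, mu_y, sig_x, sig_y, rho = 1.0, 2.0, 1.5, 2.0, 0.6
sig_xy = rho * sig_x * sig_y
cov = [[sig_x**2, sig_xy], [sig_xy, sig_y**2]]
X, Y = rng.multivariate_normal([mu_x, mu_y], cov, size=2_000_000).T

x0 = 2.5
sel = np.abs(X - x0) < 0.01                                     # condition on X close to x0
print(Y[sel].mean(), mu_y + (sig_xy / sig_x**2) * (x0 - mu_x))  # conditional mean
print(Y[sel].var(), sig_y**2 * (1 - rho**2))                    # conditional variance
```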


17.12. (a)

$$E(e^u) = \int_{-\infty}^{\infty} e^u \frac{1}{\sigma_u \sqrt{2\pi}} \exp\left(-\frac{u^2}{2\sigma_u^2}\right) du = \int_{-\infty}^{\infty} \frac{1}{\sigma_u \sqrt{2\pi}} \exp\left(-\frac{u^2}{2\sigma_u^2} + u\right) du = \exp\left(\frac{\sigma_u^2}{2}\right) \int_{-\infty}^{\infty} \frac{1}{\sigma_u \sqrt{2\pi}} \exp\left(-\frac{(u - \sigma_u^2)^2}{2\sigma_u^2}\right) du = \exp\left(\frac{\sigma_u^2}{2}\right)$$

where the final equality follows because the integrand is the density of a normal random variable with mean and variance both equal to $\sigma_u^2$. Because the integrand is a density, it integrates to 1.

(b) The result follows directly from (a).
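A one-line numerical check of part (a) (a sketch; σu = 0.8 and the sample size are arbitrary): the sample mean of e^u should approach exp(σu²/2).

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_u = 0.8  # assumed value
u = rng.normal(scale=sigma_u, size=10_000_000)
print(np.exp(u).mean(), np.exp(sigma_u**2 / 2))  # both approximately 1.3771
```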


17.13 (a) The answer is provided by equation (13.10) and the discussion following the equation. The result was also shown in Exercise 13.10, and the approach used in that exercise is discussed in part (b).

(b) Write the regression model as Yi = β0 + β1Xi + vi, where β0 = E(β0i), β1 = E(β1i), and vi = ui + (β0i − β0) + (β1i − β1)Xi. Notice that

$$E(v_i | X_i) = E(u_i | X_i) + E(\beta_{0i} - \beta_0 | X_i) + X_i E(\beta_{1i} - \beta_1 | X_i) = 0$$

because β0i and β1i are independent of Xi. Because E(vi | Xi) = 0, the OLS regression of Yi on Xi will provide consistent estimates of β0 = E(β0i) and β1 = E(β1i). Recall that the weighted least squares estimator is the OLS estimator of Yi/σi onto 1/σi and Xi/σi, where $\sigma_i = \sqrt{\theta_0 + \theta_1 X_i^2}$. Write this regression as

$$Y_i / \sigma_i = \beta_0 (1/\sigma_i) + \beta_1 (X_i / \sigma_i) + v_i / \sigma_i.$$

This regression has two regressors, 1/σi and Xi/σi. Because these regressors depend only on Xi, E(vi | Xi) = 0 implies that E(vi/σi | (1/σi), Xi/σi) = 0. Thus, weighted least squares provides a consistent estimator of β0 = E(β0i) and β1 = E(β1i).


17.14 (a) Yi = (Yi − µ) + µ, so that $Y_i^2 = (Y_i - \mu)^2 + \mu^2 + 2(Y_i - \mu)\mu$. The result follows after taking the expected value of both sides of the equation.

(b) This follows from the law of large numbers because Yi is i.i.d. with mean E(Yi) = µ and finite variance.

(c) This follows from the law of large numbers because $Y_i^2$ is i.i.d. with mean $E(Y_i^2) = \mu^2 + \sigma^2$ (from (a)) and finite variance (because Yi has a finite fourth moment, $Y_i^2$ has a finite second moment).

(d)

$$\frac{1}{n} \sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \frac{1}{n} \sum_{i=1}^{n} (Y_i^2 + \bar{Y}^2 - 2\bar{Y} Y_i) = \frac{1}{n} \sum_{i=1}^{n} Y_i^2 + \bar{Y}^2 - 2\bar{Y} \frac{1}{n} \sum_{i=1}^{n} Y_i = \frac{1}{n} \sum_{i=1}^{n} Y_i^2 - \bar{Y}^2.$$

(e) This follows from (a)-(d) and $\bar{Y}^2 \xrightarrow{p} \mu^2$.

(f) This follows from (e) and n/(n − 1) → 1.
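A quick simulation of the convergence in (e)-(f) (a sketch; µ = 3 and σ = 2 are assumed values):

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma = 3.0, 2.0  # assumed values
for n in (100, 10_000, 1_000_000):
    Y = rng.normal(loc=mu, scale=sigma, size=n)
    print(n, np.mean(Y**2) - Y.mean()**2)  # -> sigma^2 = 4, as in parts (d)-(e)
```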


17.15 (a) Write $W = \sum_{i=1}^{n} Z_i^2$ where the Zi are i.i.d. N(0, 1). From the law of large numbers, $W/n \xrightarrow{p} E(Z_i^2) = 1$.

(b) The numerator is N(0, 1) and the denominator converges in probability to 1. The result follows from Slutsky's theorem (equation (17.9)).

(c) V/m is distributed $\chi_m^2 / m$, and the denominator converges in probability to 1. The result follows from Slutsky's theorem (equation (17.9)).
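The convergence of the t-distribution to N(0, 1) can be seen numerically (a sketch; the degrees-of-freedom grid, number of replications, and seed are assumptions): the variance of tm = Z/√(W/m) is m/(m − 2), which tends to 1, the N(0, 1) variance.

```python
import numpy as np

rng = np.random.default_rng(9)

# t_m = Z / sqrt(W/m) with Z ~ N(0, 1) and W ~ chi^2_m independent of Z.
reps = 1_000_000
for m in (5, 50, 500):
    Z = rng.normal(size=reps)
    W = rng.chisquare(df=m, size=reps)
    t = Z / np.sqrt(W / m)
    print(m, t.var())  # approaches 1 as m grows
```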
