Applied Econometrics Techniques
153400119
Dr. Ben Groom
Problem Set 2: Large Sample Properties and
Maximum Likelihood Estimation
Q1. The variable Z equals 1 if an individual has been a victim of crime and 0 otherwise. It is believed that 1) Z ~ Bernoulli(θ) and 2) the observations are independent. A random survey obtains the following data: 1, 1, 1, 0, 1, 0, 0.
Given that the likelihood for a Bernoulli sample is:

L = Πi θ^xi (1 − θ)^(1 − xi)

(which means the data (1, 1, 0) would yield θ · θ · (1 − θ)), the log likelihood is:

ln L = Σi xi ln θ + (n − Σi xi) ln(1 − θ)

(in the case of the data (1, 1, 0) the log likelihood is
ln θ + ln θ + ln(1 − θ) = 2 ln θ + (3 − 2) ln(1 − θ) = Σi xi ln θ + (n − Σi xi) ln(1 − θ)).

What is the maximum likelihood estimate of θ?
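As a starting point (a sketch of the first-order condition only; the numerical value is left to be computed from the survey data above), differentiating the log likelihood with respect to θ gives:

\[
\frac{\partial \ln L}{\partial \theta} = \frac{\sum_i x_i}{\theta} - \frac{n - \sum_i x_i}{1 - \theta} = 0 .
\]

Solving this equation for θ expresses the maximum likelihood estimate in terms of Σi xi and n.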
Q2. The log likelihood function for the linear model yi = α + βxi + ui, where ui ~ N(0, σ²), is given by:

ln L = ln f(x1, x2, ..., xn) = −(n/2) ln(2π) − (n/2) ln(σ²) − (1/2) Σi ui²/σ²
Show that the maximum likelihood estimator of the variance is equal to RSS/n, where RSS = Σi ûi², the residual sum of squares in the linear model.
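As a sketch of one possible route (not the full derivation), treat α and β as already fixed at their maximum likelihood values, write ûi for the implied residuals, and differentiate the log likelihood with respect to σ²:

\[
\frac{\partial \ln L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_i \hat{u}_i^2 = 0 .
\]

Setting this derivative to zero and solving for σ² gives the required result, since Σi ûi² is the RSS.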
Q3. Let Ȳ denote the sample average from a random sample with mean µ and variance σ². Consider two alternative estimators of µ: W1 = [(n − 1)/n] Ȳ and W2 = Ȳ/2.
i) Show that W1 and W2 are both biased estimators of µ and find the biases. What happens to the biases as n → ∞? Comment on any important differences in bias between the two estimators as the sample size gets large.
ii) Find the probability limits of W1 and W2.
iii) Find var(W1) and var(W2).
iv) Compare W1 and W2.
v) Argue that W1 is a better estimator than Ȳ if µ is close to zero. (Hint: consider both variance and bias; some standard definitions are recalled below.)
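As a reminder only (a sketch of standard definitions, not the answers), the quantities involved can be written as:

\[
E(\bar{Y}) = \mu, \qquad \operatorname{var}(\bar{Y}) = \frac{\sigma^2}{n}, \qquad \operatorname{Bias}(W) = E(W) - \mu, \qquad \operatorname{MSE}(W) = \operatorname{var}(W) + [\operatorname{Bias}(W)]^2 .
\]

For part iv), comparing mean squared errors is one natural way to trade off the bias and variance of the two estimators.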
Q4. Practical Exercise.
Use the bwght dataset in Stata.
i) The likelihood ratio test: use the LR test to test the null hypothesis that β2 = β4 in the following model (a sketch of possible Stata commands follows the model).

bwght = α + β1 cigs + β2 male + β3 drink + β4 feduc + ui
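One possible route (a minimal sketch, assuming the variables carry the names used in the model above and that the bwght data are already loaded in memory) is to estimate the unrestricted and restricted models and compare their log likelihoods:

* unrestricted model
regress bwght cigs male drink feduc
estimates store unrestricted

* restricted model: imposing beta2 = beta4 amounts to entering male + feduc as a single regressor
generate male_feduc = male + feduc
regress bwght cigs male_feduc drink
estimates store restricted

* likelihood ratio test of the single restriction
lrtest unrestricted restricted

The LR statistic is 2(ln L_unrestricted − ln L_restricted) and is asymptotically chi-squared with one degree of freedom here, since a single restriction is imposed.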
ii) Using the Wald test, test the following null hypothesis for the model above:
H0: β2 = β4 against H1: β2 ≠ β4.
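A minimal sketch, again assuming the variable names used in the model above: after estimating the unrestricted model, Stata's post-estimation test command reports a Wald-type F test of the equality restriction.

regress bwght cigs male drink feduc
* Wald test of H0: coefficient on male equals coefficient on feduc
test male = feduc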