Point Estimation Exercises
It is important to note that, in general, we have as many moment conditions as parameters.
In Example 5.2.5, we have more moment conditions than parameters, because the mean and
variance of a Poisson random variable are equal. Given a sample, this results in two different
estimates of a single parameter. A natural question is whether these two estimators can be combined
in some optimal way. This is done by the so-called generalized method of moments (GMM), a topic we
will not deal with here.
As we have seen, the method of moments finds estimators of unknown parameters by equating the
corresponding sample and population moments. This method often provides estimators when other
methods fail to do so or when estimators are harder to obtain, as in the case of a gamma distribution.
Compared to other methods, method of moments estimators are easy to compute and have some
desirable properties that we will discuss in ensuing sections. The drawback is that they are usually
not the “best estimators” (to be defined later) available and sometimes may even be meaningless.
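To make the method concrete, here is a minimal sketch (not from the text) of how moment estimators for a gamma distribution might be computed. It assumes the shape-scale parameterization with mean αβ and variance αβ², equates these to the sample mean and the (1/n) sample variance, and the function name is illustrative only.

import numpy as np

def gamma_moment_estimates(x):
    # Method of moments for a gamma(alpha, beta) sample, assuming the
    # shape-scale parameterization: mean = alpha*beta, variance = alpha*beta**2.
    x = np.asarray(x, dtype=float)
    m1 = x.mean()           # first sample moment
    v = x.var()             # (1/n) sample variance = m2 - m1**2
    alpha_hat = m1**2 / v   # solve alpha*beta = m1 and alpha*beta**2 = v
    beta_hat = v / m1
    return alpha_hat, beta_hat

Equating αβ = x̄ and αβ² = (1/n)Σ(xi − x̄)² and solving simultaneously gives the two estimators computed above.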
EXERCISES 5.2
5.2.1. Let X1 , . . . , Xn be a random sample of size n from the geometric distribution for which p
is the probability of success. Suppose the following data are observed:
2 5 7 43 18 19 16 11 22
4 34 19 21 23 6 21 7 12
How will you use this information? [The pdf of a geometric distribution is f(x) = p(1 − p)^{x−1}, for x = 1, 2, . . . . Also μ = 1/p.]
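As an illustration (not part of the exercise statement), a moment estimate of p can be obtained by equating μ = 1/p to the sample mean; the sketch below applies this to the data listed above.

import numpy as np

# Data listed in Exercise 5.2.1
sample = [2, 5, 7, 43, 18, 19, 16, 11, 22,
          4, 34, 19, 21, 23, 6, 21, 7, 12]

x_bar = np.mean(sample)   # first sample moment
p_hat = 1.0 / x_bar       # method of moments: mu = 1/p  =>  p_hat = 1/x_bar
print(x_bar, p_hat)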
5.2.2. Let X1 , . . . , Xn be a random sample of size n from the exponential distribution whose pdf
(by taking θ = 1/β in Definition 2.3.7) is
f(x, θ) = θe^{−θx} for x ≥ 0, and 0 for x < 0.
Consider the following data:
0.9 0.1 0.1 0.8 0.9 0.1 0.1 0.7 1.0 0.2
0.1 0.1 0.1 2.3 0.8 0.3 0.2 0.1 1.0 0.9
0.1 0.5 0.4 0.6 0.2 0.4 0.2 0.1 0.8 0.2
0.5 3.0 1.0 0.5 0.2 2.0 1.7 0.1 0.3 0.1
0.4 0.5 0.8 0.1 0.1 1.7 0.1 0.2 0.3 0.1
Assuming the data follow an exponential distribution, obtain a moment estimate for the
parameter θ. Interpret.
5.2.3. Let X1 , . . . , Xn be a random sample from a uniform distribution on the interval
(θ − 1, θ + 1).
(a) Find a moment estimator for θ.
(b) Use the following data to obtain a moment estimate for θ:
f(x) = 2αx e^{−αx²}, x > 0; 0, otherwise.
f(x) = e^{−(x−θ)}, x ≥ θ; 0, otherwise.
f(x, α) = (1 + αx)/2, −1 ≤ x ≤ 1, and −1 ≤ α ≤ 1.
Find method of moments estimators for r and p. [Here E[X] = r(1 − p)/p and E[X²] =
r(1 − p)(r − rp + 1)/p².]
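Although the surrounding exercise text is incomplete here, the two moment conditions above can be solved in closed form. The sketch below (function name illustrative only) writes m1 and m2 for the first two sample moments and uses the fact that m2 − m1² plays the role of the variance r(1 − p)/p².

import numpy as np

def moment_estimates_r_p(x):
    # Solve E[X] = r(1 - p)/p and E[X^2] = r(1 - p)(r - r*p + 1)/p^2
    # with the population moments replaced by the sample moments m1, m2.
    x = np.asarray(x, dtype=float)
    m1 = np.mean(x)             # first sample moment
    m2 = np.mean(x**2)          # second sample moment
    var = m2 - m1**2            # equals r(1 - p)/p^2 at the population level
    p_hat = m1 / var            # since m1 / var = p
    r_hat = m1**2 / (var - m1)  # requires var > m1 (overdispersion)
    return r_hat, p_hat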
5.2.9. Let X1 , . . . , Xn be a random sample from a distribution with pdf
f(x) = (θ + 1)x^θ, 0 ≤ x ≤ 1 (θ > −1); 0, otherwise.
Not every function of a sufficient statistic is itself sufficient; however, any one-to-one
function of a sufficient statistic is also sufficient. Likewise, not every statistic is sufficient. When they
do exist, sufficient estimators are very important, because if one can find a sufficient estimator it
is ordinarily possible to find an unbiased estimator based on the sufficient statistic. In fact, the
following theorem shows that if one is searching for an unbiased estimator with minimal variance,
the search can be restricted to functions of a sufficient statistic.
RAO–BLACKWELL THEOREM
Theorem 5.4.7 Let X1 , . . . , Xn be a random sample with joint pf or pdf f(x1 , . . . , xn ; θ) and let
U = (U1 , . . . , Un ) be jointly sufficient for θ = (θ1 , . . . , θn ). If T is any unbiased estimator of k(θ), and if
T* = E(T | U), then:
(a) T* is an unbiased estimator of k(θ).
(b) T* is a function of U and does not depend on θ.
(c) Var(T*) ≤ Var(T) for every θ, and Var(T*) < Var(T) for some θ unless T* = T with probability 1.
Proof.
(a) By the property of conditional expectation and the fact that T is an unbiased estimator of k(θ),
E(T*) = E(E(T | U)) = E(T) = k(θ).
(c) By the law of total variance, Var(T) = Var(E(T | U)) + E(Var(T | U)) = Var(T*) + E(Var(T | U)).
Because Var(T | U) ≥ 0 for all u, it follows that E(Var(T | U)) ≥ 0. Hence, Var(T*) ≤ Var(T). We
note that Var(T*) = Var(T) if and only if Var(T | U) = 0, that is, T is a function of U, in which case
T* = E(T | U) = T.
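The variance reduction in part (c) is easy to see in a small simulation. The sketch below (illustrative only, not from the text) uses a Bernoulli(p) sample: T = X1 is unbiased for p, U = ΣXi is sufficient, and conditioning gives T* = E(X1 | U) = U/n, the sample mean.

import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 20, 0.3, 100_000

samples = rng.binomial(1, p, size=(reps, n))  # Bernoulli(p) samples
T = samples[:, 0]                             # crude unbiased estimator: X1
T_star = samples.mean(axis=1)                 # E(X1 | sum of Xi) = sample mean

print(T.mean(), T.var())            # mean near p; variance near p(1 - p)
print(T_star.mean(), T_star.var())  # mean near p; variance near p(1 - p)/n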
EXERCISES 5.4
5.4.1. Let X1 , . . . , Xn be a random sample from a population with density
For a finite population consisting of the values a1 , . . . , aN , the population mean and variance are
μ = (1/N) Σ_{i=1}^{N} ai  and  σ² = (1/N) Σ_{i=1}^{N} (ai − μ)².
Show that the sample variance S² is a biased estimator of σ².
5.4.3. For an infinite population with finite variance σ 2 , show that the sample standard deviation
S is a biased estimator for σ. Find an unbiased estimator of σ. [We have seen that S 2 is an
unbiased estimator of σ 2 . From this exercise, we see that a function of an unbiased estimator
need not be an unbiased estimator.]
5.4.4. Let X1 , . . . , Xn be a random sample from an infinite population with finite variance σ 2 .
Define
S² = (1/n) Σ_{i=1}^{n} (Xi − X̄)².
Show that S² is a biased estimator for σ², and that the bias of S² is −σ²/n. Thus, S² is
negatively biased, and so on average it underestimates the variance. Note that S² is the MLE
of σ².
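A quick simulation (illustrative only) shows this bias: with divisor n the average of S² over many samples falls below σ² by about σ²/n, while the usual divisor n − 1 removes the bias.

import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 10, 4.0, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2_biased = x.var(axis=1, ddof=0)    # divisor n (the S^2 of Exercise 5.4.4)
s2_unbiased = x.var(axis=1, ddof=1)  # divisor n - 1

print(s2_biased.mean())    # about sigma2 * (n - 1)/n
print(s2_unbiased.mean())  # about sigma2
print(-sigma2 / n)         # theoretical bias of the divisor-n version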
5.4.5. Let X1 , . . . , Xn be a random sample from a population with the mean μ. What condition
must be imposed on the constants c1 , c2 , . . . , cn so that
c1 X1 + c2 X2 + · · · + cn Xn
is an unbiased estimator of μ?
5.4.6. Let X1 , . . . , Xn be a random sample from a geometric distribution with parameter θ. Find
an unbiased estimate of θ.
5.4.10. Let X1 , . . . , Xn1 be a random sample from an N(μ1 , σ²) distribution and let Y1 , . . . , Yn2 be
a random sample from an N(μ2 , σ²) distribution. Show that the pooled estimator
Sp² = ((n1 − 1)S1² + (n2 − 1)S2²)/(n1 + n2 − 2)
is unbiased for σ², where S1² and S2² are the respective sample variances.
5.4.11. Let X1 , . . . , Xn be a random sample from an N(μ, σ²) distribution. Show that the sample
median, M, is an unbiased estimator of the population mean μ. Compare the variances of
X and M. [Note: For the normal distribution, the mean, median, and mode all occur at
the same location. Even though both X and M are unbiased, the reason we usually use the
mean instead of the median as the estimator of μ is that X has a smaller variance than M.]
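The variance comparison in the bracketed note can be checked with a short simulation (illustrative only). For normal data the sample median has variance roughly πσ²/(2n) for large n, compared with σ²/n for the sample mean.

import numpy as np

rng = np.random.default_rng(2)
n, mu, sigma, reps = 25, 0.0, 1.0, 100_000

x = rng.normal(mu, sigma, size=(reps, n))
means = x.mean(axis=1)
medians = np.median(x, axis=1)

print(means.var(), sigma**2 / n)                  # simulated vs exact Var of the mean
print(medians.var(), np.pi * sigma**2 / (2 * n))  # simulated vs large-n approx Var of the median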
5.4.12. Let X1 , . . . , Xn be a random sample from a Poisson distribution with parameter λ. Show
that the sample mean X is sufficient for λ.
5.4.13. Let X1 , . . . , Xn be a random sample from a population with density function
fσ(x) = (1/(2σ)) exp(−|x|/σ), −∞ < x < ∞, σ > 0.
1.5 3.0 2.6 6.8 0.7 2.2 1.3 1.6 1.1 6.5
0.3 2.0 1.8 1.0 0.7 0.7 1.6 3.0 2.0 2.5
5.7 0.1 0.2 0.5 0.4
5.4.16. Let X1 , . . . , Xn be a random sample from a one-parameter Weibull distribution with pdf
f(x) = 2αx e^{−αx²}, x > 0; 0, otherwise.
Show that (min_{1≤i≤n} Xi , max_{1≤i≤n} Xi) is sufficient for θ.
0.3 3.4 0.4 1.8 0.7 1.0 0.1 2.3 3.7 2.0
0.3 3.7 0.1 1.3 1.2 3.3 0.2 1.3 0.6 0.4
5.4.19. Show that X1 is not sufficient for μ, if X1 , . . . , Xn is a sample from N(μ, 1).
5.4.22. Let X1 , . . . , Xn be a random sample of size n from a Bernoulli population with parameter
p. Show that p̂ = X is the UMVUE for p.