Point Estimation Exercises

5.2 The Method of Moments

It is important to note that, in general, we have as many moment conditions as parameters.
In Example 5.2.5 we have more moment conditions than parameters, because the mean and the
variance of a Poisson random variable are equal. Given a sample, this results in two different
estimates of a single parameter. A natural question is whether these two estimators can be
combined in some optimal way; this is accomplished by the so-called generalized method of
moments (GMM), which we do not pursue here.
As we have seen, the method of moments finds estimators of unknown parameters by equating the
corresponding sample and population moments. This method often provides estimators when other
methods fail to do so or when estimators are harder to obtain, as in the case of a gamma distribution.
Compared to other methods, method of moments estimators are easy to compute and have some
desirable properties that we will discuss in ensuing sections. The drawback is that they are usually
not the “best estimators” (to be defined later) available and sometimes may even be meaningless.
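For a concrete illustration of this recipe, the following Python sketch (the simulated data and the true rate θ = 2 are assumptions made purely for illustration) applies the method of moments to an exponential sample: equating the first sample moment X̄ to the population mean 1/θ gives the estimator θ̂ = 1/X̄.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample from an exponential distribution with (hypothetical) true rate theta = 2;
# any observed data set could be substituted here.
theta_true = 2.0
x = rng.exponential(scale=1.0 / theta_true, size=200)

# Method of moments: equate the first sample moment (the sample mean) to the
# population mean E[X] = 1/theta and solve for theta.
theta_mom = 1.0 / x.mean()
print(f"moment estimate of theta: {theta_mom:.3f}")
```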

EXERCISES 5.2
5.2.1. Let X1 , . . . , Xn be a random sample of size n from the geometric distribution for which p
is the probability of success.

(a) Use the method of moments to find a point estimator for p.


(b) Use the following data (simulated from geometric distribution) to find the moment
estimator for p:

2 5 7 43 18 19 16 11 22
4 34 19 21 23 6 21 7 12

How will you use this information? [The pdf of a geometric distribution is f(x) =
p(1 − p)^(x−1), for x = 1, 2, . . . . Also μ = 1/p.]
5.2.2. Let X1 , . . . , Xn be a random sample of size n from the exponential distribution whose pdf
(by taking θ = 1/β in Definition 2.3.7) is

f(x, θ) = θe^(−θx), x ≥ 0; 0, x < 0.

(a) Use the method of moments to find a point estimator for θ.


(b) The following data represent the time intervals between the emissions of beta particles.

0.9 0.1 0.1 0.8 0.9 0.1 0.1 0.7 1.0 0.2
0.1 0.1 0.1 2.3 0.8 0.3 0.2 0.1 1.0 0.9
0.1 0.5 0.4 0.6 0.2 0.4 0.2 0.1 0.8 0.2
0.5 3.0 1.0 0.5 0.2 2.0 1.7 0.1 0.3 0.1
0.4 0.5 0.8 0.1 0.1 1.7 0.1 0.2 0.3 0.1

Assuming the data follow an exponential distribution, obtain a moment estimate for the
parameter θ. Interpret.
5.2.3. Let X1 , . . . , Xn be a random sample from a uniform distribution on the interval
(θ − 1, θ + 1).
(a) Find a moment estimator for θ.
(b) Use the following data to obtain a moment estimate for θ:

11.72 12.81 12.09 13.47 12.37

5.2.4. The probability density of a one-parameter Weibull distribution is given by

f(x) = 2αx e^(−αx²), x > 0; 0, otherwise.

(a) Using a random sample of size n, obtain a moment estimator for α.


(b) Assuming that the following data are from a one-parameter Weibull population,

1.87 1.60 2.36 1.12 0.15


1.83 0.64 1.53 0.73 2.26

obtain a moment estimate of α.


5.2.5. Let X1 , . . . , Xn be a random sample from the truncated exponential distribution with pdf

f(x) = e^(−(x−θ)), x ≥ θ; 0, otherwise.

Find the method of moments estimate of θ.


5.2.6. Let X1 , . . . , Xn be a random sample from a distribution with pdf

f(x, α) = (1 + αx)/2, −1 ≤ x ≤ 1, −1 ≤ α ≤ 1.

Find the moment estimator for α.


5.2.7. Let X1 , . . . , Xn be a random sample from a population with pdf

f(x) = 2α²/x³, x ≥ α; 0, otherwise.

Find a method of moments estimator for α.


5.2.8. Let X1 , . . . , Xn be a random sample from a negative binomial distribution with pmf
 
p(x, r, p) = (x + r − 1 choose r − 1) p^r (1 − p)^x, 0 ≤ p ≤ 1, x = 0, 1, 2, . . . .

Find method of moments estimators for r and p. [Here E[X] = r(1 − p)/p and
E[X²] = r(1 − p)(r − rp + 1)/p².]
5.2.9. Let X1 , . . . , Xn be a random sample from a distribution with pdf

f(x) = (θ + 1)x^θ, 0 ≤ x ≤ 1, θ > −1; 0, otherwise.

Use the method of moments to obtain an estimator of θ.


5.2.10. Let X1 , . . . , Xn be a random sample from a distribution with pdf
f(x) = 2(β − x)/β², 0 < x < β; 0, otherwise.

Use the method of moments to obtain an estimator of β.


5.2.11. Let X1 , . . . , Xn be a random sample with common mean μ and variance σ 2 . Obtain a method
of moments estimator for σ.
5.2.12. Let X1 , . . . , Xn be a random sample from the beta distribution with parameters α and β.
Find the method of moments estimator for α and β.
5.2.13. Let X1 , X2 , . . . , Xn be a random sample from a distribution with unknown mean μ and
variance σ 2 . Show that the method of moments estimators for μ and σ 2 are, respectively, the

sample mean X̄ and S'² = (1/n) Σ_{i=1}^{n} (X_i − X̄)². Note that S'² = [(n − 1)/n] S², where S² is
the sample variance.

5.3 THE METHOD OF MAXIMUM LIKELIHOOD


It is highly desirable to have a method that is generally applicable to the construction of statistical
estimators that have “good” properties. In this section we present an important method for finding
estimators of parameters proposed by geneticist/statistician Sir Ronald A. Fisher around 1922 called
the method of maximum likelihood. Even though the method of moments is intuitive and easy to
apply, it usually does not yield “good” estimators. The method of maximum likelihood is intuitively
appealing, because we attempt to find the values of the true parameters that would have most likely
produced the data that we in fact observed. For most cases of practical interest, maximum
likelihood estimators perform optimally in sufficiently large samples. This is one of the most versatile
methods for fitting parametric statistical models to data. First, we define the concept of a likelihood
function.
Definition 5.3.1 Let f(x1 , . . . , xn ; θ), θ ∈ Θ ⊆ R^k, be the joint probability (or density) function of n
random variables X1 , . . . , Xn with sample values x1 , . . . , xn . The likelihood function of the sample is
given by

L(θ; x1 , . . . , xn ) = f (x1 , . . . , xn ; θ), [= L(θ), in a briefer notation].

We emphasize that L is a function of θ for fixed sample values.
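To see how a likelihood function is used in practice, the following Python sketch (the exponential model and the simulated data are assumptions made only for illustration) maximizes log L(θ) = n log θ − θ Σ x_i numerically and compares the answer with the closed-form value 1/X̄ obtained by setting the derivative of log L(θ) to zero.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Simulated exponential data with (hypothetical) rate theta = 1.5, for illustration only.
x = rng.exponential(scale=1.0 / 1.5, size=100)

def neg_log_likelihood(theta: float) -> float:
    # L(theta) = prod_i theta * exp(-theta * x_i), so
    # log L(theta) = n * log(theta) - theta * sum(x_i).
    return -(x.size * np.log(theta) - theta * x.sum())

# Maximize L(theta) (equivalently, minimize -log L) over theta > 0.
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0), method="bounded")
print(f"numerical MLE: {res.x:.4f}   closed form 1/x-bar: {1.0 / x.mean():.4f}")
```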



Not every function of a sufficient statistic is itself sufficient; however, any one-to-one
function of a sufficient statistic is also sufficient. Likewise, not every statistic is sufficient. When they
do exist, sufficient estimators are very important, because if one can find a sufficient estimator it
is ordinarily possible to find an unbiased estimator based on the sufficient statistic. In fact, the
following theorem shows that if one is searching for an unbiased estimator with minimal variance,
the search can be restricted to functions of a sufficient statistic.

RAO–BLACKWELL THEOREM
Theorem 5.4.7 Let X1 , . . . , Xn be a random sample with joint pf or pdf f (x1 , . . . , xn ; θ) and let
U = (U1 , . . . , Un ) be jointly sufficient for θ = (θ1 , . . . , θn ). If T is any unbiased estimator of k (θ), and if
T ∗ = E (T |U ), then:
(a) T ∗ is an unbiased estimator of k(θ).
(b) T ∗ is a function of U, and does not depend on θ.
   
(c) Var(T∗) ≤ Var(T) for every θ, and Var(T∗) < Var(T) for some θ unless T∗ = T with probability 1.

Proof.
(a) By the property of conditional expectation and by the fact that T is an unbiased estimator of k(θ),
 
E(T∗) = E(E(T | U)) = E(T) = k(θ).

Hence, T ∗ is an unbiased estimator of k(θ).


(b) Because U is sufficient for θ, the conditional distribution of any statistic (hence, for T ), given U,
does not depend on θ. Thus, T ∗ = E(T |U) is a function of U.
(c) From the property of conditional probability, we have the following:

Var(T) = E(Var(T | U)) + Var(E(T | U)) = E(Var(T | U)) + Var(T∗).

 
Because Var(T | U) ≥ 0 for all u, it follows that E(Var(T | U)) ≥ 0. Hence, Var(T∗) ≤ Var(T). We
note that Var(T∗) = Var(T) if and only if Var(T | U) = 0, that is, T is a function of U, in which case
T∗ = E(T | U) = T.

In particular, if k(θ) = θ and T is an unbiased estimator of θ, then T∗ = E(T | U) will typically give
the MVUE of θ. If T is the sufficient statistic that best summarizes the data from a given distribution
with parameter θ, and we can find some function g of T such that E(g(T)) = θ, it follows from the
Rao–Blackwell theorem that g(T) is the UMVUE for θ.
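The variance reduction promised by the Rao–Blackwell theorem can be seen in a small simulation. In the following Python sketch (the Bernoulli model, p = 0.3, and n = 20 are assumptions made only for illustration), T = X1 is an unbiased but crude estimator of p, U = Σ X_i is sufficient, and conditioning gives T∗ = E(T | U) = X̄, whose variance p(1 − p)/n is much smaller than Var(T) = p(1 − p).

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.3, 20, 100_000

# Each row is one sample X_1, ..., X_n from a Bernoulli(p) population.
samples = rng.binomial(1, p, size=(reps, n))

# Crude unbiased estimator T = X_1 and its Rao-Blackwellized version
# T* = E(X_1 | sum of X_i) = (sum of X_i)/n, i.e., the sample mean.
T = samples[:, 0]
T_star = samples.mean(axis=1)

print(f"mean of T  ~ {T.mean():.4f},  Var(T)  ~ {T.var():.5f}")        # about p and p(1-p)
print(f"mean of T* ~ {T_star.mean():.4f},  Var(T*) ~ {T_star.var():.5f}")  # about p and p(1-p)/n
```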

EXERCISES 5.4
5.4.1. Let X1 , . . . , Xn be a random sample from a population with density

f(x) = e^(−(x−θ)) for x > θ; 0, otherwise.

(a) Show that X̄ is a biased estimator of θ.

(b) Show that X̄ is an unbiased estimator of μ = 1 + θ.
5.4.2. The mean and variance of a finite population {a1 , . . . , aN } are defined by

μ = (1/N) Σ_{i=1}^{N} a_i   and   σ² = (1/N) Σ_{i=1}^{N} (a_i − μ)².

For a finite population, show that the sample variance S 2 is a biased estimator of σ 2 .

5.4.3. For an infinite population with finite variance σ 2 , show that the sample standard deviation
S is a biased estimator for σ. Find an unbiased estimator of σ. [We have seen that S 2 is an
unbiased estimator of σ 2 . From this exercise, we see that a function of an unbiased estimator
need not be an unbiased estimator.]
5.4.4. Let X1 , . . . , Xn be a random sample from an infinite population with finite variance σ 2 .
Define
S'² = (1/n) Σ_{i=1}^{n} (X_i − X̄)².

Show that S'² is a biased estimator for σ², and that the bias of S'² is −σ²/n. Thus, S'² is
negatively biased, and so on average it underestimates the variance. Note that S'² is the MLE
of σ².
5.4.5. Let X1 , . . . , Xn be a random sample from a population with the mean μ. What condition
must be imposed on the constants c1 , c2 , . . . , cn so that

c1 X1 + c2 X2 + · · · + cn Xn

is an unbiased estimator of μ?

5.4.6. Let X1 , . . . , Xn be a random sample from a geometric distribution with parameter θ. Find
an unbiased estimate of θ.

5.4.7. Let X1 , . . . , Xn be a random sample from U (0, θ) distribution. Let Yn = max{X1 , . . . , Xn }.


We know (from Example 5.3.4) that θ̂1 = Yn is a maximum likelihood estimator of θ.
(a) Show that θ̂2 = 2X̄ is a method of moments estimator.
(b) Show that θ̂1 is a biased estimator, and θ̂2 is an unbiased estimator of θ.
(c) Show that θ̂3 = ((n + 1)/n) θ̂1 is an unbiased estimator of θ.
5.4.8. Let X1 , . . . , Xn be a random sample from a population with mean μ and variance 1. Show
that the estimator μ̂² = X̄² is a biased estimator of μ², and compute the bias.
5.4.9. Let X1 , . . . , Xn be a random sample from an N(μ, σ²) distribution. Show that the estimator
μ̂ = X̄ is the MVUE for μ.

 
5.4.10. Let X1 , . . . , Xn1 be a random sample from an N(μ1, σ²) distribution and let Y1 , . . . , Yn2 be
a random sample from an N(μ2, σ²) distribution. Show that the pooled estimator

σ̂² = [(n1 − 1)S1² + (n2 − 1)S2²] / (n1 + n2 − 2)

is unbiased for σ², where S1² and S2² are the respective sample variances.
 
5.4.11. Let X1 , . . . , Xn be a random sample from an N(μ, σ²) distribution. Show that the sample
median, M, is an unbiased estimator of the population mean μ. Compare the variances of
X̄ and M. [Note: For the normal distribution, the mean, median, and mode all occur at
the same location. Even though both X̄ and M are unbiased, the reason we usually use the
mean instead of the median as the estimator of μ is that X̄ has a smaller variance than M.]
5.4.12. Let X1 , . . . , Xn be a random sample from a Poisson distribution with parameter λ. Show
that the sample mean X̄ is sufficient for λ.
5.4.13. Let X1 , . . . , Xn be a random sample from a population with density function

fσ(x) = (1/(2σ)) exp(−|x|/σ), −∞ < x < ∞, σ > 0.

Find a sufficient statistic for the parameter σ.


5.4.14. Show that if θ̂ is a sufficient statistic for the parameter θ and if the maximum likelihood
estimator of θ is unique, then the maximum likelihood estimator is a function of this
sufficient statistic θ̂.
5.4.15. Let X1 , . . . , Xn be a random sample from an exponential population with parameter θ.

(a) Show that Σ_{i=1}^{n} X_i is sufficient for θ. Also show that X̄ is sufficient for θ.
(b) The following is a random sample from an exponential distribution.

1.5 3.0 2.6 6.8 0.7 2.2 1.3 1.6 1.1 6.5
0.3 2.0 1.8 1.0 0.7 0.7 1.6 3.0 2.0 2.5
5.7 0.1 0.2 0.5 0.4

(i) What is an unbiased estimate of the mean?


(ii) Using part (a) and these data, find two sufficient statistics for the parameter θ.

5.4.16. Let X1 , . . . , Xn be a random sample from a one-parameter Weibull distribution with pdf

f(x) = 2αx e^(−αx²), x > 0; 0, otherwise.

(a) Find a sufficient statistic for α.


(b) Using part (a), find a UMVUE for α.
5.4 Some Desirable Properties of Point Estimators 265

5.4.17. Let X1 , . . . , Xn be a random sample from a population with density function



f(x) = 1/θ, −θ/2 ≤ x ≤ θ/2, θ > 0; 0, otherwise.

Show that ( min_{1≤i≤n} X_i , max_{1≤i≤n} X_i ) is sufficient for θ.

5.4.18. Let X1 , . . . , Xn be a random sample from a G(1, β) distribution.



(a) Show that U = Σ_{i=1}^{n} X_i is a sufficient statistic for β.
(b) The following is a random sample from a G(1, β) distribution.

0.3 3.4 0.4 1.8 0.7 1.0 0.1 2.3 3.7 2.0
0.3 3.7 0.1 1.3 1.2 3.3 0.2 1.3 0.6 0.4

Find a sufficient statistic for β.

5.4.19. Show that X1 is not sufficient for μ, if X1 , . . . , Xn is a sample from N(μ, 1).

5.4.20. Let X1 , . . . , Xn be a random sample from the truncated exponential distribution


with pdf

f(x) = e^(θ−x), x > θ; 0, otherwise.

Show that X(1) = min(Xi ) is sufficient for θ.

5.4.21. Let X1 , . . . , Xn be a random sample from a distribution with pdf



f(x) = θx^(θ−1), 0 < x < 1, θ > 0; 0, otherwise.

Show that U = ∏_{i=1}^{n} X_i is a sufficient statistic for θ.

5.4.22. Let X1 , . . . , Xn be a random sample of size n from a Bernoulli population with parameter
p. Show that p̂ = X̄ is the UMVUE for p.

5.4.23. Let X1 , . . . , Xn be a random sample from a Rayleigh distribution with pdf



f(x) = (2x/α) e^(−x²/α), x > 0; 0, otherwise.
Show that Σ_{i=1}^{n} X_i² is sufficient for the parameter α.
