Week 4 in-class problems
PM522b Introduction to the Theory of Statistics Part 2
1. A more general version of the delta method can be stated as follows. Let $T_n$ be a sequence of random variables such that $r_n(T_n - \theta) \xrightarrow{d} T$, where $r_n \to +\infty$ as $n \to +\infty$ is a sequence of real numbers and $T$ is a random variable. Then, if $g$ is continuously differentiable at $\theta$, $r_n\bigl(g(T_n) - g(\theta)\bigr) \xrightarrow{d} g'(\theta)T$. Use this version of the delta method to identify the limit in distribution of $n\left(1 - 1/(\theta^{-1}X_{(n)})\right)$, where $X_1, \dots, X_n$ is a random sample from a $U[0, \theta]$ distribution. Investigate whether the same behavior holds for $n\left(1 - 1/(\theta^{-1}\hat\theta_n)\right)$, where $\hat\theta_n = \frac{n+1}{n}X_{(n)}$ is the unbiased estimator of $\theta$.
NOTES: 1) The more traditional version of the delta method for asymptotically normal sequences is a particular case of this more general version (SHOW THIS). 2) In this more general version the case $g'(\theta) = 0$ is not excluded: when $g'(\theta) = 0$, it asserts that $r_n\bigl(g(T_n) - g(\theta)\bigr) \xrightarrow{d} g'(\theta)T = 0 \times T = 0$, i.e. $r_n\bigl(g(T_n) - g(\theta)\bigr)$ converges in distribution (and in probability) to the constant 0.
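A quick numerical check of the first limit in problem 1 (sketched in Python, although the course simulations use R; reading the quantity as $n(1 - \theta/X_{(n)})$, the delta method with $g(t) = \theta/t$ and $n(X_{(n)} - \theta) \xrightarrow{d} -\theta E$, $E \sim \text{Exp}(1)$, predicts a limit of $-E$):

```python
import random
import statistics

random.seed(0)
theta, n, reps = 2.0, 1000, 20000

# X_(n) = theta * U^(1/n) by inverting the CDF of the maximum of n U[0, theta] draws.
draws = []
for _ in range(reps):
    x_max = theta * random.random() ** (1 / n)
    # Pivot n(theta/X_(n) - 1) = -n(1 - theta/X_(n)); predicted limit is Exp(1).
    draws.append(n * (theta / x_max - 1))

# If the prediction is right, the sample mean and variance should both be near 1.
print(statistics.mean(draws), statistics.variance(draws))
```

With $n = 1000$ and 20000 replications, both printed values land close to 1, consistent with an Exp(1) limit.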
2. Let $X = (X_1, \dots, X_n)$ be a random sample from distribution $F(x \mid \theta)$, $\theta \in \mathbb{R}$, and let $(L(X), U(X))$ be a $100(1-\alpha)\%$ confidence interval for $\theta$.
a. Show that if $g(x)$ is a monotonically increasing function of $x$ then $\bigl(g(L(X)), g(U(X))\bigr)$ is a $100(1-\alpha)\%$ confidence interval for $g(\theta)$. What if $g(x)$ is monotonically decreasing?
b. Apply part a. to compute a $100(1-\alpha)\%$ confidence interval for $e^\mu$ and for $e^{-\mu}$, from a random sample $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} N(\mu, \sigma^2)$ with both $\mu$ and $\sigma^2$ unknown.
c. In the same setting as b., derive a $100(1-\alpha)\%$ confidence interval for $\mu^2$.
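A minimal numerical sketch of part b. (Python rather than the course's R; for simplicity a normal quantile stands in for the exact $t_{n-1}$ quantile, which is reasonable at the $n$ used):

```python
import math
import random
import statistics
from statistics import NormalDist

random.seed(1)
mu, sigma, n, alpha = 0.5, 1.0, 200, 0.05
x = [random.gauss(mu, sigma) for _ in range(n)]

xbar = statistics.mean(x)
s = statistics.stdev(x)                  # sample standard deviation
z = NormalDist().inv_cdf(1 - alpha / 2)  # normal quantile in place of t_{n-1} (large n)
lo, hi = xbar - z * s / n ** 0.5, xbar + z * s / n ** 0.5  # CI for mu

# g(x) = exp(x) is increasing, so part a. transforms the endpoints directly:
ci_exp_mu = (math.exp(lo), math.exp(hi))
# g(x) = exp(-x) is decreasing, so the transformed endpoints swap:
ci_exp_neg_mu = (math.exp(-hi), math.exp(-lo))
print(ci_exp_mu, ci_exp_neg_mu)
```

Note how the decreasing transformation forces the endpoint swap so that the lower limit stays below the upper one.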
3. Let $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(p)$ and let the sample proportion $\hat p = \bar X$ be the usual estimator of $p$.
a. Show that $\dfrac{\hat p - p}{\sqrt{\hat p(1-\hat p)/n}}$ is an asymptotic pivotal quantity and derive an asymptotic $100(1-\alpha)\%$ confidence interval for $p$ based on it. (You won't need it, but show also that $\hat p(1-\hat p) = \frac{1}{n}\sum_{i=1}^n (X_i - \bar X)^2$, i.e. the biased sample variance.)
b. Show that $\dfrac{\hat p - p}{\sqrt{p(1-p)/n}}$ is an asymptotic pivotal quantity and derive an asymptotic $100(1-\alpha)\%$ confidence interval for $p$ based on it.
c. Parts a. and b. above show that there are many pivots or asymptotic pivots that one can use to construct a confidence interval. Perform a small simulation in R to compare the two asymptotic confidence intervals above in terms of expected length and coverage.
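One possible shape for the simulation in part c. (sketched in Python here, though the problem asks for R; the part a. pivot gives the Wald interval, and inverting the part b. pivot for $p$ gives the Wilson score interval):

```python
import random
from statistics import NormalDist

random.seed(2)
p_true, n, alpha, reps = 0.3, 50, 0.05, 5000
z = NormalDist().inv_cdf(1 - alpha / 2)

def wald(k):
    # Part a.: plug-in standard error around p_hat.
    ph = k / n
    half = z * (ph * (1 - ph) / n) ** 0.5
    return ph - half, ph + half

def wilson(k):
    # Part b.: invert |p_hat - p| <= z * sqrt(p(1-p)/n) for p (a quadratic in p).
    ph, z2 = k / n, z * z
    center = (ph + z2 / (2 * n)) / (1 + z2 / n)
    half = (z / (1 + z2 / n)) * (ph * (1 - ph) / n + z2 / (4 * n * n)) ** 0.5
    return center - half, center + half

cover = {"wald": 0, "wilson": 0}
length = {"wald": 0.0, "wilson": 0.0}
for _ in range(reps):
    k = sum(random.random() < p_true for _ in range(n))
    for name, ci in (("wald", wald(k)), ("wilson", wilson(k))):
        cover[name] += ci[0] <= p_true <= ci[1]
        length[name] += ci[1] - ci[0]

for name in cover:
    print(name, cover[name] / reps, length[name] / reps)
```

At moderate $n$ the Wald interval's empirical coverage tends to sit below the nominal $95\%$, while the Wilson interval stays closer to it, with comparable expected length.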
4. Let $X_1, \dots, X_{n_1} \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(p_1)$ and $Y_1, \dots, Y_{n_2} \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(p_2)$ be two independent random samples. Derive an asymptotic $100(1-\alpha)\%$ confidence interval for $p_1 - p_2$ following the standard approach for the difference of two population means. Is your asymptotic $100(1-\alpha)\%$ interval different from the asymptotic $100(1-\alpha)\%$ interval for the difference of two population means $\mu_1 - \mu_2$? If so, how?
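The standard two-sample construction referred to in problem 4 can be sketched as follows (Python; sample sizes and proportions here are arbitrary illustration values):

```python
import random
from statistics import NormalDist

random.seed(3)
n1, n2, p1, p2, alpha = 120, 150, 0.6, 0.45, 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)

x = [random.random() < p1 for _ in range(n1)]
y = [random.random() < p2 for _ in range(n2)]
ph1, ph2 = sum(x) / n1, sum(y) / n2

# Difference-of-means approach with the plug-in standard error for p1 - p2;
# the Bernoulli structure fixes each variance term to p(1-p) rather than
# requiring a separate sample-variance estimate.
se = (ph1 * (1 - ph1) / n1 + ph2 * (1 - ph2) / n2) ** 0.5
ci = (ph1 - ph2 - z * se, ph1 - ph2 + z * se)
print(ci)
```

The comment flags the one place where this interval differs from the generic $\mu_1 - \mu_2$ construction, which is the point of the question.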
5. Proof of independence of $S^2$ and $\bar X$ for normal samples (adapted from Problem 4 in the textbook). You will learn 3 additional proofs of this result! Suppose that $X_1, X_2, \dots, X_n \overset{\text{i.i.d.}}{\sim} N(\mu, \sigma^2)$.
b. Show that $X_1 - \bar X = -\sum_{i=2}^n (X_i - \bar X)$ and use it to show that $S^2$ can be written as a function of the differences $(X_2 - \bar X, \dots, X_n - \bar X)$.
c. Write down the joint probability density function (pdf) for X1 , X2 , . . . , Xn
and use the joint transformation Y1 = X1 and Yi = Xi − X̄ for i = 2, 3, . . . , n.
Use the Jacobian method to find the joint pdf of Y1 , Y2 , . . . , Yn . Show that Y1
is independent of Y2 , . . . , Yn .
d. Conclude that $\bar X$ is independent of $(X_2 - \bar X, \dots, X_n - \bar X)$.
e. Conclude that X̄ is independent of the sample variance S 2 .