CHAPTER 6: ESTIMATION OF A RANDOM VARIABLE

- Optimum Estimation Given Another Random Variable
- Linear Estimation of X given Y
- MAP and ML Estimation

Problem 6.7. The random variables X and Y have the joint probability density function (PDF)

fX,Y(x, y) = (3/4)(x − y) if 0 ≤ y ≤ x ≤ 2, and fX,Y(x, y) = 0 otherwise.

(a) What is fX(x)?
(b) What is the blind estimate x̂B?
(c) What is the minimum mean square error estimate of X given X < 0.5?
(d) What is fY(y)?
(e) What is the blind estimate ŷB?
(f) What is the minimum mean square error estimate of Y given Y > 0.5?
(g) What is fX|Y(x|y), the conditional PDF of X given Y = y?
(h) What is x̂M(y), the minimum mean square error estimate of X given Y = y?
(i) What is fY|X(y|x), the conditional PDF of Y given X = x?
(j) What is ŷM(x), the minimum mean square error estimate of Y given X = x?
(k) What is X̂L(Y), the linear minimum mean square error estimate of X given Y?
(l) What is e*L, the minimum mean square error of the optimum linear estimate?
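
For a numerical cross-check of parts (a)-(c), the integrals can be approximated on a grid. The Python sketch below is illustrative only: numpy is assumed to be available and the grid resolution is an arbitrary choice, not part of the problem.

```python
# Numerical sanity check for Problem 6.7 (a sketch, not a solution method).
# Approximates the marginal fX(x), the blind estimate E[X], and the
# conditional estimate E[X | X < 0.5] by Riemann summation on a grid.
import numpy as np

def f_xy(x, y):
    # Joint PDF: (3/4)(x - y) on the triangle 0 <= y <= x <= 2.
    return np.where((0 <= y) & (y <= x) & (x <= 2), 0.75 * (x - y), 0.0)

xs = np.linspace(0, 2, 1001)
ys = np.linspace(0, 2, 1001)
X, Y = np.meshgrid(xs, ys, indexing="ij")
dx, dy = xs[1] - xs[0], ys[1] - ys[0]

pdf = f_xy(X, Y)
f_x = pdf.sum(axis=1) * dy             # marginal fX(x), part (a)
print("total mass:", f_x.sum() * dx)   # should be close to 1

x_blind = (xs * f_x).sum() * dx        # blind estimate E[X], part (b)
print("x_B =", x_blind)                # analytic value is 3/2 for this PDF

mask = xs < 0.5                        # conditioning event of part (c)
p_event = f_x[mask].sum() * dx
print("E[X | X < 0.5] =", (xs[mask] * f_x[mask]).sum() * dx / p_event)
```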

Problem 6.1. The random variables X and Y have the joint probability density function (PDF)

fX,Y(x, y) = 6(y − x) if 0 ≤ x ≤ y ≤ 1, and fX,Y(x, y) = 0 otherwise.

(a) What is fX(x)?
(b) What is the blind estimate x̂B?
(c) What is the minimum mean square error estimate of X given X < 0.5?
(d) What is fY(y)?
(e) What is the blind estimate ŷB?
(f) What is the minimum mean square error estimate of Y given Y > 0.5?
(g) What is fX|Y(x|y), the conditional PDF of X given Y = y?
(h) What is x̂M(y), the minimum mean square error estimate of X given Y = y?
(i) What is fY|X(y|x), the conditional PDF of Y given X = x?
(j) What is ŷM(x), the minimum mean square error estimate of Y given X = x?
(k) What is X̂L(Y), the linear minimum mean square error estimate of X given Y?
(l) What is e*L, the minimum mean square error of the optimum linear estimate?

Problem 6.2. The random variables X and Y have the joint probability density function (PDF)

fX,Y(x, y) = 2 if 0 ≤ x ≤ y ≤ 1, and fX,Y(x, y) = 0 otherwise.
(a) What is fX(x)?
(b) What is the blind estimate x̂B?
(c) What is the minimum mean square error estimate of X given X > 0.5?
(d) What is fY(y)?
(e) What is the blind estimate ŷB?
(f) What is the minimum mean square error estimate of Y given X > 0.5?
(g) What is fX|Y(x|y), the conditional PDF of X given Y = y?
(h) What is x̂M(y), the minimum mean square error estimate of X given Y = y?
(i) What is

e*(0.5) = E[(X − x̂M(0.5))² | Y = 0.5],

the minimum mean square error of the estimate of X given Y = 0.5?
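
Parts (h) and (i) only need the slice of the joint PDF at y = 0.5. A short Python sketch (numpy assumed, grid resolution an arbitrary choice) that approximates x̂M(0.5) and e*(0.5):

```python
# Sketch for Problem 6.2 (h)-(i): build the conditional PDF fX|Y(x|0.5)
# from the joint PDF slice, then take its mean and its variance.
import numpy as np

y0 = 0.5
xs = np.linspace(0, 1, 100001)
dx = xs[1] - xs[0]

# Joint PDF at y = y0: equals 2 on 0 <= x <= y0, zero elsewhere.
f_slice = np.where(xs <= y0, 2.0, 0.0)
f_y0 = f_slice.sum() * dx          # fY(0.5), the normalizing constant
f_cond = f_slice / f_y0            # fX|Y(x | 0.5)

x_m = (xs * f_cond).sum() * dx     # x_M(0.5) = E[X | Y = 0.5]
e_star = ((xs - x_m) ** 2 * f_cond).sum() * dx   # conditional variance
print("x_M(0.5) =", x_m)
print("e*(0.5)  =", e_star)
```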

Problem 6.3. The random variables X and Y have the joint probability density function (PDF)

fX,Y(x, y) = 2(y + x) if 0 ≤ x ≤ y ≤ 1, and fX,Y(x, y) = 0 otherwise.
(a) What is fX(x)?
(b) What is the blind estimate x̂B?
(c) What is the minimum mean square error estimate of X given X < 0.5?
(d) What is fY(y)?
(e) What is the blind estimate ŷB?
(f) What is the minimum mean square error estimate of Y given Y > 0.5?
(g) What is fX|Y(x|y), the conditional PDF of X given Y = y?
(h) What is x̂M(y), the minimum mean square error estimate of X given Y = y?
(i) What is fY|X(y|x), the conditional PDF of Y given X = x?
(j) What is ŷM(x), the minimum mean square error estimate of Y given X = x?
(k) What is X̂L(Y), the linear minimum mean square error estimate of X given Y?
(l) What is e*L, the minimum mean square error of the optimum linear estimate?
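
Parts (k) and (l) rest on the standard LMMSE formulas a* = Cov(X, Y)/Var(Y), b* = E[X] − a*E[Y], and e*L = Var(X) − Cov(X, Y)²/Var(Y). A Python sketch that approximates the needed moments on a grid (numpy assumed, resolution arbitrary):

```python
# Sketch for Problem 6.3 (k)-(l): compute the moments of the joint PDF
# numerically, then plug them into the LMMSE formulas.
import numpy as np

xs = np.linspace(0, 1, 1501)
ys = np.linspace(0, 1, 1501)
X, Y = np.meshgrid(xs, ys, indexing="ij")
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
pdf = np.where(X <= Y, 2.0 * (X + Y), 0.0)   # joint PDF of Problem 6.3

def mean(g):
    # E[g(X, Y)] by Riemann summation over the grid.
    return (g * pdf).sum() * dA

EX, EY = mean(X), mean(Y)
VarX = mean((X - EX) ** 2)
VarY = mean((Y - EY) ** 2)
Cov = mean((X - EX) * (Y - EY))

a = Cov / VarY                    # slope of X_L(Y) = aY + b
b = EX - a * EY                   # intercept
e_L = VarX - Cov ** 2 / VarY      # minimum mean square error e*_L
print(f"X_L(Y) ~= {a:.4f} Y + {b:.4f},  e*_L ~= {e_L:.5f}")
```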

Problem 6.4. The joint PMF for two discrete random variables X and Y is given in the following table:

           Y = −3   Y = −1   Y = 1   Y = 3
  X = −1    1/6      1/8      1/24    0
  X = 0     1/12     1/12     1/12    1/12
  X = 1     0        1/24     1/8     1/6

(a) Find the marginal probability mass functions (PMFs) pX(x) and pY(y).
(b) Determine whether X and Y are independent.
(c) Calculate E[X], E[Y], Var(X), Var(Y), and Cov(X, Y).
(d) Let X̂L(Y) = aY + b be a linear estimator of X. Find a* and b*, the values of a and b that minimize the mean square error eL.
(e) What is e*L, the minimum mean square error of the optimum linear estimate?
(f) Find pX|Y(x | −3), the conditional PMF of X given Y = −3.
(g) Find x̂M(−3), the optimum (nonlinear) mean square estimator of X given Y = −3.
(h) What is

e*(−3) = E[(X − x̂M(−3))² | Y = −3],

the minimum mean square error of the estimate of X given Y = −3?
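
Every part of Problem 6.4 follows mechanically from the table, so a short Python sketch (numpy assumed) is a convenient way to check hand calculations:

```python
# Sketch for Problem 6.4: marginals, independence, moments, LMMSE, and the
# conditional quantities given Y = -3, all from the joint PMF table.
import numpy as np

x_vals = np.array([-1, 0, 1])
y_vals = np.array([-3, -1, 1, 3])
P = np.array([[1/6,  1/8,  1/24, 0],
              [1/12, 1/12, 1/12, 1/12],
              [0,    1/24, 1/8,  1/6]])     # rows: X; columns: Y

pX = P.sum(axis=1)                          # part (a)
pY = P.sum(axis=0)
indep = np.allclose(P, np.outer(pX, pY))    # part (b)

EX = (x_vals * pX).sum()                    # part (c)
EY = (y_vals * pY).sum()
VarX = ((x_vals - EX) ** 2 * pX).sum()
VarY = ((y_vals - EY) ** 2 * pY).sum()
Cov = (np.outer(x_vals - EX, y_vals - EY) * P).sum()

a = Cov / VarY                              # part (d)
b = EX - a * EY
e_L = VarX - Cov ** 2 / VarY                # part (e)

p_cond = P[:, 0] / pY[0]                    # pX|Y(x | -3), part (f)
x_m = (x_vals * p_cond).sum()               # x_M(-3), part (g)
e_star = ((x_vals - x_m) ** 2 * p_cond).sum()   # part (h)
print(indep, EX, EY, Cov, a, b, e_L, x_m, e_star)
```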

Problem 6.5. Let X be a random variable with PDF



fX(x) = 2x if x ∈ [0, 1], and fX(x) = 0 otherwise.

Here, the parameter X = x can be seen as the probability of success in a Bernoulli trial. To estimate X, we perform n independent trials of the Bernoulli experiment. The number of successes in the n trials is a random variable Y. That is, given X = x, the random variable Y follows the Binomial distribution B(n, x). Therefore, the conditional PMF of Y given X = x is defined by

pY|X(y|x) = C(n, y) x^y (1 − x)^(n−y),

for y ∈ {0, 1, 2, . . . , n}. Given an observation Y = y, derive the following estimates of X:
(a) The blind estimate x̂B.
(b) The maximum a posteriori probability estimate x̂MAP(y).
(c) The maximum likelihood estimate x̂ML(y).
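
Both the MAP and ML estimates maximize a smooth function of x on [0, 1], so a grid search gives a quick check of the calculus. A minimal Python sketch, assuming numpy and the illustrative values n = 10, y = 7 (these are my choices, not part of the problem statement); the same sketch applies to Problem 6.6 below by changing the prior line to 6 * xs * (1 - xs).

```python
# Sketch for Problems 6.5-6.6: MAP maximizes fX(x) * pY|X(y|x) over x,
# while ML drops the prior and maximizes the likelihood alone.
import numpy as np
from math import comb

n, y = 10, 7                    # illustrative values only
xs = np.linspace(1e-6, 1 - 1e-6, 100001)

prior = 2 * xs                                     # fX(x) of Problem 6.5
likelihood = comb(n, y) * xs**y * (1 - xs)**(n - y)
posterior = prior * likelihood                     # up to a constant factor

x_map = xs[np.argmax(posterior)]
x_ml = xs[np.argmax(likelihood)]
print("MAP ~=", x_map, " (calculus gives (y+1)/(n+1) =", (y + 1) / (n + 1), ")")
print("ML  ~=", x_ml, "  (calculus gives y/n =", y / n, ")")
```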

Problem 6.6. Let X be a random variable with PDF



fX(x) = 6x(1 − x) if x ∈ [0, 1], and fX(x) = 0 otherwise.

Here, the parameter X = x can be seen as the probability of success in a Bernoulli trial. To estimate X, we perform n independent trials of the Bernoulli experiment. The number of successes in the n trials is a random variable Y. That is, given X = x, the random variable Y follows the Binomial distribution B(n, x). Therefore, the conditional PMF of Y given X = x is defined by

pY|X(y|x) = C(n, y) x^y (1 − x)^(n−y),

for y ∈ {0, 1, 2, . . . , n}. Given an observation Y = y, derive the following estimates of X:
(a) The blind estimate x̂B.

(b) The maximum a posteriori probability estimate x̂MAP(y).
(c) The maximum likelihood estimate x̂ML(y).

Problem 6.7. Let X be a continuous random variable with probability density function fX(x) = 2x if 0 ≤ x ≤ 1, and fX(x) = 0 otherwise. Given X = x, the random variable Y follows the Geometric distribution Geo(x) with parameter x. That is, the conditional PMF of Y given X = x is defined by pY|X(y|x) = P(Y = y | X = x) = (1 − x)^(y−1) x, for y ∈ {1, 2, . . .}.
Then, find
(a) the maximum a posteriori probability (MAP) estimate of X given the observation Y = 3.
(b) the maximum a posteriori probability (MAP) estimate of X given the observation Y = 2.

Problem 6.8. Let X be a continuous random variable with probability density function fX(x) = 2x if 0 ≤ x ≤ 1, and fX(x) = 0 otherwise. Given X = x, the random variable Y follows the Geometric distribution Geo(x) with parameter x. That is, the conditional PMF of Y given X = x is defined by pY|X(y|x) = P(Y = y | X = x) = (1 − x)^(y−1) x, for y ∈ {1, 2, . . .}.
(a) Find the maximum a posteriori probability (MAP) estimate of X given the observation
Y = 3.
(b) Find the maximum a posteriori probability (MAP) estimate of X given the observation
Y = 2.
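
Problems 6.7 and 6.8 share the same setup, so one sketch covers both: the MAP estimate maximizes the unnormalized posterior fX(x) · pY|X(y|x) = 2x · (1 − x)^(y−1) x over [0, 1]. A grid-search Python sketch (numpy assumed, grid size arbitrary):

```python
# Sketch for Problems 6.7-6.8: grid search for the MAP estimate under the
# prior fX(x) = 2x and the Geometric likelihood (1 - x)**(y - 1) * x.
import numpy as np

def x_map(y):
    xs = np.linspace(1e-6, 1 - 1e-6, 200001)
    posterior = 2 * xs * (1 - xs) ** (y - 1) * xs   # proportional only
    return xs[np.argmax(posterior)]

# Differentiating the log-posterior instead gives the closed form 2/(y + 1).
print("MAP given Y = 3:", x_map(3))
print("MAP given Y = 2:", x_map(2))
```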

Problem 6.9. Consider a collection of old coins. Each coin has a random probability x of landing heads up when it is flipped. The probability of heads, x, is a sample value of a continuous random variable X with probability density function fX(x) = 2x if 0 ≤ x ≤ 1, and fX(x) = 0 otherwise. To estimate x for a coin, we flip the coin n times and count the number of heads, y. Because each flip is a Bernoulli trial with probability of success x, y is a sample value of a Binomial B(n, x) random variable Y. That is, given X = x, the random variable Y follows the Binomial distribution B(n, x). Therefore, the conditional PMF of Y given X = x is defined by pY|X(y|x) = P(Y = y | X = x) = C(n, y) x^y (1 − x)^(n−y), for y ∈ {0, 1, 2, . . . , n}.
(a) Find the maximum a posteriori probability (MAP) estimate of X given the observation
Y = y.
(b) Find the maximum a posteriori probability (MAP) estimate of X given the observation
Y = 3.

Problem 6.10. Consider a collection of old coins. Each coin has a random probability x of landing heads up when it is flipped. The probability of heads, x, is a sample value of a continuous random variable X with probability density function fX(x) = 6x(1 − x) if 0 ≤ x ≤ 1, and fX(x) = 0 otherwise. To estimate x for a coin, we flip the coin n times and count the number of heads, y. Because each flip is a Bernoulli trial with probability of success x, y is a sample value of a Binomial B(n, x) random variable Y. That is, given X = x, the random variable Y follows the Binomial distribution B(n, x). Therefore, the conditional PMF of Y given X = x is defined by pY|X(y|x) = P(Y = y | X = x) = C(n, y) x^y (1 − x)^(n−y), for y ∈ {0, 1, 2, . . . , n}.
(a) Find the maximum likelihood (ML) estimate of X given the observation Y = y.
(b) Find the maximum likelihood (ML) estimate of X given the observation Y = 2.
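
For the ML estimate the prior fX plays no role; only the Binomial likelihood is maximized. A minimal Python sketch (numpy assumed), with n = 5 as an illustrative choice since the problem leaves n symbolic:

```python
# Sketch for Problem 6.10: grid search for the ML estimate; log-likelihood
# calculus gives the closed form x_ML = y/n.
import numpy as np
from math import comb

def x_ml(n, y):
    xs = np.linspace(1e-6, 1 - 1e-6, 100001)
    likelihood = comb(n, y) * xs**y * (1 - xs)**(n - y)
    return xs[np.argmax(likelihood)]

n = 5                                      # illustrative value only
print("ML given Y = 2:", x_ml(n, 2), " vs y/n =", 2 / n)
```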
