5.0 INTRODUCTION

5.1 EXPECTED VALUE OF A FUNCTION OF RANDOM VARIABLES
For a function of N random variables, denoted g(X_1, ..., X_N), the expected value of the function becomes

ḡ = E[g(X_1, ..., X_N)] = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} g(x_1, ..., x_N) f_{X_1,...,X_N}(x_1, ..., x_N) dx_1 ··· dx_N   (5.1-2)
Thus, expectation in general involves an N-fold integration when N random variables are involved. It should be clear to the reader from (5.1-2) that the expected value of a sum of functions is equal to the sum of the expected values of the individual functions.
We illustrate the application of (5.1-2) with an example that will develop an important point.
EXAMPLE 5.1-1. Let g(X_1, ..., X_N) = Σ_{i=1}^{N} a_i X_i be a weighted sum of the N random variables, where the a_i are real constants. Then

E[Σ_{i=1}^{N} a_i X_i] = Σ_{i=1}^{N} ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} a_i x_i f_{X_1,...,X_N}(x_1, ..., x_N) dx_1 ··· dx_N

from (5.1-2). After using (4.3-8), the terms in the sum all reduce to the form

∫_{-∞}^{∞} a_i x_i f_{X_i}(x_i) dx_i = E[a_i X_i] = a_i E[X_i]

so

E[Σ_{i=1}^{N} a_i X_i] = Σ_{i=1}^{N} a_i E[X_i]   (5.1-3)

which says that the mean value of a weighted sum of random variables equals the weighted sum of mean values.
As a second illustration, suppose we let g(X_1, ..., X_N) = g(X_1), a function of only one of the variables, and substitute into (5.1-2). After integrating with respect to all random variables except X_1, (5.1-2) becomes

E[g(X_1)] = ∫_{-∞}^{∞} g(x_1) f_{X_1}(x_1) dx_1   (5.1-4)

which is the same as previously given in (3.1-6) for one random variable. Some reflection on the reader's part will verify that (5.1-4) also validates such earlier topics as moments, central moments, characteristic function, etc., for a single random variable.
Joint Moments About the Origin

An important application of (5.1-2) is in defining joint moments about the origin. They are denoted m_{nk} and are given by

m_{nk} = E[X^n Y^k] = ∫_{-∞}^{∞}∫_{-∞}^{∞} x^n y^k f_{X,Y}(x, y) dx dy   (5.1-5)

for the case of two random variables X and Y. Clearly m_{n0} = E[X^n] are the moments m_n of X, while m_{0k} = E[Y^k] are the moments of Y. The sum n + k is called the order of the moments. Thus m_{02}, m_{20}, and m_{11} are all second-order moments of X and Y. The first-order moments m_{01} = E[Y] = Ȳ and m_{10} = E[X] = X̄ are the expected values of Y and X, respectively, and are the coordinates of the "center of gravity" of the function f_{X,Y}(x, y).

The second-order moment m_{11} = E[XY] is called the correlation of X and Y. It is so important to later work that we give it the symbol R_XY. Hence,

R_XY = m_{11} = E[XY] = ∫_{-∞}^{∞}∫_{-∞}^{∞} xy f_{X,Y}(x, y) dx dy   (5.1-6)

If the correlation can be written in the form

R_XY = E[X]E[Y]   (5.1-7)

then X and Y are said to be uncorrelated. If

R_XY = 0   (5.1-8)

the random variables X and Y are called orthogonal.
As an illustration, let X be a random variable with mean X̄ = E[X] = 3 and second moment E[X²] = 11, and let

Y = −6X + 22

The mean value of Y is Ȳ = E[Y] = E[−6X + 22] = −6X̄ + 22 = 4. The correlation of X and Y is found from (5.1-6):

R_XY = m_{11} = E[XY] = E[−6X² + 22X] = −6E[X²] + 22X̄ = −6(11) + 22(3) = 0

Since R_XY = 0, X and Y are orthogonal from (5.1-8). On the other hand, R_XY ≠ E[X]E[Y] = 12, so X and Y are not uncorrelated [see (5.1-7)].†

†Uncorrelated gaussian random variables are, however, known to also be independent (see Section 5.3).
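A short simulation makes the distinction concrete. The sketch below is illustrative only (it additionally assumes X is gaussian, which the argument above does not require); it estimates R_XY and C_XY = R_XY − E[X]E[Y] for Y = −6X + 22.

% Orthogonal but correlated: R_XY is near 0 while C_XY is near -12.
n = 1e6;
X = 3 + sqrt(2)*randn(n,1);      % mean 3, variance 2 (gaussian assumed)
Y = -6*X + 22;
Rxy = mean(X.*Y);                % estimate of the correlation E[XY]
Cxy = Rxy - mean(X)*mean(Y);     % estimate of the covariance
fprintf('R_XY = %.2f, C_XY = %.2f\n', Rxy, Cxy);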
We note that two random variables can be orthogonal even though correlated when one, Y, is related to the other, X, by a linear function.

Joint Central Moments

The joint central moments of two random variables X and Y are defined by μ_{nk} = E[(X − X̄)^n (Y − Ȳ)^k]. The second-order central moment μ_{11} = C_XY = E[(X − X̄)(Y − Ȳ)] is known as the covariance of X and Y; it can also be written as C_XY = R_XY − X̄Ȳ. The normalized second-order moment

ρ = μ_{11}/(σ_X σ_Y) = E[(X − X̄)(Y − Ȳ)]/(σ_X σ_Y)   (5.1-12)

is known as the correlation coefficient of X and Y. It can be shown (see Problem 5.1-10) that −1 ≤ ρ ≤ 1.

For N random variables X_1, ..., X_N, the joint central moments are

μ_{n_1 ··· n_N} = E[(X_1 − X̄_1)^{n_1} ··· (X_N − X̄_N)^{n_N}]
 = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} (x_1 − X̄_1)^{n_1} ··· (x_N − X̄_N)^{n_N} f_{X_1,...,X_N}(x_1, ..., x_N) dx_1 ··· dx_N   (5.1-19)
As a further example, let X be the weighted sum

X = Σ_{i=1}^{N} a_i X_i

where the a_i are real weighting constants. The variance of X will be found. From Example 5.1-1,

X̄ = E[Σ_{i=1}^{N} a_i X_i] = Σ_{i=1}^{N} a_i E[X_i] = Σ_{i=1}^{N} a_i X̄_i

so we have

X − X̄ = Σ_{i=1}^{N} a_i (X_i − X̄_i)

and

σ_X² = E[(X − X̄)²] = E[ Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j (X_i − X̄_i)(X_j − X̄_j) ] = Σ_{i=1}^{N} Σ_{j=1}^{N} a_i a_j C_{X_iX_j}

where C_{X_iX_j} = E[(X_i − X̄_i)(X_j − X̄_j)] is the covariance of X_i and X_j. If the random variables are uncorrelated, so that C_{X_iX_j} = 0 for i ≠ j while C_{X_iX_i} = σ_{X_i}², we get

σ_X² = Σ_{i=1}^{N} a_i² σ_{X_i}²

In words: the variance of a weighted sum of uncorrelated random variables (weights a_i) equals the weighted sum of the variances of the random variables (weights a_i²).
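This result is easy to verify numerically. The sketch below is an illustration only (the weights and variances are arbitrary assumptions); it forms a weighted sum of independent, hence uncorrelated, zero-mean gaussian variables and compares the sample variance of the sum with the weighted sum of variances.

% Variance of a weighted sum of uncorrelated random variables.
a   = [1 -2 3];                       % weights a_i (arbitrary)
sig = [0.5 1 2];                      % standard deviations of X_1, X_2, X_3
n = 1e6;
X = randn(n,3) .* sig;                % independent zero-mean gaussian columns
Y = X * a';                           % weighted sum
fprintf('sample variance %.2f, predicted %.2f\n', var(Y), sum(a.^2 .* sig.^2));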
*5.2 JOINT CHARACTERISTIC FUNCTIONS

The joint characteristic function of two random variables X and Y is defined by

Φ_{X,Y}(ω_1, ω_2) = E[e^{jω_1X + jω_2Y}]   (5.2-1)

where ω_1 and ω_2 are real numbers. In terms of the joint density,

Φ_{X,Y}(ω_1, ω_2) = ∫_{-∞}^{∞}∫_{-∞}^{∞} f_{X,Y}(x, y) e^{jω_1x + jω_2y} dx dy   (5.2-2)

Joint moments m_{nk} can be found from the joint characteristic function as follows:

m_{nk} = (−j)^{n+k} ∂^{n+k} Φ_{X,Y}(ω_1, ω_2)/∂ω_1^n ∂ω_2^k |_{ω_1=0, ω_2=0}   (5.2-6)
EXAMPLE 5.2-1. Two random variables X and Y have the joint characteristic function

Φ_{X,Y}(ω_1, ω_2) = exp(−2ω_1² − 8ω_2²)

We show that X and Y are both zero-mean random variables and that they are uncorrelated.

The means derive from (5.2-6):

E[X] = m_{10} = −j ∂Φ_{X,Y}(ω_1, ω_2)/∂ω_1 |_{ω_1=0, ω_2=0} = −j(−4ω_1) exp(−2ω_1² − 8ω_2²)|_{ω_1=0, ω_2=0} = 0

A similar development gives E[Y] = m_{01} = 0, so both variables have zero means. The correlation is

R_XY = m_{11} = (−j)² ∂²Φ_{X,Y}(ω_1, ω_2)/∂ω_1∂ω_2 |_{ω_1=0, ω_2=0} = −(−4ω_1)(−16ω_2) exp(−2ω_1² − 8ω_2²)|_{ω_1=0, ω_2=0} = 0

Since R_XY = 0 = E[X]E[Y], X and Y are uncorrelated [see (5.1-7)].
For N random variables X_1, ..., X_N, the joint characteristic function is

Φ_{X_1,...,X_N}(ω_1, ..., ω_N) = E[exp(j(ω_1X_1 + ··· + ω_NX_N))]   (5.2-7)

and joint moments are obtained from

m_{n_1 n_2 ··· n_N} = (−j)^R ∂^R Φ_{X_1,...,X_N}(ω_1, ..., ω_N) / ∂ω_1^{n_1} ∂ω_2^{n_2} ··· ∂ω_N^{n_N} |_{all ω_i = 0}   (5.2-8)

where

R = n_1 + n_2 + ··· + n_N   (5.2-9)

The joint characteristic function is especially useful in certain practical problems where the probability density function is needed for the sum of N statistically independent random variables. We use an example to show how the desired probability density is found.
EXAMPLE 5.2-2. Let Y = X_1 + X_2 + ··· + X_N be the sum of N statistically independent random variables X_i, i = 1, 2, ..., N. Denote their probability densities and characteristic functions, respectively, by f_{X_i}(x_i) and Φ_{X_i}(ω_i). Because of independence the joint probability density is the product of all the individual densities and (5.2-7) can be written as

Φ_{X_1,...,X_N}(ω_1, ..., ω_N) = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} [ Π_{i=1}^{N} f_{X_i}(x_i) e^{jω_ix_i} ] dx_1 ··· dx_N = Π_{i=1}^{N} ∫_{-∞}^{∞} f_{X_i}(x_i) e^{jω_ix_i} dx_i = Π_{i=1}^{N} Φ_{X_i}(ω_i)

Next, we write the characteristic function of Y using (3.3-1) and note that it is the same as (5.2-7) with ω_i = ω, all i. Hence,

Φ_Y(ω) = E[e^{jωY}] = E[exp(jω Σ_{i=1}^{N} X_i)] = Φ_{X_1,...,X_N}(ω, ..., ω) = Π_{i=1}^{N} Φ_{X_i}(ω)

Finally, we use (3.3-3) to obtain the desired density of Y:

f_Y(y) = (1/2π) ∫_{-∞}^{∞} [ Π_{i=1}^{N} Φ_{X_i}(ω) ] e^{−jωy} dω

In the special case where the X_i are identically distributed such that Φ_{X_i}(ω) = Φ_X(ω), all i, our result reduces to

f_Y(y) = (1/2π) ∫_{-∞}^{∞} [Φ_X(ω)]^N e^{−jωy} dω
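The inversion integral is easy to evaluate numerically. The sketch below is an illustration, not the book's example: it assumes N identically distributed zero-mean, unit-variance gaussian variables, for which Φ_X(ω) = exp(−ω²/2), raises the characteristic function to the Nth power, and approximates the inverse integral by a Riemann sum; the result is compared with the known gaussian density of variance N.

% Density of the sum of N iid zero-mean, unit-variance gaussian variables
% via the product of characteristic functions, inverted numerically.
N  = 4;
w  = linspace(-20, 20, 4001);             % frequency grid
dw = w(2) - w(1);
PhiY = exp(-w.^2/2).^N;                   % [Phi_X(w)]^N
y  = linspace(-10, 10, 201);
fY = zeros(size(y));
for k = 1:numel(y)
    fY(k) = real(sum(PhiY .* exp(-1j*w*y(k))) * dw / (2*pi));
end
plot(y, fY, y, exp(-y.^2/(2*N))/sqrt(2*pi*N), '--');   % compare with N(0, N)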
5.3 JOINTLY GAUSSIAN RANDOM VARIABLES

Gaussian random variables are very important because they show up in nearly every area of science and engineering. In this section, the case of two gaussian random variables is first examined. The more advanced case of N random variables is then introduced.

Two Random Variables

Two random variables X and Y are said to be jointly gaussian if their joint density function is of the form

f_{X,Y}(x, y) = [1/(2πσ_Xσ_Y√(1 − ρ²))] exp{ −[ (x − X̄)²/σ_X² − 2ρ(x − X̄)(y − Ȳ)/(σ_Xσ_Y) + (y − Ȳ)²/σ_Y² ] / [2(1 − ρ²)] }   (5.3-1)
where X̄ = E[X], Ȳ = E[Y], ρ is the correlation coefficient of X and Y, and

σ_X² = E[(X − X̄)²]   (5.3-4)
σ_Y² = E[(Y − Ȳ)²]

Figure 5.3-1a illustrates the appearance of the joint gaussian density function (5.3-1). Its maximum is located at the point (X̄, Ȳ), and the maximum value is obtained by setting x = X̄ and y = Ȳ in (5.3-1). The locus of constant values of f_{X,Y}(x, y) is an ellipse centered on (X̄, Ȳ).† When ρ = 0, corresponding to uncorrelated X and Y, (5.3-1) can be written as the product

f_{X,Y}(x, y) = f_X(x) f_Y(y)   (5.3-8)

where

f_X(x) = [1/√(2πσ_X²)] e^{−(x − X̄)²/(2σ_X²)}   (5.3-9)
f_Y(y) = [1/√(2πσ_Y²)] e^{−(y − Ȳ)²/(2σ_Y²)}   (5.3-10)

Now the form of (5.3-8) is sufficient to guarantee that X and Y are statistically independent. Therefore, we conclude that any uncorrelated gaussian random variables are also statistically independent.

†When σ_X = σ_Y and ρ = 0 the ellipse degenerates into a circle; when ρ = +1 or −1 the ellipses degenerate into lines rotated by angles π/4 and −π/4, respectively, that pass through the point (X̄, Ȳ).
FIGURE 5.3-1
Sketch of the joint density function of two gaussian random variables.

When ρ ≠ 0, X and Y are correlated, and a rotation of the x, y coordinate axes through the angle

θ = (1/2) tan⁻¹[ 2ρσ_Xσ_Y / (σ_X² − σ_Y²) ]   (5.3-11)

produces new random variables that are uncorrelated and, being gaussian, statistically independent.
*N Random Variables

N random variables X_1, X_2, ..., X_N are called jointly gaussian if their joint density function can be written as

f_{X_1,...,X_N}(x_1, ..., x_N) = exp{ −(1/2)[x − X̄]^t [C_X]⁻¹ [x − X̄] } / √( (2π)^N |[C_X]| )   (5.3-12)

where [x − X̄] is a column matrix with elements x_i − X̄_i, and [C_X] is the N × N covariance matrix with elements C_{X_iX_j} = E[(X_i − X̄_i)(X_j − X̄_j)]; |[C_X]| and [C_X]⁻¹ denote its determinant and inverse.

For the special case N = 2 the covariance matrix becomes

[C_X] = [ σ_{X_1}²     C_{X_1X_2} ] = [ σ_{X_1}²          ρσ_{X_1}σ_{X_2} ]
        [ C_{X_2X_1}   σ_{X_2}²   ]   [ ρσ_{X_1}σ_{X_2}   σ_{X_2}²        ]   (5.3-16)

so

[C_X]⁻¹ = 1/[σ_{X_1}²σ_{X_2}²(1 − ρ²)] [ σ_{X_2}²           −ρσ_{X_1}σ_{X_2} ]
                                       [ −ρσ_{X_1}σ_{X_2}   σ_{X_1}²         ]   (5.3-17)

and

|[C_X]|⁻¹ = 1/[σ_{X_1}²σ_{X_2}²(1 − ρ²)]   (5.3-18)

Substitution of these results into (5.3-12) reproduces the two-variable density of (5.3-1). Any density obtained from (5.3-12) for a subset of k of the N random variables is also gaussian (Papoulis, 1965, p. 257). This holds for any k < N.
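As a numerical aside (illustrative values only, not from the text), the sketch below evaluates the matrix form (5.3-12) for N = 2 at one point and compares it with the two-variable formula (5.3-1).

% Evaluate the jointly gaussian density both ways at one point.
xbar = [1; -2];  s1 = 2;  s2 = 3;  rho = 0.5;
Cx = [s1^2, rho*s1*s2; rho*s1*s2, s2^2];        % covariance matrix, (5.3-16)
x  = [2; 0];                                    % evaluation point
d  = x - xbar;
fmat = exp(-0.5 * d' * (Cx \ d)) / sqrt((2*pi)^2 * det(Cx));      % (5.3-12)
ftwo = exp(-(d(1)^2/s1^2 - 2*rho*d(1)*d(2)/(s1*s2) + d(2)^2/s2^2) ...
       / (2*(1-rho^2))) / (2*pi*s1*s2*sqrt(1-rho^2));             % (5.3-1)
fprintf('%g  %g\n', fmat, ftwo);                % the two values agree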
*5.4 TRANSFORMATIONS OF MULTIPLE RANDOM VARIABLES

The function g in either (5.1-1) or (5.1-2) can be considered a transformation involving more than one random variable. By defining a new variable Y = g(X_1, X_2, ..., X_N), we see that (5.1-2) is the expected value of Y. In calculating expected values it was not necessary to determine the density function of the new random variable Y. It may be, however, that the density function of Y is required in some practical problems, and its determination is briefly considered in this section. First we consider a single functional transformation of more than one random variable. Then we develop the case of several functions of several random variables.
*One Function

For a single function Y = g(X_1, X_2) of two random variables, the density of Y can be found from its distribution function F_Y(y) = P{g(X_1, X_2) ≤ y}. In Example 5.4-1, with the regions of integration shown in Figure 5.4-1, the distribution function is

F_Y(y) = ∫_{x_2=−∞}^{∞} ∫_{x_1=−∞}^{y−x_2} f_{X_1,X_2}(x_1, x_2) dx_1 dx_2 = ∫_{−∞}^{∞} I(y, x_2) dx_2   (1)

where we define

I(y, x_2) = ∫_{−∞}^{y−x_2} f_{X_1,X_2}(x_1, x_2) dx_1   (2)

FIGURE 5.4-1
Regions in the x_1x_2 plane applicable to Example 5.4-1.

The density of Y follows by differentiating (1) with respect to y. Applying Leibniz's rule to (2) gives

∂I(y, x_2)/∂y = f_{X_1,X_2}(y − x_2, x_2) + ∫_{−∞}^{y−x_2} [∂f_{X_1,X_2}(x_1, x_2)/∂y] dx_1   (4)

The last term in (4) is zero since the joint density does not depend on y. Finally, the result is

f_Y(y) = ∫_{−∞}^{∞} f_{X_1,X_2}(y − x_2, x_2) dx_2
*Multiple Functions

More generally, we are interested in finding the joint density function of a set of new random variables

Y_j = T_j(X_1, X_2, ..., X_N),   j = 1, 2, ..., N   (5.4-3)

defined by functional transformations T_j. Now all the possible cases described in Chapter 3 for one random variable carry over to the N-dimensional problem. That is, the X_i can be continuous, discrete, or mixed, while the functions T_j can be linear, nonlinear, continuous, segmented, etc. Because so many cases are possible, many of them being beyond our scope, we shall discuss only one representative problem.

We shall assume that the new random variables Y_j, given by (5.4-3), are produced by single-valued continuous functions T_j having continuous partial derivatives everywhere. It is further assumed that a set of inverse continuous functions T_j⁻¹ exists such that the old variables may be expressed as single-valued continuous functions of the new variables:

X_i = T_i⁻¹(Y_1, Y_2, ..., Y_N),   i = 1, 2, ..., N

These assumptions mean that a point in the joint sample space of the X_i maps into only one point in the space of the new variables Y_j, and vice versa.
Let R_X be a closed region of points in the space of the X_i and R_Y be the corresponding region of mapped points in the space of the Y_j. Then the probability that a point falls in R_X will equal the probability that its mapped point falls in R_Y. These probabilities are obtained by integrating the respective joint densities over R_X and R_Y. Changing variables in the first of these integrals by means of the inverse functions introduces the Jacobian of the transformation

J = det[ ∂T_i⁻¹/∂y_j ],   i, j = 1, 2, ..., N   (5.4-6)

that is, the determinant of the N × N matrix whose ijth element is ∂T_i⁻¹/∂y_j. The desired joint density of the new variables is then

f_{Y_1,...,Y_N}(y_1, ..., y_N) = f_{X_1,...,X_N}(x_1 = T_1⁻¹, ..., x_N = T_N⁻¹) |J|   (5.4-8)

When N = 1, (5.4-8) reduces to (3.4-9) previously derived for a single random variable.

The solution (5.4-8) for the joint density of the new variables Y_j is illustrated here with an example.
Let two new random variables Y_1 and Y_2 be formed from random variables X_1 and X_2 by the linear transformation

Y_1 = T_1(X_1, X_2) = aX_1 + bX_2
Y_2 = T_2(X_1, X_2) = cX_1 + dX_2

where a, b, c, and d are real constants with ad − bc ≠ 0. The inverse functions are easy to obtain by solving these two equations for the two variables X_1 and X_2:

X_1 = T_1⁻¹(Y_1, Y_2) = (dY_1 − bY_2)/(ad − bc)
X_2 = T_2⁻¹(Y_1, Y_2) = (aY_2 − cY_1)/(ad − bc)
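The transformation result (5.4-8) can be checked by simulation. The sketch below is an illustration with assumed constants and uniform inputs (not part of the example): X_1 and X_2 are taken independent and uniform on (0, 1), so f_{X_1,X_2} = 1 on the unit square and (5.4-8) predicts f_{Y_1,Y_2} = 1/|ad − bc| wherever the inverse image falls inside the square.

% Monte Carlo check of the Jacobian result for a linear transformation.
a = 2; b = 1; c = 1; d = 3;              % constants with ad - bc ~= 0
n = 1e6;
x1 = rand(n,1);  x2 = rand(n,1);         % independent uniform inputs
y1 = a*x1 + b*x2;  y2 = c*x1 + d*x2;
p1 = a*0.5 + b*0.5;  p2 = c*0.5 + d*0.5; % image of the interior point (0.5, 0.5)
h = 0.05;                                % half-width of the counting box
inBox = abs(y1 - p1) < h & abs(y2 - p2) < h;
fprintf('estimated %.3f, predicted %.3f\n', sum(inBox)/(n*(2*h)^2), 1/abs(a*d - b*c));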
*5.5 LINEAR TRANSFORMATION OF GAUSSIAN RANDOM VARIABLES

Suppose N gaussian random variables X_1, ..., X_N are linearly transformed into new random variables Y_1, ..., Y_N according to

Y_i = a_{i1}X_1 + a_{i2}X_2 + ··· + a_{iN}X_N,   i = 1, 2, ..., N   (5.5-1)

where the a_{ij} are real constants. In matrix form

[Y] = [T][X]   (5.5-2)

where

[T] = [a_{ij}],   [Y] = [Y_1 ··· Y_N]^t,   [X] = [X_1 ··· X_N]^t   (5.5-3)

If [T] is nonsingular the transformation can be inverted; written out,

x_i − X̄_i = a_{i1}⁻¹(y_1 − Ȳ_1) + ··· + a_{iN}⁻¹(y_N − Ȳ_N)   (5.5-8)

from (5.5-5). Here a_{ij}⁻¹ represents the ijth element of [T]⁻¹.
The density function of the new variables Y_1, ..., Y_N is found by solving the right side of (5.4-8) in two steps. The first step is to determine |J|. By using (5.5-7) with (5.4-6), we find that J equals the determinant of the matrix [T]⁻¹. Hence,

|J| = | det([T]⁻¹) | = 1/|det([T])|   (5.5-9)

The second step in solving (5.4-8) proceeds by using (5.5-8) to obtain

C_{X_iX_j} = E[(X_i − X̄_i)(X_j − X̄_j)] = Σ_{k=1}^{N} Σ_{m=1}^{N} a_{ik}⁻¹ a_{jm}⁻¹ E[(Y_k − Ȳ_k)(Y_m − Ȳ_m)]   (5.5-10)

Since C_{X_iX_j} is the ijth element in the covariance matrix [C_X] of (5.3-12) and C_{Y_kY_m} is the kmth element of the covariance matrix of the new variables Y_i, which we denote [C_Y], (5.5-10) can be written in the matrix form [C_X] = [T]⁻¹[C_Y]([T]⁻¹)^t. Finally, these results and (5.5-9) are substituted into (5.4-8), and (5.5-4) is used to obtain

f_{Y_1,...,Y_N}(y_1, ..., y_N) = exp{ −(1/2)[y − Ȳ]^t [C_Y]⁻¹ [y − Ȳ] } / √( (2π)^N |[C_Y]| )   (5.5-15)

Thus the new random variables are also jointly gaussian, with means

Ȳ_i = Σ_{j=1}^{N} a_{ij} X̄_j   (5.5-16)

from (5.5-1) and covariances given by the elements of the covariance matrix

[C_Y] = [T][C_X][T]^t   (5.5-17)
EXAMPLE 5.5-1. Two gaussian random variables X_1 and X_2 have zero means and variances σ_{X_1}² = 4 and σ_{X_2}² = 9. Their covariance C_{X_1X_2} equals 3. If X_1 and X_2 are linearly transformed to new variables Y_1 and Y_2 according to

Y_1 = X_1 − 2X_2
Y_2 = 3X_1 + 4X_2

we use the above results to find the means, variances, and covariance of Y_1 and Y_2.

Here

[T] = [ 1  −2 ]     and     [C_X] = [ 4  3 ]
      [ 3   4 ]                     [ 3  9 ]

Since X_1 and X_2 are zero-mean and gaussian, Y_1 and Y_2 will also be zero-mean and gaussian, so Ȳ_1 = 0 and Ȳ_2 = 0. From (5.5-17):

[C_Y] = [T][C_X][T]^t = [ 1  −2 ] [ 4  3 ] [  1  3 ]  =  [  28  −66 ]
                        [ 3   4 ] [ 3  9 ] [ −2  4 ]     [ −66  252 ]

Thus σ_{Y_1}² = 28, σ_{Y_2}² = 252, and C_{Y_1Y_2} = −66.
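A simulation confirms the matrix result. The sketch below is illustrative only; it generates the correlated inputs by a Cholesky factorization, which is merely a convenient way to obtain samples with the stated covariance and is not part of the example.

% Check of Example 5.5-1: [C_Y] = [T][C_X][T]' = [28 -66; -66 252].
Cx = [4 3; 3 9];
T  = [1 -2; 3 4];
n  = 1e6;
X  = randn(n,2) * chol(Cx);     % zero-mean gaussian samples with covariance Cx
Y  = X * T';                    % apply the linear transformation
disp(cov(Y))                    % sample estimate of [C_Y]
disp(T*Cx*T')                   % exact result from (5.5-17)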
*5.6 COMPUTER GENERATION OF MULTIPLE RANDOM VARIABLES

It can be shown (Problem 5.6-1) that the joint density of Y_1 and Y_2 is

f_{Y_1,Y_2}(y_1, y_2) = (1/2π) e^{−(y_1² + y_2²)/2}   (5.6-2)

that is, Y_1 and Y_2 are statistically independent gaussian random variables, each with zero mean and unit variance.
EXAMPLE 5.6-1. Sample values of two correlated gaussian random variables W_1 and W_2 are generated by computer as described above. We estimate the means and standard deviations of W_1 and W_2 according to

Ŵ_i = (1/N) Σ_{n=1}^{N} W_i(n),   i = 1 and 2   (1)

with analogous sums for the standard deviations, and their normalized correlation coefficient according to

ρ̂_W = [ (1/N) Σ_{n=1}^{N} (W_1(n) − Ŵ_1)(W_2(n) − Ŵ_2) ] / (σ̂_{W_1} σ̂_{W_2})   (3)

The applicable MATLAB code is shown in Figure 5.6-1. Our results are tabulated in Table 5.6-1, where the standard deviations and normalized correlation coefficient are found to be in error by −7.5%, −2.3%, and 45.2%, respectively. For N = 1000 values, these errors improve (see Problem 5.6-5).

If arbitrary means W̄_1 and W̄_2 are desired for W_1 and W_2 in the preceding example, we only need to add these to the right sides of (5.6-6), giving (5.6-7a) and (5.6-7b).
TABLE 5.6-1
Results applicable to Example 5.6-1 (estimated mean, standard deviation, and correlation coefficient of W_1 and W_2).
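Although the book's Figure 5.6-1 code is not reproduced here, the following sketch shows one way to carry out the experiment of Example 5.6-1; the target standard deviations and correlation coefficient below are illustrative assumptions.

% Generate two correlated gaussian variables from independent unit-variance
% gaussians, then estimate their statistics as in Example 5.6-1.
N = 100;
sig1 = 2;  sig2 = 3;  rho = 0.6;            % assumed target parameters
Y1 = randn(N,1);  Y2 = randn(N,1);          % independent N(0,1) samples
W1 = sig1*Y1;
W2 = sig2*(rho*Y1 + sqrt(1 - rho^2)*Y2);    % gives std sig2 and correlation rho
W1hat = mean(W1);  W2hat = mean(W2);        % mean estimates, as in (1)
s1hat = std(W1);   s2hat = std(W2);         % standard-deviation estimates
rhohat = mean((W1 - W1hat).*(W2 - W2hat)) / (s1hat*s2hat);   % as in (3)
fprintf('std devs %.2f %.2f, rho %.2f\n', s1hat, s2hat, rhohat);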
5.7 SAMPLING AND SOME LIMIT THEOREMS

In this section we briefly introduce some basic concepts of sampling. The topic will be expanded further in Chapter 8. Although we shall develop the topics around an example practical problem, the results will apply to much more general situations.

Engineers and scientists are frequently confronted with the problem of measuring some quantity. For example, if we need to measure a dc voltage, we use a dc voltmeter, which provides a scale indication of the voltage. Now regardless of the mechanism used by the meter to provide its indication, one typically "reads" this scale to obtain a "value" we say is the measurement of the voltage. In other words, we sample the indication to get our measurement. The measurement can only be considered as an estimate of the voltage, however, because of meter drifts, accuracy tolerances, etc. In fact, any measurement can only be considered as an estimate of the quantity of interest. In our example, our estimate uses only one sample. More generally, we may estimate (measure) a quantity by using more than one sample (observation).
To quantify these practical thoughts further, consider the problem of measuring the average (dc) value of some random noise voltage. If we had a large number of identical such sources, we could imagine sampling the voltage of each (at a given time) and forming an estimate of the dc voltage by averaging the samples. For N sources, each sample could be considered a value of one of N random variables that form the N dimensions of a combined experiment, as in Section 1.6. Here samples from each of N subexperiments are combined.

In our practical world we usually must take another approach because we never have multiple identical sources with which to work. We seek another model. Suppose we now take a sequence of N samples over time under the assumption (model) that the voltage's statistical properties stay unchanged with time.† Again, each sample is taken as the value of one of N statistically independent random variables, all having the same probability distribution. We again have a combined experiment, but it is now N repetitions of one basic experiment.
If the samples are denoted X_1, X_2, ..., X_N, we define the random variable‡

X̂_N = (1/N) Σ_{n=1}^{N} X_n   (5.7-2)

to represent the effect of averaging over the random variables. Equation (5.7-2) is called an estimator; it produces a specific estimate of X̄ for a specific set of samples. We refer to (5.7-2) as the sample mean.

†More is said about this model in the next chapter (Section 6.2).
‡The circumflex is notation to imply an estimate, or estimator; in this case an estimate of the time average of N samples.

Of great interest is: How does our estimator of the sample mean perform? To seek an answer, we find the mean and variance of our estimator. Since the X_n all have the same mean X̄, the mean of (5.7-2) is E[X̂_N] = (1/N) Σ_{n=1}^{N} E[X_n] = X̄, which equals the quantity being estimated. Any estimator (measurement function) for which the mean of the estimator of some quantity equals the quantity being estimated is called unbiased. For the variance:

σ_{X̂_N}² = E[X̂_N²] − X̄²
         = (1/N²) Σ_{n=1}^{N} Σ_{m=1}^{N} E[X_n X_m] − X̄²
         = (1/N) E[X²] + [(N − 1)/N] X̄² − X̄²
         = (1/N) { E[X²] − X̄² } = σ_X²/N   (5.7-5)

where the second line uses the facts that E[X_nX_m] = E[X²] for n = m and E[X_nX_m] = X̄² for n ≠ m, by independence.
From (5.7-5) the variance of our sample mean estimator goes to zero as N → ∞ for finite source variance σ_X². This fact implies that for large N our estimator will give an estimate nearly equal to the quantity being estimated with high probability. To prove the implication, we use Chebychev's inequality of (3.2-10). For our notation it says

P{ |X̂_N − X̄| < ε } ≥ 1 − σ_X²/(Nε²)   (5.7-6)

which tends to 1 as N → ∞ for any finite ε > 0 and finite σ_X². This result indicates that X̂_N converges to X̄ with probability 1 as N → ∞. Such estimators are called consistent.
Other quantities can be estimated in a similar way. An estimator of the power (mean-squared value) of the noise voltage can be defined as

P̂_N = (1/N) Σ_{n=1}^{N} X_n²   (5.7-7)

while the estimator for the variance of the voltage can be defined as

σ̂_N² = [1/(N − 1)] Σ_{n=1}^{N} (X_n − X̂_N)²   (5.7-8)

Here X̂_N is defined in (5.7-2). The estimator of (5.7-7) is found in Problem 5.7-1 to be unbiased. Its variance is found in Problem 5.7-2. Similarly, the variance estimator of (5.7-8) is unbiased, but becomes biased if the factor 1/(N − 1) is changed to 1/N as in the mean and power estimators (Problem 5.7-3).
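The three estimators are simple to apply. The sketch below is an illustration only (the gaussian noise source and its true mean and standard deviation are assumed values, not from the text); it computes the sample mean (5.7-2), the power estimate (5.7-7), and the variance estimate (5.7-8) from N samples.

% Sample mean, power, and variance estimates from N noise samples.
N = 1000;
x = 2 + 0.5*randn(N,1);             % assumed source: mean 2 V, sigma 0.5 V
xhat   = sum(x)/N;                  % sample mean, (5.7-2)
Phat   = sum(x.^2)/N;               % power estimate, (5.7-7)
varhat = sum((x - xhat).^2)/(N-1);  % unbiased variance estimate, (5.7-8)
fprintf('mean %.3f, power %.3f, variance %.3f\n', xhat, Phat, varhat);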
Here the sample mean is in error by less than 1% for the given set of sample values, but the estimate of variance, 4.75 V², is in error by about 7.8%. The reader should be aware that any other set of 11 samples may give different values and different percentage errors.
EXAMPLE 5.7-3. The random variable Y of Problem 3.5-4 can be generated by the transformation Y = a[(1 − X)^{−1/2} − 1]^{1/2}, 0 < X < 1. We use MATLAB to generate 250 values of Y from which the sample mean of (5.7-2), the second moment of (5.7-7), and the variance of (5.7-8) are then calculated. These values are compared to the true mean, second moment, and variance of Y, which are known to be πa/4, a², and a²(16 − π²)/16, respectively. For the calculations we assume a = 2.

The MATLAB code for this example is given in Figure 5.7-1. Calculated data, shown in Table 5.7-1, reveal errors in estimating the mean, second moment, and variance of −5.7%, −10.3%, and −8.5%, respectively. Problem 5.7-4 reconsiders this example, but for N = 1000 values of Y.

TABLE 5.7-1
Data applicable to Example 5.7-3

                        Mean     Second moment   Variance
True values             1.57     4.00            1.53
Estimated (N = 250)     1.48     3.59            1.40
Percent error          −5.7%    −10.3%          −8.5%
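Figure 5.7-1 itself is not reproduced here, but a short sketch along the same lines is given below; it generates Y from uniform values of X and forms the three estimates used in the example.

% Sketch in the spirit of Example 5.7-3.
a = 2;  N = 250;
X = rand(N,1);                          % uniform values on (0,1)
Y = a*sqrt(1./sqrt(1 - X) - 1);         % Y = a[(1 - X)^(-1/2) - 1]^(1/2)
meanEst  = mean(Y);                     % sample mean, (5.7-2)
m2Est    = mean(Y.^2);                  % second-moment estimate, (5.7-7)
varEst   = sum((Y - meanEst).^2)/(N-1); % variance estimate, (5.7-8)
trueVals = [pi*a/4, a^2, a^2*(16 - pi^2)/16];
fprintf('mean %.2f (%.2f)  m2 %.2f (%.2f)  var %.2f (%.2f)\n', ...
        meanEst, trueVals(1), m2Est, trueVals(2), varEst, trueVals(3));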
The preceding developments have shown that the sample mean estimator of (5.7-2), where the random variables X_n are identically distributed (same mean and same finite variance) and are at least pairwise statistically independent, satisfies

lim_{N→∞} P{ |X̂_N − X̄| < ε } = 1,   any ε > 0   (5.7-9)

This result is a form of the weak law of large numbers. A stronger statement, the strong law of large numbers, is

P{ lim_{N→∞} X̂_N = X̄ } = 1   (5.7-10)
*5.8 COMPLEX RANDOM VARIABLES

A complex random variable Z can be defined in terms of real random variables X and Y by

Z = X + jY   (5.8-1)

where j = √(−1). In considering expected values involving Z, the joint density of X and Y must be used. For instance, if g(·) is some function (real or complex) of Z, the expected value of g(Z) is obtained from

E[g(Z)] = ∫_{-∞}^{∞}∫_{-∞}^{∞} g(x + jy) f_{X,Y}(x, y) dx dy   (5.8-2)

For two complex random variables Z_m and Z_n, the covariance is defined by

C_{Z_mZ_n} = E[ (Z_m − E[Z_m])* (Z_n − E[Z_n]) ],   n ≠ m   (5.8-9)
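As a small numerical illustration (the joint statistics below are assumed, not from the text), the covariance (5.8-9) can be estimated directly from sample pairs.

% Estimating the covariance of two complex random variables from samples.
M  = 1e5;
Z1 = randn(M,1) + 1j*randn(M,1);            % assumed complex samples
Z2 = 0.5*Z1 + randn(M,1) + 1j*randn(M,1);   % correlated with Z1
C12 = mean(conj(Z1 - mean(Z1)) .* (Z2 - mean(Z2)));
disp(C12)    % close to 0.5*E[|Z1|^2] = 1 for this construction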
PROBLEMS
Random variables X_1, X_2, X_3, and X_4 have the joint density

f_{X_1,X_2,X_3,X_4}(x_1, x_2, x_3, x_4) = 1/(abcd)   0 < x_1 < a, 0 < x_2 < b, 0 < x_3 < c, and 0 < x_4 < d
                                        = 0          elsewhere
5.1-3. The density function of two random variables X and Y is

f_{X,Y}(x, y) = u(x)u(y) 16 e^{−4(x+y)}

Find the mean value of the function

g(X, Y) = e^{−2(X² + Y²)}
5.1-5. Three statistically independent random variables X_1, X_2, and X_3 have mean values X̄_1 = 3, X̄_2 = 6, and X̄_3 = −2. Find the mean values of the following functions:
(a) g(X_1, X_2, X_3) = X_1 + 3X_2 + 4X_3
(b) g(X_1, X_2, X_3) = X_1X_2X_3
(c) g(X_1, X_2, X_3) = −2X_1X_2 − 3X_1X_3 + 4X_2X_3
(d) g(X_1, X_2, X_3) = X_1 + X_2 + X_3
5.1-6. Find the mean value of the function

g(X, Y) = X² + Y²

where X and Y are random variables defined by the density function

f_{X,Y}(x, y) = [1/(2πσ²)] e^{−(x² + y²)/(2σ²)}

with σ² a constant.
Random variables X and Y are related by

Y = aX + b

where a and b are any real numbers.
(a) Show that their correlation coefficient is

ρ = +1 if a > 0, for any b
  = −1 if a < 0, for any b

(b) Show that their covariance is

C_XY = a σ_X²

where σ_X² is the variance of X.
Show that the correlation coefficient of two random variables X and Y can be written as

ρ = μ_{11}/√(μ_{20} μ_{02})
5.1-12. Find all the second-order moments and central moments for the density function given in Problem 5.1-3.
5.1-13. Find all the third-order moments by using (5.1-5) for X and Y defined in Problem 5.1-12.
For discrete random variables X and Y, the joint central moments are

μ_{nk} = Σ_{i=1}^{N} Σ_{j=1}^{M} (x_i − X̄)^n (y_j − Ȳ)^k P(x_i, y_j)

where P(x_i, y_j) = P{X = x_i, Y = y_j}, X has N possible values x_i, and Y has M possible values y_j.
Find: (a) the correlation, (b) the covariance, and (c) the correlation coefficient of X and Y. (d) Are X and Y either uncorrelated or orthogonal?
5.1-16. Discrete random variables X and Y have the joint density

f_{X,Y}(x, y) = 1/9   0 < x < 2 and 0 < y < 3
             = 0     elsewhere
5.1-20. Random variables V and W are defined by

V = X + aY
W = X − aY

where a is a real number and X and Y are random variables. Determine a in terms of moments of X and Y such that V and W are orthogonal.
*5.1-21. If X and Y in Problem 5.1-20 are gaussian, show that W and V are statistically independent if a² = σ_X²/σ_Y², where σ_X² and σ_Y² are the variances of X and Y, respectively.

*5.1-22. Three uncorrelated random variables X_1, X_2, and X_3 have means X̄_1 = 1, X̄_2, and X̄_3 and variances σ_{X_1}², σ_{X_2}², and σ_{X_3}². A random variable Y is defined in terms of X_1, X_2, and X_3. Find (a) the mean value and (b) the variance of Y.
5.1-23. Given W = (aX + 3Y)², where X and Y are zero-mean random variables with variances σ_X² = 4 and σ_Y² = 16. Their correlation coefficient is ρ = −0.5.
(a) Find a value for the parameter a that minimizes the mean value of W.
(b) Find the minimum mean value.
5.1-24. Two random variables have a uniform density on the circular region defined by x² + y² ≤ r². Find the mean value of the function g(X, Y) = X² + Y².
*5.1-25. Define the conditional expected value of a function g(X, Y) of random variables X and Y as

E[g(X, Y)|B] = ∫_{-∞}^{∞}∫_{-∞}^{∞} g(x, y) f_{X,Y}(x, y|B) dx dy

(a) If event B is defined as B = {y_a < Y ≤ y_b}, where y_a < y_b are constants, evaluate E[g(X, Y)|B]. (Hint: Use the results of Problem 4.4-8.)
(b) If B is defined by B = {Y = y}, what does the conditional expected value of part (a) become?
5.1-26. For random variables X and Y having X̄ = 1, Ȳ = −2, σ_X² = 6, σ_Y² = 9, and ρ = −1/2, find (a) the covariance of X and Y, (b) the correlation of X and Y, and (c) the moments m_{20} and m_{02}.
5.1-32. Random variables X and Y are defined by the joint density of Problem 4.3-19. Find all first- and second-order joint moments for these random variables. Are X and Y uncorrelated? Are they orthogonal?
5.1-37. The cosine inequality, sometimes called the Schwarz inequality, for random variables X and Y is

[E(XY)]² ≤ E(X²)E(Y²)

Show its validity. (Hint: Expand the nonnegative quantity E[(aX − Y)²], where a is a real parameter.)
The triangle inequality for random variables X and Y is

{E[(X + Y)²]}^{1/2} ≤ [E(X²)]^{1/2} + [E(Y²)]^{1/2}

Show its validity. (Hint: Expand and combine E[(X + Y)²] and {[E(X²)]^{1/2} + [E(Y²)]^{1/2}}², and use the cosine inequality of Problem 5.1-37.)
*5.2-1. Find the joint characteristic function for X and Y defined in Problem 5.1-3.
*5.2-2. Show that the joint characteristic function of N independent random variables X_i, having characteristic functions Φ_{X_i}(ω_i), is

Φ_{X_1,...,X_N}(ω_1, ..., ω_N) = Π_{i=1}^{N} Φ_{X_i}(ω_i)
*5.2-4. For two zero-mean gaussian random variables X and Y, show that their joint characteristic function is

Φ_{X,Y}(ω_1, ω_2) = exp{ −(1/2)[σ_X²ω_1² + 2ρσ_Xσ_Yω_1ω_2 + σ_Y²ω_2²] }

Use the result to find the marginal characteristic functions of X and Y.
*5.2-6. Random variables X_1 and X_2 have a joint characteristic function that depends on an integer N > 0.
(a) Find the correlation and the moments m_{20} and m_{02}.
(b) Determine the means of X_1 and X_2.
(c) What is the correlation coefficient?
*5.2-7. The joint probability density of two discrete random variables X and Y consists of impulses located at all lattice points (mb, nd), where m = 0, 1, ..., M and n = 1, 2, ..., N, with b > 0 and d > 0 being constants. All possible points are equally probable. Determine the joint characteristic function.
*5.2-8. Let X_k, k = 1, 2, ..., K, be statistically independent Poisson random variables, each with its own variance b_k (Problem 3.2-13). Show that the sum X = X_1 + X_2 + ··· + X_K is a Poisson random variable. (Hint: Use the results of Problems 5.2-2 and 3.2-31.)
*5.2-9. Show that the sum X of N statistically independent Poisson random variables X_i, with different means b_i, is also a Poisson random variable, but its mean is b = b_1 + b_2 + ··· + b_N. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]
*5.2-10. Show that the sum of N identically distributed statistically independent exponential random variables X_i, as given by (2.5-9) with a = 0 and b replaced by 1/a, is an Erlang random variable, as defined in Problem 3.2-32 and in Appendix F. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]
*5.2-11. The chi-square random variable with one degree of freedom is defined by the density

f_X(x) = u(x) e^{−x/2} / [Γ(1/2)√(2x)]

where Γ(1/2) is a constant approximately equal to 1.772. Show that the sum X of N identically distributed statistically independent chi-square random variables, each with one degree of freedom, is a chi-square random variable with N degrees of freedom as defined in Problem 3.2-27 and Appendix F [see (F-35) through (F-39)]. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]
*5.3-1. Zero-mean gaussian random variables X and Y have variances σ_X² = 3 and σ_Y² = 4, respectively, and a correlation coefficient ρ.
(a) Write an expression for the joint density function.
(b) Show that a rotation of coordinates through the angle given by (5.3-11) will produce new statistically independent random variables.

*5.3-2. Find the conditional density functions f_X(x|Y = y) and f_Y(y|X = x) applicable to two gaussian random variables X and Y defined by (5.3-1), and show that they are also gaussian.
5.3-3. Assume σ_X = σ_Y = σ in (5.3-1) and show that the locus of the maximum of the joint density is a line passing through the point (X̄, Ȳ) at an angle of π/4 (or −π/4) when ρ = 1 (or −1).
5.3-4. Two gaussian random variables X and Y have variances σ_X² = 9 and σ_Y² = 4, respectively, and correlation coefficient ρ. It is known that a coordinate rotation by an angle −π/8 results in new random variables Y_1 and Y_2 that are uncorrelated. What is ρ?
*5.3-5. Let X and Y be jointly gaussian random variables with σ_X² = σ_Y². Find a transformation matrix such that the new random variables Y_1 and Y_2 are statistically independent.
5.3-6. Gaussian random variables X and Y have first- and second-order moments X̄ = −1.0, X̄² = 1.16, Ȳ = 1.5, Ȳ² = 2.89, and R_XY = −1.724. Find (a) C_XY and (b) ρ. Also find the angle θ of a coordinate rotation that will generate new random variables that are statistically independent.
5.3-7. Suppose the annual snowfalls (accumulated depths in meters) for two nearby alpine ski resorts are adequately represented by jointly gaussian random variables X and Y for which ρ = 0.82, σ_X = 1.5 m, σ_Y = 1.2 m, and R_XY = 81.476 m². If the average snowfall at one resort is 10 m, what is the average at the other resort?
5.3-8. Two gaussian random variables X and Y have a correlation coefficient ρ = 0.25. The standard deviation of X is 1.9. A linear transformation (coordinate rotation of π/6) is known to transform X and Y to new random variables that are statistically independent. What is σ_Y²?
*5.3-9. Gaussian random variables X_1 and X_2, for which X̄_1 = 2, σ_{X_1}² = 9, X̄_2 = −1, σ_{X_2}² = 4, and C_{X_1X_2} = −3, are transformed to new random variables Y_1 and Y_2 according to

Y_1 = −X_1 + X_2
Y_2 = −2X_1 − 3X_2

Find (a) Ȳ_1, (b) Ȳ_2, (c) R_{Y_1Y_2}, (d) σ_{Y_1}², (e) σ_{Y_2}², and (f) C_{Y_1Y_2}.
*5.4-1. Two random variables X_1 and X_2 undergo a transformation [Y] = [T][X], with a given matrix [T], to generate new random variables Y_1 and Y_2.
(a) Find the joint density of Y_1 and Y_2.
(b) Show what points in the y_1-y_2 plane correspond to a nonzero value of the new density.
*5.4-2. Three random variables X_1, X_2, and X_3 represent samples of a random noise voltage taken at three times. Their covariance matrix [C_X] is given. A transformation matrix [T] converts the variables to new random variables Y_1, Y_2, and Y_3. Find the covariance matrix of the new random variables.
*5.4-3. Determine the density of Y = (X_1² + X_2²)^{1/2} when X_1 and X_2 are jointly gaussian random variables with zero means and the same variance. (Hint: Use the results of Example 5.4-2.)
*5.5-1. Zero-mean gaussian random variables X_1, X_2, and X_3 having the covariance matrix

[C_X] = [ 4     2.05  1.05 ]
        [ 2.05  4     2.05 ]
        [ 1.05  2.05  4    ]

are transformed to new variables [Y] = [T][X] by a given transformation matrix [T]. Find (a) the matrix [Ȳ], (b) the covariance matrix [C_Y], and (c) the correlation coefficient of Y_1 and Y_2.
*5.6-5. Repeat Example 5.6-1 except use N = 1000 values of X_1 and 1000 values of X_2. Note the improvement in the accuracy of the estimated quantities for random variables W_1 and W_2 as compared to the example.
5.7-1. Find the mean value of the power estimator of (5.7-7) and give arguments why
the estimator is unbiased.
5.7-2. Find the variance of the power estimator of (5.7-7) and show that it approaches zero as N becomes infinite.
5.7-3. If the factor 1/(N − 1) in the variance estimator of (5.7-8) is replaced by 1/N, show that the mean of the modified estimator is biased. Determine the amount of bias. How does the bias behave as N becomes very large?
*5.8-1. A complex random variable Z is defined by

Z = cos(X) + j sin(Y)

where X and Y are independent real random variables uniformly distributed from −π to π.
(a) Find the mean value of Z.
(b) Find the variance of Z.
*5.8-2. Complex random variables Z_1 and Z_2 have zero means. The correlation of the real parts of Z_1 and Z_2 is 4, while the correlation of the imaginary parts is 6. The real part of Z_1 and the imaginary part of Z_2 are statistically independent as a pair, as are the imaginary part of Z_1 and the real part of Z_2.
(a) What is the correlation of Z_1 and Z_2?
(b) Are Z_1 and Z_2 statistically independent?