
CHAPTER 5

Operations on Multiple Random Variables

5.0
INTRODUCTION

After establishing some of the basic theory of several random variables in the previous chapter, it is appropriate to now extend the operations described in Chapter 3 to include multiple random variables. This chapter is dedicated to these extensions. Mainly, the concept of expectation is enlarged to include two or more random variables. Other operations involving moments, characteristic functions, and transformations are all special applications of expectation.

5.1
EXPECTED VALUE OF A FUNCTION OF RANDOM VARIABLES

When more than a single random variable is involved, expectation must be taken with respect to all the variables involved. For example, if g(X, Y) is some function of two random variables X and Y, the expected value of g(·,·) is given by

\bar{g} = E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\, f_{X,Y}(x, y)\,dx\,dy    (5.1-1)

This expression is the two-variable extension of (3.1-6).


For N random variables X_1, X_2, ..., X_N and some function of these variables, denoted g(X_1, ..., X_N), the expected value of the function becomes

\bar{g} = E[g(X_1, \ldots, X_N)] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_N)\, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N    (5.1-2)

Thus, expectation in general involves an N-fold integration when N random variables are involved. It should be clear to the reader from (5.1-2) that the expected value of a sum of functions is equal to the sum of the expected values of the individual functions.
We illustrate the application of (5.1-2) with an example that will develop an important point.

EXAMPLE 5.1-1. We shall find the mean (expected) value of a sum of N weighted random variables. If we let

g(X_1, \ldots, X_N) = \sum_{i=1}^{N} \alpha_i X_i

where the "weights" are the constants \alpha_i, the mean value of the weighted sum becomes

E[g(X_1, \ldots, X_N)] = E\left[\sum_{i=1}^{N} \alpha_i X_i\right]
  = \sum_{i=1}^{N} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \alpha_i x_i\, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N

from (5.1-2). After using (4.3-8), the terms in the sum all reduce to the form

\int_{-\infty}^{\infty} \alpha_i x_i\, f_{X_i}(x_i)\,dx_i = E[\alpha_i X_i] = \alpha_i E[X_i]

so

E\left[\sum_{i=1}^{N} \alpha_i X_i\right] = \sum_{i=1}^{N} \alpha_i E[X_i]

which says that the mean value of a weighted sum of random variables equals the weighted sum of mean values.

The above extensions (5.1-1) and (5.1-2) of expectation do not invalidate any of our single random variable results. For example, let

g(X_1, \ldots, X_N) = g(X_1)    (5.1-3)

and substitute into (5.1-2). After integrating with respect to all random variables except X_1, (5.1-2) becomes

\bar{g} = E[g(X_1)] = \int_{-\infty}^{\infty} g(x_1)\, f_{X_1}(x_1)\,dx_1    (5.1-4)

which is the same as previously given in (3.1-6) for one random variable. Some reflection on the reader's part will verify that (5.1-4) also validates such earlier topics as moments, central moments, characteristic function, etc., for a single random variable.

Joint Moments about the Origin

One important application of (5.1-1) is in defining joint moments about the origin. They are denoted by m_{nk} and are defined by

m_{nk} = E[X^n Y^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^n y^k f_{X,Y}(x, y)\,dx\,dy    (5.1-5)

for the case of two random variables X and Y. Clearly m_{n0} = E[X^n] are the moments m_n of X, while m_{0k} = E[Y^k] are the moments of Y. The sum n + k is called the order of the moments. Thus m_{02}, m_{20}, and m_{11} are all second-order moments of X and Y. The first-order moments m_{01} = E[Y] = \bar{Y} and m_{10} = E[X] = \bar{X} are the expected values of Y and X, respectively, and are the coordinates of the "center of gravity" of the function f_{X,Y}(x, y).
The second-order moment m_{11} = E[XY] is called the correlation of X and Y. It is so important to later work that we give it the symbol R_{XY}. Hence,

R_{XY} = m_{11} = E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{X,Y}(x, y)\,dx\,dy    (5.1-6)

If the correlation can be written in the form

R_{XY} = E[X]E[Y]    (5.1-7)

then X and Y are said to be uncorrelated. Statistical independence of X and Y is sufficient to guarantee they are uncorrelated, as is readily proven by (5.1-6) using (4.5-4). The converse of this statement, that is, that X and Y are independent if X and Y are uncorrelated, is not necessarily true in general.†
If

R_{XY} = 0    (5.1-8)

for two random variables X and Y, they are called orthogonal.
A simple example is next developed that illustrates the important new topic of correlation.

†Uncorrelated gaussian random variables are, however, known to also be independent (see Section 5.3).

EXAMPLE 5.1-2. Let X be a random variable that has a mean value \bar{X} = E[X] = 3 and variance \sigma_X^2 = 2. From (3.2-6) we easily determine the second moment of X about the origin: E[X^2] = m_2 = \sigma_X^2 + \bar{X}^2 = 11. Next, let another random variable Y be defined by

Y = -6X + 22

The mean value of Y is \bar{Y} = E[Y] = E[-6X + 22] = -6\bar{X} + 22 = 4. The correlation of X and Y is found from (5.1-6):

R_{XY} = m_{11} = E[XY] = E[-6X^2 + 22X] = -6E[X^2] + 22\bar{X}
       = -6(11) + 22(3) = 0

Since R_{XY} = 0, X and Y are orthogonal from (5.1-8). On the other hand, R_{XY} \neq E[X]E[Y] = 12, so X and Y are not uncorrelated [see (5.1-7)].
We note that two random variables can be orthogonal even though correlated when one, Y, is related to the other, X, by the linear function Y = aX + b. It can be shown that X and Y are always correlated if |a| \neq 0, regardless of the value of b (see Problem 5.1-9). They are uncorrelated if a = 0, but this is not a case of much practical interest. Orthogonality can likewise be shown to occur when a and b are related by b = -aE[X^2]/E[X] whenever \bar{X} \neq 0. If \bar{X} = 0, X and Y cannot be orthogonal for any value of a except a = 0, a noninteresting problem. The reader may wish to verify these statements as an exercise.
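The orthogonal-but-correlated outcome of this example is easy to check numerically. The following MATLAB sketch (added here; the gaussian form chosen for X is an assumption made only for the simulation) estimates R_XY and C_XY from samples:

% Hypothetical simulation check of Example 5.1-2 (orthogonal but correlated)
N = 1e6;
X = 3 + sqrt(2)*randn(1,N);          % any X with mean 3 and variance 2 will do
Y = -6*X + 22;
Rxy = mean(X.*Y)                     % correlation; should be near 0 (orthogonal)
Cxy = mean(X.*Y) - mean(X)*mean(Y)   % covariance; near -12, so X and Y are correlated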
For N random variables X_1, X_2, ..., X_N, the (n_1 + n_2 + \cdots + n_N)-order joint moments are defined by

m_{n_1 n_2 \cdots n_N} = E[X_1^{n_1} X_2^{n_2} \cdots X_N^{n_N}]
  = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_1^{n_1} \cdots x_N^{n_N} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N    (5.1-9)

where n_1, n_2, ..., n_N are all integers 0, 1, 2, ....

Joint Central Moments

Another important application of (5.1-1) is in defining joint central moments. For two random variables X and Y, these moments, denoted by \mu_{nk}, are given by

\mu_{nk} = E[(X - \bar{X})^n (Y - \bar{Y})^k]
  = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \bar{X})^n (y - \bar{Y})^k f_{X,Y}(x, y)\,dx\,dy    (5.1-10)

The second-order central moments

\mu_{20} = E[(X - \bar{X})^2] = \sigma_X^2    (5.1-11)

\mu_{02} = E[(Y - \bar{Y})^2] = \sigma_Y^2    (5.1-12)

are just the variances of X and Y.
The second-order joint moment \mu_{11} is very important. It is called the covariance of X and Y and is given the symbol C_{XY}. Hence,

C_{XY} = \mu_{11} = E[(X - \bar{X})(Y - \bar{Y})]
  = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \bar{X})(y - \bar{Y})\, f_{X,Y}(x, y)\,dx\,dy    (5.1-13)
By direct expansion of the product (x - \bar{X})(y - \bar{Y}), this integral reduces to the form

C_{XY} = R_{XY} - \bar{X}\bar{Y} = R_{XY} - E[X]E[Y]    (5.1-14)

when (5.1-6) is used. If X and Y are either independent or uncorrelated, then (5.1-7) applies and (5.1-14) shows their covariance is zero:

C_{XY} = 0    X and Y independent or uncorrelated    (5.1-15)

If X and Y are orthogonal random variables, then

C_{XY} = -E[X]E[Y]    X and Y orthogonal    (5.1-16)

from use of (5.1-8) with (5.1-14). Clearly, C_{XY} = 0 if either X or Y also has zero mean value.
The normalized second-order moment

\rho = \mu_{11}/\sqrt{\mu_{20}\mu_{02}} = C_{XY}/\sigma_X\sigma_Y    (5.1-17a)

given by

\rho = E\left[\left(\frac{X - \bar{X}}{\sigma_X}\right)\left(\frac{Y - \bar{Y}}{\sigma_Y}\right)\right]    (5.1-17b)

is known as the correlation coefficient of X and Y. It can be shown (see Problem 5.1-10) that

-1 \le \rho \le 1    (5.1-18)


For N random variables X_1, X_2, ..., X_N, the (n_1 + n_2 + \cdots + n_N)-order joint central moment is defined by

\mu_{n_1 n_2 \cdots n_N} = E[(X_1 - \bar{X}_1)^{n_1}(X_2 - \bar{X}_2)^{n_2} \cdots (X_N - \bar{X}_N)^{n_N}]
  = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} (x_1 - \bar{X}_1)^{n_1} \cdots (x_N - \bar{X}_N)^{n_N} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N    (5.1-19)

An example is next developed that involves the use of covariances.

EXAMPLE 5.1-3. Again let X be a weighted sum of N random variables X_i; that is, let

X = \sum_{i=1}^{N} \alpha_i X_i

where the \alpha_i are real weighting constants. The variance of X will be found. From Example 5.1-1,

E[X] = \bar{X} = \sum_{i=1}^{N} \alpha_i E[X_i] = \sum_{i=1}^{N} \alpha_i \bar{X}_i

so we have

X - \bar{X} = \sum_{i=1}^{N} \alpha_i (X_i - \bar{X}_i)

and

\sigma_X^2 = E[(X - \bar{X})^2] = E\left[\sum_{i=1}^{N} \alpha_i (X_i - \bar{X}_i) \sum_{j=1}^{N} \alpha_j (X_j - \bar{X}_j)\right]
  = \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j E[(X_i - \bar{X}_i)(X_j - \bar{X}_j)] = \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j C_{X_i X_j}

Thus, the variance of a weighted sum of N random variables X_i (weights \alpha_i) equals the weighted sum of all their covariances C_{X_i X_j} (weights \alpha_i \alpha_j).
For the special case of uncorrelated random variables, where

C_{X_i X_j} = \begin{cases} \sigma_{X_i}^2 & i = j \\ 0 & i \neq j \end{cases}

is true, we get

\sigma_X^2 = \sum_{i=1}^{N} \alpha_i^2 \sigma_{X_i}^2

In words: the variance of a weighted sum of uncorrelated random variables (weights \alpha_i) equals the weighted sum of the variances of the random variables (weights \alpha_i^2).
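As an added illustration (not part of the original text), the MATLAB lines below check the weighted-sum variance result by simulation for N = 2 correlated variables; the weights and covariance used are assumed values chosen only for the demonstration:

% Hypothetical check of Example 5.1-3 with N = 2 correlated variables
N  = 1e6;
a  = [2 -1];                              % assumed weights
X1 = randn(1,N);                          % unit-variance source
X2 = 0.5*X1 + sqrt(1-0.25)*randn(1,N);    % unit variance, C_{X1X2} = 0.5
X  = a(1)*X1 + a(2)*X2;
var_sim    = var(X)                       % simulated variance of the weighted sum
C          = cov(X1,X2);                  % 2x2 sample covariance matrix
var_theory = a*C*a'                       % sum over i,j of a_i a_j C_{XiXj}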

5.2
JOINT CHARACTERISTIC FUNCTIONS

The joint characteristic function of two random variables X and Y is defined by

\Phi_{X,Y}(\omega_1, \omega_2) = E[e^{j\omega_1 X + j\omega_2 Y}]    (5.2-1)

where \omega_1 and \omega_2 are real numbers. An equivalent form is

\Phi_{X,Y}(\omega_1, \omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\, e^{j\omega_1 x + j\omega_2 y}\,dx\,dy    (5.2-2)

This expression is recognized as the two-dimensional Fourier transform (with signs of \omega_1 and \omega_2 reversed) of the joint density function. From the inverse Fourier transform we also have

f_{X,Y}(x, y) = \frac{1}{(2\pi)^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Phi_{X,Y}(\omega_1, \omega_2)\, e^{-j\omega_1 x - j\omega_2 y}\,d\omega_1\,d\omega_2    (5.2-3)

By setting either \omega_2 = 0 or \omega_1 = 0 in (5.2-2), the characteristic functions of X or Y are obtained. They are called marginal characteristic functions:

\Phi_X(\omega_1) = \Phi_{X,Y}(\omega_1, 0)    (5.2-4)

\Phi_Y(\omega_2) = \Phi_{X,Y}(0, \omega_2)    (5.2-5)

Joint moments m_{nk} can be found from the joint characteristic function as follows:

m_{nk} = (-j)^{n+k} \left.\frac{\partial^{n+k} \Phi_{X,Y}(\omega_1, \omega_2)}{\partial \omega_1^n\, \partial \omega_2^k}\right|_{\omega_1 = 0,\, \omega_2 = 0}    (5.2-6)

This expression is the two-dimensional extension of (3.3-4).

EXAMPLE 5.2-1. Two random variables X and Y have the joint characteristic function

\Phi_{X,Y}(\omega_1, \omega_2) = \exp(-2\omega_1^2 - 8\omega_2^2)

We show that X and Y are both zero-mean random variables and that they are uncorrelated.
The means derive from (5.2-6):

\bar{X} = E[X] = m_{10} = -j\left.\frac{\partial \Phi_{X,Y}(\omega_1, \omega_2)}{\partial \omega_1}\right|_{\omega_1=0,\,\omega_2=0}
  = -j(-4\omega_1)\exp(-2\omega_1^2 - 8\omega_2^2)\big|_{\omega_1=0,\,\omega_2=0} = 0

\bar{Y} = E[Y] = m_{01} = -j(-16\omega_2)\exp(-2\omega_1^2 - 8\omega_2^2)\big|_{\omega_1=0,\,\omega_2=0} = 0

Also from (5.2-6):

R_{XY} = E[XY] = m_{11} = (-j)^2 \left.\frac{\partial^2}{\partial \omega_1\,\partial \omega_2}\left[\exp(-2\omega_1^2 - 8\omega_2^2)\right]\right|_{\omega_1=0,\,\omega_2=0}
  = -(-4\omega_1)(-16\omega_2)\exp(-2\omega_1^2 - 8\omega_2^2)\big|_{\omega_1=0,\,\omega_2=0} = 0

Since the means are zero, C_{XY} = R_{XY} from (5.1-14). Therefore, C_{XY} = 0 and X and Y are uncorrelated.
The joint characteristic function for N random variables X_1, X_2, ..., X_N is defined by

\Phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N) = E[e^{j\omega_1 X_1 + \cdots + j\omega_N X_N}]    (5.2-7)

Joint moments are obtained from

m_{n_1 n_2 \cdots n_N} = (-j)^R \left.\frac{\partial^R\, \Phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N)}{\partial \omega_1^{n_1}\, \partial \omega_2^{n_2} \cdots \partial \omega_N^{n_N}}\right|_{\text{all } \omega_i = 0}    (5.2-8)

where

R = n_1 + n_2 + \cdots + n_N    (5.2-9)

The joint characteristic function is especially useful in certain practical problems where the probability density function is needed for the sum of N statistically independent random variables. We use an example to show how the desired probability density is found.

EXAMPLE 5.2-2. Let Y = X_1 + X_2 + \cdots + X_N be the sum of N statistically independent random variables X_i, i = 1, 2, ..., N. Denote their probability densities and characteristic functions, respectively, by f_{X_i}(x_i) and \Phi_{X_i}(\omega_i). Because of independence the joint probability density is the product of all the individual densities, and (5.2-7) can be written as

\Phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \left[\prod_{i=1}^{N} f_{X_i}(x_i)\, e^{j\omega_i x_i}\right] dx_1 \cdots dx_N
  = \prod_{i=1}^{N} \int_{-\infty}^{\infty} f_{X_i}(x_i)\, e^{j\omega_i x_i}\,dx_i = \prod_{i=1}^{N} \Phi_{X_i}(\omega_i)

Next, we write the characteristic function of Y using (3.3-1) and note that it is the same as (5.2-7) with \omega_i = \omega, all i. Hence,

\Phi_Y(\omega) = E[e^{j\omega Y}] = E[e^{j\omega(X_1 + \cdots + X_N)}] = \Phi_{X_1,\ldots,X_N}(\omega, \ldots, \omega) = \prod_{i=1}^{N} \Phi_{X_i}(\omega)

Finally, we use (3.3-3) to obtain the desired density of Y:

f_Y(y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left[\prod_{i=1}^{N} \Phi_{X_i}(\omega)\right] e^{-j\omega y}\,d\omega

In the special case where the X_i are identically distributed such that \Phi_{X_i}(\omega) = \Phi_X(\omega), all i, our result reduces to

f_Y(y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} [\Phi_X(\omega)]^N e^{-j\omega y}\,d\omega
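An added numerical sketch of this result: because the characteristic function of the sum is the product of the individual characteristic functions, the density of the sum of independent variables is the convolution of their densities. The MATLAB lines below illustrate this for an assumed sum of two independent random variables uniform on (0, 1), whose sum has the triangular density on (0, 2):

% Hypothetical illustration: density of Y = X1 + X2, X1 and X2 independent
% and uniform on (0,1); the exact result is the triangular density on (0,2).
dx = 0.001;
x  = 0:dx:1;
fx = ones(size(x));            % uniform density on (0,1)
fy = conv(fx, fx)*dx;          % convolution gives the density of the sum
y  = (0:length(fy)-1)*dx;      % support of the sum
plot(y, fy)                    % compare with the triangle peaking at y = 1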

5.3
JOINTLY GAUSSIAN RANDOM VARIABLES

Gaussian random variables are very important because they show up in nearly every area of science and engineering. In this section, the case of two gaussian random variables is first examined. The more advanced case of N random variables is then introduced.

Two Random Variables

Two random variables X and Y are said to be jointly gaussian if their joint density function is of the form

f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}} \exp\left\{\frac{-1}{2(1-\rho^2)}\left[\frac{(x-\bar{X})^2}{\sigma_X^2} - \frac{2\rho(x-\bar{X})(y-\bar{Y})}{\sigma_X\sigma_Y} + \frac{(y-\bar{Y})^2}{\sigma_Y^2}\right]\right\}    (5.3-1)

which is sometimes called the bivariate gaussian density. Here

\bar{X} = E[X]    (5.3-2)
\bar{Y} = E[Y]    (5.3-3)
\sigma_X^2 = E[(X - \bar{X})^2]    (5.3-4)
\sigma_Y^2 = E[(Y - \bar{Y})^2]    (5.3-5)
\rho = E[(X - \bar{X})(Y - \bar{Y})]/\sigma_X\sigma_Y    (5.3-6)

Figure 5.3-1a illustrates the appearance of the joint gaussian density function (5.3-1). Its maximum is located at the point (\bar{X}, \bar{Y}). The maximum value is obtained from

f_{X,Y}(x, y) \le f_{X,Y}(\bar{X}, \bar{Y}) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}    (5.3-7)

The locus of constant values of f_{X,Y}(x, y) will be an ellipse,† as shown in Figure 5.3-1b. This is equivalent to saying that the line of intersection formed by slicing the function f_{X,Y}(x, y) with a plane parallel to the xy plane is an ellipse.
Observe that if \rho = 0, corresponding to uncorrelated X and Y, (5.3-1) can be written as

f_{X,Y}(x, y) = f_X(x)\, f_Y(y)    (5.3-8)

where f_X(x) and f_Y(y) are the marginal density functions of X and Y given by

f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\left[\frac{-(x-\bar{X})^2}{2\sigma_X^2}\right]    (5.3-9)

f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y} \exp\left[\frac{-(y-\bar{Y})^2}{2\sigma_Y^2}\right]    (5.3-10)

Now the form of (5.3-8) is sufficient to guarantee that X and Y are statistically independent. Therefore, we conclude that any uncorrelated gaussian random variables are also statistically independent.

†When \sigma_X = \sigma_Y and \rho = 0 the ellipses degenerate into circles; when \rho = +1 or -1 the ellipses degenerate into axes rotated by angles \pi/4 and -\pi/4, respectively, that pass through the point (\bar{X}, \bar{Y}).
FIGURE 5.3-1
Sketch of the joint density function of two gaussian random variables.

It results that a coordinate rotation (linear transformation of X and Y) through an angle

\theta = \frac{1}{2}\tan^{-1}\left[\frac{2\rho\sigma_X\sigma_Y}{\sigma_X^2 - \sigma_Y^2}\right]    (5.3-11)

is sufficient to convert correlated random variables X and Y, having variances \sigma_X^2 and \sigma_Y^2, respectively, correlation coefficient \rho, and the joint density of (5.3-1), into two statistically independent gaussian random variables.†
By direct application of (4.4-12) and (4.4-13), the conditional density functions f_X(x|Y = y) and f_Y(y|X = x) can be found from the above expressions (see Problem 5.3-2).

†Wozencraft and Jacobs (1965), p. 155.


EXAMPLE 5.3-1. We show by example that (5.3-11) applies to arbitrary as well as gaussian random variables. Consider random variables Y_1 and Y_2 related to arbitrary random variables X and Y by the coordinate rotation

Y_1 = X\cos(\theta) + Y\sin(\theta)
Y_2 = -X\sin(\theta) + Y\cos(\theta)

If \bar{X} and \bar{Y} are the means of X and Y, respectively, the means of Y_1 and Y_2 are clearly \bar{Y}_1 = \bar{X}\cos(\theta) + \bar{Y}\sin(\theta) and \bar{Y}_2 = -\bar{X}\sin(\theta) + \bar{Y}\cos(\theta), respectively. The covariance of Y_1 and Y_2 is

C_{Y_1 Y_2} = E[(Y_1 - \bar{Y}_1)(Y_2 - \bar{Y}_2)]
  = E[\{(X - \bar{X})\cos(\theta) + (Y - \bar{Y})\sin(\theta)\}\{-(X - \bar{X})\sin(\theta) + (Y - \bar{Y})\cos(\theta)\}]
  = (\sigma_Y^2 - \sigma_X^2)\sin(\theta)\cos(\theta) + C_{XY}[\cos^2(\theta) - \sin^2(\theta)]
  = \frac{(\sigma_Y^2 - \sigma_X^2)}{2}\sin(2\theta) + C_{XY}\cos(2\theta)

Here C_{XY} = E[(X - \bar{X})(Y - \bar{Y})] = \rho\sigma_X\sigma_Y. If we require Y_1 and Y_2 to be uncorrelated, we must have C_{Y_1 Y_2} = 0. By equating the above expression to zero we obtain (5.3-11). Thus, (5.3-11) applies to arbitrary as well as gaussian random variables.
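A brief MATLAB sketch (added here, not in the original) that numerically confirms the decorrelating rotation of Example 5.3-1 for assumed values of the variances and correlation coefficient:

% Hypothetical check: rotation by the angle of (5.3-11) decorrelates X and Y
N = 1e6; sx = 2; sy = 1; rho = 0.6;                 % assumed parameters
X  = sx*randn(1,N);
Y  = rho*(sy/sx)*X + sy*sqrt(1-rho^2)*randn(1,N);   % correlated pair
th = 0.5*atan2(2*rho*sx*sy, sx^2 - sy^2);           % four-quadrant form of (5.3-11)
Y1 =  X*cos(th) + Y*sin(th);
Y2 = -X*sin(th) + Y*cos(th);
C  = cov(Y1, Y2);
C(1,2)                                              % off-diagonal term, near zero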

*N Random Variables

N random variables X_1, X_2, ..., X_N are called jointly gaussian if their joint density function can be written as†

f_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = \frac{|[C_X]^{-1}|^{1/2}}{(2\pi)^{N/2}} \exp\left\{-\frac{1}{2}[x - \bar{X}]^t [C_X]^{-1} [x - \bar{X}]\right\}    (5.3-12)

where we define the column matrix of the variables offset by their means,

[x - \bar{X}] = \begin{bmatrix} x_1 - \bar{X}_1 \\ \vdots \\ x_N - \bar{X}_N \end{bmatrix}    (5.3-13)

and the N x N covariance matrix

[C_X] = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1N} \\ C_{21} & C_{22} & \cdots & C_{2N} \\ \vdots & & & \vdots \\ C_{N1} & C_{N2} & \cdots & C_{NN} \end{bmatrix}    (5.3-14)

†We denote a matrix symbolically by use of heavy brackets [·].

We use the notation [·]^t for the matrix transpose, [·]^{-1} for the matrix inverse, and |[·]| for the determinant. Elements of [C_X], called the covariance matrix of the N random variables, are given by

C_{ij} = E[(X_i - \bar{X}_i)(X_j - \bar{X}_j)] = \begin{cases} \sigma_{X_i}^2 & i = j \\ C_{X_i X_j} & i \neq j \end{cases}    (5.3-15)

The density (5.3-12) is often called the N-variate gaussian density function. For the special case where N = 2, the covariance matrix becomes

[C_X] = \begin{bmatrix} \sigma_{X_1}^2 & \rho\sigma_{X_1}\sigma_{X_2} \\ \rho\sigma_{X_1}\sigma_{X_2} & \sigma_{X_2}^2 \end{bmatrix}    (5.3-16)

[C_X]^{-1} = \frac{1}{1 - \rho^2}\begin{bmatrix} 1/\sigma_{X_1}^2 & -\rho/\sigma_{X_1}\sigma_{X_2} \\ -\rho/\sigma_{X_1}\sigma_{X_2} & 1/\sigma_{X_2}^2 \end{bmatrix}    (5.3-17)

|[C_X]|^{-1} = \frac{1}{\sigma_{X_1}^2\sigma_{X_2}^2(1 - \rho^2)}    (5.3-18)

On substitution of (5.3-17) and (5.3-18) into (5.3-12), and letting X_1 = X and X_2 = Y, it is easy to verify that the bivariate density of (5.3-1) results.
*Some Properties of Gaussian Random Variables

We state without proof some of the properties exhibited by N jointly gaussian random variables X_1, ..., X_N.
1. Gaussian random variables are completely defined through only their first- and second-order moments; that is, by their means, variances, and covariances. This fact is readily apparent since only these quantities are needed to completely determine (5.3-12).
2. If the random variables are uncorrelated, they are also statistically independent. This property was given earlier for two variables.
3. Random variables produced by a linear transformation of X_1, ..., X_N will also be gaussian, as will be proven in Section 5.5.
4. Any k-dimensional (k-variate) marginal density function obtained from the N-dimensional density function (5.3-12) by integrating out N - k random variables will be gaussian. If the variables are ordered so that X_1, ..., X_k occur in the marginal density and X_{k+1}, ..., X_N are integrated out, then the covariance matrix of X_1, ..., X_k is equal to the leading k x k submatrix of the covariance matrix of X_1, ..., X_N (Wilks, 1962, p. 168).
5. The conditional density f_{X_1,\ldots,X_k}(x_1, \ldots, x_k | X_{k+1} = x_{k+1}, \ldots, X_N = x_N) is gaussian (Papoulis, 1965, p. 257). This holds for any k < N.
*5.4
TRANSFORMATIONS OF MULTIPLE RANDOM VARIABLES

The function g in either (5.1-1) or (5.1-2) can be considered a transformation involving more than one random variable. By defining a new variable Y = g(X_1, X_2, ..., X_N), we see that (5.1-2) is the expected value of Y. In calculating expected values it was not necessary to determine the density function of the new random variable Y. It may be, however, that the density function of Y is required in some practical problems, and its determination is briefly considered in this section. First we consider a single functional transformation of more than one random variable. Then we develop the case of several functions of several random variables.

*One Function

Here Y = g(X_1, X_2, ..., X_N). We seek to first define the probability distribution of Y and then the probability density. The distribution is F_Y(y) = P\{Y \le y\} = P\{g(X_1, X_2, \ldots, X_N) \le y\}. This probability is associated with all points in the (x_1, x_2, ..., x_N) hyperspace that map such that g(x_1, x_2, \ldots, x_N) \le y, for any y. Formally, we integrate over all such points according to

F_Y(y) = P\{g(X_1, X_2, \ldots, X_N) \le y\}
  = \int \cdots \int_{\{g(x_1,\ldots,x_N) \le y\}} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N    (5.4-1)

The density follows by differentiation:

f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{d}{dy}\int \cdots \int_{\{g(x_1,\ldots,x_N) \le y\}} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N    (5.4-2)

Perhaps the use of (5.4-1) and (5.4-2) is best demonstrated by example. We take two cases.

EXAMPLE 5.4-1. We find the density function for the ratio Y = g(X_1, X_2) = X_1/X_2 of two positive random variables X_1 and X_2. The event \{Y = X_1/X_2 \le y\} corresponds to points satisfying 0 < x_1/x_2 \le y in the x_1 x_2 plane, as shown shaded in Figure 5.4-1. Now the distribution of Y is F_Y(y) = P\{Y = X_1/X_2 \le y\}; this probability equals the integral of the joint density of X_1 and X_2 over the shaded area. On integrating, when using the horizontal strip as shown, we have (for y > 0)

F_Y(y) = P\{X_1/X_2 \le y\} = \int_0^{\infty}\int_0^{y x_2} f_{X_1,X_2}(x_1, x_2)\,dx_1\,dx_2

FIGURE 5.4-1
Regions in the x_1 x_2 plane applicable to Example 5.4-1.

The density results from differentiation according to (2.3-1) and Leibniz's rule:

f_Y(y) = \frac{dF_Y(y)}{dy} = \int_0^{\infty} x_2\, f_{X_1,X_2}(y x_2, x_2)\,dx_2

To progress further requires that a specific density be specified.

EXAMPLE 5.4-2. As a second example consider the function Y = (X_1^2 + X_2^2)^{1/2}, which is instructive because it involves using Leibniz's rule with a double integral. Here F_Y(y) = P\{Y = (X_1^2 + X_2^2)^{1/2} \le y\} is the probability of all points in the x_1 x_2 plane that fall on, and inside, a circle of radius y. It is

F_Y(y) = \int_{x_2=-y}^{y} \int_{x_1=-(y^2-x_2^2)^{1/2}}^{(y^2-x_2^2)^{1/2}} f_{X_1,X_2}(x_1, x_2)\,dx_1\,dx_2 = \int_{-y}^{y} I(y, x_2)\,dx_2    (1)

where we define

I(y, x_2) = \int_{-(y^2-x_2^2)^{1/2}}^{(y^2-x_2^2)^{1/2}} f_{X_1,X_2}(x_1, x_2)\,dx_1    (2)

From Leibniz's rule applied to the last form of (1):

f_Y(y) = \frac{dF_Y(y)}{dy} = I(y, y) + I(y, -y) + \int_{-y}^{y} \frac{\partial I(y, x_2)}{\partial y}\,dx_2    (3)

Direct use of (2) proves the first two right-side terms in (3) are zero. On applying Leibniz's rule to the last term in (3), we have

\frac{\partial I(y, x_2)}{\partial y} = \frac{y}{(y^2 - x_2^2)^{1/2}}\, f_{X_1,X_2}\!\left((y^2 - x_2^2)^{1/2}, x_2\right)
  + \frac{y}{(y^2 - x_2^2)^{1/2}}\, f_{X_1,X_2}\!\left(-(y^2 - x_2^2)^{1/2}, x_2\right)
  + \int_{-(y^2-x_2^2)^{1/2}}^{(y^2-x_2^2)^{1/2}} \frac{\partial f_{X_1,X_2}(x_1, x_2)}{\partial y}\,dx_1    (4)

The last term in (4) is zero since the joint density is not dependent on y. Finally, the result is

f_Y(y) = \int_{-y}^{y} \frac{y}{(y^2 - x_2^2)^{1/2}}\left[f_{X_1,X_2}\!\left((y^2 - x_2^2)^{1/2}, x_2\right) + f_{X_1,X_2}\!\left(-(y^2 - x_2^2)^{1/2}, x_2\right)\right] dx_2    (5)

This result is evaluated in Problem 5.4-3 for jointly gaussian X_1 and X_2.

*Multiple Functions

More generally, we are interested in finding the joint density function of a set of new random variables

Y_i = T_i(X_1, X_2, \ldots, X_N)    i = 1, 2, \ldots, N    (5.4-3)

defined by functional transformations T_i. Now all the possible cases described in Chapter 3 for one random variable carry over to the N-dimensional problem. That is, the X_i can be continuous, discrete, or mixed, while the functions T_i can be linear, nonlinear, continuous, segmented, etc. Because so many cases are possible, many of them being beyond our scope, we shall discuss only one representative problem.
We shall assume that the new random variables Y_i, given by (5.4-3), are produced by single-valued continuous functions T_i having continuous partial derivatives everywhere. It is further assumed that a set of inverse continuous functions T_j^{-1} exists such that the old variables may be expressed as single-valued continuous functions of the new variables:

X_j = T_j^{-1}(Y_1, Y_2, \ldots, Y_N)    j = 1, 2, \ldots, N    (5.4-4)

These assumptions mean that a point in the joint sample space of the X_i maps into only one point in the space of the new variables Y_i, and vice versa.
Let R_X be a closed region of points in the space of the X_i and R_Y be the corresponding region of mapped points in the space of the Y_i. Then the probability that a point falls in R_X will equal the probability that its mapped point falls in R_Y. These probabilities, in terms of joint densities, are given by
\int \cdots \int_{R_X} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N
  = \int \cdots \int_{R_Y} f_{Y_1,\ldots,Y_N}(y_1, \ldots, y_N)\,dy_1 \cdots dy_N    (5.4-5)

This equation may be solved for f_{Y_1,\ldots,Y_N}(y_1, \ldots, y_N) by treating it as simply a multiple integral involving a change of variables.
By working on the left side of (5.4-5) we change the variables x_j to new variables y_j by means of the variable changes (5.4-4). The integrand is changed by direct functional substitution. The limits change from the region R_X to the region R_Y. Finally, the differential hypervolume dx_1 \cdots dx_N will change to the value |J|\,dy_1 \cdots dy_N (Spiegel, 1963, p. 182), where |J| is the magnitude of the jacobian† J of the transformations. The jacobian is the determinant of a matrix of derivatives defined by

J = \begin{vmatrix} \partial T_1^{-1}/\partial y_1 & \cdots & \partial T_1^{-1}/\partial y_N \\ \vdots & & \vdots \\ \partial T_N^{-1}/\partial y_1 & \cdots & \partial T_N^{-1}/\partial y_N \end{vmatrix}    (5.4-6)

Thus, the left side of (5.4-5) becomes

\int \cdots \int_{R_X} f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,dx_1 \cdots dx_N
  = \int \cdots \int_{R_Y} f_{X_1,\ldots,X_N}\!\left(x_1 = T_1^{-1}, \ldots, x_N = T_N^{-1}\right)|J|\,dy_1 \cdots dy_N    (5.4-7)

Since this result must equal the right side of (5.4-5), we conclude that

f_{Y_1,\ldots,Y_N}(y_1, \ldots, y_N) = f_{X_1,\ldots,X_N}\!\left(x_1 = T_1^{-1}, \ldots, x_N = T_N^{-1}\right)|J|    (5.4-8)

When N = 1, (5.4-8) reduces to (3.4-9) previously derived for a single random variable.
The solution (5.4-8) for the joint density of the new variables Y_i is illustrated here with an example.

†After the German mathematician Karl Gustav Jakob Jacobi (1804-1851).

EXAMPLE 5.4-3. Let the transformations be linear and given by

Y_1 = T_1(X_1, X_2) = aX_1 + bX_2
Y_2 = T_2(X_1, X_2) = cX_1 + dX_2

where a, b, c, and d are real constants. The inverse functions are easy to obtain by solving these two equations for the two variables X_1 and X_2:

X_1 = T_1^{-1}(Y_1, Y_2) = (dY_1 - bY_2)/(ad - bc)
X_2 = T_2^{-1}(Y_1, Y_2) = (-cY_1 + aY_2)/(ad - bc)

where we shall assume (ad - bc) \neq 0. From (5.4-6):

J = \begin{vmatrix} d/(ad - bc) & -b/(ad - bc) \\ -c/(ad - bc) & a/(ad - bc) \end{vmatrix} = \frac{1}{ad - bc}

Finally, from (5.4-8),

f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{|ad - bc|}\, f_{X_1,X_2}\!\left(\frac{dy_1 - by_2}{ad - bc},\ \frac{-cy_1 + ay_2}{ad - bc}\right)

*5.5
LINEAR TRANSFORMATION OF GAUSSIAN RANDOM VARIABLES

Equation (5.4-8) can be readily applied to the problem of linearly transforming a set of gaussian random variables X_1, X_2, ..., X_N for which the joint density of (5.3-12) applies. The new variables Y_1, Y_2, ..., Y_N are

Y_1 = a_{11}X_1 + a_{12}X_2 + \cdots + a_{1N}X_N
Y_2 = a_{21}X_1 + a_{22}X_2 + \cdots + a_{2N}X_N    (5.5-1)
\vdots
Y_N = a_{N1}X_1 + a_{N2}X_2 + \cdots + a_{NN}X_N

where the coefficients a_{ij}, i and j = 1, 2, ..., N, are real numbers. Now if we define the following matrices:

[T] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & & & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix}    (5.5-2)

[Y] = \begin{bmatrix} Y_1 \\ \vdots \\ Y_N \end{bmatrix}   [\bar{Y}] = \begin{bmatrix} \bar{Y}_1 \\ \vdots \\ \bar{Y}_N \end{bmatrix}   [X] = \begin{bmatrix} X_1 \\ \vdots \\ X_N \end{bmatrix}   [\bar{X}] = \begin{bmatrix} \bar{X}_1 \\ \vdots \\ \bar{X}_N \end{bmatrix}    (5.5-3)

then it is clear from (5.5-1) that

[Y] = [T][X]        [Y - \bar{Y}] = [T][X - \bar{X}]    (5.5-4)

[X] = [T]^{-1}[Y]        [X - \bar{X}] = [T]^{-1}[Y - \bar{Y}]    (5.5-5)

so long as [T] is nonsingular. Thus,

X_i = T_i^{-1}(Y_1, \ldots, Y_N) = a^{i1}Y_1 + a^{i2}Y_2 + \cdots + a^{iN}Y_N    (5.5-6)

\frac{\partial X_i}{\partial Y_j} = \frac{\partial T_i^{-1}}{\partial y_j} = a^{ij}    (5.5-7)

X_i - \bar{X}_i = a^{i1}(Y_1 - \bar{Y}_1) + \cdots + a^{iN}(Y_N - \bar{Y}_N)    (5.5-8)

from (5.5-5). Here a^{ij} represents the ijth element of [T]^{-1}.
The density function of the new variables Y_1, ..., Y_N is found by solving the right side of (5.4-8) in two steps. The first step is to determine |J|. By using (5.5-7) with (5.4-6) we find that J equals the determinant of the matrix [T]^{-1}. Hence,†

|J| = \|[T]^{-1}\| = \frac{1}{\|[T]\|}    (5.5-9)

The second step in solving (5.4-8) proceeds by using (5.5-8) to obtain

C_{X_i X_j} = E[(X_i - \bar{X}_i)(X_j - \bar{X}_j)] = \sum_{k=1}^{N}\sum_{m=1}^{N} a^{ik} a^{jm} E[(Y_k - \bar{Y}_k)(Y_m - \bar{Y}_m)]
  = \sum_{k=1}^{N}\sum_{m=1}^{N} a^{ik} a^{jm} C_{Y_k Y_m}    (5.5-10)

Since C_{X_i X_j} is the ijth element in the covariance matrix [C_X] of (5.3-12) and C_{Y_k Y_m} is the kmth element of the covariance matrix of the new variables Y_i, which we denote [C_Y], (5.5-10) can be written in the form

[C_X] = [T]^{-1}[C_Y]\,[[T]^t]^{-1}    (5.5-11)

Here [T]^t represents the transpose of [T]. The inverse of (5.5-11) is

[C_X]^{-1} = [T]^t [C_Y]^{-1} [T]    (5.5-12)

which has a determinant

|[C_X]^{-1}| = |[C_Y]^{-1}|\,\|[T]\|^2    (5.5-13)

On substitution of (5.5-12) and (5.5-13) into (5.3-12):

f_{X_1,\ldots,X_N}(x_1 = T_1^{-1}, \ldots, x_N = T_N^{-1})
  = \frac{|[C_Y]^{-1}|^{1/2}\,\|[T]\|}{(2\pi)^{N/2}} \exp\left\{-\frac{1}{2}[x - \bar{X}]^t [T]^t [C_Y]^{-1} [T][x - \bar{X}]\right\}    (5.5-14)

Finally, (5.5-14) and (5.5-9) are substituted into (5.4-8), and (5.5-4) is used to obtain

f_{Y_1,\ldots,Y_N}(y_1, \ldots, y_N) = \frac{|[C_Y]^{-1}|^{1/2}}{(2\pi)^{N/2}} \exp\left\{-\frac{1}{2}[y - \bar{Y}]^t [C_Y]^{-1}[y - \bar{Y}]\right\}    (5.5-15)

†We represent the magnitude of the determinant of a matrix by \|[\cdot]\|.


This result shows that the new random variables Y_1, Y_2, ..., Y_N are jointly gaussian because (5.5-15) is of the required form.
In summary, (5.5-15) shows that a linear transformation of gaussian random variables produces gaussian random variables. The new variables have mean values

\bar{Y}_i = \sum_{j=1}^{N} a_{ij}\bar{X}_j    (5.5-16)

from (5.5-1) and covariances given by the elements of the covariance matrix

[C_Y] = [T][C_X][T]^t    (5.5-17)

as found from (5.5-11).

EXAMPLE 5.5-1. Two gaussian random variables X_1 and X_2 have zero means and variances \sigma_{X_1}^2 = 4 and \sigma_{X_2}^2 = 9. Their covariance C_{X_1 X_2} equals 3. If X_1 and X_2 are linearly transformed to new variables Y_1 and Y_2 according to

Y_1 = X_1 - 2X_2
Y_2 = 3X_1 + 4X_2

we use the above results to find the means, variances, and covariance of Y_1 and Y_2.
Here

[T] = \begin{bmatrix} 1 & -2 \\ 3 & 4 \end{bmatrix}   and   [C_X] = \begin{bmatrix} 4 & 3 \\ 3 & 9 \end{bmatrix}

Since X_1 and X_2 are zero-mean and gaussian, Y_1 and Y_2 will also be zero-mean and gaussian; thus \bar{Y}_1 = 0 and \bar{Y}_2 = 0. From (5.5-17):

[C_Y] = [T][C_X][T]^t = \begin{bmatrix} 1 & -2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 4 & 3 \\ 3 & 9 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 28 & -66 \\ -66 & 252 \end{bmatrix}

Thus \sigma_{Y_1}^2 = 28, \sigma_{Y_2}^2 = 252, and C_{Y_1 Y_2} = -66.
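The matrix product above is easily checked numerically; a short MATLAB sketch (added for illustration) is:

% Numerical check of Example 5.5-1
T  = [1 -2; 3 4];
Cx = [4 3; 3 9];
Cy = T*Cx*T'        % returns [28 -66; -66 252], per (5.5-17)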

*5.6
COMPUTER GENERATION OF MULTIPLE RANDOM VARIABLES

In Section 3.5 we discussed the generation of a single random variable of prescribed probability density by transformation of a random variable that was uniformly distributed on (0, 1). Here, we shall utilize results of the preceding two sections to show how some usefully distributed random variables can be generated by computer when the generation initially requires either two uniformly distributed random variables or two gaussian variables. We describe several examples, the first based on transformation of two statistically independent random variables X_1 and X_2, both uniformly distributed on (0, 1).
One common problem in the simulation of systems by a digital computer is the generation of gaussian random variables. As a first example, we note that two statistically independent gaussian random variables Y_1 and Y_2, each with zero mean and unit variance, can be generated by the transformations (see Dillard, 1967)

Y_1 = T_1(X_1, X_2) = \sqrt{-2\ln(X_1)}\cos(2\pi X_2)    (5.6-1a)

Y_2 = T_2(X_1, X_2) = \sqrt{-2\ln(X_1)}\sin(2\pi X_2)    (5.6-1b)

It can be shown (Problem 5.6-1) that the joint density of Y_1 and Y_2 is

f_{Y_1,Y_2}(y_1, y_2) = \frac{e^{-(y_1^2 + y_2^2)/2}}{2\pi}    (5.6-2)

as it should be for statistically independent Y_1 and Y_2. Our example can be generalized to include arbitrary means and variances (Problem 5.6-2).
As another example, assume we start with two zero-mean, unit-variance, statistically independent gaussian random variables Y_1 and Y_2 (perhaps generated as in our first example above), and seek to transform them to two zero-mean gaussian variates W_1 and W_2 that have arbitrary variances \sigma_{W_1}^2 and \sigma_{W_2}^2 and arbitrary correlation coefficient \rho_W. From (5.3-16) applied to W_1 and W_2, and from (5.5-17) for a linear transformation, we have

[C_W] = \begin{bmatrix} \sigma_{W_1}^2 & \rho_W\sigma_{W_1}\sigma_{W_2} \\ \rho_W\sigma_{W_1}\sigma_{W_2} & \sigma_{W_2}^2 \end{bmatrix} = [T][T]^t    (5.6-3)

The covariance matrix of Y_1 and Y_2 does not explicitly appear in (5.5-17) because it is a unit matrix due to the unit-variance assumption about Y_1 and Y_2. Our goal is obtained if we solve for the [T] that makes (5.6-3) true for arbitrarily specified \sigma_{W_1}^2, \sigma_{W_2}^2, and \rho_W. As long as [C_W] is nonsingular (the usual case), [T] can be expressed as a lower triangular matrix of the form

[T] = \begin{bmatrix} T_{11} & 0 \\ T_{21} & T_{22} \end{bmatrix}    (5.6-4)

On using (5.6-4) in (5.6-3), and solving for the elements, we have

T_{11} = \sigma_{W_1}    (5.6-5a)

T_{21} = \rho_W\sigma_{W_2}    (5.6-5b)

T_{22} = \sigma_{W_2}\sqrt{1 - \rho_W^2}    (5.6-5c)

The final transformations yielding W_1 and W_2 become

W_1 = T_{11}Y_1 = \sigma_{W_1}Y_1    (5.6-6a)

W_2 = T_{21}Y_1 + T_{22}Y_2 = \rho_W\sigma_{W_2}Y_1 + \sigma_{W_2}\sqrt{1 - \rho_W^2}\,Y_2    (5.6-6b)

from the form of (5.5-4). Thus, if zero-mean, unit-variance, statistically independent gaussian random variables Y_1 and Y_2 are transformed according to (5.6-6), then W_1 and W_2 are correlated gaussian random variables having zero means, respective variances \sigma_{W_1}^2 and \sigma_{W_2}^2, and correlation coefficient \rho_W.
EXAMPLE 5.6-1. We use MATLAB to generate N = 100 values x_{1n}, n = 1, 2, ..., N, of a random variable X_1 uniform on (0, 1). We then repeat the process for a second random variable X_2 with values x_{2n}. Next, we successively use (5.6-1) and (5.6-6) to create two sets of values w_{1n} and w_{2n}, n = 1, 2, ..., N, of two zero-mean gaussian random variables W_1 and W_2 having respective variances \sigma_{W_1}^2 = 4 and \sigma_{W_2}^2 = 9, and normalized correlation coefficient \rho_W = -0.4. To determine the quality of our random variables' values, we find their means according to

\bar{w}_i = \frac{1}{N}\sum_{n=1}^{N} w_{in}    i = 1 and 2    (1)

their variances according to

\hat{\sigma}_{W_i}^2 = \frac{1}{N}\sum_{n=1}^{N}(w_{in} - \bar{w}_i)^2    i = 1 and 2    (2)

and their normalized correlation coefficient according to

\hat{\rho}_W = \frac{\frac{1}{N}\sum_{n=1}^{N} w_{1n}w_{2n} - \bar{w}_1\bar{w}_2}{\hat{\sigma}_{W_1}\hat{\sigma}_{W_2}}    (3)

The applicable MATLAB code is shown in Figure 5.6-1. Our results are tabulated in Table 5.6-1, where the standard deviations and normalized correlation coefficient are found to be in error by -7.5%, -2.3%, and 42.5%, respectively. For N = 1000 values, these errors improve (see Problem 5.6-5).

If arbitrary means \bar{W}_1 and \bar{W}_2 are desired for W_1 and W_2 in the preceding example, we only need to add these to the right sides of (5.6-6):

W_1 = \bar{W}_1 + \sigma_{W_1}Y_1    (5.6-7a)

W_2 = \bar{W}_2 + \rho_W\sigma_{W_2}Y_1 + \sigma_{W_2}\sqrt{1 - \rho_W^2}\,Y_2    (5.6-7b)

TABLE 5.6-1
Results applicable to Example 5.6-1

                       Mean              Standard deviation      Correlation coefficient
                       w1      w2        sigma_W1   sigma_W2     rho_W
True values            0       0         2          3            -0.4
Estimated (N = 100)    -0.02   -0.17     1.85       2.93         -0.57
Percent error                            -7.5%      -2.3%        42.5%
%%%%%%%%%%%%%%%%% Example 5.6-1 %%%%%%%%%%%%%%%%%
clear

N = 100;                  % number of random variables to generate
sigw1 = sqrt(4);          % standard deviations
sigw2 = sqrt(9);
rho = -0.4;               % normalized correlation coefficient

x1 = rand(1,N);           % uniformly distributed random numbers
x2 = rand(1,N);
y1 = sqrt(-2*log(x1)).*cos(2*pi*x2);   % independent gaussian
                                       % random variables
y2 = sqrt(-2*log(x1)).*sin(2*pi*x2);

T11 = sigw1;              % constants
T21 = rho*sigw2;
T22 = sigw2*sqrt(1-rho^2);

w1 = T11*y1;              % correlated gaussian random variables
w2 = T21*y1 + T22*y2;

wmean = [mean(w1) mean(w2)]            % sample means
wcov  = [cov(w1,1) cov(w2,1)]          % (biased) sample variances
rw = corrcoef(w1,w2);
rho_hat = rw(2,1)                      % estimate of normalized correlation
                                       % coefficient
cov_err = 100*(sqrt(wcov) - [sigw1 sigw2])./[sigw1 sigw2]   % percent error
rho_err = 100*(rho_hat - rho)./rho

FIGURE 5.6-1
MATLAB code used in Example 5.6-1.

The foregoing transformations can be extended to generate any number of zero-mean correlated gaussian random variables by transforming the same number of zero-mean, unit-variance, independent gaussian random variables. For N random variables, [C_W] becomes an N x N specified (arbitrary) symmetric matrix and the form of (5.6-3) again applies. The elements of [T] can be found from the Cholesky method of factoring matrices, as described in Ralston and Wilf (1967).
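As an added sketch of this approach (not in the original text), MATLAB's chol function supplies such a factor directly; chol returns an upper triangular factor, so its transpose gives the lower triangular [T] of the form assumed in (5.6-4). The covariance matrix below is an assumed example:

% Hypothetical N = 3 case: generate correlated zero-mean gaussian samples
Cw = [4 1 0.5; 1 9 -2; 0.5 -2 1];   % assumed N x N covariance matrix
T  = chol(Cw)';                     % lower triangular factor, so T*T' = Cw
y  = randn(3, 1e5);                 % unit-variance independent gaussian samples
w  = T*y;                           % each column is one correlated sample vector
cov(w')                             % sample covariance, approximately Cw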
As a final example, suppose two statistically independent gaussian random variables W_1 and W_2, with respective means \bar{W}_1 and \bar{W}_2 and variances both equal to \sigma^2, are subjected to the transformations

R = T_1(W_1, W_2) = \sqrt{W_1^2 + W_2^2}    (5.6-8)

\Theta = T_2(W_1, W_2) = \tan^{-1}(W_2/W_1)    (5.6-9)

From the inverse transformations

W_1 = T_1^{-1}(R, \Theta) = R\cos(\Theta)    (5.6-10)

W_2 = T_2^{-1}(R, \Theta) = R\sin(\Theta)    (5.6-11)

and the use of (5.4-6), we find the jacobian equals R. Since

f_{W_1,W_2}(w_1, w_2) = \frac{1}{2\pi\sigma^2}\exp\left\{-\frac{(w_1 - \bar{W}_1)^2 + (w_2 - \bar{W}_2)^2}{2\sigma^2}\right\}    (5.6-12)

(5.4-8) yields

f_{R,\Theta}(r, \theta) = \frac{r\,u(r)}{2\pi\sigma^2}\exp\left\{-\frac{[r\cos(\theta) - \bar{W}_1]^2 + [r\sin(\theta) - \bar{W}_2]^2}{2\sigma^2}\right\}
  = \frac{r\,u(r)}{2\pi\sigma^2}\exp\left\{-\frac{r^2 + \bar{W}_1^2 + \bar{W}_2^2 - 2r\bar{W}_1\cos(\theta) - 2r\bar{W}_2\sin(\theta)}{2\sigma^2}\right\}    (5.6-13)

where u(r) is the unit-step function. If we now define

A_0 = \sqrt{\bar{W}_1^2 + \bar{W}_2^2}    (5.6-14)

\theta_0 = \tan^{-1}(\bar{W}_2/\bar{W}_1)    (5.6-15)

(5.6-13) can be written as

f_{R,\Theta}(r, \theta) = \frac{r\,u(r)}{2\pi\sigma^2}\exp\left\{-\frac{r^2 + A_0^2 - 2rA_0\cos(\theta - \theta_0)}{2\sigma^2}\right\}    (5.6-16)

Equation (5.6-16) is our principal result. It is important in system simulations because it is the joint density of the envelope (R) and phase (\Theta) of the sum of a sinusoidal signal (with peak amplitude A_0 and phase \theta_0) and a zero-mean gaussian bandpass noise of power \sigma^2. This density is developed further in Section 10.6.
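To suggest how (5.6-8) and (5.6-9) appear in a simulation, the short MATLAB sketch below (an added illustration with assumed signal values) generates envelope and phase samples of a sinusoid plus gaussian noise; a histogram of the envelope samples follows the density implied by (5.6-16):

% Hypothetical envelope/phase generation for signal amplitude A0, phase th0
A0 = 2; th0 = pi/4; sigma = 1; N = 1e5;      % assumed values
W1 = A0*cos(th0) + sigma*randn(1,N);         % mean W1bar = A0*cos(th0)
W2 = A0*sin(th0) + sigma*randn(1,N);         % mean W2bar = A0*sin(th0)
R     = sqrt(W1.^2 + W2.^2);                 % envelope, (5.6-8)
Theta = atan2(W2, W1);                       % phase; four-quadrant form of (5.6-9)
hist(R, 50)                                  % envelope histogram (cf. Section 10.6)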

5.7
SAMPLING AND SOME LIMIT THEOREMS

In this section we briefly introduce some basic concepts of sampling. The topic will be expanded further in Chapter 8. Although we shall develop the topics around an example practical problem, the results will apply to much more general situations.

Sampling and Estimation

Engineers and scientists are frequently confronted with the problem of measuring some quantity. For example, if we need to measure a dc voltage, we use a dc voltmeter, which provides a scale indication of the voltage. Now regardless of the mechanism used by the meter to provide its indication, one typically "reads" this scale to obtain a "value" we say is the measurement of the voltage. In other words, we sample the indication to get our measurement. The measurement can only be considered as an estimate of voltage, however, because of meter drifts, accuracy tolerances, etc. In fact, any measurement can only be considered as an estimate of the quantity of interest. In our example, our estimate uses only one sample. More generally, we may estimate (measure) a quantity by using more than one sample (observation).
To quantify these practical thoughts further, consider the problem of measuring the average (dc) value of some random noise voltage. If we had a large number of identical such sources, we could imagine sampling the voltage of each (at a given time) and forming an estimate of the dc voltage by averaging the samples. For N sources, each sample could be considered a value of one of N random variables that form the N dimensions of a combined experiment, as in Section 1.6. Here samples from each of N subexperiments are combined.
In our practical world we usually must take another approach because we never have multiple identical sources with which to work. We seek another model. Suppose we now take a sequence of N samples over time under the assumption (model) that the voltage's statistical properties stay unchanged with time.† Again, each sample is taken as the value of one of N statistically independent random variables, all having the same probability distribution. We again have a combined experiment, but it is now N repetitions of one basic experiment.

Estimation of Mean, Power, and Variance

For either of the above approaches the N samples x_n represent values of identically distributed random variables X_n, n = 1, 2, ..., N. Assume the X_n are independent at least by pairs; they have the same mean value \bar{X} and variance \sigma_X^2 because of identical distributions. Since we wish to estimate (measure) the mean noise voltage, intuition indicates we should form the average of the sample values as follows:‡

\hat{x}_N = estimate of average of N samples = \frac{1}{N}\sum_{n=1}^{N} x_n    (5.7-1)

Equation (5.7-1) is a function of the set of specific samples \{x_n\}; it gives a number which we call an estimate or measurement of the mean of the random variables. Another set of specific samples would produce a different number \hat{x}_N. When all possible sample sets are considered, we form the function

\hat{X}_N = \frac{1}{N}\sum_{n=1}^{N} X_n    (5.7-2)

†More is said about this model in the next chapter (Section 6.2).
‡The circumflex is notation to imply an estimate, or estimator; in this case an estimate of the time average of N samples denoted by \hat{x}_N.
to represent the effect of averaging over the random variables. Equation (5.7-2) is called an estimator; it produces a specific estimate of \bar{X} for a specific set of samples. We refer to (5.7-2) as the sample mean.
Of great interest is: How does our estimator of the sample mean perform? To seek an answer, we find the mean and variance of our estimator.

E[\hat{X}_N] = E\left[\frac{1}{N}\sum_{n=1}^{N} X_n\right] = \frac{1}{N}\sum_{n=1}^{N} E[X_n] = \bar{X}    any N    (5.7-3)

Any estimator (measurement function) for which the mean of the estimator of some quantity equals the quantity being estimated is called unbiased. For the variance:

\sigma_{\hat{X}_N}^2 = E[(\hat{X}_N - \bar{X})^2] = E[\hat{X}_N^2 - 2\bar{X}\hat{X}_N + \bar{X}^2]
  = E[\hat{X}_N^2] - \bar{X}^2 = \frac{1}{N^2}\sum_{n=1}^{N}\sum_{m=1}^{N} E[X_n X_m] - \bar{X}^2    (5.7-4)

But E[X_n X_m] = E[X^2] for n = m and equals \bar{X}^2 for n \neq m because of assumed independence by pairs. Thus,

\sigma_{\hat{X}_N}^2 = \frac{1}{N^2}\left[N E[X^2] + (N^2 - N)\bar{X}^2\right] - \bar{X}^2 = \frac{1}{N}\left(E[X^2] - \bar{X}^2\right) = \frac{\sigma_X^2}{N}    (5.7-5)

From (5.7-5) the variance of our sample mean estimator goes to zero as N \to \infty for finite source variance \sigma_X^2. This fact implies that for large N our estimator will give an estimate nearly equal to the quantity being estimated with high probability. To prove the implication, we use Chebychev's inequality of (3.2-10). For our notation it says

P\{|\hat{X}_N - \bar{X}| < \epsilon\} \ge 1 - \frac{\sigma_X^2}{N\epsilon^2}    (5.7-6)

which tends to 1 as N \to \infty for any finite \epsilon > 0 and finite \sigma_X^2. This result indicates that \hat{X}_N converges to \bar{X} with probability 1 as N \to \infty. Such estimators are called consistent.

EXAMPLE 5.7-1. Suppose the mean of our example noise voltage is to be estimated to within 5% of its true value with a probability of 0.95 when N = 50 samples are used. We find what mean and variance are allowed. From (5.7-6) with \epsilon = 0.05\bar{X} we require

1 - \frac{\sigma_X^2}{50(0.05\bar{X})^2} \ge 0.95

which means \bar{X} \ge (160)^{1/2}\sigma_X for the accuracies desired.
Thus far, our discussion has centered on estimating the mean of some random quantity. Estimates of functions of random quantities are also possible. For example, an estimator for the power in a random voltage can be defined as

\hat{P}_N = \frac{1}{N}\sum_{n=1}^{N} X_n^2    (5.7-7)

while the estimator for the variance of the voltage can be defined as

\hat{\sigma}_N^2 = \frac{1}{N - 1}\sum_{n=1}^{N}(X_n - \hat{X}_N)^2    (5.7-8)

Here \hat{X}_N is defined in (5.7-2). The estimator of (5.7-7) is found in Problem 5.7-1 to be unbiased. Its variance is found in Problem 5.7-2. Similarly, the variance estimator of (5.7-8) is unbiased, but becomes biased if the factor 1/(N - 1) is changed to 1/N as in the mean and power estimators (Problem 5.7-3).

EXAMPLE 5.7-2. A random noise voltage behaves approximately as an exponential random variable with a mean value of 4 and a variance of 16. Eleven samples are taken having values 0.1, 0.4, 0.9, 1.4, 2.0, 2.8, 3.7, 4.8, 6.4, 9.2, and 12.0 V. We use (5.7-2) and (5.7-8) to find the respective mean and variance of these samples. From (5.7-2)

\hat{x}_N = \frac{1}{11}(0.1 + 0.4 + 0.9 + \cdots + 9.2 + 12.0) = 3.973 V

From (5.7-8)

\hat{\sigma}_N^2 = \frac{1}{10}\left[(0.1 - 3.973)^2 + (0.4 - 3.973)^2 + \cdots + (12.0 - 3.973)^2\right] = 14.75 V^2

Here the sample mean is in error by less than 1% for the given set of sample values, but the estimate of variance is in error by about 7.8%. The reader should be aware that any other set of 11 samples may give different values and different percentage errors.
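These two numbers are easy to reproduce; a brief MATLAB sketch (added here) using the built-in mean and var functions, which implement (5.7-2) and (5.7-8), is:

% Sample mean and variance for the eleven values of Example 5.7-2
x = [0.1 0.4 0.9 1.4 2.0 2.8 3.7 4.8 6.4 9.2 12.0];
xhat  = mean(x)      % about 3.973 V, per (5.7-2)
s2hat = var(x)       % about 14.75 V^2; var uses the 1/(N-1) factor of (5.7-8)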
EXAMPLE 5.7-3. The random variable Y of Problem 3.5-4 can be generated by the transformation Y = a[(1 - X)^{-1/2} - 1]^{1/2}, 0 < X < 1. We use MATLAB to generate 250 values of Y from which the sample mean of (5.7-2), the second moment of (5.7-7), and the variance of (5.7-8) are then calculated. These values are compared to the true mean, second moment, and variance of Y, which are known to be \pi a/4, a^2, and a^2(16 - \pi^2)/16, respectively. For the calculations we assume a = 2.
The MATLAB code for this example is given in Figure 5.7-1. Calculated data, shown in Table 5.7-1, reveal errors in estimating the mean, second moment, and variance of -5.7%, -10.3%, and -8.5%, respectively. Problem 5.7-4 reconsiders this example, but for N = 1000 values of Y.

TABLE 5.7-1
Data applicable to Example 5.7-3

                       Mean      Second moment     Variance
True values            1.57      4.00              1.53
Estimated (N = 250)    1.48      3.59              1.40
Percent error          -5.7%     -10.3%            -8.5%

%%%%%%%%%%%%%%%%% Example 5.7-3 %%%%%%%%%%%%%%%%%
clear

N = 250;     % number of random variables to generate
a = 2;       % constant
x = rand(1,N);
y = a*sqrt(sqrt(1./(1-x)) - 1);    % random variable
ymean = mean(y);                   % sample mean
y2moment = mean(y.^2);             % second moment
yvar = var(y);                     % variance
yestimates = [ymean y2moment yvar]
ytrue = [pi*a/4 a^2 a^2*(16-pi^2)/16]
per_error = 100*(yestimates - ytrue)./ytrue

FIGURE 5.7-1
MATLAB code used in Example 5.7-3.

Weak Law of Large Numbers

The preceding developments have shown that the sample mean estimator of (5.7-2), where the random variables X_n are identically distributed (same mean and same finite variance) and are at least pairwise statistically independent, satisfies

\lim_{N \to \infty} P\{|\hat{X}_N - \bar{X}| < \epsilon\} = 1    any \epsilon > 0    (5.7-9)

Expression (5.7-9) is known as the weak law of large numbers.

Strong Law of Large Numbers

Another important relationship is the strong law of large numbers. For N random variables X_n defined as for the weak law, it states that, as N \to \infty,

P\left\{\lim_{N \to \infty} \hat{X}_N = \bar{X}\right\} = 1    (5.7-10)
*5.8
COMPLEX RANDOM VARIABLES

A complex random variable Z can be defined in terms of real random variables X and Y by

Z = X + jY    (5.8-1)

where j = \sqrt{-1}. In considering expected values involving Z, the joint density of X and Y must be used. For instance, if g(\cdot) is some function (real or complex) of Z, the expected value of g(Z) is obtained from

E[g(Z)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(z)\, f_{X,Y}(x, y)\,dx\,dy    (5.8-2)

Various important quantities such as the mean value and variance are obtained through application of (5.8-2). The mean value of Z is

\bar{Z} = E[Z] = E[X] + jE[Y] = \bar{X} + j\bar{Y}    (5.8-3)

The variance \sigma_Z^2 of Z is defined as the mean value of the function g(Z) = |Z - E[Z]|^2; that is,

\sigma_Z^2 = E[|Z - E[Z]|^2]    (5.8-4)

Equation (5.8-2) can be extended to include functions of two random variables

Z_m = X_m + jY_m    (5.8-5)

and

Z_n = X_n + jY_n    (5.8-6)

n \neq m, if expectation is taken with respect to the four random variables X_m, Y_m, X_n, Y_n through their joint density function f_{X_m,Y_m,X_n,Y_n}(x_m, y_m, x_n, y_n). If this density satisfies

f_{X_m,Y_m,X_n,Y_n}(x_m, y_m, x_n, y_n) = f_{X_m,Y_m}(x_m, y_m)\, f_{X_n,Y_n}(x_n, y_n)    (5.8-7)

then Z_m and Z_n are called statistically independent. The extension to N random variables is straightforward.
The correlation and covariance of Z_m and Z_n are defined by

R_{Z_m Z_n} = E[Z_m^* Z_n]    n \neq m    (5.8-8)

and

C_{Z_m Z_n} = E[\{Z_m - E[Z_m]\}^*\{Z_n - E[Z_n]\}]    n \neq m    (5.8-9)

respectively, where the superscripted asterisk * represents the complex conjugate. If the covariance is 0, Z_m and Z_n are said to be uncorrelated random variables. By setting (5.8-9) to 0, we find that

R_{Z_m Z_n} = E[Z_m^*]E[Z_n]    m \neq n    (5.8-10)

for uncorrelated random variables. Statistical independence is sufficient to guarantee that Z_m and Z_n are uncorrelated.
Finally, we note that two complex random variables are called orthogonal if their correlation, given by (5.8-8), equals 0.
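A short MATLAB sketch (added for illustration, with assumed distributions for the real and imaginary parts) that estimates the quantities defined above from samples:

% Hypothetical complex random variables Zm, Zn built from real gaussian parts
N  = 1e5;
Zm = randn(1,N) + 1j*(2 + randn(1,N));             % mean j2
Zn = 3 + randn(1,N) + 1j*randn(1,N);               % mean 3, independent of Zm
Rmn = mean(conj(Zm).*Zn)                           % correlation estimate, (5.8-8)
Cmn = mean(conj(Zm - mean(Zm)).*(Zn - mean(Zn)))   % covariance estimate, (5.8-9)
% Cmn should be near 0 (independence implies uncorrelated), and Rmn near
% E[Zm]'*E[Zn] = (-j2)(3) = -j6, consistent with (5.8-10).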
5.9
SUMMARY

This chapter extended the operations performed on a single random variable in Chapter 3 to include operations on multiple random variables. Topics extended were:
• Expected values were developed for functions of random variables, which included both joint moments about the origin and central moments, as well as joint characteristic functions that are useful in finding moments. New moments of special interest were correlation and covariance.
• Multiple gaussian random variables were defined.
• Single and multiple functional transformations of several random variables were developed.
• Transformation results were used to show that linear transformation of jointly gaussian random variables is especially important, as it produces random variables that are also jointly gaussian.
• The important technique of how to generate multiple random variables by computer was next introduced. The material was illustrated by a computer example using MATLAB software.
• Some new material on the basics of sampling and estimation of mean, power, and variance was given. It was supported by both regular and computer examples and problems (MATLAB).
• Finally, some more advanced material was given that defines complex random variables and their characteristics.
PROBLEMS

5.1-1. Random variables X and Y have the joint density

f_{X,Y}(x, y) = \begin{cases} 1/24 & 0 < x < 6 \text{ and } 0 < y < 4 \\ 0 & \text{elsewhere} \end{cases}

What is the expected value of the function g(X, Y) = (XY)^2?

5.1-2. Extend Problem 5.1-1 by finding the expected value of g(X_1, X_2, X_3, X_4) = X_1^{n_1} X_2^{n_2} X_3^{n_3} X_4^{n_4}, where n_1, n_2, n_3, and n_4 are integers \ge 0 and

f_{X_1,X_2,X_3,X_4}(x_1, x_2, x_3, x_4) = \begin{cases} 1/(abcd) & 0 < x_1 < a \text{ and } 0 < x_2 < b \text{ and } 0 < x_3 < c \text{ and } 0 < x_4 < d \\ 0 & \text{elsewhere} \end{cases}
5.1-3. The density function of two random variables X and Y is

f_{X,Y}(x, y) = u(x)u(y)\,16 e^{-4(x + y)}

Find the mean value of the function

g(X, Y) = \begin{cases} 5 & 0 < X \le 1 \text{ and } 0 < Y \le 1 \\ -1 & 1 < X \text{ and/or } 1 < Y \\ 0 & \text{all other } X \text{ and } Y \end{cases}
5.1-4. For the random variables in Problem 5.1-3, find the mean value of the function

g(X, Y) = e^{-2(X^2 + Y^2)}

5.1-5. Three statistically independent random variables X_1, X_2, and X_3 have mean values \bar{X}_1 = 3, \bar{X}_2 = 6, and \bar{X}_3 = -2. Find the mean values of the following functions:
(a) g(X_1, X_2, X_3) = X_1 + 3X_2 + 4X_3
(b) g(X_1, X_2, X_3) = X_1 X_2 X_3
(c) g(X_1, X_2, X_3) = -2X_1X_2 - 3X_1X_3 + 4X_2X_3
(d) g(X_1, X_2, X_3) = X_1 + X_2 + X_3
5.1-6. Find the mean value of the function

g(X, Y) = X^2 + Y^2

where X and Y are random variables defined by the density function

f_{X,Y}(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/2\sigma^2}

with \sigma^2 a constant.

5.1-7. Two statistically independent random variables X and Y have mean values \bar{X} = E[X] = 2 and \bar{Y} = E[Y] = 4. They have second moments \overline{X^2} = E[X^2] = 8 and \overline{Y^2} = E[Y^2] = 25. Find:
(a) the mean value, (b) the second moment, and (c) the variance of the random variable W = 3X - Y.

5.1-8. Two random variables X and Y have means \bar{X} = 1 and \bar{Y} = 2, variances \sigma_X^2 = 4 and \sigma_Y^2 = 1, and a correlation coefficient \rho_{XY} = 0.4. New random variables W and V are defined by

V = -X + 2Y        W = X + 3Y

Find:
(a) the means, (b) the variances, (c) the correlations, and
(d) the correlation coefficient \rho_{VW} of V and W.

5.1-9. Two random variables X and Y are related by the expression

Y = aX + b

where a and b are any real numbers.
(a) Show that their correlation coefficient is

\rho = \begin{cases} 1 & \text{if } a > 0 \text{ for any } b \\ -1 & \text{if } a < 0 \text{ for any } b \end{cases}

(b) Show that their covariance is

C_{XY} = a\sigma_X^2

where \sigma_X^2 is the variance of X.

*5.1-10. Show that the correlation coefficient satisfies the expression

|\rho| = \frac{|\mu_{11}|}{\sqrt{\mu_{20}\mu_{02}}} \le 1

5.1-11. Find all the second-order moments and central moments for the density function given in Problem 5.1-3.

*5.1-12. Random variables X and Y have the joint density function

f_{X,Y}(x, y) = \begin{cases} (x + y)^2/40 & -1 < x < 1 \text{ and } -3 < y < 3 \\ 0 & \text{elsewhere} \end{cases}

(a) Find all the second-order moments of X and Y.
(b) What are the variances of X and Y?
(c) What is the correlation coefficient?

5.1-13. Find all the third-order moments by using (5.1-5) for X and Y defined in Problem 5.1-12.

5.1-14. For discrete random variables X and Y, show that:
(a) Joint moments are

m_{nk} = \sum_{i=1}^{N}\sum_{j=1}^{M} P(x_i, y_j)\, x_i^n y_j^k

(b) Joint central moments are

\mu_{nk} = \sum_{i=1}^{N}\sum_{j=1}^{M} P(x_i, y_j)(x_i - \bar{X})^n (y_j - \bar{Y})^k

where P(x_i, y_j) = P\{X = x_i, Y = y_j\}, X has N possible values x_i, and Y has M possible values y_j.

*5.1-15. For two random variables X and Y:

f_{X,Y}(x, y) = 0.15\delta(x + 1)\delta(y) + 0.1\delta(x)\delta(y) + 0.1\delta(x)\delta(y - 2) + 0.4\delta(x - 1)\delta(y + 2) + 0.2\delta(x - 1)\delta(y - 1) + 0.05\delta(x - 1)\delta(y - 3)

Find: (a) the correlation, (b) the covariance, and (c) the correlation coefficient of X and Y. (d) Are X and Y either uncorrelated or orthogonal?
5.1-16. Discrete random variables X and Y have the joint density

f_{X,Y}(x, y) = 0.4\delta(x + \alpha)\delta(y - 2) + 0.3\delta(x - \alpha)\delta(y - 2) + 0.1\delta(x - \alpha)\delta(y - \alpha) + 0.2\delta(x - 1)\delta(y - 1)

Determine the value of \alpha, if any, that minimizes the correlation between X and Y and find the minimum correlation. Are X and Y orthogonal?
5.1-17. For two discrete random variables X and Y:

f_{X,Y}(x, y) = 0.3\delta(x - \alpha)\delta(y - \alpha) + 0.5\delta(x + \alpha)\delta(y - 4) + 0.2\delta(x + 2)\delta(y + 2)

Determine the value of \alpha, if any, that minimizes the covariance of X and Y. Find the minimum covariance. Are X and Y uncorrelated?

5.1-18. The density function

f_{X,Y}(x, y) = \begin{cases} xy/9 & 0 < x < 2 \text{ and } 0 < y < 3 \\ 0 & \text{elsewhere} \end{cases}

applies to two random variables X and Y.
(a) Show, by use of (5.1-6) and (5.1-7), that X and Y are uncorrelated.
(b) Show that X and Y are also statistically independent.
(D) Slrow that X and )' are also statistically independcnt.

5"1-19. Two ramlom variables X and I have rhe density function


12
0 <'x < 2 and 0 < r' <
1.r(x,r.) : | *:t'+0'5r')t
3

Io elsewhere

(aI Find all the first- and second-order moments.


(b) Find the covariance.
(c) Arc X and X uncorrelated?

5.1-20. Define nindorn variablcs [' and lU by

V:X*aY
W:X-uY
where a is a real number and l'
and Y are random variables. Determinc a in
terms of moments of X and Y such that V and W are otthogonal.

*5.1-21. If X and I/ in Problem 5.1-20 are gaussian. show that W and V d,re statisti-
calil independent if a7 = oi/o1., where ol. and oi. are the variances of X and
Y, respectively.

"5.1-22. Three uncorrelated random variables Xt. X:. and ,Yr have'means *1 : l.

I",r,l,lltJ';li,Tii:Tl"l;ff HJIIJ-1";";';,,:rik'i;i,"o
(a) ttr mean value. (b) the variance of Y.

5.1-23. Given ll' = (aX + 3 f )r where X and I are zero-mean random variables with
variances oi:4 and ol' : 16. Their correlation coefficient is p - -0.5.
(a) Find a value for the parameter a that minimizes the mean value of 14,'.
(6) Find the minimum mean value.

5.1-24. Tu'o random variables have a uniform densitl"on a circular region defined by

.fx.v(x,r, : t.'.r2 J;..*'ff ,,


I

I
Find the mean value of the function g(X, )') : X2 + Y2.
*5.1-25. Define the conditional expected value of a function g(X, Y) of random variables X and Y as

E[g(X, Y)|B] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\, f_{X,Y}(x, y|B)\,dx\,dy

(a) If event B is defined as B = \{y_a < Y \le y_b\}, where y_a < y_b are constants, evaluate E[g(X, Y)|B]. (Hint: Use results of Problem 4.4-8.)
(b) If B is defined by B = \{Y = y\}, what does the conditional expected value of part (a) become?

5.1-26. For random variables X and Y having \bar{X} = 1, \bar{Y} = -2, \sigma_X^2 = 6, \sigma_Y^2 = 9, and \rho = -1/2, find (a) the covariance of X and Y, (b) the correlation of X and Y, and (c) the moments m_{20} and m_{02}.

5.1-27. Given \bar{X} = 1, \overline{X^2} = 2, \bar{Y} = 2, \overline{Y^2} = 4, and C_{XY} = -1 for random variables X and Y.
(a) Find \sigma_X^2, \sigma_Y^2, R_{XY}, and \rho.
(b) What is the mean value of the random variable W = (X + 3Y)^2 + 2X + 3?

5.1-28. Let X and Y be statistically independent random variables with \bar{X} = 1, \overline{X^2} = 5, \bar{Y} = 1, and \overline{Y^2} = 4. For a random variable W = X - 2Y + 1, find:
(a) R_{XY}, (b) R_{XW}, (c) R_{YW}, and (d) C_{XW}. (e) Are X and W uncorrelated?

5.1-29. Statistically independent random variables X and Y have moments m_{10} = 2, m_{20} = 14, m_{02} = 12, and m_{11} = -6. Find the moment \mu_{22}.

5.1-30. A joint density is given as

f_{X,Y}(x, y) = \begin{cases} x(y + 1.5) & 0 < x < 1 \text{ and } 0 < y < 1 \\ 0 & \text{elsewhere} \end{cases}

Find all the joint moments m_{nk}, n and k = 0, 1, ....

5.1-31. Find all the joint central moments \mu_{nk}, n and k = 0, 1, ..., for the density of Problem 5.1-30.

5.1-32. Random variables X and Y are defined by the joint density of Problem 4.3-19. Find all first- and second-order joint moments for these random variables. Are X and Y uncorrelated?
*5.1-33. In a control system, a random voltage X is known to have a mean value \bar{X} = m_1 = -2 V and a second moment \overline{X^2} = m_2 = 9 V^2. If the voltage X is amplified by an amplifier that gives an output Y = -1.5X + 2, find \bar{Y}, \overline{Y^2}, \sigma_Y^2, and R_{XY}.
5.1-34. Two random variables X and Y are defined by \bar{X} = 0, \bar{Y} = -1, \overline{X^2} = 2, \overline{Y^2} = 4, and R_{XY} = -2. Two new random variables W and U are:

W = 2X + Y
U = -X - 3Y

Find \bar{W}, \bar{U}, \overline{W^2}, \overline{U^2}, R_{WU}, and \sigma_U^2.
5.1-35. Statistically independent random variables X and Y have respective means \bar{X} = 1 and \bar{Y} = -1/2. Their second moments are \overline{X^2} = 4 and \overline{Y^2} = 11/4. Another random variable is defined as W = 3X^2 + 2Y + 1. Find \bar{W}, \overline{W^2}, R_{XY}, C_{XY}, R_{XW}, and R_{YW}.
5.t-36. Determine the correlation R;.y and correlation coefficient for the randqri
variables defined in Problem 4.5-9.

5.1-37. The cosine inequality, sometimes called the Schwarz inequality, for random variables X and Y is

$$[E(XY)]^2 \le E(X^2)\,E(Y^2)$$

Show its validity. (Hint: Expand the nonnegative quantity $E[(aX - Y)^2]$, where $a$ is a real parameter.)

5.1-38. The triangle inequality for random variables X and Y is

$$\{E[(X + Y)^2]\}^{1/2} \le [E(X^2)]^{1/2} + [E(Y^2)]^{1/2}$$

Show its validity. (Hint: Expand and combine $E[(X + Y)^2]$ and $\{[E(X^2)]^{1/2} + [E(Y^2)]^{1/2}\}^2$, and use the cosine inequality of Problem 5.1-37.)

*5.2-1. Find the joint characteristic function for X and Y defined in Problem 5.1-3.

*5.2-2. Show that the joint characteristic function of N independent random variables $X_n$ having characteristic functions $\Phi_{X_n}(\omega_n)$ is

$$\Phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N) = \prod_{n=1}^{N} \Phi_{X_n}(\omega_n)$$
*5.2-3. For N random variables $X_1, \ldots, X_N$, show that

$$|\Phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N)| \le \Phi_{X_1,\ldots,X_N}(0, \ldots, 0) = 1$$

*5.2-4. For two zero-mean gaussian random variables X and Y, show that their joint characteristic function is

$$\Phi_{X,Y}(\omega_1, \omega_2) = \exp\{-\tfrac{1}{2}[\sigma_X^2\omega_1^2 + 2\rho\sigma_X\sigma_Y\omega_1\omega_2 + \sigma_Y^2\omega_2^2]\}$$


*5.2-5. Find the joint characteristic function for random variables X and Y defined by

$$f_{X,Y}(x, y) = (1/2\pi)\,\mathrm{rect}(x/\pi)\,\mathrm{rect}[(x + y)/\pi]\cos(x + y)$$

Use the result to find the marginal characteristic functions of X and Y.
*5.2-6. Random variables $X_1$ and $X_2$ have the joint characteristic function

$$\Phi_{X_1,X_2}(\omega_1, \omega_2) = [(1 - j2\omega_1)(1 - j2\omega_2)]^{-N/2}$$

where $N > 0$ is an integer.
(a) Find the correlation and the moments $m_{20}$ and $m_{02}$.
(b) Determine the means of $X_1$ and $X_2$.
(c) What is the correlation coefficient?
*5.2-7. The joint probability density of two discrete random variables X and Y consists of impulses located at all lattice points $(mb, nd)$, where $m = 0, 1, \ldots, M$ and $n = 1, 2, \ldots, N$, with $b > 0$ and $d > 0$ being constants. All possible points are equally probable. Determine the joint characteristic function.
*5.2-8. Let $X_k$, $k = 1, 2, \ldots, K$, be statistically independent Poisson random variables, each with its own variance $b_k$ (Problem 3.2-13). Show that the sum $X = X_1 + X_2 + \cdots + X_K$ is a Poisson random variable. (Hint: Use the results of Problems 5.2-2 and 3.2-31.)
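Before working the proof, a short simulation can make the claim plausible. The sketch below assumes the Statistics Toolbox functions poissrnd and poisspdf are available; the individual means are arbitrary choices for illustration.

    % Empirical check that a sum of independent Poisson variables is again Poisson.
    b = [0.5 1.3 2.2];                               % arbitrary means (equal to the variances)
    N = 1e5;
    X = sum(poissrnd(repmat(b, N, 1)), 2);           % N samples of the sum X1 + X2 + X3
    k = 0:15;
    emp = arrayfun(@(kk) mean(X == kk), k);          % empirical probabilities P[X = k]
    plot(k, emp, 'o', k, poisspdf(k, sum(b)), '-');  % compare with a single Poisson law
    legend('empirical', 'Poisson with mean sum(b)');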

*5.2-9. Show that the sum X of N statistically independent Poisson random variables $X_n$ with different means $b_n$ is also a Poisson random variable, but with mean $b = b_1 + b_2 + \cdots + b_N$. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]
*5.2-10. Show that the sum of N identically distributed, statistically independent exponential random variables $X_i$, as given by (2.5-9) with $a = 0$ and $b$ replaced by $1/a$, is an Erlang random variable, as defined in Problem 3.2-32 and in Appendix F. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]

*5.2-11. The chi-square random variable with one degree of freedom is defined by the density

$$f_X(x) = \frac{u(x)\, e^{-x/2}}{\Gamma(1/2)\sqrt{2x}}$$

where $\Gamma(1/2)$ is a constant approximately equal to 1.772. Show that the sum X of N identically distributed, statistically independent chi-square random variables, each with one degree of freedom, is a chi-square random variable with N degrees of freedom as defined in Problem 3.2-27 and Appendix F [see (F-35) through (F-39)]. [Hint: Use (5.2-7) and the result of Problem 5.2-2.]

*5.3-1. Zero-mean gaussian random variables X and Y have variances $\sigma_X^2 = 3$ and $\sigma_Y^2 = 4$, respectively, and a correlation coefficient $\rho$.
(a) Write an expression for the joint density function.
(b) Show that a rotation of coordinates through the angle given by (5.3-11) will produce new statistically independent random variables.

"5.3-2. Find the conditional density functions fx@lY : f) and fy(ilX: x) applic-
able to two gaussian random variables X and Y defined by (5.3-l) and show
that they are also gaussian.

5.3-3. Assume $\sigma_X = \sigma_Y = \sigma$ in (5.3-1) and show that the locus of the maximum of the joint density is a line passing through the point $(\bar{X}, \bar{Y})$ with slope $\pi/4$ (or $-\pi/4$) when $\rho = 1$ (or $-1$).
5.3-4. Two gaussian random variables X and Y have variances $\sigma_X^2 = 9$ and $\sigma_Y^2 = 4$, respectively, and correlation coefficient $\rho$. It is known that a coordinate rotation by an angle $-\pi/8$ results in new random variables $Y_1$ and $Y_2$ that are uncorrelated. What is $\rho$?
*5.3-5. Let X and Y be jointly gaussian random variables where $\sigma_X^2 = \sigma_Y^2$ and $\rho = -1$. Find a transformation matrix such that new random variables $Y_1$ and $Y_2$ are statistically independent.

5.3-6. Gaussian random variables X and Y have first- and second-order moments $\bar{X} = -1.0$, $\overline{X^2} = 1.16$, $\bar{Y} = 1.5$, $\overline{Y^2} = 2.89$, and $R_{XY} = -1.724$. Find (a) $C_{XY}$ and (b) $\rho$. Also find the angle $\theta$ of a coordinate rotation that will generate new random variables that are statistically independent.

5.3-7. Suppose the annual snowfalls (accumulated depths in meters) for two nearby alpine ski resorts are adequately represented by jointly gaussian random variables X and Y for which $\rho = 0.82$, $\sigma_X = 1.5\,\mathrm{m}$, $\sigma_Y = 1.2\,\mathrm{m}$, and $R_{XY} = 81.476\,\mathrm{m}^2$. If the average snowfall at one resort is 10 m, what is the average at the other resort?

5.3-8. Two gaussian random variables X and Y have a correlation coefficient $\rho = 0.25$. The standard deviation of X is 1.9. A linear transformation (coordinate rotation of $\pi/6$) is known to transform X and Y to new random variables that are statistically independent. What is $\sigma_Y$?

*5.3-9. Gaussian random variables $X_1$ and $X_2$, for which $\bar{X}_1 = 2$, $\sigma_{X_1}^2 = 9$, $\bar{X}_2 = -1$, $\sigma_{X_2}^2 = 4$, and $C_{X_1X_2} = -3$, are transformed to new random variables $Y_1$ and $Y_2$ according to

$$Y_1 = -X_1 + X_2 \qquad Y_2 = -2X_1 - 3X_2$$

Find (a) $\bar{Y}_1$, (b) $\bar{Y}_2$, (c) $\rho_{X_1X_2}$, (d) $\sigma_{Y_1}^2$, (e) $\sigma_{Y_2}^2$, and (f) $C_{Y_1Y_2}$.

*5.4-1. Random variables X and Y, having the joint density

.f x.t.G, r') = ({;u(x - 2)u(.r' - I ).x.r,2 exp(4 - ?.r-,')

undergo a transformation

[rl =
[:]
to generate new random variables $Y_1$ and $Y_2$.
(a) Find the joint density of $Y_1$ and $Y_2$.
(b) Show what points in the $y_1$-$y_2$ plane correspond to a nonzero value of the new density.

*5.4-2. Three random variables $X_1$, $X_2$, and $X_3$ represent samples of a random noise voltage taken at three times. Their covariance matrix is defined by

:
r.,r
[i i t,g i.l]
A transformation matrix

rrr :11 + i]
converts the variables to new random variables $Y_1$, $Y_2$, and $Y_3$. Find the covariance matrix of the new random variables.

*5.4-3. Determine the density of $Y = (X_1^2 + X_2^2)^{1/2}$ when $X_1$ and $X_2$ are jointly gaussian random variables with zero means and the same variance. (Hint: Use the results of Example 5.4-2.)
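Whatever density is derived analytically can be compared against a simple simulation. The sketch below picks a common standard deviation of 1 and a bin count of 100 purely for illustration; it only estimates the density of Y from a normalized histogram.

    % Histogram estimate of the density of Y = sqrt(X1^2 + X2^2) for zero-mean,
    % equal-variance gaussian X1 and X2 (sigma = 1 is an arbitrary choice).
    sigma = 1;  N = 1e6;
    X1 = sigma * randn(N, 1);
    X2 = sigma * randn(N, 1);
    Y = sqrt(X1.^2 + X2.^2);
    [counts, centers] = hist(Y, 100);                % 100 bins is an arbitrary choice
    binw = centers(2) - centers(1);
    plot(centers, counts / (N * binw));              % normalized so the curve estimates f_Y(y)
    xlabel('y'); ylabel('estimated f_Y(y)');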

*5.5-1. Zero-mean gaussian random variables $X_1$, $X_2$, and $X_3$ having a covariance matrix

$$[C_X] = \begin{bmatrix} 4 & 2.05 & 1.05 \\ 2.05 & 4 & 2.05 \\ 1.05 & 2.05 & 4 \end{bmatrix}$$

are transformed to new variables

$$Y_1 = 5X_1 + 2X_2 - X_3$$
$$Y_2 = -X_1 + 3X_2 + X_3$$
$$Y_3 = 2X_1 - X_2 + 2X_3$$

(a) Find the covariance matrix of $Y_1$, $Y_2$, and $Y_3$.
(b) Write an expression for the joint density function of $Y_1$, $Y_2$, and $Y_3$.
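Part (a) rests on the standard result that a linear transformation $[Y] = [T][X]$ of zero-mean variables has covariance $[C_Y] = [T][C_X][T]^t$, so the matrix arithmetic is easy to check numerically. The MATLAB lines below are only such a check of the hand computation.

    % Numerical check of [C_Y] = [T]*[C_X]*[T]' for the transformation of Problem 5.5-1.
    CX = [4 2.05 1.05; 2.05 4 2.05; 1.05 2.05 4];    % given covariance matrix of X1, X2, X3
    T  = [5 2 -1; -1 3 1; 2 -1 2];                   % rows give Y1, Y2, Y3 in terms of X1, X2, X3
    CY = T * CX * T'                                 % covariance matrix of Y1, Y2, Y3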
*5.5-2. Two gaussian random variables $X_1$ and $X_2$ are defined by the mean and covariance matrices

rr --[ l] rc,r -l-),n -l'n)


Two new random variables Y1 and Y2 are formed using the transformation

rn=[i i]
Find the matrices (a) $[\bar{Y}]$ and (b) $[C_Y]$. (c) Also find the correlation coefficient of $Y_1$ and $Y_2$.

*5.6-1. Show that (5.6-2) results from the transformations of (5.6-1).


*5.6-2. Extend the text and show that (5.6-1) can be replaced by

$$Y_1 = T_1(X_1, X_2) = \bar{Y}_1 + \sigma_{Y_1}\sqrt{-2\ln(X_1)}\,\cos(2\pi X_2)$$
$$Y_2 = T_2(X_1, X_2) = \bar{Y}_2 + \sigma_{Y_2}\sqrt{-2\ln(X_1)}\,\sin(2\pi X_2)$$

to generate statistically independent gaussian random variables $Y_1$ and $Y_2$ with respective means $\bar{Y}_1$ and $\bar{Y}_2$ and respective variances $\sigma_{Y_1}^2$ and $\sigma_{Y_2}^2$.
*5.6-3. Extend the text that leads to (5.6-7) and find transformations of two statistically independent random variables $X_1$ and $X_2$, both uniform on (0, 1), that will directly create two correlated gaussian random variables $W_1$ and $W_2$ having correlation coefficient $\rho_{W_1W_2}$, means $\bar{W}_1$ and $\bar{W}_2$, and variances $\sigma_{W_1}^2$ and $\sigma_{W_2}^2$.
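One standard construction (not necessarily the transformation the problem intends) is to form two independent standard gaussian variables from the uniforms, as in Problem 5.6-2, and then take a linear combination that imposes the desired correlation. In the MATLAB sketch below the target means, standard deviations, and correlation coefficient are arbitrary example values.

    % Correlated gaussian pair W1, W2 generated from two independent uniforms on (0,1).
    N = 1e5;
    m1 = 1;  m2 = -2;  s1 = 2;  s2 = 3;  rho = 0.6;  % arbitrary target means, std devs, and rho
    X1 = rand(N, 1);  X2 = rand(N, 1);               % statistically independent, uniform on (0,1)
    Z1 = sqrt(-2*log(X1)) .* cos(2*pi*X2);           % independent zero-mean, unit-variance pair
    Z2 = sqrt(-2*log(X1)) .* sin(2*pi*X2);
    W1 = m1 + s1 * Z1;
    W2 = m2 + s2 * (rho * Z1 + sqrt(1 - rho^2) * Z2);
    corrcoef(W1, W2)                                 % sample value should be close to rho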
*5.6-4. Work Problem 5.6-3, except generate the random variables R and $\Theta$ for which (5.6-16) applies.

*5.6-5. Repeat Example 5.6-1, except use N = 1000 values of $X_1$ and 1000 values of $X_2$. Note the improvement in the accuracy of the estimated quantities for random variables $W_1$ and $W_2$ as compared to the example.

5.7-1. Find the mean value of the power estimator of (5.7-7) and give arguments why
the estimator is unbiased.

5.7-2. Find the variance of the power estimator of (5.7-7) and show that it approaches zero as N becomes infinite.
5.7-3. If the factor $1/(N - 1)$ in the variance estimator of (5.7-8) is replaced by $1/N$, show that the mean of the modified estimator is biased. Determine the amount of bias. How does the bias behave as N becomes very large?
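The bias is easy to see empirically by averaging each form of the estimator over many independent data sets drawn from a distribution of known variance. The sketch below uses unit-variance gaussian samples, with arbitrary choices for N and the number of trials.

    % Empirical comparison of the 1/(N-1) and 1/N forms of the variance estimator.
    N = 10;  trials = 1e5;                           % arbitrary sample size and trial count
    v1 = zeros(trials, 1);  v2 = zeros(trials, 1);
    for k = 1:trials
        x = randn(N, 1);                             % data whose true variance equals 1
        v1(k) = sum((x - mean(x)).^2) / (N - 1);     % 1/(N-1) form
        v2(k) = sum((x - mean(x)).^2) / N;           % modified 1/N form
    end
    fprintf('average of 1/(N-1) form: %.4f\n', mean(v1));
    fprintf('average of 1/N form:     %.4f\n', mean(v2));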

5.7-4. Rework Example 5.7-3 for 1000 values of X generated by MATLAB. Compare the new values of the sample mean, second moment, and variance with those found in the example. Are they more accurate?
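The details of Example 5.7-3, in particular the distribution used to generate the data, are not reproduced here, so the fragment below only sketches the three sample estimators for N = 1000 values; uniform random numbers are a placeholder for the example's data.

    % Sample mean, second moment, and variance for N = 1000 generated values.
    N = 1000;
    x = rand(N, 1);                                  % placeholder data; substitute the example's X
    m1 = sum(x) / N;                                 % sample mean
    m2 = sum(x.^2) / N;                              % sample second moment
    v  = sum((x - m1).^2) / (N - 1);                 % sample variance
    fprintf('mean %.4f   second moment %.4f   variance %.4f\n', m1, m2, v);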

*5.8-1. A complex random variable Z is defined by

$$Z = \cos(X) + j\sin(Y)$$

where X and Y are independent real random variables uniformly distributed from $-\pi$ to $\pi$.
(a) Find the mean value of Z.
(b) Find the variance of Z.
*5.8-2. Complex random variables $Z_1$ and $Z_2$ have zero means. The correlation of the real parts of $Z_1$ and $Z_2$ is 4, while the correlation of the imaginary parts is 6. The real part of $Z_1$ and the imaginary part of $Z_2$ are statistically independent as a pair, as are the imaginary part of $Z_1$ and the real part of $Z_2$.
(a) What is the correlation of $Z_1$ and $Z_2$?
(b) Are $Z_1$ and $Z_2$ statistically independent?
