EE420/500 Class Notes 7/22/2009 John Stensby

Chapter 7 - Correlation Functions


Let X(t) denote a random process. The autocorrelation of X is defined as

R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f(x_1, x_2; t_1, t_2)\, dx_1\, dx_2 .   (7-1)

The autocovariance function is defined as

C_X(t_1, t_2) = E[\{X(t_1) - \eta_X(t_1)\}\{X(t_2) - \eta_X(t_2)\}] = R_X(t_1, t_2) - \eta_X(t_1)\eta_X(t_2) ,   (7-2)

and the correlation function is defined as

r_X(t_1, t_2) = \frac{C_X(t_1, t_2)}{\sigma_X(t_1)\sigma_X(t_2)} = \frac{R_X(t_1, t_2) - \eta_X(t_1)\eta_X(t_2)}{\sigma_X(t_1)\sigma_X(t_2)} .   (7-3)

If X(t) is at least wide sense stationary, then RX depends only on the time difference τ = t1 - t2, and we write

R_X(\tau) = E[X(t)X(t+\tau)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f(x_1, x_2; \tau)\, dx_1\, dx_2 .   (7-4)

Finally, if X(t) is ergodic we can write

R_X(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} X(t)X(t+\tau)\, dt .   (7-5)

Function rX(τ) can be thought of as a "measure of statistical similarity" of X(t) and X(t+τ). If rX(τ0) = 0, the samples X(t) and X(t+τ0) are said to be uncorrelated.
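For an ergodic process, (7-5) suggests a practical estimator: average lagged products along one long sample path. The sketch below is illustrative only (it assumes NumPy, and a synthetic unit-variance white sequence stands in for X(t); none of these choices come from the notes).

import numpy as np

def autocorr_estimate(x, max_lag):
    # Time-average estimate of R_X(k) = E[X(t)X(t+k)] per (7-5),
    # computed from a single sample path x[0], ..., x[N-1].
    N = len(x)
    return np.array([np.mean(x[:N - k] * x[k:]) for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)      # stand-in zero-mean, ergodic sample path
print(autocorr_estimate(x, 5))        # ~[1, 0, 0, ...] for unit-variance white samples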


Properties of Autocorrelation Functions for Real-Valued, WSS Random Processes

1. RX(0) = E[X(t)X(t)] = Average Power.


2. RX(τ) = RX(-τ). The autocorrelation function of a real-valued, WSS process is even.
Proof:

R_X(\tau) = E[X(t)X(t+\tau)] = E[X(t-\tau)X(t-\tau+\tau)] \quad \text{(due to WSS)}
          = R_X(-\tau) .   (7-6)

3. |RX(τ)| ≤ RX(0). The autocorrelation is maximum at the origin.


Proof:

E[\{X(t) \pm X(t+\tau)\}^2] = E[X(t)^2 + X(t+\tau)^2 \pm 2X(t)X(t+\tau)]
= R_X(0) + R_X(0) \pm 2R_X(\tau) \ge 0 .   (7-7)

Hence, |RX(τ)| ≤ RX(0) as claimed.


4. Assume that WSS process X can be represented as X(t) = η + Xac(t), where η is a constant and
E[Xac(t)] = 0. Then,

R_X(\tau) = E[(\eta + X_{ac}(t))(\eta + X_{ac}(t+\tau))]
= \eta^2 + 2\eta\, E[X_{ac}(t)] + E[X_{ac}(t)X_{ac}(t+\tau)]   (7-8)
= \eta^2 + R_{X_{ac}}(\tau) .

5. If each sample function of X(t) has a periodic component of frequency ω, then RX(τ) will have a periodic component of the same frequency ω.


Example 7-1: Consider X(t) = Acos(ωt+θ) + N(t), where A and ω are constants, random
variable θ is uniformly distributed over (0, 2π), and wide sense stationary N(t) is independent of
θ for every time t. Find RX(τ), the autocorrelation of X(t).

R_X(\tau) = E[\{A\cos(\omega t + \theta) + N(t)\}\{A\cos(\omega[t+\tau] + \theta) + N(t+\tau)\}]
= \frac{A^2}{2} E[\cos(2\omega t + \omega\tau + 2\theta) + \cos(\omega\tau)] + E[A\cos(\omega t + \theta)\, N(t+\tau)]
\quad + E[N(t)\, A\cos(\omega[t+\tau] + \theta)] + E[N(t)N(t+\tau)]   (7-9)
= \frac{A^2}{2}\cos(\omega\tau) + R_N(\tau) .

(The double-frequency term and both cross terms average to zero: θ is uniform over (0, 2π) and independent of N(t), so E[cos(2ωt + ωτ + 2θ)] = 0 and E[cos(ωt + θ)] = 0.)

So, RX(τ) contains a component at ω, the same frequency as the periodic component in X.
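This conclusion is easy to confirm numerically. The sketch below is a rough illustration (A, ω, the sampling grid, and the unit-variance white samples standing in for N(t) are arbitrary assumed values): the time-average estimate from one realization retains the (A²/2)cos(ωτ) component.

import numpy as np

rng = np.random.default_rng(1)
A, w, dt, N = 2.0, 2*np.pi*5, 1e-3, 200_000
t = np.arange(N) * dt
theta = rng.uniform(0, 2*np.pi)                     # one draw of the random phase
x = A*np.cos(w*t + theta) + rng.standard_normal(N)  # N(t): white, unit-variance samples

lags = np.arange(1, 1000)                           # skip lag 0, where R_N(0) also enters
R_hat = np.array([np.mean(x[:N-k] * x[k:]) for k in lags])
R_per = (A**2/2) * np.cos(w*lags*dt)                # periodic component predicted by (7-9)
print(np.max(np.abs(R_hat - R_per)))                # small statistical residual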
6. Suppose that X(t) is ergodic, has zero mean, and has no periodic components; then

\lim_{\tau\to\infty} R_X(\tau) = 0 .   (7-10)

That is, X(t) and X(t+τ) become uncorrelated for large τ.


7. Autocorrelation functions cannot have an arbitrary shape. As will be discussed in Chapter 8,
for a WSS random process X(t) with autocorrelation RX(τ), the Fourier transform of RX(τ) is the
power density spectrum (or simply power spectrum) of the random process X. And, the power

spectrum must be non-negative. Hence, we have the additional requirement that

F[R_X(\tau)] = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j\omega\tau}\, d\tau = \int_{-\infty}^{\infty} R_X(\tau)\cos(\omega\tau)\, d\tau \ge 0   (7-11)

for all ω (the even nature of RX was used to obtain the right-hand side of (7-11)). Because of
this, in applications, you will not find autocorrelation functions with flat tops, vertical sides, or


any jump discontinuities in amplitude (these features cause “oscillatory behavior”, and negative
values, in the Fourier transform). Autocorrelation R(τ) must vary smoothly with τ.
Example 7-2: Random Binary Waveform

Process X(t) takes on only two values: ±A. Every ta seconds a sample function of X
either “toggles” value or it remains the same (positive constant ta is known). Both possibilities
are equally likely (i.e., P[“toggle”] = P[“no toggle”] = 1/2). The possible transitions occur at
times t0 + kta, where k is an integer, -∞ < k < ∞. Time t0 is a random variable that is uniformly
distributed over [0, ta]. Hence, given an arbitrary sample function from the ensemble, a “toggle”
can occur anytime. Starting from t = t0, sample functions are constant over intervals of length ta,
and the constant can change sign from one ta interval to the next. The value of X(t) over one "ta-
interval" is independent of its value over any other "ta-interval". Figure 7-1 depicts a typical
sample function of the random binary waveform. Figure 7-2 is a timing diagram that illustrates
the "ta intervals". The algorithm used to generate the process is not changing with time. As a
result, it is possible to argue that the process is stationary. Also, since +A and –A are equally
likely values for X at any time t, it is obvious that X(t) has zero mean.

Fig. 7-1: Sample function of a simple binary random process (X(t) vs. time; the level is ±A on each "ta interval", and sample values X(t1) = X1 and X(t1+τ) = X2 are marked at times t1 and t1+τ).


" ta Intervals "

t0 -t a 0 t0 t0+ta t0+2ta t0+3ta t0+4ta t0+5t

Fig. 7-2: Time line illustrating the independent "ta intervals".

To determine the autocorrelation function RX(τ), we must consider two basic cases.
1) Case |τ| > ta. Then, the times t1 and t1 + τ cannot be in the same "ta interval". Hence, X(t1) and X(t1 + τ) must be independent, so that

R(\tau) = E[X(t_1)X(t_1+\tau)] = E[X(t_1)]\, E[X(t_1+\tau)] = 0, \quad |\tau| > t_a .   (7-12)

2) Case |τ| < ta. To calculate R(τ) for this case, we must first determine an expression for the probability P[t1 and t1 + τ in the same "ta interval"]. We do this in two parts: the first part is i) 0 < τ < ta, and the second part is ii) -ta < τ ≤ 0.
i) 0 < τ < ta. Times t1 and t1+τ may, or may not, be in the same "ta-interval". However, we write

P[t_1 \text{ and } t_1+\tau \text{ in same } t_a \text{ interval}] = P[t_0 \le t_1 \le t_1+\tau < t_0 + t_a]
= P[t_1 + \tau - t_a < t_0 \le t_1]
= \frac{1}{t_a}\left[t_1 - (t_1 + \tau - t_a)\right]   (7-13)
= \frac{t_a - \tau}{t_a}, \quad 0 < \tau < t_a .

ii) -ta < τ ≤ 0. Times t1 and t1+τ may, or may not, be in the same "ta-interval". However, we write

P[t_1 \text{ and } t_1+\tau \text{ in same } t_a \text{ interval}] = P[t_0 \le t_1+\tau \le t_1 < t_0 + t_a]
= P[t_1 - t_a < t_0 \le t_1 + \tau]
= \frac{1}{t_a}\left[t_1 + \tau - (t_1 - t_a)\right]   (7-14)
= \frac{t_a + \tau}{t_a}, \quad -t_a < \tau \le 0 .

Combining (7-13) and (7-14), we can write

P[t_1 \text{ and } t_1+\tau \text{ in same } t_a \text{ interval}] = \frac{t_a - |\tau|}{t_a}, \quad |\tau| < t_a .   (7-15)

Now, the product X(t1)X(t1+τ) takes on only two values, plus or minus A². If t1 and t1 + τ are in the same "ta-interval", then X(t1)X(t1+τ) = A². If t1 and t1 + τ are in different "ta-intervals", then X(t1) and X(t1+τ) are independent, and X(t1)X(t1+τ) = ±A² equally likely. For |τ| < ta we can write

R(\tau) = E[X(t_1)X(t_1+\tau)]
= A^2\, P[t_1 \text{ and } t_1+\tau \text{ in same } t_a \text{ interval}]
\quad + A^2\, P[\{t_1 \text{ and } t_1+\tau \text{ in different } t_a \text{ intervals}\},\; X(t_1)X(t_1+\tau) = A^2]   (7-16)
\quad - A^2\, P[\{t_1 \text{ and } t_1+\tau \text{ in different } t_a \text{ intervals}\},\; X(t_1)X(t_1+\tau) = -A^2] .

However, the last two terms on the right-hand side of (7-16) cancel out (read again the two
sentences after (7-15)). Hence, we can write

R(\tau) = A^2\, P[t_1 \text{ and } t_1+\tau \text{ in same } t_a \text{ interval}] .   (7-17)


Fig. 7-3: Autocorrelation of the random binary waveform (R(τ) vs. τ; a triangle with peak value A² at τ = 0, falling linearly to zero at τ = ±ta).

Finally, substitute (7-15) into (7-17) to obtain

R(\tau) = E[X(t_1)X(t_1+\tau)] =
\begin{cases}
A^2\left[\dfrac{t_a - |\tau|}{t_a}\right], & |\tau| < t_a \\
0, & |\tau| > t_a
\end{cases}   (7-18)

Equation (7-18) provides a formula for R(τ) for the random binary signal described by Figure 7-1. A plot of this formula for R(τ) is given by Figure 7-3.
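Equation (7-18) can be checked by simulation. The sketch below is illustrative (A, ta, the grid spacing, and the run length are arbitrary choices); it uses the fact that P[toggle] = 1/2 makes the levels on distinct ta-intervals independent and equally likely ±A.

import numpy as np

rng = np.random.default_rng(2)
A, ta, dt = 1.0, 1.0, 0.01
per = round(ta / dt)                        # samples per ta-interval
levels = rng.choice([-A, A], size=5_000)    # independent +/-A level on each interval
x = np.repeat(levels, per)                  # one long sample path on a fine grid
x = x[rng.integers(0, per):]                # random offset plays the role of t0

lags = np.arange(0, 2*per)
R_hat = np.array([np.mean(x[:len(x)-k] * x[k:]) for k in lags])
tau = lags * dt
R_tri = np.where(np.abs(tau) < ta, A**2*(ta - np.abs(tau))/ta, 0.0)   # (7-18)
print(np.max(np.abs(R_hat - R_tri)))        # a few percent for this run length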
Poisson Random Points Review

The topic of random Poisson points is discussed in Chapters 1, 2 and Appendix 9B. Let
n(t1, t2) denote the number of Poisson points in the time interval (t1, t2). Then, these points are
distributed in a Poisson manner with

P[n(t_1, t_2) = k] = e^{-\lambda\tau}\, \frac{(\lambda\tau)^k}{k!} ,   (7-19)

where τ ≡ |t1 - t2|, and λ > 0 is a known parameter. That is, n(t1, t2) is Poisson distributed with parameter λτ. Note that n(t1, t2) is an integer-valued random variable with


E[n(t_1, t_2)] = \lambda|t_1 - t_2|
VAR[n(t_1, t_2)] = \lambda|t_1 - t_2|   (7-20)
E[n^2(t_1, t_2)] = VAR[n(t_1, t_2)] + (E[n(t_1, t_2)])^2 = \lambda|t_1 - t_2| + \lambda^2|t_1 - t_2|^2 .

Note that E[n(t1,t2)] and VAR[n(t1,t2)] are the same, an unusual result for random quantities. If
(t1, t2) and (t3, t4) are non-overlapping, then the random variables n(t1, t2) and n(t3, t4) are
independent. Finally, constant λ is the average point density. That is, λ represents the average
number of points in a unit length interval.
Poisson Random Process

Define the Poisson random process

X(t) = 0, \quad t = 0
     = n(0, t), \quad t > 0 .   (7-21)

A typical sample function is illustrated by Figure 7-4.


Mean of Poisson Process

For any fixed t ≥ 0, X(t) is a Poisson random variable with parameter λt. Hence,

E[X(t)] = λt, t ≥ 0. (7-22)

The time varying nature of the mean implies that process X(t) is nonstationary.

Fig. 7-4: Typical sample function of the Poisson random process (X(t) vs. time; a staircase that steps up by one at the location of each Poisson point).


Autocorrelation of Poisson Process

The autocorrelation is defined as R(t1, t2) = E[X(t1)X(t2)] for t1 ≥ 0 and t2 ≥ 0. First, note
that

R(t, t) = \lambda t + \lambda^2 t^2, \quad t > 0 ,   (7-23)

a result obtained from the known 2nd moment of a Poisson random variable. Next, we show that

R(t_1, t_2) = \lambda t_2 + \lambda^2 t_1 t_2 \quad \text{for } 0 < t_2 < t_1
            = \lambda t_1 + \lambda^2 t_1 t_2 \quad \text{for } 0 < t_1 < t_2 .   (7-24)

Proof: case 0 < t1 < t2

We consider the case 0 < t1 < t2. The random variables X(t1) and {X(t2) - X(t1)} are
independent since they are for non-overlapping time intervals. Also, X(t1) has mean λt1, and
{X(t2) - X(t1)} has mean λ(t2 - t1). As a result,

E[X(t_1)\{X(t_2) - X(t_1)\}] = E[X(t_1)]\, E[X(t_2) - X(t_1)] = \lambda t_1 \cdot \lambda(t_2 - t_1) .   (7-25)

Use this result to obtain

R(t_1, t_2) = E[X(t_1)X(t_2)] = E[X(t_1)\{X(t_1) + X(t_2) - X(t_1)\}]
= E[X^2(t_1)] + E[X(t_1)\{X(t_2) - X(t_1)\}] = \lambda t_1 + \lambda^2 t_1^2 + \lambda t_1 \cdot \lambda(t_2 - t_1)   (7-26)
= \lambda t_1 + \lambda^2 t_1 t_2 \quad \text{for } 0 < t_1 < t_2 .

Case 0 < t2 < t1 is similar to the case shown above. Hence, for the Poisson process, the
autocorrelation function is


R(t_1, t_2) = \lambda t_2 + \lambda^2 t_1 t_2 \quad \text{for } 0 < t_2 < t_1
            = \lambda t_1 + \lambda^2 t_1 t_2 \quad \text{for } 0 < t_1 < t_2 ,   (7-27)

that is, R(t_1, t_2) = \lambda\min(t_1, t_2) + \lambda^2 t_1 t_2.
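A Monte Carlo check of (7-27) is straightforward. In the sketch below (λ, t1, t2, and the trial count are arbitrary assumed values), X(t2) is built from X(t1) plus an independent Poisson count on (t1, t2), mirroring the independent-increment argument above.

import numpy as np

rng = np.random.default_rng(3)
lam, t1, t2, trials = 2.0, 1.5, 3.0, 1_000_000      # case 0 < t1 < t2
X1 = rng.poisson(lam*t1, size=trials)               # X(t1) = n(0, t1)
X2 = X1 + rng.poisson(lam*(t2 - t1), size=trials)   # add independent increment n(t1, t2)
print(np.mean(X1 * X2))                             # Monte Carlo estimate of R(t1, t2)
print(lam*min(t1, t2) + lam**2*t1*t2)               # (7-27); equals 21.0 here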

Semi-Random Telegraph Signal

The Semi-Random Telegraph Signal is defined as

X(0) = 1
X(t) = 1 \quad \text{if the number of Poisson points in } (0, t) \text{ is even}   (7-28)
     = -1 \quad \text{if the number of Poisson points in } (0, t) \text{ is odd}

for -∞ < t < ∞. Figure 7-5 depicts a typical sample function of this process. In what follows, we
find the mean and autocorrelation of the semi-random telegraph signal.
First, note that

P[X(t) = 1] = P[\text{even number of pts in } (0, t)]
= P[0 \text{ pts in } (0,t)] + P[2 \text{ pts in } (0,t)] + P[4 \text{ pts in } (0,t)] + \cdots   (7-29)
= e^{-\lambda|t|}\left(1 + \frac{\lambda^2 t^2}{2!} + \frac{\lambda^4 t^4}{4!} + \cdots\right) = e^{-\lambda|t|}\cosh(\lambda|t|) .

Note that (7-29) is valid for t < 0 since it uses |t|. In a similar manner, we can write

Fig. 7-5: A typical sample function of the semi-random telegraph signal (X(t) toggles between +1 and -1 at the location of each Poisson point).


P[X(t) = -1] = P[\text{odd number of pts in } (0, t)]
= P[1 \text{ pt in } (0,t)] + P[3 \text{ pts in } (0,t)] + P[5 \text{ pts in } (0,t)] + \cdots   (7-30)
= e^{-\lambda|t|}\left(\lambda|t| + \frac{\lambda^3 |t|^3}{3!} + \frac{\lambda^5 |t|^5}{5!} + \cdots\right) = e^{-\lambda|t|}\sinh(\lambda|t|) .

As a result of (7-29) and (7-30), the mean is

E[X(t)] = (+1)\, P[X(t) = +1] + (-1)\, P[X(t) = -1]
= e^{-\lambda|t|}\left(\cosh(\lambda|t|) - \sinh(\lambda|t|)\right)   (7-31)
= e^{-2\lambda|t|} .

The constraint X(0) = 1 causes a nonzero mean that dies out with time. Note that X(t) is not
WSS since its mean is time varying.
Now, find the autocorrelation R(t1, t2). First, suppose that t1 - t2 ≡ τ > 0, and -∞ < t2 < ∞.
If there is an even number of points in (t2, t1), then X(t1) and X(t2) have the same sign and

P[X(t_1) = 1, X(t_2) = 1] = P[X(t_1) = 1 \mid X(t_2) = 1]\; P[X(t_2) = 1]
= \{e^{-\lambda\tau}\cosh(\lambda\tau)\}\{e^{-\lambda|t_2|}\cosh(\lambda|t_2|)\}   (7-32)

P[X(t_1) = -1, X(t_2) = -1] = P[X(t_1) = -1 \mid X(t_2) = -1]\; P[X(t_2) = -1]
= \{e^{-\lambda\tau}\cosh(\lambda\tau)\}\{e^{-\lambda|t_2|}\sinh(\lambda|t_2|)\}   (7-33)

for t1 - t2 ≡ τ > 0, and -∞ < t2 < ∞. If there is an odd number of points in (t2, t1), then X(t1) and X(t2) have different signs, and we have


P[X(t_1) = 1, X(t_2) = -1] = P[X(t_1) = 1 \mid X(t_2) = -1]\; P[X(t_2) = -1]
= \{e^{-\lambda\tau}\sinh(\lambda\tau)\}\{e^{-\lambda|t_2|}\sinh(\lambda|t_2|)\}   (7-34)

P[X(t_1) = -1, X(t_2) = 1] = P[X(t_1) = -1 \mid X(t_2) = 1]\; P[X(t_2) = 1]
= \{e^{-\lambda\tau}\sinh(\lambda\tau)\}\{e^{-\lambda|t_2|}\cosh(\lambda|t_2|)\}   (7-35)

for t1 - t2 ≡ τ > 0, and -∞ < t2 < ∞. The product X(t1)X(t2) is +1 with probability given by the
sum of (7-32) and (7-33); it is -1 with probability given by the sum of (7-34) and (7-35). Hence,
its expected value can be expressed as

R(t_1, t_2) = E[X(t_1)X(t_2)]
= e^{-\lambda\tau}\cosh(\lambda\tau)\left[e^{-\lambda|t_2|}\{\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\}\right]   (7-36)
\quad - e^{-\lambda\tau}\sinh(\lambda\tau)\left[e^{-\lambda|t_2|}\{\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\}\right] .

Using standard identities, this result can be simplified to produce

R(t_1, t_2) = E[X(t_1)X(t_2)]
= e^{-\lambda\tau}\left[\cosh(\lambda\tau) - \sinh(\lambda\tau)\right] e^{-\lambda|t_2|}\left[\cosh(\lambda|t_2|) + \sinh(\lambda|t_2|)\right]
= e^{-\lambda\tau}\left[e^{-\lambda\tau}\right] e^{-\lambda|t_2|}\left[e^{\lambda|t_2|}\right]   (7-37)
= e^{-2\lambda\tau}, \quad \text{for } \tau = t_1 - t_2 > 0 .

Due to symmetry (the autocorrelation function must be even), we can conclude that


R(t_1, t_2) = R(\tau) = e^{-2\lambda|\tau|}, \quad \tau = t_1 - t_2 ,   (7-38)

is the autocorrelation function of the semi-random telegraph signal, a result illustrated by Fig. 7-
6. Again, note that the semi-random telegraph signal is not WSS since it has a time-varying
mean.
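Result (7-38) can also be verified by simulation. The sketch below (λ, t2, τ, and the trial count are arbitrary choices) uses the observation that X(t) flips sign once per Poisson point, so X(t1)X(t2) = (-1)^n(t2,t1) for t1 > t2 > 0, with independent counts on (0, t2) and (t2, t1).

import numpy as np

rng = np.random.default_rng(4)
lam, t2, tau, trials = 1.0, 2.0, 0.7, 1_000_000
n2 = rng.poisson(lam*t2, size=trials)           # Poisson points in (0, t2)
dn = rng.poisson(lam*tau, size=trials)          # independent points in (t2, t2 + tau)
X2 = (-1.0)**n2                                 # +1 for an even count, -1 for odd
X1 = X2 * (-1.0)**dn                            # one sign flip per point in (t2, t1)
print(np.mean(X1 * X2), np.exp(-2*lam*tau))     # estimate vs. (7-38)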
Random Telegraph Signal
Let X(t) denote the semi-random telegraph signal discussed above. Consider the process
Y(t) = αX(t), where α is a random variable that is independent of X(t) for all time. Furthermore,
assume that α takes on only two values: α = +1 and α = -1 equally likely. Then the mean of Y is
E[Y] = E[αX] = E[α]E[X] = 0 for all time. Also, RY(τ) = E[α2]RX(τ) = RX(τ), a result depicted
by Figure 7-6. Y is called the Random Telegraph Signal since it is “entirely random” for all time
t. Note that the Random Telegraph Signal is WSS.

Fig. 7-6: Autocorrelation function R(τ) = exp(-2λ|τ|) for both the semi-random and random telegraph signals.


Autocorrelation of Wiener Process


Consider the Wiener process that was introduced in Chapter 6. If we assume that X(0) =
0 (in many textbooks, this is part of the definition of a Wiener process), then the autocorrelation
of the Wiener process is RX(t1, t2) = 2D{min(t1,t2)}. To see this, first recall that a Wiener process
has independent increments. That is, if (t1, t2) and (t3, t4) are non-overlapping intervals (i.e., 0 ≤
t1 < t2 ≤ t3 < t4), then increment X(t2) - X(t1) is statistically independent of increment X(t4) - X(t3).
Now, consider the case t1 > t2 ≥ 0 and write

R_X(t_1, t_2) = E[X(t_1)X(t_2)] = E[\{X(t_1) - X(t_2) + X(t_2)\}X(t_2)]
= E[\{X(t_1) - X(t_2)\}X(t_2)] + E[X(t_2)X(t_2)]   (7-39)
= 0 + 2D t_2 .

(The first term vanishes because the increment X(t1) - X(t2) is independent of X(t2) and has zero mean; E[X(t2)²] = 2Dt2 follows from Chapter 6.)

By symmetry, we can conclude that

R_X(t_1, t_2) = 2D\,\min(t_1, t_2), \quad t_1, t_2 \ge 0 ,   (7-40)

for the Wiener process X(t), t ≥ 0, with X(0) = 0.
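A simulation sketch for (7-40) (D, the time grid, and the ensemble size are arbitrary assumed values): build paths from independent Gaussian increments with variance 2D dt, so that VAR[X(t)] = 2Dt, then average X(t1)X(t2) across the ensemble.

import numpy as np

rng = np.random.default_rng(5)
D, dt, n_steps, trials = 0.5, 0.01, 300, 20_000
dX = rng.normal(0.0, np.sqrt(2*D*dt), size=(trials, n_steps))
X = np.cumsum(dX, axis=1)                   # Wiener paths with X(0) = 0
i1, i2 = 100, 250                           # t1 = 1.0, t2 = 2.5
print(np.mean(X[:, i1-1] * X[:, i2-1]))     # ensemble estimate of R_X(t1, t2)
print(2*D*min(i1*dt, i2*dt))                # (7-40): 2D min(t1, t2) = 1.0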


Correlation Time
Let X(t) be a zero mean (i.e., E[X(t)] = 0) W.S.S. random process. The correlation time
of X(t) is defined as

\tau_X \equiv \frac{1}{R_X(0)} \int_0^{\infty} R_X(\tau)\, d\tau .   (7-41)

Intuitively, time τx gives some measure of the time interval over which “significant” correlation
exists between two samples of process X(t).


For example, consider the random telegraph signal described above. For this process the
correlation time is

\tau_X = \frac{1}{1} \int_0^{\infty} e^{-2\lambda\tau}\, d\tau = \frac{1}{2\lambda}   (7-42)

(here R_X(0) = 1).

In Chapter 8, we will relate correlation time to the spectral bandwidth (to be defined in Chapter
8) of a W.S.S. process.
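The integral in (7-41) is also easy to evaluate numerically. A minimal sketch (λ and the grid are arbitrary choices) applying a trapezoid rule to the telegraph-signal autocorrelation reproduces (7-42):

import numpy as np

lam = 3.0
tau, d = np.linspace(0.0, 20/lam, 100_001, retstep=True)
R = np.exp(-2*lam*tau)                  # R_X(tau) for the telegraph signal, R_X(0) = 1
tau_x = np.sum((R[:-1] + R[1:])/2) * d  # trapezoid rule for the integral in (7-41)
print(tau_x, 1/(2*lam))                 # both ~0.1667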
Crosscorrelation Functions
Let X(t) and Y(t) denote real-valued random processes. The crosscorrelation of X and Y
is defined as

R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\, y\, f(x, y; t_1, t_2)\, dx\, dy .   (7-43)

The crosscovariance function is defined as

C_{XY}(t_1, t_2) = E[\{X(t_1) - \eta_X(t_1)\}\{Y(t_2) - \eta_Y(t_2)\}] = R_{XY}(t_1, t_2) - \eta_X(t_1)\eta_Y(t_2) .   (7-44)

Let X(t) and Y(t) be WSS random processes. Then X(t) and Y(t) are said to be jointly wide-sense stationary if RXY(t1,t2) = RXY(τ), τ = t1 - t2. For jointly wide-sense stationary processes the crosscorrelation is

R_{XY}(\tau) = E[X(t+\tau)Y(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\, y\, f_{XY}(x, y; \tau)\, dx\, dy .   (7-45)

Warning: Some authors define RXY(τ) = E[X(t)Y(t+τ)]; in the literature, definitions of RXY differ over which process carries the time shift. For RXY, the order of the subscripts is significant! In general, RXY ≠ RYX.


For jointly stationary random processes X and Y, we show some elementary properties of
the cross correlation function.
1. RYX(τ) = RXY(-τ). To see this, note that

R_{YX}(\tau) = E[Y(t+\tau)X(t)] = E[Y(t-\tau+\tau)X(t-\tau)] = E[Y(t)X(t-\tau)] = R_{XY}(-\tau) .   (7-46)

2. RYX(τ) does not necessarily have its maximum at τ = 0; the maximum can occur anywhere.
However, we can say that

2\,|R_{XY}(\tau)| \le R_X(0) + R_Y(0) .   (7-47)

To see this, note that

E[\{X(t+\tau) \pm Y(t)\}^2] = E[X^2(t+\tau)] \pm 2E[X(t+\tau)Y(t)] + E[Y^2(t)]
= R_X(0) \pm 2R_{XY}(\tau) + R_Y(0) \ge 0 .   (7-48)

Hence, we have 2|RXY(τ)| ≤ RX(0) + RY(0) as claimed.


Linear, Time-Invariant Systems: Expected Value of the Output
Consider a linear time invariant system with impulse response h(t). Given input X(t), the
output Y(t) can be computed as

Y(t) = L[X(t)] \equiv \int_{-\infty}^{\infty} X(\tau)\, h(t - \tau)\, d\tau .   (7-49)

The notation L[ • ] denotes a linear operator (the convolution operator in this case). As given by
(7-49), output Y(t) depends only on input X(t), initial conditions play no role here (assume that
all initial conditions are zero).


Convolution and expectation are integral operators. In applications that employ these
operations, it is assumed that we can interchange the order of convolution and expectation.
Hence, we can write


E[Y(t)] = E\left[L[X(t)]\right] \equiv E\left[\int_{-\infty}^{\infty} X(\tau)h(t-\tau)\, d\tau\right]
= \int_{-\infty}^{\infty} E[X(\tau)]\, h(t-\tau)\, d\tau   (7-50)
= \int_{-\infty}^{\infty} \eta_X(\tau)\, h(t-\tau)\, d\tau .

More generally, in applications, it is assumed that we can interchange the operations of expectation and integration so that

E\left[\int_{\alpha_1}^{\beta_1} \cdots \int_{\alpha_n}^{\beta_n} f(t_1, \ldots, t_n)\, dt_1 \cdots dt_n\right] = \int_{\alpha_1}^{\beta_1} \cdots \int_{\alpha_n}^{\beta_n} E[f(t_1, \ldots, t_n)]\, dt_1 \cdots dt_n ,   (7-51)

for example (f is a random function involving variables t1, ..., tn).


As a special case, assume that input X(t) is wide-sense stationary with mean ηX. Equation (7-50) leads to

\eta_Y = E[Y(t)] = \eta_X \left[\int_{-\infty}^{\infty} h(t-\tau)\, d\tau\right] = \eta_X \left[\int_{-\infty}^{\infty} h(\tau)\, d\tau\right] = \eta_X H(0) ,   (7-52)

where H(0) is the DC response (i.e., DC gain) of the system.


Linear, Time-Invariant Systems: Input/Output Cross Correlation
Let RX(t1,t2) denote the autocorrelation of input random process X(t). We desire to find RXY(t1,t2) = E[X(t1)Y(t2)], the crosscorrelation between input X(t) and output Y(t) of a linear, time-invariant system.


Theorem 7-1
The cross correlation between input X(t) and output Y(t) can be calculated as (both X and
Y are assumed to be real-valued)


R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = L_2[R_X(t_1, t_2)] = \int_{-\infty}^{\infty} R_X(t_1, t_2 - \alpha)\, h(\alpha)\, d\alpha .   (7-53)

Notation: L2[·] means operate on the t2 variable (the second variable) and treat t1 (the first
variable) as a fixed parameter.

Proof: In (7-53), the convolution involves folding and shifting the "t2 slot", so we write

Y(t_2) = \int_{-\infty}^{\infty} X(t_2 - \alpha)h(\alpha)\, d\alpha \;\Rightarrow\; X(t_1)Y(t_2) = \int_{-\infty}^{\infty} X(t_1)X(t_2 - \alpha)h(\alpha)\, d\alpha ,   (7-54)

a result that can be used to derive


R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)] = \int_{-\infty}^{\infty} E[X(t_1)X(t_2 - \alpha)]\, h(\alpha)\, d\alpha
= \int_{-\infty}^{\infty} R_X(t_1, t_2 - \alpha)\, h(\alpha)\, d\alpha   (7-55)
= L_2[R_X(t_1, t_2)] .

Special Case: X is WSS


Suppose that input process X(t) is WSS. Let τ = t1 - t2 and write (7-55) as

R_{XY}(\tau) = \int_{-\infty}^{\infty} R_X(\tau + \alpha)\, h(\alpha)\, d\alpha = \int_{-\infty}^{\infty} R_X(\tau - \alpha)\, h(-\alpha)\, d\alpha
= R_X(\tau) * h(-\tau) .   (7-56)

Note that X(t) and Y(t) are jointly wide sense stationary.


Theorem 7-2

The autocorrelation RY(t1,t2) can be obtained from the crosscorrelation RXY(t1,t2) by the formula

R_Y(t_1, t_2) = L_1[R_{XY}(t_1, t_2)] = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha .   (7-57)

Notation: L1[·] means operate on the t1 variable (the first variable) and treat t2 (the second
variable) as a fixed parameter.

Proof: In (7-57), the convolution involves folding and shifting the "t1 slot", so we write

Y(t_1) = \int_{-\infty}^{\infty} X(t_1 - \alpha)h(\alpha)\, d\alpha \;\Rightarrow\; Y(t_1)Y(t_2) = \int_{-\infty}^{\infty} Y(t_2)X(t_1 - \alpha)h(\alpha)\, d\alpha .   (7-58)

Now, take the expected value of (7-58) to obtain

R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)]
= \int_{-\infty}^{\infty} E[Y(t_2)X(t_1 - \alpha)]\, h(\alpha)\, d\alpha = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha   (7-59)
= L_1[R_{XY}(t_1, t_2)] ,

a result that completes the proof of our theorem.


The last two theorems can be combined into a single formula for finding RY(t1,t2).
Consider the formula


R_Y(t_1, t_2) = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha
= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} R_X(t_1 - \alpha, t_2 - \beta)\, h(\beta)\, d\beta\right] h(\alpha)\, d\alpha .   (7-60)


This result leads to

R_Y(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_X(t_1 - \alpha, t_2 - \beta)\, h(\alpha)h(\beta)\, d\alpha\, d\beta ,   (7-61)

an important "double convolution" formula for RY in terms of RX.


Special Case: X(t) is W.S.S.
Suppose that input process X(t) is WSS. Let τ = t1 - t2 and write (7-61) as

R_Y(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_X(t_1 - t_2 - [\alpha - \beta])\, h(\alpha)h(\beta)\, d\alpha\, d\beta .   (7-62)

Define τ ≡ t1 - t2; in (7-62) change the variables of integration to α and γ ≡ α - β and obtain

R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_X(\tau - \gamma)\, h(\alpha)h(\alpha - \gamma)\, d\gamma\, d\alpha
= \int_{-\infty}^{\infty} R_X(\tau - \gamma)\left[\int_{-\infty}^{\infty} h(\alpha)\, h(-[\gamma - \alpha])\, d\alpha\right] d\gamma .   (7-63)

This last formula can be expressed in a more convenient form. First, define

\Psi(\tau) \equiv \int_{-\infty}^{\infty} h(\alpha)\, h(-[\tau - \alpha])\, d\alpha = h(\tau) * h(-\tau) .   (7-64)

Fig. 7-7: Output autocorrelation in terms of input autocorrelation: RX(τ) drives a block with response h(τ)∗h(-τ), producing RY(τ) = [h(τ)∗h(-τ)]∗RX(τ).

Then, (7-63) can be expressed as

R_Y(\tau) = \int_{-\infty}^{\infty} R_X(\tau - \gamma)\, \Psi(\gamma)\, d\gamma = R_X(\tau) * \Psi(\tau) ,   (7-65)

a result that is illustrated by Figure 7-7. Equation (7-65) is a convenient formula for computing RY(τ) when input X(t) is WSS. Note from (7-65) that a WSS input X(t) produces a WSS output Y(t). A similar statement holds in the strict sense: a strict-sense stationary input X(t) produces a strict-sense stationary output Y(t).
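The cascade (7-64)-(7-65) maps directly onto discrete convolutions. The sketch below is illustrative (the exponential h and the triangular RX are arbitrary assumed choices; the factor dt turns each discrete convolution into an approximation of the integral).

import numpy as np

dt = 0.001
th = np.arange(0.0, 10.0, dt)
h = np.exp(-2.0*th)                        # example impulse response h(t) = e^{-2t} U(t)
tau = np.arange(-5.0, 5.0, dt)
R_X = np.maximum(1.0 - np.abs(tau), 0.0)   # an example triangular input autocorrelation

psi = np.convolve(h, h[::-1]) * dt         # Psi(tau) = h(tau) * h(-tau), per (7-64)
R_Y = np.convolve(R_X, psi) * dt           # R_Y(tau) = R_X(tau) * Psi(tau), per (7-65)
print(R_Y.max(), R_Y[len(R_Y)//2])         # R_Y is maximum at tau = 0 (array midpoint)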
Example 7-3
A zero-mean, stationary process X(t) with autocorrelation RX(τ) = qδ(τ) (white noise) is applied to a linear system with impulse response h(t) = e^{-ct} U(t), c > 0. Find RXY(τ) and RY(τ) for this system. Since X is WSS, we know that

R_{XY}(\tau) = E[X(t+\tau)Y(t)] = R_X(\tau) * h(-\tau) = q\,\delta(\tau) * e^{c\tau} U(-\tau)
= q\, e^{c\tau}\, U(-\tau) .   (7-66)

Note that X and Y are jointly WSS. That RXY(τ) = 0 for τ > 0 should be intuitive since X(t) is a
white noise process. Now, the autocorrelation of Y can be computed as

R_Y(\tau) = E[Y(t+\tau)Y(t)] = R_X(\tau) * [h(\tau) * h(-\tau)] = [R_X(\tau) * h(-\tau)] * h(\tau)
= R_{XY}(\tau) * h(\tau) = \{q e^{c\tau} U(-\tau)\} * \{e^{-c\tau} U(\tau)\}   (7-67)
= \frac{q}{2c}\, e^{-c|\tau|}, \quad -\infty < \tau < \infty .

Note that output Y(t) is not "white" noise; samples of Y(t) are correlated with each other. Basically, the system h(t) filters the white-noise input X(t) to produce an output Y(t) that is correlated. As we will see in Chapter 8, input X(t) is modeled as having infinite bandwidth; the system "bandlimits" its input to form an output Y(t) that has finite bandwidth.
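A Monte Carlo sketch of this example (an approximation, not part of the original notes: white noise of intensity q is mimicked by i.i.d. samples of variance q/dt, h is truncated after five time constants, and q, c, and the run length are arbitrary choices):

import numpy as np

rng = np.random.default_rng(6)
q, c, dt, N = 1.0, 2.0, 1e-3, 400_000
x = rng.normal(0.0, np.sqrt(q/dt), size=N)    # discrete stand-in for R_X(tau) = q*delta(tau)
h = np.exp(-c*np.arange(0.0, 5/c, dt))        # h(t) = e^{-ct} U(t), truncated
y = np.convolve(x, h)[:N] * dt                # Y = X * h
y = y[len(h):]                                # discard the start-up transient

for k in [0, 500, 1000]:                      # lags tau = k*dt
    est = np.mean(y[:len(y)-k] * y[k:])
    print(k*dt, est, (q/(2*c))*np.exp(-c*k*dt))   # estimate vs. (7-67), both ~q/2c at 0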


Example 7-4 (from Papoulis, 3rd Ed., pp. 311-312)


A zero-mean, stationary process X(t) with autocorrelation RX(τ) = qδ(τ) (white noise) is applied at t = 0 to a linear system with impulse response h(t) = e^{-ct} U(t), c > 0. See Figure 7-8. Assume that the system is "at rest" initially (the initial conditions are zero so that Y(t) = 0, t < 0). Find RXY and RY for this system.
In a subtle way, this problem differs from the previous example. Since the input is
applied at t = 0, the system “sees” a non-stationary input. Hence, we must analyze the general,
nonstationary case. As shown below, processes X(t) and Y(t) are not jointly wide sense
stationary, and Y(t) is not wide sense stationary. Also, it should be obvious that RXY(t1,t2) =
E[X(t1)Y(t2)] = 0 for t1 < 0 or t2 < 0 (note how this differs from Example 7-3).
Rxy(t1,t2) equals the response of the system to Rx(t1-t2) = qδ(t1-t2), when t1 is held fixed
and t2 is the independent variable (the input δ function occurs at t2 = t1). For t1 > 0 and t2 > 0, we
can write


R_{XY}(t_1, t_2) = L_2[R_X(t_1, t_2)] = \int_{-\infty}^{\infty} R_X(t_1, t_2 - \alpha)\, h(\alpha)\, d\alpha
= \int_{-\infty}^{\infty} q\, \delta(t_1 - t_2 + \alpha)\, h(\alpha)\, d\alpha = q\, h(-[t_1 - t_2])
= q\, \exp[-c(t_2 - t_1)]\, U(t_2 - t_1), \quad t_1 > 0,\; t_2 > 0,   (7-68)
= 0, \quad t_1 < 0 \text{ or } t_2 < 0,

a result that is illustrated by Figure 7-9. For t2 < t1, output Y(t2) is uncorrelated with input X(t1),
as expected (this should be intuitive). Also, for (t2 - t1) > 5/c we can assume that X(t1) and Y(t2)

Figure 7-8: System with random input applied at t = 0 (a switch closes at t = 0, applying X(t) to the system h(t) = e^{-ct} U(t) to produce output Y(t)).


Figure 7-9: Plot of (7-68); the crosscorrelation between input X and output Y (zero for t2 < t1, then decaying exponentially with increasing t2).

are uncorrelated. Finally, note that X(t) and Y(t) are not jointly wide sense stationary since RXY
depends on absolute t1 and t2 (and not only the difference τ = t1 - t2).
Now, find the autocorrelation of the output; there are two cases. The first case is t2 > t1 >
0 for which we can write

R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)]
= \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha
= \int_0^{t_1} q\, e^{-c(t_2 - [t_1 - \alpha])}\, e^{-c\alpha}\, U(t_2 - [t_1 - \alpha])\, d\alpha   (7-69)
= \frac{q}{2c}\left(1 - e^{-2ct_1}\right) e^{-c(t_2 - t_1)}, \quad t_2 > t_1 > 0,

Note that the requirements 1) h(α) = 0, α < 0, and 2) t1 - α > 0 were used to write (7-69). The
second case is t1 > t2 > 0 for which we can write



R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)] = \int_{-\infty}^{\infty} R_{XY}(t_1 - \alpha, t_2)\, h(\alpha)\, d\alpha
= \int_0^{t_1} q\, e^{-c(t_2 - [t_1 - \alpha])}\, e^{-c\alpha}\, U(t_2 - [t_1 - \alpha])\, d\alpha
= \int_{t_1 - t_2}^{t_1} q\, e^{-c(t_2 - [t_1 - \alpha])}\, e^{-c\alpha}\, d\alpha   (7-70)
= \frac{q}{2c}\left(1 - e^{-2ct_2}\right) e^{-c(t_1 - t_2)}, \quad t_1 > t_2 > 0 .

Note that output Y(t) is not stationary. The reason for this is simple (and intuitive). Input X(t) is applied at t = 0, and the system is "at rest" before this time (Y(t) = 0, t < 0). For a few time constants, this fact is "remembered" by the system (the system "has memory"). For t1 and t2 larger than 5 time constants (t1, t2 > 5/(2c)), "steady state" can be assumed, and the output autocorrelation can be approximated as

R_Y(t_1, t_2) \approx \frac{q}{2c}\, e^{-c|t_2 - t_1|} .   (7-71)

Output Y(t) is "approximately stationary" for t1, t2 > 5/(2c).
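The approach to steady state can be watched directly by evaluating the exact nonstationary result; (7-69) and (7-70) combine as RY(t1,t2) = (q/2c)(1 - exp[-2c min(t1,t2)]) exp[-c|t1 - t2|]. A minimal sketch (q, c, and the lag are arbitrary choices):

import numpy as np

q, c, tau = 1.0, 2.0, 0.3                  # fixed lag tau = t2 - t1 > 0

def R_Y(t1, t2):
    # Combined form of (7-69)/(7-70), valid for t1, t2 > 0
    return (q/(2*c)) * (1 - np.exp(-2*c*min(t1, t2))) * np.exp(-c*abs(t2 - t1))

for t1 in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(t1, R_Y(t1, t1 + tau))           # approaches (q/2c)e^{-c*tau} ~ 0.1372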


Example 7-5: Let X(t) be a real-valued, WSS process with autocorrelation R(τ). For any fixed
T > 0, define the random variable

S_T \equiv \int_{-T}^{T} X(t)\, dt .   (7-72)

Express the second moment E[ST2] as a single integral involving R(τ). First, note that

S_T^2 \equiv \int_{-T}^{T} X(t_1)\, dt_1 \int_{-T}^{T} X(t_2)\, dt_2 = \int_{-T}^{T}\int_{-T}^{T} X(t_1)X(t_2)\, dt_1\, dt_2 ,   (7-73)


Fig. 7-10: Geometry used to change variables from (t1, t2) to (τ, τ′) in the double integral that appears in Example 7-5. Under τ = t1 - t2 and τ′ = t1 + t2 (inverse: t1 = ½(τ + τ′), t2 = ½(τ′ - τ)), the square [-T, T] × [-T, T] in the (t1, t2) plane maps to a "rotated square" in the (τ, τ′) plane with vertices (±2T, 0) and (0, ±2T); for fixed τ, the boundary points are (τ, ±(2T + τ)) for τ < 0 and (τ, ±(2T - τ)) for τ > 0.

a result that leads to

E\left[S_T^2\right] = \int_{-T}^{T}\int_{-T}^{T} E[X(t_1)X(t_2)]\, dt_1\, dt_2 = \int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2)\, dt_1\, dt_2 .   (7-74)

The integrand in (7-74) depends only on one quantity, namely the difference t1 – t2. Therefore,
Equation (7-74) should be expressible in terms of a single integral in the variable τ ≡ t1 – t2. To
see this, use τ ≡ t1 – t2 and τ′ = t1 + t2 and map the (t1, t2) plane to the (τ, τ′) plane (this
relationship has an inverse), as shown by Fig. 7-10. As discussed in Appendix 4A, the integral
(7-74) can be expressed as

\int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2)\, dt_1\, dt_2 = \iint_{R_2} R(\tau)\, \frac{\partial(t_1, t_2)}{\partial(\tau, \tau')}\, d\tau\, d\tau' ,   (7-75)

where R2 is the “rotated square” region in the (τ, τ′) plane shown on the right-hand side of Fig.
7-10. For use in (7-75), the Jacobian of the transformation is


\frac{\partial(t_1, t_2)}{\partial(\tau, \tau')} = \begin{vmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{vmatrix} = \tfrac{1}{2} .   (7-76)

In the (τ, τ′)-plane, as τ goes from -2T to 0, the quantity τ′ traverses from -(2T + τ) to (2T + τ), as can be seen from examination of Fig. 7-10. Also, as τ goes from 0 to 2T, the quantity τ′ traverses from -(2T - τ) to (2T - τ). Hence, we have

E\left[S_T^2\right] = \int_{-T}^{T}\int_{-T}^{T} R(t_1 - t_2)\, dt_1\, dt_2 = \int_{-2T}^{0}\int_{-(2T+\tau)}^{2T+\tau} \tfrac{1}{2} R(\tau)\, d\tau'\, d\tau + \int_{0}^{2T}\int_{-(2T-\tau)}^{2T-\tau} \tfrac{1}{2} R(\tau)\, d\tau'\, d\tau
= \int_{-2T}^{0} (2T + \tau)\, R(\tau)\, d\tau + \int_{0}^{2T} (2T - \tau)\, R(\tau)\, d\tau   (7-77)
= \int_{-2T}^{2T} (2T - |\tau|)\, R(\tau)\, d\tau ,

a single integral that can be evaluated given the autocorrelation function R(τ).
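A numerical sketch verifying the reduction in (7-77) for an arbitrary test autocorrelation R(τ) = e^{-|τ|} (T, the grids, and R itself are illustrative assumptions):

import numpy as np

T, n = 1.0, 2000
R = lambda tau: np.exp(-np.abs(tau))        # test autocorrelation

# Double integral over [-T, T] x [-T, T], midpoint rule
t = -T + (np.arange(n) + 0.5) * (2*T/n)
t1, t2 = np.meshgrid(t, t)
double = np.sum(R(t1 - t2)) * (2*T/n)**2

# Single-integral form from (7-77), on a grid of the same spacing
tau = -2*T + (np.arange(2*n) + 0.5) * (2*T/n)
single = np.sum((2*T - np.abs(tau)) * R(tau)) * (2*T/n)

print(double, single)                       # agree to grid accuracy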
