ECE 420 Digital Communications
Lecture #3
Spectral Density
Autocorrelation Function
Random Signals
Signal Transmission Through Linear Systems
Bandwidth of Digital Data
1.3 Spectral Density
The spectral density of a signal characterizes the distribution of
the signal’s energy or power in the frequency domain.
This concept is particularly important in communication systems when
filtering is considered and the signal and noise at the filter output
must be evaluated.
The energy spectral density (ESD) or the power spectral density
(PSD) is used in the evaluation.
1.3.1. Energy Spectral Density (ESD)
Energy spectral density describes the signal energy per unit
bandwidth measured in joules/hertz.
Represented as \psi_x(f), it is the squared magnitude spectrum:
\psi_x(f) = |X(f)|^2    (1.14)
According to Parseval's theorem, the energy of x(t) is
E_x = \int_{-\infty}^{\infty} x^2(t)\,dt = \int_{-\infty}^{\infty} |X(f)|^2\,df    (1.13)
Therefore:
E_x = \int_{-\infty}^{\infty} \psi_x(f)\,df    (1.15)
The energy spectral density is symmetrical in frequency about the
origin, so the total energy of the signal x(t) can be expressed as
E_x = 2 \int_{0}^{\infty} \psi_x(f)\,df    (1.16)
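As an aside (not from the text), the following Python sketch checks Parseval's theorem numerically for an assumed rectangular pulse; the sampling rate, pulse width, and FFT-based approximation of X(f) are illustrative choices.

```python
import numpy as np

# A minimal numerical sketch: verify Parseval's theorem (Eq. 1.13) for an
# assumed rectangular pulse, and check that the ESD psi_x(f) = |X(f)|^2
# (Eq. 1.14) is symmetric about f = 0, as used in Eq. 1.16.
fs = 1000.0                                # sampling rate in Hz (assumed)
t = np.arange(-1.0, 1.0, 1/fs)
x = np.where(np.abs(t) < 0.1, 1.0, 0.0)    # 0.2-s rectangular pulse

E_time = np.sum(x**2) / fs                 # energy from the time domain

X = np.fft.fft(x) / fs                     # discrete approximation of X(f)
f = np.fft.fftfreq(len(x), d=1/fs)
esd = np.abs(X)**2                         # psi_x(f), Eq. 1.14
df = fs / len(x)
E_freq = np.sum(esd) * df                  # energy from the frequency domain

print(E_time, E_freq)                      # both ~0.2 joule
print(np.allclose(esd[1:], esd[1:][::-1])) # True: ESD is even, so Eq. 1.16 holds
```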
1.3.2. Power Spectral Density (PSD)
The power spectral density (PSD) function Gx(f ) of the periodic
signal x(t) is a real, even, and nonnegative function of frequency
that gives the distribution of the power of x(t) in the frequency
domain.
PSD is represented as:
G_x(f) = \sum_{n=-\infty}^{\infty} |C_n|^2\,\delta(f - n f_0)    (1.18)
whereas the average power of a periodic signal x(t) is represented as
P_x = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t)\,dt = \sum_{n=-\infty}^{\infty} |C_n|^2    (1.17)
Using the PSD, the average normalized power of a real-valued signal is
represented as
P_x = \int_{-\infty}^{\infty} G_x(f)\,df = 2 \int_{0}^{\infty} G_x(f)\,df    (1.19)
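A quick numerical check of Eqs. 1.17 and 1.18 (an added sketch, not from the text): for an assumed sinusoid of amplitude 2, the Fourier-series coefficients obtained from one period give \sum |C_n|^2 equal to the time-averaged power, and the PSD consists of two impulses at \pm f_0.

```python
import numpy as np

# Sketch: check Eq. 1.17 for a periodic signal x(t) = 2*cos(2*pi*f0*t),
# whose Fourier-series coefficients are C(+1) = C(-1) = 1.
f0 = 50.0                        # fundamental frequency in Hz (assumed)
T0 = 1 / f0                      # period
fs = 10000.0                     # sampling rate for the numerical check
t = np.arange(0, T0, 1/fs)
x = 2 * np.cos(2 * np.pi * f0 * t)

# Average power from the time-domain definition (left side of Eq. 1.17)
P_time = np.mean(x**2)

# Fourier-series coefficients C_n over one period via the DFT
C = np.fft.fft(x) / len(x)
P_coeff = np.sum(np.abs(C)**2)   # right side of Eq. 1.17: sum of |C_n|^2

print(P_time, P_coeff)           # both ~2.0 watts; the PSD (Eq. 1.18) puts
                                 # impulses of weight 1 at f = +/- f0
```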
1.4 Autocorrelation
1.4.1. Autocorrelation of an Energy Signal
Correlation is a matching process; autocorrelation refers to the
matching of a signal with a delayed version of itself.
Autocorrelation function of a real-valued energy signal x(t) is
defined as:
R_x(\tau) = \int_{-\infty}^{\infty} x(t)\,x(t+\tau)\,dt \qquad \text{for } -\infty < \tau < \infty    (1.21)
The autocorrelation function Rx(τ) provides a measure of how
closely the signal matches a copy of itself as the copy is shifted
τ units in time.
Rx(τ) is not a function of time; it is only a function of the time
difference τ between the waveform and its shifted copy.
The autocorrelation function of a real-valued energy signal has
the following properties:
R_x(\tau) = R_x(-\tau)    (symmetrical in τ about zero)
R_x(\tau) \le R_x(0) for all \tau    (maximum value occurs at the origin)
R_x(\tau) \leftrightarrow \psi_x(f)    (autocorrelation and ESD form a Fourier transform pair, as designated by the double-headed arrow)
R_x(0) = \int_{-\infty}^{\infty} x^2(t)\,dt    (value at the origin is equal to the energy of the signal)
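The sketch below (an added illustration, with an assumed two-sided exponential as the energy signal) computes R_x(τ) by direct correlation and verifies the four properties listed above.

```python
import numpy as np

# Sketch: compute R_x(tau) of an energy signal by direct correlation
# (Eq. 1.21) and check the four properties above.
fs = 1000.0
t = np.arange(-1.0, 1.0, 1/fs)
x = np.exp(-10 * np.abs(t))                       # example energy signal (assumed)

Rx = np.correlate(x, x, mode='full') / fs         # approximates the integral
lags = np.arange(-(len(x) - 1), len(x)) / fs

E_x = np.sum(x**2) / fs
print(np.allclose(Rx, Rx[::-1]))                  # R_x(tau) = R_x(-tau)
print(np.all(Rx <= Rx[lags == 0][0] + 1e-12))     # maximum at the origin
print(np.isclose(Rx[lags == 0][0], E_x))          # R_x(0) equals the energy

# Fourier-transform pair: |F{R_x}| matches the ESD |X(f)|^2 on the same grid
psd_from_R = np.abs(np.fft.fft(Rx)) / fs
esd = np.abs(np.fft.fft(x, n=len(Rx)) / fs)**2
print(np.allclose(psd_from_R, esd))
```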
1.4.2. Autocorrelation of a Power Signal
Autocorrelation function of a real-valued power signal x(t) is
defined as:
R_x(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\,x(t+\tau)\,dt \qquad \text{for } -\infty < \tau < \infty    (1.22)
When the power signal x(t) is periodic with period T0, the
autocorrelation function can be expressed as
R_x(\tau) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t)\,x(t+\tau)\,dt \qquad \text{for } -\infty < \tau < \infty    (1.23)
The autocorrelation function of a real-valued periodic signal has
the following properties similar to those of an energy signal:
R_x(\tau) = R_x(-\tau)    (symmetrical in τ about zero)
R_x(\tau) \le R_x(0) for all \tau    (maximum value occurs at the origin)
R_x(\tau) \leftrightarrow G_x(f)    (autocorrelation and PSD form a Fourier transform pair)
R_x(0) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t)\,dt    (value at the origin is equal to the average power of the signal)
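The following added sketch checks Eq. 1.23 and the properties above for an assumed sinusoid of amplitude 3; the circular shift models the periodic extension of the signal.

```python
import numpy as np

# Sketch: estimate R_x(tau) of a periodic power signal (Eq. 1.23) by
# averaging over one period, and check that R_x(0) is the average power.
f0, fs = 50.0, 10000.0           # fundamental and sampling rates (assumed)
T0 = 1 / f0
t = np.arange(0, T0, 1/fs)
x = 3 * np.cos(2 * np.pi * f0 * t)
N = len(t)

taus = np.arange(N) / fs
# np.roll gives the circular shift x(t + tau) of the periodic extension
Rx = np.array([np.mean(x * np.roll(x, -k)) for k in range(N)])

P_avg = np.mean(x**2)                      # average power, 9/2 = 4.5 W
print(np.isclose(Rx[0], P_avg))            # property: R_x(0) = average power
# For a sinusoid, R_x(tau) = (A^2/2) cos(2*pi*f0*tau)
print(np.allclose(Rx, 4.5 * np.cos(2 * np.pi * f0 * taus), atol=1e-6))
```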
1.5 Random Signals
1.5.1 Random Variables
All useful message signals appear random; that is, the receiver
does not know, a priori, which of the possible waveforms has been
sent.
Let a random variable X(A) represent the functional relationship
between a random event A and a real number.
The (cumulative) distribution function F_X(x) of the random
variable X is given by
F_X(x) = P(X \le x)    (1.24)
Another useful function relating to the random variable X is the
probability density function (pdf)
p_X(x) = \frac{dF_X(x)}{dx}    (1.25)
1.5.1.1 Ensemble Averages
The first moment of a probability distribution of a random variable X
is called the mean value m_X, or expected value, of X:
m_X = E\{X\} = \int_{-\infty}^{\infty} x\,p_X(x)\,dx
The second moment of a probability distribution is the mean-square
value of X:
E\{X^2\} = \int_{-\infty}^{\infty} x^2\,p_X(x)\,dx
Central moments are the moments of the difference between X and m_X;
the second central moment is the variance of X:
\mathrm{var}(X) = E\{(X - m_X)^2\} = \int_{-\infty}^{\infty} (x - m_X)^2\,p_X(x)\,dx
The variance is equal to the difference between the mean-square value
and the square of the mean:
\mathrm{var}(X) = E\{X^2\} - E\{X\}^2
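A brief numerical illustration (added, with an assumed Gaussian random variable of mean 2 and variance 9): sample estimates of the first moment, second moment, and variance satisfy var(X) = E{X^2} - E{X}^2.

```python
import numpy as np

# Sketch: estimate the ensemble averages of Section 1.5.1.1 from samples
# of an assumed Gaussian random variable X with mean 2 and variance 9.
rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=3.0, size=1_000_000)

m_X = X.mean()                      # first moment (mean value)
ms_X = np.mean(X**2)                # second moment (mean-square value)
var_X = np.mean((X - m_X)**2)       # second central moment (variance)

# var(X) = E{X^2} - E{X}^2
print(m_X, ms_X, var_X, ms_X - m_X**2)   # ~2.0, ~13.0, ~9.0, ~9.0
```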
1.5.2. Random Processes
A random process X(A, t) can be viewed as a function of two
variables: an event A and time.
1.5.2.1 Statistical Averages of a Random
Process
A random process whose distribution functions are continuous can
be described statistically with a probability density function (pdf).
A partial description consisting of the mean and autocorrelation
function is often adequate for the needs of communication
systems.
Mean of the random process X(t):
m_X(t_k) = E\{X(t_k)\} = \int_{-\infty}^{\infty} x\,p_{X_k}(x)\,dx    (1.30)
Autocorrelation function of the random process X(t):
R_X(t_1, t_2) = E\{X(t_1)\,X(t_2)\}    (1.31)
1.5.2.2 Stationarity
A random process X(t) is said to be stationary in the strict sense
if none of its statistics are affected by a shift in the time origin.
A random process is said to be wide-sense stationary (WSS) if
two of its statistics, its mean and autocorrelation function, do not
vary with a shift in the time origin.
E\{X(t)\} = m_X = \text{a constant}    (1.32)
R_X(t_1, t_2) = R_X(t_1 - t_2)    (1.33)
1.5.2.3 Autocorrelation of a Wide-Sense
Stationary Random Process
For a wide-sense stationary process, the autocorrelation
function is only a function of the time difference τ = t1 – t2;
R_X(\tau) = E\{X(t)\,X(t+\tau)\} \qquad \text{for } -\infty < \tau < \infty    (1.34)
Properties of the autocorrelation function of a real-valued wide-
sense stationary process are
1. R_X(\tau) = R_X(-\tau)    (symmetrical in τ about zero)
2. R_X(\tau) \le R_X(0) for all \tau    (maximum value occurs at the origin)
3. R_X(\tau) \leftrightarrow G_X(f)    (autocorrelation and power spectral density form a Fourier transform pair)
4. R_X(0) = E\{X^2(t)\}    (value at the origin is equal to the average power of the signal)
1.5.3. Time Averaging and Ergodicity
When a random process belongs to a special class, known as an
ergodic process, its time averages equal its ensemble averages.
The statistical properties of such processes can be determined
by time averaging over a single sample function of the process.
A random process is ergodic in the mean if
m_X = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} X(t)\,dt    (1.35)
It is ergodic in the autocorrelation function if
R_X(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} X(t)\,X(t+\tau)\,dt    (1.36)
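The added sketch below uses a random-phase sinusoid, a standard example of a process that is ergodic in the mean and autocorrelation under the usual assumptions, and compares time averages (Eqs. 1.35 and 1.36) with the known ensemble values; the amplitude, frequency, and observation interval are illustrative.

```python
import numpy as np

# Sketch: time-averaged mean and autocorrelation of a single realization
# of a random-phase sinusoid, compared with the ensemble values.
rng = np.random.default_rng(1)
A, f0, fs, T = 2.0, 50.0, 10000.0, 10.0     # assumed parameters
t = np.arange(0, T, 1/fs)
theta = rng.uniform(0, 2*np.pi)             # one realization of the phase
X = A * np.cos(2 * np.pi * f0 * t + theta)

m_time = X.mean()                           # Eq. 1.35: should be ~0
print(m_time)

# Time-averaged autocorrelation at a few lags (Eq. 1.36)
for tau in [0.0, 0.005, 0.010]:
    k = int(round(tau * fs))
    R_time = np.mean(X[:len(X)-k] * X[k:])
    R_theory = (A**2 / 2) * np.cos(2 * np.pi * f0 * tau)
    print(tau, R_time, R_theory)            # time and ensemble values agree
```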
1.5.4. Power Spectral Density and
Autocorrelation
A random process X(t) can generally be classified as a power
signal having a power spectral density (PSD) GX(f )
Principal features of PSD functions
1. G_X(f) \ge 0 and is always real-valued
2. G_X(f) = G_X(-f) for X(t) real-valued
3. G_X(f) \leftrightarrow R_X(\tau)    (PSD and autocorrelation form a Fourier transform pair)
4. P_X = \int_{-\infty}^{\infty} G_X(f)\,df    (relationship between average normalized power and PSD)
1.5.5. Noise in Communication Systems
The term noise refers to unwanted electrical signals that are
always present in electrical systems, e.g., spark-plug ignition
noise, switching transients, and other radiating electromagnetic
signals.
Thermal noise can be described as a zero-mean Gaussian random
process.
A Gaussian process n(t) is a random function whose amplitude at
any arbitrary time t is statistically characterized by the Gaussian
probability density function
p(n) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\left[-\frac{1}{2}\left(\frac{n}{\sigma}\right)^2\right]    (1.40)
where \sigma^2 is the variance of n.
The normalized or standardized Gaussian density function of a
zero-mean process is obtained by assuming unit variance.
1.5.5.1 White Noise
The primary spectral characteristic of thermal noise is that its
power spectral density is the same for all frequencies of interest
in most communication systems
Power spectral density of white noise:
G_n(f) = \frac{N_0}{2} \quad \text{watts/hertz}    (1.42)
The autocorrelation function of white noise is
R_n(\tau) = \mathcal{F}^{-1}\{G_n(f)\} = \frac{N_0}{2}\,\delta(\tau)    (1.43)
The average power P_n of white noise is infinite:
P_n = \int_{-\infty}^{\infty} \frac{N_0}{2}\,df = \infty    (1.44)
The effect on the detection process of a channel with additive
white Gaussian noise (AWGN) is that the noise affects each
transmitted symbol independently.
Such a channel is called a memoryless channel.
The term “additive” means that the noise is simply superimposed
or added to the signal
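A minimal discrete-time AWGN sketch (added; N_0, the sampling rate, and the transmitted waveform are assumed values): white noise of two-sided PSD N_0/2 is modeled by i.i.d. Gaussian samples of variance N_0 f_s / 2, which are simply added to the signal and are uncorrelated from sample to sample.

```python
import numpy as np

# Sketch: discrete-time AWGN channel model under the stated assumptions.
rng = np.random.default_rng(2)
fs = 8000.0                                # sampling rate (assumed)
N0 = 1e-3                                  # watts/hertz (assumed)
sigma = np.sqrt(N0 / 2 * fs)               # sample standard deviation

t = np.arange(0, 1.0, 1/fs)
signal = np.cos(2 * np.pi * 100 * t)       # transmitted waveform (assumed)
noise = rng.normal(0.0, sigma, size=t.size)
received = signal + noise                  # "additive": noise superimposed

# Memoryless behavior: noise samples are uncorrelated from one to the next
r0 = np.mean(noise * noise)                # ~sigma^2
r1 = np.mean(noise[:-1] * noise[1:])       # ~0
print(r0, r1)
```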
1.6 Signal Transmission through
Linear Systems
A system can be characterized equally well in the time domain or
the frequency domain; techniques will be developed in both
domains.
The system is assumed to be linear and time invariant.
It is also assumed that there is no stored energy in the system
at the time the input is applied
1.6.1. Impulse Response
The linear time-invariant system or network is characterized in the
time domain by its impulse response h(t), the response to a unit
impulse \delta(t):
y(t) = h(t) \quad \text{when } x(t) = \delta(t)    (1.45)
The response of the network to an arbitrary input signal x(t) is
found by the convolution of x(t) with h(t):
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau    (1.46)
The system is assumed to be causal, which means that there can
be no output prior to the time t = 0 when the input is applied.
The convolution integral can then be expressed as:
y(t) = \int_{0}^{\infty} x(\tau)\,h(t-\tau)\,d\tau    (1.47a)
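As an added illustration of Eqs. 1.45 and 1.46, the sketch below convolves an assumed rectangular input with an assumed causal, decaying-exponential impulse response; the discrete convolution scaled by 1/f_s approximates the convolution integral.

```python
import numpy as np

# Sketch: discrete-time version of y(t) = x(t) * h(t), Eq. 1.46.
fs = 1000.0
t = np.arange(0, 0.1, 1/fs)
h = np.exp(-200 * t)                        # causal impulse response (assumed)
x = np.where(t < 0.02, 1.0, 0.0)            # input: 20-ms rectangular pulse

# Discrete convolution approximates the convolution integral when scaled by 1/fs
y = np.convolve(x, h)[:len(t)] / fs

# Sanity check of Eq. 1.45: a unit-area impulse input reproduces h(t)
delta = np.zeros_like(t)
delta[0] = fs                               # discrete "impulse" with unit area
print(np.allclose(np.convolve(delta, h)[:len(t)] / fs, h))
```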
1.6.2. Frequency Transfer Function
The frequency-domain output signal Y(f) is obtained by taking
the Fourier transform:
Y(f) = X(f)\,H(f)    (1.48)
The frequency transfer function or frequency response is defined
as:
H(f) = \frac{Y(f)}{X(f)}    (1.49)
H(f) = |H(f)|\,e^{j\theta(f)}    (1.50)
The phase response is defined as:
\theta(f) = \tan^{-1}\frac{\mathrm{Im}\{H(f)\}}{\mathrm{Re}\{H(f)\}}    (1.51)
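The added sketch below obtains H(f), |H(f)|, and θ(f) (Eqs. 1.49 to 1.51) from the decaying-exponential impulse response assumed in the previous sketch, via the FFT.

```python
import numpy as np

# Sketch: frequency transfer function from an assumed impulse response.
fs = 1000.0
t = np.arange(0, 0.1, 1/fs)
h = np.exp(-200 * t)                         # same assumed impulse response

H = np.fft.rfft(h) / fs                      # approximate H(f) on a frequency grid
f = np.fft.rfftfreq(len(h), d=1/fs)

magnitude = np.abs(H)                        # |H(f)|
theta = np.arctan2(H.imag, H.real)           # theta(f) = arctan(Im{H}/Re{H})

# For this first-order response, H(f) is approximately 1/(200 + j*2*pi*f)
print(f[:3], magnitude[:3], theta[:3])
```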
1.6.2.1. Random Processes and Linear Systems
If a random process forms the input to a time-invariant linear
system, the output will also be a random process.
The input power spectral density G_X(f) and the output power
spectral density G_Y(f) are related as:
G_Y(f) = G_X(f)\,|H(f)|^2    (1.53)
1.6.3. Distortionless Transmission
What is the required behavior of an ideal transmission line?
The output signal from an ideal transmission line may have some
time delay and a different amplitude than the input.
It must have no distortion—it must have the same shape as the
input.
For ideal distortionless transmission:
Output signal in the time domain:   y(t) = K\,x(t - t_0)    (1.54)
Output signal in the frequency domain:   Y(f) = K\,X(f)\,e^{-j2\pi f t_0}    (1.55)
System transfer function:   H(f) = K\,e^{-j2\pi f t_0}    (1.56)
The overall system response must have a constant magnitude
response
The phase shift must be linear with frequency
All of the signal’s frequency components must also arrive with
identical time delay in order to add up correctly
The time delay t_0 is related to the phase shift \theta and the radian
frequency \omega = 2\pi f by:
t_0\ (\text{seconds}) = \frac{\theta\ (\text{radians})}{2\pi f\ (\text{radians/second})}    (1.57a)
Another characteristic often used to measure delay distortion of a
signal is called the envelope delay or group delay:
\tau(f) = -\frac{1}{2\pi}\,\frac{d\theta(f)}{df}    (1.57b)
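A short added check of the idea behind Eqs. 1.57a and 1.57b using scipy: a pure delay has linear phase and hence constant group delay (distortionless), while a simple first-order low-pass filter does not; the filters chosen are illustrative.

```python
import numpy as np
from scipy import signal

# Sketch: group delay of a distortionless (pure-delay) system versus a
# simple first-order IIR low-pass filter.
b_delay = np.array([0.0, 0.0, 0.0, 1.0])        # H(z) = z^-3, a 3-sample delay
w, gd = signal.group_delay((b_delay, [1.0]))
print(gd[:5])                                    # ~3 samples at every frequency

# A first-order low-pass filter delays different frequency components by
# different amounts, i.e. it introduces delay distortion.
b, a = signal.butter(1, 0.2)                     # assumed cutoff of 0.2*Nyquist
w, gd = signal.group_delay((b, a))
print(gd[0], gd[-1])                             # group delay varies with frequency
```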
1.6.3.1. Ideal Filters
The transfer function of the ideal low-pass filter with bandwidth
W_f = f_u hertz can be written as:
H(f) = |H(f)|\,e^{-j\theta(f)}    (1.58)
where
|H(f)| = \begin{cases} 1 & \text{for } |f| < f_u \\ 0 & \text{for } |f| \ge f_u \end{cases}    (1.59)
e^{-j\theta(f)} = e^{-j2\pi f t_0}    (1.60)
Figure 1.11(b): Ideal low-pass filter.
The impulse response of the ideal low-pass filter is
h(t) = \mathcal{F}^{-1}\{H(f)\} = \int_{-\infty}^{\infty} H(f)\,e^{j2\pi f t}\,df
     = \int_{-f_u}^{f_u} e^{-j2\pi f t_0}\,e^{j2\pi f t}\,df
     = \int_{-f_u}^{f_u} e^{j2\pi f (t - t_0)}\,df
     = 2 f_u\,\frac{\sin 2\pi f_u (t - t_0)}{2\pi f_u (t - t_0)}
     = 2 f_u\,\mathrm{sinc}\,2 f_u (t - t_0)
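The added sketch below evaluates this impulse response for assumed values of f_u and t_0 and confirms that it is noncausal, which is why the ideal filter is not realizable.

```python
import numpy as np

# Sketch: evaluate h(t) = 2*fu*sinc(2*fu*(t - t0)) for assumed fu and t0.
# np.sinc(x) = sin(pi*x)/(pi*x), matching the sinc used above.
fu, t0 = 1000.0, 0.002            # cutoff frequency and delay (assumed)
t = np.linspace(-0.01, 0.01, 2001)
h = 2 * fu * np.sinc(2 * fu * (t - t0))

k0 = np.argmin(np.abs(t - t0))
print(h[k0])                      # peak value ~2*fu = 2000 at t = t0
print(np.any(h[t < 0] != 0))      # True: response exists before the input arrives
```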
The transfer functions of the ideal band-pass filter and the ideal
high-pass filter are defined in the same way.
Figure 1.11(a): Ideal band-pass filter.   Figure 1.11(c): Ideal high-pass filter.
1.6.3.2. Realizable Filters
The simplest example of a realizable low-pass filter is an RC filter:
H(f) = \frac{1}{1 + j2\pi f RC} = \frac{1}{\sqrt{1 + (2\pi f RC)^2}}\,e^{-j\theta(f)}    (1.63)
where \theta(f) = \tan^{-1} 2\pi f RC.
Figure 1.13: Magnitude and phase characteristics of the RC filter.
There are several useful approximations to the ideal low-pass
filter characteristic, and one of these is the Butterworth filter:
H_n(f) = \frac{1}{\sqrt{1 + (f/f_u)^{2n}}} \qquad n \ge 1    (1.65)
Butterworth filters are popular because they are the best
approximation to the ideal, in the sense of maximal flatness in the
filter passband.
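As an added check, the sketch below compares scipy's analog Butterworth design against Eq. 1.65 for an assumed order and cutoff frequency.

```python
import numpy as np
from scipy import signal

# Sketch: compare the analog Butterworth magnitude response with Eq. 1.65,
# |H_n(f)| = 1/sqrt(1 + (f/fu)^(2n)).
n, fu = 4, 1000.0                                  # order and cutoff in Hz (assumed)
b, a = signal.butter(n, 2 * np.pi * fu, analog=True)  # analog prototype, rad/s

f = np.linspace(1.0, 5000.0, 500)
_, H = signal.freqs(b, a, worN=2 * np.pi * f)      # evaluate H at 2*pi*f rad/s

H_eq = 1.0 / np.sqrt(1.0 + (f / fu) ** (2 * n))    # Eq. 1.65
print(np.allclose(np.abs(H), H_eq))                # True: magnitudes agree
```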
1.7. Bandwidth Of Digital Data
1.7.1 Baseband versus Bandpass
An easy way to translate the spectrum of a low-pass or baseband
signal x(t) to a higher frequency is to multiply or heterodyne the
baseband signal with a carrier wave \cos 2\pi f_c t.
The result x_c(t) is called a double-sideband (DSB) modulated signal:
x_c(t) = x(t)\,\cos 2\pi f_c t    (1.70)
From the frequency shifting theorem:
X_c(f) = \tfrac{1}{2}\,[X(f - f_c) + X(f + f_c)]    (1.71)
Generally the carrier-wave frequency is much higher than the
bandwidth of the baseband signal, f_c \gg f_m, and therefore
W_{DSB} = 2 f_m.
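An added numerical sketch of Eqs. 1.70 and 1.71, with assumed message and carrier frequencies: heterodyning a 500-Hz tone to a 10-kHz carrier places spectral lines at f_c ± f_m, so the DSB bandwidth is 2 f_m.

```python
import numpy as np

# Sketch: heterodyne a baseband tone with a carrier (Eq. 1.70) and inspect
# the spectrum for the two shifted copies predicted by Eq. 1.71.
fs = 100_000.0
t = np.arange(0, 0.1, 1/fs)
fm, fc = 500.0, 10_000.0                  # message and carrier freqs (assumed)
x = np.cos(2 * np.pi * fm * t)            # baseband signal, bandwidth ~fm
xc = x * np.cos(2 * np.pi * fc * t)       # DSB signal, Eq. 1.70

Xc = np.abs(np.fft.rfft(xc)) / len(t)
f = np.fft.rfftfreq(len(t), d=1/fs)
# Energy concentrates at fc +/- fm, i.e. the DSB bandwidth is 2*fm = 1 kHz
peaks = f[np.argsort(Xc)[-2:]]
print(np.sort(peaks))                     # ~[9500. 10500.]
```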
1.7.2 Bandwidth Dilemma
Theorems of communication and information theory are based on
the assumption of strictly bandlimited channels.
The mathematical description of a real signal does not permit the
signal to be both strictly duration limited and strictly bandlimited.
All bandwidth criteria have in common the attempt to specify a
measure of the width, W, of a nonnegative real-valued spectral
density defined for all frequencies |f| < \infty.
The single-sided power spectral density for a single heterodyned
pulse x_c(t) takes the analytical form:
G_x(f) = T \left[ \frac{\sin \pi (f - f_c) T}{\pi (f - f_c) T} \right]^2    (1.73)
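The added sketch below evaluates Eq. 1.73 for an assumed pulse duration and carrier frequency and locates the first spectral nulls, illustrating the null-to-null bandwidth 2/T used in criterion (c) below.

```python
import numpy as np

# Sketch: evaluate Eq. 1.73 and find the first nulls of the sinc^2 spectrum.
T, fc = 1e-3, 100e3                       # 1-ms pulse on a 100-kHz carrier (assumed)
f = np.linspace(fc - 5/T, fc + 5/T, 100001)
arg = (f - fc) * T
Gx = T * np.sinc(arg) ** 2                # np.sinc(x) = sin(pi*x)/(pi*x), as in Eq. 1.73

# Nulls occur where (f - fc)*T is a nonzero integer; the first pair is at fc +/- 1/T
first_nulls = fc + np.array([-1.0, 1.0]) / T
print(first_nulls, 2 / T)                 # null-to-null bandwidth = 2/T = 2 kHz
print(Gx[np.argmin(np.abs(f - first_nulls[1]))])   # ~0 at the first null
```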
Different Bandwidth Criteria
(a) Half-power bandwidth.
(b) Equivalent rectangular or noise-equivalent bandwidth.
(c) Null-to-null bandwidth.
(d) Fractional power containment bandwidth.
(e) Bounded power spectral density.
(f) Absolute bandwidth.
Assignment # 2: Due date: March 28, 2019
Problems: 1.11, 1.15, 1.19, and 1.20