Digital Signal Processing: Markus Kuhn

This document discusses digital signal processing. It begins by defining signals and describing how they may need to be transformed to extract or prepare embedded information. It then contrasts analog signal processing using passive electronics with digital signal processing using analog-to-digital converters, CPUs, DSPs and other chips. Digital processing offers advantages like noise control, linearity and flexibility but can require more power. Typical applications of DSP include communication systems, consumer electronics, astronomy, and experimental physics. The document lists some objectives of learning DSP and recommends textbooks on the topic.


Digital Signal Processing

Markus Kuhn

Computer Laboratory

http://www.cl.cam.ac.uk/teaching/1213/DSP/

Michaelmas 2012 – Part II

Signal processing

Signals may have to be transformed in order to
→ amplify or filter out embedded information
→ detect patterns
→ prepare the signal to survive a transmission channel
→ prevent interference with other signals sharing a medium
→ undo distortions contributed by a transmission channel
→ compensate for sensor deficiencies
→ find information encoded in a different domain

To do so, we also need
→ methods to measure, characterise, model and simulate transmission channels
→ mathematical tools that split common channels and transformations into easily manipulated building blocks

Signals

→ flow of information
→ measured quantity that varies with time (or position)
→ electrical signal received from a transducer (microphone, thermometer, accelerometer, antenna, etc.)
→ electrical signal that controls a process

Continuous-time signals: voltage, current, temperature, speed, . . .
Discrete-time signals: daily minimum/maximum temperature, lap intervals in races, sampled continuous signals, . . .

Electronics (unlike optics) can only deal easily with time-dependent signals, therefore spatial signals, such as images, are typically first converted into a time signal with a scanning process (TV, fax, etc.).

Analog electronics

Passive networks (resistors, capacitors, inductances, crystals, SAW filters), non-linear elements (diodes, . . . ), (roughly) linear operational amplifiers

Advantages:
→ passive networks are highly linear over a very large dynamic range and large bandwidths
→ analog signal-processing circuits require little or no power
→ analog circuits cause little additional interference

[Figure: passive LCR network with input Uin and output Uout, its frequency response |Uout/Uin| over ω (= 2πf) with a peak near 1/√(LC), and its step response over t, governed by]

(Uin − Uout)/R = (1/L) · ∫_{−∞}^{t} Uout dτ + C · dUout/dt
Digital signal processing

Analog/digital and digital/analog converter, CPU, DSP, ASIC, FPGA.

Advantages:
→ noise is easy to control after initial quantization
→ highly linear (within limited dynamic range)
→ complex algorithms fit into a single chip
→ flexibility, parameters can easily be varied in software
→ digital processing is insensitive to component tolerances, aging, environmental conditions, electromagnetic interference

But:
→ discrete-time processing artifacts (aliasing)
→ can require significantly more power (battery, cooling)
→ digital clock and switching cause interference

Objectives

By the end of the course, you should be able to
→ apply basic properties of time-invariant linear systems
→ understand sampling, aliasing, convolution, filtering, the pitfalls of spectral estimation
→ explain the above in time and frequency domain representations
→ use filter-design software
→ visualise and discuss digital filters in the z-domain
→ use the FFT for convolution, deconvolution, filtering
→ implement, apply and evaluate simple DSP applications in MATLAB
→ apply transforms that reduce correlation between several signal sources
→ understand and explain limits in human perception that are exploited by lossy compression techniques
→ understand the basic principles of several widely-used modulation and audio-visual coding techniques.

Typical DSP applications

→ communication systems: modulation/demodulation, channel equalization, echo cancellation
→ consumer electronics: perceptual coding of audio and video on DVDs, speech synthesis, speech recognition
→ music: synthetic instruments, audio effects, noise reduction
→ medical diagnostics: magnetic-resonance and ultrasonic imaging, computer tomography, ECG, EEG, MEG, AED, audiology
→ geophysics: seismology, oil exploration
→ astronomy: VLBI, speckle interferometry
→ experimental physics: sensor-data evaluation
→ aviation: radar, radio navigation
→ security: steganography, digital watermarking, biometric identification, surveillance systems, signals intelligence, electronic warfare
→ engineering: control systems, feature extraction for pattern recognition

Textbooks

→ R.G. Lyons: Understanding digital signal processing. 3rd ed., Prentice-Hall, 2010. (£68)
→ A.V. Oppenheim, R.W. Schafer: Discrete-time signal processing. 3rd ed., Prentice-Hall, 2007. (£47)
→ J. Stein: Digital signal processing – a computer science perspective. Wiley, 2000. (£133)
→ S.W. Smith: Digital signal processing – a practical guide for engineers and scientists. Newnes, 2003. (£48)
→ K. Steiglitz: A digital signal processing primer – with applications to digital audio and computer music. Addison-Wesley, 1996. (£67)
→ Sanjit K. Mitra: Digital signal processing – a computer-based approach. McGraw-Hill, 2002. (£38)
Units and decibel

Communications engineers often use logarithmic units:
→ Quantities often vary over many orders of magnitude → difficult to agree on a common SI prefix (nano, micro, milli, kilo, etc.)
→ Quotient of quantities (amplification/attenuation) usually more interesting than difference
→ Signal strength usefully expressed as field quantity (voltage, current, pressure, etc.) or power, but quadratic relationship between these two (P = U²/R = I²R) rather inconvenient
→ Perception is logarithmic (Weber/Fechner law → slide 175)
Plus: Using magic special-purpose units has its own odd attractions (→ typographers, navigators)

Neper (Np) denotes the natural logarithm of the quotient of a field quantity F and a reference value F0. (rarely used today)
Bel (B) denotes the base-10 logarithm of the quotient of a power P and a reference power P0. Common prefix: 10 decibel (dB) = 1 bel.

Sequences and systems

A discrete sequence {xn}_{n=−∞}^{∞} is a sequence of numbers

. . . , x−2, x−1, x0, x1, x2, . . .

where xn denotes the n-th number in the sequence (n ∈ Z). A discrete sequence maps integer numbers onto real (or complex) numbers.
We normally abbreviate {xn}_{n=−∞}^{∞} to {xn}, or to {xn}n if the running index is not obvious. The notation is not well standardized. Some authors write x[n] instead of xn, others x(n).

Where a discrete sequence {xn} samples a continuous function x(t) as

xn = x(ts · n) = x(n/fs),

we call ts the sampling period and fs = 1/ts the sampling frequency.

A discrete system T receives as input a sequence {xn} and transforms it into an output sequence {yn} = T{xn}:

. . . , x2, x1, x0, x−1, . . .  →  [discrete system T]  →  . . . , y2, y1, y0, y−1, . . .

Where P is some power and P0 a 0 dB reference power, or equally where F is a field quantity and F0 the corresponding reference level:

10 dB · log10(P/P0) = 20 dB · log10(F/F0)

Common reference values are indicated with a suffix after “dB”:

0 dBW   = 1 W
0 dBm   = 1 mW = −30 dBW
0 dBµV  = 1 µV
0 dBSPL = 20 µPa (sound pressure level)
0 dBSL  = perception threshold (sensation limit)

Remember:
3 dB = 2× power,  6 dB = 2× voltage/pressure/etc.
10 dB = 10× power,  20 dB = 10× voltage/pressure/etc.

W.H. Martin: Decibel – the new name for the transmission unit. Bell System Technical Journal, January 1929.

Some simple sequences

Unit-step sequence: un = 0 for n < 0, and un = 1 for n ≥ 0.

Impulse sequence: δn = 1 for n = 0, and δn = 0 for n ≠ 0; equivalently, δn = un − un−1.
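The two elementary sequences and the decibel identities above are easy to check numerically. A minimal MATLAB/GNU Octave sketch (the index range is an arbitrary choice):

n = -5:5;
u = double(n >= 0);         % unit-step sequence u_n
d = double(n == 0);         % impulse sequence delta_n
all(d == [u(1) diff(u)])    % 1: confirms delta_n = u_n - u_{n-1} (u is 0 before the window)
10*log10(2)                 % ~3.01, i.e. 3 dB ~ doubling of power
20*log10(2)                 % ~6.02, i.e. 6 dB ~ doubling of voltage/pressure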
Sinusoidal sequences

A cosine wave, frequency f, phase offset θ:

x(t) = cos(2πf t + θ)

Sampling it at sampling rate fs results in the discrete sequence {xn}:

xn = cos(2πf n/fs + θ)

Exercise 1 Use the following MATLAB (or GNU Octave) code to display 41 samples (≈ 1/200 s = 5 ms) of a 400 Hz sinusoidal wave sampled at 8 kHz:

n=0:40; fs=8000;
f=400; x=cos(2*pi*f*n/fs); stem(n, x); ylim([-1.1 1.1])

Try frequencies f of 0, 1000, 2000, 3000, 4000, and 5000 Hz. Also try to negate these frequencies. Do any of the resulting sequences look identical, or are they negatives of each other? Also try sin instead of cos. Finally, try adding θ phase offsets of ±π/4, ±π/2, and ±π.

Types of discrete systems

A causal system cannot look into the future:
yn = f(xn, xn−1, xn−2, . . .)

A memory-less system depends only on the current input value:
yn = f(xn)

A delay system shifts a sequence in time:
yn = xn−d

T is a time-invariant system if for any d
{yn} = T{xn} ⟺ {yn−d} = T{xn−d}.

T is a linear system if for any pair of sequences {xn} and {x′n}
T{a · xn + b · x′n} = a · T{xn} + b · T{x′n}.

Properties of sequences

A sequence {xn} is

periodic             ⇔ ∃k > 0: ∀n ∈ Z: xn = xn+k
absolutely summable  ⇔ Σ_{n=−∞}^{∞} |xn| < ∞
square summable      ⇔ Σ_{n=−∞}^{∞} |xn|² < ∞   (an “energy signal”; the sum is its “energy”)
a “power signal”     ⇔ 0 < lim_{k→∞} 1/(1+2k) · Σ_{n=−k}^{k} |xn|² < ∞   (the limit is its “average power”)

This energy/power terminology reflects that if U is a voltage supplied to a load resistor R, then P = UI = U²/R is the power consumed, and ∫ P(t) dt the energy. It is used even if we drop physical units (e.g., volts) for simplicity in calculations.

Example: M-point moving average system

yn = (1/M) · Σ_{k=0}^{M−1} xn−k = (xn−M+1 + · · · + xn−1 + xn)/M

It is causal, linear, time-invariant, with memory. With M = 4:

[Figure: an input sequence x and its 4-point moving average y]
Example: exponential averaging system

yn = α · xn + (1 − α) · yn−1 = α · Σ_{k=0}^{∞} (1 − α)^k · xn−k

It is causal, linear, time-invariant, with memory. With α = 1/2:

[Figure: an input sequence x and its exponential average y]

Example: backward difference system

yn = xn − xn−1

It is causal, linear, time-invariant, with memory.

[Figure: an input sequence x and its backward difference y]

Example: accumulator system

yn = Σ_{k=−∞}^{n} xk

It is causal, linear, time-invariant, with memory.

[Figure: an input sequence x and its running sum y]

Other examples

Time-invariant non-linear memory-less systems:

yn = xn²,   yn = log2 xn,   yn = max{min{⌊256 xn⌋, 255}, 0}

Linear but not time-invariant systems:

yn = xn for n ≥ 0 and yn = 0 for n < 0   (i.e., yn = xn · un)
yn = x⌊n/4⌋
yn = xn · ℜ(e^{jωn})

Linear time-invariant non-causal systems:

yn = (xn−1 + xn+1)/2
yn = Σ_{k=−9}^{9} xn+k · [sin(πkω)/(πkω)] · [0.5 + 0.5 · cos(πk/10)]
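The example systems with memory above map directly onto MATLAB's filter function once their difference equations are written with an input-side coefficient vector b and an output-side vector a (see the constant-coefficient difference equations that follow). A minimal sketch with an arbitrary test input:

x = [zeros(1,4), 1, zeros(1,15)];          % test input: a delayed unit impulse (arbitrary)
alpha = 0.5;
y_avg  = filter([1 1 1 1]/4, 1, x);        % 4-point moving average
y_exp  = filter(alpha, [1 -(1-alpha)], x); % exponential averaging: y_n = a*x_n + (1-a)*y_{n-1}
y_diff = filter([1 -1], 1, x);             % backward difference:   y_n = x_n - x_{n-1}
y_acc  = filter(1, [1 -1], x);             % accumulator:           y_n = y_{n-1} + x_n
stem(0:length(x)-1, y_exp)                 % e.g. plot one of the resulting impulse responses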
Constant-coefficient difference equations

Of particular practical interest are causal linear time-invariant systems of the form

yn = b0 · xn − Σ_{k=1}^{N} ak · yn−k

[Block diagram: yn formed from b0·xn plus feedback taps −a1·yn−1, −a2·yn−2, −a3·yn−3 through delay elements z⁻¹]

Block diagram representation of sequence operations:
Addition: xn, x′n → xn + x′n
Multiplication by constant: xn → a → a·xn
Delay: xn → z⁻¹ → xn−1
The ak and bm are constant coefficients.

or

yn = Σ_{m=0}^{M} bm · xn−m

[Block diagram: a feed-forward delay line xn, xn−1, xn−2, xn−3 weighted by b0, b1, b2, b3 and summed into yn]

or the combination of both:

Σ_{k=0}^{N} ak · yn−k = Σ_{m=0}^{M} bm · xn−m

[Block diagram: feed-forward taps b0 . . . b3 combined with feedback taps −a1 . . . −a3, with an a0⁻¹ gain on the output]

The MATLAB function filter is an efficient implementation of the last variant.

Convolution

Another example of an LTI system is

yn = Σ_{k=−∞}^{∞} ak · xn−k

where {ak} is a suitably chosen sequence of coefficients.
This operation over sequences is called convolution and is defined as

{pn} ∗ {qn} = {rn}  ⟺  ∀n ∈ Z: rn = Σ_{k=−∞}^{∞} pk · qn−k.

If {yn} = {an} ∗ {xn} is a representation of an LTI system T, with {yn} = T{xn}, then we call the sequence {an} the impulse response of T, because {an} = T{δn}.

Convolution examples

[Figure: six short example sequences A–F and the results of the convolutions A∗B, A∗C, C∗A, A∗E, D∗E and A∗F]
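For a finite impulse response, the convolution definition above and the filter form of the difference equation coincide. A quick MATLAB/GNU Octave check (impulse response and input chosen arbitrarily):

a = [1 2 1]/4;                    % an assumed finite impulse response {a_k}
x = randn(1, 20);                 % an assumed input sequence
y1 = filter(a, 1, x);             % difference-equation view
y2 = conv(a, x);                  % convolution view (also returns the trailing samples)
max(abs(y1 - y2(1:length(x))))    % ~0: identical where both are defined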
Properties of convolution

For arbitrary sequences {pn}, {qn}, {rn} and scalars a, b:

→ Convolution is associative: ({pn} ∗ {qn}) ∗ {rn} = {pn} ∗ ({qn} ∗ {rn})
→ Convolution is commutative: {pn} ∗ {qn} = {qn} ∗ {pn}
→ Convolution is linear: {pn} ∗ {a · qn + b · rn} = a · ({pn} ∗ {qn}) + b · ({pn} ∗ {rn})
→ The impulse sequence (slide 12) is neutral under convolution: {pn} ∗ {δn} = {δn} ∗ {pn} = {pn}
→ Sequence shifting is equivalent to convolving with a shifted impulse: {pn−d} = {pn} ∗ {δn−d}

Exercise 2 What type of discrete system (linear/non-linear, time-invariant/non-time-invariant, causal/non-causal, memory-less, etc.) is:

(a) yn = |xn|
(b) yn = −xn−1 + 2xn − xn+1
(c) yn = Π_{i=0}^{8} xn−i
(d) yn = ½ (x2n + x2n+1)
(e) yn = (3xn−1 + xn−2)/xn−3
(f) yn = xn · e^{n/14}
(g) yn = xn · un
(h) yn = Σ_{i=−∞}^{∞} xi · δi−n+2

Exercise 3 Prove that convolution is (a) commutative, (b) associative.

Proof: all LTI systems just apply convolution

Any sequence {xn} can be decomposed into a weighted sum of shifted impulse sequences:

{xn} = Σ_{k=−∞}^{∞} xk · {δn−k}

Let’s see what happens if we apply a linear(∗) time-invariant(∗∗) system T to such a decomposed sequence:

T{xn} = T( Σ_{k=−∞}^{∞} xk · {δn−k} )
  (∗)  = Σ_{k=−∞}^{∞} xk · T{δn−k}
  (∗∗) = Σ_{k=−∞}^{∞} xk · ({δn−k} ∗ T{δn})
       = ( Σ_{k=−∞}^{∞} xk · {δn−k} ) ∗ T{δn}
       = {xn} ∗ T{δn}     q.e.d.

⇒ The impulse response T{δn} fully characterizes an LTI system.

Exercise 4 MATLAB/GNU Octave commands (similar to)

x = [ 0 0 0 -4 0 0 0 0 0 0 2 2 2 2 ...
      2 0 -3 -3 -3 0 0 0 0 0 1 -4 0 4 ...
      3 -1 2 -3 -1 0 2 -4 -2 1 0 0 0 3 ...
      -3 3 -3 3 -3 3 -3 3 -3 0 0 0 0 0 0 ];
n = 0:length(x)-1;
y = filter([1 1 1 1]/4, [1], x);
plot(n, x, 'bx-', n, y, 'ro-');

produced the plot on slide 16 to illustrate the 4-point moving average system. The standard library function filter(b, a, x) applies to the finite sequence x the discrete system defined by the constant-coefficient difference equation with coefficient vectors b and a (see slide 22 and “help filter”). Change this program to generate the corresponding plot for the

(a) exponential averaging system (slide 17)
(b) accumulator system (slide 18)
(c) backward difference system (slide 19)
Exercise 5 A finite-length sequence is non-zero only at a finite number of Convolution: optics example
positions. If m and n are the first and last non-zero positions, respectively, If a projective lens is out of focus, the blurred image is equal to the
then we call n − m + 1 the length of that sequence. What maximum length original image convolved with the aperture shape (e.g., a filled circle):
can the result of convolving two sequences of length k and l have?

Exercise 6 The length-3 sequence a0 = −3, a1 = 2, a2 = 1 is convolved


with a second sequence {bn } of length 5.
(a) Write down this linear operation as a matrix multiplication involving a ∗ =
matrix A, a vector ~b ∈ R5 , and a result vector ~c.
(b) Use MATLAB to multiply your matrix by the vector ~b = (1, 0, 0, 2, 2)
and compare the result with that of using the conv function.
(c) Use the MATLAB facilities for solving systems of linear equations to Point-spread function h (disk, r = as
2f
): image plane focal plane
undo the above convolution step.
1
x2 + y 2 ≤ r2

r2 π
,
h(x, y) =
Exercise 7 (a) Find a pair of sequences {an } and {bn }, where each one 0, x2 + y 2 > r2
a
contains at least three different values and where the convolution {an }∗{bn } Original image I, blurred image B = I ∗ h, i.e.
results in an all-zero sequence.
ZZ
(b) Does every LTI system T have an inverse LTI system T −1 such that B(x, y) = I(x − x′ , y − y ′ ) · h(x′ , y ′ ) · dx′ dy ′ s
{xn } = T −1 T {xn } for all sequences {xn }? Why? f
29 31

Direct form I and II implementations Convolution: electronics example


xn b0 a−1
0 yn x n a−1
0 b0 yn R
Uin
Uin U
√in
2

Uout
−1 −1 −1
z z z Uin C Uout
b1 −a1 −a1 b1
xn−1 yn−1 Uout

z −1 z −1 = z −1 0
1/RC ω (= 2πf)
b2 −a2 −a2 b2 t 0
xn−2 yn−2 Any passive network (R, L, C) convolves its input voltage Uin with an
z −1
z −1
z −1 impulse response function h, leading to Uout = Uin ∗ h, that is
b3 −a3 −a3 b3 Z ∞
xn−3 yn−3
Uout (t) = Uin (t − τ ) · h(τ ) · dτ
The block diagram representation of the constant-coefficient difference −∞
equation on slide 22 is called the direct form I implementation. In this example:
The number of delay elements can be halved by using the commuta-  −t
Uin − Uout dUout 1
tivity of convolution to swap the two feedback loops, leading to the · e RC , t ≥ 0
=C· , h(t) = RC
direct form II implementation of the same LTI system. R dt 0, t<0
These two forms are only equivalent with ideal arithmetic (no rounding errors and range limits).
30 32
Why are sine waves useful? Why are exponential functions useful?
1) Adding together sine waves of equal frequency, but arbitrary ampli- Adding together two exponential functions with the same base z, but
tude and phase, results in another sine wave of the same frequency: different scale factor and offset, results in another exponential function
with the same base:
A1 · sin(ωt + ϕ1 ) + A2 · sin(ωt + ϕ2 ) = A · sin(ωt + ϕ)
A1 · z t+ϕ1 + A2 · z t+ϕ2 = A1 · z t · z ϕ1 + A2 · z t · z ϕ2
with
q = (A1 · z ϕ1 + A2 · z ϕ2 ) · z t = A · z t
A= A21 + A22 + 2A1 A2 cos(ϕ2 − ϕ1 )
Likewise, if we convolve a sequence {xn } of values
A1 sin ϕ1 + A2 sin ϕ2
tan ϕ = . . . , z −3 , z −2 , z −1 , 1, z, z 2 , z 3 , . . .
A1 cos ϕ1 + A2 cos ϕ2 A
A2 A2 · sin(ϕ2 )

Sine waves of any phase can be ϕ2 xn = z n with an arbitrary sequence {hn }, we get {yn } = {z n } ∗ {hn },
A1 A2 · cos(ϕ2 )
formed from sin and cos alone: ∞
X ∞
X ∞
X
ωt
ϕ1
ϕ A1 · sin(ϕ1 )
yn = xn−k · hk = z n−k · hk = z n · z −k · hk = z n · H(z)
A · sin(ωt + ϕ) = A1 · cos(ϕ1 ) k=−∞ k=−∞ k=−∞

a · sin(ωt) + b · cos(ωt) where H(z) is independent of n.


√ Exponential sequences are closed under convolution with
with a = A · cos(ϕ), b = A · sin(ϕ) and A = a2 + b2 , tan ϕ = ab . arbitrary sequences. The same applies in the continuous case.
33 35

Note: Convolution of a discrete sequence {xn } with another sequence Why are complex numbers so useful?
{yn } is nothing but adding together scaled and delayed copies of {xn }. 1) They give us all n solutions√(“roots”) of equations involving poly-
(Think of {yn } decomposed into a sum of impulses.) nomials up to degree n (the “ −1 = j ” story).
If {xn } is a sampled sine wave of frequency f , so is {xn } ∗ {yn }! 2) They give us the “great unifying theory” that combines sine and
=⇒ Sine-wave sequences form a family of discrete sequences exponential functions:
that is closed under convolution with arbitrary sequences.
1 jωt 
cos(ωt) = e + e− jωt
The same applies for continuous sine waves and convolution. 2
1 
sin(ωt) = e jωt − e− jωt
2) Sine waves are orthogonal to each other: 2j
Z ∞ or
1 j(ωt+ϕ) 
sin(ω1 t + ϕ1 ) · sin(ω2 t + ϕ2 ) dt “=” 0 cos(ωt + ϕ) = e + e− j(ωt+ϕ)
−∞ 2
or
⇐⇒ ω1 6= ω2 ∨ ϕ1 − ϕ2 = (2k + 1)π/2 (k ∈ Z)
cos(ωn + ϕ) = ℜ(e j(ωn+ϕ) ) = ℜ[(e jω )n · e jϕ ]
They can be used to form an orthogonal function basis for a transform.
The term “orthogonal” is used here in the context of an (infinitely dimensional) vector space, sin(ωn + ϕ) = ℑ(e j(ωn+ϕ) ) = ℑ[(e jω )n · e jϕ ]
where the “vectors” Rare functions of the form f : R → R (or f : R → C) and the scalar product

is defined as f · g = −∞ f (t) · g(t) dt. Notation: ℜ(a + jb) := a and ℑ(a + jb) := b where j2 = −1 and a, b ∈ R.
34 36
We can now represent sine waves as projections of a rotating complex Recall: Fourier transform
vector. This allows us to represent sine-wave sequences as exponential We define the Fourier integral transform and its inverse as
sequences with basis e jω . Z ∞
A phase shift in such a sequence corresponds to a rotation of a complex F{g(t)}(f ) = G(f ) = g(t) · e−2π jf t dt
vector. −∞
Z ∞
3) Complex multiplication allows us to modify the amplitude and phase
of a complex rotating vector using a single operation and value. F −1 {G(f )}(t) = g(t) = G(f ) · e2π jf t df
−∞
Rotation of a 2D vector in (x, y)-form is notationally slightly messy,
Many equivalent forms of the Fourier transform are used in the literature. There is no strong
but fortunately j2 = −1 does exactly what is required here: consensus on whether the forward transform uses e−2π jf t and the backwards transform e2π jf t ,
or vice versa. The above form uses the ordinary frequency f , whereas some authors prefer the
      angular frequency ω = 2πf :
x3 x2 −y2 x1
= · (x3 , y3 )
y3 y2 x2 y1 Z ∞
  F {h(t)}(ω) = H(ω) = α h(t) · e∓ jωt dt
x 1 x 2 − y1 y2 (−y2 , x2 ) −∞
=
x 1 y2 + x 2 y 1 (x2 , y2 ) Z ∞
(x1 , y1 ) F −1 {H(ω)}(t) = h(t) = β H(ω)· e± jωt dω
−∞
z1 = x1 + jy1 , z2 = x2 + jy2
This substitution introduces factors α and β such that αβ = 1/(2π). Some authors set α = 1
z1 · z2 = x1 x2 − y1 y2 + j(x1 y2 + x2 y1 ) and β = 1/(2π), to keep √the convolution theorem free of a constant prefactor; others prefer the
unitary form α = β = 1/ 2π, in the interest of symmetry.
37 39

Complex phasors Properties of the Fourier transform


Amplitude and phase are two distinct characteristics of a sine function
If
that are inconvenient to keep separate notationally.
x(t) •−◦ X(f ) and y(t) •−◦ Y (f )
Complex functions (and discrete sequences) of the form
are pairs of functions that are mapped onto each other by the Fourier
A · e j(ωt+ϕ) = A · [cos(ωt + ϕ) + j · sin(ωt + ϕ)] transform, then so are the following pairs.
(where j2 = −1) are able to represent both amplitude and phase in Linearity:
one single algebraic object. ax(t) + by(t) •−◦ aX(f ) + bY (f )
Thanks to complex multiplication, we can also incorporate in one single
factor both a multiplicative change of amplitude and an additive change Time scaling:  
of phase of such a function. This makes discrete sequences of the form 1 f
x(at) •−◦ X
jωn
|a| a
xn = e
Frequency scaling:
eigensequences with respect to an LTI system T , because for each ω,
 
there is a complex number (eigenvalue) H(ω) such that 1 t
x •−◦ X(af )
T {xn } = H(ω) · {xn } |a| a
In the notation of slide 35, where the argument of H is the base, we would write H(e jω ).
38 40
Time shifting:
x(t − ∆t) •−◦ X(f) · e^{−2πjf ∆t}

Frequency shifting:
x(t) · e^{2πj ∆f t} •−◦ X(f − ∆f)

Parseval’s theorem (total energy):
∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} |X(f)|² df

Convolution theorem

Continuous form:
F{(f ∗ g)(t)} = F{f(t)} · F{g(t)}
F{f(t) · g(t)} = F{f(t)} ∗ F{g(t)}

Discrete form:
{xn} ∗ {yn} = {zn}  ⟺  X(e^{jω}) · Y(e^{jω}) = Z(e^{jω})

Convolution in the time domain is equivalent to (complex) scalar multiplication in the frequency domain.
Convolution in the frequency domain corresponds to scalar multiplication in the time domain.

Proof: z(r) = ∫_s x(s) y(r − s) ds  ⟺  ∫_r z(r) e^{−jωr} dr = ∫_r ∫_s x(s) y(r − s) e^{−jωr} ds dr
= ∫_s x(s) ∫_r y(r − s) e^{−jωr} dr ds = ∫_s x(s) e^{−jωs} ∫_r y(r − s) e^{−jω(r−s)} dr ds
(substituting t := r − s)
= ∫_s x(s) e^{−jωs} ∫_t y(t) e^{−jωt} dt ds = ∫_s x(s) e^{−jωs} ds · ∫_t y(t) e^{−jωt} dt.
(The same proof applies with Σ instead of ∫.)
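The discrete form of the theorem is easy to verify numerically with the FFT, provided both sequences are zero-padded to the length of their linear convolution so that the DFT's implicit periodicity causes no wrap-around (discussed in more detail on the later FFT-based convolution slides). A minimal MATLAB/GNU Octave sketch with arbitrary test data:

x = randn(1, 12); y = randn(1, 8);     % arbitrary test sequences
N = length(x) + length(y) - 1;         % length of their linear convolution
z1 = conv(x, y);
z2 = ifft(fft(x, N) .* fft(y, N));     % multiply spectra, transform back
max(abs(z1 - z2))                      % ~1e-15: the two results agree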

Fourier transform example: rect and sinc Dirac delta function


The Fourier transform of the “rectangular function” The continuous equivalent of the impulse sequence {δn } is known as
 Dirac delta function δ(x). It is a generalized function, defined such
1
 1 if |t| < 2
 1 that
1
rect(t) = 2
if |t| = 21 

 0 otherwise 0 0, x 6= 0 1
− 21 0 1 δ(x) =
2 ∞, x = 0
Z ∞
is the “(normalized) sinc function”
δ(x) dx = 1
Z 1 −∞
2
−2π jf t sin πf 0 x
F{rect(t)}(f ) = e dt = = sinc(f )
− 21 πf and can be thought of as the limit of function sequences such as

and vice versa 0, |x| ≥ 1/n
δ(x) = lim
F{sinc(t)}(f ) = rect(f ). n→∞ n/2, |x| < 1/n
or
Some noteworthy properties of these functions: n 2 2
R∞ R∞ 1 δ(x) = lim √ e−n x
• −∞ sinc(t) dt = 1 = −∞ rect(t) dt n→∞ π
• sinc(0) = 1 = rect(0) 0 The delta function is mathematically speaking not a function, but a distribution, that is an
• ∀n ∈ Z \ {0} : sinc(n) = 0 −3 −2 −1 0 1 2 3 expression that is only defined when integrated.
42 44
Some properties of the Dirac delta function:
Z ∞
Fourier transform symmetries
f (x)δ(x − a) dx = f (a) We call a function x(t)
−∞
Z ∞ odd if x(−t) = −x(t)
±2π jxa
e dx = δ(a) even if x(−t) = x(t)
−∞
∞ ∞ and ·∗ is the complex conjugate, such that (a + jb)∗ = (a − jb).
X
±2π jnxa 1 X
e = δ(x − n/a) Then
n=−∞
|a| n=−∞
1 x(t) is real ⇔ X(−f ) = [X(f )]∗
δ(ax) = δ(x) x(t) is imaginary ⇔ X(−f ) = −[X(f )]∗
|a|
x(t) is even ⇔ X(f ) is even
Fourier transform: x(t) is odd ⇔ X(f ) is odd
Z ∞
F{δ(t)}(f ) = δ(t) · e−2π jf t dt = e0 = 1 x(t) is real and even ⇔ X(f ) is real and even
−∞ x(t) is real and odd ⇔ X(f ) is imaginary and odd
Z ∞ x(t) is imaginary and even ⇔ X(f ) is imaginary and even
F −1 {1}(t) = 1 · e2π jf t df = δ(t) x(t) is imaginary and odd ⇔ X(f ) is real and odd
−∞

45 47

Sine and cosine in the frequency domain Example: amplitude modulation


1 2π jf0 t 1 −2π jf0 t 1 2π jf0 t 1 Communication channels usually permit only the use of a given fre-
cos(2πf0 t) = e + e sin(2πf0 t) = e − e−2π jf0 t quency interval, such as 300–3400 Hz for the analog phone network or
2 2 2j 2j
1 1 590–598 MHz for TV channel 36. Modulation with a carrier frequency
F{cos(2πf0 t)}(f ) = δ(f − f0 ) + δ(f + f0 ) fc shifts the spectrum of a signal x(t) into the desired band.
2 2
j j Amplitude modulation (AM):
F{sin(2πf0 t)}(f ) = − δ(f − f0 ) + δ(f + f0 )
2 2
ℜ ℜ y(t) = A · cos(2πtfc ) · x(t)
1 1
2 2 X(f ) Y (f )
ℑ ℑ
1
2 j 1
2 j ∗ =

−f0 f0 f −f0 f0 f −fl 0 fl f −fc fc f −fc 0 fc f

The spectrum of the baseband signal in the interval −fl < f < fl is
As any x(t) ∈ R can be decomposed into sine and cosine functions, the spectrum of any real-
valued signal will show the symmetry X(e jω ) = [X(e− jω )]∗ , where ∗ denotes the complex shifted by the modulation to the intervals ±fc − fl < f < ±fc + fl .
conjugate (i.e., negated imaginary part). How can such a signal be demodulated?
46 48
Sampling using a Dirac comb Sampling and aliasing
The loss of information in the sampling process that converts a con-
tinuous function x(t) into a discrete sequence {xn } defined by sample
cos(2π tf)
cos(2π t(k⋅ f ± f))
xn = x(ts · n) = x(n/fs ) s

can be modelled through multiplying x(t) by a comb of Dirac impulses



X 0
s(t) = ts · δ(t − ts · n)
n=−∞

to obtain the sampled function

x̂(t) = x(t) · s(t)

The function x̂(t) now contains exactly the same information as the
discrete sequence {xn }, but is still in a form that can be analysed using Sampled at frequency fs , the function cos(2πtf ) cannot be distin-
the Fourier transform on continuous functions. guished from cos[2πt(kfs ± f )] for any k ∈ Z.
49 51
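This indistinguishability is easy to demonstrate; a minimal MATLAB/GNU Octave sketch (frequencies chosen arbitrarily for illustration):

fs = 8000; n = 0:39;                   % sampling rate and sample index
x1 = cos(2*pi*1000*n/fs);              % a 1 kHz tone
x2 = cos(2*pi*(fs-1000)*n/fs);         % its alias at fs - 1 kHz = 7 kHz
max(abs(x1 - x2))                      % ~1e-15: identical sample sequences
stem(n, x1)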

The Fourier transform of a Dirac comb Frequency-domain view of sampling



X ∞
X x(t) s(t) x̂(t)
s(t) = ts · δ(t − ts · n) = e2π jnt/ts
n=−∞ n=−∞
· =
is another Dirac comb
( ) ... ... ... ...

X 0 t −1/fs 0 1/fs t −1/fs 0 1/fs t
S(f ) = F ts · δ(t − ts n) (f ) =
n=−∞ X(f ) S(f ) X̂(f )
Z∞ X
∞ ∞  
−2π jf t
X n
ts · δ(t − ts n) e dt = δ f−
ts
. ∗ =
−∞ n=−∞ n=−∞
... ... ... ...

s(t) S(f ) 0 f −fs fs f −fs 0 fs f

Sampling a signal in the time domain corresponds in the frequency


domain to convolving its spectrum with a Dirac comb. The resulting
copies of the original signal spectrum in the spectrum of the sampled
−2ts −ts 0 ts 2ts t −2fs −fs 0 fs 2fs f
signal are called “images”.
50 52
Discrete-time Fourier transform Nyquist limit and anti-aliasing filters
The Fourier transform of a sampled signal
Without anti-aliasing filter With anti-aliasing filter

X single-sided Nyquist
x̂(t) = ts · xn · δ(t − ts · n) X(f ) bandwidth X(f ) limit = fs /2
anti-aliasing filter
n=−∞

is
Z ∞ ∞
X f
F{x̂(t)}(f ) = X̂(f ) = x̂(t) · e −2π jf t
dt = ts · xn · e−2π j fs n 0 f −fs 0 fs f
double-sided bandwidth
−∞ n=−∞
reconstruction filter
X̂(f ) X̂(f )
Some authors prefer the notation X̂(e jω ) = n xn · e− jωn to highlight the periodicity of X̂ and
P

its relationship with the z-transform (slide 108), where ω = 2π ff .


s

The inverse transform is


Z ∞ Z fs /2 f
−2fs −fs 0 fs 2fs f −2fs −fs 0 fs 2fs f
x̂(t) = X̂(f ) · e2π jf t df or xm = X̂(f ) · e2π j fs m df.
−∞ −fs /2
Anti-aliasing and reconstruction filters both suppress frequencies outside |f | < fs /2.

53 55

Nyquist limit and anti-aliasing filters Reconstruction of a continuous


band-limited waveform
If the (double-sided) bandwidth of a signal to be sampled is larger than The ideal anti-aliasing filter for eliminating any frequency content above
the sampling frequency fs , the images of the signal that emerge during fs /2 before sampling with a frequency of fs has the Fourier transform
sampling may overlap with the original spectrum. (
Such an overlap will hinder reconstruction of the original continuous 1 if |f | < f2s
H(f ) = = rect(ts f ).
signal by removing the aliasing frequencies with a reconstruction filter. 0 if |f | > f2s
Therefore, it is advisable to limit the bandwidth of the input signal to This leads, after an inverse Fourier transform, to the impulse response
the sampling frequency fs before sampling, using an anti-aliasing filter.  
sin πtfs 1 t
In the common case of a real-valued base-band signal (with frequency h(t) = fs · = · sinc .
πtfs ts ts
content down to 0 Hz), all frequencies f that occur in the signal with
non-zero power should be limited to the interval −fs /2 < f < fs /2. The original band-limited signal can be reconstructed by convolving
The upper limit fs /2 for the single-sided bandwidth of a baseband this with the sampled signal x̂(t), which eliminates the periodicity of
signal is known as the “Nyquist limit”. the frequency domain introduced by the sampling process:
x(t) = h(t) ∗ x̂(t)
Note that sampling h(t) gives the impulse function: h(t) · s(t) = δ(t).
54 56
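A toolbox-free MATLAB/GNU Octave sketch of this reconstruction, summing scaled and shifted sin(x)/x pulses over a finite set of samples (test signal and parameters are arbitrary assumptions; small edge effects remain because the sum is finite):

fs = 10; ts = 1/fs;
n  = 0:29;  t = 0:0.001:2.9;                      % sample index and fine time grid
x  = @(t) sin(2*pi*1.3*t) + 0.5*cos(2*pi*3.1*t);  % band-limited test signal (< fs/2)
xn = x(n*ts);                                     % its samples
xr = zeros(size(t));
for k = 1:length(n)
  a = pi*(t - n(k)*ts)/ts;                        % argument of the sinc pulse
  p = ones(size(a)); p(a ~= 0) = sin(a(a ~= 0))./a(a ~= 0);
  xr = xr + xn(k)*p;                              % add the scaled, shifted pulse
end
plot(t, x(t), t, xr, '--')                        % reconstruction vs. original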
Impulse response of ideal low-pass filter with cut-off frequency fs /2: Reconstruction filters
The mathematically ideal form of a reconstruction filter for suppressing
aliasing frequencies interpolates the sampled signal xn = x(ts · n) back
into the continuous waveform

X sin π(t − ts · n)
x(t) = xn · .
n=−∞
π(t − ts · n)

Choice of sampling frequency


Due to causality and economic constraints, practical analog filters can only approx-
imate such an ideal low-pass filter. Instead of a sharp transition between the “pass
band” (< fs /2) and the “stop band” (> fs /2), they feature a “transition band”
0
in which their signal attenuation gradually increases.
The sampling frequency is therefore usually chosen somewhat higher than twice
the highest frequency of interest in the continuous signal (e.g., 4×). On the other
hand, the higher the sampling frequency, the higher are CPU, power and memory
−3 −2.5 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 2.5 3 requirements. Therefore, the choice of sampling frequency is a tradeoff between
t⋅ f
s signal quality, analog filter cost and digital subsystem expenses.
57 59

Reconstruction filter example Exercise 8 Digital-to-analog converters cannot output Dirac pulses. In-
stead, for each sample, they hold the output voltage (approximately) con-
stant, until the next sample arrives. How can this behaviour be modeled
sampled signal mathematically as a linear time-invariant system, and how does it affect the
interpolation result spectrum of the output signal?
scaled/shifted sin(x)/x pulses

Exercise 9 Many DSP systems use “oversampling” to lessen the require-


ments on the design of an analog reconstruction filter. They use (a finite
approximation of) the sinc-interpolation formula to multiply the sampling
frequency fs of the initial sampled signal by a factor N before passing it to
the digital-to-analog converter. While this requires more CPU operations
and a faster D/A converter, the requirements on the subsequently applied
analog reconstruction filter are much less stringent. Explain why, and draw
schematic representations of the signal spectrum before and after all the
relevant signal-processing steps.

Exercise 10 Similarly, explain how oversampling can be applied to lessen


the requirements on the design of an analog anti-aliasing filter.
1 2 3 4 5
58 60
Band-pass signal sampling X(f ) δ(f + fc )

Sampled signals can also be reconstructed if their spectral components


remain entirely within the interval n · fs /2 < |f | < (n + 1) · fs /2 for

some n ∈ N. (The baseband case discussed so far is just n = 0.)
In this case, the aliasing copies of the positive and the negative frequencies will interleave instead −fc 0 fc f −fc 0 fc f
of overlap, and can therefore be removed again with a reconstruction filter with the impulse
response

Y (f ) Ẑ(f ) Z(f )
 
sin πtfs /2 2n + 1 sin πt(n + 1)fs sin πtnfs anti-aliasing filter
h(t) = fs · cos 2πtfs = (n + 1)fs − nfs .
πtfs /2 4 πt(n + 1)fs πtnfs
sample
−→
X(f ) anti-aliasing filter X̂(f ) reconstruction filter

−2fc −fc −B 0 B fc f −2fc −fc 0 B fc f


2 2

Shifting the center frequency fc of the interval of interest to 0 Hz (DC)


− 54 fs 0 5
4 fs f −fs −fs 0 fs fs f
2 2 makes the spectrum asymmetric. This leads to a complex-valued time-
n=2
domain representation (∃f : Z(f ) 6= [Z(−f )]∗ =⇒ ∃t : z(t) ∈ C \ R).
61 63

IQ sampling / complex baseband signal The real part ℜ(z(t)) is also known as “in-phase” signal (I) and the
Consider signal x(t) ∈ R in which only frequencies fl < |f | < fh are imaginary part ℑ(z(t)) as “quadrature” signal (Q).
of interest. This band has a centre frequency of fc = (fl + fh )/2 and
a bandwidth B = fh − fl . It can be sampled efficiently (at the lowest
⊗ sample Q

possible sampling frequency) by downconversion: x(t) −90◦ y(t) z(t) zn

→ Shift its spectrum by −fc : ⊗ sample I

y(t) = x(t) · e−2π jfc t


Q
cos(2πfc t)
→ Low-pass filter it with a cut-off frequency of B/2:
Z Consider:

z(t) = B y(τ ) · sinc((t − τ )B) · dτ •−◦ Z(f ) = Y (f ) · rect(f /B) • sin(x) = cos(x − 12 π)
−∞ • cos(x) · cos(x) = 1
+ 1
cos 2x
2 2
1 1
• sin(x) · sin(x) = 2
− 2
cos 2x I
→ Sample the result at sampling frequency B (or higher): • sin(x) · cos(x) = 0 + 1
2
1
sin 2x
1
• cos(x) · cos(x − ϕ) = 2
cos(ϕ) + 2
cos(2x − ϕ)
zn = z(n/B) • sin(x) · cos(x − ϕ) = 1
2
sin(ϕ) + 1
2
sin(2x − ϕ)

62 64
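A minimal sketch of this downconversion chain in MATLAB/GNU Octave (all numbers below are arbitrary assumptions, and an FIR low-pass from fir1 stands in for the ideal sinc filter; requires the Signal Processing Toolbox or Octave's signal package):

fs = 1e4; t = (0:fs-1)/fs;                 % 1 s of input sampled at 10 kHz
fc = 2000; B = 400;                        % centre frequency and bandwidth of interest
s  = cos(2*pi*50*t) + 0.3*sin(2*pi*120*t); % example message, bandwidth < B/2
x  = s .* cos(2*pi*fc*t + 0.7);            % a real band-pass signal around fc
y  = x .* exp(-2j*pi*fc*t);                % shift the spectrum down by fc
h  = fir1(200, (B/2)/(fs/2));              % low-pass filter with cut-off B/2
z  = 2*filter(h, 1, y);                    % complex baseband signal z(t)
zn = z(1:fs/B:end);                        % sample it at rate B (here 400 Hz)
plot(real(z), imag(z))                     % I/Q trajectory; |z| approximates |s(t)|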
ℑ[z(t)]
Exercise 11 Reconstructing a sampled baseband signal:
Recall products of sine and cosine:
1 1
• cos(x) · cos(y) = 2
cos(x − y) + 2
cos(x + y) • Generate a one second long Gaussian noise sequence {rn } (using
1 1
• sin(x) · sin(y) = 2
cos(x − y) − 2
cos(x + y) MATLAB function randn) with a sampling rate of 300 Hz.
1 1
• sin(x) · cos(y) = sin(x − y) + sin(x + y)
2 2 ℜ[z(t)]
• Use the fir1(50, 45/150) function to design a finite impulse re-
Examples: sponse low-pass filter with a cut-off frequency of 45 Hz. Use the
Amplitude-modulated signal: filtfilt function in order to apply that filter to the generated noise
signal, resulting in the filtered noise signal {xn }.
1
x(t) = s(t) · cos(2πtfc + ϕ) −→ z(t) = · s(t) · e jϕ • Then sample {xn } at 100 Hz by setting all but every third sample
2
value to zero, resulting in sequence {yn }.
Noncoherent demodulation: s(t) = 2|z(t)| (s(t) > 0 required)
Coherent demodulation: s(t) = 2ℜ[z(t) · e− jϕ ] (ϕ required) • Generate another low-pass filter with a cut-off frequency of 50 Hz
Frequency-modulated signal: and apply it to {yn }, in order to interpolate the reconstructed filtered
 Z t  noise signal {zn }. Multiply the result by three, to compensate the
1 j R t s(τ )dτ + jϕ energy lost during sampling.
x(t) = cos 2πtfc + s(τ )dτ + ϕ −→ z(t) = ·e 0
0 2
• Plot {xn }, {yn }, and {zn }. Finally compare {xn } and {zn }.
d
Demodulation: idea is s(t) = dt ∠z(t), where a · ∠e jφ = φ (a, φ ∈ R), but only for −π ≤ φ < π.
Why should the first filter have a lower cut-off frequency than the second?
h i
dz(t) z(t)
In practice: s(t) ≈ ℑ dt · z (t) /|z(t)|2 or s(t) ≈ ∠ z(t−∆t) /∆t

65 67

Digital modulation schemes Exercise 12 Reconstructing a sampled band-pass signal:

ASK BPSK QPSK • Generate a 1 s noise sequence {rn }, as in exercise 11, but this time
10 00 use a sampling frequency of 3 kHz.
• Apply to that a band-pass filter that attenuates frequencies outside
0 1 0 1 the interval 31–44 Hz, which the MATLAB Signal Processing Toolbox
function cheby2(3, 30, [31 44]/1500) will design for you.

11 01
• Then sample the resulting signal at 30 Hz by setting all but every
100-th sample value to zero.
• Generate with cheby2(3, 20, [30 45]/1500) another band-pass
8PSK 101 16QAM FSK
111 100
1 filter for the interval 30–45 Hz and apply it to the above 30-Hz-
00 sampled signal, to reconstruct the original signal. (You’ll have to
0 multiply it by 100, to compensate the energy lost during sampling.)
110 000 01

11
• Plot all the produced sequences and compare the original band-pass
signal and that reconstructed after being sampled at 30 Hz.
010 001 10

00 01 11 10
Why does the reconstructed waveform differ much more from the original
011
if you reduce the cut-off frequencies of both band-pass filters by 5 Hz?
66 68
Exercise 13 FM demodulation of a single radio station from IQ data: Spectrum of a periodic signal
• The file iq-fm-96M-240k.dat (on the course web page) contains 20 A signal x(t) that is periodic with frequency fp can be factored into a
seconds of a BBC Radio Cambridgeshire FM broadcast, IQ sampled single period ẋ(t) convolved with an impulse comb p(t). This corre-
at the transmitter’s centre frequency of 96.0 MHz, at a sample rate sponds in the frequency domain to the multiplication of the spectrum
of 240 kHz, after having been filtered to 192 kHz bandwidth. of the single period with a comb of impulses spaced fp apart.
• Load the IQ samples into MATLAB using x(t) ẋ(t) p(t)
f = fopen('iq-fm-96M-240k.dat', 'r', 'ieee-le');
c = fread(f, [2,inf], '*float32'); = ∗
fclose(f);
... ... ... ...
z = c(1,:) + j*c(2,:);
−1/fp 0 1/fp t t −1/fp 0 1/fp t
• FM demodulate the complex baseband radio signal z (using angle)
X(f ) Ẋ(f ) P (f )
• apply a 16 kHz low-pass filter (using butter, filter)
• reduce the sample rate from 240 kHz down to 48 kHz (keep only = ·
every 5th sample using the : operator) ... ...
• normalize amplitude (−1 . . . + 1), output as WAV (wavwrite), listen −fp 0 fp f f −fp 0 fp f
69 71

Exercise 14 FM demodulation of multiple radio stations from IQ data: Spectrum of a sampled signal
• The file iq-fm-97M-3.6M.dat contains 4 seconds of Cambridgeshire A signal x(t) that is sampled with frequency fs has a spectrum that is
radio spectrum, IQ sampled at a centre frequency of 97.0 MHz, with periodic with a period of fs .
2.88 MHz bandwidth and a sample rate of 3.6 MHz. Load this file
into MATLAB (as in exercise 13).
x(t) s(t) x̂(t)
• Shift the frequency spectrum of this IQ signal up by 1.0 MHz, such
that the 96.0 MHz carrier of BBC Radio Cambridge ends up at 0 Hz.
· =
• Apply a 200 kHz low-pass filter (butter).
... ... ... ...
• Display the spectrogram of the signal after each of the preceding 0 t −1/fs 0 1/fs t −1/fs 0 1/fs t
three steps (using spectrogram). How does the displayed frequency
X(f ) S(f ) X̂(f )
relate to the original radio frequency?
• FM demodulate, low-pass filter, and subsample the signal to 48 kHz, ∗ =
and output it as a 16-bit WAV file, as in exercise 13.
... ... ... ...
• Estimate the centre frequencies of two other FM radio stations within 0 f −fs fs f −fs 0 fs f
the recorded band (using spectrogram), then demodulate these too.
70 72
Continuous vs discrete Fourier transform Discrete Fourier Transform visualized
• Sampling a continuous signal makes its spectrum periodic      
x0 X0
• A periodic signal has a sampled spectrum 

 
  x1
 
  X1


     
We sample a signal x(t) with fs , getting x̂(t). We take n consecutive
   x2   X2 
     
samples of x̂(t) and repeat these periodically, getting a new signal ẍ(t)


 
· x3  
= X3 

with period n/fs . Its spectrum Ẍ(f ) is sampled (i.e., has non-zero


 
  x4  
  X4 

value) at frequency intervals fs /n and repeats itself with a period fs .


 
  x5  
  X5 

     
Now both ẍ(t) and its spectrum Ẍ(f ) are finite vectors of length n.    x6   X6 
x7 X7
ẍ(t) Ẍ(f ) The n-point DFT of a signal {xi } sampled at frequency fs contains in
the elements X0 to Xn/2 of the resulting frequency-domain vector the
frequency components 0, fs /n, 2fs /n, 3fs /n, . . . , fs /2, and contains
... ... ... ...
in Xn−1 downto Xn/2 the corresponding negative frequencies. Note
that for a real-valued input vector, both X0 and Xn/2 will be real, too.
−n/fs fs−1 0 fs−1 n/fs t −fs −fs /n 0 fs /n fs f Why is there no phase information recovered at fs /2?
73 75

Discrete Fourier Transform (DFT)

Xk = Σ_{i=0}^{n−1} xi · e^{−2πj ik/n}          xk = (1/n) · Σ_{i=0}^{n−1} Xi · e^{2πj ik/n}

The n-point DFT multiplies a vector with an n × n matrix Fn whose entry in row i, column k is e^{−2πj ik/n} (so its first row and first column are all ones):

Fn · (x0, x1, x2, . . . , xn−1)ᵀ = (X0, X1, X2, . . . , Xn−1)ᵀ

Inverse DFT visualized

The inverse transform is the same matrix–vector product with the complex-conjugate matrix, scaled by 1/n:

(1/n) · Fn* · (X0, X1, X2, . . . , Xn−1)ᵀ = (x0, x1, x2, . . . , xn−1)ᵀ

[Figure: the 8-point case, with each matrix entry drawn as a rotating phasor acting on (X0, . . . , X7) to recover (x0, . . . , x7)]
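A minimal MATLAB/GNU Octave sketch of this matrix view of the DFT, checked against the built-in fft (which follows the same sign and 1/n conventions as above):

n = 8;
[i, k] = meshgrid(0:n-1, 0:n-1);     % row/column index grids
F = exp(-2j*pi*i.*k/n);              % the n x n DFT matrix F_n
x = randn(n, 1);                     % arbitrary test vector
max(abs(F*x - fft(x)))               % ~1e-15: F_n * x equals the DFT of x
max(abs(F'*fft(x)/n - x))            % ~1e-15: (1/n) F_n^* inverts it (F is symmetric)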
Fast Fourier Transform (FFT)

(Fn{xi}_{i=0}^{n−1})_k = Σ_{i=0}^{n−1} xi · e^{−2πj ik/n}
  = Σ_{i=0}^{n/2−1} x2i · e^{−2πj ik/(n/2)} + e^{−2πj k/n} · Σ_{i=0}^{n/2−1} x2i+1 · e^{−2πj ik/(n/2)}
  = (F_{n/2}{x2i})_k + e^{−2πj k/n} · (F_{n/2}{x2i+1})_k,                         if k < n/2
  = (F_{n/2}{x2i})_{k−n/2} + e^{−2πj k/n} · (F_{n/2}{x2i+1})_{k−n/2},             if k ≥ n/2

The DFT over n-element vectors can be reduced to two DFTs over n/2-element vectors plus n multiplications and n additions, leading to log2 n rounds and n log2 n additions and multiplications overall, compared to n² for the equivalent matrix multiplication.
A high-performance FFT implementation in C with many processor-specific optimizations and support for non-power-of-2 sizes is available at http://www.fftw.org/.

Fast complex multiplication

Calculating the product of two complex numbers as

(a + jb) · (c + jd) = (ac − bd) + j(ad + bc)

involves four (real-valued) multiplications and two additions.
The alternative calculation

(a + jb) · (c + jd) = (α − β) + j(α + γ)   with   α = a(c + d),  β = d(a + b),  γ = c(b − a)

provides the same result with three multiplications and five additions.
The latter may perform faster on CPUs where multiplications take three or more times longer than additions.
This “Karatsuba multiplication” is most helpful on simpler microcontrollers. Specialized signal-processing CPUs (DSPs) feature 1-clock-cycle multipliers. High-end desktop processors use pipelined multipliers that stall where operations depend on each other.
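The decomposition above translates almost line by line into a recursive radix-2 FFT. A sketch for power-of-two lengths only (the file name myfft.m is an arbitrary choice), useful for comparison with the built-in fft:

function X = myfft(x)
  % recursive radix-2 decimation-in-time FFT of a row vector x (length a power of 2)
  n = length(x);
  if n == 1
    X = x;
  else
    E = myfft(x(1:2:n));            % n/2-point DFT of the even-indexed samples
    O = myfft(x(2:2:n));            % n/2-point DFT of the odd-indexed samples
    w = exp(-2j*pi*(0:n/2-1)/n);    % twiddle factors e^{-2*pi*j*k/n}
    X = [E + w.*O, E - w.*O];       % combine for k < n/2 and k >= n/2
  end
end

% example: x = randn(1, 16); max(abs(myfft(x) - fft(x)))   % ~1e-15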

Efficient real-valued FFT FFT-based convolution


The symmetry properties of the Fourier transform applied to the discrete m−1
Calculating the convolution of two finite sequences {xi }i=0 and {yi }n−1
i=0
Fourier transform {Xi }n−1 n−1
i=0 = Fn {xi }i=0 have the form of lengths m and n via
∀i : xi = ℜ(xi ) ⇐⇒ ∀i : Xn−i = Xi∗ min{m−1,i}
X
∀i : xi = j · ℑ(xi ) ⇐⇒ ∀i : Xn−i = −Xi∗ zi = xj · yi−j , 0≤i<m+n−1
j=max{0,i−(n−1)}
These two symmetries, combined with the linearity of the DFT, allows us
to calculate two real-valued n-point DFTs takes mn multiplications.
{Xi′ }n−1
i=0 = Fn {x′i }n−1
i=0 {Xi′′ }n−1
i=0 = Fn {x′′i }n−1
i=0
Can we apply the FFT and the convolution theorem to calculate the
simultaneously in a single complex-valued n-point DFT, by composing its convolution faster, in just O(m log m + n log n) multiplications?
input as {zi } = F −1 (F{xi } · F{yi })
xi = x′i + j · x′′i
and decomposing its output as There is obviously no problem if this condition is fulfilled:
1 1 {xi } and {yi } are periodic, with equal period lengths
Xi′ = (Xi + Xn−i ∗
) ∗
Xi′′ = (Xi − Xn−i )
2 2 In this case, the fact that the DFT interprets its input as a single period
of a periodic signal will do exactly what is needed, and the FFT and
To optimize the calculation of a single real-valued FFT, use this trick to calculate the two half-size
real-value FFTs that occur in the first round. inverse FFT can be applied directly as above.
78 80
In the general case, measures have to be taken to prevent a wrap-over: Each block is zero-padded at both ends and then convolved as before:
−1
A B F [F(A)⋅F(B)]

∗ ∗ ∗

−1
A’ B’ F [F(A’)⋅F(B’)] = = =

The regions originally added as zero padding are, after convolution, aligned
Both sequences are padded with zero values to a length of at least m+n−1. to overlap with the unpadded ends of their respective neighbour blocks.
This ensures that the start and end of the resulting sequence do not overlap. The overlapping parts of the blocks are then added together.
81 83

Zero padding is usually applied to extend both sequence lengths to the Deconvolution
next higher power of two (2⌈log2 (m+n−1)⌉ ), which facilitates the FFT. A signal u(t) was distorted by convolution with a known impulse re-
With a causal sequence, simply append the padding zeros at the end. sponse h(t) (e.g., through a transmission channel or a sensor problem).
With a non-causal sequence, values with a negative index number are The “smeared” result s(t) was recorded.
wrapped around the DFT block boundaries and appear at the right Can we undo the damage and restore (or at least estimate) u(t)?
end. In this case, zero-padding is applied in the center of the block,
between the last and first element of the sequence.
Thanks to the periodic nature of the DFT, zero padding at both ends
has the same effect as padding only at one end. ∗ =
If both sequences can be loaded entirely into RAM, the FFT can be ap-
plied to them in one step. However, one of the sequences might be too
large for that. It could also be a realtime waveform (e.g., a telephone
signal) that cannot be delayed until the end of the transmission.
In such cases, the sequence has to be split into shorter blocks that are ∗ =
separately convolved and then added together with a suitable overlap.

82 84
The convolution theorem turns the problem into one of multiplication: Spectral estimation
Z
s(t) = u(t − τ ) · h(τ ) · dτ
Sine wave 4×fs/32 Discrete Fourier Transform
1
s = u∗h 15

F{s} = F{u} · F{h} 10


0
F{u} = F{s}/F{h} 5

u = F −1 {F{s}/F{h}} −1
0 10 20 30
0
0 10 20 30
In practice, we also record some noise n(t) (quantization, etc.): Sine wave 4.61×fs/32 Discrete Fourier Transform
Z 1
15
c(t) = s(t) + n(t) = u(t − τ ) · h(τ ) · dτ + n(t)
10
0
Problem – At frequencies f where F{h}(f ) approaches zero, the
5
noise will be amplified (potentially enormously) during deconvolution:
−1 0
ũ = F −1 {F{c}/F{h}} = u + F −1 {F{n}/F{h}} 0 10 20 30 0 10 20 30

85 87

Typical workarounds: We introduced the DFT as a special case of the continuous Fourier
→ Modify the Fourier transform of the impulse response, such that transform, where the input is sampled and periodic.
|F{h}(f )| > ǫ for some experimentally chosen threshold ǫ. If the input is sampled, but not periodic, the DFT can still be used
to calculate an approximation of the Fourier transform of the original
→ If estimates of the signal spectrum |F{s}(f )| and the noise
continuous signal. However, there are two effects to consider. They
spectrum |F{n}(f )| can be obtained, then we can apply the
are particularly visible when analysing pure sine waves.
“Wiener filter” (“optimal filter”)
Sine waves whose frequency is a multiple of the base frequency (fs /n)
|F{s}(f )|2
W (f ) = of the DFT are identical to their periodic extension beyond the size
|F{s}(f )|2 + |F{n}(f )|2 of the DFT. They are, therefore, represented exactly by a single sharp
before deconvolution: peak in the DFT. All their energy falls into one single frequency “bin”
ũ = F −1 {W · F{c}/F{h}} in the DFT result.
Sine waves with other frequencies, which do not match exactly one of
Exercise 15 Use MATLAB to deconvolve the blurred stars from slide 31. the output frequency bins of the DFT, are still represented by a peak
The files stars-blurred.png with the blurred-stars image and stars-psf.png with the impulse
response (point-spread function) are available on the course-material web page. You may find at the output bin that represents the nearest integer multiple of the
the MATLAB functions imread, double, imagesc, circshift, fft2, ifft2 of use. DFT’s base frequency. However, such a peak is distorted in two ways:
Try different ways to control the noise (see above) and distortions near the margins (window-
ing). [The MATLAB image processing toolbox provides ready-made “professional” functions
deconvwnr, deconvreg, deconvlucy, edgetaper, for such tasks. Do not use these, except per-
→ Its amplitude is lower (down to 63.7%).
haps to compare their outputs with the results of your own attempts.] → Much signal energy has “leaked” to other frequencies.
86 88
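A 1-D sketch of the first workaround (a simple floor on |F{h}|) in MATLAB/GNU Octave; all signals below are artificial assumptions, and the image case of exercise 15 works analogously with fft2/ifft2 (it is deliberately not solved here):

u = double((1:200) > 80 & (1:200) < 120);            % original signal: a rectangular pulse
h = exp(-(0:199)/5); h = h/sum(h);                   % known impulse response (assumed decay)
c = real(ifft(fft(u).*fft(h))) + 1e-3*randn(1,200);  % recorded: (circular) convolution + noise
H = fft(h); epsilon = 0.05;
H(abs(H) < epsilon) = epsilon;                       % crude floor on |F{h}| to limit noise gain
u_est = real(ifft(fft(c)./H));                       % deconvolved estimate of u
plot(1:200, u, 1:200, u_est, '--')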
The reason for the leakage and scalloping losses is easy to visualize with the
35 help of the convolution theorem:
30 The operation of cutting a sequence of the size of the DFT input vector out
of a longer original signal (the one whose continuous Fourier spectrum we
25
try to estimate) is equivalent to multiplying this signal with a rectangular
20 function. This destroys all information and continuity outside the “window”
15 that is fed into the DFT.
10 Multiplication with a rectangular window of length T in the time domain is
equivalent to convolution with sin(πf T )/(πf T ) in the frequency domain.
5
The subsequent interpretation of this window as a periodic sequence by
0
16
0 5 the DFT leads to sampling of this convolution result (sampling meaning
10 15 15.5
20 25 multiplication with a Dirac comb whose impulses are spaced fs /n apart).
30 15
DFT index
input freq. Where the window length was an exact multiple of the original signal period,
sampling of the sin(πf T )/(πf T ) curve leads to a single Dirac pulse, and
The leakage of energy to other frequency bins not only blurs the estimated spec-
trum. The peak amplitude also changes significantly as the frequency of a tone
the windowing causes no distortion. In all other cases, the effects of the con-
changes from that associated with one output bin to the next, a phenomenon volution become visible in the frequency domain as leakage and scalloping
known as scalloping. In the above graphic, an input sine wave gradually changes losses.
from the frequency of bin 15 to that of bin 16 (only positive frequencies shown).
89 91

Windowing Some better window functions


Sine wave Discrete Fourier Transform
300 1
1

200 0.8
0
100
0.6
−1 0
0 200 400 0 200 400 0.4
Sine wave multiplied with window function Discrete Fourier Transform
1 100 0.2
Rectangular window
Triangular window
0
0 50 Hann window
Hamming window

−1 0 0.2 0.4 0.6 0.8 1


0
0 200 400 0 200 400 All these functions are 0 outside the interval [0,1].
90 92
Rectangular window (64−point) Triangular window Zero padding increases DFT resolution
The two figures below show two spectra of the 16-element sequence
20 20
Magnitude (dB)

Magnitude (dB)
si = cos(2π · 3i/16) + cos(2π · 4i/16), i ∈ {0, . . . , 15}.
0 0
The left plot shows the DFT of the windowed sequence
−20 −20
xi = si · w i , i ∈ {0, . . . , 15}
−40 −40
and the right plot shows the DFT of the zero-padded windowed sequence
−60 −60
0 0.5 1 0 0.5 1 
Normalized Frequency (×π rad/sample) Normalized Frequency (×π rad/sample) ′ si · wi , i ∈ {0, . . . , 15}
xi =
Hann window Hamming window 0, i ∈ {16, . . . , 63}
where wi = 0.54 − 0.46 × cos (2πi/15) is the Hamming window.
20 20
Magnitude (dB)

Magnitude (dB)
0 0 DFT without zero padding DFT with 48 zeros appended to window
4 4
−20 −20

−40 −40 2 2
−60 −60
0 0.5 1 0 0.5 1
Normalized Frequency (×π rad/sample) Normalized Frequency (×π rad/sample) 0 0
0 5 10 15 0 20 40 60
93 95
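The two plots above can be reproduced in a few lines of MATLAB/GNU Octave:

i = 0:15;
s = cos(2*pi*3*i/16) + cos(2*pi*4*i/16);               % the 16-element test sequence
w = 0.54 - 0.46*cos(2*pi*i/15);                        % Hamming window
x = s .* w;
subplot(1,2,1); stem(0:15, abs(fft(x)));               % DFT without zero padding
subplot(1,2,2); stem(0:63, abs(fft([x zeros(1,48)]))); % DFT with 48 zeros appended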

Numerous alternatives to the rectangular window have been proposed Applying the discrete Fourier transform to an n-element long real-
that reduce leakage and scalloping in spectral estimation. These are valued sequence leads to a spectrum consisting of only n/2+1 discrete
vectors multiplied element-wise with the input vector before applying frequencies.
the DFT to it. They all force the signal amplitude smoothly down to Since the resulting spectrum has already been distorted by multiplying
zero at the edge of the window, thereby avoiding the introduction of the (hypothetically longer) signal with a windowing function that limits
sharp jumps in the signal when it is extended periodically by the DFT. its length to n non-zero values and forces the waveform smoothly down
Three examples of such window vectors {wi }n−1 i=0 are: to zero at the window boundaries, appending further zeros outside the
Triangular window (Bartlett window): window will not distort the signal further.

i The frequency resolution of the DFT is the sampling frequency divided
w i = 1 − 1 −

by the block size of the DFT. Zero padding can therefore be used to
n/2
increase the frequency resolution of the DFT.
Hann window (raised-cosine window, Hanning window):
  Note that zero padding does not add any additional information to the
i signal. The spectrum has already been “low-pass filtered” by being
wi = 0.5 − 0.5 × cos 2π
n−1 convolved with the spectrum of the windowing function. Zero padding
Hamming window: in the time domain merely samples this spectrum blurred by the win-
  dowing step at a higher resolution, thereby making it easier to visually
i
wi = 0.54 − 0.46 × cos 2π distinguish spectral lines and to locate their peak more precisely.
n−1
94 96
Digital filters

Filter: suppresses (removes, attenuates) unwanted signal components.
→ low-pass filter – suppress all frequencies above a cut-off frequency
→ high-pass filter – suppress all frequencies below a cut-off frequency, including DC (direct current = 0 Hz)
→ band-pass filter – suppress signals outside a frequency interval (= passband)
→ band-stop filter (aka: band-reject filter) – suppress signals inside a single frequency interval (= stopband)
→ notch filter – narrow band-stop filter, ideally suppressing only a single frequency

For digital filters, we also distinguish
→ finite impulse response (FIR) filters
→ infinite impulse response (IIR) filters
depending on how far their memory reaches back in time.

Window-based design of FIR filters FIR low-pass filter design example


Recall that the ideal continuous low-pass filter with cut-off frequency
fc has the frequency characteristic Impulse Response
0.5
   1

Imaginary Part
1 if |f | < fc f

Amplitude
H(f ) = = rect
0 if |f | > fc 2fc 0
30
0

and the impulse response


−1
−0.5
sin 2πtfc −1 0 1 0 10 20 30
h(t) = 2fc = 2fc · sinc(2fc · t). Real Part n (samples)
2πtfc
0 0
Sampling this impulse response with the sampling frequency fs of the

Phase (degrees)
Magnitude (dB)
signal to be processed will lead to a periodic frequency characteristic, −20 −500
that matches the periodic spectrum of the sampled signal.
−40 −1000
There are two problems though:
−60 −1500
→ the impulse response is infinitely long 0 0.5 1
Normalized Frequency (×π rad/sample)
0 0.5 1
Normalized Frequency (×π rad/sample)
→ this filter is not causal, that is h(t) 6= 0 for t < 0
order: n = 30, cutoff frequency (−6 dB): fc = 0.25 × fs /2, window: Hamming
98 100
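The design example above can be reproduced with a few lines of MATLAB (a sketch, not the original course code; n, fc and fs follow the figure caption):

n  = 30;                          % filter order
fs = 2;  fc = 0.25*fs/2;          % cut-off at a quarter of the Nyquist frequency
i  = 0:n;
t  = 2*pi*(i - n/2)*fc/fs;
h  = 2*fc/fs * sin(t)./t;         % sampled, delayed sinc impulse response
h(i == n/2) = 2*fc/fs;            % value at the removable singularity t = 0
w  = 0.54 - 0.46*cos(2*pi*i/n);   % Hamming window
h  = h .* w;
% freqz(h, 1) plots the magnitude/phase response, filter(h, 1, x) applies it.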
Filter performance
An ideal filter has a gain of 1 in the pass-band and a gain of 0 in the stop band, and nothing in between. A practical filter will have
→ frequency-dependent gain near 1 in the passband
→ frequency-dependent gain below a threshold in the stopband
→ a transition band between the pass and stop bands
We truncate the ideal, infinitely-long impulse response by multiplication with a window sequence. In the frequency domain, this will convolve the rectangular frequency response of the ideal low-pass filter with the frequency characteristic of the window. The width of the main lobe determines the width of the transition band, and the side lobes cause ripples in the passband and stopband.

Converting low-pass to high-pass filters by frequency inversion
In order to turn the spectrum X(f) of a real-valued signal xi sampled at fs into an inverted spectrum X′(f) = X(fs/2 − f), we merely have to shift the periodic spectrum by fs/2:
[Figure: the periodic spectrum X(f) convolved with a Dirac pulse at fs/2 yields the inverted periodic spectrum X′(f).]
This can be accomplished by multiplying the sampled sequence xi with yi = cos(πfs·t) = cos(πi), which is nothing but multiplication with the sequence

    . . . , 1, −1, 1, −1, 1, −1, 1, −1, . . .

So in order to design a discrete high-pass filter that attenuates all frequencies f outside the range fc < |f| < fs/2, we merely have to design a low-pass filter that attenuates all frequencies outside the range −fc < f < fc, and then multiply every second value of its impulse response with −1.
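Sketch (reusing the low-pass impulse response h from the FIR design sketch above):

hp = h .* (-1).^(0:length(h)-1);   % multiply every second tap with -1
% freqz(hp, 1) shows the pass-band mirrored to fs/2 - fc ... fs/2.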

Converting low-pass to band-pass filters by modulation
To obtain a band-pass filter that attenuates all frequencies f outside the range fl < f < fh, we first design a low-pass filter with a cut-off frequency (fh − fl)/2 and multiply its impulse response with a sine wave of frequency (fh + fl)/2, before applying the usual windowing:

    hi = (fh − fl)/fs · sin[π(i − n/2)(fh − fl)/fs] / [π(i − n/2)(fh − fl)/fs] · cos[πi(fh + fl)/fs] · wi

[Figure: the band-pass frequency response H(f) is the low-pass response of width fh − fl convolved with a pair of Dirac pulses at ±(fh + fl)/2.]

Exercise 16 Explain the difference between the DFT, FFT, and FFTW.

Exercise 17 Push-button telephones use a combination of two sine tones to signal which button is currently being pressed:

              1209 Hz  1336 Hz  1477 Hz  1633 Hz
    697 Hz       1        2        3        A
    770 Hz       4        5        6        B
    852 Hz       7        8        9        C
    941 Hz       *        0        #        D

(a) You receive a digital telephone signal with a sampling frequency of 8 kHz. You cut a 256-sample window out of this sequence, multiply it with a windowing function and apply a 256-point DFT. What are the indices where the resulting vector (X0, X1, . . . , X255) will show the highest amplitude if button 9 was pushed at the time of the recording?
(b) Use MATLAB to determine which button sequence was typed in the touch tones recorded in the file touchtone.wav on the course-material web page.
Polynomial representation of sequences
We can represent sequences {xn} as polynomials:

    X(v) = ∑_{n=−∞}^{∞} xn v^n

Example of polynomial multiplication:

    (1 + 2v + 3v²) · (2 + 1v)
      =  2 + 4v + 6v²
           + 1v + 2v² + 3v³
      =  2 + 5v + 8v² + 3v³

Compare this with the convolution of two sequences (in MATLAB):
conv([1 2 3], [2 1]) equals [2 5 8 3]

Example of polynomial division:

    1/(1 − av) = 1 + av + a²v² + a³v³ + · · · = ∑_{n=0}^{∞} a^n v^n

(long division: dividing 1 by 1 − av yields the quotient terms 1, av, a²v², a³v³, . . . with a remainder that never terminates)

Rational functions (quotients of two polynomials) can provide a convenient closed-form representation for infinitely-long exponential sequences, in particular the impulse responses of IIR filters.
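The connection to IIR filters can be checked numerically (a sketch; a = 0.5 is an arbitrary choice):

a = 0.5;
h = filter(1, [1 -a], [1 zeros(1, 9)]);   % first 10 terms of the impulse response
disp(h);           % 1, a, a^2, ... -- the coefficients of 1/(1 - a*v)
disp(a.^(0:9));    % same values, computed directly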

Convolution of sequences is equivalent to polynomial multiplication:

    {hn} ∗ {xn} = {yn}   ⇒   yn = ∑_{k=−∞}^{∞} hk · xn−k

    H(v) · X(v) = ( ∑_{n=−∞}^{∞} hn v^n ) · ( ∑_{n=−∞}^{∞} xn v^n )
                = ∑_{n=−∞}^{∞} ∑_{k=−∞}^{∞} hk · xn−k · v^n

Note how the Fourier transform of a sequence can be accessed easily from its polynomial form:

    X(e^−jω) = ∑_{n=−∞}^{∞} xn e^−jωn

The z-transform
The z-transform of a sequence {xn} is defined as:

    X(z) = ∑_{n=−∞}^{∞} xn z^−n

Note that this differs only in the sign of the exponent from the polynomial representation discussed on the preceding slides.
Recall that the above X(z) is exactly the factor with which an exponential sequence {z^n} is multiplied, if it is convolved with {xn}:

    {z^n} ∗ {xn} = {yn}
    ⇒ yn = ∑_{k=−∞}^{∞} z^(n−k) xk = z^n · ∑_{k=−∞}^{∞} z^−k xk = z^n · X(z)
The z-transform defines for each sequence a continuous complex-valued surface over the complex plane C. For finite sequences, its value is always defined across the entire complex plane. For infinite sequences, it can be shown that the z-transform converges only for the region

    lim_{n→∞} |xn+1 / xn|  <  |z|  <  lim_{n→−∞} |xn+1 / xn|

The z-transform identifies a sequence unambiguously only in conjunction with a given region of convergence. In other words, there exist different sequences that have the same expression as their z-transform, but that converge for different amplitudes of z.
The z-transform is a generalization of the discrete-time Fourier transform, which it contains on the complex unit circle (|z| = 1):

    ts^−1 · F{x̂(t)}(f) = X(e^jω) = ∑_{n=−∞}^{∞} xn e^−jωn

where ω = 2πf/fs.

The z-transform of the impulse response {hn} of the causal LTI system defined by

    ∑_{l=0}^{k} al · yn−l = ∑_{l=0}^{m} bl · xn−l

with {yn} = {hn} ∗ {xn} is the rational function

    H(z) = (b0 + b1 z^−1 + b2 z^−2 + · · · + bm z^−m) / (a0 + a1 z^−1 + a2 z^−2 + · · · + ak z^−k)

(bm ≠ 0, ak ≠ 0), which can also be written as

    H(z) = [ z^k · ∑_{l=0}^{m} bl z^(m−l) ] / [ z^m · ∑_{l=0}^{k} al z^(k−l) ].

H(z) has m zeros and k poles at non-zero locations in the z plane, plus k − m zeros (if k > m) or m − k poles (if m > k) at z = 0.
[Figure: direct form I block diagram with delay chains xn−1 . . . xn−m and yn−1 . . . yn−k, feed-forward coefficients b0 . . . bm, feedback coefficients −a1 . . . −ak and output scaling a0^−1.]

This function can be converted into the form

    H(z) = (b0/a0) · [ ∏_{l=1}^{m} (1 − cl·z^−1) ] / [ ∏_{l=1}^{k} (1 − dl·z^−1) ]
         = (b0/a0) · z^(k−m) · [ ∏_{l=1}^{m} (z − cl) ] / [ ∏_{l=1}^{k} (z − dl) ]

where the cl are the non-zero positions of zeros (H(cl) = 0) and the dl are the non-zero positions of the poles (i.e., z → dl ⇒ |H(z)| → ∞) of H(z). Except for a constant factor, H(z) is entirely characterized by the position of these zeros and poles.
On the unit circle z = e^jω, H(e^jω) is the discrete-time Fourier transform of {hn} (ω = πf/(fs/2)). The DTFT amplitude can also be expressed in terms of the relative position of e^jω to the zeros and poles:

    |H(e^jω)| = (b0/a0) · [ ∏_{l=1}^{m} |e^jω − cl| ] / [ ∏_{l=1}^{k} |e^jω − dl| ]

Example: z-transform of a simple filter
Consider the IIR filter yn = 0.8·xn + 0.2·yn−1, i.e. a0 = 1, a1 = −0.2, b0 = 0.8. Its z-transform

    H(z) = 0.8 / (1 − 0.2·z^−1) = 0.8z / (z − 0.2)

has a zero at c1 = 0 and a pole at d1 = 0.2.
[Figure: block diagram of the filter, the surface |H(z)| over the z plane, and the impulse response for xn = δn.]
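The pole and zero positions of such a rational H(z) can be checked numerically from the coefficient vectors (sketch; zplane and impz are in the MATLAB Signal Processing Toolbox):

b = [0.8]; a = [1 -0.2];
roots(a)           % pole at 0.2
roots(b)           % empty; the zero at z = 0 comes from the z^(k-m) factor
% zplane(b, a) plots poles/zeros, impz(b, a) shows the impulse response 0.8*0.2.^n.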
H(z) = 0.8/(1 − 0.2·z^−1) = 0.8z/(z − 0.2)   (cont’d)
[Figure: magnitude response |H(e^jω)| over the normalized frequency range 0 . . . fs/2.]
Run this LTI filter at sampling frequency fs and test it with sinusoidal input (frequency f, amplitude 1): xn = cos(2πf n/fs)
Output: yn = A(f) · cos(2πf n/fs + θ(f))
What are the gain A(f) and phase delay θ(f) at frequency f?
Answer: A(f) = |H(e^j2πf/fs)|,  θ(f) = ∠H(e^j2πf/fs) = tan^−1( ℑ{H(e^j2πf/fs)} / ℜ{H(e^j2πf/fs)} )
Example: fs = 8 kHz, f = 2 kHz (normalized frequency f/(fs/2) = 0.5) ⇒ gain
A(2 kHz) = |H(e^jπ/2)| = |H(j)| = |0.8j/(j − 0.2)| = |0.8j(−j − 0.2) / ((j − 0.2)(−j − 0.2))| = |(0.8 − 0.16j)/1.04| = √(0.8² + 0.16²)/1.04 = 0.784. . .

Visual verification in MATLAB:

n = 0:15; fs = 8000;
f = 1500;
x = cos(2*pi*f*n/fs);
b = [0.8]; a = [1 -0.2];
y1 = filter(b,a,x);
z = exp(j*2*pi*f/fs);
H = 0.8*z/(z-0.2);
A = abs(H);
theta = atan(imag(H)/real(H));
y2 = A*cos(2*pi*f*n/fs+theta);
plot(n, x, 'bx-', ...
     n, y1, 'go-', ...
     n, y2, 'r+-')
legend('x', ...
       'y (time domain)', ...
       'y (z-transform)')
ylim([-1.1 1.8])

[Figure: the plot shows the input x and the two outputs y (time domain) and y (z-transform) coinciding after the initial transient.]

How do poles affect the time domain?

H(z) = z/(z − 0.7) = 1/(1 − 0.7·z^−1)
[Figure: pole at z = 0.7 inside the unit circle; the impulse response decays as 0.7^n.]

H(z) = z/(z − 0.9) = 1/(1 − 0.9·z^−1)
[Figure: pole at z = 0.9; the impulse response decays more slowly, as 0.9^n.]

H(z) = z/(z − 1) = 1/(1 − z^−1)
[Figure: pole on the unit circle at z = 1; the impulse response is a constant step (accumulator).]

H(z) = z/(z − 1.1) = 1/(1 − 1.1·z^−1)
[Figure: pole at z = 1.1 outside the unit circle; the impulse response grows as 1.1^n (unstable).]
H(z) = z² / ((z − 0.9·e^jπ/6)·(z − 0.9·e^−jπ/6)) = 1 / (1 − 1.8 cos(π/6)·z^−1 + 0.9²·z^−2)
[Figure: complex-conjugate pole pair at radius 0.9, angle ±π/6; the impulse response is a decaying oscillation.]

H(z) = z² / ((z − e^jπ/6)·(z − e^−jπ/6)) = 1 / (1 − 2 cos(π/6)·z^−1 + z^−2)
[Figure: pole pair on the unit circle at angle ±π/6; the impulse response is a sustained oscillation.]

H(z) = z² / ((z − 0.9·e^jπ/2)·(z − 0.9·e^−jπ/2)) = 1 / (1 − 1.8 cos(π/2)·z^−1 + 0.9²·z^−2) = 1 / (1 + 0.9²·z^−2)
[Figure: pole pair at radius 0.9, angle ±π/2; the impulse response is a decaying oscillation at a quarter of the sampling frequency.]

H(z) = z/(z + 1) = 1/(1 + z^−1)
[Figure: single pole at z = −1; the impulse response alternates 1, −1, 1, −1, . . .]

Properties of the z-transform
As with the Fourier transform, convolution in the time domain corresponds to complex multiplication in the z-domain:

    {xn} •−◦ X(z),  {yn} •−◦ Y(z)   ⇒   {xn} ∗ {yn} •−◦ X(z) · Y(z)

Delaying a sequence by ∆n samples corresponds in the z-domain to multiplication with z^−∆n:

    {xn−∆n} •−◦ X(z) · z^−∆n

IIR Filter design techniques
The design of a filter starts with specifying the desired parameters:
→ The passband is the frequency range where we want to approximate a gain of one.
→ The stopband is the frequency range where we want to approximate a gain of zero.
→ The order of a filter is the number of poles it uses in the z-domain, and equivalently the number of delay elements necessary to implement it.
→ Both passband and stopband will in practice not have gains of exactly one and zero, respectively, but may show several deviations from these ideal values, and these ripples may have a specified maximum quotient between the highest and lowest gain.
→ There will in practice not be an abrupt change of gain between passband and stopband, but a transition band where the frequency response will gradually change from its passband to its stopband value.
The designer can then trade off conflicting goals such as a small transition band, a low order, a low ripple amplitude, or even an absence of ripples.
Design techniques for making these tradeoffs for analog filters (involving capacitors, resistors, coils) can also be used to design digital IIR filters:

Butterworth filters
Have no ripples, gain falls monotonically across the pass and transition band. Within the passband, the gain drops slowly down to √(1 − 1/2) (−3 dB). Outside the passband, it drops asymptotically by a factor 2^N per octave (N · 20 dB/decade).

Chebyshev type I filters
Distribute the gain error uniformly throughout the passband (equiripples) and drop off monotonically outside.

Chebyshev type II filters
Distribute the gain error uniformly throughout the stopband (equiripples) and drop off monotonically in the passband.

Elliptic filters (Cauer filters)
Distribute the gain error as equiripples both in the passband and stopband. This type of filter is optimal in terms of the combination of the passband-gain tolerance, stopband-gain tolerance, and transition-band width that can be achieved at a given filter order.

All these filter design techniques are implemented in the MATLAB Signal Processing Toolbox in the functions butter, cheby1, cheby2, and ellip, which output the coefficients an and bn of the difference equation that describes the filter. These can be applied with filter to a sequence, or can be visualized with zplane as poles/zeros in the z-domain, with impz as an impulse response, and with freqz as an amplitude and phase spectrum. The commands sptool and fdatool provide interactive GUIs to design digital filters.

Butterworth filter design example
[Figure: pole-zero plot, impulse response, magnitude and phase response; order: 1, cutoff frequency (−3 dB): 0.25 × fs/2]

Butterworth filter design example
[Figure: pole-zero plot, impulse response, magnitude and phase response; order: 5, cutoff frequency (−3 dB): 0.25 × fs/2]
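For example, the order-5 Butterworth design shown above can be reproduced as follows (sketch; the cut-off argument of these functions is normalized to fs/2):

[b, a] = butter(5, 0.25);    % order 5, cut-off at 0.25 x fs/2
freqz(b, a);                 % magnitude and phase response
% Corresponding calls for the later examples:
%   [b, a] = cheby1(5, 3, 0.5);      % 3 dB pass-band ripple
%   [b, a] = cheby2(5, 20, 0.5);     % 20 dB stop-band ripple
%   [b, a] = ellip(5, 3, 20, 0.5);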
Chebyshev type I filter design example
[Figure: pole-zero plot, impulse response, magnitude and phase response; order: 5, cutoff frequency: 0.5 × fs/2, pass-band ripple: −3 dB]

Elliptic filter design example
[Figure: pole-zero plot, impulse response, magnitude and phase response; order: 5, cutoff frequency: 0.5 × fs/2, pass-band ripple: −3 dB, stop-band ripple: −20 dB]

Chebyshev type II filter design example
[Figure: pole-zero plot, impulse response, magnitude and phase response; order: 5, cutoff frequency: 0.5 × fs/2, stop-band ripple: −20 dB]

Exercise 18 Draw the direct form II block diagrams of the causal infinite-impulse response filters described by the following z-transforms and write down a formula describing their time-domain impulse responses:

    (a) H(z) = 1 / (1 − ½·z^−1)

    (b) H′(z) = (1 − (¼)⁴·z^−4) / (1 − ¼·z^−1)

    (c) H″(z) = ½ + ¼·z^−1 + ½·z^−2

Exercise 19 (a) Perform the polynomial division of the rational function given in exercise 18 (a) until you have found the coefficient of z^−5 in the result.
(b) Perform the polynomial division of the rational function given in exercise 18 (b) until you have found the coefficient of z^−10 in the result.
(c) Use its z-transform to show that the filter in exercise 18 (b) has actually a finite impulse response and draw the corresponding block diagram.
Exercise 20 Consider the system h : {xn} → {yn} with yn + yn−1 = xn − xn−4.
(a) Draw the direct form I block diagram of a digital filter that realises h.
(b) What is the impulse response of h?
(c) What is the step response of h (i.e., h ∗ u)?
(d) Apply the z-transform to (the impulse response of) h to express it as a rational function H(z).
(e) Can you eliminate a common factor from numerator and denominator? What does this mean?
(f) For what values z ∈ C is H(z) = 0?
(g) How many poles does H have in the complex plane?
(h) Write H as a fraction using the position of its poles and zeros and draw their location in relation to the complex unit circle.
(i) If h is applied to a sound file with a sampling frequency of 8000 Hz, sine waves of what frequency will be eliminated and sine waves of what frequency will be quadrupled in their amplitude?

Random sequences and noise
A discrete random sequence {xn} is a sequence of numbers

    . . . , x−2, x−1, x0, x1, x2, . . .

where each value xn is the outcome of a random variable xn in a corresponding sequence of random variables

    . . . , x−2, x−1, x0, x1, x2, . . .

Such a collection of random variables is called a random process. Each individual random variable xn is characterized by its probability distribution function

    Pxn(a) = Prob(xn ≤ a)

and the entire random process is characterized completely by all joint probability distribution functions

    Pxn1,...,xnk(a1, . . . , ak) = Prob(xn1 ≤ a1 ∧ . . . ∧ xnk ≤ ak)

for all possible sets {xn1, . . . , xnk}.

Two random variables xn and xm are called independent if

    Pxn,xm(a, b) = Pxn(a) · Pxm(b)

and a random process is called stationary if

    Pxn1+l,...,xnk+l(a1, . . . , ak) = Pxn1,...,xnk(a1, . . . , ak)

for all l, that is, if the probability distributions are time invariant.
The derivative pxn(a) = P′xn(a) is called the probability density function, and helps us to define quantities such as the
→ expected value E(xn) = ∫ a·pxn(a) da
→ mean-square value (average power) E(|xn|²) = ∫ |a|²·pxn(a) da
→ variance Var(xn) = E[|xn − E(xn)|²] = E(|xn|²) − |E(xn)|²
→ correlation Cor(xn, xm) = E(xn · x∗m)
→ covariance Cov(xn, xm) = E[(xn − E(xn)) · (xm − E(xm))∗] = E(xn x∗m) − E(xn)E(xm)∗
Remember that E(·) is linear, that is E(ax) = aE(x) and E(x + y) = E(x) + E(y). Also, Var(ax) = a²·Var(x) and, if x and y are independent, Var(x + y) = Var(x) + Var(y).

A stationary random process {xn} can be characterized by its mean value

    mx = E(xn),

its variance

    σx² = E(|xn − mx|²) = γxx(0)

(σx is also called standard deviation), its autocorrelation sequence

    φxx(k) = E(xn+k · x∗n)

and its autocovariance sequence

    γxx(k) = E[(xn+k − mx) · (xn − mx)∗] = φxx(k) − |mx|²

A pair of stationary random processes {xn} and {yn} can, in addition, be characterized by its crosscorrelation sequence

    φxy(k) = E(xn+k · y∗n)

and its crosscovariance sequence

    γxy(k) = E[(xn+k − mx) · (yn − my)∗] = φxy(k) − mx·m∗y
Deterministic crosscorrelation sequence
For deterministic sequences {xn} and {yn}, the crosscorrelation sequence is

    cxy(k) = ∑_{i=−∞}^{∞} xi+k · yi.

After dividing through the overlapping length of the finite sequences involved, cxy(k) can be used to estimate, from a finite sample of a stationary random sequence, the underlying φxy(k). MATLAB’s xcorr function does that with option unbiased.
If {xn} is similar to {yn}, but lags l elements behind (xn ≈ yn−l), then cxy(l) will be a peak in the crosscorrelation sequence. It is therefore widely calculated to locate shifted versions of a known sequence in another one.
The deterministic crosscorrelation sequence is a close cousin of the convolution, with just the second input sequence mirrored:

    {cxy(n)} = {xn} ∗ {y−n}

It can therefore be calculated equally easily via the Fourier transform:

    Cxy(f) = X(f) · Y∗(f)

Swapping the input sequences mirrors the output sequence: cxy(k) = cyx(−k).

Equivalently, we define the deterministic autocorrelation sequence in the time domain as

    cxx(k) = ∑_{i=−∞}^{∞} xi+k · xi,

which corresponds in the frequency domain to

    Cxx(f) = X(f) · X∗(f) = |X(f)|².

In other words, the Fourier transform Cxx(f) of the autocorrelation sequence {cxx(n)} of a sequence {xn} is identical to the squared amplitudes of the Fourier transform, or power spectrum, of {xn}.
This suggests that the Fourier transform of the autocorrelation sequence of a random process might be a suitable way for defining the power spectrum of that random process.
What can we say about the phase in the Fourier spectrum of a time-invariant random process?

Filtered random sequences
Let {xn} be a random sequence from a stationary random process. The output

    yn = ∑_{k=−∞}^{∞} hk · xn−k = ∑_{k=−∞}^{∞} hn−k · xk

of an LTI applied to it will then be another random sequence, characterized by

    my = mx · ∑_{k=−∞}^{∞} hk

and

    φyy(k) = ∑_{i=−∞}^{∞} φxx(k−i) · chh(i),   where φxx(k) = E(xn+k · x∗n) and chh(k) = ∑_{i=−∞}^{∞} hi+k · hi.

In other words:

    {yn} = {hn} ∗ {xn}   ⇒   {φyy(n)} = {chh(n)} ∗ {φxx(n)},   Φyy(f) = |H(f)|² · Φxx(f)

Similarly:

    {yn} = {hn} ∗ {xn}   ⇒   {φyx(n)} = {hn} ∗ {φxx(n)},   Φyx(f) = H(f) · Φxx(f)

White noise
A random sequence {xn} is a white noise signal, if mx = 0 and

    φxx(k) = σx² · δk.

The power spectrum of a white noise signal is flat:

    Φxx(f) = σx².
Application example:
Where an LTI {yn} = {hn} ∗ {xn} can be observed to operate on white noise {xn} with φxx(k) = σx²·δk, the crosscorrelation between input and output will reveal the impulse response of the system:

    φyx(k) = σx² · hk

where φyx(k) = φxy(−k) = E(yn+k · x∗n).

Periodic averaging
If a zero-mean signal {xi} has a periodic component with period p, the periodic component can be isolated by periodic averaging:

    x̄i = lim_{k→∞} 1/(2k+1) · ∑_{n=−k}^{k} xi+pn

Periodic averaging corresponds in the time domain to convolution with a Dirac comb ∑_n δi−pn. In the frequency domain, this means multiplication with a Dirac comb that eliminates all frequencies but multiples of 1/p.

DFT averaging
[Figure: three spectral estimates of the same noisy two-tone sequence — a single 64-point DFT, the average of 1000 DFT magnitudes (incoherent averaging), and the magnitude of the average of 1000 complex DFTs (coherent averaging).]
The above diagrams show different types of spectral estimates of a sequence xi = sin(2πi × 8/64) + sin(2πi × 14.32/64) + ni with φnn(i) = 4δi.
Left is a single 64-element DFT of {xi} (with rectangular window). The flat spectrum of white noise is only an expected value. In a single discrete Fourier transform of such a sequence, the significant variance of the noise spectrum becomes visible. It almost drowns the two peaks from the sine waves.
After cutting {xi} into 1000 windows of 64 elements each, calculating their DFT, and plotting the average of their absolute values, the centre figure shows an approximation of the expected value of the amplitude spectrum, with a flat noise floor. Taking the absolute value before spectral averaging is called incoherent averaging, as the phase information is thrown away.
The rightmost figure was generated from the same set of 1000 windows, but this time the complex values of the DFTs were averaged before the absolute value was taken. This is called coherent averaging and, because of the linearity of the DFT, identical to first averaging the 1000 windows and then applying a single DFT and taking its absolute value. The windows start 64 samples apart. Only periodic waveforms with a period that divides 64 are not averaged away. This periodic averaging step suppresses both the noise and the second sine wave.
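The averaging experiment described above can be re-run with a few lines of MATLAB (sketch; contiguous windows, noise of variance 4 as stated):

n = 64; m = 1000;
i = (0:n*m-1)';
x = sin(2*pi*i*8/64) + sin(2*pi*i*14.32/64) + 2*randn(n*m, 1);
X = fft(reshape(x, n, m));          % one 64-point DFT per window (column)
plot(0:n-1, abs(X(:,1)), ...        % single noisy DFT
     0:n-1, mean(abs(X), 2), ...    % incoherent average
     0:n-1, abs(mean(X, 2)));       % coherent average
legend('single DFT', 'incoherent average', 'coherent average');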
Image, video and audio compression
Structure of modern audiovisual communication systems:

    signal → sensor + sampling → perceptual coding → entropy coding → channel coding → channel (+ noise) →
    → channel decoding → entropy decoding → perceptual decoding → display → human senses

Audio-visual lossy coding today typically consists of these steps:
→ A transducer converts the original stimulus into a voltage.
→ This analog signal is then sampled and quantized.
  The digitization parameters (sampling frequency, quantization levels) are preferably chosen generously beyond the ability of human senses or output devices.
→ The digitized sensor-domain signal is then transformed into a perceptual domain.
  This step often mimics some of the first neural processing steps in humans.
→ This signal is quantized again, based on a perceptual model of what level of quantization-noise humans can still sense.
→ The resulting quantized levels may still be highly statistically dependent. A prediction or decorrelation transform exploits this and produces a less dependent symbol sequence of lower entropy.
→ An entropy coder turns that into an apparently-random bit string, whose length approximates the remaining entropy.
The first neural processing steps in humans are in effect often a kind of decorrelation transform; our eyes and ears were optimized like any other AV communications system. This allows us to use the same transform for decorrelating and transforming into a perceptually relevant domain.

Outline of the remaining lectures
→ Quick review of entropy coding
→ Transform coding: techniques for converting sequences of highly-dependent symbols into less-dependent lower-entropy sequences.
  • run-length coding
  • decorrelation, Karhunen-Loève transform (PCA)
  • other orthogonal transforms (especially DCT)
→ Introduction to some characteristics and limits of human senses
  • perceptual scales and sensitivity limits
  • colour vision
  • human hearing limits, critical bands, audio masking
→ Quantization techniques to remove information that is irrelevant to human senses
→ Image and audio coding standards
  • A/µ-law coding (digital telephone network)
  • JPEG
  • MPEG video
  • MPEG audio

Literature
→ D. Salomon: A guide to data compression methods. ISBN 0387952608, 2002.
→ L. Gulick, G. Gescheider, R. Frisina: Hearing. ISBN 0195043073, 1989.
→ H. Schiffman: Sensation and perception. ISBN 0471082082, 1982.

Entropy coding review – Huffman

    Entropy: H = ∑_{α∈A} p(α) · log2(1/p(α)) = 2.3016 bit

[Figure: Huffman code-word tree for six symbols u, v, w, x, y, z with probabilities 0.35, 0.20, 0.20, 0.15, 0.05, 0.05; mean codeword length: 2.35 bit]
Huffman’s algorithm constructs an optimal code-word tree for a set of symbols with known probability distribution. It iteratively picks the two elements of the set with the smallest probability and combines them into a tree by adding a common root. The resulting tree goes back into the set, labeled with the sum of the probabilities of the elements it combines. The algorithm terminates when less than two elements are left.
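A two-line check of the numbers quoted above (sketch; the code-word lengths follow from the tree):

p   = [0.35 0.20 0.20 0.15 0.05 0.05];   % u v w x y z
H   = sum(p .* log2(1./p))               % entropy: 2.3016 bit
len = [2 2 2 3 4 4];                     % Huffman code-word lengths
L   = sum(p .* len)                      % mean codeword length: 2.35 bit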
Entropy coding review – arithmetic coding
Partition [0,1] according to symbol probabilities:

    u: [0.00, 0.35), v: [0.35, 0.55), w: [0.55, 0.75), x: [0.75, 0.90), y: [0.90, 0.95), z: [0.95, 1.00)

Encode text wuvw . . . as a numeric value (0.58. . . ) in nested intervals:
[Figure: the interval [0, 1] is narrowed successively to [0.55, 0.75], [0.55, 0.62], [0.5745, 0.5885], [0.5822, 0.5850], . . . as the symbols w, u, v, w, . . . are encoded.]

Arithmetic coding
Several advantages:
→ Length of output bitstring can approximate the theoretical information content of the input to within 1 bit.
→ Performs well with probabilities > 0.5, where the information per symbol is less than one bit.
→ Interval arithmetic makes it easy to change symbol probabilities (no need to modify code-word tree) ⇒ convenient for adaptive coding
Can be implemented efficiently with fixed-length arithmetic by rounding probabilities and shifting out leading digits as soon as leading zeros appear in interval size. Usually combined with adaptive probability estimation.
Huffman coding remains popular because of its simplicity and lack of patent-licence issues.

Coding of sources with memory and correlated symbols
Run-length coding:
[Figure: a sequence of repeated symbols is represented by its run lengths, e.g. 5, 7, 12, 3, 3.]
Predictive coding:
[Figure: the encoder transmits the prediction error g(t) = f(t) − P(f(t−1), f(t−2), . . .); the decoder reconstructs f(t) = g(t) + P(f(t−1), f(t−2), . . .).]
Delta coding (DPCM): P(x) = x
Linear predictive coding: P(x1, . . . , xn) = ∑_{i=1}^{n} ai·xi

Old (Group 3 MH) fax code
• Run-length encoding plus modified Huffman code
• Fixed code table (from eight sample pages)
• separate codes for runs of white and black pixels
• termination code in the range 0–63 switches between black and white code
• makeup code can extend length of a run by a multiple of 64
• termination run length 0 needed where run length is a multiple of 64
• single white column added on left side before transmission
• makeup codes above 1728 equal for black and white
• 12-bit end-of-line marker: 000000000001 (can be prefixed by up to seven zero-bits to reach next byte boundary)

Example: line with 2 w, 4 b, 200 w, 3 b, EOL →
1000|011|010111|10011|10|000000000001

    pixels  white code  black code
       0    00110101    0000110111
       1    000111      010
       2    0111        11
       3    1000        10
       4    1011        011
       5    1100        0011
       6    1110        0010
       7    1111        00011
       8    10011       000101
       9    10100       000100
      10    00111       0000100
      11    01000       0000101
      12    001000      0000111
      13    000011      00000100
      14    110100      00000111
      15    110101      000011000
      16    101010      0000010111
     ...    ...         ...
      63    00110100    000001100111
      64    11011       0000001111
     128    10010       000011001000
     192    010111      000011001001
     ...    ...         ...
    1728    010011011   0000001100101
Modern (JBIG) fax code
Performs context-sensitive arithmetic coding of binary pixels. Both encoder and decoder maintain statistics on how the black/white probability of each pixel depends on these 10 previously transmitted neighbours:
[Figure: context template of 10 neighbour pixels above and to the left of the pixel “?” that is being coded.]
Based on the counted numbers nblack and nwhite of how often each pixel value has been encountered so far in each of the 1024 contexts, the probability for the next pixel being black is estimated as

    pblack = (nblack + 1) / (nwhite + nblack + 2)

The encoder updates its estimate only after the newly counted pixel has been encoded, such that the decoder knows the exact same statistics.
Joint Bi-level Expert Group: International Standard ISO 11544, 1993.
Example implementation: http://www.cl.cam.ac.uk/~mgk25/jbigkit/

Statistical dependence
Random variables X, Y are dependent iff ∃x, y:

    P(X = x ∧ Y = y) ≠ P(X = x) · P(Y = y).

If X, Y are dependent, then
⇒ ∃x, y : P(X = x | Y = y) ≠ P(X = x)  ∨  P(Y = y | X = x) ≠ P(Y = y)
⇒ H(X|Y) < H(X)  ∨  H(Y|X) < H(Y)

Application
Where x is the value of the next symbol to be transmitted and y is the vector of all symbols transmitted so far, accurate knowledge of the conditional probability P(X = x | Y = y) will allow a transmitter to remove all redundancy.
An application example of this approach is JBIG, but there y is limited to 10 past single-bit pixels and P(X = x | Y = y) is only an estimate.

Practical limits of measuring conditional probabilities
The practical estimation of conditional probabilities, in their most general form, based on statistical measurements of example signals, quickly reaches practical limits. JBIG needs an array of only 2^11 = 2048 counting registers to maintain estimator statistics for its 10-bit context.
If we wanted to encode each 24-bit pixel of a colour image based on its statistical dependence of the full colour information from just ten previous neighbour pixels, the required number of

    (2^24)^11 ≈ 3 × 10^80

registers for storing each probability will exceed the estimated number of particles in this universe. (Neither will we encounter enough pixels to record statistically significant occurrences in all (2^24)^10 contexts.)
This example is far from excessive. It is easy to show that in colour images, pixel values show statistically significant dependence across colour channels, and across locations more than eight pixels apart.
A simpler approximation of dependence is needed: correlation.

Correlation
Two random variables X ∈ R and Y ∈ R are correlated iff

    E{[X − E(X)] · [Y − E(Y)]} ≠ 0

where E(· · ·) denotes the expected value of a random-variable term.
Correlation implies dependence, but dependence does not always lead to correlation.
[Figure: scatter plot of a pair of random variables that are dependent but not correlated.]
However, most dependency in audiovisual data is a consequence of correlation, which is algorithmically much easier to exploit.
Positive correlation: higher X ⇔ higher Y, lower X ⇔ lower Y
Negative correlation: lower X ⇔ higher Y, higher X ⇔ lower Y
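A quick numerical illustration of the difference (a sketch, not from the notes; points on a circle are one classic example of dependence without correlation):

phi = 2*pi*rand(1, 10000);
X = cos(phi); Y = sin(phi);     % Y is determined by X up to a sign, yet ...
corrcoef(X, Y)                  % ... the off-diagonal (correlation) entries are near 0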
Correlation of neighbour pixels
[Figure: scatter plots of the values of neighbour pixels at distances 1, 2, 4 and 8; the correlation decreases with increasing distance.]

Covariance and correlation
We define the covariance of two random variables X and Y as

    Cov(X, Y) = E{[X − E(X)] · [Y − E(Y)]} = E(X · Y) − E(X) · E(Y)

and the variance as Var(X) = Cov(X, X) = E{[X − E(X)]²}.
The Pearson correlation coefficient

    ρX,Y = Cov(X, Y) / √(Var(X) · Var(Y))

is a normalized form of the covariance. It is limited to the range [−1, 1].
If the correlation coefficient has one of the values ρX,Y = ±1, this implies that X and Y are exactly linearly dependent, i.e. Y = aX + b, with a = Cov(X, Y)/Var(X) and b = E(Y) − a · E(X).

Covariance Matrix
For a random vector X = (X1, X2, . . . , Xn) ∈ Rn we define the covariance matrix

    Cov(X) = E( (X − E(X)) · (X − E(X))^T ) = (Cov(Xi, Xj))_{i,j} =

    [ Cov(X1, X1)  Cov(X1, X2)  Cov(X1, X3)  · · ·  Cov(X1, Xn) ]
    [ Cov(X2, X1)  Cov(X2, X2)  Cov(X2, X3)  · · ·  Cov(X2, Xn) ]
    [ Cov(X3, X1)  Cov(X3, X2)  Cov(X3, X3)  · · ·  Cov(X3, Xn) ]
    [     ...           ...          ...      . . .     ...     ]
    [ Cov(Xn, X1)  Cov(Xn, X2)  Cov(Xn, X3)  · · ·  Cov(Xn, Xn) ]

The elements of a random vector X are uncorrelated if and only if Cov(X) is a diagonal matrix.
Cov(X, Y) = Cov(Y, X), so all covariance matrices are symmetric: Cov(X) = Cov^T(X).

Decorrelation by coordinate transform
[Figure: scatter plots of neighbour-pixel value pairs before and after decorrelation, and the probability distributions and entropies of the correlated value pair (H = 13.90 bit) and of the two decorrelated values (H = 7.12 bit and H = 4.75 bit).]
Idea: Take the values of a group of correlated symbols (e.g., neighbour pixels) as a random vector. Find a coordinate transform (multiplication with an orthonormal matrix) that leads to a new random vector whose covariance matrix is diagonal. The vector components in this transformed coordinate system will no longer be correlated. This will hopefully reduce the entropy of some of these components.
Theorem: Let X ∈ Rn and Y ∈ Rn be random vectors that are linearly dependent with Y = AX + b, where A ∈ Rn×n and b ∈ Rn are constants. Then

    E(Y) = A · E(X) + b
    Cov(Y) = A · Cov(X) · A^T

Proof: The first equation follows from the linearity of the expected-value operator E(·), as does E(A · X · B) = A · E(X) · B for matrices A, B. With that, we can transform

    Cov(Y) = E[ (Y − E(Y)) · (Y − E(Y))^T ]
           = E[ (AX − AE(X)) · (AX − AE(X))^T ]
           = E[ A(X − E(X)) · (X − E(X))^T A^T ]
           = A · E[ (X − E(X)) · (X − E(X))^T ] · A^T
           = A · Cov(X) · A^T

Quick review: eigenvectors and eigenvalues
We are given a square matrix A ∈ Rn×n. The vector x ∈ Rn is an eigenvector of A if there exists a scalar value λ ∈ R such that

    Ax = λx.

The corresponding λ is the eigenvalue of A associated with x.
The length of an eigenvector is irrelevant, as any multiple of it is also an eigenvector. Eigenvectors are in practice normalized to length 1.

Spectral decomposition
Any real, symmetric matrix A = A^T ∈ Rn×n can be diagonalized into the form

    A = UΛU^T,

where Λ = diag(λ1, λ2, . . . , λn) is the diagonal matrix of the ordered eigenvalues of A (with λ1 ≥ λ2 ≥ · · · ≥ λn), and the columns of U are the n corresponding orthonormal eigenvectors of A.

Karhunen-Loève transform (KLT)
We are given a random vector variable X ∈ Rn. The correlation of the elements of X is described by the covariance matrix Cov(X).
How can we find a transform matrix A that decorrelates X, i.e. that turns Cov(AX) = A · Cov(X) · A^T into a diagonal matrix? A would provide us the transformed representation Y = AX of our random vector, in which all elements are mutually uncorrelated.
Note that Cov(X) is symmetric. It therefore has n real eigenvalues λ1 ≥ λ2 ≥ · · · ≥ λn and a set of associated mutually orthogonal eigenvectors b1, b2, . . . , bn of length 1 with

    Cov(X)·bi = λi·bi.

We convert this set of equations into matrix notation using the matrix B = (b1, b2, . . . , bn) that has these eigenvectors as columns and the diagonal matrix D = diag(λ1, λ2, . . . , λn) that consists of the corresponding eigenvalues:

    Cov(X)·B = B·D

B is orthonormal, that is BB^T = I.
Multiplying the above from the right with B^T leads to the spectral decomposition

    Cov(X) = BDB^T

of the covariance matrix. Similarly multiplying instead from the left with B^T leads to

    B^T·Cov(X)·B = D

and therefore shows with

    Cov(B^T·X) = D

that the eigenvector matrix B^T is the wanted transform.
The Karhunen-Loève transform (also known as Hotelling transform or Principal Component Analysis) is the multiplication of a correlated random vector X with the orthonormal eigenvector matrix B^T from the spectral decomposition Cov(X) = BDB^T of its covariance matrix. This leads to a decorrelated random vector B^T·X whose covariance matrix is diagonal.
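Numerically, the transform B^T can be obtained with MATLAB’s eig function (a sketch with synthetic, strongly correlated sample pairs; the mixing coefficients are arbitrary):

x = randn(1, 10000);
S = [x; 0.9*x + 0.45*randn(1, 10000)];   % two correlated rows of samples
C = cov(S')                              % non-diagonal covariance matrix
[B, D] = eig(C);                         % C = B*D*B'
cov((B'*S)')                             % covariance of B'*S: (close to) diagonal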
Karhunen-Loève transform example I
[Figure: a colour image of a bear and its red, green and blue channels shown separately as black/white images.]
The colour image (left) has m = r² pixels, each of which is an n = 3-dimensional RGB vector

    Ix,y = (rx,y, gx,y, bx,y)^T

The three rightmost images show each of these colour planes separately as a black/white image.
We want to apply the KLT on a set of such m Rn colour vectors. Therefore, we reformat the image I into an n × m matrix of the form

    S = [ r1,1 r1,2 r1,3 · · · rr,r ]
        [ g1,1 g1,2 g1,3 · · · gr,r ]
        [ b1,1 b1,2 b1,3 · · · br,r ]

We can now define the mean colour vector

    S̄c = (1/m) ∑_{i=1}^{m} Sc,i,    S̄ = (0.4839, 0.4456, 0.3411)^T

and the covariance matrix

    Cc,d = 1/(m−1) ∑_{i=1}^{m} (Sc,i − S̄c)(Sd,i − S̄d)

    C = [ 0.0328 0.0256 0.0160 ]
        [ 0.0256 0.0216 0.0140 ]
        [ 0.0160 0.0140 0.0109 ]

[When estimating a covariance from a number of samples, the sum is divided by the number of samples minus one. This takes into account the variance of the mean S̄c, which is not the exact expected value, but only an estimate of it.]
The resulting covariance matrix has three eigenvalues 0.0622, 0.0025, and 0.0006:

    C · (0.7167, 0.5833, 0.3822)^T  = 0.0622 · (0.7167, 0.5833, 0.3822)^T
    C · (−0.5509, 0.1373, 0.8232)^T = 0.0025 · (−0.5509, 0.1373, 0.8232)^T
    C · (−0.4277, 0.8005, −0.4198)^T = 0.0006 · (−0.4277, 0.8005, −0.4198)^T

and can therefore be diagonalized as

    C = U · D · U^T =
    [ 0.7167 −0.5509 −0.4277 ]   [ 0.0622 0      0      ]   [  0.7167 0.5833  0.3822 ]
    [ 0.5833  0.1373  0.8005 ] · [ 0      0.0025 0      ] · [ −0.5509 0.1373  0.8232 ]
    [ 0.3822  0.8232 −0.4198 ]   [ 0      0      0.0006 ]   [ −0.4277 0.8005 −0.4198 ]

(e.g. using MATLAB’s singular-value decomposition function svd).

Karhunen-Loève transform example I (continued)
We finally apply the orthogonal 3 × 3 transform matrix U, which we just used to diagonalize the covariance matrix, to the entire image:

    T = U^T · ( S − [ S̄1 S̄1 · · · S̄1 ] )  +  [ S̄1 S̄1 · · · S̄1 ]
                    [ S̄2 S̄2 · · · S̄2 ]       [ S̄2 S̄2 · · · S̄2 ]
                    [ S̄3 S̄3 · · · S̄3 ]       [ S̄3 S̄3 · · · S̄3 ]

The resulting transformed image

    T = [ u1,1 u1,2 u1,3 · · · ur,r ]
        [ v1,1 v1,2 v1,3 · · · vr,r ]
        [ w1,1 w1,2 w1,3 · · · wr,r ]

consists of three new “colour” planes whose pixel values have no longer any correlation to the pixels at the same coordinates in another plane. [The bear disappeared from the last of these (w), which represents mostly some of the original green grass in the background.]
[Figure: the image before the KLT (r, g, b planes), after the KLT (u, v, w planes), and projections onto the eigenvector subspaces v = w = 0 and w = 0.]

Spatial correlation
The previous example used the Karhunen-Loève transform in order to eliminate correlation between colour planes. While this is of some relevance for image compression, far more correlation can be found between neighbour pixels within each colour plane.
In order to exploit such correlation using the KLT, the sample set has to be extended from individual pixels to entire images. The underlying calculation is the same as in the preceding example, but this time the columns of S are entire (monochrome) images. The rows are the different images found in the set of test images that we use to examine typical correlations between neighbour pixels.
In other words, we use the same formulas as in the previous example, but this time n is the number of pixels per image and m is the number of sample images. The Karhunen-Loève transform is here no longer a rotation in a 3-dimensional colour space, but it operates now in a much larger vector space that has as many dimensions as an image has pixels.
To keep things simple, we look in the next experiment only at m = 9000 1-dimensional “images” with n = 32 pixels each. As a further simplification, we use not real images, but random noise that was filtered such that its amplitude spectrum is proportional to 1/f, where f is the frequency. The result would be similar in a sufficiently large collection of real test images.
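Before moving on, example I above can be reproduced for any RGB image with a short sketch (the file name 'bear.jpg' is only a placeholder; repmat avoids relying on implicit expansion):

I  = double(imread('bear.jpg')) / 255;     % r x r x 3 colour image
S  = reshape(I, [], 3)';                   % 3 x m matrix of RGB column vectors
Sm = mean(S, 2);                           % mean colour vector
[U, D] = eig(cov(S'));                     % diagonalize the 3 x 3 covariance matrix
T  = U' * (S - repmat(Sm, 1, size(S, 2))) + repmat(Sm, 1, size(S, 2));
cov(T')                                    % (close to) diagonal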
Karhunen-Loève transform example II
Matrix columns of S filled with samples of 1/f filtered noise
[Figure: the sample matrix S, its covariance matrix C, and the matrix U with eigenvector columns.]

[Figure: matrix U′ with normalised KLT eigenvector columns, next to the matrix with Discrete Cosine Transform base vector columns.]
Breakthrough: Ahmed/Natarajan/Rao discovered the DCT as an excellent approximation of the KLT for typical photographic images, but far more efficient to calculate.
Ahmed, Natarajan, Rao: Discrete Cosine Transform. IEEE Transactions on Computers, Vol. 23, January 1974, pp. 90–93.

Discrete cosine transform (DCT)
The forward and inverse discrete cosine transform

    S(u) = C(u)/√(N/2) · ∑_{x=0}^{N−1} s(x) · cos[(2x + 1)uπ / (2N)]

    s(x) = ∑_{u=0}^{N−1} C(u)/√(N/2) · S(u) · cos[(2x + 1)uπ / (2N)]

with

    C(u) = 1/√2 for u = 0,  C(u) = 1 for u > 0

is an orthonormal transform:

    ∑_{x=0}^{N−1} C(u)/√(N/2) · cos[(2x + 1)uπ / (2N)] · C(u′)/√(N/2) · cos[(2x + 1)u′π / (2N)]  =  1 for u = u′,  0 for u ≠ u′

The 2-dimensional variant of the DCT applies the 1-D transform on both rows and columns of an image:

    S(u, v) = C(u)/√(N/2) · C(v)/√(N/2) · ∑_{x=0}^{N−1} ∑_{y=0}^{N−1} s(x, y) · cos[(2x + 1)uπ / (2N)] · cos[(2y + 1)vπ / (2N)]

    s(x, y) = ∑_{u=0}^{N−1} ∑_{v=0}^{N−1} C(u)/√(N/2) · C(v)/√(N/2) · S(u, v) · cos[(2x + 1)uπ / (2N)] · cos[(2y + 1)vπ / (2N)]

A range of fast algorithms have been found for calculating 1-D and 2-D DCTs (e.g., Ligtenberg/Vetterli).
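The N-point DCT basis matrix can be built directly from the formulas above and checked for orthonormality (sketch):

N = 8;
D = zeros(N);
for u = 0:N-1
  for x = 0:N-1
    if u == 0, Cu = 1/sqrt(2); else Cu = 1; end
    D(u+1, x+1) = Cu/sqrt(N/2) * cos((2*x+1)*u*pi/(2*N));
  end
end
max(max(abs(D*D' - eye(N))))    % numerically zero: the rows are orthonormal
% S = D*s applies the 1-D DCT to a column vector s; D*s*D' is the 2-D DCT of a block.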
Whole-image DCT
[Figure: 2D DCT of the example image (log10 magnitude) and the original image.]

Whole-image DCT, 80% coefficient cutoff
[Figure: 2D DCT with 80% of the coefficients truncated (log10 magnitude) and the image reconstructed from the remaining coefficients.]

Whole-image DCT, 90% coefficient cutoff
[Figure: 90% of the coefficients truncated and the reconstructed image.]

Whole-image DCT, 95% coefficient cutoff
[Figure: 95% of the coefficients truncated and the reconstructed image.]
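A truncation experiment of this kind can be sketched with the Image Processing Toolbox functions dct2/idct2 (the test image and the choice to drop the smallest 90 % of coefficients by magnitude are assumptions, not necessarily the procedure used for the figures):

I = double(imread('cameraman.tif'));
D = dct2(I);
d = sort(abs(D(:)));
D(abs(D) < d(round(0.9*numel(d)))) = 0;   % zero the smallest 90 % of coefficients
imshow([I idct2(D)] / 255);               % original and reconstruction side by side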
Whole-image DCT, 99% coefficient cutoff
[Figure: 99% of the DCT coefficients truncated (log10 magnitude) and the reconstructed image.]

Base vectors of 8×8 DCT
[Figure: the 64 basis images of the 8×8 DCT, arranged by horizontal frequency u and vertical frequency v, each 0 . . . 7.]

Psychophysics of perception
Sensation limit (SL) = lowest intensity stimulus that can still be perceived
Difference limit (DL) = smallest perceivable stimulus difference at given intensity level

Weber’s law
Difference limit ∆φ is proportional to the intensity φ of the stimulus (except for a small correction constant a, to describe deviation of experimental results near SL):

    ∆φ = c · (φ + a)

Fechner’s scale
Define a perception intensity scale ψ using the sensation limit φ0 as the origin and the respective difference limit ∆φ = c · φ as a unit step. The result is a logarithmic relationship between stimulus intensity and scale value:

    ψ = logc(φ/φ0)

Fechner’s scale matches older subjective intensity scales that follow differentiability of stimuli, e.g. the astronomical magnitude numbers for star brightness introduced by Hipparchos (≈150 BC).

Stevens’ power law
A sound that is 20 DL over SL is perceived as more than twice as loud as one that is 10 DL over SL, i.e. Fechner’s scale does not describe well perceived intensity. A rational scale attempts to reflect subjective relations perceived between different values of stimulus intensity φ. Stanley Smith Stevens observed that such rational scales ψ follow a power law:

    ψ = k · (φ − φ0)^a

Example coefficients a: brightness 0.33, loudness 0.6, heaviness 1.45, temperature (warmth) 1.6.
RGB video colour coordinates
Hardware interface (VGA): red, green, blue signals with 0–0.7 V
Electron-beam current and photon count of cathode-ray displays are roughly proportional to (v − v0)^γ, where v is the video-interface or control-grid voltage and γ is a device parameter that is typically in the range 1.5–3.0. In broadcast TV, this CRT non-linearity is compensated electronically in TV cameras. A welcome side effect is that it approximates Stevens’ scale and therefore helps to reduce perceived noise.
Software interfaces map RGB voltage linearly to {0, 1, . . . , 255} or 0–1. How numeric RGB values map to colour and luminosity depends at present still highly on the hardware and sometimes even on the operating system or device driver.
The new specification “sRGB” aims to standardize the meaning of an RGB value with the parameter γ = 2.2 and with standard colour coordinates of the three primary colours.
http://www.w3.org/Graphics/Color/sRGB, IEC 61966

YUV video colour coordinates
The human eye processes colour and luminosity at different resolutions. To exploit this phenomenon, many image transmission systems use a colour space with a luminance coordinate

    Y = 0.3R + 0.6G + 0.1B

and colour (“chrominance”) components

    V = R − Y = 0.7R − 0.6G − 0.1B
    U = B − Y = −0.3R − 0.6G + 0.9B

YUV transform example
[Figure: the original colour image, the Y channel alone, and the U and V channels with the luminance replaced by a constant.]
The centre image shows only the luminance channel as a black/white image. In the right image, the luminance channel (Y) was replaced with a constant, such that only the chrominance information remains.
This example and the next only make sense when viewed in colour. On a black/white printout of this slide, only the Y channel information will be present.

Y versus UV sensitivity example
[Figure: the original image, a version with severely low-pass filtered U and V channels, and a version with the same filter applied to the Y channel.]
In the centre image, the chrominance channels have been severely low-pass filtered (Gaussian impulse response). But the human eye perceives this distortion as far less severe than if the exact same filtering is applied to the luminance channel (right image).
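Splitting an image into these components takes only a few lines (sketch; 'bear.jpg' stands for any RGB file, with values scaled to 0–1):

I = double(imread('bear.jpg')) / 255;
Y = 0.3*I(:,:,1) + 0.6*I(:,:,2) + 0.1*I(:,:,3);   % luminance
V = I(:,:,1) - Y;                                 % chrominance
U = I(:,:,3) - Y;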
YCrCb video colour coordinates
Since −0.7 ≤ V ≤ 0.7 and −0.9 ≤ U ≤ 0.9, a more convenient normalized encoding of chrominance is:

    Cb = U/2.0 + 0.5
    Cr = V/1.6 + 0.5

[Figure: the Cb–Cr plane shown for luminance values Y = 0.1, 0.3, 0.5, 0.7, 0.9, 0.99.]
Modern image compression techniques operate on Y, Cr, Cb channels separately, using half the resolution of Y for storing Cr, Cb.
Some digital-television engineering terminology:
If each pixel is represented by its own Y, Cr and Cb byte, this is called a “4:4:4” format. In the compacter “4:2:2” format, a Cr and Cb value is transmitted only for every second pixel, reducing the horizontal chrominance resolution by a factor two. The “4:2:0” format transmits in alternating lines either Cr or Cb for every second pixel, thus halving the chrominance resolution both horizontally and vertically. The “4:1:1” format reduces the chrominance resolution horizontally by a quarter and “4:1:0” does so in both directions. [ITU-R BT.601]

Equiloudness curves and the unit “phon”
[Figure: equal-loudness contours over frequency, from the sensation limit up to high sound-pressure levels.]
Each curve represents a loudness level in phon. At 1 kHz, the loudness unit phon is identical to dBSPL and 0 phon is the sensation limit.

The human auditory system
→ frequency range 20–16000 Hz (babies: 20 kHz)
→ sound pressure range 0–140 dBSPL (about 10^−5–10^2 pascal)
→ mechanical filter bank (cochlea) splits input into frequency components, physiological equivalent of Fourier transform
→ most signal processing happens in the frequency domain where phase information is lost
→ some time-domain processing below 500 Hz and for directional hearing
→ sensitivity and difference limit are frequency dependent

Sound waves cause vibration in the eardrum. The three smallest human bones in the middle ear (malleus, incus, stapes) provide an “impedance match” between air and liquid and conduct the sound via a second membrane, the oval window, to the cochlea. Its three chambers are rolled up into a spiral. The basilar membrane that separates the two main chambers decreases in stiffness along the spiral, such that the end near the stapes vibrates best at the highest frequencies, whereas for lower frequencies that amplitude peak moves to the far end.
Frequency discrimination and critical bands
A pair of pure tones (sine functions) cannot be distinguished as two separate frequencies if both are in the same frequency group (“critical band”). Their loudness adds up, and both are perceived with their average frequency.
The human ear has about 24 critical bands whose width grows non-linearly with the center frequency.
Each audible frequency can be expressed on the “Bark scale” with values in the range 0–24. A good closed-form approximation is

    b ≈ 26.81 / (1 + 1960 Hz / f) − 0.53

where f is the frequency and b the corresponding point on the Bark scale.
Two frequencies are in the same critical band if their distance is below 1 bark.

Masking
→ Louder tones increase the sensation limit for nearby frequencies and suppress the perception of quieter tones.
→ This increase is not symmetric. It extends about 3 barks to lower frequencies and 8 barks to higher ones.
→ The sensation limit is increased less for pure tones of nearby frequencies, as these can still be perceived via their beat frequency. For the study of masking effects, pure tones therefore need to be distinguished from narrowband noise.
→ Temporal masking: SL rises shortly before and after a masker.

Audio demo: loudness and masking
loudness.wav
Two sequences of tones with frequencies 40, 63, 100, 160, 250, 400, 630, 1000, 1600, 2500, 4000, 6300, 10000, and 16000 Hz.
→ Sequence 1: tones have equal amplitude
→ Sequence 2: tones have roughly equal perceived loudness
Amplitude adjusted to IEC 60651 “A” weighting curve for soundlevel meters.
masking.wav
Twelve sequences, each with twelve probe-tone pulses and a 1200 Hz masking tone during pulses 5 to 8.
Probing tone frequency and relative masking tone amplitude (one combination per sequence):

    probing tone frequencies:          1300 Hz, 1900 Hz, 700 Hz
    relative masking tone amplitudes:  10 dB, 20 dB, 30 dB, 40 dB

Audio demo: loudness.wav
[Figure: the 0 dBA sensation-limit curve and the sound-pressure levels (dBSPL) of the first and second tone series over frequency, 40 Hz–16 kHz.]
Audio demo: masking.wav
[Figure: the 0 dBA sensation-limit curve, the masking tones, the probing tones and the resulting masking thresholds, plotted in dBSPL over frequency, 40 Hz–16 kHz.]

Quantization
[Figure: staircase transfer functions of a uniform/linear quantizer and of a non-uniform quantizer.]
Quantization is the mapping from a continuous or large set of values (e.g., analog voltage, floating-point number) to a smaller set of (typically 2^8, 2^12 or 2^16) values.
This introduces two types of error:
→ the amplitude of quantization noise reaches up to half the maximum difference between neighbouring quantization levels
→ clipping occurs where the input amplitude exceeds the value of the highest (or lowest) quantization level

Example of a linear quantizer (resolution R, peak value V):

    y = max(−V, min(V, R · ⌊x/R + 1/2⌋))

Adding a noise signal that is uniformly distributed on [0, 1] instead of adding 1/2 helps to spread the frequency spectrum of the quantization noise more evenly. This is known as dithering.
Variant with even number of output values (no zero):

    y = max(−V, min(V, R · (⌊x/R⌋ + 1/2)))

Improving the resolution by a factor of two (i.e., adding 1 bit) reduces the quantization noise by 6 dB.
Linearly quantized signals are easiest to process, but analog input levels need to be adjusted carefully to achieve a good tradeoff between the signal-to-quantization-noise ratio and the risk of clipping. Non-uniform quantization can reduce quantization noise where input values are not uniformly distributed and can approximate human perception limits.

Logarithmic quantization
Rounding the logarithm of the signal amplitude makes the quantization error scale-invariant and is used where the signal level is not very predictable. Two alternative schemes are widely used to make the logarithm function odd and linearize it across zero before quantization:

µ-law:

    y = V · log(1 + µ|x|/V) / log(1 + µ) · sgn(x)        for −V ≤ x ≤ V

A-law:

    y = A|x| / (1 + log A) · sgn(x)                      for 0 ≤ |x| ≤ V/A
    y = V (1 + log(A|x|/V)) / (1 + log A) · sgn(x)       for V/A ≤ |x| ≤ V

European digital telephone networks use A-law quantization (A = 87.6), North American ones use µ-law (µ=255), both with 8-bit resolution and 8 kHz sampling frequency (64 kbit/s). [ITU-T G.711]
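Both the linear quantizer and the µ-law compressor above are one-liners (sketch; V, R and µ are arbitrary example values):

V = 1; R = 0.25; mu = 255;
x  = linspace(-1, 1, 1000);
y  = max(-V, min(V, R*floor(x/R + 0.5)));               % linear quantizer
yu = V*log(1 + mu*abs(x)/V)/log(1 + mu) .* sign(x);     % mu-law compressor
plot(x, y, x, yu); legend('linear quantizer', '\mu-law');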
[Figure: µ-law (US) and A-law (Europe) encoding curves, signal voltage over the byte value −128 . . . 128.]

Joint Photographic Experts Group – JPEG
Working group “ISO/TC97/SC2/WG8 (Coded representation of picture and audio information)” was set up in 1982 by the International Organization for Standardization.
Goals:
→ continuous tone gray-scale and colour images
→ recognizable images at 0.083 bit/pixel
→ useful images at 0.25 bit/pixel
→ excellent image quality at 0.75 bit/pixel
→ indistinguishable images at 2.25 bit/pixel
→ feasibility of 64 kbit/s (ISDN fax) compression with late 1980s hardware (16 MHz Intel 80386).
→ workload equal for compression and decompression
The JPEG standard (ISO 10918) was finally published in 1994.
William B. Pennebaker, Joan L. Mitchell: JPEG still image compression standard. Van Nostrand Reinhold, New York, ISBN 0442012721, 1993.
Gregory K. Wallace: The JPEG Still Picture Compression Standard. Communications of the ACM 34(4)30–44, April 1991, http://doi.acm.org/10.1145/103085.103089

Summary of the baseline JPEG algorithm
The most widely used lossy method from the JPEG standard:
→ Colour component transform: 8-bit RGB → 8-bit YCrCb
→ Reduce resolution of Cr and Cb by a factor 2
→ For the rest of the algorithm, process Y, Cr and Cb components independently (like separate gray-scale images)
  The above steps are obviously skipped where the input is a gray-scale image.
→ Split each image component into 8 × 8 pixel blocks
  Partial blocks at the right/bottom margin may have to be padded by repeating the last column/row until a multiple of eight is reached. The decoder will remove these padding pixels.
→ Apply the 8 × 8 forward DCT on each block
  On unsigned 8-bit input, the resulting DCT coefficients will be signed 11-bit integers.
→ Quantization: divide each DCT coefficient with the corresponding value from an 8×8 table, then round to the nearest integer.
  The two standard quantization-matrix examples for luminance and chrominance are:

    luminance:                              chrominance:
    16  11  10  16  24  40  51  61          17  18  24  47  99  99  99  99
    12  12  14  19  26  58  60  55          18  21  26  66  99  99  99  99
    14  13  16  24  40  57  69  56          24  26  56  99  99  99  99  99
    14  17  22  29  51  87  80  62          47  66  99  99  99  99  99  99
    18  22  37  56  68 109 103  77          99  99  99  99  99  99  99  99
    24  35  55  64  81 104 113  92          99  99  99  99  99  99  99  99
    49  64  78  87 103 121 120 101          99  99  99  99  99  99  99  99
    72  92  95  98 112 100 103  99          99  99  99  99  99  99  99  99

→ apply DPCM coding to quantized DC coefficients from DCT
→ read remaining quantized values from DCT in zigzag pattern
→ locate sequences of zero coefficients (run-length coding)
→ apply Huffman coding on zero run-lengths and magnitude of AC values
→ add standard header with compression parameters
http://www.jpeg.org/
Example implementation: http://www.ijg.org/
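The quantization step can be tried on a single 8×8 block (sketch; uses the standard luminance table above, dct2/idct2 from the Image Processing Toolbox and an arbitrary test image):

Q = [16 11 10 16 24 40 51 61; 12 12 14 19 26 58 60 55; ...
     14 13 16 24 40 57 69 56; 14 17 22 29 51 87 80 62; ...
     18 22 37 56 68 109 103 77; 24 35 55 64 81 104 113 92; ...
     49 64 78 87 103 121 120 101; 72 92 95 98 112 100 103 99];
I   = double(imread('cameraman.tif'));
B   = I(1:8, 1:8) - 128;          % one block, level-shifted as in JPEG
Sq  = round(dct2(B) ./ Q);        % forward DCT and quantization
Rec = idct2(Sq .* Q) + 128;       % dequantized and reconstructed block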
Storing DCT coefficients in zigzag order

After the 8×8 coefficients produced by the discrete cosine transform have been quantized, the values are processed in the following zigzag order (vertical × horizontal frequency) by a run-length encoding step:

     0  1  5  6 14 15 27 28
     2  4  7 13 16 26 29 42
     3  8 12 17 25 30 41 43
     9 11 18 24 31 40 44 53
    10 19 23 32 39 45 52 54
    20 22 33 38 46 51 55 60
    21 34 37 47 50 56 59 61
    35 36 48 49 57 58 62 63

The idea is to group all higher-frequency coefficients together at the end of the sequence. As many image blocks contain little high-frequency information, the bottom-right corner of the quantized DCT matrix is often entirely zero. The zigzag scan helps the run-length coder to make best use of this observation.

Lossless JPEG algorithm

In addition to the DCT-based lossy compression, JPEG also defines a lossless mode. It offers a selection of seven linear prediction mechanisms based on three previously coded neighbour pixels:

    c b
    a ?

1: x = a
2: x = b
3: x = c
4: x = a + b − c
5: x = a + (b − c)/2
6: x = b + (a − c)/2
7: x = (a + b)/2

Predictor 1 is used for the top row, predictor 2 for the left-most row. The predictor used for the rest of the image is chosen in a header. The difference between the predicted and actual value is fed into either a Huffman or arithmetic coder.
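For illustration, here is a small sketch (assuming NumPy; the `(run, value)` pairs are simplified, not the exact JPEG symbols) that derives this zigzag order and run-length codes the AC coefficients of a quantized block:

```python
# Sketch: generate the 8x8 zigzag scan order and run-length code the AC
# coefficients of a quantized block (assumes NumPy). The (run, value) pairs
# are simplified; real JPEG codes (run, magnitude category) Huffman symbols
# plus amplitude bits.
import numpy as np

def zigzag_order(n=8):
    # sort positions by anti-diagonal; the scan direction alternates,
    # so odd diagonals are ordered by row, even diagonals by column
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_ac(qblock):
    pairs, run = [], 0
    for r, c in zigzag_order()[1:]:      # skip the DC coefficient
        v = int(qblock[r, c])
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))       # value preceded by `run` zeros
            run = 0
    if run > 0:
        pairs.append('EOB')              # all remaining coefficients are zero
    return pairs
```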

Huffman coding in JPEG

    s    value range
    0    0
    1    −1, 1
    2    −3, −2, 2, 3
    3    −7 … −4, 4 … 7
    4    −15 … −8, 8 … 15
    5    −31 … −16, 16 … 31
    6    −63 … −32, 32 … 63
    …    …
    i    −(2^i − 1) … −2^(i−1), 2^(i−1) … 2^i − 1

DCT coefficients have 11-bit resolution and would lead to huge Huffman tables (up to 2048 code words). JPEG therefore uses a Huffman table only to encode the magnitude category s = ⌈log2(|v| + 1)⌉ of a DCT value v. A sign bit plus the (s − 1)-bit binary value |v| − 2^(s−1) are appended to each Huffman code word, to distinguish between the 2^s different values within magnitude category s.
When storing DCT coefficients in zigzag order, the symbols in the Huffman tree are actually tuples (r, s), where r is the number of zero coefficients preceding the coded value (run-length).

Advanced JPEG features

Beyond the baseline and lossless modes already discussed, JPEG provides these additional features:

→ 8 or 12 bits per pixel input resolution for DCT modes
→ 2–16 bits per pixel for lossless mode
→ progressive mode permits the transmission of more-significant DCT bits or lower-frequency DCT coefficients first, such that a low-quality version of the image can be displayed early during a transmission
→ the transmission order of colour components, lines, as well as DCT coefficients and their bits can be interleaved in many ways
→ the hierarchical mode first transmits a low-resolution image, followed by a sequence of differential layers that code the difference to the next higher resolution

Not all of these features are widely used today. Several follow-on standards exist: JPEG XR uses a fully invertible DCT-like 4×4 block transform, JPEG 2000 uses a Cohen–Daubechies–Feauveau wavelet transform.
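Returning to the magnitude-category coding above, the split of a DCT value into a Huffman-coded category s and s appended bits can be sketched as follows (this follows the slide's formulation of sign bit plus offset; the exact bit convention in the standard differs in detail):

```python
# Sketch: split a quantized DCT value v into its magnitude category s
# (which is Huffman coded) and the s extra bits appended after the code
# word, following the description above (sign bit + (s-1)-bit offset).
from math import ceil, log2

def magnitude_category(v):
    return 0 if v == 0 else ceil(log2(abs(v) + 1))

def extra_bits(v):
    s = magnitude_category(v)
    if s == 0:
        return ''                        # category 0 (v = 0): no extra bits
    bits = '0' if v > 0 else '1'         # sign bit
    if s > 1:
        offset = abs(v) - 2 ** (s - 1)   # lies in 0 .. 2**(s-1) - 1
        bits += format(offset, '0{}b'.format(s - 1))
    return bits

for v in (0, 1, -3, 7, -10):
    print(v, magnitude_category(v), extra_bits(v))
```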
JPEG examples (baseline DCT)

[figure: example image at compression ratios 1:5 (1.6 bit/pixel) and 1:10 (0.8 bit/pixel)]

Moving Pictures Experts Group – MPEG

→ MPEG-1: Coding of video and audio optimized for 1.5 Mbit/s (1× CD-ROM). ISO 11172 (1993).
→ MPEG-2: Adds support for interlaced video scan, optimized for broadcast TV (2–8 Mbit/s) and HDTV, scalability options. Used by DVD and DVB. ISO 13818 (1995).
→ MPEG-4: Adds advanced video codec (AVC) and advanced audio codec (AAC) for lower bitrate applications. ISO 14496 (2001).
→ System layer multiplexes several audio and video streams, time stamp synchronization, buffer control.
→ Standard defines decoder semantics.
→ Asymmetric workload: Encoder needs significantly more computational power than decoder (for bit-rate adjustment, motion estimation, perceptual modeling, etc.)

http://mpeg.chiariglione.org/

JPEG examples (baseline DCT)

[figure: example image at compression ratios 1:20 (0.4 bit/pixel) and 1:50 (0.16 bit/pixel)]

Better image quality at a compression ratio of 1:50 can be achieved by applying DCT JPEG to a 50% scaled-down version of the image (and then interpolating back to full resolution after decompression), as in the sketch below.

MPEG video coding

→ Uses YCrCb colour transform, 8×8-pixel DCT, quantization, zigzag scan, run-length and Huffman encoding, similar to JPEG
→ the zigzag scan pattern is adapted to handle interlaced fields
→ Huffman coding with fixed code tables defined in the standard
  MPEG has no arithmetic coder option.
→ adaptive quantization
→ SNR and spatially scalable coding (enables separate transmission of a moderate-quality video signal and an enhancement signal to reduce noise or improve resolution)
→ Predictive coding with motion compensation based on 16×16 macro blocks.

J. Mitchell, W. Pennebaker, Ch. Fogg, D. LeGall: MPEG video compression standard. ISBN 0412087715, 1997. (CL library: I.4.20)
B. Haskell et al.: Digital Video: Introduction to MPEG-2. Kluwer Academic, 1997. (CL library: I.4.27)
John Watkinson: The MPEG Handbook. Focal Press, 2001. (CL library: I.4.31)
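The scaling trick mentioned above can be tried with any image library; a rough sketch using Pillow (file names and the quality setting are placeholders):

```python
# Sketch of the trick above, using Pillow: compress a half-resolution
# version with an aggressive quantization setting, then interpolate back
# to full size after decoding. File names/quality are placeholders.
from PIL import Image

img = Image.open('input.png').convert('L')
small = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
small.save('half_res.jpg', quality=20)
restored = Image.open('half_res.jpg').resize(img.size, Image.BICUBIC)
restored.save('restored.png')
```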
MPEG motion compensation

[figure: backward reference picture, current picture, forward reference picture along the time axis]

Each MPEG image is split into 16×16-pixel large macroblocks. The predictor forms a linear combination of the content of one or two other blocks of the same size in a preceding (and following) reference image. The relative positions of these reference blocks are encoded along with the differences.

MPEG audio coding

Three different algorithms are specified, each increasing the processing power required in the decoder.
Supported sampling frequencies: 32, 44.1 or 48 kHz.

Layer I

→ Waveforms are split into segments of 384 samples each (8 ms at 48 kHz).
→ Each segment is passed through an orthogonal filter bank that splits the signal into 32 subbands, each 750 Hz wide (for 48 kHz).
  This approximates the critical bands of human hearing.
→ Each subband is then sampled at 1.5 kHz (for 48 kHz).
  12 samples per window → again 384 samples for all 32 bands
→ This is followed by scaling, bit allocation and uniform quantization.
  Each subband gets a 6-bit scale factor (2 dB resolution, 120 dB range, like floating-point coding). Layer I uses a fixed bitrate without buffering. A bit allocation step uses the psychoacoustic model to distribute all available resolution bits across the 32 bands (0–15 bits for each sample). With a sufficient bit rate, the quantization noise will remain below the sensation limit.
→ Encoded frame contains bit allocation, scale factors and sub-band samples.
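How the encoder finds a good predictor block is not prescribed by the standard; a brute-force full-search sketch (assuming NumPy and the common sum-of-absolute-differences criterion) looks like this:

```python
# Sketch: brute-force full-search block matching for one 16x16 macroblock
# (assumes NumPy). cur/ref are 2D luminance arrays, (by, bx) is the top-left
# corner of the macroblock in the current picture. The sum of absolute
# differences (SAD) is a common, but not mandated, matching criterion.
import numpy as np

def motion_search(cur, ref, by, bx, block=16, search_range=8):
    target = cur[by:by+block, bx:bx+block].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                           # candidate lies outside the picture
            sad = np.abs(target - ref[y:y+block, x:x+block].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad                          # motion vector and its SAD
```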

MPEG reordering of reference images

Display order of frames (time →):

    I B B B P B B B P B B B P

Coding order (time →):

    I P B B B P B B B P B B B

MPEG distinguishes between I-frames that encode an image independent of any others, P-frames that encode differences to a previous P- or I-frame, and B-frames that interpolate between the two neighbouring P- and/or I-frames. A frame has to be transmitted before the first B-frame that makes a forward reference to it. This requires the coding order to differ from the display order.

Layer II

Uses better encoding of scale factors and bit allocation information.
Unless there is significant change, only one out of three scale factors is transmitted. Explicit zero code leads to odd numbers of quantization levels and wastes one codeword. Layer II combines several quantized values into a granule that is encoded via a lookup table (e.g., 3 × 5 levels: 125 values require 7 instead of 9 bits). Layer II is used in Digital Audio Broadcasting (DAB).

Layer III

→ Modified DCT step decomposes subbands further into 18 or 6 frequencies
→ dynamic switching between MDCT with 36 samples (28 ms, 576 freq.) and 12 samples (8 ms, 192 freq.) enables control of pre-echos before sharp percussive sounds (Heisenberg)
→ non-uniform quantization
→ Huffman entropy coding
→ buffer with short-term variable bitrate
→ joint stereo processing

MPEG audio layer III is the widely used “MP3” music compression format.
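The reordering itself is a simple buffering exercise once the frame types are known; a sketch (not a general MPEG multiplexer) for the pattern shown above:

```python
# Sketch: convert display order into coding (transmission) order.
# B-frames are buffered until the later reference frame (I or P) they
# depend on has been emitted.
def coding_order(display):
    out, pending_b = [], []
    for frame in display:                  # frame type: 'I', 'P' or 'B'
        if frame in ('I', 'P'):
            out.append(frame)
            out.extend(pending_b)          # buffered B-frames may now follow
            pending_b = []
        else:
            pending_b.append(frame)
    out.extend(pending_b)                  # left-over B-frames (open-ended sequence)
    return out

print(''.join(coding_order('IBBBPBBBPBBBP')))     # -> IPBBBPBBBPBBBP... i.e. I P B B B P B B B P B B B P
```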
Psychoacoustic models

MPEG audio encoders use a psychoacoustic model to estimate the spectral and temporal masking that the human ear will apply. The subband quantization levels are selected such that the quantization noise remains below the masking threshold in each subband.
The masking model is not standardized and each encoder developer can choose a different one. The steps typically involved are:

→ Fourier transform for spectral analysis
→ Group the resulting frequencies into “critical bands”, within which masking effects will not vary significantly
→ Distinguish tonal and non-tonal (noise-like) components
→ Apply masking function
→ Calculate threshold per subband
→ Calculate signal-to-mask ratio (SMR) for each subband

Masking is not linear and can be estimated accurately only if the actual sound pressure levels reaching the ear are known. Encoder operators usually cannot know the sound pressure level selected by the decoder user. Therefore the model must use worst-case SMRs.

Outlook

Further topics that we have not covered in this brief introductory tour through DSP, but for the understanding of which you should now have a good theoretical foundation:

→ multirate systems
→ effects of rounding errors
→ adaptive filters
→ DSP hardware architectures
→ modulation and symbol detection techniques
→ sound effects

If you find a typo or mistake in these lecture notes, please notify [email protected].
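The SMR values are what drives the bit-allocation step mentioned under Layer I. A toy greedy allocation sketch (assumed simplifications, not from the standard: fixed bit budget, quantization noise drops by roughly 6 dB per extra bit, `smr` comes from some psychoacoustic model):

```python
# Toy sketch: greedy bit allocation across 32 subbands driven by the
# signal-to-mask ratios (SMR). Assumptions (not from the standard): each
# extra bit lowers the quantization noise by about 6 dB, `budget` counts
# sample-bits available in the frame, `smr` comes from some model.
import numpy as np

def allocate_bits(smr, budget, samples_per_band=12, max_bits=15):
    bits = np.zeros(len(smr), dtype=int)
    nmr = np.asarray(smr, dtype=float)         # noise-to-mask ratio at 0 bits
    while budget >= samples_per_band and np.max(nmr) > 0:
        band = int(np.argmax(nmr))             # band whose noise is most audible
        bits[band] += 1
        nmr[band] -= 6.0                       # one more bit: ~6 dB less noise
        budget -= samples_per_band
        if bits[band] >= max_bits:
            nmr[band] = -np.inf                # band exhausted
    return bits
```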

Exercise 21 Compare the quantization techniques used in the digital telephone network and in audio compact disks. Which factors do you think led to the choice of different techniques and parameters here?

Exercise 22 Which steps of the JPEG (DCT baseline) algorithm cause a loss of information? Distinguish between accidental loss due to rounding errors and information that is removed for a purpose.

Exercise 23 How can you rotate by multiples of ±90° or mirror a DCT-JPEG compressed image without losing any further information? Why might the resulting JPEG file not have the exact same file length?

Exercise 24 Decompress this G3-fax encoded line:
1101011011110111101100110100000000000001

Exercise 25 You adjust the volume of your 16-bit linearly quantizing soundcard, such that you can just about hear a 1 kHz sine wave with a peak amplitude of 200. What peak amplitude do you expect a 90 Hz sine wave will need to have, to appear equally loud (assuming ideal headphones)?

Some final thoughts about redundancy . . .

Aoccdrnig to rsceearh at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a total mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

. . . and perception

Count how many Fs there are in this text:

FINISHED FILES ARE THE RE-
SULT OF YEARS OF SCIENTIF-
IC STUDY COMBINED WITH THE
EXPERIENCE OF YEARS
