Channel coding (1)
Prof H Xu
March 27, 2023
Introduction
Modulator/Transmitter
• Matches the message to the channel
• Modulation encodes the message into the amplitude, phase or frequency of the carrier signal (PSK, FSK, QAM, OFDM, PAM, PPM)
Receiver/Demodulator
• The receiver amplifies and filters the signal
• The ideal receiver output is a scaled, delayed version of the message signal
• The demodulator extracts the message from the receiver output
  • Converts the receiver output to bits or symbols
Introduction
Channel
• Physical medium that the signal is transmitted through (or stored on)
• Examples: air, wires, coaxial cables, fiber optic cables, space; storage media (CD, DVD)
• Every channel introduces some amount of distortion, noise and interference
• The channel properties determine:
  • Data throughput of the system
  • System bit error rate (BER)
  • Quality of service (QoS) offered by the system
Introduction
Channel
[Diagram: Source → Wireless channel → Destination]
Without fading: Gaussian channel
With fading: Rayleigh fading, Rician fading, Lognormal-Rician fading, Nakagami-m fading, Weibull fading
Further classifications of the wireless channel:
• Frequency-selective fading vs. frequency-nonselective fading
• Quasi-static fading, slow fading, fast fading
• Correlated fading vs. uncorrelated fading
• Time-selective fading vs. time-nonselective fading
Introduction
Ideal wireless channel: AWGN
[Diagram: Source → x → (+ n, wireless channel) → y → Destination]
$$y = x + n$$
where $n$ is AWGN with $n \sim \mathcal{N}(0, \sigma^2)$
$x$: antipodal signal, MPSK, MQAM
Introduction
Ideal wireless channel: AWGN
AWGN simulation in MATLAB
pdf of AWGN: $f(n) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\dfrac{n^2}{2\sigma^2}\right)$
Implementation in MATLAB: sigma*randn
How to calculate sigma given an SNR?
sigma depends on the modulated signal.
Definition of SNR: the signal-to-noise ratio is the ratio of the average transmitted signal power to the variance of the channel noise.
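As a quick sanity check of the sigma*randn implementation, a minimal sketch (the value of sigma below is chosen arbitrarily for illustration):

```matlab
% Sanity check: sigma*randn yields zero-mean Gaussian noise of variance sigma^2.
sigma = 0.5;                 % illustrative value, not from the notes
N = 1e6;                     % number of noise samples
n = sigma * randn(1, N);     % AWGN samples
fprintf('target variance %.4f, measured %.4f\n', sigma^2, var(n));
```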
Introduction
Ideal wireless channel: AWGN
In digital communications, the usual figure of merit is the bit energy ($E_b$) per noise power spectral density ($N_0$), a normalized version of SNR:
$E_b$ = bit energy = signal power $S$ times the bit time $T_b = 1/R_b$
$N_0$ = noise power spectral density = noise power $N$ divided by bandwidth $W$
$$\frac{E_b}{N_0} = \frac{S T_b}{N/W} = \frac{S/R_b}{N/W} = \frac{S}{N} \cdot \frac{W}{R_b}$$
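A rough sketch of this relation in MATLAB (the Eb/N0, rate and bandwidth values are assumptions for illustration):

```matlab
% Convert a given Eb/N0 (dB) into the corresponding S/N via Eb/N0 = (S/N)(W/Rb).
EbN0_dB = 6;                   % given Eb/N0 in dB (assumed)
Rb = 1e6;                      % bit rate [bits/s] (assumed)
W  = 2e6;                      % bandwidth [Hz] (assumed)
EbN0 = 10^(EbN0_dB/10);        % linear Eb/N0
SNR  = EbN0 * Rb / W;          % S/N = (Eb/N0)*(Rb/W)
fprintf('S/N = %.3f (= %.2f dB)\n', SNR, 10*log10(SNR));
```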
Introduction
Ideal wireless channel: AWGN
• AWGN simulation in MATLAB
(1) Antipodal signal: $x \in \{-1, +1\}$
$$y = x + n, \qquad \text{SNR} = \frac{E[x^2]}{E[n^2]} = \frac{1}{\sigma^2} \;\Rightarrow\; \sigma = \text{SNR}^{-1/2}$$
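A minimal simulation sketch for this case (the SNR operating point and frame length are assumptions; the reference BER $Q(\sqrt{\text{SNR}})$ is written via erfc to avoid toolbox dependencies):

```matlab
% Antipodal signalling over AWGN with sigma = SNR^(-1/2).
SNR_dB = 6;                           % assumed operating point
SNR    = 10^(SNR_dB/10);
N      = 1e6;                         % number of bits (assumed)
bits   = randi([0 1], 1, N);
x      = 2*bits - 1;                  % map 0 -> -1, 1 -> +1 (E[x^2] = 1)
sigma  = SNR^(-1/2);                  % noise std dev from the derivation above
y      = x + sigma*randn(1, N);       % AWGN channel
bhat   = y > 0;                       % threshold detection
ber    = mean(bhat ~= bits);
ber_th = 0.5*erfc(sqrt(SNR/2));       % Q(sqrt(SNR)) = 0.5*erfc(sqrt(SNR/2))
fprintf('simulated BER %.2e, theory %.2e\n', ber, ber_th);
```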
(2) BPSK: $x \in \{-\cos 2\pi f_c t, +\cos 2\pi f_c t\}$
$$x_0(t) = \cos 2\pi f_0 t, \quad x_1(t) = -\cos 2\pi f_0 t, \qquad 0 \le t \le T$$
Synchronized and sampled at the peak: $x \in \{-1, +1\}$
$$y = x + n, \qquad \text{SNR} = \frac{E[x^2(t)]}{E[n^2]} = \frac{1}{2\sigma^2} \;\Rightarrow\; \sigma = (2\,\text{SNR})^{-1/2}$$
Introduction
Ideal wireless channel: AWGN
• AWGN simulation in MATLAB
(3) MPSK ($M > 2$): $\sigma = (2\,\text{SNR})^{-1/2}$
where SNR is the signal-to-noise ratio per symbol.
Implementation in MATLAB: sigma*(randn+j*randn)/sqrt(2)
(4) MQAM
Let $M = 16$: $x \in \{\pm 1 \pm j,\ \pm 1 \pm 3j,\ \pm 3 \pm j,\ \pm 3 \pm 3j\}$
$$y = x + n, \qquad \text{SNR} = \frac{E[|x|^2]}{E[|n|^2]} = \frac{10}{\sigma^2} \;\Rightarrow\; \sigma = (10/\text{SNR})^{1/2}$$
Implementation in MATLAB: sigma*(randn+j*randn)/sqrt(2)
Introduction
Ideal wireless channel: AWGN
(4) MQAM (continued)
$$E[|x|^2] = \frac{2}{3}(M-1) = \begin{cases} 2 & \text{4QAM} \\ 10 & \text{16QAM} \\ 42 & \text{64QAM} \\ 170 & \text{256QAM} \end{cases}$$
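A sketch tying the energy formula to the noise scaling, using the complex-noise form given above ($M$ and the SNR per symbol are assumed values):

```matlab
% Noise scaling for square M-QAM: sigma = sqrt(Es/SNR) with Es = 2(M-1)/3.
M      = 16;                                % assumed constellation size
SNR_dB = 15;                                % assumed SNR per symbol [dB]
SNR    = 10^(SNR_dB/10);
Es     = 2*(M-1)/3;                         % E[|x|^2] (10 for 16-QAM)
sigma  = sqrt(Es/SNR);                      % so E[|x|^2]/E[|n|^2] = SNR
Ns     = 1e5;                               % number of symbols
L      = sqrt(M);                           % amplitude levels per dimension
re     = 2*randi([0 L-1], 1, Ns) - L + 1;   % I amplitudes: -3, -1, +1, +3
im     = 2*randi([0 L-1], 1, Ns) - L + 1;   % Q amplitudes
x      = re + 1j*im;                        % 16-QAM symbols
n      = sigma*(randn(1,Ns) + 1j*randn(1,Ns))/sqrt(2);  % E[|n|^2] = sigma^2
y      = x + n;                             % received symbols
fprintf('E[|x|^2] = %.2f (target %.1f), measured SNR = %.2f dB\n', ...
        mean(abs(x).^2), Es, 10*log10(mean(abs(x).^2)/mean(abs(n).^2)));
```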
Introduction
A Mathematical Theory of Communication
``The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. …
If the channel is noisy it is not in general possible to reconstruct the original message or the transmitted signal with certainty by any operation on the received signal.''
Introduction
Channel Capacity
• An important question for a communication channel is the maximum rate at which it can transfer information.
• The channel capacity C is a theoretical maximum rate below which information can be transmitted over the channel with an arbitrarily low probability of error.
Introduction
Shannon’s Noisy Channel Coding Theorem
• How to achieve capacity?
• With every channel we can associate a ``channel capacity'' C. There exist error control codes such that information can be transmitted across the channel at rates less than C with arbitrarily low bit error rate.
• There are only two factors that determine the capacity of a channel:
  – Bandwidth (W)
  – Signal-to-Noise Ratio (SNR), Eb/N0
Introduction
Channel Capacity on AWGN channel (Shannon-Hartley
capacity theorem):
$$C = W \log_2\left(1 + \frac{P}{N}\right) \quad \text{[bits/s]}$$
$W$ [Hz]: bandwidth
$P = E_b R$ [Watt]: average received signal power
$R$: transmission rate [bits/s]
$N = N_0 W$ [Watt]: average noise power
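A small numerical sketch of the theorem (all parameter values are assumptions chosen for illustration):

```matlab
% Evaluate the Shannon-Hartley capacity for example parameters.
W  = 1e6;                    % bandwidth [Hz] (assumed)
P  = 1e-6;                   % average received signal power [W] (assumed)
N0 = 1e-12;                  % noise power spectral density [W/Hz] (assumed)
N  = N0 * W;                 % average noise power [W]
C  = W * log2(1 + P/N);      % channel capacity [bits/s]
fprintf('P/N = %.1f dB, C = %.3f Mbit/s\n', 10*log10(P/N), C/1e6);
```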
Introduction
Channel Capacity on AWGN channel (Shannon-Hartley
capacity theorem):
$$C = W \log_2\left(1 + \frac{P}{N}\right) \quad \text{[bits/s]}$$
Let $R = C$, with $P = E_b C$ and $N = N_0 W$:
$$\frac{C}{W} = \log_2\left(1 + \frac{E_b}{N_0} \frac{C}{W}\right)$$
$$\frac{E_b}{N_0} = \frac{2^{C/W} - 1}{C/W}$$
Comment 1: There exists a limiting value of $E_b/N_0$ below which there can be no error-free communication at any information rate, for a given bandwidth efficiency.
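The limiting value in Comment 1 is easy to check numerically (the list of bandwidth efficiencies below is illustrative):

```matlab
% Minimum Eb/N0 for error-free communication at bandwidth efficiency C/W.
eta  = [4 2 1 0.5 0.1 0.01];          % C/W values [bits/s/Hz] (illustrative)
EbN0 = (2.^eta - 1) ./ eta;           % required Eb/N0 (linear)
disp([eta; 10*log10(EbN0)]);          % row 1: C/W, row 2: Eb/N0 [dB]
fprintf('limit as C/W -> 0: 10*log10(ln 2) = %.2f dB\n', 10*log10(log(2)));
```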
Introduction
[Figure: bandwidth efficiency $W/C$ [Hz/(bit/s)] versus $E_b/N_0$ [dB], separating the practical region from the unattainable region; the capacity boundary approaches the limit of $-1.6$ dB.]
Introduction
What will we expect if we increase bandwidth?
$$\frac{C}{W} = \log_2\left(1 + \frac{E_b}{N_0} \frac{C}{W}\right)$$
As $W \to \infty$, i.e. $C/W \to 0$, we get:
$$\frac{E_b}{N_0} \to \frac{1}{\log_2 e} = 0.693 \approx -1.6 \ \text{[dB]}$$
$$C = W \log_2\left(1 + \frac{P}{N}\right) \quad \text{[bits/s]}$$
Comment 2: By increasing the bandwidth alone, the capacity cannot be increased to any desired value. Why?
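A numerical sketch of Comment 2 (the value of P/N0 is assumed): with P and N0 held fixed, letting W grow shows C saturating at (P/N0) log2 e rather than growing without bound.

```matlab
% Capacity vs bandwidth for fixed P/N0: C -> (P/N0)*log2(e) as W -> inf.
PN0 = 1e6;                         % P/N0 [Hz] (assumed)
W   = logspace(4, 9, 6);           % bandwidths from 10 kHz to 1 GHz
C   = W .* log2(1 + PN0 ./ W);     % capacity [bits/s] at each bandwidth
disp([W; C]);                      % row 1: W, row 2: C
fprintf('asymptote (P/N0)*log2(e) = %.3e bits/s\n', PN0*log2(exp(1)));
```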
Introduction
Digital Communication Model with Coding
With every channel we can associate a ``channel capacity'' C. There exist error control codes such that information can be transmitted across the channel at rates less than C with arbitrarily low bit error rate.
Introduction
Channel coding is also called error control coding or error-correcting coding.
Purpose of Error Control Coding
(1) In data communications, coding is used for controlling transmission errors induced by channel noise or other impairments, such as fading and interference, so that error-free communication can be achieved.
(2) In data storage systems, coding is used for controlling storage errors (during retrieval) caused by storage medium defects, dust particles and radiation, so that error-free storage can be achieved.
Introduction
Coding Principle
Coding is achieved by adding properly designed redundant
digits (bits) to each message. These redundant digits (bits)
are used for detecting and/or correcting transmission (or
storage) errors.
Type of Coding
Block Coding: Block codes process the information on a block-by-block basis, treating each block of information bits independently of the others.
A message of k digits is mapped into a structured sequence of n digits, called a codeword.
Introduction
Type of Coding
Convolutional Coding:
→ An information sequence is divided into (short) blocks of k digits each. Each k-digit message block is encoded into an n-digit coded block. The n-digit coded block depends not only on the corresponding k-digit message block but also on the $m$ ($m \ge 1$) previous message blocks. That is, the encoder has memory of order $m$.
→ The encoder has k inputs and n outputs.
→ An information sequence is encoded into a code sequence. The collection of all possible code sequences is called an $(n, k, m)$ convolutional code.
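As a minimal sketch of such an encoder (the rate-1/2, memory-2 generators below are a common textbook example, assumed here for illustration, not a code specified in these notes):

```matlab
% A (2,1,2) convolutional encoder realised with mod-2 convolution.
msg = [1 0 1 1 0];                 % information sequence (assumed example)
g1  = [1 1 1];                     % generator 1: 1 + D + D^2
g2  = [1 0 1];                     % generator 2: 1 + D^2
v1  = mod(conv(msg, g1), 2);       % first encoded stream
v2  = mod(conv(msg, g2), 2);       % second encoded stream
code = reshape([v1; v2], 1, []);   % interleave: v1(1) v2(1) v1(2) v2(2) ...
disp(code);                        % n = 2 output digits per input digit
```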
Introduction
Type of Errors
→ Random errors, caused by:
   Thermal and shot noise in the transmitter and receiver
   Thermal noise in the channel
   Radiation picked up by the antenna
→ Burst errors (more than one symbol or bit is affected), caused by:
   Lightning and switching transients
   Fast fades
Introduction
Type of Channels
→ Random error channels:
   Deep space channels, satellite channels, line-of-sight transmission channels, etc.
→ Burst error channels:
   Radio links, terrestrial microwave links, wire and cable transmission channels, etc.
Introduction
Decoding
Two types of decoding: hard-decision decoding and soft-decision decoding.
Hard-decision decoding: When binary coding is used, the modulator has only binary inputs. If binary demodulator output quantization is used, the decoder has only binary inputs. In this case, the demodulator is said to make hard decisions. Decoding based on the hard decisions made by the demodulator is called hard-decision decoding.
Soft-decision decoding: If the output of the demodulator consists of more than two quantization levels or is left unquantized, the demodulator is said to make soft decisions. Decoding based on the soft decisions made by the demodulator is called soft-decision decoding.
Introduction
Optimum Decoding
→ Suppose the codeword c corresponding to a certain message m is transmitted. Let r be the corresponding output of the demodulator.
→ An optimum decoding rule is one that minimizes the probability of decoding error; that is, $P(\hat{c} \ne c \mid r)$ is minimized, or equivalently, $P(\hat{c} = c \mid r)$ is maximized.
Introduction
Maximum Likelihood Decoding
Suppose all the messages are equally likely. An optimum decoding rule can then be stated as follows:
For every codeword $c_j$, compute the conditional probability $P(r \mid c_j)$. The codeword $c_j$ with the largest conditional probability is chosen as the estimate $\hat{c}$ for the transmitted codeword $c$. This decoding rule is called Maximum Likelihood Decoding (MLD).
Question: since we actually receive $r$, we should calculate $P(c_j \mid r)$, not $P(r \mid c_j)$. Why is $P(c_j \mid r) \propto P(r \mid c_j)$?
Introduction
How to calculate $P(r \mid c_j)$?
[Diagram: Source → $c_j$ → (+ n, wireless channel) → $r$ → Destination]
$c_j \in \{-1, +1\}$; pdf of AWGN: $f(n) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\dfrac{n^2}{2\sigma^2}\right)$
$$r = c_j + n$$
$$P(r \mid c_j) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(r - c_j)^2}{2\sigma^2}\right)$$
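Since $P(r \mid c_j)$ decreases with $(r - c_j)^2$, maximizing the likelihood over AWGN amounts to choosing the codeword closest to $r$. A minimal sketch with the 3-bit repetition code used in the next section (the codebook mapping and noise level are assumptions for illustration):

```matlab
% Soft-decision MLD over AWGN = minimum Euclidean distance decoding.
C     = [-1 -1 -1; +1 +1 +1];        % repetition-code codewords (0 -> -1, 1 -> +1)
sigma = 0.8;                         % assumed noise level
c     = C(2, :);                     % transmit the codeword for message bit 1
r     = c + sigma*randn(1, 3);       % soft demodulator outputs
d2    = sum((r - C).^2, 2);          % squared distance to each codeword
[~, jhat] = min(d2);                 % maximizing P(r|c_j) = minimizing d2
fprintf('r = [%5.2f %5.2f %5.2f], decoded bit: %d\n', r, jhat - 1);
```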
First look at Channel coding
Consider the binary symmetric channel:
[Diagram: BSC, in which Bit 0 and Bit 1 pass through correctly with probability $1 - q$ and cross over with probability $q$]
$$p(0 \mid 0) = p(1 \mid 1) = 1 - q$$
$$p(1 \mid 0) = p(0 \mid 1) = q$$
The channel encoder repeats the data bit three times:
Data bit Channel symbol
0 000
1 111
The channel decoding problem can be characterized as: given
a received sequence, which transmitted sequence was sent
and what is the error probability?
First look at Channel coding
Let us look at the transmitted sequence 000.
There are 8 possible received sequences for the transmitted sequence 000 at the receiver.

Received seq.   No. of errors   Prob. of occurrence
000             0               $(1-q)^3$
001             1               $q(1-q)^2$
010             1               $q(1-q)^2$
100             1               $q(1-q)^2$
011             2               $q^2(1-q)$
101             2               $q^2(1-q)$
110             2               $q^2(1-q)$
111             3               $q^3$
First look at Channel coding
Decoding rule:
(1) If the received sequences are 000, 001, 010, 100 then 000
is the transmitted sequence;
(2) If the received sequences are 111, 101, 110, 011 then 111
is the transmitted sequence;
The error probability at the decoder output is the probability of two or three errors:
$$p_e = 3q^2(1-q) + q^3$$
Uncoded system: $p_e = q$; for $q = 10^{-2}$, $p_e = 10^{-2}$.
Coded system: $p_e = 3q^2(1-q) + q^3 \approx 3 \times 10^{-4}$.
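A quick simulation sketch to confirm these numbers (the frame length is an assumption):

```matlab
% Rate-1/3 repetition code over a BSC with crossover probability q.
q    = 1e-2;
N    = 1e6;                               % number of data bits (assumed)
data = randi([0 1], 1, N);
code = repmat(data, 3, 1);                % repeat each data bit three times
rx   = xor(code, rand(3, N) < q);         % each symbol flipped w.p. q
dec  = sum(rx, 1) >= 2;                   % majority-vote decoding
fprintf('coded pe: simulated %.2e, theory %.2e (uncoded %.0e)\n', ...
        mean(dec ~= data), 3*q^2*(1-q) + q^3, q);
```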
First look at Channel coding
In general
If $n$ symbols are transmitted, the probability of a particular $m$-error pattern is $p^m (1-p)^{n-m}$.
The probability of exactly $m$ errors is $\binom{n}{m} p^m (1-p)^{n-m}$.
The probability of $m$ or more errors is $\sum_{j=m}^{n} \binom{n}{j} p^j (1-p)^{n-j}$.
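These expressions are straightforward to evaluate (the values of n, m and p below are assumed for illustration):

```matlab
% Binomial error probabilities for n transmitted symbols.
n = 7; m = 2; p = 1e-2;                               % assumed values
P_exact = nchoosek(n, m) * p^m * (1-p)^(n-m);         % exactly m errors
P_atleast = 0;
for j = m : n                                         % m or more errors
    P_atleast = P_atleast + nchoosek(n, j) * p^j * (1-p)^(n-j);
end
fprintf('P(exactly %d) = %.3e, P(>= %d) = %.3e\n', m, P_exact, m, P_atleast);
```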
Now, how do we find $p$?