ECE 6151: Communication Theory
Lecture Notes 5
Shengli Zhou
February 15, 2017
Outline (lecture 5)
1. Optimal receiver design:
MAP is the best receiver in terms of error performance; ML reduces to MD, MC, MF in the AWGN channel
2. Performance analysis for binary signals
3. Union Bound
1 Optimal Receiver Design in AWGN
The communication problem is a detection problem.
Define $M$ hypotheses ($t \in [0, T]$):
$$H_m : \; r(t) = s_m(t) + n(t), \quad m = 1, \ldots, M$$
At the receiver, given $r(t)$, decide which $H_m$ is true.
Objective: minimize the overall error probability.
1.1 MAP receiver
A simple example: $r = s_m + n$, with
$m = 0$: $r \sim N(-1, 1)$
$m = 1$: $r \sim N(1, 1)$
$m = 2$: $r \sim N(4, 1)$
First, find the optimal decision regions with $p_i = 1/3$.
Second, find the optimal decision regions with $p_0 = 1/4$, $p_1 = 1/2$, $p_2 = 1/4$.
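A short Python sketch of the MAP decision for this example (assuming the three means $-1$, $1$, $4$ as reconstructed above and unit-variance noise); the decision boundaries can be read off by scanning a grid of observations:

    import numpy as np
    from scipy.stats import norm

    # Hypothesized means (unit variance) and the two prior assignments from the example
    means = np.array([-1.0, 1.0, 4.0])
    priors_equal = np.array([1/3, 1/3, 1/3])
    priors_unequal = np.array([1/4, 1/2, 1/4])

    def map_decide(r, means, priors, sigma=1.0):
        """Return the index m that maximizes f(r|H_m) P(H_m)."""
        metrics = norm.pdf(r, loc=means, scale=sigma) * priors
        return int(np.argmax(metrics))

    # Scan observations to locate where the MAP decision changes
    grid = np.linspace(-4, 7, 1101)
    for priors in (priors_equal, priors_unequal):
        decisions = np.array([map_decide(r, means, priors) for r in grid])
        boundaries = grid[np.where(np.diff(decisions) != 0)[0]]
        print("priors", priors, "-> boundaries near", np.round(boundaries, 2))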
Suppose we can write down the likelihood function $f(r(t)\mid H_m)$.
Define the decision region:
$$R_m = \{ \text{if } r(t) \text{ belongs to this region, then decide } H_m \}$$
Rigorous proof for the optimal $R_m$:
$$P(e) = 1 - P(\text{correct}) = 1 - \sum_m \Pr\big(r(t) \in R_m \mid H_m\big)\, P(H_m) = 1 - \sum_m \int_{R_m} f\big(r(t)\mid H_m\big)\, P(H_m)\, dr(t)$$
$$= 1 - \int \Big[ \sum_m I_m\big(r(t)\big)\, f\big(r(t)\mid H_m\big)\, P(H_m) \Big]\, dr(t)$$
where
$$I_m\big(r(t)\big) = \begin{cases} 1 & r(t) \in R_m \\ 0 & r(t) \notin R_m \end{cases}$$
Define
$$g\big(r(t)\big) = \sum_m I_m\big(r(t)\big)\, f\big(r(t)\mid H_m\big)\, P(H_m)$$
For each $r(t)$, only one term in the bracket can be non-zero, so $g(r(t))$ is maximized by choosing
$$R_m = \big\{ r(t) : f\big(r(t)\mid H_m\big)\, P(H_m) > f\big(r(t)\mid H_j\big)\, P(H_j), \;\; \forall j \neq m \big\}$$
Then the MAP (Maximum A Posteriori) receiver is
$$\hat{m}_{\rm MAP} = \arg\max_m f\big(r(t)\mid H_m\big)\, P(H_m) = \arg\max_m f\big(H_m \mid r(t)\big)$$
1.2 ML receiver
The Maximum Likelihood (ML) receiver assumes $P(H_m) = 1/M$:
$$\hat{m}_{\rm ML} = \arg\max_m f\big(r(t)\mid H_m\big)$$
Then how do we find the likelihood function?
1.3 Optimal receiver in AWGN
The noise $n(t)$ is AWGN with $R_n(\tau) = \frac{N_0}{2}\,\delta(\tau)$.
First discretize $r(t)$ with a set of basis functions $\phi_1(t), \ldots, \phi_N(t), \phi_{N+1}(t), \ldots$, where $\{\phi_i(t)\}_{i=1}^{N}$ span the signal space.
Then we have
$$H_m : \; r_i = s_{m,i} + n_i, \quad i = 1, 2, \ldots$$
The noise components are independent Gaussian:
$$E[n_i n_j] = \sigma^2 \delta_{ij} = \frac{N_0}{2}\,\delta_{ij}$$
So we have the likelihood function
$$f\big(r(t)\mid H_m\big) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(r_i - s_{m,i})^2}{2\sigma^2}\right) \prod_{j=N+1}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{r_j^2}{2\sigma^2}\right)$$
Sufficient statistics:
$$H_i : \; \mathbf{r} = \mathbf{s}_i + \mathbf{n}$$
where $\mathbf{r} = [r_1, \ldots, r_N]^T$, $\mathbf{s}_i = [s_{i,1}, \ldots, s_{i,N}]^T$, $\mathbf{n} = [n_1, \ldots, n_N]^T$.
$$f\big(r(t)\mid H_m\big) = C\, f(\mathbf{r}\mid H_m) = C \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(r_i - s_{m,i})^2}{2\sigma^2}\right) = C' \exp\left(-\frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{2\sigma^2}\right)$$
$\{\phi_i(t)\}_{i=1}^{N}$ is complete for $\{s_m(t)\}_{m=1}^{M}$, but not for $r(t)$.
So we have the minimum distance (MD) receiver:
$$\hat{m}_{\rm MAP} = \arg\max_m f(\mathbf{r}\mid H_m)\, P(H_m)$$
$$\hat{m}_{\rm ML} = \arg\max_m f(\mathbf{r}\mid H_m)$$
$$\hat{m}_{\rm MD} = \arg\min_m \|\mathbf{r} - \mathbf{s}_m\|^2$$
MAP $\to$ ML (equal priors) $\to$ MD (AWGN).
Decision region:
$$R_m = \big\{ \mathbf{r} : \|\mathbf{r} - \mathbf{s}_m\| < \|\mathbf{r} - \mathbf{s}_j\|, \;\; \forall j \neq m \big\}$$
[Block diagram: a bank of correlators $\int_0^T r(t)\phi_i(t)\,dt$ producing $r_1, \ldots, r_N$, followed by distance computation against each $\mathbf{s}_m$ and choosing the smallest distance.]
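As an illustration (not part of the original notes), a minimal Python sketch of the minimum-distance decision on a 2-D constellation; the points below are placeholders:

    import numpy as np

    def md_receiver(r, constellation):
        """Minimum-distance decision: index of the closest signal vector."""
        d2 = np.sum((constellation - r) ** 2, axis=1)   # squared distances to each s_m
        return int(np.argmin(d2))

    # Example constellation (the unit-energy 4-PSK points of the next example)
    S = np.array([[1.0, 0.0], [0.0, -1.0], [-1.0, 0.0], [0.0, 1.0]])
    r = np.array([0.7, 0.3])          # one noisy projection vector
    print(md_receiver(r, S))          # -> 0, i.e., decide s_1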
Example: 4-PSK, $t \in [0, T]$:
$$s_1(t) = A\cos(2\pi f_c t), \quad s_2(t) = A\cos(2\pi f_c t + \pi/2), \quad s_3(t) = A\cos(2\pi f_c t + \pi), \quad s_4(t) = A\cos(2\pi f_c t + 3\pi/2)$$
Choose two basis functions:
$$\phi_1(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_c t), \qquad \phi_2(t) = \sqrt{\frac{2}{T}}\sin(2\pi f_c t)$$
The signal points, with $E = \frac{A^2 T}{2}$ the symbol energy:
$$\mathbf{s}_1 = \sqrt{E}\begin{bmatrix}1\\0\end{bmatrix}, \quad \mathbf{s}_2 = \sqrt{E}\begin{bmatrix}0\\-1\end{bmatrix}, \quad \mathbf{s}_3 = \sqrt{E}\begin{bmatrix}-1\\0\end{bmatrix}, \quad \mathbf{s}_4 = \sqrt{E}\begin{bmatrix}0\\1\end{bmatrix}$$
[Constellation diagram: the four points $\mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_3, \mathbf{s}_4$ in the $\phi_1$–$\phi_2$ plane.]
Decision rule for $\mathbf{r} = [r_1, r_2]^T$:
$$\hat{m} = \arg\min_m \|\mathbf{r} - \mathbf{s}_m\|^2 = \arg\min_m \left[(r_1 - s_{m1})^2 + (r_2 - s_{m2})^2\right]$$
Now look at various alternative implementations:
$$\|\mathbf{r} - \mathbf{s}_m\|^2 = \mathbf{r}^T\mathbf{r} - 2\,\mathbf{r}^T\mathbf{s}_m + \mathbf{s}_m^T\mathbf{s}_m = \mathbf{r}^T\mathbf{r} - 2\left(\mathbf{r}^T\mathbf{s}_m - \frac{1}{2}E_m\right)$$
where $E_m$ is the energy of the signal:
$$E_m = \int_0^T s_m^2(t)\, dt = \int_0^T s_m(t) \sum_{j=1}^{N} s_{mj}\phi_j(t)\, dt = \sum_{j=1}^{N} s_{mj}^2$$
The maximum correlation (MC) receiver:
$$\hat{m}_{\rm MC} = \arg\max_m \left(\mathbf{r}^T\mathbf{s}_m - \frac{1}{2}E_m\right)$$
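A companion Python sketch (illustrative, using the same placeholder constellation as above) showing that the correlation metric $\mathbf{r}^T\mathbf{s}_m - \frac{1}{2}E_m$ yields the same decision as minimum distance:

    import numpy as np

    def mc_receiver(r, constellation):
        """Maximum-correlation decision: argmax of r^T s_m - E_m / 2."""
        energies = np.sum(constellation ** 2, axis=1)    # E_m for each signal
        metrics = constellation @ r - 0.5 * energies     # correlation metric
        return int(np.argmax(metrics))

    S = np.array([[1.0, 0.0], [0.0, -1.0], [-1.0, 0.0], [0.0, 1.0]])
    r = np.array([0.7, 0.3])
    print(mc_receiver(r, S))   # -> 0, matching the minimum-distance decision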
Distance-preserving property of the complete orthonormal basis: inner products can be computed directly from the waveforms,
$$\int_0^T r(t)\, s_m(t)\, dt = \int_0^T r(t) \sum_{j=1}^{N} s_{mj}\phi_j(t)\, dt = \sum_{j=1}^{N} s_{mj} \int_0^T r(t)\phi_j(t)\, dt = \sum_{j=1}^{N} s_{mj}\, r_j = \mathbf{r}^T\mathbf{s}_m$$
[Block diagram: a bank of correlators with the basis functions $\phi_1, \ldots, \phi_N$ producing $r_1, \ldots, r_N$, followed by inner-product computation $\mathbf{r}^T\mathbf{s}_m$ and choosing the largest.]
[Block diagram: correlate $r(t)$ with each signal $s_1(t), \ldots, s_M(t)$, subtract $\frac{1}{2}E_m$ from each output, and choose the largest metric.]
Alternative implementation: the matched-filter based implementation. Let
$$h_m(t) = \begin{cases} s_m(T - t) & 0 \le t \le T \\ 0 & \text{else} \end{cases}$$
and
$$y(t) = h_m(t) \star r(t) = \int h_m(t - \tau)\, r(\tau)\, d\tau$$
then
$$y(T) = \int_0^T s_m(t)\, r(t)\, dt$$
and we have the matched filter implementation:
[Block diagram: a bank of matched filters $h_1, \ldots, h_M$ sampled at $t = T$, subtract $\frac{1}{2}E_m$ from each output, and choose the largest.]
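A discrete-time Python sketch (with made-up waveform parameters) verifying that sampling the matched-filter output at $t = T$ equals the correlation $\int_0^T s_m(t)\, r(t)\, dt$:

    import numpy as np

    # Illustrative waveform parameters (assumed, not from the notes)
    fs, T, fc, A = 100_000, 1e-3, 10_000, 1.0
    t = np.arange(0, T, 1 / fs)
    s = A * np.cos(2 * np.pi * fc * t)           # signal to match
    r = s + 0.5 * np.random.randn(t.size)        # noisy received waveform

    h = s[::-1]                                  # h(t) = s(T - t)
    y = np.convolve(r, h) / fs                   # filter output y(t)
    y_T = y[t.size - 1]                          # sample at t = T
    corr = np.sum(r * s) / fs                    # direct correlation integral
    print(np.isclose(y_T, corr))                 # -> True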
The optimal receivers: MAP $\to$ ML (equal priors) $\to$ MD, MC, MF (equivalent in AWGN).
2 Performance Analysis
2.1 Gaussian Q function
$$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt$$
$Q(x)$ is convenient to use; graphically it is the tail area of the $N(0,1)$ density to the right of $x$.
Other properties of $Q(x)$:
$$Q(-x) = 1 - Q(x)$$
$$\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt = \int_{-x}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt$$
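In numerical work the $Q$ function is usually evaluated through the complementary error function; a small Python sketch (illustrative, not part of the notes):

    import numpy as np
    from scipy.special import erfc

    def qfunc(x):
        """Gaussian Q function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * erfc(x / np.sqrt(2))

    print(qfunc(np.sqrt(10)))            # ~ 7.8e-4, used in the example below
    print(qfunc(-1.0), 1 - qfunc(1.0))   # checks Q(-x) = 1 - Q(x)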
2.2 Binary baseband
$$s_1(t) = A, \quad s_2(t) = -A, \quad 0 \le t \le T.$$
One basis function $\phi(t) = \frac{1}{\sqrt{T}}$, $0 \le t \le T$.
Revisit the receiver (integrate and dump).
Two points on the constellation, $(d, 0)$ and $(-d, 0)$, with $d = \sqrt{E_s} = \sqrt{A^2 T}$.
Bit error rate:
$$P\big(e \mid s_2(t)\big) = P\big(r > 0 \mid r \sim N(-\sqrt{E_s},\, N_0/2)\big) = P\big(n > \sqrt{E_s} \mid n \sim N(0,\, N_0/2)\big) = Q\left(\frac{\sqrt{E_s}}{\sqrt{N_0/2}}\right) = Q\left(\sqrt{\frac{2E_s}{N_0}}\right).$$
By symmetry
$$P(e) = \frac{1}{2}P(e\mid 0) + \frac{1}{2}P(e\mid 1) = Q\left(\sqrt{\frac{2E_s}{N_0}}\right)$$
E.g., $P_s = 1\,\mu\mathrm{W}$, $N_0 = 2 \times 10^{-13}$ W/Hz, $T = 1\,\mu\mathrm{s}$:
$$P(e) = Q\left(\sqrt{\frac{2 \times 1 \times 10^{-6} \times 1 \times 10^{-6}}{2 \times 10^{-13}}}\right) = Q(\sqrt{10}) = 7.8 \times 10^{-4}$$
How do we verify the result by simulations? How do we demonstrate the results?
BER versus SNR in a log-log plot ($E_s/N_0$ in dB).
Demonstrate the program first (step-by-step illustration).
Familiarize yourself with Matlab, and modify this program for other constellations; a Python sketch of a similar simulation follows.
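The course program is in Matlab; as an illustrative stand-in (not the course code), a Monte Carlo BER simulation for binary antipodal signaling, compared with the theoretical $Q(\sqrt{2E_s/N_0})$:

    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(0)
    n_bits = 1_000_000

    for esn0_db in range(0, 11, 2):
        esn0 = 10 ** (esn0_db / 10)                  # Es/N0 (linear), with Es = 1
        bits = rng.integers(0, 2, n_bits)
        symbols = 2 * bits - 1                       # map {0,1} -> {-1,+1}
        noise = rng.normal(0.0, np.sqrt(1 / (2 * esn0)), n_bits)  # variance N0/2
        decisions = (symbols + noise) > 0            # minimum-distance decision
        ber = np.mean(decisions != bits)
        theory = 0.5 * erfc(np.sqrt(esn0))           # Q(sqrt(2 Es/N0))
        print(f"{esn0_db:2d} dB  simulated {ber:.2e}  theory {theory:.2e}")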
3 General Binary Systems
We can represent any binary ($M = 2$) system with no more than $N = 2$ basis functions $\phi_i(t)$.
[Signal-space figure with axes $\phi_1(t)$ and $\phi_2(t)$.]
Performance analysis based on rotating the basis:
$$P(e) = Q\left(\frac{d_{12}}{\sqrt{2N_0}}\right)$$
Evaluate the distance:
$$d_{12}^2 = \sum_{i=1}^{N} (s_{1i} - s_{2i})^2 = \int_0^T \big(s_1(t) - s_2(t)\big)^2\, dt = E_1 + E_2 - 2\rho\sqrt{E_1 E_2}$$
where the correlation coefficient is
$$\rho = \frac{1}{\sqrt{E_1 E_2}} \int_0^T s_1(t)\, s_2(t)\, dt = \frac{1}{\sqrt{E_1 E_2}}\, \mathbf{s}_1^T \mathbf{s}_2 = \cos(\theta)$$
The performance:
$$P(e) = Q\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right) = Q\left(\sqrt{\frac{E_s}{N_0}\left(1 - \rho\,\frac{\sqrt{E_1 E_2}}{E_s}\right)}\right)$$
where $E_s = (E_1 + E_2)/2$ is the average energy per symbol.
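A small Python sketch evaluating this formula from sampled waveforms; the antipodal pair and the numbers below are chosen to reproduce the earlier $Q(\sqrt{10})$ example and are otherwise illustrative:

    import numpy as np
    from scipy.special import erfc

    def qfunc(x):
        return 0.5 * erfc(x / np.sqrt(2))

    def binary_error_prob(s1, s2, N0, dt):
        """P(e) = Q( sqrt( d12^2 / (2 N0) ) ) from two sampled waveforms."""
        d12_sq = np.sum((s1 - s2) ** 2) * dt          # integral of (s1 - s2)^2
        return qfunc(np.sqrt(d12_sq / (2 * N0)))

    # Antipodal pair with Es = A^2 T = 1e-12 and N0 = 2e-13 (as in the example)
    fs, T, A, N0 = 1e8, 1e-6, 1e-3, 2e-13
    t = np.arange(0, T, 1 / fs)
    s1, s2 = A * np.ones_like(t), -A * np.ones_like(t)
    print(binary_error_prob(s1, s2, N0, 1 / fs))      # ~ 7.8e-4 = Q(sqrt(10))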
Figure 7.13 in Ziemer's book shows the waveform plots.
Binary ASK (on-off signaling):
$$s_1(t) = 0, \quad s_2(t) = A\cos(2\pi f_c t), \qquad P(e) = Q\left(\sqrt{E_s/N_0}\right)$$
BPSK:
$$s_1(t) = A\cos(2\pi f_c t), \quad s_2(t) = -A\cos(2\pi f_c t), \qquad P(e) = Q\left(\sqrt{2E_s/N_0}\right)$$
Binary FSK:
$$s_1(t) = A\cos(2\pi f_c t), \quad s_2(t) = A\cos\big(2\pi (f_c + \Delta f) t\big), \qquad E_1 = E_2 = A^2 T/2$$
$$\rho = \frac{2}{A^2 T} \int_0^T A^2 \cos(2\pi f_c t)\cos\big(2\pi (f_c + \Delta f) t\big)\, dt = \frac{\sin(2\pi \Delta f\, T)}{2\pi \Delta f\, T} = \mathrm{sinc}(2\Delta f\, T)$$
$$P(e) = Q\left(\sqrt{\frac{E_b}{N_0}(1 - \rho)}\right)$$
[Plot of $\rho = \mathrm{sinc}(2\Delta f\, T)$ versus $\Delta f\, T$.]
Minimum $\rho \approx -0.22$ at $\Delta f\, T \approx 0.7$.
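This minimum is easy to verify numerically; an illustrative Python check (np.sinc is the normalized sinc):

    import numpy as np

    dfT = np.linspace(0.01, 4, 4000)
    rho = np.sinc(2 * dfT)          # sin(2*pi*df*T) / (2*pi*df*T)
    i = np.argmin(rho)
    print(dfT[i], rho[i])           # ~ 0.715, ~ -0.217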
Orthogonal FSK: $\Delta f\, T = 0.5\, n$ for integer $n$ gives $\rho = 0$.
Minimum Shift Keying (MSK), used in GSM, is different from the brute-force implementation of BFSK, which does not enforce continuous phase.
There is a 3 dB power difference between BPSK and BASK/BFSK (see the numerical check below).
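An illustrative Python check of the 3 dB gap between BPSK and binary ASK / orthogonal FSK (the 10 dB value is an arbitrary operating point):

    import numpy as np
    from scipy.special import erfc

    def qfunc(x):
        return 0.5 * erfc(x / np.sqrt(2))

    EsN0_dB = 10.0
    esn0 = 10 ** (EsN0_dB / 10)
    print("BPSK            :", qfunc(np.sqrt(2 * esn0)))    # Q(sqrt(2 Es/N0))
    print("ASK / orth. FSK :", qfunc(np.sqrt(esn0)))         # Q(sqrt(Es/N0))
    # ASK/FSK needs Es/N0 doubled (+3 dB) to match the BPSK argument
    print(np.isclose(qfunc(np.sqrt(10 ** ((EsN0_dB + 10 * np.log10(2)) / 10))),
                     qfunc(np.sqrt(2 * esn0))))              # -> True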
Fun Problem: Prove that no binary transmission scheme is more energy efficient than BPSK. That is, for the same (average) energy per symbol $E_s$, the Euclidean distance $d$ between the two signal waveforms is at most $2\sqrt{E_s}$:
$$d^2 \le \left(\sqrt{E_1} + \sqrt{E_2}\right)^2 \qquad \text{(triangle inequality)}$$
$$= E_1 + E_2 + 2\sqrt{E_1 E_2} \le E_1 + E_2 + E_1 + E_2 \quad \text{(since } 2\sqrt{E_1 E_2} \le E_1 + E_2\text{)} \quad = 2(E_1 + E_2) = 4E_s \qquad \text{(definition of } E_s\text{)}$$
Give an implementation with one filter matched to $s_2(t) - s_1(t)$, followed by threshold detection.
What is the optimum threshold? With $r_m = \int_0^T r(t)\, s_m(t)\, dt$, decide $H_1$ if
$$r_1 - \frac{1}{2}E_1 > r_2 - \frac{1}{2}E_2 \quad\Longleftrightarrow\quad \underbrace{r_2 - r_1}_{:=\, r} < \underbrace{\frac{1}{2}(E_2 - E_1)}_{:=\, \gamma_{\rm opt}}$$
and decide $H_2$ otherwise; $r = r_2 - r_1$ is the output of the single matched filter sampled at $t = T$.
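A minimal Python sketch of this single-matched-filter detector (the tone waveforms and noise level are made up for illustration):

    import numpy as np

    def one_filter_detector(r, s1, s2, dt):
        """Binary detection with a single filter matched to s2(t) - s1(t)."""
        E1 = np.sum(s1 ** 2) * dt
        E2 = np.sum(s2 ** 2) * dt
        gamma_opt = 0.5 * (E2 - E1)                  # optimum threshold
        metric = np.sum(r * (s2 - s1)) * dt          # matched-filter output at t = T
        return 2 if metric > gamma_opt else 1

    # Illustrative orthogonal tones; transmit s2 plus noise and detect
    fs, T, fc, A = 1e5, 1e-3, 1e4, 1.0
    t = np.arange(0, T, 1 / fs)
    s1 = A * np.cos(2 * np.pi * fc * t)
    s2 = A * np.cos(2 * np.pi * fc * t + np.pi / 2)
    r = s2 + 0.1 * np.random.randn(t.size)
    print(one_filter_detector(r, s1, s2, 1 / fs))    # -> 2 (with high probability)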
Whiteboard pictures: [missing the photo of the ML-to-MD derivation]