Lecture 10 Wireless Communication
Monsoon semester 2023-24
Topics: Digital communication with Gaussian noise
Vinay Joseph
NIT Calicut
October 18, 2023
Vinay Joseph (NIT Calicut) Lecture 10 Wireless Communication
Course outline: Module 2
Module 1
▶ High-level analyses of wireless communication networks using a
simplified model of the wireless channel
▶ Modeling a wireless channel
Module 2
▶ Point-to-point wireless communication
⋆ Main reference [1]: Chapter 3 and Appendix A of Fundamentals of
Wireless Communication, Tse and Viswanath
⋆ Available at
https://web.stanford.edu/~dntse/wireless_book.html
Module 3
▶ Wideband multi-user wireless communication systems
Digital communication over wireless channel
Key problem: how to send and receive bits using a wireless channel?
Step 1: Additive White Gaussian Noise channel without fading
y [m] = x[m] + w [m]
▶ m: time slot
▶ x[m]: transmitted baseband symbol
▶ w [m]: additive white Gaussian noise
▶ y [m]: received signal
Step 2: Flat-fading channel
y [m] = h[m]x[m] + w [m]
Step 3: More general frequency-selective channel
Additive White Gaussian Noise (AWGN) channel
AWGN channel model:
y [m] = x[m] + w [m]
▶ x[m] is transmitted in time slot m
▶ y [m] is received signal in time slot m
▶ Noise {w [m]}
⋆ For each m, w [m] ∼ CN (0, N0 )
⋆ The process {w [m]} is white, i.e., w [m] for different m are independent
(i.e., independent over time)
Simplify: Start with the (one-shot) problem: how to detect a real-valued
scalar u taking values in {uA, uB} with additive Gaussian noise
w ∼ N(0, N0/2):
y = u + w
Preliminaries: Gaussian random variables
Gaussian random variable x with mean µ and variance σ 2 has
Probability Density Function (PDF):
f(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)), x ∈ ℝ
▶ Notation x ∼ N (µ, σ 2 )
Standard Gaussian random variable w has mean 0 and variance 1,
i.e., w ∼ N (0, 1).
Preliminaries: Properties of Gaussian random variables
An important property: Any linear combination of independent
Gaussian random variables is also a Gaussian random variable.
▶ If x1, x2, ..., xn are independent and xi ∼ N(µi, σi²), then
c1x1 + c2x2 + · · · + cnxn ∼ N(c1µ1 + · · · + cnµn, c1²σ1² + · · · + cn²σn²)
A general Gaussian random variable x ∼ N (µ, σ 2 ) can be expressed
in terms of Standard Gaussian random variable w ∼ N (0, 1):
x = σw + µ
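The relation x = σw + µ gives a direct way to generate general Gaussian samples from standard ones. A minimal Python sketch (the values µ = 2, σ = 3 are arbitrary choices for illustration):

```python
import math
import random

# Sketch: build x ~ N(mu, sigma^2) from a standard Gaussian w ~ N(0, 1)
# via x = sigma * w + mu (mu and sigma chosen arbitrarily here).
mu, sigma = 2.0, 3.0
random.seed(0)
samples = [sigma * random.gauss(0.0, 1.0) + mu for _ in range(200_000)]

mean_est = sum(samples) / len(samples)
var_est = sum((x - mean_est) ** 2 for x in samples) / len(samples)
print(abs(mean_est - mu) < 0.05)        # sample mean is close to mu
print(abs(var_est - sigma ** 2) < 0.2)  # sample variance is close to sigma^2
```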
Preliminaries: Tail of Gaussian random variables
Tail Q(.) of the standard Gaussian random variable w is defined as:
Q(a) = P(w > a)
Tail decays exponentially fast (see the figure in [1]):
(1/(a√(2π))) (1 − 1/a²) e^(−a²/2) < Q(a) < e^(−a²/2), a > 1
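The Q function is not in the Python standard library, but it can be written via the complementary error function as Q(a) = (1/2) erfc(a/√2). A small sketch checking the bounds above (the test points are arbitrary):

```python
import math

def Q(a: float) -> float:
    # Tail of the standard Gaussian: Q(a) = P(w > a) = 0.5 * erfc(a / sqrt(2))
    return 0.5 * math.erfc(a / math.sqrt(2.0))

# Check (1/(a*sqrt(2*pi)))*(1 - 1/a^2)*exp(-a^2/2) < Q(a) < exp(-a^2/2) for a > 1
for a in [1.5, 2.0, 3.0, 4.0]:
    upper = math.exp(-a * a / 2.0)
    lower = (1.0 / (a * math.sqrt(2.0 * math.pi))) * (1.0 - 1.0 / a ** 2) * upper
    assert lower < Q(a) < upper

print(Q(0.0))  # 0.5: half the mass of a zero-mean Gaussian lies above 0
```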
Preliminaries: Complex Gaussian random variable
Consider circular symmetric complex Gaussian random variable with
i.i.d. zero-mean Gaussian real and imaginary components
w = wR + jwI , where wR , wI ∼ N (0, σ 2 /2)
Notation: w ∼ CN (0, σ 2 )
Its PDF is
f(w) = (1/(πσ²)) exp(−∥w∥²/σ²), w ∈ ℂ
Phase of w is uniform over [0, 2π]
Magnitude r = ∥w∥ is a Rayleigh random variable with PDF
f(r) = (2r/σ²) exp(−r²/σ²), r ≥ 0
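A quick sanity check of this construction in Python: sampling w = wR + j·wI with wR, wI ∼ N(0, σ²/2) should give E[∥w∥²] = σ² and a phase with zero mean (σ² = 4 is an arbitrary choice):

```python
import cmath
import math
import random

# Sketch: sample w ~ CN(0, sigma^2) as w = wR + j*wI, wR, wI ~ N(0, sigma^2/2)
sigma2 = 4.0
random.seed(1)
std = math.sqrt(sigma2 / 2.0)
ws = [complex(random.gauss(0.0, std), random.gauss(0.0, std))
      for _ in range(200_000)]

# E[|w|^2] should be sigma^2, and the phase should be uniform (zero mean here,
# since cmath.phase returns values in (-pi, pi])
power = sum(abs(w) ** 2 for w in ws) / len(ws)
mean_phase = sum(cmath.phase(w) for w in ws) / len(ws)
print(abs(power - sigma2) < 0.1)
print(abs(mean_phase) < 0.03)
```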
Scalar detection in Gaussian noise
Consider the real additive Gaussian noise channel:
y = u + w
▶ u is equally likely to be uA or uB (i.e., binary)
▶ w ∼ N(0, N0/2) is real Gaussian noise
Detection problem: making a decision on whether uA or uB was
transmitted based on observed y .
Scalar detection in additive Gaussian noise: An intuitive
solution
Consider observed y = u + w , where w ∼ N (0, N0 /2)
Suppose uA = −1 and uB = +1, and are equally likely.
Easy to decide:
▶ What if y = −1.1? Decide: uA was sent.
▶ What if y = +1.1? Decide: uB was sent.
Slightly more difficult to decide:
▶ What if y = −0.75? Decide: uA was sent.
▶ What if y = +0.75? Decide: uB was sent.
How about the following:
▶ What if y = −0.0001? Decide: uA was sent.
▶ What if y = +0.0001? Decide: uB was sent.
We can infer the following intuitive rule:
▶ If y < 0, decide: uA was sent.
▶ If y > 0, decide: uB was sent.
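The intuitive sign rule is easy to simulate. A minimal sketch (N0 = 0.5 is an arbitrary choice; the empirical error rate should come out near Q(1/√(N0/2)) = Q(2) ≈ 0.0228):

```python
import math
import random

# Sketch of the intuitive rule for uA = -1, uB = +1 in noise w ~ N(0, N0/2):
# decide uA if y < 0, and uB otherwise (the tie y = 0 can go either way).
def detect(y: float) -> float:
    return -1.0 if y < 0.0 else +1.0

random.seed(2)
N0 = 0.5
trials, errors = 100_000, 0
for _ in range(trials):
    u = random.choice([-1.0, +1.0])                  # equally likely symbols
    y = u + random.gauss(0.0, math.sqrt(N0 / 2.0))   # y = u + w
    if detect(y) != u:
        errors += 1
print(errors / trials)  # close to Q(1/sqrt(N0/2)) = Q(2) ~ 0.0228
```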
Detection rules: Maximum A Posteriori (MAP) rule
MAP rule: choose the symbol most likely to have been transmitted
given the received signal y; i.e., uA is chosen if
P {u = uA |y } ≥ P {u = uB |y } (1)
Since P {u = uA } = P {u = uB } = 0.5, (1) implies that uA is chosen
if
f(y, u = uA)/f(y) ≥ f(y, u = uB)/f(y)
=⇒ f(y|u = uA) P{u = uA}/f(y) ≥ f(y|u = uB) P{u = uB}/f(y)
=⇒ f(y|u = uA) ≥ f(y|u = uB)
f (y |u = uA ) is the likelihood of y if uA is transmitted
Thus, if uA and uB are equally likely, MAP rule =⇒ Maximum
Likelihood (ML) rule
▶ We treat PDFs like probabilities above; a more careful treatment is given in the appendix.
Detection rules: ML rule
ML rule: choose uA if f(y|u = uA) ≥ f(y|u = uB), and otherwise choose uB
y = u + w and w ∼ N (0, N0 /2). Hence, conditioned on u = ui , the
received signal y ∼ N (ui , N0 /2), i = A, B.
Hence, ML rule =⇒ choose uA if
(1/√(πN0)) exp(−(y − uA)²/N0) ≥ (1/√(πN0)) exp(−(y − uB)²/N0)
=⇒ (y − uA)² ≤ (y − uB)²
=⇒ |y − uA| ≤ |y − uB|
Hence, the ML rule essentially chooses the nearest transmit symbol. This
also aligns with our intuitive solution in slide 10
ML rule: Illustration of decision region
Figure from [1]
ML rule in terms of log-likelihood ratio
ML rule can be rewritten as follows:
f(y|u = uA) ≥ f(y|u = uB)
=⇒ f(y|u = uA)/f(y|u = uB) ≥ 1
=⇒ log [ f(y|u = uA)/f(y|u = uB) ] ≥ 0
Define log-likelihood ratio (LLR) function
Λ(y) = log [ f(y|u = uA)/f(y|u = uB) ]
Thus, ML rule can be written in terms of LLR function as follows:
Λ(y) ≥ 0 =⇒ decide uA; Λ(y) < 0 =⇒ decide uB
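For this Gaussian channel the LLR has a closed form: expanding the two densities f(y|u = ui) ∝ exp(−(y − ui)²/N0) gives Λ(y) = ((y − uB)² − (y − uA)²)/N0, so the LLR rule reduces to the nearest-symbol rule. A small sketch checking this on the sample points from the intuitive-solution slide:

```python
# Sketch: LLR for y = u + w with w ~ N(0, N0/2). The 1/sqrt(pi*N0) factors
# cancel in the ratio, leaving
#   Lambda(y) = log f(y|uA)/f(y|uB) = ((y - uB)**2 - (y - uA)**2) / N0,
# which is >= 0 exactly when y is at least as close to uA as to uB.
def llr(y: float, uA: float, uB: float, N0: float) -> float:
    return ((y - uB) ** 2 - (y - uA) ** 2) / N0

uA, uB, N0 = -1.0, 1.0, 0.5
for y in [-1.1, -0.75, -0.0001, 0.0001, 0.75, 1.1]:
    llr_choice = uA if llr(y, uA, uB, N0) >= 0 else uB
    nearest = uA if abs(y - uA) <= abs(y - uB) else uB
    assert llr_choice == nearest  # LLR rule == nearest-symbol rule
print("LLR rule matches the nearest-symbol rule")
```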
ML rule: Probability of error derivation
Using the law of total probability, the probability of error pe
can be expressed in terms of conditional probabilities:
pe = P{u = uA} P{error|u = uA} + P{u = uB} P{error|u = uB}    (2)
Here
P{error|u = uA} = P{ w > |uA − uB|/2 | u = uA }  (see slide 13)
= P{ w > |uA − uB|/2 }, since w and u are independent
= Q( |uA − uB| / (2√(N0/2)) )
ML rule: Probability of error
Due to symmetry of the setting,
P {error|u = uA } = P {error|u = uB }
Also, P {u = uA } = P {u = uB }
Hence, (2) then gives us the following:
pe = Q( |uA − uB| / (2√(N0/2)) )
pe depends only on the distance between the two transmit symbols uA and uB
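A Monte Carlo sketch of this formula (the values uA = 0, uB = 2, N0 = 1 are arbitrary; pe should come out near Q(√2) ≈ 0.079):

```python
import math
import random

# Sketch: Monte Carlo check of pe = Q(|uA - uB| / (2*sqrt(N0/2)))
# for ML detection (decide the nearer of uA, uB) with w ~ N(0, N0/2).
def Q(a: float) -> float:
    return 0.5 * math.erfc(a / math.sqrt(2.0))

uA, uB, N0 = 0.0, 2.0, 1.0
random.seed(3)
trials, errors = 200_000, 0
for _ in range(trials):
    u = random.choice([uA, uB])                      # equally likely symbols
    y = u + random.gauss(0.0, math.sqrt(N0 / 2.0))   # y = u + w
    u_hat = uA if abs(y - uA) <= abs(y - uB) else uB # nearest-symbol rule
    errors += (u_hat != u)

pe_sim = errors / trials
pe_theory = Q(abs(uA - uB) / (2.0 * math.sqrt(N0 / 2.0)))
print(abs(pe_sim - pe_theory) < 0.005)
```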
Digital communication over an AWGN channel
After introducing the AWGN channel in slide 4, we have so far focused
on the closely related problem of detection in additive Gaussian noise
Let us now look at the AWGN channel model. Recall:
y [m] = x[m] + w [m]
▶ x[m] is transmitted in time slot m
▶ y [m] is received signal in time slot m
▶ Noise {w [m]}
⋆ For each m, w [m] ∼ CN (0, N0 )
⋆ The process {w [m]} is white, i.e., independent over time
Digital communication over an AWGN channel using BPSK
BPSK =⇒ x[m] is either +a or −a (for some a > 0)
Since x[m] is real, y[m] is also real. Thus, the imaginary component of
w[m] can be ignored when determining the probability of error.
Hence, using slide 16 result for additive Gaussian noise,
P{error} = Q( a/√(N0/2) )
BPSK in AWGN channel: Prob. of error in terms of SNR
Probability of error can also be expressed as follows:
P{error} = Q( a/√(N0/2) ) = Q( √(2·SNRB) )
Here, SNRB = a²/N0 is the SNR per symbol
There are a few ways to define SNR. We use the definition below from [1] (Eq. (3.9)):
SNR = (average received signal energy per (complex) symbol time) / (noise energy per (complex) symbol time)
▶ Energy of entire complex noise is considered (though imaginary
components are not relevant when computing probability of error)
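A short simulation of BPSK over this channel (a = 1, N0 = 1, i.e., SNRB = 1, chosen arbitrarily); only the real noise component affects detection, and the bit error rate should match Q(√(2·SNRB)):

```python
import math
import random

# Sketch: BPSK (x = +/-a) over the AWGN channel. Only the real noise
# component matters, so we check P{error} = Q(sqrt(2*SNR)), SNR = a^2/N0.
def Q(a: float) -> float:
    return 0.5 * math.erfc(a / math.sqrt(2.0))

a, N0 = 1.0, 1.0                                     # SNR = a^2/N0 = 1 (0 dB)
random.seed(4)
trials, errors = 200_000, 0
for _ in range(trials):
    x = random.choice([-a, +a])
    y = x + random.gauss(0.0, math.sqrt(N0 / 2.0))   # real noise component
    errors += ((y >= 0) != (x > 0))                  # sign detector

ber_sim = errors / trials
ber_theory = Q(math.sqrt(2.0 * a * a / N0))          # Q(sqrt(2*SNR)) ~ 0.0786
print(abs(ber_sim - ber_theory) < 0.005)
```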
Digital communication over an AWGN channel using QPSK
QPSK =⇒ x[m] ∈ {a(1 + j), a(1 − j), a(−1 + j), a(−1 − j)}
▶ Real and imaginary parts in x[m] are associated with I (in-phase) and
Q (quadrature) channels used in communication
▶ QPSK delivers one extra bit per symbol compared to BPSK
Noise is independent across I and Q channels. So, bits can be
detected separately along the channels.
Thus, the bit error probability is the same as for BPSK and is given by:
P{error} = Q( a/√(N0/2) ) = Q( √SNRQ )
Here, SNRQ = 2a²/N0 is the SNR per symbol
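A sketch of QPSK detection along the I and Q channels separately (a = 1, N0 = 1 chosen arbitrarily); each bit sees real noise of variance N0/2, so the per-bit error rate matches the BPSK expression Q(a/√(N0/2)):

```python
import math
import random

# Sketch: QPSK symbols a*(+/-1 +/- 1j). The I and Q bits are detected
# independently from the real and imaginary parts of y, each seeing real
# noise N(0, N0/2), so the per-bit error rate is Q(a/sqrt(N0/2)).
def Q(a: float) -> float:
    return 0.5 * math.erfc(a / math.sqrt(2.0))

a, N0 = 1.0, 1.0
random.seed(5)
trials, bit_errors = 100_000, 0
for _ in range(trials):
    bi, bq = random.choice([-1, 1]), random.choice([-1, 1])  # I and Q bits
    x = complex(a * bi, a * bq)
    w = complex(random.gauss(0.0, math.sqrt(N0 / 2.0)),
                random.gauss(0.0, math.sqrt(N0 / 2.0)))
    y = x + w
    bit_errors += ((y.real >= 0) != (bi > 0)) + ((y.imag >= 0) != (bq > 0))

ber_sim = bit_errors / (2 * trials)
print(abs(ber_sim - Q(a / math.sqrt(N0 / 2.0))) < 0.005)
```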
AWGN channel: BPSK vs QPSK
Probability of error for BPSK is Q( √(2·SNRB) ) and for QPSK is Q( √SNRQ )
For SNRB = SNRQ , QPSK has higher probability of error (see
definition of tail in slide 7).
But, QPSK delivers one extra bit per symbol compared to BPSK
For a fair comparison, we have to compare 4-PAM and QPSK, both of
which deliver 2 bits per symbol.
[1] compares them and shows that approximately 2.5 times more
transmit energy is needed by 4-PAM for the same probability of error, and
highlights the following as a general design principle:
A good communication scheme exploits all available "degrees of
freedom" in the channel.
▶ I and Q channels are examples of degrees of freedom. In the BPSK vs QPSK
comparison, QPSK uses both I and Q channels whereas BPSK uses only one.
AWGN channel: BPSK and QPSK probability of error
trends
Probability of error for BPSK is Q( √(2·SNRB) ) and for QPSK is Q( √SNRQ )
Recall from slide 7 that Q(a) < e^(−a²/2)
Hence, probability of error for BPSK and QPSK decays exponentially
with SNR in an AWGN channel
We will later see that the exponential decay is not seen with fading.
Takeaways
We study digital communication over fading channels in this module.
We started by studying AWGN channel.
Given a communication channel (like AWGN channel),
▶ We can obtain detection rules based on MAP rule and ML rule
⋆ ML rule in terms of the LLR function: Λ(y) ≥ 0 =⇒ decide uA; Λ(y) < 0 =⇒ decide uB
▶ Assess performance by analysing the probability of error (e.g., in terms of
SNR)
⋆ E.g., probability of error for BPSK and QPSK decays exponentially
with SNR in an AWGN channel
References
Tse, D. and Viswanath, P. (2005). Fundamentals of Wireless
Communication. Cambridge: Cambridge University Press.
Appendix: Probability to PDF when deriving ML rule from
MAP rule
MAP rule: Given received signal y , choose uA if
P {u = uA |y } ≥ P {u = uB |y }
MAP rule (generalized): Given that the received signal is in [y, y + δy]
(i.e., a neighborhood of y), choose uA if
P{u = uA | [y, y + δy]} ≥ P{u = uB | [y, y + δy]}    (3)
Since P{u = uA} = P{u = uB} = 0.5, (3) implies that uA is chosen if
P(u = uA, [y, y + δy]) / P([y, y + δy]) ≥ P(u = uB, [y, y + δy]) / P([y, y + δy])
=⇒ f(y, u = uA)δy / (f(y)δy) ≥ f(y, u = uB)δy / (f(y)δy)
=⇒ f(y, u = uA)/f(y) ≥ f(y, u = uB)/f(y)
The rest of the steps are covered in the MAP rule slide.