Syllabus :
2.1 Introduction to Digital Communication System (DCS) :
- In the previous semester you have learnt some important concepts of analog communication.
- Basically a communication system can be of analog or digital type.
- In an analog communication system the information signal at the input is continuously varying in amplitude and time, and it is used to proportionally change a characteristic such as amplitude, frequency or phase of a sinusoidal carrier. This produces a modulated signal.
- In the digital communication system the information signal is discrete in nature.
2.2 Sources and Signals :
- A source of information generates an information signal called a message.
- The examples of message signals are as follows :
1. Voice signal
2. TV picture
3. Teletype data
4. Temperature or pressure
5. Many other sources
- The message signals mentioned above are all non-electrical in nature. Hence a transducer is used to convert them into their electrical equivalent. Such an electrical equivalent of a message is called the baseband signal.
2.2.1 Analog to Digital Conversion :
- The message signal can be of analog or digital type. An analog signal can always be converted into a digital signal.
- The analog to digital conversion (A/D) can be achieved by using the system shown in Fig. 2.2.1.
- This system consists of three blocks namely sampler, quantizer and encoder.
Sampler :
- The analog signal is applied at the input of the sampler.
- The sampler is a switch which samples the input signal at regular intervals of time and produces the discrete version of the input signal.
Quantizer :
- Quantization is a process of approximation or rounding off.
- The quantization process approximates each sample to its nearest standard voltage level called the quantization level. We get the approximate version of the sampled signal at the output of the quantizer.
- The number of quantization levels is finite and generally it is a power of 2 i.e. 2, 4, 8, 16, 32, etc.
Encoder :
- An encoder converts each quantized sample into a separate code word of length say N bits.
- Thus at the output of the encoder we get digital code words.
Fig. 2.2.1 : Analog-to-digital conversion
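As a rough illustration of the sampler-quantizer-encoder chain described above, the short Python sketch below converts a few samples of an analog signal into N-bit code words. The input tone, sampling rate and number of levels are assumed values chosen only for this example.

```python
import numpy as np

# Assumed, illustrative parameters (not specified in the text)
fs = 8000                      # sampling rate in Hz
L = 8                          # number of quantization levels (a power of 2)
N = int(np.log2(L))            # bits per code word

# Sampler : take samples of the analog input at regular intervals of time
t = np.arange(0, 0.001, 1 / fs)            # sampling instants
x = np.sin(2 * np.pi * 1000 * t)           # example analog input (1 kHz tone)

# Quantizer : approximate each sample by the nearest of L standard levels in [-1, 1]
levels = np.linspace(-1, 1, L)
idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
xq = levels[idx]                           # quantized (approximated) samples

# Encoder : represent each quantized sample by an N-bit code word
codewords = [format(int(i), f"0{N}b") for i in idx]
print(xq[:4], codewords[:4])               # digital code words at the encoder output
```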
2.2.2 Graphical Representation of A/D Conversion Process :

Q. What are the different parameters which need to be examined before choosing a PCM waveform for a particular application ? (May 10, May 19, 5 Marks)

- Fig. 2.2.2 illustrates the A to D conversion process graphically.
- It is important to understand that the output is in the form of binary codes. Each transmitted binary code represents a particular amplitude of the input signal.
- Hence the "information" is represented in the form of a "code" which is being transmitted.
Fig. 2.2.2 : Input and output waveforms of a PCM system
- The range of input signal magnitudes is divided into 8 equal levels (Y axis in Fig. 2.2.2). Each level is denoted by a three bit digital word between 000 and 111.
- The input signal x(t) is sampled. If the sample is in the 5th window of amplitude then the digital word 101 is transmitted. If the sample is in the 2nd window then the transmitted word is 010 and so on.
- In this illustration we have converted the sampled amplitudes into 3 bit codes, but in practice the number of bits per word can be as high as 8, 9 or 10.
- The codewords shown in Fig. 2.2.2 are three bit numbers. It is possible to introduce one more bit to indicate the "sign".
Error :
- Due to the approximation taking place in the quantization process, the A to D conversion introduces some error in the digital signal.
- Such errors cannot be reversed and it is not possible to produce an exact replica of the original analog signal at the receiver.
- However it is possible to minimize these errors by selecting a proper sampling rate and number of quantization levels.
2.3 Why Digital ?
- In recent days the commercial as well as military communication systems are becoming digital.
- The reasons behind using the digital communication are as follows :
1. Digital signals can be easily regenerated.
2. Digital signals are less affected by noise.
3. It is possible to use regenerative repeaters. This will increase the range of communication.
4. Digital circuits are affected less by distortion and interference.
5. The error rates are extremely low.
6. Digital circuits and systems are more reliable.
7. It is easy for digital signals to undergo signal processing.
- We will discuss these advantages one by one.
1. Digital signals are less affected by noise :
- In the analog communication, the objective is to transmit a waveform as shown in Fig. 2.3.1(a) because the information is contained in the shape of the waveform.
- But during the transmission the shape of the transmitted signal gets distorted as shown in Fig. 2.3.1(b) due to the shortcomings of the communication channel.
Fig. 2.3.1 : (a) Transmitted signal in analog communication (b) Received signal in analog communication
- Since all the information is contained in the shape of the waveform, it is necessary to preserve the shape of the waveform.
- But the noise will distort the shape of the waveform as shown in Fig. 2.3.1(b). Therefore the received information will also be distorted.
- Now consider the transmission of a digital signal as shown in Fig. 2.3.1(c). To transmit such a binary signal the objective is to transmit either 0 or 1.
Fig. 2.3.1 : (c) Transmitted signal in digital transmission (d) Received signal in digital transmission
- Fig. 2.3.1(d) shows the received digital signal with noise. The receiver has to decide whether a 0 is received or a 1 is received. Even for the noise contaminated received signal it is easier for the receiver to make a correct decision most of the times.
- Hence the digital transmission is more immune to noise as compared to the analog communication.
2. Easy regeneration :
- The digital signals that are contaminated with noise can be easily reproduced without noise using regenerative repeaters.
- The repeaters are basically regenerators of the signal. They receive the (signal + noise) at their input, separate out the signal from the noise and regenerate the signal which is free from noise.
- Due to the use of repeaters the noise performance of digital communication systems is much better than that of the analog systems. Note that repeaters cannot be used for the analog communication system because it is impossible to separate noise once it gets mixed with the signal.
- The operation of a digital repeater becomes clear from Fig. 2.3.2.
Fig. 2.3.2 : A digital repeater
- Thus it is possible to undo the effect of noise on the signal. This improves the noise immunity and system performance in presence of noise.
3. Digital communication is cost effective for long distances :
- Digital communication becomes cost effective over analog communication if it is used for longer distances.
- Consider a long distance communication link shown in Fig. 2.3.3.
- As the signal from the source travels a distance from the source, its amplitude will reduce due to attenuation. In addition to that, the signal will become increasingly distorted. The noise from various sources gets added to the signal.
Fig. 2.3.3 : A long distance communication link
- In a digital communication system, repeaters are introduced as shown in Fig. 2.3.3 between the destination and source to regenerate a noise free signal.
4. Less distortion and interference :
- Digital circuits are less subject to distortion and interference as compared to the analog circuits. This is because binary digital signals have two states : 0 or 1. A disturbance (noise) must be large enough to change the state of the signal from 0 to 1 or vice versa.
- It is also possible to use the regenerative repeaters as discussed earlier.
- Due to all this the noise and other disturbances do not accumulate during the digital transmission.
5. Low error rates :
- Due to the use of regenerative repeaters between the transmitter and receiver in a digital communication system, and also due to the use of various error detection and correction procedures that we can use with the digital communication systems, the error rates are extremely low.
- That means even in presence of noise, the digital data can be received without introducing any error.
6. More reliability and flexibility :
- Digital circuits are more reliable and their cost of production is low. Also due to the use of digital hardware (microprocessors, LSI and digital switches) the circuit implementation has a better flexibility.
7. Use of TDM :
- The time division multiplexing (TDM) technique can be used to transmit many digital signals over a single communication channel.
8. Signal processing :
- All kinds of signal processing can be carried out easily over the digital signals. Such processing will protect the signal against interference and jamming. It also provides encryption and privacy.
2.3.1 Advantages of Digital Communication :
Some of the advantages of digital communication are as follows :
1. Due to the digital nature of the transmitted signal, the interference of additive noise does not introduce many errors. So digital communication has a better noise immunity.
2. Due to the channel coding techniques used in digital communication it is possible to detect and correct the errors introduced during the data transmission.
3. Repeaters can be used between transmitter and receiver to regenerate the digital signal. This improves the noise immunity further and also extends the range of communication.
4. Due to the digital nature of the signal, it is possible to use the advanced data processing techniques such as digital signal processing, image processing, data compression etc.
5. The TDM (Time Division Multiplexing) technique can be used to transmit many voice channels over a single common transmission channel. Thus digital telephony is possible to achieve.
6. Digital communication is suitable in military applications where only a few permitted receivers can receive the transmitted signal.
7. Digital communication is becoming simpler and cheaper as compared to the analog communication due to the invention of high speed computers and integrated circuits (ICs).
2.3.2 Disadvantages :
Some of the important disadvantages of digital communication are :
1. The bit rates of digital systems are high. Therefore they require a larger channel bandwidth as compared to analog systems.
2. Digital modulation needs synchronization in case of synchronous modulation.
2.4 Typical Block Diagram and
Transformations in DCS :
"Write a note on block. diagram of digital |
communication system. (May 13, 7 Marks)
Fig. 2.4.1 shows the functional block diagram of the digital
‘communication system (DCS). It basically consists of a
transmitter (upper blocks), a receiver (lower blocks) and a
communication channel.
The signal processing steps carried out at the receiver are
exactly opposite to those taking place atthe transmitter.
‘The modulate and demodulate/detect blocks together are
known as MODEM.
Fig. 2.4.1 : Block diagram of a typical digital communication system (DCS)
- In Fig. 2.4.1 some blocks are optional while the others are essential.
2.4.1 Transmitter :
- The input information source is applied to the format block which samples and converts the information into a digital signal.
- This digital signal is then applied to the source encoder block. In source coding the encoder converts the digital signal generated at the source output into another signal in digital form. Source encoding is used to reduce or eliminate redundancy for ensuring an efficient representation of the source output. Different source coding techniques are PCM, DM, ADM etc.
- The conversion of the signal from one form to the other is called mapping. Such a mapping is usually of one-to-one type. Due to the elimination of redundancy the source coding represents the source output very efficiently.
- In Fig. 2.4.1, the only essential blocks are as follows : Formatting, Modulation, Demodulation / Detection and Synchronization.
- Encryption is the process of converting the digital signal at the source encoder output into another secret coded signal. This is an optional block. Encryption is required to ensure the communication privacy.
- The encrypted signal is applied to the channel coding block. Channel encoding is done to minimize the effect of channel noise. This will reduce the number of errors in the received data and will make the system more reliable. The channel coding technique introduces some redundancy.
- The channel encoder maps the incoming digital signal into a channel input. Channel coding for the given data rate can reduce the probability of error.
- The output of the channel coder is applied to a multiplexer which combines it with other signals originating from some other sources.
- Up to this point the signal is in the form of a bit stream. At the modulator this bit stream is converted into waveforms that are compatible with the transmission channel (this is also called line coding).
- The pulse modulation process will convert the bit stream at its input into suitable line codes. The bandpass modulation is used for providing an efficient transmission of the signal over the channel. The modulator can use any of the CW digital modulation techniques such as ASK (amplitude shift keying), FSK (frequency shift keying) or PSK (phase shift keying).
- The demodulator is used for demodulation.
- The frequency spreading (spread spectrum technique) will produce a signal that is immune to interference of any kind.
- The modulated waveform is passed through the optional multiple access block (FDMA / TDMA or CDMA) and applied to the transmission channel.
2.4.2 Receiver :
- At the receiver the transmitted signal alongwith the noise added to it while travelling over the channel is received.
- The blocks used at the receiver perform exactly the opposite operations to those performed at the transmitter.
- The received signal is first passed through the multiple access decoder to separate out the signals.
- The frequency despreading block recovers the original signal from the spread spectrum signal. The frequency despreading process is opposite to the signal spreading process.
- It is then passed through the demodulator/detector which is opposite to the modulation process. The demultiplexer will separate out the multiplexed signals and pass on the desired (selected) signal to the channel decoder.
Channel decoder :
- The channel decoder is present at the receiver and it maps the channel output into a digital signal in such a way that the effect of channel noise is reduced to a minimum.
- Thus the channel encoder and decoder together provide a reliable communication over a noisy channel. This is achieved by introducing redundancy (parity bits) in a predecided form, at the transmitter.
- The output of the channel encoder is a series of codewords which include the message and some parity bits. These additional parity bits introduce redundancy.
- The channel decoder converts these codewords back into digital messages.
- The channel decoder output is applied to the decryption block. Decryption is a process which is exactly opposite to the encryption process. The decrypted signal is applied to the source decoder.
Source decoder :
- The source decoder is at the receiver and it behaves exactly in an inverse way to the source encoder.
- It delivers to the destination (user) the original digital source output.
- The main advantage of using the source coding is that it reduces the bandwidth requirement.
- The source decoder output is finally passed through the format block to recover the original information signal back.
- Thus in source coding the redundancy is removed whereas in channel coding the redundancy is introduced in a controlled manner.
- It is possible to opt for only source encoding or for only channel encoding. It is not essential to perform both, but in many systems both these are performed together.
- It is possible to change the sequence in which channel encoding and source encoding are being performed.
- Channel and source encoding improve the system performance but they increase the circuit complexity as well.
Synchronization :
- Synchronization is essential in a DCS. Its key element is the clock signal. This clock signal is involved in the control of all signal processing functions within the DCS.
- In Fig. 2.4.1 the synchronization block is drawn without any connecting lines, to show that it has got a role to play in each and every block of the DCS.
2.4.3 Signal Processing Functions or
Transformations :
Following are some of the important signal processing functions or transformations that are related to the digital communication :
1. Formatting and source coding
2. Baseband signaling
3. Bandpass signaling
4. Equalization
5. Channel coding
6. Multiplexing and multiple access
7. Spreading
8. Encryption
9. Synchronization
All these transformations have been discussed in detail in the subsequent chapters of this book.
2.5 Basic Digital Communication Nomenclature :
- Some of the basic digital signal nomenclature which is used frequently in digital communication literature is as follows :
1. Information source :
- It is the device which produces the information which is to be communicated using the DCS. The information source can be analog or discrete.
- The analog information can be transformed into digital form by means of the A to D conversion discussed earlier. It uses the sampling and quantization blocks. Sampling is a formatting process while quantization is a source coding technique.
2. Textual message :
- A textual message is a sequence of characters as shown in Fig. 2.5.1(a). It consists of alphabets, numbers and special characters or symbols.
Fig. 2.5.1(a) : Textual message ("Send Rs. 5000/- to me tomorrow", "Ok. I will")
- For digital transmission, each character in the textual message is converted into a sequence of digits.
3. Character :
- A character is an alphabet or a number or a special symbol. The examples of a character are shown in Fig. 2.5.1(b).
Fig. 2.5.1(b) : Characters
- Each character is converted into a sequence of binary digits. This is called character encoding. Some of the standard codes used for character encoding are ASCII (American Standard Code for Information Interchange), EBCDIC (Extended Binary Coded Decimal Interchange Code), Hollerith and Baudot codes.
4. Binary digit (bit) :
- It is the fundamental information unit of all DCS. The bit is also used as a unit of information. A binary digit can have two possible values i.e. zero or one.
5. Bit stream :
- It is defined as the sequence of binary digits i.e. zeros and ones. A bit stream is generally called the baseband signal in DCS.
- Fig. 2.5.1(c) shows a bit stream for the message WHO which is represented with its ASCII code.
Fig. 2.5.1(c) : Bit stream using 7-bit ASCII
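As a small illustration of the character encoding just described, the Python sketch below forms the 7-bit ASCII bit stream for the message WHO. The most-significant-bit-first ordering is an assumption made only for this example.

```python
# Form the 7-bit ASCII bit stream for the message "WHO" (MSB-first, assumed ordering).
message = "WHO"
bit_stream = "".join(format(ord(ch), "07b") for ch in message)
print(bit_stream)   # 21 bits: 1010111 1001000 1001111 (W, H, O)
```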
6. Symbol :
- A symbol is also called a digital message and it is a group of k bits considered as one unit.
- Such a unit is referred to as symbol m_i (i = 1, 2, ..., M). Thus there will be M symbols with k bits per symbol. The relation between M and k is as follows :
M = 2^k
- Fig. 2.5.1(d) shows some examples of symbols.
1 or 0 ... Binary symbol (k = 1, M = 2)
00, 01, 10 or 11 ... Quaternary symbol (k = 2, M = 4)
000, 001, ..., 111 ... 8-ary symbol (k = 3, M = 8)
Fig. 2.5.1(d) : Examples of symbols
7. Digital waveform :
- A digital waveform is a voltage or current waveform which represents a digital symbol. Fig. 2.5.1(e) shows an example of such a waveform.
- Even though the waveform shown in Fig. 2.5.1(e) is a sinusoidal waveform, it is called a digital waveform because it is encoded with digital information.
- The digital waveform of Fig. 2.5.1(e) is a Binary Frequency Shift Keying (BFSK) waveform, drawn to represent binary symbols with the help of frequency values.
- Symbol "0" is represented by frequency f1 and symbol "1" is represented by frequency f2. In Fig. 2.5.1(e), T represents the duration of each symbol.
Fig. 2.5.1(e) : Digital waveform
8. Data rate :
- Data rate is defined as the number of bits transmitted by a DCS per second. It is expressed in bits per second and denoted by R.
R = k/T = (1/T) log2 (M) bits/second
where k is the number of bits per symbol, M = 2^k and T is the symbol duration.
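To make the relation concrete, here is a tiny Python sketch evaluating R = k/T for assumed values of k and T (the numbers are illustrative, not from the text).

```python
import math

k = 3                      # bits per symbol (assumed)
M = 2 ** k                 # number of distinct symbols, M = 2^k  -> 8
T = 1e-3                   # symbol duration in seconds (assumed)

R = k / T                  # data rate in bits/second
R_alt = math.log2(M) / T   # identical value written as (1/T) * log2(M)
print(M, R, R_alt)         # 8 3000.0 3000.0
```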
2.6 Digital Versus Analog Performance
Criteria :
- The performance of a digital communication system is evaluated in a completely different manner as compared to that of an analog communication system.
- An analog communication system is evaluated on the basis of the following parameters :
1. Signal to noise ratio.
2. Percentage distortion.
3. Expected mean square error between transmitted and received waveforms.
- A digital communication system, on the other hand, is evaluated on the basis of the probability of error (Pe) value. Pe is defined as the probability of incorrectly detecting a digit.
2.6.1 Comparison of Analog and Digital
Modulation :
Sr. No. | Analog modulation | Digital modulation
1. | Transmitted modulated signal is analog in nature. | Transmitted signal is digital i.e. a train of digital pulses.
2. | Amplitude, frequency or phase variations in the transmitted signal represent the information or message. | Amplitude, width or position of the transmitted pulses is constant. The message is transmitted in the form of code words.
3. | Noise immunity is poor for AM, but improved for FM and PM. | Noise immunity is excellent.
4. | It is not possible to separate out noise and signal. Therefore repeaters cannot be used. | It is possible to separate signal from noise. So repeaters can be used.
5. | Coding is not possible. | Coding techniques can be used to detect and correct the errors.
6. | Bandwidth required is lower than that for the digital modulation methods. | Due to higher bit rates, higher channel bandwidths are needed.
7. | FDM is used for multiplexing. | TDM is used for multiplexing.
8. | Not suitable for transmission of secret information in military applications. | Due to coding techniques, it is suitable for military applications.
9. | Analog modulation systems are AM, FM, PM, PAM, PWM etc. | Digital modulation systems are PCM, DM, ADM, DPCM etc.
2.6.2 Channels for Digital Communications :
- The type of modulation and coding used in a digital communication system is largely dependent on the channel characteristics and the application areas in which the DCS is being used.
- Some of the important characteristics of a channel are :
1. Power required to achieve the desired S/N ratio.
2. Bandwidth of the channel.
- Some examples of digital communication channels are telephone channels, coaxial cables and optical fibers, as well as radio channels.
2.7 Important Signals :
- Signals are functions of time.
- The signal is a general term and it is not necessary that it is always an electrical quantity.
- Some of the important signals are shown in the following table alongwith their mathematical expression.
Sr. No. | Name of signal and mathematical representation | Waveform
2. | Unit step signal : x(t) = u(t) = 1 for t >= 0 ; = 0 for t < 0 | (step waveform)

2.7.2 Power :
- The normalized average power of a real signal x(t) is defined as,
P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x²(t) dt   ...(2.7.1)
- The above definition can be extended for a complex signal x(t) as,
P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |x(t)|² dt   ...(2.7.2)
- For a periodic signal with a period T0, the Equations (2.7.1) and (2.7.2) get modified to,
P = (1/T0) ∫_{-T0/2}^{T0/2} x²(t) dt   ...(2.7.3)
- For a complex periodic signal x(t) the average normalized power is given by,
P = (1/T0) ∫_{-T0/2}^{T0/2} |x(t)|² dt   ...(2.7.4)
2.7.3 Energy :
- The total normalized energy for a "real" signal x(t) is given by,
E = ∫_{-∞}^{∞} x²(t) dt   ...(2.7.5)
- However if the signal is complex then the expression for the total normalized energy is given by,
E = ∫_{-∞}^{∞} |x(t)|² dt   ...(2.7.6)
- Note that the energy is the area under the x²(t) curve over (-∞ ≤ t ≤ ∞), hence it is always positive.
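The short Python sketch below evaluates these two definitions numerically for an assumed example: a cosine as a power signal and a decaying exponential as an energy signal. Both signals and the sample rate are assumptions made only for illustration.

```python
import numpy as np

fs = 10_000                          # sample rate used for the numerical integration (assumed)
t = np.arange(0, 1, 1 / fs)

x_pow = np.cos(2 * np.pi * 50 * t)   # periodic (power) signal, expected P = 1/2
P = np.mean(np.abs(x_pow) ** 2)      # time average of |x(t)|^2

x_en = np.exp(-5 * t)                # decaying (energy) signal, expected E ~ 1/10
E = np.sum(np.abs(x_en) ** 2) / fs   # Riemann sum approximating the integral of |x(t)|^2 dt

print(P, E)                          # ~0.5 and ~0.1
```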
2.8 A Review of Fourier Series and Fourier Transform :
- In the field of communication engineering we need to analyze a given signal. To do so we have to express the signal in its frequency domain.
- The translation of a signal from time domain to frequency domain is obtained by using tools such as the Fourier series and the Fourier transform.
2.8.1 Fourier Series :
- Sine waves and cosine waves are the basic building functions for any periodic signal.
- That means any periodic signal basically consists of sine waves having different amplitudes, of different frequencies and having different relative phase shifts.
- Fourier series represents a periodic waveform in the form of a sum of an infinite number of sine and cosine terms. It is a representation of the signal in a time domain series form.
- Fourier series is a "tool" used to analyze any periodic signal. After the "analysis" we obtain the following information about the signal :
1. What all frequency components are present in the signal ?
2. Their amplitudes and
3. The relative phase difference between these frequency components.
- All the "frequency components" are nothing else but sine waves at those frequencies.
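As a small numerical illustration of this "analysis" (not part of the text), the Python sketch below estimates the amplitude of each harmonic of a square wave directly from the Fourier-series integral. The period and the choice of a square wave are assumptions.

```python
import numpy as np

T0 = 1.0                                   # assumed period of the periodic signal
f0 = 1 / T0
t = np.linspace(0, T0, 10_000, endpoint=False)
x = np.sign(np.sin(2 * np.pi * f0 * t))    # example periodic signal: a square wave

def coeff(n):
    # (1/T0) * integral over one period of x(t) * exp(-j 2 pi n f0 t) dt
    return np.mean(x * np.exp(-1j * 2 * np.pi * n * f0 * t))

for n in range(1, 6):
    print(n, round(2 * abs(coeff(n)), 4))  # harmonic amplitudes: odd harmonics ~ 4/(n*pi), even ~ 0
```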
2.8.2 Exponential Fourier Series [OR Complex Exponential Fourier Series] :
- Substituting the sine and cosine functions in terms of the exponential function in the expression for the quadrature Fourier series, we can obtain another type of Fourier series called the exponential Fourier series.
- A periodic signal x(t) is expressed in the exponential Fourier series form as follows :
x(t) = Σ_{n=-∞}^{∞} C_n e^{j2πnf0t}   ...(2.8.1)
where C_n = (1/T0) ∫_{T0} x(t) e^{-j2πnf0t} dt   ...(2.8.2)
Amplitude and Phase Spectrums :
- The amplitude spectrum of the signal x(t) is denoted by,
|C_n| = [(Real part of C_n)² + (Imaginary part of C_n)²]^{1/2}   ...(2.8.3)
- The phase spectrum of x(t) is denoted by,
arg(C_n) = tan⁻¹ [Imaginary part of C_n / Real part of C_n]   ...(2.8.4)
- The amplitude spectrum is a symmetric or even function, that means |C_n| = |C_{-n}|. But the phase spectrum is an asymmetric or odd function, that means arg(C_n) = - arg(C_{-n}).
2.8.3 Fourier Transform :
- A Fourier transform is the limiting case of the Fourier series. It is used for the analysis of non-periodic signals.
- The Fourier transform of a signal x(t) is defined as follows :
Fourier transform : X(ω) = ∫_{-∞}^{∞} x(t) e^{-jωt} dt
OR X(f) = ∫_{-∞}^{∞} x(t) e^{-j2πft} dt   ...(2.8.5)
- These equations are known as "analysis" equations.
2.8.4 Inverse Fourier Transform :
- The signal x(t) can be obtained back from its Fourier transform X(f) by using the inverse Fourier transform. The inverse Fourier transform (IFT) is defined as follows :
Inverse Fourier transform : x(t) = ∫_{-∞}^{∞} X(f) e^{j2πft} df   ...(2.8.6)
OR x(t) = (1/2π) ∫_{-∞}^{∞} X(ω) e^{jωt} dω
Amplitude and phase spectrum :
- The amplitude and phase spectrums are continuous and not discrete in nature. Out of them, the amplitude spectrum of a real valued function x(t) exhibits an even symmetry. That means,
|X(f)| = |X(-f)|   ...(2.8.7)
- And the phase spectrum has an odd symmetry. That means,
θ(f) = - θ(-f)   ...(2.8.8)
2.8.5 Properties of Fourier Transform :
Table 2.8.1 : Fourier transform properties
Sr. No. | Property | Mathematical expression
1. | Linearity or superposition | a1 x1(t) + a2 x2(t) ⇌ a1 X1(f) + a2 X2(f), where a1 and a2 are constants.
2. | Time scaling | x(at) ⇌ (1/|a|) X(f/a), where a is a constant.
3. | Duality or symmetry | If x(t) ⇌ X(f) then X(t) ⇌ x(-f).
4. | Time shifting | x(t - t0) ⇌ X(f) e^{-j2πft0}
5. | Area under x(t) | ∫_{-∞}^{∞} x(t) dt = X(0)
6. | Area under X(f) | ∫_{-∞}^{∞} X(f) df = x(0)
7. | Frequency shifting | e^{j2πfct} x(t) ⇌ X(f - fc)
8. | Differentiation in time domain | (d/dt) x(t) ⇌ j2πf X(f)
9. | Integration in time domain | ∫_{-∞}^{t} x(τ) dτ ⇌ (1/j2πf) X(f) + (1/2) X(0) δ(f)
10. | Conjugate functions | If x(t) ⇌ X(f) then x*(t) ⇌ X*(-f)
11. | Multiplication in time domain | x1(t) x2(t) ⇌ ∫_{-∞}^{∞} X1(λ) X2(f - λ) dλ
12. | Convolution in time domain | x1(t) * x2(t) ⇌ X1(f) X2(f)

Ex. 2.8.1 : Obtain the Fourier transform of a rectangular pulse of duration T and amplitude A as shown in Fig. P.2.8.1(a).
Fig. P.2.8.1(a) : Rectangular pulse
Soln. :
- The rectangular pulse shown in Fig. P.2.8.1(a) can be expressed mathematically as,
x(t) = A rect(t/T) = A for -T/2 ≤ t ≤ T/2
= 0 elsewhere
- This is also known as the gate function.
- Therefore the Fourier transform will be,
F[x(t)] = X(f) = ∫_{-T/2}^{T/2} A e^{-j2πft} dt   ... by definition of FT
= (A/πf) [ (e^{jπfT} - e^{-jπfT}) / 2j ]
∴ F[x(t)] = (A/πf) sin(πfT)   ...(1)
- Multiply and divide the RHS of Equation (1) by T to get,
F[x(t)] = AT [sin(πfT) / (πfT)]   ...(2)
- In the above equation, sin(πfT)/(πfT) = sinc(fT)
∴ F[x(t)] = AT sinc(fT)
i.e. A rect(t/T) ⇌ AT sinc(fT)
- Thus the rectangular pulse transforms into a sinc function.
Amplitude spectrum :
- The amplitude spectrum of the rectangular function is as shown in Fig. P.2.8.1(b).
- As sinc(0) = 1, we get AT sinc(0) = AT.
- The sinc function will have zero value for the following values of "fT" :
sinc(fT) = 0 for f = ±1/T, ±2/T, ±3/T, ...
Fig. P.2.8.1(b) : Amplitude spectrum of a rectangular pulse
- The negative amplitudes of the spectrum AT sinc(fT) can be absorbed by introducing a 180° phase shift in the phase spectrum, as shown in Fig. P.2.8.1(c).
Fig. P.2.8.1(c) : Amplitude and phase spectrums for a rectangular pulse. Negative values of |X(f)| have been absorbed in the additional phase shift of 180° in the phase spectrum.
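A quick numerical cross-check of this result is sketched below in Python: the FFT of a sampled rectangular pulse is compared against AT sinc(fT). The amplitude, duration and sample rate are assumed values used only for the check.

```python
import numpy as np

A, T = 2.0, 0.5                 # assumed pulse amplitude and duration
fs = 1000                       # assumed sample rate
t = np.arange(-4, 4, 1 / fs)
x = np.where(np.abs(t) <= T / 2, A, 0.0)          # rectangular pulse A*rect(t/T)

X = np.fft.fftshift(np.fft.fft(x)) / fs           # numerical approximation of X(f)
f = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
X_theory = A * T * np.sinc(f * T)                 # np.sinc(u) = sin(pi*u)/(pi*u)

# Compare magnitudes (the FFT carries an extra linear phase from the time origin)
print(np.max(np.abs(np.abs(X) - np.abs(X_theory))))   # small numerical error
```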
Ex. 2.8.2 : For the sinc function shown in Fig. P.2.8.2(a), find the Fourier transform and plot the spectrum.
Fig. P.2.8.2(a) : A sinc pulse
Soln. :
1. The sinc signal shown in Fig. P.2.8.2(a) can be expressed mathematically as,
x(t) = A sinc(2Wt)   ...(1)
2. To evaluate the Fourier transform of this function, we are going to apply the duality and time scaling properties of the Fourier transform. Refer to Example 2.8.1 where we have obtained the Fourier transform of a rectangular pulse of amplitude A and duration T as,
A rect(t/T) ⇌ AT sinc(fT)   ...(2)
3. Using the duality property we can write that,
AT sinc(tT) ⇌ A rect(f/T)   ...(3)
Comparing the LHS of Equation (3) with the RHS of Equation (1) which states the expression for x(t), we can write that,
A sinc(2Wt) ⇌ (A/2W) rect(f/2W)   ...(4)
Thus the Fourier transform of the sinc pulse is a rectangular spectrum of width 2W. The phase spectrum has not been shown as it has zero value for all the values of f.
Some standard Fourier transform pairs :
Double exponential : x(t) = e^{-a|t|} ⇌ X(f) = 2a / (a² + (2πf)²)
Unit impulse : x(t) = δ(t) ⇌ X(f) = 1
DC signal : x(t) = 1 ⇌ X(f) = δ(f)
Sine signal : sin(2πfct) ⇌ (1/2j) [δ(f - fc) - δ(f + fc)]
Cosine signal : cos(2πfct) ⇌ (1/2) [δ(f - fc) + δ(f + fc)]
2.9.2 Rayleigh’s Energy Theorem :
- The Rayleigh's energy theorem is analogous to the Parseval's power theorem. It states that the total energy of the signal x(t) is equal to the sum of energies of its individual spectral components in the frequency domain. The total normalized energy of a signal x(t) is given by,
E = ∫_{-∞}^{∞} |X(f)|² df   ...(2.9.3)
- Thus the total normalized energy is equal to the area under the curve corresponding to the square of the amplitude spectrum |X(f)|² of the signal.
2.10 Spectral Density Functions :
- The spectral density of a signal is used for defining the distribution of energy or power per unit bandwidth as a function of frequency.
- The spectral density of energy signals is called the "Energy Spectral Density (ESD)" while that of the power signals is called the "Power Spectral Density (PSD)".
2.10.1 Energy Spectral Density (ESD) :
- According to Rayleigh's energy theorem, the total energy of a signal is given by,
E = ∫_{-∞}^{∞} |X(f)|² df
- The Rayleigh energy theorem not just provides a useful method of evaluating the total energy but it also tells us that |X(f)|² can provide us the "distribution of energy" of the signal x(t) in the frequency domain.
Definition of ESD :
- Therefore the squared amplitude spectrum |X(f)|² is called the "Energy Spectral Density (ESD)" or "Energy Density Spectrum".
ESD : ψ(f) = |X(f)|²   ...(2.10.1)
- The most important property of ESD is that the area under the ESD curve represents the energy of the signal.
i.e. ∫_{-∞}^{∞} ψ(f) df = E
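The Python sketch below checks this property numerically for an assumed example signal: the energy computed in the time domain matches the area under the ESD |X(f)|² computed from an FFT.

```python
import numpy as np

fs = 1000                                 # assumed sample rate
t = np.arange(-5, 5, 1 / fs)
x = np.exp(-np.abs(t))                    # example energy signal, E = 1

E_time = np.sum(np.abs(x) ** 2) / fs      # integral of |x(t)|^2 dt

X = np.fft.fft(x) / fs                    # approximate X(f)
esd = np.abs(X) ** 2                      # energy spectral density |X(f)|^2
E_freq = np.sum(esd) * fs / len(x)        # integral of |X(f)|^2 df  (df = fs/N)

print(E_time, E_freq)                     # both ~ 1.0
```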
2.10.2 Power Spectral Density (PSD) :
- We now define the power spectral density of a periodic signal x(t) as follows :
Power spectral density : S(f) = Σ_{n=-∞}^{∞} |X(nf0)|² δ(f - nf0)   ...(2.10.2)
- Equation (2.10.2) shows that the power spectral density is a discrete function of frequency. This is indicated by the term δ(f - nf0) in Equation (2.10.2). The power spectral density function is present only at the harmonic frequencies of f0.
- The most important property of PSD is that the area under the PSD is equal to the average power of the signal x(t).
P = ∫_{-∞}^{∞} S(f) df   ...(2.10.3)
2.11 Correlation of Energy Signals :
- The correlation function is used as a measure of similarity between two signals. Higher the value of the correlation function, more is the degree of similarity. The correlation function is of two types :
1. Auto-correlation function
2. Cross-correlation function
2.11.1 Auto-Correlation Function for the Energy Signals :
- The auto-correlation function for an energy signal x(t) provides the measure of similarity between the signal x(t) and its time delayed version x(t - τ). The auto-correlation function for a real valued energy signal is given by,
R(τ) = ∫_{-∞}^{∞} x(t) x(t - τ) dt   ...(2.11.1)
OR R(τ) = ∫_{-∞}^{∞} x(t + τ) x(t) dt   ...(2.11.2)
- If the signal x(t) is complex valued then the auto-correlation function for it is defined as follows :
R(τ) = ∫_{-∞}^{∞} x(t) x*(t - τ) dt   ...(2.11.3)
Relation between ESD and auto-correlation :
- The ESD and the auto-correlation function of an energy signal form a Fourier transform pair. That means,
R(τ) ⇌ ψ(f)
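Below is a small numerical sketch (assumed example signal) of this auto-correlation: np.correlate evaluates R(τ) on a grid of lags, and R(0) equals the signal energy.

```python
import numpy as np

fs = 1000                                  # assumed sample rate
t = np.arange(-5, 5, 1 / fs)
x = np.exp(-np.abs(t))                     # example real valued energy signal

R = np.correlate(x, x, mode="full") / fs   # R(tau) for lags spaced 1/fs apart
lags = np.arange(-len(x) + 1, len(x)) / fs

E = np.sum(x ** 2) / fs                    # signal energy
print(R[lags == 0][0], E)                  # R(0) equals the energy (~1.0 here)
```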
2.11.2 Cross-Correlation of Energy Signals :
- The "cross-correlation function" can be used to obtain the measure of similarity between a signal and the time delayed version of a second signal. The cross-correlation of two real valued energy signals x1(t) and x2(t) is given by,
R12(τ) = ∫_{-∞}^{∞} x1(t) x2(t - τ) dt   ...(2.11.4)
- We can define the second cross-correlation function as,
R21(τ) = ∫_{-∞}^{∞} x2(t) x1(t - τ) dt   ...(2.11.5)
- If the signals x1(t) and x2(t) are complex valued signals of finite energy then the cross-correlation is defined as :
R12(τ) = ∫_{-∞}^{∞} x1(t) x2*(t - τ) dt   ...(2.11.6)
2.11.3 Auto-Correlation Function of Power Signals :
- The auto-correlation function of power signals is a measure of similarity between a periodic signal x(t) and its delayed version x(t - τ). If the signal x(t) is periodic with a period T0, then the auto-correlation function over one complete period T0 is defined as follows :
Auto-correlation function of periodic signals : R(τ) = (1/T0) ∫_{-T0/2}^{T0/2} x(t) x(t - τ) dt   ...(2.11.7)   (for x(t) real valued)
- Note that this expression is the same as that for the auto-correlation function of energy signals, except for the inclusion of 1/T0 and the change in the limits of integration.
- For any period T it is defined as,
R(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x(t) x(t - τ) dt
(for x(t) to be real valued)
- For any period T with x(t) complex,
R(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x(t) x*(t - τ) dt
- For any period T with x(t) complex and τ in the negative direction,
R(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x(t + τ) x*(t) dt   ...(2.11.8)
Both x(t) and x*(t - τ) are advanced by τ to get x(t + τ) and x*(t) respectively.
Relation between correlation and PSD :
- The PSD and the auto-correlation function of a power signal form a Fourier transform pair. That means,
R(τ) ⇌ S(f)
2.11.4 Cross-Correlation of Power Signals :
- Cross-correlation is defined for two different signals. If x1(t) and x2(t) are two different power signals then the cross-correlation between them is defined as,
R12(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x1(t) x2*(t - τ) dt   ...(2.11.9)
- Similarly we can define a second cross-correlation function,
R21(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} x2(t) x1*(t - τ) dt   ...(2.11.10)
- If the two periodic signals x1(t) and x2(t) have the same time period T0, then the cross-correlation is defined as,
R12(τ) = (1/T0) ∫_{-T0/2}^{T0/2} x1(t) x2*(t - τ) dt   ...(2.11.11)
- The cross-correlation has some finite value only if there is some similarity between the two power signals x1(t) and x2(t) (of the same fundamental frequency, for example).
Review Questions
Q. 1 State various sources of information signals.
Q. 2 What is a transducer ? Why is it used ?
Q. 3 With the help of block diagram explain the process of A to D conversion.
Q. 4 Write a note on : Graphical representation of A/D conversion.
Q. 5 State advantages and disadvantages of digital representation of a signal.
Q. 6 With the help of block diagram explain the operation of digital communication system.
Q. 7 Explain how digital communication is superior to the analog communication.
Q. 8 State advantages and disadvantages of digital communication.
Q. 9 Compare analog and digital communication.
Q. 10 Write a short note on : Channels for digital communications.

❑ ❑ ❑
Information Theory and
Source Coding
‘Shannon's source coding
source coding,
tional entropy,
tial entropy, Joint and condi
iy theorem.
Syllabus :
Measure of information and properties, Entropy and its rel
theorem, Shannon, Fano source coding, Huffman source coding
Mutual information and channel capacity, Cannel coding theorems
pores, Min
ero
Channel capaci
‘te information theory is wsed f0 find answers {0 many
3.1 _ Introduction to Information Theory : questions related to a communication sytem, But the most
sas walt as [ morant questions ht it answer ar 8 ows
1. How fara signal can be compressed ? What it the Fimit
= After the World War Il, the field of communication started ‘on signal compression
seeing very iy and egies stated desing more | Q.2, Wh ise ma possible transmis
efficient and more reliable systems. reliable communication overa noisy channel ?
aa er genon pase hs page on formation | ~The answer t tes questo can obtained from the
hea in 1948, Te was refined ater on by many otier | enopy ofthe source and the capacity of a channel.
eminiin 3.2 Uncertainity :
Shannon suggested that efficent communication from a |
a ~The words uncertainty, suprise and information are al
to deszaton i pssible sing sure coding related to each other. Before an event occurs, there is an
aan bi commenction vera nisy chancel can | yeriny, when the erent actually takes place there an
achieved by sing errr control coding amount of surprise and after the event has taken place there is
= The performance of communication systems is dependent on | gsi of information.
treo a = The concept of information is related t0 the “uncertainty
1. The available signal pone ‘This canbe explained using the following sentences:
8 ig
= Any communication system needs to be
reliable.
sion rate fora
2 Meta msndesh 1. Bath eolves round sn,
3. Lntstnvtactesel | 2 eit cnet
= Til ow we nave ved the signal theory forthe anys of | 3, India may win the wold x
the noise on transmitted sig but limitation of sig a
ars ao aa ie a et) = = The first sentence does not have any uncertainity or surprise,
ion isa cnt alin fae clement Hence te infomatos contin tis minimum
= ‘Therforg, anew approach called “formation theary" tas | ~ Bit tnek t the other ovo sentences, They are Fatt of
aoe ato fh Te ey ea
The information theory deals with three basic concey a -
E y "| Extending this concept we can say that if source transmits 2
ee ressage of probability p, then the information carried by he
ose message wil increas its probability p, decreases, ie 8
the mssage becomes less ily. So higher the uncet
3 Ue of cong for ing duel sty tr teteefsoeatan, eae wmeman
information transfer. '
3.2. orm
Information theory is a broadly based mathematical 2 Rieu ation (Mest
discipline and it is applicable to various fields such as
communications, computer science, statistics and probability.
= Inthe field of communications the information theory is used.
for mathematical modelling and analysis of a communication
system.
Scanned with CamScanner|
|
Scanned with camocanner
cL Commurication Ear:
To explain what is
U) e
Hem transmiting messages m,
ilies of occurence of these
"spectively, Then the amount of
Aho gh the sin
\= mm (R) an
‘This is the definition of the information. *
User oman
= The information 1, defined i
Te nermn didi Bion 2.2. ly a
imensionless quantity. But by convention the unit atached
‘otis “bits” This is, when the base ofthe foprith ie 2
If we change the base of the logarithm, then the unt of
information will also change. ,
For the natural logarithm (bse “the unit of infomation is
"nat", and for base 10 the units “Harley” ot “ect
Previously the term “bit” was used to represent binary digit
So hence-forh we ar going to use the tm “it” a unit of
information and represent the binary digit bythe term “bin”
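To make the unit conversion concrete, the short Python sketch below evaluates I = log(1/p) in the three units mentioned above (bits, nats and decits), using the value p = 1/4 taken from the worked example that follows.

```python
import math

# Information carried by a message of probability p, in three different units.
p = 1 / 4                       # probability taken from Ex. 3.2.1 below
I_bits = math.log2(1 / p)       # base-2 logarithm   -> bits
I_nats = math.log(1 / p)        # natural logarithm  -> nats
I_decit = math.log10(1 / p)     # base-10 logarithm  -> decits (Hartleys)
print(I_bits, I_nats, I_decit)  # 2.0, ~1.386, ~0.602
```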
3.2.2 Properties of Information :
Q.1) Define ‘amount of information’, Discuss. the
different properties of information, Also define
Entropy 2 (May 06, 10 Marks)
The important properties about the amount of information
conveyed by a message ae as follows:
‘The information contained in an absolutly certain event is
zero, That means,
Let the pro
messages be Py Byte,
“information” transmitted I
alm, is given as,
where,
T= 0 wforpat G22)
2. The information contents of a message increases with
‘decreas inthe value of its probability of occurence (p,
‘That means the most unexpected event (which has the least
probability of occurrence) will
information. (see Ex. 32.6).
3, The occurrence of an event, either provides some information
‘or no information, but itis never associated with a loss of
‘information. That means
1 2 0 .forOspst
4, is. continuous function ofp,
contain maximum
G23)
'S. The total information of two or more mutually independent
message signals or events is equal to the sum of the
information contents ofthe individual messages. i.
1, = Uththt~ G24)
‘Total information
Information Theory and Source Coding
Ex3217 A mossago signal m, is transmitted by @
transmittor, The probability of occurrence of
this signal fs 1/4. Calculate tho information
conveyed by iti torms ofits, nats and deci.
Soln.:
| hence using the definition of
1. Tis given that py = Renee using
information we get,
tote
tog, *Toped
2s whos.
terms of nats : (Nog base “e”)
Hog, (17 Pu)
anAns.
‘This shows that I bit = 0.693 nats
3, Information in terms of decit log base 10)
A, = log l 1/9.) =logi (41 = 06 dect
This shows that bit = 0.3 decit Ans.
32.2: Ina binary PCM eystem the bins O and 1 are
transmitted. f they have the probabilities of
ant 1 ramedivey hen eae te
information conveyed by each one of them
‘and comment on the result
1
3
Po = 4 and P= 4
Information conveyed by biit 0 = log, (4/3)
Jogi (4/3)
T0892
Information conveyed bythe int I= log, (4
IRE Ckaple oves thal the Information content of @
= 0.415 bits
bits
of easurence is higher.
For the same system in the previous example
calculate the amount of information conveyed
by the bins if they are equally likely to be
transmitted.
Soln.:
‘The binits O and 1 are equally likely to be wansmited,
Py = pand ~ ay
ote = 1 ~@)
Here py = Probability of
and p, = Probability of “1” being transmitted.
being transmitted.
P= MHz
Tnformation conveyed by binit 0 = Jy =og,2= 1 bitand
Information conveyed by bint |= I, =log,2= bitFP vicitat Communication (E&TC-MU)
Ex.324:
For a system that transmits M oqually Iikoly
message signals where M = 2" (N is an
integer), prove that the information in. each
message is N bits.
Soln.:
kis given that there are M message signals mm,
Ang all are equally likely, ie. probability of each message
‘ill be the same, Now,
7
DytPyton thy = “
lt
Ptptitp =
2 Mxpe
P @
“Tris isthe probability of ocumence of ach mess
Tnformation in each message
T= Tog, Mp} =loe [2°] = bits 3)
ts used in each
“This is a PCM system with "N* numberof bins used i
rination content per message is also
‘ransmited message. The infor
ill
1 bits, Thus the numberof binary digits (bits) per message wi
‘be numerically equal 0 the Bits of information.
For wo independent messages m, and m,
prove that the total amount of information
| conveyed is the sum of the information
‘associated with each message incividvally.
E325:
Sotn.:
the potable of ccc oth messages m and,
be p and, openly, eft indivi Hnfomatin
contents willbe:
bon (aR) w
tog, (1/P,) ool)
As the two messages ately indepen
probaly ef tersimdnsecaraesi piensa
and
P= PXxP, ~@)
‘Therefore the corresponding information is given a,
1, = 10g,{1/p] =lop,(1/p,,}
1, = bog (V/p,]+loe,(/p]) (8)
he = th (3)
| a transmitter is supposed to transmit only
fone message m, always, prove that the
information conveyed by this message is zero.
Ex. 3.2.
Soin. :
As only one message is being transmitted its probability of
‘occurrence will be maximum ie. unity. _
33 Information Theory and Soutee Cedi
cried by a "eran 0 SE” EVER jy
“Thus the informatio
zero.
3.3. Types of Source
ication We
ication of sure
.s and their Models ;
come across different types
Inthe fel of commu Ey te at tee ig
of sources. TM
Fig. 33
‘nolo e009
‘iatonary cour
Pwecrena (OMS)
tassification of sources of information
com Fig. 33.
43.3.1 Analog Informatio
ation systems are designed (0 ansmit the
ource 10 some destination,
yn Sources =
= Communics
information generated by a 5%
Information sources can be of different tYPES-
re in the radio and TV broadcasting the source js
= For exampl "
jc) ot moving images
generally an audio (voice or musi
(video).
= Thesignals
therefore these sources are called as
3.3.2. Discrete Information Sources =
produced by these sources ae analog signals an
“Analog sources”.
If source of information is a digital computer or a storage
device such as magnetic tape or optical disk then their output
is a diserete time signal such as a binary signal or an ASCIL
signal
= Hence the sources producing a’diserete output are known as
the discrete sources.
= Thus the output of a diserete information source is inthe form
of string of binary digits 10101111...
Discrste Memoryless Source (DMS) :
— Ifthe current output leter (or symbol) produced by a discrete
source is statistically independent from all the past and future
‘outputs then the source is called as “discrete memoryless
source (DMS)".
Stationary sources :
~ Ifthe output of a discrete source is statistically dependent 00,
its past of future outputs, then it is called as the discrete
peau stationary source,
Substaue this value in the equation of infomation, ~ The example ofa discrete source isthe so 7
~ ones aren e isthe source generating the
Bio?
+ A = Obits,
Scanned with CamScannerEze eee
Digital Communication (E&TC-Mu)
3.4 Average Infor
¥
a4
mation or Entropy:
CAI
Re ‘tory M2Y 05: 10 Marke)
mer ater ana ‘fomaton rato of discrete
0.8 rte maton ze, 4 Nar
theorem 0" Channel capacity 4, Srenaa® Hartoy
oo (May 11, 10 Marko)
MFOPY of an informat
maximum ?
tion sourco ? When
(Bec. 18, 4 Marks)
‘maximum transfer of information,
We will prove that the
Probabilities of the symbol
source,
troy depends only on te
's that are being produced by the
Dative the condition for maximum entropy of @
Source. How does entropy vary wth probably ?
(May 10, May 13, Dec. 13, § Marks)
Derive an expression forentropy.
(May 12,2 Marka)
entropy. When is entropy
(Dec. 16,5 Marks)
Follow the steps given below to aban the expression for
eatropy.
Steps tobe followed:
Step 1: Lat tev be M dlerar maxages my Amp
thee probable of occurences be Py Py
Step: Let thes be tal L messages. Therefor thee ae pL
numberof, messages, pL amber of m, messages
et.
‘Step 3: Caleulae the information conveyed by mssge m,
1=1og, (1/5,
Step: Calculate the total information conveyed by wm, as
eras) = EXT, |
Step 5: Similarly eaeuste the ttl information forall te
‘other messages, 4¢-Tecrgay Lyctea)t
Step 6: Add all. these eos ae
cra aaa) =
Derive the expression for.
maximum 2
Information Theory and Source Coding
iter is wansiting M ferent and
sare tpstgeh my yey Lt ei robabiies of
coceumence be ps Pry fspectively.
Suppose that during a long period of transmission a sequence
of L messages is generate. ne
1. Then if Lis very very large we can expect that in the L
message sequence, int
Pp, Lemessages of m, are transmit
p,Lemessages of m, are transmitted
pr, Limessages of m, ae ransmited
pub mesoges of mae tenis
“The information conveyed by the message m, is given as,
1, = tog, 0p)
However there are py Lumber of messages of m, Therefore
the information conveyed by py L number of messages will
a Tray = PrL log, 1p) GAD)
Similarly the teal information conveyed by py L number of
sm, messages is given as
Tycreeny = Pal logs (1/2) 43.4.2)
Similar expression can be writen forthe remaining messages.
3. As we already know, the total information of more than one
mutually independent message signals is equal to the sum of|
he information coatent of individual messages. ie.
Tera = icra theres tacraay te B43)
Substitute the values of I ¢ rat» Hert) A. from the
Equations (34.1) and (3.42), to get,
PiLlory (1/41 + Pe Ltogs (1/3)
+s L-log, [p51 +
Neroaty = LfP; logs (1 p,)+ pp log, (1 p2)
+ Plog, (Py) ted G45)
‘The “Entropy” is defined as the average information per
Imessage interval. It is represented by the symbol “HT.
‘Therefore from Equation (3.4.5), we can write that,
Kray
44)
1
cia)
a=
= Plog (1/,)+B3l08 (11) +. G46)
M
Entropy : H = Σ_{k=1}^{M} p_k log2 (1/p_k)   ...(3.4.7)
k
the ol information otsioed in sep to
oa the epson fx eniopys fos o Sac
This isthe expression for Eatopy.
“TB cipresvon lisues at Ue eauopy of W Sole Is
0
H = pylog,(1/P,)= 2)
‘Thus the average information or entropy of the most likely
and most unlikely messages is zero.
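The Python sketch below evaluates the entropy expression H = Σ p_k log2(1/p_k) derived above for a few assumed probability sets, including the limiting case of a certain message.

```python
import math

# Entropy H = sum over k of p_k * log2(1/p_k), in bits per message.
def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits/message
print(entropy([0.5, 0.5]))          # 1.0 bit/message (two equally likely messages)
print(entropy([1.0]))               # 0.0 (an absolutely certain message carries no information)
```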
3.4.3. Entropy of a Binary Memoryless
Source :
— Consider a binary source for which the probebilty of
‘occurrence of symbol 0 is say p and that for symbol 1 is say
(-p.
We assume that the source is memoryless so every symbol
transmitted by the source is statistically independent of the
remaining symbols.
= Letus obtain the entropy of such a source as illustrated in the
following example.
ay
Information Theory and Source Coctn,
For a source transmitting two independe,
messages m, and m, having probabilities oy
pand (1 -p) respectively, prove that the
entropy is maximum when both
messages are equally Ikoly. ISO plot thy
Variation of entropy (H) as a function oy
probability (p) of one of the messages,
PEEaT
Soin. Itis given that,
‘Message m, has a probability of p and message m, hay q
342?
probability of (1~P)-
“Therefore the average information per message is given as,
H = plog,( Mp) + (1-p)loe (WP) (ty
1. ntropy when both the messages are equally likey :
‘When m, and m, are equally fkely,
p= (-p)
Pao ~-Q)
Substituting in Equation (1) we eet
H = Hog, (2)+3 108; (2)
cH = I bidmessage ~Q)
1 will be proved in the next part of the example that this
value of H is its maximum value
2. Variation of H with probat
‘Table P. 34.2 shows values of H for different values of (),
“These values are obtained by using Equation (1).
Table P.3.4.2
yp?
oa |os|os}os | +
t-p] 1 [os Jos | os} 04 | 02} o
o97| 1 | 097] 072! 0
oa
06
04}
oa|
1
02 040508 08 10)?
(€-35 Fig. P.3.4.2: Average information H for two messages
plotted as a function of probability p of one of the messages
Scanned with CamScannerOS ———
Thi
Hosg= | bidmessage and
equally likely.
Ex For a source transmi
re Mz..having probabil
the average informat
ton/mossay
messages are equally likely,
iting M messages, m,
HieS Of Py, De san find out
199 when all tho
Soln. :
“AS all Be M meses are ely ely
‘occurrence of each message is given as, "
PLE Pas np
2 oa Py = WM “0
‘Therefore the average informati
38¢ information per message is given as,
the probability of
He Z plond/py
KI
B= py logs ( ip.) +p logs 1p,)+
+ loss (py)
1
Bil (OM) +9 log, CM) +. 43 tog, CM)
H = tog, (M) @
Note: it can be shown that for” M tranemited
messages, the maximum value of “HY is
obtained when all the messages are equally
Trely. ns
Hue = log,(M) G41)
4 Extension of a Discrete Memoryless
Source :
ER
a ener eee near cere eee
cena nascar erica
ees ota conte
Secs
ee raters meat coer oats
ws chi often Be comand en
ona
eee
2 EERE EEN TeTeLos PAPERTLT
Sse GOT ENTERITIS
fn
oinstie 341
= Ao wecan mak op of? or 3 or 4 ces isto
fmmancbucr Sous, ert
= Wecanimagn hatch uch lock big rode yan
‘extended source having M" distinct blocks where M is the
onera ds mesos yl) ne gil some
— Foreuaple ite gino binary source en he
wrbere dine soe: ttiprtacs M2 (1)
Information Theory and Source Coding
ie form groups of 2. n= 2hen the extended source will
have Me. 2 = 4 distinc blocks namely 00, 01,10 and 11
tn = 3 then the extended source will have M!
stinc locks (000 to 111).
In case of a discrete memoryless channel all
symbols are statistically independent.
fe symbol of an extended source
the source
= So probability of a sours
(ee p (00) orp (01 et.) 8 equal to the probity of the
product of n source symbols present in cach block
“That meang ifn =2, then the source symbols produced by the
extended source are 00, 01, 10 or 11 and their probabilities
arc as follows
(0) = pOxPO
p(t) = pox)
pao = pOxpO
pad = p@xpd)
= This is because the joint probability p (AB) for statistically
independent events A and B is
p(AB) = p(A)-p(®)
3.4.5 Entropy of the Extended Source :
= Letthe entropy ofthe original discrete memoryless source be
H (X) and that of the extended source be H (X").
— Then from the discussion we had till now, we can write that
the entropy of extended source is
HX) = aH@) G12)
This concept is illustrated in the following two examples.
Ex.344: A discrete memoryless source is capable of
transmitting three distinct symbols me, m, and.
m,, Their probabilities are 1/2, 1/4 and 1/4
respectively, Calculaté the source entropy.
Tofind : Source entropy.
Note that there is no blocking or grouping of symbols, my, ma,
and m,, So the given source isnot an extended one,
= Sothe source entropy is given by
HX) = polos (ly) +, logo.) +P oe)
1 1
Flog, (2) + log, (4)+4 10g, (4)
o1,2,2.3
2a
bits Ans.
canned wit
amscannerEx.3.4.5: Find the entropy of the second order extondod
source of the source used in the previous
example,
Soln.:
Step 1: Write the symbots of extended source:
= Now the source is an extended one with second onder
extension i.e, n
= Soeach block will contain two symbs out of mm and my
~ Table P. 3.4.5(a) shows the nine possible symbols of the
extended source,
Table P.3.4.5(a) : Symbols of extended source
‘Symbois of
the seeond | r= | o1= | oe | ove | or | o4= | oe | or= | =
onder {am | mr mm] my] mr} mm] mr] as | Me
‘extended
source
‘Step2: Obtain the probabilities ofall symbols:
= Probability of symbol ois given by,
POG) = PCM) XP Cm)
because my and my ae statistically independent.
PO) = 3XZ55
= Similarly it is possible o obtain the probabilities ofthe other
symbols, They ae istedin Table. 34500)
‘Table P.34S(b) : Symbols and their probabilities
mt
16
mm
6
me
1116
|
18
rm | mm,
118 [' 1N6
cn
118
ma
ve
ym
18
I
| Censor ane
' 8
| © PCo,)-log, (1/069)
i=0
=
%
1
frog, (4)+ Hog, (8) + § lots (8)+ G08 (8)
+ Heo, (16) + Telos (16) + Flom (8)
| hee, (16)+hon (16)
(
2,2
= 343.
HX’) Ans.
- We see here that
H(X^2) = 2 H(X)
- Hence we have proved that
H(X^n) = n H(X)
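The result H(X^n) = n H(X) can also be checked numerically. The Python sketch below does this for the source of the two examples above (symbol probabilities 1/2, 1/4, 1/4 with statistically independent symbols).

```python
import math
from itertools import product

# Source of Ex. 3.4.4 / 3.4.5: three symbols with probabilities 1/2, 1/4, 1/4.
p = [0.5, 0.25, 0.25]

def entropy(probs):
    return sum(q * math.log2(1 / q) for q in probs if q > 0)

H1 = entropy(p)                                   # 1.5 bits (entropy of the original source)
# Second order extended source: blocks of n = 2 independent symbols,
# so each block probability is the product of the two symbol probabilities.
p2 = [a * b for a, b in product(p, repeat=2)]     # 9 block probabilities
H2 = entropy(p2)                                  # 3.0 bits = 2 * H1
print(H1, H2)
```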
d Sours
a Digital Communication (E&TC-MU) 7 Information Theory and Sou
What is entropy ? Show that the entropy ig
maximum when all the Messages ary
‘oquiprobablo. Assume M =
Ex. 3.46
Soln
1. Let there be three messages Mir
cof preps and py spectively:
m, and m, with probabilities
2, Asal the messages ar equiprabable
pe pets IBERIA REE
4, Therefre the entropy is given PY>
= ploy (p,) +P 108 (I/ 3) +P 08s Up)
Sustitng te valves WEEE,
1 1
w= Hog.) 4508) +5108
«yet = i310 ise
Information Rate (R):
DE
nd information rate of discrete
Se ius a (Dec. 10, 4 Marks)
35
Tf the source of the messages generates “” number of
messages per second then the information rate is given as,
R= kH GSI)
Where, 1 = Number of messages/sec,
and Hi = Average information/message (Entropy).
Units of information rate
information”
R= —_
(=)
R = Average information per second expressed
in bitsisee.
Ex. 3.5.1
‘An analog signal is bandlimited to 4 kHz. Itis
‘sampled at the Nyquist rate and the samples
are quantized into 4 levels. The quantization
levels Q,, Q,, Q, and Q, are independent
probabilities
3
and have the
P= P= and ps = py
information ate ofthe source.
messages
Find the
Soin. :
Data
1, tis given that there are 4 messages Q\, Qy,Qs_and Q, wi
aitesot tt Ban
Probables of § gg andy
‘ilz. Therefore sampling rate = 2x 4 = 8 kHz
Scanned with CamScannerInformation Theory and Source Coding
The average Soln.:
“38 information per message is ound Given that: 1, ot ration 02 sc.
A by Loe.) + py | nda, 2, Dash duration: 3 x0.2-= 0.6 sec.
+h be tE nee 3. P(do =P (da) =2 Pah
BNP * Pe logs (trp) 4. Space between symbols i 0.2 se.
$ lors 8) 43 lon te =7
8 1850-43 tg, 2) 2 tg Information rte
2H = 18bitdmessage 1.” Probabilities of dote and dashes
©) Ra
‘of message generation () [mea | ean probity ofa dash be "PY Therefore the probability
sec]: a dot will be "2", The total probability of tansmiting dots and
As the samp es per
ths sampling 6 is 8 Kia, we have S000 I
samples per
second. Each sample is ‘conve ‘dashes is equal to 1.
lO, menage) Theat fhe for qntzatn ee Pea + Ply = 1
SE). Therefore we have 800 messagece * , pen
ems s LPs
© Thee 8000 messagesisce a a P+
ion Fates Probability of dash = 1/3 }
R = Hxr=18x 8009 and probability of dot = ee)
R= 14400 bitssec
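As a quick cross-check of the example above, the Python sketch below recomputes H and R = r x H from the given probabilities; the small difference from 14400 bits/sec comes only from rounding H to 1.8 in the hand calculation.

```python
import math

# Ex. 3.5.1: four quantization levels with probabilities 1/8, 3/8, 3/8, 1/8,
# sampled at the Nyquist rate 2 x 4 kHz = 8000 messages per second.
p = [1/8, 3/8, 3/8, 1/8]
H = sum(q * math.log2(1 / q) for q in p)   # ~1.811 bits/message (rounded to 1.8 in the text)
r = 2 * 4000                               # messages per second
R = r * H                                  # information rate in bits/sec
print(H, R)                                # ~1.811, ~14490
```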
2. Average Information H (X) per symbol :
2: Assuming that the
=P (dot) fog, [ 1/P (dt) ]
messages Q,, H(X) = P(dot)- fog
= in me 3.5.1 are to be Tenerted ee a +P (dash) “log, [ 1/P (dash) ]
ina
ae ry POM, calculate the information rate, HQ) = @13)og, [32] + (1) log, (31
‘The four messages can be identified by using binary code as (0.3899 + 0.5283 = 0.9182 bits/symbol
indicated in the following table :
Symbol rate (Number of symbols/sec.)
Gea ay Binary code voto” total average time per symbol can be calculated as
Q ie w ‘Average symbol time
Q. in mn T, = (TporXP(DOT))
a 378 10 + [ToasiP DASH) 1+ Types
@ 378 1 1, = [02x23 }+(06% 13] +02
1. Number of messages per second = 8000 : = 0.5333 see symbol
rae — emesis on - Hence the average rate of symbol transmission is given by :
f= 1,2 1.875 symbolssee
16000 binitssec
4 4, Information rate (R) :
2 Entropy H = © pylog, (1! p,) R= rxH(X)=1875x09182
k=I
2 3
3108, 8+2x § 98, (8/3) = 0.75 +0257 3.54: The voice signal in a PCM system is quantized
= 09758 bits in 16 levels with the folowing probabities :
3. Information rate R =x H = 16000 0.9758 Fi fs
15612.36bitslec. a
Ex.353: Consider a telegraph source having two Pe
symbols, dot and dash, The dot duration is Calculate the entropy and information rate.
0.2 seconds; and the dash duration is 3 times Assume fy = 3 kHz. :
the’ dot duration. The probabiliy of the dot
‘occurring is twice that of the dash, and the | its given tha,
time betwoon symbols is 0.2 seconds. | 1. The number ofevele= 16,
Cala the internation rate of ho 1a9h |” Tera nner of mesgen = 16
source, 2 f=3kt,
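The symbol-rate step of Ex. 3.5.3 (an average symbol time that weights the dot and dash durations by their probabilities and then adds the inter-symbol space) is easy to get wrong, so a small check is sketched below. The variable names are illustrative, not from the text.

```python
import math

# Data from Ex. 3.5.3 (durations in seconds)
t_dot, t_dash, t_space = 0.2, 0.6, 0.2
p_dash = 1 / 3                      # from 2P + P = 1
p_dot = 2 / 3

# Average information per symbol (bits/symbol)
h = p_dot * math.log2(1 / p_dot) + p_dash * math.log2(1 / p_dash)

# Average symbol time includes the space after every symbol
t_avg = t_dot * p_dot + t_dash * p_dash + t_space   # 0.5333 s
r = 1 / t_avg                                        # 1.875 symbols/s

print(round(h, 4), round(r, 3), round(r * h, 2))     # 0.9183 1.875 1.72
```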
Ex. 3.5.4 : The voice signal in a PCM system is quantized in 16 levels with the following probabilities :
        P1 = P2 = P3 = P4 = 0.1
        P5 = P6 = P7 = P8 = 0.05
        P9 = P10 = P11 = P12 = 0.075
        P13 = P14 = P15 = P16 = 0.025
    Calculate the entropy and the information rate. Assume fm = 3 kHz.

Soln. :
It is given that,
1.  The number of levels = 16. Therefore the number of messages M = 16.
2.  fm = 3 kHz.

(a) To find the entropy of the source :
    The entropy is defined as,
        H = Σ (k = 1 to M) pk log2 (1/pk)                                ...(1)
    As M = 16, Equation (1) gets modified to,
        H = Σ (k = 1 to 16) pk log2 (1/pk)
          = 4 [0.1 log2 (1/0.1)] + 4 [0.05 log2 (1/0.05)] + 4 [0.075 log2 (1/0.075)] + 4 [0.025 log2 (1/0.025)]
          = 0.4 log2 (10) + 0.2 log2 (20) + 0.3 log2 (13.33) + 0.1 log2 (40)
        H = 3.85 bits/message                                            ...Ans.

(b) To find the message rate (r) :
    The minimum rate of sampling is the Nyquist rate,
        fs = 2 x fm = 2 x 3 kHz = 6 kHz
    Hence there are 6000 samples/sec. As each sample is converted to one of the 16 levels, there are 6000 messages/sec. Therefore,
        Message rate  r = 6000 messages/sec

(c) To find the information rate (R) :
        R = r x H = 6000 x 3.85 = 23100 bits/sec                         ...Ans.

Ex. 3.5.5 : A message source generates one of four messages randomly every microsecond. The probabilities of these messages are 0.4, 0.3, 0.2 and 0.1. Each emitted message is independent of the other messages in the sequence.
    1.  What is the source entropy ?
    2.  What is the rate of information generated by this source in bits per second ?

Soln. :
It is given that,
1.  Number of messages, M = 4; let us denote them by m1, m2, m3 and m4.
2.  Their probabilities are p1 = 0.4, p2 = 0.3, p3 = 0.2 and p4 = 0.1.
3.  One message is transmitted per microsecond. Hence the message transmission rate is,
        r = 1 / (1 x 10^-6) = 1 x 10^6 messages/sec

(a) To obtain the source entropy (H) :
        H = Σ (k = 1 to 4) pk log2 (1/pk)
          = p1 log2 (1/p1) + p2 log2 (1/p2) + p3 log2 (1/p3) + p4 log2 (1/p4)
          = 0.4 log2 (1/0.4) + 0.3 log2 (1/0.3) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1)
        H = 1.846 bits/message                                           ...Ans.

(b) To obtain the information rate (R) :
        R = H x r = 1.846 x 1 x 10^6
        R = 1.846 x 10^6 bits/sec                                        ...Ans.

Ex. 3.5.6 : A source consists of 4 letters A, B, C and D. For transmission each letter is coded into a sequence of two binary pulses. A is represented by 00, B by 01, C by 10 and D by 11. The probabilities of occurrence of the letters are P(A) = 1/5, P(B) = 1/4, P(C) = 1/4 and P(D) = 3/10. Determine the entropy of the source and the average rate of transmission of information.

Soln. :
The given data can be summarised as shown in the following table :

    Message | Probability | Code
    A       | 1/5         | 00
    B       | 1/4         | 01
    C       | 1/4         | 10
    D       | 3/10        | 11

Assumption :
    Let us assume that the message transmission rate is r = 4000 messages/sec.

(a) To determine the source entropy :
        H = (1/5) log2 (5) + (1/4) log2 (4) + (1/4) log2 (4) + 0.3 log2 (10/3)
        H = 1.9855 bits/message                                          ...Ans.

(b) To determine the information rate :
        R = r x H = [4000 messages/sec] x [1.9855 bits/message]
        R = 7942 bits/sec                                                ...Ans.
Ex. 3.5.8 : An analog signal is band limited to 8 Hz, sampled at the Nyquist rate and quantized at 5 levels with probabilities 0.5, 0.125, 0.0625, 0.25 and 0.0625. Calculate the entropy and the information rate.

Soln. :
Step 1 : Calculate the entropy (H) :
        H = Σ (k = 1 to 5) Pk log2 (1/Pk)
          = P1 log2 (1/P1) + P2 log2 (1/P2) + P3 log2 (1/P3) + P4 log2 (1/P4) + P5 log2 (1/P5)
          = - [0.5 log2 (0.5) + 0.25 log2 (0.25) + 0.125 log2 (0.125) + 0.0625 log2 (0.0625) + 0.0625 log2 (0.0625)]
          = - [0.5 x (-1) + 0.25 x (-2) + 0.125 x (-3) + 0.0625 x (-4) + 0.0625 x (-4)]
          = 0.5 + 0.5 + 0.375 + 0.25 + 0.25
        H = 1.875 bits/message                                           ...Ans.

Step 2 : Calculate the message rate (r) :
    As the sampling rate is 2 x 8 = 16 Hz, we have 16 samples per second. Each sample is converted to one of the 5 quantization levels, so we have 16 messages/sec.
        r = 16 messages/sec

Step 3 : Calculate the information rate :
        R = H x r = 1.875 x 16 = 30 bits/sec                             ...Ans.

Ex. 3.5.9 : An analog signal is bandlimited to 8 Hz. It is sampled at the Nyquist rate and quantized at 5 levels. The five messages have probabilities 1/2, 1/4, 1/8, 1/16 and 1/16. Calculate the entropy and the information rate.

Soln. :
    fm = 8 Hz, therefore fs = 2 x 8 = 16 Hz, so that r = 16 messages/sec.

    Entropy,
        H = Σ (k = 1 to 5) Pk log2 (1/Pk)
          = (1/2) log2 (2) + (1/4) log2 (4) + (1/8) log2 (8) + (1/16) log2 (16) + (1/16) log2 (16)
          = 0.5 + 0.5 + 0.375 + 0.25 + 0.25
        H = 1.875 bits/message                                           ...Ans.

    Information rate,
        R = r x H = 16 x 1.875 = 30 bits/sec                             ...Ans.
Ex. 3.5.10 : Find the average capacity in bits per second that would be required to transmit a TV signal at the rate of 32 pictures per second. Each picture is made up of 2 x 10^6 picture elements and 16 different brightness levels. All picture elements are independent and all levels have equal probability of occurrence.

Soln. :
Given :  Number of pictures/sec = 32
         Number of picture elements/picture = 2 x 10^6
         Number of brightness levels, M = 16

Let us calculate the number of picture elements per second :
    Number of picture elements/sec
        = Number of picture elements/picture x Number of pictures/sec
        = 2 x 10^6 x 32 = 64 x 10^6

It is given that all brightness levels are equiprobable.

Step 1 : To calculate the average information per message :
        H = log2 M = log2 16 = 4 bits/message, since all levels are equiprobable.

Step 2 : To calculate the information rate, i.e. the average channel capacity :
    Number of messages per second = Number of picture elements/sec,
        r = 64 x 10^6
    Information rate,
        R = r x H = 64 x 10^6 x 4 = 256 x 10^6 bits/sec
    The average channel capacity,
        C = R = 256 Mbps                                                 ...Ans.
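For an equiprobable source the entropy reduces to H = log2 M, so the capacity figure above is just a product of three numbers. A quick illustrative check (the variable names are mine):

```python
import math

pictures_per_sec = 32
elements_per_picture = 2 * 10**6
levels = 16                                   # all equally likely

h = math.log2(levels)                         # 4 bits per picture element
r = pictures_per_sec * elements_per_picture   # picture elements (messages) per second
print(r * h)                                  # 256000000 bits/sec = 256 Mbps
```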
Ex. 3.5.11 : A given source alphabet consists of 900 words, of which 16 occur with probability 0.06 each and the remaining words occur with probability 0.00035 each. If 1000 words are transmitted each second, what is the average rate of information transmission ?

Soln. :
Step 1 : Value of H :
    Average information per message,
        H = P1 log2 (1/P1) + P2 log2 (1/P2)
          = 0.06 log2 (1/0.06) + 0.00035 log2 (1/0.00035)
          = 0.2435 + 4 x 10^-3
        H = 0.2475 bits/message

Step 2 : Value of r :
        r = Number of words/sec = 1000

Step 3 : Information rate R :
        R = r x H = 1000 x 0.2475
        R = 247.5 bits/sec
    Therefore the average rate of information transmission is 247.5 bits/sec.   ...Ans.

Ex. 3.5.12 : A DMS emits messages with a source alphabet having the probabilities (0.7, 0.15, 0.15). Find :
    1.  The information content of each message.
    2.  The entropy of the DMS.
    3.  The information rate of the DMS if it generates 4000 messages per second.

Soln. :
The probabilities of the three messages are p1 = 0.7, p2 = 0.15 and p3 = 0.15 respectively.

Step 1 : To find the individual information contents :
        I1 = log2 (1/p1),  I2 = log2 (1/p2)  and  I3 = log2 (1/p3)
        I1 = log2 (1/0.7) = 0.514 bits
        I2 = I3 = log2 (1/0.15) = 2.737 bits                             ...Ans.

Step 2 : To find the entropy of the DMS :
    The entropy or average information is given by,
        H = p1 log2 (1/p1) + p2 log2 (1/p2) + p3 log2 (1/p3)
          = 0.7 log2 (1/0.7) + 0.15 log2 (1/0.15) + 0.15 log2 (1/0.15)
        H = 1.18 bits/message                                            ...Ans.

Step 3 : To find the information rate (R) :
    Now, r = 4000 messages/sec and H = 1.18 bits/message,
        R = r x H = 4000 x 1.18 = 4720 bits/sec                          ...Ans.

Ex. 3.5.13 : Define entropy and information rate. Consider six messages with probabilities 0.2, 0.25, 0.15, 0.1, 0.2 and 0.1. Calculate the entropy of the source.

Soln. :
It is given that,
1.  Number of messages, M = 6; let us denote them by M1, M2, M3, M4, M5 and M6.
2.  Their probabilities are P1 = 0.2, P2 = 0.25, P3 = 0.15, P4 = 0.1, P5 = 0.2 and P6 = 0.1.

To obtain the entropy of the source (H) :
        H = Σ (k = 1 to 6) Pk log2 (1/Pk)
          = P1 log2 (1/P1) + P2 log2 (1/P2) + P3 log2 (1/P3) + P4 log2 (1/P4) + P5 log2 (1/P5) + P6 log2 (1/P6)
          = 0.2 log2 (1/0.2) + 0.25 log2 (1/0.25) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1)
          = 0.4643 + 0.5 + 0.4105 + 0.3322 + 0.4643 + 0.3322
        H = 2.5035 bits/message                                          ...Ans.
3.6  Shannon's First Theorem (Source Coding Theorem) :

-   One of the most important problems in communication is how to efficiently represent the data generated by a discrete source.
-   Source encoding is used to achieve this, and the device which performs the source encoding is called the source encoder.
-   In order to make the source encoder efficient, we need to know the statistics of the source.
-   That means those source symbols which have a higher probability of occurrence should be assigned short code words, whereas the source symbols with a lower probability of occurrence should be assigned longer code words.

Requirements of an efficient source encoder :

An efficient source encoder should satisfy the following two requirements :
1.  The encoder should produce the code words in the binary form.
2.  It should be possible to decode the source code uniquely, so as to reconstruct the original source sequence perfectly from the encoded binary sequence.

(Fig. 3.6.1 : Source encoding)

-   Consider the block diagram of Fig. 3.6.1. The output of the DMS is converted by the source encoder into a binary signal, i.e. we get code words at the output of the encoder.
-   The average code word length L of the source encoder is then defined as follows :
        L = Σ (k = 1 to M) pk x (length of message mk in bits)           ...(3.6.1)
-   Thus the code word length L represents the average number of bits per symbol used by the source encoder in the process of source encoding.
-   Let the minimum possible value of L be denoted by Lmin. Then the "coding efficiency" of the source encoder is defined as,
        η = Lmin / L                                                     ...(3.6.2)
-   The source coding theorem states that, for a discrete memoryless source of entropy H, the average code word length is bounded as,
        L ≥ H                                                            ...(3.6.3)
    Thus the minimum possible value of the average code word length is,
        Lmin = H                                                         ...(3.6.4)
-   Substituting this value of Lmin into Equation (3.6.2) we get,
        Code efficiency  η = H / L                                       ...(3.6.5)
-   Variable length coding is done in order to increase the efficiency of the source encoder. Now we are going to discuss two algorithms which use this variable length coding technique. They are :
    1.  Shannon-Fano algorithm.        2.  Huffman coding.
    (A short computational sketch of Equations (3.6.1) and (3.6.5) is given below.)
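Equations (3.6.1) and (3.6.5) translate directly into a few lines of code. The sketch below is illustrative only; the helper names are mine, and the sample code word lengths 1, 2, 3, 3 for probabilities 0.5, 0.25, 0.125, 0.125 are just an assumed example of a variable length code.

```python
import math

def avg_code_length(probs, lengths):
    """Average code word length L = sum(pk * lk), Equation (3.6.1)."""
    return sum(p * l for p, l in zip(probs, lengths))

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def efficiency(probs, lengths):
    """Code efficiency = H / L, Equation (3.6.5)."""
    return entropy(probs) / avg_code_length(probs, lengths)

probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]              # e.g. code words 0, 10, 110, 111
print(efficiency(probs, lengths))   # 1.0 -> 100 % efficient
```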
3.6.1  Shannon-Fano Code :

-   Suppose that M number of messages are being transmitted by a source and that each message is coded into an N bit word. That means M = 2^N.
-   We have already seen (see Ex. 3.6.1) that the average information contained per message, for equally likely messages, is given by,
        H = log2 M                                                       ...(3.6.6)
-   Now substitute M = 2^N in Equation (3.6.6) to get,
        H = log2 2^N = N bits/message                                    ...(3.6.7)
-   As the number of bits per message is also N, the average information carried by an individual bit is,
        H/N = 1 bit                                                      ...(3.6.8)
    This is the maximum average information per bit.

Problem :
-   The problem arises when the M messages are not equally likely. Then H < N, the average information per bit is less than 1 bit, and a fixed length code becomes inefficient. The Shannon-Fano procedure therefore assigns variable length code words: the messages are arranged in the order of decreasing probability and the list is repeatedly partitioned into two groups of nearly equal total probability, with a 0 assigned to one group and a 1 to the other.
-   As an illustration, consider five messages m1 to m5 with probabilities 0.5, 0.25, 0.125, 0.0625 and 0.0625. The successive partitions give the code words listed below :

    Message | Probability | Shannon-Fano code word
    m1      | 0.5         | 0
    m2      | 0.25        | 10
    m3      | 0.125       | 110
    m4      | 0.0625      | 1110
    m5      | 0.0625      | 1111

Step 2 : Calculate the average code word length :
        L = Σ pk x (length of mk in bits)
          = (0.5 x 1) + (0.25 x 2) + (0.125 x 3) + (0.0625 x 4) + (0.0625 x 4)
        L = 1.875 bits/message

Step 3 : Calculate the average information per message (H) :
        H = Σ pk log2 (1/pk)
          = 0.5 log2 (1/0.5) + 0.25 log2 (1/0.25) + 0.125 log2 (1/0.125) + 0.0625 log2 (1/0.0625) + 0.0625 log2 (1/0.0625)
          = 0.5 + 0.5 + 0.375 + 0.25 + 0.25
        H = 1.875 bits/message

Step 4 : Calculate the efficiency of the code :
        η = (H / L) x 100 = (1.875 / 1.875) x 100 = 100 %

Ex. 3.6.4 : A zero memory source emits six messages (N, I, R, K, A, T) with probabilities (0.20, 0.10, 0.02, 0.15, 0.40, 0.03) respectively. Given that A is coded as '0'. Find :
    1.  The entropy of the source.
    2.  Determine the Shannon-Fano code and tabulate it.
    3.  What is the original symbol sequence of the Shannon-Fano coded signal (110011110111111110100) ?
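A sketch of the recursive Shannon-Fano partitioning described above is given below. It is illustrative only (the function and variable names are mine); run on the five probabilities of the worked table it reproduces the code words 0, 10, 110, 1110 and 1111.

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability) sorted by decreasing probability.
    Returns {name: code word}. The list is split into two groups whose total
    probabilities are as nearly equal as possible; the upper group is prefixed
    with '0' and the lower group with '1', then each group is split again."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    best_i, best_diff, running = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(running - (total - running))
        if diff < best_diff:
            best_i, best_diff = i, diff
    codes = {}
    for name, code in shannon_fano(symbols[:best_i]).items():
        codes[name] = "0" + code
    for name, code in shannon_fano(symbols[best_i:]).items():
        codes[name] = "1" + code
    return codes

table = [("m1", 0.5), ("m2", 0.25), ("m3", 0.125), ("m4", 0.0625), ("m5", 0.0625)]
print(shannon_fano(table))  # {'m1': '0', 'm2': '10', 'm3': '110', 'm4': '1110', 'm5': '1111'}
```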
3.7  Data Compaction and Prefix Coding :

-   The signals produced by a physical source contain a significant amount of redundant information.
-   The transmission of such redundant information results in a wastage of primary communication resources such as bandwidth.
-   Such a signal transmission is therefore inefficient.
-   In order to improve the efficiency of transmission, the redundant information should be removed before transmitting the signal.
-   While removing the redundancy, care has to be taken to preserve all the information. There should not be any loss of information.
-   Generally this operation is performed on the signal when it is in the digital form (data form). Hence this operation is called data compaction or lossless data compression.
-   When data compaction is performed, a new code is produced which represents the source output and which has the following characteristics :
    1.  It is more efficient in terms of bits per symbol.
    2.  It is exact, so the original data can be reproduced from the new code without any loss of information.
-   The entropy of the source puts the fundamental limit on the removal of redundancy from the data.
-   In the data compaction technique, short descriptions (short code words) are assigned to the most frequent outcomes of the source output (e.g. the letter "I" or the word "the" in the English language). That means the most likely symbols are encoded into short messages.
-   And the less frequent outcomes (such as the letters x or z) are assigned longer descriptions. That means the less likely symbols are encoded into long messages.
-   Now we will discuss some source coding schemes used for achieving data compaction.
3.7.1  Prefix Coding :

-   Consider a discrete memoryless source with the alphabet {m0, m1, ..., m(M-1)} having the probabilities {p0, p1, ..., p(M-1)}.
-   The source code representing the output of this source will be practically useful only if the code is uniquely decodable.
-   That means for every finite sequence of symbols emitted by the source, there has to be a distinct sequence of code words.
-   In this section we will discuss a special class of codes which satisfies a condition called the prefix condition.

Prefix condition :
-   Corresponding to every source symbol m0, m1, ..., a code word made up of binary 0s and 1s is transmitted.
-   Let the code word corresponding to the source symbol mk be denoted by {sk1, sk2, ..., skn}, where sk1, sk2, ... are 0s or 1s and the length of this code word is "n" bits, as shown in Fig. 3.7.1.
-   The initial part of the code word is represented by the elements sk1, sk2, ..., ski with i ≤ n. For example, as shown in Fig. 3.7.1, the first three elements sk1, sk2 and sk3 form an initial part.

(Fig. 3.7.1 : An n-bit code word for source symbol mk and its initial part)

Prefix :
    Any sequence made up of the initial part of a code word is called a prefix of the code word.

Prefix code :
    A prefix code is defined as a code in which no code word is the prefix of any other code word.

-   The concept of a prefix code can be illustrated with the help of Table 3.7.1.
-   Table 3.7.1 lists the code words assigned to the source symbols m0, m1, m2 and m3 by three different codes.

Table 3.7.1
    Source symbol | Code I | Code II | Code III
    m0            | 0      | 0       | 0
    m1            | 1      | 1       | 10
    m2            | 00     | 01      | 110
    m3            | 11     | 011     | 111

Let us check which of the three codes shown in Table 3.7.1 are prefix codes.

Code I :
-   The code word for m2 is 00. The prefix of this code word is "0", which itself is the code word for symbol m0. Hence code I is not a prefix code.
-   Similarly, consider the code word for m3, i.e. 11. The prefix here is "1", which itself is the code word for symbol m1. Hence code I is not a prefix code.

Code II :
-   The code word for m2 is 01. Its prefix "0" is the code word for symbol m0.
-   The code word for m3 is 011, the prefix of which (01) is another code word (that of m2).
-   So code II is not a prefix code.

Code III :
-   Code III is a prefix code because no code word is the prefix of any other code word.
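The prefix test of Table 3.7.1 is easy to automate: a code fails if any code word is the beginning of another. A short, illustrative sketch (the three code lists below are copied from Table 3.7.1; the function name is mine):

```python
def is_prefix_code(codewords):
    """True if no code word is a prefix of any other code word."""
    for i, a in enumerate(codewords):
        for j, b in enumerate(codewords):
            if i != j and b.startswith(a):
                return False
    return True

code_I   = ["0", "1", "00", "11"]
code_II  = ["0", "1", "01", "011"]
code_III = ["0", "10", "110", "111"]

for name, code in [("I", code_I), ("II", code_II), ("III", code_III)]:
    print(name, is_prefix_code(code))   # I False, II False, III True
```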
Decoding of a prefix code :
-   Let us now see how to decode a sequence of code words generated from a prefix source code.
-   For decoding we need a source decoder.
-   The source decoder starts at the beginning of the sequence and decodes one code word at a time.

Decision tree :
-   The decoder sets up something which is equivalent to the decision tree shown in Fig. 3.7.2.
-   The decision tree is a graphical representation of the code words in a particular source code.
-   Fig. 3.7.2 shows the decision tree for code III discussed earlier in this section, i.e. m0 = 0, m1 = 10, m2 = 110 and m3 = 111. The decoder always starts at the initial state; m0, m1, m2 and m3 are the terminal states.

(Fig. 3.7.2 : Code III and its code tree)

-   The tree has one initial state and four terminal states corresponding to m0, m1, m2 and m3.

Operation of the decoder :
-   The decoder always starts at the initial state. The initial state is also called the first decision point.

1.  Response to the first received bit :
    -   The first received bit can be 0 or 1.
    -   If it is 0, then the decoder moves to the terminal state m0, as shown in Fig. 3.7.3(a).
    -   And if the received bit is 1, then the decoder moves to the second decision point, as shown in Fig. 3.7.3(b).

    (Fig. 3.7.3(a) : Received bit 0 - the decoder moves to terminal state m0)
    (Fig. 3.7.3(b) : Received bit 1 - the decoder moves to the second decision point)

2.  Response to the second received bit :
    -   The second received bit will move the decoder to the terminal state m1 if it is 0, and to the third decision point if it is 1.
    -   The decoder keeps responding to the received bits in this manner.
    -   Note that each bit in the received code word is examined only once.
Ex. 3.7.1 : A transmitter uses the prefix code given below to transmit the symbols m0 to m3. If the transmitted bit sequence is 1011111000..., obtain the decoded sequence at the output of a prefix decoder.

    Source symbol | m0 | m1 | m2  | m3
    Code word     | 0  | 10 | 110 | 111

Soln. :
Step 1 : Draw the decision tree :
    The decision tree for the given code is shown in Fig. P. 3.7.1(a).
    The received sequence is 1011111000...

(Fig. P. 3.7.1(a) : Decision tree)

Step 2 : Decode the received sequence :
    The decoding process is shown in Fig. P. 3.7.1(b).
    -   The decoder starts at the initial state.
    -   First bit = 1 received  : the decoder moves to the second decision point.
    -   Second bit = 0 received : m1 is obtained; the decoder moves back to the initial state.
    -   Third bit = 1 received  : the decoder moves to the second decision point.
    -   Fourth bit = 1 received : the decoder moves to the third decision point.
    -   Fifth bit = 1 received  : m3 is obtained; the decoder moves back to the initial state.
    -   The next three bits 110 give m2, and the two final 0s give m0 and m0.

(Fig. P. 3.7.1(b) : Decoding process)

-   So the encoded sequence is decoded as the source symbol sequence m1, m3, m2, m0, m0, as shown in Fig. P. 3.7.1(c).

(Fig. P. 3.7.1(c) : Decoded sequence)
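The bit-by-bit decoder of Ex. 3.7.1 can also be sketched with a running buffer instead of an explicit tree: because the code is a prefix code, the moment the buffer matches a code word the symbol can be emitted. This is an illustrative sketch; the dictionary below uses the code assumed in the example above.

```python
def prefix_decode(bits, code):
    """Decode a bit string using a prefix code given as {codeword: symbol}."""
    decoded, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in code:          # prefix property: the match is unambiguous
            decoded.append(code[buffer])
            buffer = ""             # return to the "initial state"
    return decoded

code = {"0": "m0", "10": "m1", "110": "m2", "111": "m3"}
print(prefix_decode("1011111000", code))   # ['m1', 'm3', 'm2', 'm0', 'm0']
```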
Kraft-McMillan inequality :
-   If a prefix code is constructed for a discrete memoryless source satisfying the following conditions, then it always satisfies an inequality called the Kraft-McMillan inequality.
-   The conditions are :
    1.  The source emits the messages or symbols s0, s1, ..., s(K-1).
    2.  The probabilities of the transmitted symbols are p0, p1, ..., p(K-1) respectively.
    3.  The code word for symbol sk has a length lk, where k = 0, 1, ..., K-1.
-   Then the Kraft-McMillan inequality states that,
        Σ (k = 0 to K-1) 2^(-lk) ≤ 1                                     ...(3.7.1)
    Here the factor 2 indicates the radix (number of symbols) of the binary alphabet.
-   This inequality puts a constraint on the code word lengths of the code.
-   If this inequality is violated by a code, then it no more remains a prefix code. That means all prefix codes satisfy the Kraft-McMillan inequality.
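Equation (3.7.1) is a one-line check. A small illustrative sketch (function name is mine), applied to code III of Table 3.7.1 and to code I, whose lengths violate the inequality and which therefore cannot be a prefix code:

```python
def kraft_sum(lengths):
    """Evaluate the Kraft-McMillan sum of 2^(-lk) for the given code word lengths."""
    return sum(2 ** -l for l in lengths)

print(kraft_sum([1, 2, 3, 3]))   # code III {0, 10, 110, 111}: 1.0 <= 1, allowed
print(kraft_sum([1, 1, 2, 2]))   # code I   {0, 1, 00, 11}   : 1.5 >  1, not a prefix code
```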
3.7.2  Properties of Prefix Code :
-   The first important property of a prefix code is that it is always uniquely decodable. But the converse is not always true; that means a uniquely decodable code may not always be a prefix code.
-   The second property of a prefix code is that if a prefix code is constructed for a discrete memoryless source, then the code word lengths of the code always satisfy the Kraft-McMillan inequality.
-   Prefix codes are distinguished from the other uniquely decodable codes by the fact that the end of a code word can always be recognized.
-   Hence it is possible to decode a prefix code word as soon as the binary sequence representing a source symbol is received. Therefore prefix codes are also known as instantaneous codes.
-   If we make the order n of an extended source encoder sufficiently large, we can ensure that the code represents the discrete memoryless source faithfully.
-   That means the average code word length of an extended prefix code can be made as close to the entropy of the source as desired, if the extended code has a sufficiently large order.
-   But this reduction in average code word length is achieved at the expense of increased decoding complexity.
3.7.3  Huffman Coding :

Q.1  Write a short note on : Huffman coding.                   (May 08, May 09, 5 Marks)

-   The Huffman code is a source code. It is a prefix code as well. Here the word length of each code word approaches the fundamental limit set by the entropy of the discrete memoryless source.
-   This code is "optimum" in the sense that it provides the smallest average code word length for a given discrete memoryless source. It encodes each message transmitted by the DMS with a different number of bits, based on the message probabilities.
-   The Huffman encoding algorithm is as follows :
    1.  The source symbols (messages) are arranged in the order of decreasing probability. The two source symbols having the lowest probabilities are assigned the binary digits 0 and 1.
    2.  These two source symbols (messages) are then "combined" into a new source symbol (message) with probability equal to the sum of the two original probabilities. The probability of the new symbol is placed in the list of symbols as per its value.
    3.  This procedure is repeated until we are left with only two source symbols (messages), for which a 0 and a 1 are assigned.
    4.  The code of each original source symbol is obtained by working backward and tracing the sequence of 0s and 1s assigned to that symbol (as shown in Ex. 3.7.2).
-   The Huffman coding can be shown in the form of an algorithm as follows (a short code sketch of the same procedure is given after the box) :
    1.  List the source symbols (messages) in the order of decreasing probability.
    2.  The two source symbols of lowest probability are assigned the numbers 0 and 1.
    3.  These two source symbols are combined into a new message.
    4.  The probability of this new message is equal to the sum of the probabilities of the two original symbols.
    5.  The probability of this new message is placed in the list according to its value.
    6.  Repeat this procedure until we are left with only two source symbols, for which a 0 and a 1 are assigned.
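As referenced above, a compact sketch of the same merge-and-trace procedure is given below using a binary heap. It is illustrative only (the names are mine, not the book's); run on the probabilities of Ex. 3.7.2 it yields an average code word length of 2.2 bits/message, the same as the hand-worked code, although the individual 0/1 labels may differ from the figures.

```python
import heapq
from itertools import count

def huffman(prob_map):
    """prob_map: {symbol: probability}. Returns {symbol: code word}."""
    tie = count()                                    # breaks ties between equal probabilities
    heap = [(p, next(tie), {s: ""}) for s, p in prob_map.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)              # two least probable entries
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"m1": 0.4, "m2": 0.2, "m3": 0.2, "m4": 0.1, "m5": 0.1}
codes = huffman(probs)
avg_len = sum(probs[s] * len(w) for s, w in codes.items())
print(codes, avg_len)                                # average length = 2.2 bits/message
```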
Ex. 3.7.2 : Consider the five source symbols (messages) of a discrete memoryless source and their probabilities as shown in Table P. 3.7.2(a). Follow the Huffman algorithm to find the code words for each message. Also find the average code word length and the average information per message.

Table P. 3.7.2(a)
    Message     | m1  | m2  | m3  | m4  | m5
    Probability | 0.4 | 0.2 | 0.2 | 0.1 | 0.1

Soln. :
(a) To find the code word for each message :

Steps to be followed :
Step 1 :  Arrange the messages in the order of decreasing probabilities.
Step 2 :  Assign the numbers 0 and 1 to the two messages having the lowest probabilities.
Step 3 :  Combine these two messages into a new message and place its probability in the probability list as per its value.
Steps 4 and 5 :  Repeat this procedure.
Step 6 :  Write the code word for each message by tracking back from the last stage to the first.

Let us find the code word for each message by following the Huffman algorithm in steps.

Step 1 :  Arrange the given messages in the order of decreasing probabilities, as shown in Fig. P. 3.7.2(a).
Step 2 :  The two messages having the lowest probabilities are assigned 0 and 1. The two messages with the lowest probabilities are m4 and m5, as shown in Fig. P. 3.7.2(a).

(Fig. P. 3.7.2(a))

Step 3 :  Now consider these two messages m4 and m5 as being combined into a new message (Fig. P. 3.7.2(b)), and place the probability of the new combined message in the list according to its value. Place the combined message as high as possible when its probability is equal to that of the other messages. This is as shown in Fig. P. 3.7.2(b).

(Fig. P. 3.7.2(b))

Step 4 :  Place the combined message of the next stage according to its value in the list, again placing it as high as possible if the other messages have the same probability. This is as shown in Fig. P. 3.7.2(c).
Step 5 :  Follow the same procedure till only two messages remain, and assign them 0 and 1, as shown in Fig. P. 3.7.2(c).

(Fig. P. 3.7.2(c))

How to write the code word for a message ?
-   Consider the dotted path shown in Fig. P. 3.7.2(c). To write the code for message m5 this path is to be used.
-   Start from stage IV and track back up to stage I along the dotted path, and write down the code word in terms of 0s and 1s starting from stage IV.
-   The code word for message m5 is 011.
-   Similarly write the code words for the other messages, as shown in Table P. 3.7.2(b).

Table P. 3.7.2(b)
    Messages      | m1  | m2  | m3  | m4  | m5
    Probabilities | 0.4 | 0.2 | 0.2 | 0.1 | 0.1
    Code words    | 00  | 10  | 11  | 010 | 011

(b) To find the average code word length :
    The average code word length is given by,
        L = Σ (k = 1 to 5) pk x [length of mk in bits]
          = (0.4 x 2) + (0.2 x 2) + (0.2 x 2) + (0.1 x 3) + (0.1 x 3) = 2.2 bits/message

(c) To find the entropy of the source :
    The entropy of the source is given as,
        H = Σ (k = 1 to 5) pk log2 (1/pk)
          = 0.4 log2 (1/0.4) + 0.2 log2 (1/0.2) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1) + 0.1 log2 (1/0.1)
          = 0.52877 + 0.46439 + 0.46439 + 0.33219 + 0.33219
        H = 2.12193 bits/message                                         ...Ans.
Ex. 3.7.3 : Consider the same memoryless source as in Ex. 3.7.2; all the data is the same. Find the Huffman code by moving the probability of the combined message as low as possible. Tracking backwards through the various steps, find the code words of this second Huffman code.

Soln. :
-   All the steps to be followed are the same as those followed for the first Huffman code explained in Ex. 3.7.2, except for the change that the combined message is to be placed as low as possible. This is as shown in Fig. P. 3.7.3.

(Fig. P. 3.7.3 : Huffman's second code - read the encircled bits to get the code for m5 = 0011)

To find the code words :
-   This procedure is the same as that followed in the previous example. Follow the dotted path in Fig. P. 3.7.3 to obtain the code for message m5.
-   The code word for the message m5 is 0011.
-   Similarly the code words for the other messages can be obtained. They are as listed below :

    Message | Probability | Code word
    m1      | 0.4         | 1
    m2      | 0.2         | 01
    m3      | 0.2         | 000
    m4      | 0.1         | 0010
    m5      | 0.1         | 0011

-   Note that this second code transmits the same messages as the code of Ex. 3.7.2, and its average code word length works out to the same value, L = (0.4 x 1) + (0.2 x 2) + (0.2 x 3) + (0.1 x 4) + (0.1 x 4) = 2.2 bits/message.
Ex. 3.7.4 : A discrete memoryless source has an alphabet of seven symbols with probabilities for its output as described in Table P. 3.7.4(a).

Table P. 3.7.4(a)
    Symbol      | S0   | S1   | S2    | S3    | S4    | S5     | S6
    Probability | 0.25 | 0.25 | 0.125 | 0.125 | 0.125 | 0.0625 | 0.0625

    Compute the Huffman code for this source, moving the "combined" symbol as high as possible. Explain why the computed source code has an efficiency of 100 percent.

Soln. :
The Huffman code for the source alphabet is obtained as shown in Fig. P. 3.7.4.

(Fig. P. 3.7.4 : Huffman code)

Follow the path indicated by the dotted line to obtain the code word for symbol S0 as 10. Similarly we can obtain the code words for the remaining symbols. These are as listed in Table P. 3.7.4(b).

Table P. 3.7.4(b)
    Symbol | Probability | Code word | Code word length
    S0     | 0.25        | 10        | 2 bits
    S1     | 0.25        | 11        | 2 bits
    S2     | 0.125       | 001       | 3 bits
    S3     | 0.125       | 010       | 3 bits
    S4     | 0.125       | 011       | 3 bits
    S5     | 0.0625      | 0000      | 4 bits
    S6     | 0.0625      | 0001      | 4 bits

To compute the efficiency :
1.  The average code length,
        L = Σ (k = 0 to 6) pk x (length of symbol Sk in bits)
          = (0.25 x 2) + (0.25 x 2) + 3 x (0.125 x 3) + 2 x (0.0625 x 4)
        L = 2.625 bits/symbol

2.  The average information per message,
        H = Σ (k = 0 to 6) p(xk) log2 [1/p(xk)]
          = [0.25 log2 (4)] x 2 + [0.125 log2 (8)] x 3 + [0.0625 log2 (16)] x 2
          = [0.25 x 2 x 2] + [0.125 x 3 x 3] + [0.0625 x 4 x 2]
        H = 2.625 bits/message

3.  Code efficiency,
        η = (H / L) x 100 = (2.625 / 2.625) x 100 = 100 %

Note : As the average information per symbol (H) is equal to the average code length (L), the code efficiency is 100 %.

Ex. 3.7.5 : Use Huffman coding for compression. For a discrete memoryless source X with six symbols x1, ..., x6, find a compact code for every symbol if the probability distribution is as follows :
        p(x1) = 0.30,  p(x2) = 0.25,  p(x3) = 0.20,
        p(x4) = 0.12,  p(x5) = 0.08,  p(x6) = 0.05
    Calculate the entropy of the source, the average length of the code, and the efficiency and redundancy of the code.

Soln. :
Part I : Huffman coding :
The code word for the symbol x1 is 00 (see the dotted line in Fig. P. 3.7.5). Similarly the code words for the other symbols are as listed in Table P. 3.7.5(a).
(Fig. P. 3.7.5 : Huffman coding for Ex. 3.7.5)

Table P. 3.7.5(a)
    Symbol    | x1 | x2 | x3 | x4  | x5   | x6
    Code word | 00 | 10 | 11 | 011 | 0100 | 0101

1.  Average code word length (L) :
        L = Σ (k = 1 to 6) pk x (length of the k-th code word)
          = (0.3 x 2) + (0.25 x 2) + (0.2 x 2) + (0.12 x 3) + (0.08 x 4) + (0.05 x 4)
        L = 2.38 bits/message

2.  Average information per message, or source entropy (H) :
        H = Σ (k = 1 to 6) pk log2 (1/pk)
          = 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.2 log2 (1/0.2) + 0.12 log2 (1/0.12)
            + 0.08 log2 (1/0.08) + 0.05 log2 (1/0.05)
          = 0.5211 + 0.5 + 0.4644 + 0.3671 + 0.2915 + 0.2161
        H = 2.36 bits/symbol

3.  Code efficiency (η) :
        η = H / L = 2.36 / 2.38 = 0.9916 or 99.16 %                      ...Ans.

4.  Redundancy (γ) :
        γ = 1 - η = 1 - 0.9916 = 0.0084                                  ...Ans.
Ex. 3.7.6 : Apply the Huffman coding procedure to the following message ensemble and determine the average length of the coded message. Find the coding efficiency.
        [X]  = [x1, x2, x3, x4, x5, x6, x7]
        P(X) = (0.45, 0.15, 0.1, 0.1, 0.08, 0.08, 0.04)

Soln. :
Step 1 : Obtain the code words :
    Refer to Fig. P. 3.7.6. The code words for the various messages are obtained from this figure and they are listed in Table P. 3.7.6.

(Fig. P. 3.7.6 : Huffman coding for Ex. 3.7.6)

    Similarly we can obtain the code words for the other messages, as shown in Table P. 3.7.6.

Table P. 3.7.6
    Message     | x1   | x2   | x3  | x4   | x5   | x6   | x7
    Probability | 0.45 | 0.15 | 0.1 | 0.1  | 0.08 | 0.08 | 0.04
    Code word   | 1    | 001  | 011 | 0000 | 0001 | 0100 | 0101

Step 2 : Average code word length :
    The average code word length L is given by,
        L = Σ (k = 1 to 7) pk x (length of xk in bits)
          = (0.45 x 1) + (0.15 x 3) + (0.1 x 3) + (0.1 x 4) + (2 x 0.08 x 4) + (0.04 x 4)
        L = 2.4 bits/symbol

Step 3 : Entropy of the source :
        H = - Σ (k = 1 to 7) pk log2 pk
          = - 0.45 log2 0.45 - 0.15 log2 0.15 - 2 x 0.1 log2 0.1 - 2 x 0.08 log2 0.08 - 0.04 log2 0.04
          = 0.5184 + 0.4105 + 0.6644 + 0.5830 + 0.1858
        H = 2.3621 bits/symbol

Step 4 : Find the code efficiency :
        η = (H / L) x 100 = (2.3621 / 2.4) x 100 = 98.42 %               ...Ans.

Ex. 3.7.7 : Consider a DMS with source probabilities {0.20, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05, 0.05}.
    1.  Determine an efficient fixed length code for the source.
    2.  Determine the Huffman code for this source.
    3.  Compare the two codes and comment.
Soln. :
1.  Fixed length code : Since there are M = 8 messages, an efficient fixed length code requires log2 8 = 3 bits per message.

2.  Huffman code : The coding is as shown in Fig. P. 3.7.7.

    (Fig. P. 3.7.7 : Huffman coding for Ex. 3.7.7)

    Similarly we can obtain the codes for all the other messages. They are listed in Table P. 3.7.7.

    Table P. 3.7.7
        Message   | m1 | m2  | m3  | m4  | m5  | m6  | m7   | m8
        Code word | 11 | 000 | 001 | 010 | 100 | 101 | 0110 | 0111

Ex. 3.7.8 : A zero memory source emits six messages (m1, m2, m3, m4, m5, m6) with probabilities (0.30, 0.25, 0.15, 0.12, 0.10, 0.08) respectively. Find :
    1.  The Huffman code.
    2.  Determine the average word length.
    3.  Find the entropy of the source.
    4.  Determine its efficiency and redundancy.

Soln. :
The code word for the symbol m1 is 00 (see the dotted line). Similarly the code words for the other symbols are as listed in Table P. 3.7.8.

(Fig. P. 3.7.8 : Huffman coding)

Table P. 3.7.8
    Message          | m1  | m2   | m3   | m4   | m5  | m6
    Probability      | 0.3 | 0.25 | 0.15 | 0.12 | 0.1 | 0.08
    Code word        | 00  | 10   | 010  | 011  | 110 | 111
    Code word length | 2   | 2    | 3    | 3    | 3   | 3

1.  Average code word length (L) :
        L = Σ (k = 1 to 6) pk x (length of the k-th code word)
          = (0.3 x 2) + (0.25 x 2) + (0.15 x 3) + (0.12 x 3) + (0.1 x 3) + (0.08 x 3)
        L = 2.45 bits/message

2.  Average information per message, or source entropy (H) :
        H = Σ (k = 1 to 6) pk log2 (1/pk)
          = 0.3 log2 (1/0.3) + 0.25 log2 (1/0.25) + 0.15 log2 (1/0.15) + 0.12 log2 (1/0.12)
            + 0.1 log2 (1/0.1) + 0.08 log2 (1/0.08)
          = 0.5211 + 0.5 + 0.4105 + 0.3671 + 0.3322 + 0.2915
        H = 2.422 bits/symbol

3.  Efficiency (η) :
        η = H / L = 2.422 / 2.45 = 0.9887 or 98.87 %

4.  Redundancy (γ) :
        γ = 1 - η = 1 - 0.9887 = 0.0113

3.7.4  Run Length Encoding (RLE) :

-   Run length encoding (RLE) is the simplest technique that can be used for removing redundancy.
-   Data made up of any combination of symbols can be compressed using RLE.
-   In this technique, we replace a repeated sequence (also called a run) of the same symbol with two entities : a count and the symbol itself, as illustrated in the following example.

Ex. 3.7.9 : Use the RLE method to compress the following sequence of symbols :
        AAAACCBBBDDDDDEFF

Soln. :
We can compress the given sequence of symbols using RLE as follows :

        ORIGINAL   :  AAAA  CC  BBB  DDDDD  E   FF
        COMPRESSED :  4A    2C  3B   5D     1E  2F

(Fig. P. 3.7.9 : RLE encoding)

In this way a string of 17 symbols has been compressed into a string of only 12 symbols.
-   If the data consists of only two symbols, such as 0 and 1, then this method can be modified.
-   In such a case only the runs of one of the two symbols (say 0) are counted, i.e. we record the number of 0s that occur between successive 1s.
-   This compression works better when that symbol occurs much more frequently than the other, as illustrated in the following example.

Ex. 3.7.10 : Use the RLE method of compression to compress the following data :
        Data : a 21-bit binary sequence, shown in Fig. P. 3.7.10(a), containing many more 0s than 1s.

Soln. :
In the given binary data pattern there are many more 0s than 1s. Therefore we simply record the number of 0s that occur between successive 1s.

(Fig. P. 3.7.10(a) : The original data and the run lengths of 0s between successive 1s)

It is possible to encode the compressed data into binary by using a fixed number of bits per run-length digit, as shown in Fig. P. 3.7.10(b).

(Fig. P. 3.7.10(b) : Fixed-length binary encoding of the run lengths)

Thus the number of bits in the original sequence was 21 and that in the compressed binary is 16. Hence the rate of compression is 21/16 = 1.31 for this example.
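A minimal sketch of the count-and-symbol form of RLE described above (the function name is mine); applied to the string of Ex. 3.7.9 it produces the 12-symbol result 4A2C3B5D1E2F.

```python
from itertools import groupby

def rle_compress(data):
    """Replace each run of a repeated symbol with its count followed by the symbol."""
    return "".join(f"{len(list(group))}{symbol}" for symbol, group in groupby(data))

print(rle_compress("AAAACCBBBDDDDDEFF"))   # 4A2C3B5D1E2F  (17 symbols -> 12 symbols)
```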
3.7.5  Comparison of Shannon-Fano and Huffman Coding :

Table 3.7.2 : Comparison of the Shannon-Fano and Huffman coding techniques

Sr. No. | Parameter             | Shannon-Fano (S-F)                                                        | Huffman coding (H.C.)
1.      | Definition            | It is a technique used for data compression.                              | It is also a technique used for data compression.
2.      | Basis of construction | It is a prefix code constructed on the basis of a set of symbols          | It is also a prefix code, constructed based on the set of symbols
        |                       | and their probabilities.                                                  | and their probabilities.
3.      | Encoding approach     | S-F uses a top-down approach for building its encoding tree.              | H.C. uses a bottom-to-top approach for its encoding tree.
4.      | Optimality            | S-F is suboptimal; it does not always generate an optimal code.           | H.C. always generates an optimal code.
5.      | Code length           | Does not always produce the shortest code words.                          | Always produces the shortest code words.
6.      | Efficiency            | Low.                                                                      | High.
7.      | Use                   | Not used frequently.                                                      | Used frequently.
Lempel-Ziv Coding :

Q.1  Write a note on Lempel-Ziv coding.                        (Dec. 19, 5 Marks)

-   The major disadvantage of the Huffman code is that the symbol probabilities must be known, or estimated if they are unknown.
-   In addition to this, the encoder and the decoder must both know the coding tree.
-   Moreover, when text messages are modelled using Huffman coding, the storage requirements do not allow the code to capture the higher order relationships between words and phrases.
-   So we have to compromise on the efficiency of the code.
-   These practical limitations of the Huffman code can be overcome by using the Lempel-Ziv algorithm.
-   The advantages of the Lempel-Ziv algorithm are its adaptability and its simplicity of implementation.

Principle of the Lempel-Ziv algorithm :
-   To illustrate the principle, let us consider the example of an input binary sequence specified as :
        0010101001
-   It is assumed that the binary symbols 0 and 1 have already been stored, in this order, in the code book. Hence we write,
        Subsequences stored :  0, 1
        Data to be parsed   :  0010101001                                ...(3.7.2)
-   Now start examining the data in Equation (3.7.2) from the left hand side and find the shortest subsequence which has not been encountered before.
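The parsing step being described (repeatedly taking the shortest subsequence not seen before and adding it to the code book) can be sketched as shown below. This is an illustrative LZ78-style parser under my own naming, not necessarily the book's exact procedure, and it only shows the phrase/code-book side of the algorithm, not the final binary encoding of the parsed phrases.

```python
def lz_parse(data, initial=("0", "1")):
    """Parse the data into successive shortest subsequences that are not already
    in the code book, starting from a code book that (as in the text) already
    contains the single symbols 0 and 1. Returns the grown code book."""
    codebook = list(initial)
    phrase = ""
    for bit in data:
        phrase += bit
        if phrase not in codebook:       # shortest subsequence not encountered so far
            codebook.append(phrase)      # store it and start a fresh phrase
            phrase = ""
    return codebook

print(lz_parse("0010101001"))            # ['0', '1', '00', '10', '101', '001']
```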