
Advanced Electronic Communications Systems
Wayne Tomasi
Sixth Edition

Pearson New International Edition
ISBN 978-1-29202-735-7
Pearson Education Limited
Edinburgh Gate
Harlow
Essex CM20 2JE
England and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsoned.co.uk

© Pearson Education Limited 2014

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted
in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the
prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom
issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS.

All trademarks used herein are the property of their respective owners. The use of any trademark
in this text does not vest in the author or publisher any trademark ownership rights in such
trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this
book by such owners.

ISBN 10: 1-292-02735-5


ISBN 10: 1-269-37450-8
ISBN 13: 978-1-292-02735-7
ISBN 13: 978-1-269-37450-7

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library

Printed in the United States of America


Pearson Custom Library

Table of Contents

1. Optical Fiber Transmission Media


Wayne Tomasi 1
2. Digital Modulation
Wayne Tomasi 49
3. Introduction to Data Communications and Networking
Wayne Tomasi 111
4. Fundamental Concepts of Data Communications
Wayne Tomasi 149
5. Data-Link Protocols and Data Communications Networks
Wayne Tomasi 213
6. Digital Transmission
Wayne Tomasi 277
7. Digital T-Carriers and Multiplexing
Wayne Tomasi 323
8. Telephone Instruments and Signals
Wayne Tomasi 383
9. The Telephone Circuit
Wayne Tomasi 405
10. The Public Telephone Network
Wayne Tomasi 439
11. Cellular Telephone Concepts
Wayne Tomasi 469
12. Cellular Telephone Systems
Wayne Tomasi 491
13. Microwave Radio Communications and System Gain
Wayne Tomasi 529

14. Satellite Communications
Wayne Tomasi 565
Index 609

Optical Fiber Transmission Media

CHAPTER OUTLINE

1 Introduction
2 History of Optical Fiber Communications
3 Optical Fibers versus Metallic Cable Facilities
4 Electromagnetic Spectrum
5 Block Diagram of an Optical Fiber Communications System
6 Optical Fiber Types
7 Light Propagation
8 Optical Fiber Configurations
9 Optical Fiber Classifications
10 Losses in Optical Fiber Cables
11 Light Sources
12 Optical Sources
13 Light Detectors
14 Lasers
15 Optical Fiber System Link Budget

OBJECTIVES

■ Define optical communications


■ Present an overview of the history of optical fibers and optical fiber communications
■ Compare the advantages and disadvantages of optical fibers over metallic cables
■ Define electromagnetic frequency and wavelength spectrum
■ Describe several types of optical fiber construction
■ Explain the physics of light and the following terms: velocity of propagation, refraction, refractive index, critical
angle, acceptance angle, acceptance cone, and numerical aperture
■ Describe how light waves propagate through an optical fiber cable
■ Define modes of propagation and index profile
■ Describe the three types of optical fiber configurations: single-mode step index, multimode step index, and mul-
timode graded index
■ Describe the various losses incurred in optical fiber cables
■ Define light source and optical power
■ Describe the following light sources: light-emitting diodes and injection laser diodes
■ Describe the following light detectors: PIN diodes and avalanche photodiodes
■ Describe the operation of a laser
■ Explain how to calculate a link budget for an optical fiber system

From Chapter 1 of Advanced Electronic Communications Systems, Sixth Edition. Wayne Tomasi.
Copyright © 2004 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.

1 INTRODUCTION

Optical fiber cables are the newest and probably the most promising type of guided trans-
mission medium for virtually all forms of digital and data communications applications, in-
cluding local, metropolitan, and wide area networks. With optical fibers, electromagnetic
waves are guided through a media composed of a transparent material without using elec-
trical current flow. With optical fibers, electromagnetic light waves propagate through the
media in much the same way that radio signals propagate through Earth’s atmosphere.
In essence, an optical communications system is one that uses light as the carrier of
information. Propagating light waves through Earth’s atmosphere is difficult and often im-
practical. Consequently, optical fiber communications systems use glass or plastic fiber ca-
bles to “contain” the light waves and guide them in a manner similar to the way electro-
magnetic waves are guided through a metallic transmission medium.
The information-carrying capacity of any electronic communications system is di-
rectly proportional to bandwidth. Optical fiber cables have, for all practical purposes, an in-
finite bandwidth. Therefore, they have the capacity to carry much more information than
their metallic counterparts or, for that matter, even the most sophisticated wireless commu-
nications systems.
For comparison purposes, it is common to express the bandwidth of an analog com-
munications system as a percentage of its carrier frequency. This is sometimes called the
bandwidth utilization ratio. For instance, a VHF radio communications system operating at
a carrier frequency of 100 MHz with 10-MHz bandwidth has a bandwidth utilization ratio
of 10%. A microwave radio system operating at a carrier frequency of 10 GHz with a 10%
bandwidth utilization ratio would have 1 GHz of bandwidth available. Obviously, the
higher the carrier frequency, the more bandwidth available, and the greater the information-
carrying capacity. Light frequencies used in optical fiber communications systems are be-
tween 1 × 10¹⁴ Hz and 4 × 10¹⁴ Hz (100,000 GHz to 400,000 GHz). A bandwidth utiliza-
tion ratio of 10% would be a bandwidth between 10,000 GHz and 40,000 GHz.
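To make the comparison concrete, the short Python sketch below (not part of the original text; it simply reuses the values quoted above) computes the bandwidth available for a given carrier frequency and utilization ratio:

```python
def available_bandwidth(carrier_hz, utilization_ratio=0.10):
    """Bandwidth available for a given carrier frequency and utilization ratio."""
    return carrier_hz * utilization_ratio

# 100-MHz VHF carrier with a 10-MHz bandwidth -> 10% utilization ratio
print(10e6 / 100e6)                                      # 0.1
# 10-GHz microwave carrier at 10% utilization -> 1 GHz of bandwidth
print(available_bandwidth(10e9))                         # 1e9 Hz
# Optical carriers of 1e14 Hz and 4e14 Hz at 10% utilization
print(available_bandwidth(1e14), available_bandwidth(4e14))  # 1e13 Hz to 4e13 Hz
```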

2 HISTORY OF OPTICAL FIBER COMMUNICATIONS

In 1880, Alexander Graham Bell experimented with an apparatus he called a photophone.


The photophone was a device constructed from mirrors and selenium detectors that trans-
mitted sound waves over a beam of light. The photophone was awkward and unreliable and
had no real practical application. Actually, visual light was a primary means of communi-
cating long before electronic communications came about. Smoke signals and mirrors were
used ages ago to convey short, simple messages. Bell’s contraption, however, was the first
attempt at using a beam of light for carrying information.
Transmission of light waves for any useful distance through Earth’s atmosphere is im-
practical because water vapor, oxygen, and particulates in the air absorb and attenuate the
signals at light frequencies. Consequently, the only practical type of optical communica-
tions system is one that uses a fiber guide. In 1930, J. L. Baird, an English scientist, and C.
W. Hansell, a scientist from the United States, were granted patents for scanning and trans-
mitting television images through uncoated fiber cables. A few years later, a German sci-
entist named H. Lamm successfully transmitted images through a single glass fiber. At that
time, most people considered fiber optics more of a toy or a laboratory stunt and, conse-
quently, it was not until the early 1950s that any substantial breakthrough was made in the
field of fiber optics.
In 1951, A. C. S. van Heel of Holland and H. H. Hopkins and N. S. Kapany of En-
gland experimented with light transmission through bundles of fibers. Their studies led to
the development of the flexible fiberscope, which is used extensively in the medical field.
It was Kapany who coined the term “fiber optics” in 1956.


In 1958, Charles H. Townes, an American, and Arthur L. Schawlow, a Canadian,


wrote a paper describing how it was possible to use stimulated emission for amplifying light
waves (laser) as well as microwaves (maser). Two years later, Theodore H. Maiman, a sci-
entist with Hughes Aircraft Company, built the first optical maser.
The laser (light amplification by stimulated emission of radiation) was invented in
1960. The laser’s relatively high output power, high frequency of operation, and capability
of carrying an extremely wide bandwidth signal make it ideally suited for high-capacity
communications systems. The invention of the laser greatly accelerated research efforts in
fiber-optic communications, although it was not until 1967 that K. C. Kao and G. A. Hockham
of the Standard Telecommunications Laboratories in England proposed a new communications
medium using cladded fiber cables.
The fiber cables available in the 1960s were extremely lossy (more than 1000 dB/km),
which limited optical transmissions to short distances. In 1970, Kapron, Keck, and Maurer
of Corning Glass Works in Corning, New York, developed an optical fiber with losses less
than 2 dB/km. That was the “big” breakthrough needed to permit practical fiber optics com-
munications systems. Since 1970, fiber optics technology has grown exponentially. Re-
cently, Bell Laboratories successfully transmitted 1 billion bps through a fiber cable for 600
miles without a regenerator.
In the late 1970s and early 1980s, the refinement of optical cables and the development
of high-quality, affordable light sources and detectors opened the door to the development of
high-quality, high-capacity, efficient, and affordable optical fiber communications systems. By
the late 1980s, losses in optical fibers were reduced to as low as 0.16 dB/km, and in 1988 NEC
Corporation set a new long-haul transmission record by transmitting 10 gigabits per second
over 80.1 kilometers of optical fiber. Also in 1988, the American National Standards Institute
(ANSI) published the Synchronous Optical Network (SONET) standard. By the mid-1990s, optical voice
and data networks were commonplace throughout the United States and much of the world.

3 OPTICAL FIBERS VERSUS METALLIC CABLE FACILITIES

Communications through glass or plastic fibers has several advantages over conven-
tional metallic transmission media for both telecommunication and computer networking
applications.

3-1 Advantages of Optical Fiber Cables


The advantages of using optical fibers include the following:
1. Wider bandwidth and greater information capacity. Optical fibers have greater in-
formation capacity than metallic cables because of the inherently wider bandwidths avail-
able with optical frequencies. Optical fibers are available with bandwidths up to several
thousand gigahertz. The primary electrical constants (resistance, inductance, and capaci-
tance) in metallic cables cause them to act like low-pass filters, which limit their transmis-
sion frequencies, bandwidth, bit rate, and information-carrying capacity. Modern optical
fiber communications systems are capable of transmitting several gigabits per second over
hundreds of miles, allowing literally millions of individual voice and data channels to be
combined and propagated over one optical fiber cable.
2. Immunity to crosstalk. Optical fiber cables are immune to crosstalk because glass
and plastic fibers are nonconductors of electrical current. Therefore, fiber cables are not sur-
rounded by a changing magnetic field, which is the primary cause of crosstalk between
metallic conductors located physically close to each other.
3. Immunity to static interference. Because optical fiber cables are nonconductors of
electrical current, they are immune to static noise due to electromagnetic interference
(EMI) caused by lightning, electric motors, relays, fluorescent lights, and other electrical


noise sources (most of which are man-made). For the same reason, fiber cables do not ra-
diate electromagnetic energy.
4. Environmental immunity. Optical fiber cables are more resistant to environmen-
tal extremes (including weather variations) than metallic cables. Optical cables also oper-
ate over a wider temperature range and are less affected by corrosive liquids and gases.
5. Safety and convenience. Optical fiber cables are safer and easier to install and
maintain than metallic cables. Because glass and plastic fibers are nonconductors, there are
no electrical currents or voltages associated with them. Optical fibers can be used around
volatile liquids and gases without worrying about their causing explosions or fires. Optical
fibers are also smaller and much more lightweight and compact than metallic cables.
Consequently, they are more flexible, easier to work with, require less storage space, are
cheaper to transport, and are easier to install and maintain.
6. Lower transmission loss. Optical fibers have considerably less signal loss than
their metallic counterparts. Optical fibers are currently being manufactured with as lit-
tle as a few-tenths-of-a-decibel loss per kilometer. Consequently, optical regenerators
and amplifiers can be spaced considerably farther apart than with metallic transmission
lines.
7. Security. Optical fiber cables are more secure than metallic cables. It is virtually
impossible to tap into a fiber cable without the user’s knowledge, and optical cables cannot
be detected with metal detectors unless they are reinforced with steel for strength.
8. Durability and reliability. Optical fiber cables last longer and are more reliable
than metallic facilities because fiber cables have a higher tolerance to changes in environ-
mental conditions and are immune to corrosive materials.
9. Economics. The cost of optical fiber cables is approximately the same as metallic
cables. Fiber cables have less loss and require fewer repeaters, which equates to lower in-
stallation and overall system costs and improved reliability.

3-2 Disadvantages of Optical Fiber Cables


Although the advantages of optical fiber cables far exceed the disadvantages, it is impor-
tant to know the limitations of the fiber. The disadvantages of optical fibers include the
following:
1. Interfacing costs. Optical fiber cable systems are virtually useless by themselves.
To be practical and useful, they must be connected to standard electronic facilities, which
often require expensive interfaces.
2. Strength. Optical fibers by themselves have a significantly lower tensile strength
than coaxial cable. This can be improved by coating the fiber with standard Kevlar and a
protective jacket of PVC. In addition, glass fiber is much more fragile than copper wire,
making fiber less attractive where hardware portability is required.
3. Remote electrical power. Occasionally, it is necessary to provide electrical power
to remote interface or regenerating equipment. This cannot be accomplished with the opti-
cal cable, so additional metallic cables must be included in the cable assembly.
4. Bending losses. Optical fiber cables are more susceptible to losses introduced by bending
the cable. Electromagnetic waves propagate through an optical cable by either refraction or re-
flection. Therefore, bending the cable causes irregularities in the cable dimensions, result-
ing in a loss of signal power. Optical fibers are also more prone to manufacturing defects,
as even the most minor defect can cause excessive loss of signal power.
5. Specialized tools, equipment, and training. Optical fiber cables require special
tools to splice and repair cables and special test equipment to make routine measurements.
Not only is repairing fiber cables difficult and expensive, but technicians working on opti-
cal cables also require special skills and training. In addition, sometimes it is difficult to lo-
cate faults in optical cables because there is no electrical continuity.


[Figure 1 plots the total electromagnetic frequency spectrum on a logarithmic axis from roughly 10⁰ Hz to 10²² Hz (labeled in kHz, MHz, GHz, THz, PHz, and EHz), spanning subsonic, audio, and ultrasonic frequencies; AM radio; FM radio and terrestrial television; satellite, microwave, and radar frequencies; infrared, visible, and ultraviolet light; X-rays; gamma rays; and cosmic rays.]

FIGURE 1 Electromagnetic frequency spectrum

4 ELECTROMAGNETIC SPECTRUM

The total electromagnetic frequency spectrum is shown in Figure 1. From the figure, it can be
seen that the frequency spectrum extends from the subsonic frequencies (a few hertz) to cos-
mic rays (10²² Hz). The light frequency spectrum can be divided into three general bands:
1. Infrared. The band of light frequencies that is too low in frequency to be seen by the human
eye, with wavelengths ranging between 770 nm and 10⁶ nm. Optical fiber systems
generally operate in the infrared band.
2. Visible. The band of light frequencies to which the human eye will respond, with wave-
lengths ranging between 390 nm and 770 nm. This band is visible to the human eye.
3. Ultraviolet. The band of light frequencies that is too high in frequency to be seen by the hu-
man eye, with wavelengths ranging between 10 nm and 390 nm.
When dealing with ultra-high-frequency electromagnetic waves, such as light, it is
common to use units of wavelength rather than frequency. Wavelength is the length that one
cycle of an electromagnetic wave occupies in space. The length of a wavelength depends
on the frequency of the wave and the velocity of light. Mathematically, wavelength is
λ = c / f     (1)

where λ = wavelength (meters/cycle)
      c = velocity of light (300,000,000 meters per second)
      f = frequency (hertz)
With light frequencies, wavelength is often stated in microns, where 1 micron = 10⁻⁶
meter (1 μm), or in nanometers (nm), where 1 nm = 10⁻⁹ meter. However, when describ-
ing the optical spectrum, the unit angstrom is sometimes used to express wavelength, where
1 angstrom = 10⁻¹⁰ meter, or 0.0001 micron. Figure 2 shows the total electromagnetic
wavelength spectrum.
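The following Python sketch (illustrative only; the 3 × 10¹⁴ Hz test frequency is an assumed value within the optical range quoted earlier) applies Equation 1 and converts the result into microns, nanometers, and angstroms:

```python
C = 3.0e8  # velocity of light, meters per second

def wavelength_m(frequency_hz):
    """Equation 1: wavelength (meters) = c / f."""
    return C / frequency_hz

lam = wavelength_m(3e14)          # a 3 x 10^14 Hz light wave
print(lam)                        # 1e-06 m
print(lam * 1e6, "microns")       # 1.0 micron
print(lam * 1e9, "nm")            # 1000 nm
print(lam * 1e10, "angstroms")    # 10000 angstroms
```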

5 BLOCK DIAGRAM OF AN OPTICAL FIBER


COMMUNICATIONS SYSTEM

Figure 3 shows a simplified block diagram of a simplex optical fiber communications link.
The three primary building blocks are the transmitter, the receiver, and the optical fiber ca-
ble. The transmitter is comprised of a voltage-to-current converter, a light source, and a
source-to-fiber interface (light coupler). The fiber guide is the transmission medium, which

[Figure 2 plots the total electromagnetic wavelength spectrum, with wavelength marked in μm, Å, and nm. It shows the extreme, far, and near ultraviolet bands; the visible colors from violet (about 390 nm) through blue, green, yellow, orange, and red (about 770 nm); the near, middle, and far infrared bands; and the surrounding regions from cosmic rays, gamma rays, and X-rays out to microwaves, radio waves, and long electrical oscillations.]

FIGURE 2 Electromagnetic wavelength spectrum

[Figure 3: the simplex link runs from the information source through an analog or digital interface into the transmitter (voltage-to-current converter, light source, and source-to-fiber interface), along the optical fiber cable through an optional signal regenerator and a second cable section, and into the receiver (fiber-to-light detector interface, light detector, and current-to-voltage converter), which feeds an analog or digital interface at the destination.]

FIGURE 3 Optical fiber communications link


is either an ultrapure glass or a plastic cable. It may be necessary to add one or more re-
generators to the transmission medium, depending on the distance between the transmitter
and receiver. Functionally, the regenerator performs light amplification. However, in real-
ity the signal is not actually amplified; it is reconstructed. The receiver includes a fiber-to-light
detector interface (light coupler), a light detector, and a current-to-voltage converter.
In the transmitter, the light source can be modulated by a digital or an analog signal.
The voltage-to-current converter serves as an electrical interface between the input circuitry
and the light source. The light source is either an infrared light-emitting diode (LED) or an
injection laser diode (ILD). The amount of light emitted by either an LED or ILD is pro-
portional to the amount of drive current. Thus, the voltage-to-current converter converts an
input signal voltage to a current that is used to drive the light source. The light outputted by
the light source is directly proportional to the magnitude of the input voltage. In essence,
the light intensity is modulated by the input signal.
The source-to-fiber coupler (such as an optical lens) is a mechanical interface. Its
function is to couple light emitted by the light source into the optical fiber cable. The opti-
cal fiber consists of a glass or plastic fiber core surrounded by a cladding and then encap-
sulated in a protective jacket. The fiber-to-light detector-coupling device is also a mechan-
ical coupler. Its function is to couple as much light as possible from the fiber cable into the
light detector.
The light detector is generally a PIN (p-type-intrinsic-n-type) diode, an APD (ava-
lanche photodiode), or a phototransistor. All three of these devices convert light energy to
current. Consequently, a current-to-voltage converter is required to produce an output volt-
age proportional to the original source information. The current-to-voltage converter trans-
forms changes in detector current to changes in voltage.
The analog or digital interfaces are electrical interfaces that match impedances and
signal levels between the information source and destination to the input and output cir-
cuitry of the optical system.

6 OPTICAL FIBER TYPES

6-1 Optical Fiber Construction


The actual fiber portion of an optical cable is generally considered to include both the fiber
core and its cladding (see Figure 4). A special lacquer, silicone, or acrylate coating is gen-
erally applied to the outside of the cladding to seal and preserve the fiber's strength, helping
maintain the cable's attenuation characteristics.

[Figure 4: cross section of an optical fiber cable showing, from the outside in, the polyurethane outer jacket, strength members, buffer jacket, protective coating, and the fiber core and cladding.]

FIGURE 4 Optical fiber cable construction

The coating also helps protect the fiber
from moisture, which reduces the possibility of the occurrence of a detrimental phenome-
non called stress corrosion (sometimes called static fatigue) caused by high humidity.
Moisture causes silicon dioxide crystals to interact, causing bonds to break down and spon-
taneous fractures to form over a prolonged period of time. The protective coating is sur-
rounded by a buffer jacket, which provides the cable additional protection against abrasion
and shock. Materials commonly used for the buffer jacket include steel, fiberglass, plastic,
flame-retardant polyvinyl chloride (FR-PVC), Kevlar yarn, and paper. The buffer jacket is
encapsulated in a strength member, which increases the tensile strength of the overall cable
assembly. Finally, the entire cable assembly is contained in an outer polyurethane jacket.
There are three essential types of optical fibers commonly used today. All three vari-
eties are constructed of either glass, plastic, or a combination of glass and plastic:

Plastic core and cladding


Glass core with plastic cladding (called PCS fiber [plastic-clad silica])
Glass core and glass cladding (called SCS [silica-clad silica])

Plastic fibers are more flexible and, consequently, more rugged than glass. Therefore,
plastic cables are easier to install, can better withstand stress, are less expensive, and weigh
approximately 60% less than glass. However, plastic fibers have higher attenuation charac-
teristics and do not propagate light as efficiently as glass. Therefore, plastic fibers are lim-
ited to relatively short cable runs, such as within a single building.
Fibers with glass cores have less attenuation than plastic fibers, with PCS being
slightly better than SCS. PCS fibers are also less affected by radiation and, therefore, are
more immune to external interference. SCS fibers have the best propagation characteristics
and are easier to terminate than PCS fibers. Unfortunately, SCS fibers are the least rugged,
and they are more susceptible to increases in attenuation when exposed to radiation.
The selection of a fiber for a given application is a function of the specific system re-
quirements. There are always trade-offs based on the economics and logistics of a particu-
lar application.

6-1-1 Cable configurations. There are many different cable designs available today.
Figure 5 shows examples of several optical fiber cable configurations. With loose tube con-
struction (Figure 5a), each fiber is contained in a protective tube. Inside the tube, a
polyurethane compound encapsulates the fiber and prevents the intrusion of water. A phe-
nomenon called stress corrosion or static fatigue can result if the glass fiber is exposed to
long periods of high humidity. Silicon dioxide crystals interact with the moisture and cause
bonds to break down, causing spontaneous fractures to form over a prolonged period. Some
fiber cables have more than one protective coating to ensure that the fiber’s characteristics
do not alter if the fiber is exposed to extreme temperature changes. Surrounding the fiber’s
cladding is usually a coating of either lacquer, silicone, or acrylate that is typically applied
to seal and preserve the fiber’s strength and attenuation characteristics.
Figure 5b shows the construction of a constrained optical fiber cable. Surrounding the
fiber are a primary and a secondary buffer comprised of Kevlar yarn, which increases the
tensile strength of the cable and provides protection from external mechanical influences
that could cause fiber breakage or excessive optical attenuation. Again, an outer protective
tube is filled with polyurethane, which prevents moisture from coming into contact with the
fiber core.
Figure 5c shows a multiple-strand cable configuration, which includes a steel central
member and a layer of Mylar tape wrap to increase the cable’s tensile strength. Figure 5d
shows a ribbon configuration for a telephone cable, and Figure 5e shows both end and side
views of a PCS cable.


FIGURE 5 Fiber optic cable configurations: (a) loose tube construction; (b) constrained fiber;
(c) multiple strands; (d) telephone cable; (e) plastic-silica cable

As mentioned, one disadvantage of optical fiber cables is their lack of tensile


(pulling) strength, which can be as low as a pound. For this reason, the fiber must be rein-
forced with strengthening material so that it can withstand mechanical stresses it will typi-
cally undergo when being pulled and jerked through underground and overhead ducts and
hung on poles. Materials commonly used to strengthen and protect fibers from abrasion and
environmental stress are steel, fiberglass, plastic, FR-PVC (flame-retardant polyvinyl chlo-
ride), Kevlar yarn, and paper. The type of cable construction used depends on the perfor-
mance requirements of the system and both economic and environmental constraints.

7 LIGHT PROPAGATION

7-1 The Physics of Light


Although the performance of optical fibers can be analyzed completely by application of
Maxwell’s equations, this is necessarily complex. For most practical applications, geomet-
ric wave tracing may be used instead.


In 1860, James Clerk Maxwell theorized that electromagnetic radiation contained a


series of oscillating waves comprised of an electric and a magnetic field in quadrature (at
90° angles). However, in 1905, Albert Einstein and Max Planck showed that when light is
emitted or absorbed, it behaves like an electromagnetic wave and also like a particle, called
a photon, which possesses energy proportional to its frequency. This theory is known as
Planck’s law. Planck’s law describes the photoelectric effect, which states, “When visible
light or high-frequency electromagnetic radiation illuminates a metallic surface, electrons
are emitted.” The emitted electrons produce an electric current. Planck’s law is expressed
mathematically as
Ep = hf     (2)

where Ep = energy of the photon (joules)
      h = Planck's constant = 6.625 × 10⁻³⁴ J·s
      f = frequency of light (photon) emitted (hertz)

Photon energy may also be expressed in terms of wavelength. Substituting Equation 1 into Equation 2 yields

Ep = hf     (3a)

or   Ep = hc / λ     (3b)
An atom has several energy levels or states, the lowest of which is the ground state.
Any energy level above the ground state is called an excited state. If an atom in one energy
level decays to a lower energy level, the loss of energy (in electron volts) is emitted as a
photon of light. The energy of the photon is equal to the difference between the energy of
the two energy levels. The process of decaying from one energy level to another energy
level is called spontaneous decay or spontaneous emission.
Atoms can be irradiated by a light source whose energy is equal to the difference be-
tween ground level and an energy level. This can cause an electron to change from one en-
ergy level to another by absorbing light energy. The process of moving from one energy
level to another is called absorption. When making the transition from one energy level to
another, the atom absorbs a packet of energy (a photon). This process is similar to that of
emission.
The energy absorbed or emitted (photon) is equal to the difference between the two
energy levels. Mathematically,
Ep = E2 − E1     (4)

where Ep is the energy of the photon (joules).
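As a rough numerical illustration of Equations 2 and 3b (the 1300-nm test wavelength is an assumed value typical of optical fiber systems, not taken from the text), a Python sketch might look like this:

```python
H = 6.625e-34   # Planck's constant, joule-seconds
C = 3.0e8       # velocity of light, meters per second

def photon_energy_from_frequency(f_hz):
    """Equation 2: Ep = h * f."""
    return H * f_hz

def photon_energy_from_wavelength(lam_m):
    """Equation 3b: Ep = h * c / lambda."""
    return H * C / lam_m

print(photon_energy_from_wavelength(1300e-9))      # ~1.53e-19 J
print(photon_energy_from_frequency(C / 1300e-9))   # same result
```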

7-2 Optical Power


Light intensity is a rather complex concept that can be expressed in either photometric or
radiometric terms. Photometry is the science of measuring only light waves that are visible
to the human eye. Radiometry, on the other hand, measures light throughout the entire elec-
tromagnetic spectrum. In photometric terms, light intensity is generally described in terms
of luminous flux density and measured in lumens per unit area. Radiometric terms, how-
ever, are often more useful to engineers and technologists. In radiometric terms, optical
power measures the rate at which electromagnetic waves transfer light energy. In simple
terms, optical power is described as the flow of light energy past a given point in a speci-
fied time. Optical power is expressed mathematically as
P = d(energy) / d(time)     (5a)

or   P = dQ / dt     (5b)

where P = optical power (watts)
      dQ = instantaneous change in energy (joules)
      dt = instantaneous change in time (seconds)
Optical power is sometimes called radiant flux (φ), which is equivalent to joules per
second and is the same power that is measured electrically or thermally in watts. Radio-
metric terms are generally used with light sources with output powers ranging from tens
of microwatts to more than 100 milliwatts. Optical power is generally stated in decibels
relative to a defined power level, such as 1 mW (dBm) or 1 μW (dBμ). Mathematically
stated,
dBm = 10 log [ P(watts) / 0.001 (watts) ]     (6)

and   dBμ = 10 log [ P(watts) / 0.000001 (watts) ]     (7)

Example 1
Determine the optical power in dBm and dBμ for power levels of
a. 10 mW
b. 20 μW
Solution
a. Substituting into Equations 6 and 7 gives

dBm = 10 log (10 mW / 1 mW) = 10 dBm

dBμ = 10 log (10 mW / 1 μW) = 40 dBμ

b. Substituting into Equations 6 and 7 gives

dBm = 10 log (20 μW / 1 mW) = −17 dBm

dBμ = 10 log (20 μW / 1 μW) = 13 dBμ
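A short Python sketch can verify Example 1; the function names dbm() and dbu() are chosen here for illustration and are not part of the text:

```python
import math

def dbm(p_watts):
    """Equation 6: optical power in dB relative to 1 mW."""
    return 10 * math.log10(p_watts / 0.001)

def dbu(p_watts):
    """Equation 7: optical power in dB relative to 1 uW."""
    return 10 * math.log10(p_watts / 0.000001)

print(dbm(10e-3), dbu(10e-3))   # 10.0 dBm, 40.0 dBμ   -> matches Example 1a
print(dbm(20e-6), dbu(20e-6))   # ~-17.0 dBm, ~13.0 dBμ -> matches Example 1b
```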

7-3 Velocity of Propagation


In free space (a vacuum), electromagnetic energy, such as light waves, travels at ap-
proximately 300,000,000 meters per second (186,000 mi/s). Also, in free space the ve-
locity of propagation is the same for all light frequencies. However, it has been demon-
strated that electromagnetic waves travel slower in materials more dense than free
space and that all light frequencies do not propagate at the same velocity. When the ve-
locity of an electromagnetic wave is reduced as it passes from one medium to another
medium of denser material, the light ray changes direction or refracts (bends) toward
the normal. When an electromagnetic wave passes from a more dense material into a
less dense material, the light ray is refracted away from the normal. The normal is sim-
ply an imaginary line drawn perpendicular to the interface of the two materials at the
point of incidence.


FIGURE 6 Refraction of light: (a) light refraction; (b) prismatic refraction

7-3-1 Refraction. For light-wave frequencies, electromagnetic waves travel


through Earth’s atmosphere (air) at approximately the same velocity as through a vacuum
(i.e., the speed of light). Figure 6a shows how a light ray is refracted (bent) as it passes from
a less dense material into a more dense material. (Actually, the light ray is not bent; rather,
it changes direction at the interface.) Figure 6b shows how sunlight, which contains all light
frequencies (white light), is affected as it passes through a material that is more dense than
air. Refraction occurs at both air/glass interfaces. The violet wavelengths are refracted the
most, whereas the red wavelengths are refracted the least. The spectral separation of white
light in this manner is called prismatic refraction. It is this phenomenon that causes rain-
bows, where water droplets in the atmosphere act as small prisms that split the white sun-
light into the various wavelengths, creating a visible spectrum of color.

7-3-2 Refractive Index. The amount of bending or refraction that occurs at the in-
terface of two materials of different densities is quite predictable and depends on the re-
fractive indexes of the two materials. Refractive index is simply the ratio of the velocity of
propagation of a light ray in free space to the velocity of propagation of a light ray in a given
material. Mathematically, refractive index is
n = c / v     (8)

where n = refractive index (unitless)
      c = speed of light in free space (3 × 10⁸ meters per second)
      v = speed of light in a given material (meters per second)
Although the refractive index is also a function of frequency, the variation in most
light wave applications is insignificant and, thus, omitted from this discussion. The indexes
of refraction of several common materials are given in Table 1.

7-3-3 Snell’s law. How a light ray reacts when it meets the interface of two trans-
missive materials that have different indexes of refraction can be explained with Snell’s
law. A refractive index model for Snell’s law is shown in Figure 7. The angle of incidence
is the angle at which the propagating ray strikes the interface with respect to the normal,
and the angle of refraction is the angle formed between the propagating ray and the nor-
mal after the ray has entered the second medium. At the interface of medium 1 and medium
2, the incident ray may be refracted toward the normal or away from it, depending on
whether n1 is greater than or less than n2. Hence, the angle of refraction can be larger or

Table 1 Typical Indexes of Refraction

Material                     Index of Refraction*

Vacuum 1.0
Air 1.0003 (≈1)
Water 1.33
Ethyl alcohol 1.36
Fused quartz 1.46
Glass fiber 1.5–1.9
Diamond 2.0–2.42
Silicon 3.4
Gallium-arsenide 2.6
*Index of refraction is based on a wavelength of light emitted from a sodium flame (589 nm).

FIGURE 7 Refractive model for Snell’s law


FIGURE 8 Light ray refracted away from the normal

smaller than the angle of incidence, depending on the refractive indexes of the two materi-
als. Snell’s law stated mathematically is
n1 sin θ1 = n2 sin θ2     (9)

where n1 = refractive index of material 1 (unitless)
      n2 = refractive index of material 2 (unitless)
      θ1 = angle of incidence (degrees)
      θ2 = angle of refraction (degrees)
Figure 8 shows how a light ray is refracted as it travels from a more dense (higher
refractive index) material into a less dense (lower refractive index) material. It can be
seen that the light ray changes direction at the interface, and the angle of refraction is
greater than the angle of incidence. Consequently, when a light ray enters a less dense
material, the ray bends away from the normal. The normal is simply a line drawn per-
pendicular to the interface at the point where the incident ray strikes the interface.
Similarly, when a light ray enters a more dense material, the ray bends toward the
normal.

Example 2
In Figure 8, let medium 1 be glass and medium 2 be ethyl alcohol. For an angle of incidence of 30°,
determine the angle of refraction.
Solution From Table 1,

n1 (glass) = 1.5
n2 (ethyl alcohol) = 1.36

Rearranging Equation 9 and substituting for n1, n2, and θ1 gives us

(n1 / n2) sin θ1 = sin θ2

(1.5 / 1.36) sin 30° = 0.5514 = sin θ2

θ2 = sin⁻¹ 0.5514 = 33.47°

The result indicates that the light ray was refracted (bent) to an angle of 33.47° from the normal at the interface.
Because the light was traveling from a more dense material into a less dense material, the ray bent
away from the normal.
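Example 2 can be checked with a brief Python sketch of Equation 9 (the function name is illustrative only):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Equation 9 rearranged: theta2 = asin((n1/n2) * sin(theta1))."""
    s = (n1 / n2) * math.sin(math.radians(theta1_deg))
    return math.degrees(math.asin(s))

# Example 2: glass (n1 = 1.5) into ethyl alcohol (n2 = 1.36), 30-degree incidence
print(refraction_angle(1.5, 1.36, 30))   # ~33.47 degrees
```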


FIGURE 9 Critical angle refraction

7-3-4 Critical angle. Figure 9 shows a condition in which an incident ray is strik-
ing the glass/cladding interface at an angle (θ1) such that the angle of refraction (θ2) is 90°
and the refracted ray is along the interface. This angle of incidence is called the critical an-
gle (θc), which is defined as the minimum angle of incidence at which a light ray may strike
the interface of two media and result in an angle of refraction of 90° or greater. It is impor-
tant to note that the light ray must be traveling from a medium of higher refractive index to
a medium with a lower refractive index (i.e., glass into cladding). If the angle of refraction
is 90° or greater, the light ray is not allowed to penetrate the less dense material. Conse-
quently, total reflection takes place at the interface, and the angle of reflection is equal to
the angle of incidence. Critical angle can be represented mathematically by rearranging
Equation 9 as
sin θ1 = (n2 / n1) sin θ2

With θ2 = 90°, θ1 becomes the critical angle (θc), and

sin θc = (n2 / n1)(1),  so  sin θc = n2 / n1

and   θc = sin⁻¹ (n2 / n1)     (10)

where θc is the critical angle.
From Equation 10, it can be seen that the critical angle is dependent on the ratio of
the refractive indexes of the core and cladding. For example, a ratio n2/n1 = 0.77 produces
a critical angle of 50.4°, whereas a ratio n2/n1 = 0.625 yields a critical angle of 38.7°.
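A minimal Python sketch of Equation 10 reproduces these figures (the third call uses the glass-core and quartz-cladding indexes of 1.5 and 1.46 quoted later in the chapter; the function name is illustrative):

```python
import math

def critical_angle(n1, n2):
    """Equation 10: theta_c = asin(n2 / n1), valid only when n1 > n2."""
    return math.degrees(math.asin(n2 / n1))

print(critical_angle(1.0, 0.77))    # ratio n2/n1 = 0.77  -> ~50.4 degrees
print(critical_angle(1.0, 0.625))   # ratio n2/n1 = 0.625 -> ~38.7 degrees
print(critical_angle(1.5, 1.46))    # glass core / quartz cladding -> ~76.7 degrees
```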
Figure 10 shows a comparison of the angle of refraction and the angle of reflection
when the angle of incidence is less than or more than the critical angle.

7-3-5 Acceptance angle, acceptance cone, and numerical aperture. In previous


discussions, the source-to-fiber aperture was mentioned several times, and the critical and
acceptance angles at the point where a light ray strikes the core/cladding interface were ex-
plained. The following discussion addresses the light-gathering ability of a fiber, which is
the ability to couple light from the source into the fiber.


FIGURE 10 Angle of reflection and refraction

FIGURE 11 Ray propagation into and down an optical fiber cable

Figure 11 shows the source end of a fiber cable and a light ray propagating into and
then down the fiber. When light rays enter the core of the fiber, they strike the air/glass in-
terface at normal A. The refractive index of air is approximately 1, and the refractive index
of the glass core is 1.5. Consequently, the light enters the cable traveling from a less dense
to a more dense medium, causing the ray to refract toward the normal. This causes the light
rays to change direction and propagate diagonally down the core at an angle that is less than
the external angle of incidence (θin). For a ray of light to propagate down the cable, it must
strike the internal core/cladding interface at an angle that is greater than the critical angle
(θc). Using Figure 12 and Snell’s law, it can be shown that the maximum angle that exter-
nal light rays may strike the air/glass interface and still enter the core and propagate down
the fiber is


FIGURE 12 Geometric relationship of Equations 11a and 11b

θin(max) = sin⁻¹ [√(n1² − n2²) / no]     (11a)

where θin(max) = acceptance angle (degrees)
      no = refractive index of air (1)
      n1 = refractive index of glass fiber core (1.5)
      n2 = refractive index of quartz fiber cladding (1.46)

Since the refractive index of air is 1, Equation 11a reduces to

θin(max) = sin⁻¹ √(n1² − n2²)     (11b)


θin(max) is called the acceptance angle or acceptance cone half-angle. θin(max) defines
the maximum angle in which external light rays may strike the air/glass interface and still
propagate down the fiber. Rotating the acceptance angle around the fiber core axis de-
scribes the acceptance cone of the fiber input. Acceptance cone is shown in Figure 13a, and
the relationship between acceptance angle and critical angle is shown in Figure 13b. Note
that the critical angle is defined as a minimum value and that the acceptance angle is de-
fined as a maximum value. Light rays striking the air/glass interface at an angle greater than
the acceptance angle will enter the cladding and, therefore, will not propagate down the
cable.
Numerical aperture (NA) is closely related to acceptance angle and is the figure of
merit commonly used to measure the magnitude of the acceptance angle. In essence, nu-
merical aperture is used to describe the light-gathering or light-collecting ability of an op-
tical fiber (i.e., the ability to couple light into the cable from an external source). The larger
the magnitude of the numerical aperture, the greater the amount of external light the fiber
will accept. The numerical aperture for light entering the glass fiber from an air medium is
described mathematically as
NA = sin θin     (12a)

and   NA = √(n1² − n2²)     (12b)

Therefore,   θin = sin⁻¹ NA     (12c)

where θin = acceptance angle (degrees)
      NA = numerical aperture (unitless)
      n1 = refractive index of glass fiber core (unitless)
      n2 = refractive index of quartz fiber cladding (unitless)
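Using the representative indexes from Equation 11a (glass core n1 = 1.5, quartz cladding n2 = 1.46), a short Python sketch of Equations 12a through 12c gives the numerical aperture and acceptance angle; the function names are illustrative only:

```python
import math

def numerical_aperture(n1, n2):
    """Equation 12b: NA = sqrt(n1^2 - n2^2)."""
    return math.sqrt(n1**2 - n2**2)

def acceptance_angle(n1, n2):
    """Equation 12c (air medium): theta_in = asin(NA)."""
    return math.degrees(math.asin(numerical_aperture(n1, n2)))

# Glass core (n1 = 1.5) with quartz cladding (n2 = 1.46)
print(numerical_aperture(1.5, 1.46))   # ~0.344
print(acceptance_angle(1.5, 1.46))     # ~20.1 degrees
```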


FIGURE 13 (a) Acceptance angle; (b) acceptance cone

A larger-diameter core does not necessarily produce a larger numerical aperture, al-
though in practice larger-core fibers tend to have larger numerical apertures. Numerical
aperture can be calculated using Equations 12a or b, but in practice it is generally measured
by looking at the output of a fiber because the light-guiding properties of a fiber cable are
symmetrical. Therefore, light leaves a cable and spreads out over an angle equal to the ac-
ceptance angle.

8 OPTICAL FIBER CONFIGURATIONS

Light can be propagated down an optical fiber cable using either reflection or refraction. How
the light propagates depends on the mode of propagation and the index profile of the fiber.

8-1 Mode of Propagation


In fiber optics terminology, the word mode simply means path. If there is only one path for
light rays to take down a cable, it is called single mode. If there is more than one path, it is
called multimode. Figure 14 shows single and multimode propagation of light rays down an
optical fiber. As shown in Figure 14a, with single-mode propagation, there is only one


FIGURE 14 Modes of propagation: (a) single mode; (b) multimode

path for light rays to take, which is directly down the center of the cable. However, as Figure
14b shows, with multimode propagation there are many higher-order modes possible, and
light rays propagate down the cable in a zigzag fashion following several paths.
The number of paths (modes) possible for a multimode fiber cable depends on the fre-
quency (wavelength) of the light signal, the refractive indexes of the core and cladding, and
the core diameter. Mathematically, the number of modes possible for a given cable can be
approximated by the following formula:

N ≈ [(πd/λ) √(n1² − n2²)]² / 2     (13)

where N = number of propagating modes
      d = core diameter (meters)
      λ = wavelength (meters)
      n1 = refractive index of core
      n2 = refractive index of cladding
A multimode step-index fiber with a core diameter of 50 μm, a core refractive index of 1.6,
a cladding refractive index of 1.584, and a wavelength of 1300 nm has approximately 372
possible modes.
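A small Python sketch of Equation 13 (the function name is illustrative, not from the text) reproduces the 372-mode result:

```python
import math

def mode_count(d_core_m, wavelength_m, n1, n2):
    """Equation 13: N ~ ((pi*d/lambda) * sqrt(n1^2 - n2^2))^2 / 2."""
    v = (math.pi * d_core_m / wavelength_m) * math.sqrt(n1**2 - n2**2)
    return v**2 / 2

# 50-um core, n1 = 1.6, n2 = 1.584, 1300-nm light
print(round(mode_count(50e-6, 1300e-9, 1.6, 1.584)))   # ~372
```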

8-2 Index Profile


The index profile of an optical fiber is a graphical representation of the magnitude of the
refractive index across the fiber. The refractive index is plotted on the horizontal axis, and
the radial distance from the core axis is plotted on the vertical axis. Figure 15 shows the
core index profiles for the three types of optical fiber cables.
There are two basic types of index profiles: step and graded. A step-index fiber has a
central core with a uniform refractive index (i.e., constant density throughout). An outside
cladding that also has a uniform refractive index surrounds the core; however, the refractive
index of the cladding is less than that of the central core. From Figures 15a and b, it can be
seen that in step-index fibers, there is an abrupt change in the refractive index at the
core/cladding interface. This is true for both single and multimode step-index fibers.


FIGURE 15 Core index profiles: (a) single-mode step index; (b) multimode step index;
(c) multimode graded index

In the graded-index fiber, shown in Figure 15c, it can be seen that there is no cladding,
and the refractive index of the core is nonuniform; it is highest in the center of the core and
decreases gradually with distance toward the outer edge. The index profile shows a core
density that is maximum in the center and decreases symmetrically with distance from the
center.

9 OPTICAL FIBER CLASSIFICATIONS

Propagation modes can be categorized as either multimode or single mode, and then mul-
timode can be further subdivided into step index or graded index. Although there are a wide
variety of combinations of modes and indexes, there are only three practical types of opti-
cal fiber configurations: single-mode step-index, multimode step index, and multimode
graded index.

9-1 Single-Mode Step-Index Optical Fiber


Single-mode step-index fibers are the dominant fibers used in today’s telecommunications
and data networking industries. A single-mode step-index fiber has a central core that is sig-
nificantly smaller in diameter than any of the multimode cables. In fact, the diameter is suf-
ficiently small that there is essentially only one path that light may take as it propagates
down the cable. This type of fiber is shown in Figure 16a. In the simplest form of single-
mode step-index fiber, the outside cladding is simply air. The refractive index of the glass
core (n1) is approximately 1.5, and the refractive index of the air cladding (n2) is 1. The large


FIGURE 16 Single-mode step-index fibers: (a) air cladding; (b) glass cladding

difference in the refractive indexes results in a small critical angle (approximately 42°) at
the glass/air interface. Consequently, a single-mode step-index fiber has a wide external ac-
ceptance angle, which makes it relatively easy to couple light into the cable from an exter-
nal source. However, this type of fiber is very weak and difficult to splice or terminate.
A more practical type of single-mode step-index fiber is one that has a cladding other
than air, such as the cable shown in Figure 16b. The refractive index of the cladding (n2) is
slightly less than that of the central core (n1) and is uniform throughout the cladding. This
type of cable is physically stronger than the air-clad fiber, but the critical angle is also much
higher (approximately 77°). This results in a small acceptance angle and a narrow source-to-
fiber aperture, making it much more difficult to couple light into the fiber from a light source.
With both types of single-mode step-index fibers, light is propagated down the fiber
through reflection. Light rays that enter the fiber either propagate straight down the core or,
perhaps, are reflected only a few times. Consequently, all light rays follow approximately
the same path down the cable and take approximately the same amount of time to travel the
length of the cable. This is one overwhelming advantage of single-mode step-index fibers,
as explained in more detail in a later section of this chapter.

9-2 Multimode Step-Index Optical Fiber


A multimode step-index optical fiber is shown in Figure 17. Multimode step-index fibers
are similar to the single-mode step-index fibers except the center core is much larger with
the multimode configuration. This type of fiber has a large light-to-fiber aperture and, con-
sequently, allows more external light to enter the cable. The light rays that strike the
core/cladding interface at an angle greater than the critical angle (ray A) are propagated
down the core in a zigzag fashion, continuously reflecting off the interface boundary. Light


FIGURE 17 Multimode step-index fiber

FIGURE 18 Multimode graded-index fiber

rays that strike the core/cladding interface at an angle less than the critical angle (ray B) en-
ter the cladding and are lost. It can be seen that there are many paths that a light ray may
follow as it propagates down the fiber. As a result, all light rays do not follow the same path
and, consequently, do not take the same amount of time to travel the length of the cable.

9-3 Multimode Graded-Index Optical Fiber


A multimode graded-index optical fiber is shown in Figure 18. Graded-index fibers are char-
acterized by a central core with a nonuniform refractive index. Thus, the cable’s density is
maximum at the center and decreases gradually toward the outer edge. Light rays propagate
down this type of fiber through refraction rather than reflection. As a light ray propagates di-
agonally across the core toward the center, it is continually intersecting a less dense to more
dense interface. Consequently, the light rays are constantly being refracted, which results in
a continuous bending of the light rays. Light enters the fiber at many different angles. As the
light rays propagate down the fiber, the rays traveling in the outermost area of the fiber travel
a greater distance than the rays traveling near the center. Because the refractive index de-
creases with distance from the center and the velocity is inversely proportional to refractive
index, the light rays traveling farthest from the center propagate at a higher velocity. Conse-
quently, they take approximately the same amount of time to travel the length of the fiber.

9-4 Optical Fiber Comparison


9-4-1 Single-mode step-index fiber. Advantages include the following:
1. Minimum dispersion: All rays propagating down the fiber take approximately the
same path; thus, they take approximately the same length of time to travel down
the cable. Consequently, a pulse of light entering the cable can be reproduced at
the receiving end very accurately.


2. Because of the high accuracy in reproducing transmitted pulses at the receive end,
wider bandwidths and higher information transmission rates (bps) are possible
with single-mode step-index fibers than with the other types of fibers.
Disadvantages include the following:
1. Because the central core is very small, it is difficult to couple light into and
out of this type of fiber. The source-to-fiber aperture is the smallest of all the
fiber types.
2. Again, because of the small central core, a highly directive light source, such as a
laser, is required to couple light into a single-mode step-index fiber.
3. Single-mode step-index fibers are expensive and difficult to manufacture.

9-4-2 Multimode step-index fiber. Advantages include the following:


1. Multimode step-index fibers are relatively inexpensive and simple to manufacture.
2. It is easier to couple light into and out of multimode step-index fibers because they
have a relatively large source-to-fiber aperture.

Disadvantages include the following:


1. Light rays take many different paths down the fiber, which results in large dif-
ferences in propagation times. Because of this, rays traveling down this type of
fiber have a tendency to spread out. Consequently, a pulse of light propagating
down a multimode step-index fiber is distorted more than with the other types
of fibers.
2. The bandwidths and information transfer rates possible with this type of
cable are less than those possible with the other types of fiber cables.

9-4-3 Multimode graded-index fiber. Essentially, there are no outstanding advan-


tages or disadvantages of this type of fiber. Multimode graded-index fibers are easier to cou-
ple light into and out of than single-mode step-index fibers but are more difficult than mul-
timode step-index fibers. Distortion due to multiple propagation paths is greater than in
single-mode step-index fibers but less than in multimode step-index fibers. This multimode
graded-index fiber is considered an intermediate fiber compared to the other fiber types.

10 LOSSES IN OPTICAL FIBER CABLES

Power loss in an optical fiber cable is probably the most important characteristic of the ca-
ble. Power loss is often called attenuation and results in a reduction in the power of the light
wave as it travels down the cable. Attenuation has several adverse effects on performance,
including reducing the system’s bandwidth, information transmission rate, efficiency, and
overall system capacity.
The standard formula for expressing the total power loss in an optical fiber cable is
A(dB) = 10 log (Pout / Pin)     (14)

where A(dB) = total reduction in power level, or attenuation (dB)
      Pout = cable output power (watts)
      Pin = cable input power (watts)
In general, multimode fibers tend to have more attenuation than single-mode cables,
primarily because of the increased scattering of the light wave produced from the dopants
in the glass. Table 2 shows output power as a percentage of input power for an optical


Table 2 % Output Power versus Loss in dB

Loss (dB) Output Power (%)

1 79
3 50
6 25
9 12.5
10 10
13 5
20 1
30 0.1
40 0.01
50 0.001

Table 3 Fiber Cable Attenuation

Cable Type      Core Diameter (μm)   Cladding Diameter (μm)   NA (unitless)   Attenuation (dB/km)

Single mode             8                   125                    —           0.5 at 1300 nm
                        5                   125                    —           0.4 at 1300 nm
Graded index           50                   125                   0.2          4 at 850 nm
                      100                   140                   0.3          5 at 850 nm
Step index            200                   380                   0.27         6 at 850 nm
                      300                   440                   0.27         6 at 850 nm
PCS                   200                   350                   0.3          10 at 790 nm
                      400                   550                   0.3          10 at 790 nm
Plastic                 —                   750                   0.5          400 at 650 nm
                        —                  1000                   0.5          400 at 650 nm

fiber cable with several values of decibel loss. A 3-dB cable loss reduces the output power
to 50% of the input power.
Attenuation of light propagating through glass depends on wavelength. The three
wavelength bands typically used for optical fiber communications systems are centered
around 0.85 microns, 1.30 microns, and 1.55 microns. For the kind of glass typically used for
optical communications systems, the 1.30-micron and 1.55-micron bands have less than 5%
loss per kilometer, while the 0.85-micron band experiences almost 20% loss per kilometer.
Although total power loss is of primary importance in an optical fiber cable, attenu-
ation is generally expressed in decibels of loss per unit length. Attenuation is expressed as
a positive dB value because by definition it is a loss. Table 3 lists attenuation in dB/km for
several types of optical fiber cables.
The optical power in watts measured at a given distance from a power source can be
determined mathematically as
P = Pt × 10^(−Al/10)     (15)

where P = measured power level (watts)
      Pt = transmitted power level (watts)
      A = cable power loss (dB/km)
      l = cable length (km)
Likewise, the optical power in decibel units is
P(dBm) = Pin(dBm) − Al(dB)     (16)

where P = measured power level (dBm)
      Pin = transmit power (dBm)
      Al = cable power loss, or attenuation (dB)


Example 3
For a single-mode optical cable with 0.25-dB/km loss, determine the optical power 100 km from a 0.1-mW light source.

Solution  Substituting into Equation 15 gives

P = 0.1 mW × 10^(−[(0.25)(100)]/10)
  = (1 × 10^−4)(1 × 10^−2.5)
  = 0.316 μW

and P(dBm) = 10 log(0.316 μW / 0.001 W)
           = −35 dBm

or, by substituting into Equation 16,

P(dBm) = 10 log(0.1 mW / 0.001 W) − [(100 km)(0.25 dB/km)]
       = −10 dBm − 25 dB
       = −35 dBm

Transmission losses in optical fiber cables are one of the most important characteristics of
the fibers. Losses in the fiber result in a reduction in the light power, thus reducing the sys-
tem bandwidth, information transmission rate, efficiency, and overall system capacity. The
predominant losses in optical fiber cables are the following:

Absorption loss
Material, or Rayleigh, scattering losses
Chromatic, or wavelength, dispersion
Radiation losses
Modal dispersion
Coupling losses

10-1 Absorption Losses


Absorption losses in optical fibers are analogous to power dissipation in copper cables; im-
purities in the fiber absorb the light and convert it to heat. The ultrapure glass used to man-
ufacture optical fibers is approximately 99.9999% pure. Still, absorption losses between 1
dB/km and 1000 dB/km are typical. Essentially, there are three factors that contribute to the
absorption losses in optical fibers: ultraviolet absorption, infrared absorption, and ion res-
onance absorption.

10-1-1 Ultraviolet absorption. Ultraviolet absorption is caused by valence electrons in the silica material from which fibers are manufactured. Light ionizes the valence electrons into conduction. The ionization is equivalent to a loss in the total light field and, consequently, contributes to the transmission losses of the fiber.

10-1-2 Infrared absorption. Infrared absorption is a result of photons of light that are absorbed by the atoms of the glass core molecules. The absorbed photons are converted to random mechanical vibrations typical of heating.

10-1-3 Ion resonance absorption. Ion resonance absorption is caused by OH⁻ ions in the material. The source of the OH⁻ ions is water molecules that have been trapped in the glass during the manufacturing process. Iron, copper, and chromium molecules also cause ion absorption.


FIGURE 19 Absorption losses in optical fibers

Figure 19 shows typical losses in optical fiber cables due to ultraviolet, infrared, and
ion resonance absorption.

10-2 Material, or Rayleigh, Scattering Losses


During manufacturing, glass is drawn into long fibers of very small diameter. During this
process, the glass is in a plastic state (not liquid and not solid). The tension applied to the
glass causes the cooling glass to develop permanent submicroscopic irregularities. When
light rays propagating down a fiber strike one of these impurities, they are diffracted. Dif-
fraction causes the light to disperse or spread out in many directions. Some of the dif-
fracted light continues down the fiber, and some of it escapes through the cladding. The
light rays that escape represent a loss in light power. This is called Rayleigh scattering
loss. Figure 20 graphically shows the relationship between wavelength and Rayleigh scat-
tering loss.

10-3 Chromatic, or Wavelength, Dispersion


Light-emitting diodes (LEDs) emit light containing many wavelengths. Each wavelength
within the composite light signal travels at a different velocity when propagating through
glass. Consequently, light rays that are simultaneously emitted from an LED and propagated
down an optical fiber do not arrive at the far end of the fiber at the same time, resulting in an
impairment called chromatic distortion (sometimes called wavelength dispersion). Chromatic
distortion can be eliminated by using a monochromatic light source such as an injection laser
diode (ILD). Chromatic distortion occurs only in fibers with a single mode of transmission.
10-4 Radiation Losses
Radiation losses are caused mainly by small bends and kinks in the fiber. Essentially, there
are two types of bends: microbends and constant-radius bends. Microbending occurs as a
result of differences in the thermal contraction rates between the core and the cladding ma-
terial. A microbend is a miniature bend or geometric imperfection along the axis of the fiber
and represents a discontinuity in the fiber where Rayleigh scattering can occur. Mi-
crobending losses generally contribute less than 20% of the total attenuation in a fiber.


FIGURE 20 Rayleigh scattering loss as a function of wavelength

Constant-radius bends are caused by excessive pressure and tension and generally occur
when fibers are bent during handling or installation.

10-5 Modal Dispersion


Modal dispersion (sometimes called pulse spreading) is caused by the difference in the
propagation times of light rays that take different paths down a fiber. Obviously, modal dis-
persion can occur only in multimode fibers. It can be reduced considerably by using graded-
index fibers and almost entirely eliminated by using single-mode step-index fibers.
Modal dispersion can cause a pulse of light energy to spread out in time as it propa-
gates down a fiber. If the pulse spreading is sufficiently severe, one pulse may interfere with
another. In multimode step-index fibers, a light ray propagating straight down the axis of
the fiber takes the least amount of time to travel the length of the fiber. A light ray that strikes
the core/cladding interface at the critical angle will undergo the largest number of internal
reflections and, consequently, take the longest time to travel the length of the cable.
For multimode propagation, dispersion is often expressed as a bandwidth length
product (BLP) or bandwidth distance product (BDP). BLP indicates what signal frequen-
cies can be propagated through a given distance of fiber cable and is expressed mathemat-
ically as the product of distance and bandwidth (sometimes called linewidth). Bandwidth
length products are often expressed in MHz·km units. As the length of an optical cable
increases, the bandwidth (and thus the bit rate) decreases in proportion.

Example 4
For a 300-meter optical fiber cable with a BLP of 600 MHz·km, determine the bandwidth.

Solution  B = 600 MHz·km / 0.3 km
            = 2 GHz
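The same calculation can be expressed as a one-line Python helper (illustrative only, not from the text):

```python
def bandwidth_mhz(blp_mhz_km, length_km):
    """Bandwidth supported by a fiber with a given bandwidth-length product."""
    return blp_mhz_km / length_km

print(bandwidth_mhz(600, 0.3))   # 2000 MHz = 2 GHz, as in Example 4
```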

Figure 21 shows three light rays propagating down a multimode step-index optical
fiber. The lowest-order mode (ray 1) travels in a path parallel to the axis of the fiber. The
middle-order mode (ray 2) bounces several times at the interface before traveling the length


FIGURE 21 Light propagation down a multimode step-index fiber

FIGURE 22 Light propagation down a single-mode step-index fiber

FIGURE 23 Light propagation down a multimode graded-index fiber

of the fiber. The highest-order mode (ray 3) makes many trips back and forth across the fiber
as it propagates the entire length. It can be seen that ray 3 travels a considerably longer dis-
tance than ray 1 over the length of the cable. Consequently, if the three rays of light were
emitted into the fiber at the same time, each ray would reach the far end at a different time,
resulting in a spreading out of the light energy with respect to time. This is called modal
dispersion and results in a stretched pulse that is also reduced in amplitude at the output of
the fiber.
Figure 22 shows light rays propagating down a single-mode step-index cable. Be-
cause the radial dimension of the fiber is sufficiently small, there is only a single transmis-
sion path that all rays must follow as they propagate down the length of the fiber. Conse-
quently, each ray of light travels the same distance in a given period of time, and modal
dispersion is virtually eliminated.
Figure 23 shows light propagating down a multimode graded-index fiber. Three
rays are shown traveling in three different modes. Although the three rays travel differ-
ent paths, they all take approximately the same amount of time to propagate the length
of the fiber. This is because the refractive index decreases with distance from the center,
and the velocity at which a ray travels is inversely proportional to the refractive index.


FIGURE 24 Pulse-width dispersion in an optical fiber cable

Consequently, the farther rays 2 and 3 travel from the center of the cable, the faster they
propagate.
Figure 24 shows the relative time/energy relationship of a pulse of light as it propa-
gates down an optical fiber cable. From the figure, it can be seen that as the pulse propa-
gates down the cable, the light rays that make up the pulse spread out in time, causing a cor-
responding reduction in the pulse amplitude and stretching of the pulse width. This is called
pulse spreading or pulse-width dispersion and causes errors in digital transmission. It can
also be seen that as light energy from one pulse falls back in time, it will interfere with the
next pulse, causing intersymbol interference.
Figure 25a shows a unipolar return-to-zero (UPRZ) digital transmission. With
UPRZ transmission (assuming a very narrow pulse), if light energy from pulse A were to
fall back (spread) one bit time (tb), it would interfere with pulse B and change what was a
logic 0 to a logic 1. Figure 25b shows a unipolar nonreturn-to-zero (UPNRZ) digital trans-
mission where each pulse is equal to the bit time. With UPNRZ transmission, if energy
from pulse A were to fall back one-half of a bit time, it would interfere with pulse B. Con-
sequently, UPRZ transmissions can tolerate twice as much delay or spread as UPNRZ
transmissions.
The difference between the absolute delay times of the fastest and slowest rays of light propagating down a fiber of unit length is called the pulse-spreading constant (Δt) and is generally expressed in nanoseconds per kilometer (ns/km). The total pulse spread (ΔT) is then equal to the pulse-spreading constant (Δt) times the total fiber length (L). Mathematically, ΔT is

ΔT(ns) = Δt(ns/km) × L(km)    (17)

For UPRZ transmissions, the maximum data transmission rate in bits per second (bps) is expressed as

fb(bps) = 1 / (Δt × L)    (18)


FIGURE 25 Pulse spreading of digital transmissions: (a) UPRZ; (b) UPNRZ

and for UPNRZ transmissions, the maximum transmission rate is

fb(bps) = 1 / (2Δt × L)    (19)

Example 5
For an optical fiber 10 km long with a pulse-spreading constant of 5 ns/km, determine the maximum
digital transmission rates for
a. Return-to-zero.
b. Nonreturn-to-zero transmissions.
Solution
a. Substituting into Equation 18 yields

fb = 1 / (5 ns/km × 10 km) = 20 Mbps

b. Substituting into Equation 19 yields

fb = 1 / (2 × 5 ns/km × 10 km) = 10 Mbps
The results indicate that the digital transmission rate possible for this optical fiber is twice as high (20
Mbps versus 10 Mbps) for UPRZ as for UPNRZ transmission.
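Equations 17 through 19 lend themselves to a quick numerical check. The Python sketch below is illustrative (the function names are not from the text) and reproduces Example 5:

```python
def pulse_spread_ns(delta_t_ns_per_km, length_km):
    """Equation 17: total pulse spread = pulse-spreading constant x fiber length."""
    return delta_t_ns_per_km * length_km

def max_rate_uprz_bps(delta_t_ns_per_km, length_km):
    """Equation 18: fb = 1 / (delta_t x L) for UPRZ transmission."""
    return 1.0 / (pulse_spread_ns(delta_t_ns_per_km, length_km) * 1e-9)

def max_rate_upnrz_bps(delta_t_ns_per_km, length_km):
    """Equation 19: fb = 1 / (2 x delta_t x L) for UPNRZ transmission."""
    return max_rate_uprz_bps(delta_t_ns_per_km, length_km) / 2

# Example 5: 10-km fiber, 5-ns/km pulse-spreading constant
print(max_rate_uprz_bps(5, 10) / 1e6)    # 20.0 Mbps
print(max_rate_upnrz_bps(5, 10) / 1e6)   # 10.0 Mbps
```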


FIGURE 26 Fiber alignment impairments: (a) lateral misalignment; (b) gap displacement; (c) angular misalignment; (d) surface finish

10-6 Coupling Losses


Coupling losses are caused by imperfect physical connections. In fiber cables, coupling
losses can occur at any of the following three types of optical junctions: light source-to-fiber
connections, fiber-to-fiber connections, and fiber-to-photodetector connections. Junction
losses are most often caused by one of the following alignment problems: lateral misalign-
ment, gap misalignment, angular misalignment, and imperfect surface finishes.

10-6-1 Lateral displacement. Lateral displacement (misalignment) is shown in Figure 26a and is the lateral or axial displacement between two pieces of adjoining fiber ca-
bles. The amount of loss can be from a couple tenths of a decibel to several decibels. This
loss is generally negligible if the fiber axes are aligned to within 5% of the smaller fiber’s
diameter.

10-6-2 Gap displacement (misalignment). Gap displacement (misalignment) is shown in Figure 26b and is sometimes called end separation. When splices are made in


optical fibers, the fibers should actually touch. The farther apart the fibers, the greater the
loss of light. If two fibers are joined with a connector, the ends should not touch because
the two ends rubbing against each other in the connector could cause damage to either or
both fibers.

10-6-3 Angular displacement (misalignment). Angular misalignment is shown in Figure 26c and is sometimes called angular displacement. If the angular displacement is less than 2°, the loss will typically be less than 0.5 dB.

10-6-4 Imperfect surface finish. Imperfect surface finish is shown in Figure 26d.
The ends of the two adjoining fibers should be highly polished and fit together squarely. If
the fiber ends are less than 3° off from perpendicular, the losses will typically be less than
0.5 dB.

11 LIGHT SOURCES

The range of light frequencies detectable by the human eye occupies a very narrow segment
of the total electromagnetic frequency spectrum. For example, blue light occupies the
higher frequencies (shorter wavelengths) of visible light, and red hues occupy the lower fre-
quencies (longer wavelengths). Figure 27 shows the light wavelength distribution produced
from a tungsten lamp and the range of wavelengths perceivable by the human eye. As the
figure shows, the human eye can detect only those lightwaves between approximately 380
nm and 780 nm. Furthermore, light consists of many shades of colors that are directly re-
lated to the heat of the energy being radiated. Figure 27 also shows that more visible light
is produced as the temperature of the lamp is increased.
Light sources used for optical fiber systems must be at wavelengths efficiently propa-
gated by the optical fiber. In addition, the range of wavelengths must be considered because
the wider the range, the more likely the chance that chromatic dispersion will occur. Light

FIGURE 27 Tungsten lamp radiation and human eye response (normalized response versus wavelength in nanometers, with tungsten radiation spectra shown for 2000°K, 2500°K, and 3400°K)


sources must also produce sufficient power to allow the light to propagate through the fiber
without causing distortion in the cable itself or in the receiver. Lastly, light sources must be
constructed so that their outputs can be efficiently coupled into and out of the optical cable.

12 OPTICAL SOURCES

There are essentially only two types of practical light sources used to generate light for op-
tical fiber communications systems: LEDs and ILDs. Both devices are constructed from
semiconductor materials and have advantages and disadvantages. Standard LEDs have
spectral widths of 30 nm to 50 nm, while injection lasers have spectral widths of only 1 nm
to 3 nm (1 nm corresponds to a frequency of about 178 GHz). Therefore, a 1320-nm light
source with a spectral linewidth of 0.0056 nm has a frequency bandwidth of approximately
1 GHz. Linewidth is the wavelength equivalent of bandwidth.
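The linewidth-to-bandwidth figures quoted above follow from the approximation Δf ≈ cΔλ/λ². A small Python sketch (illustrative only, not from the text) confirms both numbers:

```python
C = 3e8  # speed of light (m/s)

def linewidth_to_bandwidth_hz(center_nm, linewidth_nm):
    """Approximate frequency bandwidth: delta_f = c x delta_lambda / lambda^2."""
    lam = center_nm * 1e-9
    return C * (linewidth_nm * 1e-9) / lam ** 2

print(linewidth_to_bandwidth_hz(1300, 1) / 1e9)        # ~178 GHz for a 1-nm linewidth
print(linewidth_to_bandwidth_hz(1320, 0.0056) / 1e9)   # ~1 GHz
```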
Selection of one light-emitting device over the other is determined by system eco-
nomic and performance requirements. The higher cost of laser diodes is offset by higher
performance. LEDs typically have a lower cost and a corresponding lower performance.
However, LEDs are typically more reliable.

12-1 LEDs
An LED is a p-n junction diode, usually made from a semiconductor material such as aluminum-
gallium-arsenide (AlGaAs) or gallium-arsenide-phosphide (GaAsP). LEDs emit light by spon-
taneous emission—light is emitted as a result of the recombination of electrons and holes.
When forward biased, minority carriers are injected across the p-n junction. Once
across the junction, these minority carriers recombine with majority carriers and give up en-
ergy in the form of light. This process is essentially the same as in a conventional semi-
conductor diode except that in LEDs certain semiconductor materials and dopants are cho-
sen such that the process is radiative; that is, a photon is produced. A photon is a quantum
of electromagnetic wave energy. Photons are particles that travel at the speed of light but at
rest have no mass. In conventional semiconductor diodes (germanium and silicon, for ex-
ample), the process is primarily nonradiative, and no photons are generated. The energy gap
of the material used to construct an LED determines the color of light it emits and whether
the light emitted by it is visible to the human eye.
To produce LEDs, semiconductors are formed from materials with atoms having either three or five valence electrons (known as Group III and Group V atoms, respectively, because of their location in the periodic table of elements). To produce light wavelengths in the 800-nm range, LEDs are constructed from Group III atoms, such as gallium (Ga) and aluminum (Al), and a Group V atom, such as arsenic (As). The junction formed is commonly abbreviated GaAlAs for gallium-aluminum-arsenide. For longer wavelengths, gallium is combined with the Group III atom indium (In), and arsenic is combined with the Group V atom phosphorus (P), which forms a gallium-indium-arsenide-phosphide (GaInAsP) junction. Table 4 lists some of the common semiconductor materials used in LED construction and their respective output wavelengths.

Table 4 Semiconductor Material Wavelengths

Material Wavelength (nm)

AlGaInP 630–680
GaInP 670
GaAlAs 620–895
GaAs 904
InGaAs 980
InGaAsP 1100–1650
InGaAsSb 1700–4400


FIGURE 28 Homojunction LED structures: (a) silicon-doped gallium arsenide; (b) planar diffused

12-2 Homojunction LEDs


A p-n junction made from two different mixtures of the same types of atoms is called a ho-
mojunction structure. The simplest LED structures are homojunction and epitaxially grown,
or they are single-diffused semiconductor devices, such as the two shown in Figure 28. Epit-
axially grown LEDs are generally constructed of silicon-doped gallium-arsenide (Figure
28a). A typical wavelength of light emitted from this construction is 940 nm, and a typical
output power is approximately 2 mW (3 dBm) at 100 mA of forward current. Light waves
from homojunction sources do not produce a very useful light for an optical fiber. Light is
emitted in all directions equally; therefore, only a small amount of the total light produced
is coupled into the fiber. In addition, the ratio of electricity converted to light is very low.
Homojunction devices are often called surface emitters.
Planar diffused homojunction LEDs (Figure 28b) output approximately 500 μW at a
wavelength of 900 nm. The primary disadvantage of homojunction LEDs is the nondirec-
tionality of their light emission, which makes them a poor choice as a light source for op-
tical fiber systems.

12-3 Heterojunction LEDs


Heterojunction LEDs are made from a p-type semiconductor material of one set of
atoms and an n-type semiconductor material from another set. Heterojunction devices
are layered (usually two) such that the concentration effect is enhanced. This produces
a device that confines the electron and hole carriers and the light to a much smaller
area. The junction is generally manufactured on a substrate backing material and then
sandwiched between metal contacts that are used to connect the device to a source of
electricity.
With heterojunction devices, light is emitted from the edge of the material; such devices are
therefore often called edge emitters. A planar heterojunction LED (Figure 29) is quite sim-
ilar to the epitaxially grown LED except that the geometry is designed such that the forward
current is concentrated to a very small area of the active layer.
Heterojunction devices have the following advantages over homojunction devices:

The increase in current density generates a more brilliant light spot.


The smaller emitting area makes it easier to couple its emitted light into a fiber.
The small effective area has a smaller capacitance, which allows the planar hetero-
junction LED to be used at higher speeds.

Figure 30 shows the typical electrical characteristics for a low-cost infrared light-
emitting diode. Figure 30a shows the output power versus forward current. From the fig-
ure, it can be seen that the output power varies linearly over a wide range of input current


FIGURE 29 Planar heterojunction


LED

(0.5 mW [−3 dBm] at 20 mA to 3.4 mW [5.3 dBm] at 140 mA). Figure 30b shows output power versus temperature. It can be seen that the output power varies inversely with temperature over a range of −40°C to 80°C. Figure 30c shows relative output power with respect to output wavelength. For this particular example, the maximum output power is achieved at an output wavelength of 825 nm.

12-4 Burrus Etched-Well Surface-Emitting LED


For the more practical applications, such as telecommunications, data rates in excess of 100
Mbps are required. For these applications, the etched-well LED was developed. Burrus and
Dawson of Bell Laboratories developed the etched-well LED. It is a surface-emitting LED
and is shown in Figure 31. The Burrus etched-well LED emits light in many directions. The
etched well helps concentrate the emitted light to a very small area. Also, domed lenses can
be placed over the emitting surface to direct the light into a smaller area. These devices are
more efficient than the standard surface emitters, and they allow more power to be coupled
into the optical fiber, but they are also more difficult and expensive to manufacture.

12-5 Edge-Emitting LED


The edge-emitting LED, which was developed by RCA, is shown in Figure 32. These LEDs
emit a more directional light pattern than do the surface-emitting LEDs. The construction
is similar to the planar and Burrus diodes except that the emitting surface is a stripe rather
than a confined circular area. The light is emitted from an active stripe and forms an ellip-
tical beam. Surface-emitting LEDs are more commonly used than edge emitters because
they emit more light. However, the coupling losses with surface emitters are greater, and
they have narrower bandwidths.
The radiant light power emitted from an LED is a linear function of the forward cur-
rent passing through the device (Figure 33). It can also be seen that the optical output power
of an LED is, in part, a function of the operating temperature.

12-6 ILD
Lasers are constructed from many different materials, including gases, liquids, and solids,
although the type of laser used most often for fiber-optic communications is the semicon-
ductor laser.
The ILD is similar to the LED. In fact, below a certain threshold current, an ILD acts
similarly to an LED. Above the threshold current, an ILD oscillates; lasing occurs. As cur-
rent passes through a forward-biased p-n junction diode, light is emitted by spontaneous
emission at a frequency determined by the energy gap of the semiconductor material. When
a particular current level is reached, the number of minority carriers and photons produced
on either side of the p-n junction reaches a level where they begin to collide with already
excited minority carriers. This causes an increase in the ionization energy level and makes
the carriers unstable. When this happens, a typical carrier recombines with an opposite type

FIGURE 30 Typical LED electrical characteristics: (a) output power (mW) versus forward current (mA); (b) output power relative to 25°C versus temperature (°C); (c) relative output power versus wavelength (nm)


FIGURE 31 Burrus etched-well surface-emitting LED

FIGURE 32 Edge-emitting LED

FIGURE 33 Output power versus forward current and operating temperature for an LED

of carrier at an energy level that is above its normal before-collision value. In the process,
two photons are created; one is stimulated by another. Essentially, a gain in the number of
photons is realized. For this to happen, a large forward current that can provide many car-
riers (holes and electrons) is required.
The construction of an ILD is similar to that of an LED (Figure 34) except that the
ends are highly polished. The mirrorlike ends trap the photons in the active region and, as
they reflect back and forth, stimulate free electrons to recombine with holes at a higher-
than-normal energy level. This process is called lasing.


FIGURE 34 Injection laser diode construction

FIGURE 35 Output power versus forward current and temperature for an ILD

The radiant output light power of a typical ILD is shown in Figure 35. It can be
seen that very little output power is realized until the threshold current is reached; then
lasing occurs. After lasing begins, the optical output power increases dramatically,
with small increases in drive current. It can also be seen that the magnitude of the op-
tical output power of the ILD is more dependent on operating temperature than is the
LED.
Figure 36 shows the light radiation patterns typical of an LED and an ILD. Because
light is radiated out the end of an ILD in a narrow concentrated beam, it has a more direct
radiation pattern.
ILDs have several advantages over LEDs and some disadvantages. Advantages in-
clude the following:
ILDs emit coherent (orderly) light, whereas LEDs emit incoherent (disorderly) light.
Therefore, ILDs have a more direct radiation pattern, making it easier to couple light
emitted by the ILD into an optical fiber cable. This reduces the coupling losses and
allows smaller fibers to be used.


FIGURE 36 LED and ILD radiation patterns

The radiant output power from an ILD is greater than that for an LED. A typical out-
put power for an ILD is 5 mW (7 dBm) and only 0.5 mW (−3 dBm) for LEDs. This
allows ILDs to provide a higher drive power and to be used for systems that operate
over longer distances.
ILDs can be used at higher bit rates than LEDs.
ILDs generate monochromatic light, which reduces chromatic or wavelength dispersion.

Disadvantages include the following:

ILDs are typically 10 times more expensive than LEDs.


Because ILDs operate at higher powers, they typically have a much shorter lifetime
than LEDs.
ILDs are more temperature dependent than LEDs.

13 LIGHT DETECTORS

There are two devices commonly used to detect light energy in fiber-optic communications
receivers: PIN diodes and APDs.

13-1 PIN Diodes


A PIN diode is a depletion-layer photodiode and is probably the most common device used
as a light detector in fiber-optic communications systems. Figure 37 shows the basic con-
struction of a PIN diode. A very lightly doped (almost pure or intrinsic) layer of n-type semi-
conductor material is sandwiched between the junction of the two heavily doped n- and p-
type contact areas. Light enters the device through a very small window and falls on the
carrier-void intrinsic material. The intrinsic material is made thick enough so that most of the
photons that enter the device are absorbed by this layer. Essentially, the PIN photodiode op-
erates just the opposite of an LED. Most of the photons are absorbed by electrons in the va-
lence band of the intrinsic material. When the photons are absorbed, they add sufficient en-
ergy to generate carriers in the depletion region and allow current to flow through the device.

13-1-1 Photoelectric effect. Light entering through the window of a PIN diode is
absorbed by the intrinsic material and adds enough energy to cause electrons to move
from the valence band into the conduction band. The increase in the number of electrons
that move into the conduction band is matched by an increase in the number of holes in the


FIGURE 37 PIN photodiode construction

valence band. To cause current to flow in a photodiode, light of sufficient energy must be
absorbed to give valence electrons enough energy to jump the energy gap. The energy gap
for silicon is 1.12 eV (electron volts). Mathematically, the operation is as follows:
For silicon, the energy gap (Eg) equals 1.12 eV:
1 eV = 1.6 × 10^−19 J

Thus, the energy gap for silicon is

Eg = (1.12 eV)(1.6 × 10^−19 J/eV) = 1.792 × 10^−19 J

and energy (E) = hf    (20)

where h = Planck's constant = 6.6256 × 10^−34 J/Hz
      f = frequency (hertz)

Rearranging and solving for f yields

f = E / h    (21)

For a silicon photodiode,

f = (1.792 × 10^−19 J) / (6.6256 × 10^−34 J/Hz) = 2.705 × 10^14 Hz

Converting to wavelength yields

λ = c / f = (3 × 10^8 m/s) / (2.705 × 10^14 Hz) = 1109 nm/cycle
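The same arithmetic can be packaged as a small Python sketch (the constants match the values used above; the function names are illustrative) that returns the lowest detectable frequency and longest detectable wavelength for a given energy gap:

```python
H = 6.6256e-34    # Planck's constant (J/Hz), value used in the text
C = 3e8           # speed of light (m/s)
EV_TO_J = 1.6e-19

def cutoff_frequency_hz(energy_gap_ev):
    """Equation 21: lowest photon frequency that bridges the energy gap, f = E/h."""
    return energy_gap_ev * EV_TO_J / H

def cutoff_wavelength_nm(energy_gap_ev):
    """Longest wavelength the photodiode can detect, lambda = c/f."""
    return C / cutoff_frequency_hz(energy_gap_ev) * 1e9

print(cutoff_frequency_hz(1.12))    # ~2.705e14 Hz for silicon
print(cutoff_wavelength_nm(1.12))   # ~1109 nm
```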

13-2 APDs
Figure 38 shows the basic construction of an APD. An APD is a pipn structure. Light en-
ters the diode and is absorbed by the thin, heavily doped n-layer. A high electric field in-
tensity developed across the i-p-n junction by reverse bias causes impact ionization to oc-
cur. During impact ionization, a carrier can gain sufficient energy to ionize other bound
electrons. These ionized carriers, in turn, cause more ionizations to occur. The process con-
tinues as in an avalanche and is, effectively, equivalent to an internal gain or carrier multi-
plication. Consequently, APDs are more sensitive than PIN diodes and require less addi-
tional amplification. The disadvantages of APDs are relatively long transit times and
additional internally generated noise due to the avalanche multiplication factor.


FIGURE 38 Avalanche photodiode construction

FIGURE 39 Spectral response curve

13-3 Characteristics of Light Detectors


The most important characteristics of light detectors are the following:

1. Responsivity. A measure of the conversion efficiency of a photodetector. It is the


ratio of the output current of a photodiode to the input optical power and has the
unit of amperes per watt. Responsivity is generally given for a particular wave-
length or frequency.
2. Dark current. The leakage current that flows through a photodiode with no light
input. Thermally generated carriers in the diode cause dark current.
3. Transit time. The time it takes a light-induced carrier to travel across the depletion
region of a semiconductor. This parameter determines the maximum bit rate pos-
sible with a particular photodiode.
4. Spectral response. The range of wavelength values to which a given photodiode will
respond. Generally, relative spectral response is graphed as a function of wave-
length or frequency, as shown in Figure 39.
5. Light sensitivity. The minimum optical power a light detector can receive and still
produce a usable electrical output signal. Light sensitivity is generally given for a
particular wavelength in either dBm or dBμ.

14 LASERS

Laser is an acronym for light amplification by stimulated emission of radiation. Laser
technology deals with the concentration of light into a very small, powerful beam. The
acronym was chosen when technology shifted from microwaves to light waves. Basically,
there are four types of lasers: gas, liquid, solid, and semiconductor.


The first laser was developed by Theodore H. Maiman, a scientist who worked for
Hughes Aircraft Company in California. Maiman directed a beam of light into ruby crys-
tals with a xenon flashlamp and measured emitted radiation from the ruby. He discovered
that when the emitted radiation increased beyond threshold, it caused emitted radiation to
become extremely intense and highly directional. Uranium lasers were developed in 1960
along with other rare-earth materials. Also in 1960, A. Javan of Bell Laboratories developed
the helium-neon laser. Semiconductor lasers (injection laser diodes) were manufactured in 1962
by General Electric, IBM, and Lincoln Laboratories.

14-1 Laser Types


Basically, there are four types of lasers: gas, liquid, solid, and semiconductor.
1. Gas lasers. Gas lasers use a mixture of helium and neon enclosed in a glass tube.
A flow of coherent (one frequency) light waves is emitted through the output cou-
pler when an electric current is discharged into the gas. The continuous light-wave
output is monochromatic (one color).
2. Liquid lasers. Liquid lasers use organic dyes enclosed in a glass tube for an active
medium. Dye is circulated into the tube with a pump. A powerful pulse of light ex-
cites the organic dye.
3. Solid lasers. Solid lasers use a solid, cylindrical crystal, such as ruby, for the active
medium. Each end of the ruby is polished and parallel. The ruby is excited by a tung-
sten lamp tied to an ac power supply. The output from the laser is a continuous wave.
4. Semiconductor lasers. Semiconductor lasers are made from semiconductor p-n
junctions and are commonly called ILDs. The excitation mechanism is a dc power
supply that controls the amount of current to the active medium. The output light
from an ILD is easily modulated, making it very useful in many electronic com-
munications applications.

14-2 Laser Characteristics


All types of lasers have several common characteristics. They all use (1) an active material
to convert energy into laser light, (2) a pumping source to provide power or energy, (3) op-
tics to direct the beam through the active material to be amplified, (4) optics to direct the
beam into a narrow powerful cone of divergence, (5) a feedback mechanism to provide con-
tinuous operation, and (6) an output coupler to transmit power out of the laser.
The radiation of a laser is extremely intense and directional. When focused into a fine
hairlike beam, it can concentrate all its power into the narrow beam. If the beam of light
were allowed to diverge, it would lose most of its power.

14-3 Laser Construction


Figure 40 shows the construction of a basic laser. A power source is connected to a flash-
tube that is coiled around a glass tube that holds the active medium. One end of the glass
tube is a polished mirror face for 100% internal reflection. The flashtube is energized by
a trigger pulse and produces a high-level burst of light (similar to a flashbulb). The flash
causes the chromium atoms within the active crystalline structure to become excited. The
process of pumping raises the level of the chromium atoms from ground state to an excited
energy state. The ions then decay, falling to an intermediate energy level. When the pop-
ulation of ions in the intermediate level is greater than the ground state, a population in-
version occurs. The population inversion causes laser action (lasing) to occur. After a pe-
riod of time, the excited chromium atoms will fall to the ground energy level. At this time,
photons are emitted. A photon is a packet of radiant energy. The emitted photons strike
atoms and two other photons are emitted (hence the term “stimulated emission”). The fre-
quency of the energy determines the strength of the photons; higher frequencies cause
greater-strength photons.


FIGURE 40 Laser construction

15 OPTICAL FIBER SYSTEM LINK BUDGET

As with any communications system, optical fiber systems consist of a source and a desti-
nation that are separated by numerous components and devices that introduce various
amounts of loss or gain to the signal as it propagates through the system. Figure 41 shows
two typical optical fiber communications system configurations. Figure 41a shows a re-
peaterless system where the source and destination are interconnected through one or more
sections of optical cable. With a repeaterless system, there are no amplifiers or regenerators
between the source and destination.
Figure 41b shows an optical fiber system that includes a repeater that either amplifies
or regenerates the signal. Repeatered systems are obviously used when the source and des-
tination are separated by great distances.
Link budgets are generally calculated between a light source and a light detector;
therefore, for our example, we look at a link budget for a repeaterless system. A repeater-
less system consists of a light source, such as an LED or ILD, and a light detector, such as
an APD connected by optical fiber and connectors. Therefore, the link budget consists of a
light power source, a light detector, and various cable and connector losses. Losses typical
to optical fiber links include the following:
1. Cable losses. Cable losses depend on cable length, material, and material purity.
They are generally given in dB/km and can vary from a few tenths of a dB to
several dB per kilometer.
2. Connector losses. Mechanical connectors are sometimes used to connect two sec-
tions of cable. If the mechanical connection is not perfect, light energy can escape,
resulting in a reduction in optical power. Connector losses typically vary from
a few tenths of a dB to as much as 2 dB for each connector.
3. Source-to-cable interface loss. The mechanical interface used to house the light
source and attach it to the cable is seldom perfect. Therefore, a small percentage
of optical power is not coupled into the cable, representing a power loss to the sys-
tem of several tenths of a dB.
4. Cable-to-light detector interface loss. The mechanical interface used to house the
light detector and attach it to the cable is also not perfect and, therefore, prevents
a small percentage of the power leaving the cable from entering the light detector.
This, of course, represents a loss to the system usually of a few tenths of a dB.
5. Splicing loss. If more than one continuous section of cable is required, cable sec-
tions can be fused together (spliced). Because the splices are not perfect, losses
ranging from a couple tenths of a dB to several dB can be introduced to the signal.


FIGURE 41 Optical fiber communications systems: (a) without repeaters (signal source, optical transmitter [LED or ILD], fiber cable, optical receiver [APD], signal destination); (b) with repeaters (a repeater, an amplifier or regenerator, inserted between fiber cable sections)

6. Cable bends. When an optical cable is bent at too large an angle, the internal char-
acteristics of the cable can change dramatically. If the changes are severe, total re-
flections for some of the light rays may no longer be achieved, resulting in refrac-
tion. Light refracted at the core/cladding interface enters the cladding, resulting in
a net loss to the signal of a few tenths of a dB to several dB.
As with any link or system budget, the useful power available in the receiver depends
on transmit power and link losses. Mathematically, receive power is represented as
Pr = Pt − losses    (22)

where Pr = power received (dBm)
      Pt = power transmitted (dBm)
      losses = sum of all losses (dB)

Example 6
Determine the optical power received in dBm and watts for a 20-km optical fiber link with the fol-
lowing parameters:
LED output power of 30 mW
Four 5-km sections of optical cable each with a loss of 0.5 dB/km
Three cable-to-cable connectors with a loss of 2 dB each
No cable splices
Light source-to-fiber interface loss of 1.9 dB
Fiber-to-light detector loss of 2.1 dB
No losses due to cable bends

Solution  The LED output power is converted to dBm using Equation 6:

Pout = 10 log(30 mW / 1 mW)
     = 14.8 dBm


The cable loss is simply the product of the total cable length in km and the loss in dB/km. Four 5-km sections of cable is a total cable length of 20 km; therefore,

total cable loss = 20 km × 0.5 dB/km = 10 dB

Cable connector loss is simply the product of the loss in dB per connector and the number of connectors. The maximum number of connectors is always one less than the number of sections of cable. Four sections of cable would then require three connectors; therefore,

total connector loss = 3 connectors × 2 dB/connector = 6 dB

The light source-to-cable and cable-to-light detector losses were given as 1.9 dB and 2.1 dB, respectively. Therefore,

total loss = cable loss + connector loss + light source-to-cable loss + cable-to-light detector loss
           = 10 dB + 6 dB + 1.9 dB + 2.1 dB
           = 20 dB

The receive power is determined by substituting into Equation 22:

Pr = 14.8 dBm − 20 dB
   = −5.2 dBm
   = 0.302 mW
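A link budget of this form is easy to automate. The Python sketch below is illustrative only (parameter and function names are not from the text) and reproduces the numbers of Example 6:

```python
import math

def receive_power_dbm(source_mw, cable_km, cable_db_per_km, connectors,
                      db_per_connector, source_to_fiber_db,
                      fiber_to_detector_db, splice_db=0.0):
    """Equation 22: Pr = Pt - (sum of all losses), with Pt converted to dBm."""
    pt_dbm = 10 * math.log10(source_mw / 1.0)   # dB relative to 1 mW
    losses = (cable_km * cable_db_per_km + connectors * db_per_connector
              + source_to_fiber_db + fiber_to_detector_db + splice_db)
    return pt_dbm - losses

# Example 6: 30-mW LED, 20 km at 0.5 dB/km, three 2-dB connectors,
# 1.9-dB source-to-fiber and 2.1-dB fiber-to-detector interface losses
pr_dbm = receive_power_dbm(30, 20, 0.5, 3, 2.0, 1.9, 2.1)
print(round(pr_dbm, 1))               # -5.2 dBm
print(10 ** (pr_dbm / 10))            # ~0.30 mW (0.302 mW using the rounded -5.2 dBm)
```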

QUESTIONS
1. Define a fiber-optic system.
2. What is the relationship between information capacity and bandwidth?
3. What development in 1951 was a substantial breakthrough in the field of fiber optics? In 1960?
In 1970?
4. Contrast the advantages and disadvantages of fiber-optic cables and metallic cables.
5. Outline the primary building blocks of a fiber-optic system.
6. Contrast glass and plastic fiber cables.
7. Briefly describe the construction of a fiber-optic cable.
8. Define the following terms: velocity of propagation, refraction, and refractive index.
9. State Snell’s law for refraction and outline its significance in fiber-optic cables.
10. Define critical angle.
11. Describe what is meant by mode of operation; by index profile.
12. Describe a step-index fiber cable; a graded-index cable.
13. Contrast the advantages and disadvantages of step-index, graded-index, single-mode, and multi-
mode propagation.
14. Why is single-mode propagation impossible with graded-index fibers?
15. Describe the source-to-fiber aperture.
16. What are the acceptance angle and the acceptance cone for a fiber cable?
17. Define numerical aperture.
18. List and briefly describe the losses associated with fiber cables.
19. What is pulse spreading?
20. Define pulse spreading constant.
21. List and briefly describe the various coupling losses.
22. Briefly describe the operation of a light-emitting diode.
23. What are the two primary types of LEDs?
24. Briefly describe the operation of an injection laser diode.
25. What is lasing?
26. Contrast the advantages and disadvantages of ILDs and LEDs.
27. Briefly describe the function of a photodiode.
28. Describe the photoelectric effect.


29. Explain the difference between a PIN diode and an APD.


30. List and describe the primary characteristics of light detectors.

PROBLEMS
1. Determine the wavelengths in nanometers and angstroms for the following light frequencies:
a. 3.45 × 10^14 Hz
b. 3.62 × 10^14 Hz
c. 3.21 × 10^14 Hz
2. Determine the light frequency for the following wavelengths:
a. 670 nm
b. 7800 Å
c. 710 nm
3. For a glass (n = 1.5)/quartz (n = 1.38) interface and an angle of incidence of 35°, determine the
angle of refraction.
4. Determine the critical angle for the fiber described in problem 3.
5. Determine the acceptance angle for the cable described in problem 3.
6. Determine the numerical aperture for the cable described in problem 3.
7. Determine the maximum bit rate for RZ and NRZ encoding for the following pulse-spreading con-
stants and cable lengths:
a. Δt = 10 ns/m, L = 100 m
b. Δt = 20 ns/m, L = 1000 m
c. Δt = 2000 ns/km, L = 2 km
8. Determine the lowest light frequency that can be detected by a photodiode with an energy gap of
1.2 eV.
9. Determine the wavelengths in nanometers and angstroms for the following light frequencies:
a. 3.8 × 10^14 Hz
b. 3.2 × 10^14 Hz
c. 3.5 × 10^14 Hz
10. Determine the light frequencies for the following wavelengths:
a. 650 nm
b. 7200 Å
c. 690 nm
11. For a glass (n = 1.5)/quartz (n = 1.41) interface and an angle of incidence of 38°, determine the
angle of refraction.
12. Determine the critical angle for the fiber described in problem 11.
13. Determine the acceptance angle for the cable described in problem 11.
14. Determine the numerical aperture for the cable described in problem 11.
15. Determine the maximum bit rate for RZ and NRZ encoding for the following pulse-spreading con-
stants and cable lengths:
a. Δt = 14 ns/m, L = 200 m
b. Δt = 10 ns/m, L = 50 m
c. Δt = 20 ns/m, L = 200 m
16. Determine the lowest light frequency that can be detected by a photodiode with an energy gap of
1.25 eV.
17. Determine the optical power received in dBm and watts for a 24-km optical fiber link with the
following parameters:
LED output power of 20 mW
Six 4-km sections of optical cable each with a loss of 0.6 dB/km
Three cable-to-cable connectors with a loss of 2.1 dB each
No cable splices
Light source-to-fiber interface loss of 2.2 dB
Fiber-to-light detector loss of 1.8 dB
No losses due to cable bends


ANSWERS TO SELECTED PROBLEMS


1. a. 869 nm, 8690 Å
   b. 828 nm, 8280 Å
   c. 935 nm, 9350 Å
3. 38.57°
5. 56°
7. a. RZ = 1 Mbps, NRZ = 500 kbps
   b. RZ = 50 kbps, NRZ = 25 kbps
   c. RZ = 250 kbps, NRZ = 125 kbps
9. a. 789 nm, 7890 Å
   b. 937 nm, 9370 Å
   c. 857 nm, 8570 Å
11. 42°
13. 36°
15. a. RZ = 357 kbps, NRZ = 179 kbps
    b. RZ = 2 Mbps, NRZ = 1 Mbps
    c. RZ = 250 kbps, NRZ = 125 kbps

Digital Modulation

CHAPTER OUTLINE

1 Introduction
2 Information Capacity, Bits, Bit Rate, Baud, and M-ary Encoding
3 Amplitude-Shift Keying
4 Frequency-Shift Keying
5 Phase-Shift Keying
6 Quadrature-Amplitude Modulation
7 Bandwidth Efficiency
8 Carrier Recovery
9 Clock Recovery
10 Differential Phase-Shift Keying
11 Trellis Code Modulation
12 Probability of Error and Bit Error Rate
13 Error Performance

OBJECTIVES

■ Define electronic communications
■ Define digital modulation and digital radio
■ Define digital communications
■ Define information capacity
■ Define bit, bit rate, baud, and minimum bandwidth
■ Explain Shannon’s limit for information capacity
■ Explain M-ary encoding
■ Define and describe digital amplitude modulation
■ Define and describe frequency-shift keying
■ Describe continuous-phase frequency-shift keying
■ Define phase-shift keying
■ Explain binary phase-shift keying
■ Explain quaternary phase-shift keying
■ Describe 8- and 16-PSK
■ Describe quadrature-amplitude modulation
■ Explain 8-QAM
■ Explain 16-QAM
■ Define bandwidth efficiency
■ Explain carrier recovery
■ Explain clock recovery
■ Define and describe differential phase-shift keying
■ Define and explain trellis-code modulation
■ Define probability of error and bit error rate
■ Develop error performance equations for FSK, PSK, and QAM

From Chapter 2 of Advanced Electronic Communications Systems, Sixth Edition. Wayne Tomasi.
Copyright © 2004 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.

1 INTRODUCTION

In essence, electronic communications is the transmission, reception, and processing of information with the use of electronic circuits. Information is defined as knowledge or intel-
ligence that is communicated (i.e., transmitted or received) between two or more points.
Digital modulation is the transmittal of digitally modulated analog signals (carriers) be-
tween two or more points in a communications system. Digital modulation is sometimes
called digital radio because digitally modulated signals can be propagated through Earth’s
atmosphere and used in wireless communications systems. Traditional electronic commu-
nications systems that use conventional analog modulation, such as amplitude modulation
(AM), frequency modulation (FM), and phase modulation (PM), are rapidly being replaced
with more modern digital modulation systems that offer several outstanding advantages
over traditional analog systems, such as ease of processing, ease of multiplexing, and noise
immunity.
Digital communications is a rather ambiguous term that could have entirely different
meanings to different people. In the context of this text, digital communications include
systems where relatively high-frequency analog carriers are modulated by relatively low-
frequency digital information signals (digital radio) and systems involving the transmission
of digital pulses (digital transmission). Digital transmission systems transport information
in digital form and, therefore, require a physical facility between the transmitter and re-
ceiver, such as a metallic wire pair, a coaxial cable, or an optical fiber cable. In digital ra-
dio systems, the carrier facility could be a physical cable, or it could be free space.
The property that distinguishes digital radio systems from conventional analog-
modulation communications systems is the nature of the modulating signal. Both analog
and digital modulation systems use analog carriers to transport the information through the
system. However, with analog modulation systems, the information signal is also analog,
whereas with digital modulation, the information signal is digital, which could be computer-
generated data or digitally encoded analog signals.
Referring to Equation 1, if the information signal is digital and the amplitude (V) of
the carrier is varied proportional to the information signal, a digitally modulated signal
called amplitude shift keying (ASK) is produced. If the frequency (f) is varied proportional
to the information signal, frequency shift keying (FSK) is produced, and if the phase of the
carrier (θ) is varied proportional to the information signal, phase shift keying (PSK) is pro-
duced. If both the amplitude and the phase are varied proportional to the information sig-
nal, quadrature amplitude modulation (QAM) results. ASK, FSK, PSK, and QAM are all
forms of digital modulation:
v(t) = V sin(2πft + θ)    (1)

(varying the amplitude V produces ASK, varying the frequency f produces FSK, varying the phase θ produces PSK, and varying both V and θ produces QAM)

Digital modulation is ideally suited to a multitude of communications applications,


including both cable and wireless systems. Applications include the following: (1) rela-
tively low-speed voice-band data communications modems, such as those found in most
personal computers; (2) high-speed data transmission systems, such as broadband digital
subscriber lines (DSL); (3) digital microwave and satellite communications systems; and
(4) cellular telephone Personal Communications Systems (PCS).
Figure 1 shows a simplified block diagram for a digital modulation system. In the
transmitter, the precoder performs level conversion and then encodes the incoming data
into groups of bits that modulate an analog carrier. The modulated carrier is shaped (fil-


FIGURE 1 Simplified block diagram of a digital radio system

tered), amplified, and then transmitted through the transmission medium to the receiver.
The transmission medium can be a metallic cable, optical fiber cable, Earth’s atmosphere,
or a combination of two or more types of transmission systems. In the receiver, the in-
coming signals are filtered, amplified, and then applied to the demodulator and decoder
circuits, which extract the original source information from the modulated carrier. The
clock and carrier recovery circuits recover the analog carrier and digital timing (clock)
signals from the incoming modulated wave since they are necessary to perform the de-
modulation process.

2 INFORMATION CAPACITY, BITS, BIT RATE, BAUD, AND M-ARY ENCODING

2-1 Information Capacity, Bits, and Bit Rate


Information theory is a highly theoretical study of the efficient use of bandwidth to propa-
gate information through electronic communications systems. Information theory can be
used to determine the information capacity of a data communications system. Information
capacity is a measure of how much information can be propagated through a communica-
tions system and is a function of bandwidth and transmission time.
Information capacity represents the number of independent symbols that can be car-
ried through a system in a given unit of time. The most basic digital symbol used to repre-
sent information is the binary digit, or bit. Therefore, it is often convenient to express the
information capacity of a system as a bit rate. Bit rate is simply the number of bits trans-
mitted during one second and is expressed in bits per second (bps).
In 1928, R. Hartley of Bell Telephone Laboratories developed a useful relationship
among bandwidth, transmission time, and information capacity. Simply stated, Hartley’s
law is
I ∝ B × t    (2)

where I = information capacity (bits per second)
      B = bandwidth (hertz)
      t = transmission time (seconds)


From Equation 2, it can be seen that information capacity is a linear function of band-
width and transmission time and is directly proportional to both. If either the bandwidth or
the transmission time changes, a directly proportional change occurs in the information ca-
pacity.
In 1948, mathematician Claude E. Shannon (also of Bell Telephone Laboratories)
published a paper in the Bell System Technical Journal relating the information capacity of
a communications channel to bandwidth and signal-to-noise ratio. The higher the signal-
to-noise ratio, the better the performance and the higher the information capacity. Mathe-
matically stated, the Shannon limit for information capacity is

I = B log2(1 + S/N)    (3)

or I = 3.32B log10(1 + S/N)    (4)

where I = information capacity (bps)
      B = bandwidth (hertz)
      S/N = signal-to-noise power ratio (unitless)

For a standard telephone circuit with a signal-to-noise power ratio of 1000 (30 dB)
and a bandwidth of 2.7 kHz, the Shannon limit for information capacity is

I = (3.32)(2700) log10(1 + 1000)
  = 26.9 kbps

Shannon’s formula is often misunderstood. The results of the preceding example indi-
cate that 26.9 kbps can be propagated through a 2.7-kHz communications channel. This may
be true, but it cannot be done with a binary system. To achieve an information transmission
rate of 26.9 kbps through a 2.7-kHz channel, each symbol transmitted must contain more
than one bit.
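Shannon's limit is straightforward to evaluate numerically. A minimal Python sketch (illustrative only, with an arbitrary function name) for the telephone-channel example above:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr):
    """Equations 3 and 4: I = B log2(1 + S/N), with S/N as a power ratio."""
    return bandwidth_hz * math.log2(1 + snr)

# 2.7-kHz telephone channel with S/N = 1000 (30 dB)
print(shannon_capacity_bps(2700, 1000))   # ~26,900 bps
```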

2-2 M-ary Encoding


M-ary is a term derived from the word binary. M simply represents a digit that corresponds
to the number of conditions, levels, or combinations possible for a given number of binary
variables. It is often advantageous to encode at a level higher than binary (sometimes re-
ferred to as beyond binary or higher-than-binary encoding) where there are more than two
conditions possible. For example, a digital signal with four possible conditions (voltage lev-
els, frequencies, phases, and so on) is an M-ary system where M  4. If there are eight pos-
sible conditions, M  8 and so forth. The number of bits necessary to produce a given num-
ber of conditions is expressed mathematically as
N = log2 M    (5)

where N = number of bits necessary
      M = number of conditions, levels, or combinations possible with N bits

Equation 5 can be simplified and rearranged to express the number of conditions possible with N bits as

2^N = M    (6)

For example, with one bit, only 2^1 = 2 conditions are possible. With two bits, 2^2 = 4 conditions are possible; with three bits, 2^3 = 8 conditions are possible, and so on.


2-3 Baud and Minimum Bandwidth


Baud is a term that is often misunderstood and commonly confused with bit rate (bps). Bit
rate refers to the rate of change of a digital information signal, which is usually binary.
Baud, like bit rate, is also a rate of change; however, baud refers to the rate of change of a
signal on the transmission medium after encoding and modulation have occurred. Hence,
baud is a unit of transmission rate, modulation rate, or symbol rate and, therefore, the terms
symbols per second and baud are often used interchangeably. Mathematically, baud is the
reciprocal of the time of one output signaling element, and a signaling element may repre-
sent several information bits. Baud is expressed as
baud = 1 / ts    (7)

where baud = symbol rate (baud per second)
      ts = time of one signaling element (seconds)
A signaling element is sometimes called a symbol and could be encoded as a change in the
amplitude, frequency, or phase. For example, binary signals are generally encoded and
transmitted one bit at a time in the form of discrete voltage levels representing logic 1s
(highs) and logic 0s (lows). A baud is also transmitted one at a time; however, a baud may
represent more than one information bit. Thus, the baud of a data communications system
may be considerably less than the bit rate. In binary systems (such as binary FSK and bi-
nary PSK), baud and bits per second are equal. However, in higher-level systems (such as
QPSK and 8-PSK), bps is always greater than baud.
According to H. Nyquist, binary digital signals can be propagated through an ideal
noiseless transmission medium at a rate equal to two times the bandwidth of the medium.
The minimum theoretical bandwidth necessary to propagate a signal is called the minimum
Nyquist bandwidth or sometimes the minimum Nyquist frequency. Thus, fb ≤ 2B, where fb
is the bit rate in bps and B is the ideal Nyquist bandwidth. The actual bandwidth necessary
to propagate a given bit rate depends on several factors, including the type of encoding and
modulation used, the types of filters used, system noise, and desired error performance. The
ideal bandwidth is generally used for comparison purposes only.
The relationship between bandwidth and bit rate also applies to the opposite situation. For
a given bandwidth (B), the highest theoretical bit rate is 2B. For example, a standard telephone
circuit has a bandwidth of approximately 2700 Hz, which has the capacity to propagate
5400 bps through it. However, if more than two levels are used for signaling (higher-than-binary
encoding), more than one bit may be transmitted at a time, and it is possible to propagate a bit
rate that exceeds 2B. Using multilevel signaling, the Nyquist formulation for channel capacity is
fb = 2B log2 M    (8)

where fb = channel capacity (bps)
      B = minimum Nyquist bandwidth (hertz)
      M = number of discrete signal or voltage levels
Equation 8 can be rearranged to solve for the minimum bandwidth necessary to pass
M-ary digitally modulated carriers
B = fb / (log2 M)    (9)

If N is substituted for log2 M, Equation 9 reduces to

B = fb / N    (10)
where N is the number of bits encoded into each signaling element.


If information bits are encoded (grouped) and then converted to signals with more
than two levels, transmission rates in excess of 2B are possible, as will be seen in subse-
quent sections of this chapter. In addition, since baud is the encoded rate of change, it also
equals the bit rate divided by the number of bits encoded into one signaling element. Thus,
baud = fb / N    (11)
By comparing Equation 10 with Equation 11, it can be seen that with digital modu-
lation, the baud and the ideal minimum Nyquist bandwidth have the same value and are
equal to the bit rate divided by the number of bits encoded. This statement holds true for all
forms of digital modulation except frequency-shift keying.
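The relationship summarized above can be illustrated with a minimal Python sketch (added here for illustration; the function and variable names are assumed). It applies the rule B = baud = fb/N from Equations 10 and 11, which, as just noted, holds for all forms of digital modulation except FSK.

```python
def min_bandwidth_and_baud(bit_rate_bps, bits_per_symbol):
    """Equations 10 and 11: B = fb/N and baud = fb/N (all schemes except FSK)."""
    min_bw_hz = bit_rate_bps / bits_per_symbol
    baud = bit_rate_bps / bits_per_symbol
    return min_bw_hz, baud

# 10 Mbps carried by BPSK (N = 1), QPSK (N = 2), and 8-PSK (N = 3)
for name, n in (("BPSK", 1), ("QPSK", 2), ("8-PSK", 3)):
    bw, bd = min_bandwidth_and_baud(10e6, n)
    print(f"{name}: B = {bw / 1e6:g} MHz, baud = {bd / 1e6:g} megabaud")
```

For a 10-Mbps input, the sketch reproduces the 10-MHz, 5-MHz, and 3.33-MHz bandwidths worked out in Examples 4, 6, and 8 later in this chapter.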

3 AMPLITUDE-SHIFT KEYING

The simplest digital modulation technique is amplitude-shift keying (ASK), where a binary
information signal directly modulates the amplitude of an analog carrier. ASK is similar
to standard amplitude modulation except there are only two output amplitudes possible.
Amplitude-shift keying is sometimes called digital amplitude modulation (DAM). Mathe-
matically, amplitude-shift keying is

vask(t) = [1 + vm(t)] [(A/2) cos(ωct)]    (12)

where vask(t) = amplitude-shift keying wave
      vm(t) = digital information (modulating) signal (volts)
      A/2 = unmodulated carrier amplitude (volts)
      ωc = analog carrier radian frequency (radians per second, 2πfct)

In Equation 12, the modulating signal (vm[t]) is a normalized binary waveform, where +1
V = logic 1 and -1 V = logic 0. Therefore, for a logic 1 input, vm(t) = +1 V, Equation 12 re-
duces to

vask(t) = [1 + 1][(A/2) cos(ωct)] = A cos(ωct)

and for a logic 0 input, vm(t) = -1 V, Equation 12 reduces to

vask(t) = [1 - 1][(A/2) cos(ωct)] = 0

Thus, the modulated wave vask(t), is either A cos(ωct) or 0. Hence, the carrier is either “on” or
“off,” which is why amplitude-shift keying is sometimes referred to as on-off keying (OOK).
Figure 2 shows the input and output waveforms from an ASK modulator. From the
figure, it can be seen that for every change in the input binary data stream, there is one
change in the ASK waveform, and the time of one bit (tb) equals the time of one analog sig-
naling element (ts). It is also important to note that for the entire time the binary input is high,
the output is a constant-amplitude, constant-frequency signal, and for the entire time the bi-
nary input is low, the carrier is off. The bit time is the reciprocal of the bit rate and the time
of one signaling element is the reciprocal of the baud. Therefore, the rate of change of the


FIGURE 2 Digital amplitude modulation: (a) input binary; (b) output DAM waveform

ASK waveform (baud) is the same as the rate of change of the binary input (bps); thus, the
bit rate equals the baud. With ASK, the bit rate is also equal to the minimum Nyquist band-
width. This can be verified by substituting into Equations 10 and 11 and setting N to 1:
B = fb/1 = fb    and    baud = fb/1 = fb

Example 1
Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using ampli-
tude-shift keying.
Solution For ASK, N = 1, and the baud and minimum bandwidth are determined from Equations
11 and 10, respectively:

B = 10,000/1 = 10,000

baud = 10,000/1 = 10,000
The use of amplitude-modulated analog carriers to transport digital information is a relatively
low-quality, low-cost type of digital modulation and, therefore, is seldom used except for very low-
speed telemetry circuits.

4 FREQUENCY-SHIFT KEYING

Frequency-shift keying (FSK) is another relatively simple, low-performance type of digital


modulation. FSK is a form of constant-amplitude angle modulation similar to standard fre-
quency modulation (FM) except the modulating signal is a binary signal that varies between
two discrete voltage levels rather than a continuously changing analog waveform. Conse-
quently, FSK is sometimes called binary FSK (BFSK). The general expression for FSK is
vfsk(t) = Vc cos{2π[fc + vm(t)Δf]t}    (13)

where vfsk(t) = binary FSK waveform
      Vc = peak analog carrier amplitude (volts)
      fc = analog carrier center frequency (hertz)
      Δf = peak change (shift) in the analog carrier frequency (hertz)
      vm(t) = binary input (modulating) signal (volts)
From Equation 13, it can be seen that the peak shift in the carrier frequency (Δf) is
proportional to the amplitude of the binary input signal (vm[t]), and the direction of the shift


FIGURE 3 FSK in the frequency domain

is determined by the polarity. The modulating signal is a normalized binary waveform
where a logic 1 = +1 V and a logic 0 = -1 V. Thus, for a logic 1 input, vm(t) = +1, Equa-
tion 13 can be rewritten as

vfsk(t) = Vc cos[2π(fc + Δf)t]

For a logic 0 input, vm(t) = -1, Equation 13 becomes

vfsk(t) = Vc cos[2π(fc - Δf)t]
With binary FSK, the carrier center frequency (fc) is shifted (deviated) up and down
in the frequency domain by the binary input signal as shown in Figure 3. As the binary in-
put signal changes from a logic 0 to a logic 1 and vice versa, the output frequency shifts be-
tween two frequencies: a mark, or logic 1 frequency (fm), and a space, or logic 0 frequency
(fs). The mark and space frequencies are separated from the carrier frequency by the peak
frequency deviation (Δf) and from each other by 2Δf.
With FSK, frequency deviation is defined as the difference between either the mark
or space frequency and the center frequency, or half the difference between the mark and
space frequencies. Frequency deviation is illustrated in Figure 3 and expressed mathemat-
ically as
Δf = |fm - fs| / 2    (14)

where Δf = frequency deviation (hertz)
      |fm - fs| = absolute difference between the mark and space frequencies (hertz)
Figure 4a shows in the time domain the binary input to an FSK modulator and the cor-
responding FSK output. As the figure shows, when the binary input (fb) changes from a logic
1 to a logic 0 and vice versa, the FSK output frequency shifts from a mark (fm) to a space (fs)
frequency and vice versa. In Figure 4a, the mark frequency is the higher frequency (fc +
Δf), and the space frequency is the lower frequency (fc - Δf), although this relationship could
be just the opposite. Figure 4b shows the truth table for a binary FSK modulator. The truth
table shows the input and output possibilities for a given digital modulation scheme.

4-1 FSK Bit Rate, Baud, and Bandwidth


In Figure 4a, it can be seen that the time of one bit (tb) is the same as the time the FSK out-
put is at a mark or space frequency (ts). Thus, the bit time equals the time of an FSK signal-
ing element, and the bit rate equals the baud.


FIGURE 4 FSK in the time domain: (a) waveform; (b) truth table

The baud for binary FSK can also be determined by substituting N = 1 in Equa-
tion 11:

baud = fb/1 = fb
FSK is the exception to the rule for digital modulation, as the minimum bandwidth is
not determined from Equation 10. The minimum bandwidth for FSK is given as
B = |(fs + fb) - (fm - fb)|
  = |fs - fm| + 2fb

and since |fs - fm| equals 2Δf, the minimum bandwidth can be approximated as

B = 2(Δf + fb)    (15)

where B = minimum Nyquist bandwidth (hertz)
      Δf = frequency deviation (|fm - fs|/2) (hertz)
      fb = input bit rate (bps)
Note how closely Equation 15 resembles Carson’s rule for determining the approxi-
mate bandwidth for an FM wave. The only difference in the two equations is that, for FSK,
the bit rate (fb) is substituted for the modulating-signal frequency (fm).
Example 2
Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a binary FSK
signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.
Solution a. The peak frequency deviation is determined from Equation 14:

Δf = |49 kHz - 51 kHz| / 2
   = 1 kHz

b. The minimum bandwidth is determined from Equation 15:

B = 2(1000 + 2000)
  = 6 kHz

c. For FSK, N = 1, and the baud is determined from Equation 11 as

baud = 2000/1 = 2000
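The arithmetic of Example 2 can be packaged in a short Python sketch (an added illustration; the function name is assumed) built directly on Equations 14 and 15.

```python
def fsk_parameters(mark_hz, space_hz, bit_rate_bps):
    """Binary FSK: Equation 14 for deviation, Equation 15 for bandwidth."""
    deviation_hz = abs(mark_hz - space_hz) / 2       # Equation 14
    min_bw_hz = 2 * (deviation_hz + bit_rate_bps)    # Equation 15
    baud = bit_rate_bps                              # N = 1 for binary FSK
    return deviation_hz, min_bw_hz, baud

# Values from Example 2: fm = 49 kHz, fs = 51 kHz, fb = 2 kbps
df, bw, bd = fsk_parameters(49e3, 51e3, 2e3)
print(f"deviation = {df:g} Hz, minimum bandwidth = {bw:g} Hz, baud = {bd:g}")
```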


Bessel functions can also be used to determine the approximate bandwidth for an
FSK wave. As shown in Figure 5, the fastest rate of change (highest fundamental frequency)
in a nonreturn-to-zero (NRZ) binary signal occurs when alternating 1s and 0s are occurring
(i.e., a square wave). Since it takes a high and a low to produce a cycle, the highest funda-
mental frequency present in a square wave equals the repetition rate of the square wave,
which with a binary signal is equal to half the bit rate. Therefore,
fa = fb/2    (16)

where fa = highest fundamental frequency of the binary input signal (hertz)
      fb = input bit rate (bps)
The formula used for modulation index in FM is also valid for FSK; thus,

h = Δf / fa    (unitless)    (17)

where h = FM modulation index, called the h-factor in FSK
      fa = fundamental frequency of the binary modulating signal (hertz)
      Δf = peak frequency deviation (hertz)
The worst-case modulation index (deviation ratio) is that which yields the widest band-
width. The worst-case or widest bandwidth occurs when both the frequency deviation and
the modulating-signal frequency are at their maximum values. As described earlier, the
peak frequency deviation in FSK is constant and always at its maximum value, and the
highest fundamental frequency is equal to half the incoming bit rate. Thus,
h = (|fm - fs| / 2) / (fb / 2)    (unitless)

or    h = |fm - fs| / fb    (18)

FIGURE 5 FSK modulator, tb, time of one bit = 1/fb; fm, mark frequency; fs, space
frequency; T1, period of shortest cycle; 1/T1, fundamental frequency of binary
square wave; fb, input bit rate (bps)


where h = h-factor (unitless)
      fm = mark frequency (hertz)
      fs = space frequency (hertz)
      fb = bit rate (bits per second)

Example 3
Using a Bessel table, determine the minimum bandwidth for the same FSK signal described in Exam-
ple 2 with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.
Solution The modulation index is found by substituting into Equation 18:

h = |49 kHz - 51 kHz| / 2 kbps
  = 2 kHz / 2 kbps
  = 1

From a Bessel table, three sets of significant sidebands are produced for a modulation index of
one. Therefore, the bandwidth can be determined as follows:

B = 2(3 × 1000)
  = 6000 Hz
The bandwidth determined in Example 3 using the Bessel table is identical to the bandwidth
determined in Example 2.

4-2 FSK Transmitter


Figure 6 shows a simplified binary FSK modulator, which is very similar to a conventional
FM modulator and is very often a voltage-controlled oscillator (VCO). The center fre-
quency (fc) is chosen such that it falls halfway between the mark and space frequencies. A
logic 1 input shifts the VCO output to the mark frequency, and a logic 0 input shifts the VCO
output to the space frequency. Consequently, as the binary input signal changes back and
forth between logic 1 and logic 0 conditions, the VCO output shifts or deviates back and
forth between the mark and space frequencies.
In a binary FSK modulator, Δf is the peak frequency deviation of the carrier and is
equal to the difference between the carrier rest frequency and either the mark or the space
frequency (or half the difference between the mark and space frequencies). A VCO-
FSK modulator can be operated in the sweep mode where the peak frequency deviation is

FIGURE 6 FSK modulator


FIGURE 7 Noncoherent FSK demodulator

FIGURE 8 Coherent FSK demodulator

simply the product of the binary input voltage and the deviation sensitivity of the VCO. With
the sweep mode of modulation, the frequency deviation is expressed mathematically as
Δf = vm(t)k1    (19)

where Δf = peak frequency deviation (hertz)
      vm(t) = peak binary modulating-signal voltage (volts)
      k1 = deviation sensitivity (hertz per volt)
With binary FSK, the amplitude of the input signal can only be one of two values, one
for a logic 1 condition and one for a logic 0 condition. Therefore, the peak frequency devi-
ation is constant and always at its maximum value. Frequency deviation is simply plus or
minus the peak voltage of the binary signal times the deviation sensitivity of the VCO. Since
the peak voltage is the same for a logic 1 as it is for a logic 0, the magnitude of the frequency
deviation is also the same for a logic 1 as it is for a logic 0.
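The sweep-mode behavior described by Equation 19 can be sketched with a few lines of NumPy (an added illustration; the function name, sample rate, and frequencies are assumed values). The instantaneous frequency follows fc + vm(t)k1, and integrating that frequency to obtain the phase also keeps the phase continuous, which anticipates the CP-FSK waveforms discussed in Section 4-4.

```python
import numpy as np

def bfsk_waveform(bits, fc_hz, deviation_hz, bit_rate_bps, fs_hz):
    """Sweep-mode VCO sketch: output frequency is fc + vm(t)*k1 (Equation 19),
    which for a normalized +/-1 V input is fc +/- deviation."""
    samples_per_bit = int(fs_hz / bit_rate_bps)
    vm = np.repeat([1.0 if b else -1.0 for b in bits], samples_per_bit)
    inst_freq_hz = fc_hz + vm * deviation_hz
    phase = 2 * np.pi * np.cumsum(inst_freq_hz) / fs_hz   # integrate frequency
    t = np.arange(vm.size) / fs_hz
    return t, np.cos(phase)

t, x = bfsk_waveform([1, 0, 1, 1, 0], fc_hz=10e3, deviation_hz=1e3,
                     bit_rate_bps=1e3, fs_hz=100e3)
print(x[:5])
```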

4-3 FSK Receiver


FSK demodulation is quite simple with a circuit such as the one shown in Figure 7. The
FSK input signal is simultaneously applied to the inputs of both bandpass filters (BPFs)
through a power splitter. The respective filter passes only the mark or only the space fre-
quency on to its respective envelope detector. The envelope detectors, in turn, indicate the
total power in each passband, and the comparator responds to the largest of the two pow-
ers. This type of FSK detection is referred to as noncoherent detection; there is no frequency
involved in the demodulation process that is synchronized either in phase, frequency, or
both with the incoming FSK signal.
Figure 8 shows the block diagram for a coherent FSK receiver. The incoming FSK sig-
nal is multiplied by a recovered carrier signal that has the exact same frequency and phase as
the transmitter reference. However, the two transmitted frequencies (the mark and space fre-
quencies) are not generally continuous; it is not practical to reproduce a local reference that
is coherent with both of them. Consequently, coherent FSK detection is seldom used.


FIGURE 9 PLL-FSK demodulator

FIGURE 10 Noncontinuous FSK waveform

The most common circuit used for demodulating binary FSK signals is the phase-
locked loop (PLL), which is shown in block diagram form in Figure 9. A PLL-FSK de-
modulator works similarly to a PLL-FM demodulator. As the input to the PLL shifts be-
tween the mark and space frequencies, the dc error voltage at the output of the phase
comparator follows the frequency shift. Because there are only two input frequencies (mark
and space), there are also only two output error voltages. One represents a logic 1 and the
other a logic 0. Therefore, the output is a two-level (binary) representation of the FSK in-
put. Generally, the natural frequency of the PLL is made equal to the center frequency of
the FSK modulator. As a result, the changes in the dc error voltage follow the changes in
the analog input frequency and are symmetrical around 0 V.
Binary FSK has a poorer error performance than PSK or QAM and, consequently, is sel-
dom used for high-performance digital radio systems. Its use is restricted to low-performance,
low-cost, asynchronous data modems that are used for data communications over analog,
voice-band telephone lines.
4-4 Continuous-Phase Frequency-Shift Keying
Continuous-phase frequency-shift keying (CP-FSK) is binary FSK except the mark and
space frequencies are synchronized with the input binary bit rate. Synchronous simply im-
plies that there is a precise time relationship between the two; it does not mean they are equal.
With CP-FSK, the mark and space frequencies are selected such that they are separated from
the center frequency by an exact multiple of one-half the bit rate (fm and fs = n[fb/2], where
n = any integer). This ensures a smooth phase transition in the analog output signal when it
changes from a mark to a space frequency or vice versa. Figure 10 shows a noncontinuous
FSK waveform. It can be seen that when the input changes from a logic 1 to a logic 0 and
vice versa, there is an abrupt phase discontinuity in the analog signal. When this occurs, the
demodulator has trouble following the frequency shift; consequently, an error may occur.
Figure 11 shows a continuous phase FSK waveform. Notice that when the output fre-
quency changes, it is a smooth, continuous transition. Consequently, there are no phase dis-
continuities. CP-FSK has a better bit-error performance than conventional binary FSK for
a given signal-to-noise ratio. The disadvantage of CP-FSK is that it requires synchroniza-
tion circuits and is, therefore, more expensive to implement.


FIGURE 11 Continuous-phase MSK waveform

5 PHASE-SHIFT KEYING

Phase-shift keying (PSK) is another form of angle-modulated, constant-amplitude digital


modulation. PSK is an M-ary digital modulation scheme similar to conventional phase
modulation except with PSK the input is a binary digital signal and there are a limited num-
ber of output phases possible. The input binary information is encoded into groups of bits
before modulating the carrier. The number of bits in a group ranges from 1 to 12 or more.
The number of output phases is defined by M as described in Equation 6 and determined by
the number of bits in the group (n).

5-1 Binary Phase-Shift Keying


The simplest form of PSK is binary phase-shift keying (BPSK), where N = 1 and M = 2.
Therefore, with BPSK, two phases (2^1 = 2) are possible for the carrier. One phase repre-
sents a logic 1, and the other phase represents a logic 0. As the input digital signal changes
state (i.e., from a 1 to a 0 or from a 0 to a 1), the phase of the output carrier shifts between
two angles that are separated by 180°. Hence, other names for BPSK are phase reversal key-
ing (PRK) and biphase modulation. BPSK is a form of square-wave modulation of a
continuous wave (CW) signal.

5-1-1 BPSK transmitter. Figure 12 shows a simplified block diagram of a BPSK


transmitter. The balanced modulator acts as a phase reversing switch. Depending on the

FIGURE 12 BPSK transmitter


FIGURE 13 (a) Balanced ring modulator; (b) logic 1 input; (c) logic 0 input

logic condition of the digital input, the carrier is transferred to the output either in phase or
180° out of phase with the reference carrier oscillator.
Figure 13 shows the schematic diagram of a balanced ring modulator. The balanced
modulator has two inputs: a carrier that is in phase with the reference oscillator and the bi-
nary digital data. For the balanced modulator to operate properly, the digital input voltage
must be much greater than the peak carrier voltage. This ensures that the digital input con-
trols the on/off state of diodes D1 to D4. If the binary input is a logic 1 (positive voltage),
diodes D1 and D2 are forward biased and on, while diodes D3 and D4 are reverse biased
and off (Figure 13b). With the polarities shown, the carrier voltage is developed across


FIGURE 14 BPSK modulator: (a) truth table; (b) phasor


diagram; (c) constellation diagram

transformer T2 in phase with the carrier voltage across T1. Consequently, the output signal
is in phase with the reference oscillator.
If the binary input is a logic 0 (negative voltage), diodes D1 and D2 are reverse biased
and off, while diodes D3 and D4 are forward biased and on (Figure 13c). As a result, the car-
rier voltage is developed across transformer T2 180° out of phase with the carrier voltage
across T1. Consequently, the output signal is 180° out of phase with the reference oscillator.
Figure 14 shows the truth table, phasor diagram, and constellation diagram for a BPSK mod-
ulator. A constellation diagram, which is sometimes called a signal state-space diagram, is
similar to a phasor diagram except that the entire phasor is not drawn. In a constellation di-
agram, only the relative positions of the peaks of the phasors are shown.

5-1-2 Bandwidth considerations of BPSK. A balanced modulator is a product


modulator; the output signal is the product of the two input signals. In a BPSK modulator,
the carrier input signal is multiplied by the binary data. If +1 V is assigned to a logic 1 and
-1 V is assigned to a logic 0, the input carrier (sin ωct) is multiplied by either a +1 or -1.
Consequently, the output signal is either +1 sin ωct or -1 sin ωct; the first represents a sig-
nal that is in phase with the reference oscillator, the latter a signal that is 180° out of phase
with the reference oscillator. Each time the input logic condition changes, the output phase
changes. Consequently, for BPSK, the output rate of change (baud) is equal to the input rate
of change (bps), and the widest output bandwidth occurs when the input binary data are an
alternating 1/0 sequence. The fundamental frequency (fa) of an alternating 1/0 bit sequence
is equal to one-half of the bit rate (fb/2). Mathematically, the output of a BPSK modulator
is proportional to

BPSK output = [sin(2πfat)] × [sin(2πfct)]    (20)


FIGURE 15 Output phase-versus-time relationship for a BPSK modulator

where fa = maximum fundamental frequency of binary input (hertz)
      fc = reference carrier frequency (hertz)
Solving for the trig identity for the product of two sine functions,

(1/2)cos[2π(fc - fa)t] - (1/2)cos[2π(fc + fa)t]

Thus, the minimum double-sided Nyquist bandwidth (B) is

(fc + fa) - (fc - fa) = 2fa

and because fa = fb/2, where fb = input bit rate,

B = 2fb/2 = fb
where B is the minimum double-sided Nyquist bandwidth.
Figure 15 shows the output phase-versus-time relationship for a BPSK waveform. As
the figure shows, a logic 1 input produces an analog output signal with a 0° phase angle,
and a logic 0 input produces an analog output signal with a 180° phase angle. As the binary
input shifts between a logic 1 and a logic 0 condition and vice versa, the phase of the BPSK
waveform shifts between 0° and 180°, respectively. For simplicity, only one cycle of the
analog carrier is shown in each signaling element, although there may be anywhere be-
tween a fraction of a cycle to several thousand cycles, depending on the relationship be-
tween the input bit rate and the analog carrier frequency. It can also be seen that the time of
one BPSK signaling element (ts) is equal to the time of one information bit (tb), which in-
dicates that the bit rate equals the baud.
Example 4
For a BPSK modulator with a carrier frequency of 70 MHz and an input bit rate of 10 Mbps, deter-
mine the maximum and minimum upper and lower side frequencies, draw the output spectrum, de-
termine the minimum Nyquist bandwidth, and calculate the baud.


FIGURE 16 Block diagram of a BPSK receiver

Solution Substituting into Equation 20 yields

output = (sin ωat)(sin ωct)
       = [sin 2π(5 MHz)t][sin 2π(70 MHz)t]
       = (1/2)cos 2π(70 MHz - 5 MHz)t - (1/2)cos 2π(70 MHz + 5 MHz)t
         (lower side frequency)          (upper side frequency)

Minimum lower side frequency (LSF):

LSF = 70 MHz - 5 MHz = 65 MHz

Maximum upper side frequency (USF):

USF = 70 MHz + 5 MHz = 75 MHz

Therefore, the output spectrum for the worst-case binary input conditions extends from 65 MHz to
75 MHz. The minimum Nyquist bandwidth (B) is

B = 75 MHz - 65 MHz = 10 MHz

and the baud = fb or 10 megabaud.

5-1-3 BPSK receiver. Figure 16 shows the block diagram of a BPSK receiver. The
input signal may be +sin ωct or -sin ωct. The coherent carrier recovery circuit detects and
regenerates a carrier signal that is both frequency and phase coherent with the original
transmit carrier. The balanced modulator is a product detector; the output is the product of
the two inputs (the BPSK signal and the recovered carrier). The low-pass filter (LPF) sep-
arates the recovered binary data from the complex demodulated signal. Mathematically, the
demodulation process is as follows.
For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is

output = (sin ωct)(sin ωct) = sin² ωct    (21)

or    sin² ωct = (1/2)(1 - cos 2ωct) = 1/2 - (1/2)cos 2ωct    (second term filtered out)

leaving    output = +(1/2) V = logic 1

It can be seen that the output of the balanced modulator contains a positive voltage (+[1/2] V)
and a cosine wave at twice the carrier frequency (2ωc). The LPF has a cutoff frequency much
lower than 2ωc and, thus, blocks the second harmonic of the carrier and passes only the positive
constant component. A positive voltage represents a demodulated logic 1.

For a BPSK input signal of -sin ωct (logic 0), the output of the balanced modulator is

output = (-sin ωct)(sin ωct) = -sin² ωct

or    -sin² ωct = -(1/2)(1 - cos 2ωct) = -1/2 + (1/2)cos 2ωct    (second term filtered out)

leaving    output = -(1/2) V = logic 0

The output of the balanced modulator contains a negative voltage (-[1/2] V) and a cosine
wave at twice the carrier frequency (2ωc). Again, the LPF blocks the second harmonic of the
carrier and passes only the negative constant component. A negative voltage represents a
demodulated logic 0.
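A minimal NumPy sketch of the BPSK modulator and coherent product detector described above is shown below (an added illustration; the carrier, bit rate, and sample rate are arbitrary assumed values). Averaging the product over each bit plays the role of the LPF that removes the 2ωc component.

```python
import numpy as np

fs, fc, fb = 100e3, 10e3, 1e3                 # sample rate, carrier, bit rate (assumed)
bits = [1, 0, 1, 1, 0, 0, 1]
spb = int(fs / fb)                            # samples per bit

# Modulator: level converter maps 1 -> +1 V and 0 -> -1 V, then multiplies the carrier
vm = np.repeat([1.0 if b else -1.0 for b in bits], spb)
t = np.arange(vm.size) / fs
tx = vm * np.sin(2 * np.pi * fc * t)

# Coherent receiver: multiply by the recovered carrier, then average over each bit
# (the average removes the cos 2wct term, as in Equation 21)
product = tx * np.sin(2 * np.pi * fc * t)
rx = [1 if product[i * spb:(i + 1) * spb].mean() > 0 else 0 for i in range(len(bits))]
print(rx)   # -> [1, 0, 1, 1, 0, 0, 1]
```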

5-2 Quaternary Phase-Shift Keying


Quaternary phase shift keying (QPSK), or quadrature PSK as it is sometimes called, is an-
other form of angle-modulated, constant-amplitude digital modulation. QPSK is an M-ary
encoding scheme where N  2 and M  4 (hence, the name “quaternary” meaning “4”).
With QPSK, four output phases are possible for a single carrier frequency. Because there
are four output phases, there must be four different input conditions. Because the digital in-
put to a QPSK modulator is a binary (base 2) signal, to produce four different input com-
binations, the modulator requires more than a single input bit to determine the output con-
dition. With two bits, there are four possible conditions: 00, 01, 10, and 11. Therefore, with
QPSK, the binary input data are combined into groups of two bits, called dibits. In the mod-
ulator, each dibit code generates one of the four possible output phases (+45°, +135°,
-45°, and -135°). Therefore, for each two-bit dibit clocked into the modulator, a single
output change occurs, and the rate of change at the output (baud) is equal to one-half the
input bit rate (i.e., two input bits produce one output phase change).

5-2-1 QPSK transmitter. A block diagram of a QPSK modulator is shown in


Figure 17. Two bits (a dibit) are clocked into the bit splitter. After both bits have been seri-
ally inputted, they are simultaneously parallel outputted. One bit is directed to the I chan-
nel and the other to the Q channel. The I bit modulates a carrier that is in phase with the ref-
erence oscillator (hence the name “I” for “in phase” channel), and the Q bit modulates a
carrier that is 90° out of phase or in quadrature with the reference carrier (hence the name
“Q” for “quadrature” channel).
It can be seen that once a dibit has been split into the I and Q channels, the operation
is the same as in a BPSK modulator. Essentially, a QPSK modulator is two BPSK modula-
tors combined in parallel. Again, for a logic 1  1 V and a logic 0  1 V, two phases
are possible at the output of the I balanced modulator (sin ωct and sin ωct), and two


FIGURE 17 QPSK modulator

phases are possible at the output of the Q balanced modulator (+cos ωct and -cos ωct).
When the linear summer combines the two quadrature (90° out of phase) signals, there
are four possible resultant phasors given by these expressions: +sin ωct + cos ωct, +sin
ωct - cos ωct, -sin ωct + cos ωct, and -sin ωct - cos ωct.

Example 5
For the QPSK modulator shown in Figure 17, construct the truth table, phasor diagram, and constel-
lation diagram.
Solution For a binary data input of Q = 0 and I = 0, the two inputs to the I balanced modulator are
-1 and sin ωct, and the two inputs to the Q balanced modulator are -1 and cos ωct. Consequently,
the outputs are

I balanced modulator = (-1)(sin ωct) = -1 sin ωct
Q balanced modulator = (-1)(cos ωct) = -1 cos ωct

and the output of the linear summer is

-1 cos ωct - 1 sin ωct = 1.414 sin(ωct - 135°)
For the remaining dibit codes (01, 10, and 11), the procedure is the same. The results are shown in
Figure 18a.
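The mapping worked out in Example 5 can be reproduced for all four dibits with a short Python sketch (an added illustration). It takes sin ωct as the 0° reference, so the quadrature cos ωct term appears as the imaginary part and each output is simply the phasor I + jQ.

```python
import cmath

def qpsk_phasor(q_bit, i_bit):
    """Figure 17 mapping sketch: logic 1 -> +1 V, logic 0 -> -1 V on each channel."""
    i_level = 1.0 if i_bit else -1.0      # multiplies sin(wct)
    q_level = 1.0 if q_bit else -1.0      # multiplies cos(wct)
    phasor = complex(i_level, q_level)
    return abs(phasor), cmath.phase(phasor) * 180 / cmath.pi

for q, i in ((0, 0), (0, 1), (1, 0), (1, 1)):
    mag, ang = qpsk_phasor(q, i)
    print(f"QI = {q}{i}: {mag:.3f} sin(wct {ang:+.0f} deg)")
```

The dibit 00 case prints 1.414 sin(wct -135 deg), matching Example 5.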

In Figures 18b and c, it can be seen that with QPSK each of the four possible output
phasors has exactly the same amplitude. Therefore, the binary information must be encoded
entirely in the phase of the output signal. This constant amplitude characteristic is the most
important characteristic of PSK that distinguishes it from QAM, which is explained later in
this chapter. Also, from Figure 18b, it can be seen that the angular separation between any
two adjacent phasors in QPSK is 90°. Therefore, a QPSK signal can undergo almost a +45°
or -45° shift in phase during transmission and still retain the correct encoded information
when demodulated at the receiver. Figure 19 shows the output phase-versus-time relation-
ship for a QPSK modulator.


FIGURE 18 QPSK modulator: (a) truth table; (b) phasor diagram; (c) constellation diagram

FIGURE 19 Output phase-versus-time relationship for a QPSK modulator

5-2-2 Bandwidth considerations of QPSK. With QPSK, because the input data are
divided into two channels, the bit rate in either the I or the Q channel is equal to one-half of
the input data rate (fb/2). (Essentially, the bit splitter stretches the I and Q bits to twice their
input bit length.) Consequently, the highest fundamental frequency present at the data input
to the I or the Q balanced modulator is equal to one-fourth of the input data rate (one-half of
fb/2 = fb/4). As a result, the output of the I and Q balanced modulators requires a minimum
double-sided Nyquist bandwidth equal to one-half of the incoming bit rate (fN = twice fb/4
= fb/2). Thus, with QPSK, a bandwidth compression is realized (the minimum bandwidth is
less than the incoming bit rate). Also, because the QPSK output signal does not change phase
until two bits (a dibit) have been clocked into the bit splitter, the fastest output rate of change
(baud) is also equal to one-half of the input bit rate. As with BPSK, the minimum bandwidth
and the baud are equal. This relationship is shown in Figure 20.


FIGURE 20 Bandwidth considerations of a QPSK modulator

In Figure 20, it can be seen that the worst-case input condition to the I or Q balanced
modulator is an alternating 1/0 pattern, which occurs when the binary input data have a
1100 repetitive pattern. One cycle of the fastest binary transition (a 1/0 sequence) in the I
or Q channel takes the same time as four input data bits. Consequently, the highest funda-
mental frequency at the input and fastest rate of change at the output of the balanced mod-
ulators is equal to one-fourth of the binary input bit rate.
The output of the balanced modulators can be expressed mathematically as

output = (sin ωat)(sin ωct)    (22)

where ωat = 2π(fb/4)t (the modulating signal) and ωct = 2πfct (the carrier)

Thus,    output = [sin 2π(fb/4)t][sin 2πfct]
                = (1/2)cos 2π(fc - fb/4)t - (1/2)cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc - fb/4 to fc + fb/4, and the minimum band-
width (fN) is

(fc + fb/4) - (fc - fb/4) = 2fb/4 = fb/2

Example 6
For a QPSK modulator with an input data rate (fb) equal to 10 Mbps and a carrier frequency of
70 MHz, determine the minimum double-sided Nyquist bandwidth (fN) and the baud. Also, compare
the results with those achieved with the BPSK modulator in Example 4. Use the QPSK block diagram
shown in Figure 17 as the modulator model.
Solution The bit rate in both the I and Q channels is equal to one-half of the transmission bit rate, or

fbQ = fbI = fb/2 = 10 Mbps/2 = 5 Mbps

The highest fundamental frequency presented to either balanced modulator is

fa = fbQ/2 or fbI/2 = 5 Mbps/2 = 2.5 MHz

The output wave from each balanced modulator is

(sin 2πfat)(sin 2πfct)
= (1/2)cos 2π(fc - fa)t - (1/2)cos 2π(fc + fa)t
= (1/2)cos 2π[(70 - 2.5) MHz]t - (1/2)cos 2π[(70 + 2.5) MHz]t
= (1/2)cos 2π(67.5 MHz)t - (1/2)cos 2π(72.5 MHz)t

The minimum Nyquist bandwidth is

B = (72.5 - 67.5) MHz = 5 MHz

The symbol rate equals the bandwidth; thus,

symbol rate = 5 megabaud

The output spectrum extends from 67.5 MHz to 72.5 MHz, and B = 5 MHz.

It can be seen that for the same input bit rate the minimum bandwidth required to pass the output of
the QPSK modulator is equal to one-half of that required for the BPSK modulator in Example 4. Also,
the baud rate for the QPSK modulator is one-half that of the BPSK modulator.

The minimum bandwidth for the QPSK system described in Example 6 can also be
determined by simply substituting into Equation 10:
B = 10 Mbps/2 = 5 MHz

5-2-3 QPSK receiver. The block diagram of a QPSK receiver is shown in Figure
21. The power splitter directs the input QPSK signal to the I and Q product detectors and
the carrier recovery circuit. The carrier recovery circuit reproduces the original transmit
carrier oscillator signal. The recovered carrier must be frequency and phase coherent with
the transmit reference carrier. The QPSK signal is demodulated in the I and Q product de-
tectors, which generate the original I and Q data bits. The outputs of the product detectors
are fed to the bit combining circuit, where they are converted from parallel I and Q data
channels to a single binary output data stream.
The incoming QPSK signal may be any one of the four possible output phases shown
in Figure 18. To illustrate the demodulation process, let the incoming QPSK signal be -sin
ωct + cos ωct. Mathematically, the demodulation process is as follows.

FIGURE 21 QPSK receiver

The receive QPSK signal (-sin ωct + cos ωct) is one of the inputs to the I product
detector. The other input is the recovered carrier (sin ωct). The output of the I product de-
tector is

I = (-sin ωct + cos ωct)(sin ωct)    (23)
    (QPSK input signal)  (carrier)

  = (-sin ωct)(sin ωct) + (cos ωct)(sin ωct)

  = -sin² ωct + (cos ωct)(sin ωct)

  = -(1/2)(1 - cos 2ωct) + (1/2)sin(ωc + ωc)t + (1/2)sin(ωc - ωc)t

  = -1/2 + (1/2)cos 2ωct + (1/2)sin 2ωct + (1/2)sin 0

The 2ωct terms are filtered out and sin 0 = 0, leaving

I = -(1/2) V = logic 0

Again, the receive QPSK signal (-sin ωct + cos ωct) is one of the inputs to the Q
product detector. The other input is the recovered carrier shifted 90° in phase (cos ωct). The
output of the Q product detector is

Q = (-sin ωct + cos ωct)(cos ωct)    (24)
    (QPSK input signal)  (carrier)

  = cos² ωct - (sin ωct)(cos ωct)

  = (1/2)(1 + cos 2ωct) - (1/2)sin(ωc + ωc)t - (1/2)sin(ωc - ωc)t

  = 1/2 + (1/2)cos 2ωct - (1/2)sin 2ωct - (1/2)sin 0

The 2ωct terms are filtered out and sin 0 = 0, leaving

Q = +(1/2) V = logic 1
The demodulated I and Q bits (0 and 1, respectively) correspond to the constellation
diagram and truth table for the QPSK modulator shown in Figure 18.
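The same arithmetic can be verified numerically with the short NumPy sketch below (an added illustration with assumed sample values). Averaging over an integer number of carrier cycles acts as the LPFs in Figure 21 and leaves only the dc terms of Equations 23 and 24.

```python
import numpy as np

fs, fc = 100e3, 10e3                         # sample rate and carrier (assumed)
t = np.arange(0, 1e-3, 1 / fs)               # exactly ten carrier cycles

# Received QPSK signal used in the text: -sin(wct) + cos(wct)
rx = -np.sin(2 * np.pi * fc * t) + np.cos(2 * np.pi * fc * t)

# Product detectors followed by averaging (the LPFs of Figure 21)
i_out = np.mean(rx * np.sin(2 * np.pi * fc * t))   # Equation 23 -> -1/2 V (logic 0)
q_out = np.mean(rx * np.cos(2 * np.pi * fc * t))   # Equation 24 -> +1/2 V (logic 1)
print(round(i_out, 3), round(q_out, 3))            # -0.5  0.5
```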

5-2-4 Offset QPSK. Offset QPSK (OQPSK) is a modified form of QPSK where the
bit waveforms on the I and Q channels are offset or shifted in phase from each other by one-
half of a bit time.
Figure 22 shows a simplified block diagram, the bit sequence alignment, and the con-
stellation diagram for a OQPSK modulator. Because changes in the I channel occur at the
midpoints of the Q channel bits and vice versa, there is never more than a single bit change
in the dibit code and, therefore, there is never more than a 90° shift in the output phase. In
conventional QPSK, a change in the input dibit from 00 to 11 or 01 to 10 causes a corre-
sponding 180° shift in the output phase. Therefore, an advantage of OQPSK is the lim-
ited phase shift that must be imparted during modulation. A disadvantage of OQPSK is


FIGURE 22 Offset keyed (OQPSK): (a) block diagram; (b) bit alignment; (c) constellation
diagram

that changes in the output phase occur at twice the data rate in either the I or Q channels.
Consequently, with OQPSK the baud and minimum bandwidth are twice that of conven-
tional QPSK for a given transmission bit rate. OQPSK is sometimes called OKQPSK
(offset-keyed QPSK).

5-3 8-PSK
With 8-PSK, three bits are encoded, forming tribits and producing eight different output
phases. With 8-PSK, n = 3, M = 8, and there are eight possible output phases. To encode eight
different phases, the incoming bits are encoded in groups of three, called tribits (2^3 = 8).

5-3-1 8-PSK transmitter. A block diagram of an 8-PSK modulator is shown in


Figure 23. The incoming serial bit stream enters the bit splitter, where it is converted to
a parallel, three-channel output (the I or in-phase channel, the Q or in-quadrature chan-
nel, and the C or control channel). Consequently, the bit rate in each of the three chan-
nels is fb /3. The bits in the I and C channels enter the I channel 2-to-4-level converter,
and the bits in the Q and C channels enter the Q channel 2-to-4-level converter. Essen-
tially, the 2-to-4-level converters are parallel-input digital-to-analog converters
(DACs). With two input bits, four output voltages are possible. The algorithm for the
DACs is quite simple. The I or Q bit determines the polarity of the output analog sig-
nal (logic 1 = +V and logic 0 = -V), whereas the C or C̄ bit determines the magni-


FIGURE 23 8-PSK modulator

FIGURE 24 I- and Q-channel 2-to-4-level converters: (a) I-channel truth table;


(b) Q-channel truth table; (c) PAM levels

tude (logic 1 = 1.307 V and logic 0 = 0.541 V). Consequently, with two magnitudes and
two polarities, four different output conditions are possible.
Figure 24 shows the truth table and corresponding output conditions for the 2-to-4-
level converters. Because the C and C̄ bits can never be the same logic state, the outputs
from the I and Q 2-to-4-level converters can never have the same magnitude, although they
can have the same polarity. The output of a 2-to-4-level converter is an M-ary, pulse-
amplitude-modulated (PAM) signal where M = 4.
Example 7
For a tribit input of Q = 0, I = 0, and C = 0 (000), determine the output phase for the 8-PSK mod-
ulator shown in Figure 23.
Solution The inputs to the I channel 2-to-4-level converter are I = 0 and C = 0. From Figure 24,
the output is -0.541 V. The inputs to the Q channel 2-to-4-level converter are Q = 0 and C̄ = 1.
Again from Figure 24, the output is -1.307 V.
Thus, the two inputs to the I channel product modulator are -0.541 and sin ωct. The output is

I = (-0.541)(sin ωct) = -0.541 sin ωct

The two inputs to the Q channel product modulator are -1.307 V and cos ωct. The output is

Q = (-1.307)(cos ωct) = -1.307 cos ωct


The outputs of the I and Q channel product modulators are combined in the linear summer and pro-
duce a modulated output of
summer output = -0.541 sin ωct - 1.307 cos ωct
              = 1.41 sin(ωct - 112.5°)
For the remaining tribit codes (001, 010, 011, 100, 101, 110, and 111), the procedure is the same. The
results are shown in Figure 25.
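The 8-PSK mapping of Example 7 can be reproduced for all eight tribits with the following Python sketch (an added illustration that assumes the converter levels 0.541 and 1.307 from Figure 24 and the inverted C bit in the Q channel).

```python
import math

def eight_psk_phasor(q, i, c):
    """Figure 23 mapping sketch: I and C feed the I-channel converter,
    Q and the inverted C feed the Q-channel converter."""
    c_bar = 1 - c
    i_level = (1.307 if c else 0.541) * (1 if i else -1)
    q_level = (1.307 if c_bar else 0.541) * (1 if q else -1)
    return math.hypot(i_level, q_level), math.degrees(math.atan2(q_level, i_level))

for tribit in range(8):
    q, i, c = (tribit >> 2) & 1, (tribit >> 1) & 1, tribit & 1
    mag, ph = eight_psk_phasor(q, i, c)
    print(f"QIC = {q}{i}{c}: {mag:.2f} sin(wct {ph:+.1f} deg)")
```

Tribit 000 prints 1.41 sin(wct -112.5 deg), in agreement with Example 7, and the eight phases fall 45° apart.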

FIGURE 25 8-PSK modulator: (a) truth table; (b) phasor diagram;


(c) constellation diagram


FIGURE 26 Output phase-versus-time relationship for an 8-PSK modulator

From Figure 25, it can be seen that the angular separation between any two adjacent
phasors is 45°, half what it is with QPSK. Therefore, an 8-PSK signal can undergo almost
a ±22.5° phase shift during transmission and still retain its integrity. Also, each phasor is
of equal magnitude; the tribit condition (actual information) is again contained only in the
phase of the signal. The PAM levels of 1.307 and 0.541 are relative values. Any levels may
be used as long as their ratio is 0.541/1.307 and their arc tangent is equal to 22.5°. For ex-
ample, if their values were doubled to 2.614 and 1.082, the resulting phase angles would
not change, although the magnitude of the phasor would increase proportionally.
It should also be noted that the tribit code between any two adjacent phases changes
by only one bit. This type of code is called the Gray code or, sometimes, the maximum dis-
tance code. This code is used to reduce the number of transmission errors. If a signal were
to undergo a phase shift during transmission, it would most likely be shifted to an adjacent
phasor. Using the Gray code results in only a single bit being received in error.
Figure 26 shows the output phase-versus-time relationship of an 8-PSK modulator.
5-3-2 Bandwidth considerations of 8-PSK. With 8-PSK, because the data are di-
vided into three channels, the bit rate in the I, Q, or C channel is equal to one-third of the
binary input data rate (f b /3). (The bit splitter stretches the I, Q, and C bits to three times their
input bit length.) Because the I, Q, and C bits are outputted simultaneously and in parallel,
the 2-to-4-level converters also see a change in their inputs (and consequently their outputs)
at a rate equal to fb /3.
Figure 27 shows the bit timing relationship between the binary input data; the I, Q, and
C channel data; and the I and Q PAM signals. It can be seen that the highest fundamental fre-
quency in the I, Q, or C channel is equal to one-sixth the bit rate of the binary input (one cy-
cle in the I, Q, or C channel takes the same amount of time as six input bits). Also, the highest
fundamental frequency in either PAM signal is equal to one-sixth of the binary input bit rate.
With an 8-PSK modulator, there is one change in phase at the output for every three
data input bits. Consequently, the baud for 8 PSK equals fb /3, the same as the minimum band-
width. Again, the balanced modulators are product modulators; their outputs are the product of
the carrier and the PAM signal. Mathematically, the output of the balanced modulators is
θ = (X sin ωat)(sin ωct)    (25)

where ωat = 2π(fb/6)t (the modulating signal) and ωct = 2πfct (the carrier)
and X = ±1.307 or ±0.541

Thus,    θ = [X sin 2π(fb/6)t][sin 2πfct]
           = (X/2)cos 2π(fc - fb/6)t - (X/2)cos 2π(fc + fb/6)t


FIGURE 27 Bandwidth considerations of an 8-PSK modulator

The output frequency spectrum extends from fc - fb/6 to fc + fb/6, and the minimum band-
width (fN) is

(fc + fb/6) - (fc - fb/6) = 2fb/6 = fb/3
Example 8
For an 8-PSK modulator with an input data rate (fb) equal to 10 Mbps and a carrier frequency of
70 MHz, determine the minimum double-sided Nyquist bandwidth (fN) and the baud. Also, compare
the results with those achieved with the BPSK and QPSK modulators in Examples 4 and 6. Use the
8-PSK block diagram shown in Figure 23 as the modulator model.
Solution The bit rate in the I, Q, and C channels is equal to one-third of the input bit rate, or

fbC = fbQ = fbI = 10 Mbps/3 = 3.33 Mbps

Therefore, the fastest rate of change and highest fundamental frequency presented to either balanced
modulator is

fa = fbC/2 or fbQ/2 or fbI/2 = 3.33 Mbps/2 = 1.667 MHz

The output wave from the balanced modulators is

(sin 2πfat)(sin 2πfct)
= (1/2)cos 2π(fc - fa)t - (1/2)cos 2π(fc + fa)t
= (1/2)cos 2π[(70 - 1.667) MHz]t - (1/2)cos 2π[(70 + 1.667) MHz]t
= (1/2)cos 2π(68.333 MHz)t - (1/2)cos 2π(71.667 MHz)t

The minimum Nyquist bandwidth is

B = (71.667 - 68.333) MHz = 3.333 MHz

The minimum bandwidth for 8-PSK can also be determined by simply substituting into
Equation 10:

B = 10 Mbps/3 = 3.33 MHz

Again, the baud equals the bandwidth; thus,

baud = 3.333 megabaud

The output spectrum extends from 68.333 MHz to 71.667 MHz, and B = 3.333 MHz.
It can be seen that for the same input bit rate the minimum bandwidth required to pass the output of
an 8-PSK modulator is equal to one-third that of the BPSK modulator in Example 4 and 50% less than
that required for the QPSK modulator in Example 6. Also, in each case the baud has been reduced by
the same proportions.

5-3-3 8-PSK receiver. Figure 28 shows a block diagram of an 8-PSK receiver. The
power splitter directs the input 8-PSK signal to the I and Q product detectors and the car-
rier recovery circuit. The carrier recovery circuit reproduces the original reference oscilla-
tor signal. The incoming 8-PSK signal is mixed with the recovered carrier in the I product
detector and with a quadrature carrier in the Q product detector. The outputs of the product
detectors are 4-level PAM signals that are fed to the 4-to-2-level analog-to-digital con-
verters (ADCs). The outputs from the I channel 4-to-2-level converter are the I and C
bits, whereas the outputs from the Q channel 4-to-2-level converter are the Q and C̄
bits. The parallel-to-serial logic circuit converts the I/C and Q/C̄ bit pairs to serial I, Q,
and C output data streams.

5-4 16-PSK
16-PSK is an M-ary encoding technique where M = 16; there are 16 different output phases
possible. With 16-PSK, four bits (called quadbits) are combined, producing 16 different
output phases. With 16-PSK, n = 4 and M = 16; therefore, the minimum bandwidth and

FIGURE 28 8-PSK receiver

FIGURE 29 16-PSK: (a) truth table; (b) constellation diagram

baud equal one-fourth the bit rate ( f b /4). Figure 29 shows the truth table and constellation
diagram for 16-PSK, respectively. Comparing Figures 18, 25, and 29 shows that as the level
of encoding increases (i.e., the values of n and M increase), more output phases are possi-
ble and the closer each point on the constellation diagram is to an adjacent point. With 16-
PSK, the angular separation between adjacent output phases is only 22.5°. Therefore, 16-
PSK can undergo only a ±11.25° phase shift during transmission and still retain its integrity.
For an M-ary PSK system with 64 output phases (n = 6), the angular separation between
adjacent phases is only 5.6°. This is an obvious limitation in the level of encoding (and bit
rates) possible with PSK, as a point is eventually reached where receivers cannot discern
the phase of the received signaling element. In addition, phase impairments inherent on
communications lines have a tendency to shift the phase of the PSK signal, destroying its
integrity and producing errors.

6 QUADRATURE-AMPLITUDE MODULATION

Quadrature-amplitude modulation (QAM) is a form of digital modulation similar to PSK


except the digital information is contained in both the amplitude and the phase of the trans-
mitted carrier. With QAM, amplitude and phase-shift keying are combined in such a way
that the positions of the signaling elements on the constellation diagrams are optimized to
achieve the greatest distance between elements, thus reducing the likelihood of one element
being misinterpreted as another element. Obviously, this reduces the likelihood of errors oc-
curring.

6-1 8-QAM
8-QAM is an M-ary encoding technique where M = 8. Unlike 8-PSK, the output signal from
an 8-QAM modulator is not a constant-amplitude signal.

6-1-1 8-QAM transmitter. Figure 30a shows the block diagram of an 8-QAM
transmitter. As you can see, the only difference between the 8-QAM transmitter and the 8-
PSK transmitter shown in Figure 23 is the omission of the inverter between the C channel
and the Q product modulator. As with 8-PSK, the incoming data are divided into groups of
three bits (tribits): the I, Q, and C bit streams, each with a bit rate equal to one-third of


FIGURE 30 8-QAM transmitter: (a) block diagram; (b) truth table 4 level converters

the incoming data rate. Again, the I and Q bits determine the polarity of the PAM signal at
the output of the 2-to-4-level converters, and the C channel determines the magnitude. Be-
cause the C bit is fed uninverted to both the I and the Q channel 2-to-4-level converters,
the magnitudes of the I and Q PAM signals are always equal. Their polarities depend on
the logic condition of the I and Q bits and, therefore, may be different. Figure 30b shows
the truth table for the I and Q channel 2-to-4-level converters; they are identical.

Example 9
For a tribit input of Q = 0, I = 0, and C = 0 (000), determine the output amplitude and phase for the
8-QAM transmitter shown in Figure 30a.
Solution The inputs to the I channel 2-to-4-level converter are I = 0 and C = 0. From Figure 30b,
the output is -0.541 V. The inputs to the Q channel 2-to-4-level converter are Q = 0 and C = 0. Again
from Figure 30b, the output is -0.541 V.
Thus, the two inputs to the I channel product modulator are -0.541 and sin ωct. The output is

I = (-0.541)(sin ωct) = -0.541 sin ωct

The two inputs to the Q channel product modulator are -0.541 and cos ωct. The output is

Q = (-0.541)(cos ωct) = -0.541 cos ωct

The outputs from the I and Q channel product modulators are combined in the linear summer and pro-
duce a modulated output of

summer output = -0.541 sin ωct - 0.541 cos ωct
              = 0.765 sin(ωct - 135°)
For the remaining tribit codes (001, 010, 011, 100, 101, 110, and 111), the procedure is the same. The
results are shown in Figure 31.
Figure 32 shows the output phase-versus-time relationship for an 8-QAM modulator. Note that
there are two output amplitudes, and only four phases are possible.
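A similar sketch (an added illustration using the Figure 30b levels) confirms that 8-QAM produces two amplitudes but only four phases:

```python
import math

def eight_qam_phasor(q, i, c):
    """Figure 30a mapping sketch: C is fed uninverted to both 2-to-4-level
    converters, so the I and Q magnitudes are always equal."""
    mag_level = 1.307 if c else 0.541
    i_level = mag_level * (1 if i else -1)
    q_level = mag_level * (1 if q else -1)
    return math.hypot(i_level, q_level), math.degrees(math.atan2(q_level, i_level))

for tribit in range(8):
    q, i, c = (tribit >> 2) & 1, (tribit >> 1) & 1, tribit & 1
    mag, ph = eight_qam_phasor(q, i, c)
    print(f"QIC = {q}{i}{c}: {mag:.3f} sin(wct {ph:+.0f} deg)")
```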

6-1-2 Bandwidth considerations of 8-QAM. In 8-QAM, the bit rate in the I and
Q channels is one-third of the input binary rate, the same as in 8-PSK. As a result, the high-
est fundamental modulating frequency and fastest output rate of change in 8-QAM are the
same as with 8-PSK. Therefore, the minimum bandwidth required for 8-QAM is f b /3, the
same as in 8-PSK.


FIGURE 31 8-QAM modulator: (a) truth table; (b) phasor diagram; (c) constellation diagram

FIGURE 32 Output phase and amplitude-versus-time relationship for 8-QAM

6-1-3 8-QAM receiver. An 8-QAM receiver is almost identical to the 8-PSK re-
ceiver shown in Figure 28. The differences are the PAM levels at the output of the product
detectors and the binary signals at the output of the analog-to-digital converters. Because
there are two transmit amplitudes possible with 8-QAM that are different from those
achievable with 8-PSK, the four demodulated PAM levels in 8-QAM are different from
those in 8-PSK. Therefore, the conversion factor for the analog-to-digital converters must
also be different. Also, with 8-QAM the binary output signals from the I channel analog-
to-digital converter are the I and C bits, and the binary output signals from the Q channel
analog-to-digital converter are the Q and C bits.


6-2 16-QAM
As with 16-PSK, 16-QAM is an M-ary system where M = 16. The input data are acted
on in groups of four (2^4 = 16). As with 8-QAM, both the phase and the amplitude of the
transmit carrier are varied.

6-2-1 QAM transmitter. The block diagram for a 16-QAM transmitter is shown in
Figure 33. The input binary data are divided into four channels: I, I′, Q, and Q′. The bit rate
in each channel is equal to one-fourth of the input bit rate (fb/4). Four bits are serially
clocked into the bit splitter; then they are outputted simultaneously and in parallel with the
I, I′, Q, and Q′ channels. The I and Q bits determine the polarity at the output of the 2-to-
4-level converters (a logic 1 = positive and a logic 0 = negative). The I′ and Q′ bits deter-
mine the magnitude (a logic 1 = 0.821 V and a logic 0 = 0.22 V). Consequently, the 2-to-
4-level converters generate a 4-level PAM signal. Two polarities and two magnitudes are
possible at the output of each 2-to-4-level converter. They are ±0.22 V and ±0.821 V.
The PAM signals modulate the in-phase and quadrature carriers in the product mod-
ulators. Four outputs are possible for each product modulator. For the I product modulator,
they are +0.821 sin ωct, -0.821 sin ωct, +0.22 sin ωct, and -0.22 sin ωct. For the Q prod-
uct modulator, they are +0.821 cos ωct, +0.22 cos ωct, -0.821 cos ωct, and -0.22 cos ωct.
The linear summer combines the outputs from the I and Q channel product modulators and
produces the 16 output conditions necessary for 16-QAM. Figure 34 shows the truth table
for the I and Q channel 2-to-4-level converters.

FIGURE 33 16-QAM transmitter block diagram

FIGURE 34 Truth tables for the I- and Q-channel 2-to-4-


level converters: (a) I channel; (b) Q channel


Example 10
For a quadbit input of I = 0, I′ = 0, Q = 0, and Q′ = 0 (0000), determine the output amplitude and
phase for the 16-QAM modulator shown in Figure 33.
Solution The inputs to the I channel 2-to-4-level converter are I = 0 and I′ = 0. From Figure 34,
the output is -0.22 V. The inputs to the Q channel 2-to-4-level converter are Q = 0 and Q′ = 0. Again
from Figure 34, the output is -0.22 V.
Thus, the two inputs to the I channel product modulator are -0.22 V and sin ωct. The output is

I = (-0.22)(sin ωct) = -0.22 sin ωct

The two inputs to the Q channel product modulator are -0.22 V and cos ωct. The output is

Q = (-0.22)(cos ωct) = -0.22 cos ωct

The outputs from the I and Q channel product modulators are combined in the linear summer and pro-
duce a modulated output of

summer output = -0.22 sin ωct - 0.22 cos ωct
              = 0.311 sin(ωct - 135°)
For the remaining quadbit codes, the procedure is the same. The results are shown in Figure 35.
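The same kind of check applies to 16-QAM. The sketch below (an added illustration using the 0.22-V and 0.821-V levels of Figure 34) reproduces the Example 10 result for quadbit 0000; looping it over all 16 quadbits generates the full constellation of Figure 35.

```python
import math

def sixteen_qam_phasor(i, i_prime, q, q_prime):
    """Figure 33 mapping sketch: I and Q set the polarity, I' and Q' set the
    magnitude (logic 1 -> 0.821 V, logic 0 -> 0.22 V)."""
    i_level = (0.821 if i_prime else 0.22) * (1 if i else -1)
    q_level = (0.821 if q_prime else 0.22) * (1 if q else -1)
    return math.hypot(i_level, q_level), math.degrees(math.atan2(q_level, i_level))

mag, ph = sixteen_qam_phasor(0, 0, 0, 0)        # quadbit 0000 from Example 10
print(f"{mag:.3f} sin(wct {ph:+.0f} deg)")      # -> 0.311 sin(wct -135 deg)
```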

FIGURE 35 16-QAM modulator: (a) truth table; (b) phasor diagram; (c) constellation diagram


FIGURE 36 Bandwidth considerations of a 16-QAM modulator

6-2-2 Bandwidth considerations of 16-QAM. With a 16-QAM, because the input


data are divided into four channels, the bit rate in the I, I′, Q, or Q′ channel is equal to one-
fourth of the binary input data rate (fb/4). (The bit splitter stretches the I, I′, Q, and Q′ bits
to four times their input bit length.) Also, because the I, I′, Q, and Q′ bits are outputted si-
multaneously and in parallel, the 2-to-4-level converters see a change in their inputs and
outputs at a rate equal to one-fourth of the input data rate.
Figure 36 shows the bit timing relationship between the binary input data; the I, I′,
Q, and Q′ channel data; and the I PAM signal. It can be seen that the highest fundamental
frequency in the I, I′, Q, or Q′ channel is equal to one-eighth of the bit rate of the binary in-
put data (one cycle in the I, I′, Q, or Q′ channel takes the same amount of time as eight in-
put bits). Also, the highest fundamental frequency of either PAM signal is equal to one-
eighth of the binary input bit rate.
With a 16-QAM modulator, there is one change in the output signal (either its phase,
amplitude, or both) for every four input data bits. Consequently, the baud equals f b /4, the
same as the minimum bandwidth.


Again, the balanced modulators are product modulators and their outputs can be rep-
resented mathematically as
output = (X sin ωat)(sin ωct)    (26)

where ωat = 2π(fb/8)t (the modulating signal) and ωct = 2πfct (the carrier)
and X = ±0.22 or ±0.821

Thus,    output = [X sin 2π(fb/8)t][sin 2πfct]
                = (X/2)cos 2π(fc - fb/8)t - (X/2)cos 2π(fc + fb/8)t

The output frequency spectrum extends from fc - fb/8 to fc + fb/8, and the minimum band-
width (fN) is

(fc + fb/8) - (fc - fb/8) = 2fb/8 = fb/4

Example 11
For a 16-QAM modulator with an input data rate (fb) equal to 10 Mbps and a carrier frequency of
70 MHz, determine the minimum double-sided Nyquist frequency (fN) and the baud. Also, compare
the results with those achieved with the BPSK, QPSK, and 8-PSK modulators in Examples 4, 6, and
8. Use the 16-QAM block diagram shown in Figure 33 as the modulator model.
Solution The bit rate in the I, I′, Q, and Q′ channels is equal to one-fourth of the input bit rate, or

fbI = fbI′ = fbQ = fbQ′ = fb/4 = 10 Mbps/4 = 2.5 Mbps

Therefore, the fastest rate of change and highest fundamental frequency presented to either balanced
modulator is

fa = fbI/2 or fbI′/2 or fbQ/2 or fbQ′/2 = 2.5 Mbps/2 = 1.25 MHz

The output wave from the balanced modulator is

(sin 2πfat)(sin 2πfct)
= (1/2)cos 2π(fc - fa)t - (1/2)cos 2π(fc + fa)t
= (1/2)cos 2π[(70 - 1.25) MHz]t - (1/2)cos 2π[(70 + 1.25) MHz]t
= (1/2)cos 2π(68.75 MHz)t - (1/2)cos 2π(71.25 MHz)t

The minimum Nyquist bandwidth is

B = (71.25 - 68.75) MHz = 2.5 MHz

The minimum bandwidth for the 16-QAM can also be determined by simply substituting into
Equation 10:

B = 10 Mbps/4 = 2.5 MHz

The symbol rate equals the bandwidth; thus,

symbol rate = 2.5 megabaud

The output spectrum extends from 68.75 MHz to 71.25 MHz, and B = 2.5 MHz.
For the same input bit rate, the minimum bandwidth required to pass the output of a 16-QAM mod-
ulator is equal to one-fourth that of the BPSK modulator, one-half that of QPSK, and 25% less than
with 8-PSK. For each modulation technique, the baud is also reduced by the same proportions.
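The bandwidth and baud arithmetic in Example 11 generalizes to any M-ary PSK or QAM modulator. The short Python sketch below (the function name and structure are my own, not from the text) applies B = fb/n with n = log2 M:

```python
import math

def min_bandwidth_and_baud(fb_bps, m):
    """Minimum double-sided Nyquist bandwidth (Hz) and baud for an M-ary
    PSK/QAM modulator with input bit rate fb_bps (Equation 10 rearranged)."""
    n = math.log2(m)                 # bits encoded per signaling element
    return fb_bps / n, fb_bps / n    # bandwidth and baud are equal

# Example 11 figures: 16-QAM at fb = 10 Mbps -> 2.5 MHz and 2.5 megabaud
print(min_bandwidth_and_baud(10e6, 16))
```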

Example 12
For the following modulation schemes, construct a table showing the number of bits encoded, num-
ber of output conditions, minimum bandwidth, and baud for an information data rate of 12 kbps:
QPSK, 8-PSK, 8-QAM, 16-PSK, and 16-QAM.
Solution
Modulation n M B (Hz) baud

QPSK 2 4 6000 6000


8-PSK 3 8 4000 4000
8-QAM 3 8 4000 4000
16-PSK 4 16 3000 3000
16-QAM 4 16 3000 3000

From Example 12, it can be seen that a 12-kbps data stream can be propagated through a narrower
bandwidth using either 16-PSK or 16-QAM than with the lower levels of encoding.
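The same relationship reproduces the Example 12 table directly; the following loop (a sketch using assumed names) prints n, M, B, and baud for a 12-kbps input:

```python
schemes = {"QPSK": 2, "8-PSK": 3, "8-QAM": 3, "16-PSK": 4, "16-QAM": 4}
fb = 12_000  # information data rate (bps)

print(f"{'Modulation':<12}{'n':>3}{'M':>5}{'B (Hz)':>9}{'baud':>7}")
for name, n in schemes.items():
    # B = baud = fb / n for M-ary PSK and QAM
    print(f"{name:<12}{n:>3}{2 ** n:>5}{fb // n:>9}{fb // n:>7}")
```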

Table 1 summarizes the relationship between the number of bits encoded, the num-
ber of output conditions possible, the minimum bandwidth, and the baud for ASK, FSK,
PSK, and QAM. Note that with the three binary modulation schemes (ASK, FSK, and

Table 1 ASK, FSK, PSK, and QAM Summary

Modulation Encoding Scheme Outputs Possible Minimum Bandwidth Baud

ASK Single bit 2 fb fb


FSK Single bit 2 fb fb
BPSK Single bit 2 fb fb
QPSK Dibits 4 fb /2 fb /2
8-PSK Tribits 8 fb /3 fb /3
8-QAM Tribits 8 fb /3 fb /3
16-QAM Quadbits 16 fb /4 fb /4
16-PSK Quadbits 16 fb /4 fb /4
32-PSK Five bits 32 fb /5 fb /5
32-QAM Five bits 32 fb /5 fb /5
64-PSK Six bits 64 fb /6 fb /6
64-QAM Six bits 64 fb /6 fb /6
128-PSK Seven bits 128 fb /7 fb /7
128-QAM Seven bits 128 fb /7 fb /7

Note: fb indicates a magnitude equal to the input bit rate.


BPSK), n = 1, M = 2, only two output conditions are possible, and the baud is equal to the
bit rate. However, for values of n > 1, the number of output conditions increases, and the
minimum bandwidth and baud decrease. Therefore, digital modulation schemes where n > 1
achieve bandwidth compression (i.e., less bandwidth is required to propagate a given bit
rate). When data compression is performed, higher data transmission rates are possible for
a given bandwidth.

7 BANDWIDTH EFFICIENCY

Bandwidth efficiency (sometimes called information density or spectral efficiency) is often


used to compare the performance of one digital modulation technique to another. In
essence, bandwidth efficiency is the ratio of the transmission bit rate to the minimum band-
width required for a particular modulation scheme. Bandwidth efficiency is generally nor-
malized to a 1-Hz bandwidth and, thus, indicates the number of bits that can be propagated
through a transmission medium for each hertz of bandwidth. Mathematically, bandwidth
efficiency is

Bη = transmission bit rate (bps) / minimum bandwidth (Hz)    (27)
= (bits/s) / hertz = (bits/s) / (cycles/s) = bits/cycle
where Bη = bandwidth efficiency
Bandwidth efficiency can also be given as a percentage by simply multiplying Bη by 100.

Example 13
For an 8-PSK system, operating with an information bit rate of 24 kbps, determine (a) baud, (b) min-
imum bandwidth, and (c) bandwidth efficiency.
Solution a. Baud is determined by substituting into Equation 10:
baud = 24,000/3 = 8000
b. Bandwidth is determined by substituting into Equation 11:
B = 24,000/3 = 8000
c. Bandwidth efficiency is calculated from Equation 27:
Bη = 24,000 bps / 8000 Hz
= 3 bits per second per cycle of bandwidth

Example 14
For 16-PSK and a transmission system with a 10 kHz bandwidth, determine the maximum bit rate.
Solution The bandwidth efficiency for 16-PSK is 4, which means that four bits can be propagated
through the system for each hertz of bandwidth. Therefore, the maximum bit rate is simply the prod-
uct of the bandwidth and the bandwidth efficiency, or
bit rate = 4 × 10,000
= 40,000 bps
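A small helper (names assumed, not from the text) expresses Equation 27 and the inverse use made of it in Example 14:

```python
def bandwidth_efficiency(bit_rate_bps, min_bandwidth_hz):
    # Equation 27: bits propagated per hertz of bandwidth
    return bit_rate_bps / min_bandwidth_hz

# Example 13: 8-PSK at 24 kbps in 8 kHz -> 3 bits/s per hertz
print(bandwidth_efficiency(24_000, 8_000))
# Example 14: 16-PSK (4 bits/s/Hz) in a 10-kHz channel -> 40,000 bps maximum
print(4 * 10_000)
```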


Table 2 ASK, FSK, PSK, and QAM Summary

Modulation Encoding Scheme Outputs Possible Minimum Bandwidth Baud Bη

ASK Single bit 2 fb fb 1


FSK Single bit 2 fb fb 1
BPSK Single bit 2 fb fb 1
QPSK Dibits 4 fb /2 fb /2 2
8-PSK Tribits 8 fb /3 fb /3 3
8-QAM Tribits 8 fb /3 fb /3 3
16-PSK Quadbits 16 fb /4 fb /4 4
16-QAM Quadbits 16 fb /4 fb /4 4
32-PSK Five bits 32 fb /5 fb /5 5
64-QAM Six bits 64 fb /6 fb /6 6

Note: fb indicates a magnitude equal to the input bit rate.

7-1 Digital Modulation Summary


The properties of several digital modulation schemes are summarized in Table 2.

8 CARRIER RECOVERY

Carrier recovery is the process of extracting a phase-coherent reference carrier from a received signal. This is sometimes called phase referencing.
In the phase modulation techniques described thus far, the binary data were encoded
as a precise phase of the transmitted carrier. (This is referred to as absolute phase encoding.)
Depending on the encoding method, the angular separation between adjacent phasors varied
between 30° and 180°. To correctly demodulate the data, a phase-coherent carrier was re-
covered and compared with the received carrier in a product detector. To determine the ab-
solute phase of the received carrier, it is necessary to produce a carrier at the receiver that is
phase coherent with the transmit reference oscillator. This is the function of the carrier re-
covery circuit.
With PSK and QAM, the carrier is suppressed in the balanced modulators and, there-
fore, is not transmitted. Consequently, at the receiver the carrier cannot simply be tracked
with a standard phase-locked loop (PLL). With suppressed-carrier systems, such as PSK
and QAM, sophisticated methods of carrier recovery are required, such as a squaring loop,
a Costas loop, or a remodulator.

8-1 Squaring Loop


A common method of achieving carrier recovery for BPSK is the squaring loop. Figure 37
shows the block diagram of a squaring loop. The received BPSK waveform is filtered and
then squared. The filtering reduces the spectral width of the received noise. The squaring
circuit removes the modulation and generates the second harmonic of the carrier frequency.
This harmonic is phase tracked by the PLL. The VCO output frequency from the PLL then
is divided by 2 and used as the phase reference for the product detectors.

FIGURE 37 Squaring loop carrier recovery circuit for a BPSK receiver


With BPSK, only two output phases are possible: +sin ωct and −sin ωct. Mathematically, the operation of the squaring circuit can be described as follows. For a receive signal of +sin ωct, the output of the squaring circuit is
output = (+sin ωct)(+sin ωct) = sin² ωct
= 1/2 (1 − cos 2ωct) = 1/2 − 1/2 cos 2ωct   (the constant 1/2 is filtered out)
For a received signal of −sin ωct, the output of the squaring circuit is
output = (−sin ωct)(−sin ωct) = sin² ωct
= 1/2 (1 − cos 2ωct) = 1/2 − 1/2 cos 2ωct   (the constant 1/2 is filtered out)
It can be seen that in both cases, the output from the squaring circuit contained a con-
stant voltage (1/2 V) and a signal at twice the carrier frequency (cos 2ωct). The constant
voltage is removed by filtering, leaving only cos 2ωct.
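A quick numerical check of this result is sketched below (an illustration with assumed parameter values, not part of the text): squaring a BPSK waveform removes the data and concentrates the remaining energy at twice the carrier frequency.

```python
import numpy as np

fc, fs = 1_000.0, 100_000.0                              # carrier and sample rate (assumed)
t = np.arange(0, 0.01, 1 / fs)
data = np.where(np.floor(t * 500) % 2 == 0, 1.0, -1.0)   # alternating logic 1s and 0s
bpsk = data * np.sin(2 * np.pi * fc * t)                 # +sin wct or -sin wct

squared = bpsk ** 2                                      # 1/2 - 1/2 cos(2 wc t); data removed
spectrum = np.abs(np.fft.rfft(squared))
freqs = np.fft.rfftfreq(len(squared), 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])                # ~2000 Hz, i.e., twice the carrier
```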

8-2 Costas Loop


A second method of carrier recovery is the Costas, or quadrature, loop shown in Figure 38.
The Costas loop produces the same results as a squaring circuit followed by an ordinary PLL
in place of the BPF. This recovery scheme uses two parallel tracking loops (I and Q) simul-
taneously to derive the product of the I and Q components of the signal that drives the VCO.
The in-phase (I) loop uses the VCO as in a PLL, and the quadrature (Q) loop uses a 90°
shifted VCO signal. Once the frequency of the VCO is equal to the suppressed-carrier

FIGURE 38 Costas loop carrier recovery circuit


FIGURE 39 Remodulator loop carrier recovery circuit

frequency, the product of the I and Q signals will produce an error voltage proportional to
any phase error in the VCO. The error voltage controls the phase and, thus, the frequency of
the VCO.

8-3 Remodulator
A third method of achieving recovery of a phase and frequency coherent carrier is the re-
modulator, shown in Figure 39. The remodulator produces a loop error voltage that is pro-
portional to twice the phase error between the incoming signal and the VCO signal. The re-
modulator has a faster acquisition time than either the squaring or the Costas loops.
Carrier recovery circuits for higher-than-binary encoding techniques are similar to BPSK
except that circuits that raise the receive signal to the fourth, eighth, and higher powers are used.

9 CLOCK RECOVERY

As with any digital system, digital radio requires precise timing or clock synchronization
between the transmit and the receive circuitry. Because of this, it is necessary to regenerate
clocks at the receiver that are synchronous with those at the transmitter.
Figure 40a shows a simple circuit that is commonly used to recover clocking infor-
mation from the received data. The recovered data are delayed by one-half a bit time and
then compared with the original data in an XOR circuit. The frequency of the clock that is
recovered with this method is equal to the received data rate (fb). Figure 40b shows the re-
lationship between the data and the recovered clock timing. From Figure 40b, it can be seen
that as long as the receive data contain a substantial number of transitions (1/0 sequences),
the recovered clock is maintained. If the receive data were to undergo an extended period
of successive 1s or 0s, the recovered clock would be lost. To prevent this from occurring,
the data are scrambled at the transmit end and descrambled at the receive end. Scrambling
introduces transitions (pulses) into the binary signal using a prescribed algorithm, and the
descrambler uses the same algorithm to remove the transitions.
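The delay-and-XOR idea of Figure 40a can be sketched numerically as follows (the oversampling factor and names are my own assumptions); each data transition produces a pulse from which the PLL or filter derives the clock:

```python
import numpy as np

samples_per_bit = 8
bits = [1, 0, 1, 1, 0, 0, 1, 0]
nrz = np.repeat(bits, samples_per_bit)            # oversampled received data

delayed = np.roll(nrz, samples_per_bit // 2)      # copy delayed by one-half bit time
clock_pulses = np.bitwise_xor(nrz, delayed)       # high for half a bit after each transition
print(clock_pulses.reshape(len(bits), samples_per_bit))
```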


FIGURE 40 (a) Clock recovery circuit; (b) timing diagram

FIGURE 41 DBPSK modulator: (a) block diagram; (b) timing diagram

10 DIFFERENTIAL PHASE-SHIFT KEYING

Differential phase-shift keying (DPSK) is an alternative form of digital modulation where


the binary input information is contained in the difference between two successive sig-
naling elements rather than the absolute phase. With DPSK, it is not necessary to recover
a phase-coherent carrier. Instead, a received signaling element is delayed by one signal-
ing element time slot and then compared with the next received signaling element. The
difference in the phase of the two signaling elements determines the logic condition of
the data.

10-1 Differential BPSK

10-1-1 DBPSK transmitter. Figure 41a shows a simplified block diagram of a


differential binary phase-shift keying (DBPSK) transmitter. An incoming information bit is


FIGURE 42 DBPSK demodulator: (a) block diagram; (b) timing sequence

XNORed with the preceding bit prior to entering the BPSK modulator (balanced modulator).
For the first data bit, there is no preceding bit with which to compare it. Therefore, an initial ref-
erence bit is assumed. Figure 41b shows the relationship between the input data, the XNOR out-
put data, and the phase at the output of the balanced modulator. If the initial reference bit is as-
sumed a logic 1, the output from the XNOR circuit is simply the complement of that shown.
In Figure 41b, the first data bit is XNORed with the reference bit. If they are the same,
the XNOR output is a logic 1; if they are different, the XNOR output is a logic 0. The bal-
anced modulator operates the same as a conventional BPSK modulator; a logic 1 produces
+sin ωct at the output, and a logic 0 produces −sin ωct at the output.

10-1-2 DBPSK receiver. Figure 42 shows the block diagram and timing sequence
for a DBPSK receiver. The received signal is delayed by one bit time, then compared with
the next signaling element in the balanced modulator. If they are the same, a logic 1 (+ voltage) is generated. If they are different, a logic 0 (− voltage) is generated. If the reference
phase is incorrectly assumed, only the first demodulated bit is in error. Differential encod-
ing can be implemented with higher-than-binary digital modulation schemes, although the
differential algorithms are much more complicated than for DBPSK.
The primary advantage of DBPSK is the simplicity with which it can be imple-
mented. With DBPSK, no carrier recovery circuit is needed. A disadvantage of DBPSK is
that it requires between 1 dB and 3 dB more signal-to-noise ratio to achieve the same bit er-
ror rate as that of absolute PSK.
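A bit-level sketch of this scheme is shown below (the function names are mine; the actual circuits of Figures 41 and 42 operate on the modulated carrier). The transmitter XNORs each data bit with the previously transmitted bit, and the receiver compares successive received bits:

```python
def dbpsk_encode(data, reference=0):
    out, prev = [], reference
    for bit in data:
        prev = 1 if bit == prev else 0         # XNOR with the preceding encoded bit
        out.append(prev)
    return out

def dbpsk_decode(encoded, reference=0):
    data, prev = [], reference
    for bit in encoded:
        data.append(1 if bit == prev else 0)   # same phase -> 1, different -> 0
        prev = bit
    return data

tx = dbpsk_encode([0, 1, 1, 0, 1])
print(tx, dbpsk_decode(tx))                    # the decoded bits match the input data
```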

11 TRELLIS CODE MODULATION

Achieving data transmission rates in excess of 9600 bps over standard telephone lines with
approximately a 3-kHz bandwidth obviously requires an encoding scheme well beyond the
quadbits used with 16-PSK or 16-QAM (i.e., M must be significantly greater than 16). As
might be expected, higher encoding schemes require higher signal-to-noise ratios. Using
the Shannon limit for information capacity (Equation 4), a data transmission rate of 28.8
kbps through a 3200-Hz bandwidth requires a signal-to-noise ratio of
I (bps) = (3.32 × B) log(1 + S/N)


therefore,  28.8 kbps = (3.32)(3200) log(1 + S/N)
28,800 = 10,624 log(1 + S/N)
28,800/10,624 = log(1 + S/N)
2.71 = log(1 + S/N)
thus,  10^2.71 = 1 + S/N
513 = 1 + S/N
512 = S/N
in dB,  S/N (dB) = 10 log 512
= 27 dB
Transmission rates of 56 kbps require a signal-to-noise ratio of 53 dB, which is virtually
impossible to achieve over a standard telephone circuit.
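The Shannon-limit arithmetic above is easy to reproduce; the sketch below (function name assumed) returns the signal-to-noise ratio required for a given bit rate and bandwidth:

```python
import math

def required_snr_db(bit_rate_bps, bandwidth_hz):
    # From I = 3.32 B log(1 + S/N), solved for S/N and converted to dB
    snr = 10 ** (bit_rate_bps / (3.32 * bandwidth_hz)) - 1
    return 10 * math.log10(snr)

print(required_snr_db(28_800, 3_200))   # about 27 dB
print(required_snr_db(56_000, 3_200))   # about 53 dB
```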
Data transmission rates in excess of 56 kbps can be achieved, however, over standard
telephone circuits using an encoding technique called trellis code modulation (TCM). Dr. Ungerboeck at the IBM Zurich Research Laboratory developed TCM, which uses convolutional (tree) codes and combines encoding and modulation to reduce the probability of error, thus improving the bit error performance. The fundamental idea behind
TCM is introducing controlled redundancy in the bit stream with a convolutional code,
which reduces the likelihood of transmission errors. What sets TCM apart from standard
encoding schemes is the introduction of redundancy by doubling the number of signal
points in a given PSK or QAM constellation.
Trellis code modulation is sometimes thought of as a magical method of increasing trans-
mission bit rates over communications systems using QAM or PSK with fixed bandwidths. Few
people fully understand this concept, as modem manufacturers do not seem willing to share in-
formation on TCM. Therefore, the following explanation is intended not to fully describe the
process of TCM but rather to introduce the topic and give the reader a basic understanding of
how TCM works and the advantage it has over conventional digital modulation techniques.
M-ary QAM and PSK utilize a signal set of 2^N = M, where N equals the number of bits encoded into M different conditions. Therefore, N = 2 produces a standard PSK con-
stellation with four signal points (i.e., QPSK) as shown in Figure 43a. Using TCM, the
number of signal points increases to two times M possible symbols for the same factor-of-
M reduction in bandwidth while transmitting each signal during the same time interval.
TCM-encoded QPSK is shown in Figure 43b.
Trellis coding also defines the manner in which signal-state transitions are allowed to
occur, and transitions that do not follow this pattern are interpreted in the receiver as trans-
mission errors. Therefore, TCM can improve error performance by restricting the manner
in which signals are allowed to transition. For values of N greater than 2, QAM is the mod-
ulation scheme of choice for TCM; however, for simplification purposes, the following ex-
planation uses PSK as it is easier to illustrate.
Figure 44 shows a TCM scheme using two-state 8-PSK, which is essentially two
QPSK constellations offset by 45°. One four-state constellation is labeled 0-4-2-6, and the
other is labeled 1-5-3-7. For this explanation, the signal point labels 0 through 7 are meant
not to represent the actual data conditions but rather to simply indicate a convenient method
of labeling the various signal points. Each digit represents one of four signal points per-
mitted within each of the two QPSK constellations. When in the 0-4-2-6 constellation and
a 0 or 4 is transmitted, the system remains in the same constellation. However, when either
a 2 or 6 is transmitted, the system switches to the 1-5-3-7 constellation. Once in the 1-5-3-7


FIGURE 43 QPSK constellations: (a) standard encoding format; (b) trellis encoding format

FIGURE 44 8-PSK TCM constellations (the two QPSK constellations 0-4-2-6 and 1-5-3-7, offset by 45°)

constellation and a 3 or 7 is transmitted, the system remains in the same constellation, and
if a 1 or 5 is transmitted, the system switches to the 0-4-2-6 constellation. Remember that
each symbol represents two bits, so the system undergoes a 45° phase shift whenever it
switches between the two constellations. A complete error analysis of standard QPSK
compared with TCM QPSK would reveal a coding gain for TCM of 2 to 1, or 3 dB.
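The constellation-switching rule just described can be modeled as a tiny state machine (my own toy encoding of the rule, not actual modem logic):

```python
def constellation_sequence(symbols, start="0-4-2-6"):
    """Return the constellation used for each transmitted symbol, following the
    two-state 8-PSK TCM rule: 2/6 switch out of 0-4-2-6, 1/5 switch out of 1-5-3-7."""
    state, path = start, []
    for s in symbols:
        path.append(state)
        if state == "0-4-2-6" and s in (2, 6):
            state = "1-5-3-7"
        elif state == "1-5-3-7" and s in (1, 5):
            state = "0-4-2-6"
    return path

print(constellation_sequence([0, 2, 3, 5, 4]))
# ['0-4-2-6', '0-4-2-6', '1-5-3-7', '1-5-3-7', '0-4-2-6']
```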
Table 3 lists the coding gains achieved for TCM coding schemes with several different
trellis states.
The maximum data rate achievable using a given bandwidth can be determined by re-
arranging Equation 10:
N × B = fb


Table 3 Trellis Coding Gain

Number of Trellis States Coding Gain (dB)

2 3.0
4 5.5
8 6.0
16 6.5
32 7.1
64 7.3
128 7.3
256 7.4

where N = number of bits encoded (bits)
B = bandwidth (hertz)
fb = transmission bit rate (bits per second)
Remember that with M-ary QAM or PSK systems, the baud equals the minimum re-
quired bandwidth. Therefore, a 3200-Hz bandwidth using a nine-bit trellis code pro-
duces a 3200 baud signal with each baud carrying nine bits. Therefore, the transmission
rate fb = 9 × 3200 = 28.8 kbps.
TCM is thought of as a coding scheme that improves on standard QAM by increasing
the distance between symbols on the constellation (known as the Euclidean distance). The
first TCM system used a five-bit code, which included four QAM bits (a quadbit) and a fifth
bit used to help decode the quadbit. Transmitting five bits within a single signaling element
requires producing 32 discernible signals. Figure 45 shows a 128-point QAM constellation.

FIGURE 45 128-point QAM TCM constellation


FIGURE 46 One-fourth of a 960-point QAM TCM constellation (240 signal points shown)

A 3200-baud signal using nine-bit TCM encoding produces 512 different codes.
The nine data bits plus a redundant bit for TCM requires a 960-point constellation. Figure
46 shows one-fourth of the 960-point superconstellation showing 240 signal points. The
full superconstellation can be obtained by rotating the 240 points shown by 90°, 180°,
and 270°.

12 PROBABILITY OF ERROR AND BIT ERROR RATE

Probability of error P(e) and bit error rate (BER) are often used interchangeably, al-
though in practice they do have slightly different meanings. P(e) is a theoretical (mathe-
matical) expectation of the bit error rate for a given system. BER is an empirical (histor-
ical) record of a system’s actual bit error performance. For example, if a system has a
P(e) of 10^−5, this means that mathematically you can expect one bit error in every
100,000 bits transmitted (1/10^5 = 1/100,000). If a system has a BER of 10^−5, this
means that in past performance there was one bit error for every 100,000 bits transmit-
ted. A bit error rate is measured and then compared with the expected probability of er-
ror to evaluate a system’s performance.


Probability of error is a function of the carrier-to-noise power ratio (or, more specif-
ically, the average energy per bit-to-noise power density ratio) and the number of possible
encoding conditions used (M-ary). Carrier-to-noise power ratio is the ratio of the average
carrier power (the combined power of the carrier and its associated sidebands) to the
thermal noise power. Carrier power can be stated in watts or dBm, where

C(dBm) = 10 log [C(watts)/0.001]    (28)
Thermal noise power is expressed mathematically as
N = KTB (watts)    (29)
where N = thermal noise power (watts)
K = Boltzmann's proportionality constant (1.38 × 10^−23 joules per kelvin)
T = temperature (kelvin: 0 K = −273°C, room temperature = 290 K)
B = bandwidth (hertz)
Stated in dBm,  N(dBm) = 10 log [KTB/0.001]    (30)
Mathematically, the carrier-to-noise power ratio is
C/N = C/(KTB)   (unitless ratio)    (31)
where C = carrier power (watts)
N = noise power (watts)
Stated in dB,  C/N (dB) = 10 log (C/N)
= C(dBm) − N(dBm)    (32)
Energy per bit is simply the energy of a single bit of information. Mathematically, en-
ergy per bit is
Eb = CTb (J/bit)    (33)
where Eb = energy of a single bit (joules per bit)
Tb = time of a single bit (seconds)
C = carrier power (watts)
Stated in dBJ,  Eb(dBJ) = 10 log Eb    (34)
and because Tb = 1/fb, where fb is the bit rate in bits per second, Eb can be rewritten as
Eb = C/fb (J/bit)    (35)
Stated in dBJ,  Eb(dBJ) = 10 log (C/fb)    (36)
= 10 log C − 10 log fb    (37)


Noise power density is the thermal noise power normalized to a 1-Hz bandwidth (i.e.,
the noise power present in a 1-Hz bandwidth). Mathematically, noise power density is


N0 = N/B (W/Hz)    (38)
where N0 = noise power density (watts per hertz)
N = thermal noise power (watts)
B = bandwidth (hertz)
Stated in dBm,  N0(dBm) = 10 log (N/0.001) − 10 log B    (39)
= N(dBm) − 10 log B    (40)
Combining Equations 29 and 38 yields
N0 = KTB/B = KT (W/Hz)    (41)
Stated in dBm,  N0(dBm) = 10 log (K/0.001) + 10 log T    (42)
Energy per bit-to-noise power density ratio is used to compare two or more digital
modulation systems that use different transmission rates (bit rates), modulation schemes
(FSK, PSK, QAM), or encoding techniques (M-ary). The energy per bit-to-noise power
density ratio is simply the ratio of the energy of a single bit to the noise power present in
1 Hz of bandwidth. Thus, Eb/N0 normalizes all multiphase modulation schemes to a com-
mon noise bandwidth, allowing for a simpler and more accurate comparison of their error
performance. Mathematically, Eb/N0 is
Eb/N0 = (C/fb)/(N/B) = CB/(Nfb)    (43)
where Eb/N0 is the energy per bit-to-noise power density ratio. Rearranging Equation 43 yields the following expression:
Eb/N0 = (C/N)(B/fb)    (44)
where Eb/N0 = energy per bit-to-noise power density ratio
C/N = carrier-to-noise power ratio
B/fb = noise bandwidth-to-bit rate ratio
Stated in dB,  Eb/N0 (dB) = 10 log (C/N) + 10 log (B/fb)    (45)
or  = 10 log Eb − 10 log N0    (46)
From Equation 44, it can be seen that the Eb/N0 ratio is simply the product of the carrier-to-
noise power ratio and the noise bandwidth-to-bit rate ratio. Also, from Equation 44, it can
be seen that when the bandwidth equals the bit rate, Eb/N0 = C/N.
In general, the minimum carrier-to-noise power ratio required for QAM systems is less than that required for comparable PSK systems. Also, the higher the level of encoding
used (the higher the value of M), the higher the minimum carrier-to-noise power ratio.
Example 15
For a QPSK system and the given parameters, determine
a. Carrier power in dBm.
b. Noise power in dBm.


c. Noise power density in dBm.


d. Energy per bit in dBJ.
e. Carrier-to-noise power ratio in dB.
f. Eb/N0 ratio.
C = 10^−12 W    fb = 60 kbps
N = 1.2 × 10^−14 W    B = 120 kHz
Solution a. The carrier power in dBm is determined by substituting into Equation 28:
C = 10 log (10^−12/0.001) = −90 dBm
b. The noise power in dBm is determined by substituting into Equation 30:
N = 10 log (1.2 × 10^−14/0.001) = −109.2 dBm
c. The noise power density is determined by substituting into Equation 40:
N0 = −109.2 dBm − 10 log 120 kHz = −160 dBm
d. The energy per bit is determined by substituting into Equation 36:
Eb = 10 log (10^−12/60 kbps) = −167.8 dBJ
e. The carrier-to-noise power ratio is determined by substituting into Equation 32:
C/N = 10 log (10^−12/1.2 × 10^−14) = 19.2 dB
f. The energy per bit-to-noise density ratio is determined by substituting into Equation 45:
Eb/N0 = 19.2 + 10 log (120 kHz/60 kbps) = 22.2 dB
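The chain of conversions in Example 15 is summarized in the sketch below (the variable names are my own); each quantity follows Equations 28 through 45:

```python
import math

def db(x):
    return 10 * math.log10(x)

C, N = 1e-12, 1.2e-14           # carrier and noise power (W)
fb, B = 60e3, 120e3             # bit rate (bps) and bandwidth (Hz)

C_dBm  = db(C / 0.001)          # -90 dBm     (Equation 28)
N_dBm  = db(N / 0.001)          # -109.2 dBm  (Equation 30)
N0_dBm = N_dBm - db(B)          # -160 dBm    (Equation 40)
Eb_dBJ = db(C / fb)             # -167.8 dBJ  (Equation 36)
CN_dB  = db(C / N)              # 19.2 dB     (Equation 32)
EbN0   = CN_dB + db(B / fb)     # 22.2 dB     (Equation 45)
print(C_dBm, N_dBm, N0_dBm, Eb_dBJ, CN_dB, EbN0)
```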

13 ERROR PERFORMANCE

13-1 PSK Error Performance


The bit error performance for the various multiphase digital modulation systems is directly re-
lated to the distance between points on a signal state-space diagram. For example, on the signal
state-space diagram for BPSK shown in Figure 47a, it can be seen that the two signal points
(logic 1 and logic 0) have maximum separation (d) for a given power level (D). In essence, one
BPSK signal state is the exact negative of the other. As the figure shows, a noise vector (VN),
when combined with the signal vector (VS), effectively shifts the phase of the signaling element
(VSE) alpha degrees. If the phase shift exceeds ±90°, the signal element is shifted beyond the threshold points into the error region. For BPSK, it would require a noise vector of sufficient amplitude and phase to produce more than a ±90° phase shift in the signaling element to produce an error. For PSK systems, the general formula for the threshold points is
TP = ±π/M    (47)
where M is the number of signal states.
The phase relationship between signaling elements for BPSK (i.e., 180° out of phase)
is the optimum signaling format, referred to as antipodal signaling, and occurs only when
two binary signal levels are allowed and when one signal is the exact negative of the other.
Because no other bit-by-bit signaling scheme is any better, antipodal performance is often
used as a reference for comparison.
The error performance of the other multiphase PSK systems can be compared with
that of BPSK simply by determining the relative decrease in error distance between points


FIGURE 47 PSK error region: (a) BPSK; (b) QPSK

on a signal state-space diagram. For PSK, the general formula for the maximum distance
between signaling points is given by
sin θ = sin (360°/2M) = (d/2)/D    (48)
where d = error distance
M = number of phases
D = peak signal amplitude
Rearranging Equation 48 and solving for d yields
d = [2 sin (180°/M)] D    (49)
Figure 47b shows the signal state-space diagram for QPSK. From Figure 47 and Equa-
tion 48, it can be seen that QPSK can tolerate only a ±45° phase shift. From Equation 47,


the maximum phase shift for 8-PSK and 16-PSK is ±22.5° and ±11.25°, respectively. Con-
sequently, the higher levels of modulation (i.e., the greater the value of M) require a greater
energy per bit-to-noise power density ratio to reduce the effect of noise interference. Hence,
the higher the level of modulation, the smaller the angular separation between signal points
and the smaller the error distance.
The general expression for the bit error probability of an M-phase PSK system is

P(e) = (1/log2M) erf(z)    (50)
where erf = error function
z = sin(π/M)(√log2M)(√(Eb/N0))
By substituting into Equation 50, it can be shown that QPSK provides the same error
performance as BPSK. This is because the 3-dB reduction in error distance for QPSK is off-
set by the 3-dB decrease in its bandwidth (in addition to the error distance, the relative
widths of the noise bandwidths must also be considered). Thus, both systems provide opti-
mum performance. Figure 48 shows the error performance for 2-, 4-, 8-, 16-, and 32-PSK sys-
tems as a function of Eb/N0.

FIGURE 48 Error rates of PSK modulation systems


Example 16
Determine the minimum bandwidth required to achieve a P(e) of 10^−7 for an 8-PSK system operating at 10 Mbps with a carrier-to-noise power ratio of 11.7 dB.
Solution From Figure 48, the minimum Eb/N0 ratio to achieve a P(e) of 10^−7 for an 8-PSK system is 14.7 dB. The minimum bandwidth is found by rearranging Equation 44:
B/fb (dB) = Eb/N0 (dB) − C/N (dB)
= 14.7 dB − 11.7 dB = 3 dB
B/fb = antilog 3 dB = 2
B = 2 × 10 Mbps = 20 MHz
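The rearrangement used in Example 16 is captured in the short sketch below (names assumed): subtract the carrier-to-noise ratio from the required Eb/N0, take the antilog, and multiply by the bit rate.

```python
def min_bandwidth(fb_bps, ebn0_db, cn_db):
    ratio_db = ebn0_db - cn_db               # B/fb expressed in dB
    return fb_bps * 10 ** (ratio_db / 10)

print(min_bandwidth(10e6, 14.7, 11.7))       # about 20 MHz (antilog of 3 dB is about 2)
```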

13-2 QAM Error Performance


For a large number of signal points (i.e., M-ary systems greater than 4), QAM outperforms
PSK. This is because the distance between signaling points in a PSK system is smaller
than the distance between points in a comparable QAM system. The general expression
for the distance between adjacent signaling points for a QAM system with L levels on each
axis is
d = [√2/(L − 1)] D    (51)
where d = error distance
L = number of levels on each axis
D = peak signal amplitude
In comparing Equation 49 to Equation 51, it can be seen that QAM systems have an
advantage over PSK systems with the same peak signal power level.
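That comparison can be made concrete with Equations 49 and 51; the sketch below (my own helper names) evaluates the error distance for 16-PSK and for 16-QAM (L = 4 levels per axis) at the same peak amplitude:

```python
import math

def d_psk(M, D=1.0):
    return 2 * math.sin(math.radians(180 / M)) * D   # Equation 49

def d_qam(L, D=1.0):
    return math.sqrt(2) / (L - 1) * D                # Equation 51

print(d_psk(16), d_qam(4))    # about 0.390 D versus about 0.471 D
```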
The general expression for the bit error probability of an L-level QAM system is

P(e) = (1/log2L)[(L − 1)/L] erfc(z)    (52)
where erfc(z) is the complementary error function
z = [√log2L/(L − 1)] √(Eb/N0)
Figure 49 shows the error performance for 4-, 16-, 32-, and 64-QAM systems as a function
of Eb/N0.
Table 4 lists the minimum carrier-to-noise power ratios and energy per bit-to-noise
power density ratios required for a probability of error of 10^−6 for several PSK and QAM
modulation schemes.

Example 17
Which system requires the highest Eb/N0 ratio for a probability of error of 10^−6, a four-level QAM system or an 8-PSK system?
Solution From Figure 49, the minimum Eb/N0 ratio required for a four-level QAM system is 10.6 dB. From Figure 48, the minimum Eb/N0 ratio required for an 8-PSK system is 14 dB. Therefore, to achieve a P(e) of 10^−6, a four-level QAM system would require 3.4 dB less Eb/N0 ratio.


FIGURE 49 Error rates of QAM modulation systems

Table 4 Performance Comparison of Various Digital Modulation


Schemes (BER = 10^−6)

Modulation Technique C/N Ratio (dB) Eb /N0 Ratio (dB)

BPSK 10.6 10.6


QPSK 13.6 10.6
4-QAM 13.6 10.6
8-QAM 17.6 10.6
8-PSK 18.5 14
16-PSK 24.3 18.3
16-QAM 20.5 14.5
32-QAM 24.4 17.4
64-QAM 26.6 18.8


FIGURE 50 Error rates for FSK modulation systems

13-3 FSK Error Performance


The error probability for FSK systems is evaluated in a somewhat different manner than
PSK and QAM. There are essentially only two types of FSK systems: noncoherent (asyn-
chronous) and coherent (synchronous). With noncoherent FSK, the transmitter and receiver
are not frequency or phase synchronized. With coherent FSK, local receiver reference sig-
nals are in frequency and phase lock with the transmitted signals. The probability of error
for noncoherent FSK is

P(e) = (1/2) exp(−Eb/2N0)    (53)
The probability of error for coherent FSK is
P(e) = erfc √(Eb/N0)    (54)
Figure 50 shows probability of error curves for both coherent and noncoherent FSK for sev-
eral values of Eb/N0. From Equations 53 and 54, it can be determined that the probability
of error for noncoherent FSK is greater than that of coherent FSK for equal energy per bit-
to-noise power density ratios.
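Equations 53 and 54 can be evaluated directly; the sketch below (assumed names, with erfc taken as the complementary error function) compares the two FSK cases at the same Eb/N0 and shows the noncoherent case is always worse:

```python
import math

def pe_noncoherent_fsk(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.exp(-ebn0 / 2)        # Equation 53

def pe_coherent_fsk(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return math.erfc(math.sqrt(ebn0))       # Equation 54

for ebn0_db in (8, 10, 12):
    print(ebn0_db, pe_noncoherent_fsk(ebn0_db), pe_coherent_fsk(ebn0_db))
```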

QUESTIONS
1. Explain digital transmission and digital radio.
2. Define information capacity.
3. What are the three most predominant modulation schemes used in digital radio systems?


4. Explain the relationship between bits per second and baud for an FSK system.
5. Define the following terms for FSK modulation: frequency deviation, modulation index, and dev-
iation ratio.
6. Explain the relationship between (a) the minimum bandwidth required for an FSK system and
the bit rate and (b) the mark and space frequencies.
7. What is the difference between standard FSK and MSK? What is the advantage of MSK?
8. Define PSK.
9. Explain the relationship between bits per second and baud for a BPSK system.
10. What is a constellation diagram, and how is it used with PSK?
11. Explain the relationship between the minimum bandwidth required for a BPSK system and the
bit rate.
12. Explain M-ary.
13. Explain the relationship between bits per second and baud for a QPSK system.
14. Explain the significance of the I and Q channels in a QPSK modulator.
15. Define dibit.
16. Explain the relationship between the minimum bandwidth required for a QPSK system and the
bit rate.
17. What is a coherent demodulator?
18. What advantage does OQPSK have over conventional QPSK? What is a disadvantage of OQPSK?
19. Explain the relationship between bits per second and baud for an 8-PSK system.
20. Define tribit.
21. Explain the relationship between the minimum bandwidth required for an 8-PSK system and the
bit rate.
22. Explain the relationship between bits per second and baud for a 16-PSK system.
23. Define quadbit.
24. Define QAM.
25. Explain the relationship between the minimum bandwidth required for a 16-QAM system and the
bit rate.
26. What is the difference between PSK and QAM?
27. Define bandwidth efficiency.
28. Define carrier recovery.
29. Explain the differences between absolute PSK and differential PSK.
30. What is the purpose of a clock recovery circuit? When is it used?
31. What is the difference between probability of error and bit error rate?

PROBLEMS
1. Determine the bandwidth and baud for an FSK signal with a mark frequency of 32 kHz, a space
frequency of 24 kHz, and a bit rate of 4 kbps.
2. Determine the maximum bit rate for an FSK signal with a mark frequency of 48 kHz, a space fre-
quency of 52 kHz, and an available bandwidth of 10 kHz.
3. Determine the bandwidth and baud for an FSK signal with a mark frequency of 99 kHz, a space
frequency of 101 kHz, and a bit rate of 10 kbps.
4. Determine the maximum bit rate for an FSK signal with a mark frequency of 102 kHz, a space
frequency of 104 kHz, and an available bandwidth of 8 kHz.
5. Determine the minimum bandwidth and baud for a BPSK modulator with a carrier frequency of
40 MHz and an input bit rate of 500 kbps. Sketch the output spectrum.
6. For the QPSK modulator shown in Figure 17, change the +90° phase-shift network to −90° and
sketch the new constellation diagram.
7. For the QPSK demodulator shown in Figure 21, determine the I and Q bits for an input signal of
sin ωct  cos ωct.


8. For an 8-PSK modulator with an input data rate (fb) equal to 20 Mbps and a carrier frequency of
100 MHz, determine the minimum double-sided Nyquist bandwidth (fN) and the baud. Sketch the
output spectrum.
9. For the 8-PSK modulator shown in Figure 23, change the reference oscillator to cos ωct and
sketch the new constellation diagram.
10. For a 16-QAM modulator with an input bit rate (fb) equal to 20 Mbps and a carrier frequency of
100 MHz, determine the minimum double-sided Nyquist bandwidth (fN) and the baud. Sketch the
output spectrum.
11. For the 16-QAM modulator shown in Figure 33, change the reference oscillator to cos ωct and determine the output expressions for the following I, I', Q, and Q' input conditions: 0000, 1111, 1010, and 0101.
12. Determine the bandwidth efficiency for the following modulators:
a. QPSK, fb  10 Mbps
b. 8-PSK, fb  21 Mbps
c. 16-QAM, fb  20 Mbps
13. For the DBPSK modulator shown in Figure 41a, determine the output phase sequence for the following input bit sequence: 00110011010101 (assume that the reference bit = 1).
14. For a QPSK system and the given parameters, determine
a. Carrier power in dBm.
b. Noise power in dBm.
c. Noise power density in dBm.
d. Energy per bit in dBJ.
e. Carrier-to-noise power ratio.
f. Eb/N0 ratio.
C = 10^−13 W    fb = 30 kbps
N = 0.06 × 10^−15 W    B = 60 kHz
15. Determine the minimum bandwidth required to achieve a P(e) of 10^−6 for an 8-PSK system operating at 20 Mbps with a carrier-to-noise power ratio of 11 dB.
16. Determine the minimum bandwidth and baud for a BPSK modulator with a carrier frequency of
80 MHz and an input bit rate fb  1 Mbps. Sketch the output spectrum.
17. For the QPSK modulator shown in Figure 17, change the reference oscillator to cos ωct and
sketch the new constellation diagram.
18. For the QPSK demodulator shown in Figure 21, determine the I and Q bits for an input signal
sin ωct  cos ωct.
19. For an 8-PSK modulator with an input bit rate fb  10 Mbps and a carrier frequency fc  80 MHz,
determine the minimum Nyquist bandwidth and the baud. Sketch the output spectrum.
20. For the 8-PSK modulator shown in Figure 23, change the +90° phase-shift network to a −90° phase shifter and sketch the new constellation diagram.
21. For a 16-QAM modulator with an input bit rate fb  10 Mbps and a carrier frequency fc  60 MHz,
determine the minimum double-sided Nyquist frequency and the baud. Sketch the output spectrum.
22. For the 16-QAM modulator shown in Figure 33, change the +90° phase-shift network to a −90° phase shifter and determine the output expressions for the following I, I', Q, and Q' input conditions: 0000, 1111, 1010, and 0101.
23. Determine the bandwidth efficiency for the following modulators:
a. QPSK, fb  20 Mbps
b. 8-PSK, fb  28 Mbps
c. 16-PSK, fb  40 Mbps
24. For the DBPSK modulator shown in Figure 41a, determine the output phase sequence for the following input bit sequence: 11001100101010 (assume that the reference bit is a logic 1).


ANSWERS TO SELECTED PROBLEMS


1. 16 kHz, 4000 baud
3. 22 kHz, 10 kbaud
5. 5 MHz, 5 Mbaud
7. I = 1, Q = 0
9. Q I C Phase
0 0 0 22.5°
0 0 1 67.5°
0 1 0 22.5°
0 1 1 67.5°
1 0 0 157.5°
1 0 1 112.5°
1 1 0 157.5°
1 1 1 112.5°

11. Q Q’ I I’ Phase

0 0 0 0 45°
1 1 1 1 135°
1 0 1 0 135°
0 1 0 1 45°

13. Input 00110011010101


XNOR 101110111001100
15. 40 MHz
17. Q I Phase

0 0 135°
0 1 45°
1 0 135°
1 1 45°

19. 3.33 MHz, 3.33 Mbaud


21. 2.5 MHz, 2.5 Mbaud
23. a. 2 bps/Hz
b. 3 bps/Hz
c. 4 bps/Hz

Introduction to Data
Communications and Networking

CHAPTER OUTLINE

1 Introduction 6 Open Systems Interconnection


2 History of Data Communications 7 Data Communications Circuits
3 Data Communications Network Architecture, 8 Serial and Parallel Data Transmission
Protocols, and Standards 9 Data Communications Circuit Arrangements
4 Standards Organizations for Data 10 Data Communications Networks
Communications 11 Alternate Protocol Suites
5 Layered Network Architecture

OBJECTIVES

■ Define the following terms: data, data communications, data communications circuit, and data communications
network
■ Give a brief description of the evolution of data communications
■ Define data communications network architecture
■ Describe data communications protocols
■ Describe the basic concepts of connection-oriented and connectionless protocols
■ Describe syntax and semantics and how they relate to data communications
■ Define data communications standards and explain why they are necessary
■ Describe the following standards organizations: ISO, ITU-T, IEEE, ANSI, EIA, TIA, IAB, IETF, and IRTF
■ Define open systems interconnection
■ Name and explain the functions of each of the layers of the seven-layer OSI model
■ Define station and node
■ Describe the fundamental block diagram of a two-station data communications circuit and explain how the fol-
lowing terms relate to it: source, transmitter, transmission medium, receiver, and destination
■ Describe serial and parallel data transmission and explain the advantages and disadvantages of both types of trans-
missions

From Chapter 3 of Advanced Electronic Communications Systems, Sixth Edition. Wayne Tomasi.
Copyright © 2004 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.

■ Define data communications circuit arrangements


■ Describe the following transmission modes: simplex, half duplex, full duplex, and full/full duplex
■ Define data communications network
■ Describe the following network components, functions, and features: servers, clients, transmission media, shared
data, shared printers, and network interface card
■ Define local operating system
■ Define network operating system
■ Describe peer-to-peer client/server and dedicated client/server networks
■ Define network topology and describe the following: star, bus, ring, mesh, and hybrid
■ Describe the following classifications of networks: LAN, MAN, WAN, GAN, building backbone, campus back-
bone, and enterprise network
■ Briefly describe the TCP/IP hierarchical model
■ Briefly describe the Cisco three-layer hierarchical model

1 INTRODUCTION

Since the early 1970s, technological advances around the world have occurred at a phe-
nomenal rate, transforming the telecommunications industry into a highly sophisticated
and extremely dynamic field. Where previously telecommunications systems had only
voice to accommodate, the advent of very large-scale integration chips and the accom-
panying low-cost microprocessors, computers, and peripheral equipment has dramati-
cally increased the need for the exchange of digital information. This, of course, neces-
sitated the development and implementation of higher-capacity and much faster means
of communicating.
In the data communications world, data generally are defined as information that is
stored in digital form. The word data is plural; a single unit of data is a datum. Data com-
munications is the process of transferring digital information (usually in binary form) be-
tween two or more points. Information is defined as knowledge or intelligence. Information
that has been processed, organized, and stored is called data.
The fundamental purpose of a data communications circuit is to transfer digital in-
formation from one place to another. Thus, data communications can be summarized as the
transmission, reception, and processing of digital information. The original source infor-
mation can be in analog form, such as the human voice or music, or in digital form, such as
binary-coded numbers or alphanumeric codes. If the source information is in analog form,
it must be converted to digital form at the source and then converted back to analog form at
the destination.
A network is a set of devices (sometimes called nodes or stations) interconnected
by media links. Data communications networks are systems of interrelated computers
and computer equipment and can be as simple as a personal computer connected to a
printer or two personal computers connected together through the public telephone
network. On the other hand, a data communications network can be a complex com-
munications system comprised of one or more mainframe computers and hundreds,
thousands, or even millions of remote terminals, personal computers, and worksta-
tions. In essence, there is virtually no limit to the capacity or size of a data communi-
cations network.
Years ago, a single computer serviced virtually every computing need. Today, the
single-computer concept has been replaced by the networking concept, where a large num-
ber of separate but interconnected computers share their resources. Data communications
networks and systems of networks are used to interconnect virtually all kinds of digital
computing equipment, from automatic teller machines (ATMs) to bank computers; per-
sonal computers to information highways, such as the Internet; and workstations to main-


frame computers. Data communications networks can also be used for airline and hotel
reservation systems, mass media and news networks, and electronic mail delivery systems.
The list of applications for data communications networks is virtually endless.

2 HISTORY OF DATA COMMUNICATIONS

It is highly likely that data communications began long before recorded time in the form of
smoke signals or tom-tom drums, although they surely did not involve electricity or an elec-
tronic apparatus, and it is highly unlikely that they were binary coded. One of the earliest
means of communicating electrically coded information occurred in 1753, when a proposal
submitted to a Scottish magazine suggested running a communications line between vil-
lages comprised of 26 parallel wires, each wire for one letter of the alphabet. A Swiss in-
ventor constructed a prototype of the 26-wire system, but current wire-making technology
proved the idea impractical.
In 1833, Carl Friedrich Gauss developed an unusual system based on a five-by-five
matrix representing 25 letters (I and J were combined). The idea was to send messages over
a single wire by deflecting a needle to the right or left between one and five times. The ini-
tial set of deflections indicated a row, and the second set indicated a column. Consequently,
it could take as many as 10 deflections to convey a single character through the system.
If we limit the scope of data communications to methods that use binary-coded electri-
cal signals to transmit information, then the first successful (and practical) data communica-
tions system was invented by Samuel F. B. Morse in 1832 and called the telegraph. Morse also
developed the first practical data communications code, which he called the Morse code. With
telegraph, dots and dashes (analogous to logic 1s and 0s) are transmitted across a wire using
electromechanical induction. Various combinations of dots, dashes, and pauses represented bi-
nary codes for letters, numbers, and punctuation marks. Because all codes did not contain the
same number of dots and dashes, Morse’s system combined human intelligence with electron-
ics, as decoding was dependent on the hearing and reasoning ability of the person receiving the
message. (Sir Charles Wheatstone and Sir William Cooke allegedly invented the first telegraph
in England, but their contraption required six different wires for a single telegraph line.)
In 1840, Morse secured an American patent for the telegraph, and in 1844 the first tele-
graph line was established between Baltimore and Washington, D.C., with the first message
conveyed over this system being “What hath God wrought!” In 1849, the first slow-speed
telegraph printer was invented, but it was not until 1860 that high-speed (15-bps) printers
were available. In 1850, Western Union Telegraph Company was formed in Rochester, New
York, for the purpose of carrying coded messages from one person to another.
In 1874, Emile Baudot invented a telegraph multiplexer, which allowed signals from
up to six different telegraph machines to be transmitted simultaneously over a single wire.
The telephone was invented in 1875 by Alexander Graham Bell and, unfortunately, very lit-
tle new evolved in telegraph until 1899, when Guglielmo Marconi succeeded in sending ra-
dio (wireless) telegraph messages. Telegraph was the only means of sending information
across large spans of water until 1920, when the first commercial radio stations carrying
voice information were installed.
It is unclear exactly when the first electrical computer was developed. Konrad Zuse, a German engineer, demonstrated a computing machine sometime in the late 1930s; how-
ever, at the time, Hitler was preoccupied trying to conquer the rest of the world, so the proj-
ect fizzled out. Bell Telephone Laboratories is given credit for developing the first special-
purpose computer in 1940 using electromechanical relays for performing logical
operations. However, J. Presper Eckert and John Mauchley at the University of Pennsylva-
nia are given credit by some for beginning modern-day computing when they developed the
ENIAC computer on February 14, 1946.


In 1949, the U.S. National Bureau of Standards developed the first all-electronic
diode-based computer capable of executing stored programs. The U.S. Census Bureau in-
stalled the machine, which is considered the first commercially produced American com-
puter. In the 1950s, computers used punch cards for inputting information, printers for
outputting information, and magnetic tape reels for permanently storing information.
These early computers could process only one job at a time using a technique called batch
processing.
The first general-purpose computer was an automatic sequence-controlled calculator
developed jointly by Harvard University and International Business Machines (IBM) Cor-
poration. The UNIVAC computer, built in 1951 by Remington Rand Corporation, was the
first mass-produced electronic computer.
In the 1960s, batch-processing systems were replaced by on-line processing systems
with terminals connected directly to the computer through serial or parallel communica-
tions lines. The 1970s introduced microprocessor-controlled microcomputers, and by the
1980s personal computers became an essential item in the home and workplace. Since then,
the number of mainframe computers, small business computers, personal computers, and
computer terminals has increased exponentially, creating a situation where more and more
people have the need (or at least think they have the need) to exchange digital information
with each other. Consequently, the need for data communications circuits, networks, and
systems has also increased exponentially.
Soon after the invention of the telephone, the American Telephone and Telegraph
Company (AT&T) emerged, providing both long-distance and local telephone service
and data communications service throughout the United States. The vast AT&T system
was referred to by some as the “Bell System” and by others as “Ma Bell.” During this
time, Western Union Corporation provided telegraph service. Until 1968, the AT&T op-
erating tariff allowed only equipment furnished by AT&T to be connected to AT&T
lines. In 1968, a landmark Supreme Court decision, the Carterfone decision, allowed
non-Bell companies to interconnect to the vast AT&T communications network. This
decision started the interconnect industry, which has led to competitive data communi-
cations offerings by a large number of independent companies. In 1983, as a direct re-
sult of an antitrust suit filed by the federal government, AT&T agreed in a court settle-
ment to divest itself of operating companies that provide basic local telephone service
to the various geographic regions of the United States. Since the divestiture, the com-
plexity of the public telephone system in the United States has grown even more in-
volved and complicated.
Recent developments in data communications networking, such as the Internet, in-
tranets, and the World Wide Web (WWW), have created a virtual explosion in the data com-
munications industry. A seemingly infinite number of people, from homemaker to chief ex-
ecutive officer, now feel a need to communicate over a finite number of facilities. Thus, the
demand for higher-capacity and higher-speed data communications systems is increasing
daily with no end in sight.
The Internet is a public data communications network used by millions of people all
over the world to exchange business and personal information. The Internet began to evolve
in 1969 at the Advanced Research Projects Agency (ARPA). ARPANET was formed in the
late 1970s to connect sites around the United States. From the mid-1980s to April 30, 1995,
the National Science Foundation (NSF) funded a high-speed backbone called NSFNET.
Intranets are private data communications networks used by many companies to ex-
change information among employees and resources. Intranets normally are used for secu-
rity reasons or to satisfy specific connectivity requirements. Company intranets are gener-
ally connected to the public Internet through a firewall, which converts the intranet
addressing system to the public Internet addressing system and provides security function-
ality by filtering incoming and outgoing traffic based on addressing and protocols.


The World Wide Web (WWW) is a server-based application that allows subscribers to
access the services offered by the Web. Browsers, such as Netscape Communicator and Mi-
crosoft Internet Explorer, are commonly used for accessing data over the WWW.

3 DATA COMMUNICATIONS NETWORK ARCHITECTURE,


PROTOCOLS, AND STANDARDS

3-1 Data Communications Network Architecture


A data communications network is any system of computers, computer terminals, or com-
puter peripheral equipment used to transmit and/or receive information between two or
more locations. Network architectures outline the products and services necessary for the
individual components within a data communications network to operate together.
In essence, network architecture is a set of equipment, transmission media, and proce-
dures that ensures that a specific sequence of events occurs in a network in the proper order
to produce the intended results. Network architecture must include sufficient information to
allow a program or a piece of hardware to perform its intended function. The primary goal of
network architecture is to give the users of the network the tools necessary for setting up the
network and performing data flow control. A network architecture outlines the way in which
a data communications network is arranged or structured and generally includes the concept
of levels or layers of functional responsibility within the architecture. The functional respon-
sibilities include electrical specifications, hardware arrangements, and software procedures.
Networks and network protocols fall into three general classifications: current,
legacy, and legendary. Current networks include the most modern and sophisticated net-
works and protocols available. If a network or protocol becomes a legacy, no one really
wants to use it, but for some reason it just will not go away. When an antiquated network or
protocol finally disappears, it becomes legendary.
In general terms, computer networks can be classified in two different ways: broadcast
and point to point. With broadcast networks, all stations and devices on the network share a
single communications channel. Data are propagated through the network in relatively short
messages sometimes called frames, blocks, or packets. Many or all subscribers of the net-
work receive transmitted messages, and each message contains an address that identifies
specifically which subscriber (or subscribers) is intended to receive the message. When mes-
sages are intended for all subscribers on the network, it is called broadcasting, and when
messages are intended for a specific group of subscribers, it is called multicasting.
Point-to-point networks have only two stations. Therefore, no addresses are needed.
All transmissions from one station are intended for and received by the other station. With
point-to-point networks, data are often transmitted in long, continuous messages, some-
times requiring several hours to send.
In more specific terms, point-to-point and broadcast networks can be subdivided into
many categories in which one type of network is often included as a subnetwork of another.

3-2 Data Communications Protocols


Computer networks communicate using protocols, which define the procedures that the sys-
tems involved in the communications process will use. Numerous protocols are used today to
provide networking capabilities, such as how much data can be sent, how it will be sent, how it
will be addressed, and what procedure will be used to ensure that there are no undetected errors.
Protocols are arrangements between people or processes. In essence, a protocol is a
set of customs, rules, or regulations dealing with formality or precedence, such as diplomatic
or military protocol. Each functional layer of a network is responsible for providing a spe-
cific service to the data being transported through the network by providing a set of rules,
called protocols, that perform a specific function (or functions) within the network. Data
communications protocols are sets of rules governing the orderly exchange of data within


the network or a portion of the network, whereas network architecture is a set of layers and
protocols that govern the operation of the network. The list of protocols used by a system is
called a protocol stack, which generally includes only one protocol per layer. Layered net-
work architectures consist of two or more independent levels. Each level has a specific set
of responsibilities and functions, including data transfer, flow control, data segmentation and
reassembly, sequence control, error detection and correction, and notification.

3-2-1 Connection-oriented and connectionless protocols. Protocols can be generally classified as either connection oriented or connectionless. With a connection-ori-
ented protocol, a logical connection is established between the endpoints (e.g., a virtual cir-
cuit) prior to the transmission of data. Connection-oriented protocols operate in a manner
similar to making a standard telephone call where there is a sequence of actions and ac-
knowledgments, such as setting up the call, establishing the connection, and then discon-
necting. The actions and acknowledgments include dial tone, Touch-Tone signaling, ring-
ing and ring-back signals, and busy signals.
Connection-oriented protocols are designed to provide a high degree of reliability for
data moving through the network. This is accomplished by using a rigid set of procedures
for establishing the connection, transferring the data, acknowledging the data, and then
clearing the connection. In a connection-oriented system, each packet of data is assigned a
unique sequence number and an associated acknowledgement number to track the data as
they travel through a network. If data are lost or damaged, the destination station requests
that they be re-sent. A connection-oriented protocol is depicted in Figure 1a. Characteris-
tics of connection-oriented protocols include the following:
1. A connection process called a handshake occurs between two stations before any
data are actually transmitted. Connections are sometimes referred to as sessions,
virtual circuits, or logical connections.
2. Most connection-oriented protocols require some means of acknowledging the
data as they are being transmitted. Protocols that use acknowledgment procedures
provide a high level of network reliability.
3. Connection-oriented protocols often provide some means of error control (i.e., er-
ror detection and error correction). Whenever data are found to be in error, the re-
ceiving station requests a retransmission.
4. When a connection is no longer needed, a specific handshake drops the connection.
Connectionless protocols are protocols where data are exchanged in an unplanned
fashion without prior coordination between endpoints (e.g., a datagram). Connectionless
protocols do not provide the same high degree of reliability as connection-oriented proto-
cols; however, connectionless protocols offer a significant advantage in transmission speed.
Connectionless protocols operate in a manner similar to the U.S. Postal Service, where in-
formation is formatted, placed in an envelope with source and destination addresses, and
then mailed. You can only hope the letter arrives at its destination. A connectionless proto-
col is depicted in Figure 1b. Characteristics of connectionless protocols are as follows:
1. Connectionless protocols send data with a source and destination address without
a handshake to ensure that the destination is ready to receive the data.
2. Connectionless protocols usually do not support error control or acknowledgment
procedures, making them a relatively unreliable method of data transmission.
3. Connectionless protocols are used because they are often more efficient, as the
data being transmitted usually do not justify the extra overhead required by
connection-oriented protocols.
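The contrast can also be sketched in program code. The short Python example below (an illustration added here, not part of any protocol specification) uses the standard socket module to compare a connection-oriented transfer, for which TCP is the usual example, with a connectionless transfer using UDP. The host name and port number are placeholders invented for the example.

import socket

def connection_oriented_send(host: str, port: int, payload: bytes) -> None:
    # Connection oriented (TCP): a handshake establishes a logical connection
    # before any data move, and the protocol sequences and acknowledges the data.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)

def connectionless_send(host: str, port: int, payload: bytes) -> None:
    # Connectionless (UDP): the datagram is simply addressed and sent; there is
    # no handshake, no acknowledgment, and no guarantee of delivery.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# connection_oriented_send("host.example", 5000, b"hello")  # requires a listening TCP server
# connectionless_send("host.example", 5000, b"hello")       # sent whether or not anyone is listening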


FIGURE 1 Network protocols: (a) connection-oriented; (b) connectionless

3-2-2 Syntax and semantics. Protocols include the concepts of syntax and se-
mantics. Syntax refers to the structure or format of the data within the message, which in-
cludes the sequence in which the data are sent. For example, the first byte of a message
might be the address of the source and the second byte the address of the destination. Se-
mantics refers to the meaning of each section of data. For example, does a destination ad-
dress identify only the location of the final destination, or does it also identify the route the
data takes between the sending and receiving locations?
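The hypothetical two-byte layout just described can be expressed in a few lines of Python; the field positions (the syntax) and their interpretation (the semantics) are invented purely for this example.

def parse_message(frame: bytes) -> dict:
    # Syntax: byte 0 is the source address and byte 1 is the destination address.
    source = frame[0]
    destination = frame[1]
    # Semantics: here the destination byte identifies only the final destination,
    # not the route the data will take through the network.
    return {"source": source, "destination": destination, "payload": frame[2:]}

print(parse_message(bytes([10, 27]) + b"hello"))
# {'source': 10, 'destination': 27, 'payload': b'hello'}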

3-3 Data Communications Standards


During the past several decades, the data communications industry has grown at an astro-
nomical rate. Consequently, the need to provide communications between dissimilar com-
puter equipment and systems has also increased. A major issue facing the data communi-
cations industry today is worldwide compatibility. Major areas of interest are software and
programming language, electrical and cable interface, transmission media, communica-
tions signal, and format compatibility. Thus, to ensure an orderly transfer of information, it
has been necessary to establish standard means of governing the physical, electrical, and
procedural arrangements of a data communications system.
A standard is an object or procedure considered by an authority or by general consent
as a basis of comparison. Standards are authoritative principles or rules that imply a model
or pattern for guidance by comparison. Data communications standards are guidelines that


have been generally accepted by the data communications industry. The guidelines outline
procedures and equipment configurations that help ensure an orderly transfer of informa-
tion between two or more pieces of data communications equipment or two or more data
communications networks. Data communications standards are not laws, however—they
are simply suggested ways of implementing procedures and accomplishing results. If
everyone complies with the standards, everyone’s equipment, procedures, and processes
will be compatible with everyone else’s, and there will be little difficulty communicating
information through the system. Today, most companies make their products to comply
with standards.
There are two basic types of standards: proprietary (closed) system and open sys-
tem. Proprietary standards are generally manufactured and controlled by one company.
Other companies are not allowed to manufacture equipment or write software using this
standard. An example of a proprietary standard is Apple Macintosh computers. Advan-
tages of proprietary standards are tighter control, easier consensus, and a monopoly.
Disadvantages include lack of choice for the customers, higher financial investment,
overpricing, and reduced customer protection against the manufacturer going out of
business.
With open system standards, any company can produce compatible equipment or
software; however, often a royalty must be paid to the original company. An example of an
open system standard is IBM’s personal computer. Advantages of open system standards
are customer choice, compatibility between vendors, and competition by smaller compa-
nies. Disadvantages include less product control and increased difficulty acquiring agree-
ment between vendors for changes or updates. In addition, standard items are not always as
compatible as we would like them to be.

4 STANDARDS ORGANIZATIONS FOR DATA COMMUNICATIONS

A consortium of organizations, governments, manufacturers, and users meets on a regular basis to ensure an orderly flow of information within data communications networks and sys-
tems by establishing guidelines and standards. The intent is that all data communications
equipment manufacturers and users comply with these standards. Standards organizations
generate, control, and administer standards. Often, competing companies will form a joint
committee to create a compromised standard that is acceptable to everyone. The most promi-
nent organizations relied on in North America to publish standards and make recommenda-
tions for the data, telecommunications, and networking industries are shown in Figure 2.

4-1 International Standards Organization (ISO)


Created in 1946, the International Standards Organization (ISO) is the international or-
ganization for standardization on a wide range of subjects. The ISO is a voluntary, nontreaty
organization whose membership is comprised mainly of members from the standards com-
mittees of various governments throughout the world. The ISO creates the sets of rules and
standards for graphics and document exchange and provides models for equipment and sys-
tem compatibility, quality enhancement, improved productivity, and reduced costs. The
ISO is responsible for endorsing and coordinating the work of the other standards organi-
zations. The member body of the ISO from the United States is the American National Stan-
dards Institute (ANSI).

4-2 International Telecommunications Union—Telecommunications Sector

The International Telecommunications Union—Telecommunications Sector (ITU-T), formerly
the Comité Consultatif Internationale de Télégraphie et Téléphonie (CCITT), is one of four per-
manent parts of the International Telecommunications Union based in Geneva, Switzerland.


FIGURE 2 Standards organizations for data and network communications

Membership in the ITU-T consists of government authorities and representatives from many
countries. The ITU-T is now the standards organization for the United Nations and develops
the recommended sets of rules and standards for telephone and data communications. The ITU-T
has developed three sets of specifications: the V series for modem interfacing and data trans-
mission over telephone lines; the X series for data transmission over public digital networks,
e-mail, and directory services; and the I and Q series for Integrated Services Digital Network
(ISDN) and its extension Broadband ISDN (sometimes called the Information Superhighway).
The ITU-T is separated into 14 study groups that prepare recommendations on the
following topics:

Network and service operation
Tariff and accounting principles
Telecommunications management network and network maintenance
Protection against electromagnetic environment effects
Outside plant
Data networks and open system communications
Characteristics of telematic systems
Television and sound transmission
Language and general software aspects for telecommunications systems
Signaling requirements and protocols
End-to-end transmission performance of networks and terminals
General network aspects
Transport networks, systems, and equipment
Multimedia services and systems

4-3 Institute of Electrical and Electronics Engineers


The Institute of Electrical and Electronics Engineers (IEEE) is an international professional
organization founded in the United States and is comprised of electronics, computer, and
communications engineers. The IEEE is currently the world’s largest professional society


with over 200,000 members. The IEEE works closely with ANSI to develop communica-
tions and information processing standards with the underlying goal of advancing theory,
creativity, and product quality in any field associated with electrical engineering.

4-4 American National Standards Institute


The American National Standards Institute (ANSI) is the official standards agency for the
United States and is the U.S. voting representative for the ISO. However, ANSI is a completely
private, nonprofit organization comprised of equipment manufacturers and users of data pro-
cessing equipment and services. Although ANSI has no affiliations with the federal govern-
ment of the United States, it serves as the national coordinating institution for voluntary stan-
dardization in the United States. ANSI membership is comprised of people from professional
societies, industry associations, governmental and regulatory bodies, and consumer groups.

4-5 Electronics Industry Association


The Electronic Industries Association (EIA) is a nonprofit U.S. trade association that es-
tablishes and recommends industrial standards. EIA activities include standards develop-
ment, increasing public awareness, and lobbying. The EIA is responsible for developing the
RS (recommended standard) series of standards for data and telecommunications.

4-6 Telecommunications Industry Association


The Telecommunications Industry Association (TIA) is the leading trade association in the
communications and information technology industry. The TIA facilitates business devel-
opment opportunities and a competitive marketplace through market development, trade
promotion, trade shows, domestic and international advocacy, and standards development.
The TIA represents manufacturers of communications and information technology prod-
ucts and services providers for the global marketplace through its core competencies. The
TIA also facilitates the convergence of new communications networks while working for a
competitive and innovative market environment.

4-7 Internet Architecture Board


In 1957, the Advanced Research Projects Agency (ARPA), the research arm of the Department
of Defense, was created in response to the Soviet Union’s launching of Sputnik. The original
purpose of ARPA was to accelerate the advancement of technologies that could possibly be use-
ful to the U.S. military. When ARPANET was initiated in the late 1960s, ARPA formed a com-
mittee to oversee it. In 1983, the name of the committee was changed to the Internet Activities
Board (IAB). The meaning of the acronym was later changed to the Internet Architecture Board.
Today the IAB is a technical advisory group of the Internet Society with the follow-
ing responsibilities:

1. Oversees the architecture, protocols, and procedures used by the Internet
2. Manages the processes used to create Internet standards and serves as an appeal
board for complaints of improper execution of the standardization processes
3. Is responsible for the administration of the various Internet assigned numbers
4. Acts as representative for Internet Society interests in liaison relationships with
other organizations concerned with standards and other technical and organiza-
tional issues relevant to the worldwide Internet
5. Acts as a source of advice and guidance to the board of trustees and officers of the
Internet Society concerning technical, architectural, procedural, and policy mat-
ters pertaining to the Internet and its enabling technologies

4-8 Internet Engineering Task Force


The Internet Engineering Task Force (IETF) is a large international community of network
designers, operators, vendors, and researchers concerned with the evolution of the Internet
architecture and the smooth operation of the Internet.


4-9 Internet Research Task Force


The Internet Research Task Force (IRTF) promotes research of importance to the evolution
of the future Internet by creating focused, long-term and small research groups working on
topics related to Internet protocols, applications, architecture, and technology.

5 LAYERED NETWORK ARCHITECTURE

The basic concept of layering network responsibilities is that each layer adds value to ser-
vices provided by sets of lower layers. In this way, the highest level is offered the full set of
services needed to run a distributed data application. There are several advantages to using
a layered architecture. A layered architecture facilitates peer-to-peer communications pro-
tocols where a given layer in one system can logically communicate with its corresponding
layer in another system. This allows different computers to communicate at different lev-
els. Figure 3 shows a layered architecture where layer N at the source logically (but not nec-
essarily physically) communicates with layer N at the destination and layer N of any inter-
mediate nodes.
5-1 Protocol Data Unit
When technological advances occur in a layered architecture, it is easier to modify one
layer’s protocol without having to modify all the other layers. Each layer is essentially in-
dependent of every other layer. Therefore, many of the functions found in lower layers have
been removed entirely from software tasks and replaced with hardware. The primary dis-
advantage of layered architectures is the tremendous amount of overhead required. With
layered architectures, communications between two corresponding layers requires a unit of
data called a protocol data unit (PDU). As shown in Figure 4, a PDU can be a header added
at the beginning of a message or a trailer appended to the end of a message. In a layered ar-
chitecture, communications occurs between similar layers; however, data must flow
through the other layers. Data flows downward through the layers in the source system and
upward through the layers in the destination system. In intermediate systems, data flows up-
ward first and then downward. As data passes from one layer into another, headers and trail-
ers are added and removed from the PDU. The process of adding or removing PDU infor-
mation is called encapsulation/decapsulation because it appears as though the PDU from
the upper layer is encapsulated in the PDU from the lower layer during the downward

FIGURE 3 Peer-to-peer data communications


FIGURE 4 Protocol data unit: (a) header; (b) trailer

FIGURE 5 Encapsulation and decapsulation

movement and decapsulated during the upward movement. Encapsulate means to place in
a capsule or other protected environment, and decapsulate means to remove from a capsule
or other protected environment. Figure 5 illustrates the concepts of encapsulation and de-
capsulation.
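A toy Python model of this process is shown below. It assumes, purely for illustration, that three layers each prepend a short text header on the way down and strip it on the way up, mirroring the flow of Figure 5 without modeling any real protocol.

HEADERS = ["H4", "H3", "H2"]        # hypothetical transport, network, and data-link headers

def encapsulate(user_data: str) -> str:
    pdu = user_data
    for header in HEADERS:          # downward movement: each layer adds its own header
        pdu = header + "|" + pdu
    return pdu

def decapsulate(pdu: str) -> str:
    for _ in HEADERS:               # upward movement: each layer removes one header
        _, pdu = pdu.split("|", 1)
    return pdu

frame = encapsulate("user data")    # 'H2|H3|H4|user data'
print(decapsulate(frame))           # 'user data'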
In a layered protocol such as the one shown in Figure 3, layer N receives services from
the layer immediately below it (layer N – 1) and provides services to the layer directly above it
(layer N + 1). Layer N can provide services to more than one entity in layer N + 1 by using a
service access point (SAP) address to define for which entity the service is intended.


Information and network information passes from one layer of a multilayered archi-
tecture to another layer through a layer-to-layer interface. A layer-to-layer interface defines
what information and services the lower layer must provide to the upper layer. A well-de-
fined layer and layer-to-layer interface provide modularity to a network.

6 OPEN SYSTEMS INTERCONNECTION

Open systems interconnection (OSI) is the name for a set of standards for communicating
among computers. The primary purpose of OSI standards is to serve as a structural guide-
line for exchanging information between computers, workstations, and networks. The OSI
is endorsed by both the ISO and ITU-T, which have worked together to establish a set of
ISO standards and ITU-T recommendations that are essentially identical. In 1983, the ISO
and ITU-T (CCITT) adopted a seven-layer communications architecture reference model.
Each layer consists of specific protocols for communicating.
The ISO seven-layer open systems interconnection model is shown in Figure 6. This
hierarchy was developed to facilitate the intercommunications of data processing equip-
ment by separating network responsibilities into seven distinct layers. As with any layered
architecture, overhead information is added to a PDU in the form of headers and trailers. In
fact, if all seven levels of the OSI model are addressed, as little as 15% of the transmitted
message is actually source information, and the rest is overhead. The result of adding head-
ers to each layer is illustrated in Figure 7.
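The effect of this overhead can be illustrated with a rough calculation. The header sizes below are invented solely for the example and do not correspond to any particular protocol suite.

header_bytes = {"H7": 4, "H6": 4, "H5": 4, "H4": 20, "H3": 20, "H2": 18, "H1": 8}
user_data_bytes = 20

total_bytes = user_data_bytes + sum(header_bytes.values())
print(f"{user_data_bytes / total_bytes:.0%} of the transmission is source information")
# roughly 20% with these assumed header and message sizes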

ISO Layer and name        Function
Layer 7  Application      User networking applications and interfacing to the network
Layer 6  Presentation     Encoding language used in transmission
Layer 5  Session          Job management tracking
Layer 4  Transport        Data tracking as it moves through a network
Layer 3  Network          Network addressing and packet transmission on the network
Layer 2  Data link        Frame formatting for transmitting data across a physical communications link
Layer 1  Physical         Transmission method used to propagate bits through a network

FIGURE 6 OSI seven-layer protocol hierarchy


FIGURE 7 OSI seven-layer international protocol hierarchy. H7—applications header, H6—presentation header, H5—session header, H4—transport header, H3—network header, H2—data-link header, H1—physical header

In recent years, the OSI seven-layer model has become more academic than standard,
as the hierarchy does not coincide with the Internet’s four-layer protocol model. However,
the basic functions of the layers are still performed, so the seven-layer model continues to
serve as a reference model when describing network functions.
Levels 4 to 7 address the applications aspects of the network that allow for two host
computers to communicate directly. The three bottom layers are concerned with the actual
mechanics of moving data (at the bit level) from one machine to another. A brief summary
of the services provided by each layer is given here.
1. Physical layer. The physical layer is the lowest level of the OSI hierarchy and is
responsible for the actual propagation of unstructured data bits (1s and 0s) through a transmis-


FIGURE 8 OSI layer 1—physical: (a) computer-to-hub; (b) connectivity devices

sion medium, which includes how bits are represented, the bit rate, and how bit synchroniza-
tion is achieved. The physical layer specifies the type of transmission medium and the trans-
mission mode (simplex, half duplex, or full duplex) and the physical, electrical, functional, and
procedural standards for accessing data communications networks. Definitions such as con-
nections, pin assignments, interface parameters, timing, maximum and minimum voltage lev-
els, and circuit impedances are made at the physical level. Transmission media defined by the
physical layer include metallic cable, optical fiber cable, or wireless radio-wave propagation.
The physical layer for a cable connection is depicted in Figure 8a.
Connectivity devices connect devices on cabled networks. An example of a connec-
tivity device is a hub. A hub is a transparent device that samples the incoming bit stream
and simply repeats it to the other devices connected to the hub. The hub does not examine
the data to determine what the destination is; therefore, it is classified as a layer 1 compo-
nent. Physical layer connectivity for a cabled network is shown in Figure 8b.
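The repeat-to-every-port behavior of a hub can be modeled in a few lines of Python. The class below is a simplified sketch of the description above, not a representation of any real device.

class Hub:
    """Layer 1 device: repeats every received frame to all ports except the sender's."""

    def __init__(self):
        self.ports = []                    # each port holds a callable representing a device

    def attach(self, device) -> int:
        self.ports.append(device)
        return len(self.ports) - 1         # port number assigned to the device

    def repeat(self, frame: bytes, from_port: int) -> None:
        for port, device in enumerate(self.ports):
            if port != from_port:          # the hub never examines the frame contents
                device(frame)

hub = Hub()
port_a = hub.attach(lambda f: print("station A received", f))
hub.attach(lambda f: print("station B received", f))
hub.attach(lambda f: print("station C received", f))
hub.repeat(b"0110...", from_port=port_a)   # stations B and C both receive A's bits
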
The physical layer also includes the carrier system used to propagate the data signals
between points in the network. Carrier systems are simply communications systems that
carry data through a system using either metallic or optical fiber cables or wireless arrange-
ments, such as microwave, satellites, and cellular radio systems. The carrier can use analog
or digital signals that are somehow converted to a different form (encoded or modulated)
by the data and then propagated through the system.
2. Data-link layer. The data-link layer is responsible for providing error-free com-
munications across the physical link connecting primary and secondary stations (nodes)
within a network (sometimes referred to as hop-to-hop delivery). The data-link layer pack-
ages data from the physical layer into groups called blocks, frames, or packets and provides
a means to activate, maintain, and deactivate the data communications link between nodes.
The data-link layer provides the final framing of the information signal, provides synchro-
nization, facilitates the orderly flow of data between nodes, outlines procedures for error
detection and correction, and provides the physical addressing information. A block dia-
gram of a network showing data transferred between two computers (A and E) at the data-
link level is illustrated in Figure 9. Note that the hubs are transparent but that the switch
passes the transmission on to only the hub serving the intended destination.
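The framing idea can be sketched as follows. The frame format, with destination and source addresses followed by the payload and a one-byte checksum trailer for error detection, is invented for illustration and is not a real data-link protocol.

def build_frame(dest: int, src: int, payload: bytes) -> bytes:
    body = bytes([dest, src]) + payload
    checksum = sum(body) % 256             # toy error-detection field, not a true FCS/CRC
    return body + bytes([checksum])

def frame_ok(frame: bytes) -> bool:
    return sum(frame[:-1]) % 256 == frame[-1]

frame = build_frame(dest=0x1B, src=0x0A, payload=b"data")
print(frame_ok(frame))                      # True; a bit error in transit would normally make this False
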
3. Network layer. The network layer provides details that enable data to be routed be-
tween devices in an environment using multiple networks, subnetworks, or both. Network-
ing components that operate at the network layer include routers and their software. The


FIGURE 9 OSI layer 2—data link

FIGURE 10 OSI layer 3—network

network layer determines which network configuration is most appropriate for the function
provided by the network and addresses and routes data within networks by establishing,
maintaining, and terminating connections between them. The network layer provides the
upper layers of the hierarchy independence from the data transmission and switching tech-
nologies used to interconnect systems. It accomplishes this by defining the mechanism in
which messages are broken into smaller data packets and routed from a sending node to a
receiving node within a data communications network. The network layer also typically
provides the source and destination network addresses (logical addresses), subnet informa-
tion, and source and destination node addresses. Figure 10 illustrates the network layer of
the OSI protocol hierarchy. Note that the network is subdivided into subnetworks that are
separated by routers.
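A next-hop lookup of the kind implied here can be sketched with Python's standard ipaddress module; the network prefixes and router names are invented for the example.

import ipaddress

routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): "router A",
    ipaddress.ip_network("10.2.0.0/16"): "router B",
}

def next_hop(destination: str) -> str:
    address = ipaddress.ip_address(destination)
    for network, hop in routing_table.items():
        if address in network:              # match on the destination's logical (network) address
            return hop
    return "default gateway"

print(next_hop("10.2.44.7"))                # router B
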
4. Transport layer. The transport layer controls and ensures the end-to-end integrity
of the data message propagated through the network between two devices, which provides


FIGURE 11 OSI layer 4—transport

FIGURE 12 OSI layer 5—session

for the reliable, transparent transfer of data between two endpoints. Transport layer re-
sponsibilities include message routing, segmenting, error recovery, and two types of basic
services to an upper-layer protocol: connection oriented and connectionless. The trans-
port layer is the highest layer in the OSI hierarchy in terms of communications and may
provide data tracking, connection flow control, sequencing of data, error checking, and ap-
plication addressing and identification. Figure 11 depicts data transmission at the transport
layer.
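Segmenting and sequence-numbered reassembly, two of the responsibilities listed above, can be sketched as follows; the segment size and numbering scheme are arbitrary choices made for the example.

def segment(message: bytes, size: int = 4):
    # Break the message into (sequence number, data) segments.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments) -> bytes:
    # Sequence numbers restore the original order even if segments arrive out of order.
    return b"".join(data for _, data in sorted(segments))

pieces = segment(b"end-to-end message")
print(reassemble(reversed(pieces)) == b"end-to-end message")   # True
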
5. Session layer. The session layer is responsible for network availability (i.e., data stor-
age and processor capacity). Session layer protocols provide the logical connection entities at
the application layer. These applications include file transfer protocols and sending e-mail.
Session responsibilities include network log-on and log-off procedures and user authentica-
tion. A session is a temporary condition that exists when data are actually in the process of be-
ing transferred and does not include procedures such as call establishment, setup, or discon-
nect. The session layer determines the type of dialogue available (i.e., simplex, half duplex,
or full duplex). Session layer characteristics include virtual connections between applications
entities, synchronization of data flow for recovery purposes, creation of dialogue units and ac-
tivity units, connection parameter negotiation, and partitioning services into functional
groups. Figure 12 illustrates the establishment of a session on a data network.
6. Presentation layer. The presentation layer provides independence to the applica-
tion processes by addressing any code or syntax conversion necessary to present the data to
the network in a common communications format. The presentation layer specifies how
end-user applications should format the data. This layer provides for translation between
local representations of data and the representation of data that will be used for transfer be-
tween end users. The results of encryption, data compression, and virtual terminals are ex-
amples of the translation service.


Type    Options
Images  JPEG, PICT, GIF
Video   MPEG, MIDI
Data    ASCII, EBCDIC

FIGURE 13 OSI layer 6—presentation

Networking applications: file transfer, e-mail, printing; PC applications: database, word processing, spreadsheets

FIGURE 14 OSI layer 7—applications

The presentation layer translates between different data formats and protocols.
Presentation functions include data file formatting, encoding, encryption and decryption
of data messages, dialogue procedures, data compression algorithms, synchronization,
interruption, and termination. The presentation layer performs code and character set
translation (including ASCII and EBCDIC) and formatting information and determines
the display mechanism for messages. Figure 13 shows an illustration of the presentation
layer.
7. Application layer. The application layer is the highest layer in the hierarchy
and is analogous to the general manager of the network by providing access to the
OSI environment. The applications layer provides distributed information services
and controls the sequence of activities within an application and also the sequence of
events between the computer application and the user of another application. The ap-
plication layer (shown in Figure 14) communicates directly with the user’s application
program.
User application processes require application layer service elements to access the net-
working environment. There are two types of service elements: CASEs (common applica-
tion service elements), which are generally useful to a variety of application processes, and
SASEs (specific application service elements), which generally satisfy particular needs of
application processes. CASE examples include association control that establishes, main-
tains, and terminates connections with a peer application entity and commitment, concur-
rence, and recovery that ensure the integrity of distributed transactions. SASE examples in-
volve the TCP/IP protocol stack and include FTP (file transfer protocol), SNMP (simple
network management protocol), Telnet (virtual terminal protocol), and SMTP (simple mail
transfer protocol).


7 DATA COMMUNICATIONS CIRCUITS

The underlying purpose of a data communications circuit is to provide a transmission
path between locations and to transfer digital information from one station to another us-
ing electronic circuits. A station is simply an endpoint where subscribers gain access to
the circuit. A station is sometimes called a node, which is the location of computers,
computer terminals, workstations, and other digital computing equipment. There are al-
most as many types of data communications circuits as there are types of data commu-
nications equipment.
Data communications circuits utilize electronic communications equipment and fa-
cilities to interconnect digital computer equipment. Communications facilities are physical
means of interconnecting stations within a data communications system and can include
virtually any type of physical transmission media or wireless radio system in existence.
Communications facilities are provided to data communications users through public tele-
phone networks (PTN), public data networks (PDN), and a multitude of private data com-
munications systems.
Figure 15 shows a simplified block diagram of a two-station data communications
circuit. The fundamental components of the circuit are source of digital information, trans-
mitter, transmission medium, receiver, and destination for the digital information. Although
the figure shows transmission in only one direction, bidirectional transmission is possible
by providing a duplicate set of circuit components in the opposite direction.

Source. The information source generates data and could be a mainframe computer,
personal computer, workstation, or virtually any other piece of digital equipment. The
source equipment provides a means for humans to enter data into the system.
Transmitter. Source data is seldom in a form suitable to propagate through the trans-
mission medium. For example, digital signals (pulses) cannot be propagated through
a wireless radio system without being converted to analog first. The transmitter en-
codes the source information and converts it to a different form, allowing it to be more
efficiently propagated through the transmission medium. In essence, the transmitter
acts as an interface between the source equipment and the transmission medium.
Transmission medium. The transmission medium carries the encoded signals from
the transmitter to the receiver. There are many different types of transmission media,
such as free-space radio transmission (including all forms of wireless transmission,
such as terrestrial microwave, satellite radio, and cellular telephone) and physical fa-
cilities, such as metallic and optical fiber cables. Very often, the transmission path is
comprised of several different types of transmission facilities.
Receiver. The receiver converts the encoded signals received from the transmission
medium back to their original form (i.e., decodes them) or whatever form is used in
the destination equipment. The receiver acts as an interface between the transmission
medium and the destination equipment.
Destination. Like the source, the destination could be a mainframe computer, per-
sonal computer, workstation, or virtually any other piece of digital equipment.

FIGURE 15 Simplified block diagram of a two-station data communications circuit


FIGURE 16 Data transmission: (a) parallel; (b) serial

8 SERIAL AND PARALLEL DATA TRANSMISSION

Binary information can be transmitted either in parallel or serially. Figure 16a shows how
the binary code 0110 is transmitted from station A to station B in parallel. As the figure
shows, each bit position (A0 to A3) has its own transmission line. Consequently, all four bits
can be transmitted simultaneously during the time of a single clock pulse (TC). This type of
transmission is called parallel by bit or serial by character.
Figure 16b shows the same binary code transmitted serially. As the figure shows,
there is a single transmission line and, thus, only one bit can be transmitted at a time. Con-
sequently, it requires four clock pulses (4TC) to transmit the entire four-bit code. This type
of transmission is called serial by bit.
Obviously, the principal trade-off between parallel and serial data transmission is
speed versus simplicity. Data transmission can be accomplished much more rapidly using
parallel transmission; however, parallel transmission requires more data lines. As a general
rule, parallel transmission is used for short-distance data communications and within a
computer, and serial transmission is used for long-distance data communications.
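The timing trade-off can be shown with a small Python sketch: the same four-bit code occupies one clock period on four parallel lines but four clock periods on a single serial line. Sending the most significant bit first is an assumption made for the example.

code = [0, 1, 1, 0]                        # the four-bit code of Figure 16, MSB first

# Parallel by bit: all four lines carry their bit during one clock period (1 x Tc).
print("Tc 0: lines A3..A0 =", code)

# Serial by bit: a single line carries one bit per clock period (4 x Tc in total).
for tick, bit in enumerate(code):
    print(f"Tc {tick}: line =", bit)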

9 DATA COMMUNICATIONS CIRCUIT ARRANGEMENTS

Data communications circuits can be configured in a multitude of arrangements depending
on the specifics of the circuit, such as how many stations are on the circuit, type of transmis-
sion facility, distance between stations, and how many users are at each station. A data com-
munications circuit can be described in terms of circuit configuration and transmission mode.

9-1 Circuit Configurations


Data communications networks can be generally categorized as either two point or multi-
point. A two-point configuration involves only two locations or stations, whereas a multipoint
configuration involves three or more stations. Regardless of the configuration, each station
can have one or more computers, computer terminals, or workstations. A two-point circuit
involves the transfer of digital information between a mainframe computer and a personal
computer, two mainframe computers, two personal computers, or two data communications
networks. A multipoint network is generally used to interconnect a single mainframe com-
puter (host) to many personal computers or to interconnect many personal computers.

9-2 Transmission Modes


Essentially, there are four modes of transmission for data communications circuits: simplex,
half duplex, full duplex, and full/full duplex.


9-2-1 Simplex. In the simplex (SX) mode, data transmission is unidirectional; in-
formation can be sent in only one direction. Simplex lines are also called receive-only,
transmit-only, or one-way-only lines. Commercial radio broadcasting is an example of sim-
plex transmission, as information is propagated in only one direction—from the broadcast-
ing station to the listener.

9-2-2 Half duplex. In the half-duplex (HDX) mode, data transmission is possible
in both directions but not at the same time. Half-duplex communications lines are also
called two-way-alternate or either-way lines. Citizens band (CB) radio is an example of
half-duplex transmission because to send a message, the push-to-talk (PTT) switch must be
depressed, which turns on the transmitter and shuts off the receiver. To receive a message,
the PTT switch must be off, which shuts off the transmitter and turns on the receiver.

9-2-3 Full duplex. In the full-duplex (FDX) mode, transmissions are possible in
both directions simultaneously, but they must be between the same two stations. Full-du-
plex lines are also called two-way simultaneous, duplex, or both-way lines. A local tele-
phone call is an example of full-duplex transmission. Although it is unlikely that both par-
ties would be talking at the same time, they could if they wanted to.

9-2-4 Full/full duplex. In the full/full duplex (F/FDX) mode, transmission is pos-
sible in both directions at the same time but not between the same two stations (i.e., one sta-
tion is transmitting to a second station and receiving from a third station at the same time).
Full/full duplex is possible only on multipoint circuits. The U.S. postal system is an exam-
ple of full/full duplex transmission because a person can send a letter to one address and re-
ceive a letter from another address at the same time.

10 DATA COMMUNICATIONS NETWORKS


Any group of computers connected together can be called a data communications network,
and the process of sharing resources between computers over a data communications net-
work is called networking. In its simplest form, networking is two or more computers con-
nected together through a common transmission medium for the purpose of sharing data. The
concept of networking began when someone determined that there was a need to share soft-
ware and data resources and that there was a better way to do it than storing data on a disk and
literally running from one computer to another. By the way, this manual technique of mov-
ing data on disks is sometimes referred to as sneaker net. The most important considerations of
a data communications network are performance, transmission rate, reliability, and security.
Applications running on modern computer networks vary greatly from company to
company. A network must be designed with the intended application in mind. A general cat-
egorization of networking applications is listed in Table 1. The specific application affects
how well a network will perform. Each network has a finite capacity. Therefore, network
designers and engineers must be aware of the type and frequency of information traffic on
the network.

Table 1 Networking Applications

Application                     Examples
Standard office applications    E-mail, file transfers, and printing
High-end office applications    Video imaging, computer-aided drafting, computer-aided design, and software development
Manufacturing automation        Process and numerical control
Mainframe connectivity          Personal computers, workstations, and terminal support
Multimedia applications         Live interactive video


End stations, applications, and networks (local, metropolitan, wide, and global area networks)

FIGURE 17 Basic network components

There are many factors involved when designing a computer network, including the
following:
1. Network goals as defined by organizational management
2. Network security
3. Network uptime requirements
4. Network response-time requirements
5. Network and resource costs
The primary balancing act in computer networking is speed versus reliability. Too often,
network performance is severely degraded by using error checking procedures, data en-
cryption, and handshaking (acknowledgments). However, these features are often required
and are incorporated into protocols.
Some networking protocols are very reliable but require a significant amount of over-
head to provide the desired high level of service. These protocols are examples of connection-
oriented protocols. Other protocols are designed with speed as the primary parameter and,
therefore, forgo some of the reliability features of the connection-oriented protocols. These
quick protocols are examples of connectionless protocols.

10-1 Network Components, Functions, and Features


Computer networks are like snowflakes—no two are the same. The basic components of
computer networks are shown in Figure 17. All computer networks include some combi-
nation of the following: end stations, applications, and a network that will support the data
traffic between the end stations. A computer network designed three years ago to support
the basic networking applications of the time may have a difficult time supporting recently


FIGURE 18 File server operation

developed high-end applications, such as medical imaging and live video teleconferencing.
Network designers, administrators, and managers must understand and monitor the most
recent types and frequency of networked applications.
Computer networks all share common devices, functions, and features, including
servers, clients, transmission media, shared data, shared printers and other peripherals,
hardware and software resources, network interface card (NIC), local operating system
(LOS), and the network operating system (NOS).

10-1-1 Servers. Servers are computers that hold shared files, programs, and the net-
work operating system. Servers provide access to network resources to all the users of the
network. There are many different kinds of servers, and one server can provide several func-
tions. For example, there are file servers, print servers, mail servers, communications servers,
database servers, directory/security servers, fax servers, and Web servers, to name a few.
Figure 18 shows the operation of a file server. A user (client) requests a file from the
file server. The file server sends a copy of the file to the requesting user. File servers allow
users to access and manipulate disk resources stored on other computers. An example of a
file server application is when two or more users edit a shared spreadsheet file that is stored
on a server. File servers have the following characteristics:

1. File servers are loaded with files, accounts, and a record of the access rights of
users or groups of users on the network.
2. The server provides a shareable virtual disk to the users (clients).
3. File mapping schemes are implemented to provide the virtualness of the files (i.e.,
the files are made to look like they are on the user’s computer).
4. Security systems are installed and configured to provide the server with the re-
quired security and protection for the files.
5. Redirector or shell software programs located on the users’ computers transpar-
ently activate the client’s software on the file server.

10-1-2 Clients. Clients are computers that access and use the network and shared
network resources. Client computers are basically the customers (users) of the network, as
they request and receive services from the servers.

10-1-3 Transmission media. Transmission media are the facilities used to inter-
connect computers in a network, such as twisted-pair wire, coaxial cable, and optical fiber
cable. Transmission media are sometimes called channels, links, or lines.

10-1-4 Shared data. Shared data are data that file servers provide to clients, such
as data files, printer access programs, and e-mail.

10-1-5 Shared printers and other peripherals. Shared printers and peripherals are
hardware resources provided to the users of the network by servers. Resources provided
include data files, printers, software, or any other items used by clients on the network.


MAC (media access control) address: 04 60 8C 49 F2 3B (six bytes – 12 hex characters – 48 bits)

FIGURE 19 Network interface card (NIC)

10-1-6 Network interface card. Each computer in a network has a special expan-
sion card called a network interface card (NIC). The NIC prepares (formats) and sends data,
receives data, and controls data flow between the computer and the network. On the trans-
mit side, the NIC passes frames of data on to the physical layer, which transmits the data to
the physical link. On the receive side, the NIC processes bits received from the physical
layer and processes the message based on its contents. A network interface card is shown
in Figure 19. Characteristics of NICs include the following:

1. The NIC constructs, transmits, receives, and processes data to and from a PC and
the connected network.
2. Each device connected to a network must have a NIC installed.
3. A NIC is generally installed in a computer as a daughterboard, although some com-
puter manufacturers incorporate the NIC into the motherboard during manufacturing.
4. Each NIC has a unique six-byte media access control (MAC) address, which is
typically permanently burned into the NIC when it is manufactured. The MAC ad-
dress is sometimes called the physical, hardware, node, Ethernet, or LAN address (see the sketch following this list).
5. The NIC must be compatible with the network (i.e., Ethernet—10baseT or token
ring) to operate properly.
6. NICs manufactured by different vendors vary in speed, complexity, manageabil-
ity, and cost.
7. The NIC requires drivers to operate on the network.
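The sketch below (referred to in item 4 of the list above) unpacks the 48-bit MAC address shown in Figure 19; the address value is taken from the figure and is not associated with any real vendor.

mac = bytes([0x04, 0x60, 0x8C, 0x49, 0xF2, 0x3B])     # six bytes

print(len(mac) * 8, "bits")                            # 48 bits
print("-".join(f"{octet:02X}" for octet in mac))       # 04-60-8C-49-F2-3B (12 hex characters)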

10-1-7 Local operating system. A local operating system (LOS) allows per-
sonal computers to access files, print to a local printer, and have and use one or more
disk and CD drives that are located on the computer. Examples of LOSs are MS-DOS,
PC-DOS, Unix, Macintosh, OS/2, Windows 3.11, Windows 95, Windows 98, Windows
2000, and Linux. Figure 20 illustrates the relationship between a personal computer and
its LOS.

10-1-8 Network operating system. The network operating system (NOS) is a pro-
gram that runs on computers and servers that allows the computers to communicate over a net-
work. The NOS provides services to clients such as log-in features, password authentication,


FIGURE 20 Local operating system

FIGURE 21 Network operating system (NOS)

printer access, network administration functions, and data file sharing. Some of the more pop-
ular network operating systems are Unix, Novell NetWare, AppleShare, Macintosh System 7,
IBM LAN Server, Compaq Open VMS, and Microsoft Windows NT Server. The NOS is soft-
ware that makes communications over a network more manageable. The relationship between
clients, servers, and the NOS is shown in Figure 21, and the layout of a local network operat-
ing system is depicted in Figure 22. Characteristics of NOSs include the following:
1. A NOS allows users of a network to interface with the network transparently.
2. A NOS commonly offers the following services: file service, print service, mail ser-
vice, communications service, database service, and directory and security services.
3. The NOS determines whether data are intended for the user’s computer or whether
the data needs to be redirected out onto the network.
4. The NOS implements client software for the user, which allows them to access
servers on the network.

10-2 Network Models


Computer networks can be represented with two basic network models: peer-to-peer
client/server and dedicated client/server. The client/server method specifies the way in which
two computers can communicate with software over a network. Although clients and servers
are generally shown as separate units, they are often active in a single computer but not at the
same time. With the client/server concept, a computer acting as a client initiates a software re-
quest from another computer acting as a server. The server computer responds and attempts


FIGURE 22 Network layout using a network operating system (NOS)

FIGURE 23 Client/server concept

to satisfy the request from the client. The server computer might then act as a client and re-
quest services from another computer. The client/server concept is illustrated in Figure 23.

10-2-1 Peer-to-peer client/server network. A peer-to-peer client/server network
is one in which all computers share their resources, such as hard drives, printers, and so on,
with all the other computers on the network. Therefore, the peer-to-peer operating sys-
tem divides its time between servicing the computer on which it is loaded and servicing


FIGURE 24 Peer-to-peer client/server network

requests from other computers. In a peer-to-peer network (sometimes called a workgroup),
there are no dedicated servers or hierarchy among the computers.
Figure 24 shows a peer-to-peer client/server network with four clients/servers
(users) connected together through a hub. All computers are equal, hence the name
peer. Each computer in the network can function as a client and/or a server, and no sin-
gle computer holds the network operating system or shared files. Also, no one com-
puter is assigned network administrative tasks. The users at each computer determine
which data on their computer are shared with the other computers on the network. In-
dividual users are also responsible for installing and upgrading the software on their
computer.
Because there is no central controlling computer, a peer-to-peer network is an appro-
priate choice when there are fewer than 10 users on the network, when all computers are lo-
cated in the same general area, when security is not an issue, or when there is limited growth
projected for the network in the immediate future. Peer-to-peer computer networks should
be small for the following reasons:
1. When operating in the server role, the operating system is not optimized to effi-
ciently handle multiple simultaneous requests.
2. The end user’s performance as a client would be degraded.
3. Administrative issues such as security, data backups, and data ownership may be
compromised in a large peer-to-peer network.

10-2-2 Dedicated client/server network. In a dedicated client/server network, one
computer is designated the server, and the rest of the computers are clients. As the network
grows, additional computers can be designated servers. Generally, the designated servers
function only as servers and are not used as a client or workstation. The servers store all the
network’s shared files and applications programs, such as word processor documents, com-
pilers, database applications, spreadsheets, and the network operating system. Client com-
puters can access the servers and have shared files transferred to them over the transmission
medium.
Figure 25 shows a dedicated client/server-based network with three servers and three
clients (users). Each client can access the resources on any of the servers and also the re-
sources on other client computers. The dedicated client/server-based network is probably


FIGURE 25 Dedicated client/server network

the most commonly used computer networking model. There can be a separate dedicated
server for each function (i.e., file server, print server, mail server, etc.) or one single general-
purpose server responsible for all services.
In some client/server networks, client computers submit jobs to one of the servers.
The server runs the software and completes the job and then sends the results back to the
client computer. In this type of client/server network, less information propagates through
the network than with the file server configuration because only data and not applications
programs are transferred between computers.
In general, the dedicated client/server model is preferable to the peer-to-peer client/server
model for general-purpose data networks. The peer-to-peer client/server model is usu-
ally preferable for special purposes, such as a small group of users sharing resources.

10-3 Network Topologies


Network topology describes the layout or appearance of a network—that is, how the com-
puters, cables, and other components within a data communications network are intercon-
nected, both physically and logically. The physical topology describes how the network is
actually laid out, and the logical topology describes how data actually flow through the
network.
In a data communications network, two or more stations connect to a link, and one or
more links form a topology. Topology is a major consideration for capacity, cost, and reli-
ability when designing a data communications network. The most basic topologies are
point to point and multipoint. A point-to-point topology is used in data communications
networks that transfer high-speed digital information between only two stations. Very of-
ten, point-to-point data circuits involve communications between a mainframe computer
and another mainframe computer or some other type of high-capacity digital device. A two-
point circuit is shown in Figure 26a.
A multipoint topology connects three or more stations through a single transmission
medium. Examples of multipoint topologies are star, bus, ring, mesh, and hybrid.

FIGURE 26 Network topologies: (a) point-to-point; (b) star; (c) bus; (d) ring; (e) mesh; (f) hybrid

10-3-1 Star topology. A star topology is a multipoint data communications net-


work where remote stations are connected by cable segments directly to a centrally located
computer called a hub, which acts like a multipoint connector (see Figure 26b). In essence,
a star topology is simply a multipoint circuit comprised of many two-point circuits where
each remote station communicates directly with a centrally located computer. With a star
topology, remote stations cannot communicate directly with one another, so they must re-
lay information through the hub. Hubs also have store-and-forward capabilities, enabling
them to handle more than one message at a time.

10-3-2 Bus topology. A bus topology is a multipoint data communications circuit


that makes it relatively simple to control data flow between and among the computers be-
cause this configuration allows all stations to receive every transmission over the network.
With a bus topology, all the remote stations are physically or logically connected to a sin-
gle transmission line called a bus. The bus topology is the simplest and most common
method of interconnecting computers. The two ends of the transmission line never touch
to form a complete loop. A bus topology is sometimes called multidrop or linear bus, and
all stations share a common transmission medium. Data networks using the bus topology
generally involve one centrally located host computer that controls data flow to and from
the other stations. The bus topology is sometimes called a horizontal bus and is shown in
Figure 26c.

10-3-3 Ring topology. A ring topology is a multipoint data communications net-


work where all stations are interconnected in tandem (series) to form a closed loop or cir-
cle. A ring topology is sometimes called a loop. Each station in the loop is joined by point-
to-point links to two other stations (the transmitter of one and the receiver of the other) (see
Figure 26d). Transmissions are unidirectional and must propagate through all the stations
in the loop. Each computer acts like a repeater in that it receives signals from down-line
computers then retransmits them to up-line computers. The ring topology is similar to the
bus and star topologies, as it generally involves one centrally located host computer that
controls data flow to and from the other stations.

10-3-4 Mesh topology. In a mesh topology, every station has a direct two-point
communications link to every other station on the circuit as shown in Figure 26e. The
mesh topology is sometimes called fully connected. A disadvantage of a mesh topology is
that a fully connected circuit requires n(n - 1)/2 physical transmission paths to interconnect
n stations and that each station must have n - 1 input/output ports. Advantages of a mesh
topology are reduced traffic problems, increased reliability, and enhanced security.
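The link count grows quickly with the number of stations. A minimal sketch of the arithmetic (plain Python, with an illustrative station count) is shown below.

```python
def mesh_links(n):
    """Physical transmission paths needed to fully mesh n stations: n(n - 1)/2."""
    return n * (n - 1) // 2

def ports_per_station(n):
    """Each station needs one input/output port for every other station: n - 1."""
    return n - 1

# Illustrative value only: a 10-station fully connected mesh
n = 10
print(mesh_links(n), ports_per_station(n))   # 45 links, 9 ports per station
```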

10-3-5 Hybrid topology. A hybrid topology is simply combining two or more of


the traditional topologies to form a larger, more complex topology. Hybrid topologies are
sometimes called mixed topologies. An example of a hybrid topology is the bus star topol-
ogy shown in Figure 26f. Other hybrid configurations include the star ring, bus ring, and
virtually every other combination you can think of.

10-4 Network Classifications


Networks are generally classified by size, which includes geographic area, distance
between stations, number of computers, transmission speed (bps), transmission me-
dia, and the network’s physical architecture. The four primary classifications of net-
works are local area networks (LANs), metropolitan area networks (MANs), wide


Table 2 Primary Network Types

Network Type Characteristics

LAN (local area network) Interconnects computer users within a department, company,
or group
MAN (metropolitan area network) Interconnects computers in and around a large city
WAN (wide area network) Interconnects computers in and around an entire country
GAN (global area network) Interconnects computers from around the entire globe
Building backbone Interconnects LANs within a building
Campus backbone Interconnects building LANs
Enterprise network Interconnects many or all of the above
PAN (personal area network) Interconnects memory cards carried by people and in computers
that are in close proximity to each other
PAN (power line area network, Virtually no limit on how many computers it can interconnect
sometimes called PLAN) and covers an area limited only by the availability of power
distribution lines

area networks (WANs), and global area networks (GANs). In addition, there are three
primary types of interconnecting networks: building backbone, campus backbone,
and enterprise network. Two promising computer networks of the future share the
same acronym: the PAN (personal area network) and PAN (power line area network,
sometimes called PLAN). The idea behind a personal area network is to allow people
to transfer data through the human body simply by touching each other. Power line
area networks use existing ac distribution networks to carry data wherever power
lines go, which is virtually everywhere.
When two or more networks are connected together, they constitute an internetwork
or internet. An internet (lowercase i) is sometimes confused with the Internet (uppercase
I). The term internet is a generic term that simply means to interconnect two or more net-
works, whereas Internet is the name of a specific worldwide data communications net-
work. Table 2 summarizes the characteristics of the primary types of networks, and Figure
27 illustrates the geographic relationship among computers and the different types of net-
works.

10-4-1 Local area network. Local area networks (LANs) are typically privately
owned data communications networks in which 10 to 40 computer users share data re-
sources with one or more file servers. LANs use a network operating system to provide
two-way communications at bit rates typically in the range of 10 Mbps to 100 Mbps and
higher between a large variety of data communications equipment within a relatively
small geographical area, such as in the same room, building, or building complex (see
Figure 28). A LAN can be as simple as two personal computers and a printer or could
contain dozens of computers, workstations, and peripheral devices. Most LANs link
equipment that are within a few miles of each other or closer. Because the size of most
LANs is limited, the longest (or worst-case) transmission time is bounded and known by
everyone using the network. Therefore, LANs can utilize configurations that otherwise
would not be possible.
LANs were designed for sharing resources between a wide range of digital equip-
ment, including personal computers, workstations, and printers. The resources shared can
be software as well as hardware. Most LANs are owned by the company or organization


FIGURE 27 Computer network types

that uses it and have a connection to a building backbone for access to other departmental
LANs, MANs, WANs, and GANs.

10-4-2 Metropolitan area network. A metropolitan area network (MAN) is a


high-speed network similar to a LAN except MANs are designed to encompass larger
areas, usually that of an entire city (see Figure 29). Most MANs support the trans-
mission of both data and voice and in some cases video. MANs typically operate at


FIGURE 28 Local area network (LAN) layout

speeds of 1.5 Mbps to 10 Mbps and range from five miles to a few hundred miles in
length. A MAN generally uses only one or two transmission cables and requires no
switches. A MAN could be a single network, such as a cable television distribution net-
work, or it could be a means of interconnecting two or more LANs into a single, larger
network, enabling data resources to be shared LAN to LAN as well as from station to
station or computer to computer. Large companies often use MANs to interconnect all
their LANs.
A MAN can be owned and operated entirely by a single, private company, or it
could lease services and facilities on a monthly basis from the local cable or telephone
company. Switched Multimegabit Data Services (SMDS) is an example of a service of-
fered by local telephone companies for handling high-speed data communications for
MANs. Other examples of MANs are FDDI (fiber distributed data interface) and ATM
(asynchronous transfer mode).

10-4-3 Wide area network. Wide area networks (WANs) are the oldest type of data
communications network. They provide relatively slow-speed, long-distance transmission of
data, voice, and video information over relatively large and widely dispersed geographical
areas, such as a country or an entire continent (see Figure 30). WANs typically interconnect
cities and states, operate at bit rates from 1.5 Mbps to 2.4 Gbps, and cover distances of
100 to 1000 miles.
WANs may utilize both public and private communications systems to provide serv-
ice over an area that is virtually unlimited; however, WANs are generally obtained through
service providers and normally come in the form of leased-line or circuit-switching tech-
nology. Often WANs interconnect routers in different locations. Examples of WANs are


FIGURE 29 Metropolitan area network (MAN)

ISDN (integrated services digital network), T1 and T3 digital carrier systems, frame relay,
X.25, ATM, and using data modems over standard telephone lines.

10-4-4 Global area network. Global area networks (GANs) provide connections be-
tween countries around the entire globe (see Figure 31). The Internet is a good example of
a GAN, as it is essentially a network comprised of other networks that interconnects virtu-
ally every country in the world. GANs operate from 1.5 Mbps to 100 Gbps and cover thou-
sands of miles.

10-4-5 Building backbone. A building backbone is a network connection that nor-


mally carries traffic between departmental LANs within a single company. A building back-
bone generally consists of a switch or a router (see Figure 32) that can provide connectiv-
ity to other networks, such as campus backbones, enterprise backbones, MANs, WANs, or
GANs.


FIGURE 30 Wide area network (WAN)

FIGURE 31 Global area network (GAN)

10-4-6 Campus backbone. A campus backbone is a network connection used to


carry traffic to and from LANs located in various buildings on campus (see Figure 33). A
campus backbone is designed for sites that have a group of buildings at a single location,
such as corporate headquarters, universities, airports, and research parks.
A campus backbone normally uses optical fiber cables for the transmission media be-
tween buildings. The optical fiber cable is used to connect interconnecting devices, such as


FIGURE 32 Building backbone

FIGURE 33 Campus backbone

bridges, routers, and switches. Campus backbones must operate at relatively high trans-
mission rates to handle the large volumes of traffic between sites.

10-4-7 Enterprise networks. An enterprise network includes some or all of the previ-
ously mentioned networks and components connected in a cohesive and manageable fashion.


11 ALTERNATE PROTOCOL SUITES

The functional layers of the OSI seven-layer protocol hierarchy do not line up well with certain
data communications applications, such as the Internet. Because of this, several other
protocol suites and models see widespread use, such as TCP/IP and the Cisco three-layer hierarchical model.

11-1 TCP/IP Protocol Suite


The TCP/IP protocol suite (transmission control protocol/Internet protocol) was actu-
ally developed by the Department of Defense before the inception of the seven-layer
OSI model. TCP/IP is comprised of several interactive modules that each provide a specific
functionality without necessarily depending on one another. The OSI seven-
layer model specifies exactly which function each layer performs, whereas TCP/IP is
comprised of several relatively independent protocols that can be combined in many
ways, depending on system needs. The term hierarchical simply means that the upper-
level protocols are supported by one or more lower-level protocols. Depending on
whose definition you use, TCP/IP is a hierarchical protocol comprised of either three
or four layers.
The three-layer version of TCP/IP contains the network, transport, and application
layers that reside above two lower-layer protocols that are not specified by TCP/IP (the
physical and data link layers). The network layer of TCP/IP provides internetworking func-
tions similar to those provided by the network layer of the OSI network model. The net-
work layer is sometimes called the internetwork layer or internet layer.
The transport layer of TCP/IP contains two protocols: TCP (transmission control pro-
tocol) and UDP (user datagram protocol). TCP functions go beyond those specified by the
transport layer of the OSI model, as they also perform several tasks defined for the session layer.
In essence, TCP allows two application layers to communicate with each other.
The applications layer of TCP/IP contains several other protocols that users and pro-
grams utilize to perform the functions of the three uppermost layers of the OSI hierarchy
(i.e., the applications, presentation, and session layers).
The four-layer version of TCP/IP specifies the network access, Internet, host-to-host,
and process layers:

Network access layer. Provides a means of physically delivering data packets using
frames or cells
Internet layer. Contains information that pertains to how data can be routed through
the network
Host-to-host layer. Services the process and Internet layers to handle the reliability
and session aspects of data transmission
Process layer. Provides applications support

TCP/IP is probably the dominant communications protocol in use today. It provides


a common denominator, allowing many different types of devices to communicate over a
network or system of networks while supporting a wide variety of applications.

11-2 Cisco Three-Layer Model


Cisco defines a three-layer logical hierarchy that specifies where things belong, how they fit
together, and what functions go where. The three layers are the core, distribution, and access:

Core layer. The core layer is literally the core of the network, as it resides at the top
of the hierarchy and is responsible for transporting large amounts of data traffic reli-
ably and quickly. The only purpose of the core layer is to switch traffic as quickly as
possible.


Distribution layer. The distribution layer is sometimes called the workgroup layer.
The distribution layer is the communications point between the access and the core
layers; it provides routing, filtering, and WAN access and determines how many data packets
are allowed to access the core layer. The distribution layer determines the fastest way to
handle service requests, for example, the fastest way to forward a file request to a
server. Several functions are performed at the distribution level:
1. Implementation of tools such as access lists, packet filtering, and queuing
2. Implementation of security and network policies, including firewalls and address
translation
3. Redistribution between routing protocols
4. Routing between virtual LANs and other workgroup support functions
5. Definition of broadcast and multicast domains
Access layer. The access layer controls workgroup and individual user access to inter-
networking resources, most of which are available locally. The access layer is some-
times called the desktop layer. Several functions are performed at the access layer level:
1. Access control
2. Creation of separate collision domains (segmentation)
3. Workgroup connectivity into the distribution layer

QUESTIONS
1. Define the following terms: data, information, and data communications network.
2. What was the first data communications system that used binary-coded electrical signals?
3. Discuss the relationship between network architecture and protocol.
4. Briefly describe broadcast and point-to-point computer networks.
5. Define the following terms: protocol, connection-oriented protocols, connectionless protocols,
and protocol stacks.
6. What is the difference between syntax and semantics?
7. What are data communications standards, and why are they needed?
8. Name and briefly describe the differences between the two kinds of data communications stan-
dards.
9. List and describe the eight primary standards organizations for data communications.
10. Define the open systems interconnection.
11. Briefly describe the seven layers of the OSI protocol hierarchy.
12. List and briefly describe the basic functions of the five components of a data communications cir-
cuit.
13. Briefly describe the differences between serial and parallel data transmission.
14. What are the two basic kinds of data communications circuit configurations?
15. List and briefly describe the four transmission modes.
16. List and describe the functions of the most common components of a computer network.
17. What are the differences between servers and clients on a data communications network?
18. Describe a peer-to-peer data communications network.
19. What are the differences between peer-to-peer client/server networks and dedicated client/server
networks?
20. What is a data communications network topology?
21. List and briefly describe the five basic data communications network topologies.
22. List and briefly describe the major network classifications.
23. Briefly describe the TCP/IP protocol model.
24. Briefly describe the Cisco three-layer protocol model.

Fundamental Concepts of Data
Communications

CHAPTER OUTLINE

1 Introduction 8 Data Communications Hardware


2 Data Communications Codes 9 Data Communications Circuits
3 Bar Codes 10 Line Control Unit
4 Error Control 11 Serial Interfaces
5 Error Detection 12 Data Communications Modems
6 Error Correction 13 ITU-T Modem Recommendations
7 Character Synchronization

OBJECTIVES

■ Define data communication code


■ Describe the following data communications codes: Baudot, ASCII, and EBCDIC
■ Explain bar code formats
■ Define error control, error detection, and error correction
■ Describe the following error-detection mechanisms: redundancy, checksum, LRC, VRC, and CRC
■ Describe the following error-correction mechanisms: FEC, ARQ, and Hamming code
■ Describe character synchronization and explain the differences between asynchronous and synchronous data formats
■ Define the term data communications hardware
■ Describe data terminal equipment
■ Describe data communications equipment
■ List and describe the seven components that make up a two-point data communications circuit
■ Describe the terms line control unit and front-end processor and explain the differences between the two
■ Describe the basic operation of a UART and outline the differences between UARTs, USRTs, and USARTs
■ Describe the functions of a serial interface
■ Explain the physical, electrical, and functional characteristics of the RS-232 serial interface
■ Compare and contrast the RS-232, RS-449, and RS-530 serial interfaces


■ Describe data communications modems


■ Explain the block diagram of a modem
■ Explain what is meant by Bell System–compatible modems
■ Describe modem synchronization and modem equalization
■ Describe the ITU-T modem recommendations

1 INTRODUCTION

To understand how a data communications network works as an entity, it is necessary first


to understand the fundamental concepts and components that make up the network. The
fundamental concepts of data communications include data communications code, error
control (error detection and correction), and character synchronization, and fundamental
hardware includes various pieces of computer and networking equipment, such as line con-
trol units, serial interfaces, and data communications modems.

2 DATA COMMUNICATIONS CODES

Data communications codes are often used to represent characters and symbols, such as let-
ters, digits, and punctuation marks. Therefore, data communications codes are called
character codes, character sets, symbol codes, or character languages.

2-1 Baudot Code


The Baudot code (sometimes called the Telex code) was the first fixed-length character
code developed for machines rather than for people. A French postal engineer named
Thomas Murray developed the Baudot code in 1875 and named the code after Emile Bau-
dot, an early pioneer in telegraph printing. The Baudot code (pronounced baw-dough) is a
fixed-length source code (sometimes called a fixed-length block code). With fixed-length
source codes, all characters are represented in binary and have the same number of symbols
(bits). The Baudot code is a five-bit character code that was used primarily for low-speed
teletype equipment, such as the TWX/Telex system and radio teletype (RTTY). The latest
version of the Baudot code is recommended by the CCITT as the International Alphabet
No. 2 and is shown in Table 1.

2-2 ASCII Code


In 1963, in an effort to standardize data communications codes, the United States adopted
the Bell System model 33 teletype code as the United States of America Standard Code for
Information Exchange (USASCII), better known as ASCII-63. Since its adoption, ASCII
(pronounced as-key) has progressed through the 1965, 1967, and 1977 versions, with the
1977 version being recommended by the ITU as International Alphabet No. 5, in the United
States as ANSI standard X3.4-1986 (R1997), and by the International Standards Organiza-
tion as ISO-14962 (1997).
ASCII is the standard character set for source coding the alphanumeric character
set that humans understand but computers do not (computers only understand 1s and 0s).
ASCII is a seven-bit fixed-length character set. With the ASCII code, the least-significant
bit (LSB) is designated b0 and the most-significant bit (MSB) is designated b7 as shown here:
b7 b6 b5 b4 b3 b2 b1 b0
MSB LSB
Direction of propagation
The terms least and most significant are somewhat of a misnomer because character
codes do not represent weighted binary numbers and, therefore, all bits are equally sig-


Table 1 Baudot Code

Bit

Letter Figure Bit: 4 3 2 1 0

A — 1 1 0 0 0
B ? 1 0 0 1 1
C : 0 1 1 1 0
D $ 1 0 0 1 0
E 3 1 0 0 0 0
F ! 1 0 1 1 0
G & 0 1 0 1 1
H # 0 0 1 0 1
I 8 0 1 1 0 0
J ' 1 1 0 1 0
K ( 1 1 1 1 0
L ) 0 1 0 0 1
M . 0 0 1 1 1
N , 0 0 1 1 0
O 9 0 0 0 1 1
P 0 0 1 1 0 1
Q 1 1 1 1 0 1
R 4 0 1 0 1 0
S bel 1 0 1 0 0
T 5 0 0 0 0 1
U 7 1 1 1 0 0
V ; 0 1 1 1 1
W 2 1 1 0 0 1
X / 1 0 1 1 1
Y 6 1 0 1 0 1
Z ″ 1 0 0 0 1
Figure shift 1 1 1 1 1
Letter shift 1 1 0 1 1
Space 0 0 1 0 0
Line feed (LF) 0 1 0 0 0
Blank (null) 0 0 0 0 0

nificant. Bit b7 is not part of the ASCII code but is generally reserved for an error detec-
tion bit called the parity bit, which is explained later in this chapter. With character codes,
it is more meaningful to refer to bits by their order than by their position; b0 is the zero-
order bit, b1 the first-order bit, b7 the seventh-order bit, and so on. However, with serial
data transmission, the bit transmitted first is generally called the LSB. With ASCII, the
low-order bit (b0) is transmitted first. ASCII is probably the code most often used in data
communications networks today. The 1977 version of the ASCII code with odd parity is
shown in Table 2 (note that the parity bit is not included in the hex code).
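As a sketch of the bit ordering just described, the short Python fragment below takes the seven-bit ASCII code for a character, lists it as b6 through b0, and then lists the bits in the order a serial transmitter sends them (the low-order bit b0 first). The helper names are illustrative only.

```python
def ascii_bits(ch):
    """Return the 7-bit ASCII code of ch as a list [b6, b5, ..., b1, b0]."""
    code = ord(ch)                                # e.g., 'C' -> 0x43
    return [(code >> i) & 1 for i in range(6, -1, -1)]

def serial_order(bits):
    """With ASCII, the low-order bit (b0) is transmitted first."""
    return list(reversed(bits))

bits = ascii_bits('C')                            # 0x43 = 1000011
print(bits)                # [1, 0, 0, 0, 0, 1, 1]   (b6 ... b0)
print(serial_order(bits))  # [1, 1, 0, 0, 0, 0, 1]   (b0 sent first)
```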

2-3 EBCDIC Code


The extended binary-coded decimal interchange code (EBCDIC) is an eight-bit fixed-
length character set developed in 1962 by the International Business Machines Corporation
(IBM). EBCDIC is used almost exclusively with IBM mainframe computers and peripheral
equipment. With eight bits, 2^8, or 256, codes are possible, although only 139 of the 256
codes are actually assigned characters. Unspecified codes can be assigned to specialized
characters and functions. The name binary coded decimal was selected because the second
hex character for all letter and digit codes contains only the hex values from 0 to 9, which
have the same binary sequence as BCD codes. The EBCDIC code is shown in Table 3.
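There is no need to type Table 3 in by hand to experiment with it; Python happens to ship an EBCDIC code page ("cp037", EBCDIC US/Canada) whose letter and digit assignments agree with Table 3. A quick check (a sketch, not part of the text) looks like this:

```python
# 'cp037' is Python's EBCDIC (US/Canada) code page; its letters and digits
# match the assignments in Table 3 (A = C1, a = 81, 0 = F0, space = 40, ...).
for ch in ['A', 'J', 'S', 'a', 'z', '0', '9', ' ']:
    code = ch.encode('cp037')[0]
    # The low hex digit of every letter and digit code is 0-9 (the BCD property).
    print(ch, format(code, '02X'))
```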


Table 2 ASCII-77: Odd Parity

Binary Code Binary Code

Bit 7 6 5 4 3 2 1 0 Hex Bit 7 6 5 4 3 2 1 0 Hex

NUL 1 0 0 0 0 0 0 0 00 @ 0 1 0 0 0 0 0 0 40
SOH 0 0 0 0 0 0 0 1 01 A 1 1 0 0 0 0 0 1 41
STX 0 0 0 0 0 0 1 0 02 B 1 1 0 0 0 0 1 0 42
ETX 1 0 0 0 0 0 1 1 03 C 0 1 0 0 0 0 1 1 43
EOT 0 0 0 0 0 1 0 0 04 D 1 1 0 0 0 1 0 0 44
ENQ 1 0 0 0 0 1 0 1 05 E 0 1 0 0 0 1 0 1 45
ACK 1 0 0 0 0 1 1 0 06 F 0 1 0 0 0 1 1 0 46
BEL 0 0 0 0 0 1 1 1 07 G 1 1 0 0 0 1 1 1 47
BS 0 0 0 0 1 0 0 0 08 H 1 1 0 0 1 0 0 0 48
HT 1 0 0 0 1 0 0 1 09 I 0 1 0 0 1 0 0 1 49
NL 1 0 0 0 1 0 1 0 0A J 0 1 0 0 1 0 1 0 4A
VT 0 0 0 0 1 0 1 1 0B K 1 1 0 0 1 0 1 1 4B
FF 1 0 0 0 1 1 0 0 0C L 0 1 0 0 1 1 0 0 4C
CR 0 0 0 0 1 1 0 1 0D M 1 1 0 0 1 1 0 1 4D
SO 0 0 0 0 1 1 1 0 0E N 1 1 0 0 1 1 1 0 4E
SI 1 0 0 0 1 1 1 1 0F O 0 1 0 0 1 1 1 1 4F
DLE 0 0 0 1 0 0 0 0 10 P 1 1 0 1 0 0 0 0 50
DC1 0 0 0 1 0 0 0 1 11 Q 0 1 0 1 0 0 0 1 51
DC2 1 0 0 1 0 0 1 0 12 R 0 1 0 1 0 0 1 0 52
DC3 0 0 0 1 0 0 1 1 13 S 1 1 0 1 0 0 1 1 53
DC4 1 0 0 1 0 1 0 0 14 T 0 1 0 1 0 1 0 0 54
NAK 0 0 0 1 0 1 0 1 15 U 1 1 0 1 0 1 0 1 55
SYN 0 0 0 1 0 1 1 0 16 V 1 1 0 1 0 1 1 0 56
ETB 1 0 0 1 0 1 1 1 17 W 0 1 0 1 0 1 1 1 57
CAN 1 0 0 1 1 0 0 0 18 X 0 1 0 1 1 0 0 0 58
EM 0 0 0 1 1 0 0 1 19 Y 1 1 0 1 1 0 0 1 59
SUB 0 0 0 1 1 0 1 0 1A Z 1 1 0 1 1 0 1 0 5A
ESC 1 0 0 1 1 0 1 1 1B [ 0 1 0 1 1 0 1 1 5B
FS 0 0 0 1 1 1 0 0 1C \ 1 1 0 1 1 1 0 0 5C
GS 1 0 0 1 1 1 0 1 1D ] 0 1 0 1 1 1 0 1 5D
RS 1 0 0 1 1 1 1 0 1E ^ 0 1 0 1 1 1 1 0 5E
US 0 0 0 1 1 1 1 1 1F _ 1 1 0 1 1 1 1 1 5F
SP 0 0 1 0 0 0 0 0 20 ` 1 1 1 0 0 0 0 0 60
! 1 0 1 0 0 0 0 1 21 a 0 1 1 0 0 0 0 1 61
″ 1 0 1 0 0 0 1 0 22 b 0 1 1 0 0 0 1 0 62
# 0 0 1 0 0 0 1 1 23 c 1 1 1 0 0 0 1 1 63
$ 1 0 1 0 0 1 0 0 24 d 0 1 1 0 0 1 0 0 64
% 0 0 1 0 0 1 0 1 25 e 1 1 1 0 0 1 0 1 65
& 0 0 1 0 0 1 1 0 26 f 1 1 1 0 0 1 1 0 66
′ 1 0 1 0 0 1 1 1 27 g 0 1 1 0 0 1 1 1 67
( 1 0 1 0 1 0 0 0 28 h 0 1 1 0 1 0 0 0 68
) 0 0 1 0 1 0 0 1 29 i 1 1 1 0 1 0 0 1 69
* 0 0 1 0 1 0 1 0 2A j 1 1 1 0 1 0 1 0 6A
+ 1 0 1 0 1 0 1 1 2B k 0 1 1 0 1 0 1 1 6B
, 0 0 1 0 1 1 0 0 2C l 1 1 1 0 1 1 0 0 6C
- 1 0 1 0 1 1 0 1 2D m 0 1 1 0 1 1 0 1 6D
. 1 0 1 0 1 1 1 0 2E n 0 1 1 0 1 1 1 0 6E
/ 0 0 1 0 1 1 1 1 2F o 1 1 1 0 1 1 1 1 6F
0 1 0 1 1 0 0 0 0 30 p 0 1 1 1 0 0 0 0 70
1 0 0 1 1 0 0 0 1 31 q 1 1 1 1 0 0 0 1 71
2 0 0 1 1 0 0 1 0 32 r 1 1 1 1 0 0 1 0 72
3 1 0 1 1 0 0 1 1 33 s 0 1 1 1 0 0 1 1 73
4 0 0 1 1 0 1 0 0 34 t 1 1 1 1 0 1 0 0 74
5 1 0 1 1 0 1 0 1 35 u 0 1 1 1 0 1 0 1 75
6 1 0 1 1 0 1 1 0 36 v 0 1 1 1 0 1 1 0 76
7 0 0 1 1 0 1 1 1 37 w 1 1 1 1 0 1 1 1 77
8 0 0 1 1 1 0 0 0 38 x 1 1 1 1 1 0 0 0 78
9 1 0 1 1 1 0 0 1 39 y 0 1 1 1 1 0 0 1 79
: 1 0 1 1 1 0 1 0 3A z 0 1 1 1 1 0 1 0 7A
; 0 0 1 1 1 0 1 1 3B { 1 1 1 1 1 0 1 1 7B
< 1 0 1 1 1 1 0 0 3C | 0 1 1 1 1 1 0 0 7C
= 0 0 1 1 1 1 0 1 3D } 1 1 1 1 1 1 0 1 7D
> 0 0 1 1 1 1 1 0 3E ~ 1 1 1 1 1 1 1 0 7E
? 1 0 1 1 1 1 1 1 3F DEL 0 1 1 1 1 1 1 1 7F

NUL  null VT  vertical tab SYN  synchronous


SOH  start of heading FF  form feed ETB  end of transmission block
STX  start of text CR  carriage return CAN  cancel
ETX  end of text SO  shift-out SUB  substitute
EOT  end of transmission SI  shift-in ESC  escape
ENQ  enquiry DLE  data link escape FS  field separator
ACK  acknowledge DC1  device control 1 GS  group separator
BEL  bell DC2  device control 2 RS  record separator
BS  back space DC3  device control 3 US  unit separator
HT  horizontal tab DC4  device control 4 SP  space
NL  new line NAK  negative acknowledge DEL  delete

Table 3 EBCDIC Code

Binary Code Binary Code

Bit 0 1 2 3 4 5 6 7 Hex Bit 0 1 2 3 4 5 6 7 Hex

NUL 0 0 0 0 0 0 0 0 00 1 0 0 0 0 0 0 0 80
SOH 0 0 0 0 0 0 0 1 01 a 1 0 0 0 0 0 0 1 81
STX 0 0 0 0 0 0 1 0 02 b 1 0 0 0 0 0 1 0 82
ETX 0 0 0 0 0 0 1 1 03 c 1 0 0 0 0 0 1 1 83
0 0 0 0 0 1 0 0 04 d 1 0 0 0 0 1 0 0 84
PT 0 0 0 0 0 1 0 1 05 e 1 0 0 0 0 1 0 1 85
0 0 0 0 0 1 1 0 06 f 1 0 0 0 0 1 1 0 86
0 0 0 0 0 1 1 1 07 g 1 0 0 0 0 1 1 1 87
0 0 0 0 1 0 0 0 08 h 1 0 0 0 1 0 0 0 88
0 0 0 0 1 0 0 1 09 i 1 0 0 0 1 0 0 1 89
0 0 0 0 1 0 1 0 0A 1 0 0 0 1 0 1 0 8A
0 0 0 0 1 0 1 1 0B 1 0 0 0 1 0 1 1 8B
FF 0 0 0 0 1 1 0 0 0C 1 0 0 0 1 1 0 0 8C
0 0 0 0 1 1 0 1 0D 1 0 0 0 1 1 0 1 8D
0 0 0 0 1 1 1 0 0E 1 0 0 0 1 1 1 0 8E
0 0 0 0 1 1 1 1 0F 1 0 0 0 1 1 1 1 8F
DLE 0 0 0 1 0 0 0 0 10 1 0 0 1 0 0 0 0 90
SBA 0 0 0 1 0 0 0 1 11 j 1 0 0 1 0 0 0 1 91
EUA 0 0 0 1 0 0 1 0 12 k 1 0 0 1 0 0 1 0 92
IC 0 0 0 1 0 0 1 1 13 l 1 0 0 1 0 0 1 1 93
0 0 0 1 0 1 0 0 14 m 1 0 0 1 0 1 0 0 94
NL 0 0 0 1 0 1 0 1 15 n 1 0 0 1 0 1 0 1 95
0 0 0 1 0 1 1 0 16 o 1 0 0 1 0 1 1 0 96
0 0 0 1 0 1 1 1 17 p 1 0 0 1 0 1 1 1 97
0 0 0 1 1 0 0 0 18 q 1 0 0 1 1 0 0 0 98
EM 0 0 0 1 1 0 0 1 19 r 1 0 0 1 1 0 0 1 99
0 0 0 1 1 0 1 0 1A 1 0 0 1 1 0 1 0 9A
0 0 0 1 1 0 1 1 1B 1 0 0 1 1 0 1 1 9B
DUP 0 0 0 1 1 1 0 0 1C 1 0 0 1 1 1 0 0 9C
SF 0 0 0 1 1 1 0 1 1D 1 0 0 1 1 1 0 1 9D
FM 0 0 0 1 1 1 1 0 1E 1 0 0 1 1 1 1 0 9E
ITB 0 0 0 1 1 1 1 1 1F 1 0 0 1 1 1 1 1 9F
0 0 1 0 0 0 0 0 20 1 0 1 0 0 0 0 0 A0
0 0 1 0 0 0 0 1 21 ⬃ 1 0 1 0 0 0 0 1 A1
0 0 1 0 0 0 1 0 22 s 1 0 1 0 0 0 1 0 A2
0 0 1 0 0 0 1 1 23 t 1 0 1 0 0 0 1 1 A3
0 0 1 0 0 1 0 0 24 u 1 0 1 0 0 1 0 0 A4
0 0 1 0 0 1 0 1 25 v 1 0 1 0 0 1 0 1 A5
ETB 0 0 1 0 0 1 1 0 26 w 1 0 1 0 0 1 1 0 A6
ESC 0 0 1 0 0 1 1 1 27 x 1 0 1 0 0 1 1 1 A7
0 0 1 0 1 0 0 0 28 y 1 0 1 0 1 0 0 0 A8
0 0 1 0 1 0 0 1 29 z 1 0 1 0 1 0 0 1 A9
0 0 1 0 1 0 1 0 2A 1 0 1 0 1 0 1 0 AA
0 0 1 0 1 0 1 1 2B 1 0 1 0 1 0 1 1 AB
0 0 1 0 1 1 0 0 2C 1 0 1 0 1 1 0 0 AC
ENQ 0 0 1 0 1 1 0 1 2D 1 0 1 0 1 1 0 1 AD
0 0 1 0 1 1 1 0 2E 1 0 1 0 1 1 1 0 AE
0 0 1 0 1 1 1 1 2F 1 0 1 0 1 1 1 1 AF
0 0 1 1 0 0 0 0 30 1 0 1 1 0 0 0 0 B0
0 0 1 1 0 0 0 1 31 1 0 1 1 0 0 0 1 B1
SYN 0 0 1 1 0 0 1 0 32 1 0 1 1 0 0 1 0 B2
0 0 1 1 0 0 1 1 33 1 0 1 1 0 0 1 1 B3
0 0 1 1 0 1 0 0 34 1 0 1 1 0 1 0 0 B4
0 0 1 1 0 1 0 1 35 1 0 1 1 0 1 0 1 B5
0 0 1 1 0 1 1 0 36 1 0 1 1 0 1 1 0 B6
BOT 0 0 1 1 0 1 1 1 37 1 0 1 1 0 1 1 1 B7
0 0 1 1 1 0 0 0 38 1 0 1 1 1 0 0 0 B8
0 0 1 1 1 0 0 1 39 1 0 1 1 1 0 0 1 B9
0 0 1 1 1 0 1 0 3A 1 0 1 1 1 0 1 0 BA
0 0 1 1 1 0 1 1 3B 1 0 1 1 1 0 1 1 BB
RA 0 0 1 1 1 1 0 0 3C 1 0 1 1 1 1 0 0 BC
NAK 0 0 1 1 1 1 0 1 3D 1 0 1 1 1 1 0 1 BD
0 0 1 1 1 1 1 0 3E 1 0 1 1 1 1 1 0 BE
SUB 0 0 1 1 1 1 1 1 3F 1 0 1 1 1 1 1 1 BF
SP 0 1 0 0 0 0 0 0 40 { 1 1 0 0 0 0 0 0 C0
0 1 0 0 0 0 0 1 41 A 1 1 0 0 0 0 0 1 C1
0 1 0 0 0 0 1 0 42 B 1 1 0 0 0 0 1 0 C2
0 1 0 0 0 0 1 1 43 C 1 1 0 0 0 0 1 1 C3
0 1 0 0 0 1 0 0 44 D 1 1 0 0 0 1 0 0 C4
0 1 0 0 0 1 0 1 45 E 1 1 0 0 0 1 0 1 C5
0 1 0 0 0 1 1 0 46 F 1 1 0 0 0 1 1 0 C6
0 1 0 0 0 1 1 1 47 G 1 1 0 0 0 1 1 1 C7
0 1 0 0 1 0 0 0 48 H 1 1 0 0 1 0 0 0 C8
0 1 0 0 1 0 0 1 49 I 1 1 0 0 1 0 0 1 C9
¢ 0 1 0 0 1 0 1 0 4A 1 1 0 0 1 0 1 0 CA
- 0 1 0 0 1 0 1 1 4B 1 1 0 0 1 0 1 1 CB

0 1 0 0 1 1 0 0 4C 1 1 0 0 1 1 0 0 CC
( 0 1 0 0 1 1 0 1 4D 1 1 0 0 1 1 0 1 CD
 0 1 0 0 1 1 1 0 4E 1 1 0 0 1 1 1 0 CE
| 0 1 0 0 1 1 1 1 4F 1 1 0 0 1 1 1 1 CF
& 0 1 0 1 0 0 0 0 50 } 1 1 0 1 0 0 0 0 D0
0 1 0 1 0 0 0 1 51 J 1 1 0 1 0 0 0 1 D1
0 1 0 1 0 0 1 0 52 K 1 1 0 1 0 0 1 0 D2
0 1 0 1 0 0 1 1 53 L 1 1 0 1 0 0 1 1 D3
0 1 0 1 0 1 0 0 54 M 1 1 0 1 0 1 0 0 D4
0 1 0 1 0 1 0 1 55 N 1 1 0 1 0 1 0 1 D5
0 1 0 1 0 1 1 0 56 O 1 1 0 1 0 1 1 0 D6
0 1 0 1 0 1 1 1 57 P 1 1 0 1 0 1 1 1 D7
0 1 0 1 1 0 0 0 58 Q 1 1 0 1 1 0 0 0 D8
0 1 0 1 1 0 0 1 59 R 1 1 0 1 1 0 0 1 D9
! 0 1 0 1 1 0 1 0 5A 1 1 0 1 1 0 1 0 DA
$ 0 1 0 1 1 0 1 1 5B 1 1 0 1 1 0 1 1 DB
* 0 1 0 1 1 1 0 0 5C 1 1 0 1 1 1 0 0 DC
) 0 1 0 1 1 1 0 1 5D 1 1 0 1 1 1 0 1 DD
: 0 1 0 1 1 1 1 0 5E 1 1 0 1 1 1 1 0 DE
¬ 0 1 0 1 1 1 1 1 5F 1 1 0 1 1 1 1 1 DF
 0 1 1 0 0 0 0 0 60 \ 1 1 1 0 0 0 0 0 E0
/ 0 1 1 0 0 0 0 1 61 1 1 1 0 0 0 0 1 E1
 0 1 1 0 0 0 1 0 62 S 1 1 1 0 0 0 1 0 E2
0 1 1 0 0 0 1 1 63 T 1 1 1 0 0 0 1 1 E3
0 1 1 0 0 1 0 0 64 U 1 1 1 0 0 1 0 0 E4
0 1 1 0 0 1 0 1 65 V 1 1 1 0 0 1 0 1 E5
0 1 1 0 0 1 1 0 66 W 1 1 1 0 0 1 1 0 E6
0 1 1 0 0 1 1 1 67 X 1 1 1 0 0 1 1 1 E7
0 1 1 0 1 0 0 0 68 Y 1 1 1 0 1 0 0 0 E8
0 1 1 0 1 0 0 1 69 Z 1 1 1 0 1 0 0 1 E9
0 1 1 0 1 0 1 0 6A 1 1 1 0 1 0 1 0 EA
0 1 1 0 1 0 1 1 6B 1 1 1 0 1 0 1 1 EB
% 0 1 1 0 1 1 0 0 6C 1 1 1 0 1 1 0 0 EC
0 1 1 0 1 1 0 1 6D 1 1 1 0 1 1 0 1 ED
0 1 1 0 1 1 1 0 6E 1 1 1 0 1 1 1 0 EE
? 0 1 1 0 1 1 1 1 6F 1 1 1 0 1 1 1 1 EF
0 1 1 1 0 0 0 0 70 0 1 1 1 1 0 0 0 0 F0
0 1 1 1 0 0 0 1 71 1 1 1 1 1 0 0 0 1 F1
0 1 1 1 0 0 1 0 72 2 1 1 1 1 0 0 1 0 F2
0 1 1 1 0 0 1 1 73 3 1 1 1 1 0 0 1 1 F3
0 1 1 1 0 1 0 0 74 4 1 1 1 1 0 1 0 0 F4
0 1 1 1 0 1 0 1 75 5 1 1 1 1 0 1 0 1 F5
0 1 1 1 0 1 1 0 76 6 1 1 1 1 0 1 1 0 F6
0 1 1 1 0 1 1 1 77 7 1 1 1 1 0 1 1 1 F7
0 1 1 1 1 0 0 0 78 8 1 1 1 1 1 0 0 0 F8
䊱 0 1 1 1 1 0 0 1 79 9 1 1 1 1 1 0 0 1 F9
: 0 1 1 1 1 0 1 0 7A 1 1 1 1 1 0 1 0 FA
# 0 1 1 1 1 0 1 1 7B 1 1 1 1 1 0 1 1 FB
@ 0 1 1 1 1 1 0 0 7C 1 1 1 1 1 1 0 0 FC
䊱 0 1 1 1 1 1 0 1 7D 1 1 1 1 1 1 0 1 FD
 0 1 1 1 1 1 1 0 7E 1 1 1 1 1 1 1 0 FE
” 0 1 1 1 1 1 1 1 7F 1 1 1 1 1 1 1 1 FF

DLE  data-link escape ITB  end of intermediate transmission block


DUP  duplicate NUL  null
EM  end of medium PT  program tab
ENQ  enquiry RA  repeat to address
EOT  end of transmission SBA  set buffer address
ESC  escape SF  start field
ETB  end of transmission block SOH  start of heading
ETX  end of text SP  space
EUA  erase unprotected to address STX  start of text
FF  form feed SUB  substitute
FM  field mark SYN  synchronous
IC  insert cursor NAK  negative acknowledge


FIGURE 1 Typical bar code

3 BAR CODES

Bar codes are those omnipresent black-and-white striped stickers that seem to appear on
virtually every consumer item in the United States and most of the rest of the world. Al-
though bar codes were developed in the early 1970s, they were not used extensively un-
til the mid-1980s. A bar code is a series of vertical black bars separated by vertical white
bars (called spaces). The widths of the bars and spaces along with their reflective abili-
ties represent binary 1s and 0s, and combinations of bits identify specific items. In addi-
tion, bar codes may contain information regarding cost, inventory management and con-
trol, security access, shipping and receiving, production counting, document and order
processing, automatic billing, and many other applications. A typical bar code is shown
in Figure 1.
There are several standard bar code formats. The format selected depends on what
types of data are being stored, how the data are being stored, system performance, and
which format is most popular with business and industry. Bar codes are generally classified
as being discrete, continuous, or two-dimensional (2D).

Discrete code. A discrete bar code has spaces or gaps between characters. Therefore,
each character within the bar code is independent of every other character. Code 39
is an example of a discrete bar code.
Continuous code. A continuous bar code does not include spaces between characters.
An example of a continuous bar code is the Universal Product Code (UPC).
2D code. A 2D bar code stores data in two dimensions in contrast with a conventional
linear bar code, which stores data along only one axis. 2D bar codes have a larger
storage capacity than one-dimensional bar codes (typically 1 kilobyte or more per
data symbol).

3-1 Code 39
One of the most popular bar codes was developed in 1974 and called Code 39 (also called
Code 3 of 9 and 3 of 9 Code). Code 39 uses an alphanumeric code similar to the ASCII
code. Code 39 is shown in Table 4. Code 39 consists of 36 unique codes representing the
10 digits and 26 uppercase letters. There are seven additional codes used for special char-
acters, and an exclusive start/stop character coded as an asterisk (*). Code 39 bar codes are
ideally suited for making labels, such as name badges.
Each Code 39 character contains nine vertical elements (five bars and four spaces).
The logic condition (1 or 0) of each element is encoded in the width of the bar or space
(i.e., width modulation). A wide element, whether it be a bar or a space, represents a logic 1,
and a narrow element represents a logic 0. Three of the nine elements in each Code 39
character must be logic 1s, and the rest must be logic 0s. In addition, of the three logic
1s, two must be bars and one a space. Each character begins and ends with a black bar
with alternating white bars in between. Since Code 39 is a discrete code, all characters
are separated with an intercharacter gap, which is usually one character wide. The aster-
isks at the beginning and end of the bar code are start and stop characters, respectively.


Table 4 Code 39 Character Set

Character Binary Code Bars Spaces Check Sum


b8 b7 b6 b5 b4 b3 b2 b1 b0 b8b6b4b2b0 b7b5b3b1 Value

0 0 0 0 1 1 0 1 0 0 00110 0100 0
1 1 0 0 1 0 0 0 0 1 10001 0100 1
2 0 0 1 1 0 0 0 0 1 01001 0100 2
3 1 0 1 1 0 0 0 0 0 11000 0100 3
4 0 0 0 1 1 0 0 0 1 00101 0100 4
5 1 0 0 1 1 0 0 0 0 10100 0100 5
6 0 0 1 1 1 0 0 0 0 01100 0100 6
7 0 0 0 1 0 0 1 0 1 00011 0100 7
8 1 0 0 1 0 0 1 0 0 10010 0100 8
9 0 0 1 1 0 0 1 0 0 01010 0100 9
A 1 0 0 0 0 1 0 0 1 10001 0010 10
B 0 0 1 0 0 1 0 0 1 01001 0010 11
C 1 0 1 0 0 1 0 0 0 11000 0010 12
D 0 0 0 0 1 1 0 0 1 00101 0010 13
E 1 0 0 0 1 1 0 0 0 10100 0010 14
F 0 0 1 0 1 1 0 0 0 01100 0010 15
G 0 0 0 0 0 1 1 0 1 00011 0010 16
H 1 0 0 0 0 1 1 0 0 10010 0010 17
I 0 0 1 0 0 1 1 0 0 01010 0010 18
J 0 0 0 0 1 1 1 0 0 00110 0010 19
K 1 0 0 0 0 0 0 1 1 10001 0001 20
L 0 0 1 0 0 0 0 1 1 01001 0001 21
M 1 0 1 0 0 0 0 1 0 11000 0001 22
N 0 0 0 0 1 0 0 1 1 00101 0001 23
O 1 0 0 0 1 0 0 1 0 10100 0001 24
P 0 0 1 0 1 0 0 1 0 01100 0001 25
Q 0 0 0 0 0 0 1 1 1 00011 0001 26
R 1 0 0 0 0 0 1 1 0 10010 0001 27
S 0 0 1 0 0 0 1 1 0 01010 0001 28
T 0 0 0 0 1 0 1 1 0 00110 0001 29
U 1 1 0 0 0 0 0 0 1 10001 1000 30
V 0 1 1 0 0 0 0 0 1 01001 1000 31
W 1 1 1 0 0 0 0 0 0 11000 1000 32
X 0 1 0 0 1 0 0 0 1 00101 1000 33
Y 1 1 0 0 1 0 0 0 0 10100 1000 34
Z 0 1 1 0 1 0 0 0 0 01100 1000 35
- 0 1 0 0 0 0 1 0 1 00011 1000 36
. 1 1 0 0 0 0 1 0 0 10010 1000 37
space 0 1 1 0 0 0 1 0 0 01010 1000 38
* 0 1 0 0 1 0 1 0 0 00110 1000 —
$ 0 1 0 1 0 1 0 0 0 00000 1110 39
/ 0 1 0 1 0 0 0 1 0 00000 1101 40
+ 0 1 0 0 0 1 0 1 0 00000 1011 41
% 0 0 0 1 0 1 0 1 0 00000 0111 42

Figure 2 shows the Code 39 representation of the start/stop code (*) followed by an in-
tercharacter gap and then the Code 39 representation of the letter A.
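The width-modulation rule described above (wide element = logic 1 = 3X, narrow element = logic 0 = X, with elements alternating bar and space and every character beginning and ending with a bar) can be sketched in a few lines of Python. The 9-bit patterns below are taken from Table 4 with b8 first; everything else is illustrative.

```python
# Minimal Code 39 sketch (not a full encoder). Patterns are b8..b0 from Table 4.
CODE39 = {
    '*': '010010100',   # start/stop character
    'A': '100001001',
}

def element_widths(pattern, narrow=1, wide=3):
    """Map a 9-bit Code 39 pattern to (element, width) pairs.
    Even positions (b8, b6, b4, b2, b0) are bars; odd positions are spaces."""
    elements = []
    for i, bit in enumerate(pattern):
        kind = 'bar' if i % 2 == 0 else 'space'
        elements.append((kind, wide if bit == '1' else narrow))
    return elements

# A one-character label is printed as *A*, with an intercharacter gap between characters.
for ch in '*A*':
    print(ch, element_widths(CODE39[ch]))
```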

3-2 Universal Product Code


The grocery industry developed the Universal Product Code (UPC) sometime in the early
1970s to identify their products. The National Association of Food Chains officially
adopted the UPC code in 1974. Today UPC codes are found on virtually every grocery item
from a candy bar to a can of beans.

FIGURE 2 Code 39 bar code (start/stop character *, intercharacter gap, then the character A;
X = width of narrow bar or space, 3X = width of wide bar or space)

Figures 3a, b, and c show the character set, label format, and sample bit patterns for the
standard UPC code. Unlike Code 39, the UPC code is a continuous code since there are no in-
tercharacter spaces. Each UPC label contains a 12-digit number. The two long bars shown in
Figure 3b on the outermost left- and right-hand sides of the label are called the start guard pat-
tern and the stop guard pattern, respectively. The start and stop guard patterns consist of a 101
(bar-space-bar) sequence, which is used to frame the 12-digit UPC number. The left and right
halves of the label are separated by a center guard pattern, which consists of two long bars in
the center of the label (they are called long bars because they are physically longer than the
other bars on the label). The two long bars are separated with a space between them and have
spaces on both sides of the bars. Therefore, the UPC center guard pattern is 01010 as shown
in Figure 3b.The first six digits of the UPC code are encoded on the left half of the label (called
the left-hand characters), and the last six digits of the UPC code are encoded on the right half
(called the right-hand characters). Note in Figure 3a that there are two binary codes for each
character. When a character appears in one of the first six digits of the code, it uses a left-hand
code, and when a character appears in one of the last six digits, it uses a right-hand code. Note
that the right-hand code is simply the complement of the left-hand code. For example, if the
second and ninth digits of a 12-digit code UPC are both 4s, the digit is encoded as a 0100011
in position 2 and as a 1011100 in position 9. The UPC code for the 12-digit code 012345
543210 is

0001101 0011001 0010011 0111101 0100011 0110001 1001110 1011100 1000010 1101100 1100110 1110010
   0       1       2       3       4       5       5       4       3       2       1       0
               left-hand codes                                   right-hand codes

The first left-hand digit in the UPC code is called the UPC number system character,
as it identifies how the UPC symbol is used. Table 5 lists the 10 UPC number system char-
acters. For example, the UPC number system character 5 indicates that the item is intended
to be used with a coupon. The other five left-hand characters are data characters. The first
five right-hand characters are data characters, and the sixth right-hand character is a check
character, which is used for error detection. The decimal value of the number system char-
acter is always printed to the left of the UPC label, and on most UPC labels the decimal
value of the check character is printed on the right side of the UPC label.
With UPC codes, the width of the bars and spaces does not correspond to logic 1s
and 0s. Instead, the digits 0 through 9 are encoded into a combination of two variable-


UPC Character Set

Left-hand character    Decimal digit    Right-hand character

0001101 0 1110010
0011001 1 1100110
0010011 2 1101100
0111101 3 1000010
0100011 4 1011100
0110001 5 1001110
0101111 6 1010000
0111011 7 1000100
0110111 8 1001000
0001011 9 1110100

(a)

(b) Label format: start guard pattern (101), number system character, five left-hand data
    characters (35 bits), center guard pattern (01010), five right-hand data characters
    (35 bits), check character, stop guard pattern (101)
(c) Bit sequence for the digit 4: left-hand character 0 1 0 0 0 1 1, right-hand character 1 0 1 1 1 0 0

FIGURE 3 (a) UPC version A character set; (b) UPC label format; (c) left- and right-hand bit
sequence for the digit 4

width bars and two variable-width spaces that occupy the equivalent of seven bit positions.
Figure 3c shows the variable-width code for the UPC character 4 when used in one of the
first six digit positions of the code (i.e., left-hand bit sequence) and when used in one of the
last six digit positions of the code (i.e., right-hand bit sequence). A single bar (one bit po-
sition) represents a logic 1, and a single space represents a logic 0. However, close exami-
nation of the UPC character set in Table 5 will reveal that all UPC digits are comprised of
bit patterns that yield two variable-width bars and two variable-width spaces, with the bar
and space widths ranging from one to four bits. For the UPC character 4 shown in Figure
3c, the left-hand character is comprised of a one-bit space followed in order by a one-bit
bar, a three-bit space, and a two-bit bar. The right-hand character is comprised of a one-bit
bar followed in order by a one-bit space, a three-bit bar, and a two-bit space.


Table 5 UPC Number System Characters

Character Intended Use

0 Regular UPC codes


1 Reserved for future use
2 Random-weight items that are symbol marked at the store
3 National Drug Code and National Health Related Items Code
4 Intended to be used without code format restrictions and with
check digit protection for in-store marking of nonfood items
5 For use with coupons
6 Regular UPC codes
7 Regular UPC codes
8 Reserved for future use
9 Reserved for future use

0 0 0 1 1 0 1 1 1 1 0 0 1 0

Left-hand version of the character 0 Right-hand version of the character 0

FIGURE 4 UPC character 0

Example 1
Determine the UPC label structure for the digit 0.
Solution From Figure 3a, the binary sequence for the digit 0 in the left-hand character field is
0001101, and the binary sequence for the digit 0 in the right-hand character field is 1110010.
The left-hand sequence is comprised of three successive 0s, followed by two 1s, one 0, and one 1.
The three successive 0s are equivalent to a space three bits long. The two 1s are equivalent to a bar
two bits long. The single 0 and single 1 are equivalent to a space and a bar, each one bit long.
The right-hand sequence is comprised of three 1s followed by two 0s, a 1, and a 0. The three
1s are equivalent to a bar three bits long. The two 0s are equivalent to a space two bits long. The sin-
gle 1 and single 0 are equivalent to a bar and a space, each one bit long. The UPC pattern for the
digit 0 is shown in Figure 4.
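Pulling the pieces together, the sketch below assembles the complete bit pattern for a 12-digit UPC-A number from the left-hand codes of Figure 3a (right-hand codes are their bit-by-bit complements) and the three guard patterns. The digit string is the 012345 543210 example used earlier; the function names are illustrative.

```python
# Left-hand 7-bit codes for the digits 0-9, from Figure 3a.
LEFT = ['0001101', '0011001', '0010011', '0111101', '0100011',
        '0110001', '0101111', '0111011', '0110111', '0001011']

def right_hand(code):
    """A right-hand code is the bit-by-bit complement of the left-hand code."""
    return ''.join('1' if b == '0' else '0' for b in code)

def upc_bits(digits):
    """Start guard + six left-hand digits + center guard + six right-hand digits + stop guard."""
    assert len(digits) == 12
    left = ''.join(LEFT[int(d)] for d in digits[:6])
    right = ''.join(right_hand(LEFT[int(d)]) for d in digits[6:])
    return '101' + left + '01010' + right + '101'

pattern = upc_bits('012345543210')
print(len(pattern))   # 95 bits: 84 data bits (12 digits x 7 bits) plus 11 guard bits
print(pattern)
```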

4 ERROR CONTROL

A data communications circuit can be as short as a few feet or as long as several thousand
miles, and the transmission medium can be as simple as a pair of wires or as complex as a
microwave, satellite, or optical fiber communications system. Therefore, it is inevitable that
errors will occur, and it is necessary to develop and implement error-control procedures.
Transmission errors are caused by electrical interference from natural sources, such as
lightning, as well as from man-made sources, such as motors, generators, power lines, and
fluorescent lights.
Data communications errors can be generally classified as single bit, multiple bit, or
burst. Single-bit errors are when only one bit within a given data string is in error. Single-bit
errors affect only one character within a message. A multiple-bit error is when two or more
nonconsecutive bits within a given data string are in error. Multiple-bit errors can affect one or
more characters within a message. A burst error is when two or more consecutive bits within a
given data string are in error. Burst errors can affect one or more characters within a message.


Error performance is the rate at which errors occur, which can be described as either
an expected or an empirical value. The theoretical (mathematical) expectation of the rate at
which errors will occur is called probability of error (P[e]), whereas the actual historical
record of a system's error performance is called bit error rate (BER). For example, if a
system has a P(e) of 10^-5, this means that mathematically the system can expect to experience
one bit error for every 100,000 bits transported through the system (10^-5 = 1/10^5 =
1/100,000). If a system has a BER of 10^-5, this means that in the past there was one bit
error for every 100,000 bits transported. Typically, a BER is measured and then compared
with the probability of error to evaluate system performance. Error control can be divided
into two general categories: error detection and error correction.
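The distinction between the expected value P(e) and the measured BER can be sketched with a few lines of arithmetic (the numbers are illustrative only):

```python
def expected_errors(p_e, bits):
    """Expected number of bit errors: probability of error times bits transported."""
    return p_e * bits

def bit_error_rate(errors_counted, bits):
    """Empirical BER: errors actually counted divided by bits actually transported."""
    return errors_counted / bits

bits = 1_000_000
print(expected_errors(1e-5, bits))   # roughly 10 errors expected at P(e) = 10^-5
print(bit_error_rate(12, bits))      # 1.2e-05 measured BER
```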

5 ERROR DETECTION

Error detection is the process of monitoring data transmission and determining when errors
have occurred. Error-detection techniques neither correct errors nor identify which bits are
in error—they indicate only when an error has occurred. The purpose of error detection is
not to prevent errors from occurring but to prevent undetected errors from occurring.
The most common error-detection techniques are redundancy checking, which in-
cludes vertical redundancy checking, checksum, longitudinal redundancy checking, and
cyclic redundancy checking.

5-1 Redundancy Checking


Duplicating each data unit for the purpose of detecting errors is a form of error detection
called redundancy. Redundancy is an effective but rather costly means of detecting errors,
especially with long messages. It is much more efficient to add bits to data units that check
for transmission errors. Adding bits for the sole purpose of detecting errors is called
redundancy checking. There are four basic types of redundancy checks: vertical redundancy
checking, checksums, longitudinal redundancy checking, and cyclic redundancy checking.
5-1-1 Vertical redundancy checking. Vertical redundancy checking (VRC) is
probably the simplest error-detection scheme and is generally referred to as character par-
ity or simply parity. With character parity, each character has its own error-detection bit
called the parity bit. Since the parity bit is not actually part of the character, it is considered
a redundant bit. An n-character message would have n redundant parity bits. Therefore, the
number of error-detection bits is directly proportional to the length of the message.
With character parity, a single parity bit is added to each character to force the total
number of logic 1s in the character, including the parity bit, to be either an odd number (odd
parity) or an even number (even parity). For example, the ASCII code for the letter C is 43
hex, or P1000011 binary, where the P bit is the parity bit. There are three logic 1s in the
code, not counting the parity bit. If odd parity is used, the P bit is made a logic 0, keeping
the total number of logic 1s at three, which is an odd number. If even parity is used, the P
bit is made a logic 1, making the total number of logic 1s four, which is an even number.
The primary advantage of parity is its simplicity. The disadvantage is that when an
even number of bits are received in error, the parity checker will not detect them because
when the logic condition of an even number of bits is changed, the parity of the character
remains the same. Consequently, over a long time, parity will theoretically detect only 50%
of the transmission errors (this assumes an equal probability that an even or an odd number
of bits could be in error).
Example 2
Determine the odd and even parity bits for the ASCII character R.
Solution The hex code for the ASCII character R is 52, which is P1010010 in binary, where P des-
ignates the parity bit.


For odd parity, the parity bit is a 0 because 52 hex contains three logic 1s, which is an odd num-
ber. Therefore, the odd-parity bit sequence for the ASCII character R is 01010010.
For even parity, the parity bit is 1, making the total number of logic 1s in the eight-bit sequence
four, which is an even number. Therefore, the even-parity bit sequence for the ASCII character R is
11010010.
Other forms of parity include marking parity (the parity bit is always a 1), no parity (the par-
ity bit is not sent or checked), and ignored parity (the parity bit is always a 0 bit if it is ignored). Mark-
ing parity is useful only when errors occur in a large number of bits. Ignored parity allows receivers
that are incapable of checking parity to communicate with devices that use parity.
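A short sketch of VRC generation following Example 2 is given below; the parity bit is placed in the b7 position ahead of the seven-bit ASCII code, and the function names are illustrative.

```python
def parity_bit(code, odd=True):
    """Parity bit that gives the 7-bit code an odd (or even) total number of 1s."""
    ones = bin(code).count('1')
    if odd:
        return 0 if ones % 2 == 1 else 1
    return 0 if ones % 2 == 0 else 1

def with_parity(ch, odd=True):
    """Return the 8-bit pattern: parity bit (b7) followed by the 7-bit ASCII code."""
    code = ord(ch) & 0x7F
    return (parity_bit(code, odd) << 7) | code

print(format(with_parity('R', odd=True), '08b'))    # 01010010, as in Example 2 (odd parity)
print(format(with_parity('R', odd=False), '08b'))   # 11010010 (even parity)
```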

5-1-2 Checksum. Checksum is another relatively simple form of redundancy error


checking where each character has a numerical value assigned to it. The characters within
a message are combined together to produce an error-checking character (checksum),
which can be as simple as the arithmetic sum of the numerical values of all the characters
in the message. The checksum is appended to the end of the message. The receiver repli-
cates the combining operation and determines its own checksum. The receiver’s checksum
is compared to the checksum appended to the message, and if they are the same, it is as-
sumed that no transmission errors have occurred. If the two checksums are different, a
transmission error has definitely occurred.
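A minimal checksum sketch, assuming the combining operation is the simple arithmetic sum mentioned above, truncated to eight bits (the text leaves the exact operation open):

```python
def checksum(message):
    """Arithmetic sum of the character values, truncated to eight bits."""
    return sum(message) % 256

def check(message, appended_checksum):
    """The receiver recomputes the checksum and compares it with the appended one."""
    return checksum(message) == appended_checksum

data = b'THE CAT'
cs = checksum(data)            # transmitter appends this to the end of the message
print(cs, check(data, cs))     # True means no error was detected
```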

5-1-3 Longitudinal redundancy checking. Longitudinal redundancy checking


(LRC) is a redundancy error detection scheme that uses parity to determine if a transmis-
sion error has occurred within a message and is therefore sometimes called message parity.
With LRC, each bit position has a parity bit. In other words, b0 from each character in the
message is XORed with b0 from all the other characters in the message. Similarly, b1, b2,
and so on are XORed with their respective bits from all the characters in the message. Es-
sentially, LRC is the result of XORing the “character codes” that make up the message,
whereas VRC is the XORing of the bits within a single character. With LRC, even parity is
generally used, whereas with VRC, odd parity is generally used.
The LRC bits are computed in the transmitter while the data are being sent and then
appended to the end of the message as a redundant character. In the receiver, the LRC is re-
computed from the data, and the recomputed LRC is compared to the LRC appended to the
message. If the two LRC characters are the same, most likely no transmission errors have
occurred. If they are different, one or more transmission errors have occurred.
Example 3 shows how VRC and LRC are calculated and how they can be used together.

Example 3
Determine the VRCs and LRC for the following ASCII-encoded message: THE CAT. Use odd parity
for the VRCs and even parity for the LRC.
Solution
Character T H E sp C A T LRC
Hex 54 48 45 20 43 41 54 2F
ASCII code b0 0 0 1 0 1 1 0 1
b1 0 0 0 0 1 0 0 1
b2 1 0 1 0 0 0 1 1
b3 0 1 0 0 0 0 0 1
b4 1 0 0 0 0 0 1 0
b5 0 0 0 1 0 0 0 1
b6 1 1 1 0 1 1 1 0
Parity bit b7 0 1 0 0 0 1 0 0
(VRC)


The LRC is 00101111 binary (2F hex), which is the character “/” in ASCII. Therefore, after the LRC
character is appended to the message, it would read “THE CAT/.”
The group of characters that comprise a message (i.e., THE CAT) is often called a block or
frame of data. Therefore, the bit sequence for the LRC is often called a block check sequence (BCS)
or frame check sequence (FCS).
With longitudinal redundancy checking, all messages (regardless of their length) have the same
number of error-detection characters. This characteristic alone makes LRC a better choice for systems
that typically send long messages.
Historically, LRC detects between 95% and 98% of all transmission errors. LRC will not de-
tect transmission errors when an even number of characters has an error in the same bit position. For
example, if b4 in an even number of characters is in error, the LRC is still valid even though multiple
transmission errors have occurred.
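The Example 3 computation can be reproduced with a short sketch: the VRC is the odd-parity bit of each character, and the LRC is the XOR of all the character codes (even parity in each bit column), appended as one extra character. The function names are illustrative.

```python
def vrc(code, odd=True):
    """Character parity bit (VRC) for a 7-bit code."""
    ones = bin(code).count('1')
    return int(ones % 2 == 0) if odd else int(ones % 2 == 1)

def lrc(message):
    """LRC: XOR of all the character codes in the message (even column parity)."""
    result = 0
    for code in message:
        result ^= code
    return result

message = b'THE CAT'
print([vrc(c) for c in message])      # odd-parity VRC bit for T, H, E, sp, C, A, T
print(format(lrc(message), '02X'))    # 2F, the '/' character, as in Example 3
```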

5-1-4 Cyclic redundancy checking. Probably the most reliable redundancy check-
ing technique for error detection is a convolutional coding scheme called cyclic redundancy
checking (CRC). With CRC, approximately 99.999% of all transmission errors are de-
tected. In the United States, the most common CRC code is CRC-16. With CRC-16, 16 bits
are used for the block check sequence. With CRC, the entire data stream is treated as a long
continuous binary number. Because the BCS is separate from the message but transported
within the same transmission, CRC is considered a systematic code. Cyclic block codes are
often written as (n, k) cyclic codes, where n = bit length of the transmission and k = bit length
of the message. Therefore, the length of the BCC in bits is
BCC = n - k
A CRC-16 block check character is the remainder of a binary division process. A
data message polynomial G(x) is divided by a unique generator polynomial function
P(x), the quotient is discarded, and the remainder is truncated to 16 bits and appended
to the message as a BCS. The generator polynomial must be a prime number (i.e., a
number divisible by only itself and 1). CRC-16 detects all single-bit errors, all double-
bit errors (provided the divisor contains at least three logic 1s), all odd numbers of bit
errors (provided the divisor contains a factor of 11, i.e., x + 1), all error bursts of 16 bits or less, and
99.9% of error bursts greater than 16 bits long. For randomly distributed errors, it is es-
timated that the likelihood of CRC-16 not detecting an error is 10^-14, which equates
to one undetected error every two years of continuous data transmission at a rate of
1.544 Mbps.
With CRC generation, the division is not accomplished with standard arithmetic di-
vision. Instead, modulo-2 division is used, where the remainder is derived from an exclu-
sive OR (XOR) operation. In the receiver, the data stream, including the CRC code, is di-
vided by the same generating function P(x). If no transmission errors have occurred, the
remainder will be zero. In the receiver, the message and CRC character pass through a block
check register. After the entire message has passed through the register, its contents should
be zero if the receive message contains no errors.
Mathematically, CRC can be expressed as
G(x)/P(x) = Q(x) + R(x)     (1)
where G(x) = message polynomial
      P(x) = generator polynomial
      Q(x) = quotient
      R(x) = remainder
The generator polynomial for CRC-16 is
P(x) = x^16 + x^15 + x^2 + x^0


[Figure 5 shows the CRC-16 generating circuit: a 16-stage shift register with an XOR gate at each bit position corresponding to a logic 1 term of the generating polynomial P(x) = x^16 + x^15 + x^2 + x^0; serial data are shifted in, and the BCC is taken from the register contents.]

FIGURE 5 CRC-16 generating circuit

The number of bits in the CRC code is equal to the highest exponent of the gener-
ating polynomial. The exponents identify the bit positions in the generating polynomial
that contain a logic 1. Therefore, for CRC-16, b16, b15, b2, and b0 are logic 1s, and all
other bits are logic 0s. The number of bits in a CRC character is always twice the num-
ber of bits in a data character (i.e., eight-bit characters use CRC-16, six-bit characters use
CRC-12, and so on).
Figure 5 shows the block diagram for a circuit that will generate a CRC-16 BCC. A
CRC generating circuit requires one shift register for each bit in the BCC. Note that there
are 16 shift registers in Figure 5. Also note that an XOR gate is placed at the output of the
shift registers for each bit position of the generating polynomial that contains a logic 1, ex-
cept for x0. The BCC is the content of the 16 registers after the entire message has passed
through the CRC generating circuit.

Example 4
Determine the BCS for the following data and CRC generating polynomials:
Data    G(x) = x^7 + x^5 + x^4 + x^2 + x^1 + x^0
             = 10110111
CRC     P(x) = x^5 + x^4 + x^1 + x^0
             = 110011

Solution First, G(x) is multiplied by the number of bits in the CRC code, which is 5:
x^5(x^7 + x^5 + x^4 + x^2 + x^1 + x^0) = x^12 + x^10 + x^9 + x^7 + x^6 + x^5 = 1011011100000
Then the result is divided by P(x):

1 1 0 1 0 1 1 1
1 1 0 0 1 1 | 1 0 1 1 0 1 1 1 0 0 0 0 0
1 1 0 0 1 1
1 1 1 1 0 1
1 1 0 0 1 1
1 1 1 0 1 0
1 1 0 0 1 1
1 0 0 1 0 0
1 1 0 0 1 1
1 0 1 1 1 0
1 1 0 0 1 1
1 1 1 0 1 0
1 1 0 0 1 1
0 1 0 0 1 = CRC


The CRC is appended to the data to give the following data stream:
G(x) CRC



1 0 1 1 0 1 1 1 0 1 0 0 1

At the receiver, the data are again divided by P(x) :

1 1 0 1 0 1 1 1
1 1 0 0 1 1 | 1 0 1 1 0 1 1 1 0 1 0 0 1
1 1 0 0 1 1
1 1 1 1 0 1
1 1 0 0 1 1
1 1 1 0 1 0
1 1 0 0 1 1
1 0 0 1 1 0
1 1 0 0 1 1
1 0 1 0 1 0
1 1 0 0 1 1
1 1 0 0 1 1
1 1 0 0 1 1
0 0 0 0 0 0  Remainder = 0,
which means there
were no transmis-
sion errors
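The modulo-2 division of Example 4 can be verified with the following Python sketch (an illustration only): the message bits are extended by five 0s (the degree of P(x)), the remainder of the XOR division is the CRC, and dividing the complete transmitted stream by the same P(x) leaves a remainder of zero.

    def crc_remainder(bits, divisor):
        # Modulo-2 (XOR) long division; both arguments are bit strings
        dividend = list(map(int, bits))
        div = list(map(int, divisor))
        for i in range(len(dividend) - len(div) + 1):
            if dividend[i] == 1:                      # only divide when the leading bit is 1
                for j in range(len(div)):
                    dividend[i + j] ^= div[j]         # XOR takes the place of subtraction
        return "".join(map(str, dividend[-(len(div) - 1):]))   # remainder = CRC

    data = "10110111"    # G(x) = x^7 + x^5 + x^4 + x^2 + x^1 + x^0
    poly = "110011"      # P(x) = x^5 + x^4 + x^1 + x^0
    crc = crc_remainder(data + "0" * (len(poly) - 1), poly)
    print(crc)                                  # 01001, as in Example 4
    print(crc_remainder(data + crc, poly))      # 00000: no transmission errors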

6 ERROR CORRECTION

Although detecting errors is an important aspect of data communications, determining what
to do with data that contain errors is another consideration. There are two basic types of er-
ror messages: lost message and damaged message. A lost message is one that never arrives
at the destination or one that arrives but is damaged to the extent that it is unrecognizable.
A damaged message is one that is recognized at the destination but contains one or more
transmission errors.
Data communications network designers have developed two basic strategies for han-
dling transmission errors: error-detecting codes and error-correcting codes. Error-de-
tecting codes include enough redundant information with each transmitted message to en-
able the receiver to determine when an error has occurred. Parity bits, block and frame
check characters, and cyclic redundancy characters are examples of error-detecting codes.
Error-correcting codes include sufficient extraneous information along with each message
to enable the receiver to determine when an error has occurred and which bit is in error.
Transmission errors can occur as single-bit errors or as bursts of errors, depending on
the physical processes that caused them. Having errors occur in bursts is an advantage when
data are transmitted in blocks or frames containing many bits. For example, if a typical
frame size is 10,000 bits and the system has a probability of error of 10^-4 (one bit error in
every 10,000 bits transmitted), independent bit errors would most likely produce an error
in every block. However, if errors occur in bursts of 1000, only one or two blocks out of
every 1000 transmitted would contain errors. The disadvantage of bursts of errors is they
are more difficult to detect and even more difficult to correct than isolated single-bit errors.
In the modern world of data communications, there are two primary methods used for er-
ror correction: retransmission and forward error correction.

6-1 Retransmission
Retransmission, as the name implies, is when a receive station requests the transmit station to re-
send a message (or a portion of a message) when the message is received in error. Because the
receive terminal automatically calls for a retransmission of the entire message, retransmission


is often called ARQ, which is an old two-way radio term that means automatic repeat request or
automatic retransmission request. ARQ is probably the most reliable method of error correction,
although it is not necessarily the most efficient. Impairments on transmission media often occur
in bursts. If short messages are used, the likelihood that impairments will occur during a trans-
mission is small. However, short messages require more acknowledgments and line turnarounds
than do long messages. Acknowledgments are when the recipient of data sends a short message
back to the sender acknowledging receipt of the last transmission. The acknowledgment can in-
dicate a successful transmission (positive acknowledgment) or an unsuccessful transmission
(negative acknowledgment). Line turnarounds are when a receive station becomes the transmit
station, such as when acknowledgments are sent or when retransmissions are sent in response
to a negative acknowledgment. Acknowledgments and line turnarounds for error control are
forms of overhead (data other than user information that must be transmitted). With long mes-
sages, less turnaround time is needed, although the likelihood that a transmission error will oc-
cur is higher than for short messages. It can be shown statistically that messages between 256
and 512 characters long are the optimum size for ARQ error correction.
There are two basic types of ARQ: discrete and continuous. Discrete ARQ uses ac-
knowledgments to indicate the successful or unsuccessful reception of data. There are two
basic types of acknowledgments: positive and negative. The destination station responds
with a positive acknowledgment when it receives an error-free message. The destination sta-
tion responds with a negative acknowledgment when it receives a message containing er-
rors to call for a retransmission. If the sending station does not receive an acknowledgment
after a predetermined length of time (called a time-out), it retransmits the message. This is
called retransmission after time-out.
Another type of ARQ, called continuous ARQ, can be used when messages are di-
vided into smaller blocks or frames that are sequentially numbered and transmitted in suc-
cession, without waiting for acknowledgments between blocks. Continuous ARQ allows
the destination station to asynchronously request the retransmission of a specific frame (or
frames) of data and still be able to reconstruct the entire message once all frames have been
successfully transported through the system. This technique is sometimes called selective
repeat, as it can be used to call for a retransmission of an entire message or only a portion
of a message.

6-2 Forward Error Correction


Forward error correction (FEC) is the only error-correction scheme that actually detects
and corrects transmission errors when they are received without requiring a retransmission.
With FEC, redundant bits are added to the message before transmission. When an error is
detected, the redundant bits are used to determine which bit is in error. Correcting the bit is
a simple matter of complementing it. The number of redundant bits necessary to correct er-
rors is much greater than the number of bits needed to simply detect errors. Therefore, FEC
is generally limited to one-, two-, or three-bit errors.
FEC is ideally suited for data communications systems when acknowledgments are
impractical or impossible, such as when simplex transmissions are used to transmit mes-
sages to many receivers or when the transmission, acknowledgment, and retransmission
time is excessive, for example when communicating to far away places, such as deep-space
vehicles. The purpose of FEC codes is to eliminate the time wasted for retransmissions.
However, the addition of the FEC bits to each message wastes time itself. Obviously, a
trade-off is made between ARQ and FEC, and system requirements determine which
method is best suited to a particular application. Probably the most popular error-correction
code is the Hamming code.

6-2-1 Hamming code. A mathematician named Richard W. Hamming, who was an
early pioneer in the development of error-detection and -correction procedures, developed


One data unit contains m + n bits

d1 d2 d3 d4 d5 d6 dm h1 h2 h3 hn

m data bits n Hamming bits

FIGURE 6 Data unit comprised of m character bits and n Hamming bits

the Hamming code while working at Bell Telephone Laboratories. The Hamming code is
an error-correcting code used for correcting transmission errors in synchronous data
streams. However, the Hamming code will correct only single-bit errors. It cannot correct
multiple-bit errors or burst errors, and it cannot identify errors that occur in the Hamming
bits themselves. The Hamming code, as with all FEC codes, requires the addition of over-
head to the message, consequently increasing the length of a transmission.
Hamming bits (sometimes called error bits) are inserted into a character at random lo-
cations. The combination of the data bits and the Hamming bits is called the Hamming code.
The only stipulation on the placement of the Hamming bits is that both the sender and the
receiver must agree on where they are placed. To calculate the number of redundant Ham-
ming bits necessary for a given character length, a relationship between the character bits
and the Hamming bits must be established. As shown in Figure 6, a data unit contains m
character bits and n Hamming bits. Therefore, the total number of bits in one data unit is
m + n. Since the Hamming bits must be able to identify which bit is in error, n Hamming bits
must be able to indicate at least m + n + 1 different codes. Of the m + n + 1 codes, one code in-
dicates that no errors have occurred, and the remaining m + n codes indicate the bit position
where an error has occurred. Therefore, m + n bit positions must be identified with n bits.
Since n bits can produce 2^n different codes, 2^n must be equal to or greater than m + n + 1.
Therefore, the number of Hamming bits is determined by the following expression:
2^n ≥ m + n + 1     (2)
where n = number of Hamming bits
      m = number of bits in each data character
A seven-bit ASCII character requires four Hamming bits (2^4 ≥ 7 + 4 + 1), which
could be placed at the end of the character bits, at the beginning of the character bits, or in-
terspersed throughout the character bits. Therefore, including the Hamming bits with
ASCII-coded data requires transmitting 11 bits per ASCII character, which equates to a
57% increase in the message length.

Example 5
For a 12-bit data string of 101100010010, determine the number of Hamming bits required, arbitrar-
ily place the Hamming bits into the data string, determine the logic condition of each Hamming bit,
assume an arbitrary single-bit transmission error, and prove that the Hamming code will successfully
detect the error.
Solution Substituting m = 12 into Equation 2, the number of Hamming bits is
for n = 4:  2^4 = 16 and m + n + 1 = 12 + 4 + 1 = 17
Because 16 < 17, four Hamming bits are insufficient:
for n = 5:  2^5 = 32 and m + n + 1 = 12 + 5 + 1 = 18
Because 32 > 18, five Hamming bits are sufficient, and a total of 17 bits make up the data stream (12
data plus five Hamming).


Arbitrarily placing five Hamming bits into bit positions 4, 8, 9, 13, and 17 yields
bit position 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1
H 1 0 1 H 1 0 0 H H 0 1 0 H 0 1 0
To determine the logic condition of the Hamming bits, express all bit positions that contain a logic 1
as a five-bit binary number and XOR them together:
Bit position Binary number
2 00010
6 00110
XOR 00100
12 01100
XOR 01000
14 01110
XOR 00110
16 10000
XOR 10110 = Hamming bits
b17 = 1, b13 = 0, b9 = 1, b8 = 1, b4 = 0
The 17-bit Hamming code is
H H H H H
1 1 0 1 0 1 0 0 1 1 0 1 0 0 0 1 0
Assume that during transmission, an error occurs in bit position 14. The received data stream is
1 1 0 0 0 1 0 0 1 1 0 1 0 0 0 1 0
(error in bit position 14)
At the receiver, to determine the bit position in error, extract the Hamming bits and XOR them with
the binary code for each data bit position that contains a logic 1:
Bit position Binary number
Hamming bits 10110
2 00010
XOR 10100
6 00110
XOR 10010
12 01100
XOR 11110
16 10000
XOR 01110 = 14
Therefore, bit position 14 contains an error.
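The procedure of Example 5 can also be expressed as a short Python sketch (illustrative only, using the same arbitrary Hamming-bit positions 4, 8, 9, 13, and 17): the Hamming bits are the XOR of the position numbers of every data bit that is a logic 1, and at the receiver the XOR of the Hamming bits with the positions of the received 1 data bits identifies the bit in error (0 meaning no error detected).

    HAMMING_POSITIONS = {4, 8, 9, 13, 17}     # arbitrary placement agreed on by both ends

    def hamming_value(word):
        # word maps each bit position (1..17) to its logic value
        value = 0
        for pos, bit in word.items():
            if pos not in HAMMING_POSITIONS and bit == 1:
                value ^= pos                  # XOR of the positions of all 1 data bits
        return value

    def encode(data_bits):
        # data_bits is a 12-character string, bit position 16 first
        data_positions = [p for p in range(17, 0, -1) if p not in HAMMING_POSITIONS]
        word = dict(zip(data_positions, map(int, data_bits)))
        h = hamming_value(word)
        for i, pos in enumerate(sorted(HAMMING_POSITIONS, reverse=True)):
            word[pos] = (h >> (4 - i)) & 1    # place the five Hamming bits b17 b13 b9 b8 b4
        return word

    def error_position(word):
        # XOR the received Hamming bits with the positions of the received 1 data bits
        h = 0
        for i, pos in enumerate(sorted(HAMMING_POSITIONS, reverse=True)):
            h |= word[pos] << (4 - i)
        return h ^ hamming_value(word)        # 0 means no single-bit data error detected

    tx = encode("101100010010")
    print("".join(str(tx[p]) for p in range(17, 0, -1)))   # 11010100110100010, as in Example 5
    rx = dict(tx)
    rx[14] ^= 1                                            # single-bit error in position 14
    print(error_position(rx))                              # 14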

7 CHARACTER SYNCHRONIZATION

In essence, synchronize means to harmonize, coincide, or agree in time. Character syn-
chronization involves identifying the beginning and end of a character within a message.
When a continuous string of data is received, it is necessary to identify which bits belong
to which characters and which bits are the MSBs and LSBs of the character. In essence,
this is character synchronization: identifying the beginning and end of a character code. In
data communications circuits, there are two formats commonly used to achieve character
synchronization: asynchronous and synchronous.

7-1 Asynchronous Serial Data


The term asynchronous literally means “without synchronism,” which in data communica-
tions terminology means “without a specific time reference.” Asynchronous data transmis-


[Figure 7 shows the asynchronous data format: one start bit (logic 0), five to eight data bits transmitted LSB first, an optional parity bit (odd or even), and 1, 1.5, or 2 stop bits (logic 1), with the start bit transmitted first in time.]

FIGURE 7 Asynchronous data format

sion is sometimes called start-stop transmission because each data character is framed be-
tween start and stop bits. The start and stop bits identify the beginning and end of the char-
acter, so the time gaps between characters do not present a problem. For asynchronously
transmitted serial data, framing characters individually with start and stop bits is sometimes
said to occur on a character-by-character basis.
Figure 7 shows the format used to frame a character for asynchronous serial data
transmission. The first bit transmitted is the start bit, which is always a logic 0. The char-
acter bits are transmitted next, beginning with the LSB and ending with the MSB. The data
character can contain between five and eight bits. The parity bit (if used) is transmitted di-
rectly after the MSB of the character. The last bit transmitted is the stop bit, which is always
a logic 1, and there can be either one, one and a half, or two stop bits. Therefore, a data char-
acter may be comprised of between seven and 11 bits.
A logic 0 is used for the start bit because an idle line condition (no data transmis-
sion) on a data communications circuit is identified by the transmission of continuous
logic 1s (called idle line 1s). Therefore, the start bit of a character is identified by a high-
to-low transition in the received data, and the bit that immediately follows the start bit is
the LSB of the character code. All stop bits are logic 1s, which guarantees a high-to-low
transition at the beginning of each character. After the start bit is detected, the data and par-
ity bits are clocked into the receiver. If data are transmitted in real time (i.e., as the opera-
tor types data into the computer terminal), the number of idle line 1s between each char-
acter will vary. During this dead time, the receiver will simply wait for the occurrence of
another start bit (i.e., a high-to-low transition) before clocking in the next character.
With asynchronous data, it is not necessary that the transmit and receive clocks be
continuously synchronized; however, their frequencies should be close, and they should be
synchronized at the beginning of each character. When the transmit and receive clocks are
substantially different, a condition called clock slippage may occur. If the transmit clock is
substantially lower than the receive clock, underslipping occurs. If the transmit clock is
substantially higher than the receive clock, a condition called overslipping occurs. With
overslipping, the receive clock samples the receive data slower than the bit rate. Conse-
quently, each successive sample occurs later in the bit time until finally a bit is completely
skipped. Obviously, both slipping over and slipping under produce errors. However, the
errors are somewhat self-inflicted, as they occur in the receiver and are not a result of an
impairment that occurred during transmission.
Example 6
For the following sequence of bits, identify the ASCII-encoded character, the start and stop bits, and
the parity bits (assume even parity and two stop bits):


Solution (time increases from right to left; the first bit transmitted is at the right)

1 1 1 1 1 1 0 1 0 0 0 0 0 1 0 1 1 1 1 0 0 0 1 0 0 0

Reading from the right, each character is framed by one start bit (0), seven ASCII data bits
transmitted LSB first, a parity bit, and two stop bits (1 1). The first character transmitted is
44 hex (D) and the second is 41 hex (A); the 1s at the far left are idle line 1s.
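The framing assumed in Example 6 can be sketched in a few lines of Python (illustrative only): one start bit (0), seven ASCII bits transmitted LSB first, an even-parity bit, and two stop bits, with the bits listed in the order transmitted.

    def frame_async(char, stop_bits=2):
        # Bits are listed in the order transmitted: start bit first, then LSB-first data
        code = ord(char) & 0x7F
        data = [(code >> i) & 1 for i in range(7)]         # b0 (LSB) transmitted first
        parity = sum(data) % 2                             # even parity
        return [0] + data + [parity] + [1] * stop_bits     # start, data, parity, stop bits

    def deframe_async(bits):
        # Recover the seven-bit character and check even parity
        data, parity = bits[1:8], bits[8]
        code = sum(bit << i for i, bit in enumerate(data))
        return chr(code), (sum(data) % 2) == parity

    for ch in "AD":
        bits = frame_async(ch)
        print(ch, hex(ord(ch)), bits, deframe_async(bits))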

7-2 Synchronous Serial Data


Synchronous data generally involves transporting serial data at relatively high speeds in
groups of characters called blocks or frames. Therefore, synchronous data are not sent in
real time. Instead, a message is composed or formulated and then the entire message is
transmitted as a single entity with no time lapses between characters. With synchronous
data, rather than frame each character independently with start and stop bits, a unique se-
quence of bits, sometimes called a synchronizing (SYN) character, is transmitted at the be-
ginning of each message. For synchronously transmitted serial data, framing characters in
blocks is sometimes said to occur on a block-by-block basis. For example, with ASCII code,
the SYN character is 16 hex. The receiver disregards incoming data until it receives one or
more SYN characters. Once the synchronizing sequence is detected, the receiver clocks in
the next eight bits and interprets them as the first character of the message. The receiver
continues clocking in bits, interpreting them in groups of eight until it receives another
unique character that signifies the end of the message. The end-of-message character varies
with the type of protocol being used and what type of message it is associated with. With
synchronous data, the transmit and receive clocks must be synchronized because character
synchronization occurs only once at the beginning of a message.
With asynchronous data, two or three bits are added to each character
(one start bit and either one, one and a half, or two stop bits). These bits are additional over-
head and, thus, reduce the efficiency of the transmission (i.e., the ratio of information bits
to total transmitted bits). Synchronous data generally has two SYN characters (16 bits of
overhead) added to each message. Therefore, asynchronous data are more efficient for short
messages, and synchronous data are more efficient for long messages.
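As a rough illustration of the idea rather than of any particular protocol, the Python sketch below scans a received synchronous bit stream for the ASCII SYN character (16 hex) and then clocks the remaining bits in as eight-bit characters; parity checking is omitted for simplicity, and the bits of each character are assumed to arrive MSB first.

    SYN = 0x16   # ASCII synchronizing character

    def bits_to_byte(bits):
        # Assemble eight received bits, MSB first, into a character code
        value = 0
        for bit in bits:
            value = (value << 1) | bit
        return value

    def receive_sync(stream):
        # Discard bits until a SYN character is seen, then group the rest by eight
        for start in range(len(stream) - 7):
            if bits_to_byte(stream[start:start + 8]) == SYN:
                body = stream[start + 8:]
                return [bits_to_byte(body[i:i + 8]) for i in range(0, len(body) - 7, 8)]
        return []

    # Example: a SYN character followed by the ASCII codes for "OK"
    message = [0, 0, 0, 1, 0, 1, 1, 0] + [0, 1, 0, 0, 1, 1, 1, 1] + [0, 1, 0, 0, 1, 0, 1, 1]
    print([chr(c) for c in receive_sync(message)])   # ['O', 'K']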

Example 7
For the following string of ASCII-encoded characters, identify each character (assume odd parity):
Solution (time increases from right to left; the first bit transmitted is at the right)

0 1 0 0 1 1 1 1 0 1 0 1 0 1 0 0 0 0 0 1 0 1 1 0 1 1

Each character is sent as seven ASCII bits plus an odd-parity bit, with no start or stop bits.
The three characters contained in the stream are, in order of transmission, the SYN
character (16 hex), T (54 hex), and O (4F hex).


8 DATA COMMUNICATIONS HARDWARE

Digital information sources, such as personal computers, communicate with each other us-
ing the POTS (plain old telephone system) telephone network in a manner very similar to
the way analog information sources, such as human conversations, communicate with each
other using the POTS telephone network. With both digital and analog information sources,
special devices are necessary to interface the sources to the telephone network.
Figure 8 shows a comparison between human speech (analog) communications and
computer data (digital) communications using the POTS telephone network. Figure 8a shows
how two humans communicate over the telephone network using standard analog telephone
sets. The telephone sets interface human speech signals to the telephone network and vice
versa. At the transmit end, the telephone set converts acoustical energy (information) to
electrical energy and, at the receive end, the telephone set converts electrical energy back to
acoustical energy. Figure 8b shows how digital data are transported over the telephone net-
work. At the transmitting end, a telco interface converts digital data from the transceiver to
analog electrical energy, which is transported through the telephone network. At the re-
ceiving end, a telco interface converts the analog electrical energy received from the tele-
phone network back to digital data.
In simplified terms, a data communications system is comprised of three basic ele-
ments: a transmitter (source), a transmission path (data channel), and a receiver (destina-
tion). For two-way communications, the transmission path would be bidirectional and the
source and destination interchangeable. Therefore, it is usually more appropriate to de-
scribe a data communications system as connecting two endpoints (sometimes called
nodes) through a common communications channel. The two endpoints may not possess
the same computing capabilities; however, they must be configured with the same basic
components. Both endpoints must be equipped with special devices that perform unique
functions, make the physical connection to the data channel, and process the data before
they are transmitted and after they have been received. Although the special devices are

[Figure 8 compares the two cases: in (a), two humans communicate through telephone sets that convert acoustical energy to electrical energy at the transmit end and back to acoustical energy at the receive end; in (b), two transceivers communicate through telco interfaces that convert digital data to analog electrical energy and back, with the telephone network in between.]

FIGURE 8 Telephone communications network: (a) human communications; (b) digital data communications


sometimes implemented as a single unit, it is generally easier to describe them as separate
entities. In essence, all endpoints must have three fundamental components: data terminal
equipment, data communications equipment, and a serial interface.

8-1 Data Terminal Equipment


Data terminal equipment (DTE) can be virtually any binary digital device that generates,
transmits, receives, or interprets data messages. In essence, a DTE is where information
originates or terminates. DTEs are the data communications equivalent to the person in a
telephone conversation. DTEs contain the hardware and software necessary to establish and
control communications between endpoints in a data communications system; however,
DTEs seldom communicate directly with other DTEs. Examples of DTEs include video
display terminals, printers, and personal computers.
Over the past 50 years, data terminal equipment has evolved from simple on-line
printers to sophisticated high-level computers. Data terminal equipment includes the con-
cept of terminals, clients, hosts, and servers. Terminals are devices used to input, output,
and display information, such as keyboards, printers, and monitors. A client is basically a
modern-day terminal with enhanced computing capabilities. Hosts are high-powered, high-
capacity mainframe computers that support terminals. Servers function as modern-day
hosts except with lower storage capacity and less computing capability. Servers and hosts
maintain local databases and programs and distribute information to clients and terminals.

8-2 Data Communications Equipment


Data communications equipment (DCE) is a general term used to describe equipment that in-
terfaces data terminal equipment to a transmission channel, such as a digital T1 carrier or an
analog telephone circuit. The output of a DTE can be digital or analog, depending on the ap-
plication. In essence, a DCE is a signal conversion device, as it converts signals from a DTE
to a form more suitable to be transported over a transmission channel. A DCE also converts
those signals back to their original form at the receive end of a circuit. DCEs are transparent
devices responsible for transporting bits (1s and 0s) between DTEs through a data communi-
cations channel. The DCEs neither know nor do they care about the content of the data.
There are several types of DCEs, depending on the type of transmission channel used.
Common DCEs are channel service units (CSUs), digital service units (DSUs), and data
modems. CSUs and DSUs are used to interface DTEs to digital transmission channels. Data
modems are used to interface DTEs to analog telephone networks. Because data commu-
nications channels are terminated at each end in a DCE, DCEs are sometimes called data
circuit-terminating equipment (DCTE). Data modems are described in subsequent sections
of this chapter.

9 DATA COMMUNICATIONS CIRCUITS

A data modem is a DCE used to interface a DTE to an analog telephone circuit commonly
called a POTS. Figure 9a shows a simplified diagram for a two-point data communications
circuit using a POTS link to interconnect the two endpoints (endpoint A and endpoint B).
As shown in the figure, a two-point data communications circuit is comprised of the seven
basic components:

1. DTE at endpoint A
2. DCE at endpoint A
3. DTE/DCE interface at endpoint A
4. Transmission path between endpoint A and endpoint B
5. DCE at endpoint B
6. DTE at endpoint B
7. DTE/DCE interface at endpoint B


[Figure 9 shows endpoints A and B, each consisting of a DTE connected through a DTE/DCE interface to a DCE, with the POTS telephone network providing the transmission path between the two DCEs; in (b) the DTE is a personal computer and the DCE is a modem.]

FIGURE 9 Two-point data communications circuit: (a) DTE/DCE representation; (b) device representation

The DTEs can be terminal devices, personal computers, mainframe computers, front-
end processors, printers, or virtually any other piece of digital equipment. If a digital com-
munications channel were used, the DCE would be a CSU or a DSU. However, because the
communications channel is a POTS link, the DCE is a data modem.
Figure 9b shows the same equivalent circuit as is shown in Figure 9a, except the DTE
and DCE have been replaced with the actual devices they represent—the DTE is a personal
computer, and the DCE is a modem. In most modern-day personal computers for home use,
the modem is simply a card installed inside the computer.
Figure 10 shows the block diagram for a centralized multipoint data communications
circuit using several POTS data communications links to interconnect three endpoints. The
circuit is arranged in a bus topology with central control provided by a mainframe computer
(host) at endpoint A. The host station is sometimes called the primary station. Endpoints B
and C are called secondary stations. The primary station is responsible for establishing and
maintaining the data link and for ensuring an orderly flow of data between itself and each
of the secondary stations. Data flow is controlled by an applications program stored in the
mainframe computer at the primary station.
At the primary station, there is a mainframe computer, a front-end processor (DTE),
and a data modem (DCE). At each secondary station, there is a modem (DCE), a line control
unit (DTE), and a cluster of terminal devices (personal computers, printers, and so on). The
line control unit at the secondary stations is referred to as a cluster controller, as it controls
data flow between several terminal devices and the data communications channel. Line con-
trol units at secondary stations are sometimes called station controllers (STACOs), as they
control data flow to and from all the data communications equipment located at that station.
For simplicity, Figure 10 only shows one data circuit served by the mainframe com-
puter at the primary station. However, there can be dozens of different circuits served by
one mainframe computer. Therefore, the primary station line control unit (i.e., the front-end
processor) must have enhanced capabilities for storing, processing, and retransmitting data
it receives from all secondary stations on all the circuits it serves. The primary station
stores software for database management of all the circuits it serves. Obviously, the duties


[Figure 10 shows the primary station (endpoint A), where a mainframe computer (host) connects through a parallel interface to a front-end processor (DTE), which connects through a serial interface to a modem (DCE). POTS links form the transmission medium to the secondary stations (endpoints B and C), each with a modem (DCE), a line control unit (DTE) reached through a serial interface, and a cluster of terminal devices (PCs and a printer).]

FIGURE 10 Multipoint data communications circuit using POTS links

performed by the front-end processor at the primary station are much more involved than
the duties performed by the line control units at the secondary stations. The FEP directs data
traffic to and from many different circuits, which could all have different parameters (i.e.,
different bit rates, character codes, data formats, protocols, and so on). The LCU at the sec-
ondary stations directs data traffic between one data communications link and a relative few
terminal devices, which all transmit and receive data at the same speed and use the same
data-link protocol, character code, data format, and so on.

10 LINE CONTROL UNIT

As previously stated, a line control unit (LCU) is a DTE, and DTEs have several important
functions. At the primary station, the LCU is often called a FEP because it processes infor-
mation and serves as an interface between the host computer and all the data communica-
tions circuits it serves. Each circuit served is connected to a different port on the FEP. The
FEP directs the flow of input and output data between data communications circuits and
their respective application programs. The data interface between the mainframe computer
and the FEP transfers data in parallel at relatively high bit rates. However, data transfers be-
tween the modem and the FEP are accomplished in serial and at a much lower bit rate. The
FEP at the primary station and the LCU at the secondary stations perform parallel-to-serial


and serial-to-parallel conversions. They also house the circuitry that performs error detec-
tion and correction. In addition, data-link control characters are inserted and deleted in the
FEP and LCUs.
Within the FEP and LCUs, a single special-purpose integrated circuit performs many
of the fundamental data communications functions. This integrated circuit is called a
universal asynchronous receiver/transmitter (UART) if it is designed for asynchronous data
transmission, a universal synchronous receiver/transmitter (USRT) if it is designed for syn-
chronous data transmission, and a universal synchronous/asynchronous receiver/transmitter
(USART) if it is designed for either asynchronous or synchronous data transmission. All
three types of circuits specify general-purpose integrated-circuit chips located in an LCU
or FEP that allow DTEs to interface with DCEs. In modern-day integrated circuits, UARTs
and USRTs are often combined into a single USART chip that is probably more popular to-
day simply because it can be adapted to either asynchronous or synchronous data trans-
mission. USARTs are available in 24- to 64-pin dual in-line packages (DIPs).
UARTs, USRTs, and USARTs are devices that operate external to the central
processing unit (CPU) in a DTE and allow the DTE to communicate serially with other data
communications equipment, such as DCEs. They are also essential data communications
components in terminals, workstations, PCs, and many other types of serial data commu-
nications devices. In most modern computers, USARTs are normally included on the moth-
erboard and connected directly to the serial port. UARTs, USRTs, and USARTs designed
to interface to specific microprocessors often have unique manufacturer-specific names.
For example, Motorola manufactures a special purpose UART chip it calls an asynchronous
communications interface adapter (ACIA).

10-1 UART
A UART is used for asynchronous transmission of serial data between a DTE and a DCE.
Asynchronous data transmission means that an asynchronous data format is used, and there
is no clocking information transferred between the DTE and the DCE. The primary func-
tions performed by a UART are the following:
1. Parallel-to-serial data conversion in the transmitter and serial-to-parallel data con-
version in the receiver
2. Error detection by inserting parity bits in the transmitter and checking parity bits
in the receiver
3. Insert start and stop bits in the transmitter and detect and remove start and stop bits
in the receiver
4. Formatting data in the transmitter and receiver (i.e., combining items 1 through 3
in a meaningful sequence)
5. Provide transmit and receive status information to the CPU
6. Voltage level conversion between the DTE and the serial interface and vice versa
7. Provide a means of achieving bit and character synchronization
Transmit and receive functions can be performed by a UART simultaneously because
the transmitter and receiver have separate control signals and clock signals and share a bidi-
rectional data bus, which allows them to operate virtually independently of one another. In
addition, input and output data are double buffered, which allows for continuous data trans-
mission and reception.
Figure 11 shows a simplified block diagram of a line control unit showing the rela-
tionship between the UART and the CPU that controls the operation of the UART. The CPU
coordinates data transfer between the line-control unit (or FEP) and the modem. The CPU
is responsible for programming the UART’s control register, reading the UART’s status reg-
ister, transferring parallel data to and from the UART transmit and receive buffer registers,
providing clocking information to the UART, and facilitating the transfer of serial data be-
tween the UART and the modem.


[Figure 11 shows the CPU of the line control unit (or front-end processor) connected to the UART through a parallel data bus and an address bus with address decoder. The UART's internal registers include the transmit buffer register (serial output TSO to the DCE), the receive buffer register (serial input RSI from the DCE), the control register, and the status word register, with signals such as CRS, TDS, SWE, RDE, RDAR, TCP (transmit clock pulse), and TBMT (transmit buffer empty).]

FIGURE 11 Line control unit UART interface

[Figure 12 shows the UART transmitter: an eight-bit control word from the CPU loads the control register; parallel input data (TD7 through TD0) load the transmit buffer register; the data, parity, and stop-bit steering logic assembles the character (start bit, data bits, parity bit, stop bits) in the transmit shift register, which is clocked out serially on TSO at the TCP rate; the status word register reports TBMT to the CPU through SWE.]

FIGURE 12 UART transmitter block diagram


Table 6 UART Control Register Inputs

D7 and D6
Number of stop bits
NSB1 NSB2 No. of Bits
0 0 Invalid
0 1 1
1 0 1.5
1 1 2
D5 and D4
NPB (parity or no parity)
1 No parity bit (RPE disabled in receiver)
0 Insert parity bits in transmitter and check parity bits in receiver
POE (parity odd or even)
1 Even parity
0 Odd parity
D3 and D2
Character length
NDB1 NDB2 Bits per Word
0 0 5
0 1 6
1 0 7
1 1 8
D1 and D0
Receive clock (baud rate factor)
RC1 RC2 Clock Rate
0 0 Synchronous mode
0 1 1X
1 0 16X
1 1 32X

A UART can be divided into two functional sections: the transmitter and the receiver.
Figure 12 shows a simplified block diagram of a UART transmitter. Before transferring data
in either direction, an eight-bit control word must be programmed into the UART control reg-
ister to specify the nature of the data. The control word specifies the number of data bits per
character; whether a parity bit is included with each character and, if so, whether it is odd or
even parity; the number of stop bits inserted at the end of each character; and the receive clock
frequency relative to the transmit clock frequency. Essentially, the start bit is the only bit in
the UART that is not optional or programmable, as there is always one start bit, and it is al-
ways a logic 0. Table 6 shows the control-register coding format for a typical UART.
As specified in Table 6, the parity bit is optional and, if used, can be either odd or even.
To select parity, NPB is cleared (logic 0), and to exclude the parity bit, NPB is set (logic 1).
Odd parity is selected by clearing POE (logic 0), and even parity is selected by setting POE
(logic 1). The number of stop bits is established with the NSB1 and NSB2 bits and can be one,
one and a half, or two. The character length is determined by NDB1 and NDB2 and can be
five, six, seven, or eight bits long. The maximum character length is 11 bits (i.e., one start bit,
eight data bits, and two stop bits or one start bit, seven data bits, one parity bit, and two stop
bits). Using an 11-bit character format with ASCII encoding is sometimes called full ASCII.
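The coding of Table 6 can be applied in software; the sketch below is illustrative only and assumes a generic UART whose control register is laid out exactly as in Table 6. It builds the control word for seven data bits, odd parity, two stop bits, and a 16X receive clock.

    STOP_BITS = {1: 0b01, 1.5: 0b10, 2: 0b11}                    # NSB1/NSB2 (D7, D6)
    CHAR_LENGTH = {5: 0b00, 6: 0b01, 7: 0b10, 8: 0b11}           # NDB1/NDB2 (D3, D2)
    CLOCK_FACTOR = {"sync": 0b00, 1: 0b01, 16: 0b10, 32: 0b11}   # RC1/RC2 (D1, D0)

    def control_word(stop_bits, parity, char_length, clock):
        # parity is None (no parity bit), "odd", or "even"
        npb = 1 if parity is None else 0       # D5: 1 = no parity bit
        poe = 1 if parity == "even" else 0     # D4: 1 = even parity, 0 = odd parity
        return (STOP_BITS[stop_bits] << 6) | (npb << 5) | (poe << 4) \
               | (CHAR_LENGTH[char_length] << 2) | CLOCK_FACTOR[clock]

    word = control_word(stop_bits=2, parity="odd", char_length=7, clock=16)
    print(format(word, "08b"))    # 11001010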
Figure 13 shows three of the character formats possible with a UART. Figure 13a shows
an 11-bit data character comprised of one start bit, seven ASCII data bits, one odd-parity
bit, and two stop bits (i.e., full ASCII). Figure 13b shows a nine-bit data character com-
prised of one start bit, seven ARQ data bits, and one stop bit, and Figure 13c shows another
nine-bit data character comprised of one start bit, five Baudot data bits, one odd parity bit,
and two stop bits.
A UART also contains a status word register, which is an n-bit data register that
keeps track of the status of the UART’s transmit and receive buffer registers. Typical status


[Figure 13 shows three asynchronous character formats: (a) an 11-bit full ASCII character with one start bit, the uppercase letter V (56 hex), an odd-parity bit, and two stop bits; (b) a nine-bit ARQ character with one start bit, the uppercase letter M (51 hex), and one stop bit; (c) a nine-bit Baudot character with one start bit, the uppercase letter H (05 hex), an odd-parity bit, and two stop bits.]

FIGURE 13 Asynchronous characters: (a) ASCII character; (b) ARQ character; (c) Baudot character

conditions compiled by the status word register for the UART transmitter include the fol-
lowing conditions:

TBMT: transmit buffer empty. Transmit shift register has completed transmission of
a data character
RPE: receive parity error. Set when a received character has a parity error in it
RFE: receive framing error. Set when a character is received without any or with an
improper number of stop bits
ROR: receiver overrun. Set when a character in the receive buffer register is written
over by another receive character because the CPU failed to service an active con-
dition on RDA before the next character was received from the receive shift register
RDA: receive data available. A data character has been received and loaded into the
receive data register

10-1-1 UART transmitter. The operation of the typical UART transmitter shown
in Figure 12 is quite logical. However, before the UART can send or receive data, the
UART control register must be loaded with the desired mode instruction word. This is ac-
complished by the CPU in the DTE, which applies the mode instruction word to the con-
trol word bus and then activates the control-register strobe (CRS).
Figure 14 shows the signaling sequence that occurs between the CPU and the
UART transmitter. On receipt of an active status word enable (SWE) signal, the UART
sends a transmit buffer empty (TBMT) signal from the status word register to the CPU to
indicate that the transmit buffer register is empty and the UART is ready to receive more


[Figure 14 shows the transmit signal sequence between the CPU and the UART transmitter: (1) SWE from the CPU; (2) TBMT returned by the UART; (3) parallel data on TD7-TD0; (4) TDS; (5) serial data out on TSO, with TEOC (transmit end of character) internal to the UART.]

FIGURE 14 UART transmitter signal sequence

data. When the CPU senses an active condition of TBMT, it applies a parallel data char-
acter to the transmit data lines (TD7 through TD0) and strobes them into the transmit
buffer register with an active signal on the transmit data strobe (TDS) signal. The con-
tents of the transmit buffer register are transferred to the transmit shift register when the
transmit end-of-character (TEOC) signal goes active (the TEOC signal is internal to the
UART and simply tells the transmit buffer register when the transmit shift register is
empty and available to receive data). The data pass through the steering logic circuit,
where it picks up the appropriate start, stop, and parity bits. After data have been loaded
into the transmit shift register, they are serially outputted on the transmit serial output
(TSO) pin at a bit rate equal to the transmit clock (TCP) frequency. While the data in the
transmit shift register are serially clocked out of the UART, the CPU applies the next char-
acter to the input of the transmit buffer register. The process repeats until the CPU has
transferred all its data.
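The signal exchange can be pictured with a small software model; the class and method names below are hypothetical stand-ins for the SWE/TBMT/TDS handshake (they do not correspond to any real UART or driver interface) and simply show the CPU polling TBMT before strobing each character into the transmit buffer.

    class ToyUART:
        # Hypothetical stand-in for the UART transmitter registers (not a real device)
        def __init__(self):
            self.tbmt = True            # transmit buffer empty (status word register)
            self.sent = []              # characters that have passed through the shift register

        def read_status(self):          # corresponds to asserting SWE and reading TBMT
            return self.tbmt

        def load_buffer(self, char):    # corresponds to placing TD7-TD0 and pulsing TDS
            self.tbmt = False
            self.sent.append(char)      # the shift register takes the character (TEOC) ...
            self.tbmt = True            # ... so the buffer is immediately empty again here

    def cpu_transmit(uart, message):
        for char in message:
            while not uart.read_status():   # wait for TBMT before loading the next character
                pass
            uart.load_buffer(char)

    uart = ToyUART()
    cpu_transmit(uart, "DATA")
    print(uart.sent)                    # ['D', 'A', 'T', 'A']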

10-1-2 UART receiver. A simplified block diagram for a UART receiver is shown
in Figure 15. The number of stop bits and data bits and the parity bit parameters specified
for the UART receiver must be the same as those of the UART transmitter. The UART re-
ceiver ignores the reception of idle line 1s. When a valid start bit is detected by the start bit
verification circuit, the data character is clocked into the receive shift register. If parity is
used, the parity bit is checked in the parity checker circuit. After one complete data char-
acter is loaded into the shift register, the character is transferred in parallel into the receive
buffer register, and the receive data available (RDA) flag is set in the status word register.
The CPU reads the status register by activating the status word enable (SWE) signal and,
if RDA is active, the CPU reads the character from the receive buffer register by placing an
active signal on the receive data enable (RDE) pin. After reading the data, the CPU places
an active signal on the receive data available reset (RDAR) pin, which resets the RDA flag.
Meanwhile, the next character is received and clocked into the receive shift register, and the
process repeats until all the data have been received. Figure 16 shows the receive signaling
sequence that occurs between the CPU and the UART.

10-1-3 Start-bit verification circuit. With asynchronous data transmission, pre-
cise timing is less important than following an agreed-on format or pattern for the data.
Each transmitted data character must be preceded by a start bit and end with one or more


[Figure 15 shows the UART receiver: serial data (RSI) from the modem pass through the start bit verification circuit into the receive shift register, clocked by RCP; a parity checker circuit checks the parity bit; the character is transferred to the receive buffer register and read out in parallel on RD7-RD0 under control of RDE; the status word register holds RPE, RFE, RDA, and ROR and is accessed with SWE and cleared with RDAR.]

FIGURE 15 UART receiver block diagram

[Figure 16 shows the receive signal sequence between the UART receiver and the CPU: (1) a valid start bit is detected; (2) the data character is loaded serially into the receive shift register from RSI; (3) SWE; (4) RDA; (5) SWE; (6) status word (RPE, RFE, and ROR) transferred to the CPU; (7) RDE; (8) parallel data on RD7-RD0; (9) RDAR (receive data available reset).]

FIGURE 16 UART receive signal sequence


stop bits. Because data received by a UART have been transmitted from a distant UART
whose clock is asynchronous to the receive UART, bit synchronization is achieved by es-
tablishing a timing reference at the center of each start bit. Therefore, it is imperative that
a UART detect the occurrence of a valid start bit early in the bit cell and establish a timing
reference before it begins to accept data.
The primary function of the start bit verification circuit is to detect valid start bits,
which indicate the beginning of a data character. Figure 17a shows an example of how a
noise hit can be misinterpreted as a start bit. The input data consist of a continuous string

[Figure 17 illustrates start bit verification. In (a), with a 1X receive clock, a noise impulse on an idle line is mistaken for a start bit, and the idle line 1s that follow are clocked in as data. In (b), with a 16X receive clock, the receiver waits seven clock pulses after detecting a low and samples again; because the line has returned high, the impulse is ignored as an invalid start bit. In (c), the resample is still low, so a valid start bit is declared and the data bits b0, b1, and so on are then sampled once every 16 clock pulses.]

FIGURE 17 Start bit verification: (a) 1X RCP; (b) 16X RCP; (c) valid start bit


of idle line 1s, which are typically transmitted when there is no information. Idle line 1s are
interpreted by a receiver as continuous stop bits (i.e., no data). If a noise impulse occurs that
causes the receive data to go low at the same time the receiver clock is active, the receiver
will interpret the noise impulse as a start bit. If this happens, the receiver will misinterpret
the logic condition present during the next clock as the first data bit (b0) and the follow-
ing clock cycles as the remaining data bits (b1, b2, and so on). The likelihood of misinter-
preting noise hits as start bits can be reduced substantially by clocking the UART receiver
at a rate higher than the incoming data. Figure 17b shows the same situation as shown in
Figure 17a, except the receive clock pulse (RCP) is 16 times (16×) higher than the receive
serial data input (RSI). Once a low is detected, the UART waits seven clock cycles before
resampling the input data. Waiting seven clock cycles places the next sample very near the
center of the start bit. If the next sample detects a low, it assumes that a valid start bit has
been detected. If the data have reverted to the high condition, it is assumed that the high-
to-low transition was simply a noise pulse and, therefore, is ignored. Once a valid start bit
has been detected and verified (Figure 17c), the start bit verification circuit samples the in-
coming data once every 16 clock cycles, which essentially makes the sample rate equal to
the receive data rate (i.e., 16 RCP/16  RCP). The UART continues sampling the data
once every 16 clock cycles until the stop bits are detected, at which time the start bit ver-
ification circuit begins searching for another valid start bit. UARTs are generally pro-
grammed for receive clock rates of 16, 32, or 64 times the receive data rate (i.e., 16, 32,
and 64).
Another advantage of clocking a UART receiver at a rate higher than the actual re-
ceive data is to ensure that a high-to-low transition (valid start bit) is detected as soon as
possible. This ensures that once the start bit is detected, subsequent samples will occur
very near the center of each data bit. The difference in time between when a sample is
taken (i.e., when a data bit is clocked into the receive shift register) and the actual center
of a data bit is called the sampling error. Figure 18 shows a receive data stream sampled
at a rate 16 times higher (16 RCP) than the actual data rate (RCP). As the figure shows, the
start bit is not immediately detected. The difference in time between the beginning of a
start bit and when it is detected is called the detection error. The maximum detection er-
ror is equal to the time of one receive clock cycle (tcl = 1/Rcl). If the receive clock rate
equaled the receive data rate, the maximum detection error would approach the time of one
bit, which would mean that a start bit would not be detected until the very end of the bit
time. Obviously, the higher the receive clock rate, the earlier a start bit would be detected.

[Figure 18 shows a data stream sampled with a 16X receive clock: the difference between the beginning of the start bit and the sample that detects it is the detection error, and the offset between each subsequent sample and the center of its data bit is the sampling error.]

FIGURE 18 16X receive clock rate


Because of the detection error, successive samples occur slightly off from the center
of the data bit. This would not present a problem with synchronous clocks, as the sampling
error would remain constant from one sample to the next. However, with asynchronous
clocks, the magnitude of the sampling error for each successive sample would increase (the
clock would slip over or slip under the data), eventually causing a data bit to be either sam-
pled twice or not sampled at all, depending on whether the receive clock is higher or lower
than the transmit clock.
Figure 19 illustrates how sampling at a higher rate reduces the sampling error. Figures
19a and b show data sampled at a rate eight times the data rate (8×) and 16 times the data
rate (16×), respectively. It can be seen that increasing the sample rate moves the sample
time closer to the center of the data bit, thus decreasing the sampling error.
Placing stop bits at the end of each data character also helps reduce the clock slip-
page (sometimes called clock skew) problem inherent when using asynchronous trans-
mit and receive clocks. Start and stop bits force a high-to-low transition at the beginning
of each character, which essentially allows the receiver to resynchronize to the start bit
at the beginning of each data character. It should probably be mentioned that with
UARTs the data rates do not have to be the same in each direction of propagation (e.g.,
you could transmit data at 1200 bps and receive at 600 bps). However, the rate at which
data leave a transmitter must be the same as the rate at which data enter the receiver at
the other end of the circuit. If you transmit at 1200 bps, it must be received at the other
end at 1200 bps.
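The 16X sampling rule can be sketched in Python (illustrative only): after a high-to-low transition, the receiver counts seven clocks and resamples; a dip that has disappeared is treated as a noise impulse, while a level that is still low is accepted as a valid start bit, after which the line is sampled once every 16 clocks.

    def receive_16x(samples, data_bits=7):
        # samples holds one line level per receive clock (16 clocks per data bit)
        i = 1
        while i + 7 < len(samples):
            if samples[i - 1] == 1 and samples[i] == 0:     # possible start bit (high-to-low)
                if samples[i + 7] == 0:                     # resample near the center of the bit
                    center = i + 7
                    return [samples[center + 16 * (n + 1)] for n in range(data_bits)]
                # still high after seven clocks: the dip was a noise impulse, ignore it
            i += 1
        return None

    bit_time = 16
    line = [1] * 40                                  # idle line 1s
    line[10] = 0                                     # a one-clock noise impulse (ignored)
    for bit in [0] + [1, 0, 1, 1, 0, 0, 1]:          # start bit, then data bits LSB first
        line += [bit] * bit_time
    print(receive_16x(line))                         # [1, 0, 1, 1, 0, 0, 1]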

10-2 Universal Synchronous Receiver/Transmitter


A universal synchronous receiver/transmitter (USRT) is used for synchronous transmis-
sion of data between a DTE and a DCE. Synchronous data transmission means that a syn-
chronous data format is used, and clocking information is generally transferred between
the DTE and the DCE. A USRT performs the same basic functions as a UART, except for

[Figure 19 compares the sampling error for (a) an 8X receive clock, where the receiver waits three clocks after detecting the low before sampling again, and (b) a 16X receive clock, where it waits seven clocks; the higher clock rate places the sample closer to the center of the bit and therefore reduces the sampling error.]

FIGURE 19 Sampling error: (a) 8X RCP; (b) 16X RCP


synchronous data (i.e., the start and stop bits are omitted and replaced by unique synchro-
nizing characters). The primary functions performed by a USRT are the following:

1. Serial-to-parallel and parallel-to-serial data conversions


2. Error detection by inserting parity bits in the transmitter and checking parity bits
in the receiver.
3. Insert and detect unique data synchronization (SYN) characters
4. Formatting data in the transmitter and receiver (i.e., combining items 1 through 3
in a meaningful sequence)
5. Provide transmit and receive status information to the CPU
6. Voltage-level conversion between the DTE and the serial interface and vice versa
7. Provide a means of achieving bit and character synchronization

11 SERIAL INTERFACES

To ensure an orderly flow of data between a DTE and a DCE, a standard serial interface is
used to interconnect them. The serial interface coordinates the flow of data, control signals,
and timing information between the DTE and the DCE.
Before serial interfaces were standardized, every company that manufactured data
communications equipment used a different interface configuration. More specifically, the
cable arrangement between the DTE and the DCE, the type and size of the connectors, and
the voltage levels varied considerably from vender to vender. To interconnect equipment
manufactured by different companies, special level converters, cables, and connectors had
to be designed, constructed, and implemented for each application. A serial interface stan-
dard should provide the following:

1. A specific range of voltages for transmit and receive signal levels


2. Limitations for the electrical parameters of the transmission line, including source
and load impedance, cable capacitance, and other electrical characteristics out-
lined later in this chapter
3. Standard cable and cable connectors
4. Functional description of each signal on the interface

In 1962, the Electronics Industries Association (EIA), in an effort to standardize inter-
face equipment between data terminal equipment and data communications equipment,
agreed on a set of standards called the RS-232 specifications (RS meaning “recommended
standard”). The official name of the RS-232 interface is Interface Between Data Terminal
Equipment and Data Communications Equipment Employing Serial Binary Data Inter-
change. In 1969, the third revision, RS-232C, was published and remained the industrial stan-
dard until 1987, when the RS-232D was introduced, which was followed by the RS-232E in
the early 1990s. The RS-232D standard is sometimes referred to as the EIA-232 standard. Ver-
sions D and E of the RS-232 standard changed some of the pin designations. For example,
data set ready was changed to DCE ready, and data terminal ready was changed to DTE ready.
The RS-232 specifications identify the mechanical, electrical, functional, and proce-
dural descriptions for the interface between DTEs and DCEs. The RS-232 interface is sim-
ilar to the combined ITU-T standards V.28 (electrical specifications) and V.24 (functional
description) and is designed for serial transmission up to 20 kbps over a maximum distance
of 50 feet (approximately 15 meters).

11-1 RS-232 Serial Interface Standard


The mechanical specification for the RS-232 interface specifies a cable with two connec-
tors. The standard RS-232 cable is a sheath containing 25 wires with a DB25P-compatible

184
Fundamental Concepts of Data Communications

13 12 11 10 9 8 7 6 5 4 3 2 1
25 24 23 22 21 20 19 18 17 16 15 14

(a) (b)

1 2 3 4 5
6 7 8 9
(d)
FIGURE 20 RS-232 serial interface
connector: (a) DB25P; (b) DB25S; (c)
(c) (d) DB9P; (d) DB9S

1 (R)
2 (CD)
1 3 (DTR)
2
3 4 (SG)
4
5 5 (RD)
6
7 6 (TD)
8
7 (CTS)
FIGURE 21 EIA-561 modular
8 (RTS) connector

male connector (plug) on one end and a DB25S-compatible female connector (receptacle)
on the other end. The DB25P-compatible and DB25S-compatible connectors are shown in
Figures 20a and b, respectively. The cable must have a plug on one end that connects to
the DTE and a receptacle on the other end that connects to the DCE. There is also a spe-
cial PC nine-pin version of the RS-232 interface cable with a DB9P-compatible male
connector on one end and a DB9S-compatible connector at the other end. The DB9P-
compatible and DB9S-compatible connectors are shown in Figures 20c and d, respec-
tively (note that there is no correlation between the pin assignments for the two connec-
tors). The nine-pin version of the RS-232 interface is designed for transporting
asynchronous data between a DTE and a DCE or between two DTEs, whereas the 25-pin
version is designed for transporting either synchronous or asynchronous data between a
DTE and a DCE. Figure 21 shows the eight-pin EIA-561 modular connector, which is
used for transporting asynchronous data between a DTE and a DCE when the DCE is con-
nected directly to a standard two-wire telephone line attached to the public switched tele-
phone network. The EIA-561 modular connector is designed exclusively for dial-up tele-
phone connections.
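
Since the nine-pin and 25-pin assignments differ, a cross-reference between them is often
handy. The mapping below reflects the common industry pinout for the PC nine-pin serial
connector; it is general practice rather than something specified in this text.

# Common industry mapping (an assumption, not from this text) from the PC nine-pin
# serial connector to the equivalent 25-pin RS-232 pin numbers.
DB9_TO_DB25 = {
    1: 8,   # receive line signal detect (carrier detect)
    2: 3,   # receive data
    3: 2,   # transmit data
    4: 20,  # data terminal ready
    5: 7,   # signal ground
    6: 6,   # data set ready
    7: 4,   # request to send
    8: 5,   # clear to send
    9: 22,  # ring indicator
}

for db9, db25 in DB9_TO_DB25.items():
    print(f"DB9 pin {db9} corresponds to DB25 pin {db25}")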
Although the RS-232 interface is simply a cable and two connectors, the standard also
specifies limitations on the voltage levels that the DTE and DCE can output onto or receive
from the cable. The DTE and DCE must provide circuits that convert their internal logic
levels to RS-232-compatible values. For example, a DTE using TTL logic interfaced to a DCE
using CMOS logic is not compatible. Voltage-leveling circuits convert the internal voltage
levels from the DTE and DCE to RS-232 values. If both the DCE and the DTE output and
accept RS-232 levels, they are electrically compatible regardless of which logic family they use
internally. A voltage leveler is called a driver if it outputs signals onto the cable and a
terminator if it accepts signals from the cable. In essence, a driver is a transmitter, and a
terminator is a receiver. Table 7 lists the voltage limits for RS-232-compatible drivers and
terminators. Note that the data and control lines use non-return-to-zero, level (NRZ-L)
bipolar encoding. However, the data lines use negative logic, while the control lines use
positive logic.

Table 7 RS-232 Voltage Specifications

                       Data Signals                        Control Signals
                       Logic 1          Logic 0            Enable (On)       Disable (Off)
Driver (output)        -5 V to -15 V    +5 V to +15 V      +5 V to +15 V     -5 V to -15 V
Terminator (input)     -3 V to -25 V    +3 V to +25 V      +3 V to +25 V     -3 V to -25 V

From examining Table 7, it can be seen that the voltage limits for a driver are more
restrictive than the voltage limits for a terminator. The output voltage range for a driver
is between +5 V and +15 V or between -5 V and -15 V, depending on the logic level.
However, the voltage range that a terminator will accept is between +3 V and +25 V or
between -3 V and -25 V. Voltages between -3 V and +3 V are undefined and may be
interpreted by a terminator as a high or a low. The difference in the voltage levels between
the driver output and the terminator input is called noise margin (NM). The noise margin
reduces the susceptibility to interference caused by noise transients induced into the cable.
Figure 22a shows the relationship between the driver and terminator voltage ranges. As
shown in Figure 22a, the noise margin for the minimum driver output voltage is 2 V (5 - 3),
and the noise margin for the maximum driver output voltage is 10 V (25 - 15). (The minimum
noise margin of 2 V is called the implied noise margin.) Noise margins will vary, of course,
depending on what specific voltages are used for highs and lows. When the noise margin of
a circuit is a high value, it is said to have high noise immunity, and when the noise margin
is a low value, it has low noise immunity. Typical RS-232 voltage levels are +10 V for a
high and -10 V for a low, which produces a noise margin of 7 V in one direction and 15 V
in the other direction. The noise margin is generally stated as the minimum value. This
relationship is shown in Figure 22b. Figure 22c illustrates the immunity of the RS-232
interface to noise signals for logic levels of +10 V and -10 V.

FIGURE 22 RS-232 logic levels and noise margin: (a) driver and terminator voltage ranges;
(b) noise margin with a +10 V high and -10 V low; (c) noise violation
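
To make the level conventions concrete, here is a minimal sketch (not from the text) that
maps logic levels to the typical +/-10 V values mentioned above; recall from Table 7 that
data pins use negative logic and control pins use positive logic.

def data_bit_to_volts(bit, level=10):
    # Data lines use negative logic: logic 1 (mark) is a negative voltage.
    return -level if bit == 1 else +level

def control_to_volts(asserted, level=10):
    # Control lines use positive logic: an asserted ("on") signal is a positive voltage.
    return +level if asserted else -level

print(data_bit_to_volts(1))    # -10 (logic 1)
print(data_bit_to_volts(0))    # +10 (logic 0)
print(control_to_volts(True))  # +10 (e.g., request to send asserted)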
The RS-232 interface specifies single-ended (unbalanced) operation with a common
ground between the DTE and the DCE. A common ground is reasonable when a short cable
is used; however, with longer cables, or when the DTE and DCE are powered from different
electrical buses, a true common ground may not exist.

Example 8
Determine the noise margins for an RS-232 interface with driver signal voltages of ±6 V.

Solution  The noise margin is the difference between the driver signal voltage and the terminator
receive voltage, or

NM = 6 - 3 = 3 V    or    NM = 25 - 6 = 19 V

The minimum noise margin is 3 V.
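
The arithmetic of Example 8 generalizes directly; the short helper below (an illustration,
not part of the text) uses the 3 V and 25 V terminator limits and the 5 V to 15 V driver
range from Table 7.

def rs232_noise_margins(driver_volts):
    # Returns (margin toward the undefined zone, margin toward the terminator limit).
    v = abs(driver_volts)
    if not 5 <= v <= 15:
        raise ValueError("RS-232 driver levels must be 5 V to 15 V in magnitude")
    return v - 3, 25 - v

print(rs232_noise_margins(6))   # (3, 19) -- matches Example 8
print(rs232_noise_margins(10))  # (7, 15) -- the typical +/-10 V levels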

11-1-1 RS-232 electrical equivalent circuit. Figure 23 shows the equivalent elec-
trical circuit for the RS-232 interface, including the driver and terminator. With these elec-
trical specifications and for a bit rate of 20 kbps, the nominal maximum length of the RS-
232 interface cable is approximately 50 feet.
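
The 50-foot figure is commonly tied to the 2500 pF load-capacitance limit listed with
Figure 23. As a rough illustration (the 50 pF-per-foot cable capacitance is an assumed
typical value, not a figure from this text):

MAX_LOAD_CAPACITANCE_PF = 2500   # RS-232 limit, including the cable and terminator
CABLE_PF_PER_FOOT = 50           # assumed typical interface-cable capacitance

max_length_ft = MAX_LOAD_CAPACITANCE_PF / CABLE_PF_PER_FOOT
print(f"Approximate maximum cable length: {max_length_ft:.0f} feet")   # about 50 feet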

11-1-2 RS-232 functional description. The pins on the RS-232 interface cable
are functionally categorized as either ground (signal and chassis), data (transmit and
receive), control (handshaking and diagnostic), or timing (clocking signals). Although the
RS-232 interface as a unit is bidirectional (signals propagate in both directions), each indi-
vidual wire or pin is unidirectional. That is, signals on any given wire are propagated either
from the DTE to the DCE or from the DCE to the DTE but never in both directions. Table
8 lists the 25 pins (wires) of the RS-232 interface and gives the direction of signal propa-
gation (i.e., either from the DTE toward the DCE or from the DCE toward the DTE). The
RS-232 specification assigns each pin a circuit designation (listed in Table 9) whose first
letter is A, B, C, D, or S. The letter categorizes the signal into one of five groups, each
representing a different type of circuit. The five groups are as follows:

A—ground
B—data
C—control
D—timing (clocking)
S—secondary channel
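
The first-letter convention is easy to capture in a small lookup; the sketch below is
illustrative only and uses EIA circuit designations that appear in Table 9 later in this
section.

# Map an EIA circuit designation to its group by its first letter,
# per the five groups listed above.
EIA_CIRCUIT_GROUPS = {
    "A": "ground",
    "B": "data",
    "C": "control",
    "D": "timing (clocking)",
    "S": "secondary channel",
}

def circuit_group(eia_designation):
    return EIA_CIRCUIT_GROUPS.get(eia_designation[0], "unknown")

print(circuit_group("CA"))   # control           (request to send, pin 4)
print(circuit_group("BB"))   # data              (receive data, pin 3)
print(circuit_group("SCF"))  # secondary channel (secondary carrier detect, pin 12)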


FIGURE 23 RS-232 electrical specifications (equivalent circuit of an RS-232 driver,
interface cable, and terminator), where:

Vout — open-circuit voltage at the output of a driver (±5 V to ±15 V)
Vi — terminated voltage at the input to a terminator (±3 V to ±25 V)
CL — load capacitance associated with the terminator, including the cable (2500 pF maximum)
CO — capacitance seen by the driver, including the cable (2500 pF maximum)
RL — terminator input resistance (3000 Ω to 7000 Ω)
Rout — driver output resistance (300 Ω maximum)


Table 8 EIA RS-232 Pin Designations and Direction of Propagation

Pin Number   Pin Name                                                             Direction of Propagation
 1           Protective ground (frame ground or chassis ground)                   None
 2           Transmit data (send data)                                            DTE to DCE
 3           Receive data                                                         DCE to DTE
 4           Request to send                                                      DTE to DCE
 5           Clear to send                                                        DCE to DTE
 6           Data set ready (modem ready)                                         DCE to DTE
 7           Signal ground (reference ground)                                     None
 8           Receive line signal detect (carrier detect or data carrier detect)   DCE to DTE
 9           Unassigned                                                           None
10           Unassigned                                                           None
11           Unassigned                                                           None
12           Secondary receive line signal detect (secondary carrier detect
             or secondary data carrier detect)                                    DCE to DTE
13           Secondary clear to send                                              DCE to DTE
14           Secondary transmit data (secondary send data)                        DTE to DCE
15           Transmit signal element timing—DCE (serial clock transmit—DCE)       DCE to DTE
16           Secondary receive data                                               DCE to DTE
17           Receive signal element timing (serial clock receive)                 DCE to DTE
18           Unassigned                                                           None
19           Secondary request to send                                            DTE to DCE
20           Data terminal ready                                                  DTE to DCE
21           Signal quality detect                                                DCE to DTE
22           Ring indicator                                                       DCE to DTE
23           Data signal rate selector                                            DTE to DCE
24           Transmit signal element timing—DTE (serial clock transmit—DTE)       DTE to DCE
25           Unassigned                                                           None

Because the letters are nondescriptive designations, it is more practical and useful to
designate the pins with acronyms that reflect their functions. Table 9 lists the EIA
signal designations plus the nomenclature more commonly used by industry in the United
States to designate the pins.
Twenty of the 25 pins on the RS-232 interface are designated for specific purposes or
functions. Pins 9, 10, 11, 18, and 25 are unassigned (unassigned does not necessarily imply
unused). Pins 1 and 7 are grounds; pins 2, 3, 14, and 16 are data pins; pins 15, 17, and 24
are timing pins; and all the other pins are used for control or handshaking signals. Pins 1
through 8 are used with both asynchronous and synchronous modems. Pins 15, 17, and 24
are used only with synchronous modems. Pins 12, 13, 14, 16, and 19 are used only when
the DCE is equipped with a secondary data channel. Pins 20 and 22 are used exclusively
when interfacing a DTE to a modem that is connected to standard dial-up telephone circuits
on the public switched telephone network.
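
As an illustrative aside (not part of the standard's text), the commonly used subset of
pins described above can be collected into a small lookup table, with names and directions
taken from Table 8:

# Pins 1-8 (used with both asynchronous and synchronous modems) plus the
# dial-up pins 20 and 22.  Direction is from the DTE's point of view.
RS232_COMMON_PINS = {
    1:  ("Protective ground", None),
    2:  ("Transmit data (TD)", "DTE to DCE"),
    3:  ("Receive data (RD)", "DCE to DTE"),
    4:  ("Request to send (RTS)", "DTE to DCE"),
    5:  ("Clear to send (CTS)", "DCE to DTE"),
    6:  ("Data set ready (DSR)", "DCE to DTE"),
    7:  ("Signal ground (SG)", None),
    8:  ("Receive line signal detect (RLSD)", "DCE to DTE"),
    20: ("Data terminal ready (DTR)", "DTE to DCE"),
    22: ("Ring indicator (RI)", "DCE to DTE"),
}

for pin, (name, direction) in sorted(RS232_COMMON_PINS.items()):
    print(f"Pin {pin:2}: {name:38} {direction or 'no direction (ground)'}")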
There are two full-duplex data channels available with the RS-232 interface; one
channel is for primary data (actual information), and the second channel is for
secondary data (diagnostic information and handshaking signals). The secondary chan-
nel is sometimes used as a reverse or backward channel, allowing the receive DCE to
communicate with the transmit DCE while data are being transmitted on the primary
data channel.


Table 9 EIA RS-232 Pin Names and Designations

Pin Number   Pin Name                                                             EIA Nomenclature   Common U.S. Acronyms
 1           Protective ground (frame ground or chassis ground)                   AA                 GWG, FG, or CG
 2           Transmit data (send data)                                            BA                 TD, SD, TxD
 3           Receive data                                                         BB                 RD, RxD
 4           Request to send                                                      CA                 RS, RTS
 5           Clear to send                                                        CB                 CS, CTS
 6           Data set ready (modem ready)                                         CC                 DSR, MR
 7           Signal ground (reference ground)                                     AB                 SG, GND
 8           Receive line signal detect (carrier detect or data carrier detect)   CF                 RLSD, CD, DCD
 9           Unassigned                                                           —                  —
10           Unassigned                                                           —                  —
11           Unassigned                                                           —                  —
12           Secondary receive line signal detect (secondary carrier detect
             or secondary data carrier detect)                                    SCF                SRLSD, SCD, SDCD
13           Secondary clear to send                                              SCB                SCS, SCTS
14           Secondary transmit data (secondary send data)                        SBA                STD, SSD, STxD
15           Transmit signal element timing—DCE (serial clock transmit—DCE)       DB                 TSET, SCT-DCE
16           Secondary receive data                                               SBB                SRD, SRxD
17           Receive signal element timing (serial clock receive)                 DD                 RSET, SCR
18           Unassigned                                                           —                  —
19           Secondary request to send                                            SCA                SRS, SRTS
20           Data terminal ready                                                  CD                 DTR
21           Signal quality detect                                                CG                 SQD
22           Ring indicator                                                       CE                 RI
23           Data signal rate selector                                            CH                 DSRS
24           Transmit signal element timing—DTE (serial clock transmit—DTE)       DA                 TSET, SCT-DTE
25           Unassigned                                                           —                  —

The functions of the 25 RS-232 pins are summarized here for a DTE interfacing with
a DCE where the DCE is a data communications modem:

Pin 1—protective ground, frame ground, or chassis ground (GWG, FG, or CG). Pin
1 is connected to the chassis and used for protection against accidental electrical
shock. Pin 1 is generally connected to signal ground (pin 7).
Pin 2—transmit data or send data (TD, SD, or TxD). Serial data on the primary data
channel are transported from the DTE to the DCE on pin 2. Primary data are the ac-
tual source information transported over the interface. The transmit data line is a
transmit line for the DTE but a receive