Donald D. Givone
University at Buffalo
The State University of New York
Boston  Burr Ridge, IL  Dubuque, IA  Madison, WI  New York  San Francisco  St. Louis
Bangkok  Bogotá  Caracas  Kuala Lumpur  Lisbon  London  Madrid  Mexico City
Milan  Montreal  New Delhi  Santiago  Seoul  Singapore  Sydney  Taipei  Toronto
McGraw-Hill Higher Education
A Division of The McGraw-Hill Companies
Published by McGraw-Hill, a business unit of The McGraw-Hill Companies, Inc., 1221 Avenue of the
Americas, New York, NY 10020. Copyright © 2003 by The McGraw-Hill Companies, Inc. All rights
reserved. No part of this publication may be reproduced or distributed in any form or by any means, or
stored in a database or retrieval system, without the prior written consent of The McGraw-Hill
Companies, Inc., including, but not limited to, in any network or other electronic storage or
transmission, or broadcast for distance learning.
Some ancillaries, including electronic and print components, may not be available to customers outside
the United States.
ISBN 0-07-252503-7
ISBN 0-07-119520-3 (ISE)
Givone, Donald D.
Digital principles and design / Donald D. Givone. — 1st ed.
p. cm.
ISBN 0-07-252503-7 — ISBN 0-07-119520-3 (ISE)
1. Digital electronics. I. Title.
www.mhhe.com
To my children Donna and David and my brother Billy
1 Introduction 1
5 Logic Design with MSI Components and Programmable Logic Devices 230
6 Flip-Flops and Simple Flip-Flop Applications 301
7 Synchronous Sequential Networks 367
8 Algorithmic State Machines 444
9 Asynchronous Sequential Networks 505
Bibliography 684
Index 687
CONTENTS

Preface xiii

Chapter 1
Introduction 1

2.5.4 Verification of the Iterative Method for Fractions 23
2.5.5 A Final Example 23

4.8 The Quine-McCluskey Method of Generating Prime Implicants and Prime Implicates 166
4.8.1 Prime Implicants and the Quine-McCluskey Method 167
4.8.2 Algorithm for Generating Prime Implicants 170
4.8.3 Prime Implicates and the Quine-McCluskey Method 173
4.9 Prime-Implicant/Prime-Implicate Tables and Irredundant Expressions 174
4.9.1 Petrick's Method of Determining Irredundant Expressions 175
4.9.2 Prime-Implicate Tables and Irredundant Conjunctive Normal Formulas 178
4.10 Prime-Implicant/Prime-Implicate Table Reductions 178
4.10.1 Essential Prime Implicants 179
4.10.2 Column and Row Reductions 180
4.10.3 A Prime-Implicant Selection Procedure 184
4.11 Decimal Method for Obtaining Prime Implicants 184
4.12 The Multiple-Output Simplification Problem 187
4.12.1 Multiple-Output Prime Implicants 191
4.13 Obtaining Multiple-Output Minimal Sums and Products 191
4.13.1 Tagged Product Terms 192
4.13.2 Generating the Multiple-Output Prime Implicants 193
4.13.3 Multiple-Output Prime-Implicant Tables 195
4.13.4 Minimal Sums Using Petrick's Method 196
4.13.5 Minimal Sums Using Table Reduction Techniques 198
4.13.6 Multiple-Output Minimal Products 201
4.14 Variable-Entered Karnaugh Maps 202
4.14.1 Constructing Variable-Entered Maps 203
4.14.2 Reading Variable-Entered Maps for Minimal Sums 207
4.14.3 Minimal Products 212
4.14.4 Incompletely Specified Functions 213
4.14.5 Maps Whose Entries Are Not Single-Variable Functions 218
Problems 222

Chapter 5
Logic Design with MSI Components and Programmable Logic Devices 230
5.1 Binary Adders and Subtracters 231
5.1.1 Binary Subtracters 233
5.1.2 Carry Lookahead Adder 236
5.1.3 Large High-Speed Adders Using the Carry Lookahead Principle 238
5.2 Decimal Adders 242
5.3 Comparators 246
5.4 Decoders 248
5.4.1 Logic Design Using Decoders 249
5.4.2 Decoders with an Enable Input 256
5.5 Encoders 260
5.6 Multiplexers 262
5.6.1 Logic Design with Multiplexers 266
5.7 Programmable Logic Devices (PLDs) 276
5.7.1 PLD Notation 279
5.8 Programmable Read-Only Memories (PROMs) 279
5.9 Programmable Logic Arrays (PLAs) 283
5.10 Programmable Array Logic (PAL) Devices 292
Problems 294

Chapter 6
Flip-Flops and Simple Flip-Flop Applications 301
6.1 The Basic Bistable Element 302
6.2 Latches 303
6.2.1 The SR Latch 304
6.2.2 An Application of the SR Latch: A Switch Debouncer 305
6.2.3 The S̄R̄ Latch 307
6.2.4 The Gated SR Latch 308
6.2.5 The Gated D Latch 309
6.3 Timing Considerations 310
6.3.1 Propagation Delays 310
6.3.2 Minimum Pulse Width 312
6.3.3 Setup and Hold Times 312
6.9 Design of Synchronous Counters 347
6.9.1 Design of a Synchronous Mod-6 Counter Using Clocked JK Flip-Flops 348
6.9.2 Design of a Synchronous Mod-6 Counter Using Clocked D, T, or SR Flip-Flops 352
6.9.3 Self-Correcting Counters 356
Problems 358

Chapter 7
Synchronous Sequential Networks 367
7.1 Structure and Operation of Clocked Synchronous Sequential Networks 368
7.2 Analysis of Clocked Synchronous Sequential Networks 371
7.2.1 Excitation and Output Expressions 373
7.2.2 Transition Equations 374
7.2.3 Transition Tables 375
7.2.4 Excitation Tables 377
7.6 Completing the Design of Clocked Synchronous Sequential Networks 424
7.6.1 Realizations Using Programmable Logic Devices 432
Problems 436

Chapter 8
Algorithmic State Machines 444
8.1 The Algorithmic State Machine 444
8.2 ASM Charts 447
8.2.1 The State Box 448
8.2.2 The Decision Box 449
8.2.3 The Conditional Output Box 450
8.2.4 ASM Blocks 450
8.2.5 ASM Charts 456
8.2.6 Relationship between State Diagrams and ASM Charts 459
8.3 Two Examples of Synchronous Sequential Network Design Using ASM Charts 461
8.3.1 A Sequence Recognizer 461
8.3.2 A Parallel (Unsigned) Binary Multiplier 463
8.4 State Assignments 468
8.5 ASM Tables 470
8.5.1 ASM Transition Tables 470
8.5.2 Assigned ASM Transition Tables 472
8.5.3 Algebraic Representation of Assigned Transition Tables 475
8.5.4 ASM Excitation Tables 477
8.6 ASM Realizations 479
8.6.1 Realizations Using Discrete Gates 479
8.6.2 Realizations Using Multiplexers 484
8.6.3 Realizations Using PLAs 487
8.6.4 Realizations Using PROMs 490
8.7 Asynchronous Inputs 491
Problems 493

Chapter 9
Asynchronous Sequential Networks 505
9.1 Structure and Operation of Asynchronous Sequential Networks 506
9.2 Analysis of Asynchronous Sequential Networks 510
9.2.1 The Excitation Table 512
9.2.2 The Transition Table 514
9.2.3 The State Table 516
9.2.4 The Flow Table 517
9.2.5 The Flow Diagram 519
9.3 Races in Asynchronous Sequential Networks 520
9.4 The Primitive Flow Table 522
9.4.1 The Primitive Flow Table for Example 9.3
9.4.2 The Primitive Flow Table for Example 9.4 526
9.5 Reduction of Input-Restricted Flow Tables 529
9.5.1 Determination of Compatible Pairs of States 530
9.5.2 Determination of Maximal Compatibles 533
9.5.3 Determination of Minimal Collections of Maximal Compatible Sets 535
9.5.4 Constructing the Minimal-Row Flow Table 536
9.6 A General Procedure for Flow Table Reduction 538
9.6.1 Reducing the Number of Stable States 538
9.6.2 Merging the Rows of a Primitive Flow Table 540
9.6.3 The General Procedure Applied to Input-Restricted Primitive Flow Tables 543
9.7 The State-Assignment Problem and the Transition Table 545
9.7.1 The Transition Table for Example 9.3 546
9.7.2 The Transition Table for Example 9.4 550
9.7.3 The Need for Additional State Variables 551
9.7.4 A Systematic State-Assignment Procedure 555
9.8 Completing the Asynchronous Sequential Network Design 557
9.9 Static and Dynamic Hazards in Combinational Networks 561
9.9.1 Static Hazards 562
9.9.2 Detecting Static Hazards 565
9.9.3 Eliminating Static Hazards 568
9.9.4 Dynamic Hazards 570
9.9.5 Hazard-Free Combinational Logic Networks 571
9.9.6 Hazards in Asynchronous Networks Involving Latches 571
9.10 Essential Hazards 573
9.10.1 Example of an Essential Hazard 574
9.10.2 Detection of Essential Hazards 575
Problems 578

Appendix A
A.9 The MOS Field-Effect Transistor 641
PREFACE

With the strong impact of digital technology on our everyday lives, it is not surpris-
ing that a course in digital concepts and design is a standard requirement for majors
in computer engineering, computer science, and electrical engineering. An introduc-
tory course is frequently encountered in the first or second year of their undergradu-
ate programs. Additional courses are then provided to refine and extend the basic
concepts of the introductory course.
This book is suitable for an introductory course in digital principles with em-
phasis on logic design as well as for a more advanced course. With the exception of
the appendix, it assumes no background on the part of the reader. The intent of the
author is not to just present a set of procedures commonly encountered in digital de-
sign but, rather, to provide justifications underlying such procedures. Since no back-
ground is assumed, the book can be used by students in computer engineering, com-
puter science, and electrical engineering.
The approach taken in this book is a traditional one. That is, emphasis is on the
presentation of basic principles of logic design and the illustration of each of these
principles. The philosophy of the author is that a first course in logic design should
establish a strong foundation of basic principles as provided by a more traditional
approach before engaging in the use of computer-aided design tools. Once basic
concepts are mastered, the utilization of design software becomes more meaningful
and allows the student to use the software more effectively. Thus, it is the under-
standing of basic principles on which this book focuses and the application of these
principles to the analysis and design of combinational and sequential logic net-
works. Each topic is approached by first introducing the basic theory and then illus-
trating how it applies to design. For those people who want to use CAD tools, we
have included a CD-ROM containing Altera MAX+plus II 10.1 Student Edition, as
well as software tutorials in an Appendix.
In Chapter 7, clocked synchronous sequential networks are presented. First, these networks are analyzed to establish various tabular representa-
tions of network behavior. Then, the process is reversed and synthesis is discussed.
Chapter 8 also involves the design of clocked synchronous sequential networks;
however, this time using the algorithmic state machine model. The relationship be-
tween the classic Mealy/Moore models and the algorithmic state machine model is
discussed as well as the capability of the algorithmic state machine model to handle
the controlling of an architecture of devices.
In Chapter 9 asynchronous sequential networks are studied. Paralleling the ap-
proach taken for synchronous sequential networks, the analysis of asynchronous se-
quential networks is first undertaken and then, by reversing the analysis procedure,
the synthesis of these networks is presented. Included in this chapter is also a dis-
cussion on static and dynamic hazards. Although these hazards occur in combina-
tional networks, their study is deferred to this chapter, since these hazards can have
a major effect on asynchronous network behavior. A great deal of attention is given
to the many design constraints that must be satisfied to achieve a functional design
of an asynchronous network. In addition to the static and dynamic hazards, the con-
cepts of races, the importance of the state assignment, and the effects of essential
hazards are addressed.
An appendix on digital electronics is included for completeness. It is not in-
tended to provide an in-depth study on digital electronics, since such a study should
be reserved for a course in itself. Rather, its inclusion is to provide the interested
reader an introduction to actual circuits that can occur in digital systems and the
source of constraints placed upon a logic designer. For this reason, the appendix does
not delve into circuit design but, rather, only into the analysis of electronic digital
circuits. Emphasis is placed on the principles of operation of TTL, ECL, and MOS
logic circuits. Since circuits are analyzed, the appendix does assume the reader has
an elementary knowledge of linear circuit analysis. In particular, the reader should be
familiar with Ohm’s law along with Kirchhoff’s current and voltage laws.
Another appendix with software tutorials is also included. These tutorials, pro-
vided by two contributors, include one on Altera MAX+plus II 10.1 Student Edition
and one on LogicWorks™ 4. The tutorials are meant to provide basic introductions to
these tools for those people who are using them in their course.
HOMEWORK PROBLEMS
With the exception of Chapter 1, each chapter includes a set of problems. Some of
these problems provide for reinforcement of the reader’s understanding of the ma-
terial, some extend the concepts presented in the chapter, and, finally, some are
applications-oriented.
ADDITIONAL RESOURCES
The expanded book website at http://www.mhhe.com/givone includes a download-
able version of the Solutions Manual for instructors only and PowerPoint slides.
There are also a variety of labs using both the Altera Software and LogicWorks.
A few formal proofs have been included for the interested reader. However, these
proofs are clearly delineated and can be skipped without loss of continuity.
ACKNOWLEDGMENTS
I would like to thank the reviewers of the manuscript for their comments and sug-
gestions. These include:
Also, I would like to acknowledge the efforts of the staff at McGraw-Hill, particu-
larly Betsy Jones, Michelle Flomenhoft, and Susan Brusch. In addition, I want to
express my appreciation to Donna and David, who helped with the typing and art-
work in the early drafts of the manuscript. I would also like to thank David for his
work that led to the image on the cover of this book. Finally, I am grateful to my
wife Louise for her support and patience during this project.
Donald D. Givone
ABOUT THE AUTHOR
Donald D. Givone received his B.S.E.E. degree from Rensselaer Polytechnic In-
stitute and the M.S. and Ph.D. degrees in Electrical Engineering from Cornell Uni-
versity. In 1963, he joined the faculty at the University at Buffalo, where he is cur-
rently a Professor in the Department of Electrical Engineering.
He has received several awards for excellence in teaching. He is also the author
of the textbook Introduction to Switching Circuit Theory and the coauthor of the
textbook Microprocessors/Microcomputers: An Introduction, both of which were
published by McGraw-Hill Book Company.
CHAPTER 1

Introduction

One way to measure human progress is through inventions that ease mental and
physical burdens. The digital computer is one such great invention. The ap-
plications of this device seem to have no bounds, and, consequently, new
vistas have opened up for humans to challenge.
The digital computer, however, is only one of many systems whose design and
operation is based on digital concepts. The idea of representing information in a dis-
crete form and the manipulation of such information is fundamental to all digital
systems. In the ensuing chapters of this book we study number and algebraic con-
cepts, logic design, digital networks, and digital circuits. These are the digital prin-
ciples that serve as the foundation for the understanding and design of digital com-
puters and, in general, digital systems.
[Figure 1.1: block diagram of a digital computer, showing the flow of data and instructions, intermediate and final results, and decision information among its units; solid lines denote information signals and dashed lines denote control signals.]
Within a digital system, information is stored in groups of storage devices called registers. In short, the operation of a digital computer and, in general, a digi-
tal system involves a series of data and instruction transfers from register to register
with modifications and manipulations occurring during these transfers. To achieve
this, many registers occur within a digital system.
Most mathematical and logical operations on data are performed in the arith-
metic unit. The simple mathematical operations of a computer are addition, subtrac-
tion, multiplication, and division. The more complex mathematical operations, such
as integration, taking of square roots, and formulation of trigonometric functions,
can be reduced to the basic operations possible in the computer and are performed
by a program. One of the more important of the logical operations a digital com-
puter can perform is that of sensing the sign of a number. Depending upon whether
the computer senses a positive or negative sign on a number, it can determine
whether to perform one set of computations or an alternate set.
Referring to Fig. 1.1, it is seen that there is a two-way communication between
the arithmetic unit and the memory unit. The arithmetic unit receives from the
memory unit numbers on which operations are to be performed and sends interme-
diate and final results to the memory unit. All necessary data for the solution to the
problem being run on the computer are stored in the memory unit. The memory unit
is divided into substorage units (i.e., registers), each referenced by an address,
which is simply an integral numerical designator. Only one number is stored at a
particular address. Also stored in the memory unit, each at a separate address, are
the instructions. Each of these consists of a command, which identifies the type of
operation to be performed, plus one or more addresses, which indicate where the
numerical data used in the operation are located. The instructions and the numerical
data are placed in the memory unit before the start of the program run.
The control unit receives instructions, one at a time, from the memory unit for
interpretation. Normally, the program instructions are in sequential order within the
memory unit. By means of a program counter located in the control unit, indicating
the next instruction’s address, the correct instruction is transferred into the control
unit, where it is held in the instruction register for decoding. After the decoding
process, the control unit causes connections to be made in the various units so that
each individual instruction is properly carried out. With the connections properly
made, the arithmetic unit is caused to act as, say, an adder, subtracter, multiplier, or
divider, as the particular instruction demands. The control unit also causes connec-
tions to be made in the memory unit so that the correct data for the instruction are
obtained by the arithmetic unit. As shown in Fig. 1.1, the control unit is capable of
receiving information from the arithmetic unit. It is along this path that, say, the
sign of a number is sent. Such information gives the computer its decision-making
capability mentioned previously. In addition, the control unit sends signals to the
input and output units in order to have these units work at the proper time.
The input and output units are the contacts between the computer and the out-
side world. They act as buffers, translating data between the different speeds and
languages with which computers and humans, or other systems, operate. The input
unit receives data and instructions from the outside world and sends them to the
memory unit. The output unit receives numerical results and communicates them to
the user or to another system. Input and output units may be of a very simple nature
or highly complicated, depending on the application the computer is expected to
handle.
1.4 AN OVERVIEW
As indicated in the previous section, this book is not about digital computers per se
but rather about the fundamentals behind the logic design and behavior of digital
systems. The digital computer was introduced simply as an excellent example of a
digital system incorporating networks whose design and operation are the subject of
this book. In general, digital systems involve the manipulation of numbers. Thus, in
Chap. 2, the concept of numbers as a representation of discrete information is stud-
ied, along with how they are manipulated to achieve arithmetic operations.
It is the function of digital networks to provide for the manipulation of discrete
information. In Chap. 3 an algebra, called a Boolean algebra, is introduced that is
capable of describing the behavior and structure of logic networks, i.e., networks
that make up a digital system. In this way, digital network design is accomplished
via the manipulation of expressions in the algebra. The study of digital network de-
sign from this point of view is referred to as logic design. Chapter 4 then continues
with the study of Boolean algebra to achieve expressions describing optimal logic
networks.
Having developed mathematical tools for the logic design of digital networks,
Chap. 5 studies several basic logic networks commonly encountered in digital sys-
tems, e.g., adders, subtracters, and decoders. These are all networks found in the
digital computer. In addition, generic, complex logic networks have also been de-
veloped, referred to as programmable logic devices. In the second part of Chap. 5,
these devices are studied for the purpose of showing how they are used for the de-
sign of specialized logic networks.
It was seen in the discussion on the digital computer that there is a need for the
storage of digital information. The basic digital storage device in a digital system is
the flip-flop. Chapter 6 deals with the operation of several flip-flop structures. Fi-
nally, the application of flip-flops to the logic design of registers and counters is
presented.
The remaining three chapters continue with the logic design of those networks
that involve the storage of information. These logic networks are referred to as sequen-
tial networks. There are two general classes of sequential networks—synchronous and
asynchronous. Chapters 7 and 8 deal with the logic design of synchronous sequen-
tial networks, and Chap. 9 deals with the logic design of asynchronous sequential
networks.
For completeness, an appendix is included which discusses the electronics of
digital circuits. In the previous chapters, all designs are achieved on a logic level,
i.e., without regard to the actual electronic circuits used. Several types of electronic
digital circuits have been developed to realize the logic elements used in the net-
works of the previous chapters. It is the analysis of these electronic circuits, along
with a comparison of their advantages and disadvantages, upon which the appendix
focuses.
CHAPTER 2

Number Systems, Arithmetic, and Codes

As was mentioned in the previous chapter, the study of digital principles deals
with discrete information, that is, information that is represented by a finite
set of symbols. Certainly, numerical quantities are examples of discrete in-
formation. However, symbols can also be associated with information other than
numerical quantities, as, for example, the letters of the alphabet. Nonnumeric sym-
bols can always be encoded using a set of numeric symbols with the net result that
discrete information appears as numbers.
At this time various number systems and how they denote numerical quantities
are studied. One number system in particular, the binary number system, is very
useful since it needs only two digit symbols. This is important, since many two-
state circuits exist which can then be used for the processing of these symbols. This
chapter is also concerned with how arithmetic is performed with these different
number systems. Finally, it is shown how the numerical symbols are used to encode
information. The encoding process can result in encoded information having desir-
able properties that provide for reliability and ease of interpretation.
quantity 100 (or, 10^2), while the 7 is weighted by only 10 (or, 10^1). If the 8 and 7
were interchanged, then the 7 would be weighted by 100 and the 8 by 10. In gen-
eral, the weighting factor is totally determined by the location of the symbol within
the number. Thus, the quantity being denoted by a symbol (say, 8) is a function of
both the symbol itself and its position.
The decimal number system is an example of a radix-weighted positional num-
ber system or, simply, positional number system.* In a positional number system
there is a finite set of symbols called digits. Each digit represents a nonnegative in-
teger quantity. The number of distinct digits in the number system defines the base
or radix of the number system. Formally, a positional number system is a method
for representing quantities by means of the juxtaposition of digits, called numbers,
such that the value contributed by each digit depends upon the digit symbol itself
and its position within the number relative to a radix point. The radix point is a de-
limiter used to separate the integer and fraction parts of a number. Within this posi-
tioning arrangement each succeeding digit is weighted by a consecutive power of
the base. Considering again the decimal number system, there are 10 distinct digits,
0, 1, 2,...,9, and hence numbers in this system are said to be to the base 10.
From the above definitions it now follows that a general number N in a positional number system is represented by

N = d_{n-1} × r^{n-1} + d_{n-2} × r^{n-2} + ··· + d_1 × r^1 + d_0 × r^0 + d_{-1} × r^{-1} + d_{-2} × r^{-2} + ··· + d_{-m} × r^{-m}

where d_i denotes a digit in the number system such that 0 ≤ d_i ≤ r − 1, r is the base of the number system, n is the number of digits in the integer part of N, and m is the number of digits in the fraction part of N.† When referring to a particular digit in a number, it is frequently referenced by its order; that is, the power of the base that weights the digit. Thus, for integer quantities, the least significant digit is called the 0th-order digit, followed by the 1st-order digit, the 2nd-order digit, etc. Table 2.1 lists some of the more common positional number systems and their set of digit symbols.
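As a concrete illustration of the formula above, the following Python sketch (not part of the original text; the function name and argument conventions are illustrative assumptions) evaluates a list of digit values against consecutive powers of the base.

```python
# Minimal sketch of N = d_{n-1} r^{n-1} + ... + d_0 r^0 + d_{-1} r^{-1} + ... + d_{-m} r^{-m}.
# Digits are given most significant first; `fraction_digits` is m, the number
# of digits lying to the right of the radix point.

def positional_value(digits, base, fraction_digits=0):
    """Evaluate a list of digit values in the given base."""
    n = len(digits) - fraction_digits          # number of integer digits
    value = 0.0
    for position, d in enumerate(digits):
        if not 0 <= d <= base - 1:
            raise ValueError("digit out of range for base")
        order = n - 1 - position               # exponent that weights this digit
        value += d * base ** order
    return value

# 1101.101 in base 2: digits 1,1,0,1,1,0,1 with three fraction digits
print(positional_value([1, 1, 0, 1, 1, 0, 1], 2, fraction_digits=3))   # 13.625
```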
Consider now the binary number system. In this system there are only two digit
symbols, 0 and 1. A digit in the binary number system is usually referred to as a bit,
an acronym for binary digit. Thus, 1101.101 is a binary number consisting of seven
binary digits or bits. To avoid possible confusion when writing a number, fre-
quently a decimal subscript is appended to the number to indicate its base. In this
case, the binary number 1101.101 is written as 1101.101(2). This convention will be
adhered to when the base is not apparent from the context or when attention is to be
called to the base of the number system.
*The Roman number system is an example of a nonpositional number system. In this case many
different symbols are used, each having a fixed definite quantity associated with it. The relative location
of a symbol in the number plays a minimal role in determining the total quantity being represented. The
only situation in which position becomes significant is in the special case where the symbol has a
subtractive property as, for example, the I in IV indicates that one should be subtracted from five.
‘The powers of the base are written, by convention, as decimal numbers since they are being used as an
index to indicate the number of repeated multiplications of r; e.g., r^3 denotes r × r × r. This is the only
circumstance in which digits not belonging to the number system itself appear in a number representation.
From Table 2.1 it is seen that for those number systems whose base is less than
10, a subset of the digit symbols of the decimal number system is used. Thus,
732.16(8) is an example of an octal number. In the case of the duodecimal and hexa-
decimal number systems, however, new digit symbols are introduced to denote inte-
ger quantities greater than nine. Typically, the first letters of the alphabet are used
for this purpose. In the duodecimal number system, the quantity 10 is denoted by A
and the quantity 11 is denoted by B. For the hexadecimal number system, the sym-
bols A, B,..., F denote the decimal quantities 10, 11,..., 15, respectively. Hence,
1A6.B2(12) is an example of a number in the duodecimal number system, while
8AC3F.1D4(16) is a number in the hexadecimal number system.
The binary number system is the most important number system in digital tech-
nology. This is attributed to the fact that components and circuits which are binary
in nature (i.e., have two distinct states associated with them) are easily constructed
and are highly reliable. Even in those situations where the binary number system is
not used as such, binary codes are employed to represent information. In computer
technology, the octal and hexadecimal number systems also play a significant role.
This is due to the simplicity of the conversion between the numbers in these sys-
tems and the binary number system, as is shown in Sec. 2.6.
above cycling procedure is then repeated in the 0th-order and 1st-order digit posi-
tions, etc. As is seen from the counting process, all quantities are represented in a
positional number system by means of only a finite number of digit symbols merely
by introducing more digit positions.
The concept of counting is illustrated in Table 2.2, where the first 32 integers in
the binary, ternary, octal, and hexadecimal number systems are given, along with
their decimal equivalents.
Table 2.2 The first 32 integers in the binary, ternary, octal, and hexadecimal number
systems, along with their decimal equivalents
Decimal    Binary    Ternary    Octal    Hexadecimal
0          0         0          0        0
1          1         1          1        1
2          10        2          2        2
3          11        10         3        3
4          100       11         4        4
5          101       12         5        5
6          110       20         6        6
7          111       21         7        7
8          1000      22         10       8
9          1001      100        11       9
10         1010      101        12       A
11         1011      102        13       B
12         1100      110        14       C
13         1101      111        15       D
14         1110      112        16       E
15         1111      120        17       F
16         10000     121        20       10
17         10001     122        21       11
18         10010     200        22       12
19         10011     201        23       13
20         10100     202        24       14
21         10101     210        25       15
22         10110     211        26       16
23         10111     212        27       17
24         11000     220        30       18
25         11001     221        31       19
26         11010     222        32       1A
27         11011     1000       33       1B
28         11100     1001       34       1C
29         11101     1002       35       1D
30         11110     1010       36       1E
31         11111     1011       37       1F
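The counting process just described is easy to mimic in a few lines of Python. The sketch below is illustrative only (it is not from the text, and bases up to 10 are assumed so that the digit symbols are simply 0 through 9): the 0th-order digit cycles, and each wraparound carries into the next-higher-order position.

```python
def count_in_base(limit, base):
    """Yield the first `limit` integers as numerals in the given base."""
    digits = [0]                              # least significant digit first
    for _ in range(limit):
        yield "".join(str(d) for d in reversed(digits))
        i = 0
        while True:                           # propagate the carry leftward
            digits[i] += 1
            if digits[i] < base:
                break
            digits[i] = 0                     # wrap around and carry
            i += 1
            if i == len(digits):
                digits.append(0)              # introduce a new digit position

# Reproduce the binary, ternary, and octal columns of Table 2.2
for decimal, (b, t, o) in enumerate(zip(count_in_base(32, 2),
                                        count_in_base(32, 3),
                                        count_in_base(32, 8))):
    print(decimal, b, t, o)
```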
2.3.1 Addition
First, consider two-digit addition. This operation is easily stated in tabular form.
Table 2.3 summarizes two-digit binary addition and two-digit ternary addition. The
entry in each cell corresponds to the sum digit of a + b and a carry if appropriate.
As in the decimal number system, when the sum of the two digits equals or exceeds
the base, a carry is generated to the next-higher-order digit position. Thus, in the
lower right cell of Table 2.3a, corresponding to the binary sum 1,9, + 1) = 10.) =
2,10), the sum digit is 0 and a carry of | is produced. The entries in these tables are
easily checked by counting or using Table 2.2.
When the two numbers being added no longer are each single digits, the addi-
tion tables are still used by forming the sum in a serial manner. That is, after align-
ing the radix points of the two numbers, corresponding digits of the same order are
added along with the carry, if generated, from the previous-order digit addition. The
following examples illustrate the addition process.
     11       = carries
     11010(2) = augend
  +  11001(2) = addend
    110011(2) = sum

     1   1    = carries
      11.01(2) = augend
  +   10.01(2) = addend
     101.10(2) = sum
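The digit-serial procedure used in these examples can be sketched in Python as follows. This is only an illustrative sketch (not from the text): it handles unsigned integers, assumes bases up to 10 so that each digit is a single character 0-9, and adds column by column with a carry exactly as above.

```python
def add_base_r(a, b, base):
    """Add two unsigned integers given as digit strings in the given base."""
    digits_a = [int(d, base) for d in reversed(a)]   # least significant first
    digits_b = [int(d, base) for d in reversed(b)]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        carry, digit = divmod(da + db + carry, base)  # carry when the sum reaches the base
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_base_r("11010", "11001", 2))   # 110011, as in the first example above
```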
2.3.2 Subtraction
The inverse operation of addition is subtraction. Two-digit subtraction in the binary
and ternary number systems is given in Table 2.4. The entry in each cell denotes the
resulting difference digit and if a borrow is needed to obtain the difference digit. As
in decimal subtraction, when a larger digit is subtracted from a smaller digit, it is
Table 2.3  Two-digit addition: (a) binary; (b) ternary

(a)  a + b      b = 0          b = 1
     a = 0        0              1
     a = 1        1          0 and carry 1

(b)  a + b      b = 0          b = 1          b = 2
     a = 0        0              1              2
     a = 1        1              2          0 and carry 1
     a = 2        2          0 and carry 1  1 and carry 1
necessary to perform borrowing. Borrowing is the process of bringing back to the next-
lower-order digit position a quantity equal to the base of the number system. For exam-
ple, in Table 2.4a, the upper right cell denotes the difference 0(2) − 1(2). In order to sub-
tract the larger digit from the smaller digit, borrowing is necessary. Since this is binary
subtraction, a borrow corresponds to bringing back the quantity 2, i.e., the base of the
number system, from which 1 is subtracted. Thus, in the upper right cell of Table 2.4a,
Table 2.4  Two-digit subtraction: (a) binary; (b) ternary

(a)  a − b      b = 0            b = 1
     a = 0        0          1 and borrow 1
     a = 1        1                0

(b)  a − b      b = 0            b = 1            b = 2
     a = 0        0          2 and borrow 1   1 and borrow 1
     a = 1        1                0          2 and borrow 1
     a = 2        2                1                0
the entry “1 and borrow 1” denotes that under the assumption that a borrow from the
next-higher-order minuend digit is performed, the difference digit of 1 results.
As in the case of addition, subtraction tables can be used to form the difference
between two numbers when they each involve more than a single digit. Again, after
the radix points are aligned, subtraction is performed in a serial fashion starting
with the least significant pair of digits. When borrowing is necessary, the borrow is taken from the next-higher-order digit of the minuend. The following examples illustrate the subtraction process.

    2102(3) = minuend
  − 1021(3) = subtrahend
    1011(3) = difference

    1010.22(3) = minuend
  −   21.02(3) = subtrahend
     212.20(3) = difference
2.3.3 Multiplication
The third basic arithmetic operation is multiplication. Table 2.5 summarizes two-
digit multiplication in the binary and ternary number systems. When the multiplier
consists of more than a single digit, a tabular array of partial products is constructed
and then added according to the rules of addition for the base of the numbers in-
volved. The entries in this tabular array must be shifted such that the least signifi-
cant digit of the partial product aligns with its respective multiplier digit. Multipli-
cation is illustrated by the following examples.
EXAMPLE 2.5

      10.11(2) = multiplicand
   ×    101(2) = multiplier
        1011
       0000        array of partial products
      1011
     1101.11(2) = product

EXAMPLE 2.6

      2102(3) = multiplicand
   ×   102(3) = multiplier
      11211
     0000          array of partial products
    2102
    222111(3) = product
Table 2.5  Two-digit multiplication: (a) binary; (b) ternary

(a)  a × b      b = 0    b = 1
     a = 0        0        0
     a = 1        0        1

(b)  a × b      b = 0    b = 1    b = 2
     a = 0        0        0        0
     a = 1        0        1        2
     a = 2        0        2       11
2.3.4 Division
The final arithmetic operation to be considered is division. This process consists of
multiplications and subtractions. The only difference in doing division in a base-r
number system and in decimal is that base-r multiplication and subtractions are per-
formed. The following two examples illustrate binary and ternary division.
EXAMPLE 2.7

                    110.1(2) = quotient
  divisor = 11(2) ) 10100.1(2) = dividend
                      1.0(2) = remainder

EXAMPLE 2.8

                    102(3) = quotient
  divisor = 12(3) ) 2010(3) = dividend
                      2(3) = remainder
The four basic arithmetic operations have been illustrated for only the binary
and ternary number systems. However, the above concepts are readily extendable to
handle any base-r positional number system by constructing the appropriate addi-
tion, subtraction, and multiplication tables.
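The closing remark, that any base only requires the appropriate digit tables, can be illustrated with a short Python sketch. This is my own illustration, not the book's: it generates the two-digit addition and multiplication entries for an arbitrary base in the spirit of Tables 2.3 and 2.5.

```python
def digit_tables(base):
    """Build the two-digit addition and multiplication tables for a base."""
    add, mul = {}, {}
    for a in range(base):
        for b in range(base):
            s_carry, s_digit = divmod(a + b, base)
            p_carry, p_digit = divmod(a * b, base)
            add[(a, b)] = (s_digit, s_carry)     # (sum digit, carry)
            mul[(a, b)] = (p_digit, p_carry)     # (product digit, carry)
    return add, mul

add3, mul3 = digit_tables(3)
print(add3[(2, 2)])   # (1, 1): 2 + 2 = 11 in ternary, i.e. 1 and carry 1
print(mul3[(2, 2)])   # (1, 1): 2 x 2 = 11 in ternary
```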
integer bases greater than 1, integers in one number system become integers in an-
other number system; while fractions in one number system become fractions in an-
other number system. There are two basic procedures for converting numbers in one
number system into another number system: the polynomial method and the itera-
tive method. Both methods involve arithmetic computations. The difference be-
tween the two methods lies in whether the computations are performed in the source
or target number system. In this section the polynomial method is presented; the it-
erative method is discussed in the next section.
A number expressed in a base-r1 number system has the form

N_{r1} = d_{n-1(1)} × r_{1(1)}^{n-1} + ··· + d_{1(1)} × r_{1(1)}^{1} + d_{0(1)} × r_{1(1)}^{0} + d_{-1(1)} × r_{1(1)}^{-1} + ··· + d_{-m(1)} × r_{1(1)}^{-m}    (2.1)

The second subscript, (1), associated with each digit symbol d_i and base symbol r_1 in these equations is used to emphasize the fact that these are base-r1 quantities. Furthermore, the base of a number system expressed in its own number system, i.e., r_{1(1)}, is always 10. This is readily seen in Table 2.2, where the quantity 2 in binary is 10(2), the quantity 3 in ternary is 10(3), the quantity 8 in octal is 10(8), etc. This implies that Eq. (2.1) can be written as

N_{r1} = d_{n-1(1)} × 10_{(1)}^{n-1} + ··· + d_{1(1)} × 10_{(1)}^{1} + d_{0(1)} × 10_{(1)}^{0} + d_{-1(1)} × 10_{(1)}^{-1} + ··· + d_{-m(1)} × 10_{(1)}^{-m}    (2.2)

Equation (2.2) is the general form of any number in a positional number system expressed in its own number system. Since this number denotes a quantity, the equivalent quantity is obtained by simply replacing each of the quantities in the right side of Eq. (2.2) by its equivalent quantity in the base-r2 number system. That is, by replacing each of the digit symbols and the weighting factors as governed by the position of each digit in Eq. (2.2) by its equivalent quantity in base r2, Eq. (2.2) becomes

N_{r2} = d_{n-1(2)} × r_{1(2)}^{n-1} + ··· + d_{1(2)} × r_{1(2)}^{1} + d_{0(2)} × r_{1(2)}^{0} + d_{-1(2)} × r_{1(2)}^{-1} + ··· + d_{-m(2)} × r_{1(2)}^{-m}    (2.3)
To illustrate the above algorithm, consider the conversion of the binary number 1101 into its equivalent decimal number:*

1101(2) = 1(2) × 10(2)^3 + 1(2) × 10(2)^2 + 0(2) × 10(2)^1 + 1(2) × 10(2)^0
        ≡ 1(10) × 2^3 + 1(10) × 2^2 + 0(10) × 2^1 + 1(10) × 2^0
        = 8 + 4 + 0 + 1
        = 13(10)

EXAMPLE 2.9
Convert the binary number 101.011 into decimal.

101.011(2) ≡ 1(10) × 2^2 + 0(10) × 2^1 + 1(10) × 2^0 + 0(10) × 2^{-1} + 1(10) × 2^{-2} + 1(10) × 2^{-3}
           = 4 + 0 + 1 + 0 + 0.25 + 0.125
           = 5.375(10)
It should be noted in Example 2.10 that the fraction part of a number having a finite number of digits in one number system can convert into a fraction part with an infinite number of digits in another number system.

*In replacing the digits of base r1 by those of base r2, Table 2.2 is used to determine the necessary equivalences. The equivalence symbol (≡) is used in these equations to emphasize the fact that the same quantity is being denoted even though it is expressed in a different number system.
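The polynomial method is easy to mechanize. The sketch below is mine (the function name and its handling of the radix point are illustrative assumptions, not the book's procedure); it weights each digit by the appropriate positive or negative power of the base using ordinary decimal arithmetic.

```python
def polynomial_convert(number, base):
    """Convert a base-`base` numeral such as '101.011' to a decimal value."""
    integer_part, _, fraction_part = number.partition(".")
    value = 0.0
    for k, d in enumerate(reversed(integer_part)):
        value += int(d, base) * base ** k          # 0th-, 1st-, 2nd-order digits ...
    for k, d in enumerate(fraction_part, start=1):
        value += int(d, base) * base ** (-k)       # negative-order digits
    return value

print(polynomial_convert("1101", 2))      # 13.0, as in the illustration above
print(polynomial_convert("101.011", 2))   # 5.375 (Example 2.9)
```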
EXAMPLE 2.14
Conversion of 43(10) into its binary equivalent proceeds as follows:

43(10) ÷ 2(10) = 21(10) + remainder of 1(10);   1(10) = 1(2) = 0th-order digit
21(10) ÷ 2(10) = 10(10) + remainder of 1(10);   1(10) = 1(2) = 1st-order digit
10(10) ÷ 2(10) = 5(10) + remainder of 0(10);    0(10) = 0(2) = 2nd-order digit
5(10) ÷ 2(10) = 2(10) + remainder of 1(10);     1(10) = 1(2) = 3rd-order digit
2(10) ÷ 2(10) = 1(10) + remainder of 0(10);     0(10) = 0(2) = 4th-order digit
1(10) ÷ 2(10) = 0(10) + remainder of 1(10);     1(10) = 1(2) = 5th-order digit

Therefore, 43(10) = 101011(2)
EXAMPLE 2.16
Conversion of 1001011(2) into its decimal equivalent by the iterative method requires repeated base-2 division by the quantity 10 expressed in binary, which is 1010(2). The steps of the conversion are:

1001011(2) ÷ 1010(2) = 111(2) + remainder of 101(2);   101(2) = 5(10) = 0th-order digit
111(2) ÷ 1010(2) = 0(2) + remainder of 111(2);         111(2) = 7(10) = 1st-order digit

Therefore, 1001011(2) = 75(10)
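A short Python sketch of the repeated-division idea follows; it is illustrative only (not from the text). Each division by the target base peels off the next-higher-order digit as the remainder, exactly as in Examples 2.14 and 2.16.

```python
def integer_to_base(n, target_base):
    """Return the digits of the nonnegative integer n in the target base."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, target_base)
        digits.append("0123456789ABCDEF"[remainder])
    return "".join(reversed(digits))       # remainders emerge lowest order first

print(integer_to_base(43, 2))              # 101011 (Example 2.14)
print(integer_to_base(0b1001011, 10))      # 75     (Example 2.16)
```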
Consider now an integer quantity that is expressed in base r1 and is to be converted into base r2. In the base-r2 number system it is written as the string of digits d_{n-1(2)} ··· d_{1(2)}d_{0(2)}, which is the general form of a number in its own system, where d_{i(2)} denotes a digit in the base-r2 number system. Thus, the following equivalence is established:

N_{(r1)} ≡ d_{n-1(2)} × 10_{(2)}^{n-1} + d_{n-2(2)} × 10_{(2)}^{n-2} + ··· + d_{1(2)} × 10_{(2)}^{1} + d_{0(2)} × 10_{(2)}^{0}    (2.4)

If N_{(r1)} is divided by the target base expressed as a base-r1 quantity, r_{2(1)}, using base-r1 arithmetic, then the remainder of the division is d_{0(1)}. Recall that d_{0(1)} is the base-r1 representation of d_{0(2)}. It now follows that if this remainder, d_{0(1)}, is converted into its equivalent digit in base r2 by Table 2.2, then the 0th-order digit of the base-r2 number is obtained.

The above argument can now be applied to the resulting integer quotient,

d_{n-1(2)} × 10_{(2)}^{n-2} + d_{n-2(2)} × 10_{(2)}^{n-3} + ··· + d_{1(2)} × 10_{(2)}^{0}

If the integer quotient is divided by r_{2(1)}, upon converting the remainder, d_{1(1)}, into a base-r2 digit, then the 1st-order digit of the base-r2 number is obtained. By repeating this process until the resulting quotient is zero, the coefficients of Eq. (2.4) are generated.
A similar iterative procedure handles the fraction part of a number. If the base-r1 fraction N_{F(r1)} is multiplied by r_{2(1)} using base-r1 arithmetic, the product has the form

r_{2(1)} × N_{F(r1)} = d_{-1(1)} + N'_{F(r1)}

where d_{-1(1)} is the integer part of the product and N'_{F(r1)} is its fraction part. If the integer part of this product, d_{-1(1)}, is converted into its equivalent digit in base r2, then the most significant digit of the base-r2 fraction is established. Repeating this process of multiplying the fraction part of the above product by r_{2(1)} and converting the resulting integer part, the next-most-significant digit of the base-r2 fraction is generated. By continuing this multiplication and converting process, the remaining digits of the base-r2 fraction are obtained one at a time starting with the most significant digit of the fraction.

EXAMPLE 2.18
Conversion of 0.8125(10) into its equivalent binary fraction proceeds as follows:

0.8125(10) × 2(10) = 1.625(10);   1(10) = 1(2) = most significant fraction digit
0.625(10) × 2(10) = 1.25(10);     1(10) = 1(2)
0.25(10) × 2(10) = 0.5(10);       0(10) = 0(2)
0.5(10) × 2(10) = 1.0(10);        1(10) = 1(2) = least significant fraction digit

Therefore, 0.8125(10) = 0.1101(2)
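A sketch of the repeated-multiplication procedure in Python follows (illustrative only, not from the text). Because the process need not terminate, the number of digits produced is capped.

```python
def fraction_to_base(fraction, target_base, max_digits=12):
    """Convert a fraction 0 <= fraction < 1 to digits in the target base."""
    digits = []
    for _ in range(max_digits):
        fraction *= target_base
        integer_part = int(fraction)          # this is the next fraction digit
        digits.append("0123456789ABCDEF"[integer_part])
        fraction -= integer_part              # keep only the fraction part
        if fraction == 0:
            break
    return "0." + "".join(digits)

print(fraction_to_base(0.8125, 2))            # 0.1101 (Example 2.18)
```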
EXAMPLE 2.19
Conversion of 201.12(3) into its binary equivalent proceeds as follows.

Integer part:
201(3) ÷ 2(3) = 100(3) + remainder of 1(3);   1(3) = 1(2) = least significant integer digit
100(3) ÷ 2(3) = 11(3) + remainder of 1(3);    1(3) = 1(2)
11(3) ÷ 2(3) = 2(3) + remainder of 0(3);      0(3) = 0(2)
2(3) ÷ 2(3) = 1(3) + remainder of 0(3);       0(3) = 0(2)
1(3) ÷ 2(3) = 0(3) + remainder of 1(3);       1(3) = 1(2) = most significant integer digit
Therefore, 201(3) = 10011(2)

Fraction part:
0.12(3) × 2(3) = 1.01(3);   1(3) = 1(2) = most significant fraction digit
0.01(3) × 2(3) = 0.02(3);   0(3) = 0(2)
0.02(3) × 2(3) = 0.11(3);   0(3) = 0(2)
0.11(3) × 2(3) = 0.22(3);   0(3) = 0(2)
0.22(3) × 2(3) = 1.21(3);   1(3) = 1(2)
0.21(3) × 2(3) = 1.12(3);   1(3) = 1(2)
At this point the fraction 0.12(3) recurs, so the digit pattern repeats:
0.12(3) = 0.100011 100011 ···(2)

Combining the integer and fraction parts:
201.12(3) = 10011.100011 ···(2) (repeating)

Because 2^3 = 8, conversions between binary and octal numbers are especially simple. To convert a binary number into its octal equivalent, the bits are blocked off in groups of three, working left and right from the binary point, with leading and trailing 0's added to complete the groups if necessary. For example, the binary number 11111101.0011(2) is blocked off as

011 111 101 . 001 100(2)
Then, each group of three bits is converted into its equivalent octal digit. For the above number, we have

  011 111 101 . 001 100(2)
   3   7   5  .  1   4

Thus, the octal equivalent of 11111101.0011(2) is 375.14(8).
By reversing the above procedure, octal numbers are readily converted into
their equivalent binary numbers. In particular, to convert an octal number into its
equivalent binary number, each octal digit is replaced by its equivalent three binary
digits. For example, the octal number 173.24(8) is converted into binary as follows:

    1   7   3  .  2   4(8)
  001 111 011 . 010 100

that is, 173.24(8) = 001111011.010100(2).
It is relatively simple to justify this algorithm for number conversions between base 2 and base 8. Consider the binary number N(2) = ··· d_8 d_7 d_6 d_5 d_4 d_3 d_2 d_1 d_0 . d_{-1} d_{-2} d_{-3} ···. Then, its decimal equivalent is given by

N = ··· + (d_8 × 2^2 + d_7 × 2^1 + d_6 × 2^0) × 8^2 + (d_5 × 2^2 + d_4 × 2^1 + d_3 × 2^0) × 8^1
      + (d_2 × 2^2 + d_1 × 2^1 + d_0 × 2^0) × 8^0 + (d_{-1} × 2^2 + d_{-2} × 2^1 + d_{-3} × 2^0) × 8^{-1} + ···
The right side of this last equation has the form of an octal number with its coeffi-
cients, lying in the range 0 to 7, given in the form of binary numbers (where both
the binary and octal forms are expressed in the decimal number system). Hence, by
replacing each group of three bits by its equivalent octal digit, the conversion from
binary to octal is achieved. By reversing this argument, the procedure for convert-
ing octal numbers into binary follows.
A similar procedure exists for conversions between binary and hexadecimal
numbers since 2^4 = 16. In this case, however, four binary digits are associated with
a single hexadecimal digit. Thus, in converting a binary number into a hexadecimal
number, the bits of the binary number are blocked off (working left and right from
the hexadecimal point) in groups of four (adding leading and trailing 0’s to com-
plete a block if necessary), and each group of four bits is replaced by its equivalent
hexadecimal digit. Conversely, when converting a hexadecimal number into a bi-
nary number, each hexadecimal digit is simply replaced by its equivalent four
binary digits. For example, the binary number 1010110110.111(2) is converted into its hexadecimal equivalent as follows:

  0010 1011 0110 . 1110(2)
    2    B    6  .  E(16)

while the hexadecimal number 3AB.2(16) is converted into its binary equivalent as follows:

    3    A    B  .  2(16)
  0011 1010 1011 . 0010(2)
The binary number system is the most frequently used number system in digital
systems. Hence the numbers in the memory and the various registers are strings of
0’s and 1’s. In these systems, it is often more convenient to regard these numbers as
being either octal or hexadecimal rather than binary when referring to them since
fewer digits are involved.
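The grouping rule for the integer part of a binary number can be sketched in a few lines of Python; this is an illustrative sketch of mine, not the book's procedure (the fraction part would be grouped the same way, working rightward with trailing 0's).

```python
def binary_to_grouped_base(bits, group):
    """Regroup an integer bit string: group=3 gives octal, group=4 gives hex."""
    bits = bits.zfill(-(-len(bits) // group) * group)      # add leading 0's
    chunks = [bits[i:i + group] for i in range(0, len(bits), group)]
    return "".join("0123456789ABCDEF"[int(c, 2)] for c in chunks)

print(binary_to_grouped_base("11111101", 3))    # 375 (octal), as in the text
print(binary_to_grouped_base("1010110110", 4))  # 2B6 (hexadecimal)
```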
[Figure: a number line showing the true origin and an offset origin placed to its left; points such as −3 and −5 relative to the true origin appear as positive measurements taken rightward from the offset origin.]
the reference for the measurement is known. The asterisk was used in writing the
number simply to indicate that the measurement is relative to the offset origin
rather than the true origin. Furthermore, if the largest negative quantity to be rep-
resented is known, then it is always possible to place the offset origin suffi-
ciently far to the left of the true origin so that all negative quantities are mea-
sured rightward of the offset origin. Since in digital systems the number of digits
allocated to a quantity is fixed, the largest negative number that can be repre-
sented is always known.
Again consider a point to the left of the true origin which is denoted as an (unsigned) number N₁ representing a measurement relative to the true origin. When this same point is denoted as an (unsigned) number N₂ representing a measurement relative to the offset origin, N₂ is called the complement of N₁. Thus, in Fig. 2.2, it is seen that the complement depends upon where the offset origin is placed. For a quantity having n integer digits and m fraction digits, if the offset origin is placed 10^n units to the left of the true origin, then N₂ = 10^n − N₁, which is called the r's-complement of N₁; if the offset origin is instead placed 10^n − 10^{-m} units to the left of the true origin, then N₂ = 10^n − 10^{-m} − N₁, which is called the (r − 1)'s-complement of N₁.
For the special case of decimal numbers, the r's-complement is called the 10's-complement (or tens-complement) and the (r − 1)'s-complement is called the 9's-complement (or nines-complement); while for binary numbers, the r's-complement is called the 2's-complement (or twos-complement) and the (r − 1)'s-complement is called the 1's-complement (or ones-complement).
The purpose of introducing the complements of numbers is to provide an-
other means for denoting signed quantities. These are known as complement
representations. Depending upon the offset used, signed numbers are expressed
in either the r’s-complement representation or (r — 1)’s-complement representa-
tion. In both cases the true origin is used for dealing with positive quantities, and
an offset origin for dealing with negative quantities. In addition, a “sign digit”
is appended to a number. The sign digit serves to indicate which origin is being
used for a measurement. The binary digit 0 is appended to denote a positive
number, i.e., a measurement relative to the true origin, and the digit 1 to denote a negative number, i.e., a measurement relative to the offset origin. For a quantity N having n integer and m fraction digits, the representations of a negative number are as follows:*

Negative numbers:
Sign-magnitude representation:                1,N
Signed r's-complement representation:         1,(10^n − N)
Signed (r − 1)'s-complement representation:   1,(10^n − 10^{-m} − N)

*Recall the base of a number expressed in its own number system is 10.
*The single exception to this rule is 1,00···0, representing the quantity −10^n in the r's-complement representation, which cannot be complemented.
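Following the definitions of the r's- and (r − 1)'s-complements given above, the sketch below computes both complements of a base-r numeral. It is my own illustration (the function names and the scaled-integer bookkeeping are assumptions), restricted to bases up to 10, and it does not handle the footnoted exception (the numeral 0).

```python
def complements(numeral, base):
    """Return the (r-1)'s- and r's-complements of a base-`base` numeral."""
    integer_part, _, fraction_part = numeral.partition(".")
    n, m = len(integer_part), len(fraction_part)
    value = int((integer_part + fraction_part) or "0", base)   # N scaled by base^m
    top = base ** (n + m) - 1                                   # (10^n - 10^-m) scaled by base^m

    def render(v):
        digits = []
        for _ in range(n + m):
            v, d = divmod(v, base)
            digits.append(str(d))
        s = "".join(reversed(digits))
        return s[:n] + ("." + s[n:] if m else "")

    diminished = render(top - value)           # (r-1)'s-complement
    radix = render(top - value + 1)            # r's-complement
    return diminished, radix

print(complements("146", 10))        # ('853', '854'): 9's- and 10's-complements
print(complements("01011.10", 2))    # ('10100.01', '10100.10')
```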
Consider the two decimal numbers N₁ = 532 and N₂ = 146. The 10's-complement of 146 is N̄₂ = 854. The difference N₁ − N₂ is now obtained by forming the sum N₁ + N̄₂.

Conventional              Subtraction by addition
subtraction               of the 10's-complement
    N₁ =  532                 N₁ =   532
  − N₂ = −146               + N̄₂ = + 854
N₁ − N₂ = 386             N₁ + N̄₂ = 1|386

Since an end carry has occurred, it is known that the result is positive.

*It should be carefully noted that a negative answer always appears in complement form.
To illustrate the effect of a negative difference, again consider the decimal numbers N₁ = 532 and N₂ = 146, where the 10's-complement of 532 is N̄₁ = 468. The difference N₂ − N₁ is obtained by forming the sum N₂ + N̄₁ = 146 + 468 = 614. Since no end carry occurs, the result is negative and appears in 10's-complement form; 614 is the 10's-complement of 386, so N₂ − N₁ = −386.

EXAMPLE 2.26
Consider the binary numbers N₁ = 11101.11 and N₂ = 01011.10. The 2's-complement of N₂ is N̄₂ = 10100.10. The difference N₁ − N₂ is obtained by forming the sum N₁ + N̄₂ = 11101.11 + 10100.10 = 1|10010.01. Since an end carry has occurred, the result is positive: N₁ − N₂ = 10010.01(2).

Using the same numbers as the above example, the difference N₂ − N₁ = N₂ + N̄₁ is obtained as follows, where N̄₁ = 00010.01:* N₂ + N̄₁ = 01011.10 + 00010.01 = 01101.11. Since no end carry occurs, the result is negative and appears in 2's-complement form.

*Note that leading and trailing 0's are used in N₂ so that both N₁ and N₂ have the same number of digits.
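The end-carry rule just illustrated can be sketched in Python. This is an illustrative sketch only (binary integers with a fixed number of digit positions are assumed for brevity; the fractional cases above work the same way after scaling).

```python
def subtract_by_complement(n1, n2, width):
    """Compute n1 - n2 using two's-complement addition on `width` bits."""
    modulus = 1 << width
    total = n1 + (modulus - n2)                  # add the 2's-complement of n2
    end_carry = total >= modulus                 # carry out of the highest position
    result = total % modulus
    if end_carry:
        return f"+{result:0{width}b}"            # positive difference
    return f"negative, in complement form: {result:0{width}b}"

print(subtract_by_complement(0b11101, 0b01011, 5))   # +10010  (29 - 11 = 18)
print(subtract_by_complement(0b01011, 0b11101, 5))   # 01110, the 2's-complement of 10010
```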
*Note that under the assumption of n integer digits in the operands, the highest-order digit position is the (n − 1)st-order. Thus, 10^n corresponds to a 1 in the nth-order digit position.
‘Although this approach for handling the sign digits appears rather awkward, when dealing with
signed binary numbers no inconvenience is encountered since this simply implies that it is not
necessary to distinguish the sign digit from the rest of the measurement digits. Alternatively, for
nonbinary signed numbers, frequently the digit b − 1 is used as the sign digit of a negative quantity.
For example, in the case of decimal numbers, the digit 9 is used to denote a negative quantity rather
than 1. If this is done, then regular base-b addition can be performed on the sign digits instead of
binary addition.
EXAMPLE 2.28
Consider the signed decimal numbers N₁ = 0,856.7 and N₂ = 0,275.3. The signed 10's-complement of 0,275.3 is 1,724.7. The difference N₁ − N₂ is obtained by forming the sum N₁ + N̄₂ (binary addition is performed on the sign digits, and the carry out of the sign position is discarded).

    N₁ =  856.7                 N₁ =   0,856.7
  − N₂ = −275.3               + N̄₂ = + 1,724.7
N₁ − N₂ = 581.4             N₁ + N̄₂ = 1|0,581.4

EXAMPLE 2.29
Using the numbers N₁ and N₂ of the previous example, N₂ − N₁ is formed as follows, where the signed 10's-complement of 0,856.7 is 1,143.3:

    N₂ =  275.3                 N₂ =   0,275.3
  − N₁ = −856.7               + N̄₁ = + 1,143.3
N₂ − N₁ = −581.4            N₂ + N̄₁ =  1,418.6

The 1 in the sign digit position indicates that the answer is negative and the measurement digits are in the 10's-complement form.

    N₁ =  11011.01               N₁ =   0,11011.01
  − N₂ = −10110.10             + N̄₂ = + 1,01001.10
N₁ − N₂ = 00100.11           N₁ + N̄₂ = 1|0,00100.11
Although the above discussion was based on the subtraction of two signed
positive numbers, the concept presented applies to the algebraic addition and
subtraction of any two signed numbers in r’s-complement representation. As
long as the measurement portions of the numbers are added, possibly after form-
ing the signed r’s-complement of the subtrahend when subtraction is specified,
digit position by digit position using base-r arithmetic and the signs are added
using binary addition, then the result is algebraically correct.* For example, if
one of the operands is negative, and hence in its r’s-complement form, then
under the specification of addition the sum of the two operands results in the dif-
ference being obtained. On the other hand, if the difference between two
operands is to be calculated, then the signed r’s-complement of the subtrahend is
simply added to the minuend. If the subtrahend is initially a negative quantity
(and thereby expressed in r’s-complement form), then taking its signed r’s-com-
plement results in a positive representation since the negative of a negative quan-
tity is a positive quantity.
There is, however, one complication that can arise when dealing with algebraic
addition and subtraction of signed numbers. Depending upon the signs of the
operands, when the two signed numbers are added, possibly after the signed r’s-
complement of the subtrahend is formed when a subtraction operation is specified,
it is possible that the operands are both positive or both negative. In such a case, if p
digits are allocated to the measurement portion of the operands, then their sum
could require p + | digits to properly indicate the measurement portion. However,
it has been assumed in this discussion that a fixed number of digits are available for
the measurement representation. When the resulting sum requires more digits than
are available, an overflow condition† is said to occur. If an overflow occurs, then the result obtained is not algebraically correct.
*It is seen shortly that there is a constraint that the algebraic result must be expressible with the number
of digits allocated for the measurement portion.
‘The reader should carefully note that an overflow condition is not the same as the end carry discussed
previously.
Consider the signed binary numbers N₁ = 0,11011.01 and N₂ = 0,10110.10. When the numbers are added to form N₁ + N₂, the following results:

     N₁ =   0,11011.01
   + N₂ = + 0,10110.10
N₁ + N₂ =   1,10001.11

In this case an overflow condition has occurred since the addition of two positive numbers results in a negative sum. The overflow condition is a consequence of an insufficient number of measurement digits to properly represent the sum. Alternatively, in the above addition, an overflow is detected since there is a carry into the sign digit position while no carry results from the addition of the sign digits.
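The overflow test just described is easy to express in code. The following sketch is mine, not the book's (signed binary integers of a fixed width, sign digit included, are assumed; the fractional example above behaves identically after scaling): overflow is flagged when the carry into the sign position differs from the carry out of it.

```python
def add_with_overflow(a_bits, b_bits):
    """Add two equal-length signed binary strings (sign digit first)."""
    width = len(a_bits)
    a, b = int(a_bits, 2), int(b_bits, 2)
    total = a + b
    result = total % (1 << width)                # any end carry is discarded
    carry_out_of_sign = (total >> width) & 1
    meas = 1 << (width - 1)                      # size of the measurement field
    carry_into_sign = ((a % meas + b % meas) >> (width - 1)) & 1
    overflow = carry_into_sign != carry_out_of_sign
    return format(result, f"0{width}b"), overflow

# 0,11011 + 0,10110: two positive operands whose sum needs one more measurement digit
print(add_with_overflow("011011", "010110"))     # ('110001', True) -> overflow detected
```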
It should be noted that when the (r − 1)'s-complement is used, any end carry resulting upon forming the sum is added to the least-significant-digit position to obtain the final result.

Again let N₁ = 85.2(10) and N₂ = 32.5(10). The 9's-complement of N₁ is N̄₁ = 14.7. To obtain the difference N₂ − N₁, the sum N₂ + N̄₁ = 32.5 + 14.7 = 47.2 is formed. In this case, no end carry occurred when adding. Thus, the result is in 9's-complement form.
*It should be noted that a zero difference appears in (r — 1)’s-complement form and hence as a negative
quantity. The single exception is when a negative zero is subtracted from a positive zero. In this case, a
positive zero is obtained since after the negative zero is complemented, a positive zero is added to a
positive zero.
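The end-around-carry rule can be sketched in Python as follows. This is an illustrative sketch of mine (decimal integers with a fixed number of digits are assumed for brevity; the examples in the text use the same numbers scaled by ten).

```python
def subtract_by_nines_complement(n1, n2, num_digits):
    """Compute n1 - n2 with 9's-complement addition on `num_digits` digits."""
    nines = 10 ** num_digits - 1
    total = n1 + (nines - n2)                    # add the 9's-complement of n2
    if total > nines:                            # an end carry occurred
        total = total - (nines + 1) + 1          # drop the carry, add it around to the LSD
        return f"+{total}"
    return f"negative, in 9's-complement form: {total}"

print(subtract_by_nines_complement(852, 325, 3))  # +527
print(subtract_by_nines_complement(325, 852, 3))  # 472, the 9's-complement of 527
```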
EXAMPLE 2.37
Let N₁ = 0,54.2(10) and N₂ = 0,32.8(10). The signed 9's-complement of N₂ is N̄₂ = 1,67.1. To obtain N₁ − N₂, the sum N₁ + N̄₂ = 0,54.2 + 1,67.1 = 1|0,21.3 is formed; adding the end-around carry to the least significant digit gives 0,21.4, that is, N₁ − N₂ = +21.4.
* Although this approach for handling the sign digits appears rather awkward, when dealing with
signed binary numbers no inconvenience is encountered since this simply implies that it is not
necessary to distinguish the sign digit from the rest of the measurement digits. Alternatively, for
nonbinary signed numbers, frequently the digit b − 1 is used as the sign digit of a negative quantity.
For example, in the case of decimal numbers, the digit 9 is used to denote a negative quantity rather
than 1. If this is done, then regular base-b addition can be performed on the sign digits instead of
binary addition.
"As is discussed shortly, adding two signed numbers that are both positive or both negative can cause an
overflow condition, in which case the algebraic result is incorrect.
EXAMPLE 2.38
Again let N₁ = 0,54.2(10) and N₂ = 0,32.8(10). The signed 9's-complement of N₁ is N̄₁ = 1,45.7. To obtain N₂ − N₁:

Conventional              Subtraction by addition
subtraction               of the 9's-complement
    N₂ =  32.8                N₂ =    0,32.8
  − N₁ = −54.2              + N̄₁ = + 1,45.7
N₂ − N₁ = −21.4           N₂ + N̄₁ =  1,78.5 = Negative difference in
                                              9's-complement form
EXAMPLE 2.39
Let N₁ = 0,110.101(2) and N₂ = 0,010.110(2). The signed 1's-complement of N₂ is N̄₂ = 1,101.001. To obtain N₁ − N₂, the sum N₁ + N̄₂ = 0,110.101 + 1,101.001 = 1|0,011.110 is formed; adding the end-around carry to the least significant digit gives 0,011.111, that is, N₁ − N₂ = +011.111(2).
EXAMPLE 2.40
Using the numbers of the previous example, N₂ − N₁ is obtained with the signed 1's-complement of N₁, which is N̄₁ = 1,001.010. Forming N₂ + N̄₁ = 0,010.110 + 1,001.010 = 1,100.000, no end carry occurs; the sign digit 1 indicates a negative result whose measurement digits are in 1's-complement form (100.000 is the 1's-complement of 011.111).
In the above discussion, N,; and N, were assumed to be two signed positive
numbers. The remarks at the end of Sec. 2.8 regarding the algebraic addition and
subtraction of signed numbers and the concept of overflow are equally applicable
for numbers in the (r — 1)’s-complement representation. That is, when the operands
are in signed form, their base-r addition, with the possible need to handle the end-
around carry, of the measurement digits and binary addition of the sign digits gives
the correct algebraic result subject to the constraint of an overflow condition. Recall
that an overflow condition is the lack of sufficient measurement digits to represent a
quantity. When an overflow condition occurs, the result is not algebraically correct.
The reader should carefully note that overflow is not the end-around carry needed to
achieve the valid algebraic result. The detection of an overflow when using the
signed (r — 1)’s-complement representation of numbers is exactly the same as
when using the signed r’s-complement representation.
2.10 CODES
Because of the availability and reliability of circuits that have two physical states
associated with them, the binary number system is a natural choice for data han-
dling and manipulation in digital systems. To one of these physical states of the cir-
cuit is associated the binary digit 0, while the binary digit 1 is associated with the
other physical state. On the other hand, humans prefer the decimal number system.
In addition, it is often desirable to have a digital system handle information other
than numerical quantities, such as letters of the alphabet. A solution to this problem
is to encode each of the decimal symbols, letters of the alphabet, and other special
symbols by a unique string of binary digits, called a code group. In this way the dig-
ital system considers each code group as a single entity and can process and manip-
ulate the elements comprising these code groups as binary digits.
*In actuality, there are 16!/6! 4-bit codes for the 10 decimal digits.
†Because of its popularity, this coding scheme is often simply referred to as BCD.
In the 8421 code, the 0th-order bit in the code group has the weight 1, the 1st-order bit has the weight 2, the 2nd-order bit has the weight 4, and the 3rd-order bit has the weight 8. In general, if the 4 bits of a code group are written as b₃b₂b₁b₀ and the weights for the corresponding bits are w₃, w₂, w₁, and w₀, then the decimal digit N is given by

N = b₃ × w₃ + b₂ × w₂ + b₁ × w₁ + b₀ × w₀
using base-10 arithmetic. To form the 8421 BCD representation of a decimal num-
ber, each decimal digit is simply replaced by its 4-bit code group. For example, the
8421 BCD representation of the decimal number 392 is 001110010010. It is impor-
tant to note that the number of bits within each code group must be fixed to avoid
any ambiguity in interpreting a coded number. Furthermore, it should be understood
that a coded decimal number is not the binary equivalent of the decimal number as
was discussed earlier in this chapter.
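The digit-by-digit encoding and the weighted-sum formula above can be sketched in Python; this is my own illustration, not the book's procedure.

```python
WEIGHTS_8421 = (8, 4, 2, 1)

def to_bcd(decimal_string):
    """Encode a decimal numeral, digit by digit, in the 8421 BCD code."""
    return "".join(format(int(d), "04b") for d in decimal_string)

def code_group_value(group, weights=WEIGHTS_8421):
    """Recover the decimal digit represented by a 4-bit weighted code group."""
    return sum(int(b) * w for b, w in zip(group, weights))

print(to_bcd("392"))              # 001110010010, as in the text
print(code_group_value("1001"))   # 9
```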
There are numerous other weighted codes. Among these are the 2421 code and
the 5421 code, both of which are shown in Table 2.7. Again the code group for each
decimal digit consists of 4 bits, and the weights associated with the bits in each code
group are given by the name of the code.
Although the 8421 code is the most common BCD scheme, the 2421 code does
have an important property not possessed by the 8421 code. In some codes, the 9's-
complement of each decimal digit is obtained by forming the 1's-complement of its
respective code group. That is, if each 0 is replaced by a 1 and each 1 is replaced by
a 0 in the code group for the decimal digit X, then the code group for the decimal
digit 9 − X results. Codes having this property are said to be self-complementing. As
indicated in Table 2.7, the 2421 code does have this property. For example, the 9's-
complement of 7 is 2, and the 1's-complement of 1101 is 0010. As a consequence,
the 9’s-complement of an entire decimal number is obtained by forming the 1’s-
complement of its coded representation. As an illustration, the decimal number 328
appears as 001100101110 in 2421 code. Forming the 1’s-complement of this coded
representation results in 110011010001. This is the coded form for 671, which is
the 9’s-complement of 328. A self-complementing coding scheme is convenient
in a digital system where subtraction is performed by the addition of the 9’s-
complement.
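The self-complementing property is easy to check mechanically. The sketch below assumes the commonly used 2421 assignment (Table 2.7 is not reproduced here), under which the 1's-complement of each code group is the code group of the 9's-complement of the digit.

# Sketch: checking the self-complementing property of a 2421 assignment.
CODE_2421 = {
    0: "0000", 1: "0001", 2: "0010", 3: "0011", 4: "0100",
    5: "1011", 6: "1100", 7: "1101", 8: "1110", 9: "1111",
}

def ones_complement(group):
    return "".join("1" if b == "0" else "0" for b in group)

# The 1's-complement of the code group for X is the code group for 9 - X.
assert all(ones_complement(CODE_2421[x]) == CODE_2421[9 - x] for x in range(10))

# The property extends digit by digit to whole numbers, e.g. 328 -> 671.
coded_328 = "".join(CODE_2421[int(d)] for d in "328")
print(coded_328, ones_complement(coded_328))  # 001100101110 110011010001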
It is even possible to have codes with negative weights. An example of such a
code is the 7536 code, where the bar over the 6 indicates that the weight associated
with the Oth-order bit in the code group is —6. The code group for each decimal
digit in the 7536 code is enumerated in Table 2.7. It should be noted that this is also
a self-complementing code.
Although all the codes mentioned thus far have 4 bits, there are also weighted
codes having more than 4 bits. The best known of these is the biquinary code. As
indicated in Table 2.7, the weights for the biquinary code are 5043210. The name of
the biquinary code is derived from the fact that for each decimal digit, two 1’s ap-
pear in the code group—one of these among the first 2 bits of the code group (hence
the prefix bi) and the other among the remaining 5 bits of the code group (hence the
suffix quinary).
A well-known 4-bit nonweighted BCD scheme is the excess-three code (or XS-
3 code) shown in Table 2.8. This code is derived by adding the binary equivalent of
the decimal 3, that is, 0011₍₂₎, to each of the code groups of the 8421 code. Thus, in
this coding scheme, the decimal number 961 appears as 110010010100. As is evi-
dent from Table 2.8, this code is self-complementing.
The final decimal code to be introduced is the 2-out-of-5 code, also shown in
Table 2.8. This is a nonweighted code* in which exactly 2 of the 5 bits in each code
group are 1's, the remaining 3 bits being 0's. An advantage of this code, as well as
the biquinary code, is that it has error-detecting properties. Error detection is dis-
cussed in the next section.
*Except for the coding of the numeral 0, it is a weighted code having the weights 74210.
Figure 2.3  U.S. Postal Service bar code for the ZIP code 14263-1045; the frame bars and the check sum digit are indicated.
An interesting application of the 2-out-of-5 code is the U.S. Postal Service bar
code. Figure 2.3 shows the bar code that would appear on a piece of mail having the
ZIP code 14263-1045. Each digit is coded using the 2-out-of-5 code where the bi-
nary digit 0 of a code group occurs as a short bar and the binary digit 1 of a code
group occurs as a tall bar. The ZIP code appears between two tall bars, called frame
bars, which serve to define the beginning and ending of the bar code. The frame
bars are used for aligning the scanner which reads the bar code. A final check sum
digit is also included in the bar code, e.g., the digit 4 in Fig. 2.3. The check sum
digit is used for error correction and is discussed in Sec. 2.12.
*Thus, the angular position is given as increments of 360/16 = 22.5°. Greater refinement is easily
achieved by using more digits and an encoder with more sectors.
If the conventional binary encoder wheel is used, then when moving from sec-
tor 3 to 4, the code word must change from 0011 to 0100. This requires 3 bits to
change value. On the other hand, in the case of the Gray code encoder, the code
word must change from 0010 to 0110, which involves only a change of 1 bit. Now
assume that the photosensing devices are out of alignment such that the photosensing
Figure 2.4  Angular position encoders. (a) Conventional binary encoder. (b) Gray code encoder.
Figure 2.5  Angular position encoders with misaligned photosensing devices. (a) Conventional binary encoder.
(b) Gray code encoder.
device reading the second most significant bit in the code word is slightly advanced,
causing it to read ahead of the others. This is shown in Fig. 2.5a. When the angular
position of the encoder is near the transition between sectors 3 and 4, it is possible
that a reading of 0111 might result since the photosensing device for the second
most significant digit is already reading sector 4 while the other photosensing de-
vices are reading sector 3. A significant error has now occurred since a reading of
0111 indicates sector 7, which is several sectors away. On the other hand, if a unit-
distance code is used, such as the Gray code, then the effect of the misalignment re-
sults in the code for the next sector being obtained prematurely as illustrated in Fig.
2.5b. With such a coding scheme, however, the error encountered in a digital repre-
sentation can never exceed that of one sector since only 1 bit must change value be-
tween any two adjacent sectors.
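The unit-distance behavior can also be illustrated with the conventional binary-to-Gray conversion; the sketch below uses the usual shift-and-exclusive-or relationship and is an illustration only.

# Sketch: binary <-> Gray code conversion, showing the unit-distance property
# used by the encoder of Fig. 2.4(b).

def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Sectors 3 and 4 of a 16-sector encoder: 0011/0100 in binary (3 bits change),
# but 0010/0110 in Gray code (only 1 bit changes).
for sector in (3, 4):
    print(sector, format(sector, "04b"), format(binary_to_gray(sector), "04b"))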
Table 2.10 The 7-bit American Standard Code for Information Interchange (ASCII)
                         b₇b₆b₅
b₄ b₃ b₂ b₁    010  011  100  101  110  111
0  0  0  0     SP   0    @    P    `    p
0  0  0  1     !    1    A    Q    a    q
0  0  1  0     "    2    B    R    b    r
0  0  1  1     #    3    C    S    c    s
0  1  0  0     $    4    D    T    d    t
0  1  0  1     %    5    E    U    e    u
0  1  1  0     &    6    F    V    f    v
0  1  1  1     '    7    G    W    g    w
1  0  0  0     (    8    H    X    h    x
1  0  0  1     )    9    I    Y    i    y
1  0  1  0     *    :    J    Z    j    z
1  0  1  1     +    ;    K    [    k    {
1  1  0  0     ,    <    L    \    l    |
1  1  0  1     -    =    M    ]    m    }
1  1  1  0     .    >    N    ^    n    ~
1  1  1  1     /    ?    O    _    o    DEL
Control Characters
NUL  Null                    DC1  Device Control 1
SOH  Start of Heading        DC2  Device Control 2
STX  Start of Text           DC3  Device Control 3
ETX  End of Text             DC4  Device Control 4
EOT  End of Transmission     NAK  Negative Acknowledge
ENQ  Enquiry                 SYN  Synchronous Idle
ACK  Acknowledge             ETB  End of Transmission Block
BEL  Bell                    CAN  Cancel
BS   Backspace               EM   End of Medium
HT   Horizontal Tab          SUB  Substitute
LF   Line Feed               ESC  Escape
VT   Vertical Tab            FS   File Separator
FF   Form Feed               GS   Group Separator
CR   Carriage Return         RS   Record Separator
SO   Shift Out               US   Unit Separator
SI   Shift In                SP   Space
DLE  Data Link Escape        DEL  Delete
have been developed, the best known alphanumeric code is the 7-bit American Stan-
dard Code for Information Interchange, or ASCII code. Table 2.10 gives a listing of
this code. As with the numeric codes using code groups, alphanumeric information is
coded by the juxtaposition of the appropriate code groups for the characters.
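As a small illustration of such juxtaposition, the sketch below strings together the 7-bit ASCII code groups of a short character string.

# Sketch: alphanumeric information coded by juxtaposing 7-bit ASCII code groups.

def ascii_code_groups(text):
    return " ".join(format(ord(ch), "07b") for ch in text)

print(ascii_code_groups("A1"))  # 1000001 0110001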
More recently, the Unicode Standard has been developed. This 16-bit character
coding system provides for the encoding of not only the English characters but also
those of foreign languages including those of the Middle East and Asia. In addition,
the Unicode Standard includes punctuation marks, mathematical symbols, technical
symbols, geometric shapes, and dingbats.
Table 2.11 8421 code with a parity bit. (a) Odd-parity scheme.
(b) Even-parity scheme
Decimal digit   8421p          Decimal digit   8421p
0               00001          0               00000
1               00010          1               00011
2               00100          2               00101
3               00111          3               00110
4               01000          4               01001
5               01011          5               01010
6               01101          6               01100
7               01110          7               01111
8               10000          8               10001
9               10011          9               10010
(a)                            (b)
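A parity bit such as the p column of Table 2.11 can be generated as in the following sketch; the function name and the scheme argument are illustrative.

# Sketch: appending a parity bit p to a 4-bit 8421 code group, as in Table 2.11.

def with_parity(group, scheme="odd"):
    ones = group.count("1")
    if scheme == "odd":          # total number of 1's made odd
        p = "0" if ones % 2 == 1 else "1"
    else:                        # even-parity scheme
        p = "1" if ones % 2 == 1 else "0"
    return group + p

print(with_parity("0011", "odd"))   # 00111
print(with_parity("0011", "even"))  # 00110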
Given the code groups for two characters in some coding scheme, the number
of bits that must be changed in the first code group so that the second code group re-
sults is defined as the distance between the two code groups. Thus, referring to the
2-out-of-5 code, the distance between the code groups for the decimal digits 2, i.e.,
00101, and 5, i.e., 01010, is four. Furthermore, the minimum distance of a code is
the smallest distance between any two valid code groups appearing in the coding
scheme. Again referring to the 2-out-of-5 code and considering all pairs of code
groups, the minimum distance is two. Similarly, the minimum distance for the
biquinary code is also two.
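The distance and minimum-distance definitions translate directly into a short computation, sketched below for the 2-out-of-5 code. The particular digit-to-group assignment of Table 2.8 is not reproduced above, but any assignment of the ten 5-bit groups having exactly two 1's gives the same minimum distance.

# Sketch: distance between code groups and minimum distance of the 2-out-of-5 code.
from itertools import combinations

TWO_OUT_OF_FIVE = ["00011", "00101", "00110", "01001", "01010",
                   "01100", "10001", "10010", "10100", "11000"]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

print(distance("00101", "01010"))  # 4, as in the text for the digits 2 and 5
print(min(distance(a, b) for a, b in combinations(TWO_OUT_OF_FIVE, 2)))  # 2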
The significance of the minimum distance is that it is related to the error-
detecting capability of a coding scheme, i.e., the maximum number of bits in error
that is always detectable. In particular,
D = M − 1          (2.11)
where D is the error-detecting capability of the code and M is its minimum dis-
tance. To see the validity of Eq. (2.11), consider a code whose minimum distance
is two. This means that at least 2 bits must change before any valid code group be-
comes another valid code group in the same coding scheme. If only 1 bit should er-
roneously change, then the resulting code group consists of a combination of 0's
and 1’s that do not appear in the definition of the code. Hence, when such a code
group is detected it can be concluded that an error has occurred in transmission.
Since, in general, at least M bits must change in a code with minimum distance M
before one code group can become another valid code group in the same coding
scheme, any M — 1, or fewer, bit changes always results in an invalid code group.
Thus, the existence of any of these code groups indicates the presence of de-
tectable errors.
In this equation C is the number of erroneous bits that always can be corrected, D is
the number of erroneous bits that always can be detected, and M is the minimum
distance of the code. The restriction C ≤ D is necessary since no error can be cor-
rected without its first being detected. An implication of Eq. (2.12) is that for a
given minimum distance, it is possible to perform a trade-off between the amount of
the error-correcting and error-detecting capabilities of a code.
From Eq. (2.12), it is seen that single-error correction is achievable when a
code has a minimum distance of three, i.e., when C = 1 and D = 1. In such a
code at least 3 bits must change before a valid code group can convert into an-
other valid code group. If a single error should occur, then the resulting code
group is within 1 bit of matching the intended code group and within at least 2
bits of matching any other valid code group. By noting which single bit must be
changed to obtain a valid code group it is possible to achieve the correction.
However, if two errors occur, then erroneous correction might result since the in-
valid code group can now be within 1 bit of matching a nonintended code group.
Thus, in a code having a minimum distance of three, only single-error correction
is possible.
On the other hand, a code having a minimum distance of three can be used
for double-error detection rather than single-error correction, i.e., when C = 0
and D = 2 in Eq. (2.12). If the code is being used in this manner, then it is only
necessary to observe that an invalid code group is received. Since the minimum
distance is three, the occurrence of one or two errors always results in an invalid
code group.
By having codes with a minimum distance greater than three, it is possible to
provide for additional error-detecting and error-correcting capabilities. This is
shown in Table 2.12, which is obtained by evaluating Eq. (2.12).
Table 2.12 Amount of error detection D and error correction C possible with a code
having a minimum distance M
M :   2     3        4        5          6          7
D :   1     2  1     3  2     4  3  2    5  4  3    6  5  4  3
C :   0     0  1     0  1     0  1  2    0  1  2    0  1  2  3
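Assuming Eq. (2.12), which is not reproduced here, relates the quantities as M = C + D + 1 with C ≤ D (as the surrounding discussion indicates), the trade-offs of Table 2.12 can be enumerated as in the following sketch.

# Sketch: enumerating the (D, C) trade-offs for a given minimum distance M,
# assuming M = C + D + 1 with C <= D.

def detection_correction_pairs(M):
    return [(D, C) for C in range(M) for D in range(C, M) if C + D + 1 == M]

for M in range(2, 8):
    print(M, detection_correction_pairs(M))
# M = 3 gives [(2, 0), (1, 1)]: double-error detection or single-error correction.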
7    6    5    4    3    2    1       Position
b₄   b₃   b₂   p₃   b₁   p₂   p₁      Code group format
where the 7 bits of the code group are numbered from right to left, the three p’s de-
note the parity bits, and the four b’s denote the information bits being encoded. The
values of the parity bits are determined by the following rules: parity bit p₁ estab-
lishes even parity over positions 1, 3, 5, and 7; p₂ over positions 2, 3, 6, and 7; and
p₃ over positions 4, 5, 6, and 7. That is, p₁ is a 0 when there are an even number of
1's in positions 3, 5, and 7; otherwise, p₁ is a 1. In this way, there are an even number
of 1's in the four positions 1, 3, 5, and 7. In a similar manner, the values of p₂ and p₃
are determined so as to have an even number of 1's over their selected positions of the
code group.
To illustrate the above rules for constructing a Hamming code group, assume
the four information bits are b₄b₃b₂b₁ = 0110. These bits appear in positions 3, 5, 6,
and 7 of the Hamming code group; that is,
7    6    5    4    3    2    1       Position
b₄   b₃   b₂   p₃   b₁   p₂   p₁      Code group format
0    1    1         0                 Hamming code group
                                      with information bits
                                      placed
The next step is to determine the values of the parity bits in the code group. Parity
bit p₁ is used to establish even parity over positions 1, 3, 5, and 7. Since position 5
has a 1 and positions 3 and 7 have 0s, parity bit p₁ must be a 1. In a similar manner,
parity bit p₂ is a 1 so that even parity is established over positions 2, 3, 6, and 7. Fi-
nally, parity bit p₃ is a 0 since positions 5, 6, and 7 already contain an even number
of 1's. The complete Hamming code group is therefore 0110011.
The information bits in a Hamming code group may themselves be a code for a
character. For example, Table 2.13 gives the complete Hamming code for the 10
decimal digits when they are coded by the 8421 BCD scheme.
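The encoding rules above can be summarized in a short sketch; the placement of the information and parity bits follows the 7-bit format given earlier, and the function name is illustrative.

# Sketch of 7-bit Hamming encoding: information bits in positions 7, 6, 5, 3,
# even parity established over the selected positions.

def hamming_encode(b4, b3, b2, b1):
    pos = {7: b4, 6: b3, 5: b2, 3: b1}          # information bit placement
    pos[1] = (pos[3] + pos[5] + pos[7]) % 2     # p1: even parity over 1, 3, 5, 7
    pos[2] = (pos[3] + pos[6] + pos[7]) % 2     # p2: even parity over 2, 3, 6, 7
    pos[4] = (pos[5] + pos[6] + pos[7]) % 2     # p3: even parity over 4, 5, 6, 7
    return "".join(str(pos[p]) for p in range(7, 0, -1))  # positions 7 down to 1

print(hamming_encode(0, 1, 1, 0))  # 0110011, matching the worked example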
Now consider how a 7-bit Hamming code group, upon being received, is checked
for an error during transmission and, if necessary, how the location of a single error is
determined. Upon the receipt of a Hamming code group, the parity bits are recalcu-
lated using the same even-parity scheme over the selected bit positions as was done
for encoding. From the recalculation, a binary check number, c₃*c₂*c₁*, is constructed.
In particular, if the recalculation of parity bit pᵢ is the same as the pᵢ bit in the received
Hamming code group, then cᵢ* is set equal to 0. If, on the other hand, the recalculated
value of pᵢ is not the same as the pᵢ bit in the received Hamming code group, then cᵢ* is
set equal to 1. In other words, if there are an even number of 1's in positions 1, 3, 5,
and 7, then c₁* is set equal to 0; while if there are an odd number of 1's in these posi-
tions, then c₁* is set equal to 1. In a similar manner, c₂* is set equal to 0 if and only if
there are an even number of 1's in positions 2, 3, 6, and 7; while c₃* is set equal to 0 if
and only if there are an even number of 1's in positions 4, 5, 6, and 7. Otherwise, c₂*
and c₃* are set equal to 1. The binary check number c₃*c₂*c₁* indicates the position of
the error if one has occurred. If c₃*c₂*c₁* = 000, then no position is in need of correc-
tion, i.e., no single error has occurred in the Hamming code group.
As an example, assume the previously constructed Hamming code group with
the information bits b₄b₃b₂b₁ = 0110 is transmitted, i.e., 0110011, but the code group
0110111 is received. Thus, the bit in position 3 erroneously changed from 0 to 1 dur-
ing transmission. Referring to bit positions 1, 3, 5, and 7 of the received code group,
Table 2.13 The Hamming code for the 10 decimal digits weighted by the 8421 BCD
scheme
it is seen that there are an odd number of 1's. As stated above, this requires c₁* to be
set to 1 in the binary check number. In a similar manner, since there are an odd num-
ber of 1's in positions 2, 3, 6, and 7, it follows that c₂* = 1. Finally, it is seen that
there are an even number of 1's in positions 4, 5, 6, and 7. Thus, c₃* = 0. The binary
check number is therefore c₃*c₂*c₁* = 011 which, in turn, is the binary equivalent of
the decimal number 3. This indicates that the bit in position 3 is incorrect. Now that
the location of the error is established, it is a simple matter to complement the bit in
position 3 of the received Hamming code group to obtain the transmitted code group.
In this example, it is concluded that the correctly transmitted Hamming code group
was 0110011 and that the actual information bits were b₄b₃b₂b₁ = 0110. Although
this example involved an error in one of the information bits, a change in a parity bit
can equally well occur and be located upon constructing the binary check number.
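The decoding side, sketched below, recomputes the parities of a received code group, interprets c₃*c₂*c₁* as a position, and complements the indicated bit.

# Sketch of single-error correction for a received 7-bit Hamming code group.

def hamming_correct(received):
    pos = {7 - i: int(bit) for i, bit in enumerate(received)}  # positions 7 .. 1
    c1 = (pos[1] + pos[3] + pos[5] + pos[7]) % 2
    c2 = (pos[2] + pos[3] + pos[6] + pos[7]) % 2
    c3 = (pos[4] + pos[5] + pos[6] + pos[7]) % 2
    error_position = 4 * c3 + 2 * c2 + c1        # binary check number c3 c2 c1
    if error_position:
        pos[error_position] ^= 1                  # complement the erroneous bit
    return "".join(str(pos[p]) for p in range(7, 0, -1))

print(hamming_correct("0110111"))  # 0110011: the bit in position 3 is corrected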
The above procedure for constructing a Hamming code group that enables
single-error correction is extendable to handle any number of information bits. If m
information bits are to be encoded, then k parity bits are needed in each Hamming
code group where
m ≤ 2ᵏ − k − 1
Table 2.14 gives a listing of the minimum number of parity bits k needed for various
ranges of m information bits, resulting in a Hamming code group with m + k bits.
For example, when encoding information consisting of from 5 to 11 bits, four parity
bits must appear in each code group. If the bit positions in a Hamming code group
are numbered right to left from | to m + k, then those positions corresponding to 2
raised to a nonnegative integer power, i.e., positions 1, 2, 4, 8, etc., are allocated to
the parity bits. The remaining bit positions are used for the information bits. Table
2.15 indicates which bit positions are associated with each parity bit for the purpose
of establishing even parity over selected bit positions. For example, the parity bit in
position 1, i.e., p;, checks the bit in every other position beginning with the bit in po-
sition 1. The parity bit in position 2, i.e., p,, considers every other group of 2 bits be-
ginning with the parity bit in position 2. The third parity bit, p,, considers every other
group of 4 bits beginning with the parity bit in position 4. In general, the parity bit in
position 2', i.e., p;, considers every other group of 2' bits beginning with the parity bit
in position 2’. To determine the location of a single error in a received Hamming
code group, the binary check number cf « - cxc¥ is formed. This is done by recal-
culating the parity bits or, equivalently, checking the parity over the selected bits
Table 2.14 The number of parity bits k needed to construct a Hamming code with m
information bits
Number of information bits m      Number of parity bits k
1                                 2
2 to 4                            3
5 to 11                           4
12 to 26                          5
27 to 57                          6
Table 2.15 Bit positions checked by each parity bit in a Hamming code
Parity bit
position        Positions checked
1               1, 3, 5, 7, 9, 11, 13, 15, 17, 19, ...
2               2, 3, 6, 7, 10, 11, 14, 15, 18, 19, ...
4               4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...
8               8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31, ...
indicated in Table 2.15. If the recalculated value of pᵢ is the same as the pᵢ bit re-
ceived in the code group, then cᵢ* is set equal to 0; otherwise, it is set equal to 1. The
binary check number cₖ* ··· c₂*c₁* indicates the position of the error if one has oc-
curred. If cₖ* ··· c₂*c₁* = 0···00, then a valid Hamming code group was received.
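For the general case, the positions checked by each parity bit and the number of parity bits required can be computed as in the following sketch, which reproduces the pattern of Table 2.15 and the relation m ≤ 2ᵏ − k − 1; the function names are illustrative.

# Sketch: parity bit p_i sits in position 2**(i-1) and checks every position
# whose binary representation contains that power of two.

def positions_checked(i, total_bits):
    mask = 1 << (i - 1)
    return [pos for pos in range(1, total_bits + 1) if pos & mask]

def parity_bits_needed(m):
    k = 1
    while 2 ** k - k - 1 < m:   # m <= 2**k - k - 1
        k += 1
    return k

print(positions_checked(2, 15))   # [2, 3, 6, 7, 10, 11, 14, 15]
print(parity_bits_needed(8))      # 4 parity bits for 5 to 11 information bits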
The check sum digit is a single digit which when added to the sum of the digits in the ZIP
code results in a total sum that is a multiple of 10. Mathematically, this is written as
(sum of the ZIP code digits + check sum digit) mod 10 = 0          (2.13)
For the example in Fig. 2.3, the sum of the nine ZIP digits is 26. Therefore, the
check sum digit that is appended to the bar code is 4 since this digit increases the
sum to the next multiple of 10. The reason mod 10 is used in this calculation is that
there are 10 code groups in the bar code (or, equivalently, the 2-out-of-5 code).
To understand how single-error correction is obtained, again consider Fig. 2.3.
Upon scanning the bar code, if the condition of three short bars and two tall bars is
not satisfied for any block of five bars starting after the frame bar, then it is known
that a particular digit is in error, i.e., single-error detection. The value of the erro-
neous digit is determined by summing the correct digits and applying Eq. (2.13). For
example, assume the fifth digit read, i.e., digit 3, in the bar code of Fig. 2.3 is de-
tected as being erroneous. Then, it is seen that the sum of the digits in the bar code is
1 + 4 + 2 + 6 + e + 1 + 0 + 4 + 5 + 4 = 27 + e
where e denotes the erroneous digit. Since Eq. (2.13) must be satisfied, i.e.,
(27 + e) mod 10 = 0
it immediately follows that the erroneous digit e = 3. Although the 2-out-of-5 code
allows the detection of more than one error digit in the bar code, correction is possi-
ble only if a single digit in the bar code is in error.
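The check-sum calculation and the single-digit correction just described can be sketched as follows for the ZIP+4 code 14263-1045 of Fig. 2.3.

# Sketch: the bar code check sum digit and correction of a single unreadable digit.

digits = [1, 4, 2, 6, 3, 1, 0, 4, 5]
check = (10 - sum(digits) % 10) % 10
print(check)                                  # 4, since 26 + 4 = 30

# If one digit (here the fifth) is known to be erroneous, Eq. (2.13) recovers it.
received = digits.copy()
received[4] = None                            # unreadable digit
partial = sum(d for d in received if d is not None) + check   # 27
missing = (10 - partial % 10) % 10
print(missing)                                # 3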
CHAPTER 2 PROBLEMS
2.1. Continue Table 2.2 by listing the next 19 integers in each of the stated
number systems.
2.2 Construct a table for the first 32 integers in the quaternary, quinary, and
duodecimal number systems, along with their decimal equivalents.
2.3 Perform the following additions in the binary number system.
a, ATO1- 1001 be OOn ela
Cm tOOOI SS O11 dy 11-01 £0011
TG USTCO a fee Ie ORO
2.4 Perform the following subtractions in the binary number system.
a gULO IO be SOOT — O14
Cp 00 IALOr depo Ot LO:t
Ce LOO LOR 11 101 POO SLO 4
2.5 Perform the following multiplications in the binary number system.
an etOMa 110 bs SiOx Lond
Ce lO1O Xeb-Ol d= 10RLS<11.01
2.15 Using the iterative method of number conversion, convert the binary number
11100010.1101 into
a. Ternary b. Octal
c. Decimal d. Hexadecimal
(Note: For the fraction part, form a sufficient number of digits so that an
exact equivalence or a repeating sequence is obtained.)
2.16 Using the iterative method of number conversion, convert the ternary
number 10112.1 into
a. Binary b. Octal
c. Decimal d. Hexadecimal
(Note: For the fraction part, form a sufficient number of digits so that an
exact equivalence or a repeating sequence is obtained.)
2.17 Convert each of the following binary numbers into its equivalent in the octal
and hexadecimal number systems.
a. 111111001.00111101 b. 1010001011.1
c. 10111100010.01001011 d. 11100100110.101
2.18 Convert each of the following octal numbers into its equivalent in the binary
number system.
eles ye) Bb. 45.1
Cagolts d. 724.06
2.19 Convert each of the following hexadecimal numbers into its equivalent in
the binary number system.
ae G3 Bb E2C
c. 450.B d. SEA59
2.20 Using the ideas of Sec. 2.6, determine an algorithm to convert numbers
between base 3 and base 9. Illustrate your algorithm by converting
21021.112₍₃₎ into base 9.
2.21 a. Show that the r’s-complement of the r’s-complement of a number is the
number itself.
b. Repeat for the (r — 1)’s-complement.
2.22 Form the 1’s-complement and 2’s-complement for each of the following
binary numbers. In each case express the complement with the same number
of integer and fraction digits as the original number.
a. 10111011 b. 101110100
c. 101100 d. 0110101
€. 010.11 to) A1014.100
g. 100101.101 h. 1010110.110
with the same number of integer and fraction digits as the original
number.
a. 285302 b. 39040
cy 059637 d. 610500
en 4289 f. 5263.4580
g. 0283.609 h. 134.5620
2.24 Form the r’s-complement and (r — 1)’s-complement for each of the
following numbers. In each case express the complement with the same
number of integer and fraction digits as the original number.
am O1202 ts by ORO.
c. 241.03,5 d. 031.240,
e, 4072105 f. 0156.0037,.)
g. 83D.9F6) h. 0070C.B6E6,
Perform the following unsigned binary subtractions by the addition of the
1’s-complement representation of the subtrahend. Repeat using the 2’s-
complement representation of the subtrahend. (Note: Recall that when
dealing with complement representations, the two operands must have the
same number of digits.)
ae LIOMOF shore b. 10100 — 110000
2 LOOTC0OTT LOU tr d OLLO.Of One er
€ TOL TOLL Ou Ole f. 101.1001 — 11010.010011
2.26 Perform the following binary subtractions by expressing the quantities
involved as signed numbers and using the 1's-complement representation of
the subtrahend. Repeat using the 2’s-complement representation of the
subtrahend. (Note: Recall that when dealing with complement
representations, the two operands must have the same number of
digits.)
a. LO110— 1101 bs aLOLI — 110100
e- TOA00P= 14.01 ds SLO10T = 10T0Ts
eo) JO De. OLOTs i. LOL O1OL aR TOTOE
2.27 Perform the following unsigned decimal subtractions by the addition of the
9’s-complement representation of the subtrahend. Repeat using the 10’s-
complement representation of the subtrahend. (Note: Recall that when
dealing with complement representations, the two operands must have the
same number of digits.)
a. 7842 — 3791 b. 265 — 894
€.. 5083: — 9457 de 13:08 538.9
€.. 427/208 3933 f. 804.2 — 3621.47
2.28 Perform the following decimal subtractions by expressing the quantities
involved as signed numbers and using the 9’s-complement representation of
2.30 Consider the signed decimal numbers A = 0,601.7 and B = 1,754.2 where B
is in 10’s-complement form. Perform the operations
a. “A+ B bs BAB
Cie BoA. (6 os Pret 2
by taking the signed 10’s-complement of a signed operand when necessary
and doing signed addition.
2.32 Consider the signed decimal numbers A = 0,418.5 and B = 1,693.0 where B
is in 9’s-complement form. Perform the operations
a. A+B b> y4e=B
Cu iA de Ars
by taking the signed 9’s-complement of a signed operand when necessary
and doing signed addition.
Pree) Give the coded representation of the decimal number 853 in each of the
following BCD coding schemes.
a. 8421 code b. 7536 code c. Excess-3 code
d. Biquinary code e. 2-out-of-5 code
2.35 Encode each of the decimal digits 0 to 9 with 4 bits using the following
weighted BCD codes. State which of these codes are self-complementing.
a. 7635 code b. 8324 code
c. 8342 code d. 8641 code
2.36 Prove that for a weighted BCD code having all positive weights, one of the
weights must be 1, another must be either 1 or 2, and the sum of the weights
must be greater than or equal to 9.
2.37 Prove that in a 4-bit weighted BCD code with all positive weights, at most
one weight can exceed 4.
2.38 Prove that in a self-complementing BCD code, which can have both positive
and negative weights, the algebraic sum of all the weights must be 9.
2.39 Prove that in a 4-bit weighted BCD code, at most two of the weights can be
negative.
2.40 Give the coded representation for each of the following character strings
using the 7-bit ASCII code.
a. 960 bh ie ey eo 3Code
2.41 Assume an 8th bit, 7, serving as an even-parity bit is added to the 7-bit
ASCII code. Give the coded representation for each of the following
character strings as hexadecimal numbers.
aes b Z=1 c. Bits
2.42 Construct a Hamming code to be used in conjunction with the following
BCD codes.
a. 2421 code b. 2-out-of-5 code
2.43 Write the Hamming code groups for each of the following 8 bits of
information.
a. 11100011 b. 01011000 c. 10010101
2.44 Assume the following 7-bit Hamming code groups, consisting of 4
information bits and 3 parity bits, are received in which at most a single
error has occurred. Determine the transmitted 7-bit Hamming code groups.
a. 0011000 b. 1111000 c. 1101100
2.45 a. Construct a Hamming code group for the information bits 100101 that
enables single-error correction plus double-error detection where the
most significant bit position of the code group is for the overall even-
parity bit.
b. Assume that errors occur in bit positions 2 and 9 of the above code
group. Show how the double error is detected.
2.46 A certain code consists only of the following code groups:
00110, 01011, 10001, 11100
What error-detecting and error-correcting properties does this code have?
CHAPTER 3
Boolean Algebra and Combinational Networks
sequential networks are not only a function of the current inputs but, in addition, de-
pend upon the past history of inputs. Thus, sequential networks have a memory
property, and the order in which the inputs are applied is significant. Both types of
logic networks are found in digital systems.
The principles of Boolean algebra are presented in this chapter. It is shown how
expressions written in this algebra are manipulated and simplified. In addition, it is
shown how the algebra is applied to the analysis and design of combinational logic
networks. The analysis and design of sequential logic networks is studied in later
chapters.
P1. The operations (+) and (·) are closed; i.e., for all x, y ∈ B
(a) x + y ∈ B
(b) x · y ∈ B
P2. There exist identity elements in B, denoted by 0 and 1, relative to the
operations (+) and (·), respectively; i.e., for every x ∈ B
(a) 0 + x = x + 0 = x
(b) 1 · x = x · 1 = x
P3. The operations (+) and (·) are commutative; i.e., for all x, y ∈ B
(a) x + y = y + x
(b) x · y = y · x
P4. Each operation (+) and (·) is distributive over the other; i.e., for all
x, y, z ∈ B
(a) x + (y · z) = (x + y) · (x + z)
(b) x · (y + z) = (x · y) + (x · z)
P5. For every element x in B there exists an element x̄ in B, called the
complement of x, such that
(a) x + x̄ = 1
(b) x · x̄ = 0
P6. There exist at least two elements x, y ∈ B such that x ≠ y.
Some important observations should be made about the above definition. The
actual elements in the set B (other than 0 and 1) are not specified and the operations
(+) and (-) are not defined. Thus, any mathematical system over a set B having two
operations is a Boolean algebra if these six postulates are satisfied. Hence, in actual-
ity, an entire class of algebras is being defined.
The elements of an algebra are called its constants. For a Boolean algebra the
constants are the elements in the set B of which only two are required: 0 and 1. It is
important that the reader realizes that these two symbols, in general, are nonnumeri-
cal and should not be confused with the binary symbols 0 and 1 studied in the previ-
ous chapter.
A symbol which represents an arbitrary element of an algebra is called a variable.
Since the postulates of a Boolean algebra make reference to arbitrary elements in the
set B, the symbols x, y, and z in the above definition are considered variables of the al-
gebra. Furthermore, the variables in the above postulates and the theorems to follow
can represent an entire expression as well as a single element. This follows from the
fact that the operations of a Boolean algebra are defined to be closed by Postulate P1
and hence an expression, when evaluated, always represents an element of the algebra.
The symbols (+) and (-) in the definition of a Boolean algebra are used to de-
note two arbitrary operations with properties satisfying the postulates. These sym-
bols are commonly used in equations describing logic networks and should not be
confused with the corresponding symbols in conventional arithmetic.
Before proceeding with the basic Boolean algebra theorems, however, let us
introduce two notational conveniences. This will simplify the writing of expres-
sions in the algebra. First, the product x · y usually is written as the juxtaposition of
x and y, that is, simply xy. This creates no ambiguities as long as variables are des-
ignated as single symbols, possibly with subscripts. Second, a hierarchy between
the operations (+) and (·) is assumed such that the operation (·) always takes
precedence over the operation (+). This allows less frequent use of parentheses
since xy + xz is understood to mean (x · y) + (x · z). The reader should note that
these same two notational conveniences are used in the case of conventional alge-
bra and should offer no difficulty in understanding the expressions to which they
are applied.
Theorem 3.1
The element x̄ in Postulate P5 of a Boolean algebra is uniquely determined
by x.
Proof
Suppose that for a given element x there are two elements x₁ and x₂ satisfy-
ing Postulate P5. Then x + x₁ = 1, x · x₁ = 0, x + x₂ = 1, and x · x₂ = 0.
x₁ = x₁ · 1            by P2(b)
   = x₁(x + x₂)        by substitution
   = x₁x + x₁x₂        by P4(b)
   = xx₁ + x₁x₂        by P3(b)
   = 0 + x₁x₂          by substitution
   = xx₂ + x₁x₂        by substitution
   = x₂x + x₂x₁        by P3(b)
   = x₂(x + x₁)        by P4(b)
   = x₂ · 1            by substitution
   = x₂                by P2(b)
Thus, any two elements that are the complement of the element x are
equal. This implies that x̄ is uniquely determined by x.
Theorem 3.2
For every element x in a Boolean algebra
(a) x + 1 = 1
(b) x · 0 = 0
Proof
First consider part (a) of the theorem.
x + 1 = 1 · (x + 1)        by P2(b)
      = (x + x̄)(x + 1)     by P5(a)
      = x + x̄ · 1          by P4(a)
      = x + x̄              by P2(b)
      = 1                  by P5(a)
Part (b) of the theorem follows directly from the principle of duality.
That is, if in the expression of part (a) (+) is replaced by (-) and 1 by 0,
then the expression of part (b) results. However, let us prove part (b) in
order that the steps of the proof can be compared with those of the first
part.
As indicated in Sec. 3.1, the letter symbols in the Boolean algebra postulates
and theorems can denote entire expressions as well as single elements. Thus, the
first part of Theorem 3.2 implies that if the constant 1 is summed with any
Boolean element or expression, then the result is equally well described by simply
the constant 1. A dual statement applies as a consequence of the second part of this
theorem.
Theorem 3.3
Each of the identity elements in a Boolean algebra is the complement of
the other; i.e., 0̄ = 1 and 1̄ = 0.
Proof
By Theorem 3.1 there exists for the identity element 0 a unique element
0̄ in a Boolean algebra. In Postulate P2(a) let the element x be 0̄. Then
0̄ + 0 = 0̄. On the other hand, if x is 0 in Postulate P5(a), and hence
x̄ = 0̄, then 0 + 0̄ = 1. Since it is always possible to equate something to
itself, in particular, 0̄ + 0 = 0 + 0̄, upon substitution it follows that 0̄ = 1.
By duality it follows that 1̄ = 0.
Theorem 3.4
The idempotent law. For each element x in a Boolean algebra
(a) x + x = x
(b) x · x = x
Proof
x + x = (x + x) · 1        by P2(b)
      = (x + x)(x + x̄)     by P5(a)
      = x + x · x̄          by P4(a)
      = x + 0              by P5(b)
      = x                  by P2(a)
Theorem 3.5
The involution law. For every x in a Boolean algebra
(x̄)̄ = x
Proof
Let x̄ be the complement of x and (x̄)̄ be the complement of x̄. Then, by
Postulate P5, x + x̄ = 1, x · x̄ = 0, x̄ + (x̄)̄ = 1, and x̄ · (x̄)̄ = 0.
Theorem 3.6
The absorption law. For each pair of elements x and y in a Boolean algebra
(a) x + xy = x
(b) x(x + y) = x
Proof
Theorem 3.7
For each pair of elements x and y in a Boolean algebra
(a) x + x̄y = x + y
(b) x(x̄ + y) = xy
Proof
Theorem 3.8
In every Boolean algebra, each of the operations (+) and (-) is associative.
That is, for every x, y, and z in a Boolean algebra
(a) x + (y + z) = (x + y) + z
(b) x(yz) = (xy)z
Proof
Let A = x + (y + z) and B = (x + y) + z. It must now be shown that
A = B. To begin with,
xA = x[x + (y + z)]        by substitution
   = x                     by Theorem 3.6(b)
and
xB = x[(x + y) + z]        by substitution
   = x(x + y) + xz         by P4(b)
   = x + xz                by Theorem 3.6(b)
   = x                     by Theorem 3.6(a)
Therefore, xA = xB = x.
x̄A = x̄[x + (y + z)]        by substitution
   = x̄x + x̄(y + z)         by P4(b)
   = xx̄ + x̄(y + z)         by P3(b)
   = 0 + x̄(y + z)          by P5(b)
   = x̄(y + z)              by P2(a)
and
x̄B = x̄[(x + y) + z]        by substitution
   = x̄(x + y) + x̄z         by P4(b)
   = (x̄x + x̄y) + x̄z        by P4(b)
   = (xx̄ + x̄y) + x̄z        by P3(b)
   = (0 + x̄y) + x̄z         by P5(b)
   = x̄y + x̄z               by P2(a)
   = x̄(y + z)              by P4(b)
Therefore, x̄A = x̄B = x̄(y + z).
To complete the proof,
xA + x̄A = xA + x̄A
xA + x̄A = xB + x̄B          by substituting for xA and x̄A
Ax + Ax̄ = Bx + Bx̄          by P3(b)
A(x + x̄) = B(x + x̄)        by P4(b)
A · 1 = B · 1              by P5(a)
A = B                      by P2(b)
In other words, x + (y + z) = (x + y) + z.
Part (b) of the theorem follows from the principle of duality.
As a result of Theorem 3.8, it is not necessary to write x(yz) and (xy)z; rather,
xyz is sufficient. The same is true for x + (y + z) and (x + y) + z, which simply
can be written as x + y + z. It is also possible to generalize the above theorem to
handle any number of elements. That is, for any n elements of a Boolean algebra,
the sum and product of the n elements is independent of the order in which they are
taken.
Theorem 3.9
DeMorgan’s law. For each pair of elements x and y in a Boolean algebra
CHAPTER 3 Boolean Algebra and Combinational Networks 69
Proof
By Theorem 3.1 and Postulate PS, for every x in a Boolean algebra there is
a unique x such that x + x = 1 and xx = 0. Thus, to prove part (a) of the
theorem, it is sufficient to show that x̄ȳ is the complement of x + y. This is
achieved by showing that (x + y) + (x̄ȳ) = 1 and (x + y)(x̄ȳ) = 0.
Part (b) of the theorem follows from the principle of duality.
The generalization of DeMorgan’s law is stated as: For the set of elements
{w, x, ..., y, z} in a Boolean algebra,
(w + x + ··· + y + z)̄ = w̄ x̄ ··· ȳ z̄
(w x ··· y z)̄ = w̄ + x̄ + ··· + ȳ + z̄
Theorem 3.10
The set B = {0,1} where 0̄ = 1 and 1̄ = 0 along with the operations (+),
called the or-operation, and (·), called the and-operation, defined by
+ | 0  1          · | 0  1
0 | 0  1          0 | 0  0
1 | 1  1          1 | 0  1
is a Boolean algebra.
Let us now show that the postulates of a Boolean algebra are indeed satisfied
under the conditions stated in the theorem. First it is noted that the set B only con-
sists of two elements. The operations (+) and (-) work on pairs of elements in the
set B. In general, an operation is said to be closed if for every pair of elements in the
set the result of the operation is also in the set. Postulate P1, which requires closure,
is satisfied since the entries in the tabular definitions of the operations are all speci-
fied and are elements of the set B = {0, 1}.
Postulate P2 requires the existence of identity elements relative to the two oper-
ations. These identity elements are detected by searching for a column and a row in
each of the tables defining an operation such that the entries for that column and
row are the same as the row and column designators, respectively. In the first table,
it is seen that for the 0 column, x + 0 = x for all x € B; and for the 0 row, 0 + x = x.
Similarly, in the second table, it is seen that for the 1 column, x · 1 = x for all x ∈ B;
and for the 1 row, 1 · x = x. Thus, 0 is the identity element relative to the or-operation,
and 1 is the identity element relative to the and-operation.
The operations (+) and (·) are commutative since the tables which define these
operations are symmetrical about the diagonal from the upper left to lower right
corners. Hence, Postulate P3 is satisfied.
To show that Postulate P4 is satisfied, perfect induction is used. Perfect induction
is proof by exhaustion in which all possibilities are considered. In this case, by substi-
tuting all possible combinations of the elements 0 and 1 into Postulate P4 and apply-
ing the definitions of the operations, it becomes a simple matter to determine whether
both sides of the equality sign yield identical results for each combination. In Postu-
late P4 the symbols x, y, and z can each represent the elements 0 and 1. Under this
condition there are eight possible combinations of 0 and 1 for x, y, and z. Table 3.2 has
one row for each of these eight combinations. In the fourth column of the table, x + yz
is evaluated for each combination; while in the fifth column, (x + y)(x + z) is evalu-
ated. For example, in the x = y = z = 0 row, x + yz becomes 0 + 0·0 = 0 + 0 = 0
and (x + y)(x + z) becomes (0 + 0)·(0 + 0) = 0·0 = 0. In a like manner, the entries
Table 3.2 Verifying Postulate P4 by perfect induction for a two-valued Boolean algebra
x  y  z  |  x + yz  |  (x + y)(x + z)  |  x(y + z)  |  xy + xz
0  0  0  |    0     |        0         |     0      |    0
0  0  1  |    0     |        0         |     0      |    0
0  1  0  |    0     |        0         |     0      |    0
0  1  1  |    1     |        1         |     0      |    0
1  0  0  |    1     |        1         |     0      |    0
1  0  1  |    1     |        1         |     1      |    1
1  1  0  |    1     |        1         |     1      |    1
1  1  1  |    1     |        1         |     1      |    1
in the fourth and fifth columns of the remaining seven rows are determined. Since
these two columns are identical row by row, the identity of Postulate P4(a) is estab-
lished. Similarly, the sixth and seventh columns of Table 3.2 establish the identity of
Postulate P4(b). Thus, each operation is distributive over the other.
The fifth postulate requires there be an x̄ for every x such that x + x̄ = 1 and
x · x̄ = 0. Table 3.3 is constructed using perfect induction, where 0̄ = 1 and 1̄ =
0 as stated in the theorem. Since the x + x̄ column has all 1 entries and the x · x̄ col-
umn has all 0 entries, Postulate P5 is satisfied.
Finally, to satisfy Postulate P6 it is necessary to establish that 0 and 1 are dis-
tinct, i.e., 0 ≠ 1. It is immediately seen that in the tables defining the two opera-
tions, only 0 is the identity element for the or-operation and only 1 is the identity el-
ement for the and-operation. This implies 0 ≠ 1. Since the six postulates of a
Boolean algebra are satisfied, the special two-valued algebra specified in Theorem
3.10 is indeed a Boolean algebra.
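Perfect induction is also easy to carry out by machine. The sketch below exhausts all combinations of 0 and 1 for the two table-defined operations and confirms Postulates P4 and P5.

# Sketch: perfect induction for the two-valued Boolean algebra.
from itertools import product

OR = lambda a, b: a | b      # table-defined or-operation
AND = lambda a, b: a & b     # table-defined and-operation
NOT = lambda a: 1 - a        # 0-bar = 1 and 1-bar = 0

# P4: each operation distributes over the other.
assert all(OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))
           for x, y, z in product((0, 1), repeat=3))
assert all(AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))
           for x, y, z in product((0, 1), repeat=3))

# P5: x + x-bar = 1 and x * x-bar = 0 for every x.
assert all(OR(x, NOT(x)) == 1 and AND(x, NOT(x)) == 0 for x in (0, 1))
print("Postulates P4 and P5 verified by perfect induction")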
The two-valued Boolean algebra just developed is also known as the switching
algebra. In the literature involving the analysis and design of logic networks, it is
common to refer to the two-valued Boolean algebra simply as a Boolean algebra
without any additional qualification. This convention is adhered to in this text. It
should be kept in mind that all the theorems previously established in Sec. 3.2 re-
main valid for this two-valued Boolean algebra. Frequently, the two constants, 0
and 1, are referred to as logic-0 and logic-1. In this book, the two constants are usu-
ally referred to as simply 0 and 1. The more specific references of logic-0 and logic-
1 are used for emphasis or when it is necessary to avoid confusion with the binary
arithmetic symbols 0 and 1.
In the literature, other symbols are occasionally used for the two operations of a
Boolean algebra. In particular, the symbols ∪ and ∨ are used to designate the or-
operation and ∩ and ∧ for the and-operation. Finally, the overbar (¯) is considered
a unary operation since it uniquely determines the value of x for any x. This opera-
tion is called the not-operation. As previously mentioned, x is the complement of x.
It is also referred to as the negation of x. The not-operation is also designated in the
literature by a prime (’), in which case the complement of x is written as x’.
Again it should be mentioned that the words “product” and “sum” are com-
monly used when referring to the operations of “and” and “or,” respectively, owing
to the symbols used. However, it should be kept in mind that these operations have
been defined for a Boolean algebra and must always be interpreted in this context.
Even though conventional addition and multiplication occur occasionally in future
discussions, there should be no difficulty in determining from context at that time
the correct interpretation of the operation.
The value of the function f is easily determined for any set of values of x, y, and
z by applying the definitions of the and-, or-, and not-operations. For the above, the
value of not-x is first or-ed with the value of y to form the value of x + y. This, in
turn, is and-ed with the value of z. Because of the closure property of the Boolean
operations, the value of f always is either 0 or 1 for a given set of values of x, y, and
z. Hence, f is considered a dependent Boolean variable, while the variables x, y, and
z are the independent Boolean variables.
In general, an n-variable (complete*) Boolean function f(x₁, x₂, ..., xₙ) is a
mapping that assigns a unique value, called the value of the function, for each com-
bination of values of the n independent variables in which all values are limited to
the set {0,1}. This definition of an n-variable Boolean function suggests that it can
be represented by a table with n + 1 columns in which the first n columns provide
for a complete listing of all the combinations of values of the n independent vari-
ables and the last column represents the value of the function for each combination.
Since each variable can assume two possible values, it immediately follows that if
there are n independent variables in the function, then there are 2ⁿ combinations of
values of the variables. Thus, the table has 2ⁿ rows. Such a table denoting a Boolean
function is called a truth table or table of combinations. A simple way of writing all
*In Sec. 3.8 the concept of an incomplete Boolean function is discussed. Unless otherwise indicated,
Boolean functions are assumed to be complete.
the combinations of values is to count in the binary number system from the decimal
equivalent of 0 to 2ⁿ − 1.* Once all the combinations of values are established, the
value of the function for each combination is entered. This is achieved by evaluating
the expression describing the function for each combination of values. Letting
f(x₁, x₂, ..., xₙ) = f(0, 0, ..., 0) denote the value of the function when x₁ = 0,
x₂ = 0, ..., xₙ = 0; f(x₁, x₂, ..., xₙ) = f(0, 0, ..., 1) denote the value of the func-
tion when x₁ = 0, x₂ = 0, ..., xₙ = 1; etc., the general form of the truth table for an
n-variable function is shown in Table 3.4.
In line with common usage, the formula describing a function is referred to as
the function itself. For example, it hereafter is said "the function f(x,y,z) = (x̄ + y)z"
or "the function f = (x̄ + y)z" rather than "the three-variable function whose for-
mula is (x̄ + y)z."
To illustrate the construction of a truth table, consider the function f = (x̄ + y)z.
Since f is a function of three variables, there are 2³ = 8 different combinations of
values that are assigned to the variables. In Table 3.5 the eight combinations are
listed in the first three columns. It should be noted that these eight rows correspond
to the binary numbers 000 to 111 which, in turn, correspond to the decimal numbers
0 to 2³ − 1 = 7.
To complete the construction of the truth table, the expression (x + y)z is eval-
uated for each of the eight combinations on a row-by-row basis. This results in the
last column of Table 3.5. However, an alternate approach to completing the truth
table is to carry out the Boolean operations on the columns of the table according to
the expression being evaluated. For example, since x appears in the equation, a
fourth column is added to the table such that the values in this column are the com-
plements of those in the first column. Next, the or-ing of the values of x given in the
fourth column is performed with the values of y given in the second column. This
results in the fifth column, which shows the evaluation of the expression x + y. Fi-
nally, the entries in the fifth column are and-ed with those in the third column. Thus,
*This rule for determining the rows of the truth table is a convenience that is possible by allowing a one-
to-one correspondence to exist between the Boolean constants and the binary digits. However, the
elements in the resulting table are the Boolean constants logic-0 and logic-1.
x  y  z  |  x̄  |  x̄ + y  |  f = (x̄ + y)z
0  0  0  |  1  |    1    |       0
0  0  1  |  1  |    1    |       1
0  1  0  |  1  |    1    |       0
0  1  1  |  1  |    1    |       1
1  0  0  |  0  |    0    |       0
1  0  1  |  0  |    0    |       0
1  1  0  |  0  |    1    |       0
1  1  1  |  0  |    1    |       1
the final column shows the value of (x + y)z for each combination of values of the
variables x, y, and z. For the special case when x = 0, y = 1, and z = 0, it is seen
from the third row of the truth table that the value of the expression is 0.
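The column-by-column construction just described can be mirrored in a few lines of Python (the complement x̄ is written not_x in the code); the output reproduces Table 3.5.

# Sketch: building the truth table of f(x, y, z) = (x-bar + y) z column by column.
from itertools import product

print(" x y z | x' | x'+y | (x'+y)z")
for x, y, z in product((0, 1), repeat=3):       # rows 000 through 111
    not_x = 1 - x
    x_or_y = not_x | y
    f = x_or_y & z
    print(f" {x} {y} {z} |  {not_x} |   {x_or_y}  |    {f}")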
As indicated above, a (complete) Boolean function of n variables is represented
by a truth table with 2ⁿ rows. It is now a simple matter to determine the number of
distinct Boolean functions of n variables. For each of the rows of the truth table,
there are two possible values that can be assigned as the value of the function. Con-
sequently, with 2ⁿ rows to the truth table, there are 2^(2ⁿ) different ways in which the
last column of the truth table can be written, each representing a different Boolean
function.
Because of the closure property of the Boolean operations, every n-variable
Boolean formula describes a unique n-variable Boolean function. However, in
general, many apparently different Boolean formulas describe the same Boolean
function. Thus, two Boolean formulas A and B are said to be equivalent, written
A = B, if and only if they describe the same Boolean function. The equality sign
occurring in the postulates and theorems of a Boolean algebra relates expressions
that yield identical results for any assignment to the variables. Thus, these theo-
rems and postulates can be applied to a Boolean formula in order to determine an
equivalent expression.
The expression consists of six literals since a total of six complemented and uncom-
plemented variables appear in the formula. A sum term is defined as either a literal
or a sum (also called disjunction) of literals. In the case of Eq. (3.2), it consists of
three sum terms, namely, z, (x + y), and (w + x + y). A Boolean formula which is
written as a single sum term or as a product (also called conjunction) of sum terms
is said to be in product-of-sums form or conjunctive normal form and is called a
conjunctive normal formula. The Boolean expression of Eq. (3.2) is an example of a
conjunctive normal formula.
product term x̄ȳz. If the values x = 0, y = 0, and z = 1 are substituted into this
product term, the term then evaluates to 1, i.e., 0̄·0̄·1 = 1·1·1 = 1. In addition,
for all of the remaining possible seven combinations of values of x, y, and z, the
term x̄ȳz has the value 0, since at least one of the literals in the term has the value of
0. It is therefore seen that the single product term x̄ȳz has a functional value of 1 if
and only if x = 0, y = 0, and z = 1 and, consequently, can be used to algebraically
describe the conditions in which the second row of the truth table has a functional
value of 1.
Again considering Table 3.6, the next row in which the function has the value
of 1 occurs in the fourth row. This row corresponds to the assignment x = 0, y = 1,
and z = 1. If this assignment of values is substituted into the product term x̄yz, then
the value of the term is 1. Furthermore, as can easily be checked, this is the only as-
signment of values to the variables that causes the term x̄yz to have the value of 1.
Thus, the conditions in which the fourth row of Table 3.6 has a functional value of 1
are algebraically described by the product term x̄yz.
The only remaining row of Table 3.6 in which the function has the value of 1 is
the fifth row. The assignment associated with this row is x = 1, y = 0, and z = 0. If
this assignment is substituted into the product term xȳz̄, the term then has the value
of 1. In addition, this product term has the property that it has the value of 1 only for
this assignment.
Combining the above results, the Boolean expression
*The word minterm is derived from the fact that the term describes a minimum number of rows of a
truth table, short of none at all, that have a functional value of 1.
uncomplemented if for that row the value of the variable is 1. If the minterms de-
scribing precisely those rows of the truth table having a functional value of | are
connected by or-operations, then the resulting expression is the minterm canonical
formula describing the function.
3.5.2 m-Notation
Each row of a truth table corresponds to an assignment of values to the independent
variables of the function. If this assignment is read as a binary number, then a row is
readily referenced by its decimal number equivalent. Using the letter m to symbol-
ize a minterm, the notation mᵢ is used to denote the minterm that is constructed from
the row whose decimal equivalent of the independent variable assignment is i.
Table 3.7 illustrates this notation for the case of three-variable truth tables. For ex-
ample, the minterm xyz̄ is associated with the row in which x = 1, y = 1, and z = 0.
If 110 is read as a binary number, and thereby has the decimal equivalent of 6, then
the corresponding minterm is denoted by m₆. This notation is readily extendable to
handle any number of truth table variables.
In an effort to simplify the writing of a minterm canonical formula for a func-
tion, the symbol mᵢ, with the appropriate decimal subscript i, can replace each
minterm in the expression. In this way, the minterm canonical formula given by Eq.
(3.3) is written simply as
f(x,y,z) = m₁ + m₃ + m₄          (3.4)
Listing only the decimal subscripts, this is written even more compactly as
f(x,y,z) = Σm(1,3,4)
It should be realized that no ambiguity results from this notation if the actual vari-
ables of the Boolean function are listed in the function symbol f(x,y,z) and are as-
sumed to be in the same order within a minterm.
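Reading the minterm subscripts directly off a truth table is mechanical, as the following sketch shows for the function f = (x̄ + y)z of Table 3.5; it illustrates the Σm notation rather than reproducing Eq. (3.4).

# Sketch: obtaining the minterm list (the sigma-m form) from a truth table.
from itertools import product

def minterm_list(f, n):
    """Decimal subscripts of the rows where the n-variable function equals 1."""
    rows = list(product((0, 1), repeat=n))
    return [i for i, row in enumerate(rows) if f(*row) == 1]

f = lambda x, y, z: ((1 - x) | y) & z
print(minterm_list(f, 3))   # [1, 3, 7] -> f(x,y,z) = sum m(1, 3, 7)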
Consider the four-variable Boolean function given in Table 3.8. The corresponding
minterm canonical formula is
f(w,x,y,z) = w̄x̄yz + w̄xȳz + wx̄ȳz̄ + wx̄yz̄ + wx̄yz = Σm(3,5,8,10,11)
w  x  y  z  |  f
0  0  0  0  |  0
0  0  0  1  |  0
0  0  1  0  |  0
0  0  1  1  |  1
0  1  0  0  |  0
0  1  0  1  |  1
0  1  1  0  |  0
0  1  1  1  |  0
1  0  0  0  |  1
1  0  0  1  |  0
1  0  1  0  |  1
1  0  1  1  |  1
1  1  0  0  |  0
1  1  0  1  |  0
1  1  1  0  |  0
1  1  1  1  |  0
*A single maxterm has the value 0 for only one combination of values and has the value of 1 for all
other combinations of values. Hence, its name depicts the fact that it is a term that assigns a functional
value of 1 to a maximum number of rows of a truth table, short of all of them.
Generalizing from the above discussion, a procedure can be stated for writing a
maxterm canonical formula for any truth table. Each row of a truth table for which
an n-variable function has the value of 0 is represented by a single sum term, i.e., a
maxterm, in which the n variables appear exactly once. Within each maxterm, a
variable appears complemented if for that row the value of the variable is 1 and un-
complemented if for that row the value of the variable is 0. If the maxterms describ-
ing precisely those rows of the truth table having a functional value of 0 are con-
nected by and-operations, then the resulting expression is the maxterm canonical
formula describing the function.
3.5.4 M-Notation
As was the case with minterms, a decimal notation is used for maxterms. Again, if
the variable assignment associated with a row of a truth table is read as a binary num-
ber, then the row is readily referenced by its decimal number equivalent. A maxterm
constructed for the row with decimal equivalent i is then denoted by M,. This nota-
tion is illustrated in Table 3.9 for the case of three-variable truth tables. For example,
the maxterm x̄ + ȳ + z is associated with the row in which x = 1, y = 1, and z = 0.
Regarding 110 as a binary number, the decimal equivalent is 6. Hence, the maxterm
is represented by M₆. Although Table 3.9 gives the M-notation for three-variable
maxterms, this notation is extendable to handle any number of truth table variables.
Replacing each maxterm in a maxterm canonical formula by its corresponding
Mᵢ, the formula is written in a more compact form. For example, the maxterm
canonical formula given by Eq. (3.5) becomes
f(x,y,z) = ΠM(0,2,5,6,7)
x y z    Decimal    Maxterm         M-notation
0 0 0    0          x + y + z       M₀
0 0 1    1          x + y + z̄       M₁
0 1 0    2          x + ȳ + z       M₂
0 1 1    3          x + ȳ + z̄       M₃
1 0 0    4          x̄ + y + z       M₄
1 0 1    5          x̄ + y + z̄       M₅
1 1 0    6          x̄ + ȳ + z       M₆
1 1 1    7          x̄ + ȳ + z̄       M₇
Under the assumption that the variables of the maxterms are always arranged in
the same order as they appear in the function notation f(x,y,z), no ambiguity results
from this decimal notation.
Consider the four-variable Boolean function given in Table 3.8. The corresponding
maxterm canonical formula is
f(w,x,y,z) = (w + x + y + z)(w + x + y + z̄)(w + x + ȳ + z)(w + x̄ + y + z)
             (w + x̄ + ȳ + z)(w + x̄ + ȳ + z̄)(w̄ + x + y + z̄)(w̄ + x̄ + y + z)
             (w̄ + x̄ + y + z̄)(w̄ + x̄ + ȳ + z)(w̄ + x̄ + ȳ + z̄)
To obtain its algebraic form, it is only necessary to convert each of the decimal
subscripts into a three-digit binary number and then write the corresponding sum
term in which a 0-bit becomes an uncomplemented variable and a 1-bit becomes
a complemented variable. Under the assumption that the variables within the
maxterms appear in the same order as in the function notation, the algebraic form
becomes
f(x,y,z) = (x + y + z)(x + ȳ + z)(x̄ + y + z̄)(x̄ + ȳ + z)(x̄ + ȳ + z̄)
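The decimal-subscript-to-algebraic-form rule can be sketched as follows; in the code, the complement of a variable is written with a prime.

# Sketch: converting PI-M(0, 2, 5, 6, 7) into its algebraic maxterm form.

def maxterm(subscript, variables):
    bits = format(subscript, f"0{len(variables)}b")
    # A 0-bit gives an uncomplemented variable; a 1-bit gives a complemented one.
    literals = [v if b == "0" else v + "'" for v, b in zip(variables, bits)]
    return "(" + " + ".join(literals) + ")"

def pi_M(subscripts, variables=("x", "y", "z")):
    return "".join(maxterm(i, variables) for i in subscripts)

print(pi_M([0, 2, 5, 6, 7]))
# (x + y + z)(x + y' + z)(x' + y + z')(x' + y' + z)(x' + y' + z')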
The complementation of the Boolean expression
f = w̄xz̄ + w(x + ȳz)
proceeds as follows using DeMorgan's law (Theorem 3.9) and the involution law
(Theorem 3.5):
f̄ = [w̄xz̄ + w(x + ȳz)]̄ = (w̄xz̄)̄ [w(x + ȳz)]̄
  = [(w̄)̄ + x̄ + (z̄)̄][w̄ + (x + ȳz)̄]
  = (w + x̄ + z)[w̄ + x̄(ȳz)̄]
  = (w + x̄ + z){w̄ + x̄[(ȳ)̄ + z̄]}
  = (w + x̄ + z)[w̄ + x̄y + x̄z̄]
xᵢ g₁ + x̄ᵢ g₂
(xᵢ + h₁)(x̄ᵢ + h₂)
where g₁, g₂, h₁, and h₂ are expressions not containing the variable xᵢ. These special
forms of a Boolean formula f(x₁, ..., xᵢ, ..., xₙ) are said to be expansions about the
variable xᵢ. The expansions about a single variable are achieved by the following
theorem, known as Shannon's expansion theorem.
Theorem 3.11
f(x₁, ..., xᵢ, ..., xₙ) = xᵢ · f(x₁, ..., 1, ..., xₙ) + x̄ᵢ · f(x₁, ..., 0, ..., xₙ)
                       = [xᵢ + f(x₁, ..., 0, ..., xₙ)] · [x̄ᵢ + f(x₁, ..., 1, ..., xₙ)]
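Because a Boolean function can be tabulated exhaustively, the expansion can be checked by perfect induction, as in the sketch below; the particular function used is only an illustration, not Example 3.6.

# Sketch: Shannon's expansion about x for a 3-variable function given as a predicate.
from itertools import product

def expand_about_x(f):
    """Return the cofactors f(1, y, z) and f(0, y, z)."""
    g1 = lambda y, z: f(1, y, z)   # multiplies x in the disjunctive form
    g2 = lambda y, z: f(0, y, z)   # multiplies x-bar in the disjunctive form
    return g1, g2

f = lambda x, y, z: ((1 - x) | y) & z
g1, g2 = expand_about_x(f)

# f(x,y,z) = x*g1(y,z) + x'*g2(y,z) for every combination of values.
assert all(f(x, y, z) == (x & g1(y, z)) | ((1 - x) & g2(y, z))
           for x, y, z in product((0, 1), repeat=3))
print("expansion about x verified")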
EXAMPLE 3.6
Consider the Boolean expression
f(w,x,y,z) = wx + (wx + y)z
course, a criterion is needed that can be applied to a formula to measure the desir-
ability of the corresponding network. This has led to defining “simple” or “mini-
mal” expressions with the intent that they correspond to the least-cost network.
One possible way of measuring the simplicity of a Boolean expression is to ob-
serve the number of literals contained within the formula. For the purpose of this
chapter, the simplest form of an expression is defined as the normal formula having
the fewest number of literals.*
EXAMPLE 3.7
Consider the expression
(ey Ky) =P yz
EXAMPLE 3.8
Certainly it is not obvious that the final expression in Example 3.8 is the simplest
disjunctive normal formula. Nor was it obvious which theorems or postulates were
the most appropriate to apply in order to achieve the reduction. Clearly, there is a
need for systematic reduction techniques that guarantee minimal resulting expres-
sions. In the next chapter algorithmic procedures for obtaining expressions under
different measures of minimality are studied.
Theorem 3.12
(a) xᵢ · f(x₁, x₂, ..., xᵢ, ..., xₙ) = xᵢ · f(x₁, x₂, ..., 1, ..., xₙ)
(b) xᵢ + f(x₁, x₂, ..., xᵢ, ..., xₙ) = xᵢ + f(x₁, x₂, ..., 0, ..., xₙ)
where f(x₁, x₂, ..., k, ..., xₙ), for k = 0, 1, denotes the formula
f(x₁, x₂, ..., xᵢ, ..., xₙ) upon the substitution of the constant k for all oc-
currences of the variable xᵢ.
Theorem 3.13
(a) x̄ᵢ · f(x₁, x₂, ..., xᵢ, ..., xₙ) = x̄ᵢ · f(x₁, x₂, ..., 0, ..., xₙ)
(b) x̄ᵢ + f(x₁, x₂, ..., xᵢ, ..., xₙ) = x̄ᵢ + f(x₁, x₂, ..., 1, ..., xₙ)
Denoting xy + wx(w + z)(y + wz) by g(w,x,y,z), the above expression has the form
f(w,x,y,z) = x + g(w,x,y,z)
By Theorem 3.12(b), all occurrences of the x variable in g(w,x,y,z) can now be re-
placed by the Boolean constant 0. Therefore,
flw,~y,z) =xt+0-y+w-0- (wt ay + wz)
=x+y+ww + zy + wz)
It is next noted that by letting (w + z)(y + wz) be denoted by h(w,y,z), then
w(w + z)(y + wz) = w·h(w,y,z)
By Theorem 3.13(a) all occurrences of the w variable in h(w,y,z) can be replaced by
the constant 0. Thus,
f(w,x,y,z) = y + k(w,x,y,z)
By Theorem 3.13(b) the y variable in k(w,x,y,z) can be replaced by the constant 1. Thus,
Sw.%y,Z) = y + x + we(1 + 2)
= Var oe ar x
which is the simplest disjunctive normal formula describing the given function.
The duplication of the x literal in the third term and the identically 0 term yy are re-
moved. At this point a disjunctive normal formula is obtained, i.e.,
FORY:Z) = AY Pe XZ Xz
The first term in this expression is lacking the z variable. The variable is introduced
by and-ing the term with logic-1 in the form of z + z. In a similar manner, the miss-
ing variables in the second and third terms are introduced. Thus,
SOLD = yy La ee yz
XV CZ) Fe Fry ez te ave
Application of the distributive law of (-) over (+) results in
The first, second, fourth, and fifth terms are identically 1, by Postulate P5(a) and
Theorem T2(a), and, hence, are now dropped as well as the duplicate literal in the
last term. The resulting expression is
TCG Oe hee AC ety)
The first term is already a maxterm since all three variables appear. However, the
second term is not a maxterm since it lacks the z variable. The z variable is next in-
troduced into this term in the form zz, i.e.,
FOGY.Z) = (X Fy + Ox + y + 0)
= Fy 2 y+ zzz)
Finally, the distributive law of (+) over (+) is again applied to yield the maxterm
canonical formula
is given by
f(w,x,y,z) = Σm(2,4,5,6,7,10,11,13,14)
EXAMPLE 3.13
The complement of the maxterm canonical formula
f(w,x,y,z) = ΠM(1,2,6,10,12,13,14)
is given by
f'(w,x,y,z) = ΠM(0,3,4,5,7,8,9,11,15)
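In decimal notation, forming the complement is simply a matter of listing the indices that do not appear in the original expression. A minimal sketch, assuming the indices are given as a Python list:

```python
# Sketch: complement of a canonical form in decimal notation.
# For n variables, the complement of PI M(S) is PI M of the indices not in S
# (and the complement of SIGMA m(S) is SIGMA m of the missing indices).
def complement_indices(indices, n_vars):
    universe = set(range(2 ** n_vars))
    return sorted(universe - set(indices))

print(complement_indices([1, 2, 6, 10, 12, 13, 14], 4))
# [0, 3, 4, 5, 7, 8, 9, 11, 15]
```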
EXAMPLE 3.15
Consider the minterm canonical formula
Forming the complement by listing the minterms not included in the original
expression results in
fW,% y,Z) = m, + mz + me + mo + my + My + Ms
Finally, taking the second complement by DeMorgan’s law results in the maxterm
canonical formula of the original function:
S(W,%Y,Z) = MM3MeMoMN
9M M15
= M)>M,M.M)M \\M,\M,5
= fC.)
Figure 3.1 Gate symbols. (a) And-gate. (b) Or-gate. (c) Not-gate.
3.7.1 Gates
Electronic circuits can be designed in which only two possible steady-state voltage
signal values appear at the terminals of the circuits at any time.* Such two-state cir-
cuits receive two-valued input signals and are capable of producing two-valued out-
put signals in the steady state. Rather than dealing with the actual voltage signal
values at the circuit terminals, it is possible to assign two arbitrary symbols to the
two steady-state voltage signal values. Let these symbols be logic-0 and logic-1.
An electronic circuit in which the output signal is a logic-1 if and only if all its
input signals are logic-1 is called an and-gate. This circuit is a physical realization
of the Boolean and-operation. Similarly, an electronic circuit in which the output
signal is a logic-1 if and only if at least one of its input signals is a logic-1 is called
an or-gate. The or-gate is a physical realization of the Boolean or-operation. Fi-
nally, an electronic circuit in which the output signal is always opposite to that of
the input signal is called a not-gate or inverter and is the physical realization of the
Boolean not-operation. Since the input and output lines, or terminals, of a gate have
different values, i.e., logic-0 and logic-1, at different times, each of these lines is as-
signed a two-valued variable. Hence, the terminal characteristics of each of these
gates are describable with a two-valued Boolean algebra. Figure 3.1 illustrates a set
of symbols for the above three gates and the corresponding algebraic expressions
for their behavior at the output terminals.
*TIn actuality two ranges of voltage signal values are associated with each terminal. However, for
simplicity in this discussion it is not necessary to regard the electrical signals as ranges, but rather two
values suffice. Gate properties are further discussed in Sec. 3.10.
essary, condition for a combinational network is that the network contains no closed
loops or feedback paths. Networks that satisfy this constraint are said to be acyclic.
A second type of logic network is the sequential network. Sequential networks
have a memory property so that the outputs from these networks are dependent not
only upon the current inputs but upon previous inputs as well. Feedback paths form
a necessary part of sequential networks. At this time, only acyclic gate networks are
studied. This ensures that the networks being considered are indeed combinational.
There is a very important inherent assumption in the establishment of Boolean
algebra as a mathematical model for a combinational gate network. A two-valued
Boolean algebra has only two different symbols to assign to the physical signal val-
ues present at any time at the gate terminals. However, when a physical signal
changes from one of its values to the other, a continuum of values appear at the
input or output terminals. Furthermore, in the real world, changes cannot occur in-
stantaneously, i.e., in zero time. To eliminate these transient-type problems, only
the steady-state conditions occurring in a network are considered. Under this
steady-state assumption, Boolean algebra can serve as a mathematical model for
combinational gate networks. In Sec. 3.10, further remarks are made about the na-
ture of the signals within a combinational network.
At this time, the analysis and synthesis of gate combinational networks is studied.
Analysis involves obtaining a behavioral description of the network. This is achieved
by writing a Boolean expression or, equivalently, by forming a truth table to describe
the network’s logic behavior. Synthesis, on the other hand, involves specifying the in-
terconnections of the gates, i.e., topological structure, for a desired behavior. This, in
turn, results in a logic diagram from which a physical realization is constructed.
The above analysis procedure is illustrated by the gate network of Fig. 3.3. The
or-gate whose output is labeled G1 is described by the formula G1 = y + z. Also, the
and-gate whose output is labeled G2 is described by the formula G2 = wxy. Next the
and-gate whose output is labeled G3 is described in terms of the input variable w and
the previously labeled output G1. Thus, the formula for the output of the and-gate la-
beled G3 is G3 = w·G1 = w(y + z). Finally, the output of the network is described
in terms of the labels G2 and G3; that is, f(w,x,y,z) = G2 + G3 = wxy + w(y + z).
Once the expression is written, the corresponding truth table can be constructed, as
previously explained in Sec. 3.4.
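This analysis can be mirrored directly in code. The sketch below tabulates the intermediate gate outputs G1, G2, and G3 and the network output, using the expressions exactly as written above (an assumption, since the network itself appears only in Fig. 3.3).

```python
from itertools import product

# Sketch: truth-table analysis of the gate network described above.
print(" w x y z | G1 G2 G3 | f")
for w, x, y, z in product([0, 1], repeat=4):
    g1 = int(y or z)            # or-gate:  G1 = y + z
    g2 = int(w and x and y)     # and-gate: G2 = wxy
    g3 = int(w and g1)          # and-gate: G3 = w(y + z)
    f = int(g2 or g3)           # output or-gate
    print(f" {w} {x} {y} {z} |  {g1}  {g2}  {g3} | {f}")
```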
the or-gate whose output is labeled G1, the and-gate whose output is G3, and finally
the output or-gate. Thus, Fig. 3.3 shows a three-level gate network under the as-
sumption of double-rail logic.
To illustrate the construction of a logic diagram from a Boolean expression,
consider the Boolean function described by the formula
f(w,x,y,z) = wx + x(y + z)
The logic diagram of this equation is shown in Fig. 3.4a where the topological
arrangement of the gates is in direct correspondence with the evaluation of the
formula. That is, an or-gate producing y + z is followed by an and-gate to obtain
x(y + z). Concurrently, an and-gate is used to generate wx. Finally, an output
or-gate is used to combine the outputs of the subnetworks for x(y + z) and wx.
The resulting network consists of three levels.
When the above expression is rewritten in sum-of-products form, i.e., disjunc-
tive normal form, it becomes
f(w,x,y,z) = wx + xy + xz
This formula suggests the two-level network shown in Fig. 3.4b. Since the two
expressions are equivalent, both of these networks have the same logical terminal
behavior.
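The claim that both networks have the same logical terminal behavior is confirmed by exhausting the sixteen input states, as in the following minimal sketch using the two formulas as written:

```python
from itertools import product

# Sketch: check that wx + x(y + z) and wx + xy + xz agree on every input.
three_level = lambda w, x, y, z: (w and x) or (x and (y or z))
two_level   = lambda w, x, y, z: (w and x) or (x and y) or (x and z)

assert all(bool(three_level(*v)) == bool(two_level(*v))
           for v in product([0, 1], repeat=4))
print("The two realizations are logically equivalent.")
```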
Table 3.10
w x y z   p
0 0 0 0   1
0 0 0 1   0
0 0 1 0   0
0 0 1 1   1
0 1 0 0   0
0 1 0 1   1
0 1 1 0   1
0 1 1 1   0
1 0 0 0   0
1 0 0 1   1
1 0 1 0   1
1 0 1 1   0
1 1 0 0   1
1 1 0 1   0
1 1 1 0   0
1 1 1 1   1
number of 1’s in the input state of the network and a 0 parity bit when there is an
odd number of 1’s. In this way, the total number of 1’s in the input state and the par-
ity bit collectively is odd.
The truth table for this odd-parity-bit generator is given in Table 3.10. The vari-
ables w, x, y, and z denote the 4 bits of the input state where w corresponds to the
most significant bit and z corresponds to the least significant bit. The truth table has
16 rows corresponding to each of the possible input states. The functional values of
the truth table are determined by the statement of the problem. In this case, the
functional values, column p, are the desired parity bits the network produces and are
assigned so that the number of 1’s in each entire row of the table is odd.
Having obtained the truth table, a corresponding Boolean expression can be
written. For example, the minterm canonical formula for Table 3.10 is
p(w,x,y,z) = w'x'y'z' + w'x'yz + w'xy'z + w'xyz' + wx'y'z + wx'yz' + wxy'z' + wxyz   (3.7)
Although it is not obvious at this time, Eq. (3.7) is also the simplest disjunctive nor-
mal formula, i.e., a formula consisting of a sum of product terms, describing the
odd-parity-bit generator. The reader will be able to readily verify this fact after
studying techniques for obtaining minimal expressions in the next chapter. Alterna-
tively, the maxterm canonical formula for this example could be written, which is
the simplest conjunctive normal formula, i.e., a formula consisting of a product of
sum terms. Equation (3.7) is used to obtain the logic diagram shown in Fig. 3.5.
In general, a disjunctive normal formula always results in a network consisting
of a set of and-gates followed by a single or-gate; while a conjunctive normal for-
called the value of the function, to a proper subset of the 2” combinations of values of
the n independent variables in which all specified values are limited to the set {0,1}.
Incomplete Boolean functions are also called incompletely specified functions.
As in the case of a complete Boolean function, an n-variable incomplete
Boolean function is represented by a truth table with n + 1 columns and 2” rows.
Again the first n columns provide for a complete listing of all the 0-1 combinations
of values of the n variables and the last column gives the value of the function for
each row. However, for those combinations of values in which a functional value is
not to be specified, a symbol, say, —, is entered as the functional value in the last
column of the table. Table 3.11a illustrates an incomplete Boolean function in
which functional values are not specified for the combinations (x,y,z) = (0,1,1) and
(1,0,1).
The complement, f'(x_1, x_2, ..., x_n), of an incomplete Boolean function
f(x_1, x_2, ..., x_n) is also an incomplete Boolean function having the same unspecified
rows in the truth table. The functional values in the remaining rows of the truth
table for f', however, are opposite to the functional values in the corresponding rows
of the truth table for f. Table 3.11b shows the complement of the Boolean function
given in Table 3.11a.
x y z   f
0 0 0   1
0 0 1   1
0 1 0   0
0 1 1   —
1 0 0   0
1 0 1   —
1 1 0   0
1 1 1   1
(a)

x y z   f'
0 0 0   0
0 0 1   0
0 1 0   1
0 1 1   —
1 0 0   1
1 0 1   —
1 1 0   1
1 1 1   0
(b)
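An incomplete Boolean function is conveniently represented as a mapping from input combinations to 0, 1, or an unspecified marker, and its complement then inverts only the specified rows. A sketch, assuming None is used for the '—' entries of Table 3.11:

```python
from itertools import product

# Sketch: an incomplete Boolean function as a dict; None marks a don't-care.
# These are the entries of Table 3.11a; rows (0,1,1) and (1,0,1) are unspecified.
f = {(0,0,0): 1, (0,0,1): 1, (0,1,0): 0, (0,1,1): None,
     (1,0,0): 0, (1,0,1): None, (1,1,0): 0, (1,1,1): 1}

# Complement: specified values are inverted, don't-cares stay unspecified.
f_bar = {row: (None if v is None else 1 - v) for row, v in f.items()}

for row in product([0, 1], repeat=3):
    print(row, f[row], f_bar[row])
```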
dc(3,5)
These two types of don’t-care conditions are equivalent for the purpose of for-
malizing the statement of a problem. If an input state never occurs, then the output
state cannot be observed. For the case in which the output state is not specified for
some input state that can occur, by convention it is agreed that the output state is of
no consequence or, equivalently, is not to be observed. Thus, it is concluded that
when a design problem involves don’t-care conditions, a truth table may be formed
without the need of specifying all the functional values. These truth tables are in-
complete Boolean functions.
Once a gate network is synthesized, it is completely deterministic. That is, for
every input state some output state must result. This is true even if a don’t-care condi-
tion is applied to the network. Don’t-care conditions provide the designer with flexi-
bility. In particular, this allows the designer to judiciously choose the assignment of
the output states to the don’t-care conditions so that the realization is optimal. This
procedure does not violate the mathematical concept of incomplete Boolean functions
since incomplete Boolean functions describe a network’s required behavior only for a
selected set of input states. Thus, in designing such networks, the output states for the
required behavior are determined by the problem specifications, and the output states
for the don’t-care conditions are determined by a criterion of an optimal realization.
To illustrate the formulation of an incomplete Boolean function as a mathematical
model for a logic-design problem, assume it is desired to design a gate network that is
an odd-parity-bit generator for the decimal digits 0 to 9 represented in 8421 BCD.
Table 3.12 gives the truth table for the odd-parity-bit generator. The variable w denotes
the bit in the 8421 code group whose weight is 8, the variable x denotes the bit whose
Table 3.12
w x y z   p
0 0 0 0   1
0 0 0 1   0
0 0 1 0   0
0 0 1 1   1
0 1 0 0   0
0 1 0 1   1
0 1 1 0   1
0 1 1 1   0
1 0 0 0   0
1 0 0 1   1
1 0 1 0   —
1 0 1 1   —
1 1 0 0   —
1 1 0 1   —
1 1 1 0   —
1 1 1 1   —
weight is 4, the variable y denotes the bit whose weight is 2, and the variable z denotes
the bit whose weight is 1. The parity bit that is to be produced by the gate network is
shown in the p column. Functional values are only specified for the first 10 rows as de-
termined by the statement of the problem. The last six rows are don’t-care conditions
since the binary combinations 1010 to 1111 cannot occur because they do not represent
possible inputs to the logic network. That is, this network can only have as its inputs the
10 binary combinations associated with the 10 decimal digits expressed in 8421 BCD.
The minterm canonical formula describing Table 3.12 is
p(w,x,y,z) = Σm(0,3,5,6,9) + dc(10,11,12,13,14,15)
Using the procedures of the next chapter, a simplified expression for this function is
x y   xy   nand(x,y) = (xy)'
0 0    0           1
0 1    0           1
1 0    0           1
1 1    1           0
Figure 3.7 Nand-gate symbols. (a) Normal symbol. (b) Alternate symbol.
*Occasionally a special symbol is used to denote the nand-operation on a set of variables. One such
symbol is |, referred to as a stroke. Thus, the stroke-operation on a set of n variables is defined by
the expression x_1 | x_2 | ··· | x_n = (x_1 x_2 ··· x_n)'.
x y   x + y   nor(x,y) = (x + y)'
0 0     0               1
0 1     1               0
1 0     1               0
1 1     1               0
Again inversion bubbles are used in the gate symbols to indicate that algebraically a
Boolean not-operation occurs at the terminal at which the inversion bubble appears.
Verbally, the output of a nor-gate is logic-1 if and only if all of its inputs are at
logic-0; otherwise, the output is logic-0.
Figure 3.8 Nor-gate symbols. (a) Normal symbol. (b) Alternate symbol.
Occasionally a special symbol is used to denote the nor-operation on a set of variables. One such
symbol is ↓, referred to as a dagger. Thus, the dagger-operation on a set of n variables is defined by the
expression x_1 ↓ x_2 ↓ ··· ↓ x_n = (x_1 + x_2 + ··· + x_n)'.
network configurations utilizing only a single type of gate result in the realizations
of the and-, or-, and not-functions, such a gate is called a universal gate. Both nand-
gates and nor-gates are universal gates. Another important property of nand-gates
and nor-gates is that their circuit realizations are more easily achieved.
The universal property of nand-gates is illustrated in Fig. 3.9. Since in Boolean
algebra xx = x, by complementing both sides of the expression it immediately fol-
lows that (xx)' = x'. Hence, as shown in Fig. 3.9a, a two-input nand-gate with its in-
puts tied together is equivalent to a not-gate. Alternatively, since in Boolean algebra
x·1 = x, then (x·1)' = x' implies that a two-input nand-gate in which one input is x
and the other is the constant logic-1 also serves as a not-gate. Figure 3.9b illustrates
how the or-function is realized by the use of just nand-gates. In particular, since
(x'y')' = x + y, using two nand-gates to form the complements of x and y and then
using these as the inputs to a third nand-gate, the overall behavior of the network is
that of the Boolean or-function. Finally, since [(xy)']' = xy, the Boolean and-function
is achieved by the network of Fig. 3.9c where the inputs x and y are applied to a sin-
gle nand-gate to form (xy)' and then the output of the gate is complemented, using a
second nand-gate, to obtain the desired results.
Nor-gates are also universal gates. Thus, they can be used to form x', x + y, and
xy according to the relationships
x' = (x + x)'        x + y = [(x + y)']'        xy = (x' + y')'
Figure 3.9 The universal property of nand-gates. (a) Not realization. (b) Or realization.
(c) And realization.
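The three constructions of Fig. 3.9 are easily confirmed by perfect induction. The sketch below builds the not-, or-, and and-functions from a single two-input nand function and checks all input combinations.

```python
from itertools import product

# Sketch: nand is universal -- not, or, and built from nand alone.
nand = lambda a, b: int(not (a and b))

not_from_nand = lambda x: nand(x, x)                       # (xx)' = x'
or_from_nand  = lambda x, y: nand(nand(x, x), nand(y, y))  # (x'y')' = x + y
and_from_nand = lambda x, y: nand(nand(x, y), nand(x, y))  # ((xy)')' = xy

for x, y in product([0, 1], repeat=2):
    assert not_from_nand(x) == int(not x)
    assert or_from_nand(x, y) == int(x or y)
    assert and_from_nand(x, y) == int(x and y)
print("not, or, and realized with nand-gates only.")
```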
Figure 3.10 The universal property of nor-gates. (a) Not realization. (b) Or
realization. (c) And realization.
f(w,x,y,z) = wz + wz(x + y)
Using DeMorgan’s law, this expression is rewritten as
f=wzt w2zx + ¥) Z
[wz(x + y)] = {(we)lwex+9} x4 5=Gy)
Figure 3.11 Steps involved to realize the Boolean expression f(w,x,y,Z) = WZ + wz(x + y)
using only nand-gates.
f(x,y,z) = x(y + z)
Since the highest-order operation is the and-operation connecting x with y + z, it is
necessary to first complement the expression to begin the nand-gate realization pro-
cedure; that is,
f'(x,y,z) = [x(y + z)]'
Figure 3.12 Steps involved to realize the Boolean expression f(x,y,z) = x(y + Z) using only nand-gates.
Now that the desired form for a nand-gate realization is obtained, the procedure ex-
plained previously is carried out. This is illustrated in Fig. 3.12a and b. Finally, in
order to obtain a realization of the original expression f, the output of the network is
complemented as shown in Fig. 3.12c.
The above algebraic procedure to obtain a nand-gate realization can also be
performed graphically. This procedure makes use of the two gate symbols shown in
Fig. 3.7 for a nand-gate. By making use of both symbols within the same logic dia-
gram, the application of DeMorgan’s law, required in the above algebraic proce-
dure, is readily achieved. The steps in the graphical procedure are as follows:
1. Apply DeMorgan’s law to the expression so that all unary operations appear only
with single variables. Draw the logic diagram using and-gates and or-gates.
2. Replace each and-gate symbol by the nand-gate symbol of Fig. 3.7a and each
or-gate symbol by the nand-gate symbol of Fig. 3.7b.
3. Check the bubbles occurring on all lines between two gate symbols. For every
bubble that is not compensated by another bubble along the same line, insert
the appropriate not-gate symbol from Fig. 3.13 so that the not-gate bubble
occurs on the same side as the gate bubble.
4. Whenever an input variable enters a gate symbol at a bubble, complement the
variable. If the output line has a bubble, then insert an output not-gate symbol.
5. Replace all not-gates by a nand-gate equivalent if desired.
To illustrate this graphical procedure, again consider the expression
f(w,x,y,z) = wz + wz(x + y)
Figure 3.14 Steps illustrating the graphical procedure for obtaining a nand-gate realization of the expression
f(w,x,y,z) = wz + wz(x + y).
The corresponding logic diagram using and-gates and or-gates is shown in Fig.
3.14a. Next, according to Step 2, each and-gate symbol is replaced by the nand-gate
symbol of Fig. 3.7a and each or-gate symbol is replaced by the nand-gate symbol of
Fig. 3.7b as shown in Fig. 3.14b. Since each gate output bubble is connected directly
to a gate input bubble, Step 3 is not needed. Finally, as indicated in Step 4, since the
inputs x and y in Fig. 3.14b are entering at bubbles, they are complemented as
shown in Fig. 3.14c. If the alternate nand-gate symbols are replaced by the conven-
tional nand-gate symbols, then the network of Fig. 3.14c becomes that of Fig. 3.11c.
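The overall effect of either procedure on a two-level and-or network is the familiar nand-nand form. The following sketch checks the general pattern AB + CD = nand(nand(A,B), nand(C,D)) by perfect induction; the four-variable pattern is an illustration, not the specific example of Fig. 3.14.

```python
from itertools import product

# Sketch: a two-level and-or expression AB + CD is realized by two levels
# of nand-gates: f = nand(nand(A, B), nand(C, D)).
nand = lambda a, b: int(not (a and b))

for a, b, c, d in product([0, 1], repeat=4):
    and_or    = int((a and b) or (c and d))
    nand_nand = nand(nand(a, b), nand(c, d))
    assert and_or == nand_nand
print("AB + CD and the nand-nand network agree on all 16 inputs.")
```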
f(w,x,y,z) = wz + wz(x + y)
To construct a nor-gate realization for it, first it is noted that the highest-order oper-
ation is the or-operation appearing between the wz and wz(x + y) terms. Thus, it is
necessary to first complement the original expression, i.e.,
SI
=[w+z+(x+y)]
Figure 3.15 Steps involved to realize the Boolean expression f(w,x,y,Z) = wz + wz(x + y) using only nor-gates.
1. Apply DeMorgan’s law to the expression so that all unary operations appear
only with single variables. Draw the logic diagram using and-gates and
or-gates.
2. Replace each or-gate symbol by the nor-gate symbol of Fig. 3.8a and each and-
gate symbol by the nor-gate symbol of Fig. 3.8b.
3. Check the bubbles occurring on all lines between two gate symbols. For every
bubble that is not compensated by another bubble along the same line, insert
the appropriate not-gate symbol from Fig. 3.13 so that the not-gate bubble
occurs on the same side as the gate bubble.
4. Whenever an input variable enters a gate symbol at a bubble, complement the
variable. If the output line has a bubble, then insert an output not-gate symbol.
5. Replace all not-gates by a nor-gate equivalent if desired.
Figure 3.16 shows the steps of the graphical procedure again being applied to
the Boolean expression
f(w,x,y,z) = wz + wz(x + y)
The logic diagram is first drawn using conventional and-gates and or-gates as
shown in Fig. 3.16a. As specified in Step 2 of the above procedure, each or-gate
symbol is replaced by the nor-gate symbol of Fig. 3.8a and each and-gate symbol is
replaced by the nor-gate symbol of Fig. 3.8b. At this point, the diagram appears as
Figure 3.16 Steps illustrating the graphical procedure for obtaining a nor-gate realization of the expression
f(w,x,y,z) = wz + wz(x + y).
in Fig. 3.16b. Checking all lines between gates, each inversion bubble at one end
has a matching inversion bubble at the other end. Hence, Step 3 does not have to be
applied. Finally, as indicated by Step 4, the four inputs entering at bubbles are com-
plemented and an output not-gate is appended since the output gate in Fig. 3.16b
has a bubble. This gives the logic diagram shown in Fig. 3.16c. Comparing Fig.
3.16c to Fig. 3.15d and recalling that two symbols are possible for a nor-gate, it is
seen that the same results are obtained.
x ⊕ y = x'y + xy'
Comparing the definition of the exclusive-or-function with that of the Boolean or-
function previously defined, it is seen that they differ only when both x and y have
the value of logic-1. To emphasize this distinction, the conventional Boolean or-
function is also referred to as the inclusive-or-function.
A special gate symbol has been defined for the exclusive-or-function. This
symbol is shown in Fig. 3.17 and is frequently referred to as an xor-gate. Normally,
xor-gates are available only as two-input gates.
The exclusive-or-function has many interesting properties. These are summa-
rized in Table 3.16. Uppercase letters are used in this table to emphasize the fact
that these variables can represent expressions as well as single variables. To reduce
the number of occurrences of parentheses, by convention, it is assumed that the and-
operation takes precedence over the exclusive-or-operation. No precedence between
the inclusive-or- and exclusive-or-operations is assumed and, hence, parentheses
(i)    X ⊕ Y = X'Y + XY' = (X + Y)(X' + Y')        (X ⊕ Y)' = XY + X'Y' = (X + Y')(X' + Y)
(ii)   X ⊕ 0 = X        X ⊕ 1 = X'
(iii)  X ⊕ X = 0        X ⊕ X' = 1
(iv)   X' ⊕ Y = X ⊕ Y' = (X ⊕ Y)'
(v)    X ⊕ Y = Y ⊕ X
(vi)   X ⊕ (Y ⊕ Z) = (X ⊕ Y) ⊕ Z = X ⊕ Y ⊕ Z
(vii)  X(Y ⊕ Z) = XY ⊕ XZ
(viii) X + Y = X ⊕ Y ⊕ XY
(ix)   X ⊕ Y = X + Y if and only if XY = 0
(x)    If X ⊕ Y = Z, then Y ⊕ Z = X and X ⊕ Z = Y
are needed when these two operations appear within an expression. All of the prop-
erties given in Table 3.16 are easily proved by perfect induction.
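Perfect induction is itself mechanical. The sketch below checks properties (vii) and (viii) of Table 3.16 over all assignments of the variables.

```python
from itertools import product

# Sketch: verify two exclusive-or properties by perfect induction.
xor = lambda a, b: int(a != b)

for X, Y, Z in product([0, 1], repeat=3):
    # (vii) X(Y xor Z) = XY xor XZ
    assert int(X and xor(Y, Z)) == xor(int(X and Y), int(X and Z))
    # (viii) X + Y = X xor Y xor XY
    assert int(X or Y) == xor(xor(X, Y), int(X and Y))
print("Properties (vii) and (viii) hold for all assignments.")
```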
To illustrate the use of the exclusive-or properties given in Table 3.16 and to
show the usefulness of xor-gates in logic design, again consider the odd-parity-bit
generator that was designed in Sec. 3.7. At that time the Boolean expression for the
generator was obtained, i.e., Eq. (3.7). Starting from that point, the following
Boolean manipulations can now be performed:
p(w,x,y,z) = w'x'y'z' + w'x'yz + w'xy'z + w'xyz' + wx'y'z + wx'yz' + wxy'z' + wxyz
           = w'x'(y'z' + yz) + w'x(y'z + yz') + wx'(y'z + yz') + wx(y'z' + yz)
           = w'x'(y ⊕ z)' + w'x(y ⊕ z) + wx'(y ⊕ z) + wx(y ⊕ z)'        by Prop. (i)
           = (w'x' + wx)(y ⊕ z)' + (w'x + wx')(y ⊕ z)
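Continuing the grouping suggests that p is the complement of the exclusive-or of the four inputs. The sketch below confirms that the minterm list of Eq. (3.7) agrees with that xor form; the final xor expression is an inference from the manipulation above rather than a formula quoted from the text.

```python
from itertools import product

# Sketch: Eq. (3.7) versus the xor form of the odd-parity bit.
def p_eq37(w, x, y, z):
    # minterm list of Eq. (3.7): rows 0, 3, 5, 6, 9, 10, 12, 15
    return int(8*w + 4*x + 2*y + z in {0, 3, 5, 6, 9, 10, 12, 15})

for w, x, y, z in product([0, 1], repeat=4):
    assert p_eq37(w, x, y, z) == int(not (w ^ x ^ y ^ z))
print("p(w,x,y,z) equals the complement of w xor x xor y xor z.")
```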
Figure 3.20 Voltage ranges: nominal logic-1 between V_H(min) and V_H(max); a forbidden range of operation other than during transition; nominal logic-0 between V_L(min) and V_L(max).
voltage range between V_L(min) and V_L(max), then it is assigned to logic-0. Similarly,
when a signal value is in some high-level voltage range between V_H(min) and V_H(max), it
is assigned to logic-1. This is illustrated in Fig. 3.20.* As long as the signal values
stay within their assigned ranges, except during transit between ranges, the logic
gates behave as intended. Steady-state signals within the forbidden range result in
unreliable gate behavior. It is because a prespecified range of signal values is re-
garded as the same logic value that digital systems are highly reliable under such
conditions as induced noise, temperature variations, component fabrication varia-
tions, and power supply variations. However, for simplicity in discussion, nominal
values of the signals are frequently regarded as their true values.
In manufacturer’s literature, the terminal behavior of the logic elements is
stated in terms of the symbols L and H, denoting low and high voltage ranges,
rather than logic-0 and logic-1. For example, Table 3.18 illustrates how a manufac-
turer might specify the terminal behavior of some type of gate circuit. Upon substi-
tuting 0 and 1 for L and H, respectively, the Boolean definition of the and-function
results and, hence, such a circuit is a (positive) and-gate.’
There are several properties associated with logic gates that determine the envi-
ronment in which a digital system can operate as well as introduce constraints on its
topological structure. These include noise margins, fan-out, propagation delays,
and power dissipation. Noise margins are a measure of the capability of a circuit to
operate reliably in the presence of induced noise. The fan-out of a circuit is the
*This is the positive-logic concept. Alternatively, logic-O can be assigned to the higher voltage range
and logic-1 to the lower voltage range. Such an assignment is referred to as negative logic.
‘It should be noted that if 0 and 1 are substituted for H and L, respectively, in Table 3.18, then the circuit
behavior becomes that of an or-gate. Thus, under positive logic, a circuit that behaves as an and-gate
becomes, under negative logic, an or-gate. To avoid any confusion, this book has adopted the positive-
logic convention.
number of gates or loads that can be connected to the output of the circuit and still
maintain reliable operation. The propagation delays of a gate circuit are influencing
factors that determine the overall operating speed of a digital system since they es-
tablish how fast the circuit can perform its intended function. Finally, power dissi-
pation is the power consumed by the gate that, in turn, determines the size of the
power supply needed for the digital system.
Figure 3.21 Noise effects. (a) Interconnection of two gates with induced noise.
(b) Noise margins.
V_OH(min) − V_IH(min) does not affect the logic behavior of the two gates in cascade.
This is called the worst-case high-level noise margin. As seen from these defini-
tions, noise margins are a measure of a digital circuit's immunity to the presence
of induced electrical noise.
It should be noted that the above noise margins are worst-case values. If the ac-
tual low-level output voltage of gate 1 is V_OL, where V_OL ≤ V_OL(max), then the actual
low-level noise margin, NM_L, is given by
NM_L = V_IL(max) − V_OL
Similarly, if the actual high-level output voltage of gate 1 is V_OH, where V_OH ≥
V_OH(min), then the actual high-level noise margin, NM_H, is
NM_H = V_OH − V_IH(min)
3.10.2 Fan-Out
As discussed in the Appendix, the signal value at the output of a gate is dependent
upon the number of gates to which the output terminal is connected. Since for
proper operation the signal values must always remain within their allowable
ranges, this implies that there is a limitation to the number of gates that can serve as
loads to a given gate. This is known as the fan-out capability of the gate. Again,
manufacturers specify this limitation. It is then the responsibility of the designer of
a logic network to adhere to the limitation. To do this, circuits known as buffers,
which have no logic properties but rather serve as amplifiers, are sometimes incor-
porated into a logic network.
*Similar waveforms can be drawn for other types of gates under the assumption that all but one of the
gate inputs are held fixed and the remaining input is changed to cause the output to change, possibly, but
not necessarily, with an inversion.
CHAPTER 3 PROBLEMS
3.1 Using the basic Boolean identities given in Table 3.1, prove the following
relationships by going from the expression on the left side of the equals sign
to the expression on the right side. State which postulate or theorem is
applied at each step.
Ry OE AY = eh
(2+ ay Fixe + ty) = xy
Pye 2 az) = ay
Gy + yz xz) = xy + YZ + XZ
xy + yz + x'z = xy + x'z (consensus theorem)
(x + y)(x' + z) = xz + x'y
SO WY wer) = ay + yz tog
het ey sey 2b ee Sy ch ez
3.2 Prove that in a Boolean algebra the cancellation law does not hold; that is,
show that, for every x, y, and z in a Boolean algebra, xy = xz does not imply
y = z. Does x + y = x + z imply y = z?
3.3 Using the method of perfect induction, prove the following identities for a
two-valued Boolean algebra.
a. (xyz)' = x' + y' + z'
b. (x + y + z)' = x'y'z'
Ce ey Piyz exc ey oe ez (consensus theorem)
3.4 Prove that no Boolean algebra has exactly three distinct elements.
3.5 Construct the truth table for each of the following Boolean functions.
a. (XYZ) = yore + ye Z)
Ds fGoy.Z) = Gy x2) yz
CI yz y) tz) (4 4*z)
d. f(w,xy,z) = wxy + wy + z)
3.6 For each of the truth tables in Table P3.6, write the corresponding minterm
canonical formula in algebraic form and in m-notation.
3.7 Write each of the following minterm canonical formulas in algebraic form
and construct their corresponding truth tables.
a. f(x,y,z) = Σm(0,2,4,5,7)
b. f(w,x,y,z) = Σm(1,3,7,8,9,14,15)
3.8 For each of the truth tables in Table P3.6, write the corresponding maxterm
canonical formula in algebraic form and in M-notation.
Table P3.6
c. f(w,x,y,z) = Σm(1,4,6,7,8,12,14)
d. f(w,x,y,z) = ΠM(3,7,8,10,12,13)
3.17 Transform each of the following canonical expressions into its other
canonical form in decimal notation.
a. f(x,y,z) = Σm(1,3,5)
b. f(x,y,z) = ΠM(3,4)
c. f(w,x,y,z) = Σm(0,1,2,3,7,9,11,12,15)
d. f(w,x,y,z) = ΠM(0,2,5,6,7,8,9,11,12)
3.18 Write a Boolean expression for each of the logic diagrams in Fig. P3.18.
3.19 Draw the logic diagram using gates corresponding to the following Boolean
expressions. Assume that the input variables are available in both
complemented and uncomplemented forms.
ane SfWea, 2) = Ey ez) Pe
b. fvywxy2j=at+y{v+mtyvt+twt+avtxtz)}
c. f(v,w,xy,Z) = vlw(xy + z) + xz] + vw
Figure P3.18
Figure P3.20
3.20 For each of the gate networks shown in Fig. P3.20, determine an equivalent
gate network with as few gate inputs as possible.
3.21 Besides gate networks, networks consisting of other two-state devices are
also related to a Boolean algebra. For example, a Boolean expression for a
configuration of switches can be written to describe whether there is an open
or closed path between the network terminals. The Boolean constant 1 is
assigned to a closed switch and the existence of a closed path between the
terminals of the configuration of switches; while the Boolean constant 0 is
assigned to an open switch and the existence of an open path between the
terminals of the configuration of switches. Algebraically, each switch is
denoted by a Boolean variable in which the variable is uncomplemented if
the switch is normally open and complemented if it is normally closed.
Under this assignment, switches placed in series can be denoted by the and-
Figure P3.21
b. Construct the truth table of the complement function.
c. Write both the minterm and maxterm canonical formulas in decimal
notation for the complement function.
3.23 Show that the nand-operation is not associative, i.e., nand[x, nand(y, z)] ≠
nand[nand(x, y), z]. Is the nor-operation associative?
3.24 Write a Boolean expression for each of the logic diagrams in Fig. P3.24.
3.25 Using algebraic manipulations, obtain a logic diagram consisting of
only nand-gates for each of the following Boolean expressions. Do not
alter the given form of the expressions. Assume the independent
Table P3.22
w x y z   f
0 0 0 0   —
0 0 0 1   —
0 0 1 0   1
0 0 1 1   1
0 1 0 0   0
0 1 0 1   —
0 1 1 0   1
0 1 1 1   0
1 0 0 0   0
1 0 0 1   —
1 0 1 0   1
1 0 1 1   —
1 1 0 0   0
1 1 0 1   —
1 1 1 0   —
1 1 1 1   1
Figure P3.24
Figure P3.29
De = S71
*Networks involving identical gate types following each other can evolve when using gates with a
limited number of inputs. In this network it is assumed that only two-input and-gates and or-gates are
available.
b_n = g_n
b_k = g_k ⊕ b_{k+1}    for n−1 ≥ k ≥ 1
a. Design an n-bit Gray-to-binary converter.
b. Using the above algorithm as a starting point, devise an algorithm to
convert an n-bit binary number into an n-bit Gray code group. Design
the corresponding n-bit binary-to-Gray converter.
3.35 For the Hamming code discussed in Sec. 2.12, design a logic network which
accepts the 7-bit code groups, where at most a single error has occurred, and
generates the corresponding corrected 7-bit code groups. To do this, first
design networks for cj", c3, and c¥. Then, design a network for correcting
the appropriate bit. Use exclusive-or-gates whenever possible. However,
assume that all available exclusive-or-gates have only two input terminals.
(Hint: The network for c* can be constructed with just two-input exclusive-
or-gates.)
CHAPTER 4
Simplification of Boolean
Expressions
In Sec. 3.6 it was shown that by use of the theorems and postulates of a Boolean
algebra, it is possible to obtain “simple” expressions. Although the concept of
simplicity was not formally defined, it was observed that neither the approach to
equation simplification nor the capability of concluding when an equation is simple
is obvious.
At this time the simplification problem is studied in detail and formal tech-
niques for achieving simplification are developed. In particular, two general ap-
proaches are presented. First, a graphical method is introduced that can handle
Boolean expressions up to six variables. Then, a tabular procedure is developed that
is not bounded by six variables and that is capable of being programmed on a com-
puter since it is algorithmic. These approaches are applied to both a single Boolean
expression that describes single-output network behavior, and a collection of
Boolean expressions describing multiple-output networks.
exhaustive list of items that should be considered in the network evaluation, nor are
they independent.
Even though all of the above factors are important, a single simple design pro-
cedure encompassing them all does not exist. However, if certain aspects of these
factors are considered of prominent importance, then a formal approach to the de-
sign of optimal logic networks can be developed.
4.2.1 Implies
Consider two complete Boolean functions of n variables, f1 and f2. The function f1
implies the function f2 if there is no assignment of values to the n variables that
makes f1 equal to 1 and f2 equal to 0. Hence, for the complete Boolean functions f1
and f2, whenever f1 equals 1, then f2 must also equal 1; and, alternatively, whenever
f2 equals 0, then f1 must equal 0. Since terms and formulas (or expressions) describe
functions, the concept of implies may also be applied to terms and formulas, e.g.,
whether or not a particular term implies a function. To illustrate the concept of im-
plies, consider the functions f1(x,y,z) = xy + yz and f2(x,y,z) = xy + yz + x'z tabu-
lated in Table 4.1a. By applying the above definition to the truth table, it is readily
seen that f1 implies f2. As a second example, consider the functions f3(x,y,z) =
(x + y)(y + z)(x' + z) and f4(x,y,z) = (x + y)(y + z) shown in Table 4.1b. In this
case f3 implies f4.
Now consider a single term that appears in the normal formula for a function.
In the case of a disjunctive normal formula, i.e., one in sum-of-products form, each
x y z   f1  f2
0 0 0   0   0
0 0 1   0   1
0 1 0   0   0
0 1 1   1   1
1 0 0   0   0
1 0 1   0   0
1 1 0   1   1
1 1 1   1   1
(a)

x y z   f3  f4
0 0 0   0   0
0 0 1   0   0
0 1 0   1   1
0 1 1   1   1
1 0 0   0   0
1 0 1   1   1
1 1 0   0   1
1 1 1   1   1
(b)
of its product terms implies the function being described by the formula. This fol-
lows from the fact that whenever the product term has the value 1, the function must
also have the value 1. On the other hand, for a conjunctive normal formula, i.e., one
in product-of-sums form, each sum term is implied by the function, i.e., the function
implies the sum term. In this case, whenever the sum term has the value 0, the func-
tion must also have the value 0.
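The implies relation is a simple containment test on truth tables. The sketch below checks the pair of functions tabulated in Table 4.1a.

```python
from itertools import product

# Sketch: f1 implies f2 iff no assignment makes f1 = 1 and f2 = 0.
def implies(f1, f2, n):
    return all(not (f1(*v) and not f2(*v)) for v in product([0, 1], repeat=n))

f1 = lambda x, y, z: (x and y) or (y and z)
f2 = lambda x, y, z: (x and y) or (y and z) or ((not x) and z)

print(implies(f1, f2, 3))   # True:  f1 implies f2
print(implies(f2, f1, 3))   # False: f2 does not imply f1
```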
4.2.2 Subsumes
A comparison between two product terms or two sum terms is also possible. A term
t1 is said to subsume a term t2 if and only if all the literals of the term t2 are also lit-
erals of the term t1. As an example, consider the product terms xyz and xz. From the
definition of subsumes, the product term xyz subsumes the product term xz. In a
similar manner, for the two sum terms x + y + z and x + z, the sum term x + y + z
subsumes the sum term x + z.
From the above discussion it is seen that if a product term t1 subsumes a prod-
uct term t2, then t1 implies t2, since whenever t1 has the value 1, t2 also has the value
1. On the other hand, if a sum term t1 subsumes a sum term t2, then t2 implies t1,
since whenever t1 has the value 0, t2 also has the value 0. By the absorption law, i.e.,
Theorem 3.6, if one term subsumes another in an expression, then the subsuming
term can always be deleted from the expression without changing the function
being described.
Table 4.2
x y z   f
0 0 0   1
0 0 1   1
0 1 0   1
0 1 1   1
1 0 0   0
1 0 1   1
1 1 0   0
1 1 1   0
the value 1 and the function has the value 0 when (x,y,z) = (1,1,1). Consequently,
xz is not an implicant of the function. Similarly, the product term xy' is not an impli-
cant since it has the value 1 when (x,y,z) = (1,0,0) but the function has the value 0.
Now that it has been established that y'z is an implicant of the function given in
Table 4.2, we can ask if y'z is a prime implicant of the function. To answer this, con-
sider the possible product terms having one less literal that are subsumed by y'z,
namely, the term y' and the term z. It is easily checked with the aid of Table 4.2 that
neither the term y' nor the term z implies the function. Hence, y'z is an implicant of
the function that subsumes no other implicant of the same function. By definition,
y'z is a prime implicant. By a similar analysis, x' is also a prime implicant of the
function given in Table 4.2.
The significance of prime implicants is given by the following theorem:
Theorem 4.1
When the cost, assigned by some criterion, for a minimal Boolean for-
mula is such that decreasing the number of literals in the disjunctive nor-
mal formula does not increase the cost of the formula, there is at least one
minimal disjunctive normal formula that corresponds to a sum of prime
implicants.
Proof
To justify the above theorem, assume that there is a minimal disjunctive
normal formula of a given Boolean function that is not the sum of only
prime implicants. In particular, let t1 be one such term. t1 must still be an
implicant of the function since, being a term describing the function, it
must imply the function. By definition of a prime implicant, there must be
some term t2 that is a prime implicant such that t1 subsumes t2. By defini-
tion of subsumes, t2 must have fewer literals. Since t2 also implies the func-
tion, it may be added to the original formula without changing the function
being described. But t1 subsumes t2. By the absorption law, t1 can be re-
moved, leaving an expression with the same number of terms but with
fewer literals. Since it is assumed that the cost of a formula does not in-
crease by decreasing the number of literals, the cost of the new expression
is no greater than the cost of the original expression. If this argument is ap-
plied to every term that was not originally a prime implicant, then an ex-
pression of only prime implicants and of minimal cost results.
As a consequence of the above theorem, the prime implicants are of interest for
establishing a minimal disjunctive Boolean formula. This formula, in turn, suggests
a minimal two-level realization with and-gates followed by a single or-gate. The set
of prime implicants of a function can be obtained by forming all possible product
terms involving the variables of the function, testing to see which terms imply the
function, and then, for those that do, checking to see if they do not subsume some
other product terms that also imply the function. Efficient algorithmic procedures
can be developed to carry out this seemingly complex process.
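One such procedure, inefficient but direct, is sketched below for the function of Table 4.2: every product term is enumerated, the terms that imply the function are kept, and any implicant that properly contains a smaller implicant is discarded. The encoding of terms as sets of literals is an implementation choice, not the text's.

```python
from itertools import combinations, product

# Sketch: brute-force prime implicants of a 3-variable function.
N = 3
ones = {(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,1)}   # 1-cells of Table 4.2 above

def covers(term, point):
    return all(point[i] == val for i, val in term)

def is_implicant(term):
    # every point covered by the term must be a 1-point of the function
    return all(p in ones for p in product([0, 1], repeat=N) if covers(term, p))

terms = [frozenset(zip(idx, vals))
         for r in range(1, N + 1)
         for idx in combinations(range(N), r)
         for vals in product([0, 1], repeat=r)]
implicants = [t for t in terms if is_implicant(t)]
primes = [t for t in implicants if not any(u < t for u in implicants)]

def show(term):
    return "".join("xyz"[i] + ("" if val else "'") for i, val in sorted(term))

print(sorted(show(t) for t in primes))   # ["x'", "y'z"]
```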
Theorem 4.2
For any cost criterion such that the cost of a formula does not increase
when a literal is removed, at least one minimal disjunctive normal formula
describing a function is an irredundant disjunctive normal formula.
Proof
If a minimal disjunctive normal formula is not an irredundant expres-
sion, then it must fail at least one of the two properties of an irredundant
expression. If any of the terms is not a prime implicant, then the term
can be replaced by a product term that is a prime implicant. In this way
the expression is written with fewer literals. If any term can be elimi-
nated from the expression, then the new expression without this term
also has fewer literals. In both cases the number of literals is decreased.
Since it is assumed that the cost of the expression does not increase by
decreasing the number of literals, the resulting expression is still mini-
mal. Furthermore, by definition, it is an irredundant disjunctive normal
formula.
an implicate of the function that subsumes no other implicate of the function with
fewer literals. Thus, a prime implicate is a sum term that is implied by the function
with the additional property that if any literal is removed from the term, then the re-
sulting sum term no longer is implied by the function.
The maxterms of a function are examples of its implicates. This follows from
the fact that any time a maxterm of a function has the value 0, the function must
also have the value 0. Therefore, by the definition of implies, the function must
imply its maxterms. Referring again to Table 4.2, it is readily observed that one of
the implicates of the function is the maxterm x' + y + z. In addition, the sum term
x' + z has the value 0 for the two 3-tuples (x,y,z) = (1,0,0) and (1,1,0). Since the
function given in Table 4.2 has the value 0 for these two 3-tuples, it immediately
follows that the sum term x' + z also is implied by the function and is therefore an
implicate of the function. Since x' + y + z subsumes x' + z, x' + y + z is not a prime
implicate. Finally, neither the term x' nor the term z is implied by the function. Thus,
x' + z must be a prime implicate of the function.
Using an argument similar to that in the previous section, prime implicates can
be used to obtain minimal conjunctive normal formulas. Formally,
Theorem 4.3
When the cost, assigned by some criterion, for a minimal Boolean formula
is such that decreasing the number of literals in the conjunctive normal
formula does not increase the cost of the formula, there is at least one min-
imal conjunctive normal formula that corresponds to a product of prime
implicates.
Theorem 4.4
For any cost criterion such that the cost of a formula does not increase
when a literal is removed, at least one minimal conjunctive normal formula
describing a function is an irredundant conjunctive normal formula.
It should be noted that prime implicates are the dual concept to that of prime
implicants and irredundant conjunctive normal formulas are the dual concept to ir-
redundant disjunctive normal formulas. Future discussions revolve around both
minimal disjunctive normal formulas and minimal conjunctive normal formulas.
Although it is beyond the scope of this book to prove formally, it can be shown that
the prime implicates of a complete Boolean function f are precisely the comple-
ments of the prime implicants of the function f. In addition, the irredundant con-
junctive normal formulas of the complete Boolean function f are exactly the com-
a coordinate system according to the axes labeling. Thus, the 2-tuple (x,y) = (0,1)
uniquely locates the cell in the first row, second column of Fig. 4.2b. The corre-
sponding functional values appear as entries in the cells. Note that two cells are
physically adjacent on the map if and only if their corresponding n-tuples differ in
exactly one element.
their respective 3-tuples differ in exactly one element, the labels along the top of the
map are 00, 01, 11, and 10.
To illustrate the mapping of a specific Boolean function, consider the truth
table of Fig. 4.4a. Since this is a three-variable function, the general map structure
of Fig. 4.3c is used. The completed Karnaugh map is shown in Fig. 4.4b.
A four-variable Karnaugh map has 2^4 = 16 cells in which each cell is adjacent
to exactly four other cells. This is achieved by having the map appear on the surface
of a torus. By making two cuts on the torus and then unrolling it, a two-dimensional
Figure 4.5 A four-variable Boolean function. (a) Truth table. (b) Karnaugh map.
representation of the map is obtained. This map is shown in Fig. 4.5. In this case it
is necessary to keep in mind that, from an interpretive point of view, the left and
right edges are connected and the top and bottom edges are connected. Under these
restrictions it should be noticed that each cell is adjacent to exactly four other cells
and that two cells are physically adjacent if and only if their respective 4-tuples dif-
fer in exactly one element.
A variation on Karnaugh map construction that is occasionally seen is shown in
Fig. 4.6. In this case, the axes are not labeled with the 0 and 1 elements, but rather a
bracket is used to indicate those rows and columns associated with a variable having
an assignment of 1. Thus, the map in Fig. 4.6a is analogous to the map in Fig. 4.3c
and the map in Fig. 4.6b is analogous to the map in Fig. 4.5b.
read from a map. For the map of Fig. 4.4b, each cell containing a 1 represents a
minterm of the function. Thus, directly from the map we can write
f(x,y,z) = x'y'z' + x'yz' + xy'z' + xy'z
It was also established in Chapter 3 that each row of a truth table is described
by a maxterm if 0’s of the n-tuples are used to denote uncomplemented variables
and 1’s of the n-tuples are used to denote complemented variables. Correspond-
ingly, each cell of a Karnaugh map can also be associated with a maxterm. The
maxterm canonical expression for a function is obtained by forming the product of
maxterms for those n-tuples in which the function has the value of 0. Thus, for each
cell of a Karnaugh map with a 0 entry, a maxterm can be written. For example, the
map of Fig. 4.4b corresponds to the maxterm canonical expression
f(x,y,z) = (x + y + z')(x + y' + z')(x' + y' + z)(x' + y' + z')
The reverse of this process permits a Karnaugh map to be formed from a maxterm
canonical expression. If the expression is
fWexeyizT = GE x Ey ZW a ee Pe)
“(Wt k + y+ Zw +a y + Zw +X yz)
“(Wit x yt ZW Pe Vr 200 ae ye 2)
then by using O’s for uncomplemented variables and 1’s for complemented vari-
ables in the maxterms and by entering a 0 on the map for each n-tuple describing a
maxterm and a | otherwise, the map of Fig. 4.7 results.
Finally, recall that a decimal representation for minterms and maxterms was in-
troduced in Chapter 3. In particular, the decimal equivalent of the n-tuple for each
row of a truth table is used to designate the minterm or maxterm associated with that
row. Thus, each cell of a map can be referenced by a decimal number. Figure 4.8
shows the decimal numbers which define each cell of a Karnaugh map. In this way,
for the Karnaugh map of Fig. 4.4b, the decimal representation of the canonical ex-
pression is written directly as
f(x,y,z) = Σm(0,2,4,5)
or f(x,y,z) = ΠM(1,3,6,7)
Figure 4.8 Karnaugh maps with cells designated by decimal numbers. (a) One-variable map. (b) Two-variable
map. (c) Three-variable map. (d) Four-variable map.
(c) (d)
Figure 4.9 Typical map subcubes for the elimination of one variable in
a product term. (a) wWxz. (b) xyz. (C) wxz. (d) xyZ.
With reference to Fig. 4.9a and the labels on the map's axes, the variable w has
the value of 0 for both 1-cells and the variable x has the value of 1 for both 1-cells.
Since these variables keep the same value for all cells in the subcube, they must ap-
pear in the product term as w'x. In addition, the subcube occurs in the two center
columns of the map. As indicated by the map labels in these two columns, the y
variable changes value while for both 1-cells the z variable has the value of 1. Thus,
the y variable is the one that is eliminated as a consequence of the cell adjacencies,
and the product term has the literal z as a consequence of the z variable having the
same value for both 1-cells. Combining the results, the subcube of Fig. 4.9a corre-
sponds to the product term w'xz. In a similar manner, the product terms correspond-
ing to the other subcubes in Fig. 4.9 are written.
Just as a subcube consisting of two 1-cells corresponds to a product term with
n − 1 variables, any subcube of four 1-cells represents a product term with two
variables less than the number of variables associated with the map. To illustrate
this, consider the four 1-cells of Fig. 4.10a. Algebraically these four cells corre-
Figure 4.10 Typical map subcubes for the elimination of two variables in a product term. (a) xy'. (b) yz. (c) wx.
(d) wz. (e) xz.
spond to the expression w'xy'z' + w'xy'z + wxy'z' + wxy'z. The following algebraic ma-
nipulations can now be performed:
w'xy'z' + w'xy'z + wxy'z' + wxy'z = w'xy'(z' + z) + wxy'(z' + z)
= w'xy' + wxy'
= (w' + w)xy'
= xy'
By inspecting the axes labels for these four cells, the product term is written di-
rectly. The subcube appears in the two center rows of the map, from which it is seen
that the variable w changes value (and hence is eliminated) while the variable x has
the value of 1. Thus, the literal x appears in the product term. Furthermore, the sub-
cube appears in the first two columns of this map, from which it is seen that y has
the value of 0 while z changes value (and hence is eliminated). This implies the
Figure 4.11 Typical map subcubes for the elimination of three variables
in a product term. (a) w. (b) Z. (c) xX. (d) Z.
product term has the literal y'. Combining the results, it is concluded that this sub-
cube is associated with the product term xy'.
As a final illustration, consider Fig. 4.10b. It is first noted that no row variables
have the same value for every 1-cell of the columnar subcube. Thus, neither the w
nor the x variable appears in the product term. Furthermore, since both the y and z
variables have the value of 1 for all 1-cells of the subcube, the resulting product
term is yz.
Summarizing, any rectangular grouping of 1-cells on an n-variable map having
dimensions 2^a × 2^b consists of 2^(a+b) 1-cells and represents a product term with
n − a − b variables, where a and b are nonnegative integers. The corresponding product
term has an uncomplemented variable if the variable has the value of 1 for every
1-cell associated with the subcube and a complemented variable if the variable has the
value of 0. The variables that are eliminated correspond to those that change values.
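This reading rule translates directly into code: the variables that change value across the subcube are dropped, and each remaining variable is written complemented or uncomplemented according to its constant value. A sketch for a four-variable map, assuming the subcube is given by its minterm numbers:

```python
# Sketch: read the product term corresponding to a subcube of 1-cells,
# given as a list of minterm numbers on a 4-variable map (w x y z).
VARS = "wxyz"

def product_term(minterms, n=4):
    bits = [format(m, "0{}b".format(n)) for m in minterms]
    term = ""
    for i, v in enumerate(VARS[:n]):
        values = {b[i] for b in bits}
        if values == {"1"}:
            term += v            # constant 1: uncomplemented literal
        elif values == {"0"}:
            term += v + "'"      # constant 0: complemented literal
        # a variable that changes value is eliminated
    return term

print(product_term([4, 5, 12, 13]))   # xy'  (the grouping discussed for Fig. 4.10a)
```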
The above discussion was concerned with the recognition of product terms on a
Karnaugh map and how they are read directly. A similar discussion applies to
Figure 4.12 Typical map subcubes describing sum terms. (a) w+ X + y. (b) x + y. (c) y.
such that every 1-cell, and no 0-cell, is included in at least one of these subcubes.
The corresponding product terms are all implicants of the function. Furthermore, by
summing the product terms associated with this set of subcubes, an algebraic ex-
pression is obtained that describes the function. One obvious case is if the individ-
ual 1-cells themselves are selected as 2^0 × 2^0 subcubes. Then, the resulting expres-
sion is the minterm canonical formula.
Although all the implicants of a function can be determined using a Karnaugh
map, it is the prime implicants that are of particular interest as evident by Theo-
rems 4.1 and 4.2 in Sec. 4.2. Recalling from Sec. 4.2, a prime implicant is an impli-
cant of a function that subsumes no smaller implicant that implies the same func-
tion. The question can now be asked: How is this related to the subcubes on the
Karnaugh map?
To answer this question, consider the map shown in Fig. 4.13. The 2° 2° sub-
cube labeled () corresponds to an implicant of the function; namely, the minterm
xyz. However, the cell associated with subcube @) can also be grouped with the cell
below it to form the 2' < 2° subcube @). The corresponding product term for this
subcube is yz and is also an implicant of the function. Note that subcube (A) is to-
tally contained within subcube (@). Furthermore, the term xyz subsumes the term yz
and, hence, xyz is not a prime implicant. As is seen by this illustration, as a subcube
gets larger, the corresponding product term gets smaller, i.e., has fewer literals. In
addition, if one subcube is totally contained within another subcube, then the literals
associated with the product term of the larger subcube are always a subset of the lit-
erals associated with the product term of the smaller subcube. Again consider the
map of Fig. 4.13. It can now be concluded that a smaller term than yz, which is sub-
sumed by yz, requires a subcube of 1-cells that totally contains subcube ②. How-
ever, since the next allowable larger size subcube of 1-cells must consist of four
1-cells,* it is seen that no such subcube is possible in Fig. 4.13. Hence, this leads to
*Recall that a 2^a × 2^b rectangular grouping must always consist of a power-of-two number of cells.
Furthermore, all references to subcubes on a Karnaugh map imply that they have the dimensions 2^a × 2^b.
*As indicated in Sec. 4.1, the number of gate inputs in a two-level realization of a normal formula is
given by the sum of the number of literals and the number of terms having more than 1 literal in the
expression and then subtracting 1 if the expression consists of only a single term.
cube ③; however, it is not totally contained in either of these subcubes. Hence, the
product term associated with subcube ③ does not subsume either of the other two
product terms. Finally, there are no subcubes of 1-cells that consist of 2^0 = 1 cell
and that are not contained in one of the previous three subcubes. Thus, there are
three prime implicants of this function. It should be noted that all the 1-cells are
contained in some subcube if just the subcubes ① and ② are considered. Conse-
quently, the function is describable by the expression f(x,y,z) = xy + xz. It can be
concluded from this example that not all the prime implicants are needed, in gen-
eral, to obtain a minimal disjunctive normal formula.
As another example of determining the prime implicants of a function from a
Karnaugh map, consider the function shown in Fig. 4.16. The largest subcubes of
1-cells forming a 2^a × 2^b = 2^(a+b) rectangle are subcubes ① and ②, each of which
consists of 2^2 = 4 1-cells. These subcubes represent the terms wz and wy, respec-
tively. Next it is necessary to find all subcubes of 1-cells that consist of 2^1 = 2
cells and that are not entirely contained in any other single subcube already
found. Only subcube ③, which represents the term xyz, satisfies this condition.
Finally, all remaining 1-cells (subcubes of 2^0 = 1 cell) not contained in some al-
ready found subcube correspond to prime implicants. In this case, subcube ④ es-
tablishes that the term wx yz is a prime implicant. For this function there are four
prime implicants, all of which are necessary for the minimal disjunctive normal
formula.
With practice it becomes relatively easy to recognize the prime implicants on a
Karnaugh map. More important still, however, is the fact that an optimum set of
prime implicants is fairly evident by inspection.
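The definition of a prime implicant can likewise be tested mechanically. The following Python sketch is illustrative only (the dictionary representation of a term, the helper names, and the use of the minterm list Σm(0,1,2,4,5,7,9,12) of Example 4.2 are assumptions); it checks that a candidate term is an implicant and that no literal can be dropped.

# A small sketch (not from the text): a product term is an implicant if every
# cell it covers is a 1-cell, and a prime implicant if deleting any literal no
# longer yields an implicant.  Terms are {variable: value} maps; primes in the
# comments stand in for complement bars.
from itertools import product

VARS = ("w", "x", "y", "z")

def cells_of(term):
    """All minterm numbers covered by a product term such as {'w': 0, 'y': 0}."""
    cells = set()
    for bits in product((0, 1), repeat=len(VARS)):
        assignment = dict(zip(VARS, bits))
        if all(assignment[v] == val for v, val in term.items()):
            cells.add(int("".join(str(b) for b in bits), 2))
    return cells

def is_implicant(term, minterms):
    return cells_of(term) <= minterms

def is_prime_implicant(term, minterms):
    if not is_implicant(term, minterms):
        return False
    # Dropping any one literal must enlarge the subcube beyond the 1-cells.
    return all(not is_implicant({v: val for v, val in term.items() if v != drop},
                                minterms)
               for drop in term)

f = {0, 1, 2, 4, 5, 7, 9, 12}                           # the function of Example 4.2
print(is_prime_implicant({"w": 0, "y": 0}, f))          # True: the quad {0, 1, 4, 5} (w'y')
print(is_prime_implicant({"w": 0, "x": 0, "y": 0}, f))  # False: w'x'y' subsumes w'y'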
Before continuing with the discussion of minimal expressions, a simple obser-
vation can be made. If a function is described by a Boolean formula not in canonical
form, then it is possible to draw the Karnaugh map without having to first obtain the
canonical formula or constructing its truth table. The map is obtained by first ma-
nipulating the expression into sum-of-products form and then placing 1’s in the ap-
propriate cells for each product term. For example, consider the function
implicant) in the collection. Thus, in Fig. 4.15, cells 0 and 7 are essential 1-cells and
xy and xz are essential prime implicants. Similarly, in Fig. 4.16, cells 1, 2, 6, 8, and
13 are essential 1-cells. All the prime implicants of Fig. 4.16 are essential prime im-
plicants. It should be noted in Fig. 4.16 that essential 1-cells 2 and 6 are both associ-
ated with the same prime implicant.
What is significant about essential prime implicants is that every essential
prime implicant of a function must appear in all the irredundant disjunctive normal
formulas of the function and, hence, in a minimal sum. That this is true follows
from the fact that each essential prime implicant has at least one 1-cell that is asso-
ciated with no other prime implicant. Certainly that particular 1-cell represents an n-
tuple for which the function is 1; and the corresponding essential prime implicant is
the only prime implicant that equals 1 for this n-tuple. Since an irredundant disjunc-
tive normal formula consists of a sum of prime implicants and the expression must
equal 1 for all n-tuples in which the function is 1, the essential prime implicants are
necessary in forming the irredundant expressions.
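Restated as a procedure, this argument is easy to mechanize. The sketch below is illustrative only (the prime implicants and their minterm covers were worked out by hand for the function of Example 4.2, and primes again stand in for complement bars); it flags as essential any prime implicant that covers a 1-cell that no other prime implicant covers.

# A minimal sketch (not from the text): a prime implicant is essential exactly
# when some 1-cell is covered by it and by no other prime implicant.
def essential_prime_implicants(prime_implicants):
    """prime_implicants: dict mapping a name to the set of minterms it covers."""
    essential = set()
    for name, cover in prime_implicants.items():
        others = set().union(*(c for n, c in prime_implicants.items() if n != name))
        if cover - others:            # a 1-cell covered only by this prime implicant
            essential.add(name)
    return essential

pis = {                               # hand-derived covers, assumed for illustration
    "w'y'":   {0, 1, 4, 5},
    "w'x'z'": {0, 2},
    "x'y'z":  {1, 9},
    "xy'z'":  {4, 12},
    "w'xz":   {5, 7},
}
print(sorted(essential_prime_implicants(pis)))   # every prime implicant except w'y'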
Using the Karnaugh map in reverse, each term is placed on the map as explained
previously. The resulting map is shown in Fig. 4.18a, where the indicated sub-
cubes correspond to the three product terms of the given function.* To obtain a
minimal sum, it is necessary to determine the essential prime implicants by first
*These subcubes are not necessarily the subcubes associated with a minimal sum.
detecting essential 1-cells. Consider cell 0. It is observed that all subcubes incorpo-
rating cell 0 are a subset of the single 2^0 × 2^2 subcube consisting of the first row of
cells. Thus, cell 0 is an essential 1-cell, indicated by an asterisk in Fig. 4.18b, and
is associated with the prime implicant wx. Alternatively, it should be noted that
cell 2 is also an essential 1-cell for the same term. Next, it is noted that cell 5 is an
essential 1-cell and is associated with the prime implicant wz. Finally, it is ob-
served that cell 11 is an essential 1-cell (as well as cell 15) and is associated with
the prime implicant yz. It is now seen in Fig. 4.18b that all the 1-cells are included
in some subcube. Thus, the minimal sum consists of the three essential prime im-
plicants; i.e.,
f(w,x,y,z) = wx + wz + yz
EXAMPLE 4.2
Consider the function
f(w,x,y,z) = Σm(0,1,2,4,5,7,9,12)
shown in Fig. 4.19. The essential 1-cells are indicated by asterisks, and the subcubes
associated with the essential prime implicants are also shown. Since all 1-cells are
covered, the minimal sum is
f(w,x,y,z) = wxz + wxz + xyz + xyz
Note that if the essential 1-cells are not grouped first, then it would be tempting to
group the four 1-cells in the upper left corner of the map. If this is done, then it
might erroneously be believed that the term w y, which is a prime implicant, should
be in the minimal sum.
EXAMPLE 4.3
Consider the map in Fig. 4.20. There are four prime implicants of this function.
These are indicated by the four subcubes in the figure. Two of the prime implicants
are essential since they include the essential 1-cells indicated by asterisks. The es-
sential prime implicants are yz and yz. After these two subcubes are formed, there is
only one 1-cell that must still be grouped (cell 6). This cell can be placed in either
the subcube representing the term xy or the subcube representing the term xz, both
of which are indicated as dashed subcubes. Since only one of the two dashed sub-
cubes is needed to complete the covering of all the 1-cells, there are two minimal
sums for this function:
f(x,y,z) = yz + yz + xy
and f(x,y,z) = yz + yz + xz
EXAMPLE 4.4
Consider the map in Fig. 4.21. First the essential prime implicants are determined.
Cells 0, 2, and 9 are the only essential 1-cells. Cells 0 and 2 belong to the essential
prime implicant wz; while cell 9 belongs to the essential prime implicant wyz. At
this point, three 1-cells still need to be grouped, namely, cells 7, 12, and 15. Since
none of these cells are essential 1-cells, the constraint of using as few subcubes as
possible and keeping the subcubes as large as possible is applied. This suggests that
cells 7 and 15 should be grouped together, which corresponds to the prime impli-
cant xyz. The remaining 1-cell (cell 12) can be grouped with either the cell above it
or the cell next to it as indicated by the dashed subcubes in the figure. Only one of
these subcubes is needed to complete the covering of all the 1-cells. Thus, there are
two minimal sums:
f(w,x,y,z) = wz + wyz + xyz + xyz
and f(w,x,y,z) = wz + wyz + xyz + wxy
EXAMPLE 4.5
Consider the map in Fig. 4.22. None of the 1-cells are essential 1-cells and hence
there are no essential prime implicants. Careful analysis reveals that there are six
prime implicants for this function. These correspond to the six subcubes shown col-
lectively in Fig. 4.22a and b. However, with the interest of using a minimum num-
ber of prime implicants, either of the groupings shown in Fig. 4.22a or b suggests a
minimal sum. Thus, both
and Tikva ye te
are minimal sums.
f(w,x,y,z) = Σm(1,3,4,5,6,7,11,14,15)
f(w,x,y,z) = wx + wz + xy + yz
Using the cost criterion of number of gate input terminals, the minimal sum has a
cost of 12. Now consider the 0-cells of this function and Fig. 4.23b. Cell 0, as well
as cells 2 and 10, is an essential 0-cell since all subcubes involving this cell are con-
tained within the 2^1 × 2^1 subcube labeled ①.* The corresponding essential prime
implicate is (x + z). Similarly, cell 13, as well as cells 9 and 12, is an essential
0-cell. Thus, the subcube labeled ② corresponds to the essential prime implicate
(w + y). Since all the 0-cells are now included in at least one subcube, the minimal
product for this function is
f(w,x,y,z) = (x + z)(w + y)
The cost of this expression is 6 using the cost criterion of number of gate input ter-
minals.
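Since this literal-plus-term count is used repeatedly, a few lines of Python make the bookkeeping explicit. This is only a sketch under the stated cost criterion (the term lists, with primes standing for complement bars, are assumptions matching the two expressions above).

# A small sketch (not from the text) of the gate-input-count cost: the number
# of literals plus the number of terms having more than one literal, less one
# if the expression consists of a single term.
def two_level_cost(terms):
    literals = sum(len(t) for t in terms)
    multi_literal_terms = sum(1 for t in terms if len(t) > 1)
    cost = literals + multi_literal_terms
    return cost - 1 if len(terms) == 1 else cost

minimal_sum     = [["w'", "x"], ["w'", "z"], ["x", "y"], ["y", "z"]]
minimal_product = [["x", "z"], ["w'", "y"]]
print(two_level_cost(minimal_sum), two_level_cost(minimal_product))   # 12 6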
When a minimal two-level gate network is to be realized, it is necessary to de-
termine both the minimal sum and the minimal product of the function. Although
both expressions are equivalent, in general one form has a lower cost than the other.
There is no way to determine which form has the lower cost until both the minimal
sum and minimal product are obtained. However, since the minimal sum and mini-
mal product can both be determined from the same Karnaugh map, it is not difficult
to obtain both expressions. For the function of Fig. 4.23, a minimal two-level gate
network would be realized from the minimal product because of its lower cost.
As a second example, consider again the map of Fig. 4.21. This map is redrawn
in Fig. 4.24. All of the essential 0-cells are indicated with asterisks. These three cells
are used to determine the three essential prime implicates. At this point there remain
two 0-cells still ungrouped, i.e., cells 3 and 11. In an effort to use as few subcubes as
possible, they are grouped together. Thus, for this function the minimal product is
f(w,x,y,z) = (w + y + z)(w + y + z)(w + x + z)(x + y + z)
*Recall that for a 4-variable map, the top and bottom edges as well as the left and right edges are
connected to form a torus.
In this case the minimal sum, having a cost of 15, has a lower cost than the minimal
product, having a cost of 16. It is interesting to note that for this example, there are
two minimal sums but there is only one minimal product.
There is a slight variation to the above procedure for obtaining minimal prod-
ucts that can be used instead. Recall that the complement of a function is obtained
by replacing all 0 functional values by 1’s and all 1 functional values by 0’s. Thus,
if the O-cells of a Karnaugh map are grouped and product terms are written for the
groupings (where, since product terms are being written, uncomplemented variables
correspond to 1 labels on the axes of the map and complemented variables corre-
spond to 0 labels on the axes), then a minimal sum for the complement function is
obtained. By applying DeMorgan’s law to the expression, and thereby complement-
ing it, the resulting product of sum terms corresponds to the minimal product of the
original function. Thus, for the map in Fig. 4.24 the minimal sum for the comple-
ment function, found by grouping the 0’s and writing product terms, is
*In this text, dashes are used in a map to signify don't-care functional values to be consistent with the
entries in the truth table. Frequently, however, d's, X's, or φ's are used in a Karnaugh map to denote
don't-care conditions.
The prime implicants of an incomplete Boolean function are the prime impli-
cants of the complete Boolean function obtained by regarding all the don’t-care
conditions as having functional values of 1. Similarly, the prime implicates of an
incomplete Boolean function are the prime implicates of the complete Boolean
function in which all the don’t-care conditions are regarded as having 0 functional
values. Accepting these conclusions, it is a simple matter to obtain minimal sums
and minimal products for incomplete Boolean functions from a Karnaugh map.
Figure 4.25 Incomplete Boolean function f(w,x,y,z) = Σm(0,1,3,7,8,12) + dc(5,10,13,14). (a) Truth
table. (b) Karnaugh map for obtaining minimal sums. (c) Karnaugh map for obtaining
minimal products.
complete Boolean functions, the essential prime implicants should first be obtained.
In the case of incomplete functions, however, only the actual 1-cells in the map are
candidates for essential 1-cells. The don’t-care cells are not candidates since the
don’t-care cells do not have to be included in at least one subcube.
As an example of obtaining a minimal sum for an incomplete Boolean func-
tion, consider the truth table in Fig. 4.25a. The corresponding Karnaugh map is
shown in Fig. 4.25b. Cell 3 (as well as cell 7) is an essential 1-cell. The subcube
with this essential 1-cell corresponds to the only essential prime implicant of this
function. It should be noted that the subcube for the essential prime implicant in-
cludes a don’t-care cell since don’t-care cells are regarded as containing 1’s for the
purpose of maximizing the size of the subcubes. It is next observed that cell 12 can
be placed in a subcube of four cells by including two don’t-care cells. Cell 12 is not
an essential 1-cell since grouping it with don’t-care cell 13 forms another prime im-
plicant for this function. At this point only cell 0 still needs to be grouped. Two
prime-implicant subcubes, shown dashed, are possible, each containing two cells.
Hence, there are two minimal sums. Notice that don't-care cell 13 is never grouped
since only the 1-cells must be covered. Grouping the 0-cells in the map of Fig. 4.25c
in a similar manner yields two minimal products:
f(w,x,y,z) = (y + z)(w + z)(w + x + y)
and f(w,x,y,z) = (y + z)(w + z)(w + x + z)
It should be noted that don’t-care cells 10 and 14 appear in both a subcube of
1-cells and a subcube of 0-cells. That is, their use when grouping the 1's did not
preclude their later use when grouping the 0’s. Using the gate input terminal
count as the cost criterion, all four of the minimal expressions for this function
have the same cost.
are also permissible on each layer of the five-variable map. In addition, if each layer
contains a 2^a × 2^b subcube such that they can be viewed as being directly above and
below each other, then the two subcubes collectively form a single subcube consist-
ing of 2^(a+b+1) cells. As was done previously, the literals of the corresponding term
are determined by noting which variables have the same values for all cells that
comprise the subcube.
f(v,w,x,y,z) = (w + y)(w + z) · · ·
In the minimal product, the sum term (w + y) is the result of grouping cells 8, 9, 12,
13, 24, 25, 28, and 29; while the sum term (w + z) is the result of grouping cells 8,
10, 12, 14, 24, 26, 28, and 30.
[Figure 4.29 (a) The six-variable Karnaugh map drawn as a reflected map, with cells numbered 0 through 63.]
grouping is a mirror image of the other about both the horizontal and vertical
mirror-image lines. Then, the four groupings collectively form a single subcube and
correspond to a single term.
As in the case of the five-variable map, an alternate structure is also possible. In
this case, 4 four-variable maps are assumed to be layered one upon the other as
shown in Fig. 4.29b. For each layer, the values of the variables u and v are assumed
to be fixed. In Fig. 4.29b, uv = 00 in the top layer, uv = 01 in the second layer,
uv = 11 in the third layer, and uv = 10 in the bottom layer. Since each cell must be
adjacent to six cells in a six-variable map, in addition to the four adjacencies within
each layer, the fifth and sixth adjacencies occur between adjacent layers where it is
assumed that the top layer, i.e., the uv = 00 layer, and the bottom layer, i.e., the
uv = 10 layer, of the overall structure are adjacent. For example, cell 21 on the sec-
ond layer is adjacent to cells 17, 20, 23, and 29 on that layer as well as cell 5 on the
first layer and cell 53 on the third layer. Similarly, cell 4 on the first layer is adjacent
to cells 0, 5, 6, and 12 on that layer as well as cell 20 on the second layer and cell 36
on the fourth layer. Subcubes occurring in corresponding positions on two adjacent
layers collectively form a single subcube. In addition, subcubes occurring in corre-
sponding positions on all four layers collectively form a single subcube.
[Figure 4.29 (Cont.) (b) The six-variable map drawn as four four-variable layers, one layer for each value of uv.]
[Figure 4.30 Subcubes on a reflective six-variable map and their associated product terms uvwxy, uvwyz, uwxy, and vwz.]
Figure 4.30 shows various subcubes on a reflective six-variable map and their
associated product terms. No attempt has been made to perform any type of mini-
mization on this map. The reader should pay particular attention to the subcube for
the vwz product term. This subcube corresponds to the situation in which part of the
grouping appears in each of the four quadrants. In particular, this subcube consists
of cells 25, 27, 29, 31, 57, 59, 61, and 63.
Interpreting five-variable and six-variable maps can be a challenge. Although
the map concept is extendable to more than six variables, it is evident that another
procedure is needed when dealing with functions having a large number of variables.
form the product term wxz, which is entered in the second column. Furthermore,
since w x yz and w xyz subsume the term w xz, the two minterms are checked to indi-
cate that they are not prime implicants. The comparison process is continued by
next considering minterms 1 and 4. Notice that even though minterm 1 has a check
mark, it is still used for further comparisons. However, minterms 1 and 4 do not
combine since they differ in more than one literal.* Next minterms 1 and 5 are com-
pared. They are combined to form a single term in accordance with the relationship
AB + ĀB = B. This results in the term w yz being entered in the second column and
a check mark being placed next to minterm 5. Since minterm 1 already has a check
mark, it is not necessary to place a second check mark next to it. The reader can eas-
ily verify that upon comparing minterm 1 with the remaining minterms in the first
column, no additional terms are formed. Next, minterm 3 is compared with all the
minterms in the first column. However, it is only necessary to apply the comparison
process to the remaining minterms below the one currently being studied, in this
case minterm 3, due to the commutative property of a Boolean algebra. As a result,
minterms 3 and 4 are compared and then minterms 3 and 5. In both cases they differ
in more than a single literal and the relationship AB + ĀB = B is not applicable.
Next, minterms 3 and 7 are used to generate the term wyz shown in the second col-
umn. Furthermore, minterm 7 is checked off, since it subsumes the new term. Con-
tinuing as above, after all comparisons of the minterms in the first column are com-
pleted, the terms of the second column result and the minterms successfully used in
the comparison process are those with check marks.
The above comparison process is now carried out on the second column of
Table 4.3. Starting with the first term in the second column, terms w xz and wxz are
the first pair of terms that satisfy the relationship AB + ĀB = B, where A corre-
sponds to x and B corresponds to wz. As a consequence, these two terms are used to
form the term wz shown in the third column. The two terms wxz and wxz are
checked to indicate that they are not prime implicants, since they both subsume the
generated term wz. Continuing the comparison process on all of the remaining pairs
of terms in the second column, the only other successful application of the relation-
*The check mark next to minterm 4 in Table 4.3 is the result of a future comparison.
ship AB + ĀB = B involves terms w yz and wyz. These two terms again generate the
term wz, and hence the duplicate is not entered in the third column. However, the
two terms w yz and wyz are checked in the second column to again indicate they are
not prime implicants.
The process is now continued by comparing all pairs of terms in the third col-
umn. Since only one term appears, no new terms are generated. Since a fourth col-
umn is not formed, the comparison process terminates. All the terms appearing in
Table 4.3 are implicants of the original function, while those that are not checked
correspond to its prime implicants.
Before stating the above process formally as an algorithm, some general obser-
vations are appropriate that reduce and simplify the mechanics of the procedure.
Any product term of an n-variable function can be represented without ambiguity
by 0's, 1's, and dashes if the ordering of the variables is specified, where 0 is used
to represent a complemented variable, 1 is used to represent an uncomplemented
variable, and a dash (-) is used to represent the absence of a variable. For example,
if wy is a term of the function f(v,w,x,y,z), then it is represented as -0-1- where the
ordering of the variables in the term is given by the arrangement of the variables in
the function notation. The first dash signifies that variable v does not appear in the
term, while the second and third dashes represent the absence of the x and z vari-
ables. The 0 represents the literal w, and the 1 represents the literal y. When repre-
senting minterms with this notation, there are no dashes, and this procedure simply
yields the binary representation of the minterm, as was discussed in the previous
chapter.
Now let the index of a term be defined as the number of 1's appearing in the
0-1-dash representation of the term. Any two minterms which satisfy the relationship
AB + ĀB = B where A is a single variable must have the same binary representa-
tion except for the one position corresponding to A. In this position one binary rep-
resentation has a 0 and the other a 1. Thus, the indices of the two terms differ by ex-
actly 1. Furthermore, when one term is formed from combining two minterms, the
literals that are the same in both minterms are the literals appearing in the resulting
term. A representation for this combined term has the same 0’s and 1’s as the origi-
nal minterms except for that position in which the minterms differed. This position
has a dash.
In the comparison technique introduced above for generating prime implicants,
all pairs of minterms are inspected to see if they combine by the relationship AB +
ĀB = B. If the minterms of the function are divided into sets such that the minterms
in each set have the same index, then it is only necessary to compare the minterms
in those sets whose indices differ by exactly 1. Two terms whose indices are the
same or differ by more than 1 can never result in their combining into a single term.
This enables a reduction in the number of comparisons that must be made.
It has been seen that two product terms of a function can combine into a single
term if they have the same variables and differ in exactly one literal. In terms of the
0-1-dash notation, this implies that the dashes in the representations for both terms
must appear in the same relative positions, and that in all but one of the remaining
positions the two representations must have the same 0’s and 1’s. For example, the
terms vwy and vwy of a function f(v,w,x,y,z) can combine to form the term vy. The
first term is represented by 10-0- and the second term by 11-0-. Since the
two 0-1-dash representations have their dashes in the same relative positions and
the representations differ only in the second position, they can combine to form
1--0-, which represents the term vy. The dash now appearing in the second posi-
tion denotes the elimination of the x variable as the result of the two terms being
combined.
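The combining test is compact enough to state directly in code. The following Python sketch is illustrative only (the helper name is an assumption); it applies the dash-alignment and single-difference test just described to two 0-1-dash strings.

# A brief sketch (not from the text): two terms combine only if their dashes
# line up and the remaining positions differ in exactly one place; that place
# becomes a dash in the result.
def combine(term_a, term_b):
    """Return the combined 0-1-dash string, or None if the terms do not combine."""
    if len(term_a) != len(term_b):
        return None
    diff_positions = [i for i, (a, b) in enumerate(zip(term_a, term_b)) if a != b]
    if len(diff_positions) != 1:
        return None
    i = diff_positions[0]
    if term_a[i] == "-" or term_b[i] == "-":    # dashes must be in the same positions
        return None
    return term_a[:i] + "-" + term_a[i + 1:]

print(combine("10-0-", "11-0-"))   # 1--0-, i.e., the term vy
print(combine("01-1", "-111"))     # None: the dashes do not align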
f(w,x,y,z) = wxyz + wxyz + wxyz + wxyz + wxyz + wxyz + wxyz + wxyz + wxyz
= Σm(0,5,6,7,9,10,13,14,15)
The process for obtaining the prime implicants using the above procedure is shown
in Table 4.4. To begin, all minterms with index zero are placed in the first column.
Only minterm 0 has index zero. This is indicated in Table 4.4 by the first entry in
the first column where minterm 0 is represented in both its decimal and binary
forms. (The decimal representation is included for easy reference and is not really
necessary in the actual process.) A line is drawn after this minterm. Next all
minterms of index one are listed. Since there are no minterms for this function hav-
ing index one, no entries are made for it in the first column. Again a line is drawn.
All minterms of index two are next listed. These are minterms 5, 6, 9, and 10. A line
is drawn after the last minterm of this set. In a similar manner, minterms 7, 13, and
14, having index three, are added to the list. Finally, minterm 15, having index four,
is added. This completes the first three steps of the algorithm. Next, the minterm of
index zero is compared with all minterms of index one to see if they can combine
by the rule AB + ĀB = B. Since there are no minterms of index one, minterm 0
cannot be used in comparisons with another minterm. Similarly, no minterms of
index one combine with minterms of index two since the set of minterms having
index one is empty. Next the minterms of index two are compared with those
of index three. Minterms 5, represented by 0101, and 7, represented by 0111, com-
bine since they differ in exactly one bit position to form the term wxz, which is rep-
resented by 01-1 in the second column. Check marks are then placed after
minterms 5 and 7. For convenience, the 01-1 in the second column is labeled with
the decimal numbers 5,7 to indicate that these are the minterms that combined to
form the term represented by 01-1. Next minterms 5 and 13 are compared. Notice
that a check mark does not disqualify a minterm from further comparisons. The
minterms 5 and 13 form the term represented by -101 in the second column. Since
minterm 5 already has a check mark, it is not checked again. However, a check
mark is placed after minterm 13. Finally, minterms 5 and 14 are compared. Since
they cannot be combined because they differ in three bit positions, no new entry is
made in the second column as a result of this comparison and minterm 14 is not
checked (at this time). Next minterms 6 and 7, 6 and 13, and 6 and 14 are compared.
These comparisons account for the entries 011- and -110 in the second column and
check marks being placed next to minterms 6 and 14 in the first column. The
process is repeated until minterms 10 and 14 are compared. A line is then drawn
under the partially completed list of the second column. Next the minterms in the
first column of index three are compared with those of index four. These compar-
isons yield three terms in the second column. At this point all minterms, except
minterm 0, in the first column have been checked off, and no additional compar-
isons are possible. This completes steps 4, 5, and 6 of the algorithm.
Using the second column as a new list, each term in a group is compared with
each term in its adjacent group. First, term 01-1 is compared with -111. Since their
dashes are in different positions, they cannot be combined. Next term 01-1 is com-
pared with 11-1. This comparison forms a new term -1-1 in the third column since
their 0-1-dash representations differ in exactly one bit position. The two terms
which combined, i.e., 01-1 and 11-1 in the second column, are then checked. Term
01-1 is next compared with term 111-. These terms cannot be combined since their
dashes do not align. Next terms -101 and -111 are compared. This comparison also
yields the term -1-1 in the third column. Since this term already is written once, it
is not entered again. However, terms -101 and -111 must be checked. After all
comparisons are made, the newly formed list, column 3, has only two terms, both of
which appear in the same group since they have the same index. Since no compar-
isons are possible with the terms of the third column, the process terminates. It is
seen that five terms have no check marks. These terms, labeled A through E, are the
prime implicants of the given function. Replacing the symbols 0 and 1 by their as-
Table 4.5 Obtaining the prime implicants of the incomplete Boolean function
f(v,w,x,y,z) = Σm(4,5,9,11,12,14,15,27,30) + dc(1,17,25,26,31)
f(v,w,x,y,z) = Σm(1,4,5,9,11,12,14,15,17,25,26,27,30,31)
This is illustrated in Table 4.5. The nine prime implicants are labeled A through I,
i.e., x yz, wxz, wyz, wxy, vwy, vw yz, vwxy, vxy z, and vwxz.
Although the procedure just presented is very tedious for hand computation, the
intent of the Quine-McCluskey method is to provide an algorithmic procedure for
obtaining prime implicants that can be programmed for a digital computer. This ob-
jective has been achieved.
*It should be noted that this is consistent with the binary notation for maxterms introduced in the
previous chapter.
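Since the method is intended for machine computation, a compact Python sketch of the whole generation step may be helpful. It is an illustration only, not the text's program (all names are assumptions): minterms are grouped and compared, combined terms form the next column, and the unchecked terms are reported as prime implicants in 0-1-dash notation.

# A sketch (not from the text) of prime-implicant generation in the spirit of
# the Quine-McCluskey method.
def quine_mccluskey_prime_implicants(minterms, n_vars):
    current = {format(m, "0{}b".format(n_vars)) for m in minterms}
    prime_implicants = set()
    while current:
        checked = set()
        next_terms = set()
        terms = sorted(current, key=lambda t: t.count("1"))    # group by index
        for i, a in enumerate(terms):
            for b in terms[i + 1:]:
                # terms combine if they differ in exactly one non-dash position
                diffs = [k for k in range(n_vars) if a[k] != b[k]]
                if len(diffs) == 1 and "-" not in (a[diffs[0]], b[diffs[0]]):
                    next_terms.add(a[:diffs[0]] + "-" + a[diffs[0] + 1:])
                    checked.update((a, b))
        prime_implicants |= current - checked       # unchecked terms are prime
        current = next_terms
    return prime_implicants

# The function of Table 4.4: f(w,x,y,z) = Σm(0,5,6,7,9,10,13,14,15)
print(sorted(quine_mccluskey_prime_implicants(
    {0, 5, 6, 7, 9, 10, 13, 14, 15}, 4)))           # five prime implicants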
4.9 PRIME-IMPLICANT/PRIME-IMPLICATE
TABLES AND IRREDUNDANT
EXPRESSIONS
Having obtained the set of prime implicants of a function, the next step is to deter-
mine the minimal sums. Since minimal sums consist of prime implicants, it now be-
comes a matter of determining which prime implicants should be used. To do this, a
subset of the prime implicants is selected, subject to some cost criterion, such that
each minterm of the function subsumes at least one prime implicant in the selected
subset. This is analogous to the selection of a set of subcubes on a Karnaugh map
such that each 1-cell is included in at least one subcube.
A convenient way to show the relationship between the minterms and prime
implicants of a function is the prime-implicant table. The minterms are placed along
the abscissa of the table and the prime implicants are placed along the ordinate. At
the intersection of a row and column, an X is entered if the minterm of that column
subsumes the prime implicant of that row. For an incompletely specified Boolean
function, only those minterms which describe rows of the truth table where the
function equals 1 are placed along the abscissa. That is, the minterms describing the
don’t-care conditions are not included along the abscissa. Ignoring the don’t-care
terms is valid since the minimal-cost selection of prime implicants does not require
that don’t-care terms subsume at least one prime implicant.
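Because the prime-implicant table is simply a covering relation, it is easy to hold in a program. The sketch below is illustrative only (the covers are those worked out for the function of Table 4.4, written here with 0-1-dash names, and the helper name is an assumption).

# A minimal sketch (not from the text): each prime implicant is mapped to the
# set of minterms that subsume it; don't-care minterms are simply left off.
def covers(table, selection, required):
    covered = set().union(*(table[p] for p in selection))
    return required <= covered

PRIME_IMPLICANT_TABLE = {
    "-1-1": {5, 7, 13, 15},   # xz
    "-11-": {6, 7, 14, 15},   # xy
    "1-01": {9, 13},
    "1-10": {10, 14},
    "0000": {0},
}
required = {0, 5, 6, 7, 9, 10, 13, 14, 15}
print(covers(PRIME_IMPLICANT_TABLE, set(PRIME_IMPLICANT_TABLE), required))   # True
print(covers(PRIME_IMPLICANT_TABLE, {"-1-1", "-11-", "0000"}, required))     # False: 9 and 10 uncovered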
The prime implicant table for the example of Table 4.4 is given in Table 4.6;
while that for the example of Table 4.5 is given in Table 4.7. In Tables 4.6 and 4.7
the single letter designating the prime implicant is also shown. For example, prime
implicant xz in Table 4.6 is also referred to as prime implicant A. The decimal num-
bers appearing next to the prime implicants of Tables 4.4 and 4.5 are the minterms
that subsume that prime implicant, since these were the minterms that combined to
form the prime implicant. These numbers simplify the process of determining the
entries in the prime-implicant table. For any particular prime implicant, these num-
bers indicate which columns of the prime-implicant table should contain an X. For
example, prime implicant A in Table 4.4 is the result of combining minterms 5, 7,
13, and 15. Thus, X’s appear in precisely these columns of the first row of the
prime-implicant table given in Table 4.6. In Table 4.7 it should be noted that those
conditions associated with don’t-cares do not appear along the abscissa. Corre-
         m0   m5   m6   m7   m9   m10  m13  m14  m15
A: xz          ×         ×              ×         ×
B: xy               ×    ×                   ×    ×
C: wyz                        ×         ×
D: wyz                             ×         ×
E: wxyz  ×
Xyz
WXZ
WyZ x x x
wxy x x x
vwy x x
VWyz os
vwxy x x
VWXYZ x x
er
eeeVWxz x x
spondingly, even though prime implicant A in Table 4.5 results from combining
minterms 1, 9, 17, and 25, an X only appears in the m9 column of Table 4.7.
In Sec. 4.2 an irredundant disjunctive normal formula was defined as a sum of
prime implicants such that no prime implicant from the sum can be removed with-
out changing the function being described. In addition, it was shown that minimal
sums are irredundant disjunctive normal formulas. In terms of the prime-implicant
table, any set of rows such that each column of the table has an X in at least one row
of the set and subject to the constraint that the removal of any row from the set re-
sults in at least one column no longer having an X in at least one row of the reduced
set defines an irredundant expression. The irredundant expression is formed by sum-
ming those prime implicants that are in the set that satisfy the above requirement.
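This definition can be checked mechanically as well. The following Python sketch is illustrative only (names and data are assumptions, reusing the covers of the Table 4.4 example); it tests whether a selected set of rows covers every column and whether the removal of any row destroys the cover.

# A short sketch (not from the text) of an irredundant-cover test.
def is_cover(rows, table, columns):
    return columns <= set().union(*(table[r] for r in rows)) if rows else not columns

def is_irredundant_cover(rows, table, columns):
    return is_cover(rows, table, columns) and all(
        not is_cover(rows - {r}, table, columns) for r in rows)

table = {"-1-1": {5, 7, 13, 15}, "-11-": {6, 7, 14, 15},
         "1-01": {9, 13}, "1-10": {10, 14}, "0000": {0}}
cols = {0, 5, 6, 7, 9, 10, 13, 14, 15}
print(is_irredundant_cover(set(table), table, cols))   # True: no row can be dropped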
This last expression, which was obtained from the expression above it by dropping
the subsuming terms indicated by slashes, implies that there are 10 irredundant dis-
junctive normal formulas for the Boolean function. The first term suggests that one
irredundant expression is the sum of prime implicants B, D, G, and I. Referring to
Table 4.5 or Table 4.7, this expression is
Notice that f2(v,w,x,y,z) is irredundant even though it has more terms than
f1(v,w,x,y,z), since if any term from f2(v,w,x,y,z) is deleted, then it no longer de-
scribes the function of Table 4.5. Thus, not all irredundant disjunctive normal
*Slashed terms are deleted as a consequence of the absorption property of a Boolean algebra, i.e.,
Theorem 3.6.
4.10 PRIME-IMPLICANT/PRIME-IMPLICATE
TABLE REDUCTIONS
As was seen in the previous section, once a prime-implicant table for a Boolean
function is obtained, the irredundant disjunctive normal formulas and, in particular,
the minimal sums are readily determined. However, the amount of work necessary
associated with the essential rows. Although in this particular example all the prime
implicants are essential, in general, this is not the case, even when the minimal sum
is unique.*
*Example 4.2 in Sec. 4.5 illustrates a function having a unique minimal sum in which not all its prime
implicants are essential.
table. Applying this result, the reduced prime-implicant table of Table 4.9 can be
used to determine the irredundant covers of Table 4.8.
The concepts of dominance and equality can also be applied to rows of a
prime-implicant table. Two rows of a prime-implicant table are said to be equal if
they have X’s in exactly the same columns. A row r; of a prime-implicant table is
said to dominate another row r; of the same table if row r; has X’s in all the columns
in which row r; has X’s and if, in addition, row r; has at least one X in a column in
which row r; does not have an X. Referring again to Table 4.8, it is seen that row A
dominates both rows B and C, while row D dominates row E.
The row dominance and equality concepts can also be used for prime-implicant
table reductions. Assume that there is some irredundant cover of a prime-implicant
table and that this cover contains a row r_j that is dominated by or that is equal to a row
r_i. Those columns covered by row r_j are also covered by row r_i, by definition of a dom-
inating or equal row. Hence, an irredundant cover still results if row r_i is used instead
of r_j. Furthermore, if the cost of row r_i is not greater than the cost of row r_j, then the ex-
pression obtained by summing the prime implicants associated with the cover having
row r_i does not have a higher cost than the disjunctive normal formula obtained if the
prime implicant associated with r_j is used instead. Since a minimal cover is an irredun-
dant cover, a row r_j of a prime-implicant table can be removed and at least one mini-
mal cover of the original table, from which a minimal sum is written, is obtainable
from the reduced table if (1) there is another row r_i of the same table that is equal to
row r_j and does not have a higher cost than r_j or (2) there is another row r_i of the same
table which dominates row r_j and that does not have a higher cost than row r_j.
It is important to note that the removal of an equal or dominated row is only ap-
plicable if it is not necessary to obtain all irredundant disjunctive normal formulas
or all minimal sums of a function. This is usually not a serious restriction, since in
most cases only one minimal sum is needed. In addition, the column and row reduc-
tion procedures can be applied any number of times to a prime-implicant table and
in any order.
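These reduction rules translate directly into code. The sketch below is illustrative only (the table contents are invented to mirror the dominance relations described for Table 4.8, and equal row costs are assumed); it marks every row that is equal to, or dominated by, another row of no greater cost.

# A compact sketch (not from the text) of the row-reduction rule.
def reducible_rows(table, cost):
    """table: row name -> set of covered columns; cost: row name -> cost."""
    drop = set()
    for rj, cover_j in table.items():
        for ri, cover_i in table.items():
            if ri == rj or ri in drop:
                continue
            if cover_j <= cover_i and cost[ri] <= cost[rj]:
                drop.add(rj)        # r_i equals or dominates r_j at no greater cost
                break
    return drop

# Assumed data in which row A dominates rows B and C, and row D dominates row E:
table = {"A": {1, 2, 3}, "B": {1, 2}, "C": {2, 3}, "D": {4, 5}, "E": {4}}
cost = {r: 3 for r in table}
print(sorted(reducible_rows(table, cost)))   # ['B', 'C', 'E']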
To illustrate the application of the above column and row reduction concepts to
obtain minimal sums, again consider Table 4.7. The table is redrawn in Table 4.10a
with a cost column according to the second criterion for minimality. In particular,
since each prime implicant has more than a single literal, the cost assigned to each
row is the number of literals in its prime implicant plus one. Next, it is noted that
the column for m,, dominates the column for m,,. Since dominating columns can be
removed, Table 4.10b results. In the reduced table, row B dominates row A, row D
dominates row E, and row G dominates row F. Since in each case the dominated
row has a cost equal to its dominating row, the dominated rows are deleted. This re-
sults in Table 4.10c. Even though column and row reductions have been used, a
minimal sum can still be obtained by finding a minimal cover of Table 4.10c. It is
now noted that three columns of Table 4.10c have a single X. These X’s are cir-
cled. The rows in which the X’s appear must be selected for the minimal cover to
guarantee that the corresponding columns are covered. Rows B, D, and G have dou-
ble asterisks placed next to them to indicate that they are selected. The selection of
rows B, D, and G results in minterms m4, m5, m9, m11, m14, m15, and m30 being cov-
ered. Hence, these rows and columns are deleted from the table for the purpose of
determining which additional rows must be selected to obtain a minimal cover. The
resulting table is shown in Table 4.10d. Notice that row C does not appear in the
table since it does not cover any of the remaining minterms. Upon selection of ei-
ther row H or row I, a minimal cover is obtained. Therefore, one minimal cover
consists of rows B, D, G, and H, and another minimal cover consists of rows B, D,
G, and I. These two covers were also obtained in the previous section when Pet-
rick’s method was applied to Table 4.7. However, as was mentioned previously, not
all minimal covers are necessarily obtained when row reduction is applied, but at
least one minimal cover is guaranteed. For this example, Petrick’s method did yield
a third minimal cover. Finally, once a minimal cover is established, the minimal
sum expression can be written.
In general, it is not possible to obtain a minimal cover solely by applying the
table reduction procedures. A prime-implicant table in which each column contains
at least two X’s and in which no column or row can be deleted as a result of domi-
nance or equality is said to be a cyclic table. When this condition occurs, one can
revert to Petrick’s method for completing the determination of a minimal cover of
the table. In this way, columns and rows are deleted until a cyclic table results.
Once the reduced prime-implicant table becomes cyclic, Petrick’s method is ap-
plied. The rows selected by Petrick’s method plus any rows selected previously dur-
ing the prime-implicant table reduction procedures form a minimal cover of the
original table.
Table 4.10 Prime-implicant table reduction applied to Table 4.7. (a) Table 4.7 with a
cost column. (b) After deleting dominating column. (c) After deleting
dominated rows. (d) After selecting rows B, D, and G
A x 4
B Xx x Xx 4
G x x x 4
D x x x 4
E x x 4
F x 5
G || Xx x 5
H x x 5
if x x 5)
(a)
E
le x
Gaim ><
lal \ xX x
I x x
(b)
CG x x 4
kD) x x ®&) 4
XG 4 ®) 5
lal x x 5
I x Xx 5
(c)
Mm, | Cost
(d)
The next step is to compare the minterms in adjacent groups. In general, two
minterms represented by decimal numbers can combine to form a single product
term if and only if their decimal difference is a power of 2 and the smaller decimal
number represents the minterm with index i and the larger decimal number repre-
sents a minterm with index i + 1 where 7 = 0,1,2,.... Thus, two minterms m, and
m, of index i andi + 1, respectively, can combine if b — a is positive and a power
of 2. The combined term is written as a,b(c), where c is the power-of-2 difference.
Again referring to Table 4.11, since the second group of minterms, i.e., those
having index 1, is an empty set, no comparisons are possible between the first and
second groups and the second and third groups. However, applying the above rule
to the third and fourth groups of the first column generates the first group in the sec-
ond column. For example, minterms 5 and 7 combine, since 7 — 5 = 2, to form the
second-column entry 5,7(2). On the other hand, minterms 6 and 13 cannot combine
since the difference 13 − 6 = 7 is not a power of 2; whereas minterms m7 and m9
cannot combine since the difference 7 − 9 is negative. Similarly, the fourth and
fifth groups of the first column yield the second group in the second column. As in
Sec. 4.8, terms that combine are checked. It should be noted that the power of 2 ap-
pearing in parentheses is the weight of the variable, under a binary representation,
which is eliminated.
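The decimal combining rule is easily expressed in code. The following Python sketch is illustrative only (the function names are assumptions); it reproduces the three comparisons just discussed.

# A small sketch (not from the text): minterms a and b (a < b) combine only if
# b - a is a power of 2 and the index of b is exactly one more than that of a.
def index(m):
    return bin(m).count("1")

def decimal_combine(a, b):
    """Return the combined-term notation 'a,b(c)' or None if the minterms do not combine."""
    diff = b - a
    if diff > 0 and (diff & (diff - 1)) == 0 and index(b) == index(a) + 1:
        return "{},{}({})".format(a, b, diff)
    return None

print(decimal_combine(5, 7))    # 5,7(2)
print(decimal_combine(6, 13))   # None: 13 - 6 = 7 is not a power of 2
print(decimal_combine(7, 9))    # None: the index of 9 is not one more than that of 7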
To continue the comparison process, special attention must be given to the
numbers in parentheses since the combining of terms is only possible when they
have the same variables. Let a1,a2,···,ap(c1,c2,···,cq) denote a term, referred to as
term a, which is the result of combining minterms ma1, ma2, ···, map. That is, a1, a2,
···, ap are the decimal numbers of the minterms that combined, and c1, c2, ···, cq
are the power-of-2 differences that represent the variables eliminated. Further-
more, let the decimal numbers a1, a2, ···, ap be in increasing numerical order;
similarly for the decimal numbers c1, c2, ···, cq. In an analogous manner, let
  w    x    y    z      ←  variables
  8    4    2    1      ←  weights of the binary representation
       -                ←  dash for the position of weight 4
  1              1      ←  1's for the positions that sum to the smallest minterm
                            number (i.e., 9)
            0           ←  0's for all remaining positions
  1    -    0    1      ←  0-1-dash representation
(a)
  w    x    y    z      ←  variables
  8    4    2    1      ←  weights of the binary representation
  -         -           ←  dashes for the positions of weights 2 and 8
       1         1      ←  1's for the positions that sum to the smallest minterm
                            number (i.e., 5)
available for 0's. Hence, no 0's appear in the 0-1-dash representation. Since -1-1 is
the 0-1-dash representation of the prime implicant, its algebraic form is xz. The
reader can easily verify that if all the entries of Table 4.11 are transformed into
0-1-dash representation (including the checked terms), then Table 4.4 results.
The decimal process introduced in this section yields all the prime implicants
of a Boolean function. Once the prime implicants are obtained, the prime-implicant
table is drawn, and a minimal sum is found by the techniques of Secs. 4.9 and 4.10.
Although the above discussion pertained to a decimal method for obtaining
prime implicants, by starting with a set of decimal maxterms the procedure pro-
duces the prime implicates of a function.
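The conversion back to 0-1-dash form can also be written out directly. The sketch below is illustrative only (the function name is an assumption) and reproduces the two conversions demonstrated above.

# A sketch (not from the text): the parenthesized numbers give the weights of
# the eliminated variables (dashes), and the smallest minterm number supplies
# the 1's of the remaining positions.
def decimal_to_dash(minterms, eliminated_weights, n_vars):
    smallest = min(minterms)
    rep = []
    for position in range(n_vars):
        weight = 1 << (n_vars - 1 - position)
        if weight in eliminated_weights:
            rep.append("-")
        elif smallest & weight:
            rep.append("1")
        else:
            rep.append("0")
    return "".join(rep)

print(decimal_to_dash({9, 13}, {4}, 4))            # 1-01
print(decimal_to_dash({5, 7, 13, 15}, {2, 8}, 4))  # -1-1, i.e., the term xz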
f1(x,y,z) = xy + yz
f2(x,y,z) = yz + xy
The corresponding realization is shown in Fig. 4.32b, which has a total of 6 gates
and 12 gate inputs. However, since the two expressions share a common term, i.e.,
yz, a more economical realization is shown in Fig. 4.32c having only 5 gates and 10
gate inputs.
Unfortunately, the multiple-output simplification problem is normally more
difficult than simply sharing common terms in the independently obtained minimal
expressions. For example, consider the pair of functions
f1(x,y,z) = Σm(1,3,5)
f2(x,y,z) = Σm(3,6,7)
0 0 0 | 0
0 0 | | 0
0 | 0 0 0
0 | l | 1
| 0 0 0 0
I 0 | 0 0
| | 0 0
| | | l |
Figure 4.32 Realization of Table 4.13. (a) Karnaugh maps for minimal sums.
(b) Realization of individual minimal sums. (c) Realization based
on shared term.
Figure 4.33a shows the Karnaugh maps for the independently obtained minimal
sums that suggest the realization of Fig. 4.33b. This realization uses 6 gates with 12
gate inputs. On the other hand, if the 1-cells of the maps are grouped as shown in
Fig. 4.33c to obtain the expressions
f1(x,y,z) = yz + xyz
f2(x,y,z) = xyz + xy
then the corresponding realization of Fig. 4.33d results, which uses 5 gates and 11
gate inputs. Certainly, this is a lower-cost realization if cost is measured by either
the number of gates or the number of gate input terminals. Furthermore, it was
achieved without every term in the expressions being prime implicants of the indi-
vidual functions. In particular, the term xyz is a prime implicant of neither f1 nor f2.
However, since the two functions have a term in common, it should be suspected
that a relationship exists between the common term and the product function f1 · f2,
that is, the function obtained when the two functions are and’ed together. In this
particular case, the product function has the single prime implicant xyz.
Figure 4.33 Realization of the pair of functions f1(x,y,z) = Σm(1,3,5) and f2(x,y,z) = Σm(3,6,7). (a) Karnaugh
maps for minimal sums of individual functions. (b) Realization of independently obtained minimal
sums. (c) Karnaugh maps for alternate groupings. (d) Realization based on alternate map
groupings.
*In the case of multiple-output networks, the number of gates is closely related to the number of distinct
terms appearing in the set of output expressions.
i ee oy UA alcoe eee
Ley poe sk
Thus, a multiple-output prime implicant is a prime implicant of any of the m single
functions f1, f2, . . . , fm, as well as a prime implicant of any of the possible product func-
tions, i.e., a prime implicant of any two of the functions fi · fj, any three of the functions
fi · fj · fk, up to and including a prime implicant of all m functions f1 · f2 · · · fm. For the
special case of a single function, only the first part of the definition applies.
The significance of multiple-output prime implicants is given by the following
generalization to Theorem 4.1.
Theorem 4.5
Let the cost, assigned by some criterion, for a multiple-output minimal
sum be such that decreasing the number of literals in the set of disjunctive
normal formulas does not increase its cost. Then there is at least one set of
formulas for the multiple-output minimal sum that consists only of sums of
multiple-output prime implicants such that all the terms in the expression
for fi are prime implicants of fi or of a product function involving fi.
implicants from the seven maps and then use the multiple-output prime-implicant
table, which is described later in this section, to perform the selection process. The
approach that is taken here is to extend the more formal Quine-McCluskey proce-
dure since it is intended for computer-aided logic design.
When considering a set of Boolean functions, it is possible that one or more
functions in the set are incomplete. For the purpose of determining the multiple-
output prime implicants, all don’t-cares are assigned a functional value of 1, as was
done with single functions earlier in this chapter. In this way, the incomplete functions
are transformed into complete functions. The prime implicants of these newly ob-
tained complete functions are also the prime implicants of the incomplete functions.
x    y    z    f1    f2
0    0    0    1     0
0    0    1    1     1
0    1    0    1     1
0    1    1    0     1
1    0    0    0     0
1    0    1    1     -
1    1    0    0     0
1    1    1    1     1
plies the f2 function but not the f1 function. The entire tagged product term is there-
fore written as xyz-f2, 011-f2, or 3-f2. In a similar manner, the sixth row of Table
4.14 is written as xyzf1f2, 101f1f2, or 5f1f2. Even though the functional value of f2 for
the 3-tuple (x,y,z) = (1,0,1) is unspecified, a 1 is assigned to the functional value for
the purpose of determining the multiple-output prime implicants. The complete list
of tagged product terms for Table 4.14 is shown in Table 4.15 in all three forms. It
should be noted that no tagged product terms are written for those rows in which
both functional values are 0 since the kernels for these rows imply no functions.
of newly generated terms. The comparison process is then repeated on all the gener-
ated tagged product terms until all comparisons have been made and no new tagged
product terms are generated, at which time the unchecked terms are the tagged
multiple-output prime implicants.
It should be noted in the above procedure that there are only two modifications to
the Quine-McCluskey method when dealing with tagged product terms. First, there is
a tag which must be considered. When combining terms, the tag of the new term con-
sists of only those fi's that are common to the two terms being combined. Second,
when two terms are combined, the two generating terms are not necessarily checked.
A generating term is checked only if its tag is identical to the tag of the generated term.
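A few lines of Python make these two modifications concrete. This is a sketch only (the tuple representation and the function name are assumptions); it combines two tagged product terms in the manner just described and reports which of the generating terms would be checked.

# A sketch (not from the text): kernels combine as in the Quine-McCluskey
# method, the tag of the generated term is the intersection of the two tags,
# no term is generated when that intersection is empty, and a generating term
# is checked only when its tag equals the generated tag.
def combine_tagged(term_a, term_b):
    """Terms are (kernel, tag) pairs, e.g. ('001', frozenset({'f1', 'f2'}))."""
    kernel_a, tag_a = term_a
    kernel_b, tag_b = term_b
    tag = tag_a & tag_b
    if not tag:
        return None, False, False
    diffs = [i for i, (a, b) in enumerate(zip(kernel_a, kernel_b)) if a != b]
    if len(diffs) != 1 or "-" in (kernel_a[diffs[0]], kernel_b[diffs[0]]):
        return None, False, False
    kernel = kernel_a[:diffs[0]] + "-" + kernel_a[diffs[0] + 1:]
    return (kernel, tag), tag_a == tag, tag_b == tag

# 000f1- and 001f1f2 generate 00-f1-; only the first generating term is checked.
a = ("000", frozenset({"f1"}))
b = ("001", frozenset({"f1", "f2"}))
print(combine_tagged(a, b))   # (('00-', frozenset({'f1'})), True, False)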
To illustrate the above procedure, consider the binary representations of the
tagged product terms shown in Table 4.155. These terms are grouped according to
the index of the kernels. This results in the first column of Table 4.16a. Terms
whose index differs by one are now compared. For example, consider the terms
000f1- and 001f1f2. By the Quine-McCluskey method, the kernels are combined
since they differ in exactly 1 bit position. Furthermore, since f1 appears in both tags,
these two terms generate the term 00-f1- shown in the second column of Table
4.16a. At this time, however, only the term 000f1- is checked since its kernel sub-
Table 4.16 Obtaining the tagged multiple-output prime implicants for the functions of
Table 4.14. (a) Using 0-1-dash notation. (b) Using decimal notation
sumes the kernel of the generated term and its tag is the same as the tag of the gen-
erated term. The term 001f1f2 is checked at a later time when it and the term 101f1f2
are used to generate the term -01f1f2. After all comparisons of terms whose index
differs by one in the first column of Table 4.16a are completed, the comparison
process is applied to the second column to construct the third column. It should be
noticed in the second column that the two terms 0-0f1- and 0-1-f2 do not generate a
new term, even though the kernels can be combined by the Quine-McCluskey
method, since there is no fi symbol common to both of these terms. The seven
tagged product terms in Table 4.16a with a letter after them are the tagged multiple-
output prime implicants, and their kernels are simply the multiple-output prime im-
plicants. A tagged multiple-output prime implicant having more than one function
symbol in its tag corresponds to a multiple-output prime implicant for the product
function indicated by the tag.
The above procedure can also be carried out using the decimal representation
of the tagged product terms. In this case, the decimal Quine-McCluskey method, as
explained in Sec. 4.11, is applied to the kernels, while the tags are determined as
with the 0-1-dash notation. The procedure is illustrated in Table 4.16b.
sections, one for f1 and the other for f2. The minterms associated with each function
are listed along the abscissa. As in Sec. 4.9, any minterms associated with don't-
care conditions are not listed. Also in Table 4.17, the ordinate is partitioned accord-
ing to whether the multiple-output prime implicant implies f1, f2, or f1 · f2. Those
multiple-output prime implicants associated only with f1 or f2 have X's just in their
respective sections of the table; the multiple-output prime implicants associated
with f1 · f2 have X's in both sections.
Upon manipulating it into its sum-of-products form using the distributive law and
dropping subsuming terms, the p-expression becomes
fxGy,Z) = z+ xyz
while the term C1D1D2E2F1F2 suggests the expressions
f1(x,y,z) = xz + yz + xz
f2(x,y,z) = yz + xy + xz
If a multiple-output minimal sum is to be based on the fewest number of gate
input terminals as suggested by the second cost criterion introduced in the previ-
ous section, then it is necessary to evaluate each of the product terms in Eq. (4.3)
using this criterion. The number of gate input terminals is calculated as follows:
Let f1, f2, . . . , fm be the set of normal Boolean expressions describing a multiple-
output combinational network and let t1, t2, . . . , tp be the set of all distinct terms
appearing in the m output expressions. Now let αi equal the number of terms in fi
unless there is only a single term, in which case let αi equal 0. Also, let βj equal
the number of literals in the term tj unless the term consists of a single literal, in
which case let βj equal 0. The number of gate input terminals in the realization of
the multiple-output combinational network is given by the numerical quantity
Σαi + Σβj. Under the assumption of the second cost criterion, only one
fC = ay 4 xe aye
folGy.Z) = 2+ xyz
BemmniGy nex Xx 3
Care| OX x 3
A: Zz x x x ]
Bs Xx) » x 3
Dy Vz Xx x x 3,4
Bs: XZ x ®) x 3,4
G: xyz x x 4,5
(a)
Bae y x x 3
Cia si x x 2
A: z x x x |
E: xy x x 3
DE yz x x 3,4
Ai de xz, x l
Ge XYZ x x 4,5
CRS) = ak ar PS
frY,2) SS 6x76
(b)
BK xy
CG 2
*2 A: Z |
E: xy 3
ibe yz 3,4
G: xyz 4,5
Vie ye XE > ot
TCR) = an OO
(c)
(Cont.)
B:
Gs
ine
Ds
G:
(d)
Fiyi2) Ske ye
HACER) = ar oS
(e)
counted. Two costs are associated with prime implicant yz. The first cost corre-
sponds to the situation in which the term is used for f1 or f2 but not both; the second
cost pertains to the situation in which it is used in the realization for both functions.
It is now noted in Table 4.18a that the fifth column has a single X. Thus, the
multiple-output prime implicant xz is essential for the function f1. The fourth and
fifth columns can be then removed since the selected prime implicant xz covers
minterms m5 and m7 of f1. However, the m7 column in the f2 section of the table can-
not be removed since xz is not essential for f2. At this point, the reduced version of
Table 4.18a appears as shown in Table 4.18b. The symbol *1 next to row F indi-
cates that prime implicant xz is used in the expression for f1. The partial multiple-
output minimal sum thus far established is given beneath the table. Furthermore, the
cost for prime implicant xz is changed to 1 to indicate that if the term is also to be
used for f2, then only the additional single input terminal of the output or-gate for f2
is needed.
Table 4.18b is now searched for dominated rows and dominating columns. It is
seen that row A dominates row F. Furthermore, since the cost associated with row A
is not greater than the cost associated with row F, it follows that row F can be
deleted. Once this is done, Table 4.18c results. It is now observed that the multiple-
output prime implicant-z must appear in the expression forf, since this is the only
term that covers m, of f,. Again the partial expressions for the multiple-output mini-
mal sum are given beneath the table. Upon selecting prime implicant z, those
columns in which X’s appear in row A are deleted. This results in Table 4.18d.
The cost associated with each multiple-output prime implicant is recalculated
and included in Table 4.18d. The only change from Table 4.18c is that prime impli-
cant yz now can be used only for f,; hence, its cost is simply 3. At this time row D can
be deleted since it is dominated by row B and since the cost associated with row B is
not greater than the cost associated with row D. However, row E cannot be deleted
even though it is dominated by row G since the cost associated with row G is greater
than the cost associated with row E. The reduced table appears as Table 4.18e.
It is now seen that the multiple-output prime implicant xy is needed in the expression for f1. This results in the partial expressions given beneath the table. After the columns with X's in row B are deleted, Table 4.18f is obtained. In this table it is not permissible to delete the dominated rows C and E since their associated costs are lower than the cost associated with their dominating row G. Thus, this table is cyclic since no further reductions are possible. Formally, Petrick's method can be applied at this point to determine the remaining covers. However, by observation it is seen that cost is minimized by letting the multiple-output prime implicant xyz appear in the expressions for both f1 and f2, with a cost of 5, rather than having xz in the expression for f1 and xy in the expression for f2, with a total cost of 6. This results in the multiple-output minimal sum

f1(x,y,z) = xz + xy + xyz
f2(x,y,z) = z + xyz
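As an aside, the second cost criterion is easy to evaluate mechanically. The short Python sketch below (an illustration added here, not part of the original text) counts gate input terminals for a multiple-output realization in which each output is a list of product terms and each term is a tuple of literal strings; the example functions and the literal spelling (an apostrophe for a complement) are assumptions of the sketch.

    # Illustrative computation of the gate-input-terminal cost of a
    # multiple-output sum-of-products realization.  Terms shared by several
    # outputs are realized by a single and-gate but feed every or-gate using them.

    def gate_input_terminals(outputs):
        # a_i: or-gate inputs for output i (0 when the output is a single term)
        a = [len(terms) if len(terms) > 1 else 0 for terms in outputs]
        # beta_j: and-gate inputs for each distinct term (0 for a single literal)
        distinct = {tuple(term) for terms in outputs for term in terms}
        beta = [len(term) if len(term) > 1 else 0 for term in distinct]
        return sum(a) + sum(beta)

    # Hypothetical example: f1 = xz + x'y + xyz and f2 = z + xyz share the term xyz.
    f1 = [("x", "z"), ("x'", "y"), ("x", "y", "z")]
    f2 = [("z",), ("x", "y", "z")]
    print(gate_input_terminals([f1, f2]))   # 3 + 2 + 2 + 2 + 3 + 0 = 12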
A useful application for variable-entered maps arises in problems that have in-
frequently appearing variables. In such a situation, it is convenient to have the func-
tions of the infrequently appearing variables be the entries within a map, allowing a
high-order Boolean function to be represented by a low-order map.
f(x,y,z) = x̄z̄(f0·ȳ + f2·y) + x̄z(f1·ȳ + f3·y) + xz̄(f4·ȳ + f6·y) + xz(f5·ȳ + f7·y)
This expression suggests the compressed Karnaugh map shown in Fig. 4.34d. In
this case, x and z are the map variables and y is the map-entered variable. The third
possible factored form of Eq. (4.4) is
f(x,y,z) = ȳz̄(f0·x̄ + f4·x) + ȳz(f1·x̄ + f5·x) + yz̄(f2·x̄ + f6·x) + yz(f3·x̄ + f7·x)
Figure 4.34 Map compressions of a three-variable function. (a) A generic three-variable truth table. (b) Conventional three-variable Karnaugh map. (c) Compressed Karnaugh map of order 2 with x and y as the map variables and z as the map-entered variable. (d) Compressed Karnaugh map of order 2 with x and z as the map variables and y as the map-entered variable. (e) Compressed Karnaugh map of order 2 with y and z as the map variables and x as the map-entered variable. (f) Compressed Karnaugh map of order 1 with x as the map variable and y and z as the map-entered variables.
[Figure 4.35 appears here: a three-variable truth table and the corresponding variable-entered map.]
restricted to only 0's and 1's. Table 4.20 tabulates the four possible value assignments to f_i and f_j, the evaluation of f_i·v̄ + f_j·v, and the corresponding entries for a variable-entered map. Later in this section the effects of don't-cares are considered.
Figure 4.35 illustrates a three-variable truth table and the corresponding variable-entered map. From the truth table we can write
Figure 4.36 An example of a variable-entered map with infrequently appearing variables.
In this expression, the variables A and B appear infrequently, while the variables x, y, and z appear in each term. By using x, y, and z as map variables and A and B as map-entered variables, the variable-entered map of Fig. 4.36 is easily constructed. The entries are simply the coefficients of the terms x*y*z*, where the asterisks indicate the various combinations of the complemented and uncomplemented forms of the variables x, y, and z. In this case, a five-variable function is represented by a map of order 3.
Figure 4.37 Variable-entered map grouping techniques. (a) Grouping cells with the same literal. (b) Grouping a 1-cell with both the z literal and the z̄ literal. (c) Grouping a 1-cell with the z literal.
literal is then and-ed with the z literal occurring within the subcube, i.e., the map
entry. In general, normal Karnaugh map techniques are applied to form subcubes of
cells that contain the same literal. The described product term is obtained by and-
ing the map variables that do not change values with the literal used to form the
subcube.
As indicated in Table 4.20, four entries are possible in variable-entered maps for complete Boolean functions, i.e., a literal, its complement, 0, and 1. Literals and their complements are grouped separately as just described. Furthermore, 0's are never grouped when forming minimal sums. Now consider how 1-cells are handled.
The first map in Fig. 4.37b shows a situation in which z, z̄, and 1 appear as cell entries. The cells containing the literals z and z̄ correspond to the product terms x̄yz and xȳz̄. Now consider the 1-cell. From the laws of Boolean algebra, the constant 1 can be written as z + z̄. This equivalent form of 1 is shown as an entry in the second map of Fig. 4.37b. In this case, the 1-cell is regarded as the expression xyz + xyz̄. Combining these results, the maps of Fig. 4.37b correspond to the expression x̄yz + xȳz̄ + xyz + xyz̄. This expression can be factored as yz(x̄ + x) + xz̄(ȳ + y) = yz + xz̄, which is a minimal sum. As indicated in Fig. 4.37b, the same result is achieved by grouping the z-cell with the z portion of the (z + z̄)-cell to form the term yz and the z̄-cell with the z̄ portion of the (z + z̄)-cell to form the term xz̄. With both the z and z̄ portions being used, the 1-cell is said to be completely covered. In general, 1-cells can be used when forming subcubes involving literals. Furthermore, if a 1-cell appears in a subcube for a literal and in another subcube for the complement of the same literal, then no further consideration of the 1-cell is needed when obtaining a minimal sum.
The final case that needs to be discussed is shown in the first map of Fig. 4.37c. Here there is a 1-cell and a z-cell, but there are no z̄-cells. Again the 1-cell is regarded as a (z + z̄)-cell in the second map of Fig. 4.37c. This map corresponds to the expression x̄yz + xyz + xyz̄. The first two terms in the expression can be combined to form the term yz. This is analogous to forming the vertical subcube shown in the second map of Fig. 4.37c. But what about the xyz̄ term in the above expression? It is noted that the (z + z̄)-cell in the second map of Fig. 4.37c is not completely covered since the z̄ portion is not used. To complete the covering, the xyz̄ portion of the (z + z̄)-cell is grouped with the xyz portion of the same cell, which results in the term xy. In general, the two-step procedure for reading a variable-entered map having a single map-entered variable v is summarized as follows:
Step 1. Consider each map entry having the literals v and v̄. Form an optimal collection of subcubes involving the literal v using the cells containing 1's as don't-care cells and the cells containing the literal v̄ as 0-cells. Next form an optimal collection of subcubes involving the literal v̄ using the cells containing 1's as don't-care cells and the cells containing the literal v as 0-cells. As in the case of regular Karnaugh maps, by an optimal collection it is meant that the size of the subcubes should be maximized and the number of subcubes should be minimized.
Step 2. Having grouped the cells containing the literals v and v̄, an optimal collection of subcubes involving the 1-cells not completely covered in Step 1, i.e., 1-cells that were not used for both a subcube involving the v literal and a subcube involving the v̄ literal in Step 1, is next determined. One approach for doing this is to let all cells containing the literals v and v̄ become 0-cells and all 1-cells that were completely covered in Step 1 become don't-care cells. Another way of handling the not completely covered 1-cells is to use v-cells or v̄-cells from Step 1 that ensure that the 1-cells now become completely covered.*
Figure 4.38 illustrates how the above procedure is applied to obtain a minimal sum from a variable-entered map. The compressed map is shown in Fig. 4.38a. The first step is to consider each of the distinct literals z and z̄, in turn, and form optimal subcube collections for each case. This step is shown in Fig. 4.38b. In the first map, in which the z̄-cell is replaced by 0 since it cannot be used in z-subcubes, the z-cell is grouped with the 1-cell to its right since all 1-cells are regarded as don't-cares in this step. The resulting term is wxz. At this point all cells containing z entries appear in at least one subcube. Next, the cells containing z̄ entries are considered. This is shown in the second map of Fig. 4.38b, where the z-cell is replaced by 0. Again 1-cells are used as don't-cares for the purpose of maximizing the size of the subcubes. The indicated subcube corresponds to the term yz. This completes Step 1 of
the map-reading procedure. Step 2 involves those 1-cells that were not completely covered in Step 1. Note that the 1-cell at location (w,x,y) = (0,0,1) in Fig. 4.38b was used for the grouping of both the z literal and the z̄ literal. Hence, this 1-cell is regarded as a don't-care cell in Step 2. All the remaining 1-cells must next be grouped optimally since they were not completely covered. This is shown in Fig. 4.38c. Here the 1-cells are placed into a single subcube, yielding the term xy. Collecting the three terms corresponding to the three subcubes that were formed, the resulting minimal sum is

f(w,x,y,z) = wxz + yz + xy
For illustrative purposes, the above example was done using three separate maps.
However, the entire process can readily be carried out on the variable-entered map.
Some care must be taken in applying the above two-step process if a minimal sum is to be obtained. To see this, consider the variable-entered map shown in Fig. 4.39, where all the subcubes are shown on a single map. Starting with the z entry, the only possible subcube is with the 1-cell to its right, since z̄-cells are
regarded as 0-cells, to form the term wyz. Now consider the z̄ entries. Since the z̄-cell in the lower left corner can only be grouped with the cell above it, the two cells that comprise the first column of the map yield the term xyz. With a z̄ entry remaining ungrouped, another subcube is necessary. Two equal-sized subcubes are possible in this case: the subcube shown that corresponds to the term xyz and the subcube consisting of the z̄ entry with the 1-cell to its right that corresponds to the term wxz. Although both subcubes appear to be equally good since they consist of the same number of cells, it should be noted that the first possibility uses a 1-cell that was previously grouped with the z entry. Anticipating Step 2 of the process, if the xyz subcube is selected rather than the wxz subcube, then the 1-cell corresponding to (w,x,y) = (1,1,1) becomes a don't-care cell in Step 2, since it is completely covered, and only the one remaining 1-cell needs to be grouped at that time. If the alternate possibility is elected, then two 1-cells must be grouped in Step 2. This results in an extra term at that time. Thus, the xyz subcube must be selected. Finally, according to Step 2, the not completely covered 1-cell is grouped. This results in the term wxy. The minimal sum is
Figure 4.40 Obtaining a minimal sum from a map having single-variable map entries. (a) Variable-entered
map. (b) Step 1. (c) Step 2.
necessary to either form a subcube for the not completely covered 1-cell or ensure that the 1-cell becomes completely covered by using it with some of the z-cells. In the first case, the 1-cell must be grouped alone, resulting in the term wxy. However, as illustrated in Fig. 4.40c, the 1-cell can be grouped with a collection of z-cells. This results in the term xz. If this is done, then the 1-cell becomes completely covered since it was previously grouped when forming the wyz term. Because the term xz has one less literal than the term wxy, the xz subcube should be selected. The minimal sum is

f(w,x,y,z) = wz + yz + wyz + xz
*Recall that when reading sum terms from a Karnaugh map, 0’s on the map axes denote
uncomplemented variables and 1’s denote complemented variables.
should be noted that a 0-cell was used in one of the subcubes as a don’t-care cell.
The z literal is covered by a single subcube that again uses the 0-cell as a don’t-care
cell. This results in the sum term x + y + z. Finally, only one 0-cell is not com-
pletely covered. Hence, an additional subcube is needed, corresponding to the sum
term w + x + y. The resulting minimal product is
Table 4.21 Single-variable map entries for incompletely specified Boolean functions

 f_i   f_j   f_i·v̄ + f_j·v                  Map entry
  0     0    0·v̄ + 0·v = 0 + 0 = 0           0
  0     1    0·v̄ + 1·v = 0 + v = v           v
  0     –    0·v̄ + –·v = 0 + –·v = –·v       v,0
  1     0    1·v̄ + 0·v = v̄ + 0 = v̄           v̄
  1     1    1·v̄ + 1·v = v̄ + v = 1           1
  1     –    1·v̄ + –·v = v̄ + –·v             v̄,1
  –     0    –·v̄ + 0·v = –·v̄ + 0 = –·v̄       v̄,0
  –     1    –·v̄ + 1·v = –·v̄ + v             v,1
  –     –    –·v̄ + –·v = –                   –
entry v,1 is used to signify that the map cell can be a v-cell, i.e., a cell having the entry v, or a 1-cell. In the following discussion, the first part of a double entry is referred to as the literal part and the second part of a double entry is referred to as the constant part.
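For readers who prefer a procedural statement of Table 4.21, the following Python sketch (an illustration, not part of the text) produces the map entry of a cell from the two truth-table values f_i (for v = 0) and f_j (for v = 1), that is, from f_i·v̄ + f_j·v; the strings "v" and "v'" and the dash for a don't-care are simply a convenient encoding.

    # Illustrative restatement of Table 4.21: the entry of a variable-entered map
    # cell is f_i*v' + f_j*v, with don't-cares giving rise to double entries.

    def map_entry(fi, fj):
        table = {
            ("0", "0"): "0",
            ("0", "1"): "v",
            ("0", "-"): "v,0",     # may be used as a v-cell or as a 0-cell
            ("1", "0"): "v'",
            ("1", "1"): "1",
            ("1", "-"): "v',1",    # literal part v', constant part 1
            ("-", "0"): "v',0",
            ("-", "1"): "v,1",
            ("-", "-"): "-",
        }
        return table[(fi, fj)]

    # Example: a minterm pair (1, don't-care) gives the double entry v',1.
    print(map_entry("1", "-"))   # v',1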
The process of reading a variable-entered map for incompletely specified Boolean functions is more complex since the double-entry cells in the map provide flexibility. Again, reading the map is done as a two-step process. For obtaining minimal sums, all cells containing the v and v̄ literals alone are grouped separately in the first step, as was done previously. Cells containing 1's and –'s alone are used as don't-cares. In addition, the cells with double entries must also be considered. A double-entry cell having a 0 constant part can be used as a don't-care for subcubes involving its literal part during Step 1. These cells, regardless of how they are used in Step 1, become 0-cells in Step 2. On the other hand, any double-entry cell having a 1 constant part can be used as a don't-care in Step 1 regardless of the literal part. However, how it is used in Step 1 determines how it must be used in Step 2. To illustrate this, consider a cell with the double entry v,1. This cell can be used optionally for subcubes involving the v literal in Step 1, in which case it becomes a don't-care in Step 2. Alternately, it can be used as a 1-cell in Step 1. Since 1-cells are normally regarded as don't-care cells in Step 1, the v,1-cell can then also be used to group v̄ literals in Step 1. If this option is used, it must be either completely covered in Step 1, in which case it becomes a don't-care in Step 2, or, if not completely covered, it must be considered a 1-cell in Step 2, as was done previously when reading variable-entered maps of completely specified functions.
The two-step process for obtaining minimal sums for incompletely specified
Boolean functions from a variable-entered map with a single map-entered variable
is summarized as follows:
Step 1. Form an optimal collection of subcubes for all entries that consist of only a single literal, i.e., v and v̄, using the –'s, 1's, and double entries having a 1 constant part as don't-cares. In addition, double entries having a 0 constant part can be used as don't-cares for subcubes that agree with the literal part of the double entry. The subcubes must be rectangular and have dimensions 2^a × 2^b and should be minimized in number and maximized in size.
Step 2. Form a Step 2 map as follows:
a. Replace the single literal entries, i.e., v and v̄, by 0.
b. Retain the single 0 and – entries.
c. Replace each single 1 entry by a – if it was completely covered in Step 1; otherwise, retain the single 1 entry.
d. Replace the double entries having a 0 constant part, i.e., v,0 and v̄,0, by 0.
e. Replace each double entry having a 1 constant part by a – if the cell was used in Step 1 to form at least one subcube agreeing with the literal part; otherwise, replace the double entry having a 1 constant part by a 1. (It should be noted that the second case corresponds to the cell not being covered at all or only being used in subcubes involving the complement of the literal part of the double entry.)
The resulting Step 2 map has only 0, 1, and – entries. An optimal collection of subcubes for the 1-cells should be determined using the cells containing –'s as don't-care cells.
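The Step 2 construction above is purely a cell-by-cell substitution, so it can be captured in a few lines. The Python sketch below (an illustration, not from the text) applies rules a through e to a single entry; the flag used_as_required means, for a single 1 entry, that the cell was completely covered in Step 1 and, for a double entry with a 1 constant part, that it was used in at least one subcube agreeing with its literal part.

    # Illustrative implementation of the Step 2 map construction (rules a-e).
    # Entries are encoded as "v", "v'", "0", "1", "-", "v,0", "v',0", "v,1", "v',1".

    def step2_entry(entry, used_as_required):
        if entry in ("v", "v'"):          # rule a: single literals become 0
            return "0"
        if entry in ("0", "-"):           # rule b: retain single 0 and - entries
            return entry
        if entry == "1":                  # rule c: completely covered 1 becomes -
            return "-" if used_as_required else "1"
        if entry in ("v,0", "v',0"):      # rule d: 0 constant part becomes 0
            return "0"
        if entry in ("v,1", "v',1"):      # rule e: 1 constant part becomes - or 1
            return "-" if used_as_required else "1"
        raise ValueError(entry)

    # Example: a v',1-cell never used in a v'-subcube must be treated as a 1-cell.
    print(step2_entry("v',1", used_as_required=False))   # 1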
[Figure 4.42 maps appear here; one double-entry cell in the Step 1 map is marked as being used as a 1-cell.]
Figure 4.42 Obtaining a minimal sum for the incompletely specified Boolean function f(w,x,y,z) = Σm(3,5,6,7,8,9,10) + dc(4,11,12,14,15) using a variable-entered map. (a) Truth table. (b) Variable-entered map. (c) Step 1 map and subcubes. (d) Step 2 map and subcubes.
Since cells having a 1 constant part can be used as don't-cares when grouping z-cells, the single subcube shown in the figure is formed. This results in the term yz. With no other cells having single-literal entries remaining ungrouped, Step 1 is completed. The Step 2 map is shown in Fig. 4.42d. It should be noted that Step 2e requires the z̄,1-cell to be replaced by a 1 rather than by a – since this cell was previously used only in a z-subcube and not also in a z̄-subcube, and the z,1-cell to be replaced by a 1 since it was not used as a z-cell. The 1-cells are now grouped on the Step 2 map, resulting in the terms w̄x and wx̄. The minimal sum is thus given as

f(w,x,y,z) = yz + w̄x + wx̄
As a second example, consider the incomplete function f(w,x,y,z) = Σm(0,4,5,6,13,14,15) + dc(2,7,8,9). The partitioned truth table under the assumption that z is the map-entered variable is shown in Fig. 4.43a, and the corresponding variable-entered map is shown in Fig. 4.43b.

Figure 4.43 Obtaining a minimal sum for the incompletely specified Boolean function f(w,x,y,z) = Σm(0,4,5,6,13,14,15) + dc(2,7,8,9) using a variable-entered map. (a) Truth table. (b) Step 1 map and subcubes. (c) Step 2 map and subcubes.

The z̄-cell is covered by grouping the first row of the map. This is permissible since the z̄,0-cell, the z̄,1-cell, and the 1-cell can all be regarded as don't-cares when grouping a z̄-cell. This gives the term w̄z̄. The z-cell is next put into a 2 × 2 subcube. Here the 1-cells and the z̄,1-cell are
used as don’t-cares. The term associated with this subcube is xz. At this point the
Step 2 map is formed as shown in Fig. 4.43c. Since the z,1-cell in Fig. 4.43b was
used in a z-subcube, the corresponding cell is a don’t-care in Fig. 4.43c. In addi-
tion, the 1-cell in the upper right corner of Fig. 4.43b was completely covered dur-
ing Step 1. Hence, it also becomes a don’t-care cell in the Step 2 map. However,
since the other |-cell in Fig. 4.43b was not completely covered, it remains a 1-cell
in Fig. 4.43c. The single |-cell in Fig. 4.43c is now grouped with a don’t-care cell,
resulting in the term xy. The minimal sum is
flw.x,y,Z) = wz t+ xz + xy
By duality, the above two-step procedure for obtaining minimal sums of incompletely specified Boolean functions can be restated for minimal products. To restate the procedure, all occurrences of "1" should be replaced by "0" and all occurrences of "0" replaced by "1" in the algorithm for determining minimal sums.
As an illustration of obtaining a minimal product, again consider the partitioned truth table given in Fig. 4.43a from which the variable-entered map of Fig. 4.44a is constructed. Step 1 requires grouping the z-cell and the z̄-cell individually using –'s, 0's, double entries containing 0's, and double entries having a 1 constant part with an agreeing literal part as don't-cares. This results in the two subcubes shown in Fig. 4.44a and the corresponding sum terms x + z̄ and w̄ + y + z. The Step 2 map is then constructed as shown in Fig. 4.44b using the dual of the construction procedure previously stated for Step 2 minimal-sum maps, i.e., upon interchanging all occurrences of 1's and 0's. On the Step 2 map, the 0's are optimally grouped using dashes as don't-cares. In this case, there is only one 0 entry. Since it can be grouped in two ways, one minimal product is

f(w,x,y,z) = (x + z̄)(w̄ + y + z)(x + ȳ)

and another is

f(w,x,y,z) = (x + z̄)(w̄ + y + z)(w̄ + x)
[Figure 4.44 (a) Step 1 map and subcubes. (b) Step 2 map and subcubes.]
Figure 4.45 Maps having entries involving more than one variable. (a) Variable-entered map. (b) Grouping the y literal. (c) Grouping the z literal. (d) Grouping the not completely covered 1-cell.
described by y + z, which is not functionally equal to 1. Any 1-cells that are not completely covered in Step 1 become 1-cells in a Step 2 map. Thus, the Step 2 map of Fig. 4.45d must be constructed for this example and a minimal covering obtained, which is wx. This results in the minimal sum

f(w,x,y,z) = wy + xz + wx

Although separate maps were drawn in this example to illustrate the various subcubes, the procedure can be carried out on a single map.
Another example of a variable-entered map with two-variable map entries is shown in Fig. 4.46a. In Fig. 4.46b the Step 1 subcubes, i.e., subcubes for each literal, are shown. The 1-cell is viewed as a (y + ȳ)-cell when grouping the y and ȳ literals, and as a (z + z̄)-cell when grouping the z literal. The covered portion of this 1-cell can be written as y + ȳ + z. Since y + ȳ + z is functionally equal to 1, i.e., y + ȳ + z = 1 + z = 1, this cell is completely covered and no Step 2 map is required. The minimal sum is
Figure 4.47 Maps having sum terms as entries. (a) Variable-entered map. (b) Grouping the y literal. (c) Grouping the z literal. (d) Grouping the not completely covered 1-cell.
Figure 4.48 Maps having product terms as entries. (a) Variable-entered map. (b) Grouping the yz term. (c) Grouping the y literal. (d) Grouping the not completely covered 1-cell.
previous case. In Step 1 each literal in the map is considered, in turn, by setting all the other literals to 0. The optimal subcubes are formed using 1-cells as don't-cares. Again Step 2 is used to group all 1-cells that are not completely covered in Step 1. In Fig. 4.47a it is noted that two distinct literals appear within the cells, i.e., y and z. Setting the z literal to 0 yields the map of Fig. 4.47b. The subcube involving the y literal results in the term wy. Next the y literal in the map of Fig. 4.47a is set to 0. Figure 4.47c illustrates the resulting map where the 1-cell is rewritten as a (z + z̄)-cell for emphasis. A subcube involving the z literal is then formed that results in the term xz. Since all the literals of the original map are grouped, it is next necessary to group all 1-cells that are not completely covered. The Step 2 map is formed by replacing all literals by 0's, completely covered 1-cells by –'s, and not completely covered 1-cells by 1's. For this example, the map shown in Fig. 4.47d results. The grouping of the 1-cell corresponds to the term wx. Thus, the minimal sum is given by

f(w,x,y,z) = wy + xz + wx

Again individual maps were drawn to illustrate the process. However, normally all subcubes can be formed on a single map.
The third case to be considered involves cells containing product terms, i.e., the and-ing of single literals. An example of this is shown in Fig. 4.48a. Each distinct product term, in turn, is grouped as an entity while setting the literals that comprise the product term to 1 and those not contained within the product term to 0 in all the remaining cells that contain a different product term.* For Fig. 4.48a the product term is yz. Thus, all cells having the y and z literals alone are replaced by 1's. This results in the map shown in Fig. 4.48b. Using 1-cells as don't-cares, a yz-subcube is formed, resulting in the term wyz. Since the map of Fig. 4.48a also has a cell with a single-literal product term, this cell must also be grouped. This is done by setting all literals, except y, equal to 0. As a consequence, the cell containing the yz term is replaced by 0. The resulting map is shown in Fig. 4.48c, where the 1-cell is rewritten as a (y + ȳ)-cell for emphasis. Grouping the y literal results in the term xy. Finally, Step 2 is performed to cover all 1-cells in the original map that are not completely covered in Step 1. All map entries involving product terms are replaced by 0's and completely covered 1-cells by don't-cares. In this example, Fig. 4.48d is the Step 2 map and the cover is given by wx. The minimal sum is

f(w,x,y,z) = wyz + xy + wx
When both sum and product terms appear within the same map, the analysis needed to obtain a minimal sum becomes more difficult since greater attention must be given to the functional covering of cells. In all the previous examples, the concept of functional covering only involved the 1-cells. In general, a cell is functionally covered if the sum of the coverings from the subcubes involving the cell is equal to the function specified within the cell. To illustrate this point, consider the map of Fig. 4.49a. To obtain a minimal sum, the yz-cell is first considered. As was done previously, the occurrences of the y and z literals in the remaining cells are replaced by 1's and the necessary subcubes established. This results in the map shown in Fig. 4.49b, from which the product term xyz is obtained. Since the original (ȳ + z)-cell was included in the subcube, it now has become partially covered. The expression in the (ȳ + z)-cell of Fig. 4.49a can be written as ȳ + z = ȳ + (y + ȳ)z = ȳ + yz + ȳz. It is noted that the second term of this expression is covered by the yz-subcube. The remaining two terms simplify to ȳ + ȳz = ȳ. Thus, this cell may now be regarded as simply a ȳ-cell for the purpose of determining the remaining subcubes for the map. That is, the simplification problem is now reduced to the map shown in Fig. 4.49c. From this map, the ȳ-subcube results in the term wȳ. Finally, since no 1-cells appeared in the original map, the Step 2 part of the process is not needed and the resulting minimal sum is

f(w,x,y,z) = xyz + wȳ
CHAPTER 4 PROBLEMS
4.1 Relative to the Boolean function

f(w,x,y,z) = Σm(5,8,9,10,11,12,14)
classify each of the following terms as to whether it is (1) a prime implicant,
(2) an implicant but not prime, (3) a prime implicate, (4) an implicate but not
prime, (5) both an implicant and an implicate, or (6) neither an implicant nor
an implicate.
au Wz ee Saye areeZ Cc wtx
d. wxz 6.x f, Wisk ¥ Zz
e. Week h. wxyz
4.2 Represent each of the following Boolean functions on a Karnaugh map.
a. f(w,x,y,z) = wxyz + wxyz + wxyz + wxyz + wxyz + wxyZz
bi fwisyD = wrx yt owt xy + Ze Paty tz)
GW Fae ei GW aia tyr eZ Cs ak evecare)
c. f(w,x,y,z) = Σm(1,6,7,8,10,12,14)
d. f(w,x,y,z) = ΠM(0,3,4,7,9,13,14)
e. f(%,y,z) = xy + xy + yz
{ fiey2) =O OY + 2 +z)
4.3 Using a Karnaugh map, determine all the implicants of the function f(w,x,y,z) = Σm(0,1,2,5,10,11,14,15). Which of these are prime implicants?
4.4 Using Karnaugh maps, determine all the prime implicants of each of the
following functions. In each case, indicate the essential prime
implicants.
a. f(w,x,y,z) = Σm(0,1,2,5,6,7,8,9,10,13,14,15)
b. f(w,x,y,z) = ΠM(0,2,3,8,9,10,12,14)
c. f(w,x,y,z) = wyz + wyz + xyz + wxy + wxyz
d TWO WS 2 ea RO ee eet Cee eye)
NG Ei ay ae)
4.7 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(x,y,z) = Σm(2,4,5,6,7)
b. f(x,y,z) = Σm(0,1,2,3,4,6,7)
c. f(x,y,z) = ΠM(,4,5,6)
d. f(x,y,z) = ΠM(1,4,5)
4.8 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(w,x,y,z) = Σm(0,1,6,7,8,14,15)
b. f(w,x,y,z) = Σm(3,4,6,9,11,12,13,14,15)
c. f(w,x,y,z) = Σm(1,3,4,6,7,9,11,13,15)
d. f(w,x,y,z) = ΠM(1,3,4,5,10,11,12,14)
e. f(w,x,y,z) = ΠM(,4,5,6,14)
f. f(w,x,y,z) = ΠM(4,6,7,8,12,14)
g. f(w,x,y,z) = wxz + xyz + wxz + xyz
h. f(w,x,y,z) = xz + xyz + wxy + wyz
Py VEG) i— Waa Core ey VCR iz CW eae)
H(i kee are)
j. fwx%y.z)=wreyt+tzaxtyt+zwtetyweetytsz
oy se See Pe ND ae oe ae WP ae Z))
4.9 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(w,x,y,z) = Σm(0,2,6,7,9,10,15)
b. f(w,x,y,z) = Σm(0,1,2,4,5,6,7,8,9)
c. f(w,x,y,z) = Σm(0,1,5,6,7,8,15)
d. f(w,x,y,z) = ΠM(0,2,6,8,10,12,14,15)
e. f(w,x,y,z) = ΠM(0,2,3,5,6,9,10,11,13)
f. f(w,x,y,z) = ΠM(1,5,10,14)
g. f(w,x,y,z) = wx + yz + wxy + wxyz
h. f(w,x,y,z) = xy + yz + xyz + xyz + wxy + wyz
i. fow.xy,2) = Ow + Hw ty t QW ++ DW + H+]
j. SW,%Y,Z =(wt y =P z)(w Seu els y )(w Siaeeatn y ateZ)
HC a eer haya)
4.10 Using Karnaugh maps, determine all the prime implicants and prime
implicates for each of the following incomplete Boolean functions. In each
case, indicate which are essential.
a. f(w,x,y,z) = Σm(0,2,5,7,8,10,13,15) + dc(1,4,11,14)
b. f(w,x,y,z) = Σm(1,3,5,7,8,10,12,13,14) + dc(4,6,15)
c. f(w,x,y,z) = ΠM(0,1,4,5,8,9,11) + dc(2,10)
4.11 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following incomplete Boolean functions.
a. f(w,x,y,z) = Σm(0,1,2,5,8,15) + dc(6,7,10)
b. f(w,x,y,z) = Σm(2,8,9,10,12,13) + dc(7,11)
c. f(w,x,y,z) = Σm(1,7,9,10,12,13,14,15) + dc(4,5,8)
d. f(w,x,y,z) = Σm(7,9,11,12,13,14) + dc(3,5,6,15)
e. f(w,x,y,z) = Σm(0,2,6,8,10) + dc(1,4,7,11,13,14)
f. f(w,x,y,z) = Σm(1,4,6,8,9,10,11,12,13) + dc(3,15)
g. f(w,x,y,z) = Σm(2,6,7,8,9,10,12,13) + dc(0,1,4)
h. f(w,x,y,z) = ΠM(0,8,10,11,14) + dc(6)
i. f(w,x,y,z) = ΠM(2,8,11,15) + dc(3,12,14)
j. f(w,x,y,z) = ΠM(0,2,6,11,13,15) + dc(1,9,10,14)
4.12 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following incomplete Boolean functions.
a. f(w,x,y,z) = Σm(6,7,9,10,13) + dc(1,4,5,11,15)
b. f(w,x,y,z) = Σm(1,5,8,14) + dc(4,6,9,11,15)
c. f(w,x,y,z) = Σm(0,2,4,5,8,13,15) + dc(1,10,14)
d. f(w,x,y,z) = Σm(1,3,4,6,12) + dc(0,9,13)
e. f(w,x,y,z) = Σm(0,1,2,3,4,9,13) + dc(5,10,11,14)
f. f(w,x,y,z) = Σm(1,2,3,4,6,9,12,14) + dc(5,7,15)
g. f(w,x,y,z) = Σm(2,3,6,8,13,14,15) + dc(4,5,12)
h. f(w,x,y,z) = ΠM(,4,9,11,13) + dc(0,14,15)
i. f(w,x,y,z) = ΠM(1,2,3,4,9,10) + dc(0,14,15)
j. f(w,x,y,z) = ΠM(0,3,4,11,13) + dc(2,6,8,9,10)
4.13 Let g(w,x,y,z) = Σm(1,3,4,12,13) and f1(w,x,y,z) = Σm(0,1,3,4,6,8,10,11,12,13). Determine a minimal sum and a minimal product for the function f2(w,x,y,z) such that g(w,x,y,z) = f1(w,x,y,z)·f2(w,x,y,z).
4.14 Using a Karnaugh map, determine a minimal sum and a minimal product for
each of the following functions.
a. f(v,w,x,y,z) = Σm(1,5,9,11,13,20,21,26,27,28,29,30,31)
b. f(v,w,x,y,z) = Σm(3,7,8,9,11,12,13,15,16,19,20,23,27,30,31)
c. f(v,w,x,y,z) = Σm(1,3,4,5,11,14,15,16,17,19,20,24,26,28,30)
d. f(v,w,x,y,z) = ΠM(0,2,4,6,8,12,14,15,16,18,20,22,30,31)
4.15 Using a Karnaugh map, determine a minimal sum and a minimal product for
the function
f(u,v,w,x,y,z) = Σm(4,5,6,7,8,10,12,14,36,37,38,39,40,42,44,46,48,49,50,51,52,53,54,55,56,58,60,62)
Figure P4.24
4.28 Using the Quine-McCluskey and Petrick methods, determine all the
irredundant conjunctive normal formulas for the following Boolean
functions. Indicate which expressions are minimal products.
a. f(w,x,y,z) = ΠM(,4,5,8,9,11,13,14,15)
b. f(w,x,y,z) = ΠM(0,6,7,8,9,13) + dc(5,15)
4.29 For each of the prime-implicant tables shown in Table P4.29, determine a
minimal cover. The cost column indicates the cost associated with each row.
State your reasons for deleting any rows or columns.
Table P4.29
[Prime-implicant tables with rows r1, r2, . . . , r7 and columns c1 through c7; the last column gives the cost associated with each row.]
4.32 For the following set of Boolean functions, apply Petrick’s method to
determine all the multiple-output minimal sums based on the number of
distinct terms and on the number of gate input terminals in the realization.
f1(x,y,z) = Σm(1,2,4,6)
f2(x,y,z) = Σm(0,1,3,4,7)
f3(x,y,z) = Σm(1,4,5,7)
4.33 For each of the following sets of Boolean functions, determine a multiple-
output minimal sum based on the number of gate input terminals in the
realization.
a. f1(x,y,z) = Σm(0,2,3,4,6)
   f2(x,y,z) = Σm(0,2,5)
   f3(x,y,z) = Σm(3,4,5,6)
b. f1(x,y,z) = Σm(1,2,5) + dc(,7)
   f2(x,y,z) = Σm(3,5,6,7) + dc(1,4)
   f3(x,y,z) = Σm(1,4,6) + dc(0)
4.34 For each of the following Boolean functions, determine a minimal sum and a
minimal product using variable-entered maps where w, x, and y are the map
variables.
a. f(w,x,y,z) = Σm(2,3,4,5,10,12,13)
b. f(w,x,y,z) = Σm(0,3,4,5,8,9,11,12,13)
c. f(w,x,y,z) = Σm(1,3,8,9,10,11,12,14,15)
d. f(w,x,y,z) = Σm(4,7,8,12,13,15)
e. f(w,x,y,z) = Σm(3,4,5,7,8,11,12,13,15)
f. f(w,x,y,z) = Σm(0,2,5,8,9,10,11,13,15)
4.35 For each of the following Boolean functions, determine a minimal sum and a
minimal product using variable-entered maps where w, x, and y are the map
variables.
a. f(w,x,y,z) = Σm(2,3,5,12,14) + dc(0,4,8,10,11)
b. f(w,x,y,z) = Σm(,5,6,7,9,11,12,13) + dc(0,3,4)
c. f(w,x,y,z) = Σm(1,5,7,10,11) + dc(2,3,6,13)
d. f(w,x,y,z) = Σm(5,6,7,12,13,14) + dc(3,8,9)
e. f(w,x,y,z) = Σm(2,3,4,10,13,14,15) + dc(7,9,11)
Figure P4.37
4.36 For each of the following Boolean functions, determine a minimal sum using
variable-entered maps where x, y, and z are the map variables.
a. f(A,B,x,y,z) = Axyz + Bxyz + Bxyz + xyz
b. f(A,B,x,y,z) = Axyz + Axyz + Axyz + Bxyz + Bxyz + xyz + xyz
c. f(A,B,x,y,z) = Axyz + Axyz + ABxyz + ABxyz + xyz
4.37 For each of the variable-entered maps in Fig. P4.37, determine a minimal
sum.
Logic Design with MSI
Components and
Programmable Logic
Devices
It is possible to obtain fabricated circuit chips, or packages, that have from a small
set of individual gates to a highly complex interconnection of gates corresponding
to an entire logic network. The complexity of a single chip is known as the scale
of integration. As a rough rule of thumb, circuit chips containing from 1 to 10 gates
are said to be small-scale integrated (SSI) circuits, those having from 10 to 100
gates as medium-scale integrated (MSI) circuits, those having 100 to 1,000 gates as
large-scale integrated (LSI) circuits, and those having more than 1,000 gates as
very-large-scale integrated (VLSI) circuits.
Chapter 4 was concerned with obtaining optimal logic networks. At that time,
emphasis was placed on minimizing the number of gates and the number of gate
input terminals. Thus, the realization cost was based on chips having single gates.
However, if more than one gate is included on a chip, then the cost more realisti-
cally should be associated with the entire package rather than the individual gates.
In such a case, a good realization from a cost point of view does not necessarily re-
quire the use of a minimal expression according to the previous criteria but rather
requires one that does not exceed the capacity of the circuit package.
Another situation occurring in logic design is that certain gate configurations
have become so common and useful that manufacturers fabricate these networks on a
single chip using medium-scale and large-scale integration. These configurations nor-
mally provide a high degree of flexibility, allowing them to be used as logic-design
components. Again, good realizations of logic networks are achieved by proper use
of these generalized circuits without having to form minimal expressions.
This chapter first introduces some specialized MSI components that have ex-
tensive use in digital systems. These include adders, comparators, decoders, en-
coders, and multiplexers. Their principle of operation and, in some cases, how they
can be used as logic-design components are presented.
Unlike the MSI circuits which are designed to perform specific functions, LSI
technology introduced highly generalized circuit structures known as programma-
ble logic devices (PLDs). In their simplest form, programmable logic devices con-
sist of an array of and-gates and an array of or-gates. However, they must be modi-
fied for a specific application. Modification involves specifying the connections
within these arrays using a hardware procedure. This procedure is known as pro-
gramming. As a result of programming the arrays, it is possible to achieve realiza-
tions of specific functions using generalized components.
In the second part of this chapter, three programmable logic device structures are
studied. In particular, the programmable read-only memory (PROM), the program-
mable logic array (PLA), and the programmable array logic (PAL)* are discussed.
Table 5.1

 x_i  y_i  c_i   c_{i+1}  s_i
  0    0    0       0      0
  0    0    1       0      1
  0    1    0       0      1
  0    1    1       1      0
  1    0    0       0      1
  1    0    1       1      0
  1    1    0       1      0
  1    1    1       1      1
Having obtained a truth table, let us determine a logic network realization. Karnaugh maps for the sum and carry-out outputs of the binary full adder are shown in Fig. 5.1. The corresponding minimal sums are

s_i = x̄_i ȳ_i c_i + x̄_i y_i c̄_i + x_i ȳ_i c̄_i + x_i y_i c_i
c_{i+1} = x_i y_i + x_i c_i + y_i c_i     (5.1)

Although the minimal sum for the sum output is just its minterm canonical formula, a possible simplification of the sum equation is achieved by making use of the exclusive-or operation. In particular,

s_i = c̄_i(x_i ⊕ y_i) + c_i(x̄_i ȳ_i + x_i y_i)
    = c_i ⊕ (x_i ⊕ y_i)     (5.2)

In arriving at Eq. (5.2), it should be noted that the form of the expression on the line above it is ĀB + AB̄, where A = c_i and B = (x_i ⊕ y_i), which corresponds to A ⊕ B. The logic diagram for the binary full adder based on Eqs. (5.1) and (5.2) is shown in Fig. 5.2.
The binary full adder is only capable of handling one bit each of an augend and addend along with a carry-in generated as a carry-out from the addition of the previous lower-order bit position. Consider now the addition of two binary numbers each consisting of n bits, i.e., x_{n-1}x_{n-2} · · · x_1x_0 and y_{n-1}y_{n-2} · · · y_1y_0. This, in general, results in an (n + 1)-bit sum s_n s_{n-1} · · · s_1s_0. A direct approach for designing a binary adder in this case is to write a truth table with 2^{2n} rows corresponding to all the combinations of values assignable to the 2n operand bits, and specifying the values for the n + 1 sum bits. Clearly, this is a formidable task.
As an alternate approach, n binary full adders, e.g., of the type shown in Fig. 5.2, can be cascaded as illustrated in Fig. 5.3, where c_n, the carry-out from the highest-order bit position, becomes the highest-order sum bit, s_n.* Since for the least-significant-bit
position there is no carry-in, a 0 is entered on the corresponding input line. When in-
puts are applied simultaneously to a logic network, as in Fig. 5.3, it is commonly re-
ferred to as a parallel input. Thus, the adder network shown in Fig. 5.3 is called a par-
allel binary adder. Although the inputs to this adder are applied simultaneously, the
output sum bits do not necessarily occur simultaneously due to the propagation delays
associated with the gates. In particular, the network of Fig. 5.3 is prone to a ripple ef-
fect in that a carry-out generated at the ith-bit position can affect the sum bits at higher-
order bit positions. Hence, the value for a higher-order sum bit is not produced until
the carry at its previous order bit position is established. Consequently, this logic net-
work is also referred to as a ripple binary adder.
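To make the cascade of Fig. 5.3 concrete, the following Python sketch (an illustration added here, not a circuit description from the text) models one full adder with Eqs. (5.1) and (5.2) and ripples the carries through n stages; the convention that index 0 is the least significant bit is an assumption of the sketch.

    # Bit-level model of a binary full adder and an n-bit ripple binary adder.

    def full_adder(x, y, c):
        s = x ^ y ^ c                            # s_i = x_i XOR y_i XOR c_i, Eq. (5.2)
        c_next = (x & y) | (x & c) | (y & c)     # c_{i+1} = x_i y_i + x_i c_i + y_i c_i, Eq. (5.1)
        return s, c_next

    def ripple_adder(x_bits, y_bits):
        c = 0                                    # no carry-in at the least significant bit
        s_bits = []
        for x, y in zip(x_bits, y_bits):         # bits listed least significant first
            s, c = full_adder(x, y, c)
            s_bits.append(s)
        return s_bits + [c]                      # final carry is the highest-order sum bit

    # Example: 6 + 3 = 9, i.e., 0110 + 0011 = 01001.
    print(ripple_adder([0, 1, 1, 0], [1, 1, 0, 0]))   # [1, 0, 0, 1, 0]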
As was discussed in Chapter 2, binary numbers can be signed or unsigned, in
which case the output of the adder must be interpreted accordingly as a signed or
unsigned result. Another factor affecting the interpretation of the output of the adder
is if a final carry-out occurs, i.e., s_n, since it may correspond to an overflow. The
reader is referred back to Chapter 2 for the details of binary arithmetic with signed
and unsigned numbers and the concept of overflow.
Table 5.2

 x_i  y_i  b_i   b_{i+1}  d_i
  0    0    0       0      0
  0    0    1       1      1
  0    1    0       1      1
  0    1    1       1      0
  1    0    0       0      1
  1    0    1       0      0
  1    1    0       0      0
  1    1    1       1      1
borrow-out bit, b_{i+1}. The difference bit at each order is obtained by subtracting both the subtrahend and borrow-in bits from the minuend bit. To achieve this result, however, a borrow-out from the next higher-order bit position may be necessary.
For the purpose of obtaining a realization, Table 5.2 can also be viewed as a truth table for a binary full subtracter. Since the d_i column of Table 5.2 is identical to the s_i column of Table 5.1, it is immediately concluded that the difference equation for a binary full subtracter is
d_i = x_i ⊕ y_i ⊕ b_i

By using a Karnaugh map, the minimal-sum expression for the borrow-out is readily determined as

b_{i+1} = x̄_i y_i + x̄_i b_i + y_i b_i

These results can be used to construct a logic diagram for a binary full subtracter.
As was done for addition, by cascading n binary full subtracters, a ripple binary subtracter is realized for handling two n-bit operands. The structure of such a realization is shown in Fig. 5.4, where x_{n-1}x_{n-2} · · · x_1x_0 is the n-bit minuend and y_{n-1}y_{n-2} · · · y_1y_0 is the n-bit subtrahend.
Recalling from Secs. 2.8 and 2.9, subtraction can be replaced by addition
through the use of complements. For example, adding the 2’s-complement of the
subtrahend to the minuend results in the difference between the two numbers.
Figure 5.5 Parallel binary subtracter constructed using a parallel binary adder.
by the exclusive-or-gates prior to entering the parallel binary adder and the neces-
sary initial carry-in of 0 is provided.
Since the operands in subtraction can be either signed or unsigned, the output
of a binary subtracter must be interpreted appropriately. For example, for unsigned
operands, the output from the binary subtracter of Fig. 5.4 is the true difference if
the minuend is greater than or equal to the subtrahend. However, the output from
the subtracter is the 2’s-complement representation of the difference if the minuend
is less than the subtrahend. Again the reader is referred back to Chapter 2 for the de-
tails of binary arithmetic with signed and unsigned numbers.
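Assuming the usual 2's-complement arrangement described in this section (complement the subtrahend bits, as the exclusive-or gates of Fig. 5.5 can do, and supply an initial carry of 1), a subtracter built from the same ripple structure can be modeled as follows. This Python fragment is an illustration only and reuses the full-adder logic of the previous sketch.

    # Illustrative n-bit subtracter: minuend + (1's complement of subtrahend) + 1.

    def ripple_subtracter(x_bits, y_bits):
        c = 1                                    # the "+1" of the 2's complement
        d_bits = []
        for x, y in zip(x_bits, y_bits):         # bits listed least significant first
            y = 1 - y                            # complement the subtrahend bit
            s = x ^ y ^ c
            c = (x & y) | (x & c) | (y & c)
            d_bits.append(s)
        return d_bits, c    # final carry is 1 when the unsigned minuend >= subtrahend

    # Example: 9 - 3 = 6, i.e., 1001 - 0011 = 0110 with a final carry of 1.
    print(ripple_subtracter([1, 0, 0, 1], [1, 1, 0, 0]))   # ([0, 1, 1, 0], 1)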
The first term in the last equation, x_i y_i, is called the carry-generate function since it corresponds to the formation of a carry at the ith stage. The second term, (x_i + y_i)c_i, corresponds to a previously generated carry c_i that must propagate past the ith stage to the next stage. The x_i + y_i part of this term is called the carry-propagate function. Letting the carry-generate function be denoted by the Boolean variable g_i and the carry-propagate function by p_i, i.e.,
g_i = x_i y_i     (5.3)
p_i = x_i + y_i     (5.4)

c_{i+1} = g_i + p_i c_i

Using this general result, the output carry at each of the stages can be written in terms of just the carry-generate functions, the carry-propagate functions, and the initial input carry c0 as follows:

c1 = g0 + p0c0     (5.5)
c2 = g1 + p1c1
   = g1 + p1(g0 + p0c0)
   = g1 + p1g0 + p1p0c0     (5.6)
c3 = g2 + p2c2
   = g2 + p2(g1 + p1g0 + p1p0c0)
   = g2 + p2g1 + p2p1g0 + p2p1p0c0     (5.7)
c4 = g3 + p3c3
   = g3 + p3(g2 + p2g1 + p2p1g0 + p2p1p0c0)
   = g3 + p3g2 + p3p2g1 + p3p2p1g0 + p3p2p1p0c0     (5.8)
Figure 5.7 A carry lookahead adder. (a) General organization. (b) Sigma block.
carry-propagate function at each stage. A sigma block based on Eqs. (5.2) to (5.4)
is shown in Fig. 5.7b.
From the above discussion, the logic diagram of a carry lookahead adder which
handles two 4-bit operands is shown in Fig. 5.8. Generalizing from this figure, the
path length from the generation of a carry to its appearance as an input at any
higher-order stage, i.e., the path length through any stage of the carry lookahead
network, is two levels of logic. Thus, with one level of logic to form g;, two levels
of logic for the carry to propagate between any two stages, and one level of logic to
have the carry effect a sum output, the maximum propagation delay for a carry
lookahead adder is 4 units of time under the assumption that each gate introduces a
unit time of propagation delay.
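The lookahead equations (5.5) through (5.8) lend themselves to a direct evaluation. The Python sketch below (an illustration, not the network of Fig. 5.8) computes the generate and propagate signals of a 4-bit block and then each carry; the expanded two-level forms of Eqs. (5.6) to (5.8) are algebraically equal to the iteration used here.

    # Illustrative evaluation of the carry lookahead signals for one 4-bit block:
    # g_i = x_i y_i, p_i = x_i + y_i, c_{i+1} = g_i + p_i c_i  (Eqs. 5.3-5.8).

    def lookahead_carries(x_bits, y_bits, c0):
        g = [x & y for x, y in zip(x_bits, y_bits)]   # carry-generate functions
        p = [x | y for x, y in zip(x_bits, y_bits)]   # carry-propagate functions
        c = [c0]
        for i in range(len(g)):
            c.append(g[i] | (p[i] & c[i]))
        return c                                      # c0, c1, ..., c4

    # Example: adding 3 (0011) and 5 (0101); the sum bits then follow from
    # s_i = (x_i XOR y_i) XOR c_i in the sigma blocks.
    print(lookahead_carries([1, 1, 0, 0], [1, 0, 1, 0], 0))   # [0, 1, 1, 1, 0]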
Then, by using carry lookahead adders for each block, their cascade connection re-
sults in a large adder. Figure 5.9 illustrates this approach by cascading 4-bit carry
lookahead adders. In this case, ripple carries occur between the cascaded 4-bit carry
lookahead adders.
Another approach to realizing large high-speed adders again relies on the partitioning of the operands into blocks. However, use is made of generic carry lookahead networks called carry lookahead generators. Figure 5.10a shows a possible 4-bit carry lookahead generator. It is the same as the first three stages of the carry lookahead network of Fig. 5.8 with two additional outputs G and P, described by the expressions

G = g3 + p3g2 + p3p2g1 + p3p2p1g0
P = p3p2p1p0
These outputs provide for a block carry-generate signal and a block carry-propagate
signal. Using this 4-bit carry lookahead generator, the 16-bit high-speed adder shown
in Fig. 5.10b is realized where the Σ-blocks correspond to the network of Fig. 5.7b.
Both of the above two compromises, i.e., cascading carry lookahead adders or
utilizing block carry lookahead generators, result in a large parallel adder much
faster than that of the ripple parallel adder.
[Figure 5.10 (b) A 16-bit high-speed adder built from 4-bit carry lookahead generators. Figure 5.12 General organization of a single-decade decimal adder: the two BCD digit code groups and the input carry enter a 4-bit binary adder whose outputs KP3P2P1P0 are then corrected to give CoutZ3Z2Z1Z0.]
forming conventional binary addition on the two binary-coded operands and then applying a corrective procedure. This approach is illustrated in Fig. 5.12. The code groups for the two decimal digits are added using a 4-bit binary adder as discussed in the previous section to produce intermediate results KP3P2P1P0. These results are then modified so as to obtain the appropriate output carry and code group for the sum digit, i.e., CoutZ3Z2Z1Z0. Since each operand digit has a decimal value from 0 to 9, along with the fact that a carry from a previous digit position is at most 1, the decimal sum at each digit position must be in the range from 0 to 19. Table 5.3 summarizes the various outputs from the 4-bit binary adder and the required outputs from the single-decade decimal adder. As shown in the table, if the sum of the two decimal digits and input carry is less than 10, then the code group for the required BCD sum and output carry digits appear at the outputs of the 4-bit binary adder. In this case no corrective procedure is necessary since KP3P2P1P0 = CoutZ3Z2Z1Z0. On the other hand, when the two decimal-digit operands and carry from the previous decade produce an output from the 4-bit binary adder of KP3P2P1P0 = 01010, 01011, . . . , 10011, which corresponds to the decimal sums of 10 through 19, corrective action must be taken to get the appropriate values for CoutZ3Z2Z1Z0.
The need for a correction is divided into two cases as indicated by the dashed lines in Table 5.3. Consider first the situation when the decimal sums are in the range from 16 to 19. Here, the outputs from the 4-bit binary adder appear as KP3P2P1P0 = 10000, 10001, 10010, or 10011; while the required outputs from the single-decade decimal adder should be CoutZ3Z2Z1Z0 = 10110, 10111, 11000, or 11001, respectively. In each of these cases, it is immediately recognized that the occurrence of the carry K indicates that a carry Cout also is necessary. Furthermore, if the binary quantity 0110 is added to the output P3P2P1P0, then the correct sum digit, Z3Z2Z1Z0, is obtained. That is, the addition of a decimal 6, i.e., binary 0110, to the output from the 4-bit binary adder is the necessary correction whenever the carry bit K is 1.
Table 5.3 Outputs of the 4-bit binary adder and required outputs of the single-decade decimal adder

 Decimal sum   K P3 P2 P1 P0   Cout Z3 Z2 Z1 Z0
      0        0 0  0  0  0     0   0  0  0  0
      1        0 0  0  0  1     0   0  0  0  1
      2        0 0  0  1  0     0   0  0  1  0
      3        0 0  0  1  1     0   0  0  1  1
      4        0 0  1  0  0     0   0  1  0  0
      5        0 0  1  0  1     0   0  1  0  1
      6        0 0  1  1  0     0   0  1  1  0
      7        0 0  1  1  1     0   0  1  1  1
      8        0 1  0  0  0     0   1  0  0  0
      9        0 1  0  0  1     0   1  0  0  1
     10        0 1  0  1  0     1   0  0  0  0
     11        0 1  0  1  1     1   0  0  0  1
     12        0 1  1  0  0     1   0  0  1  0
     13        0 1  1  0  1     1   0  0  1  1
     14        0 1  1  1  0     1   0  1  0  0
     15        0 1  1  1  1     1   0  1  0  1
     16        1 0  0  0  0     1   0  1  1  0
     17        1 0  0  0  1     1   0  1  1  1
     18        1 0  0  1  0     1   1  0  0  0
     19        1 0  0  1  1     1   1  0  0  1
Now consider the situation when the output from the 4-bit binary adder corresponds to the decimal sums 10 to 15. These outputs appear as KP3P2P1P0 = 01010, 01011, . . . , 01111 and the required outputs are CoutZ3Z2Z1Z0 = 10000, 10001, . . . , 10101, respectively. In each of these cases, it is necessary to have Cout = 1 even though K = 0. Again it is immediately recognized that the addition of decimal 6, i.e., binary 0110, to the output from the 4-bit binary adder, P3P2P1P0, results in the correct sum digit. That is, whenever the six binary combinations P3P2P1P0 = 1010, 1011, . . . , 1111 occur, the corrective procedure is to add the decimal quantity 6. These six binary combinations correspond to the invalid code groups in the 8421 code. To obtain a Boolean expression to detect these six binary combinations, a Karnaugh map is constructed as shown in Fig. 5.13. Obtaining the minimal sum from the map, it is seen that a correction is needed to the binary sum whenever the Boolean expression P3P2 + P3P1 has the value of 1.
In summary, to design a single-decade BCD adder having the organization of Fig. 5.12, the two decimal digits are added as binary numbers. No correction to the binary sum is necessary when KP3P2P1P0 ≤ 01001, but the binary equivalent of the decimal 6 must be added to P3P2P1P0 when KP3P2P1P0 > 01001. The Boolean expression describing the need for a correction is

Cout = K + P3P2 + P3P1     (5.10)

The first term corresponds to the situation when 10000 ≤ KP3P2P1P0 ≤ 10011, i.e., whenever the carry bit K is 1, and the remaining two terms correspond to the situation when 01010 ≤ KP3P2P1P0 ≤ 01111, i.e., whenever the code group for the sum digit is invalid. It is also noted from Table 5.3 that whenever a corrective action is necessary, a carry Cout should be sent to the next decade. Thus, Eq. (5.10) also describes the conditions for the generation of a carry. Figure 5.14 shows the logic diagram of
a single-decade BCD adder. In this diagram whenever Cout = 0, the outputs from the upper 4-bit binary adder are sent to the lower 4-bit binary adder and the decimal quantity of zero is added to it, which results in no corrective action. However, whenever Cout = 1, decimal 6, i.e., binary 0110, is added to the outputs from the upper 4-bit binary adder so that the correct sum digit is obtained.
The above discussion was concerned with the design of a single-decade BCD
adder. A decimal adder for two n-digit BCD numbers can be constructed by cascad-
ing the network of Fig. 5.14 in much the same way as was done for the ripple binary
adder.
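The whole single-decade procedure—binary addition, the correction test of Eq. (5.10), and the conditional addition of 0110—fits in a few lines. The following Python sketch is an illustration of the procedure (working on integers rather than on the gate-level structure of Fig. 5.14).

    # Illustrative single-decade BCD adder: add two decimal digits and a carry-in,
    # then apply the 0110 correction whenever Eq. (5.10) calls for it.

    def bcd_digit_adder(a, b, c_in):
        total = a + b + c_in                   # 4-bit binary adder result K P3 P2 P1 P0
        k, p = total >> 4, total & 0xF
        p3, p2, p1 = (p >> 3) & 1, (p >> 2) & 1, (p >> 1) & 1
        c_out = k | (p3 & p2) | (p3 & p1)      # Cout = K + P3 P2 + P3 P1
        z = (p + 6) & 0xF if c_out else p      # add decimal 6 when a correction is needed
        return c_out, z                        # carry to the next decade, BCD sum digit

    # Example: 7 + 5 = 12, so the decade produces a carry of 1 and the digit 2.
    print(bcd_digit_adder(7, 5, 0))   # (1, 2)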
5.3 COMPARATORS
A commonly encountered situation in logic design is the need for a network to com-
pare the magnitudes of two binary numbers for the purpose of establishing whether
one is greater than, equal to, or less than the other. A conceptually simple approach
to the design of such a network, called a comparator, makes use of a cascade con-
nection of identical subnetworks in much the same way as was done in the design of
the parallel adder.*
To see how such a subnetwork is designed, consider two n-bit binary numbers A = A_{n-1} · · · A_iA_{i-1} · · · A_1A_0 and B = B_{n-1} · · · B_iB_{i-1} · · · B_1B_0. For the purpose of this design, assume that only one bit of corresponding order from each number is entering the subnetwork, say, A_i and B_i, and that the two binary numbers are to be analyzed from right to left. This subnetwork is called a 1-bit comparator. The function of the 1-bit comparator is to establish whether A_iA_{i-1} · · · A_1A_0 is greater than, equal to, or less than B_iB_{i-1} · · · B_1B_0 given the values of A_i, B_i, and whether A_{i-1} · · · A_1A_0 is greater than, equal to, or less than B_{i-1} · · · B_1B_0. The three conditions describing the relative magnitudes of A_{i-1} · · · A_1A_0 and B_{i-1} · · · B_1B_0 are assigned to three variables G_i, E_i, and L_i, where G_i = 1 denotes A_{i-1} · · · A_1A_0 > B_{i-1} · · · B_1B_0, E_i = 1 denotes A_{i-1} · · · A_1A_0 = B_{i-1} · · · B_1B_0, and L_i = 1 denotes A_{i-1} · · · A_1A_0 < B_{i-1} · · · B_1B_0. Thus, the 1-bit comparator is a 5-input, 3-output network as shown in Fig. 5.15.
Having obtained the organization of the 1-bit comparator, it is now necessary to develop a rule for specifying the values of G_{i+1}, E_{i+1}, and L_{i+1} given the values of A_i, B_i, G_i, E_i, and L_i. Upon a little thought it should become clear that, regardless of
[Figure 5.15 A 1-bit comparator: inputs A_i, B_i and G_i, E_i, L_i (the comparison of A_{i-1} · · · A_1A_0 with B_{i-1} · · · B_1B_0); outputs G_{i+1}, E_{i+1}, L_{i+1} (the comparison of A_iA_{i-1} · · · A_1A_0 with B_iB_{i-1} · · · B_1B_0).]
the relative magnitudes of A_{i-1} · · · A_1A_0 and B_{i-1} · · · B_1B_0, if A_i = 1 and B_i = 0 then A_i · · · A_1A_0 > B_i · · · B_1B_0; while if A_i = 0 and B_i = 1 then A_i · · · A_1A_0 < B_i · · · B_1B_0. However, if A_i and B_i are the same, then the relative magnitudes of A_i · · · A_1A_0 and B_i · · · B_1B_0 are the same as the relative magnitudes of A_{i-1} · · · A_1A_0 and B_{i-1} · · · B_1B_0. From this analysis, the truth table shown in Table 5.4 is constructed. The large number of don't-care conditions should be noted. This is a consequence of the fact that one and only one of the three variables G_i, E_i, and L_i has the value of logic-1 at any time. The minimal sum Boolean expressions for Table 5.4 are
[The minimal sum expressions for G_{i+1}, E_{i+1}, and L_{i+1} appear here.]
Figure 5.16 Comparing two binary numbers A and B. (a) 1-bit comparator network.
(b) Cascade connection of 1-bit comparators.
design of the subnetwork. Numbers consisting of more than 4 bits are then compared
by cascading these 4-bit comparator subnetworks in the same manner as was illus-
trated above for the 1-bit comparators.
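The right-to-left analysis performed by the cascade of Fig. 5.16b is easy to mimic in software. The Python sketch below (an illustration based on the truth-table reasoning of Table 5.4, not on any particular minimal expressions) models one 1-bit comparator stage and then chains the stages.

    # Illustrative model of a cascade of 1-bit comparators.

    def comparator_stage(a, b, g, e, l):
        if a == 1 and b == 0:
            return 1, 0, 0          # A_i ... A_0 > B_i ... B_0
        if a == 0 and b == 1:
            return 0, 0, 1          # A_i ... A_0 < B_i ... B_0
        return g, e, l              # equal bits: keep the lower-order result

    def compare(a_bits, b_bits):
        g, e, l = 0, 1, 0           # before any bits are examined, A and B are "equal"
        for a, b in zip(a_bits, b_bits):   # bits listed least significant first
            g, e, l = comparator_stage(a, b, g, e, l)
        return g, e, l              # (A > B, A = B, A < B)

    # Example: A = 1010 (ten) and B = 1001 (nine) gives (1, 0, 0), i.e., A > B.
    print(compare([0, 1, 0, 1], [1, 0, 0, 1]))   # (1, 0, 0)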
5.4 DECODERS
Frequently, digital information represented in some binary form must be converted
into some alternate binary form. This is achieved by a multiple-input, multiple-output
logic network referred to as a decoder. The most commonly used decoder is the
Figure 5.17 An n-to-2^n-line decoder symbol.
n-to-2^n-line decoder. This digital network has n input lines and 2^n output lines with the property that only one of the 2^n output lines responds, say with a logic-1, to a given input combination of values on its n input lines. A symbol for such a device is shown in Fig. 5.17.
The realization of the n-to-2^n-line decoder is straightforward. Figure 5.18 shows the logic diagram, truth table, and symbol of a 3-to-8-line decoder. In this figure the three input lines are assigned the variables x0, x1, and x2; while the eight output lines are assigned the variables z0, z1, . . . , z7. As shown in the truth table, only one output line responds, i.e., is at logic-1, for each of the input combinations.
To further understand the labels in the symbol of Fig. 5.18c, let a binary 0 be associated with a logic-0 and a binary 1 be associated with a logic-1. In addition, let the ith input line be weighted by 2^i for i = 0, 1, 2. In this way, the input combinations can be regarded as binary numbers with the consequence that the jth output line is at logic-1, for j = 0, 1, . . . , 7, only when input combination j is applied.
The n-to-2^n-line decoder is only one of several types of decoders. Function-specific decoders exist having fewer than 2^n outputs. For example, a decoder having
4 inputs and 10 outputs in which a single responding output line corresponds to a
combination of the 8421 code is referred to as a BCD-to-decimal decoder. There are
also function-specific decoders in which more than one output line responds to a
given input combination. For example, there is a four-input-line, seven-output-line
decoder that accepts the 4 bits of the 8421 code and is used to drive a seven-
segment display. However, the n-to-2^n-line decoders are more flexible than the
function-specific decoders. It is now shown that they can be used as a general
component for logic design.
Figure 5.18 A 3-to-8-line decoder. (a) Logic diagram. (b) Truth table. (c) Symbol.
Since each decoder output corresponds to a minterm, a single decoder together with
or-gates provides a convenient realization when several Boolean functions of
the same variables have to be realized. To illustrate this, consider the pair of
expressions
f1(x2,x1,x0) = Σm(1,2,4,5)
f2(x2,x1,x0) = Σm(1,5,7)
Using a single 3-to-8-line decoder and two or-gates, the realization shown in Fig. 5.19
is immediately obtained.
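The following Python fragment (not from the text; the names are illustrative) checks this style of realization by or-ing the appropriate decoder outputs for f1 and f2.

# Sketch: realizing f1 = Σm(1,2,4,5) and f2 = Σm(1,5,7) from a 3-to-8-line decoder.
def decode3(x2, x1, x0):
    j = 4 * x2 + 2 * x1 + x0
    return [1 if k == j else 0 for k in range(8)]      # z0..z7 are the minterms

def realize(x2, x1, x0):
    z = decode3(x2, x1, x0)
    f1 = z[1] | z[2] | z[4] | z[5]    # four-input or-gate
    f2 = z[1] | z[5] | z[7]           # three-input or-gate
    return f1, f2

# Exhaustive check against the minterm lists.
for m in range(8):
    x2, x1, x0 = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert realize(x2, x1, x0) == (int(m in {1, 2, 4, 5}), int(m in {1, 5, 7}))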
In the realization of Fig. 5.19, the number of input terminals required of each
or-gate is equal to the number of minterms that must be summed by the gate. When
more than one-half the total number of minterms must be or-ed, it is usually more
convenient to use nor-gates rather than or-gates to perform the summing. This re-
sults in a net reduction in the total number of input terminals required of the sum-
ming gates. For example, consider the pair of expressions
f1(x2,x1,x0) = Σm(0,1,3,4,5,6)
f2(x2,x1,x0) = Σm(1,2,3,4,6)
The complements of these functions, f̄1(x2,x1,x0) = Σm(2,7) and f̄2(x2,x1,x0) = Σm(0,5,7),
contain far fewer minterms. Since a nor-gate produces the complement of the sum of its
inputs, f1 and f2 are obtained by applying the minterms of f̄1 and f̄2 to output nor-gates,
so that only 2 + 3 = 5 summing-gate input terminals are needed rather than 6 + 5 = 11.
Figure 5.21 A decoder realization of f1(x2,x1,x0) = ΠM(0,1,3,5) and f2(x2,x1,x0) = ΠM(1,3,6,7). (a) Using output
or-gates. (b) Using output nor-gates.
f1(x2,x1,x0) = ΠM(0,3,5)
f2(x2,x1,x0) = ΠM(2,3,4)
f1(x2,x1,x0) = ΠM(0,1,3,4,7)
f2(x2,x1,x0) = ΠM(1,2,3,4,5,6)
[Figure: logic diagram, truth table, and symbol of a nand-gate 3-to-8-line decoder whose outputs are the complemented minterms, z_j = m̄_j.]
Using the fact that the complement of a maxterm canonical formula is the product of
those maxterms not appearing in the original formula, the complementary formulas are
f̄1(x2,x1,x0) = ΠM(2,5,6)
f̄2(x2,x1,x0) = ΠM(0,7)
Figure 5.25 A decoder realization of f1(x2,x1,x0) = Σm(0,2,6,7) and f2(x2,x1,x0) = Σm(3,5,6,7). (a) Using output
and-gates. (b) Using output nand-gates.
Figure 5.26 And-gate 2-to-4-line decoder with an enable input. (a) Logic diagram.
(b) Compressed truth table. (c) Symbol.
Figure 5.27 Nand-gate 2-to-4-line decoder with an enable input. (a) Logic
diagram. (b) Compressed truth table. (c) Symbol.
The enable input provides the decoder with additional flexibility. For example,
suppose a digital network is to be designed which accepts data information and
must channel it to one of four outputs. This is achieved using a decoder in the con-
figuration shown in Fig. 5.28. Here, the data are applied to the enable input. By en-
tering a binary combination on the other two input lines, labeled as select lines in
the figure, precisely one output line is selected to receive the information appearing
on the data input line. In particular, if x1 = 0 and x0 = 1, then the output line labeled
1 takes on the value of the data input, while the remaining output lines are held at 0.
[Figure 5.28: a 2-to-4-line decoder used to route the data input to one of four output lines. A companion figure shows a 4-to-16-line decoder built as a tree of 2-to-4-line decoders.]
Its output lines are labeled with their corresponding Boolean expressions. Here the
first-level decoder is used to generate the four combinations of the x2 and x3 vari-
ables since its E = 1. Each of these combinations is applied to the enable input of a
second-level decoder that introduces the four combinations of the x0 and x1 vari-
ables. The net result is a network that generates the 16 minterms of four variables
or, equivalently, serves as a 4-to-16-line decoder.
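A rough behavioral sketch of this decoder tree, written in Python as an aid (the function names are invented, and the first level is assumed to handle x3 and x2), is:

# Sketch: a 2-to-4-line decoder with enable, and five of them forming a 4-to-16-line decoder.
def dec2to4(x1, x0, enable):
    j = 2 * x1 + x0
    return [enable if k == j else 0 for k in range(4)]

def dec4to16(x3, x2, x1, x0):
    first = dec2to4(x3, x2, 1)                 # first level: combinations of x3, x2
    outputs = []
    for e in first:                            # each first-level output enables one second-level decoder
        outputs.extend(dec2to4(x1, x0, e))     # second level: combinations of x1, x0
    return outputs                             # the 16 minterms of x3 x2 x1 x0

# Example: x3 x2 x1 x0 = 1 0 1 1 selects output line 11.
print(dec4to16(1, 0, 1, 1).index(1))           # 11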
5.5 ENCODERS
Like decoders, encoders also provide for the conversion of binary information from
one form to another. Encoders are essentially the inverse of decoders. Normally, de-
coders have more output lines than input lines. On the other hand, networks that
have more input lines than output lines are usually called encoders.
Perhaps the simplest encoder is the 2^n-to-n-line encoder in which an assertive
logic value, say, logic-1, on one of its 2^n input lines causes the corresponding binary
code to appear at the output lines. If it is assumed that at most one input line is as-
serted* at any time, then the 2^n-to-n-line encoder is simply a collection of or-gates.
Figure 5.30 shows a 2^n-to-n-line encoder symbol and Fig. 5.31 shows the logic dia-
gram for an 8-to-3-line encoder. The equations for the three outputs of Fig. 5.31 are
z0 = x1 + x3 + x5 + x7
z1 = x2 + x3 + x6 + x7
z2 = x4 + x5 + x6 + x7
In general, the Boolean expression for the output z_i is the sum of each input x_j
in which the binary representation of j has a 1 in the 2^i-bit position.
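For instance, a small Python sketch of the 8-to-3-line encoder equations (an illustration added here, not the author's code) is:

# Sketch: output z_i is the or of every input x_j whose index j has a 1 in bit position i.
def encode8to3(x):
    """x: list [x0, x1, ..., x7] with at most one element asserted."""
    z0 = x[1] | x[3] | x[5] | x[7]
    z1 = x[2] | x[3] | x[6] | x[7]
    z2 = x[4] | x[5] | x[6] | x[7]
    return z2, z1, z0

x = [0] * 8
x[6] = 1                      # assert input line x6 only
print(encode8to3(x))          # (1, 1, 0), the binary code for 6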
[Figure 5.30: symbol for a 2^n-to-n-line encoder with 2^n inputs and n outputs.]
*When a named input signal to a logic network is to cause an action when at logic-1, the signal is said to
be active high. Similarly, when a named input signal to a logic network is to cause an action when at
logic-0, the signal is said to be active low. When a signal is at its active level, it is said to be asserted.
[Figure 5.31: logic diagram of an 8-to-3-line encoder constructed from three or-gates.]
The assumption that at most a single input to the 2^n-to-n-line encoder is as-
serted at any time is significant in its operation. For example, in the encoder of
Fig. 5.31, assume that two of the inputs, say x3 and x4, are simultaneously logic-1.
Logic-1's then appear at all three output terminals, implying that x7 must have
been logic-1. For this reason, priority encoders have been developed. In a priority
encoder, a priority scheme is assigned to the input lines so that whenever more than
one input line is asserted at any time, the output is determined by the input line
having the highest priority. For example, Table 5.5 is a condensed truth table
specifying the behavior of a priority encoder where the output is determined by the
asserted input having the highest index, i.e., x_i has higher priority than x_j if i > j.
Thus, referring to Table 5.5, if x4 = x5 = x6 = x7 = 0 and x3 = 1, then z2z1z0 = 011
regardless of the values of the x0, x1, and x2 inputs.
An output is also included in Table 5.5, labeled valid, to indicate that at least
one input line is asserted. This is done so as to distinguish the situation in which no
input line is asserted from that in which only the x0 input line is asserted, since in
both cases z2z1z0 = 000.
Table 5.5 Condensed truth table for an 8-to-3-line priority encoder

Inputs                                   Outputs
x7 x6 x5 x4 x3 x2 x1 x0        z2 z1 z0  Valid
0  0  0  0  0  0  0  0          0  0  0    0
0  0  0  0  0  0  0  1          0  0  0    1
0  0  0  0  0  0  1  X          0  0  1    1
0  0  0  0  0  1  X  X          0  1  0    1
0  0  0  0  1  X  X  X          0  1  1    1
0  0  0  1  X  X  X  X          1  0  0    1
0  0  1  X  X  X  X  X          1  0  1    1
0  1  X  X  X  X  X  X          1  1  0    1
1  X  X  X  X  X  X  X          1  1  1    1
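As an informal companion to Table 5.5 (not part of the original text), the following Python sketch models the priority rule and the valid output; the function name is invented.

# Sketch: the output code is that of the asserted input with the highest index,
# and the valid output indicates that at least one input is asserted.
def priority_encode(x):
    """x: list [x0, x1, ..., x7] of input bits; returns (z2, z1, z0, valid)."""
    for j in range(7, -1, -1):          # the highest index has the highest priority
        if x[j]:
            return (j >> 2) & 1, (j >> 1) & 1, j & 1, 1
    return 0, 0, 0, 0                   # no input asserted: code 000, valid = 0

# x3 asserted while x0..x2 vary and x4..x7 are 0  ->  z2 z1 z0 = 0 1 1
print(priority_encode([1, 0, 1, 1, 0, 0, 0, 0]))   # (0, 1, 1, 1)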
5.6 MULTIPLEXERS
Another very useful MSI device is the multiplexer. Multiplexers are also called data se-
lectors. The basic function of this device is to select one of its 2^n data input lines and
place the corresponding information appearing on this line onto a single output line.
Since there are 2^n data input lines, n bits are needed to specify which input line is to be
selected. This is achieved by placing the binary code for a desired data input line onto
its n select input lines. A symbol for a 2^n-to-1-line multiplexer is shown in Fig. 5.32.
Typically an enable, or strobe, line is also included to provide greater flexibility as in
the case of decoders. The multiplexer shown in Fig. 5.32 is enabled by applying a
logic-1 to the E input terminal. Some commercial multiplexers require a logic-0 for en-
abling. In such a case an inversion bubble appears in the symbol at the E input terminal.
A realization of a 4-to-1-line multiplexer is given in Fig. 5.33 along with its
compressed truth table and symbol. The X’s in the compressed truth table denote ir-
relevant, i.e., don't-care, conditions. As shown in the figure, each data input line I_i
goes to its own and-gate. The select lines are used to uniquely select one of the and-
gates. Thus, if the multiplexer is enabled, then the output corresponds to the value
on the data input line of the selected and-gate. As in the case of decoders, the 0-1
combinations on the select lines are regarded as binary numbers. The decimal
equivalents of these numbers determine which data input lines are selected and
serve to identify the corresponding input terminals in the symbol.
Table 5.6 provides an alternate description of the behavior of the 4-to-1-line
multiplexer. This description is frequently referred to as a function table. Here,
rather than listing the functional values on the output lines, the input that appears at
the output is listed for each combination of values on the select lines. Again an X in
the table indicates an irrelevant condition. From either the logic diagram or function
table, an algebraic description of the multiplexer can immediately be written as
f = (I0·S̄1·S̄0 + I1·S̄1·S0 + I2·S1·S̄0 + I3·S1·S0)E     (5.11)
[Figure 5.32: symbol for a 2^n-to-1-line multiplexer with enable, select input lines, data input lines, and a single output. Figure 5.33: a 4-to-1-line multiplexer — logic diagram, compressed truth table, and symbol. Table 5.6: its function table. Figure 5.35: a multiplexer/demultiplexer pair forming a one-bit bus, with source address lines on the multiplexer and destination address lines on the demultiplexer.]
*In Fig. 5.35 the demultiplexer symbol was modified from Fig. 5.28 to emphasize the multiplexer/
demultiplexer arrangement.
By connecting n of the structures shown in Fig. 5.35 in parallel, an n-bit word from any of four source loca-
tions is transferred to any of four destination locations.
Any three-variable Boolean function can be written in the form
f(x,y,z) = f0·x̄ȳz̄ + f1·x̄ȳz + f2·x̄yz̄ + f3·x̄yz + f4·xȳz̄ + f5·xȳz + f6·xyz̄ + f7·xyz     (5.12)
where f_i denotes functional values 0 and 1.* The Boolean expression for a 4-to-1-
line multiplexer was previously written as Eq. (5.11). In an analogous manner to
Eq. (5.11), an 8-to-1-line multiplexer is described by the Boolean expression
f = (I0·S̄2·S̄1·S̄0 + I1·S̄2·S̄1·S0 + I2·S̄2·S1·S̄0 + I3·S̄2·S1·S0 + I4·S2·S̄1·S̄0 + I5·S2·S̄1·S0 + I6·S2·S1·S̄0 + I7·S2·S1·S0)E     (5.13)
[Figure 5.36: (a) an 8-to-1-line multiplexer; (b) its use to realize a general three-variable truth table by placing the functional values f0, ..., f7 on the data input lines.]
*Note that f_i denotes a functional value in this presentation and not an entire function as in other sections
of this book.
If E is assumed to be logic-1, then Eq. (5.13) is transformed into Eq. (5.12) by re-
placing I_i with f_i, S2 with x, S1 with y, and S0 with z. In other words, by placing x, y,
and z on select lines S2, S1, and S0, respectively, and placing the functional values f_i
on data input lines I_i, an enabled 8-to-1-line multiplexer realizes a general three-
variable truth table. This realization is shown in Fig. 5.36b.
As a specific example, consider the truth table of Fig. 5.37a. By placing a logic-
1 on the enable input line, the eight functional values on the eight data input lines of
an 8-to-1-line multiplexer, and connecting the select lines S2, S1, S0 to x, y, z, respec-
tively, the configuration of Fig. 5.37b becomes a realization of the given truth table.
Rather than working from a truth table, one could start with a minterm canoni-
cal formula to obtain a realization with a multiplexer. Since each minterm in an ex-
pression algebraically describes a row of a truth table having a functional value of
1, the realization is obtained by simply applying a 1 input to the I_i line if minterm m_i
appears in the expression and applying a 0 input to the I_i line if m_i does not appear
in the expression. For example, consider the minterm canonical formula
f(x,y,z) = Σm(0,2,3,5)
The realization is obtained by placing x, y, and z on the S2, S1, and S0 lines, respec-
tively, logic-1 on data input lines I0, I2, I3, and I5, and logic-0 on the remaining data
input lines, i.e., I1, I4, I6, and I7. In addition, the multiplexer must be enabled by set-
ting E = 1. This again is the realization shown in Fig. 5.37b.
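The same recipe can be expressed as a short Python sketch (an illustration added here; mux8 and its argument order are conventions of this sketch, not of the text):

# Sketch: realizing f(x,y,z) = Σm(0,2,3,5) with an 8-to-1-line multiplexer.
def mux8(I, s2, s1, s0, enable=1):
    return I[4 * s2 + 2 * s1 + s0] & enable     # the selected data line appears at the output

I = [1 if i in {0, 2, 3, 5} else 0 for i in range(8)]   # I0..I7 = 1,0,1,1,0,1,0,0

# With x, y, z on S2, S1, S0, the multiplexer output reproduces the truth table.
for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert mux8(I, x, y, z) == int(m in {0, 2, 3, 5})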
If at least one input variable of a Boolean function is assumed to be available
in both its complemented and uncomplemented form, or, equivalently, a not-gate
is available, then a 2^n-to-1-line multiplexer can be used to realize any Boolean
function of n + 1 variables, since each data input line can be driven by logic-0,
logic-1, the remaining variable, or its complement. In particular, with E = 1 and the
variables x and y applied to the select lines S1 and S0 of a 4-to-1-line multiplexer,
Eq. (5.11) becomes
f(x,y,z) = I0·x̄ȳ + I1·x̄y + I2·xȳ + I3·xy
[Figure 5.37: (a) truth table of f(x,y,z) = Σm(0,2,3,5); (b) its realization with an 8-to-1-line multiplexer. Figure 5.39: realization of the same function with a 4-to-1-line multiplexer having single-variable functions of z on its data input lines.]
Figure 5.40 Obtaining multiplexer realizations using Karnaugh maps. (a) Cell groupings
corresponding to the data line functions. (b) Karnaugh maps for the I_i subfunctions.
Now consider each term in this expression. The first term, I0·x̄ȳ, corresponds to
those cells in which x = 0 and y = 0. These are the two upper left cells of the Kar-
naugh map in Fig. 5.40 labeled as I0. These two cells can be regarded as a submap
for the z variable as indicated in Fig. 5.40b. Thus, depending upon the 0-1 entries
within this submap, the expression for I0 is readily obtained. In a similar manner,
the second term, I1·x̄y, corresponds to those cells in which x = 0 and y = 1. These
are the two upper right cells of the map. The entries within these cells correspond to
the I1 input. The cells associated with I2 and I3 are obtained in a like manner and are
also shown in Fig. 5.40a.
As an example, again consider the truth table of Fig. 5.37a. The Karnaugh map
is drawn in Fig. 5.41a. For emphasis, the four pairs of cells corresponding to the
data inputs are redrawn as single-variable submaps in Fig. 5.41b. It should be noted
that the axis labels for the I1 and I3 submaps are shown in reverse order to be consis-
tent with the Karnaugh map of Fig. 5.41a. Grouping the 1-cells, the expressions for
the subfunctions are now written. In particular, I0 = z̄, I1 = 1, I2 = z, and I3 = 0.
This again leads to the realization shown in Fig. 5.39. Although submaps were
drawn in Fig. 5.41b, the expressions for the subfunctions are obtained from the
original map by noting the patterns within the appropriate pair of cells. When both
cells contain 0's or 1's, then the subfunctions are 0 or 1, respectively. When one cell
contains a 0 and the other a 1, I_i = z if the 1 occurs in the cell in which z = 1; while
I_i = z̄ if the 1 occurs in the cell in which z = 0.
Figure 5.41 Realization of f(x,y,z) = Σm(0,2,3,5). (a) Karnaugh map. (b) I0, I1, I2, and I3 submaps.
Karnaugh maps can readily handle other assignments of the input variables to
the select lines. For example, Fig. 5.42 illustrates the I_i submaps under two addi-
tional assignments. In Fig. 5.42a, input variable y is applied to select line S1 and
input variable z is applied to select line S0. In Fig. 5.42b, input variable x is applied
to select line S0 and input variable y is applied to select line S1. Depending upon the
assignment, the submaps for functions of the third variable are located differently.
[Figure 5.42: locations of the I_i submaps for the two additional select-line assignments. Figure 5.43: the corresponding 4-to-1-line multiplexer realizations of the truth table of Fig. 5.37a.]
However, in each case, the submaps correspond to the four combinations of values
to the variables on the select lines. Realizations of the truth table of Fig. 5.37a using
the two assignments of Fig. 5.42 are shown in Fig. 5.43.
An 8-to-1-line multiplexer can be used to realize any four-variable Boolean
function. Three of the variables are placed on the select lines. The inputs to the data
lines are then the possible single-variable functions of the fourth variable, namely,
0, 1, the variable, and its complement. Figure 5.44 shows the relationships between
the map cells and the data-line inputs under the assumption that the input variables
w, x, and y are applied to select lines S2, S1, and S0, respectively. In this case, the
eight I_i inputs are determined by pairs of cells associated with the eight combinations
of values of the w, x, and y variables. An example of a four-variable function on a
Karnaugh map, along with the multiplexer realization, is given in Fig. 5.45. Particu-
lar attention should be given to the submaps in which z = 0 corresponds to the right
cell and z = 1 corresponds to the left cell. As in the case of the three-variable
Karnaugh map, it is a simple matter to reinterpret a four-variable map for different
assignments of the input variables to the select lines.
In the above discussion, 2^n-to-1-line multiplexers were used to realize functions
of n + 1 variables. This was achieved by applying functions of a single variable to
the data input lines. By allowing realizations of m-variable functions as inputs to the
data input lines, 2^n-to-1-line multiplexers can be used in the realization of (n + m)-
variable functions. To illustrate this, Fig. 5.46 shows a four-variable Karnaugh map
in which it is assumed that the input variables w and x are applied to the S1 and S0
select inputs, respectively, of a 4-to-1-line multiplexer. This implies that functions
of the y and z variables must appear at the data input lines in the overall realization.
To determine these functions, it is necessary to consider the four cases correspond-
ing to the four assignments of 0's and 1's to the variables on the select lines. As in-
dicated in Fig. 5.46, there are four cells corresponding to wx = 00. These four cells
form the submap for the function at the I0 terminal. Similarly, the input to the I1 ter-
minal is described by the four cells in which wx = 01, the input to the I2 terminal is
described by the four cells in which wx = 10, and the input to the I3 terminal is de-
scribed by the four cells in which wx = 11. By analyzing these submaps, appropri-
ate logic is readily determined for these input terminals.
As an example, consider the Karnaugh map of Fig. 5.47a. Although the four
submaps can be interpreted directly on the Karnaugh map itself, they are redrawn in
Fig. 5.47b to e for clarity. These are two-variable Karnaugh maps where it is as-
Figure 5.47 Realization of the Boolean function f(w,x,y,z) = Σm(0,1,5,6,7,9,13,14). (a) Karnaugh map.
(b) I0 submap. (c) I1 submap. (d) I2 submap. (e) I3 submap. (f) Realization using a 4-to-1-line
multiplexer. (g) Realization using a multiplexer tree.
sumed that the left and right edges are connected. From the four submaps, it imme-
diately follows that
I0 = ȳ
I1 = y + z
I2 = ȳz
I3 = ȳz + yz̄ = y ⊕ z
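These subfunctions can be checked mechanically. The Python sketch below (added for illustration; the names are invented) fixes the select variables w and x and tabulates what remains as a function of y and z.

# Sketch: computing the data-line subfunctions I_j by fixing the select variables,
# here for f(w,x,y,z) = Σm(0,1,5,6,7,9,13,14) with w, x on S1, S0.
MINTERMS = {0, 1, 5, 6, 7, 9, 13, 14}

def f(w, x, y, z):
    return int(8 * w + 4 * x + 2 * y + z in MINTERMS)

def subfunction(w, x):
    """Truth table of I_j (a function of y and z) for the select combination w x."""
    return [f(w, x, y, z) for y in (0, 1) for z in (0, 1)]   # order: yz = 00, 01, 10, 11

for j, (w, x) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    print("I%d:" % j, subfunction(w, x))
# I0: [1, 1, 0, 0]  ->  I0 = y'
# I1: [0, 1, 1, 1]  ->  I1 = y + z
# I2: [0, 1, 0, 0]  ->  I2 = y'z
# I3: [0, 1, 1, 0]  ->  I3 = y XOR z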
[Figure: general structure of a programmable logic device — n input lines feed buffer/inverters, which drive an and-array generating p product-term lines; these in turn drive an or-array producing the m output lines.]
In addition, these devices produce the necessary drive for the and-array which follows
since, in general, the outputs from these devices serve as inputs to a very large
number of gates. The array of and-gates accepts the n input variables and their
complements and is used to generate a set of p product terms. These product terms,
in turn, serve as inputs to an array of or-gates to realize a set of m sum-of-product
expressions.
In PLDs, one or both of the gate arrays are programmable in the sense that the
logic designer can specify the connections within an array. In this way, PLDs serve
as general circuits for the realization of a set of Boolean functions. Table 5.7 sum-
marizes which arrays are programmable for the various PLDs. In the case of the
programmable read-only memory (PROM) and the programmable array logic
(PAL) devices, only one array is programmable; while both arrays are programma-
ble in the case of the programmable logic array (PLA).
In a programmable array, the connections to each gate can be modified. One
simple approach to fabricating a programmable gate is to have each of its inputs
connected to a fuse as illustrated in Fig. 5.50a. In this figure, the gate realizes the
product term abcd. Assume, however, that the product term bc is to be generated.
To do this, the gate is programmed by removing the a and d connections. This is
done by blowing the corresponding fuses. The net result is to have a gate with the
desired connections as illustrated in Fig. 5.50b. It is assumed in this discussion that
an open input to an and-gate is equivalent to a constant logic-1 input and that an
open input to an or-gate is equivalent to a constant logic-0 input.
[Figure 5.50: (a) an and-gate with fuses on all of its inputs, realizing abcd; (b) the same gate after the a and d fuses are blown, realizing bc.]
Figure 5.51 PLD notation. (a) Unprogrammed and-gate. (b) Unprogrammed or-gate.
(c) Programmed and-gate realizing the term ac. (d) Programmed or-gate realizing
the term a + b. (e) Special notation for an and-gate having all its input fuses intact.
(f) Special notation for an or-gate having all its input fuses intact. (g) And-gate with
nonfusible inputs. (h) Or-gate with nonfusible inputs.
[Figure 5.52: general structure of a 2^n × m PROM — n input (address) lines drive an n-to-2^n-line decoder (an and-array with buffer/inverters), whose 2^n word lines feed a programmable or-array (the memory array) with m output (bit) lines.]
A line is associated with each decoder output. These are known as word lines. As is seen in Fig. 5.53a, all 2^n outputs
of the decoder are connected to each of the m gates in the or-array via programmable
fusible links. The n input lines are called the address lines and the m output lines the
bit lines. A PROM is characterized by the number of output lines of the decoder and
the number of output lines from the or-array. Hence, the PROM of Fig. 5.53a is re-
ferred to as a 2^n × m PROM.
The logic diagram of Fig. 5.53a is redrawn in Fig. 5.53b using the PLD nota-
tion introduced in the previous section. Since the and-array is fixed, i.e., not pro-
grammable, connections are shown by junction dots. The fusible connections in the
or-array, however, are shown by crosses since this array is programmable.
The realization of a set of Boolean expressions using a decoder and or-gates
was discussed in Sec. 5.4. The very same approach is applicable in using a PROM
since a PROM is a device that includes both the decoder and or-gates within the
same network. Given a set of Boolean expressions in minterm canonical form or a
set of Boolean functions in truth table form, it is only necessary to determine which
programmable links of a PROM to retain and which to open. The programming of
the PROM is then carried out by blowing the appropriate fuses. PROMs are typi-
cally used for code conversions, generating bit patterns for characters, and as
lookup tables for arithmetic functions.
As a simple example of using a PROM for combinational logic design, con-
sider the Boolean expressions
f1(x2,x1,x0) = Σm(0,1,2,5,7)
f2(x2,x1,x0) = Σm(1,2,4,6)
The corresponding truth table is given in Fig. 5.54a. Since these are functions of
three input variables, a PROM having a 3-to-8-line decoder is needed. In addi-
tion, since there are two functions being realized, the or-array must consist of
two gates. Hence, an 8 × 2 PROM is needed for the realization. The realization is
shown in Fig. 5.54b using the PLD notation. A blown fusible link on the input of
an or-gate is equivalent to a logic-0 input. It should be emphasized that this ex-
ample is for illustrative purposes only. From a practical point of view, PROMs
are intended for combinational networks having a large number of inputs and
outputs.
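Viewed behaviorally, the programmed PROM is just a lookup table. The following Python sketch (an illustration added here, not the author's code) models the 8 × 2 PROM of this example.

# Sketch: the address x2 x1 x0 selects a word line, and the stored 2-bit word f1 f2 is read out.
F1 = {0, 1, 2, 5, 7}            # minterms of f1
F2 = {1, 2, 4, 6}               # minterms of f2

# "Program" the or-array: one 2-bit word per word line (addresses 0..7).
memory = [(int(a in F1), int(a in F2)) for a in range(8)]

def read(x2, x1, x0):
    return memory[4 * x2 + 2 * x1 + x0]

print(read(1, 0, 0))            # (0, 1): the word stored at address 100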
[Figure 5.53: a 2^n × m PROM. (a) Logic diagram with the fixed and-array (decoder) and the programmable or-array. (b) The same structure redrawn in PLD notation.]
Figure 5.54 Using a PROM for logic design. (a) Truth table. (b) PROM realization.
It may seem strange that the structure of Fig. 5.52, as a logic-design device, is
called a read-only memory. Read-only memory devices were originally developed
to store permanent data in a digital system. In these devices each piece of data,
called a word, is accessible by specifying an address.
To see how the structure of Fig. 5.52 is viewed as a memory device, again con-
sider Fig. 5.54. By applying a 3-bit combination to the x0, x1, and x2 lines, precisely
one and-gate in the decoder is selected in the sense that its output line, i.e., word
line, is logic-1. Thus, each input combination is regarded as an address of one of the
word lines. As a consequence of selecting a given word line, a pattern of 0's and
1's, i.e., a word, as determined by the fusible connections to the selected word line,
appears at the output terminals, i.e., the bit lines, of the device. This 0-1 pattern is
considered the word stored at the address associated with the selected word line.
For example, the word stored at address x2x1x0 = 100 in Fig. 5.54 is f1f2 = 01. Fi-
nally, the fact that the connections associated with the fusible links normally cannot
be altered once they are formed makes the term read-only appropriate for this de-
vice. Hence, the realization shown in Fig. 5.54 is a read-only memory storing eight
words, each consisting of 2 bits.
For each additional input line to a PROM, the number of gates in the decoder
and the number of inputs to each gate in the or-array double. This is because all pos-
sible minterms are generated by the decoder and all the minterms appear as inputs
to the gates in the or-array. However, in many applications, not all the minterms are
necessary. In such cases, the and-array is not utilized efficiently. Also, as was seen
in the discussion on minimization, collections of minterms can frequently be re-
placed by a single product term. If the and-array is made programmable so that only
necessary product terms are generated, then its size can be controlled. As is seen in
the next two sections, programmable and-arrays occur in the PLA and PAL devices.
[Figure 5.55: general structure of an n × p × m PLA — a programmable and-array generating p product-term lines from the n input lines and their complements, followed by a programmable or-array driving the m output lines.]
As an example, consider a PLA having 16 inputs, 48 product-term lines, and 8 outputs, i.e., a
16 × 48 × 8 PLA. In this case there are 2^16 = 65,536 minterms; however, provision
is made to realize only 48 product terms. Referring to Fig. 5.55, it should be noted that
both complemented and uncomplemented inputs, for a total of 2n inputs, appear at
each and-gate to provide maximum flexibility in product-term generation.
Since all minterms are generated in a PROM, the realization of a set of Boolean
functions is based on minterm canonical expressions. It is never necessary to mini-
mize these expressions prior to obtaining a realization with a PROM. On the other
hand, in the case of PLAs, depending upon how the fuses are programmed, the and-
gates are capable of generating product terms that are not necessarily minterms. As
a consequence, a realization using a PLA is based on sum-of-product expressions
that may not be canonical. However, what is significant is that the logic designer is
bounded by the number of product terms that are realizable by the and-array. This
implies that it is necessary to obtain a set of expressions in which the total number
of distinct product terms does not exceed the number of gates in the and-array.
Thus, some degree of equation simplification generally is appropriate. Techniques
for minimizing a set of Boolean expressions using the criterion of minimal number
of distinct terms were previously discussed in Chapter 4.
To illustrate the use of a PLA for combinational logic design, consider the
expressions
f1(x,y,z) = Σm(0,1,3,4)
f2(x,y,z) = Σm(1,2,3,4,5)
Assume that a 3 × 4 × 2 PLA is available for the realization of the expressions. Before
continuing, however, the reader should be well aware that this is not a practical appli-
cation of the use of PLAs due to its simplicity, but it does serve the purpose of show-
ing the concept of PLA combinational logic design. It is now noted that the size of the
or-array in the available PLA is sufficient since it has two output or-gates. However,
there are six distinct minterms between the two expressions. A realization based on
the canonical expressions is therefore not possible with the assumed PLA since only
four and-gates appear in the and-array. A formal approach to obtaining a pair of
equivalent expressions, hopefully having at most four distinct terms, is to first estab-
lish the multiple-output prime implicants using the Quine-McCluskey method and
then, using a multiple-output prime-implicant table, to find a multiple-output minimal
sum having the fewest terms as discussed in Secs. 4.12 and 4.13. Of course, for real-
world problems the minimization mechanics is done by specialized software written
for this purpose. However, at this time let us attempt to obtain a solution using simple
observations. When dealing with two output functions, it is known from Chapter 4
that the complete set of multiple-output prime implicants consists of all the prime im-
plicants of the individual functions f1 and f2 as well as the prime implicants of the
product function f1·f2. It was also established in Chapter 4 that there exists a multiple-
output minimal sum consisting of just multiple-output prime implicants. A subset of
the prime implicants of f1 and f1·f2 are used in the multiple-output minimal sum for f1;
while a subset of the prime implicants of f2 and f1·f2 are used in the multiple-output
minimal sum for f2. Figure 5.56a shows the prime implicants of f1, f2, and f1·f2 as they
Figure 5.56 Example of combinational logic design using a PLA. (a) Maps showing the multiple-output prime
implicants. (b) Partial covering of the f1 and f2 maps. (c) Maps for the multiple-output minimal
sum. (d) Realization using a 3 × 4 × 2 PLA.
appear on Karnaugh maps.* There are a total of seven distinct prime implicants. Re-
ferring to the f1 and f1·f2 maps to determine the terms for the minimized f1 expression,
it is now noted that of the four distinct prime implicants in these maps only prime im-
plicant x̄z covers the xyz = 011 1-cell of f1. Similarly, referring to the f2 and f1·f2 maps
to determine the terms for the minimized f2 expression, x̄y is the only prime implicant
of the five distinct prime implicants in these maps that covers the xyz = 010 1-cell of
f2. Hence, these two prime implicants must occur in the multiple-output minimal sum.
Furthermore, it is next noted that prime implicant x̄z, which is being used for f1, can
also be used for f2 to cover the xyz = 001 1-cell. Figure 5.56b shows the covering of
the f1 and f2 maps at this point, along with the incomplete multiple-output minimal
sum having two distinct product terms. From these maps it is immediately seen that
using one additional prime-implicant subcube for each of the functions, as shown in
Fig. 5.56c, results in a multiple-output minimal sum having four distinct terms, i.e.,
f1(x,y,z) = x̄z + ȳz̄
f2(x,y,z) = x̄y + x̄z + xȳ
The corresponding 3 × 4 × 2 PLA realization is shown in Fig. 5.56d.
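A behavioral Python sketch of this programming (added for illustration; the data-structure layout is an assumption of the sketch) shows the four shared product terms feeding the two or-gates.

# Sketch: a 3 x 4 x 2 PLA programmed for f1 = x'z + y'z' and f2 = x'y + x'z + xy'.
def term(x, y, z, literals):
    """literals: dict variable -> required value (an absent variable means its fuse is blown)."""
    values = {"x": x, "y": y, "z": z}
    return int(all(values[v] == req for v, req in literals.items()))

PRODUCT_TERMS = [                 # the and-array: x'z, y'z', x'y, xy'
    {"x": 0, "z": 1},
    {"y": 0, "z": 0},
    {"x": 0, "y": 1},
    {"x": 1, "y": 0},
]
OR_ARRAY = {"f1": [0, 1], "f2": [0, 2, 3]}     # which product terms feed each or-gate

def pla(x, y, z):
    p = [term(x, y, z, lits) for lits in PRODUCT_TERMS]
    return {name: int(any(p[i] for i in rows)) for name, rows in OR_ARRAY.items()}

# Exhaustive check against f1 = Σm(0,1,3,4) and f2 = Σm(1,2,3,4,5).
for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert pla(x, y, z) == {"f1": int(m in {0, 1, 3, 4}), "f2": int(m in {1, 2, 3, 4, 5})}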
Although, in the above example, the final expressions for f1 and f2 could have
been obtained using the prime implicants of the individual functions and ignoring
the product function f1·f2, it should not be concluded that simply minimizing the in-
dividual expressions always results in a multiple-output minimal sum. A second ex-
ample illustrates this point. Consider the expressions
f1(x,y,z) = Σm(0,1,3,5)
f2(x,y,z) = Σm(3,5,7)
Again a realization with a 3 × 4 × 2 PLA is attempted. The Karnaugh maps display-
ing the multiple-output prime implicants are shown in Fig. 5.57a. Using an analysis
similar to the previous example, Fig. 5.57b shows the covering for the multiple-
output minimal sum
f1(x,y,z) = x̄ȳ + x̄z + xȳz
f2(x,y,z) = yz + xȳz
which consists of only four distinct product terms. Hence, a realization using a
3 × 4 × 2 PLA is possible. An alternative covering, shown in Fig. 5.57c, corresponds
to the multiple-output minimal sum
f1(x,y,z) = x̄ȳ + ȳz + x̄yz
f2(x,y,z) = xz + x̄yz
The realization based on the expressions obtained from Fig. 5.57b is shown in
Fig. 5.57d using the PLD notation. It should be noted that a realization would not
be possible with the assumed 3 × 4 × 2 PLA if the expressions were individually
minimized.
*Recall that the minterms of the product function f1·f2 are the minterms common to both f1 and f2.
Figure 5.57 Example of combinational logic design using a PLA. (a) Maps showing the multiple-output prime
implicants. (b) A multiple-output minimal sum covering. (c) Alternative multiple-output minimal
sum covering. (d) Realization using a 3 × 4 × 2 PLA.
[Figure 5.58: (a) an or-array output feeding an exclusive-or-gate whose second input is tied to +V through a pull-up resistor and to ground through a programmable fuse; (b) symbolic representation of the programmable exclusive-or-gate.]
For greater flexibility, PLAs normally make provision for either a true output or a
complemented output. One way in which this is achieved is illustrated in Fig. 5.58a.
The output from each gate in the or-array, f_i, feeds into one input of an exclusive-or-
gate. The other input to the exclusive-or-gate, having a programmable fuse to ground, is
connected to a pull-up resistor as shown in the figure. Assuming positive logic, when
the fuse is left intact, the lower input to the exclusive-or-gate is at ground, which is
equivalent to a logic-0. Since f_i ⊕ 0 = f_i, it follows that the output of the exclusive-or-
gate is the same as the upper input. That is, the output corresponds to the true realiza-
tion of f_i. On the other hand, when the fuse is blown, a positive voltage, i.e., logic-1, is
applied to the lower input of the exclusive-or-gate. Since f_i ⊕ 1 = f̄_i, the net result is
that the output of the exclusive-or-gate corresponds to the complemented realization
of f_i. The symbolic representation of the programmable exclusive-or-gate is given in
Fig. 5.58b. The general structure of a PLA with true or complemented output capabil-
ity is shown in Fig. 5.59.
Now consider the Boolean functions
f1(x,y,z) = Σm(1,2,3,7)
f2(x,y,z) = Σm(0,1,2,6)
The Karnaugh maps of these functions are given in Fig. 5.60. The upper two maps
are used to obtain a multiple-output minimal sum for f1 and f2; while the lower two
maps are used to obtain the multiple-output minimal sum for f̄1 and f̄2. Again as-
sume a realization of these functions using a 3 × 4 × 2 PLA is to be attempted. As
in the previous examples, realizations of functions of this simplicity are not justi-
fied using PLAs. However, the interest here is to illustrate the use of comple-
mented functions. If a 3 × 4 × 2 PLA is to be used, then only four product terms
can be generated. Thus, a realization is not possible using the subcubes of 1-cells
as indicated in the upper two maps of Fig. 5.60. On the other hand, the indicated
Figure 5.59 General structure of a PLA having true and complemented output
capability.
[Figure 5.60: Karnaugh maps of f1 and f2 (upper) and of their complements f̄1 and f̄2 (lower), with the selected subcubes indicated.]
subcubes of the 1-cells for f1 and the subcubes of the 0-cells for f2 in Fig. 5.60 re-
sult in the expressions
f1(x,y,z) = x̄z + x̄y + yz
f̄2(x,y,z) = xȳ + yz
For these two expressions there are only four distinct product terms: x̄z, x̄y, yz, and
xȳ. Thus, the fuses in the and-array and or-array can be programmed for the f1 and
f̄2 expressions. If the 3 × 4 × 2 PLA has provisions for complementing its outputs as
was illustrated in Fig. 5.58, then by leaving the fuse for the f1 output exclusive-
or-gate intact and blowing the fuse for the f2 output, the desired realization is possible.
[Figure 5.61 Realizations using a 3 × 4 × 2 PLA with output complementation. (a) Realization based on f1 and f̄2. (b) Realization based on f̄1 and f̄2.]
This is shown in Fig. 5.61a. It should be noted that f̄2 really occurs at one of
the outputs of the or-array. By programming the corresponding exclusive-or-gate
fuse, its complement, f2, appears at the output of the PLA.
In Fig. 5.60, it is also observed that there are only four distinct product terms in
the expressions for f̄1 and f̄2. Hence, an alternative realization using a 3 × 4 × 2 PLA
with output complementation capability can be based on these expressions. In this
case, both output exclusive-or-gate fuses must be blown. This results in comple-
menting the expressions so that the original functions are realized. The correspond-
ing realization is shown in Fig. 5.61b.
A common way of specifying the connections in a PLA is via the PLA table.
PLA tables for the two realizations of Fig. 5.61 are given in Table 5.8. In general,
the PLA table has three sections for indicating connections: an input section, an out-
put section, and a T/C section. Each product term is assigned a row in the table. The
input section is used to specify the connections between the inputs and the gates in
the and-array, thereby describing the connections needed to generate the product
terms. The input variables are listed across the top of the input section. A 1 entry in
this section indicates that a connection is to exist between the uncomplemented form
of the input variable listed in the column heading and the and-gate associated with
the row. On the other hand, a 0 entry in the input section indicates that a connection
is to exist between the complemented form of the input variable listed in the column
heading and the and-gate associated with the row. Finally, a dash indicates that there
are no connections for the associated variable and the corresponding and-gate.
[Table 5.8: PLA tables for the two realizations of Fig. 5.61.]
The output section of the PLA table is used to specify the connections between
the outputs of the and-gates and the inputs to the or-gates. The column headings
correspond to the functions being realized. Here a 1 entry indicates that a connec-
tion is to exist between the and-gate associated with the row and the or-gate associ-
ated with the column. A dash entry in the output section indicates that the and-gate
associated with the row is not connected to the or-gate associated with the column.
The T/C section indicates how the exclusive-or-gate fuses are programmed. A
T entry means that the true output is used, thereby implying the fuse should be kept
intact; while a C entry means that the output should be complemented, implying the
fuse should be blown.
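As an illustration (not part of the original text), the Python sketch below evaluates a PLA table in this format. Since Table 5.8 itself is not reproduced here, the entries used are the ones derived above for the realization of Fig. 5.61a and should be read as assumptions of the sketch.

# Sketch: the input section uses 1/0/None (None standing for a dash), the output
# section marks which or-gate each product term feeds, and the T/C row selects
# the true or complemented form of each output.
PLA_TABLE = {
    "inputs":  [(0, None, 1), (0, 1, None), (None, 1, 1), (1, 0, None)],  # x'z, x'y, yz, xy'
    "outputs": [(1, 0),       (1, 0),       (1, 1),       (0, 1)],        # f1, f2 columns
    "tc":      ("T", "C"),                                                 # f1 true, f2 complemented
}

def evaluate(x, y, z):
    results = []
    for col, tc in enumerate(PLA_TABLE["tc"]):
        terms = []
        for row, lits in enumerate(PLA_TABLE["inputs"]):
            if PLA_TABLE["outputs"][row][col]:
                terms.append(all(v is None or v == bit for v, bit in zip(lits, (x, y, z))))
        value = int(any(terms))
        results.append(value if tc == "T" else 1 - value)   # C: exclusive-or-gate fuse blown
    return tuple(results)

for m in range(8):   # check against f1 = Σm(1,2,3,7) and f2 = Σm(0,1,2,6)
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert evaluate(x, y, z) == (int(m in {1, 2, 3, 7}), int(m in {0, 1, 2, 6}))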
The above examples were contrived so that multiple-output minimal expres-
sions were required to obtain the desired PLA realizations. However, PLAs are avail-
able in a variety of sizes. Nothing is gained by performing minimization if the mini-
mized and nonminimized expressions result in using the same size PLA. PLAs are
intended to provide for convenient realizations. For this reason, complete minimiza-
tion becomes a secondary consideration when obtaining a PLA realization, since no
simplification or only slight simplification of expressions may be sufficient for a real-
ization using a PLA of a specified size. For example, simply minimizing the individ-
ual expressions and making use of any common terms might be sufficient to obtain
an efficient realization without the need for determining the multiple-output minimal
sum that involves the prime implicants of the product functions. In Chapter 8, PLAs
are used without regard to determining multiple-output minimal sums. It will be seen
that the networks being designed at that time are modeled in a form that immediately
suggests a PLA realization.
Figure 5.62 A simple four-input, three-output PAL device.
As an illustration of logic design using this PAL device, consider the pair of expressions
f1(x,y,z) = Σm(1,2,4,5,7)
f2(x,y,z) = Σm(0,1,3,5,7)
The corresponding Karnaugh maps are drawn in Fig. 5.63a from which the
minimal sums are found to be
f1(x,y,z) = xȳ + xz + ȳz + x̄yz̄
f2(x,y,z) = z + x̄ȳ
To use the illustrative PAL device of Fig. 5.62, a problem occurs with the realiza-
tion of f1 since the minimal expression consists of four product terms, while no
or-gate in this device has more than three inputs. However, a realization is
achievable if the realization is based upon the three expressions
f1 = f3 + ȳz + x̄yz̄
f2 = z + x̄ȳ
f3 = xȳ + xz
This realization is shown in Fig. 5.63b. Here the first two product terms of f1 are
generated as the subfunction f3. The f3 subfunction is then fed back into an input
terminal and combined with the remaining product terms of f1 to produce the de-
sired realization of f1. To realize f2, only two terms need to be generated. Since a
three-input or-gate is used, the third input must correspond to a logic-0 so as not
[Figure 5.63: (a) Karnaugh maps for f1 and f2; (b) realization using the PAL device of Fig. 5.62 with the f3 output fed back.]
to affect the f2 output. This is achieved by keeping all the fuses intact to the and-
gate that serves as the third input to the f2 or-gate. With a variable and its comple-
ment as inputs to an and-gate, the output of the gate is always at logic-0. As was
mentioned in Sec. 5.7, the X in the gate symbol indicates that all its fuses are kept
intact.
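A quick behavioral check of this feedback arrangement can be written in Python (an illustration added here; the particular two-term split chosen for f3 follows the reconstruction given above and is an assumption):

# Sketch: f3 supplies two of the product terms of f1 and is fed back to the f1 or-gate.
def f3(x, y, z):
    return (x & (1 - y)) | (x & z)                      # xy' + xz

def f2(x, y, z):
    return z | ((1 - x) & (1 - y))                      # z + x'y'

def f1(x, y, z):
    return f3(x, y, z) | ((1 - y) & z) | ((1 - x) & y & (1 - z))   # f3 + y'z + x'yz'

for m in range(8):   # check against the original minterm lists
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert f1(x, y, z) == int(m in {1, 2, 4, 5, 7})
    assert f2(x, y, z) == int(m in {0, 1, 3, 5, 7})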
CHAPTER 5 PROBLEMS
5.1 Assume an adder/subtracter of the type shown in Fig. 5.6 is capable
of handling two 5-bit operands. For each of the following set of
unsigned operands, X and Y, and control input, Add/Sub, determine
the output. Check your answers by converting the binary numbers into
decimal.
a. X = 10111, Y = 00110, Add/Sub = 0
b. X= 11010, Y = 01101, Add/Sub = 0
c. X = 11001, Y = 00101, Add/Sub = 1
d. X= 10011, Y = 11010, Add/Sub = 1
5.2 Assume the binary adder/subtracter shown in Fig. 5.6 is to handle signed
binary numbers in which x_{n-1} and y_{n-1} are the sign bits. Two methods were
given in Sec. 2.8 for the detection of an overflow condition, one based on the
sign bits of the operands and the other based on the carries into and from the
sign digit position during addition.
a. Determine the additional logic needed if an overflow condition is to be
detected based on the sign bits of the operands.
b. Determine the additional logic needed if an overflow condition is to be
detected based on the carries into and from the sign digit position during
addition.
5.3 Consider the cascade connection illustrated in Fig. 5.9 of 4-bit carry
lookahead adders to obtain a large parallel adder. For this configuration,
calculate the maximum propagation delay time, assuming each gate
introduces a unit time of propagation delay, for a parallel adder handling
a. 8 bits.
b. 20 bits.
c. 40 bits.
d. n bits where n is divisible by 4.
5.4 Consider the 16-bit adder using carry lookahead generators shown in Fig.
5.10b. Calculate the maximum propagation delay time assuming each gate
introduces a unit time of propagation delay.
5.5 a. Using a 4-bit binary adder, design a network to convert a decimal digit
in 8421 code into a decimal digit in excess-3 code.
b. Using a 4-bit binary adder, design a network to convert a decimal digit
in excess-3 code into a decimal digit in 8421 code.
5.6 Using an approach similar to that for the design of a single decade 8421
BCD adder, design a single decade 8421 BCD subtracter incorporating 4-bit
binary subtracters.
5.7 Using an approach similar to that for the design of a single decade 8421
BCD adder, design a single decade adder in which the operand digits are in
excess-3 code.
5.8 Design a specialized comparator for determining if two n-bit numbers are
equal. To do this, design the necessary 1-bit comparator that can be cascaded
to achieve this task.
5.9 In the design of the 1-bit comparator in Sec. 5.3, conditions A > B, A = B,
and A < B corresponded to GEL = 100, 010, and 001, respectively. Another
approach to the design of a 1-bit comparator is to code the three conditions.
Figure P5.9
One possible code is S1S0 = 10, 00, and 01 for A > B, A = B, and A < B,
respectively. This implies that only two output lines occur from each 1-bit
comparator. However, at the output of the last 1-bit comparator, an
additional network must be designed to convert the end result into terms of
G, E, and L. This approach is illustrated in Fig. P5.9. Design a 1-bit
comparator and output network for this approach.
5.10 Using or-gates and/or nor-gates along with a 3-to-8-line decoder of the type
shown in Fig. 5.18, realize the following pairs of expressions. In each case, the
gates should be selected so as to minimize their total number of input terminals.
a. f1(x2,x1,x0) = Σm(3)
   f2(x2,x1,x0) = Σm(3,6,7)
b. f1(x2,x1,x0) = Σm(0,1,5,6,7)
   f2(x2,x1,x0) = Σm(1,2,3,6,7)
c. f1(x2,x1,x0) = Σm(0,2,4)
   f2(x2,x1,x0) = Σm(2,4)
5.11 Using or-gates and/or nor-gates along with a 3-to-8-line decoder of the type
shown in Fig. 5.18, realize the following pairs of expressions. In each case, the
gates should be selected so as to minimize their total number of input terminals.
a. f1(x2,x1,x0) = ΠM(0,3,5,6,7)
   f2(x2,x1,x0) = ΠM(2,3,4,5,7)
b. f1(x2,x1,x0) = ΠM(0,1,7)
foQ%,X1,Xo) = TLM,5,7)
c. f1(x2,x1,x0) = ΠM(1,2,5)
   f2(x2,x1,x0) = ΠM(0,1,3,5,7)
5.12 Using and-gates and/or nand-gates along with a 3-to-8-line decoder of the
type shown in Fig. 5.22, realize the pairs of expressions of Problem 5.11. In
each case, the gates should be selected so as to minimize their total number
of input terminals.
5.13 Using and-gates and/or nand-gates along with a 3-to-8-line decoder of the
type shown in Fig. 5.22, realize the pairs of expressions of Problem 5.10. In
each case, the gates should be selected so as to minimize their total number
of input terminals.
5.14 Using a 4-to-16-line decoder constructed from nand-gates and having an
enable input E, design an excess-3 to 8421 code converter. Select gates so as
to minimize their total number of input terminals.
5.15 Using two 2-to-4-line decoders of the type shown in Fig. 5.26 along with
any necessary gates, construct a 3-to-8-line decoder.
5.16 Write the condensed truth table for a 4-to-2-line priority encoder with a valid
output where the highest priority is given to the input having the highest
index. Determine the minimal sum equations for the three outputs.
5.17 Repeat Problem 5.16 where the highest priority is given to the input having
the lowest index.
5.18 Figure 5.34 showed the structure of a 16-to-1-line multiplexer constructed
from only 4-to-1-line multiplexers. Other structures are possible depending
upon the type of multiplexers used. Construct a multiplexer tree for a 16-to-
1-line multiplexer
a. Using only 2-to-1-line multiplexers.
b. Using 2-to-1-line and 4-to-1-line multiplexers. (Note: three different
structures are possible.)
c. Using 2-to-1-line and 8-to-1-line multiplexers. (Note: two different
structures are possible.)
5.19 Determine a Boolean expression in terms of the input variables that
corresponds to each of the multiplexer realizations shown in Fig. P5.19.
Figure P5.19
5.20 For each of the following assignments to the select lines of an 8-to-1-line
multiplexer, show the location of the I_i submaps, for i = 0, 1, ..., 7, on a 4-
variable Karnaugh map having the variables w, x, y, and z.
a. x, y, and z on select lines S2, S1, and S0, respectively.
b. w, y, and z on select lines S2, S1, and S0, respectively.
c. y, x, and w on select lines S2, S1, and S0, respectively.
5.21 Realize each of the following Boolean expressions using an 8-to-1-line
multiplexer where w, x, and y appear on select lines S2, S1, and S0,
respectively.
a. f(w,x,y,z) = Σm(1,2,6,7,9,11,12,14,15)
b. f(w,x,y,z) = Σm(2,5,6,7,9,12,13,15)
c. f(w,x,y,z) = Σm(1,2,4,5,8,10,11,15)
d. f(w,x,y,z) = Σm(0,4,6,8,9,11,13,14)
5.22 Repeat Problem 5.21 where x, y, and z appear on select lines S2, S1, and S0,
respectively.
5.23 For the function given by the Karnaugh map in Fig. 5.47a, determine a
realization using a 4-to-1-line multiplexer and external gates if the w and x
variables are applied to the S0 and S1 select lines, respectively.
5.24 Realize the Boolean expression
f(w,x,y,z) = Σm(4,5,7,8,10,12,15)
using a 4-to-1-line multiplexer and external gates.
a. Let w and x appear on the select lines S1 and S0, respectively.
b. Let y and z appear on the select lines S1 and S0, respectively.
5.25 Realize the Boolean expression
f(w,x,y,z) = Σm(0,2,4,5,7,9,10,14)
using a multiplexer tree structure. The first level should consist of two 4-to-
1-line multiplexers with variables w and z on their select lines S1 and S0,
respectively, and the second level should consist of a single 2-to-1-line
multiplexer with the variable y on its select line.
5.26 A shifter is a combinational network capable of shifting a string of 0’s and
1’s to the left or right, leaving vacancies, by a fixed number of places as a
result of a control signal. For example, assuming vacated positions are
replaced by 0's, the string 0011 when shifted right by 1 bit position becomes
0001 and when shifted left by 1 bit position becomes 0110. A shifter to
handle an n-bit string can be readily designed with n multiplexers. Bits from
the string are applied to the data input lines. The control signals for the
various actions are applied to the select input lines. The shifted string
appears on the output lines. Design a shifter for handling a 4-bit string where
Table P5.26 indicates the control signals and the desired actions. Vacated
positions should be filled with 0’s.
[Table P5.26: control signals and the corresponding shift actions.]
5.27 For the PROM realization shown in Fig. P5.27, determine the corresponding
Boolean expressions for the outputs.
Figure P5.27
Chapter 6
Flip-Flops and Simple Flip-Flop Applications
It can be shown that all sequential networks require the existence of feedback. In Sec. 6.1 it
is seen that feedback is present in flip-flop circuits. A flip-flop has two stable condi-
tions. To each of these stable conditions is associated a state, or, equivalently, the
storage of a binary symbol. This chapter is concerned with the structure and opera-
tion of several types of flip-flops and some simple networks, e.g., registers and
counters, that are constructed using them.
6.1 THE BASIC BISTABLE ELEMENT
The basic bistable element has two outputs, Q and Q̄. The Q output signal is called
the normal output; while the Q̄ output is referred to as the complementary output.
When the device is storing a 1, it is said to be in its 1-state or
set. On the other hand, when the device is storing a 0, it is said to be in its 0-state
or reset.
Although the bistable element is normally in one of its two stable conditions,
there is one more equilibrium condition that can exist. This occurs when the two
output signals are about halfway between those associated with logic-0 and logic-1.
Thus, the output is not a valid logic signal. This is known as the metastable state.
However, a small change in any of the internal signal values of the circuit, say, due
to circuit noise, quickly causes the basic bistable element to leave the metastable
state and enter one of its two stable states. Unfortunately, the amount of time a de-
vice can stay in its metastable state, if it should occur, is unpredictable. For this rea-
son, the metastable state should be avoided. To avoid the metastable state, certain
restrictions are placed on the operation of the basic bistable element. This is further
discussed in Sec. 6.3.
The basic bistable element of Fig. 6.1 has no inputs. When power is applied, it
becomes stable in one of its two stable states. It remains in this state until power is
removed. For the circuit to be useful, provisions must be made to force the device
into a particular state. A flip-flop is a bistable device, with inputs, that remains in a
given state as long as power is applied and until input signals are applied to cause
its output to change. It consists of a basic bistable element in which appropriate
logic is added in order to control its state. The process of storing a 1 into a flip-flop
is called setting or presetting the flip-flop; while the process of storing a 0 into a
flip-flop is called resetting or clearing the flip-flop.
The inputs to a flip-flop are of two types. An asynchronous or direct input is
one in which a signal change of sufficient magnitude and duration essentially pro-
duces an immediate change in the state of the flip-flop. In physical circuits, the re-
sponse actually occurs after a very short time delay. This point is elaborated upon
in Sec. 6.3 when the timing of signals is discussed in greater detail. On the other
hand, a synchronous input does not immediately affect the state of the flip-flop, but
rather affects the state of the flip-flop only when some control signal, usually called
an enable or clock input, also occurs. In the next several sections, various input
schemes to the basic bistable element are introduced that result in different types of
flip-flops.
6.2 LATCHES
The storage devices called latches form one class of flip-flops. This class is charac-
terized by the fact that the timing of the output changes is not controlled. That is,
the output essentially responds immediately to changes on the input lines, although
a special control signal, called the enable or clock, might also need to be present.
Thus, the input lines are continuously being interrogated. In Secs. 6.4 and 6.5 flip-
flops in which the timing of the output changes is controlled are studied. In this case,
the inputs are normally sampled and not interrogated continuously.
*Unpredictable behavior will result if both inputs return to 0 simultaneously.
Figure 6.2 SR latch. (a) Logic diagrams. (b) Function table where Q+ denotes the output Q in response to
the inputs. (c) Two logic symbols.
Furthermore, the latch remains reset when the R input returns to 0, since the Q̄ = 1
signal applied to the lower input of the upper nor-gate maintains the outputs Q = 0 and Q̄ = 1.
By a similar argument, if a 1 is applied to the S input and a 0 is applied to the R
input, then the latch becomes set regardless of its present state. That is, the new out-
puts are Q = 1 and Q̄ = 0. This corresponds to the third row of the function table.
Furthermore, the latch remains in the 1-state when the S input returns to 0.
For the three situations just discussed, the outputs Q and Q̄ are complementary.
Consider now the case when S = R = 1. This causes the outputs of both nor-gates to
become 0 as indicated in the function table, and, consequently, they are not comple-
mentary outputs. Difficulty is encountered when the inputs return to 0. If one input
should return to 0 before the other, then the final state of the latch is determined by
the order in which the inputs are changed. In particular, the last input to stay at 1 de-
termines the final state. In the event of both inputs returning to 0 simultaneously, the
device may enter its metastable state. This is a condition that should be avoided as
discussed previously. Eventually, the device becomes stable, but its final state is un-
predictable since it is based on such things as construction differences and thermal
noise. For this reason, along with the fact that the outputs are not complementary, the
S = R = 1 input is frequently regarded as a forbidden input condition.
From the function table of the SR latch it should be noted that a 1 serves as the
activation signal of the device. That is, a 1 on either the S or R input terminal causes
the device to set or reset, respectively. Furthermore, since changes on the S and R
inputs can immediately affect the outputs of the latch, the S and R inputs are re-
garded as asynchronous (or direct) inputs.
Two logic symbols for the SR latch are given in Fig. 6.2c. In the second sym-
bol, the output bubble indicates the inversion of the normal state of the latch. Thus
the output terminal with the bubble corresponds to Q̄.
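As an informal supplement (not part of the original text), the following Python sketch imitates the cross-coupled nor-gate SR latch by iterating the two gate equations until they settle; the function names are invented.

# Sketch: the feedback of the physical circuit is mimicked by repeated evaluation.
def nor(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q=0, q_bar=1):
    """Apply S and R to a latch currently holding (q, q_bar); return the settled outputs."""
    for _ in range(4):                      # a few passes are enough to reach equilibrium
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, qb = sr_latch(1, 0)          # set
print(q, qb)                    # 1 0
q, qb = sr_latch(0, 0, q, qb)   # S = R = 0 holds the stored state
print(q, qb)                    # 1 0
q, qb = sr_latch(0, 1, q, qb)   # reset
print(q, qb)                    # 0 1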
Figure 6.3 An application of the SR latch. (a) Effects of contact bounce. (b) A switch debouncer.
An SR latch used as a switch debouncer and its connections are shown in Fig. 6.3b. Assume positive logic so that +V volts corresponds to
logic-1 and ground to logic-0. By use of the two pull-down resistors, logic-0 values
are ensured at the S and R terminals of the SR latch whenever the center contact of
the switch is not connected to either terminals A or B, i.e., whenever the switch is
open. Thus, when the center contact moves from its lower position to its upper posi-
tion, the SR latch remains in its reset state until the center contact reaches terminal
A, at which time the Q output of the SR latch becomes 1. If the switch now opens, as
a result of contact bounce, then the 0 input to the S and R terminals of the latch
causes the Q and Q outputs to remain unchanged. Hence, by use of the SR latch, the
effect of contact bounce is eliminated. In a similar manner, the effect of contact
bounce is also eliminated when the switch moves from its upper position to its
lower position.
*Unpredictable behavior will result if both inputs return to 1 simultaneously.
Figure 6.4 S̄R̄ latch. (a) Logic diagrams. (b) Function table where Q+ denotes the output Q in response to the
inputs. (c) Two logic symbols.
output of the second nand-gate becomes 0. Thus, if R = 0 and S = 1, then the latch
resets; while if R = 1 and S = 0, then the latch sets. These conditions are described
by the two middle rows of the function table. In either case, when the 0 input re-
turns to 1, the SR latch retains its present state.
Similar to the SR latch, the fourth possible input combination causes difficulty.
In this case, if 0 is applied to both the S and R inputs, then both outputs become 1.
Now if the inputs subsequently return to 1 simultaneously, then unpredictable be-
havior results in a similar way, as was discussed for the SR latch. Thus, the applica-
tion of S = R = 0 is normally not recommended.
Referring to the function table of Fig. 6.4b, it is readily seen that 0 serves to ini-
tiate action in the SR latch. That is, a 0 on the S terminal causes the latch to set;
while a 0 on the R terminal causes it to reset.
Two symbols for the SR latch are shown in Fig. 6.4c. It should be noted that in-
version bubbles appear at the input terminals of the symbols since the latch re-
sponds to 0’s on the inputs.
Figure 6.5 Gated SR latch. (a) Logic diagram. (b) Function table where Q* denotes
the output Q in response to the inputs. (c) Two logic symbols.
activation signals are present. Since the effects of the S and R inputs are depen-
dent upon the presence of an enable signal, these inputs are classified as syn-
chronous inputs.
Two symbols for the gated SR latch are given in Fig. 6.5c. Again, the output
terminal having the bubble corresponds to Q̄.
Figure 6.6 Gated D latch. (a) Logic diagram. (b) Function table where Q* denotes the output
Q in response to the inputs. (c) Two logic symbols.
time t,,, however, the application of the 1 on the set input terminal returns the latch
to predictable behavior after a short propagation delay time.
nored and the Q-output retains the state of the latch just prior to the 1 to 0 change in
the enable signal.
To achieve this operation of a gated latch, constraints are normally placed on the
time intervals between input changes. Consider the times in Fig. 6.10 at which the enable signal C is returned to 0, causing the output to latch onto its current state. However, to guarantee this latching action, a constraint is placed upon the D signal for some minimum time before and after the enable signal goes from 1 to 0. This is shown as the shaded areas in Fig. 6.10. For proper operation, the D signal must not change during this period. The minimum time the D signal must be held fixed before the latching action, t_su, is called the setup time; while the minimum time the D signal must be held fixed after the latching action, t_h, is called the hold time.
Failure to satisfy the setup time and hold time constraints can result in unpre-
dictable output behavior, including metastability. This is illustrated in Fig. 6.11,
where unpredictable behavior occurs when latching is attempted because the signal on the D line changed within the setup time of the gated D latch.
Setup and hold time constraints are very important properties when considering
the behavior of all types of flip-flops. In the next two sections master-slave and
edge-triggered flip-flops are discussed. The need to satisfy setup and hold times in
these types of flip-flops relative to changes in their control signal must be adhered to
for their proper operation.
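As an illustration, a setup/hold check around the falling edge of the enable signal might be modeled as follows. The sketch is behavioral only, and the numerical timing values are assumed, not taken from any particular device.

T_SU = 5.0   # setup time, ns (assumed)
T_H  = 2.0   # hold time, ns (assumed)

def latching_is_safe(d_change_times, enable_fall_time):
    """D must not change within [t_fall - T_SU, t_fall + T_H]."""
    return all(not (enable_fall_time - T_SU <= t <= enable_fall_time + T_H)
               for t in d_change_times)

print(latching_is_safe([10.0, 40.0], enable_fall_time=60.0))  # True: safe
print(latching_is_safe([57.0], enable_fall_time=60.0))        # False: setup violated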
by changes on the information input lines, i.e., the S, R, and D lines. This property is
referred to as transparency. In certain applications this is an undesirable property.
Rather, it is necessary that the output changes occur only coincident with changes
ona control input line. This is particularly the case when it is necessary to sense the
current state of a flip-flop while simultaneously allowing new state information to
be entered as determined by the information lines. The property of having the tim-
ing of a flip-flop response being related to a control input signal is achieved with
master-slave and edge-triggered flip-flops.
A master-slave flip-flop consists of two cascaded sections, each capable of stor-
ing a binary symbol. The first section is referred to as the master and the second
section as the slave. Information is entered into the master on one edge or level of a
control signal and is transferred to the slave on the next edge or level of the control
signal. In its simplest form, each section is a latch.
Figure 6.12 Master-slave SR flip-flop. (a) Logic diagram using gated SR latches.
(b) Flip-flop action during the control signal. (c) Function table where
Q* denotes the output Q in response to the inputs. (d) Two logic
symbols.
This controlling of the output change to be coincident with the change on the con-
trol input line is precisely the property being sought by flip-flops not in the latch
category. The master-slave principle is one way in which this property is achieved.
The behavior of the master-slave SR flip-flop is summarized by the function
table in Fig. 6.12c. The pulse symbol in the C column, _| |_, indicates that the
master is enabled while the control signal is high and that the state of the master is
transferred to the slave, and, correspondingly, to the output of the flip-flop, at the
end of the pulse period. Special attention should be given to the fourth row of the
function table. This row corresponds to the situation of both S and R being 1 when
the control signal goes from high to low. Since the master is a latch, it enters an un-
predictable state, including the possibility of the metastable state. This state value is
then subsequently transferred to the slave. Hence, the output of the master-slave SR
flip-flop itself becomes unpredictable. Such a condition should be avoided. Since the
behavior of master-slave flip-flops constructed from latches is dependent upon the
rising and falling edges of the control signal as well as the period of time in which
the control signal is high, they are also referred to as pulse-triggered flip-flops.
Two logic symbols for the master-slave SR flip-flop are given in Fig. 6.12d. The postponed-output indicator at the output terminals is used to imply that the output change is postponed until the end of the pulse period. For the master-slave flip-flop of Fig. 6.12a, this corresponds to the time when the control signal goes from high to low. Also, as in the case of latches, bubble notation is used to indicate the complementary output Q̄ of the flip-flop in the second logic symbol shown.
Figure 6.13 shows a timing diagram for the input and output terminals of a
master-slave SR flip-flop along with a timing diagram for the output terminals of the
master section of the flip-flop. For simplicity, the finite slopes of the rising and
falling edges of the signals are not shown and the propagation delays are assumed to
be all equal. However, the sequence of events during the rise and fall times of the
control signal as indicated in Fig. 6.12 is still occurring.
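The master-slave action can be illustrated with a behavioral Python sketch built from two gated SR latch models. The sketch is illustrative only; the function names and the input trace are assumptions.

# Minimal sketch of the master-slave SR flip-flop of Fig. 6.12a: the master is a
# gated SR latch enabled while C = 1, and the slave is a gated SR latch enabled
# while C = 0, so the output changes only at the falling edge of C.
def gated_sr(S, R, C, Q):
    if C == 0:            return Q      # latch disabled: hold
    if (S, R) == (0, 0):  return Q      # hold
    if (S, R) == (0, 1):  return 0      # reset
    if (S, R) == (1, 0):  return 1      # set
    raise ValueError("S = R = 1 not allowed")

def master_slave_sr(S, R, C, Qm, Qs):
    Qm = gated_sr(S, R, C, Qm)               # master follows inputs while C = 1
    Qs = gated_sr(Qm, 1 - Qm, 1 - C, Qs)     # slave copies master while C = 0
    return Qm, Qs

Qm = Qs = 0
for S, R, C in [(1, 0, 0), (1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0)]:
    Qm, Qs = master_slave_sr(S, R, C, Qm, Qs)
    print(S, R, C, Qm, Qs)   # Qs changes only after C returns to 0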
Figure 6.14 Master-slave JK flip-flop. (a) Logic diagram using gated SR latches. (b) Function table where Q*
denotes the output Q in response to the inputs. (c) Two logic symbols.
The output of the J-input and-gate is logic-1 and the output of the K-input and-gate is logic-0. Thus, S = 1 and R = 0 at the inputs to the master latch. When the clock goes high, the master is set. The 1-state of the master is then subsequently transferred to the slave when the clock returns to 0. In summary, regardless of its present state when J = 1 and K = 0, the master-slave JK flip-flop enters or remains in its 1-state upon the occurrence of the pulse signal on the control line. This corresponds
to the third row of the function table.
By a similar argument, if J = 0 and K = 1, then the master-slave JK flip-flop
enters or remains in its 0-state after a clock pulse has occurred. This resetting effect
is described by the second row of the function table.
Considering the remaining rows of the function table, the first row indicates
that the master-slave JK flip-flop retains its current state when J = K = 0 during a
clock pulse. Similarly, the last row indicates that whenever the clock is low, i.e.,
C = 0, the state of the flip-flop does not change.
In Fig. 6.14c two symbols are shown for the master-slave JK flip-flop. Again,
the postponed-output indicator is used to symbolize that the output change occurs
coincident with the falling edge of the control signal, i.e., when the control signal
changes from 1 to 0.
For ease of the above analysis, the logic-1 values on the J and K lines were as-
sumed to be applied prior to the application of the clock pulse. In actuality, these
values can occur anytime while the control signal is 1 since the master, being a
latch, is enabled during that time.
A timing diagram illustrating the behavior of a master-slave JK flip-flop is
shown in Fig. 6.15. Again, for simplicity, propagation delays are assumed to be
equal and the finite slopes of the rising and falling edges of the signals are not
shown. In addition, manufacturer’s constraints regarding minimum width of the sig-
nals, i.e., minimum time durations that signals are applied, and setup and hold times
of the information signals relative to the control signal must be adhered to for
proper operation of master-slave flip-flops. It is assumed these constraints are satis-
fied in the timing diagram of Fig. 6.15.
Thus, if the slave latch is in its 1-state, then a logic-1 on the K input line while the
control signal is 1 causes the master latch to reset. This subsequently results in the
slave becoming reset when the control signal returns to 0. An example of this oc-
curred during the second clock pulse in Fig. 6.15. This behavior is known as 0’s
catching. It should be noted that once the master latch is reset by a logic-1 signal on
the K input line, a subsequent logic-1 signal on the J input line during the same pe-
riod in which C = 1 does not cause the master to again become set. This is due to the
fact that since the slave does not change its state until C returns to 0, the feedback
signal from the slave, i.e., Q̄_S = 0, keeps the output of the J-input and-gate at logic-0.
In a similar manner, if the slave is storing a 0, then a logic-1 on the J input line
while the control signal is 1 causes the master latch to be set, which subsequently
results in the setting of the slave upon the occurrence of the falling edge of the con-
trol signal. This behavior occurred during the third clock pulse in Fig. 6.15 and is
known as 1's catching.
In many applications, the 0’s and 1’s catching behavior is undesirable. Hence, it
is normally recommended that the J and K input values should be held fixed during
the entire interval that the master is enabled. To satisfy this constraint, any changes in
the J and K inputs must occur while the control signal is 0. This was done during the
first and fourth clock pulses in Fig. 6.15. The function table of Fig. 6.14b does not ac-
count for 0's and 1's catching but, rather, assumes the J and K inputs are held fixed
during the entire period the control signal is 1. The problem of 0’s and 1’s catching is
also solved by the use of another class of flip-flops called edge-triggered flip-flops.
This class of flip-flops is studied in Sec. 6.5. Alternatively, a variation of the master-
slave flip-flop, called the master-slave flip-flop with data lockout, is available that is
not subject to 0’s and 1’s catching. This variation also is discussed in the next section.
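The 1's-catching behavior can be reproduced with a behavioral sketch of the master-slave JK flip-flop. The model below is illustrative only and assumes the usual feedback connections S = J·Q̄_S and R = K·Q_S to the master.

# Minimal sketch of 1's catching in the master-slave JK flip-flop of Fig. 6.14a.
# The master sees S = J*Qs', R = K*Qs and is enabled while C = 1; the slave
# copies the master when C returns to 0.
def gated_sr(S, R, C, Q):
    if C == 0 or (S, R) == (0, 0): return Q
    return 1 if S == 1 else 0      # (1,1) never occurs in this trace

def ms_jk(J, K, C, Qm, Qs):
    Qm = gated_sr(J & (1 - Qs), K & Qs, C, Qm)
    Qs = gated_sr(Qm, 1 - Qm, 1 - C, Qs)
    return Qm, Qs

# Slave holds 0; a brief J = 1 glitch while C = 1 "catches" a 1 in the master
# even though J has returned to 0 before the falling edge of the clock.
Qm = Qs = 0
for J, K, C in [(0, 0, 1), (1, 0, 1), (0, 0, 1), (0, 0, 0)]:
    Qm, Qs = ms_jk(J, K, C, Qm, Qs)
print(Qs)   # 1 -- the flip-flop sets although J was 0 at the clock edge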
Figure 6.16 Master-slave D flip-flop. (a) Logic diagram using a master-slave SR flip-
flop. (b) Two logic symbols.
Figure 6.17 Master-slave T flip-flop. (a) Logic diagram using a master-slave JK flip-flop. (b) Function table
where Q* denotes the output Q in response to the inputs. (c) Two logic symbols.
changes state, or toggles, with each control pulse if T = 1 and retains its current
state with each control pulse if T = 0. This is called a master-slave T flip-flop.
The function table of the master-slave T flip-flop and its logic symbols are given
in Fig. 6.17b-c.
Figure 6.18 Positive-edge-triggered D flip-flop. (a) Logic diagram. (b) Function table where Q* denotes the
output Q in response to the inputs. (c) Two logic symbols.
positive edge of the control input has the effect of sampling the D input line. This is
indicated in the function table by the ↑ symbol. At all other times, including the
time while the clock is at 1, the D input is inhibited and the state of the flip-flop can-
not change.
To see how the positive-edge-triggered D flip-flop operates, consider the logic
diagram in Fig. 6.18a. Nand-gates 5 and 6 serve as an SR latch whose behavior was
previously described by the function table in Fig. 6.4b. Thus, as long as S = R = 1,
the state of the latch cannot change; while whenever either S or R is 0, but not both,
the latch sets or resets, respectively.
Assume the control input, i.e., clock, C, is 0. Regardless of the input at D, the outputs of nand-gates 2 and 3 are 1. These signals are applied to the SR output latch, causing it to hold its current state. Now assume that D is also 0. This holds the output of gate 4 at 1. In turn, the output of gate 1 is 0 since the outputs of gates 2 and 4 are 1's. When the clock goes from 0 to 1, i.e., the positive edge of the control signal, all three inputs to gate 3 become 1, causing the output of the gate to change to 0. Meanwhile, the output of gate 2, S, remains at 1 since the output of gate 1 is still 0. The 0 on the R line and the 1 on the S line cause the SR latch to enter or remain in its reset state, i.e., Q = 0 and Q̄ = 1. In addition, the output of gate 3, which is currently 0, is also fed back as an input to gate 4. This now keeps the output of gate 4 at 1, and any subsequent changes in the D input while C is 1 have no effect upon the output of gate 4 and, correspondingly, gate 1. Thus, after the occurrence of
the positive edge of the clock signal when D = 0, the flip-flop is in its 0-state and
any changes in the D input are inhibited even though the clock is 1.
Again assume C = 0, but now let D = 1. As before, the outputs of gates 2 and 3 are 1, causing the SR latch to hold its current state. However, the D = 1 input causes the output of gate 4 to be 0, and this output, in turn, causes the output of gate 1 to be 1. Now when the clock changes to 1, both inputs to gate 2 are 1 and, consequently, its output, S, becomes 0. Since the output of gate 4 is 0, the output of gate 3, R, remains at 1. The S = 0 and R = 1 condition results in the setting of the SR latch consisting of gates 5 and 6. The 0 output from gate 2 serves as an input to both gates 1 and 3 that, in turn, guarantees that their outputs remain at 1. Thus, if D should subsequently change from 1 to 0 while the clock is 1, causing the output of gate 4 to change, then the outputs of gates 1 and 3 do not change. Therefore, once the positive edge of the clock has occurred, changes in the D input while C = 1 have no effect upon the state of the flip-flop.
In summary, only upon the occurrence of the positive edge of the clock signal
does the flip-flop respond to the value of the D input. Once the new output state is
established, changes in the D input while C = 1 are ineffectual. When the clock sig-
nal returns to 0, both S and R become 1, and the SR latch retains the state entered as
a consequence of sampling the D input by the positive edge of the control signal.
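The gate-level operation just traced can be checked with a small Python sketch. The interconnection used below is the standard six-nand-gate arrangement and is assumed to correspond to Fig. 6.18a; the gate numbering follows the discussion above.

def nand(*xs):
    return 0 if all(xs) else 1

def dff_settle(D, C, n):
    """n = [n1, n2, n3, n4, Q, Qbar]; iterate until the gate outputs settle."""
    for _ in range(10):
        n1 = nand(n[1], n[3])        # gate 1: inputs from gates 2 and 4
        n2 = nand(n[0], C)           # gate 2: the S line of the output latch
        n3 = nand(n[1], C, n[3])     # gate 3: the R line of the output latch
        n4 = nand(n[2], D)           # gate 4: inputs from gate 3 and D
        Q  = nand(n[1], n[5])        # gate 5
        Qb = nand(n[2], n[4])        # gate 6
        new = [n1, n2, n3, n4, Q, Qb]
        if new == n:
            break
        n = new
    return n

n = dff_settle(D=1, C=0, n=[0, 1, 1, 1, 0, 1])   # clock low, D = 1
n = dff_settle(D=1, C=1, n=n)                    # positive edge: Q becomes 1
print(n[4], n[5])                                 # 1 0
n = dff_settle(D=0, C=1, n=n)                    # D changes while C = 1: no effect
print(n[4], n[5])                                 # still 1 0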
Two logic symbols for the positive-edge-triggered D flip-flop are shown in
Fig. 6.18c. Since the outputs of the flip-flop respond essentially immediately to
the positive edge of the control signal, postponed-output indicators do not appear.
To signify that the output change can only occur as a consequence of the transi-
tion of the control signal, a triangular symbol, called the dynamic-input indicator,
is used at the control input of the logic symbol.
Figure 6.19 shows a timing diagram for the positive-edge-triggered D flip-flop.
For simplicity, the finite slopes of the rising and falling edges of the signals are not
shown and all propagation delays are assumed to be equal. Indicated in Fig. 6.19 are
the setup, t_su, and hold, t_h, times with respect to the triggering edge of the control
signal that need to be satisfied. During these times the D input must not change;
otherwise, an unpredictable output, including the metastable state, is possible.
Figure 6.21 Positive-edge-triggered D flip-flop with asynchronous inputs. (a) Logic diagram.
(b) Function table where Q* denotes the output Q in response to the inputs. (c) Two logic
symbols.
Referring to the function table, the first two rows indicate the fact that a 0 on just
the PR or CLR input lines causes the flip-flop to set or reset asynchronously, i.e., re-
gardless of the values on the D and C lines as denoted by crosses in the D and C
columns. The third row corresponds to the situation when both the PR and CLR inputs
are simultaneously active. This condition is not recommended since unpredictable be-
havior results if both asynchronous inputs return to 1 simultaneously. Only when both
asynchronous inputs are 1's does the flip-flop behave as a positive-edge-triggered D
flip-flop. This corresponds to the last four rows of the function table.
Again consider the logic diagram of Fig. 6.21a. If the asynchronous lines are
removed, then the logic diagram of Fig. 6.18a results. This is analogous to having
1’s on both the PR and CLR input lines. In this case the operation of the device is as
discussed previously for the positive-edge-triggered D flip-flop.
If PR = 0 and CLR = 1 while the control signal C is 0, then the output of nand-
gate 5 becomes 1 and the output of nand-gate 6 becomes 0. Thus, the SR latch por-
tion of the flip-flop is forced into the 1-state, i.e., to be set. Similarly, if PR = 1 and
CLR = 0 is applied, then the SR latch portion of the flip-flop is forced into the
0-state, i.e., to be reset. The PR and CLR inputs are also applied to nand-gates 1, 2,
and 4. This is done to ensure the effect of an asynchronous input on the flip-flop
outputs while the control signal C is 1. That is, if either PR or CLR becomes 0 while
the clock is 1, then the flip-flop accordingly responds immediately.
Although the above discussion was concerned with asynchronous inputs in a
positive-edge-triggered D flip-flop, asynchronous inputs also may occur in negative-
edge-triggered D flip-flops as well as in the other types of edge-triggered flip-flops
that are discussed shortly. In addition, asynchronous inputs also occur in pulse-
triggered flip-flops. Occasionally, however, only one asynchronous input appears in
commercial flip-flops.
Figure 6.22 Positive-edge-triggered JK flip-flop. (a) Logic diagram. (b) Function table where Q* denotes the
output Q in response to the inputs. (c) Two logic symbols.
Figure 6.23 Positive-edge-triggered T flip-flop. (a) Logic diagram. (b) Function table. (c) Two logic symbols.
Figure 6.24 Master-slave JK flip-flop with data lockout. (a) Logic diagram. (b) Two
logic symbols.
Symbols for the master-slave JK flip-flop with data lockout are given in
Fig. 6.24b. The dynamic-input indicator is used since the information input lines
are sampled on the positive edge of the control signal. Postponed-output indica-
tors appear in the symbols since the output change is delayed as a consequence
of the master-slave configuration.
inputs, the application of a control signal causes the flip-flop to change to the next
state Q+.
The algebraic description of the next-state table of a flip-flop is called the char-
acteristic equation of the flip-flop. This description is easily obtained by construct-
ing the Karnaugh map for Q+ in terms of the present state and information input
variables. An example of such a Karnaugh map for an SR flip-flop is shown in Fig.
6.25a. For the purpose of this map, the inputs that cause an undefined output are re-
garded as don’t-cares since these inputs are assumed to not occur. From the Kar-
naugh map of Fig. 6.25a, the characteristic equation
Q+ = S + R̄Q
is obtained.
Table 6.2 Flip-flop next-state tables. Q denotes the current state and Q+ denotes the resulting state as a consequence of the information inputs and the control signal. (a) SR flip-flop. (b) D flip-flop. (c) JK flip-flop. (d) T flip-flop.

(a) SR flip-flop:
  S R Q | Q+
  0 0 0 | 0
  0 0 1 | 1
  0 1 0 | 0
  0 1 1 | 0
  1 0 0 | 1
  1 0 1 | 1
  1 1 0 | -   (inputs not allowed)
  1 1 1 | -   (inputs not allowed)

(b) D flip-flop:
  D Q | Q+
  0 0 | 0
  0 1 | 0
  1 0 | 1
  1 1 | 1

(c) JK flip-flop:
  J K Q | Q+
  0 0 0 | 0
  0 0 1 | 1
  0 1 0 | 0
  0 1 1 | 0
  1 0 0 | 1
  1 0 1 | 1
  1 1 0 | 1
  1 1 1 | 0

(d) T flip-flop:
  T Q | Q+
  0 0 | 0
  0 1 | 1
  1 0 | 1
  1 1 | 0
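As an informal check, these next-state tables agree with the characteristic equations Q+ = S + R̄Q, Q+ = D, Q+ = JQ̄ + K̄Q, and Q+ = T ⊕ Q (the latter three are the equations asked for in Problem 6.18). The short Python sketch below verifies the SR, JK, and T cases exhaustively; it is illustrative only and not part of the text's development.

from itertools import product

for S, R, Q in product((0, 1), repeat=3):
    if S == 1 and R == 1:
        continue                          # inputs not allowed for the SR flip-flop
    table = 1 if (S, R) == (1, 0) else (0 if (S, R) == (0, 1) else Q)
    assert (S | ((1 - R) & Q)) == table   # Q+ = S + R'Q

for J, K, Q in product((0, 1), repeat=3):
    table = {(0, 0): Q, (0, 1): 0, (1, 0): 1, (1, 1): 1 - Q}[(J, K)]
    assert ((J & (1 - Q)) | ((1 - K) & Q)) == table   # Q+ = JQ' + K'Q

for T, Q in product((0, 1), repeat=2):
    assert (T ^ Q) == (1 - Q if T else Q)             # Q+ = T xor Q
print("characteristic equations agree with Table 6.2")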
For many of the networks encountered in this book, it is necessary that the out-
put changes occur coincident with the changes on the control input line. Either
edge-triggered or pulse-triggered flip-flops can be used for this purpose. Thus, they
are categorically referred to as simply clocked flip-flops.
6.7 REGISTERS
In the remaining sections of this chapter, attention is turned to some simple applica-
tions involving clocked flip-flops. These applications are examples of sequential
networks. Sequential networks are formally discussed in the next three chapters of
this book. However, the intention at this time is to illustrate the use of clocked flip-
flops as network devices. Sequential networks, unlike combinational networks, pos-
sess a memory property. This can be achieved with flip-flops since they have the ca-
pability of storing the symbols 0 and 1, whether they correspond to the binary digits
or the logic values.
A register is simply a collection of flip-flops taken as an entity. The basic func-
tion of a register is to hold information within a digital system so as to make it
available to the logic elements during the computing process. However, a register
may also have additional capabilities associated with it.
Since a register consists of a finite number of flip-flops and since each flip-flop
is capable of storing a 0 or a 1 symbol, there are only a finite number of 0-1 combi-
nations that can be stored in a register. Each of these combinations is known as the
state or content of the register.
Registers that are capable of moving information positionwise upon the occur-
rence of a clock signal are called shift registers. These registers are normally classi-
fied by whether they can move the information in one or two directions, i.e., unidi-
rectional or bidirectional.
The manner in which information is entered into and outputted from a register is
another way in which they are categorized. There are two basic ways in which these
transfers are done: serially or in parallel. When information is transferred in a paral-
lel manner, all the 0-1 symbols that comprise the information are handled simultane-
ously as an entity in a single unit of time. Such information transfers require as many
lines as symbols being transferred. On the other hand, the serial handling of informa-
tion involves the symbol-by-symbol availability of the information in a time se-
quence. These information transfers only require a single line to perform the transfer.
Thus, there are four possible ways registers can transfer information: serial-in/
serial-out, serial-in/parallel-out, parallel-in/parallel-out, and parallel-in/serial-out.
Figure 6.26 illustrates the serial-in, serial-out unidirectional shift register con-
structed from positive-edge-triggered D flip-flops. The Q output of each flip-flop is
connected to the D input of the flip-flop to its right. The control inputs of all the flip-
flops are connected together to a common synchronizing signal called the clock.
Thus, upon the occurrence of a positive edge of the clock signal, the content of each
flip-flop is shifted one position to the right. The content of the leftmost flip-flop
after the clock signal depends upon the signal value on the serial-data-in line, and
the content of the rightmost flip-flop prior to the clock signal is lost. The output
from the shift register occurs at the rightmost flip-flop on the serial-data-out line.
Figure 6.26 Serial-in, serial-out unidirectional shift register.
For the register of Fig. 6.26, if the initial content of the four flip-flops is 1011 and a
logic-0 is applied to the serial-data-in line prior to the positive edge of the clock sig-
nal, then the content of the register becomes 0101 after the positive edge of the
clock signal. The signal value that is shifted in, i.e., the logic-0, becomes available
as an output on the serial-data-out line after four clock pulses.
In some applications, the information within a register must be preserved, but
only a reorientation of the information is desired. To achieve this, the serial-data-out
line of Fig. 6.26 is connected to the serial-data-in line. In this way the content of the
register is again shifted one position to the right upon the occurrence of each clock
signal, but the state of the leftmost flip-flop is replaced by the state of the rightmost
flip-flop. For example, again assume the initial content of the register is 1011, but the
output of the rightmost flip-flop is connected to the input of the leftmost flip-flop.
Then after the occurrence of the positive edge of the clock signal, the register contains
1101. Shift registers having this type of connection are called circular shift registers.
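Both the ordinary and the circular shift just described can be captured by a few lines of Python. The sketch is illustrative only; the list-based register model is an assumption.

def shift_right(register, serial_in):
    """One positive clock edge: every bit moves one place to the right."""
    return [serial_in] + register[:-1], register[-1]   # new contents, serial out

reg = [1, 0, 1, 1]
reg, out = shift_right(reg, serial_in=0)
print(reg, out)          # [0, 1, 0, 1] 1  -> contents 0101, rightmost bit shifted out

# Circular (ring) connection: serial-data-out is fed back to serial-data-in.
reg = [1, 0, 1, 1]
reg, _ = shift_right(reg, serial_in=reg[-1])
print(reg)               # [1, 1, 0, 1]  -> contents 1101, as described above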
It is important to note that the flip-flops of a register are subject to a change in
state while they are being interrogated by the next flip-flop in the cascade connec-
tion. That is, a flip-flop is simultaneously being read into while being read by an-
other flip-flop. Thus, edge-triggered or pulse-triggered, i.e., master-slave, flip-flops
are used. Latches are not appropriate in such an application since their outputs are
subject to changes during the entire period in which they are enabled.
The serial-in, parallel-out unidirectional shift register is illustrated in Fig. 6.27.
In this case, outputs are provided from each flip-flop. Once information is shifted
into the register, i.e., serial in, the information is available as a single entity, i.e.,
parallel out, at the flip-flop output terminals. Since information is transferred into
this register serially and, after an appropriate number of shifts, made available in
parallel, this type of register provides for the serial-to-parallel conversion of
information.
The register shown in Fig. 6.28 is used as a parallel-in, serial-out unidirec-
tional shift register. The operation of the register is controlled by the Load/Shift
line. When a logic-0 signal appears on this line, the signals on the parallel-data-in
lines are transferred into the register upon the occurrence of a positive-edge
clock signal. Then, when a logic-1 signal appears on the Load/Shift line, the D flip-
flops become a cascade connection that functions as a unidirectional shift register
providing the serial output. In this way, the register of Fig. 6.28 provides for the
parallel-to-serial conversion of information. By taking the outputs from the indi-
vidual flip-flops, the register functions as a parallel-in, parallel-out unidirectional
shift register. It should be noted that the register illustrated in Fig. 6.28 can also
function as a serial-in, parallel-out unidirectional shift register and as a serial-in,
serial-out unidirectional shift register.
The second general classification of shift registers consists of the bidirectional
shift registers. These types of registers are capable of shifting their contents either
left or right depending upon the signals present on appropriate control input lines.
An example of a bidirectional shift register is shown in Fig. 6.29. This register
is also known as the universal shift register. Depending upon the signal values on
the select lines of the multiplexers, i.e., the mode control lines, the register can re-
tain its current state, shift right, shift left, or be loaded in parallel. Each of these op-
erations is the result of the occurrence of a positive edge on the clock line. In addi-
tion, the register is cleared asynchronously if a logic-0 is applied to the line labeled
CLEAR.
As an illustration of the operation of the universal shift register, according to
the table in Fig. 6.29b the register performs the shift-right operation when the
logic values on the select lines S1S0 of the multiplexers are 01. Under this condi-
tion the I1 input of each multiplexer is connected to its output. Thus, as seen in
Fig. 6.29a, the input to the leftmost D flip-flop is the signal on the serial-input-
for-shift-right line, the input to the second leftmost D flip-flop is the output of
the leftmost D flip-flop, the input to the third leftmost D flip-flop is the output
of the second leftmost D flip-flop, and the input to the fourth leftmost D flip-
flop is the output of the third leftmost D flip-flop. Upon the occurrence of the
positive-edge signal on the clock line, the register shifts its content one position
to the right. The remaining three register operations listed in Fig. 6.29b are easily
verified in a similar manner. A symbol for the universal shift register is given in
Fig. 6.29c.
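A behavioral sketch of the universal shift register, using the mode encoding of Fig. 6.29b, is given below. It is illustrative only; the function and argument names are assumptions.

def universal_shift(q, s1, s0, right_in=0, left_in=0, parallel=None):
    if (s1, s0) == (0, 0): return q[:]                   # hold
    if (s1, s0) == (0, 1): return [right_in] + q[:-1]    # shift right
    if (s1, s0) == (1, 0): return q[1:] + [left_in]      # shift left
    return list(parallel)                                # parallel load

q = [0, 0, 0, 0]
q = universal_shift(q, 1, 1, parallel=[1, 0, 0, 1])   # load 1001
q = universal_shift(q, 0, 1, right_in=0)              # shift right -> 0100
q = universal_shift(q, 1, 0, left_in=1)               # shift left  -> 1001
print(q)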
Registers are available commercially as MSI components. In these circuits,
the control lines for the clock inputs of the flip-flops are connected together and
appropriate logic is included to provide various capabilities, e.g., unidirectional
or bidirectional shifting, and handling ability of the information input and out-
put lines.
(b) Mode control:
  S1 S0 | Register operation
  0  0  | Hold
  0  1  | Shift right
  1  0  | Shift left
  1  1  | Parallel load
Figure 6.29 Universal shift register. (a) Logic diagram. (b) Mode control. (c) Symbol.
6.8 COUNTERS
A counter is another example of a register. Its primary function is to produce a spec-
ified output pattern sequence. For this reason, it is also a pattern generator. This
pattern sequence might correspond to the number of occurrences of an event or it
might be used to control various portions of a digital system. In this latter case, each
pattern is associated with a distinct operation that the digital system must perform.
As in the case of a register, each of the 0-1 combinations that are stored in the
collection of flip-flops that comprise the counter, i.e., the output pattern, is known as
a state of the counter. The total number of states is called its modulus. Thus, if a
counter has m distinct states, then it is called a modulus-m counter or mod-m counter
for short. The order in which the states appear is referred to as its counting sequence.
The counting sequence is often depicted by a directed graph called a state dia-
gram. Figure 6.30 shows a state diagram for a mod-m counter where each node, S_i,
denotes one of the states of the counter and the arrows in the graph denote the order
in which the states occur.
Figure 6.31 Four-bit binary ripple counter. (a) Logic diagram. (b) Timing diagram.
(c) Counting sequence.
Since this is a 4-bit up-counter, its modulus is 2^4 = 16 and its counting sequence is from 0000 to 1111. The output of the counter appears at the Q output terminals of the four flip-flops where the flip-flop output Q_i corresponds to the ith-order bit of the binary number. The input to the counter is a count enable signal and a series of count pulses applied to the flip-flop associated with the lowest-order binary digit. In this way, as long as the count enable signal is logic-1, the Q0 flip-flop changes state on each positive edge of a count pulse. The control input, i.e., C, of the remaining flip-flops is connected to the Q̄ output of its previous-order flip-flop. Thus, when the Q_{i-1} flip-flop changes from its 1-state to its 0-state, thereby causing the Q̄_{i-1} output to change from logic-0 to logic-1, a positive triggering edge occurs at the control input of the Q_i flip-flop, causing it to toggle.
Figures 6.31b-c illustrate the counter's behavior. Although propagation delays are associated with each flip-flop, i.e., the output changes occur after an input change, these delays are not included in the timing diagram for simplicity. The counter is assumed to be initially in its 0000 state and the count enable signal is logic-1. Upon the occurrence of the positive edge of the first count pulse, the Q0 flip-flop changes to its 1-state. Since the Q̄0 output terminal goes from logic-1 to logic-0, flip-flop Q1 is not affected by the input pulse. The state of the counter is now 0001. When the positive edge of the second count pulse arrives, the Q0 flip-flop is again toggled. This time it returns to its 0-state. Furthermore, since the Q̄0 output goes from logic-0 to logic-1, a positive edge appears at the control input of the Q1 flip-flop and causes it to toggle. The change in state of the Q1 flip-flop does not affect the Q2 flip-flop since a negative edge occurs at its control input. Hence, at the end of the second count pulse, the state of the counter is 0010. The third count pulse causes only the Q0 flip-flop to change state, and the count to become 0011. When the positive edge of the fourth pulse occurs, the Q0 flip-flop returns to its 0-state. This causes a positive edge to occur at the Q̄0 terminal. Thus, the Q1 flip-flop is toggled, returning it to its 0-state. In addition, when the Q1 flip-flop changes its state, the Q2 flip-flop is toggled by the logic-0 to logic-1 transition appearing at the Q̄1 output terminal. The counter now stores the binary number 0100. The binary counting sequence continues until the count 1111 is reached. At that time, a count pulse causes the Q0 flip-flop to return to its 0-state. This, in turn, causes the Q1 flip-flop to return to its 0-state. A consequence of this change causes the Q2 flip-flop to return to its 0-state and, finally, this change returns the Q3 flip-flop to its 0-state. Thus, the state of the counter becomes 0000. If any further count pulses are applied to the counter, then it repeats its counting sequence.
The binary counter of Fig. 6.31a is known as a ripple counter since a change in state of the Q_{i-1} flip-flop is used to toggle the Q_i flip-flop. Thus, the effect of a count pulse must ripple through the counter. Ripple counters are also referred to as asynchronous counters. Recalling there is a propagation delay between the input and output of a flip-flop, this rippling behavior affects the overall time delay between the occurrence of a count pulse and when the stabilized count appears at the output terminals. The worst case occurs when the counter goes from its 11···1 state
to its 00···0 state since toggle signals must propagate through the entire length of the counter. For an n-stage binary ripple counter, the worst-case settling time becomes n × t_pd, where t_pd is the propagation delay time associated with each flip-flop.
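The rippling action can be modeled behaviorally as follows. The sketch is illustrative only; it assumes positive-edge-triggered toggle stages clocked by the Q̄ output of the preceding stage, as in Fig. 6.31a, and an assumed propagation delay value.

def ripple_count(q):
    """q = [Q0, Q1, Q2, Q3]; apply one count pulse and return the new state."""
    q = q[:]
    stage = 0
    while stage < len(q):
        q[stage] ^= 1              # this stage toggles
        if q[stage] == 1:          # Q went 0->1, so Q' fell: no edge for next stage
            break
        stage += 1                 # Q went 1->0, Q' rose: next stage toggles too
    return q

q = [0, 0, 0, 0]
for _ in range(4):
    q = ripple_count(q)
print(q)                           # [0, 0, 1, 0]  i.e., Q3 Q2 Q1 Q0 = 0100

# Worst-case settling time: all n stages toggle in sequence (1111 -> 0000).
n, t_pd = 4, 10e-9                 # assumed 10 ns per flip-flop
print(n * t_pd)                    # 4 x t_pd = 40 ns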
the output terminals. In this sense, synchronous counters are faster than asynchro-
nous counters. It is the gate delays and the flip-flop propagation delay in a synchro-
nous counter that determine the rate at which count pulses can be applied. In an
asynchronous counter, the allowable count pulse rate is determined simply by the
first stage of the counter. This implies that an asynchronous counter is very fast rel-
ative to its input. That is, once the first flip-flop changes state it can accept the next
count pulse even though the change has not propagated through the rest of the
counter. However, it is not until the rippling effect is completed that the count is
available for use.
If provisions are made to the synchronous counter structures of Figs. 6.32 or
6.33 so that they are loaded in parallel with an initial binary number prior to the
counting operation, then the mod-2” counter can be used as a mod-m counter where
m < 2". The counter structure of Fig. 6.33 modified to provide for parallel loading is
shown in Fig. 6.34a. JK flip-flops, rather than T flip-flops, are used in this network
to facilitate the handling of the parallel load inputs. Two enable signals are utilized.
One is to allow the parallel loading of the data inputs D0, D1, D2, and D3, and a sec-
ond to provide for counting. Both of these operations are synchronized with the pos-
itive edges of the count pulses. The load function takes precedence over the count
function* so that if a logic-1 is placed on the load enable line, regardless of the signal value on the count enable line, then the signal values on the data input lines, i.e., D0, D1, D2, and D3, are entered into the four flip-flops of the counter upon the occurrence of the positive edge of the count pulse. If a logic-0 is applied to the load enable line and a logic-1 is applied to the count enable line, then the network of Fig. 6.34a behaves as a binary up-counter in the same way as the counter of Fig. 6.33. Finally, a logic-0 applied to both the load enable and count enable lines causes the count pulses to be ignored and the counter to retain its current state since logic-0's appear at the J and K terminals of each flip-flop. A symbol for the counter of Fig. 6.34a is given in Fig. 6.34b.
Figure 6.35a shows how the counter of Fig. 6.34a is converted to function as a
mod-10, i.e., decimal, counter having the counting sequence given in Fig. 6.35b.
The normal counting sequence for the counter of Fig. 6.34a is that of a 4-bit binary
up-counter when enabled with a logic-1 on the count enable input. To limit the
counting sequence to the first 10 binary numbers, an and-gate is used to detect the
Figure 6.34 Four-bit synchronous binary counter with parallel load inputs. (a) Logic diagram. (b) Symbol.
Figure 6.35 Mod-10 counter obtained from the counter of Fig. 6.34a. (a) Logic diagram. (b) Counting sequence 0000, 0001, ..., 1001, 0000, etc.
count of 1001. Starting from the 0000 state, the first occurrence of Q0 = Q3 = 1 causes the output of the and-gate to be logic-1. Since the load function takes precedence over the count function, by connecting the and-gate output to the load enable input the counter is loaded with 0000, i.e., the values on the D_i inputs, upon the next
occurrence of a positive edge of a count pulse. In this way, the counting sequence is
0000, 0001, ... , 1001, 0000, etc.
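The decade action can be illustrated with a small behavioral sketch. It is illustrative only; the integer-based model of the 4-bit counter is an assumption.

def decade_step(count):
    load = (count >> 3) & 1 and count & 1     # and-gate on Q3 and Q0 detects 1001
    return 0 if load else (count + 1) & 0xF   # load takes precedence over count

c = 0
sequence = []
for _ in range(12):
    sequence.append(c)
    c = decade_step(c)
print(sequence)   # 0,1,...,9,0,1 -- the counting sequence 0000 through 1001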
Also incorporated into the counter of Fig. 6.34a is a carry output, CO, in
which a logic-1 appears whenever the counter state is 1111 and the counter is in
its count mode, i.e., when the count enable signal is logic-1 and the load enable
signal is logic-0. This output is used for constructing larger binary counters by
cascading two or more 4-bit binary counters. Figure 6.36 shows the connections
necessary to construct an 8-bit binary counter. When the state of the upper 4-bit
binary counter is 1111, which corresponds to the four least significant binary dig-
its of the 8-bit binary counter, its carry output signal is logic-1. This signal is ap-
plied to the count enable input of the lower 4-bit counter that is used for the four
most significant binary digits of the 8-bit binary counter. In this way, upon the oc-
currence of the positive edge of the next count pulse, the upper 4-bit binary
counter of Fig. 6.36 returns to its 0000 state, while the lower 4-bit binary counter
is incremented by 1.
Many different types of MSI counters are commercially available. These in-
clude counters of both the asynchronous and synchronous types. Commercial coun-
ters may provide for downward counting as well as upward counting.
Figure 6.36 An 8-bit binary counter constructed by cascading two 4-bit binary counters.
  Q_A Q_B Q_C Q_D
   1   0   0   0
   0   1   0   0
   0   0   1   0
   0   0   0   1
   1   0   0   0
  etc.
(b)
Figure 6.37 Mod-4 ring counter. (a) Logic diagram. (b) Counting sequence.
occurrence of each count pulse, the single 1 is shifted to its adjacent flip-flop. As a consequence, a ring counter consisting of n flip-flops has only n states in its counting sequence. Figure 6.37a shows a circular shift-right register. This configuration is capable of serving as a mod-4 ring counter. If it is assumed that the counter is initialized to its Q_A Q_B Q_C Q_D = 1000 state, then the counting sequence given in Fig. 6.37b results.
Although the ring counter is not efficient in the number of flip-flops used, it
provides a decoded output. That is, to detect any particular state in the counting se-
quence, it is only necessary to interrogate the output of a single flip-flop. For exam-
ple, the 0001 state is readily detected by observing the output terminal Q_D. When-
ever a logic-1 value appears at this terminal, the state of the counter is known to be
0001. Similarly, the determination of any other state only requires observing the
output of a single flip-flop.
A variation of the ring counter is the switch-tail counter, also known as the
twisted-ring counter or Johnson counter. This counter is illustrated in Fig. 6.38a
and its counting sequence is given in Fig. 6.38b assuming the counter starts in the
Q_A Q_B Q_C Q_D = 0000 state. In this counter, the complement of the rightmost flip-flop
serves as the input to the leftmost flip-flop in the shift-right register configuration.
                          And-gate
  Q_A Q_B Q_C Q_D         inputs
   0   0   0   0          Q̄_A Q̄_D
   1   0   0   0          Q_A Q̄_B
   1   1   0   0          Q_B Q̄_C
   1   1   1   0          Q_C Q̄_D
   1   1   1   1          Q_A Q_D
   0   1   1   1          Q̄_A Q_B
   0   0   1   1          Q̄_B Q_C
   0   0   0   1          Q̄_C Q_D
   0   0   0   0
Figure 6.38 Mod-8 twisted-ring counter. (a) Logic diagram. (b) Counting sequence.
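The two counting sequences can be generated with a few lines of Python. The sketch is illustrative only; the list-based register model is an assumption.

def ring_step(q):             # circular shift right: QD feeds QA
    return [q[-1]] + q[:-1]

def johnson_step(q):          # complement of QD feeds QA
    return [1 - q[-1]] + q[:-1]

q = [1, 0, 0, 0]              # ring counter initialized to QA QB QC QD = 1000
for _ in range(4):
    print(q)                  # 1000, 0100, 0010, 0001 -- n flip-flops, n states
    q = ring_step(q)

q = [0, 0, 0, 0]              # Johnson counter started in 0000
for _ in range(8):
    print(q)                  # 0000, 1000, 1100, ..., 0001 -- 2n states
    q = johnson_step(q)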
Figure 6.39 Mod-7 twisted-ring counter. (a) Logic diagram. (b) Counting sequence.
Table 6.3 Counting sequence for the synchronous mod-6 counter

  Q1 Q2 Q3
   0  0  0
   0  1  0
   0  1  1
   1  1  0
   1  0  1
   0  0  1
   0  0  0
  etc.
is further explored in the next two chapters. In any event, nonbinary counting se-
quences are often desirable.
At this time a general procedure is developed for designing synchronous coun-
ters having a prespecified output pattern sequence. For illustrative purposes, syn-
chronous mod-6 counters having the counting sequence shown in Table 6.3 are de-
signed using the four types of clocked flip-flops introduced in this chapter.
Figure 6.40 General structure of the synchronous mod-6 counter using clocked JK flip-flops.
from some other source. The current state of the counter is applied to a logic net-
work. The function of the logic network is to generate the appropriate signals for
the J and K terminals of the clocked flip-flops so that the specified next state in
the counting sequence results upon the occurrence of the triggering edge of a
count pulse. What needs to be designed is the appropriate logic network. In this
case, the logic structure of this network can be described by six Boolean expres-
sions, one for each of the six inputs to the three flip-flops, in terms of the
Boolean variables Q1, Q2, and Q3 that correspond to the present state of the
counter. To obtain these expressions, a truth table for the logic network, called an
excitation table, is first developed and then the simplified Boolean expressions
are obtained.
Table 6.4 shows the excitation table for the synchronous mod-6 counter. It is
divided into three sections labeled present state, next state, and flip-flop inputs. At
this point, the first two sections can be completed. The counting sequence is listed
in the present-state section and the desired next state for each present state is en-
tered in the next-state section.
Before the third section can be filled in, it is necessary to consider the terminal
behavior of a clocked JK flip-flop. In general, there are four distinct actions that a
flip-flop can undergo as a consequence of a triggering signal at its control input. In
particular, a flip-flop should remain in its 0-state, a flip-flop should remain in its
1-state, a flip-flop should go from its 0-state to its 1-state, and, finally, a flip-flop
should go from its 1-state to its 0-state. To see how these four actions are achieved,
it is necessary to consider the flip-flop next-state tables previously established in
Table 6.2. Using the table for the JK flip-flop, it is seen that the conditions for a JK
flip-flop to remain in its 0-state are given by the first and third rows where Q = 0 and Q+ = 0. In the first row J = 0, K = 0 and in the third row J = 0, K = 1. Thus, for a JK flip-flop to remain in its 0-state upon the occurrence of a triggering signal on its control input, a logic-0 must appear at its J input terminal but either a logic-0 or a logic-1 may appear at its K input terminal. This is summarized by the first row of the JK flip-flop application table given in Table 6.5, where the dash denotes a don't-care.
Table 6.4 Excitation table for a synchronous mod-6 counter using clocked
JK flip-flops
Present state Next state Flip-flop inputs
Table 6.5 JK flip-flop application table

  Q Q+ | J K
  0 0  | 0 -
  0 1  | 1 -
  1 0  | - 1
  1 1  | - 0
Continuing this analysis, the fifth and seventh rows of the JK flip-flop next-state table shown in Table 6.2 indicate the necessary conditions at the J and K terminals when it is required to have it change from its present 0-state to its 1-state upon the occurrence of a triggering signal. From the fifth row it is seen that this occurs when J = 1, K = 0 and from the seventh row it is seen that this occurs when J = 1, K = 1. Thus, it immediately follows that to change a JK flip-flop from its 0-state to its 1-state, it is necessary that J = 1 and that K can be either logic-0 or logic-1. This condition is given by the second row of Table 6.5.
The remaining two rows of Table 6.5 are again obtained from Table 6.2. From the fourth and eighth rows of the JK flip-flop next-state table, it follows that the action of changing a JK flip-flop from its 1-state to its 0-state requires that K = 1 and that either a logic-0 or a logic-1 appear at the J input terminal. Finally, the second and sixth rows of the JK flip-flop next-state table lead to the last row of the JK application table shown in Table 6.5, denoting the conditions needed for a JK flip-flop to remain in its 1-state. In this case, K = 0 and J is either a logic-0 or a logic-1.
Returning to Table 6.4 for the synchronous mod-6 counter, it is now a simple matter to determine the logic signals that must be applied to the three JK flip-flops in order to produce the present-state to next-state transitions specified in each row. For example, when the present state of the counter is Q1Q2Q3 = 000, its next state is to be Q1+Q2+Q3+ = 010. Flip-flop Q1 must remain in its 0-state. As indicated in Table 6.5, this is achieved by having J1 = 0 and K1 = -. Thus, these become the first two entries in the first row of the flip-flop inputs section of Table 6.4. Similarly, since flip-flop Q2 must go from its 0-state to its 1-state, this is achieved by having J2 = 1 and K2 = - according to Table 6.5. Finally, the last pair of entries in the first row of the mod-6 counter excitation table, J3 = 0 and K3 = -, corresponds to the necessary conditions for flip-flop Q3 to remain in its 0-state. The remaining entries in the flip-flop inputs section are determined row by row in a similar manner, thus completing Table 6.4.
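The row-by-row use of the application table can also be carried out mechanically, as the following illustrative Python sketch shows. It assumes the mod-6 counting sequence of Table 6.3.

JK_APP = {(0, 0): ('0', '-'), (0, 1): ('1', '-'),
          (1, 0): ('-', '1'), (1, 1): ('-', '0')}   # (Q, Q+) -> (J, K)

sequence = ['000', '010', '011', '110', '101', '001']

for i, present in enumerate(sequence):
    nxt = sequence[(i + 1) % len(sequence)]
    inputs = [JK_APP[(int(q), int(qp))] for q, qp in zip(present, nxt)]
    row = '  '.join(f"J{k+1}={j} K{k+1}={kk}" for k, (j, kk) in enumerate(inputs))
    print(present, '->', nxt, ':', row)
# First row printed: 000 -> 010 : J1=0 K1=-  J2=1 K2=-  J3=0 K3=-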
Referring to Fig. 6.40 and Table 6.4, the inputs to the logic network corre-
spond to the present-state section of the excitation table and the outputs from the
logic network correspond to the flip-flop inputs section of the excitation table.
Thus, the first and third sections of the counter's excitation table are really the
truth table for the logic network. Using just these two sections, the six Karnaugh
Figure 6.41 Karnaugh maps for the six flip-flop input functions of the synchronous mod-6 counter, giving the minimal sums J1 = Q2Q3, K1 = Q̄2, J2 = Q̄3, K2 = Q1, J3 = Q2, and K3 = Q̄1.
maps of Fig. 6.41 are drawn. The six maps correspond to the six flip-flop input functions and a cell of the map corresponds to a present state of the counter. Thus, for example, from the first row of Table 6.4, Q1Q2Q3 = 000, entries are made into the upper left cell of each map for the appropriate values of J1, K1, J2, K2, J3, and K3. After considering the remaining five rows of Table 6.4, six of the eight cells of each map have entries. Finally, the two cells Q1Q2Q3 = 100 and 111 correspond to the two states that do not occur in the counting sequence. Hence, dashes are placed in these two cells of each map since these present states
Fig. 6.41. These expressions lead to the logic diagram of Fig. 6.42 for the syn-
chronous mod-6 counter. Although minimal-sum expressions were written, mini-
mal-product expressions could have been obtained instead by grouping the 0’s
and the don’t-cares.
Figure 6.42 Logic diagram of the synchronous mod-6 counter using clocked JK flip-flops.
Table 6.7 Excitation table for a synchronous mod-6 counter using clocked
D flip-flops
Present state          Next state          Flip-flop inputs
This table simply states that whatever next-state value is needed of a clocked D flip-
flop, that logic value should appear at its D input terminal upon the occurrence of a
triggering edge at its control input. Using the information of Table 6.6, the third
section of the mod-6 counter excitation table is completed in a manner analogous to
that done previously for JK flip-flops. This is shown in Table 6.7.* Finally, regard-
ing the first and third sections of Table 6.7 as a truth table, the Karnaugh maps for
the three output functions, D1, D2, and D3, are constructed as shown in Fig. 6.43.
Again don’t-cares occur in the two cells of each map corresponding to the two un-
used states of the counting sequence, i.e., Q1Q2Q3 = 100 and 111. From these maps,
minimal expressions for the logic network are obtained. In this case, a minimal-sum
expression describing the logic preceding each D flip-flop is written beneath its
Karnaugh map. Once the expressions for the logic network are established, the logic
diagram can be drawn.
The above procedure is readily modified when clocked T flip-flops are used for
the counter. As was done previously, the application table for a clocked T flip-flop
is first obtained and then the excitation table for the synchronous mod-6 counter is
constructed. Table 6.8 gives the T flip-flop application table. This table again imme-
diately follows from the T flip-flop next-state table given in Table 6.2 by noting
what logic value should be applied to the T input terminal for each of the four
present-state/next-state combinations. According to the clocked T flip-flop applica-
tion table, a logic-1 is needed at the T input terminal if the flip-flop is to change
state upon the occurrence of a triggering edge at its control input; otherwise, a
logic-0 should occur at the T input terminal. Using Table 6.8, the third section of the
synchronous mod-6 counter excitation table shown in Table 6.9 is completed. From
the first and third sections of Table 6.9, the Karnaugh maps for the synchronous
mod-6 counter with clocked T flip-flops, given in Fig. 6.44, are obtained. Using
these maps, minimal sums are easily written.
*Since D_i = Q_i+, it should be no surprise that the next-state and flip-flop inputs sections of Table 6.7 are
identical.
D3 = Q1 + Q2Q̄3
Table 6.9 Excitation table for a synchronous mod-6 counter using clocked T flip-flops
T3 = Q2 + Q̄1Q3
Finally, consider the design of the synchronous mod-6 counter, having the
counting sequence given in Table 6.3, with clocked SR flip-flops. Table 6.10 gives
the necessary SR flip-flop application table from which the flip-flop inputs section of
the counter excitation table is completed. Table 6.2 is used to construct the SR flip-
flop application table by using the same type of analysis previously used to obtain the
JK flip-flop application table. However, since an SR flip-flop has nonallowable input
combinations, these combinations must not be used in forming Table 6.10. Thus,
when a clocked SR flip-flop is to change from its 0-state to its 1-state, according to
Table 6.2 this is achieved only if S = 1 and R = 0. Table 6.11, the excitation table
for the synchronous mod-6 counter using SR flip-flops, is next constructed. From this
table the Karnaugh maps of Fig. 6.45 are formed and the minimal sums written.
Table 6.11 Excitation table for a synchronous mod-6 counter using clocked SR
flip-flops
CHAPTER 6 Flip-Flops and Simple Flip-Flop Applications 357
S3 = Q2Q̄3          R3 = Q̄1Q3
can also cause the counter to enter one of its initially unused states. Thus, it is of
interest to consider the behavior of a counter under the assumption that the unused
states of a counting sequence occur. A counter in which all the states not included
in the original counting sequence eventually lead to the normal counting sequence
after one or more count pulses are applied to the control inputs, C, is said to be
self-correcting. To avoid having the counter “hang up,” it should always be de-
signed as self-correcting.
Again consider the synchronous mod-6 counter previously designed with JK flip-flops. State Q1Q2Q3 = 100 was not included in the counting sequence. Substituting these values into the flip-flop input equations obtained in Fig. 6.41 for the realization, it is seen that J1 = 0, K1 = 1, J2 = 1, K2 = 1, J3 = 0, and K3 = 0. Consequently, upon the occurrence of the count pulse, flip-flop Q1 resets, flip-flop Q2 toggles, and flip-flop Q3 remains unchanged, with the net result that the next state of the counter is Q1+Q2+Q3+ = 010. Hence, a valid state of the counting sequence is
reached if the initially unused state should occur. In a similar manner, it is easily
checked that Q1Q2Q3 = 111 leads to the valid next state Q1+Q2+Q3+ = 101 in the
counting sequence. Figure 6.46 shows the complete state diagram for the synchro-
nous mod-6 counter whose realization was given in Fig. 6.42. Included are the ef-
fects of the two initially unused states. Since these states lead to the normal count-
ing sequence, the realization is that of a self-correcting counter. In an analogous
manner, it can be shown that the other three realizations of the synchronous mod-6
counter are also self-correcting.
Counters having no unused states are always self-correcting. It is the assignment
to the don’t-cares associated with the unused states when the logic expressions are
obtained that can cause a counter not to be self-correcting. By actually specifying the
next states for each of the unused states in the counting sequence prior to construct-
ing the Karnaugh maps, a self-correcting counter realization can be guaranteed.
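The self-correcting property can also be confirmed behaviorally. The following illustrative sketch steps the two initially unused states using the flip-flop input expressions listed with Fig. 6.41; it is a check, not part of the design procedure itself.

def jk_next(q, j, k):
    return {(0, 0): q, (0, 1): 0, (1, 0): 1, (1, 1): 1 - q}[(j, k)]

def counter_step(q1, q2, q3):
    j1, k1 = q2 & q3, 1 - q2          # J1 = Q2Q3, K1 = Q2'
    j2, k2 = 1 - q3, q1               # J2 = Q3', K2 = Q1
    j3, k3 = q2, 1 - q1               # J3 = Q2,  K3 = Q1'
    return jk_next(q1, j1, k1), jk_next(q2, j2, k2), jk_next(q3, j3, k3)

valid = {(0,0,0), (0,1,0), (0,1,1), (1,1,0), (1,0,1), (0,0,1)}
for state in [(1, 0, 0), (1, 1, 1)]:          # the two initially unused states
    print(state, '->', counter_step(*state))  # (1,0,0) -> (0,1,0); (1,1,1) -> (1,0,1)
    assert counter_step(*state) in valid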
CHAPTER 6 PROBLEMS
6.1 Design a switch debouncer using an SR latch.
6.2 The input signals shown in Fig. P6.2 are applied to the SR latch of Fig. 6.2a
when initially in its 0-state. Sketch the Q and Q̄ output signals. Assume all
timing constraints are satisfied.
Figure P6.2
6.3 The input signals shown in Fig. P6.3 are applied to the SR latch of Fig. 6.4a
when initially in its 0-state. Sketch the Q and Q̄ output signals. Assume all
timing constraints are satisfied.
Figure P6.3
6.4 The input signals shown in Fig. P6.4 are applied to the gated SR latch of Fig.
6.5a when initially in its 0-state. Sketch the Q and Q̄ output signals. Assume
all timing constraints are satisfied.
Figure P6.4
6.5 The input signals shown in Fig. P6.5 are applied to the gated D latch of Fig.
6.6a when initially in its 0-state. Sketch the Q and Q̄ output signals. Assume
all timing constraints are satisfied.
Figure P6.5
6.6 The input signals shown in Fig. P6.6 are applied to the master-slave SR flip-
flop of Fig. 6.12a when initially in its 0-state. Sketch the QM, Q̄M, QS, and Q̄S
output signals. Assume all timing constraints are satisfied.
Figure P6.6
6.7 The input signals shown in Fig. P6.7 are applied to the master-slave JK flip-
flop of Fig. 6.14a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.
Figure P6.7
6.8 The input signals shown in Fig. P6.8 are applied to the master-slave D flip-
flop of Fig. 6.16a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.
Figure P6.8
6.9 The input signals shown in Fig. P6.9 are applied to the master-slave T flip-
flop of Fig. 6.17a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.
Figure P6.9
6.10 A logic diagram and function table for a proposed gated JK latch is shown in
Fig. P6.10. Discuss the problems that can be encountered with this network
and under what constraints proper JK flip-flop behavior is achieved.
Figure P6.10
6.11 The input signals shown in Fig. P6.8 are applied to the positive-edge-
triggered D flip-flop of Fig. 6.18a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.12 The input signals shown in Fig. P6.7 are applied to the positive-edge-
triggered JK flip-flop of Fig. 6.22a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.13 The input signals shown in Fig. P6.9 are applied to the positive-edge-
triggered T flip-flop of Fig. 6.23a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.14 The input signals shown in Fig. P6.7 are applied to the master-slave JK flip-
flop with data lockout of Fig. 6.24a when initially in its 0-state. Sketch the
QM and QS output signals. Assume all timing constraints are satisfied.
6.15 The positive-edge-triggered D flip-flop shown in Fig. P6.15a has the signals
of Fig. P6.15 applied when initially in its 0-state. Sketch the Q output
signal. Assume all timing constraints are satisfied.
Figure P6.15
6.16 Show that for the positive-edge-triggered D flip-flop of Fig. 6.21a, it resets
when PR = 1 and CLR = 0 and that the C and D inputs have no effect on its
behavior. Show that the flip-flop sets when PR = 0 and CLR = 1.
6.17 Show that the master-slave configuration involving two gated D latches as
given in Fig. P6.17 is best described by the positive-edge-triggered D flip-
flop function table of Fig. 6.18.
Figure P6.17
6.18 Verify the characteristic equations for the JK, D, and T flip-flops given in
Fig. 6.25b by constructing the appropriate Karnaugh maps and obtaining the
minimal sums.
6.19 Assume the shift register of Fig. 6.26 initially contains 1101. What is the
content of the register after the positive edge of each clock signal if the
values occurring on the serial-data-in line are 1, 1, 0, 1, 0, 1, and 0 in that
order?
6.20 Modify the register of Fig. 6.26 so that it is synchronously cleared. That is,
incorporate a Clear/shift control input line in which, upon the occurrence of
the clock signal, all the flip-flops enter their 0-state when the control signal is
0 and the register behaves as a shift-right register when the control signal is 1.
6.21 Design a register, incorporating four multiplexers and four positive-edge-
triggered D flip-flops, having the behavior specified in Table P6.21.
Table P6.21

Select lines
S1   S0      Register operation
0    0       Hold
0    1       Synchronous clear
1    0       Complement contents
1    1       Circular shift right
6.22 For a 3-bit binary ripple up-counter similar to the one in Fig. 6.31, draw the
Count pulse, Q0, Q1, and Q2 signals assuming a propagation delay of t for
each flip-flop.
6.23 Design a 4-bit binary ripple up-counter using negative-edge-triggered JK
flip-flops.
6.24 Design a 4-bit binary ripple up-counter using positive-edge-triggered D flip-
flops. Do not include a count-enable line.
6.25 Design a 4-bit binary ripple down-counter using positive-edge-triggered T
flip-flops.
6.26 Using a structure similar to that of Fig. 6.33, design a 4-bit synchronous
binary down-counter.
6.27 a. Using a structure similar to that of Fig. 6.33, design a 4-bit synchronous
binary up/down-counter having a count enable input line and an
up/down input line. When the signal value on the up/down input line is
logic-1, the counter should behave as a binary up counter; when the
signal value on the up/down input line is logic-0, the counter should
behave as a binary down counter.
b. Repeat part (a) using a structure similar to that of Fig. 6.34. The CO output
should be logic-1 when the counter is going down and the counter state is
0000 and when the counter is going up and the counter state is 1111.
6.28 Using the counter of Fig. 6.34, design a mod-5 counter
a. whose counting sequence consists of its first five states, i.e., 0000, 0001,
..., 0100, 0000, etc.
b. whose counting sequence consists of its last five states, i.e., 1011, 1100,
. . . , 1111, 1011, etc.
6.29 Modify the synchronous mod-256 binary counter shown in Fig. 6.36 to
become a synchronous mod-77 binary counter.
6.30 Realize the 4-bit ring counter of Fig. 6.37 using the universal shift register of
Fig. 6.29. Use the parallel load capability of the register to initialize the
counter.
6.31 Realize the 4-bit twisted-ring counter of Fig. 6.38 using the universal shift
register of Fig. 6.29. Use the asynchronous clear capability of the register to
initialize the counter.
6.32 Using the general design procedure of Sec. 6.9, design a synchronous
mod-16 binary counter by obtaining its minimal-sum equations. Use
positive-edge-triggered D flip-flops.
6.33 Using the general design procedure of Sec. 6.9, design a synchronous mod-
10 binary counter, i.e., one whose counting sequence corresponds to the first
10 binary numbers, by obtaining its minimal-sum equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.
c. Use positive-edge-triggered T flip-flops.
d. Use positive-edge-triggered SR flip-flops.
e. For the design of part (a), determine if the counter is self-correcting by
constructing the complete state diagram.
6.36 Design a synchronous mod-6 counter whose counting sequence is 000, 001,
100, 110, 111, 101, 000, etc., by obtaining its minimal-sum equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.
Figure P6.37
6.37 Consider the synchronous counter shown in Fig. P6.37 constructed with
positive-edge-triggered flip-flops. Assuming it is initialized to 000 prior to
the first count pulse, determine the counting sequence. Is this counter self-
correcting?
6.38 From the Karnaugh maps of Fig. 6.43, it is seen that the synchronous mod-6
counter could also be realized from the equations
D3 = Q3Q̄1 + Q2Q1
truth table for the logic network. Using this approach, design a positive-
edge-triggered AB flip-flop.
Figure P6.39
[Figure: general model of a sequential network — a combinational logic block with external inputs and outputs; the present state is fed back from the memory, and the next state is applied to it.]
Sequential networks are classified into two broad categories: synchronous and asynchro-
nous. In the case of synchronous sequential networks it is assumed that the behavior of
the system is totally determined by the values of the present state and external input
signals at discrete instants of time. Special timing signals are used to define these time
instants. In addition, it is only at these discrete instants of time that the memory of the
system is allowed to undergo changes. In this way it is possible to describe the behav-
ior of the system relative to the set of ordinal numbers that are assigned to the timing
signals. The counters of the previous chapter are examples of synchronous sequential
networks where the discrete instants of time are associated with the occurrences of the
triggering edge of the count pulses.
The second category of sequential networks is the asynchronous sequential net-
works. In these networks, it is the order in which input signals change that affects
the network behavior. Furthermore, these changes are allowed to occur at any in-
stants of time. This class of sequential networks is studied in Chapter 9.
*Although edge-triggered flip-flops are shown in the figure, master-slave flip-flops can also be used.
[Figure: general model of a clocked synchronous sequential network — combinational logic together with a memory of clocked flip-flops that holds the present state, receives the next state, and is controlled by the clock.]
The clock signal is a periodic waveform having one positive edge and one neg-
ative edge during each period. Thus, during part of the period the clock signal has
the value of logic-1 and during the other part of the period it has the value of logic-
0. Since this control signal is applied to clocked flip-flops, one edge is used for trig-
gering in the case of edge-triggered flip-flops. Alternatively, the time duration in
which the clock signal is in one of its logic states, along with its associated edges, is
used to achieve a pulse for pulse-triggered flip-flops. For simplicity, no distinction
is made in this chapter between these edges or pulses, and their occurrence is here-
after referred to as the triggering times or active times of the clock signal.
The use of a single master clock provides for network synchronization and has
the advantage of preventing many timing problems. The basic operation of clocked
synchronous sequential networks proceeds as follows. After the input and new
present state signals in Fig. 7.2 are applied to the combinational logic, the effects of
the signals must propagate through the logic network since gates have finite propa-
gation delay times. As a result, the final values at the flip-flop inputs occur at differ-
ent times depending upon the number of gates involved in the signal paths and the
actual propagation delays of each gate. In any event, it is only after the final values
are reached that the active time of the clock signal is allowed to occur and cause any
state changes. Since the clock signal is applied simultaneously to all the flip-flops in
the memory portion of the network, all state changes of the flip-flops occur at the
same time.* The process is then repeated. That is, new inputs are applied and then
the synchronizing clock signal affects the state changes. It is important to note that
the flip-flops are only allowed to undergo at most a single state change for each
clock period. That is, any changes in the outputs of the flip-flops incurred as a result
of the clock signal cannot cause another state change until the next clock period. As
was seen in the previous chapter, edge-triggered and pulse-triggered, i.e., master-
slave, flip-flops provide this type of behavior.
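As a behavioral illustration (not a circuit from the text), the following Python sketch models a positive-edge-triggered D flip-flop: its state can change only when the clock input makes a 0-to-1 transition, so at most one state change occurs per clock period.

    class EdgeTriggeredD:
        """Positive-edge-triggered D flip-flop: Q changes only on a 0-to-1 clock transition."""
        def __init__(self, q=0):
            self.q = q
            self._last_clock = 0

        def apply(self, d, clock):
            if clock == 1 and self._last_clock == 0:   # positive (triggering) edge
                self.q = d
            self._last_clock = clock
            return self.q

    # Holding D at a new value while the clock remains 1 causes no further change;
    # only the next positive edge can alter Q.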
As shown in Fig. 7.2, combinational logic is used to generate the next-state and
output signals. The present state of the network is the current content of the flip-
flops that comprise the memory portion of the network. The next state corresponds
to the updated information about the past history of inputs that must be preserved by
the system. Thus, if X denotes the collective external input signals and Q the collec-
tive present states of the flip-flops, then the next state of the network, denoted by
Q+, is functionally given by

Q+ = f(X, Q)    (7.1)
Similarly, if Z is regarded as the collective output signals of the network, then under
the assumption that the outputs are a function of both the inputs and present state, it
immediately follows that
Z = g(X, Q)    (7.2)
Equations (7.1) and (7.2) suggest the general structure of a clocked synchro-
nous sequential network shown in Fig. 7.3. This model is frequently referred to as
the Mealy model or Mealy machine.
A variation to the Mealy model occurs when the outputs are only a function of
the present state and not of the external inputs. In this case
Z= g(Q) (7.3)
*As is seen in Chapter 9, this is in contrast to asynchronous sequential networks, which are allowed to
respond to signal changes on the inputs as they occur.
Again the next state of the network, Q+, is a function of the external inputs and the
present state as given by Eq. (7.1). This variation of a clocked synchronous sequen-
tial network is referred to as the Moore model or Moore machine. The general struc-
ture suggested by Eqs. (7.1) and (7.3) is illustrated in Fig. 7.4.
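Equations (7.1) to (7.3) translate directly into a short behavioral sketch. The Python code below is illustrative only; f and g stand for the next-state and output functions of the equations, and the function names are assumptions of the sketch.

    def mealy_step(f, g, x, q):
        """One clock period of a Mealy machine: Q+ = f(X, Q), Z = g(X, Q)."""
        return f(x, q), g(x, q)        # (next state, output)

    def moore_step(f, g, x, q):
        """One clock period of a Moore machine: Q+ = f(X, Q), Z = g(Q)."""
        return f(x, q), g(q)           # the output depends on the present state only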
Although the use of a master clock is the most common method of synchroniz-
ing sequential networks, there are other timing mechanisms. For example, pulses of
controlled duration from one or more sources can be used to trigger the flip-flops.
However, such pulse-type timing introduces complications in realizations that do
not occur when a single master clock is used. The concepts of Mealy and Moore
models are still applicable to such synchronous sequential networks if these pulses
only can cause single state changes. In this chapter, however, the study of synchro-
nous sequential networks is restricted to only those that are single master-clock
controlled.
In the general models of Figs. 7.1 to 7.4, the inputs to the memory portion of
the sequential network are labeled “next state.” The interpretation here is that ap-
propriate signals are applied to the clocked flip-flops so that after the triggering
edge or pulse of the clock signal, the states of the flip-flops reflect the updated state
information. In a realization, the actual signals that must be applied to the flip-flops
are really excitation signals that achieve the appropriate next state. These signals
are dependent upon the type of clocked flip-flops used for the memory. That is, the
necessary excitation signals depend upon whether JK, SR, D, or T flip-flops are
used. In the previous chapter, relationships were established between the present-
state/next-state transitions of a flip-flop and the logic values needed at its input ter-
minals to achieve these transitions.
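For JK flip-flops those relationships form the application table developed in Chapter 6, which can be written as a lookup. The sketch below is an illustration added here, with None denoting a don't-care input value.

    # JK flip-flop application table: (Q, Q+) -> (J, K); None denotes a don't-care.
    JK_APPLICATION = {
        (0, 0): (0, None),
        (0, 1): (1, None),
        (1, 0): (None, 1),
        (1, 1): (None, 0),
    }

    def jk_excitation(q, q_next):
        """Return the (J, K) pair that takes a clocked JK flip-flop from q to q_next."""
        return JK_APPLICATION[(q, q_next)]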
Second, analysis provides a means for studying the terminal behavior of clocked
synchronous sequential networks. In particular, given a time sequence of inputs,
the time sequence of outputs and next states is readily determined.
In the previous section, two models of clocked synchronous sequential net-
works were presented: the Mealy model as defined by Eqs. (7.1) and (7.2) and the
Moore model as defined by Eqs. (7.1) and (7.3). For both models, the next states
(and, correspondingly, excitation) of the flip-flops are a function of the external in-
puts and the present states of the flip-flops. However, the two models differed in the
case of the network outputs. The outputs of Mealy sequential networks are also a
function of both the external inputs and the present states of the flip-flops; while for
Moore sequential networks the outputs are a function of just the present states of the
flip-flops.
Two clocked synchronous sequential networks are analyzed in this section.
They are shown in Figs. 7.5 and 7.6 and are referred to as Examples 7.1 and 7.2, re-
spectively, during the course of this discussion. As required for clocked synchro-
nous sequential networks, a clock signal is applied simultaneously to each of the
flip-flops for synchronization. In both examples, positive-edge-triggered flip-flops
are used for the memory portion of the network. Thus, the flip-flops change state
only at the occurrence of a leading edge of the clock signal. In order to explain vari-
ations to the analysis procedure, however, one network utilizes D flip-flops and the
other JK flip-flops.
The realization of Fig. 7.5 corresponds to a Mealy network. The diagram has
been laid out so as to resemble Fig. 7.3. The present state of the sequential network
corresponds to the signals at the output terminals of the flip-flops. These signals are
fed back to the combinational logic that precedes the flip-flop input terminals. It is
this present state along with the external input signal x that serves as the inputs to
the combinational logic that provides the excitation signals to the D flip-flops. As
easily seen in the figure, the output portion of the sequential network is also a func-
tion of the external input x and the present states of the flip-flops. Thus, the two
combinational subnetworks satisfy the functional relationships given by Eqs. (7.1)
and (7.2) for a Mealy network.
The logic diagram for Example 7.2, i.e., Fig. 7.6, is for a Moore network. The
diagram has been laid out to correspond to the general structure shown in Fig. 7.4.
As previously noted, the difference between Mealy and Moore models involves the
output portion of the network. In the case of Fig. 7.6, it is seen that the two outputs,
z1 and z2, are only a function of the present states of the flip-flops and not a function
of the external inputs x and y. On the other hand, the excitations to the J and K ter-
minals of the flip-flops are indeed a function of both the external inputs and the
present states of the flip-flops.
For the two examples under discussion, the upper flip-flop outputs are assigned the variables Q1 and Q̄1 for
the true and complemented outputs, respectively, while the lower flip-flops have
outputs Q2 and Q̄2. In addition, it is necessary to assign excitation variables to the in-
puts of the flip-flops. This is done by defining the excitation variables to be the same
as the input terminal designators of the flip-flops, subscripted according to the num-
ber assigned to the flip-flop. Thus D1 is the excitation variable for the upper flip-flop
of Example 7.1 and J1 and K1 are the excitation variables for the upper flip-flop of
Example 7.2. In a similar manner, excitation variables are assigned to the lower flip-
flops of the two examples. Since the clock signal is applied directly to the clock
input terminal of all the flip-flops, it is not necessary to write expressions for the
clock inputs. Once the state and excitation variables are defined, Boolean expres-
sions for the flip-flop excitations are readily written in terms of the present-state
variables and the external input variables.
The excitations to the flip-flops of Fig. 7.5 correspond to the logic values that
appear at the D input terminals of flip-flops FF1 and FF2. Algebraically, these are
given by Eqs. (7.4) and (7.5), and the network output z by Eq. (7.6). In the same way,
the excitations to the J and K terminals of the flip-flops of Fig. 7.6 are written as
Eqs. (7.7) to (7.10), which express J1, K1, J2, and K2 in terms of the external inputs
x and y and the present-state variables Q1 and Q2. Finally, the outputs of this
sequential network are given by Eqs. (7.11) and (7.12), which express z1 and z2 in
terms of Q1 and Q2 only.
7.2.2 Transition Equations
The general structures for Mealy and Moore models given in Figs. 7.3 and 7.4 show
the inputs to the memory portion of the sequential network as next states rather than
excitation signals. Effectively, these structures are independent of the flip-flop types
used in a realization. To convert excitation expressions into next-state expressions,
it is necessary to use the characteristic equations of the flip-flops. Characteristic
equations for the various types of flip-flops were previously developed in Chapter 6.
As indicated in Fig. 6.25b, the characteristic equation for a D flip-flop is

Q+ = D

and for a JK flip-flop it is

Q+ = JQ̄ + K̄Q
These equations indicate the next state of a flip-flop for given excitations at its input
terminals. By substituting the excitation expressions for a flip-flop into its character-
istic equation, an algebraic description of the next state of the flip-flop is obtained.
These expressions are referred to as transition equations.
Since Example 7.1 consists of D flip-flops, the next states of the flip-flops are
given by

Q1+ = D1
Q2+ = D2

Substituting Eqs. (7.4) and (7.5) into these equations gives the transition equations,
Eqs. (7.13) and (7.14). For Example 7.2, which uses JK flip-flops, the characteristic
equation gives

Q1+ = J1Q̄1 + K̄1Q1
Q2+ = J2Q̄2 + K̄2Q2

into which the excitation expressions of Eqs. (7.7) to (7.10) are substituted to obtain
the transition equations, Eqs. (7.15) and (7.16).
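The substitution step can be mechanized as well. In the illustrative sketch below, jk_transition composes assumed excitation functions with the JK characteristic equation to produce a next-state (transition) function; the excitation expressions shown in the usage comment are placeholders, not Eqs. (7.7) to (7.10).

    def jk_transition(j_expr, k_expr, own_state):
        """Build a next-state function Q+ = J Q' + K' Q for one JK flip-flop.
        j_expr and k_expr map (x, y, q1, q2) to 0/1; own_state selects this flip-flop's Q."""
        def q_next(x, y, q1, q2):
            q = own_state(q1, q2)
            j = j_expr(x, y, q1, q2)
            k = k_expr(x, y, q1, q2)
            return int((j and not q) or ((not k) and q))   # characteristic equation
        return q_next

    # Hypothetical usage for flip-flop 1 (placeholder excitation expressions):
    # q1_plus = jk_transition(lambda x, y, q1, q2: y,
    #                         lambda x, y, q1, q2: int(y or (x and q2)),
    #                         own_state=lambda q1, q2: q1)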
The next-state section has one column for each combination of values of the
external input variables. Hence, if there are n external input variables, then this sec-
tion consists of 2^n columns. Each entry in this section is a p-tuple corresponding to
the next state for each combination of present state (as indicated by its row) and ex-
ternal input (as indicated by its column).
The structure of the third section of the transition table depends upon whether
the network is of the Mealy or Moore type. In the case of a Mealy sequential net-
work, the outputs of the network are a function of both the present state and external
inputs. Thus, as in the next-state section, there is one column for each combination
of values of the input variables, and the entries within the section indicate the out-
puts for each present-state/input combination. On the other hand, since the outputs
of Moore sequential networks are only a function of the present state, the output
section of the transition table has only a single column. The entries within this col-
umn correspond to the outputs for the associated entries given in the present-state
section of the table.
Table 7.1 shows the transition table for Example 7.1. In the present-state sec-
tion, the four combinations of values of Q1 and Q2 are listed. Next, the next-state
section is constructed. Since there is only one external input variable, x, there are
2^1 = 2 columns in this section. One column is for x = 0 and the other is for x = 1.
The entries within this section correspond to the pair of Q1+ and Q2+ values for the
various values of x, Q1, and Q2 given by the column and row labels. These entries
are obtained by evaluating Eqs. (7.13) and (7.14). Attention should be paid to the
order of the pair of elements for each entry in the table. To illustrate the determina-
tion of the first entry of each pair from Eq. (7.13), Q1+ = 1 when x̄Q̄2 = 1, i.e., when
x = 0 and Q2 = 0. Thus, 1's occur as the first element in the first column, x = 0, and
first and third rows, Q2 = 0, of the next-state section. In addition, Q1+ = 1 when
Q̄1Q2 = 1, i.e., when Q1 = 0 and Q2 = 1. This accounts for additional 1's as the first
element in both columns of the second row of the next-state section. This completes
all present-state/input combinations that cause Q1+ = 1. Thus, all the remaining
first elements in the next-state section of the transition table are 0's. In a similar
manner, Eq. (7.14) is used to determine the second element for each entry in the
next-state section of the transition table.
Since Example 7.1 corresponds to a Mealy sequential network, the output sec-
tion also has two columns to correspond to the two possible values of the input vari-
able x. The entries within this section of the table are obtained by evaluating Eq.
(7.6) for the eight combinations of values of x, Q1, and Q2.
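Because the table entries are obtained by straightforward evaluation, the construction can be reproduced with a short enumeration. In the sketch below, Eq. (7.13) is taken to be Q1+ = x̄Q̄2 + Q̄1Q2, as read from the discussion above, while the function standing in for Eq. (7.14) is only a placeholder assumption.

    from itertools import product

    def q1_plus(x, q1, q2):
        # Eq. (7.13) as read from the discussion above: Q1+ = x'Q2' + Q1'Q2
        return int((not x and not q2) or (not q1 and q2))

    def q2_plus(x, q1, q2):
        # Placeholder standing in for Eq. (7.14); substitute the actual expression.
        return int(x and not q2)

    print("Q1 Q2 | x=0  x=1")
    for q1, q2 in product((0, 1), repeat=2):
        entries = ["{}{}".format(q1_plus(x, q1, q2), q2_plus(x, q1, q2)) for x in (0, 1)]
        print(" {}  {} |  {}   {}".format(q1, q2, entries[0], entries[1]))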
The transition table for Example 7.2 is given in Table 7.2. Again the present-state
section consists of four rows that correspond to all the combinations of values of Q1
and Q2. The next-state section has four columns since there are 2^2 = 4 combinations
of values of the two external input variables x and y. The entries within this section
consist of pairs of elements corresponding to Q1+ and Q2+. These entries are obtained
by evaluating Eqs. (7.15) and (7.16). In this example, the output section of the transition
table has only a single column since the logic diagram is of a Moore sequential net-
work. The entries in this column correspond to the evaluation of Eqs. (7.11) and (7.12)
for the four combinations of values of Q1 and Q2 given in the present-state section.
It should be noted that the transition table is really the truth table for the transi-
tion and output equations. The only difference lies in the fact that it is represented as a
two-dimensional array where the rows denote the values of the present-state vari-
ables and the columns denote the values of the external input variables.
However, in view of the fact that the characteristic equation for a D flip-flop is
Q+ = D, it is readily seen that the excitation section of Table 7.3 is the same as the
next-state section of Table 7.1. Hence, for sequential networks using D flip-flops,
the excitation table and the transition table are identical except for the label assign-
ment to the entries in the second section.
Table 7.4 gives the excitation table for Example 7.2. Equations (7.7) to (7.10)
are the excitation equations for this example. These four expressions are used to de-
termine the 4-tuples appearing as entries in the excitation section by evaluating the
expressions in the same manner as the transition equations were previously evalu-
ated. The comma in each 4-tuple is used just to delineate the excitations of flip-flop
FF1 from those of flip-flop FF2.
In order to obtain the transition table from the excitation table, it is necessary to
analyze each entry of the excitation table to determine the effect of the indicated ex-
citation values. The effects of excitation signals on the states of the various types of
flip-flops were previously given in Table 6.2.
To illustrate the construction of the transition table from an excitation table,
consider the entry in the fourth column, first row of the excitation section of Table
7.4, i.e., J1K1, J2K2 = 11,10. The present state associated with the first row of the
table is Q1Q2 = 00. Thus, for flip-flop FF1, J1 = 1, K1 = 1, and Q1 = 0. From Table
6.2 it is immediately seen that under this condition Q1+ = 1. That is, when a logic
value of 1 appears at both the J and K terminals of a JK flip-flop at the triggering
time of the clock signal, the state of the flip-flop is complemented. For flip-flop
FF2, J2 = 1, K2 = 0, and Q2 = 0. Again from Table 6.2 it is seen that Q2+ = 1. That
is, a logic value of 1 on just the J terminal of a JK flip-flop at the triggering time of
the clock signal causes it to set. Hence, the next state of the sequential network
when xy = 11 and Q1Q2 = 00 is Q1+Q2+ = 11. This is precisely the entry in the
fourth column, first row of the next-state section of Table 7.2. By repeating this pro-
cedure on each entry in the excitation section of Table 7.4, the next-state section of
Table 7.2 is obtained. The present-state and output sections of both the excitation
table and transition table are always the same.
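The clocked JK behavior of Table 6.2 used in this conversion can be stated as a small function. The sketch below is illustrative and reproduces the entry just analyzed.

    def jk_next(q, j, k):
        """Next state of a clocked JK flip-flop at the triggering time (Table 6.2 behavior)."""
        if (j, k) == (0, 0):
            return q          # no change
        if (j, k) == (0, 1):
            return 0          # reset
        if (j, k) == (1, 0):
            return 1          # set
        return 1 - q          # J = K = 1: complement (toggle)

    # Excitation-table entry J1K1, J2K2 = 11,10 with present state Q1Q2 = 00:
    q1, q2 = 0, 0
    print(jk_next(q1, 1, 1), jk_next(q2, 1, 0))   # -> 1 1, i.e., the next state Q1+Q2+ = 11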
The state diagram of Fig. 7.7 is constructed from Table 7.5 in exactly the same way. For simplicity, if more than one input causes a
transition to occur between two states, then multiple labels are used on a single
branch. This is seen in Fig. 7.7 for present-state B where D is the next state for
x = 0 and x = 1. In both cases, the output is 0.
When state diagrams for Moore sequential networks are constructed, the
outputs associated with each state are entered in the node along with the state
designator. The branch labels in this case are just the input combinations that af-
fect the state transitions. Figure 7.8 shows the state diagram for Table 7.6. Each
node is labeled with a state and its associated outputs separated by a slash. The
multiple label on the branch connecting nodes B and D corresponds to the situa-
tion in which two input combinations affect a transition between the same two
states.
In Sec. 8.2 another diagrammatic form describing sequential network behavior
is discussed. This form is a chart bearing some resemblance to a flowchart fre-
quently encountered in computer programming. However, it is time-oriented rather
than task-oriented.
The state diagram for a Mealy sequential network is a little misleading rel-
ative to the outputs. Although the outputs are shown on the directed branches
of the state diagram, this does not mean that the outputs are produced during
the transition between two states. Rather, the outputs appearing on the branches
are continuously available while in a present state and the indicated inputs are
applied.
*Whenever a time sequence of symbols is given, it is assumed to occur from left to right. In this case,
the first input for x is 0, then another 0, then a 1, etc.
The fact that the outputs from a Mealy sequential network are a function of
both the external inputs and the present state introduces the possibility of false
outputs or glitches. When the state and output sequences are determined from a
state table or state diagram, as was done above, the values of the external input
variables only at the triggering time of the clock signal are considered. However,
the external input variables may change values any time during the clock pe-
riod. Although these input changes can continuously affect the network outputs,
the consequences of these input changes do not appear in the listing of the output
sequences.
To illustrate this problem of false outputs, a timing diagram for the input se-
quence x = 0011011101 applied to state A of Example 7.1 is shown in Fig. 7.9.
For simplicity, propagation delays are assumed to be zero so that the effects of
all signal changes occur immediately. Since positive-edge-triggered flip-flops
are used in the realization, any state changes occur coincident with the positive
edges of the clock signal. Previously it was seen that this input sequence pro-
duced the output sequence z = 0101001011. However, as can be seen in the fig-
ure, the actual output sequence is z = 01010(1)0101(0)1 where the two outputs
in parentheses are false outputs. Consider the first of these false outputs. An
input of x = 0 applied to the network when in present-state B causes the network
to go to state D upon the occurrence of the positive edge of the clock signal.
[Figure 7.9: timing diagram for the input sequence x = 0011011101 applied to Example 7.1, showing the clock, the input x, the state sequence, and the output sequence z, with the two false outputs indicated in parentheses.]
However, immediately after the state transition, the input x is still 0. Referring
to the state table or the state diagram for Example 7.1, it is seen that when in
present-state D, the input x = 0 produces a 1 output. It is not until the input x
changes to 1 that the output for present-state D becomes 0. Hence, for the period
of time in which the network is in its new state and the old input is still being ap-
plied, there is a false logic-1 output. The second false output shown in Fig. 7.9 is
the result of x still being 0 for a short time after the state transition from state D
to state A.
The above discussion only considered one possible cause of false outputs. False
outputs can also occur as a result of propagation delays. This topic is further investi-
gated in Chapter 9.
[Figure: summary of the analysis procedure — starting from the logic diagram, state and excitation variables are assigned to each flip-flop; the excitation and output equations are written; applying the flip-flop characteristic equations gives the transition equations (or, alternatively, an excitation table is formed); from these the transition table and, finally, the state diagram are obtained.]
Terminal behavior of a Moore sequential network is also readily determinable
from its state table or state diagram. For the Moore sequential network of Example
7.2, the state table shown in Table 7.6 or the state diagram in Fig. 7.8 is used. If it is
assumed the network begins operation in state A, then the following is an example
of input, state, and output sequences:
that signifies that no past inputs have been applied. This preliminary analysis of list-
ing the states may not produce all the necessary states of the network. However, it
does serve as a starting point for the construction of the model. As the state table or
state diagram is formed, additional states are added to the proposed collection if
none of the initially defined states adequately describes the information to be pre-
served at some point in time. It is also possible that too many states are proposed
and used in the model. This problem is readily handled, as is seen in Sec. 7.4, by ap-
plying a systematic reduction technique to the state table. An important point to re-
member, however, is that the state table must be finite in length. Thus, although
states can be continually added to the proposed listing, for a realization to exist,
only a finite number of states are allowed.
The first network to be modeled is the serial binary adder. Let us assume that
the realization is to be a Mealy network having the general structure shown in
Fig. 7.11. Two binary sequences, corresponding to the two operands being added,
are applied to the network inputs x and y, least significant bits first. The values on
the x and y inputs are applied prior to the triggering time of the clock signal that is
used for synchronization. The binary sum of the two numbers appears as a time se-
quence, also least significant bit first, on the single output line z.
At any time, there are four possible input combinations for the two external in-
puts x and y, i.e., 00, 01, 10, and 11. As was discussed previously, the state of a se-
quential network must preserve any necessary past history in order to determine a
present output and a next state. From the discussion on binary addition in Sec. 2.3,
it was seen that with the exception of the addition of the least significant pair of bits,
the sum bit for any order position is determined by the two operand bits of that
order as well as whether or not any carry was generated from the addition of the
previous order of bits. Thus the existence or nonexistence of a carry must be the re-
quired internally preserved information needed to perform the addition process
upon any pair of operand bits. Two states can now be defined for the network re-
flecting this fact. The first state, say, A, is associated with the past history “no carry
was generated from the previous order addition,” and a state, say, B, is associated
with the past history “a carry was generated from the previous order addition.”
When the first pair of bits is added, i.e., the least significant pair of bits, the correct
sum bit is obtained if it is assumed that the carry from the previous order addition
was 0. It is now seen that knowledge of the state information and the current pair of
operand bits is sufficient to determine a sum bit and an appropriate next state with
state A serving as the initial state of the network.
Having defined a set of states describing the information that must be preserved
about the past history, the construction of a state diagram is started by introducing a
node for each state. Since there are two input bits present at any time, corresponding
to the bit values on the x and y lines, four input-bit combinations must be considered
for each node. Since state A denotes no carry was generated from the previous order
addition, the appropriate sum bit to be produced while in state A is simply the bi-
nary sum of the two bits on the x and y lines. Furthermore, the next state must corre-
spond to whether or not a carry results from the current two-bit addition. No carry
occurs upon the bit-pair addition of the input combinations 00, 01, and 10. The first
of these three input combinations results in a sum bit of 0, while the other two com-
binations result in a sum bit of 1. This behavior is illustrated in Fig. 7.12a as the
loop on node A. The letters “I.S.” next to node A indicate that it is the initial state.
The fourth input combination, 11, also results in a sum bit of 0, but since a carry is
generated, the network must go to state B so as to preserve this information about
the past history. This appears in Fig. 7.12a as the directed branch from node A to
node B.
Next it is necessary to describe the behavior of the network under the four
input conditions while in state B. When the network is in state B, it is remembering
that a carry was produced from the previous order addition. Thus, the appropriate
sum bit for this state must be one greater than the binary sum of the two input bits.
Hence, the xy inputs 01 and 10 must produce a zero sum bit plus a carry. Since state
B corresponds to the remembering of a carry, an arc is directed from node B to
node B for these two cases as shown in Fig. 7.12b. Similarly, the arc also is labeled
with the input combination 11 since again the sum of these two bits and a “remem-
bered” carry results in a carry. However, the output in this case is a 1. The final
input combination for the network while in state B corresponds to xy = 00. In this
case, the appropriate sum bit is a | due to the remembered carry. The next state,
however, must correspond to a state that preserves the information of “no carry was
generated for the previous order addition.” This is precisely the meaning of state A.
Hence, a transition from state B to state A must occur for the input combination of
00 as shown in Fig. 7.12b.
Using two binary numbers, the reader can easily check that the state diagram of
Fig. 7.12b models the behavior of a serial binary adder assuming the process is
started in state A. Since state A is the initial state, provision must be made to start
the network in this state when the realization is completed. The initialization
process is discussed further in Sec. 7.6.
In many cases, such as the example just completed, the state diagram is a con-
venient way of formalizing the network behavior of a synchronous sequential net-
work. However, for the remaining steps of the synthesis procedure, the state table is
more useful. Since there is a one-to-one correspondence between a state diagram
and a state table, it is a simple matter to construct a state table. As was explained in
the previous section, for a Mealy sequential network, both the next-state and output
sections of the state table have one column for each possible input combination.
The rows of the state table, listed in the present-state section, correspond to the
states of the network. The entries in the remaining two sections of the table are
readily determined by noting the next state and output for each present-state/input
combination appearing on the state diagram. The state table for the serial binary
adder is shown in Table 7.7. An asterisk is placed next to state A in the present-state
section to indicate that it is the initial state.
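The completed Mealy model can be exercised directly. The following sketch renders the transitions described in the construction of Fig. 7.12b and Table 7.7 as a Python dictionary (an illustration, with assumed names) and produces the serial sum of two operands presented least significant bit first.

    # Mealy state table for the serial binary adder (A: no carry, B: carry).
    # Entry: (present state, (x, y)) -> (next state, z)
    ADDER = {
        ('A', (0, 0)): ('A', 0), ('A', (0, 1)): ('A', 1),
        ('A', (1, 0)): ('A', 1), ('A', (1, 1)): ('B', 0),
        ('B', (0, 0)): ('A', 1), ('B', (0, 1)): ('B', 0),
        ('B', (1, 0)): ('B', 0), ('B', (1, 1)): ('B', 1),
    }

    def serial_add(x_bits, y_bits):
        """Add two operands presented least significant bit first; returns the sum bits."""
        state, z_bits = 'A', []              # state A is the initial state
        for x, y in zip(x_bits, y_bits):
            state, z = ADDER[(state, (x, y))]
            z_bits.append(z)
        return z_bits

    # 3 + 6 = 9; the operands are zero-extended by one bit so the final carry appears in the sum.
    print(serial_add([1, 1, 0, 0], [0, 1, 1, 0]))   # [1, 0, 0, 1], i.e., 1001 read MSB last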
Since no output is normally specified for the initial state, this first output must be ignored and only the
subsequent outputs are relevant as the output sequence. Furthermore, in a Moore se-
quential network, each state must be associated with both an output and information
regarding how the past history of inputs causes that output. For this reason two
states are not sufficient for modeling the serial binary adder as a Moore sequential
network since either a 0 or 1 sum bit is possible both with and without the existence
of a carry from the previous order addition. A little reflection upon the problem
leads to the conclusion that four states are necessary, one state for each combination
of sum bit and carry bit from the previous order addition. Thus, the following four
states are defined:
A: The sum bit is 0 and no carry was generated from the previous order
addition.
B: The sum bit is 0 and a carry was generated from the previous order
addition.
C: The sum bit is 1 and no carry was generated from the previous order
addition.
D: The sum bit is 1 and a carry was generated from the previous order
addition.
Having defined the states for the network, the construction of the state diagram
for the serial binary adder under the Moore sequential network assumption proceeds
in much the same way as was done previously. Appropriate next states are deter-
mined for each state under the four input combinations of values occurring on the x
and y input lines of the network. The corresponding state diagram is shown in Fig.
7.13 and the state table is given in Table 7.8. It is important to realize that the first
valid sum bit occurs immediately after entering the first state from the initial state.
After that, the sum bits are produced in sequential order. Thus, either state A or state
C can serve as the initial state since the first output must be ignored. In this exam-
ple, state A is chosen as the initial state.
Table 7.8

Present      Next state, xy =        Output
state        00    01    10    11
*A           A     C     C     B     0
B            C     B     B     D     0
C            A     C     C     B     1
D            C     B     B     D     1
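The Moore model can be simulated in the same style. In the illustrative sketch below, each sum bit is the output of the state entered after the corresponding pair of input bits, so the output associated with the initial state is never emitted, consistent with the discussion above.

    # Moore model of the serial adder (Table 7.8): next-state table and per-state outputs.
    NEXT = {
        'A': {(0, 0): 'A', (0, 1): 'C', (1, 0): 'C', (1, 1): 'B'},
        'B': {(0, 0): 'C', (0, 1): 'B', (1, 0): 'B', (1, 1): 'D'},
        'C': {(0, 0): 'A', (0, 1): 'C', (1, 0): 'C', (1, 1): 'B'},
        'D': {(0, 0): 'C', (0, 1): 'B', (1, 0): 'B', (1, 1): 'D'},
    }
    OUTPUT = {'A': 0, 'B': 0, 'C': 1, 'D': 1}

    def moore_serial_add(x_bits, y_bits, initial='A'):
        """Each sum bit is the output of the state entered after the corresponding inputs."""
        state, z_bits = initial, []
        for x, y in zip(x_bits, y_bits):
            state = NEXT[state][(x, y)]
            z_bits.append(OUTPUT[state])    # the output of the new state is the valid sum bit
        return z_bits

    print(moore_serial_add([1, 1, 0, 0], [0, 1, 1, 0]))   # [1, 0, 0, 1]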
Figure 7.14 A sequence recognizer.
Figure 7.15 State diagram for a sequence recognizer. (a) Detection of two consecutive 0's. (b) Partial analysis
of the three-symbol sequence. (c) Completed state diagram. (d) Definition of states.
continue the waiting time needed before producing the | output. When the net-
work is in state E, the third input occurs. Since at least one | has occurred during
the last three inputs, the output is 1, regardless of the current input, and the net-
work returns to state A to repeat the recognition procedure.
The state diagram of Fig. 7.15b is still not complete since the situation of a
0 input occurring while in state C must be considered. As shown in Fig. 7.15c,
the network in this case goes to state F, with a 0 output, to indicate that the first
input after the initial pair of 0’s was also a 0. When in state F, the occurrence of
a 1 input implies that the conditions of the network specifications are satisfied,
but the network must still wait one more time period before producing a 1 out-
put. This is precisely the meaning of state E. Thus a directed branch, for the
1 input, is drawn from state F to state E with a 0 output. However, if the input is
0 while in state F, then two consecutive 0’s have occurred in the sequence of
three inputs that is being analyzed. The network enters state G to signify this fact
and a 0 output is produced. One more input must be considered when the net-
work is in state G. Regardless of this input, however, the network must return to
the initial state A. If this third input is a 1, then a 1 output is produced according
to the network specifications. On the other hand, a 0 input implies that the three
inputs that followed the initial pair of 0 inputs were also 0’s and, consequently,
the output is 0.
The state table corresponding to Fig. 7.15c is shown in Table 7.9. An asterisk is
placed next to state A to signify that this is the initial state of the network.
A: No inputs received (initial state).
B: Last input received was 0.
C: Last input received was 1.
D: Last two inputs received were 01.
E: Last two inputs received were 10.
F: Last three inputs received were 011.
G: Last three inputs received were 100.
(b)
Figure 7.16 A 0110/1001 sequence recognizer. (a) Beginning the detection of the sequences 0110 or 1001.
(b) Definition of states. (c) Completing the detection of the two sequences 0110 or 1001.
(d) Completed state diagram.
state E was initially introduced to record the occurrence of the input sequence 10.
This 10 sequence can also be the first two inputs of a sequence that must produce a 1
output if it is followed by 01. Since the two sequences that must be recognized, i.e.,
0110 and 1001, are allowed to overlap and 10 are the last two inputs of the 0110
sequence, the 0/1 branch from state F should be directed to state E as shown in
Fig. 7.16c to record the fact that another potentially detectable sequence has started.
Using a similar analysis, the next state for state G under a 1 input is state D.
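Since overlap is permitted, the overall specification can also be checked behaviorally, independently of the state diagram, by examining the last four inputs at each clock time. The sketch below is such a reference check (an illustration, not the text's realization).

    def recognize_0110_1001(bits):
        """Output 1 whenever the last four inputs form 0110 or 1001 (overlap allowed)."""
        outputs, window = [], []
        for b in bits:
            window = (window + [b])[-4:]          # keep the four most recent inputs
            outputs.append(1 if window in ([0, 1, 1, 0], [1, 0, 0, 1]) else 0)
        return outputs

    # Overlapping detections: the tail of 0110 begins the following 1001.
    print(recognize_0110_1001([0, 1, 1, 0, 0, 1]))   # [0, 0, 0, 1, 0, 1]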
The state diagram of Fig. 7.16c is still incomplete since each node must have
exit branches for both 0 and 1 inputs. A 1 input when in state F corresponds to an
input sequence of 0111. This last 1 input could be the first 1 of a 1001 sequence.
From Fig. 7.16b, state C was defined to record such a situation. Hence, the next state
from F should be C if the input is 1. This is shown in Fig. 7.16d. By a similar argu-
ment the next state from state G with a 0 input should be state B. Next the second
input during states D and F is studied. State D records the fact that the last two inputs
were 01 and a 0 input during state D implies the last three inputs are 010. The 10
portion of this sequence can be the beginning of the sequence 1001 that results in a 1