
Digital Principles and Design
Donald D. Givone
ADDITIONAL RESOURCES

In addition to the comprehensive coverage within the book, a number of additional
resources are available with Givone’s DIGITAL PRINCIPLES AND DESIGN.
These resources include:

■ A CD-ROM that contains Altera’s MAX+PLUS II 10.1 Student Edition
programmable logic development software and Multisim 2001 Textbook
Edition from Electronics Workbench, which comes free with every copy of the
book.

MAX+PLUS II is a fully integrated design environment that offers
unmatched flexibility and performance. The intuitive graphical
interface is complemented by accessible on-line documentation,
which makes learning and using MAX+PLUS II quick and easy.
The MAX+PLUS II Student Edition software is for students who
are learning digital logic design. By entering the designs presented
in the book or creating custom logic designs, students develop skills
for prototyping digital systems using programmable logic devices.
Multisim 2001, the latest release of the Electronics Workbench
flagship product, delivers intuitive schematic capture, accurate
simulation and analysis, along with programmable logic. It
provides industry-leading power with all of the features and
functionality instructors demand in an easy-to-learn and easy-to-use
package. Students can create circuits, analyze pre-built circuits,
work through virtual laboratory assignments, simulate circuit
behavior and troubleshoot for faults. Teachers can customize their
circuits using Multisim’s powerful “Teacher Features” including
the ability to build faulted components into a circuit and to
password protect access to components, instruments and analyses.

■ An appendix containing two tutorials—one for the Altera software and one for
LogicWorks™4. These tutorials, provided by two separate contributors, are
meant to offer basic introductions to these software packages for students who
are using them in their course.
■ A website at http://www.mhhe.com/givone with several tools for both the
instructor and the student—

Expanded software tutorials


Labs
Examples/Problems using software
PowerPoint Slides
Altera, LogicWorks and Multisim sample circuit files
Solutions Manual (for instructors only)
We hope these resources enrich your experience as you use Givone’s DIGITAL
PRINCIPLES AND DESIGN.
Digital Principles and Design
Related Titles

Brown, Vranesic: Fundamentals of Digital Logic with VHDL Design
Ham, Kostanic: Principles of Neurocomputing for Science and Engineering
Hamacher, Vranesic, and Zaky: Computer Organization
Hayes: Computer Architecture and Organization
Hwang: Advanced Computer Architecture: Parallelism, Scalability, Programmability
Hwang: Scalable Parallel Computing: Technology, Architecture, Programming
Leon-Garcia/Widjaja: Communication Networks
Marcovitz: Introduction to Logic Design
Navabi: VHDL: Analysis and Modeling of Digital Systems
Patt, Patel: Introduction to Computing Systems: From Bits & Gates to C & Beyond
Schalkoff: Artificial Neural Networks
Shen/Lipasti: Modern Processor Design
Digital Principles and Design

Donald D. Givone
University at Buffalo
The State University of New York

Boston  Burr Ridge, IL  Dubuque, IA  Madison, WI  New York  San Francisco  St. Louis
Bangkok  Bogotá  Caracas  Kuala Lumpur  Lisbon  London  Madrid  Mexico City
Milan  Montreal  New Delhi  Santiago  Seoul  Singapore  Sydney  Taipei  Toronto
McGraw-Hill Higher Education
A Division of The McGraw-Hill Companies

DIGITAL PRINCIPLES AND DESIGN

Published by McGraw-Hill, a business unit of The McGraw-Hill Companies, Inc., 1221 Avenue of the
Americas, New York, NY 10020. Copyright © 2003 by The McGraw-Hill Companies, Inc. All rights
reserved. No part of this publication may be reproduced or distributed in any form or by any means, or
stored in a database or retrieval system, without the prior written consent of The McGraw-Hill
Companies, Inc., including, but not limited to, in any network or other electronic storage or
transmission, or broadcast for distance learning.

Some ancillaries, including electronic and print components, may not be available to customers outside
the United States.

This book is printed on acid-free paper.

International 1234567890 QPF/QPF098765432


Domestic 1234567890 QPF/QPF098765432

ISBN 0-07-252503-7
ISBN 0-07-119520-3 (ISE)

Publisher: Elizabeth A. Jones


Developmental editor: Michelle L. Flomenhoft
Executive marketing manager: John Wannemacher
Senior project manager: Susan J. Brusch
Senior production supervisor: Sandy Ludovissy
Lead media project manager: Audrey A. Reiter
Senior media technology producer: Phillip Meek
Coordinator of freelance design: David W. Hash
Cover designer: Rokusek Design
Cover image: ©Eyewire, Inc./Don Bishop
Compositor: UG/GGS Information Services, Inc.
Typeface: 10/12 Times Roman
Printer: Quebecor World Fairfield, PA

Library of Congress Cataloging-in-Publication Data

Givone, Donald D.
Digital principles and design / Donald D. Givone. — 1st ed.
p. cm.
ISBN 0-07-252503-7 — ISBN 0-07-119520-3 (ISE)
1. Digital electronics. I. Title.

TK7868.D5 G57 2003
621.381—dc21 2002022661
CIP

INTERNATIONAL EDITION ISBN 0-07-119520-3


Copyright © 2003. Exclusive rights by The McGraw-Hill Companies, Inc., for manufacture and export.
This book cannot be re-exported from the country to which it is sold by McGraw-Hill. The International
Edition is not available in North America.

www.mhhe.com
To my children Donna and David and my brother Billy
BRIEF CONTENTS

1 Introduction 1
2 Number Systems, Arithmetic, and Codes 7
3 Boolean Algebra and Combinational Networks 61
4 Simplification of Boolean Expressions 127
5 Logic Design with MSI Components and Programmable Logic Devices 230
6 Flip-Flops and Simple Flip-Flop Applications 301
7 Synchronous Sequential Networks 367
8 Algorithmic State Machines 444
9 Asynchronous Sequential Networks 505
Appendix A: Digital Circuits 589
Appendix B: Tutorials 665
Bibliography 684
Index 687
CONTENTS

Preface xiii

Chapter 1 Introduction 1
1.1 The Digital Age 1
1.2 Analog and Digital Representations of Information 2
1.3 The Digital Computer 2
1.3.1 The Organization of a Digital Computer 3
1.3.2 The Operation of a Digital Computer 5
1.4 An Overview 5

Chapter 2 Number Systems, Arithmetic, and Codes 7
2.1 Positional Number Systems 7
2.2 Counting in a Positional Number System 9
2.3 Basic Arithmetic Operations 11
2.3.1 Addition 11
2.3.2 Subtraction 11
2.3.3 Multiplication 14
2.3.4 Division 16
2.4 Polynomial Method of Number Conversion 16
2.5 Iterative Method of Number Conversion 19
2.5.1 Iterative Method for Converting Integers 20
2.5.2 Verification of the Iterative Method for Integers 21
2.5.3 Iterative Method for Converting Fractions 22
2.5.4 Verification of the Iterative Method for Fractions 23
2.5.5 A Final Example 23
2.6 Special Conversion Procedures 24
2.7 Signed Numbers and Complements 26
2.8 Addition and Subtraction with r's-Complements 31
2.8.1 Signed Addition and Subtraction 33
2.9 Addition and Subtraction with (r - 1)'s-Complements 36
2.9.1 Signed Addition and Subtraction 39
2.10 Codes 41
2.10.1 Decimal Codes 41
2.10.2 Unit-Distance Codes 44
2.10.3 Alphanumeric Codes 46
2.11 Error Detection 48
2.12 Error Correction 50
2.12.1 Hamming Code 51
2.12.2 Single-Error Correction plus Double-Error Detection 54
2.12.3 Check Sum Digits for Error Correction 54
Problems 55

Chapter 3 Boolean Algebra and Combinational Networks 61
3.1 Definition of a Boolean Algebra 62
3.1.1 Principle of Duality 63
3.2 Boolean Algebra Theorems 63
3.3 A Two-Valued Boolean Algebra 70
3.4 Boolean Formulas and Functions 73
3.4.1 Normal Formulas 75
3.5 Canonical Formulas 76
3.5.1 Minterm Canonical Formulas 76
3.5.2 m-Notation 78
3.5.3 Maxterm Canonical Formulas 80
3.5.4 M-Notation 81
3.6 Manipulations of Boolean Formulas 83
3.6.1 Equation Complementation 83
3.6.2 Expansion about a Variable 84
3.6.3 Equation Simplification 84
3.6.4 The Reduction Theorems 86
3.6.5 Minterm Canonical Formulas 87
3.6.6 Maxterm Canonical Formulas 88
3.6.7 Complements of Canonical Formulas 89
3.7 Gates and Combinational Networks 91
3.7.1 Gates 92
3.7.2 Combinational Networks 92
3.7.3 Analysis Procedure 93
3.7.4 Synthesis Procedure 94
3.7.5 A Logic Design Example 95
3.8 Incomplete Boolean Functions and Don't-Care Conditions 97
3.8.1 Describing Incomplete Boolean Functions 99
3.8.2 Don't-Care Conditions in Logic Design 99
3.9 Additional Boolean Operations and Gates 101
3.9.1 The Nand-Function 102
3.9.2 The Nor-Function 103
3.9.3 Universal Gates 103
3.9.4 Nand-Gate Realizations 105
3.9.5 Nor-Gate Realizations 108
3.9.6 The Exclusive-Or-Function 111
3.9.7 The Exclusive-Nor-Function 113
3.10 Gate Properties 113
3.10.1 Noise Margins 115
3.10.2 Fan-Out 116
3.10.3 Propagation Delays 117
3.10.4 Power Dissipation 118
Problems 118

Chapter 4 Simplification of Boolean Expressions 127
4.1 Formulation of the Simplification Problem 127
4.1.1 Criteria of Minimality 128
4.1.2 The Simplification Problem 129
4.2 Prime Implicants and Irredundant Disjunctive Expressions 129
4.2.1 Implies 129
4.2.2 Subsumes 130
4.2.3 Implicants and Prime Implicants 131
4.2.4 Irredundant Disjunctive Normal Formulas 133
4.3 Prime Implicates and Irredundant Conjunctive Expressions 133
4.4 Karnaugh Maps 135
4.4.1 One-Variable and Two-Variable Maps 135
4.4.2 Three-Variable and Four-Variable Maps 136
4.4.3 Karnaugh Maps and Canonical Formulas 138
4.4.4 Product and Sum Term Representations on Karnaugh Maps 141
4.5 Using Karnaugh Maps to Obtain Minimal Expressions for Complete Boolean Functions 145
4.5.1 Prime Implicants and Karnaugh Maps 145
4.5.2 Essential Prime Implicants 150
4.5.3 Minimal Sums 151
4.5.4 Minimal Products 155
4.6 Minimal Expressions of Incomplete Boolean Functions 157
4.6.1 Minimal Sums 158
4.6.2 Minimal Products 159
4.7 Five-Variable and Six-Variable Karnaugh Maps 160
4.7.1 Five-Variable Maps 160
4.7.2 Six-Variable Maps 163
4.8 The Quine-McCluskey Method of Generating Prime Implicants and Prime Implicates 166
4.8.1 Prime Implicants and the Quine-McCluskey Method 167
4.8.2 Algorithm for Generating Prime Implicants 170
4.8.3 Prime Implicates and the Quine-McCluskey Method 173
4.9 Prime-Implicant/Prime-Implicate Tables and Irredundant Expressions 174
4.9.1 Petrick's Method of Determining Irredundant Expressions 175
4.9.2 Prime-Implicate Tables and Irredundant Conjunctive Normal Formulas 178
4.10 Prime-Implicant/Prime-Implicate Table Reductions 178
4.10.1 Essential Prime Implicants 179
4.10.2 Column and Row Reductions 180
4.10.3 A Prime-Implicant Selection Procedure 184
4.11 Decimal Method for Obtaining Prime Implicants 184
4.12 The Multiple-Output Simplification Problem 187
4.12.1 Multiple-Output Prime Implicants 191
4.13 Obtaining Multiple-Output Minimal Sums and Products 191
4.13.1 Tagged Product Terms 192
4.13.2 Generating the Multiple-Output Prime Implicants 193
4.13.3 Multiple-Output Prime-Implicant Tables 195
4.13.4 Minimal Sums Using Petrick's Method 196
4.13.5 Minimal Sums Using Table Reduction Techniques 198
4.13.6 Multiple-Output Minimal Products 201
4.14 Variable-Entered Karnaugh Maps 202
4.14.1 Constructing Variable-Entered Maps 203
4.14.2 Reading Variable-Entered Maps for Minimal Sums 207
4.14.3 Minimal Products 212
4.14.4 Incompletely Specified Functions 213
4.14.5 Maps Whose Entries Are Not Single-Variable Functions 218
Problems 222

Chapter 5 Logic Design with MSI Components and Programmable Logic Devices 230
5.1 Binary Adders and Subtracters 231
5.1.1 Binary Subtracters 233
5.1.2 Carry Lookahead Adder 236
5.1.3 Large High-Speed Adders Using the Carry Lookahead Principle 238
5.2 Decimal Adders 242
5.3 Comparators 246
5.4 Decoders 248
5.4.1 Logic Design Using Decoders 249
5.4.2 Decoders with an Enable Input 256
5.5 Encoders 260
5.6 Multiplexers 262
5.6.1 Logic Design with Multiplexers 266
5.7 Programmable Logic Devices (PLDs) 276
5.7.1 PLD Notation 279
5.8 Programmable Read-Only Memories (PROMs) 279
5.9 Programmable Logic Arrays (PLAs) 283
5.10 Programmable Array Logic (PAL) Devices 292
Problems 294

Chapter 6 Flip-Flops and Simple Flip-Flop Applications 301
6.1 The Basic Bistable Element 302
6.2 Latches 303
6.2.1 The SR Latch 304
6.2.2 An Application of the SR Latch: A Switch Debouncer 305
6.2.3 The S̄R̄ Latch 307
6.2.4 The Gated SR Latch 308
6.2.5 The Gated D Latch 309
6.3 Timing Considerations 310
6.3.1 Propagation Delays 310
6.3.2 Minimum Pulse Width 312
6.3.3 Setup and Hold Times 312
6.4 Master-Slave Flip-Flops (Pulse-Triggered Flip-Flops) 313
6.4.1 The Master-Slave SR Flip-Flop 314
6.4.2 The Master-Slave JK Flip-Flop 317
6.4.3 0's and 1's Catching 319
6.4.4 Additional Types of Master-Slave Flip-Flops 320
6.5 Edge-Triggered Flip-Flops 321
6.5.1 The Positive-Edge-Triggered D Flip-Flop 321
6.5.2 Negative-Edge-Triggered D Flip-Flops 324
6.5.3 Asynchronous Inputs 324
6.5.4 Additional Types of Edge-Triggered Flip-Flops 326
6.5.5 Master-Slave Flip-Flops with Data Lockout 328
6.6 Characteristic Equations 329
6.7 Registers 332
6.8 Counters 337
6.8.1 Binary Ripple Counters 337
6.8.2 Synchronous Binary Counters 340
6.8.3 Counters Based on Shift Registers 345
6.9 Design of Synchronous Counters 347
6.9.1 Design of a Synchronous Mod-6 Counter Using Clocked JK Flip-Flops 348
6.9.2 Design of a Synchronous Mod-6 Counter Using Clocked D, T, or SR Flip-Flops 352
6.9.3 Self-Correcting Counters 356
Problems 358

Chapter 7 Synchronous Sequential Networks 367
7.1 Structure and Operation of Clocked Synchronous Sequential Networks 368
7.2 Analysis of Clocked Synchronous Sequential Networks 371
7.2.1 Excitation and Output Expressions 373
7.2.2 Transition Equations 374
7.2.3 Transition Tables 375
7.2.4 Excitation Tables 377
7.2.5 State Tables 379
7.2.6 State Diagrams 380
7.2.7 Network Terminal Behavior 382
7.3 Modeling Clocked Synchronous Sequential Network Behavior 385
7.3.1 The Serial Binary Adder as a Mealy Network 385
7.3.2 The Serial Binary Adder as a Moore Network 388
7.3.3 A Sequence Recognizer 390
7.3.4 A 0110/1001 Sequence Recognizer 393
7.3.5 A Final Example 396
7.4 State Table Reduction 398
7.4.1 Determining Equivalent Pairs of States 399
7.4.2 Obtaining the Equivalence Classes of States 405
7.4.3 Constructing the Minimal State Table 406
7.4.4 The 0110/1001 Sequence Recognizer 410
7.5 The State Assignment 415
7.5.1 Some Simple Guidelines for Obtaining State Assignments 418
7.5.2 Unused States 422
7.6 Completing the Design of Clocked Synchronous Sequential Networks 424
7.6.1 Realizations Using Programmable Logic Devices 432
Problems 436

Chapter 8 Algorithmic State Machines 444
8.1 The Algorithmic State Machine 444
8.2 ASM Charts 447
8.2.1 The State Box 448
8.2.2 The Decision Box 449
8.2.3 The Conditional Output Box 450
8.2.4 ASM Blocks 450
8.2.5 ASM Charts 456
8.2.6 Relationship between State Diagrams and ASM Charts 459
8.3 Two Examples of Synchronous Sequential Network Design Using ASM Charts 461
8.3.1 A Sequence Recognizer 461
8.3.2 A Parallel (Unsigned) Binary Multiplier 463
8.4 State Assignments 468
8.5 ASM Tables 470
8.5.1 ASM Transition Tables 470
8.5.2 Assigned ASM Transition Tables 472
8.5.3 Algebraic Representation of Assigned Transition Tables 475
8.5.4 ASM Excitation Tables 477
8.6 ASM Realizations 479
8.6.1 Realizations Using Discrete Gates 479
8.6.2 Realizations Using Multiplexers 484
8.6.3 Realizations Using PLAs 487
8.6.4 Realizations Using PROMs 490
8.7 Asynchronous Inputs 491
Problems 493

Chapter 9 Asynchronous Sequential Networks 505
9.1 Structure and Operation of Asynchronous Sequential Networks 506
9.2 Analysis of Asynchronous Sequential Networks 510
9.2.1 The Excitation Table 512
9.2.2 The Transition Table 514
9.2.3 The State Table 516
9.2.4 The Flow Table 517
9.2.5 The Flow Diagram 519
9.3 Races in Asynchronous Sequential Networks 520
9.4 The Primitive Flow Table 522
9.4.1 The Primitive Flow Table for Example 9.3
9.4.2 The Primitive Flow Table for Example 9.4 526
9.5 Reduction of Input-Restricted Flow Tables 529
9.5.1 Determination of Compatible Pairs of States 530
9.5.2 Determination of Maximal Compatibles 533
9.5.3 Determination of Minimal Collections of Maximal Compatible Sets 535
9.5.4 Constructing the Minimal-Row Flow Table 536
9.6 A General Procedure for Flow Table Reduction 538
9.6.1 Reducing the Number of Stable States 538
9.6.2 Merging the Rows of a Primitive Flow Table 540
9.6.3 The General Procedure Applied to Input-Restricted Primitive Flow Tables 543
9.7 The State-Assignment Problem and the Transition Table 545
9.7.1 The Transition Table for Example 9.3 546
9.7.2 The Transition Table for Example 9.4 550
9.7.3 The Need for Additional State Variables 551
9.7.4 A Systematic State-Assignment Procedure 555
9.8 Completing the Asynchronous Sequential Network Design 557
9.9 Static and Dynamic Hazards in Combinational Networks 561
9.9.1 Static Hazards 562
9.9.2 Detecting Static Hazards 565
9.9.3 Eliminating Static Hazards 568
9.9.4 Dynamic Hazards 570
9.9.5 Hazard-Free Combinational Logic Networks 571
9.9.6 Hazards in Asynchronous Networks Involving Latches 571
9.10 Essential Hazards 573
9.10.1 Example of an Essential Hazard 574
9.10.2 Detection of Essential Hazards 575
Problems 578

Appendix A Digital Circuits 589
A.1 The pn Junction Semiconductor Diode 590
A.1.1 Semiconductor Diode Behavior 590
A.1.2 Semiconductor Diode Models 592
A.2 Diode Logic 593
A.2.1 The Diode And-Gate 594
A.2.2 The Diode Or-Gate 595
A.2.3 Negative Logic 596
A.3 The Bipolar Junction Transistor 597
A.3.1 Simplified dc Transistor Operation 598
A.3.2 Normal Active Mode 600
A.3.3 Inverted Active Mode 602
A.3.4 Cutoff Mode 603
A.3.5 Saturation Mode 605
A.3.6 Silicon npn Transistor Characteristics 606
A.3.7 Summary 608
A.4 The Transistor Inverter 608
A.4.1 Loading Effects 611
A.5 Gate Performance Considerations 614
A.5.1 Noise Margins 614
A.5.2 Fan-Out 616
A.5.3 Speed of Operation and Propagation Delay Times 616
A.5.4 Power Dissipation 618
A.6 Diode-Transistor Logic (DTL) 618
A.6.1 Loading Effects 620
A.6.2 Modified DTL 621
A.7 Transistor-Transistor Logic (TTL) 622
A.7.1 Wired Logic 625
A.7.2 TTL with Totem-Pole Output 626
A.7.3 Three-State Output TTL 630
A.7.4 Schottky TTL 632
A.7.5 Concluding Remarks 634
A.8 Emitter-Coupled Logic (ECL) 634
A.8.1 The Current Switch 635
A.8.2 The Emitter-Follower Level Restorers 638
A.8.3 The Reference Supply 639
A.8.4 Wired Logic 639
A.9 The MOS Field-Effect Transistor 641
A.9.1 Operation of the n-Channel, Enhancement-Type MOSFET 641
A.9.2 The n-Channel, Depletion-Type MOSFET 645
A.9.3 The p-Channel MOSFETs 646
A.9.4 Circuit Symbols 646
A.9.5 The MOSFET as a Resistor 647
A.9.6 Concluding Remarks 648
A.10 NMOS and PMOS Logic 649
A.10.1 The NMOS Inverter (Not-Gate) 649
A.10.2 NMOS Nor-Gate 650
A.10.3 NMOS Nand-Gate 651
A.10.4 PMOS Logic 652
A.10.5 Performance 652
A.11 CMOS Logic 654
A.11.1 The CMOS Inverter (Not-Gate) 654
A.11.2 CMOS Nor-Gate 655
A.11.3 CMOS Nand-Gate 656
A.11.4 Performance 657
Problems 657

Appendix B Tutorials 665
B.1 A Gentle Introduction to Altera MAX+plus II 10.1 Student Edition 665
B.2 A Gentle Introduction to LogicWorks™4 678

Bibliography 684

Index 687

Additional Resources
1. CD-ROM with Altera MAX+plus II and Multisim 2001 (included with book)
2. Website at http://www.mhhe.com/givone that includes labs for both Altera MAX+plus II and LogicWorks™4
PREFACE

With the strong impact of digital technology on our everyday lives, it is not surpris-
ing that a course in digital concepts and design is a standard requirement for majors
in computer engineering, computer science, and electrical engineering. An introduc-
tory course is frequently encountered in the first or second year of their undergradu-
ate programs. Additional courses are then provided to refine and extend the basic
concepts of the introductory course.
This book is suitable for an introductory course in digital principles with em-
phasis on logic design as well as for a more advanced course. With the exception of
the appendix, it assumes no background on the part of the reader. The intent of the
author is not to just present a set of procedures commonly encountered in digital de-
sign but, rather, to provide justifications underlying such procedures. Since no back-
ground is assumed, the book can be used by students in computer engineering, com-
puter science, and electrical engineering.
The approach taken in this book is a traditional one. That is, emphasis is on the
presentation of basic principles of logic design and the illustration of each of these
principles. The philosophy of the author is that a first course in logic design should
establish a strong foundation of basic principles as provided by a more traditional
approach before engaging in the use of computer-aided design tools. Once basic
concepts are mastered, the utilization of design software becomes more meaningful
and allows the student to use the software more effectively. Thus, it is the under-
standing of basic principles on which this book focuses and the application of these
principles to the analysis and design of combinational and sequential logic net-
works. Each topic is approached by first introducing the basic theory and then illus-
trating how it applies to design. For those people who want to use CAD tools, we
have included a CD-ROM containing Altera MAX+plus II 10.1 Student Edition, as
well as software tutorials in an Appendix.

SCOPE OF THE BOOK


Chapter 1 discusses the differences between continuous, i.e., analog, and discrete,
i.e., digital, networks and devices. Then, the basic operation of the digital computer
is introduced as an example of a system that utilizes most of the concepts presented
in the remaining chapters. The chapter concludes with an overview of the topics
that will be introduced.
In Chapter 2 the general concepts of positional number systems, arithmetic,
and conversion techniques are introduced. This material is developed for arbitrary
positive integer bases rather than simply for the binary number system to empha-
size the similarity of all positional number systems and their manipulations. Then,
various codes and their properties are discussed with emphasis on error detection
and correction.
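For readers who want a preview of Chapter 2's iterative method for converting integers, it amounts to repeated division by the target radix, with the remainders read off as digits. The following Python sketch is an illustration only, not an excerpt from the chapter, and the function name is an invention of this note:

```python
def to_base(n, r):
    """Convert a nonnegative integer n to radix r by the iterative
    (repeated-division) method: each remainder is one digit of the
    result, produced least-significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, r)  # quotient feeds the next step
        digits.append(remainder)
    return digits[::-1]  # most significant digit first

print(to_base(13, 2))    # [1, 1, 0, 1]  since 13 = 1101 in binary
print(to_base(255, 16))  # [15, 15]      since 255 = FF in hexadecimal
```

Because the procedure is stated for an arbitrary radix r, the same few lines convert to binary, octal, hexadecimal, or any other positive integer base.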


The two-valued Boolean algebra is introduced in Chapter 3. How Boolean ex-
pressions are written, manipulated, and simplified is presented. Since there is a one-
to-one correspondence between the two-valued Boolean algebra and logic net-
works, it is then shown how the algebra can serve as a mathematical model for the
behavior and structure of combinational logic networks. The chapter concludes with
a discussion of the gate properties that are relevant to logic networks, i.e., noise
margin, fan-out, propagation delays, and power dissipation.
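Because the two-valued Boolean algebra has only the elements 0 and 1, any proposed identity can be verified by exhaustion over all variable assignments, which is part of why it models logic networks so directly. A small Python sketch follows (illustrative only; the helper names are inventions of this note, not the book's notation):

```python
from itertools import product

def holds(identity, nvars):
    """Return True if a two-valued Boolean identity holds for every
    assignment of 0s and 1s to its nvars variables (proof by exhaustion)."""
    return all(identity(*values) for values in product((0, 1), repeat=nvars))

# Over the values {0, 1}: OR is max, AND is min, complement is (1 - x).
# Absorption theorem: x + x*y = x
absorption = lambda x, y: max(x, min(x, y)) == x
# One of DeMorgan's theorems: (x + y)' = x' * y'
demorgan = lambda x, y: (1 - max(x, y)) == min(1 - x, 1 - y)

print(holds(absorption, 2))  # True
print(holds(demorgan, 2))    # True
```

An exhaustive check of 2**n cases is exactly the "perfect induction" argument available in any finite algebra, and it is feasible here precisely because the algebra is two-valued.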
An important application of the Boolean algebra is to obtain those expressions
which can best be associated with optimal networks. Under the assumption that the
reduction in the delay time of a network is of paramount importance, it is possible to
obtain efficient networks by systematic procedures. Two methods for obtaining min-
imal expressions are presented in Chapter 4. The first method, Karnaugh maps, is a
graphical procedure that permits minimal expressions to be obtained very rapidly.
However, since the procedure relies upon the recognition of patterns, there is a limit
to the complexity of a problem for which the procedure is effective. This limit seems
to be problems of six variables. A second method for obtaining minimal expressions
is the Quine-McCluskey method. This method involves just simple mathematical
manipulations. For problems with many variables, the Quine-McCluskey method
can be carried out on a digital computer. The concepts of both approaches are then
extended to a set of Boolean expressions describing multiple-output networks. The
chapter concludes with a variation of the Karnaugh map concept, called variable-
entered Karnaugh maps, in which Boolean functions can appear as map entries.
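As the paragraph above notes, the Quine-McCluskey method is mechanical enough to run on a computer. Its core step, repeatedly combining pairs of terms that differ in exactly one literal until no further combinations are possible, can be sketched in a few lines of Python. This is an illustration only; the function name and the (value, mask) representation of an implicant are choices of this sketch, not the book's notation:

```python
from itertools import combinations

def prime_implicants(minterms):
    """Quine-McCluskey combining step. An implicant is a pair
    (value, mask): set mask bits mark eliminated ('dash') positions.
    Two terms with identical masks that differ in exactly one bit are
    merged; any term that is never merged is a prime implicant."""
    terms = {(m, 0) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(terms, 2):
            (va, ma), (vb, mb) = a, b
            diff = va ^ vb
            if ma == mb and bin(diff).count("1") == 1:
                merged.add((va & ~diff, ma | diff))  # dash out the differing bit
                used.update((a, b))
        primes |= terms - used  # uncombined terms are prime
        terms = merged
    return primes

# f(x, y) = sum m(0, 1, 2, 3) reduces to the single implicant "--" (constant 1)
print(prime_implicants({0, 1, 2, 3}))  # {(0, 3)}
```

Unlike the pattern recognition required by Karnaugh maps, every operation here is a simple comparison, which is why the method scales to problems with many variables when programmed.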
Chapter 5 is concerned with several MSI and LSI components. The intent of
this chapter is to investigate combinational networks that are commonly encoun-
tered in digital systems. Several types of adders and subtracters are discussed.
These include binary and decimal adders as well as high-speed adders using the
carry lookahead concept. Also included in this chapter are discussions on compara-
tors, decoders, encoders, and multiplexers. In the case of decoders and multiplexers,
attention is given to their use as generic logic design devices. The final part of the
chapter involves the three basic structures of programmable logic devices: program-
mable read-only memories, programmable logic arrays, and programmable array
logic devices. Emphasis is placed on their utilization for the realization of logic net-
works, with special attention given to their strengths and weaknesses and the con-
straints placed on a logic design utilizing them.
Chapter 6 begins the presentation on sequential logic networks. In this chapter,
various types of flip-flops, i.e., JK, D, T, and SR flip-flops, are introduced. The oper-
ational behavior of the three categories of flip-flops, i.e., latches, edge-triggered,
and master-slave flip-flops, is discussed in detail. The remainder of the chapter is
concerned with some simple flip-flop applications, in particular, registers and coun-
ters. Ripple and synchronous counters are presented and compared. The chapter
concludes with a general design procedure for synchronous counters. This proce-
dure serves as a basis for synchronous sequential logic design, which is elaborated
upon in the next two chapters.
Chapters 7 and 8 involve clocked synchronous sequential networks. In Chapter
7 the classic Mealy and Moore models of a synchronous sequential network are pre-
sented. First, these networks are analyzed to establish various tabular representa-
tions of network behavior. Then, the process is reversed and synthesis is discussed.
Chapter 8 also involves the design of clocked synchronous sequential networks;
however, this time using the algorithmic state machine model. The relationship be-
tween the classic Mealy/Moore models and the algorithmic state machine model is
discussed as well as the capability of the algorithmic state machine model to handle
the controlling of an architecture of devices.
In Chapter 9 asynchronous sequential networks are studied. Paralleling the ap-
proach taken for synchronous sequential networks, the analysis of asynchronous se-
quential networks is first undertaken and then, by reversing the analysis procedure,
the synthesis of these networks is presented. Included in this chapter is also a dis-
cussion on static and dynamic hazards. Although these hazards occur in combina-
tional networks, their study is deferred to this chapter, since these hazards can have
a major effect on asynchronous network behavior. A great deal of attention is given
to the many design constraints that must be satisfied to achieve a functional design
of an asynchronous network. In addition to the static and dynamic hazards, the con-
cepts of races, the importance of the state assignment, and the effects of essential
hazards are addressed.
An appendix on digital electronics is included for completeness. It is not in-
tended to provide an in-depth study on digital electronics, since such a study should
be reserved for a course in itself. Rather, its inclusion is to provide the interested
reader an introduction to actual circuits that can occur in digital systems and the
source of constraints placed upon a logic designer. For this reason, the appendix does
not delve into circuit design but, rather, only into the analysis of electronic digital
circuits. Emphasis is placed on the principles of operation of TTL, ECL, and MOS
logic circuits. Since circuits are analyzed, the appendix does assume the reader has
an elementary knowledge of linear circuit analysis. In particular, the reader should be
familiar with Ohm’s law along with Kirchhoff’s current and voltage laws.
Another appendix with software tutorials is also included. These tutorials, pro-
vided by two contributors, include one on Altera MAX+plus II 10.1 Student Edition
and one on LogicWorks™4. The tutorials are meant to provide basic introductions to
these tools for those people who are using them in their course.

HOMEWORK PROBLEMS
With the exception of Chapter 1, each chapter includes a set of problems. Some of
these problems provide for reinforcement of the reader’s understanding of the ma-
terial, some extend the concepts presented in the chapter, and, finally, some are
applications-oriented.

ADDITIONAL RESOURCES
The expanded book website at http://www.mhhe.com/givone includes a download-
able version of the Solutions Manual for instructors only and PowerPoint slides.
There are also a variety of labs using both the Altera Software and LogicWorks. A
CD-ROM containing Altera’s MAX+plus II CAD software and Multisim 2001 is
included free with every copy of the book.

WHAT CAN BE COVERED IN A COURSE


More material is included in this book than can be covered in a one-semester
course. This allows the instructor to tailor the book to the background of the stu-
dents and the time available. Different ways the book can be used include:
■ A possible one-semester course based on this book would include Chapters 1
to 3, Sections 4.1 to 4.7, and Chapters 5 to 7. Sections 4.1 to 4.7 involve the
simplification of Boolean functions using Karnaugh maps.
■ Sections 4.8 to 4.11, involving the Quine-McCluskey method, might also be
included in a slightly more quickly paced course.
■ The material of Sections 4.12 to 4.14 and Chapters 8 and 9, along with an
introduction to CAD tools (including Appendix B), can serve as the basis for a
second-semester course.
■ Appendix B, the CD-ROM, and labs at the website are optional material that
can be worked in as needed.

A few formal proofs have been included for the interested reader. However, these
proofs are clearly delineated and can be skipped without loss of continuity.

ACKNOWLEDGMENTS
I would like to thank the reviewers of the manuscript for their comments and sug-
gestions. These include:

Kenneth J. Breeding, The Ohio State University
Kirk W. Cameron, University of South Carolina
Mehmet Celenk, Ohio University
Travis E. Doom, Wright State University
Richard W. Freeman, Iowa State University
Bruce A. Harvey, Florida State University
Raj Katti, North Dakota State University
Larry Kinney, University of Minnesota
Wagdy H. Mahmoud, Tennessee Technological University
Jeffery P. Mills, Illinois Institute of Technology
Debashis Mohanty, Texas A&M University
Richard G. Molyet, The University of Toledo
Jane Moorhead, Mississippi State University
Suku Nair, Southern Methodist University
Emil C. Neu, Stevens Institute of Technology
Tatyana D. Roziner, Boston University

Salam Salloum, California State Polytechnic University, Pomona
Susan Schneider, Marquette University
Charles B. Silio, Jr., University of Maryland
Dan Stanzione, Clemson University
A. J. Thomas, Jr., Tennessee State University
Massood Towhidnejad, Embry-Riddle Aeronautical University
Murali Varanasi, University of South Florida
Donald C. Wunsch II, University of Missouri-Rolla

Also, I would like to acknowledge the efforts of the staff at McGraw-Hill, particu-
larly Betsy Jones, Michelle Flomenhoft, and Susan Brusch. In addition, I want to
express my appreciation to Donna and David, who helped with the typing and art-
work in the early drafts of the manuscript. I would also like to thank David for his
work that led to the image on the cover of this book. Finally, I am grateful to my
wife Louise for her support and patience during this project.
Donald D. Givone
ABOUT THE AUTHOR

Donald D. Givone received his B.S.E.E. degree from Rensselaer Polytechnic In-
stitute and the M.S. and Ph.D. degrees in Electrical Engineering from Cornell Uni-
versity. In 1963, he joined the faculty at the University at Buffalo, where he is cur-
rently a Professor in the Department of Electrical Engineering.
He has received several awards for excellence in teaching. He is also the author
of the textbook Introduction to Switching Circuit Theory and the coauthor of the
textbook Microprocessors/Microcomputers: An Introduction, both of which were
published by McGraw-Hill Book Company.

CHAPTER 1

Introduction

A way to measure human progress is through inventions that ease mental and
physical burdens. The digital computer is one such great invention. The ap-
plications of this device seem to have no bounds, and, consequently, new
vistas have opened up for humans to challenge.
The digital computer, however, is only one of many systems whose design and
operation is based on digital concepts. The idea of representing information in a dis-
crete form and the manipulation of such information is fundamental to all digital
systems. In the ensuing chapters of this book we study number and algebraic con-
cepts, logic design, digital networks, and digital circuits. These are the digital prin-
ciples that serve as the foundation for the understanding and design of digital com-
puters and, in general, digital systems. ■

1.1 THE DIGITAL AGE


Digital systems are not really something recent. The first mechanical digital calcula-
tor, developed by the French mathematician Blaise Pascal, dates back to 1642. Even
the concept of a general-purpose digital computer dates to 1833. This innovation
was the work of the English mathematician and scientist Charles Babbage. How-
ever, it was not until the 1930s that there appeared a fully realized practical digital
system—the telephone switching system. This system utilized the electromechani-
cal relay as its basic digital element.
The electromechanical relay was also used in early design attempts of the
digital computer during the late 1930s and early 1940s. However, in order to
achieve a higher speed of operation, it was necessary to utilize electronic de-
vices. Thus, during the late 1940s and early 1950s a great deal of effort went into
the development of general-purpose electronic digital computers with vacuum
tubes as their basic digital element. These early electronic computers were only
one of a kind. The commercial digital computer became a reality in 1951 with
the UNIVAC (UNIVersal Automatic Computer). A total of 48 of these computers
were constructed.
Vacuum tubes consumed a great amount of electrical power, dissipated a great
deal of heat, and had short life spans. It is no wonder that with the development of
the transistor a new and more rapid expansion in digital techniques emerged. More
recently, advancements in solid-state electronics have enabled the fabrication of
complete digital circuits as a single entity called integrated circuits. As the density
of components in an integrated circuit increased, the concept of a programmable
digital device, the microprocessor, became possible. With more and more support-
ive circuitry integrated with the microprocessor being achieved, a computer-on-a-
chip has become a reality.
The combination of microelectronics and digital concepts has provided highly-
reliable, cost-effective devices. As a consequence, there has been a rapid growth in
the development of new digital systems. The applications of digital devices will
have far-reaching effects upon the technology of tomorrow.

1.2 ANALOG AND DIGITAL


REPRESENTATIONS OF INFORMATION
There are two general ways in which information is represented—analog form or dig-
ital form. In a digital representation, the information is denoted by a finite sequence of
digits. Such a form is also said to be discrete. On the other hand, in an analog repre-
sentation, a continuum is used to denote the information. Examples of such a contin-
uum are the range of voltages between certain limits and an angular displacement.
To illustrate the idea of information representation, consider time as the infor-
mation. A digital watch which expresses time in a numerical form, i.e., a sequence
of digits, is a digital representation. A conventional watch expresses time in an ana-
log form as the angular position of the watch hands.
Since analog information involves a continuum, the reading of the information
becomes a measurement within the continuum, the precision of the measurement
being the number of digits that are obtained from the measurement. Normally there
is a limitation as to the precision that can be achieved when handling analog infor-
mation. In the case of digital information, any degree of precision becomes possible
simply by using more digits in the representation.
Systems that manipulate or process information, e.g., computers, have been de-
signed for both analog and digital information. However, with today’s technology,
digital systems have a lower component cost, higher reliability, and greater versatil-
ity than analog systems.

1.3 THE DIGITAL COMPUTER


This book is not directly concerned with the digital computer. However, it is perhaps
the most fascinating of all digital systems. Furthermore, because of its generality, it
encompasses many digital concepts. For this reason, a brief and simplistic discussion
on a basic organization of a digital computer and its operation is appropriate.

Basically, a digital computer receives numbers, called data, performs opera-


tions upon these numbers, and forms new numbers. The desired operations to be
performed by the computer are also given to the computer in the form of numbers
which are called instructions. Since numbers are stored and manipulated in the
computer, a number system which lends itself to easy electronic representation is
necessary. As a consequence, the binary number system or a coded binary number
system is most frequently used, since highly reliable electronic devices with two
stable states are easily fabricated.
A digital computer can solve very complex problems. However, it can perform
only very simple operations. The solution of a problem thus becomes a matter of re-
ducing the problem to a long sequence of very elementary mathematical and logical
operations called a program. Even though these operations are performed one at a
time, the computer has the capability to perform them at a very high speed. Two
features of the computer permit it to perform a great number of operations in a short
time. First, it has the capability to store the data and the sequence of instructions to
solve a problem. Second, it has the capability to sequence through the given instruc-
tions without human intervention. Thus the computer has the necessary data at its
disposal to solve a problem and is capable of performing the sequence of operations
to solve a problem without slowing down or stopping between operations. Without
these two features, it would be necessary to stop the computer many times in order
to insert data, copy results, and determine the next operation.

1.3.1 The Organization of a Digital Computer


A simple way of viewing a digital computer is to subdivide it into five basic units:
arithmetic, control, memory, input, and output. A computer organization involving
the basic units and the flow lines showing the routing of information required for
carrying out the elementary operations is shown in Fig. 1.1. These units are com-
prised of logic networks that provide for the manipulation and modification of bi-
nary information, i.e., the data and instructions. Provision is also made for holding
binary information within these networks. This storage is achieved with devices

[Figure 1.1 (block diagram): data and instructions flow from the input unit to the
memory unit; intermediate and final results pass between the arithmetic and memory
units; decision information flows from the arithmetic unit to the control unit; final
results go to the output unit. Solid lines denote information signals, dashed lines
denote control signals.]

Figure 1.1 A basic organization of a digital computer.



called registers. In short, the operation of a digital computer and, in general, a digi-
tal system involves a series of data and instruction transfers from register to register
with modifications and manipulations occurring during these transfers. To achieve
this, many registers occur within a digital system.
Most mathematical and logical operations on data are performed in the arith-
metic unit. The simple mathematical operations of a computer are addition, subtrac-
tion, multiplication, and division. The more complex mathematical operations, such
as integration, taking of square roots, and formulation of trigonometric functions,
can be reduced to the basic operations possible in the computer and are performed
by a program. One of the more important of the logical operations a digital com-
puter can perform is that of sensing the sign of a number. Depending upon whether
the computer senses a positive or negative sign on a number, it can determine
whether to perform one set of computations or an alternate set.
Referring to Fig. 1.1, it is seen that there is a two-way communication between
the arithmetic unit and the memory unit. The arithmetic unit receives from the
memory unit numbers on which operations are to be performed and sends interme-
diate and final results to the memory unit. All necessary data for the solution to the
problem being run on the computer are stored in the memory unit. The memory unit
is divided into substorage units (i.e., registers), each referenced by an address,
which is simply an integral numerical designator. Only one number is stored at a
particular address. Also stored in the memory unit, each at a separate address, are
the instructions. Each of these consists of a command, which identifies the type of
operation to be performed, plus one or more addresses, which indicate where the
numerical data used in the operation are located. The instructions and the numerical
data are placed in the memory unit before the start of the program run.
The control unit receives instructions, one at a time, from the memory unit for
interpretation. Normally, the program instructions are in sequential order within the
memory unit. By means of a program counter located in the control unit, indicating
the next instruction’s address, the correct instruction is transferred into the control
unit, where it is held in the instruction register for decoding. After the decoding
process, the control unit causes connections to be made in the various units so that
each individual instruction is properly carried out. With the connections properly
made, the arithmetic unit is caused to act as, say, an adder, subtracter, multiplier, or
divider, as the particular instruction demands. The control unit also causes connec-
tions to be made in the memory unit so that the correct data for the instruction are
obtained by the arithmetic unit. As shown in Fig. 1.1, the control unit is capable of
receiving information from the arithmetic unit. It is along this path that, say, the
sign of a number is sent. Such information gives the computer its decision-making
capability mentioned previously. In addition, the control unit sends signals to the
input and output units in order to have these units work at the proper time.
The input and output units are the contacts between the computer and the out-
side world. They act as buffers, translating data between the different speeds and
languages with which computers and humans, or other systems, operate. The input
unit receives data and instructions from the outside world and sends them to the
memory unit. The output unit receives numerical results and communicates them to

the user or to another system. Input and output units may be of a very simple nature
or highly complicated, depending on the application the computer is expected to
handle.

1.3.2 The Operation of a Digital Computer


Having explained the function of each of the various computer units, let us now
consider the sequence of events that occur during a program run. The control unit
oversees the operation by cycling through three phases: fetch, decode, and execute.
As stated previously, the instructions are initially placed in the memory unit in se-
quential order as well as the necessary data. When the program is ready to be run,
the program counter is set to indicate the address of the first instruction and the con-
trol unit is set to its fetch phase.
During the fetch phase, the content of the program counter, which is an ad-
dress, is sent to the memory unit. The address is decoded by digital logic in the
memory unit and a copy of the instruction located at that address in memory is sent
to the control unit, where it is placed in the instruction register. At this time the pro-
gram counter is incremented so that the address of the next instruction is available
on the next fetch phase of the control unit.
Having completed the fetch phase, the control unit proceeds to the decode phase
of its cycle. During this phase, the command portion of the instruction is deciphered
and the appropriate connections made in accordance with the indicated operation.
Finally, the control unit enters its execute phase. During this time, the control
unit generates the appropriate signals needed to carry out the instruction. If the in-
struction involves a stored operand, then the address portion of the instruction,
which indicates the operand location, is sent to the memory unit so that the operand
is retrieved and sent to the appropriate computer unit. After completing the execute
phase, the control unit returns to its fetch phase and the fetch-decode-execute se-
quence is repeated for the next instruction. This process continues until a halt-type
instruction is encountered.
In the above discussion, it was assumed that the program instructions are stored
and executed sequentially. In general, however, it is frequently necessary to deviate
from the sequential order, for example, as a consequence of the decision information
from the arithmetic unit. To achieve this, special jump-type instructions are provided
in the instruction set of the computer. The address portion of a jump-type instruction
indicates the location of the next instruction to be fetched. Thus, when a jump-type
instruction is encountered and the conditions for a jump are satisfied, the address
portion of the instruction is placed into the program counter. In this way, when the
control unit enters its next fetch phase, the appropriate next instruction is obtained.
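The fetch-decode-execute cycle just described can be sketched as a toy simulator. Python here is only notation: the instruction format (a command plus one address), the opcode names, and the memory layout below are invented for illustration and do not correspond to any real machine.

```python
# Toy fetch-decode-execute loop for a made-up one-accumulator machine.
# Instructions are (command, address) pairs; data are plain integers.

def run(memory):
    acc = 0    # accumulator register in the arithmetic unit
    pc = 0     # program counter in the control unit
    while True:
        # Fetch: copy the instruction at the program counter's address
        # into the "instruction register", then increment the counter.
        opcode, address = memory[pc]
        pc += 1
        # Decode and execute the command portion of the instruction.
        if opcode == "LOAD":     # acc <- memory[address]
            acc = memory[address]
        elif opcode == "ADD":    # acc <- acc + memory[address]
            acc = acc + memory[address]
        elif opcode == "JUMP":   # place the address in the program counter
            pc = address
        elif opcode == "HALT":
            return acc

program = {
    0: ("LOAD", 4),   # fetch the operand stored at address 4
    1: ("ADD", 5),    # add the operand stored at address 5
    2: ("HALT", 0),
    4: 30,            # data
    5: 12,            # data
}
print(run(program))   # -> 42
```

Note how the unconditional JUMP simply overwrites the program counter, which is exactly how the jump-type instructions described above redirect the next fetch.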

1.4 AN OVERVIEW
As indicated in the previous section, this book is not about digital computers per se
but rather about the fundamentals behind the logic design and behavior of digital
systems. The digital computer was introduced simply as an excellent example of a

digital system incorporating networks whose design and operation are the subject of
this book. In general, digital systems involve the manipulation of numbers. Thus, in
Chap. 2, the concept of numbers as a representation of discrete information is stud-
ied, along with how they are manipulated to achieve arithmetic operations.
It is the function of digital networks to provide for the manipulation of discrete
information. In Chap. 3 an algebra, called a Boolean algebra, is introduced that is
capable of describing the behavior and structure of logic networks, i.e., networks
that make up a digital system. In this way, digital network design is accomplished
via the manipulation of expressions in the algebra. The study of digital network de-
sign from this point of view is referred to as logic design. Chapter 4 then continues
with the study of Boolean algebra to achieve expressions describing optimal logic
networks.
Having developed mathematical tools for the logic design of digital networks,
Chap. 5 studies several basic logic networks commonly encountered in digital sys-
tems, e.g., adders, subtracters, and decoders. These are all networks found in the
digital computer. In addition, generic, complex logic networks have also been de-
veloped, referred to as programmable logic devices. In the second part of Chap. 5,
these devices are studied for the purpose of showing how they are used for the de-
sign of specialized logic networks.
It was seen in the discussion on the digital computer that there is a need for the
storage of digital information. The basic digital storage device in a digital system is
the flip-flop. Chapter 6 deals with the operation of several flip-flop structures. Fi-
nally, the application of flip-flops to the logic design of registers and counters is
presented.
The remaining three chapters continue with the logic design of those networks
that involve the storage of information. These logic networks are referred to as sequen-
tial networks. There are two general classes of sequential networks—synchronous and
asynchronous. Chapters 7 and 8 deal with the logic design of synchronous sequen-
tial networks, and Chap. 9 deals with the logic design of asynchronous sequential
networks.
For completeness, an appendix is included which discusses the electronics of
digital circuits. In the previous chapters, all designs are achieved on a logic level,
i.e., without regard to the actual electronic circuits used. Several types of electronic
digital circuits have been developed to realize the logic elements used in the net-
works of the previous chapters. It is the analysis of these electronic circuits, along
with a comparison of their advantages and disadvantages, upon which the appendix
focuses.
CHAPTER 2

Number Systems, Arithmetic, and Codes

As was mentioned in the previous chapter, the study of digital principles deals
with discrete information, that is, information that is represented by a finite
set of symbols. Certainly, numerical quantities are examples of discrete in-
formation. However, symbols can also be associated with information other than
numerical quantities, as, for example, the letters of the alphabet. Nonnumeric sym-
bols can always be encoded using a set of numeric symbols with the net result that
discrete information appears as numbers.
At this time various number systems and how they denote numerical quantities
are studied. One number system in particular, the binary number system, is very
useful since it needs only two digit symbols. This is important, since many two-
state circuits exist which can then be used for the processing of these symbols. This
chapter is also concerned with how arithmetic is performed with these different
number systems. Finally, it is shown how the numerical symbols are used to encode
information. The encoding process can result in encoded information having desir-
able properties that provide for reliability and ease of interpretation. ■

2.1 POSITIONAL NUMBER SYSTEMS


The decimal number system is commonly accepted in our everyday lives. A typical
decimal number is 872.64. This is really a contraction of the polynomial

872.64 = 800 + 70 + 2 + 0.6 + 0.04
       = 8 × 100 + 7 × 10 + 2 × 1 + 6 × 0.1 + 4 × 0.01
       = 8 × 10^2 + 7 × 10^1 + 2 × 10^0 + 6 × 10^(−1) + 4 × 10^(−2)


Each of the symbols in this number, i.e., 8, 7, 2, 6, and 4, by itself denotes an
integer quantity. Furthermore, it is seen that the 8 in this case is weighted by the

quantity 100 (or 10^2), while the 7 is weighted by only 10 (or 10^1). If the 8 and 7
were interchanged, then the 7 would be weighted by 100 and the 8 by 10. In gen-
eral, the weighting factor is totally determined by the location of the symbol within
the number. Thus, the quantity being denoted by a symbol (say, 8) is a function of
both the symbol itself and its position.
The decimal number system is an example of a radix-weighted positional num-
ber system or, simply, positional number system.* In a positional number system
there is a finite set of symbols called digits. Each digit represents a nonnegative in-
teger quantity. The number of distinct digits in the number system defines the base
or radix of the number system. Formally, a positional number system is a method
for representing quantities by means of the juxtaposition of digits, called numbers,
such that the value contributed by each digit depends upon the digit symbol itself
and its position within the number relative to a radix point. The radix point is a de-
limiter used to separate the integer and fraction parts of a number. Within this posi-
tioning arrangement each succeeding digit is weighted by a consecutive power of
the base. Considering again the decimal number system, there are 10 distinct digits,
0, 1, 2,...,9, and hence numbers in this system are said to be to the base 10.
From the above definitions it now follows that a general number N in a posi-
tional number system is represented by

N = d_(n−1) d_(n−2) ... d_1 d_0 . d_(−1) d_(−2) ... d_(−m)
  = d_(n−1) × r^(n−1) + d_(n−2) × r^(n−2) + ... + d_1 × r^1 + d_0 × r^0
    + d_(−1) × r^(−1) + d_(−2) × r^(−2) + ... + d_(−m) × r^(−m)

where d_i denotes a digit in the number system such that 0 ≤ d_i ≤ (r − 1), r is the base
of the number system, n is the number of digits in the integer part of N, and m is the
number of digits in the fraction part of N.† When referring to a particular digit in a
number, it is frequently referenced by its order; that is, the power of the base that
weights the digit. Thus, for integer quantities, the least significant digit is called the
Oth-order digit, followed by the Ist-order digit, the 2nd-order digit, etc. Table 2.1 lists
some of the more common positional number systems and their set of digit symbols.
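This defining polynomial can be evaluated mechanically for any digit string. The sketch below uses Python purely as notation; the function name and the DIGITS table are ours, with digit symbols chosen to match Table 2.1.

```python
DIGITS = "0123456789ABCDEF"   # digit symbols through base 16

def positional_value(number, base):
    """Evaluate a digit string as the sum of d_i * base**i about the radix point."""
    integer, _, fraction = number.partition(".")
    value = 0.0
    # Integer digits have orders n-1 down to 0.
    for power, d in enumerate(reversed(integer)):
        value += DIGITS.index(d) * base ** power
    # Fraction digits have orders -1, -2, ..., -m.
    for power, d in enumerate(fraction, start=1):
        value += DIGITS.index(d) * base ** -power
    return value

print(positional_value("1101.101", 2))   # -> 13.625
print(positional_value("872.64", 10))    # 872.64, up to floating-point rounding
```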
Consider now the binary number system. In this system there are only two digit
symbols, 0 and 1. A digit in the binary number system is usually referred to as a bit,
an acronym for binary digit. Thus, 1101.101 is a binary number consisting of seven
binary digits or bits. To avoid possible confusion when writing a number, fre-
quently a decimal subscript is appended to the number to indicate its base. In this
case, the binary number 1101.101 is written as 1101.101(2). This convention will be
adhered to when the base is not apparent from the context or when attention is to be
called to the base of the number system.

*The Roman number system is an example of a nonpositional number system. In this case many
different symbols are used, each having a fixed definite quantity associated with it. The relative location
of a symbol in the number plays a minimal role in determining the total quantity being represented. The
only situation in which position becomes significant is in the special case where the symbol has a
subtractive property as, for example, the I in IV indicates that one should be subtracted from five.
†The powers of the base are written, by convention, as decimal numbers since they are being used as an
index to indicate the number of repeated multiplications of r; e.g., r^3 denotes r × r × r. This is the only
circumstance in which digits not belonging to the number system itself appear in a number representation.

Table 2.1 Positional number systems and their digit symbols

Base    Number system    Digit symbols
2       Binary           0,1
3       Ternary          0,1,2
4       Quaternary       0,1,2,3
5       Quinary          0,1,2,3,4
8       Octal            0,1,2,3,4,5,6,7
10      Decimal          0,1,2,3,4,5,6,7,8,9
12      Duodecimal       0,1,2,3,4,5,6,7,8,9,A,B
16      Hexadecimal      0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F

From Table 2.1 it is seen that for those number systems whose base is less than
10, a subset of the digit symbols of the decimal number system is used. Thus,
732.16(8) is an example of an octal number. In the case of the duodecimal and hexa-
decimal number systems, however, new digit symbols are introduced to denote inte-
ger quantities greater than nine. Typically, the first letters of the alphabet are used
for this purpose. In the duodecimal number system, the quantity 10 is denoted by A
and the quantity 11 is denoted by B. For the hexadecimal number system, the sym-
bols A, B,..., F denote the decimal quantities 10, 11,..., 15, respectively. Hence,
1A6.B2(12) is an example of a number in the duodecimal number system, while
8AC3F.1D4(16) is a number in the hexadecimal number system.
The binary number system is the most important number system in digital tech-
nology. This is attributed to the fact that components and circuits which are binary
in nature (i.e., have two distinct states associated with them) are easily constructed
and are highly reliable. Even in those situations where the binary number system is
not used as such, binary codes are employed to represent information. In computer
technology, the octal and hexadecimal number systems also play a significant role.
This is due to the simplicity of the conversion between the numbers in these sys-
tems and the binary number system, as is shown in Sec. 2.6.
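As a preview of that conversion, Python's built-in integer conversions can confirm the binary/octal/hexadecimal correspondence for integers (the value 11010011(2) is just an arbitrary example of ours):

```python
# Each octal digit covers three bits; each hexadecimal digit covers four bits.
n = int("11010011", 2)   # the binary integer 11010011(2)
print(n)                 # -> 211
print(oct(n))            # -> 0o323  (group the bits in threes: 11 010 011)
print(hex(n))            # -> 0xd3   (group the bits in fours:  1101 0011)
```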

2.2 COUNTING IN A POSITIONAL


NUMBER SYSTEM
Perhaps the most fundamental operation that is performed with numbers is that of
counting. In a positional number system this is a very simple process. First the dis-
tinct digits in the system are listed according to the integer quantities they represent.
In the case of the decimal number system this becomes 0, 1, ... , 9, while in the bi-
nary number system it is merely 0, 1. In order to continue the counting process, the
digit 1 is introduced in the Ist-order digit position and the digit symbols in the Oth-
order digit position are recycled, thus giving 10, 11,..., 19 for the decimal number
system and 10, 11 for the binary number system. After each cycle of the digits in
the Oth-order digit position, the digit in the Ist-order digit position is incremented
by 1 until this is no longer possible; that is, a cycle has been completed in this posi-
tion, at which time the digit 1 is introduced into the 2nd-order digit position. The

above cycling procedure is then repeated in the Oth-order and Ist-order digit posi-
tions, etc. As is seen from the counting process, all quantities are represented in a
positional number system by means of only a finite number of digit symbols merely
by introducing more digit positions.
The concept of counting is illustrated in Table 2.2, where the first 32 integers in
the binary, ternary, octal, and hexadecimal number systems are given, along with
their decimal equivalents.

Table 2.2 The first 32 integers in the binary, ternary, octal, and hexadecimal number
systems, along with their decimal equivalents

Decimal    Binary    Ternary    Octal    Hexadecimal
   0           0         0        0        0
   1           1         1        1        1
   2          10         2        2        2
   3          11        10        3        3
   4         100        11        4        4
   5         101        12        5        5
   6         110        20        6        6
   7         111        21        7        7
   8        1000        22       10        8
   9        1001       100       11        9
  10        1010       101       12        A
  11        1011       102       13        B
  12        1100       110       14        C
  13        1101       111       15        D
  14        1110       112       16        E
  15        1111       120       17        F
  16       10000       121       20       10
  17       10001       122       21       11
  18       10010       200       22       12
  19       10011       201       23       13
  20       10100       202       24       14
  21       10101       210       25       15
  22       10110       211       26       16
  23       10111       212       27       17
  24       11000       220       30       18
  25       11001       221       31       19
  26       11010       222       32       1A
  27       11011      1000       33       1B
  28       11100      1001       34       1C
  29       11101      1002       35       1D
  30       11110      1010       36       1E
  31       11111      1011       37       1F
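The columns of Table 2.2 can be reproduced by repeated division by the base, which peels digits off from the 0th order upward; this is the inverse of the counting procedure above. A sketch (the function name is ours; Python is used only as notation):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, base):
    """Represent the nonnegative integer n as a digit string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, d = divmod(n, base)        # peel off the 0th-order digit each pass
        digits.append(DIGITS[d])
    return "".join(reversed(digits))

# Reproduce the last row of Table 2.2:
print(to_base(31, 2), to_base(31, 3), to_base(31, 8), to_base(31, 16))
# -> 11111 1011 37 1F
```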

2.3 BASIC ARITHMETIC OPERATIONS


The four basic arithmetic operations are addition, subtraction, multiplication, and di-
vision. How each of these operations is performed in various positional number sys-
tems is now studied. In each case the reader should note the similarity with the deci-
mal number system. To further emphasize the similarities of arithmetic in any
positional number system, examples of both binary and ternary arithmetic are given.

2.3.1 Addition
First, consider two-digit addition. This operation is easily stated in tabular form.
Table 2.3 summarizes two-digit binary addition and two-digit ternary addition. The
entry in each cell corresponds to the sum digit of a + b and a carry if appropriate.
As in the decimal number system, when the sum of the two digits equals or exceeds
the base, a carry is generated to the next-higher-order digit position. Thus, in the
lower right cell of Table 2.3a, corresponding to the binary sum 1(2) + 1(2) = 10(2) =
2(10), the sum digit is 0 and a carry of 1 is produced. The entries in these tables are
easily checked by counting or using Table 2.2.
When the two numbers being added no longer are each single digits, the addi-
tion tables are still used by forming the sum in a serial manner. That is, after align-
ing the radix points of the two numbers, corresponding digits of the same order are
added along with the carry, if generated, from the previous-order digit addition. The
following examples illustrate the addition process.

    11      = carries              11       = carries
  11010(2)  = augend             11.01(2)   = augend
+ 11001(2)  = addend           + 10.01(2)   = addend
 110011(2)  = sum               101.10(2)   = sum

    11      = carries            111 1      = carries
   1102(3)  = augend            1201.2(3)   = augend
+  1022(3)  = addend          + 1200.1(3)   = addend
   2201(3)  = sum              10102.0(3)   = sum
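The serial procedure shown in these examples, adding corresponding digits plus the incoming carry and generating a carry whenever the sum reaches the base, works unchanged in any base. A sketch for integer operands, with digits stored least significant first (the function name and list representation are ours):

```python
def add_base(a, b, base):
    """Digit-serial addition of digit lists a and b (least significant digit first)."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        s += a[i] if i < len(a) else 0
        s += b[i] if i < len(b) else 0
        carry, digit = divmod(s, base)   # a carry is generated when s >= base
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 11010(2) + 11001(2) = 110011(2), digits listed least significant first:
print(add_base([0, 1, 0, 1, 1], [1, 0, 0, 1, 1], 2))   # -> [1, 1, 0, 0, 1, 1]
```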

2.3.2 Subtraction
The inverse operation of addition is subtraction. Two-digit subtraction in the binary
and ternary number systems is given in Table 2.4. The entry in each cell denotes the
resulting difference digit and if a borrow is needed to obtain the difference digit. As
in decimal subtraction, when a larger digit is subtracted from a smaller digit, it is

Table 2.3 Two-digit addition tables. (a) Binary. (b) Ternary

(a)  a + b |        b
           |   0      1
     ------+--------------------------
       a 0 |   0      1
         1 |   1      0 and carry 1

(b)  a + b |        b
           |   0      1               2
     ------+------------------------------------------
       a 0 |   0      1               2
         1 |   1      2               0 and carry 1
         2 |   2      0 and carry 1   1 and carry 1

necessary to perform borrowing. Borrowing is the process of bringing back to the next-
lower-order digit position a quantity equal to the base of the number system. For exam-
ple, in Table 2.4a, the upper right cell denotes the difference 0(2) − 1(2). In order to sub-
tract the larger digit from the smaller digit, borrowing is necessary. Since this is binary
subtraction, a borrow corresponds to bringing back the quantity 2, i.e., the base of the
number system, from which 1 is subtracted. Thus, in the upper right cell of Table 2.4a,

Table 2.4 Two-digit subtraction tables. (a) Binary. (b) Ternary

(a)  a − b |        b
           |   0               1
     ------+----------------------------------
       a 0 |   0               1 and borrow 1
         1 |   1               0

(b)  a − b |        b
           |   0               1                2
     ------+---------------------------------------------------
       a 0 |   0               2 and borrow 1   1 and borrow 1
         1 |   1               0                2 and borrow 1
         2 |   2               1                0

the entry “1 and borrow 1” denotes that under the assumption that a borrow from the
next-higher-order minuend digit is performed, the difference digit of 1 results.
As in the case of addition, subtraction tables can be used to form the difference
between two numbers when they each involve more than a single digit. Again, after
the radix points are aligned, subtraction is performed in a serial fashion starting
with the least significant pair of digits. When borrowing is necessary, the next-
higher-order minuend digit is decremented by 1. If this minuend digit is 0, then bor-
rowing is performed on the first higher-order nonzero digit and the intervening 0's
are changed to r − 1, where r is the base of the number system. It should be noted
that this is precisely the process one performs in decimal subtraction.

      10001(2) = minuend           1101.011(2) = minuend
   −   1011(2) = subtrahend      −  110.110(2) = subtrahend
   -----------                   --------------
        110(2) = difference         110.101(2) = difference

       2102(3) = minuend           1010.22(3) = minuend
   −   1021(3) = subtrahend      −   21.02(3) = subtrahend
   -----------                   -------------
       1011(3) = difference         212.20(3) = difference
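The borrowing rule just described can be sketched for unsigned integers: whenever a digit difference goes negative, a quantity equal to the base is brought back and the next-higher-order minuend digit is effectively decremented. Digits are assumed to be 0 through 9 and the minuend is assumed to be at least as large as the subtrahend; the function name is my own:

```python
def sub_base_r(minuend: str, subtrahend: str, r: int) -> str:
    """Subtract two unsigned base-r integers given as digit strings
    (minuend >= subtrahend), borrowing from the next-higher-order
    digit exactly as described in the text."""
    m = minuend.zfill(len(subtrahend))
    s = subtrahend.zfill(len(minuend))
    borrow, out = 0, []
    for dm, ds in zip(reversed(m), reversed(s)):
        d = int(dm) - int(ds) - borrow
        if d < 0:
            d += r           # bring back a quantity equal to the base
            borrow = 1       # ... and decrement the next-higher-order digit
        else:
            borrow = 0
        out.append(str(d))
    return "".join(reversed(out)).lstrip("0") or "0"
```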

2.3.3 Multiplication
The third basic arithmetic operation is multiplication. Table 2.5 summarizes two-
digit multiplication in the binary and ternary number systems. When the multiplier
consists of more than a single digit, a tabular array of partial products is constructed
and then added according to the rules of addition for the base of the numbers in-
volved. The entries in this tabular array must be shifted such that the least signifi-
cant digit of the partial product aligns with its respective multiplier digit. Multipli-
cation is illustrated by the following examples.

EXAMPLE 2.5

       10.11(2) = multiplicand
     ×   101(2) = multiplier
         1011
        0000      array of partial products
       1011
      1101.11(2) = product

EXAMPLE 2.6

       2102(3) = multiplicand
     ×  102(3) = multiplier
       11211
       0000       array of partial products
      2102
     222111(3) = product
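The partial-product scheme of Examples 2.5 and 2.6 can be sketched for unsigned integers: one shifted partial product is formed per multiplier digit, and the rows are then summed with base-r addition. Digits are assumed to be 0 through 9; the function names are my own:

```python
def mul_base_r(multiplicand: str, multiplier: str, r: int) -> str:
    """Multiply two unsigned base-r integers given as digit strings by
    forming a shifted partial product for each multiplier digit and then
    summing the partial products using base-r addition."""
    def add(a: str, b: str) -> str:
        a, b = a.zfill(len(b)), b.zfill(len(a))
        carry, out = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            carry, d = divmod(int(da) + int(db) + carry, r)
            out.append(str(d))
        if carry:
            out.append(str(carry))
        return "".join(reversed(out))

    total = "0"
    for shift, dm in enumerate(reversed(multiplier)):
        carry, partial = 0, []
        for dc in reversed(multiplicand):        # one partial product
            carry, d = divmod(int(dc) * int(dm) + carry, r)
            partial.append(str(d))
        if carry:
            partial.append(str(carry))
        row = "".join(reversed(partial)) + "0" * shift  # align with its digit
        total = add(total, row)
    return total.lstrip("0") or "0"
```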

Table 2.5 Two-digit multiplication tables. (a) Binary. (b) Ternary

   a × b |  b = 0    b = 1
   ------+------------------
   a = 0 |    0        0
   a = 1 |    0        1

   (a)

   a × b |  b = 0    b = 1    b = 2
   ------+---------------------------
   a = 0 |    0        0        0
   a = 1 |    0        1        2
   a = 2 |    0        2        11

   (b)

The simplicity of binary multiplication should be noted. If the multiplier bit is a 1, then the multiplicand is simply copied into the array of partial products; while if the multiplier bit is 0, then all 0's are written into the array.

2.3.4 Division
The final arithmetic operation to be considered is division. This process consists of
multiplications and subtractions. The only difference in doing division in a base-r number system and in decimal is that base-r multiplications and subtractions are performed. The following two examples illustrate binary and ternary division.

EXAMPLE 2.7

                       110.1(2) = quotient
   divisor = 11(2) ) 10100.1(2) = dividend

                       10(2) = remainder

EXAMPLE 2.8

                      102(3) = quotient
   divisor = 12(3) ) 2010(3) = dividend

                      2(3) = remainder
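A sketch of base-r long division for unsigned integers, in the spirit of the examples above: each dividend digit is brought down in turn and the divisor is repeatedly subtracted, so only base-r comparison and subtraction are needed per step. For brevity the divisor and the final remainder pass through native integers, a shortcut a hand computation would not take; names are my own:

```python
def divmod_base_r(dividend: str, divisor: str, r: int):
    """Long division of unsigned base-r integers given as digit strings.
    Returns (quotient, remainder) as base-r digit strings."""
    d = int(divisor, base=r)
    rem, quotient = 0, []
    for digit in dividend:
        rem = rem * r + int(digit)   # bring down the next dividend digit
        q = 0
        while rem >= d:              # repeated subtraction of the divisor
            rem -= d
            q += 1
        quotient.append(str(q))
    rem_digits = ""
    while rem:                       # express the remainder back in base r
        rem, dd = divmod(rem, r)
        rem_digits = str(dd) + rem_digits
    return "".join(quotient).lstrip("0") or "0", rem_digits or "0"
```

For example, `divmod_base_r("2010", "12", 3)` reproduces the ternary division of Example 2.8.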

The four basic arithmetic operations have been illustrated for only the binary
and ternary number systems. However, the above concepts are readily extendable to
handle any base-r positional number system by constructing the appropriate addi-
tion, subtraction, and multiplication tables.

2.4 POLYNOMIAL METHOD OF NUMBER CONVERSION
A number is a symbolic representation for a quantity. Therefore, it is only natural to expect that any quantity that can be represented in one number system can also be represented in another number system. In positional number systems with positive integer bases greater than 1, integers in one number system become integers in another number system, while fractions in one number system become fractions in another number system. There are two basic procedures for converting numbers in one number system into another number system: the polynomial method and the iterative method. Both methods involve arithmetic computations. The difference between the two methods lies in whether the computations are performed in the source or target number system. In this section the polynomial method is presented; the iterative method is discussed in the next section.
A number expressed in a base-r1 number system has the form

   N(r1) = d_{n−1(r1)} × r_{1(r1)}^{n−1} + ··· + d_{1(r1)} × r_{1(r1)}^{1} + d_{0(r1)} × r_{1(r1)}^{0}
         + d_{−1(r1)} × r_{1(r1)}^{−1} + ··· + d_{−m(r1)} × r_{1(r1)}^{−m}     (2.1)

The second subscript, (r1), associated with each digit symbol d_i and base symbol r_1 in this equation is used to emphasize the fact that these are base-r1 quantities. Furthermore, the base of a number system expressed in its own number system, i.e., r_{1(r1)}, is always 10_{(r1)}. This is readily seen in Table 2.2, where the quantity 2 in binary is 10(2), the quantity 3 in ternary is 10(3), the quantity 8 in octal is 10(8), etc. This implies that Eq. (2.1) can be written as

   N(r1) = d_{n−1(r1)} × 10_{(r1)}^{n−1} + ··· + d_{1(r1)} × 10_{(r1)}^{1} + d_{0(r1)} × 10_{(r1)}^{0}
         + d_{−1(r1)} × 10_{(r1)}^{−1} + ··· + d_{−m(r1)} × 10_{(r1)}^{−m}     (2.2)

Equation (2.2) is the general form of any number in a positional number system expressed in its own number system. Since this number denotes a quantity, the equivalent quantity is obtained by simply replacing each of the quantities in the right side of Eq. (2.2) by its equivalent quantity in the base-r2 number system. That is, by replacing each of the digit symbols and the weighting factors as governed by the position of each digit in Eq. (2.2) by its equivalent quantity in base r2, Eq. (2.2) becomes

   N(r2) = d_{n−1(r2)} × r_{1(r2)}^{n−1} + ··· + d_{1(r2)} × r_{1(r2)}^{1} + d_{0(r2)} × r_{1(r2)}^{0}
         + d_{−1(r2)} × r_{1(r2)}^{−1} + ··· + d_{−m(r2)} × r_{1(r2)}^{−m}     (2.3)

where d_{i(r2)} is the quantity d_{i(r1)} expressed in base r2 and r_{1(r2)} is the quantity r_{1(r1)} expressed in base r2. Since all quantities in Eq. (2.3) are now in base r2, the evaluation of Eq. (2.3) must be the quantity N(r1) expressed as the base-r2 number N(r2).
In summary, the above conversion procedure consists of the following steps: (1) Express the number N(r1) as a polynomial in its own number system, i.e., in base r1, as indicated by Eq. (2.2). (2) Replace each digit symbol and 10_{(r1)} by their equivalent representations in base r2. (3) Evaluate the polynomial using base-r2 arithmetic.

To illustrate the above algorithm, consider the conversion of the binary number 1101 into its equivalent decimal number:*

   1101(2) = 1(2) × 10(2)^3 + 1(2) × 10(2)^2 + 0(2) × 10(2)^1 + 1(2) × 10(2)^0
           ≡ 1(10) × 2(10)^3 + 1(10) × 2(10)^2 + 0(10) × 2(10)^1 + 1(10) × 2(10)^0
           = 8 + 4 + 0 + 1
           = 13(10)
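The three-step procedure above can be sketched for the common case where the target base r2 is decimal, so that the base-r2 arithmetic of step (3) is simply the machine's ordinary arithmetic. Digits are assumed to be 0 through 9; the function name is my own:

```python
def poly_convert_to_decimal(digits: str, r1: int) -> float:
    """Polynomial method: evaluate a base-r1 digit string (with an
    optional radix point) as a polynomial in powers of r1, doing all
    arithmetic in the target (decimal) system."""
    if "." in digits:
        int_part, frac_part = digits.split(".")
    else:
        int_part, frac_part = digits, ""
    value = 0.0
    for i, d in enumerate(reversed(int_part)):
        value += int(d) * r1 ** i        # d_i x r1^i
    for i, d in enumerate(frac_part, start=1):
        value += int(d) * r1 ** -i       # d_-i x r1^-i
    return value
```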

The polynomial method of number conversion is especially useful for humans when converting a number in base r1 into the decimal number system, since decimal arithmetic is always used when evaluating the polynomial.

The following five examples further illustrate the polynomial method of number conversion. In these examples attention should be paid to the manner in which each digit in base r1 (as well as r1 itself), denoting an integer quantity, becomes the corresponding integer quantity in base r2, and to the fact that the arithmetic operations are performed using base-r2 arithmetic.

EXAMPLE 2.9

Convert the binary number 101.011 into decimal.

   101.011(2) = 1(2) × 10(2)^2 + 0(2) × 10(2)^1 + 1(2) × 10(2)^0
              + 0(2) × 10(2)^−1 + 1(2) × 10(2)^−2 + 1(2) × 10(2)^−3
              ≡ 1(10) × 2(10)^2 + 0(10) × 2(10)^1 + 1(10) × 2(10)^0
              + 0(10) × 2(10)^−1 + 1(10) × 2(10)^−2 + 1(10) × 2(10)^−3
              = 4 + 0 + 1 + 0 + 0.25 + 0.125
              = 5.375(10)

EXAMPLE 2.10

Convert the ternary number 201.1 into decimal.

   201.1(3) = 2(3) × 10(3)^2 + 0(3) × 10(3)^1 + 1(3) × 10(3)^0 + 1(3) × 10(3)^−1
            ≡ 2(10) × 3(10)^2 + 0(10) × 3(10)^1 + 1(10) × 3(10)^0 + 1(10) × 3(10)^−1
            = 18 + 0 + 1 + 0.3333···
            = 19.3333···(10)

It should be noted in Example 2.10 that the fraction part of a number having a finite number of digits in one number system can convert into a fraction part with an infinite number of digits in another number system. In such situations, a sufficient number of digits are used to have the desired precision.

*In replacing the digits of base r1 by those of base r2, Table 2.2 is used to determine the necessary equivalences. The equivalence symbol (≡) is used in these equations to emphasize the fact that the same quantity is being denoted even though it is expressed in a different number system.

EXAMPLE 2.11

Convert the decimal number 113.5 into binary.

   113.5(10) = 1(10) × 10(10)^2 + 1(10) × 10(10)^1 + 3(10) × 10(10)^0 + 5(10) × 10(10)^−1
             ≡ 1(2) × 1010(2)^2 + 1(2) × 1010(2)^1 + 11(2) × 1010(2)^0 + 101(2) × 1010(2)^−1
             = 1100100 + 1010 + 11 + 0.1
             = 1110001.1(2)

EXAMPLE 2.12

Convert the binary number 11010 into ternary.

   11010(2) = 1(2) × 10(2)^4 + 1(2) × 10(2)^3 + 0(2) × 10(2)^2 + 1(2) × 10(2)^1 + 0(2) × 10(2)^0
            ≡ 1(3) × 2(3)^4 + 1(3) × 2(3)^3 + 0(3) × 2(3)^2 + 1(3) × 2(3)^1 + 0(3) × 2(3)^0
            = 121 + 22 + 0 + 2 + 0
            = 222(3)

EXAMPLE 2.13

Convert the ternary number 2102 into binary.

   2102(3) = 2(3) × 10(3)^3 + 1(3) × 10(3)^2 + 0(3) × 10(3)^1 + 2(3) × 10(3)^0
           ≡ 10(2) × 11(2)^3 + 1(2) × 11(2)^2 + 0(2) × 11(2)^1 + 10(2) × 11(2)^0
           = 110110 + 1001 + 0 + 10
           = 1000001(2)

2.5 ITERATIVE METHOD OF NUMBER CONVERSION
Whereas the polynomial method of converting a number in base r1 into base r2 is characterized by the fact that base-r2 arithmetic is performed, the iterative method is characterized by the fact that base-r1 arithmetic is used. Thus, this method of number conversion is especially of interest to humans when converting a decimal number into its equivalent representation in some other positional number system. However, unlike the polynomial method, when a mixed number is converted by the iterative method, the integer and fraction parts of the number must be handled separately: the integer part converts into an integer part and the fraction part converts into a fraction part. Then, the two parts are combined to yield the equivalent mixed number in the new number system.

2.5.1 Iterative Method for Converting Integers

To convert an integer in base r1 into its equivalent integer in base r2, divide the number by r_{2(r1)}, which is r2 expressed in base r1, using base-r1 arithmetic. The resulting remainder is then converted into a single digit in base r2 and is the 0th-order digit of the base-r2 number. The process is then repeated on the resulting integer quotient to obtain the 1st-order digit of the base-r2 number. The division procedure is continued until the resulting integer quotient in base r1 becomes zero.

The above algorithm sounds more complicated than it really is. For example, to convert a decimal integer into binary, repeated division by 2 is performed using decimal arithmetic. The remainders as they are formed, being 0's and 1's, become the digits of the binary number starting with the least significant digit.

EXAMPLE 2.14

Conversion of 43(10) into its binary equivalent proceeds as follows:

   43(10) ÷ 2(10) = 21(10) + remainder of 1(10);  1(10) ≡ 1(2) = 0th-order digit
   21(10) ÷ 2(10) = 10(10) + remainder of 1(10);  1(10) ≡ 1(2) = 1st-order digit
   10(10) ÷ 2(10) =  5(10) + remainder of 0(10);  0(10) ≡ 0(2) = 2nd-order digit
    5(10) ÷ 2(10) =  2(10) + remainder of 1(10);  1(10) ≡ 1(2) = 3rd-order digit
    2(10) ÷ 2(10) =  1(10) + remainder of 0(10);  0(10) ≡ 0(2) = 4th-order digit
    1(10) ÷ 2(10) =  0(10) + remainder of 1(10);  1(10) ≡ 1(2) = 5th-order digit

Therefore, 43(10) = 101011(2)
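The repeated-division procedure can be sketched directly, with the machine's native arithmetic standing in for the base-r1 hand arithmetic. Digit symbols up to hexadecimal are provided; the function name is my own:

```python
def iterative_int_convert(n: int, r2: int) -> str:
    """Iterative method for integers: repeatedly divide by r2; the
    remainders are the base-r2 digits, least significant first."""
    if n == 0:
        return "0"
    digit_symbols = "0123456789ABCDEF"
    digits = []
    while n > 0:
        n, rem = divmod(n, r2)             # quotient and next remainder
        digits.append(digit_symbols[rem])  # convert remainder to one digit
    return "".join(reversed(digits))       # most significant digit first
```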

EXAMPLE 2.15

Conversion of 213(10) into its equivalent in hexadecimal simply involves repeated division by 16(10). The procedure is as follows:

   213(10) ÷ 16(10) = 13(10) + remainder of 5(10);   5(10) ≡ 5(16) = 0th-order digit
    13(10) ÷ 16(10) =  0(10) + remainder of 13(10); 13(10) ≡ D(16) = 1st-order digit

Therefore, 213(10) = D5(16)

EXAMPLE 2.16

Conversion of 1001011(2) into its decimal equivalent by the iterative method requires repeated base-2 division by the quantity 10 expressed in binary, which is 1010(2). The steps of the conversion are:

   1001011(2) ÷ 1010(2) = 111(2) + remainder of 101(2); 101(2) ≡ 5(10) = 0th-order digit
       111(2) ÷ 1010(2) =   0(2) + remainder of 111(2); 111(2) ≡ 7(10) = 1st-order digit

Therefore, 1001011(2) = 75(10)

2.5.2 Verification of the Iterative Method for Integers*

For the interested reader, let us verify the above algorithmic procedure. Consider an integer in base r1, i.e., N(r1). After this number is converted into base r2, it has the form

   d_{n−1(r2)} × 10_{(r2)}^{n−1} + d_{n−2(r2)} × 10_{(r2)}^{n−2} + ··· + d_{1(r2)} × 10_{(r2)}^{1} + d_{0(r2)} × 10_{(r2)}^{0}

which is the general form of a number in its own system, where d_{i(r2)} denotes a digit in the base-r2 number system. Thus, the following equivalence is established:

   N(r1) ≡ d_{n−1(r2)} × 10_{(r2)}^{n−1} + d_{n−2(r2)} × 10_{(r2)}^{n−2} + ··· + d_{1(r2)} × 10_{(r2)}^{1} + d_{0(r2)} × 10_{(r2)}^{0}     (2.4)

The digits d_{i(r2)} must now be determined from a knowledge of N(r1).

Proceeding as in the polynomial method, if each d_{i(r2)} coefficient and 10_{(r2)} are converted into their equivalents in base r1, then the right side of Eq. (2.4) becomes a base-r1 arithmetic expression. Performing the necessary substitutions gives

   N(r1) = d_{n−1(r1)} × r_{2(r1)}^{n−1} + d_{n−2(r1)} × r_{2(r1)}^{n−2} + ··· + d_{1(r1)} × r_{2(r1)}^{1} + d_{0(r1)} × r_{2(r1)}^{0}
         = [d_{n−1(r1)} × r_{2(r1)}^{n−2} + d_{n−2(r1)} × r_{2(r1)}^{n−3} + ··· + d_{1(r1)}] × r_{2(r1)} + d_{0(r1)}     (2.5)

where d_{i(r1)} is the equivalent of d_{i(r2)} expressed in base r1 and r_{2(r1)} is the equivalent of 10_{(r2)} expressed in base r1. It should be noted that the last form of Eq. (2.5) was obtained by factoring r_{2(r1)} from the first n − 1 terms. If both sides of Eq. (2.5) are now divided by r_{2(r1)} using base-r1 arithmetic, then the integer quotient corresponds to

   d_{n−1(r1)} × r_{2(r1)}^{n−2} + ··· + d_{1(r1)}

while the remainder corresponds to d_{0(r1)}, i.e.,

   Remainder [ N(r1) / r_{2(r1)} ] = d_{0(r1)}

Recall that d_{0(r1)} is the base-r1 representation of d_{0(r2)}. It now follows that if this remainder, d_{0(r1)}, is converted into its equivalent digit in base r2 by Table 2.2, then the 0th-order digit of the base-r2 number is obtained.

The above argument can now be applied to the resulting integer quotient,

   d_{n−1(r1)} × r_{2(r1)}^{n−2} + ··· + d_{1(r1)}
         = [d_{n−1(r1)} × r_{2(r1)}^{n−3} + ··· + d_{2(r1)}] × r_{2(r1)} + d_{1(r1)}

If the integer quotient is divided by r_{2(r1)}, then upon converting the remainder, d_{1(r1)}, into a base-r2 digit, the 1st-order digit of the base-r2 number is obtained. By repeating this process until the resulting quotient is zero, the coefficients of Eq. (2.4) are generated.

*This subsection may be skipped without loss of continuity.



2.5.3 Iterative Method for Converting Fractions

A slightly different algorithm is needed to convert a fraction expressed in base r1 into its base-r2 equivalent by the iterative method. Algorithmically, to convert a fraction in base r1 into its equivalent fraction in base r2, multiply the base-r1 fraction by r_{2(r1)}, which is r2 expressed in base r1, using base-r1 arithmetic. The integer part of the product is then converted into a single digit in base r2 and is the most significant digit of the base-r2 fraction. The multiplication and converting process is then repeated on the resulting fraction part of the product to obtain the next-most-significant digit of the base-r2 fraction. This procedure is continued until the resulting fraction part of the product becomes zero. It should be pointed out, however, that it is possible that an infinite number of digits may be needed to represent a base-r2 fraction even though the base-r1 fraction has a finite number of digits. In this case, the process is terminated when the desired precision is obtained.

For the special case of converting a decimal fraction into binary, repeated multiplication by 2 is performed. The integer parts of the products, being 0's and 1's, become the digits of the binary fraction starting with the bit closest to the binary point.

EXAMPLE 2.17

Conversion of 0.8125(10) into its equivalent binary fraction proceeds as follows:

   0.8125(10) × 2(10) = 1.6250(10);  1(10) ≡ 1(2) = most significant fraction digit
   0.6250(10) × 2(10) = 1.2500(10);  1(10) ≡ 1(2)
   0.2500(10) × 2(10) = 0.5000(10);  0(10) ≡ 0(2)
   0.5000(10) × 2(10) = 1.0000(10);  1(10) ≡ 1(2) = least significant fraction digit

Therefore, 0.8125(10) = 0.1101(2)
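The repeated-multiplication procedure for fractions can be sketched with native floating-point arithmetic standing in for the base-r1 hand arithmetic. Since binary floats represent values such as 0.8125(10) exactly but not, say, 0.1(10), the sketch is illustrative only, and a digit limit guards against nonterminating expansions; names are my own:

```python
def iterative_frac_convert(frac: float, r2: int, max_digits: int = 16) -> str:
    """Iterative method for fractions: repeatedly multiply by r2; each
    integer part of a product is the next base-r2 fraction digit, most
    significant first. Stops after max_digits if the fraction does not
    terminate."""
    digit_symbols = "0123456789ABCDEF"
    digits = []
    while frac != 0 and len(digits) < max_digits:
        frac *= r2
        d = int(frac)             # integer part = next fraction digit
        digits.append(digit_symbols[d])
        frac -= d                 # keep only the fraction part
    return "0." + "".join(digits)
```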

EXAMPLE 2.18

Conversion of 0.1011(2) into decimal proceeds as follows:

   0.1011(2) × 1010(2) =  110.1110(2);  110(2) ≡ 6(10) = most significant fraction digit
   0.1110(2) × 1010(2) = 1000.1100(2); 1000(2) ≡ 8(10)
   0.1100(2) × 1010(2) =  111.1000(2);  111(2) ≡ 7(10)
   0.1000(2) × 1010(2) =  101.0000(2);  101(2) ≡ 5(10) = least significant fraction digit

Therefore, 0.1011(2) = 0.6875(10)



2.5.4 Verification of the Iterative Method for Fractions*

In general, the equivalence between a fraction in base r1, N(r1), and its representation in base r2 is given by

   N(r1) ≡ d_{−1(r2)} × 10_{(r2)}^{−1} + d_{−2(r2)} × 10_{(r2)}^{−2} + ··· + d_{−m(r2)} × 10_{(r2)}^{−m}     (2.6)

where d_{i(r2)} denotes a digit in the base-r2 number system. Converting each d_{i(r2)} coefficient and 10_{(r2)} into their equivalent representations in base r1, Eq. (2.6) becomes

   N(r1) = d_{−1(r1)} × r_{2(r1)}^{−1} + d_{−2(r1)} × r_{2(r1)}^{−2} + ··· + d_{−m(r1)} × r_{2(r1)}^{−m}

where d_{i(r1)} is the equivalent of d_{i(r2)} expressed in base r1 and r_{2(r1)} is the equivalent of 10_{(r2)} expressed in base r1. Multiplying each side of this equation by r_{2(r1)} gives

   r_{2(r1)} × N(r1) = d_{−1(r1)} + d_{−2(r1)} × r_{2(r1)}^{−1} + ··· + d_{−m(r1)} × r_{2(r1)}^{−m+1}

If the integer part of this product, d_{−1(r1)}, is converted into its equivalent digit in base r2, then the most significant digit of the base-r2 fraction is established. Repeating this process of multiplying the fraction part of the above product by r_{2(r1)} and converting the resulting integer part, the next-most-significant digit of the base-r2 fraction is generated. By continuing this multiplication and converting process, the remaining digits of the base-r2 fraction are obtained one at a time starting with the most significant digit of the fraction.

2.5.5 A Final Example

The following example illustrates the handling of a mixed number in which the integer and fraction parts are converted separately and then connected with a radix point. It should be noted that base-3 arithmetic is being used.

EXAMPLE 2.19

Conversion of 201.12(3) into its binary equivalent proceeds as follows:

Integer part:

   201(3) ÷ 2(3) = 100(3) + remainder of 1(3); 1(3) ≡ 1(2) = least significant integer digit
   100(3) ÷ 2(3) =  11(3) + remainder of 1(3); 1(3) ≡ 1(2)
    11(3) ÷ 2(3) =   2(3) + remainder of 0(3); 0(3) ≡ 0(2)
     2(3) ÷ 2(3) =   1(3) + remainder of 0(3); 0(3) ≡ 0(2)
     1(3) ÷ 2(3) =   0(3) + remainder of 1(3); 1(3) ≡ 1(2) = most significant integer digit

Therefore, 201(3) = 10011(2)

*This subsection may be skipped without loss of continuity.



Fraction part:

   0.12(3) × 2(3) = 1.01(3); 1(3) ≡ 1(2) = most significant fraction digit
   0.01(3) × 2(3) = 0.02(3); 0(3) ≡ 0(2)
   0.02(3) × 2(3) = 0.11(3); 0(3) ≡ 0(2)
   0.11(3) × 2(3) = 0.22(3); 0(3) ≡ 0(2)
   0.22(3) × 2(3) = 1.21(3); 1(3) ≡ 1(2)
   0.21(3) × 2(3) = 1.12(3); 1(3) ≡ 1(2)

Up to this point the first six fraction digits are obtained. However, it is noted that the resulting fraction part of the above computation is 0.12(3), which is also the fraction we began with. As a result, the next six fraction digits are the same as the first six fraction digits, and the resulting fraction part of the computation is again 0.12(3). Thus, the fraction conversion results in the nonterminating sequence 0.12(3) = 0.100011100011···(2). Furthermore, since this conversion results in a repeating sequence of fraction digits, we can write the conversion as

              repeating
               ______
   0.12(3) = 0.100011(2)

Combining the integer and fraction parts:

                    repeating
                     ______
   201.12(3) = 10011.100011(2)
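The repetition observed in Example 2.19 can be detected mechanically: whenever the remaining fraction part equals one seen earlier, the digits produced since that point repeat forever. A sketch using exact rational arithmetic (the function name is my own):

```python
from fractions import Fraction

def frac_digits_with_cycle(frac: Fraction, r2: int):
    """Generate base-r2 fraction digits for an exact fraction, detecting
    when the remaining fraction part repeats. Returns the pair
    (non_repeating_digits, repeating_digits) as strings."""
    seen = {}        # remaining fraction -> index where it first occurred
    digits = []
    while frac != 0:
        if frac in seen:                  # state repeats: cycle found
            start = seen[frac]
            return ("".join(digits[:start]), "".join(digits[start:]))
        seen[frac] = len(digits)
        frac *= r2
        d = int(frac)                     # integer part = next digit
        digits.append(str(d))
        frac -= d
    return ("".join(digits), "")          # terminating fraction
```

For example, `frac_digits_with_cycle(Fraction(5, 9), 2)`, where 5/9 is the value of 0.12(3), separates the expansion into an empty non-repeating prefix and the repeating group 100011.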

2.6 SPECIAL CONVERSION PROCEDURES


When performing a number conversion between two bases in which one base is an integer power of the other, there is a simplified conversion process. Examples of such conversions are between numbers in base 2 (binary) and base 8 (octal), since 8 = 2^3, and between numbers in base 2 (binary) and base 16 (hexadecimal), since 16 = 2^4. One reason that the octal and hexadecimal number systems are of interest to computer people is because of these simple conversions.

Consider first conversions between binary and octal numbers. If a number is given in binary, then starting at the binary point and working both left and right, the bits are grouped by threes, where leading and trailing 0's are added if necessary. For example, the bits of the binary number 11111101.0011(2) are grouped as
   011 111 101 . 001 100(2)

Then, each group of three bits is converted into its equivalent octal digit. For the above number, we have

   011 111 101 . 001 100(2)
    3   7   5  .  1   4 (8)

Thus, the octal equivalent of 11111101.0011(2) is 375.14(8).
By reversing the above procedure, octal numbers are readily converted into their equivalent binary numbers. In particular, to convert an octal number into its equivalent binary number, each octal digit is replaced by its equivalent three binary digits. For example, the octal number 173.24(8) is converted into binary as follows:

     1   7   3  .  2   4 (8)
   001 111 011 . 010 100(2)
It is relatively simple to justify this algorithm for number conversions between base 2 and base 8. Consider the binary number N(2) = ··· d8 d7 d6 d5 d4 d3 d2 d1 d0 . d−1 d−2 d−3 ···. Then, its decimal equivalent is given by

   N = ··· + d8 × 2^8 + d7 × 2^7 + d6 × 2^6 + d5 × 2^5 + d4 × 2^4 + d3 × 2^3
       + d2 × 2^2 + d1 × 2^1 + d0 × 2^0 + d−1 × 2^−1 + d−2 × 2^−2 + d−3 × 2^−3 + ···

     = ··· + (d8 × 2^2 + d7 × 2^1 + d6 × 2^0) × 2^6 + (d5 × 2^2 + d4 × 2^1 + d3 × 2^0) × 2^3
       + (d2 × 2^2 + d1 × 2^1 + d0 × 2^0) × 2^0 + (d−1 × 2^2 + d−2 × 2^1 + d−3 × 2^0) × 2^−3 + ···

     = ··· + (d8 × 2^2 + d7 × 2 + d6) × 8^2 + (d5 × 2^2 + d4 × 2 + d3) × 8^1
       + (d2 × 2^2 + d1 × 2 + d0) × 8^0 + (d−1 × 2^2 + d−2 × 2 + d−3) × 8^−1 + ···

The right side of this last equation has the form of an octal number with its coeffi-
cients, lying in the range 0 to 7, given in the form of binary numbers (where both
the binary and octal forms are expressed in the decimal number system). Hence, by
replacing each group of three bits by its equivalent octal digit, the conversion from
binary to octal is achieved. By reversing this argument, the procedure for convert-
ing octal numbers into binary follows.
A similar procedure exists for conversions between binary and hexadecimal numbers since 2^4 = 16. In this case, however, four binary digits are associated with
a single hexadecimal digit. Thus, in converting a binary number into a hexadecimal
number, the bits of the binary number are blocked off (working left and right from
the hexadecimal point) in groups of four (adding leading and trailing 0’s to com-
plete a block if necessary), and each group of four bits is replaced by its equivalent
hexadecimal digit. Conversely, when converting a hexadecimal number into a binary number, each hexadecimal digit is simply replaced by its equivalent four binary digits. For example, the binary number 1010110110.111(2) is converted into its hexadecimal equivalent as follows:

   0010 1011 0110 . 1110(2)
     2    B    6  .   E (16)

while the hexadecimal number 3AB.2(16) is converted into its binary equivalent as follows:

   0011 1010 1011 . 0010(2)
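Both special conversions can be sketched as one grouping routine, parameterized by the group size k (k = 3 for octal, k = 4 for hexadecimal), padding with leading and trailing 0's as described; the function name is my own:

```python
def binary_to_base_2k(bits: str, k: int) -> str:
    """Convert a binary digit string (with an optional binary point) to
    base 2**k by grouping bits k at a time outward from the point."""
    symbols = "0123456789ABCDEF"
    int_part, _, frac_part = bits.partition(".")
    int_part = int_part.zfill(-(-len(int_part) // k) * k)      # leading 0's
    if frac_part:
        frac_part = frac_part.ljust(-(-len(frac_part) // k) * k, "0")
    def groups(s):
        # replace each k-bit group by its base-2**k digit
        return "".join(symbols[int(s[i:i + k], 2)] for i in range(0, len(s), k))
    out = groups(int_part)
    if frac_part:
        out += "." + groups(frac_part)
    return out
```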
The binary number system is the most frequently used number system in digital
systems. Hence the numbers in the memory and the various registers are strings of
0’s and 1’s. In these systems, it is often more convenient to regard these numbers as
being either octal or hexadecimal rather than binary when referring to them since
fewer digits are involved.

2.7 SIGNED NUMBERS AND COMPLEMENTS


In the above discussion of numbers, emphasis was placed on simply denoting the
magnitude of a quantity. No attention was given to whether this magnitude is posi-
tive or negative. Certainly, one approach to represent signed quantities is to precede
a number with a symbol, say, a plus sign (+) if it is positive and a minus sign (—) if
it is negative. This form of signed numbers is known as the sign-magnitude repre-
sentation. In actuality, digital systems utilizing the sign-magnitude representation
use the binary digit 0 to denote the plus sign and the binary digit 1 to denote the
minus sign.
It is possible to give a graphical interpretation to numbers in a sign-magnitude representation by assuming the real numbers appear as points along a line. In this case, the point denoting the zero quantity is called the origin of the line; positive quantities are considered as points lying to the right of the origin and negative quantities as points lying to the left of the origin. Figure 2.1 illustrates this concept. Thus, the sign serves the purpose of indicating which side of the origin the magnitude lies on, and the magnitude itself serves as a measurement with respect to the origin.

Consider the possibility of defining a second origin, called the offset origin, at some point to the left of the origin associated with the zero quantity, called the true origin. This is illustrated in Fig. 2.2, where the offset origin is 10 units to the left of the true origin. Referring to this figure, the negative quantity 3 is a leftward measurement relative to the true origin, denoted by −3, or, equivalently, a rightward measurement relative to the offset origin, denoted by *7. Both measurements can denote the same quantity without ambiguity as long as

Figure 2.1 Graphical interpretation of sign-magnitude numbers.

the reference for the measurement is known. The asterisk was used in writing the
number simply to indicate that the measurement is relative to the offset origin
rather than the true origin. Furthermore, if the largest negative quantity to be rep-
resented is known, then it is always possible to place the offset origin suffi-
ciently far to the left of the true origin so that all negative quantities are mea-
sured rightward of the offset origin. Since in digital systems the number of digits
allocated to a quantity is fixed, the largest negative number that can be repre-
sented is always known.
Again consider a point to the left of the true origin which is denoted as an (unsigned) number N1, representing a measurement relative to the true origin. When this same point is denoted as an (unsigned) number N2, representing a measurement relative to the offset origin, N2 is called the complement of N1. Thus, in Fig. 2.2, it is

Figure 2.2 Graphical interpretation of complements.

said that 7 is the complement of 3. Formally, the complement of a number N1 is defined as another number N2 such that the sum N1 + N2 produces a specified result; in particular, this specified result is the displacement of the offset origin relative to the true origin.

There are two well-known types of complements: the r's-complement (also called the radix complement or the true complement), in which the offset origin is displaced r^n units to the left of the true origin, and the (r − 1)'s-complement (also called the diminished-radix complement or the radix-minus-one complement), in which the offset origin is displaced by r^n − r^−m units. In both cases, n is the number of integer digits of the largest quantity that must be represented, m is the number of fraction digits, and r is the base of the number expressed in its own number system, i.e., r = 10. Thus, given some sequence of digits denoting a number N in base r, the two types of complements are obtained by performing the following computations using base-r arithmetic:

   r's-complement of N = r^n − N = 10^n − N     (2.7)

   (r − 1)'s-complement of N = r^n − r^−m − N = 10^n − 10^−m − N     (2.8)

For the special case of decimal numbers, the r's-complement is called the 10's-complement (or tens-complement) and the (r − 1)'s-complement is called the 9's-complement (or nines-complement); while for binary numbers, the r's-complement is called the 2's-complement (or twos-complement) and the (r − 1)'s-complement is called the 1's-complement (or ones-complement).
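Equations (2.7) and (2.8) can be sketched directly; exact rational arithmetic is used so that fraction digits such as .45 introduce no floating-point error. The function returns values rather than base-r digit strings, and its name and parameters (n integer digits, m fraction digits) are my own:

```python
from fractions import Fraction

def complements(n_str: str, r: int, n: int, m: int):
    """Compute the r's- and (r-1)'s-complements of a base-r number with
    n integer digits and m fraction digits, per Eqs. (2.7) and (2.8)."""
    int_part, _, frac_part = n_str.partition(".")
    value = Fraction(0)
    for i, d in enumerate(reversed(int_part)):
        value += int(d) * Fraction(r) ** i
    for i, d in enumerate(frac_part, start=1):
        value += int(d) * Fraction(r) ** -i
    radix_comp = Fraction(r) ** n - value            # Eq. (2.7): r^n - N
    dim_radix_comp = radix_comp - Fraction(r) ** -m  # Eq. (2.8): - r^-m more
    return radix_comp, dim_radix_comp
```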
The purpose of introducing the complements of numbers is to provide an-
other means for denoting signed quantities. These are known as complement
representations. Depending upon the offset used, signed numbers are expressed
in either the r’s-complement representation or (r — 1)’s-complement representa-
tion. In both cases the true origin is used for dealing with positive quantities, and
an offset origin for dealing with negative quantities. In addition, a “sign digit”
is appended to a number. The sign digit serves to indicate which origin is being
used for a measurement. The binary digit 0 is appended to denote a positive
number, i.e., a measurement relative to the true origin, and the digit 1 to denote

Table 2.6 Representations of signed numbers

   Positive numbers:
      All representations:                           0_s N

   Negative numbers:
      Sign-magnitude representation:                 1_s N
      Signed r's-complement representation:*         1_s (10^n − N)
      Signed (r − 1)'s-complement representation:*   1_s (10^n − 10^−m − N)

   *Recall the base of a number expressed in its own number system is 10.

a complement or negative number, i.e., a measurement relative to an offset


origin.
The three representations of signed numbers are summarized in Table 2.6. In this table, the subscript s is regarded only as a delimiter to separate the sign digit from the measurement digits for ease of reading. Furthermore, it should be noted that in all three representations of signed numbers, the positive form of a number is the same, since the true origin is used as the reference for the measurement.

The following examples illustrate the forming of complements and the writing of signed numbers. In all cases the same number of digits are used when writing the complement of a number as the number itself. This will be important in the following two sections when arithmetic is performed with complement numbers.

EXAMPLE 2.20

Given the unsigned decimal number 123.45, its 10's-complement is given by

   10^3 − 123.45 = 1000 − 123.45
                 = 876.55

As a signed number in the 10's-complement representation, the positive representation of 123.45 is 0_s123.45 and the negative representation is 1_s876.55.

EXAMPLE 2.21

Given the unsigned decimal number 123.45, its 9's-complement is given by

   10^3 − 10^−2 − 123.45 = 1000 − 0.01 − 123.45
                         = 999.99 − 123.45
                         = 876.54

As a signed number in the 9's-complement representation, the positive representation of 123.45 is 0_s123.45 and the negative representation is 1_s876.54.

EXAMPLE 2.22

Given the unsigned binary number 1101.011, its 2's-complement is given by

   10^100(2) − 1101.011(2) = 10000 − 1101.011
                           = 0010.101

As a signed number in the 2's-complement representation, the positive representation of 1101.011 is 0_s1101.011 and the negative representation is 1_s0010.101.

EXAMPLE 2.23

Given the unsigned binary number 1101.011, its 1's-complement is given by

   10^100(2) − 10^−11(2) − 1101.011(2) = 10000 − 0.001 − 1101.011
                                       = 1111.111 − 1101.011
                                       = 0010.100

As a signed number in the 1's-complement representation, the positive representation of 1101.011 is 0_s1101.011 and the negative representation is 1_s0010.100.

When the complements of an (unsigned) number are formed, it can always be done by using the mathematical definitions as given in Eqs. (2.7) and (2.8). However, there are also some simple procedures for forming complements. To form the r's-complement of N, working from right to left, keep all least significant zero digits of N unchanged, subtract the first least significant nonzero digit from r, the base of the number system, and subtract each of the remaining digits from r − 1. To form the (r − 1)'s-complement of N, simply subtract each digit of N from r − 1.

For the case of signed numbers in the r's-complement or (r − 1)'s-complement representations, the above procedures are applied to the measurement portion of the number, i.e., those digits to the right of the sign bit, when the complements are formed. In addition, for both cases, it is necessary to reverse the sign bit. That is, if the sign bit is 0, then it is replaced by a 1. Similarly, if the sign bit is a 1, then it is replaced by a 0. In general, taking the complement of a signed number is equivalent to taking the negative of the number.
It should be noted that the r's-complement of N can also be obtained by simply adding 1 to the least significant digit of the (r − 1)'s-complement of N.* Furthermore, for binary numbers, the 1's-complement is formed by merely replacing each 1 by a 0 and each 0 by a 1 in the original number. Finally, it should be observed that the complement of the complement of N is once again N. That is, the double complement of a number is the number itself. Readers are encouraged to test their understanding of these algorithms by applying them to Examples 2.20 to 2.23.
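The digit-by-digit shortcuts can be sketched and checked against the defining equations; digits are assumed to be 0 through 9, and the function name is my own. The first value returned is the r's-complement, the second the (r − 1)'s-complement:

```python
def digitwise_complements(n_str: str, r: int):
    """Digit-by-digit shortcuts from the text. (r-1)'s-complement:
    subtract every digit from r-1. r's-complement: keep least significant
    0's, subtract the first nonzero digit from r, and subtract all
    higher-order digits from r-1. The radix point is left in place."""
    dim = "".join(c if c == "." else str(r - 1 - int(c)) for c in n_str)
    out, seen_nonzero = [], False
    for c in reversed(n_str):                 # work from right to left
        if c == ".":
            out.append(c)
        elif not seen_nonzero:
            if c == "0":
                out.append("0")               # keep least significant 0's
            else:
                out.append(str(r - int(c)))   # first nonzero: from r
                seen_nonzero = True
        else:
            out.append(str(r - 1 - int(c)))   # remaining digits: from r-1
    return "".join(reversed(out)), dim
```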
In closing this discussion on signed numbers and complements, one final comment is in order. Because of the relative simplicity of forming the (r − 1)'s-complement, it might appear that this representation is advantageous over the r's-complement representation. However, one objection that can be given against the (r − 1)'s-complement representation, as well as the sign-magnitude representation, is that there exists a minus zero. That is, for the case of binary numbers in the 1's-complement representation, a plus zero appears as 0_s00···0 and a minus zero appears as 1_s11···1. In the sign-magnitude representation a plus zero appears as 0_s00···0 and a minus zero as 1_s00···0. This situation does not occur in the 2's-complement representation, since zero only appears as 0_s00···0.

*The single exception to this rule is 1_s00···0, representing the quantity −10^n, in the r's-complement representation, which cannot be complemented.
CHAPTER 2 Number Systems, Arithmetic, and Codes 31

2.8 ADDITION AND SUBTRACTION WITH r'S-COMPLEMENTS
One reason for the interest in the complement form of numbers is that it is possible to subtract two numbers by doing addition if the subtrahend is first complemented. However, the subtraction procedure differs slightly depending upon whether the r's-complement or the (r − 1)'s-complement of a number is being used.

First a general procedure for the subtraction of two numbers by the addition of the r's-complement of the subtrahend is stated. The validity of the procedure is later established. Assume the difference N₁ − N₂ is to be obtained, and, for initial simplicity, assume that N₁ and N₂ are unsigned numbers in base r (thereby effectively implying that they are both positive). To perform subtraction, N₁ and N₂ are first expressed so that each has the same number of integer digits (i.e., n) and the same number of fraction digits (i.e., m). This can always be done by adding leading and trailing 0's to the numbers. The unsigned r's-complement of N₂, denoted by N̄₂, is then formed as was discussed in the previous section. Upon aligning the radix points of N₁ and N̄₂, the sum N₁ + N̄₂ is obtained using base-r addition. If a carry, called the end carry, is generated from the most-significant-digit position, i.e., if the sum has one more digit than the original numbers, then the carry digit is ignored when writing the sum. The resulting sum is the representation for the quantity N₁ − N₂. If N₁ ≥ N₂, then the difference has the form of a positive number; while if N₁ < N₂, then the difference, being negative, appears in r's-complement form.* The occurrence of the end carry indicates that a positive difference has resulted, while the nonoccurrence of the end carry indicates that the difference is in r's-complement form.

The following examples illustrate subtraction by the addition of the r's-complement.

EXAMPLE 2.24
Consider the two decimal numbers N₁ = 532 and N₂ = 146. The 10's-complement of 146 is N̄₂ = 854. The difference N₁ − N₂ is now obtained by forming the sum N₁ + N̄₂.

    Conventional                  Subtraction by addition
    subtraction                   of the 10's-complement

      N₁ =  532                     N₁ =  532
     -N₂ = -146                    +N̄₂ = +854
    N₁ - N₂ = 386                 N₁ + N̄₂ = 1|386
                                             └─ The end carry is ignored

Since an end carry has occurred, it is known that the result is positive.

*It should be carefully noted that a negative answer always appears in complement form.
32 DIGITAL PRINCIPLES AND DESIGN

EXAMPLE 2.25
To illustrate the effect of a negative difference, again consider the decimal numbers N₁ = 532 and N₂ = 146, where the 10's-complement of 532 is 468.

    Conventional                  Subtraction by addition
    subtraction                   of the 10's-complement

      N₂ =  146                     N₂ =  146
     -N₁ = -532                    +N̄₁ = +468
    N₂ - N₁ = -386                N₂ + N̄₁ = 614

The fact that the end carry did not occur indicates that the result is in 10's-complement form. Since the 10's-complement of 386 is 614, the result N₂ + N̄₁ is the 10's-complement of the negative difference.

EXAMPLE 2.26
Consider the binary numbers N₁ = 11101.11 and N₂ = 01011.10.* The 2's-complement of N₂ is N̄₂ = 10100.10.

    Conventional                  Subtraction by addition
    subtraction                   of the 2's-complement

      N₁ =  11101.11                N₁ =  11101.11
     -N₂ = -01011.10               +N̄₂ = +10100.10
    N₁ - N₂ = 10010.01            N₁ + N̄₂ = 1|10010.01
                                             └─ The end carry is ignored

EXAMPLE 2.27
Using the same numbers as the above example, the difference N₂ − N₁ is obtained as follows, where N̄₁ = 00010.01:

    Conventional                  Subtraction by addition
    subtraction                   of the 2's-complement

      N₂ =  01011.10                N₂ =  01011.10
     -N₁ = -11101.11               +N̄₁ = +00010.01
    N₂ - N₁ = -10010.01           N₂ + N̄₁ = 01101.11

Since the 2's-complement of 10010.01 is 01101.11, the result N₂ + N̄₁ is the 2's-complement of the negative difference.

*Note that leading and trailing 0's are used in N₂ so that both N₁ and N₂ have the same number of digits.

It is a simple matter to justify the above addition procedure to achieve subtraction. Assume N₁ − N₂ is to be obtained and that the r's-complement of N₂ is denoted by N̄₂. Adding the r's-complement of the subtrahend to the minuend gives

    N₁ + N̄₂ = N₁ + 10ⁿ − N₂ = 10ⁿ + (N₁ − N₂)        (2.9)

where base-r arithmetic is performed. If N₁ − N₂ ≥ 0 in Eq. (2.9), then a 1 occurs in the nth-order digit position of N₁ + N̄₂ due to the 10ⁿ term.* The digit 1 in this position indicates the existence of an end carry, and ignoring it when writing the sum results in the true difference N₁ − N₂ being expressed. On the other hand, if N₁ − N₂ < 0 in Eq. (2.9), then N₁ + N̄₂ = 10ⁿ − |N₁ − N₂|, which is precisely the r's-complement of N₂ − N₁. Furthermore, the nonoccurrence of the end carry indicates that the result is in r's-complement form. Thus, the validity of the subtraction procedure is established.
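The procedure just established can be sketched in a few lines of Python (an illustration only: the helper names are invented here, and the operands are restricted to unsigned integers so that the radix-point alignment step can be omitted):

```python
def r_complement(digits: str, r: int) -> str:
    """r's-complement of an n-digit unsigned base-r integer string."""
    n = len(digits)
    comp = r ** n - int(digits, r)
    out = []
    for _ in range(n):                       # express the complement back in base r
        comp, d = divmod(comp, r)
        out.append("0123456789abcdef"[d])
    return "".join(reversed(out))

def subtract_via_complement(n1: str, n2: str, r: int):
    """N1 - N2 by adding the r's-complement of N2.
    Returns (result_digits, end_carry); no end carry means the result
    is itself in r's-complement form (a negative difference)."""
    n = len(n1)
    total = int(n1, r) + int(r_complement(n2, r), r)
    end_carry = total >= r ** n              # carry out of the most significant digit
    total %= r ** n                          # the end carry is ignored when writing the sum
    out = []
    for _ in range(n):
        total, d = divmod(total, r)
        out.append("0123456789abcdef"[d])
    return "".join(reversed(out)), end_carry

print(subtract_via_complement("532", "146", 10))  # ('386', True)  -> positive difference
print(subtract_via_complement("146", "532", 10))  # ('614', False) -> 10's-complement of 386
```

The two calls mirror Examples 2.24 and 2.25: an end carry signals a positive difference, and its absence signals a result in r's-complement form.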

2.8.1 Signed Addition and Subtraction

It can also be proved that the subtraction of two signed numbers is achieved by adding the signed r's-complement of the subtrahend to the signed minuend. In this case, the sign digits are regarded as simply another digit of a number and are also "added." The measurement digits of the two signed numbers are added using base-r addition. However, in view of the fact that only 0's and 1's appear in the sign-digit position, binary addition is performed in the column associated with the sign digits independent of the arithmetic being used on the measurement digits. If a carry is generated from the addition of the most significant measurement digits, then it is carried over into the sign-digit column and is added along with the sign digits according to the rules of binary addition. This is illustrated shortly in Example 2.28. On the other hand, an end carry generated from the addition of the sign digits is ignored.† When the subtraction of signed numbers by the addition of a signed r's-complement is performed, the result of the operation has the correct sign digit. Hence, if the sign digit of the result is 1, then the measurement digits of the answer are in r's-complement form.

*Note that under the assumption of n integer digits in the operands, the highest-order digit position is the (n − 1)st-order. Thus, 10ⁿ corresponds to a 1 in the nth-order digit position.
†Although this approach for handling the sign digits appears rather awkward, when dealing with signed binary numbers no inconvenience is encountered since this simply implies that it is not necessary to distinguish the sign digit from the rest of the measurement digits. Alternatively, for nonbinary signed numbers, frequently the digit b − 1 is used as the sign digit of a negative quantity. For example, in the case of decimal numbers, the digit 9 is used to denote a negative quantity rather than 1. If this is done, then regular base-b addition can be performed on the sign digits instead of binary addition.

EXAMPLE 2.28
Consider the signed decimal numbers N₁ = 0,856.7 and N₂ = 0,275.3. The signed 10's-complement of 0,275.3 is 1,724.7. The difference N₁ − N₂ is obtained by forming the sum N₁ + N̄₂.

    Conventional                  Subtraction by addition
    subtraction                   of the 10's-complement

      N₁ =  856.7                   N₁ =  0,856.7   ← Binary addition is performed
     -N₂ = -275.3                  +N̄₂ = +1,724.7     in the sign-digit column
    N₁ - N₂ = 581.4               N₁ + N̄₂ = 1|0,581.4
                                             └─ The end carry is ignored

EXAMPLE 2.29
Using the numbers N₁ and N₂ of the previous example, N₂ − N₁ is formed as follows, where the signed 10's-complement of 0,856.7 is 1,143.3:

    Conventional                  Subtraction by addition
    subtraction                   of the 10's-complement

      N₂ =  275.3                   N₂ =  0,275.3
     -N₁ = -856.7                  +N̄₁ = +1,143.3
    N₂ - N₁ = -581.4              N₂ + N̄₁ = 1,418.6

The 1 in the sign digit position indicates that the answer is negative and the measurement digits are in the 10's-complement form.

EXAMPLE 2.30
Consider the signed binary numbers N₁ = 0,11011.01 and N₂ = 0,10110.10. Since the signed 2's-complement of N₂ is 1,01001.10, the difference N₁ − N₂ is obtained by using the signed 2's-complement of N₂ as follows:

    Conventional                  Subtraction by addition
    subtraction                   of the 2's-complement

      N₁ =  11011.01                N₁ =  0,11011.01
     -N₂ = -10110.10               +N̄₂ = +1,01001.10
    N₁ - N₂ = 00100.11            N₁ + N̄₂ = 1|0,00100.11
                                             └─ The end carry is ignored



EXAMPLE 2.31
Using N₁ and N₂ of Example 2.30, N₂ − N₁ is achieved as follows, where the signed 2's-complement of N₁ is 1,00100.11:

    Conventional                  Subtraction by addition
    subtraction                   of the 2's-complement

      N₂ =  10110.10                N₂ =  0,10110.10
     -N₁ = -11011.01               +N̄₁ = +1,00100.11
    N₂ - N₁ = -00100.11           N₂ + N̄₁ = 1,11011.01

The 1 in the sign digit position indicates that the answer is negative and the measurement digits are in 2's-complement form.

Although the above discussion was based on the subtraction of two signed positive numbers, the concept presented applies to the algebraic addition and subtraction of any two signed numbers in the r's-complement representation. As long as the measurement portions of the numbers are added, possibly after forming the signed r's-complement of the subtrahend when subtraction is specified, digit position by digit position using base-r arithmetic, and the signs are added using binary addition, then the result is algebraically correct.* For example, if one of the operands is negative, and hence in its r's-complement form, then under the specification of addition the sum of the two operands results in the difference being obtained. On the other hand, if the difference between two operands is to be calculated, then the signed r's-complement of the subtrahend is simply added to the minuend. If the subtrahend is initially a negative quantity (and thereby expressed in r's-complement form), then taking its signed r's-complement results in a positive representation since the negative of a negative quantity is a positive quantity.

There is, however, one complication that can arise when dealing with algebraic addition and subtraction of signed numbers. Depending upon the signs of the operands, when the two signed numbers are added, possibly after the signed r's-complement of the subtrahend is formed when a subtraction operation is specified, it is possible that the operands are both positive or both negative. In such a case, if p digits are allocated to the measurement portion of the operands, then their sum could require p + 1 digits to properly indicate the measurement portion. However, it has been assumed in this discussion that a fixed number of digits are available for the measurement representation. When the resulting sum requires more digits than are available, an overflow condition† is said to occur. If an overflow occurs, then the algebraic sum is incorrect. One method of detecting an overflow condition is to detect when the signs of the two operands are the same, but the sign of the resulting sum is opposite to that of the operands. Another way of detecting an overflow condition involves detecting the carries into and from the sign digit position when the addition is performed. In this case, an overflow condition exists under two possibilities: (1) there is a carry into the sign digit position and no carry from the sign digit position, or (2) there is no carry into the sign digit position and a carry from the sign digit position.

*It is seen shortly that there is a constraint that the algebraic result must be expressible with the number of digits allocated for the measurement portion.
†The reader should carefully note that an overflow condition is not the same as the end carry discussed previously.

EXAMPLE 2.32
Consider the signed binary numbers N₁ = 0,11011.01 and N₂ = 0,10110.10. When the numbers are added to form N₁ + N₂, the following results:

      N₁ =  0,11011.01
     +N₂ = +0,10110.10
    N₁ + N₂ = 1,10001.11

In this case an overflow condition has occurred since the addition of two positive numbers results in a negative sum. The overflow condition is a consequence of an insufficient number of measurement digits to properly represent the sum. Alternatively, in the above addition, an overflow is detected since there is a carry into the sign digit position while no carry results from the addition of the sign digits.
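The carry-based overflow test can also be sketched in Python (illustrative only; it assumes equal-length operands written sign bit first with the radix point dropped, and the function name is invented here):

```python
def add_signed_2s(a: str, b: str):
    """Add two equal-length signed 2's-complement bit strings (sign bit first).
    Overflow iff the carry into the sign position differs from the carry out of it."""
    assert len(a) == len(b)
    carry = 0
    bits = []
    carries = []                      # carry generated out of each column, LSB first
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        bits.append(str(total % 2))
        carry = total // 2
        carries.append(carry)
    carry_out_of_sign = carries[-1]   # the end carry (ignored in the written sum)
    carry_into_sign = carries[-2] if len(carries) > 1 else 0
    overflow = carry_into_sign != carry_out_of_sign
    return "".join(reversed(bits)), overflow

print(add_signed_2s("01101101", "01011010"))  # ('11000111', True): overflow
```

The call shown reproduces Example 2.32 with the radix points removed (carry into the sign, none out), while `add_signed_2s("01101101", "10100110")` reproduces the overflow-free addition of Example 2.30.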

2.9 ADDITION AND SUBTRACTION WITH (r − 1)'S-COMPLEMENTS
Just as subtraction is achievable by doing addition with r's-complements, it is also possible to do subtraction by doing addition using the (r − 1)'s-complements. The difference between the two procedures lies in the handling of the end carry.

Initially, assume the minuend and subtrahend are unsigned base-r numbers. To obtain the difference N₁ − N₂, the numbers N₁ and N₂ are first expressed so that they both have the same number of integer digits (i.e., n) and the same number of fraction digits (i.e., m). Upon rewriting N₂ in its (r − 1)'s-complement form, denoted by N̄₂, as was discussed in Sec. 2.7, the radix points of N₁ and N̄₂ are aligned and their sum N₁ + N̄₂ is obtained using base-r addition. If an end carry is generated, then it is carried around to the least-significant-digit position of the sum and added to it. This carry is known as the end-around carry. The resulting sum corresponds to N₁ − N₂. If N₁ > N₂, then the difference has the form of a positive number; while if N₁ ≤ N₂, then the difference, being negative, appears in (r − 1)'s-complement form.* The occurrence of an end-around carry indicates that a positive difference has resulted, while the nonoccurrence of the end-around carry indicates that the difference is in (r − 1)'s-complement form and, thereby, denotes a negative result. The validity of this procedure is given shortly.

The following examples illustrate subtraction by the addition of the (r − 1)'s-complement.

EXAMPLE 2.33
Let N₁ = 85.2₍₁₀₎ and N₂ = 32.5₍₁₀₎. The 9's-complement of N₂ is N̄₂ = 67.4. To obtain the difference N₁ − N₂, the sum N₁ + N̄₂ is formed:

    Conventional                  Subtraction by addition
    subtraction                   of the 9's-complement

      N₁ =  85.2                    N₁ =  85.2
     -N₂ = -32.5                   +N̄₂ = +67.4
    N₁ - N₂ = 52.7                N₁ + N̄₂ = 1|52.6
                                              + 1   ← End-around carry
                                               52.7  ← Difference

It should be noted that the end carry resulting upon forming the sum N₁ + N̄₂ is added to the least-significant-digit position to obtain the final result.

EXAMPLE 2.34
Again let N₁ = 85.2₍₁₀₎ and N₂ = 32.5₍₁₀₎. The 9's-complement of N₁ is N̄₁ = 14.7. To obtain the difference N₂ − N₁:

    Conventional                  Subtraction by addition
    subtraction                   of the 9's-complement

      N₂ =  32.5                    N₂ =  32.5
     -N₁ = -85.2                   +N̄₁ = +14.7
    N₂ - N₁ = -52.7               N₂ + N̄₁ = 47.2  ← Difference in 9's-complement form

In this case, no end carry occurred when adding. Thus, the result is in 9's-complement form.

*It should be noted that a zero difference appears in (r — 1)’s-complement form and hence as a negative
quantity. The single exception is when a negative zero is subtracted from a positive zero. In this case, a
positive zero is obtained since after the negative zero is complemented, a positive zero is added to a
positive zero.

EXAMPLE 2.35
Let N₁ = 110.1₍₂₎ and N₂ = 011.0₍₂₎. The 1's-complement of N₂ is N̄₂ = 100.1. To obtain the difference N₁ − N₂:

    Conventional                  Subtraction by addition
    subtraction                   of the 1's-complement

      N₁ =  110.1                   N₁ =  110.1
     -N₂ = -011.0                  +N̄₂ = +100.1
    N₁ - N₂ = 011.1               N₁ + N̄₂ = 1|011.0
                                               + 1   ← End-around carry
                                              011.1  ← Difference

EXAMPLE 2.36
Again let N₁ = 110.1₍₂₎ and N₂ = 011.0₍₂₎. The 1's-complement of N₁ is N̄₁ = 001.0. To obtain the difference N₂ − N₁:

    Conventional                  Subtraction by addition
    subtraction                   of the 1's-complement

      N₂ =  011.0                   N₂ =  011.0
     -N₁ = -110.1                  +N̄₁ = +001.0
    N₂ - N₁ = -011.1              N₂ + N̄₁ = 100.0  ← Difference in 1's-complement form

To establish the validity of achieving subtraction by the addition of the (r − 1)'s-complement of the subtrahend to the minuend, assume N₁ − N₂ is to be obtained and that the (r − 1)'s-complement of N₂ is denoted by N̄₂. Adding the (r − 1)'s-complement of the subtrahend to the minuend results in

    N₁ + N̄₂ = N₁ + 10ⁿ − 10⁻ᵐ − N₂ = 10ⁿ − 10⁻ᵐ + (N₁ − N₂)        (2.10)

where base-r arithmetic is performed in the above expression. If N₁ − N₂ > 0 in Eq. (2.10), then a 1 occurs in the nth-order digit position of N₁ + N̄₂ due to the 10ⁿ term. The digit 1 in this position indicates the existence of an end carry. By adding it to the least-significant-digit position of N₁ + N̄₂, it cancels the effect of the −10⁻ᵐ term and the difference N₁ − N₂ results. On the other hand, if N₁ − N₂ ≤ 0 in Eq. (2.10), then N₁ + N̄₂ = 10ⁿ − 10⁻ᵐ − |N₁ − N₂|, which is precisely the (r − 1)'s-complement representation of N₂ − N₁. It should be noted that in this case, the nonoccurrence of an end carry indicates the result is in (r − 1)'s-complement form.
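A minimal Python sketch of this end-around-carry procedure for unsigned binary operands (illustrative; the function name is invented and radix points are omitted for simplicity):

```python
def ones_comp_subtract(n1: str, n2: str):
    """N1 - N2 on equal-length unsigned binary strings via the 1's-complement
    and an end-around carry. Returns (result, end_around_carry_occurred)."""
    n = len(n1)
    comp2 = int("".join("1" if b == "0" else "0" for b in n2), 2)
    total = int(n1, 2) + comp2
    end_carry = total >= 2 ** n
    if end_carry:
        total = total - 2 ** n + 1    # wrap the end carry around to the LSB
    return format(total, f"0{n}b"), end_carry

print(ones_comp_subtract("1101", "0110"))  # ('0111', True):  7 = 13 - 6
print(ones_comp_subtract("0110", "1101"))  # ('1000', False): 1's-complement of 0111
```

As in Examples 2.35 and 2.36, the end-around carry marks a positive difference, and its absence marks a result in 1's-complement form.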

2.9.1 Signed Addition and Subtraction

If the minuend and subtrahend are signed numbers, then the above procedure can still be applied by considering the sign digit as simply another digit of the number, as was done in the previous section. Thus, when performing the base-r addition, the sign digits are also "added." However, in view of the fact that only 0's and 1's can appear in the sign-digit position, binary addition is performed in the column associated with the sign digits independent of the arithmetic being used on the measurement digits. If a carry is generated from the addition of the most significant measurement digits, then it is carried into the sign-digit column and is added along with the sign digits according to the rules of binary addition (see Example 2.37). On the other hand, an end carry generated from the addition of the sign digits serves as the end-around carry and must be added, using base-r addition, to the least significant digit of the previously generated sum.* After the addition involving the end-around carry, when necessary, the sign digit of the difference is always correct, assuming there are a sufficient number of measurement digits to express the result.† That is, a 0 sign digit indicates a positive difference, while a 1 sign digit indicates a negative difference with the measurement digits in (r − 1)'s-complement form.

EXAMPLE 2.37
Let N₁ = 0,54.2₍₁₀₎ and N₂ = 0,32.8₍₁₀₎. The signed 9's-complement of N₂ is N̄₂ = 1,67.1. To obtain N₁ − N₂:

    Conventional                  Subtraction by addition
    subtraction                   of the 9's-complement

      N₁ =  54.2                    N₁ =  0,54.2   ← Binary addition is performed
     -N₂ = -32.8                   +N̄₂ = +1,67.1     in the sign-digit column
    N₁ - N₂ = 21.4                N₁ + N̄₂ = 1|0,21.3
                                                + 1   ← End-around carry
                                              0,21.4  ← Positive difference

*Although this approach for handling the sign digits appears rather awkward, when dealing with signed binary numbers no inconvenience is encountered since this simply implies that it is not necessary to distinguish the sign digit from the rest of the measurement digits. Alternatively, for nonbinary signed numbers, frequently the digit b − 1 is used as the sign digit of a negative quantity. For example, in the case of decimal numbers, the digit 9 is used to denote a negative quantity rather than 1. If this is done, then regular base-b addition can be performed on the sign digits instead of binary addition.
†As is discussed shortly, adding two signed numbers that are both positive or both negative can cause an overflow condition, in which case the algebraic result is incorrect.

EXAMPLE 2.38
Again let N₁ = 0,54.2₍₁₀₎ and N₂ = 0,32.8₍₁₀₎. The signed 9's-complement of N₁ is N̄₁ = 1,45.7. To obtain N₂ − N₁:

    Conventional                  Subtraction by addition
    subtraction                   of the 9's-complement

      N₂ =  32.8                    N₂ =  0,32.8
     -N₁ = -54.2                   +N̄₁ = +1,45.7
    N₂ - N₁ = -21.4               N₂ + N̄₁ = 1,78.5  ← Negative difference in
                                                       9's-complement form

EXAMPLE 2.39
Let N₁ = 0,110.101₍₂₎ and N₂ = 0,010.110₍₂₎. The signed 1's-complement of N₂ is N̄₂ = 1,101.001. To obtain N₁ − N₂:

    Conventional                  Subtraction by addition
    subtraction                   of the 1's-complement

      N₁ =  110.101                 N₁ =  0,110.101
     -N₂ = -010.110                +N̄₂ = +1,101.001
    N₁ - N₂ = 011.111             N₁ + N̄₂ = 1|0,011.110
                                                  + 1    ← End-around carry
                                              0,011.111  ← Positive difference

EXAMPLE 2.40
Again let N₁ = 0,110.101₍₂₎ and N₂ = 0,010.110₍₂₎. The signed 1's-complement of N₁ is N̄₁ = 1,001.010. To obtain N₂ − N₁:

    Conventional                  Subtraction by addition
    subtraction                   of the 1's-complement

      N₂ =  010.110                 N₂ =  0,010.110
     -N₁ = -110.101                +N̄₁ = +1,001.010
    N₂ - N₁ = -011.111            N₂ + N̄₁ = 1,100.000  ← Negative difference in
                                                          1's-complement form

In the above discussion, N₁ and N₂ were assumed to be two signed positive numbers. The remarks at the end of Sec. 2.8 regarding the algebraic addition and subtraction of signed numbers and the concept of overflow are equally applicable for numbers in the (r − 1)'s-complement representation. That is, when the operands are in signed form, their base-r addition of the measurement digits, with the possible need to handle the end-around carry, and binary addition of the sign digits gives the correct algebraic result subject to the constraint of an overflow condition. Recall that an overflow condition is the lack of sufficient measurement digits to represent a quantity. When an overflow condition occurs, the result is not algebraically correct. The reader should carefully note that overflow is not the end-around carry needed to achieve the valid algebraic result. The detection of an overflow when using the signed (r − 1)'s-complement representation of numbers is exactly the same as when using the signed r's-complement representation.
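For completeness, the signed form of the end-around-carry addition can be sketched as follows (an illustration only; it assumes the "sign,digits" string format of the examples with radix points dropped, and the function name is invented):

```python
def signed_ones_comp_add(a: str, b: str) -> str:
    """Add two signed 1's-complement strings written 's,bits' (sign digit first).
    An end carry from the sign column is wrapped around to the LSB."""
    x = int(a.replace(",", ""), 2)
    y = int(b.replace(",", ""), 2)
    n = len(a.replace(",", ""))
    total = x + y
    if total >= 2 ** n:               # end carry generated from the sign digit
        total = total - 2 ** n + 1    # end-around carry
    bits = format(total, f"0{n}b")
    return bits[0] + "," + bits[1:]

# Example 2.39 with the radix points dropped: 0,110101 - 0,010110
print(signed_ones_comp_add("0,110101", "1,101001"))  # 0,011111
```

The second operand is already the signed 1's-complement of the subtrahend, so the call reproduces Example 2.39; swapping the roles of the operands, as in Example 2.40, yields a result whose sign digit is 1.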

2.10 CODES

Because of the availability and reliability of circuits that have two physical states associated with them, the binary number system is a natural choice for data handling and manipulation in digital systems. One of these physical states of the circuit is associated with the binary digit 0, while the binary digit 1 is associated with the other physical state. On the other hand, humans prefer the decimal number system. In addition, it is often desirable to have a digital system handle information other than numerical quantities, such as letters of the alphabet. A solution to this problem is to encode each of the decimal symbols, letters of the alphabet, and other special symbols by a unique string of binary digits, called a code group. In this way the digital system considers each code group as a single entity and can process and manipulate the elements comprising these code groups as binary digits.

2.10.1 Decimal Codes


Consider first the 10 symbols occurring in the decimal number system. There are many codes that can be conceived for just these symbols. Coded representations for the 10 decimal symbols are known as binary-coded decimal (or BCD) schemes or, simply, decimal codes. With 3 bits, there are 8 combinations (2³ = 8) of 0's and 1's available to form a code group, while 4 bits allow 16 such combinations (2⁴ = 16). Since there are 10 distinct digits in the decimal number system which must be coded, a minimum of 4 bits are necessary in each code group for a decimal digit. If the code group consists of 4 bits, then there are six combinations that are not used. It is interesting to note that there are approximately 30 billion coding schemes for the decimal digits in which the code groups consist of just 4 bits.*

Of the many 4-bit BCD coding schemes, the most common is the 8421 code.† In this coding scheme, the 10 decimal digits are represented by their 4-bit binary equivalents as shown in Table 2.7. This is known as a weighted code since the corresponding decimal digit is easily determined by adding the weights associated with the 1's in the code group. The weights are given by the code name. In the case of

*In actuality, there are 16!/6! 4-bit codes for the 10 decimal digits.
†Because of its popularity, this coding scheme is often simply referred to as BCD.

Table 2.7 Weighted decimal codes

    Decimal   8421    2421    5421    7536    Biquinary code
    digit     code    code    code    code    5043210
    0         0000    0000    0000    0000    0100001
    1         0001    0001    0001    1001    0100010
    2         0010    0010    0010    0111    0100100
    3         0011    0011    0011    0010    0101000
    4         0100    0100    0100    1011    0110000
    5         0101    1011    1000    0100    1000001
    6         0110    1100    1001    1101    1000010
    7         0111    1101    1010    1000    1000100
    8         1000    1110    1011    0110    1001000
    9         1001    1111    1100    1111    1010000

the 8421 code, the 0th-order bit in the code group has the weight 1, the 1st-order bit has the weight 2, the 2nd-order bit has the weight 4, and the 3rd-order bit has the weight 8. In general, if the 4 bits of a code group are written as b₃b₂b₁b₀ and the weights for the corresponding bits are w₃, w₂, w₁, and w₀, then the decimal digit N is given by

    N = b₃ × w₃ + b₂ × w₂ + b₁ × w₁ + b₀ × w₀

using base-10 arithmetic. To form the 8421 BCD representation of a decimal number, each decimal digit is simply replaced by its 4-bit code group. For example, the 8421 BCD representation of the decimal number 392 is 001110010010. It is important to note that the number of bits within each code group must be fixed to avoid any ambiguity in interpreting a coded number. Furthermore, it should be understood that a coded decimal number is not the binary equivalent of the decimal number as was discussed earlier in this chapter.
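A short Python sketch of these two operations (illustrative only; the function names are invented here):

```python
def to_bcd(number: str) -> str:
    """8421 BCD: replace each decimal digit by its 4-bit binary equivalent."""
    return "".join(format(int(d), "04b") for d in number)

def weighted_digit(group: str, weights=(8, 4, 2, 1)) -> int:
    """Recover the decimal digit by adding the weights associated with the 1's."""
    return sum(w for b, w in zip(group, weights) if b == "1")

print(to_bcd("392"))           # 001110010010
print(weighted_digit("1001"))  # 9
```

The same `weighted_digit` helper decodes any weighted code once the proper weight tuple is supplied, e.g., `weighted_digit("1011", (2, 4, 2, 1))` returns 5 for the 2421 code.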
There are numerous other weighted codes. Among these are the 2421 code and the 5421 code, both of which are shown in Table 2.7. Again the code group for each decimal digit consists of 4 bits, and the weights associated with the bits in each code group are given by the name of the code.
Although the 8421 code is the most common BCD scheme, the 2421 code does have an important property not possessed by the 8421 code. In some codes, the 9's-complement of each decimal digit is obtained by forming the 1's-complement of its respective code group. That is, if each 0 is replaced by a 1 and each 1 is replaced by a 0 in the code group for the decimal digit X, then the code group for the decimal digit 9 − X results. Codes having this property are said to be self-complementing. As indicated in Table 2.7, the 2421 code does have this property. For example, the 9's-complement of 7 is 2, and the 1's-complement of 1101 is 0010. As a consequence, the 9's-complement of an entire decimal number is obtained by forming the 1's-complement of its coded representation. As an illustration, the decimal number 328 appears as 001100101110 in the 2421 code. Forming the 1's-complement of this coded representation results in 110011010001. This is the coded form for 671, which is the 9's-complement of 328. A self-complementing coding scheme is convenient in a digital system where subtraction is performed by the addition of the 9's-complement.
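The self-complementing property is easy to verify mechanically. The sketch below (illustrative only; the table and function names are invented) encodes a number in the 2421 code and checks that its 1's-complement encodes the 9's-complement:

```python
# 2421 code groups for the digits 0..9, as listed in Table 2.7
CODE_2421 = ["0000", "0001", "0010", "0011", "0100",
             "1011", "1100", "1101", "1110", "1111"]

def encode_2421(number: str) -> str:
    return "".join(CODE_2421[int(d)] for d in number)

def ones_complement(bits: str) -> str:
    return "".join("1" if b == "0" else "0" for b in bits)

coded = encode_2421("328")
# 1's-complementing the coding of 328 yields the coding of its 9's-complement, 671
assert ones_complement(coded) == encode_2421("671")
print(coded, ones_complement(coded))  # 001100101110 110011010001
```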
It is even possible to have codes with negative weights. An example of such a
code is the 7536 code, where the bar over the 6 indicates that the weight associated
with the Oth-order bit in the code group is —6. The code group for each decimal
digit in the 7536 code is enumerated in Table 2.7. It should be noted that this is also
a self-complementing code.
Although all the codes mentioned thus far have 4 bits, there are also weighted codes having more than 4 bits. The best known of these is the biquinary code. As indicated in Table 2.7, the weights for the biquinary code are 5043210. The name of the biquinary code is derived from the fact that for each decimal digit, two 1's appear in the code group, one among the first 2 bits of the code group (hence the prefix bi) and the other among the remaining 5 bits of the code group (hence the suffix quinary).

A well-known 4-bit nonweighted BCD scheme is the excess-three code (or XS-3 code) shown in Table 2.8. This code is derived by adding the binary equivalent of the decimal 3, that is, 0011₍₂₎, to each of the code groups of the 8421 code. Thus, in this coding scheme, the decimal number 961 appears as 110010010100. As is evident from Table 2.8, this code is self-complementing.

The final decimal code to be introduced is the 2-out-of-5 code, also shown in Table 2.8. This is a nonweighted code* in which exactly 2 of the 5 bits in each code group are 1's, the remaining 3 bits being 0's. An advantage of this code, as well as the biquinary code, is that it has error-detecting properties. Error detection is discussed in the next section.

Table 2.8 Nonweighted decimal codes

    Decimal digit   Excess-3 code   2-out-of-5 code
    0               0011            11000
    1               0100            00011
    2               0101            00101
    3               0110            00110
    4               0111            01001
    5               1000            01010
    6               1001            01100
    7               1010            10001
    8               1011            10010
    9               1100            10100

*Except for the coding of the numeral 0, it is a weighted code having the weights 74210.
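The error-detecting property of the 2-out-of-5 code amounts to a simple validity test, sketched here (illustrative; the function name is invented):

```python
def looks_valid(group: str) -> bool:
    """A 2-out-of-5 code group is legal only if exactly two of its five bits are 1."""
    return len(group) == 5 and group.count("1") == 2

print(looks_valid("01010"))  # True  (the code group for 5)
print(looks_valid("01110"))  # False (a single flipped bit is detected)
```

Any single-bit error changes the number of 1's to one or three, so the corrupted group is never a legal code word.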

Figure 2.3 U.S. Postal Service bar code corresponding to the ZIP code 14263-1045. (Labeled in the figure: the frame bars at each end and the final check sum digit.)

An interesting application of the 2-out-of-5 code is the U.S. Postal Service bar
code. Figure 2.3 shows the bar code that would appear on a piece of mail having the
ZIP code 14263-1045. Each digit is coded using the 2-out-of-5 code where the bi-
nary digit 0 of a code group occurs as a short bar and the binary digit 1 of a code
group occurs as a tall bar. The ZIP code appears between two tall bars, called frame
bars, which serve to define the beginning and ending of the bar code. The frame
bars are used for aligning the scanner which reads the bar code. A final check sum
digit is also included in the bar code, e.g., the digit 4 in Fig. 2.3. The check sum
digit is used for error correction and is discussed in Sec. 2.12.
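Assuming the usual rule that the ZIP digits plus the check sum digit must total a multiple of 10, the check digit can be computed as follows (a sketch; the function name is invented):

```python
def postnet_check_digit(zip_digits: str) -> int:
    """Check sum digit: the digit that makes the total a multiple of 10."""
    return -sum(int(d) for d in zip_digits) % 10

print(postnet_check_digit("142631045"))  # 4, as in Fig. 2.3
```

For the ZIP code of Fig. 2.3, 1 + 4 + 2 + 6 + 3 + 1 + 0 + 4 + 5 = 26, so the check sum digit is 4.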

2.10.2 Unit-Distance Codes


There is a class of codes, referred to as unit-distance codes, that are particularly important when an analog quantity must be converted into a digital representation. The basic property of a unit-distance code is that only one bit changes between two successive integers which are being coded. An example of a unit-distance code is the Gray code, which is given in Table 2.9 for the decimal numbers 0 to 15.

To illustrate the usefulness of a unit-distance code, assume the position of a shaft, an analog quantity, is to be digitally represented. To do this, a positional encoder wheel is attached to the shaft. For simplicity of this discussion, assume that it is sufficient to express the angular position of the shaft with only four binary digits.* Two possible angular position encoder wheels are shown in Fig. 2.4. In one case, the coding is done with conventional binary, and the other utilizes the Gray code. To determine the angular position, i.e., sector, fixed photosensing devices arranged in a line can be used to detect the light and shaded cells across a sector of the encoder wheel. In this example, it is assumed that a light cell on the wheel is read by a photosensing device as the binary digit 1 and a dark cell as the binary digit 0.

*Thus, the angular position is given as increments of 360°/16 = 22.5°. Greater refinement is easily achieved by using more digits and an encoder with more sectors.

Table 2.9 Gray code

    Decimal number   Gray code
    0                0000
    1                0001
    2                0011
    3                0010
    4                0110
    5                0111
    6                0101
    7                0100
    8                1100
    9                1101
    10               1111
    11               1110
    12               1010
    13               1011
    14               1001
    15               1000

If the conventional binary encoder wheel is used, then when moving from sec-
tor 3 to 4, the code word must change from 0011 to 0100. This requires 3 bits to
change value. On the other hand, in the case of the Gray code encoder, the code
word must change from 0010 to 0110, which involves only a change of 1 bit. Now
assume that the photosensing devices are out of alignment such that the photosensing

Figure 2.4 Angular position encoders. (a) Conventional binary encoder. (b) Gray code encoder. (The photosensors are arranged in a line across the sectors of each wheel.)

Figure 2.5 Angular position encoders with misaligned photosensing devices. (a) Conventional binary encoder. (b) Gray code encoder.

device reading the second most significant bit in the code word is slightly advanced,
causing it to read ahead of the others. This is shown in Fig. 2.5a. When the angular
position of the encoder is near the transition between sectors 3 and 4, it is possible
that a reading of 0111 might result since the photosensing device for the second
most significant digit is already reading sector 4 while the other photosensing de-
vices are reading sector 3. A significant error has now occurred since a reading of
0111 indicates sector 7, which is several sectors away. On the other hand, if a unit-
distance code is used, such as the Gray code, then the effect of the misalignment re-
sults in the code for the next sector being obtained prematurely as illustrated in Fig.
2.5b. With such a coding scheme, however, the error encountered in a digital repre-
sentation can never exceed that of one sector since only 1 bit must change value between any two adjacent sectors.
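The unit-distance property of the Gray code is easy to verify by computation. The following Python sketch is not part of the text, and its function names are illustrative; it uses the standard conversion between conventional binary and Gray code (a Gray code word is the binary word exclusive-ORed with itself shifted right one place):

```python
def binary_to_gray(n: int) -> int:
    # The Gray code word for n is n XOR (n shifted right by one place).
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert the conversion by repeatedly folding the shifted value back in.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent sectors of the 16-sector wheel, including the wraparound from
# sector 15 back to sector 0, differ in exactly one bit.
for i in range(16):
    diff = binary_to_gray(i) ^ binary_to_gray((i + 1) % 16)
    assert bin(diff).count("1") == 1
```

For example, sector 3 encodes as 0010 and sector 4 as 0110, matching Table 2.9; only the second most significant bit changes between them.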

2.10.3 Alphanumeric Codes


Thus far only coding schemes for decimal numbers have been considered. However,
frequently alphabetic information must be handled by a digital system. In addition to
alphabetic and numeric information, it has been found desirable to code special sym-
bols such as punctuation marks, $, #, @, =, +, (, ), etc., and control operations such
as Backspace, Form Feed, Carriage Return, etc. In general, alphabetic symbols, nu-
meric symbols, special symbols, and certain control operations are referred to as char-
acters. Codes that are used to represent characters are called alphanumeric codes.
To code the 26 letters of the alphabet in both upper- and lowercase along with the
10 numeric symbols, at least 6 bits are necessary. Although many alphanumeric codes

Table 2.10 The 7-bit American Standard Code for Information Interchange (ASCII)

                b7 b6 b5
b4 b3 b2 b1     000  001  010  011  100  101  110  111
0  0  0  0      NUL  DLE  SP   0    @    P    `    p
0  0  0  1      SOH  DC1  !    1    A    Q    a    q
0  0  1  0      STX  DC2  "    2    B    R    b    r
0  0  1  1      ETX  DC3  #    3    C    S    c    s
0  1  0  0      EOT  DC4  $    4    D    T    d    t
0  1  0  1      ENQ  NAK  %    5    E    U    e    u
0  1  1  0      ACK  SYN  &    6    F    V    f    v
0  1  1  1      BEL  ETB  '    7    G    W    g    w
1  0  0  0      BS   CAN  (    8    H    X    h    x
1  0  0  1      HT   EM   )    9    I    Y    i    y
1  0  1  0      LF   SUB  *    :    J    Z    j    z
1  0  1  1      VT   ESC  +    ;    K    [    k    {
1  1  0  0      FF   FS   ,    <    L    \    l    |
1  1  0  1      CR   GS   -    =    M    ]    m    }
1  1  1  0      SO   RS   .    >    N    ^    n    ~
1  1  1  1      SI   US   /    ?    O    _    o    DEL

Control Characters
NUL  Null                     DC1  Device Control 1
SOH  Start of Heading         DC2  Device Control 2
STX  Start of Text            DC3  Device Control 3
ETX  End of Text              DC4  Device Control 4
EOT  End of Transmission      NAK  Negative Acknowledge
ENQ  Enquiry                  SYN  Synchronous Idle
ACK  Acknowledge              ETB  End of Transmission Block
BEL  Bell                     CAN  Cancel
BS   Backspace                EM   End of Medium
HT   Horizontal Tab           SUB  Substitute
LF   Line Feed                ESC  Escape
VT   Vertical Tab             FS   File Separator
FF   Form Feed                GS   Group Separator
CR   Carriage Return          RS   Record Separator
SO   Shift Out                US   Unit Separator
SI   Shift In                 SP   Space
DLE  Data Link Escape         DEL  Delete

have been developed, the best known alphanumeric code is the 7-bit American Stan-
dard Code for Information Interchange, or ASCII code. Table 2.10 gives a listing of
this code. As with the numeric codes using code groups, alphanumeric information is
coded by the juxtaposition of the appropriate code groups for the characters.
More recently, the Unicode Standard has been developed. This 16-bit character
coding system provides for the encoding of not only the English characters but also
those of foreign languages including those of the Middle East and Asia. In addition,
the Unicode Standard includes punctuation marks, mathematical symbols, technical
symbols, geometric shapes, and dingbats.

2.11 ERROR DETECTION


In the process of transferring information between two points, it is possible for er-
rors to be introduced. Since digital information consists of strings of 0's and 1's, an error is said to occur when at least one 0 inadvertently becomes a 1 or when at least one 1 becomes a 0. In such a situation it is desirable to detect the occurrence of the
error. The ability to detect errors is inherent in some codes; while in other cases, ad-
ditional bits are added to the information for this purpose.
In Sec. 2.10, the biquinary and 2-out-of-5 codes were discussed. In both of
these cases, there are exactly two 1's in each code group for a decimal digit. Thus, if a single error should occur to any code group, that is, one 1 becomes a 0 or one 0 becomes a 1 (but not both), then there no longer are exactly two 1's appearing in the
code group for the digit. For example, in the 2-out-of-5 code, if the code group
00110 is transmitted and 01110 is received, then the three 1’s indicate that an error
has occurred. By means of an appropriate logic network, the condition of other than
exactly two 1’s in a code group can be detected. Hence, it is said that codes of this
type are error-detecting codes.
When error-detecting codes are not used to represent coded information or
when information appears as conventional binary numbers, error detection is
achieved by the addition of a parity bit. In this approach an additional bit is ap-
pended to the information so that the number of 1’s in the code group or binary
number, in the case of noncoded information, is even or odd, depending upon the
parity rule being employed. When the odd parity scheme is employed, the parity
bit is selected so that the number of 1’s in the code group or binary number, in-
cluding the parity bit, is odd. In the even parity scheme, the parity bit is selected so
there are an even number of 1|’s in the code group or binary number, including the
parity bit. The odd parity bit scheme as applied to the 8421 code is illustrated in
Table 2.11a. If any single bit representing a character should erroneously change,
including the parity bit, then there no longer is an odd number of 1’s in the overall
code group for that character. In Table 2.11b, the even parity bit scheme is applied
to the 8421 code. In this case, each code group with the parity bit consists of an
even number of 1's.
Testing for an even or odd number of 1’s within a string of binary digits is
readily done by a logic network. When an error is detected, the digital system can
be designed to request the retransmission of the string of bits or emit a signal indi-
cating a malfunction.
Although single errors are detected by the parity-bit scheme, double errors are not detected, since double errors do not cause the overall parity to change. Triple errors, however, are detected, and, in general, it is possible to detect any odd number of errors by this method.
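The parity rules above reduce to counting 1's. This Python sketch (an illustration, not from the text; the function names are assumptions) generates a parity bit under either scheme and checks a received word:

```python
def parity_bit(bits: str, scheme: str = "odd") -> str:
    # Append a bit that makes the total number of 1's odd (odd-parity
    # scheme) or even (even-parity scheme).
    ones = bits.count("1")
    if scheme == "odd":
        return "1" if ones % 2 == 0 else "0"
    return "1" if ones % 2 == 1 else "0"

def check_parity(word: str, scheme: str = "odd") -> bool:
    # A word passes if its count of 1's has the parity the scheme demands.
    return word.count("1") % 2 == (1 if scheme == "odd" else 0)

# 8421 code for decimal 6 with an odd parity bit (Table 2.11a): 0110 -> 01101.
word = "0110" + parity_bit("0110", "odd")
assert word == "01101" and check_parity(word, "odd")
# A single-bit error is detected, but a double error restores the parity.
assert not check_parity("01100", "odd")   # one bit flipped: detected
assert check_parity("01000", "odd")       # two bits flipped: not detected
```

The two final assertions illustrate why the parity-bit scheme detects any odd number of errors but no even number of errors.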
For the purpose of classification, the error-detecting capability of a coding
scheme is defined as 1 less than the minimum number of errors it cannot always detect. Thus, the parity-bit scheme is regarded as a single-error-detecting scheme even
though triple errors are also detected.

Table 2.11 8421 code with a parity bit. (a) Odd-parity scheme.
(b) Even-parity scheme

Decimal digit  8421p        Decimal digit  8421p
0              00001        0              00000
1              00010        1              00011
2              00100        2              00101
3              00111        3              00110
4              01000        4              01001
5              01011        5              01010
6              01101        6              01100
7              01110        7              01111
8              10000        8              10001
9              10011        9              10010
(a)                         (b)

Given the code groups for two characters in some coding scheme, the number
of bits that must be changed in the first code group so that the second code group re-
sults is defined as the distance between the two code groups. Thus, referring to the
2-out-of-5 code, the distance between the code groups for the decimal digits 2, i.e.,
00101, and 5, i.e., 01010, is four. Furthermore, the minimum distance of a code is
the smallest distance between any two valid code groups appearing in the coding
scheme. Again referring to the 2-out-of-5 code and considering all pairs of code
groups, the minimum distance is two. Similarly, the minimum distance for the
biquinary code is also two.
The significance of the minimum distance is that it is related to the error-
detecting capability of a coding scheme, i.e., the maximum number of bits in error
that is always detectable. In particular,

D = M − 1                (2.11)

where D is the error-detecting capability of the code and M is its minimum dis-
tance. To see the validity of Eq. (2.11), consider a code whose minimum distance
is two. This means that at least 2 bits must change before any valid code group be-
comes another valid code group in the same coding scheme. If only 1 bit should erroneously change, then the resulting code group consists of a combination of 0's
and 1’s that do not appear in the definition of the code. Hence, when such a code
group is detected it can be concluded that an error has occurred in transmission.
Since, in general, at least M bits must change in a code with minimum distance M
before one code group can become another valid code group in the same coding
scheme, any M − 1, or fewer, bit changes always results in an invalid code group.
Thus, the existence of any of these code groups indicates the presence of de-
tectable errors.
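The distance and minimum-distance definitions above can be checked mechanically. The sketch below (Python, not from the text; the names are illustrative) enumerates the ten 2-out-of-5 code groups and confirms the distances quoted in this section:

```python
from itertools import combinations, product

def distance(a: str, b: str) -> int:
    # Number of bit positions in which the two code groups differ.
    return sum(x != y for x, y in zip(a, b))

# All ten 2-out-of-5 code groups: every 5-bit word with exactly two 1's.
code = ["".join(bits) for bits in product("01", repeat=5)
        if bits.count("1") == 2]

# The groups for digits 2 (00101) and 5 (01010) are at distance four.
assert distance("00101", "01010") == 4
# Over all pairs of valid code groups, the minimum distance is two.
assert min(distance(a, b) for a, b in combinations(code, 2)) == 2
```

By Eq. (2.11), a minimum distance of two gives this code an error-detecting capability of one, in agreement with the discussion of Sec. 2.11.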

2.12 ERROR CORRECTION


In the previous section, error detection was studied. At that time no consideration
was given to the exact nature of the error so that it could be corrected. It is also
possible to construct codes where error correction can be performed on received
information in which bits erroneously have been changed during transmission.
The generalized form of Eq. (2.11), which gives the relationship between the
minimum distance of a code and its error-detecting and error-correcting capabili-
ties, is
C + D = M − 1        where C ≤ D                (2.12)

In this equation C is the number of erroneous bits that always can be corrected, D is
the number of erroneous bits that always can be detected, and M is the minimum
distance of the code. The restriction C ≤ D is necessary since no error can be corrected without its first being detected. An implication of Eq. (2.12) is that for a
given minimum distance, it is possible to perform a trade-off between the amount of
the error-correcting and error-detecting capabilities of a code.
From Eq. (2.12), it is seen that single-error correction is achievable when a code has a minimum distance of three, i.e., when C = 1 and D = 1. In such a code at least 3 bits must change before a valid code group can convert into another valid code group. If a single error should occur, then the resulting code group is within 1 bit of matching the intended code group and within at least 2 bits of matching any other valid code group. By noting which single bit must be changed to obtain a valid code group it is possible to achieve the correction. However, if two errors occur, then erroneous correction might result since the invalid code group can now be within 1 bit of matching a nonintended code group. Thus, in a code having a minimum distance of three, only single-error correction is possible.
On the other hand, a code having a minimum distance of three can be used
for double-error detection rather than single-error correction, i.e., when C = 0
and D = 2 in Eq. (2.12). If the code is being used in this manner, then it is only
necessary to observe that an invalid code group is received. Since the minimum
distance is three, the occurrence of one or two errors always results in an invalid
code group.
By having codes with a minimum distance greater than three, it is possible to
provide for additional error-detecting and error-correcting capabilities. This is
shown in Table 2.12, which is obtained by evaluating Eq. (2.12).

Table 2.12 Amount of error detection D and error correction C possible with a code
having a minimum distance M

M    2    3       4       5          6          7
D    1    2  1    3  2    4  3  2    5  4  3    6  5  4  3
C    0    0  1    0  1    0  1  2    0  1  2    0  1  2  3

2.12.1 Hamming Code


Perhaps the best-known coding scheme that enables single-error correction was de-
vised by R. W. Hamming and is known as the Hamming code. In a Hamming code
several parity bits are included in a code group. The values of the parity bits are de-
termined by an even-parity scheme over certain selected bits. When a code group is
received, the parity bits are recalculated; that is, a check is made to see if the correct
parity still exists over their selected bits. By comparing the recalculated parity bits
against those received in the code group, it is possible to determine if the received
code group is free from a single error or, if a single error has occurred, then exactly
which bit has erroneously changed. If more than one bit is changed during transmis-
sion, then this coding scheme no longer is capable of determining the location of the
errors.
For the case of 4 information bits, 3 parity bits are included along with the 4 in-
formation bits to form a 7-bit code group. The general organization of a code group
in this case is as follows:

7    6    5    4    3    2    1       Position
b4   b3   b2   p3   b1   p2   p1      Code group format

where the 7 bits of the code group are numbered from right to left, the three p’s de-
note the parity bits, and the four b’s denote the information bits being encoded. The
values of the parity bits are determined by the following rules:

p1 is selected so as to establish even parity over positions 1, 3, 5, and 7
p2 is selected so as to establish even parity over positions 2, 3, 6, and 7
p3 is selected so as to establish even parity over positions 4, 5, 6, and 7

That is, p1 is a 0 when there are an even number of 1's in positions 3, 5, and 7; otherwise, p1 is a 1. In this way, there are an even number of 1's in the four positions 1, 3, 5, and 7. In a similar manner, the values of p2 and p3 are determined so as to have an even number of 1's over their selected positions of the code group.
To illustrate the above rules for constructing a Hamming code group, assume
the four information bits are b4b3b2b1 = 0110. These bits appear in positions 3, 5, 6,
and 7 of the Hamming code group; that is,

7    6    5    4    3    2    1       Position
b4   b3   b2   p3   b1   p2   p1      Code group format
0    1    1    _    0    _    _       Hamming code group with information bits placed

The next step is to determine the values of the parity bits in the code group. Parity
bit p1 is used to establish even parity over positions 1, 3, 5, and 7. Since position 5 has a 1 and positions 3 and 7 have 0's, parity bit p1 must be a 1. In a similar manner, parity bit p2 is a 1 so that even parity is established over positions 2, 3, 6, and 7.

Finally, parity bit p3 is a 0 in order to have an even number of 1's in positions 4, 5, 6, and 7. This results in the following Hamming code group:

7    6    5    4    3    2    1       Position
b4   b3   b2   p3   b1   p2   p1      Code group format
0    1    1    0    0    1    1       Hamming code group

The information bits in a Hamming code group may themselves be a code for a
character. For example, Table 2.13 gives the complete Hamming code for the 10
decimal digits when they are coded by the 8421 BCD scheme.
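The three parity rules reduce to three exclusive-OR expressions over the information bits. The sketch below (Python, not from the text; the function name is an assumption) reproduces the 7-bit code groups of this section and of Table 2.13:

```python
def hamming_encode(b4: int, b3: int, b2: int, b1: int) -> str:
    # Positions 7..1 hold b4 b3 b2 p3 b1 p2 p1, with even parity over
    # the position groups given in the text.
    p1 = b1 ^ b2 ^ b4          # even parity over positions 1, 3, 5, 7
    p2 = b1 ^ b3 ^ b4          # even parity over positions 2, 3, 6, 7
    p3 = b2 ^ b3 ^ b4          # even parity over positions 4, 5, 6, 7
    return "".join(str(x) for x in (b4, b3, b2, p3, b1, p2, p1))

# The worked example above: information bits 0110 encode as 0110011.
assert hamming_encode(0, 1, 1, 0) == "0110011"
# Decimal digit 9 in 8421 BCD (Table 2.13): 1001 encodes as 1001100.
assert hamming_encode(1, 0, 0, 1) == "1001100"
```

Each parity bit is simply the exclusive-OR of the information bits in its group, since even parity holds exactly when the total number of 1's, parity bit included, is even.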
Now consider how a 7-bit Hamming code group, upon being received, is checked for an error during transmission and, if necessary, how the location of a single error is determined. Upon the receipt of a Hamming code group, the parity bits are recalculated using the same even-parity scheme over the selected bit positions as was done for encoding. From the recalculation, a binary check number, c3*c2*c1*, is constructed. In particular, if the recalculation of parity bit pi is the same as the pi bit in the received Hamming code group, then ci* is set equal to 0. If, on the other hand, the recalculated value of pi is not the same as the pi bit in the received Hamming code group, then ci* is set equal to 1. In other words, if there are an even number of 1's in positions 1, 3, 5, and 7, then c1* is set equal to 0; while if there are an odd number of 1's in these positions, then c1* is set equal to 1. In a similar manner, c2* is set equal to 0 if and only if there are an even number of 1's in positions 2, 3, 6, and 7; while c3* is set equal to 0 if and only if there are an even number of 1's in positions 4, 5, 6, and 7. Otherwise, c2* and c3* are set equal to 1. The binary check number c3*c2*c1* indicates the position of the error if one has occurred. If c3*c2*c1* = 000, then no position is in need of correction, i.e., no single error has occurred in the Hamming code group.
As an example, assume the previously constructed Hamming code group with the information bits b4b3b2b1 = 0110 is transmitted, i.e., 0110011, but the code group 0110111 is received. Thus, the bit in position 3 erroneously changed from 0 to 1 during transmission. Referring to bit positions 1, 3, 5, and 7 of the received code group,

Table 2.13 The Hamming code for the 10 decimal digits weighted by the 8421 BCD
scheme

Decimal    7   6   5   4   3   2   1      Position
digit      8   4   2   p3  1   p2  p1     8421 BCD and parity bits
0          0   0   0   0   0   0   0
1          0   0   0   0   1   1   1
2          0   0   1   1   0   0   1
3          0   0   1   1   1   1   0
4          0   1   0   1   0   1   0
5          0   1   0   1   1   0   1
6          0   1   1   0   0   1   1
7          0   1   1   0   1   0   0
8          1   0   0   1   0   1   1
9          1   0   0   1   1   0   0

it is seen that there are an odd number of 1's. As stated above, this requires c1* to be set to 1 in the binary check number. In a similar manner, since there are an odd number of 1's in positions 2, 3, 6, and 7, it follows that c2* = 1. Finally, it is seen that there are an even number of 1's in positions 4, 5, 6, and 7. Thus, c3* = 0. The binary check number is therefore c3*c2*c1* = 011 which, in turn, is the binary equivalent of the decimal number 3. This indicates that the bit in position 3 is incorrect. Now that the location of the error is established, it is a simple matter to complement the bit in position 3 of the received Hamming code group to obtain the transmitted code group. In this example, it is concluded that the correctly transmitted Hamming code group was 0110011 and that the actual information bits were b4b3b2b1 = 0110. Although this example involved an error in one of the information bits, a change in a parity bit can equally well occur and be located upon constructing the binary check number.
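The decoding procedure just described can be sketched directly in Python (not from the text; the function name is an assumption). The check bits are recalculated by taking the parity over the selected positions, and the resulting check number names the bit to complement:

```python
def hamming_correct(word: str):
    # word[0] is position 7 and word[6] is position 1, matching the
    # left-to-right code group format b4 b3 b2 p3 b1 p2 p1.
    bit = lambda pos: int(word[7 - pos])
    c1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7)   # parity over 1, 3, 5, 7
    c2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7)   # parity over 2, 3, 6, 7
    c3 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7)   # parity over 4, 5, 6, 7
    pos = 4 * c3 + 2 * c2 + c1               # binary check number c3*c2*c1*
    if pos == 0:
        return word, pos                     # no single error detected
    bits = list(word)
    i = 7 - pos
    bits[i] = "1" if bits[i] == "0" else "0" # complement the erroneous bit
    return "".join(bits), pos

# The received group 0110111 from the example: check number 011 (position 3),
# and the corrected group is the transmitted 0110011.
corrected, pos = hamming_correct("0110111")
assert pos == 3 and corrected == "0110011"
```

Note that the even-parity test over each position group already includes the parity bit itself, which is why recalculating and comparing is equivalent to simply checking the parity of the received bits.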
The above procedure for constructing a Hamming code group that enables
single-error correction is extendable to handle any number of information bits. If m
information bits are to be encoded, then k parity bits are needed in each Hamming
code group where
m ≤ 2^k − k − 1

Table 2.14 gives a listing of the minimum number of parity bits k needed for various ranges of m information bits, resulting in a Hamming code group with m + k bits. For example, when encoding information consisting of from 5 to 11 bits, four parity bits must appear in each code group. If the bit positions in a Hamming code group are numbered right to left from 1 to m + k, then those positions corresponding to 2 raised to a nonnegative integer power, i.e., positions 1, 2, 4, 8, etc., are allocated to the parity bits. The remaining bit positions are used for the information bits. Table 2.15 indicates which bit positions are associated with each parity bit for the purpose of establishing even parity over selected bit positions. For example, the parity bit in position 1, i.e., p1, checks the bit in every other position beginning with the bit in position 1. The parity bit in position 2, i.e., p2, considers every other group of 2 bits beginning with the parity bit in position 2. The third parity bit, p3, considers every other group of 4 bits beginning with the parity bit in position 4. In general, the parity bit in position 2^(i−1), i.e., pi, considers every other group of 2^(i−1) bits beginning with the parity bit in position 2^(i−1). To determine the location of a single error in a received Hamming code group, the binary check number ck* ··· c2*c1* is formed. This is done by recalculating the parity bits or, equivalently, checking the parity over the selected bits

Table 2.14 The number of parity bits k needed to construct a Hamming code with m
information bits

Range of m            Number of parity
information bits      bits k
1                     2
2–4                   3
5–11                  4
12–26                 5
27–57                 6

Table 2.15 Bit positions checked by each parity bit in a Hamming code

Parity bit
position     Positions checked
1            1, 3, 5, 7, 9, 11, 13, 15, ...
2            2, 3, 6, 7, 10, 11, 14, 15, ...
4            4, 5, 6, 7, 12, 13, 14, 15, 20, 21, 22, 23, ...
8            8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, ...

indicated in Table 2.15. If the recalculated value of pi is the same as the pi bit received in the code group, then ci* is set equal to 0; otherwise, it is set equal to 1. The binary check number ck* ··· c2*c1* indicates the position of the error if one has occurred. If ck* ··· c2*c1* = 0 ··· 00, then a valid Hamming code group was received.

2.12.2 Single-Error Correction Plus Double-Error Detection
A Hamming code as constructed above provides for the detection and correction of
only a single error. With a slight modification, it is possible to construct Hamming
code groups in which single-error correction plus double-error detection is possible.
To a Hamming code group constructed by the above procedure, an overall parity bit is appended and assigned a value such that the complete code group, including its overall parity bit, contains an even number of 1's. The overall parity bit position is not used in determining the values of the other parity bits p1, p2, etc. The resulting code group enables single-error correction plus double-error detection.
To interpret a received code group, three cases must be considered. As before, the binary check number ck* ··· c2*c1* is calculated as well as the overall parity of the received code group. Again, the overall parity bit position is not used in determining the values of the other parity bits pi. If ck* ··· c2*c1* = 0 ··· 00 and the recalculation of the overall parity is correct, then no single or double errors have occurred in the transmission of the code group. If the recalculation of the overall parity is incorrect, then a single error has occurred and the bit position of the error is indicated by the binary check number ck* ··· c2*c1*, where a binary check number of ck* ··· c2*c1* = 0 ··· 00 indicates that it is the overall parity bit that is in error. Hence, single-error correction is achieved. Finally, if ck* ··· c2*c1* is other than 0 ··· 00 and the recalculation of the overall parity is correct, then two errors have occurred. In this case, double-error detection is achieved. However, no correction is possible.
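The three-case interpretation can be sketched for the 7-bit code of the previous section extended by an overall parity bit. In this Python illustration (not from the text) the overall parity bit is assumed to occupy the extra leftmost position; the book does not fix its placement, and the function name is also an assumption:

```python
def secded(word: str) -> str:
    # word holds positions 8..1: an overall even-parity bit in the extra
    # (leftmost) position, then the 7-bit group b4 b3 b2 p3 b1 p2 p1.
    bit = lambda pos: int(word[8 - pos])
    c1 = bit(1) ^ bit(3) ^ bit(5) ^ bit(7)
    c2 = bit(2) ^ bit(3) ^ bit(6) ^ bit(7)
    c3 = bit(4) ^ bit(5) ^ bit(6) ^ bit(7)
    syndrome = 4 * c3 + 2 * c2 + c1          # binary check number c3*c2*c1*
    overall_ok = word.count("1") % 2 == 0    # overall even parity holds?
    if overall_ok:
        if syndrome == 0:
            return "no error"
        return "double error detected (uncorrectable)"
    if syndrome == 0:
        return "single error in overall parity bit"
    return "single error at position %d" % syndrome

# 0110011 from the text, with overall even-parity bit 0 appended in front.
assert secded("00110011") == "no error"
assert secded("00110111") == "single error at position 3"
assert secded("01110111") == "double error detected (uncorrectable)"
```

The examples trace the three cases in order: check number zero with correct overall parity, incorrect overall parity (single error, correctable), and a nonzero check number with correct overall parity (double error, detectable only).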

2.12.3 Check Sum Digits for Error Correction


Another approach to achieving error correction is through the use of check sum dig-
its. An example of this approach is used in the U.S. Postal Service bar code. It was
noted in Sec. 2.10 that an additional digit was appended to the encoded ZIP code in
Fig. 2.3. This additional digit provides for single-error correction. In this case, the

check sum digit is a single digit which when added to the sum of the digits in the ZIP
code results in a total sum that is a multiple of 10. Mathematically, this is written as

(ZIP digit sum + check sum digit) mod 10 = 0                (2.13)

For the example in Fig. 2.3, the sum of the seven ZIP digits is 26. Therefore, the
check sum digit that is appended to the bar code is 4 since this digit increases the
sum to the next multiple of 10. The reason mod 10 is used in this calculation is that
there are 10 code groups in the bar code (or, equivalently, the 2-out-of-5 code).
To understand how single-error correction is obtained, again consider Fig. 2.3.
Upon scanning the bar code, if the condition of three short bars and two tall bars is
not satisfied for any block of five bars starting after the frame bar, then it is known
that a particular digit is in error, i.e., single-error detection. The value of the erro-
neous digit is determined by summing the correct digits and applying Eq. (2.13). For
example, assume the fifth digit read, i.e., digit 3, in the bar code of Fig. 2.3 is de-
tected as being erroneous. Then, it is seen that the sum of the digits in the bar code is

27 + e

where e denotes the erroneous digit. Since Eq. (2.13) must be satisfied, i.e.,

(27 + e) mod 10 = 0

it immediately follows that the erroneous digit e = 3. Although the 2-out-of-5 code
allows the detection of more than one error digit in the bar code, correction is possi-
ble only if a single digit in the bar code is in error.
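The check sum rule of Eq. (2.13) is a one-line computation. This Python sketch (an illustration, not from the text; the function name is an assumption) works from the digit sums rather than the individual ZIP digits, which are not reproduced here:

```python
def check_sum_digit(digit_sum: int) -> int:
    # The digit that raises digit_sum to the next multiple of 10, so that
    # (digit_sum + check digit) mod 10 = 0, as required by Eq. (2.13).
    return (10 - digit_sum % 10) % 10

# The seven ZIP digits in Fig. 2.3 sum to 26, so the check digit is 4.
assert check_sum_digit(26) == 4
# With digit 3 unreadable, the correctly read digits (check digit included)
# sum to 27, and the same rule recovers the erroneous digit e = 3.
assert check_sum_digit(27) == 3
```

The same function serves both encoding and correction because Eq. (2.13) determines a unique decimal digit once the sum of all the other digits is known.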

CHAPTER 2 PROBLEMS
2.1. Continue Table 2.2 by listing the next 19 integers in each of the stated
number systems.
2.2 Construct a table for the first 32 integers in the quaternary, quinary, and
duodecimal number systems, along with their decimal equivalents.
2.3 Perform the following additions in the binary number system.
a, ATO1- 1001 be OOn ela
Cm tOOOI SS O11 dy 11-01 £0011
TG USTCO a fee Ie ORO
2.4 Perform the following subtractions in the binary number system.
a gULO IO be SOOT — O14
Cp 00 IALOr depo Ot LO:t
Ce LOO LOR 11 101 POO SLO 4
2.5 Perform the following multiplications in the binary number system.
an etOMa 110 bs SiOx Lond
Ce lO1O Xeb-Ol d= 10RLS<11.01

2.6 Perform the following divisions in the binary number system.


a: 110012 1040 b. 10000101111 + 10101
LOOT hitsaiiG doe LORI One
2.7. Perform the following arithmetic operations in the ternary number system.
a, 2012.4. 110201 by 220 er a2
eye 20102 12121 de wiZ002FI 2a 2
ee 120219122 f, 1220206:2 2
e 22120011012 be
2.8 Construct addition, subtraction, and multiplication tables for the following
number systems.
a. Quaternary b. Octal c. Hexadecimal
2.9 Perform the following additions and subtractions in the indicated number
systems.
a SIIBee s10 2, b. 130012.)— 333214
c. 466735(8) + 375627(8) d. 700605(8) − 356742(8)
e. 8C9F65(16) + 374B27(16) f. D62B053(16) − 47E3C89(16)
2.10 Using the polynomial method of number conversion, determine the
equivalent decimal number for each of the following:
AcalOUOisIa hae WhaLOles
Ce 2h Om d. 12021.1,,
e. 362.) fa 1475 2s
Oo Gae i Ay Bre
2.11 Using the polynomial method of number conversion, determine the
equivalent binary number for each of the following:
a. 42:10) b. 78.510)
eth 20m diniDho
e. 204: Eas Lebshey

2.12 Using the polynomial method of number conversion, determine the


equivalent ternary number for each of the following:
a. 11010.) b. 73.2)
C. 75,10) d. 3Du6
2.13 Determine the base b of a number system such that 225(b) = 89(10).
2.14 Using the iterative method of number conversion, convert the decimal
numbers 163.75 and 202.9 into
a. Binary b. Ternary
c. Octal d. Hexadecimal
(Note: For the fraction part, form a sufficient number of digits so that an
exact equivalence or a repeating sequence is obtained.)

2.15 Using the iterative method of number conversion, convert the binary number
11100010.1101 into
a. Ternary b. Octal
c. Decimal d. Hexadecimal
(Note: For the fraction part, form a sufficient number of digits so that an
exact equivalence or a repeating sequence is obtained.)
2.16 Using the iterative method of number conversion, convert the ternary
number 10112.1 into
a. Binary b. Octal
c. Decimal d. Hexadecimal
(Note: For the fraction part, form a sufficient number of digits so that an
exact equivalence or a repeating sequence is obtained.)
2.17 Convert each of the following binary numbers into its equivalent in the octal
and hexadecimal number systems.
a. 111111001.00111101 b. 1010001011.1
c. 10111100010.01001011 d. 11100100110.101
2.18 Convert each of the following octal numbers into its equivalent in the binary
number system.
eles ye) Bb. 45.1
Cagolts d. 724.06
2.19 Convert each of the following hexadecimal numbers into its equivalent in
the binary number system.
ae G3 Bb E2C
c. 450.B d. 5EA59
2.20 Using the ideas of Sec. 2.6, determine an algorithm to convert numbers
between base 3 and base 9. Illustrate your algorithm by converting
21021.112(3) into base 9.
2.21 a. Show that the r’s-complement of the r’s-complement of a number is the
number itself.
b. Repeat for the (r — 1)’s-complement.

2.22 Form the 1’s-complement and 2’s-complement for each of the following
binary numbers. In each case express the complement with the same number
of integer and fraction digits as the original number.
a. 10111011 b. 101110100
c. 101100 d. 0110101
€. 010.11 to) A1014.100
g. 100101.101 h. 1010110.110

2.23 Form the 9’s-complement and 10’s-complement for each of the


following decimal numbers. In each case express the complement

with the same number of integer and fraction digits as the original
number.
a. 285302 b. 39040
c. 059637 d. 610500
e. 4289 f. 5263.4580
g. 0283.609 h. 134.5620
2.24 Form the r’s-complement and (r — 1)’s-complement for each of the
following numbers. In each case express the complement with the same
number of integer and fraction digits as the original number.
am O1202 ts by ORO.
c. 241.03,5 d. 031.240,
e, 4072105 f. 0156.0037,.)
g. 83D.9F6(16) h. 0070C.B6E6(16)
2.25 Perform the following unsigned binary subtractions by the addition of the
1’s-complement representation of the subtrahend. Repeat using the 2’s-
complement representation of the subtrahend. (Note: Recall that when
dealing with complement representations, the two operands must have the
same number of digits.)
ae LIOMOF shore b. 10100 — 110000
2 LOOTC0OTT LOU tr d OLLO.Of One er
€ TOL TOLL Ou Ole f. 101.1001 — 11010.010011
2.26 Perform the following binary subtractions by expressing the quantities
involved as signed numbers and using the |’s-complement representation of
the subtrahend. Repeat using the 2’s-complement representation of the
subtrahend. (Note: Recall that when dealing with complement
representations, the two operands must have the same number of
digits.)
a. LO110— 1101 bs aLOLI — 110100
e- TOA00P= 14.01 ds SLO10T = 10T0Ts
eo) JO De. OLOTs i. LOL O1OL aR TOTOE
2.27 Perform the following unsigned decimal subtractions by the addition of the
9’s-complement representation of the subtrahend. Repeat using the 10’s-
complement representation of the subtrahend. (Note: Recall that when
dealing with complement representations, the two operands must have the
same number of digits.)
a. 7842 — 3791 b. 265 — 894
€.. 5083: — 9457 de 13:08 538.9
€.. 427/208 3933 f. 804.2 — 3621.47
2.28 Perform the following decimal subtractions by expressing the quantities
involved as signed numbers and using the 9’s-complement representation of

the subtrahend. Repeat using the 10’s-complement representation of the


subtrahend. (Note: Recall that when dealing with complement
representations, the two operands must have the same number of
digits.)
da 5460 = 232 b. 384 — 726
Ga o20.4.— 87.2 d. 76.23 — 209.4
e 406.9 — 406.9 f. 63.4 — 240.36
2.29 Consider the signed binary numbers A = 0,1000110 and B = 1,1010011
where B is in 2’s-complement form. Perform the operations
a. A + B b. A − B
c. B − A d. −A − B
by taking the signed 2’s-complement of a signed operand when necessary
and doing signed addition.

2.30 Consider the signed decimal numbers A = 0,601.7 and B = 1,754.2 where B
is in 10’s-complement form. Perform the operations
a. A + B b. A − B
c. B − A d. −A − B
by taking the signed 10’s-complement of a signed operand when necessary
and doing signed addition.

2.31 Consider the signed binary numbers A = 0,1010110 and B = 1,1101100


where B is in |’s-complement form. Perform the operations
a. A + B b. A − B
c. B − A d. −A − B
by taking the signed 1’s-complement of a signed operand when necessary
and doing signed addition.

2.32 Consider the signed decimal numbers A = 0,418.5 and B = 1,693.0 where B
is in 9’s-complement form. Perform the operations
a. A + B b. A − B
c. B − A d. −A − B
by taking the signed 9’s-complement of a signed operand when necessary
and doing signed addition.

2.33 Give the coded representation of the decimal number 853 in each of the
following BCD coding schemes.
a. 8421 code b. 7536 code c. Excess-3 code
d. Biquinary code e. 2-out-of-5 code

2.34 Consider the sequence of digits 10001001010110000011. Determine the


number being represented assuming each of the following BCD coding
schemes.
a. 8421 code b. Excess-3 code c. 2-out-of-5 code

2.35 Encode each of the decimal digits 0 to 9 with 4 bits using the following
weighted BCD codes. State which of these codes are self-complementing.
a. 7635 code b. 8324 code
c. 8342 code d. 8641 code
2.36 Prove that for a weighted BCD code having all positive weights, one of the
weights must be 1, another must be either | or 2, and the sum of the weights
must be greater than or equal to 9.
2.37 Prove that in a 4-bit weighted BCD code with all positive weights, at most
one weight can exceed 4.
2.38 Prove that in a self-complementing BCD code, which can have both positive
and negative weights, the algebraic sum of all the weights must be 9.
2.39 Prove that in a 4-bit weighted BCD code, at most two of the weights can be
negative.
2.40 Give the coded representation for each of the following character strings
using the 7-bit ASCII code.
a. 960 bh ie ey eo 3Code
2.41 Assume an 8th bit, 7, serving as an even-parity bit is added to the 7-bit
ASCII code. Give the coded representation for each of the following
character strings as hexadecimal numbers.
aes b Z=1 c. Bits
2.42 Construct a Hamming code to be used in conjunction with the following
BCD codes.
a. 2421 code b. 2-out-of-5 code
2.43 Write the Hamming code groups for each of the following 8 bits of
information.
a. 11100011 b. 01011000 c. 10010101
2.44 Assume the following 7-bit Hamming code groups, consisting of 4
information bits and 3 parity bits, are received in which at most a single
error has occurred. Determine the transmitted 7-bit Hamming code groups.
a. 0011000    b. 1111000    c. 1101100
2.45 a. Construct a Hamming code group for the information bits 100101 that
enables single-error correction plus double-error detection where the
most significant bit position of the code group is for the overall even-
parity bit.
b. Assume that errors occur in bit positions 2 and 9 of the above code
group. Show how the double error is detected.
2.46 A certain code consists only of the following code groups:
00110, 01011, 10001, 11100
What error-detecting and error-correcting properties does this code have?
CHAPTER 3

Boolean Algebra and Combinational Networks

In 1854, George Boole (1815-1864), an English mathematician, wrote his now


classic book An Investigation of the Laws of Thought in which he proposed an al-
gebra for symbolically representing problems in logic so that they may be ana-
lyzed mathematically. It was Boole’s intention to provide a means for systemati-
cally establishing the validity of complex logic statements involving propositions
which are only true or false. The foundation laid by Boole has resulted in the calcu-
lus of propositions and the algebra of classes. Today, the mathematical systems
founded upon the work of Boole are called Boolean algebras in his honor.
The application of a Boolean algebra to certain engineering problems was intro-
duced in 1938 by Claude E. Shannon, then at the Massachusetts Institute of Tech-
nology. Shannon showed how a Boolean algebra could be applied to the design of
relay networks in telephone systems. Upon the advent of digital computers, it was
immediately recognized that this algebra had application to the design of any logic-
type system consisting of elements with two-valued characteristics. The study of
Boolean algebra as applied to logic design is also known as switching circuit theory.
Boolean algebra serves as a convenient way for describing the terminal proper-
ties of networks that appear in digital systems. There is a one-to-one correspon-
dence between the algebraic expressions and their network realizations. Since
Boolean algebraic expressions are subject to manipulations and simplification, it
follows that the various ways of writing the expressions provide descriptions for
different networks that can be constructed to achieve the same terminal behavior.
By defining measures of equation complexity, it is possible to obtain economical
and reliable networks using an algebraic approach. Thus, Boolean algebra becomes
an effective tool for the logic design of digital systems.
The logic networks described by a Boolean algebra are divided into two gen-
eral categories: combinational networks and sequential networks. Combinational
networks are characterized by the fact that the outputs at any instant are only a func-
tion of the inputs at that instant. On the other hand, the outputs at any instant from


sequential networks are not only a function of the current inputs but, in addition, de-
pend upon the past history of inputs. Thus, sequential networks have a memory
property, and the order in which the inputs are applied is significant. Both types of
logic networks are found in digital systems.
The principles of Boolean algebra are presented in this chapter. It is shown how
expressions written in this algebra are manipulated and simplified. In addition, it is
shown how the algebra is applied to the analysis and design of combinational logic
networks. The analysis and design of sequential logic networks is studied in later
chapters.

3.1 DEFINITION OF A BOOLEAN ALGEBRA


A mathematical system is formulated by defining a set of elements, a set of opera-
tions, and a set of postulates. Of course the postulates are not arbitrarily defined but
must be consistent so that noncontradictory conclusions, called theorems, are ob-
tained. At this time a particular mathematical system known as a Boolean algebra is
introduced. The postulates in the following definition are based on the work of E. V.
Huntington and are only one of several sets of consistent postulates that are used to
describe a Boolean algebra.

Definition: A mathematical system consisting of a set of elements B, two binary op-


erations (+) and (-), an equality sign (=) to indicate equivalence of expressions
(i.e., the expression on one side of the equality sign can be substituted for the ex-
pression on the other side), and parentheses to indicate the ordering of the opera-
tions is called a Boolean algebra if and only if the following postulates hold:

P1. The operations (+) and (·) are closed; i.e., for all x, y ∈ B
    (a) x + y ∈ B
    (b) x · y ∈ B
P2. There exist identity elements in B, denoted by 0 and 1, relative to the
    operations (+) and (·), respectively; i.e., for every x ∈ B
    (a) 0 + x = x + 0 = x
    (b) 1 · x = x · 1 = x
P3. The operations (+) and (·) are commutative; i.e., for all x, y ∈ B
    (a) x + y = y + x
    (b) x · y = y · x
P4. Each operation (+) and (·) is distributive over the other; i.e., for all
    x, y, z ∈ B
    (a) x + (y · z) = (x + y) · (x + z)
    (b) x · (y + z) = (x · y) + (x · z)
P5. For every element x in B there exists an element x̄ in B, called the
    complement of x, such that
    (a) x + x̄ = 1
    (b) x · x̄ = 0
P6. There exist at least two elements x, y ∈ B such that x ≠ y.

Some important observations should be made about the above definition. The
actual elements in the set B (other than 0 and 1) are not specified and the operations
(+) and (-) are not defined. Thus, any mathematical system over a set B having two
operations is a Boolean algebra if these six postulates are satisfied. Hence, in actual-
ity, an entire class of algebras is being defined.
The elements of an algebra are called its constants. For a Boolean algebra the
constants are the elements in the set B of which only two are required: 0 and 1. It is
important that the reader realizes that these two symbols, in general, are nonnumeri-
cal and should not be confused with the binary symbols 0 and 1 studied in the previ-
ous chapter.
A symbol which represents an arbitrary element of an algebra is called a variable.
Since the postulates of a Boolean algebra make reference to arbitrary elements in the
set B, the symbols x, y, and z in the above definition are considered variables of the al-
gebra. Furthermore, the variables in the above postulates and the theorems to follow
can represent an entire expression as well as a single element. This follows from the
fact that the operations of a Boolean algebra are defined to be closed by Postulate P1
and hence an expression, when evaluated, always represents an element of the algebra.
The symbols (+) and (-) in the definition of a Boolean algebra are used to de-
note two arbitrary operations with properties satisfying the postulates. These sym-
bols are commonly used in equations describing logic networks and should not be
confused with the corresponding symbols in conventional arithmetic.

3.1.1 Principle of Duality


Having defined a Boolean algebra, additional properties of the mathematical system
are developed from its postulates. These properties are stated as theorems in the
next section. However, to reduce the amount of work in proving these theorems, ad-
vantage is taken of the symmetry incorporated in the postulates.
With the exception of Postulate P6, each postulate consists of two expressions
such that one expression is transformed into the other by interchanging the opera-
tions (+) and (-) as well as the identity elements 0 and 1. The expressions appearing
in each of these postulates are called the dual of each other. A theorem or algebraic
identity in a Boolean algebra is proved by applying a sequence of Boolean algebra
postulates. If the dual postulates are applied in exactly the same sequence of steps,
then a parallel argument is established that leads to the dual of the original theorem
or algebraic identity. Consequently, every theorem or algebraic identity deducible
from the postulates of a Boolean algebra is transformed into a second valid theorem
or algebraic identity if the operations (+) and (·) and the identity elements 0 and 1
are interchanged throughout. This is known as the principle of duality.
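The duality transformation can also be sketched mechanically. The following Python fragment (a hypothetical helper, not part of the text, with the or- and and-operations written as '+' and '*') interchanges the two operations and the two identity elements throughout an expression string:

```python
# Hypothetical illustration of the principle of duality: swap the two
# operations ('+' and '*') and the two identity elements ('0' and '1').
def dual(expr: str) -> str:
    swap = {'+': '*', '*': '+', '0': '1', '1': '0'}
    return ''.join(swap.get(ch, ch) for ch in expr)

print(dual('x*(y+z)'))   # x+(y*z)
print(dual('x+1'))       # x*0
```

Applying `dual` twice returns the original expression, mirroring the fact that the dual of the dual is the original identity.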

3.2 BOOLEAN ALGEBRA THEOREMS


From the definition of a Boolean algebra, a number of theorems can be proved. In
this section the most important theorems are presented. The proofs of these theo-
rems are included not only to establish their validity but also to illustrate how
Boolean identities are manipulated.

Before proceeding with the basic Boolean algebra theorems, however, let us
introduce two notational conveniences. This will simplify the writing of expres-
sions in the algebra. First, the product x · y usually is written as the juxtaposition of
x and y, that is, simply xy. This creates no ambiguities as long as variables are des-
ignated as single symbols, possibly with subscripts. Second, a hierarchy between
the operations (+) and (·) is assumed such that the operation (·) always takes
precedence over the operation (+). This allows less frequent use of parentheses
since xy + xz is understood to mean (x · y) + (x · z). The reader should note that
these same two notational conveniences are used in the case of conventional alge-
bra and should offer no difficulty in understanding the expressions to which they
are applied.

Theorem 3.1
The element x̄ in Postulate P5 of a Boolean algebra is uniquely determined
by x.

Proof
Suppose that for a given element x there are two elements x₁ and x₂ satisfy-
ing Postulate P5. Then x + x₁ = 1, x · x₁ = 0, x + x₂ = 1, and x · x₂ = 0.

x₁ = x₁ · 1         by P2(b)
   = x₁(x + x₂)     by substitution
   = x₁x + x₁x₂     by P4(b)
   = xx₁ + x₁x₂     by P3(b)
   = 0 + x₁x₂       by substitution
   = xx₂ + x₁x₂     by substitution
   = x₂x + x₂x₁     by P3(b)
   = x₂(x + x₁)     by P4(b)
   = x₂ · 1         by substitution
   = x₂             by P2(b)

Thus, any two elements that are the complement of the element x are
equal. This implies that x̄ is uniquely determined by x. ■

*The symbol ■ is used to denote the end of a proof.

Since x̄ is uniquely determined by x, the overbar symbol ( ̄ ) is regarded as
a unary operation that assigns to an element x in the set B the element x̄ also in the
set B.
Theorem 3.2
For each element x in a Boolean algebra

(a) x + 1 = 1
(b) x · 0 = 0

Proof
First consider part (a) of the theorem.

x + 1 = 1 · (x + 1)       by P2(b)
      = (x + x̄)(x + 1)    by P5(a)
      = x + (x̄ · 1)       by P4(a)
      = x + x̄             by P2(b)
      = 1                 by P5(a)

Part (b) of the theorem follows directly from the principle of duality.
That is, if in the expression of part (a) (+) is replaced by (·) and 1 by 0,
then the expression of part (b) results. However, let us prove part (b) in
order that the steps of the proof can be compared with those of the first
part.

x · 0 = 0 + (x · 0)        by P2(a)
      = (x · x̄) + (x · 0)  by P5(b)
      = x · (x̄ + 0)        by P4(b)
      = x · x̄              by P2(a)
      = 0                  by P5(b)
It should be noted that at each step in the above proof, the dual of the pos-
tulate in the proof of part (a) is applied. ■

As indicated in Sec. 3.1, the letter symbols in the Boolean algebra postulates
and theorems can denote entire expressions as well as single elements. Thus, the
first part of Theorem 3.2 implies that if the constant 1 is summed with any
Boolean element or expression, then the result is equally well described by simply
the constant 1. A dual statement applies as a consequence of the second part of this
theorem.

Theorem 3.3
Each of the identity elements in a Boolean algebra is the complement of
the other; i.e., 0̄ = 1 and 1̄ = 0.

Proof
By Theorem 3.1 there exists for the identity element 0 a unique element
0̄ in a Boolean algebra. In Postulate P2(a) let the element x be 0̄. Then
0 + 0̄ = 0̄. On the other hand, if x is 0 in Postulate P5(a), and hence
x̄ = 0̄, then 0 + 0̄ = 1. Since it is always possible to equate something to
itself, in particular, 0 + 0̄ = 0 + 0̄, upon substitution it follows that 0̄ = 1.
By duality it follows that 1̄ = 0. ■

Theorem 3.4
The idempotent law. For each element x in a Boolean algebra
(a) x + x = x
(b) xx =x

Proof
x + x = (x + x) · 1       by P2(b)
      = (x + x)(x + x̄)    by P5(a)
      = x + xx̄            by P4(a)
      = x + 0             by P5(b)
      = x                 by P2(a)

The proof that xx = x is carried out in a manner similar to the above
by using the dual of each postulate at every step in the proof as was done
for Theorem 3.2. However, it also follows by the duality principle. ■

Theorem 3.5
The involution law. For every x in a Boolean algebra
(x̄)‾ = x

Proof
Let x̄ be the complement of x and (x̄)‾ be the complement of x̄. Then, by
Postulate P5, x + x̄ = 1, xx̄ = 0, x̄ + (x̄)‾ = 1, and x̄(x̄)‾ = 0.

(x̄)‾ = (x̄)‾ + 0              by P2(a)
     = (x̄)‾ + xx̄             by substitution
     = [(x̄)‾ + x][(x̄)‾ + x̄]  by P4(a)
     = [x + (x̄)‾][x̄ + (x̄)‾]  by P3(a)
     = [x + (x̄)‾] · 1        by substitution
     = [x + (x̄)‾](x + x̄)     by substitution
     = x + (x̄)‾x̄             by P4(a)
     = x + x̄(x̄)‾             by P3(b)
     = x + 0                 by substitution
     = x                     by P2(a)  ■

Theorem 3.6
The absorption law. For each pair of elements x and y in a Boolean algebra

(a) x + xy = x
(b) x(x + y) = x

Proof

x + xy = x · 1 + xy       by P2(b)
       = x(1 + y)         by P4(b)
       = x(y + 1)         by P3(a)
       = x · 1            by Theorem 3.2(a)
       = x                by P2(b)

Part (b) follows from the duality principle. ■

Theorem 3.7
For each pair of elements x and y in a Boolean algebra

(a) x + x̄y = x + y
(b) x(x̄ + y) = xy
Proof

x + x̄y = (x + x̄)(x + y)   by P4(a)
       = 1 · (x + y)      by P5(a)
       = x + y            by P2(b)

Part (b) follows from the duality principle. ■

Theorem 3.8
In every Boolean algebra, each of the operations (+) and (-) is associative.
That is, for every x, y, and z in a Boolean algebra
(a) x + (y + z) = (x + y) + z
(b) x(yz) = (xy)z

Proof
Let A = x + (y + z) and B = (x + y) + z. It must now be shown that
A = B. To begin with,
xA = xA
   = x[x + (y + z)]    by substitution
   = x                 by Theorem 3.6(b)

and xB = xB
   = x[(x + y) + z]    by substitution
   = x(x + y) + xz     by P4(b)
   = x + xz            by Theorem 3.6(b)
   = x                 by Theorem 3.6(a)

Therefore, xA = xB = x. On the other hand,

x̄A = x̄A
   = x̄[x + (y + z)]    by substitution
   = x̄x + x̄(y + z)     by P4(b)
   = xx̄ + x̄(y + z)     by P3(b)
   = 0 + x̄(y + z)      by P5(b)
   = x̄(y + z)          by P2(a)

and x̄B = x̄B
   = x̄[(x + y) + z]    by substitution
   = x̄(x + y) + x̄z     by P4(b)
   = (x̄x + x̄y) + x̄z    by P4(b)
   = (xx̄ + x̄y) + x̄z    by P3(b)
   = (0 + x̄y) + x̄z     by P5(b)
   = x̄y + x̄z           by P2(a)
   = x̄(y + z)          by P4(b)

Therefore, x̄A = x̄B = x̄(y + z).
To complete the proof,

xA + x̄A = xA + x̄A
xA + x̄A = xB + x̄B        by substituting for xA and x̄A
Ax + Ax̄ = Bx + Bx̄        by P3(b)
A(x + x̄) = B(x + x̄)      by P4(b)
A · 1 = B · 1            by P5(a)
A = B                    by P2(b)

In other words, x + (y + z) = (x + y) + z.
Part (b) of the theorem follows from the principle of duality. ■

As a result of Theorem 3.8, it is not necessary to write x(yz) and (xy)z; rather,
xyz is sufficient. The same is true for x + (y + z) and (x + y) + z, which simply
can be written as x + y + z. It is also possible to generalize the above theorem to
handle any number of elements. That is, for any n elements of a Boolean algebra,
the sum and product of the n elements is independent of the order in which they are
taken.

Theorem 3.9
DeMorgan's law. For each pair of elements x and y in a Boolean algebra
(a) (x + y)‾ = x̄ȳ
(b) (xy)‾ = x̄ + ȳ

Proof
By Theorem 3.1 and Postulate P5, for every x in a Boolean algebra there is
a unique x̄ such that x + x̄ = 1 and xx̄ = 0. Thus, to prove part (a) of the
theorem, it is sufficient to show that x̄ȳ is the complement of x + y. This is
achieved by showing that (x + y) + (x̄ȳ) = 1 and (x + y)(x̄ȳ) = 0.

(x + y) + x̄ȳ = [(x + y) + x̄][(x + y) + ȳ]   by P4(a)
             = [(y + x) + x̄][(x + y) + ȳ]   by P3(a)
             = [y + (x + x̄)][x + (y + ȳ)]   by Theorem 3.8(a)
             = (y + 1)(x + 1)               by P5(a)
             = 1 · 1                        by Theorem 3.2(a)
             = 1                            by Theorem 3.4(b)
Also,

(x + y)(x̄ȳ) = (x̄ȳ)(x + y)      by P3(b)
            = (x̄ȳ)x + (x̄ȳ)y   by P4(b)
            = (ȳx̄)x + (x̄ȳ)y   by P3(b)
            = ȳ(x̄x) + x̄(ȳy)   by Theorem 3.8(b)
            = ȳ(xx̄) + x̄(yȳ)   by P3(b)
            = ȳ · 0 + x̄ · 0    by P5(b)
            = 0 + 0            by Theorem 3.2(b)
            = 0                by Theorem 3.4(a)

Part (b) of the theorem follows from the principle of duality. ■

The generalization of DeMorgan's law is stated as: For the set of elements
{w, x, . . . , y, z} in a Boolean algebra,

(a) (w + x + · · · + y + z)‾ = w̄x̄ · · · ȳz̄
(b) (wx · · · yz)‾ = w̄ + x̄ + · · · + ȳ + z̄
Generalized statements were made regarding the associative and DeMorgan’s
laws. It is also possible to make generalized statements about the commutative and
distributive laws which appear as Postulates P3 and P4. In particular, in a Boolean
algebra, the sum or product of n elements can be rearranged in any order. The gen-
eralized form of the distributive law is written as
(a) w + (xyz · · ·) = (w + x)(w + y)(w + z) · · ·
(b) w(x + y + z + · · ·) = wx + wy + wz + · · ·
As in the case of the postulates, the symbols x, y, z, . . . appearing in the theo-
rems of this section are considered variables. Consequently, the systematic substitu-
tion of complemented variables or algebraic expressions for these variables does not
change the meaning of the theorems. For example, if the complemented variable x̄

Table 3.1 Summary of the basic Boolean identities

(a)                         (b)                      Theorem or postulate
0̄ = 1                       1̄ = 0                    T3
X + 0 = X                   X · 1 = X                P2
X + 1 = 1                   X · 0 = 0                T2
X + X = X                   X · X = X                T4, idempotent law
X + X̄ = 1                   X · X̄ = 0                P5
(X̄)‾ = X                                             T5, involution law
X + Y = Y + X               XY = YX                  P3, commutative law
X + XY = X                  X(X + Y) = X             T6, absorption law
X + X̄Y = X + Y              X(X̄ + Y) = XY            T7
(X + Y)‾ = X̄Ȳ               (XY)‾ = X̄ + Ȳ            T9, DeMorgan's law
X + YZ = (X + Y)(X + Z)     X(Y + Z) = XY + XZ       P4, distributive law
X + (Y + Z) = (X + Y) + Z   X(YZ) = (XY)Z = XYZ      T8, associative law
          = X + Y + Z

is substituted for the variable x, then Theorem 3.6(a) becomes x̄ + x̄y = x̄, Theorem
3.7(a) becomes x̄ + xy = x̄ + y, and Theorem 3.9(a) becomes (x̄ + y)‾ = xȳ. In the
last two examples, use was made of the involution law (Theorem 3.5).
The postulates and basic theorems of a Boolean algebra are very useful in the
study of digital principles. Hence, they are summarized in Table 3.1 for easy refer-
ence. Uppercase letters are used in this table to emphasize the fact that these vari-
ables can represent expressions as well as single variables.
In proving the theorems of this section, every step was shown, and the justifica-
tion for that step was given. Whenever manipulations are performed on expressions
of a Boolean algebra in the remainder of this book, many of the obvious steps are
omitted. Also, the postulate or theorem applied at each step of the manipulation no
longer is stated unless special attention is to be drawn to which postulate or theorem
is being used.
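Anticipating the two-valued algebra of Sec. 3.3, the identities of Table 3.1 can be checked by brute force. A sketch in Python (an illustration, not from the text), representing (+) by the bitwise operator `|`, (·) by `&`, and the complement of v by 1 − v:

```python
from itertools import product

NOT = lambda v: 1 - v   # complement of a two-valued constant

# Check a sample of the Table 3.1 identities for every combination
# of values of x, y, and z.
for x, y, z in product((0, 1), repeat=3):
    assert x | (x & y) == x                      # T6(a), absorption
    assert x | (NOT(x) & y) == x | y             # T7(a)
    assert NOT(x | y) == NOT(x) & NOT(y)         # T9(a), DeMorgan
    assert x | (y & z) == (x | y) & (x | z)      # P4(a), distributive
print("all identities verified")
```

The remaining identities in the table can be verified the same way, one assertion per row.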

3.3 A TWO-VALUED BOOLEAN ALGEBRA


Up to this point a Boolean algebra has been considered as a general mathematical
system. Two operations were postulated for this mathematical system. However,
even though the actual operations were not defined, identities were established in
the previous section which are true for any Boolean algebra. These identities are a
result of the properties of the operations as stated in the postulates. Certainly, this
general approach can be continued and additional theorems for any Boolean algebra
developed. However, the objective in this chapter is to establish a relationship be-
tween a Boolean algebra and logic networks and to show how a Boolean algebra is
a useful tool in logic network analysis and design.
The definition of a Boolean algebra actually encompasses an entire class of al-
gebras. Two well-known algebras in this class are the algebra of sets and the propo-

sitional calculus. At this time a special two-valued Boolean algebra is established


by specifying all the elements in the set B and defining the two operations.


Theorem 3.10
The set B = {0, 1} where 0̄ = 1 and 1̄ = 0, along with the operations (+),
called the or-operation, and (·), called the and-operation, defined by

 + | 0  1        · | 0  1
---+------      ---+------
 0 | 0  1        0 | 0  0
 1 | 1  1        1 | 0  1

is a Boolean algebra.

Let us now show that the postulates of a Boolean algebra are indeed satisfied
under the conditions stated in the theorem. First it is noted that the set B only con-
sists of two elements. The operations (+) and (-) work on pairs of elements in the
set B. In general, an operation is said to be closed if for every pair of elements in the
set the result of the operation is also in the set. Postulate P1, which requires closure,
is satisfied since the entries in the tabular definitions of the operations are all speci-
fied and are elements of the set B = {0, 1}.
Postulate P2 requires the existence of identity elements relative to the two oper-
ations. These identity elements are detected by searching for a column and a row in
each of the tables defining an operation such that the entries for that column and
row are the same as the row and column designators, respectively. In the first table,
it is seen that for the 0 column, x + 0 = x for all x ∈ B; and for the 0 row, 0 + x = x.
Similarly, in the second table, it is seen that for the 1 column, x · 1 = x for all x ∈ B;
and for the 1 row, 1 · x = x. Thus, 0 is the identity element relative to the or-operation,
and 1 is the identity element relative to the and-operation.
The operations (+) and (·) are commutative since the tables which define these
operations are symmetrical about the diagonal from the upper left to lower right
corners. Hence, Postulate P3 is satisfied.
To show that Postulate P4 is satisfied, perfect induction is used. Perfect induction
is proof by exhaustion in which all possibilities are considered. In this case, by substi-
tuting all possible combinations of the elements 0 and 1 into Postulate P4 and apply-
ing the definitions of the operations, it becomes a simple matter to determine whether
both sides of the equality sign yield identical results for each combination. In Postu-
late P4 the symbols x, y, and z can each represent the elements 0 and 1. Under this
condition there are eight possible combinations of 0 and 1 for x, y, and z. Table 3.2 has
one row for each of these eight combinations. In the fourth column of the table, x + yz
is evaluated for each combination; while in the fifth column, (x + y)(x + z) is evalu-
ated. For example, in the x = y = z = 0 row, x + yz becomes 0 + 0 · 0 = 0 + 0 = 0
and (x + y)(x + z) becomes (0 + 0) · (0 + 0) = 0 · 0 = 0. In a like manner, the entries

Table 3.2 Verifying Postulate P4 by perfect induction for a two-valued Boolean algebra

x  y  z | x + yz | (x + y)(x + z) | x(y + z) | xy + xz
0  0  0 |   0    |       0        |    0     |    0
0  0  1 |   0    |       0        |    0     |    0
0  1  0 |   0    |       0        |    0     |    0
0  1  1 |   1    |       1        |    0     |    0
1  0  0 |   1    |       1        |    0     |    0
1  0  1 |   1    |       1        |    1     |    1
1  1  0 |   1    |       1        |    1     |    1
1  1  1 |   1    |       1        |    1     |    1
in the fourth and fifth columns of the remaining seven rows are determined. Since
these two columns are identical row by row, the identity of Postulate P4(a) is estab-
lished. Similarly, the sixth and seventh columns of Table 3.2 establish the identity of
Postulate P4(b). Thus, each operation is distributive over the other.
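The row-by-row comparison in Table 3.2 is exactly what a short program would perform. A sketch (again using Python's bitwise `|` and `&` for the or- and and-operations; the loop is an illustration, not from the text):

```python
from itertools import product

# Evaluate both sides of P4(a) and P4(b) for all eight rows, as in Table 3.2.
for x, y, z in product((0, 1), repeat=3):
    row = (x | (y & z), (x | y) & (x | z),   # P4(a): x + yz vs. (x + y)(x + z)
           x & (y | z), (x & y) | (x & z))   # P4(b): x(y + z) vs. xy + xz
    print(x, y, z, *row)
    assert row[0] == row[1] and row[2] == row[3]
```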
The fifth postulate requires there be an x̄ for every x such that x + x̄ = 1 and
x · x̄ = 0. Table 3.3 is constructed using perfect induction, where 0̄ = 1 and 1̄ =
0 as stated in the theorem. Since the x + x̄ column has all 1 entries and the x · x̄ col-
umn has all 0 entries, Postulate P5 is satisfied.
Finally, to satisfy Postulate P6 it is necessary to establish that 0 and 1 are dis-
tinct, i.e., 0 ≠ 1. It is immediately seen that in the tables defining the two opera-
tions, only 0 is the identity element for the or-operation and only 1 is the identity el-
ement for the and-operation. This implies 0 ≠ 1. Since the six postulates of a
Boolean algebra are satisfied, the special two-valued algebra specified in Theorem
3.10 is indeed a Boolean algebra.
The two-valued Boolean algebra just developed is also known as the switching
algebra. In the literature involving the analysis and design of logic networks, it is
common to refer to the two-valued Boolean algebra simply as a Boolean algebra
without any additional qualification. This convention is adhered to in this text. It
should be kept in mind that all the theorems previously established in Sec. 3.2 re-
main valid for this two-valued Boolean algebra. Frequently, the two constants, 0
and 1, are referred to as logic-0 and logic-1. In this book, the two constants are usu-
ally referred to as simply 0 and 1. The more specific references of logic-0 and logic-
1 are used for emphasis or when it is necessary to avoid confusion with the binary
arithmetic symbols 0 and 1.

Table 3.3 Verifying Postulate P5 by perfect induction for a two-valued Boolean algebra

x | x̄ | x + x̄ | x · x̄
0 | 1 |   1   |   0
1 | 0 |   1   |   0

In the literature, other symbols are occasionally used for the two operations of a
Boolean algebra. In particular, the symbols ∪ and ∨ are used to designate the or-
operation and ∩ and ∧ for the and-operation. Finally, the overbar ( ̄ ) is considered
a unary operation since it uniquely determines the value of x̄ for any x. This opera-
tion is called the not-operation. As previously mentioned, x̄ is the complement of x.
It is also referred to as the negation of x. The not-operation is also designated in the
literature by a prime ('), in which case the complement of x is written as x'.
Again it should be mentioned that the words “product” and “sum” are com-
monly used when referring to the operations of “and” and “or,” respectively, owing
to the symbols used. However, it should be kept in mind that these operations have
been defined for a Boolean algebra and must always be interpreted in this context.
Even though conventional addition and multiplication occur occasionally in future
discussions, there should be no difficulty in determining from context at that time
the correct interpretation of the operation.

3.4 BOOLEAN FORMULAS AND FUNCTIONS


Boolean expressions or formulas are constructed by connecting the Boolean con-
stants and variables with the Boolean operations. Boolean expressions, in turn, are
used to describe Boolean functions. For example, if the Boolean expression (x̄ + y)z
is used to describe the function f, then this is written as

f(x,y,z) = (x̄ + y)z    or    f = (x̄ + y)z

The value of the function f is easily determined for any set of values of x, y, and
z by applying the definitions of the and-, or-, and not-operations. For the above, the
value of not-x is first or-ed with the value of y to form the value of x̄ + y. This, in
turn, is and-ed with the value of z. Because of the closure property of the Boolean
operations, the value of f always is either 0 or 1 for a given set of values of x, y, and
z. Hence, fis considered a dependent Boolean variable, while the variables x, y, and
z are the independent Boolean variables.
In general, an n-variable (complete*) Boolean function f(x₁, x₂, . . . , xₙ) is a
mapping that assigns a unique value, called the value of the function, for each com-
bination of values of the n independent variables in which all values are limited to
the set {0,1}. This definition of an n-variable Boolean function suggests that it can
be represented by a table with n + 1 columns in which the first n columns provide
for a complete listing of all the combinations of values of the n independent vari-
ables and the last column represents the value of the function for each combination.
Since each variable can assume two possible values, it immediately follows that if
there are n independent variables in the function, then there are 2ⁿ combinations of
values of the variables. Thus, the table has 2ⁿ rows. Such a table denoting a Boolean
function is called a truth table or table of combinations. A simple way of writing all

*In Sec. 3.8 the concept of an incomplete Boolean function is discussed. Unless otherwise indicated,
Boolean functions are assumed to be complete.

Table 3.4 A Boolean function of n variables

x₁  x₂  · · ·  xₙ | f(x₁, x₂, . . . , xₙ)
0   0   · · ·  0  | f(0, 0, . . . , 0)
0   0   · · ·  1  | f(0, 0, . . . , 1)
·   ·          ·  |        ·
1   1   · · ·  1  | f(1, 1, . . . , 1)

the combinations of values is to count in the binary number system from the decimal
equivalent of 0 to 2ⁿ − 1.* Once all the combinations of values are established, the
value of the function for each combination is entered. This is achieved by evaluating
the expression describing the function for each combination of values. Letting
f(x₁, x₂, . . . , xₙ) = f(0, 0, . . . , 0) denote the value of the function when x₁ = 0,
x₂ = 0, . . . , xₙ = 0; f(x₁, x₂, . . . , xₙ) = f(0, 0, . . . , 1) denote the value of the func-
tion when x₁ = 0, x₂ = 0, . . . , xₙ = 1; etc., the general form of the truth table for an
n-variable function is shown in Table 3.4.
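The counting rule just described translates directly into code. A sketch (the helper name is hypothetical) that lists all 2ⁿ combinations of values by counting in binary from 0 to 2ⁿ − 1:

```python
def truth_table_rows(n):
    # Count from 0 to 2**n - 1; the binary digits of each count,
    # most significant digit first, give one combination of values.
    for k in range(2 ** n):
        yield tuple((k >> (n - 1 - i)) & 1 for i in range(n))

print(list(truth_table_rows(2)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```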
In line with common usage, the formula describing a function is referred to as
the function itself. For example, it hereafter is said "the function f(x,y,z) = (x̄ + y)z"
or "the function f = (x̄ + y)z" rather than "the three-variable function whose for-
mula is (x̄ + y)z."
To illustrate the construction of a truth table, consider the function f = (x̄ + y)z.
Since f is a function of three variables, there are 2³ = 8 different combinations of
values that are assigned to the variables. In Table 3.5 the eight combinations are
listed in the first three columns. It should be noted that these eight rows correspond
to the binary numbers 000 to 111 which, in turn, correspond to the decimal numbers
0 to 2³ − 1 = 7.
To complete the construction of the truth table, the expression (x̄ + y)z is eval-
uated for each of the eight combinations on a row-by-row basis. This results in the
last column of Table 3.5. However, an alternate approach to completing the truth
table is to carry out the Boolean operations on the columns of the table according to
the expression being evaluated. For example, since x̄ appears in the equation, a
fourth column is added to the table such that the values in this column are the com-
plements of those in the first column. Next, the or-ing of the values of x̄ given in the
fourth column is performed with the values of y given in the second column. This
results in the fifth column, which shows the evaluation of the expression x̄ + y. Fi-
nally, the entries in the fifth column are and-ed with those in the third column. Thus,

*This rule for determining the rows of the truth table is a convenience that is possible by allowing a one-
to-one correspondence to exist between the Boolean constants and the binary digits. However, the
elements in the resulting table are the Boolean constants logic-0 and logic-1.

Table 3.5 Truth table for the function f = (x̄ + y)z

x  y  z | x̄ | x̄ + y | f = (x̄ + y)z
0  0  0 | 1 |   1   |      0
0  0  1 | 1 |   1   |      1
0  1  0 | 1 |   1   |      0
0  1  1 | 1 |   1   |      1
1  0  0 | 0 |   0   |      0
1  0  1 | 0 |   0   |      0
1  1  0 | 0 |   1   |      0
1  1  1 | 0 |   1   |      1

the final column shows the value of (x̄ + y)z for each combination of values of the
variables x, y, and z. For the special case when x = 0, y = 1, and z = 0, it is seen
from the third row of the truth table that the value of the expression is 0.
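The column-by-column construction of Table 3.5 can be mirrored in a short sketch (an illustration, not from the text, writing the complement of x as 1 − x):

```python
from itertools import product

def f(x, y, z):
    not_x = 1 - x        # fourth column: the complement of x
    or_col = not_x | y   # fifth column: x-bar + y
    return or_col & z    # final column: (x-bar + y)z

for x, y, z in product((0, 1), repeat=3):
    print(x, y, z, f(x, y, z))
# the row x = 0, y = 1, z = 0 yields 0, as read from the truth table
```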
As indicated above, a (complete) Boolean function of n variables is represented
by a truth table with 2ⁿ rows. It is now a simple matter to determine the number of
distinct Boolean functions of n variables. For each of the rows of the truth table,
there are two possible values that can be assigned as the value of the function. Con-
sequently, with 2ⁿ rows to the truth table, there are 2^(2ⁿ) different ways in which the
last column of the truth table can be written, each representing a different Boolean
function.
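For a small n, the count 2^(2ⁿ) can be confirmed by actually enumerating the possible output columns (a sketch, not from the text):

```python
from itertools import product

n = 2
rows = 2 ** n                              # 4 rows in a two-variable truth table
columns = set(product((0, 1), repeat=rows))  # every possible output column
print(len(columns))                        # 16 = 2**(2**2) distinct functions
```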
Because of the closure property of the Boolean operations, every n-variable
Boolean formula describes a unique n-variable Boolean function. However, in
general, many seemingly different Boolean formulas describe the same Boolean
function. Thus, two Boolean formulas A and B are said to be equivalent, written
A = B, if and only if they describe the same Boolean function. The equality sign
occurring in the postulates and theorems of a Boolean algebra relates expressions
that yield identical results for any assignment to the variables. Thus, these theo-
rems and postulates can be applied to a Boolean formula in order to determine an
equivalent expression.

3.4.1 Normal Formulas


It is useful at times to categorize Boolean expressions based on their structure. One
such categorization is the class of normal formulas. Consider the four-variable Boolean
function
f(w,x,y,z) = x + wy + wyz     (3.1)

A literal is defined as each occurrence of either a complemented or an uncomple-


mented variable in a describing formula. Therefore, Eq. (3.1) consists of six literals. A
product term is defined as either a literal or a product (also called conjunction) of liter-
als. Equation (3.1) contains three product terms, namely, x, wy, and wyz. A Boolean
formula that is written as a single product term or as a sum (also called disjunction) of
76 DIGITAL PRINCIPLES AND DESIGN

product terms is said to be in sum-of-products form or disjunctive normal form and is


called a disjunctive normal formula. Thus, Eq. (3.1) is a disjunctive normal formula.
Consider another four-variable Boolean function
f(w,x,y,z) = z(x + y)(w + x + y)     (3.2)

The expression consists of six literals since a total of six complemented and uncom-
plemented variables appear in the formula. A sum term is defined as either a literal
or a sum (also called disjunction) of literals. In the case of Eq. (3.2), it consists of
three sum terms, namely, z, (x + y), and (w + x + y). A Boolean formula which is
written as a single sum term or as a product (also called conjunction) of sum terms
is said to be in product-of-sums form or conjunctive normal form and is called a
conjunctive normal formula. The Boolean expression of Eq. (3.2) is an example of a
conjunctive normal formula.
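A literal count of the kind used above is easy to automate. The helper below is a hypothetical sketch that assumes formulas are written in a plain-text style where a trailing apostrophe marks a complemented variable, so every letter is one literal:

```python
# Count literals in a normal formula written in a plain-text convention where a
# trailing apostrophe marks complementation (an assumption of this sketch).
# Every variable occurrence, primed or not, counts as one literal.
def literal_count(formula: str) -> int:
    return sum(1 for ch in formula if ch.isalpha())

print(literal_count("x + wy + wyz"))           # Eq. (3.1): 6 literals
print(literal_count("z(x + y)(w + x + y)"))    # Eq. (3.2): 6 literals
```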

3.5 CANONICAL FORMULAS


In the previous section, Boolean formulas and functions were introduced. Boolean
formulas are used to describe Boolean functions, i.e., truth tables; hence, there is a
relationship between truth tables and Boolean expressions. As discussed previously,
to construct a truth table from an expression, it is only necessary to evaluate the for-
mula under all possible assignments of the independent variables. At this time, the
process is reversed and it is shown how Boolean formulas are written from a truth
table. Two types of expressions are obtained directly from a truth table: the minterm
canonical formula and the maxterm canonical formula. These two canonical for-
mula types are special cases of the disjunctive normal formula and conjunctive nor-
mal formula, respectively.

3.5.1 Minterm Canonical Formulas


Table 3.6 gives the truth table for some function f of three variables x, y, and z. Con-
sider the first occurrence in the table in which the value of the function is 1. This
corresponds to the second row in which x = 0, y = 0, and z = 1. Now consider the

Table 3.6 The truth table for a function

x y z   f(x,y,z)
0 0 0      0
0 0 1      1
0 1 0      0
0 1 1      1
1 0 0      1
1 0 1      0
1 1 0      0
1 1 1      0

product term x'y'z. If the values x = 0, y = 0, and z = 1 are substituted into this
product term, the term then evaluates to 1, i.e., 0'·0'·1 = 1·1·1 = 1. In addition,
for all of the remaining seven possible combinations of values of x, y, and z, the
term x'y'z has the value 0, since at least one of the literals in the term has the value of
0. It is therefore seen that the single product term x'y'z has a functional value of 1 if
and only if x = 0, y = 0, and z = 1 and, consequently, can be used to algebraically
describe the conditions in which the second row of the truth table has a functional
value of 1.
Again considering Table 3.6, the next row in which the function has the value
of 1 occurs in the fourth row. This row corresponds to the assignment x = 0, y = 1,
and z = 1. If this assignment of values is substituted into the product term x'yz, then
the value of the term is 1. Furthermore, as can easily be checked, this is the only
assignment of values to the variables that causes the term x'yz to have the value of 1.
Thus, the conditions in which the fourth row of Table 3.6 has a functional value of 1
are algebraically described by the product term x'yz.
The only remaining row of Table 3.6 in which the function has the value of 1 is
the fifth row. The assignment associated with this row is x = 1, y = 0, and z = 0. If
this assignment is substituted into the product term xy'z', the term then has the value
of 1. In addition, this product term has the property that it has the value of 1 only for
this assignment.
Combining the above results, the Boolean expression

f(x,y,z) = x'y'z + x'yz + xy'z'     (3.3)


precisely describes the function f(x,y,z) given in Table 3.6 since each product term
in the expression corresponds to exactly one row in which the function has the value
of 1, and the logical sum corresponds to the collection of all such rows. Further-
more, for all those rows in which the function has the value of 0, the above expres-
sion also has the value of 0. Expressions of this type are called minterm canonical
formulas, standard sum-of-products, or disjunctive canonical formulas.
In general, a minterm canonical formula describing an n-variable function is
an expression consisting of a sum of product terms in which each of the n vari-
ables of the function appears exactly once, either complemented or uncomple-
mented, in each product term. Product terms that have this property of all variables
appearing exactly once (and, consequently, having the value of 1 for only one
combination of values of the function variables) are called minterms* or standard
products.
Generalizing from the above example, a procedure can be stated for writing a
minterm canonical formula for any truth table. Each row of a truth table for which
an n-variable function has the value of 1 is represented by a single product term,
i.e., a minterm, in which the n variables appear exactly once. Within each minterm,
a variable appears complemented if for that row the value of the variable is 0 and

*The word minterm is derived from the fact that the term describes a minimum number of rows of a
truth table, short of none at all, that have a functional value of 1.

uncomplemented if for that row the value of the variable is 1. If the minterms describing
precisely those rows of the truth table having a functional value of 1 are
connected by or-operations, then the resulting expression is the minterm canonical
formula describing the function.
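The procedure can be sketched as a short Python function (illustrative names; a trailing apostrophe denotes complementation), applied here to the function of Table 3.6:

```python
from itertools import product

# For every row whose functional value is 1, emit a minterm in which a variable
# appears complemented (primed) when its value in that row is 0, then or the
# minterms together -- the procedure stated above.
def minterm_formula(truth_column, variables):
    terms = []
    for row, value in zip(product([0, 1], repeat=len(variables)), truth_column):
        if value == 1:
            terms.append("".join(v if bit else v + "'" for v, bit in zip(variables, row)))
    return " + ".join(terms)

table_3_6 = [0, 1, 0, 1, 1, 0, 0, 0]            # f of Table 3.6, rows 000 .. 111
print(minterm_formula(table_3_6, "xyz"))        # x'y'z + x'yz + xy'z'
```

The output reproduces Eq. (3.3).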

3.5.2 m-Notation
Each row of a truth table corresponds to an assignment of values to the independent
variables of the function. If this assignment is read as a binary number, then a row is
readily referenced by its decimal number equivalent. Using the letter m to symbolize
a minterm, the notation m_i is used to denote the minterm that is constructed from
the row whose decimal equivalent of the independent variable assignment is i.
Table 3.7 illustrates this notation for the case of three-variable truth tables. For ex-
ample, the minterm xyz' is associated with the row in which x = 1, y = 1, and z = 0.
If 110 is read as a binary number, and thereby has the decimal equivalent of 6, then
the corresponding minterm is denoted by m6. This notation is readily extendable to
handle any number of truth table variables.
In an effort to simplify the writing of a minterm canonical formula for a func-
tion, the symbol m_i, with the appropriate decimal subscript i, can replace each
minterm in the expression. In this way, the minterm canonical formula given by Eq.
(3.3) is written simply as

f(x,y,z) = m1 + m3 + m4     (3.4)

In addition, since a minterm canonical formula always consists of a sum of
minterms, the writing of the expression is simplified further by denoting the
summation of minterms by Σm and just listing within parentheses the decimal designators
of those minterms being summed. Thus, Eq. (3.4) becomes

f(x,y,z) = Σm(1,3,4)
It should be realized that no ambiguity results from this notation if the actual vari-
ables of the Boolean function are listed in the function symbol f(x,y,z) and are as-
sumed to be in the same order within a minterm.

Table 3.7 m-notation for the three-variable minterms

Rows of truth    Decimal
table x y z      designator of row    Minterm    m-notation
000              0                    x'y'z'     m0
001              1                    x'y'z      m1
010              2                    x'yz'      m2
011              3                    x'yz       m3
100              4                    xy'z'      m4
101              5                    xy'z       m5
110              6                    xyz'       m6
111              7                    xyz        m7

Consider the four-variable Boolean function given in Table 3.8. The corresponding
minterm canonical formula is

f(w,x,y,z) = w'x'yz + w'xy'z + wx'y'z' + wx'yz' + wx'yz

and is written in m-notation as

f(w,x,y,z) = m3 + m5 + m8 + m10 + m11 = Σm(3,5,8,10,11)

Table 3.8 A four-variable Boolean function

w x y z   f(w,x,y,z)
0 0 0 0       0
0 0 0 1       0
0 0 1 0       0
0 0 1 1       1
0 1 0 0       0
0 1 0 1       1
0 1 1 0       0
0 1 1 1       0
1 0 0 0       1
1 0 0 1       0
1 0 1 0       1
1 0 1 1       1
1 1 0 0       0
1 1 0 1       0
1 1 1 0       0
1 1 1 1       0

A three-variable Boolean expression in m-notation is


f(x,y,z) = Σm(0,4,7) = m0 + m4 + m7
To obtain its algebraic form, it is only necessary to convert each of the decimal sub-
scripts into a three-digit binary number and then replace each 0-bit by a comple-
mented variable and each 1-bit by an uncomplemented variable. Under the assump-
tion that the three variables x, y, and z of the function occur in the same order within
the minterms, then the algebraic form
f(x,y,z) = x'y'z' + xy'z' + xyz
immediately follows.
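The subscript-to-minterm conversion just described can be sketched as follows (an illustrative helper, with a trailing apostrophe for complementation):

```python
# Convert the decimal designators of Σm(0,4,7) into three-variable minterms:
# write each subscript as a 3-bit binary number, then map 0-bits to
# complemented (primed) variables and 1-bits to uncomplemented ones.
def minterm(i, variables):
    bits = format(i, f"0{len(variables)}b")
    return "".join(v if b == "1" else v + "'" for v, b in zip(variables, bits))

terms = [minterm(i, "xyz") for i in (0, 4, 7)]
print(" + ".join(terms))                        # x'y'z' + xy'z' + xyz
```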

3.5.3 Maxterm Canonical Formulas


There is a second type of canonical formula describing a function that is written di-
rectly from a truth table. This is known as the maxterm canonical formula, standard
product-of-sums, or conjunctive canonical formula.
To see how the maxterm canonical formula is obtained, again consider the func-
tion f(x,y,z) given in Table 3.6. Rather than algebraically describing those rows of the
truth table having functional values of 1, those rows having functional values of 0 are
algebraically described instead. For Table 3.6, the first row of the truth table has a
functional value of 0 and corresponds to the condition x = 0, y = 0, and z = 0. Con-
sider the sum term x + y + z. If the assignment x = 0, y = 0, and z = 0 is substituted
into this sum term, then the term evaluates to 0, i.e.,0 + 0 + 0 = 0. Furthermore, for
all of the remaining possible seven combinations of values of x, y, and z, the term
x + y + zhas the value of 1. It is therefore seen that this sum term has a functional
value of 0 only when x = 0, y = 0, and z = 0. Thus, it can be used to algebraically de-
scribe the conditions in which the first row of the truth table has a functional value of 0.
The next row corresponding to a 0 functional value is the third row. In this
case, x = 0, y = 1, and z = 0. Now consider the sum term x + y' + z. If the assignment
of values corresponding to the third row is substituted into this sum term, then
the value of the term is 0, i.e., 0 + 1' + 0 = 0 + 0 + 0 = 0. In addition, this is the
only assignment of values to the variables x, y, and z that causes this sum term to be
0. Thus, it is concluded that the conditions in which the third row of Table 3.6 has a
functional value of 0 are algebraically described by the sum term x + y' + z.
The analysis of Table 3.6 can be continued for the remaining rows in which the
value of f(x,y,z) is 0. In particular, the sixth row, corresponding to x = 1, y = 0,
z = 1, is described by the sum term x' + y + z'; the seventh row, corresponding to
x = 1, y = 1, z = 0, is described by the sum term x' + y' + z; and, finally, the last
row, corresponding to x = 1, y = 1, z = 1, is described by the sum term x' + y' + z'.
Taking the Boolean product of the five sum terms results in

f(x,y,z) = (x + y + z)(x + y' + z)(x' + y + z')(x' + y' + z)(x' + y' + z')     (3.5)


This expression has the value 0 if and only if any single sum term has the value 0.
Hence, it has the value 0 for the five conditions in which the function f(x,y,z) is 0
and has the value 1 for the remaining three conditions. Equation (3.5) is the maxterm
canonical formula describing the function given in Table 3.6.
In general, a maxterm canonical formula describing an n-variable function is an
expression consisting of a product of sum terms in which each of the n variables of
the function appears exactly once, either complemented or uncomplemented, in
each sum term. Sum terms that have this property of all variables appearing exactly
once (and, consequently, having the value of 0 for only one combination of values
of the function variables) are called maxterms* or standard sums.

*A single maxterm has the value 0 for only one combination of values and has the value of 1 for all
other combinations of values. Hence, its name depicts the fact that it is a term that assigns a functional
value of 1 to a maximum number of rows of a truth table, short of all of them.

Generalizing from the above discussion, a procedure can be stated for writing a
maxterm canonical formula for any truth table. Each row of a truth table for which
an n-variable function has the value of 0 is represented by a single sum term, i.e., a
maxterm, in which the n variables appear exactly once. Within each maxterm, a
variable appears complemented if for that row the value of the variable is 1 and un-
complemented if for that row the value of the variable is 0. If the maxterms describ-
ing precisely those rows of the truth table having a functional value of 0 are con-
nected by and-operations, then the resulting expression is the maxterm canonical
formula describing the function.

3.5.4 M-Notation
As was the case with minterms, a decimal notation is used for maxterms. Again, if
the variable assignment associated with a row of a truth table is read as a binary num-
ber, then the row is readily referenced by its decimal number equivalent. A maxterm
constructed for the row with decimal equivalent i is then denoted by M_i. This nota-
tion is illustrated in Table 3.9 for the case of three-variable truth tables. For example,
the maxterm x' + y' + z is associated with the row in which x = 1, y = 1, and z = 0.
Regarding 110 as a binary number, the decimal equivalent is 6. Hence, the maxterm
is represented by M6. Although Table 3.9 gives the M-notation for three-variable
maxterms, this notation is extendable to handle any number of truth table variables.
Replacing each maxterm in a maxterm canonical formula by its corresponding
M_i, the formula is written in a more compact form. For example, the maxterm
canonical formula given by Eq. (3.5) becomes

f(x,y,z) = M0M2M5M6M7     (3.6)


A further simplification in writing maxterm canonical formulas is achieved by
using ΠM to denote a product of maxterms and listing the decimal designator of
each maxterm for the function within parentheses. Thus, a simplified form of writ-
ing Eq. (3.6) is

f(x,y,z) = ΠM(0,2,5,6,7)

Table 3.9 M-notation for the three-variable maxterms

Rows of truth    Decimal
table x y z      designator of row    Maxterm          M-notation
000              0                    x + y + z        M0
001              1                    x + y + z'       M1
010              2                    x + y' + z       M2
011              3                    x + y' + z'      M3
100              4                    x' + y + z       M4
101              5                    x' + y + z'      M5
110              6                    x' + y' + z      M6
111              7                    x' + y' + z'     M7

Under the assumption that the variables of the maxterms are always arranged in
the same order as they appear in the function notation f(x,y,z), no ambiguity results
from this decimal notation.

Consider the four-variable Boolean function given in Table 3.8. The corresponding
maxterm canonical formula is
f(w,x,y,z) = (w + x + y + z)(w + x + y + z')(w + x + y' + z)(w + x' + y + z)
             (w + x' + y' + z)(w + x' + y' + z')(w' + x + y + z')
             (w' + x' + y + z)(w' + x' + y + z')(w' + x' + y' + z)(w' + x' + y' + z')

and is written in M-notation as

f(w,x,y,z) = M0M1M2M4M6M7M9M12M13M14M15
           = ΠM(0,1,2,4,6,7,9,12,13,14,15)

A three-variable Boolean expression in M-notation is

f(x,y,z) = ΠM(0,2,3,6) = M0M2M3M6

To obtain its algebraic form, it is only necessary to convert each of the decimal
subscripts into a three-digit binary number and then write the corresponding sum
term in which a 0-bit becomes an uncomplemented variable and a 1-bit becomes
a complemented variable. Under the assumption that the variables within the
maxterms appear in the same order as in the function notation, the algebraic form
becomes

f(x,y,z) = (x + y + z)(x + y' + z)(x + y' + z')(x' + y' + z)
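The same conversion for maxterms, sketched in Python (an illustrative helper; note the rule is the reverse of the minterm rule):

```python
# Convert the decimal designators of ΠM(0,2,3,6) into three-variable sum
# terms: in a maxterm, a 0-bit becomes an uncomplemented variable and a
# 1-bit a complemented (primed) one.
def maxterm(i, variables):
    bits = format(i, f"0{len(variables)}b")
    return "(" + " + ".join(v if b == "0" else v + "'" for v, b in zip(variables, bits)) + ")"

terms = [maxterm(i, "xyz") for i in (0, 2, 3, 6)]
print("".join(terms))     # (x + y + z)(x + y' + z)(x + y' + z')(x' + y' + z)
```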

In closing this section, a comment on the importance of canonical formulas is


in order. For complete Boolean functions, the minterm and maxterm canonical for-
mulas are unique. That is, for any complete Boolean function, there is only one
minterm canonical formula and only one maxterm canonical formula. Thus, if two
dissimilar-looking Boolean equations are manipulated into the same minterm or
maxterm canonical formulas, then it can be concluded that the two formulas are de-
scribing the same Boolean function.
Another application of the canonical formulas is that they serve as the starting
point for formal techniques to determine simple formulas that describe a function.
Simplification procedures are studied in the next chapter.

3.6 MANIPULATIONS OF BOOLEAN


FORMULAS
A Boolean function is describable by many different formulas. By applying the pos-
tulates and theorems of a Boolean algebra, it is possible to manipulate a Boolean
expression into another form describing the same Boolean function. The type of
manipulation that must be performed depends upon some objective that is to be
achieved. For example, it may be desirable to obtain an expression having the
fewest literals that describes a function, or the objective might be to obtain a canon-
ical formula when the given formula is not in canonical form. In this section several
examples of Boolean equation manipulations with various objectives are given.

3.6.1 Equation Complementation


A Boolean equation is a description of a Boolean function. This description is a rule
expressed in algebraic form that assigns functional values for all combinations of
values of the independent variables of the function. For every Boolean function f
there is associated a complementary function f' in which f'(x_1, x_2, ..., x_n) = 1 if
f(x_1, x_2, ..., x_n) = 0 and f'(x_1, x_2, ..., x_n) = 0 if f(x_1, x_2, ..., x_n) = 1 for all combinations
of values of x_1, x_2, ..., x_n. That is, the functional value column appearing in
the truth table for f' has the opposite values from those in the functional value column
in the truth table for f. A Boolean formula for f' is obtained by complementing
the Boolean expression for f. For example, if the function f is described by

f = w'xz' + w(x' + y'z)

then the function f' is described by

f' = [w'xz' + w(x' + y'z)]'

By repeated use of DeMorgan's law, i.e., Theorem 3.9, the not-operation over the
entire formula is brought inside the parentheses so that not-operations only appear
with the individual variables.

EXAMPLE 3.5

The complementation of the Boolean expression

f = w'xz' + w(x' + y'z)

proceeds as follows using DeMorgan's law (Theorem 3.9) and the involution law
(Theorem 3.5):

f' = [w'xz' + w(x' + y'z)]' = (w'xz')'·[w(x' + y'z)]'
   = [(w')' + x' + (z')']·[w' + (x' + y'z)']
   = (w + x' + z)[w' + x(y'z)']
   = (w + x' + z){w' + x[(y')' + z']}
   = (w + x' + z)[w' + x(y + z')]
   = (w + x' + z)(w' + xy + xz')
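Manipulations of this kind are easy to check exhaustively. The sketch below (using 0/1 integers for the Boolean constants, and taking the formulas as reconstructed here: f = w'xz' + w(x' + y'z) and its complement (w + x' + z)(w' + xy + xz')) verifies that the two disagree on every one of the 16 rows:

```python
from itertools import product

# Exhaustive check: a function and its complement must differ on every row.
def f(w, x, y, z):
    # f = w'xz' + w(x' + y'z)
    return (1 - w) & x & (1 - z) | w & ((1 - x) | (1 - y) & z)

def f_comp(w, x, y, z):
    # (w + x' + z)(w' + xy + xz'), the De Morgan expansion derived above
    return (w | (1 - x) | z) & ((1 - w) | x & y | x & (1 - z))

assert all(f(*row) != f_comp(*row) for row in product([0, 1], repeat=4))
print("complement verified on all 16 rows")
```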

3.6.2 Expansion about a Variable


There are occasions when it is desirable to single out a variable and rewrite a
Boolean formula f(x_1, ..., x_i, ..., x_n) so that it has the general structure

x_i·g_1 + x_i'·g_2

or

(x_i + h_1)(x_i' + h_2)

where g_1, g_2, h_1, and h_2 are expressions not containing the variable x_i. These special
forms of a Boolean formula f(x_1, ..., x_i, ..., x_n) are said to be expansions about the
variable x_i. The expansions about a single variable are achieved by the following
theorem, known as Shannon's expansion theorem.

Theorem 3.11

(a) f(x_1, x_2, ..., x_i, ..., x_n)
    = x_i·f(x_1, x_2, ..., 1, ..., x_n) + x_i'·f(x_1, x_2, ..., 0, ..., x_n)
(b) f(x_1, x_2, ..., x_i, ..., x_n)
    = [x_i + f(x_1, x_2, ..., 0, ..., x_n)]·[x_i' + f(x_1, x_2, ..., 1, ..., x_n)]

where f(x_1, x_2, ..., k, ..., x_n), for k = 0, 1, denotes the formula
f(x_1, x_2, ..., x_i, ..., x_n) upon the substitution of the constant k for all
occurrences of the variable x_i.

EXAMPLE 3.6

Consider the Boolean expression

f(w,x,y,z) = wx' + (w'x + y')z

Assume that an equivalent expression is desired having the general form

x·g_1(w,y,z) + x'·g_2(w,y,z)

This is achieved by expanding the expression f(w,x,y,z) about the x variable as follows:

f(w,x,y,z) = wx' + (w'x + y')z
           = x[w·1' + (w'·1 + y')z] + x'[w·0' + (w'·0 + y')z]    by Theorem 3.11(a)
           = x[w·0 + (w' + y')z] + x'[w·1 + y'z]
           = x(w' + y')z + x'(w + y'z)
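The expansion can be confirmed by exhaustive evaluation; this sketch checks that the expanded form (as reconstructed here) agrees with the original on every assignment:

```python
from itertools import product

# Check the Shannon expansion of f(w,x,y,z) = wx' + (w'x + y')z about x:
# f = x(w' + y')z + x'(w + y'z) must agree with f on all 16 rows.
def f(w, x, y, z):
    return w & (1 - x) | ((1 - w) & x | (1 - y)) & z

def f_expanded(w, x, y, z):
    g1 = ((1 - w) | (1 - y)) & z                 # g1(w,y,z) = (w' + y')z
    g2 = w | (1 - y) & z                         # g2(w,y,z) = w + y'z
    return x & g1 | (1 - x) & g2

assert all(f(*row) == f_expanded(*row) for row in product([0, 1], repeat=4))
print("expansion verified")
```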

3.6.3 Equation Simplification


As is seen in Sec. 3.7, there is a direct correspondence between the structure of logic
networks and Boolean formulas. Since the postulates and theorems of a Boolean al-
gebra enable the manipulations of a formula into equivalent forms, for a particular
Boolean function one formula might be more desirable than another. This is particu-
larly true if the formula is manipulated so as to correspond to the “best” network. Of

course, a criterion is needed that can be applied to a formula to measure the desir-
ability of the corresponding network. This has led to defining “simple” or “mini-
mal” expressions with the intent that they correspond to the least-cost network.
One possible way of measuring the simplicity of a Boolean expression is to ob-
serve the number of literals contained within the formula. For the purpose of this
chapter, the simplest form of an expression is defined as the normal formula having
the fewest number of literals.*

EXAMPLE 3.7

Consider the expression

f(x,y,z) = (x + xy)(x' + y) + yz

consisting of seven literals. It is simplified as follows:

(x + xy)(x' + y) + yz = x(x' + y) + yz    by Theorem 3.6(a)
                      = xy + yz           by Theorem 3.7(b)
                      = y(x + z)          by P4(b)

The resulting expression is a conjunctive normal formula, i.e., a formula in product-of-sums
form, consisting of three literals.

EXAMPLE 3.8

Consider the expression

wy'z' + w'z + y'z + xyz

consisting of 10 literals. It is simplified as follows:

wy'z' + w'z + y'z + xyz = wy'z' + w'z + z(y' + xy)
                        = wy'z' + w'z + z(y' + x)
                        = wy'z' + w'z + y'z + xz
                        = wy'z' + w'z + 1·y'z + xz
                        = wy'z' + w'z + (w + w')y'z + xz
                        = wy'z' + w'z + wy'z + w'y'z + xz
                        = wy'z' + wy'z + w'z + w'y'z + xz
                        = wy'(z' + z) + w'z(1 + y') + xz
                        = wy' + w'z + xz

The resulting expression is a disjunctive normal formula, i.e., a formula in sum-of-products
form, consisting of six literals.

Certainly it is not obvious that the final expression in Example 3.8 is the simplest
disjunctive normal formula. Nor was it obvious which theorems or postulates were

*The subject of measuring the simplicity of an expression is further explored in Chapter 4.



the most appropriate to apply in order to achieve the reduction. Clearly, there is a
need for systematic reduction techniques that guarantee minimal resulting expres-
sions. In the next chapter algorithmic procedures for obtaining expressions under
different measures of minimality are studied.

3.6.4 The Reduction Theorems


For the purpose of obtaining simple Boolean formulas, two additional theorems are
particularly useful. These are known as Shannon’s reduction theorems.

Theorem 3.12

(a) x_i · f(x_1, x_2, ..., x_i, ..., x_n) = x_i · f(x_1, x_2, ..., 1, ..., x_n)
(b) x_i + f(x_1, x_2, ..., x_i, ..., x_n) = x_i + f(x_1, x_2, ..., 0, ..., x_n)

where f(x_1, x_2, ..., k, ..., x_n), for k = 0, 1, denotes the formula
f(x_1, x_2, ..., x_i, ..., x_n) upon the substitution of the constant k for all
occurrences of the variable x_i.

Theorem 3.13

(a) x_i' · f(x_1, x_2, ..., x_i, ..., x_n) = x_i' · f(x_1, x_2, ..., 0, ..., x_n)
(b) x_i' + f(x_1, x_2, ..., x_i, ..., x_n) = x_i' + f(x_1, x_2, ..., 1, ..., x_n)

where f(x_1, x_2, ..., k, ..., x_n), for k = 0, 1, denotes the formula
f(x_1, x_2, ..., x_i, ..., x_n) upon the substitution of the constant k for all
occurrences of the variable x_i.

EXAMPLE 3.9

Consider a function described by the following Boolean expression:

f(w,x,y,z) = x + x'y' + w'x'(w + z)(y + w'z)

Denoting x'y' + w'x'(w + z)(y + w'z) by g(w,x,y,z), the above expression has the form

f(w,x,y,z) = x + g(w,x,y,z)

By Theorem 3.12(b), all occurrences of the x variable in g(w,x,y,z) can now be
replaced by the Boolean constant 0. Therefore,

f(w,x,y,z) = x + 0'·y' + w'·0'·(w + z)(y + w'z)
           = x + y' + w'(w + z)(y + w'z)

It is next noted that by letting (w + z)(y + w'z) be denoted by h(w,y,z), then

w'(w + z)(y + w'z) = w'·h(w,y,z)

By Theorem 3.13(a) all occurrences of the w variable in h(w,y,z) can be replaced by
the constant 0. Thus,

w'(w + z)(y + w'z) = w'(0 + z)(y + 0'·z)
                   = w'z(y + z)

At this point the original Boolean expression reduces to

f(w,x,y,z) = x + y' + w'z(y + z)

Finally, by letting x + w'z(y + z) be denoted by k(w,x,y,z), then

f(w,x,y,z) = y' + k(w,x,y,z)

By Theorem 3.13(b) the y variable in k(w,x,y,z) can be replaced by the constant 1. Thus,

f(w,x,y,z) = y' + x + w'z(1 + z)
           = x + y' + w'z

which is the simplest disjunctive normal formula describing the given function.
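Again, exhaustive evaluation confirms the reduction; this sketch checks the original expression (as reconstructed here, x + x'y' + w'x'(w + z)(y + w'z)) against the reduced x + y' + w'z on all 16 assignments:

```python
from itertools import product

# Verify the reduction of Shannon's reduction-theorem example exhaustively.
def f(w, x, y, z):
    # x + x'y' + w'x'(w + z)(y + w'z)
    return x | (1 - x) & (1 - y) | (1 - w) & (1 - x) & (w | z) & (y | (1 - w) & z)

def f_reduced(w, x, y, z):
    # x + y' + w'z
    return x | (1 - y) | (1 - w) & z

assert all(f(*row) == f_reduced(*row) for row in product([0, 1], repeat=4))
print("reduction verified")
```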

3.6.5 Minterm Canonical Formulas


Section 3.5 introduced the minterm canonical formula and its construction from a
truth table. The significance of the minterm canonical formula is that it is a unique
description of a complete Boolean function. Thus, a method of determining if two
Boolean formulas are equivalent is to express them in their minterm canonical
forms and compare them minterm by minterm.
By the use of the postulates and theorems of a Boolean algebra, it is a simple mat-
ter to manipulate any formula into its minterm canonical form. This is achieved by
first applying DeMorgan's law, i.e., Theorem 3.9, a sufficient number of times until all
not-operations only appear with the variables. Next the distributive law of (·) over (+),
i.e., Postulate P4(b), is applied in order to manipulate the formula into its disjunctive
normal form, i.e., a sum of product terms. Duplicate literals and terms are removed by
the idempotent law, i.e., Theorem 3.4, as well as any terms that are identically 0 by
Postulate P5(b). If any product term in the resulting disjunctive normal formula does
not have all the variables of the function, then these missing variables are introduced
by and-ing the term with logic-1 in the form of x_i + x_i' where x_i is the missing variable
being introduced. This process is continued for each missing variable in each of the
product terms of the disjunctive normal formula. After applying the distributive law of
(·) over (+) again, each variable appears exactly once in each term. Upon removing
any duplicate terms, the resulting expression is the minterm canonical formula.

EXAMPLE 3.10

Consider the Boolean formula consisting of the variables x, y, and z

f(x,y,z) = xy + [y'(x + z') + xy]'

Applying DeMorgan's law, the expression is rewritten so that the not-operations
only appear with the variables. Thus,

f(x,y,z) = xy + (y + x'z)(x' + y')

Next the distributive law of (·) over (+) is applied to remove all parentheses. This
results in

f(x,y,z) = xy + x'y + x'x'z + yy' + x'y'z

The duplication of the x' literal in the third term and the identically 0 term yy' are
removed. At this point a disjunctive normal formula is obtained, i.e.,

f(x,y,z) = xy + x'y + x'z + x'y'z

The first term in this expression is lacking the z variable. The variable is introduced
by and-ing the term with logic-1 in the form of z + z'. In a similar manner, the missing
variables in the second and third terms are introduced. Thus,

f(x,y,z) = xy(z + z') + x'y(z + z') + x'z(y + y') + x'y'z

Application of the distributive law of (·) over (+) results in

f(x,y,z) = xyz + xyz' + x'yz + x'yz' + x'yz + x'y'z + x'y'z

Finally, the duplicate occurrences of the x'yz and x'y'z terms are deleted, resulting in
the minterm canonical formula

f(x,y,z) = xyz + xyz' + x'yz + x'yz' + x'y'z
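Because the minterm canonical formula lists exactly the rows where the function is 1, the same result can be obtained by brute-force evaluation; this sketch recovers Σm(1,2,3,6,7) from the formula of this example (in its De Morganed form, as reconstructed here):

```python
from itertools import product

# Evaluate the formula on every row; the row indices that yield 1 are the
# decimal designators of its minterm canonical form.
def f(x, y, z):
    # xy + (y + x'z)(x' + y')
    return x & y | (y | (1 - x) & z) & ((1 - x) | (1 - y))

minterms = [i for i, (x, y, z) in enumerate(product([0, 1], repeat=3)) if f(x, y, z)]
print(minterms)                                  # [1, 2, 3, 6, 7]
```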

3.6.6 Maxterm Canonical Formulas


Just as any Boolean expression can be manipulated into its unique minterm canoni-
cal form, it is also possible to manipulate it into its unique maxterm canonical form.
The procedure is the dual to that described previously for obtaining the minterm
canonical formula. In this case the distributive law of (+) over (·) is used, i.e., Postulate
P4(a), to bring the expression into its conjunctive normal form, i.e., product-of-sums
form, after DeMorgan's law is applied so that all not-operations appear
only with the variables in the function. Then the missing variables are introduced
into the sum terms by or-ing logic-0's in the form of x_i·x_i' where x_i is the missing
variable, and the distributive law of (+) over (·) is again applied. Duplicate terms
and literals are deleted as well as terms identically 1 if they occur during the course
of formula manipulation.

EXAMPLE 3.11

Again consider the formula of Example 3.10

f(x,y,z) = xy + [y'(x + z') + xy]'

First DeMorgan's law is applied so that the expression has all the not-operations
occurring only with the variables. Thus,

f(x,y,z) = xy + (y + x'z)(x' + y')

Next the distributive law of (+) over (·) is applied a sufficient number of times to
manipulate the expression into conjunctive normal form. In particular,

f(x,y,z) = [xy + (y + x'z)][xy + (x' + y')]
         = (x + y + x'z)(y + y + x'z)(x + x' + y')(y + x' + y')
         = (x + y + x')(x + y + z)(y + y + x')(y + y + z)(x + x' + y')(y + x' + y')

The first, fifth, and sixth terms are identically 1, by Postulate P5(a) and Theorem
3.2(a), and, hence, are now dropped, as are the duplicate y literals in the third and
fourth terms. The resulting expression is

f(x,y,z) = (x + y + z)(x' + y)(y + z)

The first term is already a maxterm since all three variables appear. However, the
second term lacks the z variable and the third term lacks the x variable. These missing
variables are next introduced in the form zz' and xx', i.e.,

f(x,y,z) = (x + y + z)(x' + y + zz')(y + z + xx')

Finally, the distributive law of (+) over (·) is again applied and the duplicate
maxterms are deleted, yielding the maxterm canonical formula

f(x,y,z) = (x + y + z)(x' + y + z)(x' + y + z')

3.6.7 Complements of Canonical Formulas


In terms of the truth table, the complement of a complete Boolean function involves
a functional value column having values just opposite to those of the original func-
tion. Recall that the minterm canonical formula of a function is written directly from
a truth table by summing the minterms for precisely those rows in which the function
has the value 1. It therefore immediately follows that the minterm canonical formula
for the complement of a complete Boolean function is obtained by summing those
minterms not contained in the minterm canonical formula of the original function.
A similar observation can be made with regard to the maxterm canonical for-
mulas for a function and its complement since it is the 0 functional-value rows of
the truth table that determine which maxterms are to appear in the canonical for-
mula. That is, the maxterm canonical formula of a complementary function consists
of the product of precisely those maxterms that do not appear in the maxterm
canonical formula of the original function.
For a complete Boolean function of n variables, the truth table consists of 2^n
rows. It was stated in Sec. 3.5 that each of these rows is referenced by a decimal integer
in the range from 0 to 2^n - 1. Hence, in terms of the decimal notation, the decimal
description of a complementary function consists of the set of integers in the
range from 0 to 2^n - 1 not appearing in the set for the original function. This is illustrated
by the next two examples.

EXAMPLE 3.12

The complement of the minterm canonical formula

f(w,x,y,z) = Σm(0,1,3,8,9,12,15)

is given by

f'(w,x,y,z) = Σm(2,4,5,6,7,10,11,13,14)

EXAMPLE 3.13

The complement of the maxterm canonical formula

f(w,x,y,z) = ΠM(1,2,6,10,12,13,14)

is given by

f'(w,x,y,z) = ΠM(0,3,4,5,7,8,9,11,15)

Now consider a single minterm of n variables. Since the minterm is written as
the product of n variables, if the minterm is complemented by applying DeMorgan's
law, then the result is a sum term in which each of the n variables still remains.
For example, (wx'yz')' = w' + x + y' + z, where the prime denotes complementation.
Thus it is readily seen that the complement of an n-variable minterm is an
n-variable maxterm. Similarly, applying DeMorgan's law to an n-variable maxterm
results in an n-variable minterm.
Assuming that a canonical term is given by its decimal representation, the deci-
mal subscript is not affected by the complementation process even though each lit-
eral is complemented as a result of DeMorgan’s law. This follows from the way in
which the decimal notation was developed in Sec. 3.5. In particular, in the case of
minterms, the decimal representation is derived from the binary number in which
0’s are associated with complemented variables and |’s with uncomplemented vari-
ables. However, for maxterms, the decimal representation is derived from the bi-
nary number in which 0’s are associated with uncomplemented variables and 1’s
are associated with complemented variables. For the above example, it is seen that the
complement of minterm m₁₀, i.e., (wx'yz')', becomes w' + x + y' + z, which is maxterm
M₁₀. In general, mᵢ' = Mᵢ and Mᵢ' = mᵢ, where i is a decimal subscript.
The above discussion suggests a second way of forming the complement of a
canonical expression and still having a canonical expression result. If a function is
expressed as a minterm canonical formula in decimal notation, then its complement
in decimal notation is a maxterm canonical formula with the same decimal sub-
scripts. Similarly, if a function is expressed as a maxterm canonical formula in deci-
mal notation, then its complement in decimal notation is a minterm canonical for-
mula with the same decimal subscripts.

EXAMPLE 3.14
Consider the minterm canonical formula

f(w,x,y,z) = m₁ + m₃ + m₉ + m₁₁ + m₁₄

Applying DeMorgan's law directly to this expression results in

f'(w,x,y,z) = m₁'m₃'m₉'m₁₁'m₁₄'

Finally, since mᵢ' = Mᵢ, the complementary function is written in maxterm canonical
form as

f'(w,x,y,z) = M₁M₃M₉M₁₁M₁₄
CHAPTER 3 Boolean Algebra and Combinational Networks 91

Two methods have been introduced for complementing a canonical expression


when given in decimal notation. From the way the complement of a Boolean function
is defined, if the complement of an expression describing a function is taken
twice, then the resulting expression again describes the original function. In other
words, (f'(x₁, x₂, ..., xₙ))' = f(x₁, x₂, ..., xₙ). Using the above results, it is a simple
matter to transform a minterm or maxterm canonical expression in decimal form into
its equivalent canonical expression in decimal form of the opposite type. First, the
expression is complemented by replacing the set of decimal describers by the set
consisting of decimal integers in the range from 0 to 2ⁿ − 1 not appearing in the
original set. Second, the double complement is achieved by applying DeMorgan's law to
the newly formed complemented expression and using the result mᵢ' = Mᵢ or Mᵢ' = mᵢ.
Certainly, the two steps of the above transformation process can be taken in either
order.
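The two-step transformation amounts to a single set operation: the subscripts of the equivalent opposite-type canonical form are exactly the integers missing from the original list. A short Python sketch (ours, with a small made-up example) makes this concrete:

```python
def minterm_to_maxterm_form(minterms, n):
    """Sum-of-minterms (decimal) -> equivalent product-of-maxterms (decimal).
    Step 1 replaces the list by its complement set; step 2 (DeMorgan's law,
    m_i' = M_i) renames each term without changing the subscripts."""
    return sorted(set(range(2 ** n)) - set(minterms))

# Small example of our own: f(x,y,z) = sum m(1,4,7) = prod M(0,2,3,5,6)
print(minterm_to_maxterm_form([1, 4, 7], 3))
```

The same function converts a maxterm list to the equivalent minterm list, since the two steps are symmetric.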

EXAMPLE 3.15
Consider the minterm canonical formula

f(w,x,y,z) = m₀ + m₂ + m₄ + m₅ + m₇ + m₈ + m₁₀ + m₁₂ + m₁₃

Forming the complement by listing the minterms not included in the original
expression results in
f'(w,x,y,z) = m₁ + m₃ + m₆ + m₉ + m₁₁ + m₁₄ + m₁₅

Finally, taking the second complement by DeMorgan’s law results in the maxterm
canonical formula of the original function:
f(w,x,y,z) = (m₁ + m₃ + m₆ + m₉ + m₁₁ + m₁₄ + m₁₅)'

           = m₁'m₃'m₆'m₉'m₁₁'m₁₄'m₁₅'

           = M₁M₃M₆M₉M₁₁M₁₄M₁₅

3.7 GATES AND COMBINATIONAL NETWORKS
Thus far in this chapter, attention has been directed to the development of a Boolean
algebra. The purpose for introducing this algebra is its application to the analysis
and design (also called synthesis) of logic systems. Logic systems consist of logic
elements called gates and flip-flops. Gates are electronic circuits whose terminal
characteristics correspond to the various Boolean operations; while flip-flops are
memory devices that are capable of storing the logic constants. The interconnec-
tions of gates and flip-flops result in logic networks. A drawing that depicts the in-
terconnections of the logic elements is called a logic diagram. These networks are
related to the Boolean algebra in that a Boolean function is used to represent or de-
scribe a logic network and, alternatively, a logic network is a realization or imple-
mentation of a Boolean function.

Figure 3.1 Gate symbols. (a) And-gate. (b) Or-gate. (c) Not-gate.

3.7.1 Gates
Electronic circuits can be designed in which only two possible steady-state voltage
signal values appear at the terminals of the circuits at any time.* Such two-state cir-
cuits receive two-valued input signals and are capable of producing two-valued out-
put signals in the steady state. Rather than dealing with the actual voltage signal
values at the circuit terminals, it is possible to assign two arbitrary symbols to the
two steady-state voltage signal values. Let these symbols be logic-0 and logic-1.
An electronic circuit in which the output signal is a logic-1 if and only if all its
input signals are logic-1 is called an and-gate. This circuit is a physical realization
of the Boolean and-operation. Similarly, an electronic circuit in which the output
signal is a logic-1 if and only if at least one of its input signals is a logic-1 is called
an or-gate. The or-gate is a physical realization of the Boolean or-operation. Fi-
nally, an electronic circuit in which the output signal is always opposite to that of
the input signal is called a not-gate or inverter and is the physical realization of the
Boolean not-operation. Since the input and output lines, or terminals, of a gate have
different values, i.e., logic-0 and logic-1, at different times, each of these lines is as-
signed a two-valued variable. Hence, the terminal characteristics of each of these
gates are describable with a two-valued Boolean algebra. Figure 3.1 illustrates a set
of symbols for the above three gates and the corresponding algebraic expressions
for their behavior at the output terminals.

3.7.2 Combinational Networks


The interconnections of gates result in a gate network. If the network has the property
that its outputs at any time are determined strictly by the inputs at that time, then the
network is said to be a combinational network. A combinational network is repre-
sented by the general diagram of Fig. 3.2. The set of signals applied to the n input ter-
minals at any time is called the input state or input vector of the network; while the set
of resulting signals appearing at the m output terminals is called the output state or
output vector. In general, for a combinational network, the outputs z₁, z₂, ..., zₘ can be
expressed as a Boolean function of its inputs x₁, x₂, ..., xₙ. That is, zᵢ = fᵢ(x₁, x₂, ..., xₙ)
for i = 1, 2, ..., m, where fᵢ(x₁, x₂, ..., xₙ) is a Boolean function.
Not all interconnections of gates satisfy the above requirement that the network
outputs are a function of only its current inputs. However, a sufficient, but not nec-

*In actuality, two ranges of voltage signal values are associated with each terminal. However, for
simplicity in this discussion it is not necessary to regard the electrical signals as ranges, but rather two
values suffice. Gate properties are further discussed in Sec. 3.10.

Figure 3.2 Block diagram of a combinational network.

essary, condition for a combinational network is that the network contains no closed
loops or feedback paths. Networks that satisfy this constraint are said to be acyclic.
A second type of logic network is the sequential network. Sequential networks
have a memory property so that the outputs from these networks are dependent not
only upon the current inputs but upon previous inputs as well. Feedback paths form
a necessary part of sequential networks. At this time, only acyclic gate networks are
studied. This ensures that the networks being considered are indeed combinational.
There is a very important inherent assumption in the establishment of Boolean
algebra as a mathematical model for a combinational gate network. A two-valued
Boolean algebra has only two different symbols to assign to the physical signal val-
ues present at any time at the gate terminals. However, when a physical signal
changes from one of its values to the other, a continuum of values appears at the
input or output terminals. Furthermore, in the real world, changes cannot occur in-
stantaneously, i.e., in zero time. To eliminate these transient-type problems, only
the steady-state conditions occurring in a network are considered. Under this
steady-state assumption, Boolean algebra can serve as a mathematical model for
combinational gate networks. In Sec. 3.10, further remarks are made about the na-
ture of the signals within a combinational network.
At this time, the analysis and synthesis of gate combinational networks is studied.
Analysis involves obtaining a behavioral description of the network. This is achieved
by writing a Boolean expression or, equivalently, by forming a truth table to describe
the network’s logic behavior. Synthesis, on the other hand, involves specifying the in-
terconnections of the gates, i.e., topological structure, for a desired behavior. This, in
turn, results in a logic diagram from which a physical realization is constructed.

3.7.3 Analysis Procedure


Given a gate network with no feedback paths, it is a simple matter to develop an
analysis procedure. To begin with, each gate output that is only a function of the
input variables is labeled. Algebraic expressions for the outputs of each of these
gates are then written. Next, those gate outputs that are a function of just the input
variables and previously labeled gate outputs are labeled. The equations for these
gate outputs are written using the previously assigned labels as input variables.
Then, each of the previously defined labels is replaced by the already written
Boolean equations. This process is continued until the output of the network is la-
beled and the appropriate expression obtained. Finally, the expression can be used
to construct a truth table to complete the analysis procedure.

Figure 3.3 A gate combinational network.

The above analysis procedure is illustrated by the gate network of Fig. 3.3. The
or-gate whose output is labeled G₁ is described by the formula G₁ = y + z. Also, the
and-gate whose output is labeled G₂ is described by the formula G₂ = wxy. Next, the
and-gate whose output is labeled G₃ is described in terms of the input variable w and
the previously labeled output G₁. Thus, the formula for the output of the and-gate
labeled G₃ is G₃ = w·G₁ = w(y + z). Finally, the output of the network is described
in terms of the labels G₂ and G₃; that is, f(w,x,y,z) = G₂ + G₃ = wxy + w(y + z).
Once the expression is written, the corresponding truth table can be constructed, as
previously explained in Sec. 3.4.
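The gate-by-gate labeling just described can be mirrored in a short Python sketch (ours, not the text's; the literals are taken as printed, since a reproduction may have dropped complement bars, and the procedure is the same either way):

```python
from itertools import product

def f(w, x, y, z):
    g1 = y | z       # or-gate:  G1 = y + z
    g2 = w & x & y   # and-gate: G2 = wxy
    g3 = w & g1      # and-gate: G3 = w(y + z)
    return g2 | g3   # output or-gate: f = G2 + G3

# Complete the analysis by tabulating f over all 16 input states.
truth_table = {v: f(*v) for v in product((0, 1), repeat=4)}
print(sum(truth_table.values()), "rows evaluate to 1")
```

Evaluating the intermediate labels in topological order, exactly as in the procedure, guarantees every gate input is defined before it is used.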

3.7.4 Synthesis Procedure


The synthesis procedure, or logic design, begins with the specifications of the de-
sired terminal behavior of a gate network. The truth table is a very convenient form
for describing the network specifications. Each line of the truth table corresponds to
an input state of the network (1.e., a set of input signals represented by 0’s and 1’s)
and an output state (i.e., a set of output signals represented by 0’s and 1’s corre-
sponding to the values of the functions). Once this table is formed, Boolean formu-
las are written, one for each output function. Manipulations of a formula are
achieved by the postulates and theorems of a Boolean algebra. Each time the for-
mula is manipulated, a different, but equivalent, gate configuration is described. For
example, a network implementation with a minimum number of gate input lines is
obtained from the expression with a minimum number of literals. Once the desired
form of the Boolean expression is obtained, it is a simple matter to draw a logic dia-
gram for a gate network without feedback paths. By applying the analysis procedure
in reverse, a direct topological drawing of the equation is constructed. That is, the
order in which the operations occur within the equation is also the order in which
the gates are connected to process the signals.
For simplicity, it is assumed that all variables and their complements are al-
ways available as inputs. This is known as double-rail logic. If the variables and
their complements are not both available, then not-gates can always be used to ob-
tain the complements. This case when complementary variables are not available is
called single-rail logic.
In gate networks, the largest number of gates a signal must pass through from
input to output is called the number of levels of logic. In Fig. 3.3 this path is through

Figure 3.4 Synthesis of a gate combinational network. (a) f(w,x,y,z) = wx + x(y + z).
(b) f(w,x,y,z) = wx + xy + xz.

the or-gate whose output is labeled G₁, the and-gate whose output is G₃, and finally
the output or-gate. Thus, Fig. 3.3 shows a three-level gate network under the
assumption of double-rail logic.
To illustrate the construction of a logic diagram from a Boolean expression,
consider the Boolean function described by the formula
f(w,x,y,z) = wx + x(y + z)

The logic diagram of this equation is shown in Fig. 3.4a where the topological
arrangement of the gates is in direct correspondence with the evaluation of the
formula. That is, an or-gate producing y + z is followed by an and-gate to obtain
x(y + z). Concurrently, an and-gate is used to generate wx. Finally, an output
or-gate is used to combine the outputs of the subnetworks for x(y + z) and wx.
The resulting network consists of three levels.
When the above expression is rewritten in sum-of-products form, i.e., disjunc-
tive normal form, it becomes

f(w,x,y,z) = wx + xy + xz
This formula suggests the two-level network shown in Fig. 3.4b. Since the two
expressions are equivalent, both of these networks have the same logical terminal
behavior.
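That the factored and sum-of-products forms describe the same terminal behavior can be confirmed exhaustively (a quick Python check of our own, not from the text):

```python
from itertools import product

# Factored form wx + x(y + z) and its sum-of-products expansion
# wx + xy + xz, compared over all 16 input states.
factored = lambda w, x, y, z: (w & x) | (x & (y | z))
sop      = lambda w, x, y, z: (w & x) | (x & y) | (x & z)

assert all(factored(*v) == sop(*v) for v in product((0, 1), repeat=4))
print("equivalent on all 16 input states")
```

An exhaustive comparison of truth tables is always a valid equivalence test for combinational behavior, since a two-valued function of n variables is determined by its 2ⁿ rows.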

3.7.5 A Logic Design Example


The logic design process begins with a set of specifications. Although frequently it
is possible to write a Boolean expression directly from the problem specifications, a
more formal approach is to start with a truth table.
Consider the design of a gate combinational network which is to have the fol-
lowing characteristics: The inputs to the network are the binary numbers 0 to 15 and
the output of the network is to produce the corresponding parity bit using an odd-
parity-bit scheme. The concept of the parity bit was introduced in Sec. 2.11. Thus,
the inputs to the network are 4-bit combinations of 0's and 1's ranging from 0000 to
1111. The proposed network must generate a 1 parity bit when there is an even

Table 3.10 Truth table for an odd-parity-bit generator for the first 16 binary numbers

w  x  y  z  p
0  0  0  0  1
0  0  0  1  0
0  0  1  0  0
0  0  1  1  1
0  1  0  0  0
0  1  0  1  1
0  1  1  0  1
0  1  1  1  0
1  0  0  0  0
1  0  0  1  1
1  0  1  0  1
1  0  1  1  0
1  1  0  0  1
1  1  0  1  0
1  1  1  0  0
1  1  1  1  1

number of 1’s in the input state of the network and a 0 parity bit when there is an
odd number of 1’s. In this way, the total number of 1’s in the input state and the par-
ity bit collectively is odd.
The truth table for this odd-parity-bit generator is given in Table 3.10. The vari-
ables w, x, y, and z denote the 4 bits of the input state where w corresponds to the
most significant bit and z corresponds to the least significant bit. The truth table has
16 rows corresponding to each of the possible input states. The functional values of
the truth table are determined by the statement of the problem. In this case, the
functional values, column p, are the desired parity bits the network produces and are
assigned so that the number of 1’s in each entire row of the table is odd.
Having obtained the truth table, a corresponding Boolean expression can be
written. For example, the minterm canonical formula for Table 3.10 is

p(w,x,y,z) = w'x'y'z' + w'x'yz + w'xy'z + w'xyz' + wx'y'z + wx'yz' + wxy'z' + wxyz   (3.7)

Although it is not obvious at this time, Eq. (3.7) is also the simplest disjunctive nor-
mal formula, i.e., a formula consisting of a sum of product terms, describing the
odd-parity-bit generator. The reader will be able to readily verify this fact after
studying techniques for obtaining minimal expressions in the next chapter. Alterna-
tively, the maxterm canonical formula for this example could be written, which is
the simplest conjunctive normal formula, i.e., a formula consisting of a product of
sum terms. Equation (3.7) is used to obtain the logic diagram shown in Fig. 3.5.
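Summing the minterms of the rows in Table 3.10 where p = 1 must reproduce the parity rule itself: p = 1 exactly when the input state contains an even number of 1's. A short Python sketch (ours; primes are modeled by 1 − v) confirms this:

```python
from itertools import product

n_ = lambda v: 1 - v   # complement of a 0/1 value

def p_minterms(w, x, y, z):
    """Sum (or) of the eight minterms from the 1-rows of Table 3.10."""
    return max(
        n_(w) & n_(x) & n_(y) & n_(z),  # m0
        n_(w) & n_(x) & y & z,          # m3
        n_(w) & x & n_(y) & z,          # m5
        n_(w) & x & y & n_(z),          # m6
        w & n_(x) & n_(y) & z,          # m9
        w & n_(x) & y & n_(z),          # m10
        w & x & n_(y) & n_(z),          # m12
        w & x & y & z,                  # m15
    )

for v in product((0, 1), repeat=4):
    assert p_minterms(*v) == (1 if sum(v) % 2 == 0 else 0)
print("minterm sum matches the even-ones parity rule")
```

Each product term is 1 on exactly one row, so the check also illustrates why a minterm canonical formula is read off row by row.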
Figure 3.5 Two-level gate network for the odd-parity-bit generator.

In general, a disjunctive normal formula always results in a network consisting
of a set of and-gates followed by a single or-gate, while a conjunctive normal
formula always results in a network consisting of a set of or-gates followed by a single


and-gate. Under the assumption of double-rail logic, every variable is available in
both its complemented and uncomplemented form. Hence, not-gates are unneces-
sary and a realization based on either a disjunctive or conjunctive normal formula is
a two-level network.

3.8 INCOMPLETE BOOLEAN FUNCTIONS AND DON'T-CARE CONDITIONS
In Sec. 3.4 a Boolean function was defined as a mapping that assigns a unique func-
tional value to each combination of values of the n independent variables in which
all values are limited to the set {0,1}. This type of Boolean function is said to be
completely specified. An incomplete Boolean function differs from a complete
Boolean function in that functional values are assigned to only a proper subset of the
combinations of values of the n independent variables. Formally, an n-variable
incomplete Boolean function f(x₁, x₂, ..., xₙ) is a mapping that assigns a unique value,
called the value of the function, to a proper subset of the 2ⁿ combinations of values of
the n independent variables in which all specified values are limited to the set {0,1}.
Incomplete Boolean functions are also called incompletely specified functions.
As in the case of a complete Boolean function, an n-variable incomplete
Boolean function is represented by a truth table with n + 1 columns and 2ⁿ rows.
Again the first n columns provide for a complete listing of all the 0-1 combinations
of values of the n variables and the last column gives the value of the function for
each row. However, for those combinations of values in which a functional value is
not to be specified, a symbol, say, –, is entered as the functional value in the last
column of the table. Table 3.11a illustrates an incomplete Boolean function in
which functional values are not specified for the combinations (x,y,z) = (0,1,1) and
(1,0,1).
The complement, f'(x₁, x₂, ..., xₙ), of an incomplete Boolean function
f(x₁, x₂, ..., xₙ) is also an incomplete Boolean function having the same unspecified
rows in the truth table. The functional values in the remaining rows of the truth
table for f', however, are opposite to the functional values in the corresponding rows
of the truth table for f. Table 3.11b shows the complement of the Boolean function
given in Table 3.11a.

Table 3.11 An example of a three-variable incomplete Boolean function and its complement

x  y  z  f
0  0  0  1
0  0  1  1
0  1  0  0
0  1  1  –
1  0  0  0
1  0  1  –
1  1  0  0
1  1  1  1
(a)

x  y  z  f'
0  0  0  0
0  0  1  0
0  1  0  1
0  1  1  –
1  0  0  1
1  0  1  –
1  1  0  1
1  1  1  0
(b)

3.8.1 Describing Incomplete Boolean Functions


It was shown in Sec. 3.5 that a complete Boolean function can always be described
by either a minterm canonical formula or a maxterm canonical formula in decimal
notation. In order to obtain similar-type expressions for incomplete Boolean func-
tions, a slight modification is needed.
Because of the way in which incomplete Boolean functions arise from logic-
design problems, those rows of the truth table in which the functional values are not
specified are called don't-care conditions. As was done previously, minterms and
maxterms are still written for those rows having 1 and 0 functional values, respectively.
However, it is also necessary to append information indicating the unspecified
rows, i.e., the don't-care conditions. The most common approach is to simply
add to the minterm or maxterm canonical formula a listing of the decimal equiva-
lents of the rows associated with the don’t-care conditions. For example, again con-
sider Table 3.11a. Since the don’t-care conditions correspond to rows 3 and 5, the
term

dc(3,5)

is appended to the canonical formulas in decimal notation. In particular, the
minterm canonical formula for Table 3.11a is given as

f(x,y,z) = Σm(0,1,7) + dc(3,5)

and the maxterm canonical formula is written as

f(x,y,z) = ΠM(2,4,6) + dc(3,5)
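The decimal bookkeeping for an incomplete function can be sketched directly: carry the 1-set and the don't-care set, and everything else follows (an illustrative Python sketch of our own, using Table 3.11a's function):

```python
# f(x,y,z) = sum m(0,1,7) + dc(3,5), with n = 3 variables.
ones, dc, n = {0, 1, 7}, {3, 5}, 3

# The 0-rows (the maxterm list) are whatever remains.
zeros = set(range(2 ** n)) - ones - dc

# Complementing swaps the 1-rows and 0-rows; the don't-care rows stay put.
comp_ones, comp_dc = zeros, dc

print(sorted(zeros))                       # maxterm list of f
print(sorted(comp_ones), sorted(comp_dc))  # 1-set and dc-set of f'
```

The three sets partition the 2ⁿ rows, which is why listing any two of them fully describes the incomplete function.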


Manipulating Boolean equations, using the postulates and theorems of Boolean
algebra, that are derived from incomplete Boolean functions is a very difficult task
since one is free to use at will the various don’t-care conditions in the manipula-
tions. However, as is shown in the next chapter, for the purpose of obtaining mini-
mal expressions there are procedures that can handle the don’t-care conditions with
no great amount of complexity.

3.8.2 Don’t-Care Conditions in Logic Design


In the previous section, consideration was given to the logic design of gate net-
works that are described by complete Boolean functions. This involved constructing
a truth table and specifying an output state for each row. Many logic-design prob-
lems, however, have a restricted set of input states. These problems are character-
ized by incomplete Boolean functions.
There are two ways in which a restricted set of input states can come about dur-
ing the formulation of a mathematical model from the specifications of a problem.
First, some input states may simply never occur. Consequently, the output states are
irrelevant. Second, the input states may occur, but the corresponding output states
need not be specified because of the environment in which the network is placed. In
either case, these input states correspond to don’t-care conditions.

These two types of don’t-care conditions are equivalent for the purpose of for-
malizing the statement of a problem. If an input state never occurs, then the output
state cannot be observed. For the case in which the output state is not specified for
some input state that can occur, by convention it is agreed that the output state is of
no consequence or, equivalently, is not to be observed. Thus, it is concluded that
when a design problem involves don’t-care conditions, a truth table may be formed
without the need of specifying all the functional values. These truth tables are in-
complete Boolean functions.
Once a gate network is synthesized, it is completely deterministic. That is, for
every input state some output state must result. This is true even if a don’t-care condi-
tion is applied to the network. Don’t-care conditions provide the designer with flexi-
bility. In particular, this allows the designer to judiciously choose the assignment of
the output states to the don’t-care conditions so that the realization is optimal. This
procedure does not violate the mathematical concept of incomplete Boolean functions
since incomplete Boolean functions describe a network’s required behavior only for a
selected set of input states. Thus, in designing such networks, the output states for the
required behavior are determined by the problem specifications, and the output states
for the don’t-care conditions are determined by a criterion of an optimal realization.
To illustrate the formulation of an incomplete Boolean function as a mathematical
model for a logic-design problem, assume it is desired to design a gate network that is
an odd-parity-bit generator for the decimal digits 0 to 9 represented in 8421 BCD.
Table 3.12 gives the truth table for the odd-parity-bit generator. The variable w denotes
the bit in the 8421 code group whose weight is 8, the variable x denotes the bit whose

Table 3.12 Truth table of an odd-parity-bit generator when the decimal digits are represented in 8421 BCD

w  x  y  z  p
0  0  0  0  1
0  0  0  1  0
0  0  1  0  0
0  0  1  1  1
0  1  0  0  0
0  1  0  1  1
0  1  1  0  1
0  1  1  1  0
1  0  0  0  0
1  0  0  1  1
1  0  1  0  –
1  0  1  1  –
1  1  0  0  –
1  1  0  1  –
1  1  1  0  –
1  1  1  1  –
Figure 3.6 Logic diagram for the 8421 BCD odd-parity-bit generator.

weight is 4, the variable y denotes the bit whose weight is 2, and the variable z denotes
the bit whose weight is 1. The parity bit that is to be produced by the gate network is
shown in the p column. Functional values are only specified for the first 10 rows as de-
termined by the statement of the problem. The last six rows are don’t-care conditions
since the binary combinations 1010 to 1111 cannot occur because they do not represent
possible inputs to the logic network. That is, this network can only have as its inputs the
10 binary combinations associated with the 10 decimal digits expressed in 8421 BCD.
The minterm canonical formula describing Table 3.12 is

p(w,x,y,z) = Σm(0,3,5,6,9) + dc(10,11,12,13,14,15)

Using the procedures of the next chapter, a simplified expression for this function is

p(w,x,y,z) = w'x'y'z' + x'yz + xy'z + xyz' + wz


The corresponding gate network is given in Fig. 3.6.
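One simplified sum-of-products form for this function is w'x'y'z' + x'yz + xy'z + xyz' + wz (primes denoting complements). A quick Python check of our own confirms it matches the odd-parity specification on the ten valid BCD inputs; the six unused codes 1010 through 1111 are don't-cares and are deliberately not checked:

```python
n_ = lambda v: 1 - v   # complement of a 0/1 value

def p(w, x, y, z):
    """Simplified SOP form for the 8421 BCD odd-parity bit."""
    return max(n_(w) & n_(x) & n_(y) & n_(z),
               n_(x) & y & z,
               x & n_(y) & z,
               x & y & n_(z),
               w & z)

for d in range(10):   # only the BCD codes 0000..1001 must match the spec
    w, x, y, z = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    assert p(w, x, y, z) == (1 if (w + x + y + z) % 2 == 0 else 0)
print("matches on BCD digits 0-9")
```

Note that the term wz also covers several don't-care rows, which is exactly the freedom that makes the expression smaller than Eq. (3.7).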

3.9 ADDITIONAL BOOLEAN OPERATIONS AND GATES
Up to this point, the development of a Boolean algebra and the correspondence be-
tween a Boolean algebra and gate realizations have been limited to three Boolean op-
erations: and, or, and not. In actuality, these operations are really Boolean functions
since they provide a mapping of a set of 0 and 1 symbol combinations onto the set of
0 and 1 symbols. These mappings are the tabular definitions previously stated as part
of Theorem 3.10 in Sec. 3.3. Furthermore, a gate is an electronic circuit realization
of a Boolean function. At this time, some additional specialized Boolean functions
that are regarded as operations and having commercial gate realizations are studied.

Table 3.13 The nand-function

x  y  xy  nand(x,y) = (xy)'
0  0  0   1
0  1  0   1
1  0  0   1
1  1  1   0

3.9.1 The Nand-Function


Consider Table 3.13. The third column corresponds to the and-function of x and y
and, hence, is described by the expression xy. The fourth column is the complement
of the third column and is the definition of the nand-function, or simply nand, of x
and y. Algebraically this is written as (xy)'. As is evident by the Boolean expression,
the name nand is a contraction of not-and.
It is possible to generalize the definition of the nand-function when more than
two variables are involved. In particular, the nand of n variables, x₁, x₂, ..., xₙ, is
defined algebraically as*

nand(x₁, x₂, ..., xₙ) = (x₁x₂···xₙ)'

The nand-function is readily realizable with electronic circuit elements. A sym-


bol for the nand-gate is shown in Fig. 3.7a. This symbol is a composite of the and-
gate symbol and the inversion bubble associated with a not-gate. In general, a small
circle on an input or output terminal of a gate symbol is regarded as the Boolean
not-operation. Furthermore, as a consequence of DeMorgan’s law, it immediately
follows that

(X|X2""" Xn) = xX sir x ane = ve

This expression provides an alternate algebraic description of the nand-function and


suggests the alternate gate symbol shown in Fig. 3.7b. Verbally, the output of a
nand-gate is a logic-1 if and only if at least one of its inputs has the value of logic-0;
otherwise, the output is logic-0.
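The two algebraic descriptions of the nand-function can be compared mechanically (a small Python sketch of our own):

```python
from itertools import product

# nand as not-and, and its DeMorgan form as a sum of complemented
# literals, compared for all three-input states.
nand     = lambda *xs: 1 - min(xs)              # (x1 x2 ... xn)'
demorgan = lambda *xs: max(1 - x for x in xs)   # x1' + x2' + ... + xn'

assert all(nand(*v) == demorgan(*v) for v in product((0, 1), repeat=3))
print("nand(x,y,z) == x' + y' + z' for every input state")
```

For 0/1 values, min plays the role of the and-operation and max the or-operation, which keeps the sketch close to the algebra.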

Figure 3.7 Nand-gate symbols. (a) Normal symbol. (b) Alternate symbol.

*Occasionally a special symbol is used to denote the nand-operation on a set of variables. One such
symbol is | and is referred to as a stroke. Thus, the stroke-operation on a set of n variables is defined by
the expression x₁ | x₂ | ··· | xₙ = (x₁x₂···xₙ)'.

Table 3.14 The nor-function

x  y  x + y  nor(x,y) = (x + y)'
0  0  0      1
0  1  1      0
1  0  1      0
1  1  1      0

3.9.2 The Nor-Function


The dual concept to the nand-function is the nor-function. The third column of
Table 3.14 corresponds to the or-function as applied to the variables x and y. The
complement of this function is given as the fourth column and serves as the definition
of the nor-function for two variables. The name nor is a contraction for not-or.
Since the fourth column of Table 3.14 is the complement of the third, it immediately
follows that an algebraic description of the nor-function is

nor(x,y) = (x + y)'

For the general case of n variables,*

nor(x₁, x₂, ..., xₙ) = (x₁ + x₂ + ··· + xₙ)'

In an analogous manner to that for the nand-function, the definition of the
nor-function suggests the gate symbol shown in Fig. 3.8a. An alternate gate symbol
is given in Fig. 3.8b and follows by the use of DeMorgan's law, i.e.,

(x₁ + x₂ + ··· + xₙ)' = x₁'x₂'···xₙ'

Again inversion bubbles are used in the gate symbols to indicate that algebraically a
Boolean not-operation occurs at the terminal at which the inversion bubble appears.
Verbally, the output of a nor-gate is logic-1 if and only if all of its inputs are at
logic-0; otherwise, the output is logic-0.

3.9.3 Universal Gates


An important property of nand-gates and nor-gates is that it is possible to realize
any combinational network with a collection of just one of these gate types. When

Figure 3.8 Nor-gate symbols. (a) Normal symbol. (b) Alternate symbol.

*Occasionally a special symbol is used to denote the nor-operation on a set of variables. One such
symbol is †, referred to as a dagger. Thus, the dagger-operation on a set of n variables is defined by the
expression x₁ † x₂ † ··· † xₙ = (x₁ + x₂ + ··· + xₙ)'.

network configurations utilizing only a single type of gate result in the realizations
of the and-, or-, and not-functions, such a gate is called a universal gate. Both nand-
gates and nor-gates are universal gates. Another important property of nand-gates
and nor-gates is that their circuit realizations are more easily achieved.
The universal property of nand-gates is illustrated in Fig. 3.9. Since in Boolean
algebra xx = x, by complementing both sides of the expression it immediately
follows that (xx)' = x'. Hence, as shown in Fig. 3.9a, a two-input nand-gate with its
inputs tied together is equivalent to a not-gate. Alternatively, since in Boolean algebra
x·1 = x, then (x·1)' = x' implies that a two-input nand-gate in which one input is x
and the other is the constant logic-1 also serves as a not-gate. Figure 3.9b illustrates
how the or-function is realized by the use of just nand-gates. In particular, since
(x'y')' = x + y, using two nand-gates to form the complements of x and y and then
using these as the inputs to a third nand-gate, the overall behavior of the network is
that of the Boolean or-function. Finally, since [(xy)']' = xy, the Boolean and-function
is achieved by the network of Fig. 3.9c where the inputs x and y are applied to a single
nand-gate to form (xy)' and then the output of the gate is complemented, using a
second nand-gate, to obtain the desired results.
Nor-gates are also universal gates. Thus, they can be used to form x', x + y, and
xy according to the relationships

(x + x)' = x'   or   (x + 0)' = x'

[(x + y)']' = x + y

(x' + y')' = xy

These relationships are illustrated in Fig. 3.10.

Figure 3.9 The universal property of nand-gates. (a) Not realization. (b) Or realization.
(c) And realization.

Figure 3.10 The universal property of nor-gates. (a) Not realization. (b) Or realization.
(c) And realization.

As a result of the above discussion, any Boolean expression is realizable with a


sufficient number of a single type of universal gate. For example, this can be done
by simply replacing the and-, or-, and not-gates in a logic diagram by the networks
shown in either Fig. 3.9 or Fig. 3.10. However, such an approach uses an excessive
number of gates.
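These substitutions can be verified mechanically. The following Python sketch (the function names are illustrative, not from the text) models a two-input nand-gate and checks the not-, and-, and or-constructions of Fig. 3.9 by perfect induction:

```python
# Model a two-input nand-gate; 0 and 1 are the two logic values.
def nand(a, b):
    return 0 if (a and b) else 1

# Not-function: tie both nand inputs together, since (x·x)' = x'.
def not_from_nand(x):
    return nand(x, x)

# And-function: a nand followed by a nand used as an inverter, since [(xy)']' = xy.
def and_from_nand(x, y):
    return not_from_nand(nand(x, y))

# Or-function: complement each input, then nand, since (x'y')' = x + y.
def or_from_nand(x, y):
    return nand(not_from_nand(x), not_from_nand(y))

# Exhaustive check against the defining truth tables.
for x in (0, 1):
    assert not_from_nand(x) == 1 - x
    assert nand(x, 1) == 1 - x          # the alternative not realization, (x·1)' = x'
    for y in (0, 1):
        assert and_from_nand(x, y) == (x & y)
        assert or_from_nand(x, y) == (x | y)
```

The same exhaustive style of check applies to the nor-gate constructions of Fig. 3.10.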

3.9.4 Nand-Gate Realizations


A better approach for obtaining a logic diagram of a combinational network utiliz-
ing only nand-gates involves manipulating its Boolean expression into the general
form of the algebraic definition of the nand-function, i.e., nand(A, B, ..., C) =
(AB···C)′. To illustrate this procedure, consider the expression

f(w,x,y,z) = wz + wz(x + y)

Using DeMorgan’s law, this expression is rewritten as

f(w,x,y,z) = {(wz)′[wz(x + y)]′}′


It is now noted that the general form of this expression is (AB)′ where A = (wz)′ and
B = [wz(x + y)]′. Thus, a nand-gate having A and B as inputs results in the desired
realization. This step is shown in Fig. 3.11a. The process is now repeated by manip-
ulating the expressions for A and B so that they have the general form of the alge-
braic definition of the nand-function. Both of the expressions A = (wz)′ and B =
[wz(x + y)]′ already have the desired form. Thus, the expression for A is realized with a
nand-gate having inputs w and z, while the expression for B is realized with a nand-
gate having inputs w, z, and (x + y). This step is shown in Fig. 3.11b. Finally, it is

Figure 3.11 Steps involved to realize the Boolean expression f(w,x,y,z) = wz + wz(x + y)
using only nand-gates.

necessary to obtain the x + y input. Again, the general approach of manipulating an
expression to conform to the definition of the nand-function is applied. In this case,
x + y = (x′y′)′. This implies that the x + y input can be obtained as the output of a
nand-gate whose inputs are x′ and y′. Under the assumption of double-rail logic, the
final realization is shown in Fig. 3.11c.
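The decomposition just carried out can be checked by exhaustive evaluation. In this hypothetical Python sketch (names are illustrative), f_nand composes nothing but nand-gates, with complemented inputs assumed available per double-rail logic, and is compared against the original expression for all 16 input combinations:

```python
def nand(*inputs):
    # A nand-gate outputs 0 only when every input is 1.
    return 0 if all(inputs) else 1

def f_original(w, x, y, z):
    # f(w,x,y,z) = wz + wz(x + y), evaluated directly.
    return (w & z) | (w & z & (x | y))

def f_nand(w, x, y, z):
    # Double-rail logic: complemented inputs are assumed available.
    x_c, y_c = 1 - x, 1 - y
    g_or = nand(x_c, y_c)          # x + y = (x'y')'
    a = nand(w, z)                 # A = (wz)'
    b = nand(w, z, g_or)           # B = [wz(x + y)]'
    return nand(a, b)              # f = (A·B)'

for n in range(16):
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert f_nand(w, x, y, z) == f_original(w, x, y, z)
```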
To apply the above procedure, it is necessary that the highest-order opera-
tion in the given Boolean expression be an or-operation. The highest-order oper-
ation of an expression is the last operation that is performed when the expres-
sion is evaluated. In the above example, the highest-order operation is the
or-operation connecting the wz and wz(x + y) terms. If the highest-order opera-
tion of a Boolean expression is the and-operation, then the above procedure must
be modified slightly. In particular, the expression is first complemented. This
causes an overbar to occur over the entire expression, with the net result that the
appropriate form for a nand-gate realization is obtained. The above realization
procedure is then carried out on the complemented expression. Finally, a not-
gate (or a nand-gate equivalent as previously shown in Fig. 3.9a) is placed at the
output of the network for f to complete the realization.
As an illustration of this variation, consider the expression

f(x,y,z) = x(y + z)

Since the highest-order operation is the and-operation connecting x with y + z, it is
necessary to first complement the expression to begin the nand-gate realization pro-
cedure; that is,

f′(x,y,z) = [x(y + z)]′
Figure 3.12 Steps involved to realize the Boolean expression f(x,y,z) = x(y + z) using only nand-gates.

Now that the desired form for a nand-gate realization is obtained, the procedure ex-
plained previously is carried out. This is illustrated in Fig. 3.12a and b. Finally, in
order to obtain a realization of the original expression f, the output of the network is
complemented as shown in Fig. 3.12c.
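The complement-first variation can be sketched the same way; here a final nand-gate wired as an inverter recovers f from f′ (function names are illustrative):

```python
def nand(*inputs):
    return 0 if all(inputs) else 1

def f_nand(x, y, z):
    # f(x,y,z) = x(y + z); the highest-order operation is the and-operation,
    # so realize f' = [x(y + z)]' first, then invert.
    g_or = nand(1 - y, 1 - z)      # y + z = (y'z')', double-rail inputs assumed
    f_c = nand(x, g_or)            # f' = [x(y + z)]'
    return nand(f_c, f_c)          # output inverter built from a nand-gate

for n in range(8):
    x, y, z = (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert f_nand(x, y, z) == (x & (y | z))
```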
The above algebraic procedure to obtain a nand-gate realization can also be
performed graphically. This procedure makes use of the two gate symbols shown in
Fig. 3.7 for a nand-gate. By making use of both symbols within the same logic dia-
gram, the application of DeMorgan’s law, required in the above algebraic proce-
dure, is readily achieved. The steps in the graphical procedure are as follows:
1. Apply DeMorgan’s law to the expression so that all unary operations appear only
with single variables. Draw the logic diagram using and-gates and or-gates.
2. Replace each and-gate symbol by the nand-gate symbol of Fig. 3.7a and each
or-gate symbol by the nand-gate symbol of Fig. 3.7b.
3. Check the bubbles occurring on all lines between two gate symbols. For every
bubble that is not compensated by another bubble along the same line, insert
the appropriate not-gate symbol from Fig. 3.13 so that the not-gate bubble
occurs on the same side as the gate bubble.
4. Whenever an input variable enters a gate symbol at a bubble, complement the
variable. If the output line has a bubble, then insert an output not-gate symbol.
5. Replace all not-gates by a nand-gate equivalent if desired.
To illustrate this graphical procedure, again consider the expression

f(w,x,y,z) = wz + wz(x + y)

Figure 3.13 Two equivalent not-gate symbols.



Figure 3.14 Steps illustrating the graphical procedure for obtaining a nand-gate realization of the expression
f(w,x,y,z) = wz + wz(x + y).

The corresponding logic diagram using and-gates and or-gates is shown in Fig.
3.14a. Next, according to Step 2, each and-gate symbol is replaced by the nand-gate
symbol of Fig. 3.7a and each or-gate symbol is replaced by the nand-gate symbol of
Fig. 3.7b, as shown in Fig. 3.14b. Since each gate output bubble is connected directly
to a gate input bubble, Step 3 is not needed. Finally, as indicated in Step 4, since the
inputs x and y in Fig. 3.14b are entering at bubbles, they are complemented as
shown in Fig. 3.14c. If the alternate nand-gate symbols are replaced by the conven-
tional nand-gate symbols, then the network of Fig. 3.14c becomes that of Fig. 3.11c.

3.9.5 Nor-Gate Realizations


In much the same way that nand-gate realizations are obtained from Boolean ex-
pressions, nor-gate realizations can also be obtained. One such procedure is based
on manipulating the Boolean expression describing the network behavior to con-
form to the general algebraic definition of the nor-function; that is, nor(A, B, ..., C)
= (A + B + ··· + C)′. If the highest-order operation of the original expression is
the and-operation, then it is a simple matter to progress step by step, starting with
the output nor-gate, to produce the desired logic diagram. On the other hand, if
the highest-order operation is the or-operation, then the procedure must be modified
slightly. The logic diagram for the complement of the original expression is first ob-
tained and then a not-gate (or a not-gate equivalent as previously shown in Fig.
3.10a) is placed at the output.
Again consider the Boolean expression

f(w,x,y,z) = wz + wz(x + y)

To construct a nor-gate realization for it, first it is noted that the highest-order oper-
ation is the or-operation appearing between the wz and wz(x + y) terms. Thus, it is
necessary to first complement the original expression, i.e.,

f′(w,x,y,z) = [wz + wz(x + y)]′


Since the general form of this expression is (A + B)′ where A = wz and B =
wz(x + y), it immediately follows that f′ appears at the output of a nor-gate
whose inputs are wz and wz(x + y). This is shown in Fig. 3.15a. The two terms
wz and wz(x + y) are next rewritten so as to have the appropriate form according
to the algebraic definition of the nor-function. In particular, wz = (w′ + z′)′ and
wz(x + y) = [w′ + z′ + (x + y)′]′. These two expressions are now realized using
nor-gates as shown in Fig. 3.15b. It is immediately noted that the term (x + y)′
is simply realizable as the output from a nor-gate having x and y as inputs. At
this point, the logic diagram appears as in Fig. 3.15c. Finally, an output nor-gate
is used to complement the function f′ thus far realized in order to obtain the real-
ization of the original Boolean expression f. The final logic diagram is given in
Fig. 3.15d.
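As with the nand case, the nor decomposition can be checked exhaustively. The sketch below mirrors the structure of Fig. 3.15, with a single-input nor() call standing for a nor-gate used as a not-gate (names are illustrative):

```python
def nor(*inputs):
    # A nor-gate outputs 1 only when every input is 0.
    return 1 if not any(inputs) else 0

def f_nor(w, x, y, z):
    w_c, z_c = 1 - w, 1 - z        # double-rail inputs assumed available
    g = nor(x, y)                  # (x + y)'
    t1 = nor(w_c, z_c)             # wz = (w' + z')'
    t2 = nor(w_c, z_c, g)          # wz(x + y) = [w' + z' + (x + y)']'
    f_c = nor(t1, t2)              # f' = [wz + wz(x + y)]'
    return nor(f_c)                # output nor-gate used as an inverter

for n in range(16):
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert f_nor(w, x, y, z) == ((w & z) | (w & z & (x | y)))
```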

Figure 3.15 Steps involved to realize the Boolean expression f(w,x,y,z) = wz + wz(x + y) using only nor-gates.

Analogously to that of nand-gate realizations, the above algebraic procedure


can be performed graphically. In this case, use is made of the two gate symbols
shown in Fig. 3.8 for nor-gates. The steps of this graphical procedure are as follows:

1. Apply DeMorgan’s law to the expression so that all unary operations appear
only with single variables. Draw the logic diagram using and-gates and
or-gates.
2. Replace each or-gate symbol by the nor-gate symbol of Fig. 3.8a and each and-
gate symbol by the nor-gate symbol of Fig. 3.8b.
3. Check the bubbles occurring on all lines between two gate symbols. For every
bubble that is not compensated by another bubble along the same line, insert
the appropriate not-gate symbol from Fig. 3.13 so that the not-gate bubble
occurs on the same side as the gate bubble.
4. Whenever an input variable enters a gate symbol at a bubble, complement the
variable. If the output line has a bubble, then insert an output not-gate symbol.
5. Replace all not-gates by a nor-gate equivalent if desired.

Figure 3.16 shows the steps of the graphical procedure again being applied to
the Boolean expression

f(w,x,y,z) = wz + wz(x + y)
The logic diagram is first drawn using conventional and-gates and or-gates as
shown in Fig. 3.16a. As specified in Step 2 of the above procedure, each or-gate
symbol is replaced by the nor-gate symbol of Fig. 3.8a and each and-gate symbol is
replaced by the nor-gate symbol of Fig. 3.8b. At this point, the diagram appears as

Figure 3.16 Steps illustrating the graphical procedure for obtaining a nor-gate realization of the expression
f(w,x,y,z) = wz + wz(x + y).

Table 3.15 The exclusive-or-function

x y | x⊕y
0 0 |  0
0 1 |  1
1 0 |  1
1 1 |  0

in Fig. 3.16b. Checking all lines between gates, each inversion bubble at one end
has a matching inversion bubble at the other end. Hence, Step 3 does not have to be
applied. Finally, as indicated by Step 4, the four inputs entering at bubbles are com-
plemented and an output not-gate is appended since the output gate in Fig. 3.16b
has a bubble. This gives the logic diagram shown in Fig. 3.16c. Comparing Fig.
3.16c to Fig. 3.15d and recalling that two symbols are possible for a nor-gate, it is
seen that the same results are obtained.

3.9.6 The Exclusive-Or-Function


Another specialized Boolean function of interest is the exclusive-or-function. The
exclusive-or of x and y is denoted by x⊕y and is defined by Table 3.15. As indicated
by the definition, the exclusive-or of x and y is a logic-1 if and only if x or y, but not
both x and y, has the value of logic-1; otherwise, the exclusive-or is logic-0. As is
evident from the definition, it is possible to write the algebraic expression

x⊕y = x′y + xy′
Comparing the definition of the exclusive-or-function with that of the Boolean or-
function previously defined, it is seen that they differ only when both x and y have
the value of logic-1. To emphasize this distinction, the conventional Boolean or-
function is also referred to as the inclusive-or-function.
A special gate symbol has been defined for the exclusive-or-function. This
symbol is shown in Fig. 3.17 and is frequently referred to as an xor-gate. Normally,
xor-gates are available only as two-input gates.
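A quick perfect-induction check of the two algebraic forms of the exclusive-or, written as a Python sketch (helper names are illustrative):

```python
def xor_sop(x, y):
    # Sum-of-products form of the exclusive-or: x'y + xy'
    return ((1 - x) & y) | (x & (1 - y))

def xor_pos(x, y):
    # Product-of-sums form: (x + y)(x' + y')
    return (x | y) & ((1 - x) | (1 - y))

# Perfect induction over the four input combinations; Python's ^ operator
# serves as the reference exclusive-or on 0/1 values.
for x in (0, 1):
    for y in (0, 1):
        assert xor_sop(x, y) == (x ^ y) == xor_pos(x, y)
```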
The exclusive-or-function has many interesting properties. These are summa-
rized in Table 3.16. Uppercase letters are used in this table to emphasize the fact
that these variables can represent expressions as well as single variables. To reduce
the number of occurrences of parentheses, by convention, it is assumed that the and-
operation takes precedence over the exclusive-or-operation. No precedence between
the inclusive-or- and exclusive-or-operations is assumed and, hence, parentheses

Figure 3.17 The xor-gate symbol.

Table 3.16 Properties of the exclusive-or-operation

         (a)                                     (b)
(i)      X⊕Y = X′Y + XY′ = (X + Y)(X′ + Y′)      (X⊕Y)′ = XY + X′Y′ = (X + Y′)(X′ + Y)
(ii)     X⊕0 = X                                 X⊕1 = X′
(iii)    X⊕X = 0                                 X⊕X′ = 1
(iv)     X′⊕Y′ = X⊕Y                             X′⊕Y = X⊕Y′ = (X⊕Y)′
(v)      X⊕Y = Y⊕X
(vi)     X⊕(Y⊕Z) = (X⊕Y)⊕Z = X⊕Y⊕Z
(vii)    X(Y⊕Z) = XY⊕XZ
(viii)   X + Y = X⊕Y⊕XY
(ix)     X⊕Y = X + Y if and only if XY = 0
(x)      If X⊕Y = Z, then Y⊕Z = X and X⊕Z = Y
are needed when these two operations appear within an expression. All of the prop-
erties given in Table 3.16 are easily proved by perfect induction.
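Several of the Table 3.16 properties can likewise be proved by perfect induction with a short exhaustive loop; the sketch below checks properties (v) through (x):

```python
from itertools import product

def xor(a, b):
    # Exclusive-or on 0/1 logic values.
    return a ^ b

for X, Y, Z in product((0, 1), repeat=3):
    assert xor(X, Y) == xor(Y, X)                       # (v)  commutativity
    assert xor(X, xor(Y, Z)) == xor(xor(X, Y), Z)       # (vi) associativity
    assert X & xor(Y, Z) == xor(X & Y, X & Z)           # (vii) and distributes over xor
    assert (X | Y) == xor(xor(X, Y), X & Y)             # (viii)
    if X & Y == 0:
        assert xor(X, Y) == (X | Y)                     # (ix)
    if xor(X, Y) == Z:
        assert xor(Y, Z) == X and xor(X, Z) == Y        # (x)
```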
To illustrate the use of the exclusive-or properties given in Table 3.16 and to
show the usefulness of xor-gates in logic design, again consider the odd-parity-bit
generator that was designed in Sec. 3.7. At that time the Boolean expression for the
generator was obtained, i.e., Eq. (3.7). Starting from that point, the following
Boolean manipulations can now be performed:
p(w,x,y,z) = w′x′y′z′ + w′x′yz + w′xy′z + w′xyz′ + wx′y′z + wx′yz′ + wxy′z′ + wxyz
           = w′x′(y′z′ + yz) + w′x(y′z + yz′) + wx′(y′z + yz′) + wx(y′z′ + yz)
           = w′x′(y⊕z)′ + w′x(y⊕z) + wx′(y⊕z) + wx(y⊕z)′        by Prop. (ia-b)
           = (w′x′ + wx)(y⊕z)′ + (w′x + wx′)(y⊕z)
           = (w⊕x)′(y⊕z)′ + (w⊕x)(y⊕z)                          by Prop. (ia-b)
           = [(w⊕x) ⊕ (y⊕z)]′                                   by Prop. (ib)
           = (w⊕x)′ ⊕ (y⊕z)                                     by Prop. (ivb)
           = (w⊕x) ⊕ (y⊕z)′                                     by Prop. (ivb)
A realization based on the final equation is shown in Fig. 3.18 in which it is as-
sumed that only two-input exclusive-or-gates are available. It should be noted how
much simpler this realization is than the one given in Fig. 3.5.
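The final equation can be confirmed to generate odd parity by exhaustive check; in this sketch, Python's ^ operator plays the role of a two-input xor-gate:

```python
def parity_bit(w, x, y, z):
    # Odd-parity bit from the xor-gate realization: p = [(w ⊕ x) ⊕ (y ⊕ z)]'
    return 1 - ((w ^ x) ^ (y ^ z))

for n in range(16):
    bits = [(n >> i) & 1 for i in range(4)]
    p = parity_bit(*bits)
    # The four data bits plus the parity bit always contain an odd number of 1s.
    assert (sum(bits) + p) % 2 == 1
```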

Figure 3.18 Another realization of the odd-parity-bit generator.

Table 3.17 The exclusive-nor-function

x y | x⊙y
0 0 |  1
0 1 |  0
1 0 |  0
1 1 |  1

Figure 3.19 The xnor-gate symbol.

3.9.7 The Exclusive-Nor-Function


The last specialized Boolean function to be introduced is the exclusive-nor-function,
which is simply the complement of the exclusive-or-function. Thus, the exclusive-
nor of x and y, written x⊙y, is a logic-1 if and only if the logic values of both x and y
are the same; otherwise, the value of x⊙y is logic-0. For this reason, the exclusive-nor-
function is also called the equivalence-function. The definition of the exclusive-nor-
function is tabulated in Table 3.17. The gate symbol for this function is shown in
Fig. 3.19 and is frequently referred to as an xnor-gate. As in the case of the xor-
gates, xnor-gates are normally only available with two inputs.
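A one-line model suffices to confirm that the exclusive-nor acts as an equivalence-function (a sketch with illustrative names):

```python
def xnor(x, y):
    # Exclusive-nor: the complement of the exclusive-or.
    return 1 - (x ^ y)

# Perfect induction: the output is 1 exactly when the two inputs agree.
for x in (0, 1):
    for y in (0, 1):
        assert xnor(x, y) == (1 if x == y else 0)
```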

3.10 GATE PROPERTIES


Several types of logical gates were introduced in this chapter. It has been shown
that there is a direct correspondence between Boolean equations and the topological
structure of combinational gate networks. In establishing the two-valued Boolean
algebra as a mathematical model for combinational logic networks, it was assumed
that only two signal values occur at the gate terminals and that the transient behav-
ior of the combinational networks was to be ignored.
Actual circuits of the logic gates are discussed in the Appendix. In general,
there are many different circuit designs for a given gate type. These designs are de-
pendent upon the components and circuit technology used. A class of digital circuits
having a common circuit technology and general structure is called a logic family.
Among these are transistor-transistor logic (TTL), emitter-coupled logic (ECL),
and complementary metal-oxide semiconductor logic (CMOS logic). Furthermore,
within each logic family there is usually more than one logic series. The circuits
within the series have, in addition to a specific circuit technology and structure,
some common distinctive characteristic. For example, within the TTL logic family
some of the logic series are 54/74 standard TTL, Schottky TTL, advanced Schottky
TTL, low-power Schottky TTL, and advanced low-power Schottky TTL. The ap-
propriate logic family and series for a given digital system depend upon what oper-
ating requirements must be met.
Although the Appendix deals with the specifics of various logic families, it is
appropriate at this time to consider some of the gate properties that are relevant to
the logic design process. As was previously mentioned, the two signal values associ-
ated with logic-0 and logic-1 are not really single values but, rather, ranges of values.
The assignment being used in this book is that if a signal value is in some low-level

Figure 3.20 Voltage ranges of logic inputs for positive logic. (Voltages between
V_L(min) and V_L(max) form the logic-0 range; voltages between V_H(min) and
V_H(max) form the logic-1 range; values between V_L(max) and V_H(min) are a
forbidden range of operation other than during transition.)

voltage range between V_L(min) and V_L(max), then it is assigned to logic-0. Similarly,
when a signal value is in some high-level voltage range between V_H(min) and V_H(max), it
is assigned to logic-1. This is illustrated in Fig. 3.20.* As long as the signal values
stay within their assigned ranges, except during transit between ranges, the logic
gates behave as intended. Steady-state signals within the forbidden range result in
unreliable gate behavior. It is because a prespecified range of signal values is re-
garded as the same logic value that digital systems are highly reliable under such
conditions as induced noise, temperature variations, component fabrication varia-
tions, and power supply variations. However, for simplicity in discussion, nominal
values of the signals are frequently regarded as their true values.
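The range assignment can be modeled as a simple classification function. The threshold values below are hypothetical placeholders, not figures from this text; real values come from a logic series' data sheet:

```python
# Hypothetical positive-logic input thresholds, in volts (placeholder values).
V_L_MIN, V_L_MAX = 0.0, 0.8
V_H_MIN, V_H_MAX = 2.0, 5.0

def logic_level(v):
    # Map a steady-state signal voltage to a logic value, or flag it as
    # lying in the forbidden range between the two assigned ranges.
    if V_L_MIN <= v <= V_L_MAX:
        return "logic-0"
    if V_H_MIN <= v <= V_H_MAX:
        return "logic-1"
    return "forbidden"

assert logic_level(0.4) == "logic-0"
assert logic_level(3.3) == "logic-1"
assert logic_level(1.5) == "forbidden"
```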
In manufacturer’s literature, the terminal behavior of the logic elements is
stated in terms of the symbols L and H, denoting low and high voltage ranges,
rather than logic-0 and logic-1. For example, Table 3.18 illustrates how a manufac-
turer might specify the terminal behavior of some type of gate circuit. Upon substi-
tuting 0 and 1 for L and H, respectively, the Boolean definition of the and-function
results and, hence, such a circuit is a (positive) and-gate.†
There are several properties associated with logic gates that determine the envi-
ronment in which a digital system can operate as well as introduce constraints on its
topological structure. These include noise margins, fan-out, propagation delays,
and power dissipation. Noise margins are a measure of the capability of a circuit to
operate reliably in the presence of induced noise. The fan-out of a circuit is the

*This is the positive-logic concept. Alternatively, logic-0 can be assigned to the higher voltage range
and logic-1 to the lower voltage range. Such an assignment is referred to as negative logic.
†It should be noted that if 0 and 1 are substituted for H and L, respectively, in Table 3.18, then the circuit
behavior becomes that of an or-gate. Thus, under positive logic, a circuit that behaves as an and-gate
becomes, under negative logic, an or-gate. To avoid any confusion, this book has adopted the positive-
logic convention.
Table 3.18 Voltage table for a (positive) and-gate

x y | f
L L | L
L H | L
H L | L
H H | H

number of gates or loads that can be connected to the output of the circuit and still
maintain reliable operation. The propagation delays of a gate circuit are influencing
factors that determine the overall operating speed of a digital system since they es-
tablish how fast the circuit can perform its intended function. Finally, power dissi-
pation is the power consumed by the gate that, in turn, determines the size of the
power supply needed for the digital system.

3.10.1 Noise Margins


For performance purposes, gate circuits are designed so that V_H(min) is different at
the input and output terminals of a gate. That is, the minimal signal value that is ac-
ceptable as a logic-1 at the input to a gate is different from the minimal logic-1 sig-
nal value that a gate produces at its output. A similar situation also occurs for
V_L(max). Thus, manufacturers normally state V_IL(max), V_IH(min), V_OL(max), and V_OH(min) in
the gate specifications. That is, the manufacturer guarantees that any input voltage
less than V_IL(max) is recognized by the gate as corresponding to a low-range (logic-0)
voltage input. On the other hand, any input voltage greater than V_IH(min) is recog-
nized by the gate as corresponding to a high-range (logic-1) voltage input. Further-
more, the manufacturer guarantees that the low-range (logic-0) voltage output of the
gate does not exceed V_OL(max) and that the high-range (logic-1) voltage output of the
gate does exceed V_OH(min). Of course, this assumes that any manufacturer-specified
loading, temperature, and power supply constraints are adhered to. In addition, gate
circuit behavior is meaningful only if V_OL(max) < V_IL(max) < V_IH(min) < V_OH(min).
To understand the significance of these gate specifications, consider the ef-
fect of connecting the output of a gate to another gate as shown in Fig. 3.21a, in
which noise is induced between the two gates. In Fig. 3.21b the four values dis-
cussed above are drawn on a straight line so as to emphasize their relative val-
ues. Since the low-level output of gate 1, i.e., logic-0, is less than V_OL(max) and
any signal less than V_IL(max) is regarded as a low-level input to gate 2, i.e., also a
logic-0, it is seen that any additive noise induced between the gates less than
V_IL(max) − V_OL(max) does not affect the logic behavior of the two gates in cascade.
This is called the worst-case low-level noise margin. In a similar manner, since
the high-level output of gate 1, i.e., logic-1, is greater than V_OH(min) and any input
signal greater than V_IH(min) is regarded as a high-level input to gate 2, i.e., also a
logic-1, it is seen that any subtractive noise induced between the gates less than
Figure 3.21 Noise effects. (a) Interconnection of two gates with induced noise.
(b) Noise margins.

V_OH(min) − V_IH(min) does not affect the logic behavior of the two gates in cascade.
This is called the worst-case high-level noise margin. As seen from these defini-
tions, noise margins are a measure of a digital circuit’s immunity to the presence
of induced electrical noise.
It should be noted that the above noise margins are worst-case values. If the ac-
tual low-level output voltage of gate 1 is V_OL, where V_OL ≤ V_OL(max), then the actual
low-level noise margin, NM_L, is given by

NM_L = V_IL(max) − V_OL

Similarly, if the actual high-level output voltage of gate 1 is V_OH, where V_OH ≥
V_OH(min), then the actual high-level noise margin, NM_H, is

NM_H = V_OH − V_IH(min)
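The noise-margin definitions translate directly into arithmetic. The data-sheet values below are hypothetical placeholders chosen only to satisfy the required ordering constraint:

```python
# Hypothetical data-sheet values, in volts (placeholders, not from any
# particular logic series).
V_OL_MAX, V_IL_MAX = 0.4, 0.8
V_IH_MIN, V_OH_MIN = 2.0, 2.4

# Sanity condition for meaningful gate behavior.
assert V_OL_MAX < V_IL_MAX < V_IH_MIN < V_OH_MIN

# Worst-case noise margins.
NM_L_worst = V_IL_MAX - V_OL_MAX
NM_H_worst = V_OH_MIN - V_IH_MIN

# Actual margins improve when the measured output levels are better than
# the guaranteed worst-case values.
V_OL_actual, V_OH_actual = 0.2, 3.4
NM_L = V_IL_MAX - V_OL_actual      # actual low-level noise margin
NM_H = V_OH_actual - V_IH_MIN      # actual high-level noise margin
assert NM_L >= NM_L_worst and NM_H >= NM_H_worst
```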

3.10.2 Fan-Out
As discussed in the Appendix, the signal value at the output of a gate is dependent
upon the number of gates to which the output terminal is connected. Since for
proper operation the signal values must always remain within their allowable
ranges, this implies that there is a limitation to the number of gates that can serve as
loads to a given gate. This is known as the fan-out capability of the gate. Again,
manufacturers specify this limitation. It is then the responsibility of the designer of
a logic network to adhere to the limitation. To do this, circuits known as buffers,
which have no logic properties but rather serve as amplifiers, are sometimes incor-
porated into a logic network.
Figure 3.22 Propagation delay times.

3.10.3 Propagation Delays


Digital signals do not change nor do circuits respond instantaneously. For this reason
there is a limitation to the overall speed of operation associated with a gate. Figure 3.22
shows waveforms at the input and output terminals of a not-gate.* Here finite rise and
fall times are indicated as well as delays in response to the input changes. That is, the
signals do not go between their logic-0 and logic-1 values in zero time. In addition, the
effect of a change at the input terminal does not appear immediately as a change at
the output terminal. Rather, owing to the physical behavior of the electronic compo-
nents in the gate, there is a time delay before the output changes. Using a specified ref-
erence point, say the 50 percent point, on the rise and fall of the signals, two time
delays are indicated in Fig. 3.22. These are referred to as propagation delays. The time
required for the output signal to change from its high level to its low level as a conse-
quence of an input signal change is t_pHL; while the time required for the output signal to
change from its low level to its high level as a consequence of an input signal change is
t_pLH. These two delay times, t_pHL and t_pLH, are, in general, not equal. Manufacturers nor-
mally give maximum values for these times in the gate specifications. As a general
measure of the response speed of a gate, one frequently uses an average propagation
delay time, t_pd, which is defined as

t_pd = (t_pHL + t_pLH) / 2

*Similar waveforms can be drawn for other types of gates under the assumption that all but one of the
gate inputs are held fixed and the remaining input is changed to cause the output to change, possibly, but
not necessarily, with an inversion.

3.10.4 Power Dissipation


In the course of operation, a digital circuit consumes power as a result of the flow
of currents. There are two components to this power dissipation. Static power
dissipation occurs when the circuit is in its steady-state condition; while dynamic
power dissipation occurs as the result of changes of the various signals. In both
cases, the necessary currents must be provided by the power supply of the digital
system.
Certainly it is desirable in a digital system to have gates with low power dis-
sipation and high speed of operation (i.e., low propagation delay times). How-
ever, these two performance parameters are in conflict with each other. As a re-
sult, a common measure of gate performance is the product of the propagation
delay and the power dissipation of the gate. This is known as the delay-power
product.
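Both figures of merit reduce to simple arithmetic. The delay and power values below are hypothetical placeholders, not taken from any data sheet:

```python
# Hypothetical gate parameters: propagation delays in nanoseconds and
# power dissipation in milliwatts (placeholder values).
t_pHL, t_pLH = 8.0, 12.0      # ns
P_diss = 10.0                 # mW

# Average propagation delay time: t_pd = (t_pHL + t_pLH) / 2.
t_pd = (t_pHL + t_pLH) / 2    # ns

# Delay-power product, a common figure of merit; ns x mW gives picojoules.
dp_product = t_pd * P_diss    # pJ
```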

CHAPTER 3 PROBLEMS
3.1 Using the basic Boolean identities given in Table 3.1, prove the following
relationships by going from the expression on the left side of the equals sign
to the expression on the right side. State which postulate or theorem is
applied at each step.
Ry OE AY = eh
(2+ ay Fixe + ty) = xy
Pye 2 az) = ay
Gy + yz xz) = xy + YZ + XZ
xy + x′z + yz = xy + x′z (consensus theorem)
(x + y)\(x + z) = xz + xy
SO WY wer) = ay + yz tog
het ey sey 2b ee Sy ch ez

3.2 Prove that in a Boolean algebra the cancellation law does not hold; that is,
show that, for every x, y, and z in a Boolean algebra, xy = xz does not imply
y = z. Does x + y = x + z imply y = z?
3.3 Using the method of perfect induction, prove the following identities for a
two-valued Boolean algebra.
a. (xyz)′ = x′ + y′ + z′
b. (x + y + z)′ = x′y′z′
c. xy + x′z + yz = xy + x′z (consensus theorem)
3.4 Prove that no Boolean algebra has exactly three distinct elements.
3.5 Construct the truth table for each of the following Boolean functions.
a. (XYZ) = yore + ye Z)
Ds fGoy.Z) = Gy x2) yz

CI yz y) tz) (4 4*z)
d. f(w,xy,z) = wxy + wy + z)
3.6 For each of the truth tables in Table P3.6, write the corresponding minterm
canonical formula in algebraic form and in m-notation.
3.7 Write each of the following minterm canonical formulas in algebraic form
and construct their corresponding truth tables.
a. f(x,y,z) = Σm(0,2,4,5,7)
b. f(w,x,y,z) = Σm(1,3,7,8,9,14,15)
3.8 For each of the truth tables in Table P3.6, write the corresponding maxterm
canonical formula in algebraic form and in M-notation.

Table P3.6 (truth tables (a) and (b) for Problems 3.6 and 3.8)
120 DIGITAL PRINCIPLES AND DESIGN

3.9 Write each of the following maxterm canonical formulas in algebraic form


and construct their corresponding truth tables.
a. f(x,y,z) = ΠM(0,1,2,5,7)
b. f(w,x,y,z) = ΠM(0,3,6,7,9,10,12,13,15)
3.10 Complement each of the following Boolean expressions.
a (Wx
+ yyw Fixz) yz
b. x(wy + xyz)
c. wy[(wy) + xz]
d. wxt+zi(wt+xt+y)+aty)]
3.11 Applying the expansion theorem to the Boolean expression
f(w,x y,z) = wxyz + z(xy + wx)
rewrite the expression in the following forms.
a. f(w,x,y,z) = x′·g1(w,y,z) + x·g2(w,y,z)
b. f(w,x,y,z) = [z′ + h1(w,x,y)][z + h2(w,x,y)]
3.12 Using the Boolean algebra postulates and theorems, simplify each of the
following expressions as disjunctive normal formulas with the fewest
number of literals.
yer XYZ yz
MVS Kye RYVZ AVZ. 7 AVE XZ
RY Ne eee VE ae NZ
Gry) az
wWxyz + wxyz t+ xz + xyz
wxyz +twxy t+wy + xy t+ xy
Tm
2
st (Wax hy
monnc 2) aor Poe Oy x er yz ee de ae)
OGR ar ak ar iP ar ZGKP Ae Se aie WP ae)
h. («+ z)(w+x)(y + z)(w + y)
3.13 Prove Shannon’s reduction Theorem 3.12 in Sec. 3.6.
3.14 Express each of the following functions by a minterm canonical formula
without first constructing a truth table.
ae OFZ) SO Zee
b. f(x y,z) = «+ y)(x+2z)
Sis} Express each of the following functions by a maxterm canonical formula
without first constructing a truth table.
a. fz) = YP 2Zay -Z)
b. f@y,z) =x + XZ +2)
3.16 Express the complement of each of the following functions in disjunctive
canonical form and conjunctive canonical form using decimal notation.
a. f(x,y,z) = Σm(0,2,5)
b. f(x,y,z) = ΠM(1,2,5,7)

c. f(w,x,y,z) = Σm(1,4,6,7,8,12,14)
d. f(w,x,y,z) = ΠM(3,7,8,10,12,13)
3.17 Transform each of the following canonical expressions into its other
canonical form in decimal notation.
a. f(x,y,z) = Σm(1,3,5)
b. f(x,y,z) = ΠM(3,4)
c. f(w,x,y,z) = Σm(0,1,2,3,7,9,11,12,15)
d. f(w,x,y,z) = ΠM(0,2,5,6,7,8,9,11,12)
3.18 Write a Boolean expression for each of the logic diagrams in Fig. P3.18.
3.19 Draw the logic diagram using gates corresponding to the following Boolean
expressions. Assume that the input variables are available in both
complemented and uncomplemented forms.
ane SfWea, 2) = Ey ez) Pe
b. fvywxy2j=at+y{v+mtyvt+twt+avtxtz)}
c. f(v,w,xy,Z) = vlw(xy + z) + xz] + vw

Figure P3.18

Figure P3.20

3.20 For each of the gate networks shown in Fig. P3.20, determine an equivalent
gate network with as few gate inputs as possible.
3.21 Besides gate networks, networks consisting of other two-state devices are
also related to a Boolean algebra. For example, a Boolean expression for a
configuration of switches can be written to describe whether there is an open
or closed path between the network terminals. The Boolean constant 1 is
assigned to a closed switch and the existence of a closed path between the
terminals of the configuration of switches; while the Boolean constant 0 is
assigned to an open switch and the existence of an open path between the
terminals of the configuration of switches. Algebraically, each switch is
denoted by a Boolean variable in which the variable is uncomplemented if
the switch is normally open and complemented if it is normally closed.
Under this assignment, switches placed in series can be denoted by the and-

Figure P3.21

operation and those in parallel with the or-operation. Using this


correspondence between Boolean algebra and a configuration of
switches:
a. Determine a Boolean expression to describe the behavior of the
configuration shown in Fig. P3.21.
b. Determine a series-parallel configuration of switches whose behavior is
described by the Boolean expression

f(w,x,y,z) = (wx + z)(y + z) + w(x + y)


3.22 For the truth table of Table P3.22
a. Write both the minterm and maxterm canonical formulas in decimal
notation for the function.

b. Construct the truth table of the complement function.
c. Write both the minterm and maxterm canonical formulas in decimal
notation for the complement function.
3.23 Show that the nand-operation is not associative, i.e., nand[x, nand(y, z)] #
nand[nand(x, y), z]. Is the nor-operation associative?
3.24 Write a Boolean expression for each of the logic diagrams in Fig. P3.24.
3.25 Using algebraic manipulations, obtain a logic diagram consisting of
only nand-gates for each of the following Boolean expressions. Do not
alter the given form of the expressions. Assume the independent

Table P3.22

w x y z | f
0 0 0 0 | —
0 0 0 1 | —
0 0 1 0 | 1
0 0 1 1 | 1
0 1 0 0 | 0
0 1 0 1 | —
0 1 1 0 | 1
0 1 1 1 | 0
1 0 0 0 | 0
1 0 0 1 | —
1 0 1 0 | 1
1 0 1 1 | —
1 1 0 0 | 0
1 1 0 1 | —
1 1 1 0 | —
1 1 1 1 | 1
Figure P3.24

variables are available in both complemented and uncomplemented


form.
a. f(w,x,y,z) = y + wx + wxz
b. f(w,x,y,z) = (w + y)(x + z)(w + x + y)
za yy)
©, fOwx,y,2z) = wey Fy)

qd: SWZ) — Oa ya Oye Over Zi


3.26 Repeat Problem 3.25 using the graphical procedure.
3.27 Using algebraic manipulations, obtain a logic diagram consisting of only
nor-gates for each of the Boolean expressions in Problem 3.25. Do not alter
the given form of the expressions. Assume the independent variables are
available in both complemented and uncomplemented form.
3.28 Repeat Problem 3.27 using the graphical procedure.
3.29 Using the graphical procedures, convert the logic diagram of Fig. P3.29 into
a logic diagram consisting of only nand-gates and a logic diagram consisting
CHAPTER 3 Boolean Algebra and Combinational Networks 125

Figure P3.29

of only nor-gates.* Verify your results by obtaining the Boolean expression


for each network.
3.30 Prove that (x + y) ⊕ (x + z) = x̄(y ⊕ z).
3.31 Show that the exclusive-or-operation is not distributive over the and-
operation, i.e., x ⊕ yz ≠ (x ⊕ y)(x ⊕ z).
3.32 Algebraically verify the following identities.
a. x ⊕ 1 = x̄      b. x ⊕ 0 = x
c. x ⊕ x̄ = 1      d. x ⊕ x = 0
e. x̄ ⊕ y = x ⊕ ȳ      f. x̄ ⊕ ȳ = x ⊕ y
g. x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z      h. x ⊕ y ⊕ (x + y) = xy
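Identities of this kind can also be checked exhaustively, since each side is a Boolean function of at most three variables. A sketch (the helper name `equivalent` is an illustrative choice):

```python
# Brute-force verification of exclusive-or identities: two Boolean functions
# are equal iff they agree on every input assignment.

from itertools import product

def equivalent(f, g, n):
    """True iff the two n-variable functions agree on all 2**n assignments."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n))

xor = lambda a, b: a ^ b

# Associativity of exclusive-or: x XOR (y XOR z) = (x XOR y) XOR z
print(equivalent(lambda x, y, z: xor(x, xor(y, z)),
                 lambda x, y, z: xor(xor(x, y), z), 3))

# x XOR 1 = x'  (complementing behavior of exclusive-or with a constant 1)
print(equivalent(lambda x: xor(x, 1), lambda x: 1 - x, 1))
```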
3.33 The dual f_D(x₁, x₂, . . . , xₙ) of a function f(x₁, x₂, . . . , xₙ) is defined as

f_D(x₁, x₂, . . . , xₙ) = f̄(x̄₁, x̄₂, . . . , x̄ₙ)

Using this, show that the dual of the exclusive-or-function is the
exclusive-nor-function.
3.34 In Sec. 2.10 the Gray code was discussed. The following rule converts an
n-bit Gray code group gₙ₋₁gₙ₋₂ · · · g₁g₀ into its equivalent n-bit binary
number bₙ₋₁bₙ₋₂ · · · b₁b₀:
1. The most significant bits are equal, i.e.,

bₙ₋₁ = gₙ₋₁

*Networks involving identical gate types following each other can evolve when using gates with a
limited number of inputs. In this network it is assumed that only two-input and-gates and or-gates are
available.

2. Each of the remaining bits is obtained as follows:

bₖ = bₖ₊₁ ⊕ gₖ    for n − 1 > k ≥ 0
a. Design an n-bit Gray-to-binary converter.
b. Using the above algorithm as a starting point, devise an algorithm to
convert an n-bit binary number into an n-bit Gray code group. Design
the corresponding n-bit binary-to-Gray converter.
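The two conversion rules can be sketched in software before being turned into gate networks. The inverse rule used below for the binary-to-Gray direction, gₖ = bₖ₊₁ ⊕ bₖ, is the standard answer to part (b), stated here as an assumption.

```python
# Sketch of the rules in Problem 3.34, with bit lists written MSB first:
# b[n-1] = g[n-1], and each remaining b[k] = b[k+1] XOR g[k].

def gray_to_binary(g):
    """Convert a Gray-code bit list (MSB first) to binary, MSB first."""
    b = [g[0]]                      # the most significant bits are equal
    for bit in g[1:]:
        b.append(b[-1] ^ bit)       # b[k] = b[k+1] XOR g[k]
    return b

def binary_to_gray(b):
    """Convert a binary bit list (MSB first) to its Gray code group."""
    return [b[0]] + [b[i - 1] ^ b[i] for i in range(1, len(b))]

print(gray_to_binary([1, 1, 0]))    # Gray 110 corresponds to binary 100
print(binary_to_gray([1, 0, 0]))    # binary 100 corresponds to Gray 110
```

Each output bit is a single exclusive-or of two inputs, which is exactly the structure of the converter networks the problem asks for.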
3.35 For the Hamming code discussed in Sec. 2.12, design a logic network which
accepts the 7-bit code groups, where at most a single error has occurred, and
generates the corresponding corrected 7-bit code groups. To do this, first
design networks for c₁*, c₂*, and c₄*. Then, design a network for correcting
the appropriate bit. Use exclusive-or-gates whenever possible. However,
assume that all available exclusive-or-gates have only two input terminals.
(Hint: The network for c* can be constructed with just two-input
exclusive-or-gates.)
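The correction scheme this problem describes can be sketched in software. The sketch below assumes the common (7,4) Hamming layout with check bits in positions 1, 2, and 4; the exact bit ordering of Sec. 2.12 may differ, so the position bookkeeping here is an illustrative assumption.

```python
# Hedged sketch of single-error correction for a 7-bit Hamming code word,
# assuming check bits occupy positions 1, 2, and 4 (1-based).  Each c* is
# an exclusive-or of the received bits it covers; together the three c*
# values spell out the position of the erroneous bit, which is inverted.

def correct(word):
    """word: list of 7 received bits, index 0 holding position 1."""
    r = {i + 1: word[i] for i in range(7)}      # 1-based positions
    c1 = r[1] ^ r[3] ^ r[5] ^ r[7]              # covers positions with bit 0 set
    c2 = r[2] ^ r[3] ^ r[6] ^ r[7]              # covers positions with bit 1 set
    c4 = r[4] ^ r[5] ^ r[6] ^ r[7]              # covers positions with bit 2 set
    pos = 4 * c4 + 2 * c2 + c1                  # 0 means no error detected
    if pos:
        r[pos] ^= 1                             # invert the erroneous bit
    return [r[i] for i in range(1, 8)]

sent = [0, 0, 0, 1, 1, 1, 1]        # a valid code word under this layout
garbled = sent[:]
garbled[4] ^= 1                     # flip the bit in position 5
print(correct(garbled) == sent)     # the single error is corrected
```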
CHAPTER 4

Simplification of Boolean Expressions

In Sec. 3.6 it was shown that by use of the theorems and postulates of a Boolean
algebra, it is possible to obtain “simple” expressions. Although the concept of
simplicity was not formally defined, it was observed that neither the approach to
equation simplification nor the capability of concluding when an equation is simple
is obvious.
At this time the simplification problem is studied in detail and formal tech-
niques for achieving simplification are developed. In particular, two general ap-
proaches are presented. First, a graphical method is introduced that can handle
Boolean expressions up to six variables. Then, a tabular procedure is developed that
is not bounded by six variables and that is capable of being programmed on a com-
puter since it is algorithmic. These approaches are applied to both a single Boolean
expression that describes single-output network behavior, and a collection of
Boolean expressions describing multiple-output networks.

4.1 FORMULATION OF THE SIMPLIFICATION PROBLEM
In the design of logic networks, many factors should be considered when evaluating
the merit of a network. One such factor is its cost. The cost of a network is a func-
tion of the cost of its components, the cost of the design and construction of the net-
work, and the cost of maintaining the network. In addition, the reliability of the net-
work should also be considered in its overall merit evaluation. Reliability is
achieved by using highly reliable components or by redundancy techniques in
which a greater number of less reliable components are used. A third factor that
should be considered in the merit evaluation of a logic network is the time it takes
the network to respond to changes at its inputs. These three factors do not form an

127

exhaustive list of items that should be considered in the network evaluation, nor are
they independent.
Even though all of the above factors are important, a single simple design pro-
cedure encompassing them all does not exist. However, if certain aspects of these
factors are considered of prominent importance, then a formal approach to the de-
sign of optimal logic networks can be developed.

4.1.1 Criteria of Minimality


Let us assume that the overall response time of a network should be minimal for a
given circuit technology. This is achieved by minimizing the number of levels of
logic that a signal must pass through since all gates introduce propagation delays.
Recalling that every combinational network is describable by a canonical formula,
it follows that it is possible to construct any logic network with at most two levels
of logic under the double-rail logic assumption. Throughout this chapter, it is as-
sumed that a variable and its complement are always available as inputs in a realiza-
tion. By applying the theorems of a Boolean algebra to this canonical expression,
various two-level networks are represented in algebraic form. In general, any nor-
mal formula, i.e., those in sum-of-products form or product-of-sums form, corre-
sponds to a logic network with two levels or less. It can now be concluded, there-
fore, in order to keep the propagation delay time of a network to a minimum,
attention should be restricted to networks with normal formula representations.
Furthermore, let us assume that the component cost is the only other factor in-
fluencing the merit evaluation of a logic network. In general, for any Boolean func-
tion there are many two-level realizations, each represented by a normal formula.
Thus, it is desirable to determine the normal formula with the minimal component
cost in its realization. One simple measurement of component cost is the number of
gates in the realization. In terms of algebraic expressions in normal form, the num-
ber of gates is one greater than the number of terms with more than one literal in the
expression. (However, if there is only one term, then the number of gates is simply
one.) A second measurement of component cost is a count of the total number of
gate inputs within the network. Again this can be related to algebraic expressions in
normal form. The number of gate inputs is equal to the number of literals in the ex-
pression plus the number of terms containing more than one literal. (However, if
there is only one term, then the number of gate inputs is simply equal to the number
of literals.)
By applying either of the above two criteria to a Boolean expression, it is possi-
ble to obtain a measure of its complexity. This numerical quantity is called the cost
of the expression.
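Both cost measures described above can be computed mechanically from a normal formula. A sketch, with a sum-of-products formula represented as a list of terms and each term as a list of literal strings (a representation chosen here purely for illustration):

```python
# Cost of a normal formula under the two criteria in the text:
# gates      = 1 + (number of terms with more than one literal),
#              or simply 1 if the formula is a single term;
# gate inputs = literals + (number of terms with more than one literal),
#              or simply the literal count if the formula is a single term.

def gate_count(terms):
    if len(terms) == 1:
        return 1
    return 1 + sum(1 for t in terms if len(t) > 1)

def gate_input_count(terms):
    literals = sum(len(t) for t in terms)
    if len(terms) == 1:
        return literals
    return literals + sum(1 for t in terms if len(t) > 1)

# f = x'y + yz' + xz: three 2-literal and-gates feeding a 3-input or-gate.
f = [["x'", 'y'], ['y', "z'"], ['x', 'z']]
print(gate_count(f), gate_input_count(f))   # 4 gates, 9 gate inputs
```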
Inherent in the above discussion is that only single-output combinational net-
works are being realized. A multiple-output combinational network is described by
a set of Boolean expressions. In such a case, it is necessary to modify the cost crite-
ria. For now, attention is restricted to single-output combinational networks. In
Secs. 4.12 and 4.13, the concepts of cost and minimality are extended to the more
general case.

4.1.2 The Simplification Problem


The determination of Boolean expressions that satisfy some criterion of minimality
is called the simplification or minimization problem. Unless otherwise specified, the
second cost criterion presented above is assumed. That is, for single-output net-
works, a minimal Boolean expression is that normal expression representing a two-
level network with a minimum total number of gate inputs.
Before beginning the development of formal simplification procedures, one
final point must be considered. Product and sum terms were previously defined as a
product and sum of literals, respectively. This definition permits repetition of vari-
ables to appear in these terms. However, repetition of variables in a term makes lit-
tle sense in a realization and only adds to the cost of the expression. A fundamental
term is defined as a product or sum of literals in which no variable appears more
than once. A fundamental term is obtained from any term by the application of Pos-
tulate P5 of a Boolean algebra, i.e., x + x̄ = 1 and x·x̄ = 0, and the idempotent
law, i.e., x + x = x and x·x = x. It is hereafter assumed that all terms are funda-
mental terms and are referred to as simply terms.

4.2 PRIME IMPLICANTS AND IRREDUNDANT DISJUNCTIVE EXPRESSIONS
As the first step toward the development of techniques leading to minimal Boolean
expressions, it is necessary to establish some basic concepts. These concepts are
studied in this and the next sections. Once that is done, the techniques to simplify
Boolean expressions readily follow.
In an effort to keep the presentation as simple as possible, it is initially assumed
that the Boolean functions are completely specified. After mastering this special
case, the reader should have no trouble in extending the concepts to incomplete
Boolean functions. This generalization is discussed in Sec. 4.6.

4.2.1 Implies
Consider two complete Boolean functions of n variables, f₁ and f₂. The function f₁
implies the function f₂ if there is no assignment of values to the n variables that
makes f₁ equal to 1 and f₂ equal to 0. Hence, for the complete Boolean functions f₁
and f₂, whenever f₁ equals 1, then f₂ must also equal 1; and, alternatively, whenever
f₂ equals 0, then f₁ must equal 0. Since terms and formulas (or, expressions) describe
functions, the concept of implies may also be applied to terms and formulas, e.g.,
whether or not a particular term implies a function. To illustrate the concept of im-
plies, consider the functions f₁(x,y,z) = xy + yz and f₂(x,y,z) = xy + yz + x̄z tabu-
lated in Table 4.1a. By applying the above definition to the truth table, it is readily
seen that f₁ implies f₂. As a second example, consider the functions f₃(x,y,z) =
(x + y)(y + z)(x̄ + z) and f₄(x,y,z) = (x + y)(y + z) shown in Table 4.1b. In this
case f₃ implies f₄.
Now consider a single term that appears in the normal formula for a function.
In the case of a disjunctive normal formula, i.e., one in sum-of-products form, each

Table 4.1 Illustration of function implication. (a) f₁ implies f₂.
(b) f₃ implies f₄

x y z | f₁ f₂
0 0 0 | 0  0
0 0 1 | 0  1
0 1 0 | 0  0
0 1 1 | 1  1
1 0 0 | 0  0
1 0 1 | 0  0
1 1 0 | 1  1
1 1 1 | 1  1
(a)

x y z | f₃ f₄
0 0 0 | 0  0
0 0 1 | 0  0
0 1 0 | 1  1
0 1 1 | 1  1
1 0 0 | 0  0
1 0 1 | 1  1
1 1 0 | 0  1
1 1 1 | 1  1
(b)

of its product terms implies the function being described by the formula. This fol-
lows from the fact that whenever the product term has the value 1, the function must
also have the value 1. On the other hand, for a conjunctive normal formula, i.e., one
in product-of-sums form, each sum term is implied by the function, i.e., the function
implies the sum term. In this case, whenever the sum term has the value 0, the func-
tion must also have the value 0.
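The definition of implies admits a direct brute-force check over all 2ⁿ input assignments. A sketch using the pair of functions of Table 4.1a, with f₂ taken as xy + yz + x̄z, consistent with the table:

```python
# f implies g exactly when no input assignment makes f = 1 and g = 0.

from itertools import product

def implies(f, g, n):
    return all(not (f(*v) and not g(*v)) for v in product((0, 1), repeat=n))

f1 = lambda x, y, z: (x and y) or (y and z)
f2 = lambda x, y, z: (x and y) or (y and z) or ((not x) and z)

print(implies(f1, f2, 3))   # f1 implies f2
print(implies(f2, f1, 3))   # but not conversely: f2(0,0,1) = 1 while f1(0,0,1) = 0
```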

4.2.2 Subsumes
A comparison between two product terms or two sum terms is also possible. A term
t₁ is said to subsume a term t₂ if and only if all the literals of the term t₂ are also lit-
erals of the term t₁. As an example, consider the product terms xyz and xz. From the
definition of subsumes, the product term xyz subsumes the product term xz. In a
similar manner, for the two sum terms x + y + z and x + z, the sum term x + y + z
subsumes the sum term x + z.
From the above discussion it is seen that if a product term t₁ subsumes a prod-
uct term t₂, then t₁ implies t₂ since whenever t₁ has the value 1, t₂ also has the value
1. On the other hand, if a sum term t₁ subsumes a sum term t₂, then t₂ implies t₁
since whenever t₁ has the value 0, t₂ also has the value 0. By the absorption law, i.e.,
Theorem 3.6, if one term subsumes another in an expression, then the subsuming
term can always be deleted from the expression without changing the function
being described.
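Since subsuming is just literal containment, it reduces to a subset test when a term is represented as a set of literal strings (a representation chosen here for illustration; a trailing apostrophe could mark a complemented variable):

```python
# Term t1 subsumes t2 when every literal of t2 also appears in t1.

def subsumes(t1, t2):
    return set(t2) <= set(t1)

print(subsumes({'x', 'y', 'z'}, {'x', 'z'}))    # xyz subsumes xz
print(subsumes({'x', 'z'}, {'x', 'y', 'z'}))    # xz does not subsume xyz
```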

4.2.3 Implicants and Prime Implicants


A product term is said to be an implicant of a complete function if the product term
implies the function. As a ‘result, each of the product terms in a disjunctive normal
formula describing a complete Boolean function is an implicant of the function
since these product terms contribute to describing the functional values of 1. Thus,
the minterms of a function are examples of its implicants. As a further illustration,
consider the truth table shown in Table 4.2. The term x is equal to | for the four
3-tuples (x,y,z) = (0,0,0), (0,0,1), (0,1,0), and (0,1,1) and is equal to O for all other
3-tuples. It is now noted that for those 3-tuples in which the term x equals 1, so does
the function given in Table 4.2. It therefore follows that x is an implicant of that
function. Similarly, another implicant of the function given in Table 4.2 is yz since
the term has the value | when (y,z) = (0,1) and the function has the value 1 for the
two 3-tuples (x,y,z) = (0,0,1) and (1,0,1).
An implicant of a function is said to be a prime implicant if the implicant does
not subsume any other implicant with fewer literals of the same function. Thus, a
prime implicant of a function is a product term that implies the function with the ad-
ditional property that if any literal is removed from the term, then the resulting
product term no longer implies the function. For example, again consider the func-
tion of Table 4.2. The product term xyz is an implicant of the function since it is the
minterm that describes the sixth row of the truth table. The three product terms hav-
ing one less literal that xyz subsumes are yz, xz, and xy. If at least one of these three
terms is an implicant of the function, then xyz is not a prime implicant. As indicated
previously, the product term yz is an implicant of the function. Consequently, the
term xyz is not a prime implicant of the function. On the other hand, although the
product term xz is also subsumed by xyz, it does not imply the function since xz has

Table 4.2 A 3-variable Boolean function

x y z | f
0 0 0 | 1
0 0 1 | 1
0 1 0 | 1
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 0
1 1 1 | 0

the value 1 and the function has the value 0 when (x,y,z) = (1,1,1). Consequently,
xz is not an implicant of the function. Similarly, the product term xȳ is not an impli-
cant since it has the value 1 when (x,y,z) = (1,0,0) but the function has the value 0.
Now that it has been established that ȳz is an implicant of the function given in
Table 4.2, we can ask if ȳz is a prime implicant of the function. To answer this, con-
sider the possible product terms having one less literal that are subsumed by ȳz,
namely, the term ȳ and the term z. It is easily checked with the aid of Table 4.2 that
neither the term ȳ nor the term z implies the function. Hence, ȳz is an implicant of
the function that subsumes no other implicant of the same function. By definition,
ȳz is a prime implicant. By a similar analysis, x̄ is also a prime implicant of the
function given in Table 4.2.
The significance of prime implicants is given by the following theorem:

Theorem 4.1
When the cost, assigned by some criterion, for a minimal Boolean for-
mula is such that decreasing the number of literals in the disjunctive nor-
mal formula does not increase the cost of the formula, there is at least one
minimal disjunctive normal formula that corresponds to a sum of prime
implicants.

Proof
To justify the above theorem, assume that there is a minimal disjunctive
normal formula of a given Boolean function that is not the sum of only
prime implicants. In particular, let t₁ be one such term. t₁ must still be an
implicant of the function since, being a term describing the function, it
must imply the function. By definition of a prime implicant, there must be
some term t₂ that is a prime implicant such that t₁ subsumes t₂. By defini-
tion of subsumes, t₂ must have fewer literals. Since t₂ also implies the func-
tion, it may be added to the original formula without changing the function
being described. But t₁ subsumes t₂. By the absorption law, t₁ can be re-
moved, leaving an expression with the same number of terms but with
fewer literals. Since it is assumed that the cost of a formula does not in-
crease by decreasing the number of literals, the cost of the new expression
is no greater than the cost of the original expression. If this argument is ap-
plied to every term that was not originally a prime implicant, then an ex-
pression of only prime implicants and of minimal cost results.

As a consequence of the above theorem, the prime implicants are of interest for
establishing a minimal disjunctive Boolean formula. This formula, in turn, suggests
a minimal two-level realization with and-gates followed by a single or-gate. The set
of prime implicants of a function can be obtained by forming all possible product
terms involving the variables of the function, testing to see which terms imply the
function, and then, for those that do, checking to see if they do not subsume some
other product terms that also imply the function. Efficient algorithmic procedures
can be developed to carry out this seemingly complex process.
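For a small number of variables, the process just described can be run exhaustively: form all product terms, keep those that imply the function, and discard any that subsume a shorter implicant. A sketch for the function of Table 4.2, taken here as f = x̄ + ȳz (consistent with the running example); the dict-of-required-values representation of a term is an illustrative choice:

```python
# Brute-force enumeration of the prime implicants of a 3-variable function.

from itertools import combinations, product

def f(x, y, z):
    return (not x) or ((not y) and z)        # Table 4.2: f = x' + y'z

names = ('x', 'y', 'z')

def covers(term, point):
    """Does the product term evaluate to 1 at this input point?"""
    return all(point[names.index(v)] == bit for v, bit in term.items())

# A product term is a partial assignment: chosen variables plus a required
# value for each (1 = uncomplemented literal, 0 = complemented literal).
terms = [dict(zip(sub, bits))
         for r in range(1, len(names) + 1)
         for sub in combinations(names, r)
         for bits in product((0, 1), repeat=r)]

# Implicants: every 1-point of the term is also a 1-point of the function.
implicants = [t for t in terms
              if all(f(*p) for p in product((0, 1), repeat=3) if covers(t, p))]

# Prime implicants: implicants that subsume no shorter implicant.
primes = [t for t in implicants
          if not any(s != t and s.items() <= t.items() for s in implicants)]
print(primes)    # the terms x' and y'z
```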

4.2.4 Irredundant Disjunctive Normal Formulas


An irredundant disjunctive normal formula describing a complete function is de-
fined as an expression in sum-of-products form such that (1) every product term in
the expression is a prime implicant and (2) no product term may be eliminated from
the expression without changing the function described by the expression.

Theorem 4.2
For any cost criterion such that the cost of a formula does not increase
when a literal is removed, at least one minimal disjunctive normal formula
describing a function is an irredundant disjunctive normal formula.
Proof
If a minimal disjunctive normal formula is not an irredundant expres-
sion, then it must fail at least one of the two properties of an irredundant
expression. If any of the terms is not a prime implicant, then the term
can be replaced by a product term that is a prime implicant. In this way
the expression is written with fewer literals. If any term can be elimi-
nated from the expression, then the new expression without this term
also has fewer literals. In both cases the number of literals is decreased.
Since it is assumed that the cost of the expression does not increase by
decreasing the number of literals, the resulting expression is still mini-
mal. Furthermore, by definition, it is an irredundant disjunctive normal
formula.

The reader should carefully observe that an expression being an irredundant
disjunctive normal formula is not sufficient to guarantee the expression is minimal.
Part of the algorithmic procedures for determining minimal Boolean expressions
involves the judicious selection of some (but not necessarily all) of the prime
implicants.

4.3 PRIME IMPLICATES AND IRREDUNDANT CONJUNCTIVE EXPRESSIONS
In the previous section it was shown that minimal two-level gate realizations having
and-gates on the first level and a single or-gate on the second level are related to
disjunctive normal formulas, i.e., those consisting of a sum of product terms, whose
terms are prime implicants. A second form of two-level gate networks involves or-
gates feeding into a single and-gate. These networks are described by conjunctive
normal formulas, i.e., those consisting of a product of sum terms. Using the defini-
tions of minimality introduced in Sec. 4.1, it is now desired to establish the proper-
ties for minimal expressions of this type. Most of the material in this section is sim-
ply the dual concept to that of the previous section.
A sum term is said to be an implicate of a complete Boolean function if the
function implies the sum term. A prime implicate of a complete Boolean function is

an implicate of the function that subsumes no other implicate of the function with
fewer literals. Thus, a prime implicate is a sum term that is implied by the function
with the additional property that if any literal is removed from the term, then the re-
sulting sum term no longer is implied by the function.
The maxterms of a function are examples of its implicates. This follows from
the fact that any time a maxterm of a function has the value 0, the function must
also have the value 0. Therefore, by the definition of implies, the function must
imply its maxterms. Referring again to Table 4.2, it is readily observed that one of
the implicates of the function is the maxterm x̄ + y + z. In addition, the sum term
x̄ + z has the value 0 for the two 3-tuples (x,y,z) = (1,0,0) and (1,1,0). Since the
function given in Table 4.2 has the value 0 for these two 3-tuples, it immediately
follows that the sum term x̄ + z also is implied by the function and is therefore an
implicate of the function. Since x̄ + y + z subsumes x̄ + z, x̄ + y + z is not a prime
implicate. Finally, neither the term x̄ nor the term z is implied by the function. Thus,
x̄ + z must be a prime implicate of the function.
Using an argument similar to that in the previous section, prime implicates can
be used to obtain minimal conjunctive normal formulas. Formally,

Theorem 4.3
When the cost, assigned by some criterion, for a minimal Boolean formula
is such that decreasing the number of literals in the conjunctive normal
formula does not increase the cost of the formula, there is at least one min-
imal conjunctive normal formula that corresponds to a product of prime
implicates.

An irredundant conjunctive normal formula describing a complete Boolean
function is defined as an expression in product-of-sums form such that (1) every
sum term in the expression is a prime implicate and (2) no sum term can be elimi-
nated from the expression without changing the function described by the expres-
sion. The dual of Theorem 4.2 can now be stated.

Theorem 4.4
For any cost criterion such that the cost of a formula does not increase
when a literal is removed, at least one minimal conjunctive normal formula
describing a function is an irredundant conjunctive normal formula.

It should be noted that prime implicates are the dual concept to that of prime
implicants and irredundant conjunctive normal formulas are the dual concept to ir-
redundant disjunctive normal formulas. Future discussions revolve around both
minimal disjunctive normal formulas and minimal conjunctive normal formulas.
Although it is beyond the scope of this book to prove formally, it can be shown that
the prime implicates of a complete Boolean function f are precisely the comple-
ments of the prime implicants of the function f̄. In addition, the irredundant con-
junctive normal formulas of the complete Boolean function f are exactly the com-
plements of the irredundant disjunctive normal formulas of f̄. Finally, the minimal
conjunctive normal formulas of f are the complements of the minimal disjunctive
normal formulas of f̄.
Reviewing, two criteria for minimal normal expressions have been defined.
Furthermore, it has been established that algebraically the terms that comprise the
minimal expressions can be prime implicants in the case of disjunctive normal for-
mulas and prime implicates in the case of conjunctive normal formulas. At this
point, systematic procedures to determine minimal Boolean expressions can be
developed.

4.4 KARNAUGH MAPS


A method for graphically determining implicants and implicates of a Boolean
function was developed by Veitch and modified by Karnaugh. The method in-
volves a diagrammatic representation of a Boolean function. This representation is
called a map. The variation on the construction of these maps that was proposed by
Karnaugh is used in this text.
In Sec. 3.4, it was shown that an n-variable complete Boolean function is repre-
sented by a truth table. Since each variable has the value of 0 or 1, the truth table
has 2ⁿ rows. Each row of the truth table consists of two parts: (1) an n-tuple which
corresponds to an assignment to the n variables and (2) a functional value.
A Karnaugh map is a geometrical configuration of 2ⁿ cells such that each of the
n-tuples corresponding to a row of a truth table uniquely locates a cell on the map.
The functional values assigned to the n-tuples are placed as entries in the cells.
Thus, for any n-tuple in which the functional value is 0, a 0 is placed in the associ-
ated cell; while 1's are placed as cell entries for those n-tuples that have functional
values of 1. In this way, the Karnaugh map becomes a diagrammatic representation
of a truth table and, correspondingly, a diagrammatic representation of a Boolean
function.
Significant about the construction of a Karnaugh map is the arrangement of the
cells. Two cells are physically adjacent within the configuration if and only if their
respective n-tuples differ in exactly one element. For example, the cells for the two
3-tuples (0,1,1) and (0,1,0) are physically adjacent on the map since these 3-tuples
differ only in their third element. On the other hand, the cells for the two 3-tuples
(1,0,1) and (1,1,0) are not physically adjacent since these two 3-tuples differ in two
elements; namely, their second and third elements. As a consequence of this adja-
cency property of the cells, each cell on the map must be adjacent to exactly n cells
since for any n-tuple there are n other n-tuples which differ from it by just one
element.
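The adjacency property just stated can be checked directly: two cells are adjacent exactly when their n-tuples differ in one element, so each of the 2ⁿ cells has exactly n neighbors. A sketch:

```python
# Cell adjacency on a Karnaugh map: n-tuples at Hamming distance 1.

from itertools import product

def adjacent(a, b):
    return sum(x != y for x, y in zip(a, b)) == 1

cells = list(product((0, 1), repeat=3))
print(adjacent((0, 1, 1), (0, 1, 0)))   # differ only in the third element
print(adjacent((1, 0, 1), (1, 1, 0)))   # differ in two elements: not adjacent
print(all(sum(adjacent(c, d) for d in cells) == 3 for c in cells))
```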

4.4.1 One-Variable and Two-Variable Maps


The Karnaugh maps for one-variable and two-variable Boolean functions are shown
in Figs. 4.1 and 4.2. These maps consist of 2¹ = 2 and 2² = 4 cells, respectively.
Referring to these figures, the n-tuple associated with each cell is determined using

Figure 4.1 A one-variable Boolean function. (a) Truth table. (b) Karnaugh map.

a coordinate system according to the axes labeling. Thus, the 2-tuple (x,y) = (0,1)
uniquely locates the cell in the first row, second column of Fig. 4.2b. The corre-
sponding functional values appear as entries in the cells. Note that two cells are
physically adjacent on the map if and only if their corresponding n-tuples differ in
exactly one element.

4.4.2 Three-Variable and Four-Variable Maps


A three-variable Karnaugh map has 2³ = 8 cells. To satisfy the adjacency property
of cells, each cell of a three-variable map must be adjacent to exactly three other
cells. To achieve this, a three-variable map is constructed on the surface of a cylin-
der as shown in Fig. 4.3b. In view of the fact that three-dimensional maps are hard
to draw, it is desirable to represent these maps in two dimensions. If the cylinder is
cut along its vertical axis and unrolled, then the two-dimensional map of Fig. 4.3c
results. Although no longer appearing physically adjacent, it must be remembered
that the left and right edges of the map of Fig. 4.3c are really connected, and hence
the cells on the left and right sides are, from an interpretive point of view, still adja-
cent. Particular attention should be given to the labels along the top of the map. In
order to achieve the requirement that two cells are physically adjacent if and only if

Figure 4.2 A two-variable Boolean function. (a) Truth table. (b) Karnaugh map.

Figure 4.3 A three-variable Boolean function. (a) Truth table. (b) Karnaugh map.
(c) Two-dimensional representation of the Karnaugh map.

their respective 3-tuples differ in exactly one element, the labels along the top of the
map are 00, 01, 11, and 10.
To illustrate the mapping of a specific Boolean function, consider the truth
table of Fig. 4.4a. Since this is a three-variable function, the general map structure
of Fig. 4.3c is used. The completed Karnaugh map is shown in Fig. 4.4b.
A four-variable Karnaugh map has 2⁴ = 16 cells in which each cell is adjacent
to exactly four other cells. This is achieved by having the map appear on the surface
of a torus. By making two cuts on the torus and then unrolling it, a two-dimensional

Figure 4.4 An illustrative three-variable Boolean function. (a) Truth table.
(b) Karnaugh map.

Figure 4.5 A four-variable Boolean function. (a) Truth table. (b) Karnaugh map.

representation of the map is obtained. This map is shown in Fig. 4.5b. In this case it
is necessary to keep in mind that, from an interpretive point of view, the left and
right edges are connected and the top and bottom edges are connected. Under these
restrictions it should be noticed that each cell is adjacent to exactly four other cells
and that two cells are physically adjacent if and only if their respective 4-tuples dif-
fer in exactly one element.
A variation on Karnaugh map construction that is occasionally seen is shown in
Fig. 4.6. In this case, the axes are not labeled with the 0 and 1 elements, but rather a
bracket is used to indicate those rows and columns associated with a variable having
an assignment of 1. Thus, the map in Fig. 4.6a is analogous to the map in Fig. 4.3c
and the map in Fig. 4.6b is analogous to the map in Fig. 4.5b.

4.4.3 Karnaugh Maps and Canonical Formulas


In Chapter 3 it was established that each row of a truth table is described alge-
braically by a minterm if complemented variables are associated with the 0 ele-
ments of the n-tuple and uncomplemented variables are associated with the 1 ele-
ments. Consequently, each cell of a Karnaugh map can also be associated with a
minterm. For example, the cell for the first row, fourth column of Fig. 4.4b is associated with the minterm x̄yz̄, as is easily determined by referring to the axes labels.
Since the minterm canonical expression for a function is the sum of those minterms
for which the function has the value of 1, the minterm canonical expression is easily
CHAPTER 4 Simplification of Boolean Expressions 139


Figure 4.6 Karnaugh map variations. (a) Three-variable map. (b) Four-variable map.

read from a map. For the map of Fig. 4.4b, each cell containing a 1 represents a
minterm of the function. Thus, directly from the map we can write

f(x,y,z) = x̄ȳz̄ + x̄yz̄ + xȳz̄ + xȳz


The reverse of this process gives a procedure for obtaining a map directly from a
Boolean function expressed in minterm canonical form. For example, the expression

f(w,x,y,z) = w̄x̄ȳz̄ + w̄x̄ȳz + w̄x̄yz̄ + w̄xȳz̄ + w̄xȳz + wx̄ȳz̄ + wx̄yz̄     (4.1)

is represented on a map by replacing, in each minterm, complemented variables by


0, uncomplemented variables by 1, and then placing a 1 on the map for each 4-tuple
describing a minterm. In the remaining cells, 0 entries are placed. Applying this
process to Eq. (4.1) results in the map of Fig. 4.7.

Figure 4.7 Karnaugh map for a four-variable function.

It was also established in Chapter 3 that each row of a truth table is described
by a maxterm if 0’s of the n-tuples are used to denote uncomplemented variables
and 1’s of the n-tuples are used to denote complemented variables. Correspond-
ingly, each cell of a Karnaugh map can also be associated with a maxterm. The
maxterm canonical expression for a function is obtained by forming the product of
maxterms for those n-tuples in which the function has the value of 0. Thus, for each
cell of a Karnaugh map with a 0 entry, a maxterm can be written. For example, the
map of Fig. 4.4b corresponds to the maxterm canonical expression

f(x,y,z) = (x + y + z̄)(x + ȳ + z̄)(x̄ + ȳ + z)(x̄ + ȳ + z̄)

The reverse of this process permits a Karnaugh map to be formed from a maxterm
canonical expression. If the expression is
f(w,x,y,z) = (w + x + ȳ + z̄)(w + x̄ + ȳ + z)(w + x̄ + ȳ + z̄)
    · (w̄ + x + y + z̄)(w̄ + x + ȳ + z̄)(w̄ + x̄ + y + z)
    · (w̄ + x̄ + y + z̄)(w̄ + x̄ + ȳ + z)(w̄ + x̄ + ȳ + z̄)

then by using 0's for uncomplemented variables and 1's for complemented variables in the maxterms and by entering a 0 on the map for each n-tuple describing a maxterm and a 1 otherwise, the map of Fig. 4.7 results.
Finally, recall that a decimal representation for minterms and maxterms was in-
troduced in Chapter 3. In particular, the decimal equivalent of the n-tuple for each
row of a truth table is used to designate the minterm or maxterm associated with that
row. Thus, each cell of a map can be referenced by a decimal number. Figure 4.8
shows the decimal numbers which define each cell of a Karnaugh map. In this way,
for the Karnaugh map of Fig. 4.4b, the decimal representation of the canonical ex-
pression is written directly as

f(x,y,z) = Σm(0,2,4,5)
or f(x,y,z) = ΠM(1,3,6,7)

Figure 4.8 Karnaugh maps with cells designated by decimal numbers. (a) One-variable map. (b) Two-variable map. (c) Three-variable map. (d) Four-variable map.

In a similar manner, given the canonical expression in decimal form

f(w,x,y,z) = Σm(0,1,2,4,5,8,10)

or f(w,x,y,z) = ΠM(3,6,7,9,11,12,13,14,15)
the map of Fig. 4.7 is constructed.
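Because the decimal designation fixes each cell's n-tuple, the translation from Σm/ΠM form to the canonical formulas is purely mechanical. A small Python sketch illustrates it (the helper names are ours, not the text's; ' denotes a complemented variable):

```python
def minterm_sum(var_names, minterms):
    """Minterm canonical (sum-of-products) formula from decimal minterm
    numbers: a 0 bit gives a complemented variable (written with ')."""
    n = len(var_names)
    terms = []
    for m in minterms:
        bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
        terms.append("".join(v if b else v + "'"
                             for v, b in zip(var_names, bits)))
    return " + ".join(terms)

def maxterm_product(var_names, maxterms):
    """Maxterm canonical (product-of-sums) formula: a 0 bit denotes an
    uncomplemented variable, a 1 bit a complemented one."""
    n = len(var_names)
    factors = []
    for m in maxterms:
        bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
        factors.append("(" + " + ".join(v + "'" if b else v
                                        for v, b in zip(var_names, bits)) + ")")
    return "".join(factors)

# Fig. 4.4b: f(x,y,z) = Σm(0,2,4,5) = ΠM(1,3,6,7)
print(minterm_sum("xyz", [0, 2, 4, 5]))    # x'y'z' + x'yz' + xy'z' + xy'z
print(maxterm_product("xyz", [1, 3, 6, 7]))
```

The bit extraction mirrors the truth-table row numbering: the most significant bit corresponds to the first variable listed.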

4.4.4 Product and Sum Term Representations on Karnaugh Maps
The importance of Karnaugh maps lies in the fact that it is possible to determine
the implicants and implicates of a function from the patterns of 0's and 1's appearing on the maps. A cell of a Karnaugh map with a 1 entry is referred to as a 1-cell and a cell with a 0 entry as a 0-cell. The construction of an n-variable map is such that any set of 1-cells which form a 2^a × 2^b rectangular grouping describes a product term with n − a − b variables, where a and b are nonnegative integers. Rectangular groupings of these dimensions are referred to as subcubes. Since the dimensions of a subcube are 2^a × 2^b, it immediately follows that the total number of cells in a subcube must be a power of two, i.e., 2^(a+b). Thus, for appropriate values of a and b, 2^a × 2^b = 2^(a+b) equals 1, 2, 4, 8, etc. It must be remembered that the three-variable and four-variable Karnaugh maps have certain edges which are considered connected, and hence subcubes may be split when viewed on the two-dimensional representations of the maps. Figures 4.9 to 4.11 illustrate some typical subcubes which represent product terms. Particular attention should be paid to the subcubes of Fig. 4.10d and e. On the surface of a torus, the four 1-cells of these subcubes form 2^1 × 2^1 rectangles. Similarly, the subcube of Fig. 4.9c illustrates a 2^0 × 2^1 rectangle, the subcube of Fig. 4.9d illustrates a 2^1 × 2^0 rectangle, the subcube of Fig. 4.11c illustrates a 2^1 × 2^2 rectangle, and the subcube of Fig. 4.11d illustrates a 2^2 × 2^1 rectangle.
Again consider the map of Fig. 4.9a. Each 1-cell represents a minterm, i.e., w̄xȳz and w̄xyz. Thus, the two 1-cells represent the sum of the two minterms, i.e., w̄xȳz + w̄xyz. Notice that by factoring this expression as w̄xz(ȳ + y), it is equivalent to the single product term w̄xz. The Karnaugh map is constructed such that two cells are adjacent if they differ in exactly one element of their associated n-tuples. Since each n-tuple describes a minterm, the minterms of any two adjacent 1-cells must differ in exactly one literal. The relation AB + ĀB = B can always be applied in this case, where A represents a single variable and B represents a product of n − 1 variables. Thus, any pair of adjacent 1-cells represents a product term with one variable eliminated. Pairs of adjacent 1-cells are always subcubes of dimensions 2^0 × 2^1 or 2^1 × 2^0.
Given an n-variable map with a pair of adjacent 1-cells, the product term of
n − 1 variables can be read directly from the map. This is done by referring to the labels along its axes. The n − 1 variables of the product term are precisely those n − 1 variables whose values are the same for each cell in the subcube.
Then, as was done for minterms, the variable in the product term is comple-
mented if its value is always 0 in the subcube and is uncomplemented if its value
is always 1.

Figure 4.9 Typical map subcubes for the elimination of one variable in a product term. (a) w̄xz. (b) xyz. (c) wxz. (d) xyz.

With reference to Fig. 4.9a and the labels on the map's axes, the variable w has the value of 0 for both 1-cells and the variable x has the value of 1 for both 1-cells. Since these variables keep the same value for all cells in the subcube, they must appear in the product term as w̄x. In addition, the subcube occurs in the two center columns of the map. As indicated by the map labels in these two columns, the y variable changes value while for both 1-cells the z variable has the value of 1. Thus, the y variable is the one that is eliminated as a consequence of the cell adjacencies, and the product term has the literal z as a consequence of the z variable having the same value for both 1-cells. Combining the results, the subcube of Fig. 4.9a corresponds to the product term w̄xz. In a similar manner, the product terms corresponding to the other subcubes in Fig. 4.9 are written.
Just as a subcube consisting of two 1-cells corresponds to a product term with n − 1 variables, any subcube of four 1-cells represents a product term with two variables less than the number of variables associated with the map. To illustrate this, consider the four 1-cells of Fig. 4.10a. Algebraically these four cells correspond to the expression w̄xȳz̄ + w̄xȳz + wxȳz̄ + wxȳz. The following algebraic manipulations can now be performed:

w̄xȳz̄ + w̄xȳz + wxȳz̄ + wxȳz = w̄xȳ(z̄ + z) + wxȳ(z̄ + z)
    = w̄xȳ + wxȳ
    = (w̄ + w)xȳ
    = xȳ

Figure 4.10 Typical map subcubes for the elimination of two variables in a product term. (a) xȳ. (b) yz. (c) wx. (d) wz. (e) xz.

By inspecting the axes labels for these four cells, the product term is written directly. The subcube appears in the two center rows of the map, from which it is seen that the variable w changes value (and hence is eliminated) while the variable x has the value of 1. Thus, the literal x appears in the product term. Furthermore, the subcube appears in the first two columns of this map, from which it is seen that y has the value of 0 while z changes value (and hence is eliminated). This implies the

Figure 4.11 Typical map subcubes for the elimination of three variables in a product term. (a) w. (b) z. (c) x̄. (d) z̄.

product term has the literal ȳ. Combining the results, it is concluded that this subcube is associated with the product term xȳ.
As a final illustration, consider Fig. 4.10b. It is first noted that no row variables have the same value for every 1-cell of the columnar subcube. Thus, neither the w nor the x variable appears in the product term. Furthermore, since both the y and z variables have the value of 1 for all 1-cells of the subcube, the resulting product term is yz.
Summarizing, any rectangular grouping of 1-cells on an n-variable map having dimensions 2^a × 2^b consists of 2^(a+b) 1-cells and represents a product term with n − a − b variables, where a and b are nonnegative integers. The corresponding product term has an uncomplemented variable if the variable has the value of 1 for every 1-cell associated with the subcube and a complemented variable if the variable has the value of 0. The variables that are eliminated correspond to those that change values.
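This summary amounts to a simple rule: keep exactly the variables whose bit is constant over the subcube. A Python sketch of that rule follows (our own helper; note it checks only that the cell count is a power of two, not full rectangularity):

```python
def subcube_term(var_names, cells):
    """Read the product term for a subcube of 1-cells given as decimal
    cell numbers: keep each variable whose value is the same in every
    cell, complemented (') if that constant value is 0."""
    n = len(var_names)
    assert (len(cells) & (len(cells) - 1)) == 0, \
        "a subcube holds a power-of-two number of cells"
    term = ""
    for i, v in enumerate(var_names):
        bits = {(c >> (n - 1 - i)) & 1 for c in cells}
        if bits == {1}:
            term += v            # constant at 1 -> uncomplemented literal
        elif bits == {0}:
            term += v + "'"      # constant at 0 -> complemented literal
        # value changes over the subcube -> variable eliminated
    return term

# Fig. 4.10a: cells {4,5,12,13} (w changes, x = 1, y = 0, z changes)
print(subcube_term("wxyz", {4, 5, 12, 13}))  # xy'
```

Applied to the subcube of Fig. 4.10a this reproduces xȳ, in agreement with the algebraic derivation above.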
The above discussion was concerned with the recognition of product terms on a
Karnaugh map and how they are read directly. A similar discussion applies to

Figure 4.12 Typical map subcubes describing sum terms. (a) w + x̄ + y. (b) x + y. (c) y.

0-cells. Any 2^a × 2^b rectangular grouping, i.e., subcube, of 0-cells on an n-variable map represents a sum term with n − a − b variables, where a and b are nonnegative integers. The sum term is read directly from the map by noting which variables do not change values for the 0-cells of the subcube. As was the case for single 0-cells representing maxterms, a 0 along the axis denotes an uncomplemented variable, and a 1 along the axis denotes a complemented variable. Figure 4.12 shows the subcubes for some typical sum terms. That two adjacent 0-cells describe a sum term with one variable eliminated follows from the relation (A + B)(Ā + B) = B, where A represents a single variable and B represents a sum of n − 1 variables.
In closing, one final point should be emphasized. By definition of a subcube, it must have dimensions 2^a × 2^b, which in turn implies that it must consist of a power-of-two number of cells. Thus, at no time is a non-power-of-two grouping of cells, e.g., 3 cells, ever considered a subcube.
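Both merging identities quoted in this section, AB + ĀB = B for adjacent 1-cells and its dual (A + B)(Ā + B) = B for adjacent 0-cells, can be confirmed exhaustively. The snippet below is our own sanity check, not from the text:

```python
from itertools import product

# Exhaustive check of the two adjacency identities behind subcube reading:
# AB + A'B = B (adjacent 1-cells) and (A + B)(A' + B) = B (adjacent 0-cells).
for A, B in product([0, 1], repeat=2):
    sop = (A and B) or ((1 - A) and B)      # AB + A'B
    pos = (A or B) and ((1 - A) or B)       # (A + B)(A' + B)
    assert bool(sop) == bool(B)
    assert bool(pos) == bool(B)
print("both identities hold for all values of A and B")
```

Since B may stand for any product (or sum) of n − 1 variables, checking the two-variable case suffices.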

4.5 USING KARNAUGH MAPS TO OBTAIN MINIMAL EXPRESSIONS FOR COMPLETE BOOLEAN FUNCTIONS
In the previous section it was shown that certain rectangular configurations of
1-cells on a Karnaugh map represent a single product term. Similarly, it was shown
that certain rectangular configurations of 0-cells represent a single sum term. At this
time, the simplification problem is attacked.

4.5.1 Prime Implicants and Karnaugh Maps


Consider a Karnaugh map for a Boolean function. Every 2^a × 2^b rectangular grouping, i.e., subcube, of 1-cells represents a product term. Since each term equals 1 for those n-tuples included in the subcube, each product term implies the function and,
hence, is an implicant of the function. Now assume a set of subcubes is selected

Figure 4.13 Illustrating the concept of prime implicants on a Karnaugh map.

such that every 1-cell, and no 0-cells, is included in at least one of these subcubes. The corresponding product terms are all implicants of the function. Furthermore, by summing the product terms associated with this set of subcubes, an algebraic expression is obtained that describes the function. One obvious case is if the individual 1-cells themselves are selected as 2^0 × 2^0 subcubes. Then, the resulting expression is the minterm canonical formula.
Although all the implicants of a function can be determined using a Karnaugh map, it is the prime implicants that are of particular interest, as is evident from Theorems 4.1 and 4.2 in Sec. 4.2. Recalling from Sec. 4.2, a prime implicant is an implicant of a function that subsumes no smaller implicant that implies the same function. The question can now be asked: How is this related to the subcubes on the Karnaugh map?
To answer this question, consider the map shown in Fig. 4.13. The 2^0 × 2^0 subcube labeled Ⓐ corresponds to an implicant of the function; namely, the minterm x̄yz. However, the cell associated with subcube Ⓐ can also be grouped with the cell below it to form the 2^1 × 2^0 subcube Ⓑ. The corresponding product term for this subcube is yz and is also an implicant of the function. Note that subcube Ⓐ is totally contained within subcube Ⓑ. Furthermore, the term x̄yz subsumes the term yz and, hence, x̄yz is not a prime implicant. As is seen by this illustration, as a subcube gets larger, the corresponding product term gets smaller, i.e., has fewer literals. In addition, if one subcube is totally contained within another subcube, then the literals associated with the product term of the larger subcube are always a subset of the literals associated with the product term of the smaller subcube. Again consider the map of Fig. 4.13. It can now be concluded that a smaller term than yz, which is subsumed by yz, requires a subcube of 1-cells that totally contains subcube Ⓑ. However, since the next allowable larger size subcube of 1-cells must consist of four 1-cells,* it is seen that no such subcube is possible in Fig. 4.13. Hence, this leads to

*Recall that a 2^a × 2^b rectangular grouping must always consist of a power-of-two number of cells. Furthermore, all references to subcubes on a Karnaugh map imply that they have the dimensions 2^a × 2^b.

the conclusion that yz is a prime implicant of the function. By a similar argument, subcube Ⓒ corresponds to another prime implicant of the function; namely, x̄y. It should be noted that a 1-cell can be used for more than one subcube. In general, any subcube of 1-cells that cannot be totally contained within another subcube of 1-cells corresponds to a prime implicant.
Continuing the analysis of Fig. 4.13, a disjunctive normal formula of the function is given by f(x,y,z) = yz + x̄y since all the 1-cells appear in at least one subcube. The two product terms in this expression are the only prime implicants of this function. Knowing there is at least one minimal disjunctive normal formula that is the sum of prime implicants (by Theorem 4.1) and observing that neither term may be dropped from the expression without changing the function being described, i.e., having all 1-cells appear in at least one subcube, the expression f(x,y,z) = yz + x̄y must be a minimal disjunctive normal formula. Hence, it is seen that a minimal expression is obtained by use of Karnaugh maps. It might be noticed that this expression may be factored to yield f(x,y,z) = yz + x̄y = y(x̄ + z). Using the count of the total number of gate inputs in a two-level realization as the criterion of minimality,* the first expression has a cost of 6, while the factored expression has a cost of 4. However, it should also be realized that the first expression is a disjunctive normal formula, i.e., consists of a sum of product terms, while the factored expression is a conjunctive normal formula, i.e., consists of a product of sum terms. For now, only minimal disjunctive normal formulas are of interest. Minimal conjunctive normal formulas are discussed later in this section.
A general procedure can now be stated for determining the set of all prime implicants of a Boolean function from a Karnaugh map. For an n-variable map, if all 2^n entries are 1's, then the function is identically equal to 1. This 1 is the only prime implicant of the function. If all 2^n entries are not 1's, then a search is made for all subcubes of 1-cells with dimensions 2^a × 2^b = 2^(n−1). Each of these subcubes represents a 1-variable term. Since no two different subcubes can represent the same term, and since all cells in the subcube are 1-cells, they must describe prime implicants. Next, a search is made for all subcubes of 1-cells with dimensions 2^a × 2^b = 2^(n−i) for i = 2, such that no subcube is totally contained within a single previously obtained subcube. Each of these subcubes represents an i-variable product term which implies the function. Since no subcube is totally contained within a single previously obtained subcube, its associated product term does not subsume any previously determined product term and hence is a prime implicant. This process is repeated for i = 3, 4, . . . , n. The product terms established are the prime implicants. Furthermore, these are all the prime implicants since any other subcube of 1-cells must be totally contained within at least one of the subcubes already obtained. A flowchart for this algorithm is shown in Fig. 4.14.

* As indicated in Sec. 4.1, the number of gate inputs in a two-level realization of a normal formula is given by the sum of the number of literals and the number of terms having more than 1 literal in the expression and then subtracting 1 if the expression consists of only a single term.

Figure 4.14 Algorithm to find all prime implicants.
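For small n, the search that the flowchart describes can also be carried out by brute force over all candidate product terms. The sketch below is our own code, mirroring the flowchart's intent rather than its map-inspection steps; a term is a tuple with one entry per variable, where None marks an eliminated variable:

```python
from itertools import product

def prime_implicants(n, ones):
    """Brute-force the prime implicants of an n-variable function given
    its set of true minterm numbers."""
    ones = set(ones)

    def cells(term):
        # All minterm numbers covered by the term.
        free = [i for i, t in enumerate(term) if t is None]
        out = set()
        for fill in product([0, 1], repeat=len(free)):
            bits = list(term)
            for i, b in zip(free, fill):
                bits[i] = b
            out.add(sum(b << (n - 1 - i) for i, b in enumerate(bits)))
        return out

    implicants = [t for t in product((0, 1, None), repeat=n)
                  if cells(t) <= ones]

    def weaken(t, i):               # drop variable i from the term
        return t[:i] + (None,) + t[i + 1:]

    # A term is prime when no variable can be dropped and still imply f.
    return {t for t in implicants
            if not any(t[i] is not None and cells(weaken(t, i)) <= ones
                       for i in range(n))}

def to_str(term, names):
    return "".join(v if b else v + "'"
                   for v, b in zip(names, term) if b is not None) or "1"

# f(x,y,z) = Σm(0,1,5,7), the function of Fig. 4.15:
print(sorted(to_str(t, "xyz") for t in prime_implicants(3, [0, 1, 5, 7])))
```

Dropping a variable enlarges the subcube, so primality here is exactly the "not totally contained in a larger implicant subcube" test of the text.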

To illustrate the above procedure, consider the 3-variable function f(x,y,z) = Σm(0,1,5,7). The map for this function is shown in Fig. 4.15. To determine the prime implicants, note first that there is no subcube of 1-cells consisting of 2^3 = 8 cells. That is, this function is not identically equal to 1. The next step is to find all subcubes of 1-cells consisting of 2^2 = 4 cells. Again there are no such subcubes. A search is next conducted for subcubes of 1-cells consisting of 2^1 = 2 cells. There are three such subcubes on the map. Subcube Ⓐ represents the term x̄ȳ, subcube Ⓑ the term xz, and subcube Ⓒ the term ȳz. All three subcubes represent prime implicants. It should be noted, in particular, that subcube Ⓒ represents a prime implicant. It is true that subcube Ⓒ is partially contained in both subcube Ⓐ and subcube Ⓑ; however, it is not totally contained in either of these subcubes. Hence, the product term associated with subcube Ⓒ does not subsume either of the other two product terms. Finally, there are no subcubes of 1-cells that consist of 2^0 = 1 cell and that are not contained in one of the previous three subcubes. Thus, there are three prime implicants of this function. It should be noted that all the 1-cells are contained in some subcube if just the subcubes Ⓐ and Ⓑ are considered. Consequently, the function is describable by the expression f(x,y,z) = x̄ȳ + xz. It can be concluded from this example that not all the prime implicants are needed, in general, to obtain a minimal disjunctive normal formula.

Figure 4.15 Karnaugh map for f(x,y,z) = Σm(0,1,5,7).
As another example of determining the prime implicants of a function from a Karnaugh map, consider the function shown in Fig. 4.16. The largest subcubes of 1-cells forming a 2^a × 2^b = 2^(a+b) rectangle are subcubes Ⓐ and Ⓑ, each of which consists of 2^2 = 4 1-cells. These subcubes represent the terms w̄z and w̄y, respectively. Next it is necessary to find all subcubes of 1-cells that consist of 2^1 = 2 cells and that are not entirely contained in any other single subcube already found. Only subcube Ⓒ, which represents the term xȳz, satisfies this condition. Finally, all remaining 1-cells (subcubes of 2^0 = 1 cell) not contained in some already found subcube correspond to prime implicants. In this case, subcube Ⓓ establishes that the term wx̄ȳz̄ is a prime implicant. For this function there are four prime implicants, all of which are necessary for the minimal disjunctive normal formula.
With practice it becomes relatively easy to recognize the prime implicants on a
Karnaugh map. More important still, however, is the fact that an optimum set of
prime implicants is fairly evident by inspection.
Before continuing with the discussion of minimal expressions, a simple obser-
vation can be made. If a function is described by a Boolean formula not in canonical
form, then it is possible to draw the Karnaugh map without having to first obtain the

Figure 4.16 Karnaugh map for f(w,x,y,z) = Σm(1,2,3,5,6,7,8,13).

Figure 4.17 Using the Karnaugh map in reverse.

canonical formula or constructing its truth table. The map is obtained by first manipulating the expression into sum-of-products form and then placing 1's in the appropriate cells for each product term. For example, consider the function

f(w,x,y,z) = xȳ + wxz + wxyz

and its representation on a four-variable map. If the term xȳ were to be read from the map, then the subcube would have to encompass those 1-cells in which the variable x is 1 and the variable y is 0. It is the second and third rows of a four-variable map in which the variable x has the value 1; while the variable y has the value 0 in the first and second columns. The intersection of these rows and columns corresponds to the 1-cells for this term and appears as the subcube of four 1-cells in Fig. 4.17. In a similar manner, the other two subcubes in the figure represent the terms wxz and wxyz. Since these terms also imply the function, the corresponding subcubes locate 1-cells. All the remaining cells are then filled in with 0's.
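Using the map in reverse amounts to intersecting the rows and columns selected by each literal of a term. A small sketch of that computation (our own convention: variable position maps to its required bit):

```python
def term_cells(n, fixed):
    """Decimal cell numbers covered by a product term on an n-variable
    map. `fixed` maps a variable's bit position (0 = most significant)
    to the value its literal requires; unlisted variables are free."""
    cells = []
    for m in range(2 ** n):
        bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
        if all(bits[i] == b for i, b in fixed.items()):
            cells.append(m)
    return cells

# The term xy' on (w,x,y,z): x = 1 (position 1), y = 0 (position 2).
print(term_cells(4, {1: 1, 2: 0}))  # [4, 5, 12, 13]
```

Taking the union of the cell lists of all terms, marking those cells 1 and every other cell 0, reproduces the map.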

4.5.2 Essential Prime Implicants


A minimal disjunctive normal formula describing a function consisting of prime
implicants is referred to as a minimal sum. Similarly, a minimal product is a mini-
mal conjunctive normal formula describing a function consisting of prime impli-
cates. As indicated by the statement of the simplification problem, our objective is
to determine minimal sums and minimal products.
Using the decimal notation of Fig. 4.8 to refer to cells of a Karnaugh map, again consider Fig. 4.15 in which all the prime implicants are shown. It is observed that some 1-cells may appear in only one prime implicant subcube, e.g., cells 0 and 7; while other 1-cells may appear in more than one prime implicant subcube, e.g., cells 1 and 5. A 1-cell that can be in only one prime implicant subcube is called an essential 1-cell and the corresponding prime implicant is called an essential prime implicant. A test for an essential 1-cell is that all possible subcubes consisting of that 1-cell are a subset of the single largest subcube (which corresponds to a prime

implicant) in the collection. Thus, in Fig. 4.15, cells 0 and 7 are essential 1-cells and x̄ȳ and xz are essential prime implicants. Similarly, in Fig. 4.16, cells 1, 2, 6, 8, and 13 are essential 1-cells. All the prime implicants of Fig. 4.16 are essential prime implicants. It should be noted in Fig. 4.16 that essential 1-cells 2 and 6 are both associated with the same prime implicant.
What is significant about essential prime implicants is that every essential
prime implicant of a function must appear in all the irredundant disjunctive normal
formulas of the function and, hence, in a minimal sum. That this is true follows
from the fact that each essential prime implicant has at least one 1-cell that is asso-
ciated with no other prime implicant. Certainly that particular 1-cell represents an n-
tuple for which the function is 1; and the corresponding essential prime implicant is
the only prime implicant that equals 1 for this n-tuple. Since an irredundant disjunctive normal formula consists of a sum of prime implicants and the expression must equal 1 for all n-tuples in which the function is 1, the essential prime implicants are
necessary in forming the irredundant expressions.
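The definition translates directly into a test: a prime implicant is essential when its subcube owns at least one 1-cell exclusively. A sketch of that test (helper names are ours; the cover sets used are those of Fig. 4.15):

```python
def essential_primes(primes, cover):
    """primes: prime implicant names in order; cover: name -> set of the
    minterm numbers in its subcube. A prime implicant is essential when
    some 1-cell lies in its subcube and in no other prime's subcube."""
    essential = []
    for p in primes:
        others = set().union(*(cover[q] for q in primes if q != p))
        if cover[p] - others:           # p owns at least one 1-cell
            essential.append(p)
    return essential

# Fig. 4.15: x'y' covers {0,1}, y'z covers {1,5}, xz covers {5,7}
cover = {"x'y'": {0, 1}, "y'z": {1, 5}, "xz": {5, 7}}
print(essential_primes(list(cover), cover))
```

For the Fig. 4.15 data this reports x'y' (owning cell 0) and xz (owning cell 7), while y'z is not essential since cells 1 and 5 are each shared.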

4.5.3 Minimal Sums


A general approach for determining a minimal sum can now be stated. First, a map
of the function is drawn. The essential prime implicants are next determined by de-
tecting essential 1-cells. This can be done either by using the test for essential
1-cells described above or by first determining all the prime implicants on the map
and then noting which ones contain essential 1-cells. If all the subcubes established
at this point encompass all the 1-cells of the map, then the minimal sum is simply the sum of the essential prime implicants. However, if there remain 1-cells that are not included in some subcube, then additional subcubes (representing prime implicants) must be selected to include the remaining 1-cells. As guiding rules, these additional subcubes should be as large as possible (i.e., contain as many 1-cells as possible and still satisfy the constraint that there be a power-of-two number of cells), and the number of additional subcubes should be as few as possible. The sum of the terms associated with all the subcubes selected is the minimal sum.
Several examples are now presented to illustrate the determination of minimal
sums from Karnaugh maps.
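The guiding rules above, essential prime implicants first and then the fewest additional subcubes, can be sketched as an exhaustive search. This is our own code (practical only for small maps), reusing the tuple representation of terms with None for an eliminated variable:

```python
from itertools import combinations, product

def cover(term, n):
    """Minterm numbers covered by a product term written as a tuple of
    0, 1, or None (None = variable eliminated from the term)."""
    free = [i for i, t in enumerate(term) if t is None]
    out = set()
    for fill in product([0, 1], repeat=len(free)):
        bits = list(term)
        for i, b in zip(free, fill):
            bits[i] = b
        out.add(sum(b << (n - 1 - i) for i, b in enumerate(bits)))
    return out

def minimal_sum(n, ones, primes):
    """Select the essential prime implicants, then cover the leftover
    1-cells with the fewest additional prime implicants."""
    ones, primes = set(ones), set(primes)
    essential = {p for p in primes
                 if any(all(m not in cover(q, n) for q in primes if q != p)
                        for m in cover(p, n))}
    left = ones - set().union(*(cover(p, n) for p in essential))
    rest = list(primes - essential)
    for k in range(len(rest) + 1):          # fewest extra subcubes first
        for extra in combinations(rest, k):
            if left <= set().union(*(cover(p, n) for p in extra)):
                return essential | set(extra)

# Prime implicants of f(x,y,z) = Σm(0,1,5,7): x'y', y'z, xz (Fig. 4.15)
primes = {(0, 0, None), (None, 0, 1), (1, None, 1)}
print(minimal_sum(3, [0, 1, 5, 7], primes) ==
      {(0, 0, None), (1, None, 1)})        # True: minimal sum x'y' + xz
```

Note that minimizing the number of subcubes alone does not weigh literal counts; for functions where several covers of equal size exist, each corresponds to one of the (possibly multiple) minimal sums.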

Consider the function


f(w,x,y,z) = w̄x̄ + w̄xz + yz

Using the Karnaugh map in reverse, each term is placed on the map as explained
previously. The resulting map is shown in Fig. 4.18a, where the indicated sub-
cubes correspond to the three product terms of the given function.* To obtain a
minimal sum, it is necessary to determine the essential prime implicants by first

*These subcubes are not necessarily the subcubes associated with a minimal sum.
Figure 4.18 Example 4.1.

detecting essential 1-cells. Consider cell 0. It is observed that all subcubes incorporating cell 0 are a subset of the single 2^0 × 2^2 subcube consisting of the first row of cells. Thus, cell 0 is an essential 1-cell, indicated by an asterisk in Fig. 4.18b, and is associated with the prime implicant w̄x̄. Alternatively, it should be noted that cell 2 is also an essential 1-cell for the same term. Next, it is noted that cell 5 is an essential 1-cell and is associated with the prime implicant w̄z. Finally, it is observed that cell 11 is an essential 1-cell (as well as cell 15) and is associated with the prime implicant yz. It is now seen in Fig. 4.18b that all the 1-cells are included in some subcube. Thus, the minimal sum consists of the three essential prime implicants; i.e.,

f(w,x,y,z) = w̄x̄ + w̄z + yz

EXAMPLE 4.2
Consider the function

f(w,x,y,z) = Σm(0,1,2,4,5,7,9,12)
shown in Fig. 4.19. The essential 1-cells are indicated by asterisks, and the subcubes
associated with the essential prime implicants are also shown. Since all 1-cells are
covered, the minimal sum is
f(w,x,y,z) = w̄x̄z̄ + w̄xz + x̄ȳz + xȳz̄

Note that if the essential 1-cells are not grouped first, then it would be tempting to group the four 1-cells in the upper left corner of the map. If this is done, then it might erroneously be believed that the term w̄ȳ, which is a prime implicant, should be in the minimal sum.

Figure 4.19 Example 4.2.
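A minimal sum can always be validated by expanding it back to a minterm list and comparing with the original decimal specification. The check below is our own; the lambda encodes the Example 4.2 minimal sum as we read it, with ' denoting complement:

```python
# Expand w'x'z' + w'xz + x'y'z + xy'z' back to its minterm numbers and
# confirm it describes Σm(0,1,2,4,5,7,9,12).
f = lambda w, x, y, z: (((not w) and (not x) and (not z)) or
                        ((not w) and x and z) or
                        ((not x) and (not y) and z) or
                        (x and (not y) and (not z)))
ones = sorted(i for i in range(16)
              if f((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1))
print(ones)  # [0, 1, 2, 4, 5, 7, 9, 12]
```

Each of the four product terms contributes exactly the pair of minterms of its two-cell subcube, and the pairs are disjoint here, so the expansion recovers all eight 1-cells.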

EXAMPLE 4.3
Consider the map in Fig. 4.20. There are four prime implicants of this function. These are indicated by the four subcubes in the figure. Two of the prime implicants are essential since they include the essential 1-cells indicated by asterisks. The essential prime implicants are ȳz̄ and yz. After these two subcubes are formed, there is only one 1-cell that must still be grouped (cell 6). This cell can be placed in either the subcube representing the term xy or the subcube representing the term xz̄, both of which are indicated as dashed subcubes. Since only one of the two dashed subcubes is needed to complete the covering of all the 1-cells, there are two minimal sums for this function

f(x,y,z) = ȳz̄ + yz + xy
and f(x,y,z) = ȳz̄ + yz + xz̄

Figure 4.20 Example 4.3.

EXAMPLE 4.4

Consider the map in Fig. 4.21. First the essential prime implicants are determined. Cells 0, 2, and 9 are the only essential 1-cells. Cells 0 and 2 belong to the essential prime implicant w̄z̄; while cell 9 belongs to the essential prime implicant wȳz. At

Figure 4.21 Example 4.4.

this point, three 1-cells still need to be grouped, namely, cells 7, 12, and 15. Since none of these cells are essential 1-cells, the constraint of using as few subcubes as possible and keeping the subcubes as large as possible is applied. This suggests that cells 7 and 15 should be grouped together, which corresponds to the prime implicant xyz. The remaining 1-cell (cell 12) can be grouped with either the cell above it or the cell next to it as indicated by the dashed subcubes in the figure. Only one of these subcubes is needed to complete the covering of all the 1-cells. Thus, there are two minimal sums

f(w,x,y,z) = w̄z̄ + wȳz + xyz + xȳz̄
and f(w,x,y,z) = w̄z̄ + wȳz + xyz + wxȳ

EXAMPLE 4.5

Consider the map in Fig. 4.22. None of the 1-cells are essential 1-cells and hence there are no essential prime implicants. Careful analysis reveals that there are six prime implicants for this function. These correspond to the six subcubes shown collectively in Fig. 4.22a and b. However, with the interest of using a minimum number of prime implicants, either of the groupings shown in Fig. 4.22a or b suggests a minimal sum. Thus, both of the corresponding expressions are minimal sums.

Figure 4.22 Example 4.5.

4.5.4 Minimal Products


To determine a minimal product, i.e., a conjunctive normal formula describing a function that is minimal according to some cost criterion, a procedure similar to the above is applied to the 0-cells. In the previous section it was stated that a 2^a × 2^b rectangular grouping of 0-cells corresponds to a sum term. Applying the principle of duality to the above discussion in this section, it follows that each of these sum terms is an implicate of a function. Thus, by applying the techniques that have been presented to the 0-cells, prime implicates and minimal products are determined. In particular, all the 0-cells, and no 1-cells, must be grouped at least once while satisfying the constraints of using the largest possible and the fewest number of groupings, i.e., subcubes. To read the sum terms corresponding to the subcubes directly from the map, again it is necessary to observe which labels do not change value. In this case, a 0 along the axis of the map denotes an uncomplemented variable and a 1 along the axis denotes a complemented variable.
The map of Fig. 4.23 represents the function

f(w,x,y,z) = Σm(1,3,4,5,6,7,11,14,15)

The minimal sum, from Fig. 4.23a, is

f(w,x,y,z) = w̄x + w̄z + xy + yz

Figure 4.23 Map for the function f(w,x,y,z) = Σm(1,3,4,5,6,7,11,14,15).

Figure 4.24 Example 4.4.

Using the cost criterion of number of gate input terminals, the minimal sum has a
cost of 12. Now consider the 0-cells of this function and Fig. 4.23b. Cell 0, as well
as cells 2 and 10, is an essential 0-cell since all subcubes involving this cell are con-
tained within the 2^1 × 2^1 subcube labeled ①.* The corresponding essential prime
implicate is (x + z). Similarly, cell 13, as well as cells 9 and 12, is an essential
0-cell. Thus, the subcube labeled ② corresponds to the essential prime implicate
(w' + y). Since all the 0-cells are now included in at least one subcube, the minimal
product for this function is

f(w,x,y,z) = (x + z)(w' + y)

The cost of this expression is 6 using the cost criterion of number of gate input ter-
minals.
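Two-level expressions of this kind are easy to check mechanically. The following Python sketch (illustrative only, not from the text) assumes the minimal sum w'x + w'z + xy + yz and the minimal product (x + z)(w' + y), with a prime denoting a complemented variable:

```python
# Verify that the assumed minimal sum and minimal product of
# f(w,x,y,z) = Σm(1,3,4,5,6,7,11,14,15) agree with the minterm list,
# then compare their gate-input costs.
MINTERMS = {1, 3, 4, 5, 6, 7, 11, 14, 15}

def f_sum(w, x, y, z):      # minimal sum: w'x + w'z + xy + yz
    return ((not w) and x) or ((not w) and z) or (x and y) or (y and z)

def f_prod(w, x, y, z):     # minimal product: (x + z)(w' + y)
    return (x or z) and ((not w) or y)

for n in range(16):
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert bool(f_sum(w, x, y, z)) == (n in MINTERMS)
    assert bool(f_prod(w, x, y, z)) == (n in MINTERMS)

# Gate-input cost: one input per literal, plus one input per term
# feeding the second-level gate.
cost_sum = 8 + 4    # four 2-literal AND terms into a 4-input OR gate
cost_prod = 4 + 2   # two 2-literal OR terms into a 2-input AND gate
print(cost_sum, cost_prod)   # 12 6
```

The exhaustive loop is practical here only because a 4-variable function has 16 rows; the point is that both expressions, though structurally different, describe the same function.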
When a minimal two-level gate network is to be realized, it is necessary to de-
termine both the minimal sum and the minimal product of the function. Although
both expressions are equivalent, in general one form has a lower cost than the other.
There is no way to determine which form has the lower cost until both the minimal
sum and minimal product are obtained. However, since the minimal sum and mini-
mal product can both be determined from the same Karnaugh map, it is not difficult
to obtain both expressions. For the function of Fig. 4.23, a minimal two-level gate
network would be realized from the minimal product because of its lower cost.
As a second example, consider again the map of Fig. 4.21. This map is redrawn
in Fig. 4.24. All of the essential 0-cells are indicated with asterisks. These three cells
are used to determine the three essential prime implicates. At this point there remain
two 0-cells still ungrouped, i.e., cells 3 and 11. In an effort to use as few subcubes as
possible, they are grouped together. Thus, for this function the minimal product is

f(w,x,y,z) = (w + y + z)(w + y + z)(w + x + z)(x + y + z)

*Recall that for a 4-variable map, the top and bottom edges as well as the left and right edges are
connected to form a torus.

In this case the minimal sum, having a cost of 15, has a lower cost than the minimal
product, having a cost of 16. It is interesting to note that for this example, there are
two minimal sums but there is only one minimal product.
There is a slight variation to the above procedure for obtaining minimal prod-
ucts that can be used instead. Recall that the complement of a function is obtained
by replacing all 0 functional values by 1’s and all 1 functional values by 0’s. Thus,
if the 0-cells of a Karnaugh map are grouped and product terms are written for the
groupings (where, since product terms are being written, uncomplemented variables
correspond to 1 labels on the axes of the map and complemented variables corre-
spond to 0 labels on the axes), then a minimal sum for the complement function is
obtained. By applying DeMorgan’s law to the expression, and thereby complement-
ing it, the resulting product of sum terms corresponds to the minimal product of the
original function. Thus, for the map in Fig. 4.24 the minimal sum for the comple-
ment function, found by grouping the 0’s and writing product terms, is

f'(w,x,y,z) = wyz + wyz + wxz + xyz


Applying DeMorgan’s law, the minimal product for the original function results,
i.e.,

f(w,x,y,z) = [f'(w,x,y,z)]' = (w + y + z)(w + y + z)(w + x + z)(x + y + z)
which is the minimal product obtained previously.
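The complement-and-DeMorgan route can be checked concretely on the function of Fig. 4.23, whose 0-cells are {0, 2, 8, 9, 10, 12, 13}. The following is an assumed Python illustration, not the author's material; it takes the complement's minimal sum to be x'z' + wy' (primes denoting complemented variables):

```python
# The 0-cells of f(w,x,y,z) = Σm(1,3,4,5,6,7,11,14,15) are the minterms of
# its complement.  Grouping them gives the complement's minimal sum
# f' = x'z' + wy'; DeMorgan's law then yields the minimal product
# (x + z)(w' + y) of f itself.
MINTERMS = {1, 3, 4, 5, 6, 7, 11, 14, 15}
ZEROS = set(range(16)) - MINTERMS          # {0, 2, 8, 9, 10, 12, 13}

def f_complement_sum(w, x, y, z):          # x'z' + wy'
    return ((not x) and (not z)) or (w and (not y))

def f_min_product(w, x, y, z):             # DeMorgan of the line above
    return (x or z) and ((not w) or y)

for n in range(16):
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert bool(f_complement_sum(w, x, y, z)) == (n in ZEROS)
    # complementing the complement recovers the original function
    assert bool(f_min_product(w, x, y, z)) == (not f_complement_sum(w, x, y, z))
```

Each product term of f' becomes one sum-term factor of the minimal product, with every literal complemented.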
One final point should be mentioned before closing this section. It was stated
earlier that when a Boolean function is described by a disjunctive normal formula,
the Karnaugh map is easily constructed by reversing the reading process, i.e., enter-
ing 1’s in those cells corresponding to the various product terms. A similar process
can be applied if a function is initially described by a conjunctive normal formula.
That is, for each sum term the corresponding 0’s are entered into the map. Once the
map is constructed, the minimal sum and minimal product are determined.

4.6 MINIMAL EXPRESSIONS OF INCOMPLETE BOOLEAN FUNCTIONS
It was shown in Sec. 3.8 that incomplete Boolean functions can arise in logic de-
sign. Truth tables in such cases contain dashed functional entries indicating don’t-
care conditions. Since a Karnaugh map is a diagrammatic representation of a truth
table, incomplete Boolean functions result in Karnaugh maps containing three possible entries: 0's, 1's, and –'s.*
minimal products, it is necessary to slightly modify the procedures introduced so as
to handle the occurrences of dashed entries. Cells containing dashed entries on a
Karnaugh map are referred to as don’t-care cells.

*In this text, dashes are used in a map to signify don’t-care functional values to be consistent with the
entries in the truth table. Frequently, however, d's, X's, or φ's are used in a Karnaugh map to denote
don't-care conditions.

The prime implicants of an incomplete Boolean function are the prime impli-
cants of the complete Boolean function obtained by regarding all the don’t-care
conditions as having functional values of 1. Similarly, the prime implicates of an
incomplete Boolean function are the prime implicates of the complete Boolean
function in which all the don’t-care conditions are regarded as having 0 functional
values. Accepting these conclusions, it is a simple matter to obtain minimal sums
and minimal products for incomplete Boolean functions from a Karnaugh map.

4.6.1 Minimal Sums


To obtain a minimal sum, the prime implicants of the incomplete Boolean function
are determined. To do this, attention is paid to both the 1-cells and the don't-care
cells. For the purpose of forming large 2^a × 2^b rectangular groupings, i.e., sub-
cubes, the don't-care cells are considered as 1-cells. However, only those conditions
for which the function actually has the value of 1 need to be described by the
Boolean expression. Thus, it is only necessary to have a sufficient number of sub-
cubes such that each actual 1-cell is included in at least one subcube. Note that it is
not necessary to include all the don't-care cells. In this way, don't-care cells are
used optionally in order to establish the best possible groupings. As in the case of

Figure 4.25 Incomplete Boolean function f(w,x,y,z) = Σm(0,1,3,7,8,12) + dc(5,10,13,14). (a) Truth
table. (b) Karnaugh map for obtaining minimal sums. (c) Karnaugh map for obtaining
minimal products.

complete Boolean functions, the essential prime implicants should first be obtained.
In the case of incomplete functions, however, only the actual 1-cells in the map are
candidates for essential 1-cells. The don’t-care cells are not candidates since the
don’t-care cells do not have to be included in at least one subcube.
As an example of obtaining a minimal sum for an incomplete Boolean func-
tion, consider the truth table in Fig. 4.25a. The corresponding Karnaugh map is
shown in Fig. 4.25b. Cell 3 (as well as cell 7) is an essential 1-cell. The subcube
with this essential 1-cell corresponds to the only essential prime implicant of this
function. It should be noted that the subcube for the essential prime implicant in-
cludes a don’t-care cell since don’t-care cells are regarded as containing 1’s for the
purpose of maximizing the size of the subcubes. It is next observed that cell 12 can
be placed in a subcube of four cells by including two don’t-care cells. Cell 12 is not
an essential 1-cell since grouping it with don’t-care cell 13 forms another prime im-
plicant for this function. At this point only cell 0 still needs to be grouped. Two
prime-implicant subcubes, shown dashed, are possible, each containing two cells.
Hence, there are two minimal sums

f(w,x,y,z) = w'z + wz' + x'y'z'

and f(w,x,y,z) = w'z + wz' + w'x'y'

Notice that don't-care cell 13 is never grouped since only the 1-cells must be covered.
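The covering requirement for an incomplete function is only over the specified rows of the truth table. The sketch below (an assumed Python check, not the author's) takes the two minimal sums to be w'z + wz' + x'y'z' and w'z + wz' + w'x'y', with primes denoting complemented variables:

```python
# For the incomplete function f(w,x,y,z) = Σm(0,1,3,7,8,12) + dc(5,10,13,14),
# each minimal sum must equal f on every *specified* cell; on don't-care
# cells the expressions are free to take either value.
ONES = {0, 1, 3, 7, 8, 12}
DC   = {5, 10, 13, 14}

def sum1(w, x, y, z):   # w'z + wz' + x'y'z'
    return ((not w) and z) or (w and (not z)) or ((not x) and (not y) and (not z))

def sum2(w, x, y, z):   # w'z + wz' + w'x'y'
    return ((not w) and z) or (w and (not z)) or ((not w) and (not x) and (not y))

for n in range(16):
    if n in DC:
        continue                      # unspecified: either value is acceptable
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert bool(sum1(w, x, y, z)) == (n in ONES)
    assert bool(sum2(w, x, y, z)) == (n in ONES)
```

On the don't-care cells the two expressions need not, and in general do not, produce the same values.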

4.6.2 Minimal Products


When a minimal product is to be obtained, the don’t-care cells are regarded as
0-cells for the purpose of establishing the prime implicates. However, only the ac-
tual 0-cells must be included in at least one subcube. In all other respects, the proce-
dure for obtaining the minimal product is the same as that for complete Boolean
functions.
Because of the way don't-care conditions are interpreted, some don't-care cells
may appear in both the groupings of the 1-cells and the 0-cells. Also, since don't-
care cells do not necessarily have to be grouped, it is possible that some don't-care
cells may appear in no groupings. As a consequence of the fact that don't-care cells
are regarded as both, or neither, 0-cells and 1-cells, minimal sum and minimal prod-
uct expressions, in general, are not algebraically equivalent. However, for all condi-
tions in which the functional values are specified, the minimal sum and minimal
product expressions do yield the same values.
Again consider the function of Fig. 4.25a. The minimal product for this func-
tion is determined from the map of Fig. 4.25c. In this case, cells 2 and 9 are essen-
tial 0-cells. The remaining ungrouped 0-cell, cell 4, can be grouped in either of two
equal-sized subcubes. Thus, two minimal products are found, i.e.,

f(w,x,y,z) = (y' + z)(w' + z')(w + x' + y)

and f(w,x,y,z) = (y' + z)(w' + z')(w + x' + z)

It should be noted that don't-care cells 10 and 14 appear in both a subcube of
1-cells and a subcube of 0-cells. That is, their use when grouping the 1's did not

preclude their later use when grouping the 0’s. Using the gate input terminal
count as the cost criterion, all four of the minimal expressions for this function
have the same cost.
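As with the minimal sums, the minimal products can be checked row by row. This Python sketch (an assumed illustration, not from the text) takes the two minimal products to be (y' + z)(w' + z')(w + x' + y) and (y' + z)(w' + z')(w + x' + z), with primes denoting complemented variables:

```python
# For f(w,x,y,z) = Σm(0,1,3,7,8,12) + dc(5,10,13,14), each minimal product
# must be 0 on every actual 0-cell and 1 on every 1-cell; on don't-care
# cells its value is unconstrained.  A minimal sum is checked alongside to
# confirm that sum and product agree wherever f is specified.
ONES = {0, 1, 3, 7, 8, 12}
DC   = {5, 10, 13, 14}

def prod1(w, x, y, z):    # (y' + z)(w' + z')(w + x' + y)
    return ((not y) or z) and ((not w) or (not z)) and (w or (not x) or y)

def prod2(w, x, y, z):    # (y' + z)(w' + z')(w + x' + z)
    return ((not y) or z) and ((not w) or (not z)) and (w or (not x) or z)

def min_sum(w, x, y, z):  # w'z + wz' + x'y'z', one of the minimal sums
    return ((not w) and z) or (w and (not z)) or ((not x) and (not y) and (not z))

for n in range(16):
    if n in DC:
        continue
    w, x, y, z = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert bool(prod1(w, x, y, z)) == (n in ONES)
    assert bool(prod2(w, x, y, z)) == (n in ONES)
    # minimal sum and minimal product agree on all specified conditions
    assert bool(min_sum(w, x, y, z)) == bool(prod1(w, x, y, z))
```

Counting gate inputs, each of these expressions has 7 literals plus 3 second-level inputs, consistent with all four minimal expressions having the same cost.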

4.7 FIVE-VARIABLE AND SIX-VARIABLE KARNAUGH MAPS
It is possible to extend the basic concept of the Karnaugh map to handle more than
four variables. As is seen in this section, the maps get more difficult to interpret
since the 2^a × 2^b rectangular groupings can become split when viewed on their 2-
dimensional representations. However, the use of maps for obtaining simple expres-
sions for five-variable and six-variable functions is still manageable with effort. In
keeping with the principle behind the Karnaugh map, each cell in a five-variable
map must be adjacent to five other cells, while each cell in a six-variable map must
be adjacent to six other cells. This is achieved by using 2 or 4 four-variable maps,
respectively.

4.7.1 Five-Variable Maps


Figure 4.26a shows one variation of a five-variable Karnaugh map. Each cell is
marked with the decimal equivalent of the n-tuple for the corresponding row of
the truth table. This five-variable map consists of 2 four-variable maps which are
the mirror image of each other about the double center line. It is the mirror image
of each cell that provides its fifth adjacency. For example, cells 9 and 13 are con-
sidered adjacent since they are mirror images of each other about the double cen-
ter line.
Within each half, any 2^a × 2^b rectangular grouping, i.e., subcube, that is per-
missible on a four-variable map is also permissible on the five-variable map. In ad-
dition, subcubes are also possible about the mirror-image line. In particular, if there
are two rectangular groupings of the same 2^a × 2^b dimensions on both halves of the
map, and, in addition, the two groupings are the mirror image of each other, then
the two groupings collectively form a single subcube. Notice that the number of
cells in all such single subcubes is a power of 2; namely, 2^(a+b+1). The number of lit-
erals in the term represented by the subcube is n − a − b − 1. As in the case of all
other maps, the literals of a term are determined by noting which variables have the
same values for all cells that comprise the subcube.
A common variation of the five-variable map is shown in Fig. 4.26b. In this
case, the 2 four-variable maps are envisioned as being on top of each other, rather
than being mirror images of each other. Here the upper map corresponds to those n-
tuples in which v = 0 and the lower map corresponds to those n-tuples in which v =
1. In Fig. 4.26b each cell is again marked with the decimal equivalent of the n-tuple
for the corresponding row of the truth table. With such a structure, the fifth adja-
cency occurs between the two layers. For example, cell 0 is adjacent to cell 16, cell
1 is adjacent to cell 17, etc. Subcubes that are permissible on a four-variable map

        xyz
      000 001 011 010 110 111 101 100
   00 |  0   1   3   2   6   7   5   4
   01 |  8   9  11  10  14  15  13  12
vw 11 | 24  25  27  26  30  31  29  28
   10 | 16  17  19  18  22  23  21  20

                  (a)

Figure 4.26 Five-variable Karnaugh maps. (a) Reflective structure.
(b) Layer structure.

are also permissible on each layer of the five-variable map. In addition, if each layer
contains a 2^a × 2^b subcube such that they can be viewed as being directly above and
below each other, then the two subcubes collectively form a single subcube consist-
ing of 2^(a+b+1) cells. As was done previously, the literals of the corresponding term
are determined by noting which variables have the same values for all cells that
comprise the subcube.
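On the layer structure, adjacency has a simple arithmetic form: two cells are adjacent exactly when their 5-bit numbers differ in one bit position, and the fifth adjacency pairs cell n on the v = 0 layer with cell n + 16 directly below it. A small Python sketch (an assumed helper, not from the text):

```python
# On the layered five-variable map, two cells are adjacent exactly when
# their 5-bit cell numbers differ in a single bit position; the between-layer
# adjacency is the flip of the v bit (weight 16).
def adjacent(a, b):
    return bin(a ^ b).count("1") == 1

assert adjacent(0, 16) and adjacent(1, 17)        # between-layer adjacencies
# every cell has exactly five neighbors, one per variable
assert sorted(b for b in range(32) if adjacent(0, b)) == [1, 2, 4, 8, 16]
```

The same single-bit test also captures the mirror-image adjacencies of the reflective structure, since mirroring about the center line changes exactly one variable.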

Figure 4.27 Typical subcubes on a five-variable map.

Figure 4.27 shows various ways of forming subcubes on a reflective five-


variable map and their associated product terms. No attempt has been made to
perform any type of minimization on this map. The subcube consisting of cells 5
and 21, on the right half of the map, is associated with the product term w'xy'z since
each half of the five-variable map is a four-variable map and, correspondingly, the
top and bottom edges of the map are assumed connected. Similarly, the subcube
consisting of cells 16, 17, 18, and 19, on the left half of the map, corresponds to the
product term vw'x'. Cells 0 and 2 can be grouped on the left side of the map and cells
4 and 6 can be grouped on the right side. Since the mirror images of cells 0 and 2
are cells 4 and 6, all four cells form a single subcube as explained above. This sub-
cube represents the term v'w'z' because the variables v, w, and z do not change values
for these four cells. Finally, the 2^1 × 2^1 grouping consisting of cells 9, 11, 25, and
27, on the left half of the map, and the 2^1 × 2^1 grouping consisting of cells 13, 15,
29, and 31, on the right half of the map, correspond to the single product term wz
since the two groupings are mirror images of each other and collectively form a sin-
gle subcube.
As an example of using a reflective five-variable map for minimization, con-
sider Fig. 4.28. Again essential cells, indicated by asterisks, should first be detected.
In this example, all terms appearing in the minimal expressions are essential terms.
The minimal sum is

f(v,w,x,y,z) = w'x'y' + yz + v'w'y


and the minimal product is

f(v,w,x,y,z) = (w' + y)(w' + z)(x' + y)(v' + y' + z)

In the minimal product, the sum term (w' + y) is the result of grouping cells 8, 9, 12,
13, 24, 25, 28, and 29; while the sum term (w' + z) is the result of grouping cells 8,
10, 12, 14, 24, 26, 28, and 30.
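The five-variable result can be verified the same way as the four-variable examples. This Python sketch is an assumed check, not the author's; it takes the minimal sum to be w'x'y' + yz + v'w'y and the minimal product to be (w' + y)(w' + z)(x' + y)(v' + y' + z), primes denoting complemented variables:

```python
# Verify the assumed minimal sum and minimal product of
# f(v,w,x,y,z) = Σm(0,1,2,3,6,7,11,15,16,17,19,23,27,31).
MINTERMS = {0, 1, 2, 3, 6, 7, 11, 15, 16, 17, 19, 23, 27, 31}

def f_sum(v, w, x, y, z):
    return ((not w) and (not x) and (not y)) or ((not v) and (not w) and y) or (y and z)

def f_prod(v, w, x, y, z):
    return (((not w) or y) and ((not w) or z) and ((not x) or y)
            and ((not v) or (not y) or z))

for n in range(32):
    bits = [(n >> i) & 1 for i in (4, 3, 2, 1, 0)]   # v, w, x, y, z
    assert bool(f_sum(*bits)) == (n in MINTERMS)
    assert bool(f_prod(*bits)) == (n in MINTERMS)
```

Enumerating all 32 rows replaces a map argument only for verification; the groupings themselves still come from the map.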

Figure 4.28 Map for f(v,w,x,y,z) = Σm(0,1,2,3,6,7,11,15,16,17,19,23,27,31). (a) Subcubes
for the minimal sum. (b) Subcubes for the minimal product.

4.7.2 Six-Variable Maps


The six-variable Karnaugh map consists of 4 four-variable maps. One possible
structure of a six-variable map is shown in Fig. 4.29a. In this case, every cell has an
adjacent cell about the horizontal and vertical mirror-image lines. For example,
cells 13 and 41 are both adjacent to cell 9. As in the case of the four-variable map,
any rectangular grouping of dimensions 2^a × 2^b, i.e., subcube, in a single quadrant
represents a single term. Also, two groupings having the same 2^a × 2^b dimensions
that are the mirror image of each other about either the vertical or horizontal mirror-
image lines are a single subcube. These situations are similar to those of five-vari-
able maps. A third possible way in which subcubes may be formed on six-variable
maps is if each quadrant has a rectangular grouping of dimensions 2^a × 2^b and each

          xyz
        000 001 011 010 110 111 101 100
    000 |  0   1   3   2   6   7   5   4
    001 |  8   9  11  10  14  15  13  12
    011 | 24  25  27  26  30  31  29  28
    010 | 16  17  19  18  22  23  21  20
uvw
    110 | 48  49  51  50  54  55  53  52
    111 | 56  57  59  58  62  63  61  60
    101 | 40  41  43  42  46  47  45  44
    100 | 32  33  35  34  38  39  37  36

                     (a)

Figure 4.29 Six-variable Karnaugh maps. (a) Reflective structure.

grouping is a mirror image of the other about both the horizontal and vertical
mirror-image lines. Then, the four groupings collectively form a single subcube and
correspond to a single term.
As in the case of the five-variable map, an alternate structure is also possible. In
this case, 4 four-variable maps are assumed to be layered one upon the other as
shown in Fig. 4.29b. For each layer, the values of the variables u and v are assumed
to be fixed. In Fig. 4.29b, uv = 00 in the top layer, uv = 01 in the second layer,
uv = 11 in the third layer, and uv = 10 in the bottom layer. Since each cell must be
adjacent to six cells in a six-variable map, in addition to the four adjacencies within
each layer, the fifth and sixth adjacencies occur between adjacent layers where it is
assumed that the top layer, i.e., the uv = 00 layer, and the bottom layer, i.e., the
uv = 10 layer, of the overall structure are adjacent. For example, cell 21 on the sec-
ond layer is adjacent to cells 17, 20, 23, and 29 on that layer as well as cell 5 on the
first layer and cell 53 on the third layer. Similarly, cell 4 on the first layer is adjacent
to cells 0, 5, 6, and 12 on that layer as well as cell 20 on the second layer and cell 36
on the fourth layer. Subcubes occurring in corresponding positions on two adjacent
layers collectively form a single subcube. In addition, subcubes occurring in corre-
sponding positions on all four layers collectively form a single subcube.
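The six adjacencies of any cell again reduce to single-bit changes of the 6-bit cell number: flips within wxyz stay on one layer, while flips of u or v move between layers. A small Python sketch (an assumed helper, not from the text):

```python
# In a six-variable map each cell has six neighbors, one per bit of its
# 6-bit number (u v w x y z).  Flipping the u or v bit corresponds to
# moving between layers of the structure in Fig. 4.29b.
def neighbors(cell, nbits=6):
    return sorted(cell ^ (1 << i) for i in range(nbits))

# cell 21: cells 17, 20, 23, 29 on its own layer; 5 and 53 on adjacent layers
assert neighbors(21) == [5, 17, 20, 23, 29, 53]
# cell 4: cells 0, 5, 6, 12 on its own layer; 20 and 36 on adjacent layers
assert neighbors(4) == [0, 1, 5, 6, 20, 36][:0] + [0, 5, 6, 12, 20, 36]
```

The same bit-flip rule underlies both the reflective and the layered drawings; only the placement of the cells on paper differs.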
Figure 4.29 (Cont.) (b) Layer structure.

Figure 4.30 Typical subcubes on a six-variable map.

Figure 4.30 shows various subcubes on a reflective six-variable map and their
associated product terms. No attempt has been made to perform any type of mini-
mization on this map. The reader should pay particular attention to the subcube for
the vwz product term. This subcube corresponds to the situation in which part of the
grouping appears in each of the four quadrants. In particular, this subcube consists
of cells 25, 27, 29, 31, 57, 59, 61, and 63.
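The cell list for such a term can be generated directly from the variable values it fixes. An assumed Python illustration (not from the text), with bit weights u = 32, v = 16, w = 8, x = 4, y = 2, z = 1:

```python
# Enumerate the cells of the subcube for the product term vwz on the
# six-variable map: every cell number with v = w = z = 1, u, x, y free.
cells = sorted(n for n in range(64)
               if (n >> 4) & 1 and (n >> 3) & 1 and n & 1)
print(cells)   # [25, 27, 29, 31, 57, 59, 61, 63]
```

Three fixed variables leave three free ones, so the subcube contains 2^3 = 8 cells, split two to a quadrant on the reflective map.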
Interpreting five-variable and six-variable maps can be a challenge. Although
the map concept is extendable to more than six variables, it is evident that another
procedure is needed when dealing with functions having a large number of variables.

4.8 THE QUINE-McCLUSKEY METHOD OF GENERATING PRIME IMPLICANTS AND PRIME IMPLICATES
The Karnaugh map method for obtaining simplified Boolean expressions is a very
effective method for functions with no more than six variables. However, a general
procedure which is applicable to functions of any number of variables is desirable.

Such a procedure should be algorithmic so that it can be programmed for a digital
computer. One such procedure was originally suggested by Quine and later modi-
fied by McCluskey.
The Quine-McCluskey method consists of two phases. In the first phase, the set
of all prime implicants (or prime implicates) of the function is systematically ob-
tained. In the second phase, the set of all irredundant expressions for the function is
determined. From the set of irredundant expressions the minimal expressions are
selected.

4.8.1 Prime Implicants and the Quine-McCluskey Method
All the implicants of a complete Boolean function can be generated from its set of
minterms by repeatedly applying the relationship AB + A'B = B. This relationship
provides for combining two product terms to form a new single product term when
A denotes a single variable and B denotes a product of variables. In essence, the ap-
plication of this relationship involves repeated comparisons. To do this, the
minterms are initially placed in a list. Two minterms that differ in precisely one lit-
eral are related by AB and A'B and hence combine to form a single term B. This new
term B is then placed in a second list. Furthermore, although the two minterms AB
and A'B are implicants of the function, it can be concluded that neither of these
minterms is a prime implicant of the function since they each subsume the gener-
ated term B. This is indicated by placing a check mark next to the two generating
minterms. The comparison process is carried out on all pairs of minterms that com-
prise the initial list. However, the existence of a check mark does not disqualify a
minterm from further comparisons with other minterms in order to generate addi-
tional terms.
The new list of terms consisting of one less variable is then subjected to the
above comparison process. That is, if two terms have the forms AB and A'B where A
is a single variable and B is a product of variables, then the two terms are combined
to form a new single term B that is entered in a third list. Again the two generating
terms are checked to indicate that they are not prime implicants. Duplicate terms are
not entered in the new list. Once all pairs of terms in the second list of terms are
compared, the comparison process is applied to the third list to form a fourth list.
The comparison process is continued on each new list until no new list is generated.
At that time, all terms contained in the set of lists are all the implicants of the func-
tion and those that are not checked are the prime implicants.
To illustrate the basic concept of repeated application of the relationship
AB + A'B = B in order to generate implicants, consider the function

f(w,x,y,z) = Σm(1,3,4,5,7,8,15)

The minterms are listed in the first column of Table 4.3 along with their decimal
designators. It is now necessary to consider every pair of minterms in the first col-
umn to determine if the relationship AB + A'B = B is applicable. First consider
minterms 1 and 3. Both of these minterms are identical except in the y variable.
Thus, these two minterms are combined using the relationship AB + A'B = B to

Table 4.3 Obtaining the prime implicants of the function f(w,x,y,z) = Σm(1,3,4,5,7,8,15)

 1: w'x'y'z ✓     w'x'z ✓     w'z
 3: w'x'yz ✓      w'y'z ✓
 4: w'xy'z' ✓     w'yz ✓
 5: w'xy'z ✓      w'xy'
 7: w'xyz ✓       w'xz ✓
 8: wx'y'z'       xyz
15: wxyz ✓

form the product term w'x'z, which is entered in the second column. Furthermore,
since w'x'y'z and w'x'yz subsume the term w'x'z, the two minterms are checked to indi-
cate that they are not prime implicants. The comparison process is continued by
next considering minterms 1 and 4. Notice that even though minterm 1 has a check
mark, it is still used for further comparisons. However, minterms 1 and 4 do not
combine since they differ in more than one literal.* Next minterms 1 and 5 are com-
pared. They are combined to form a single term in accordance with the relationship
AB + A'B = B. This results in the term w'y'z being entered in the second column and
a check mark being placed next to minterm 5. Since minterm 1 already has a check
mark, it is not necessary to place a second check mark next to it. The reader can eas-
ily verify that upon comparing minterm 1 with the remaining minterms in the first
column, no additional terms are formed. Next, minterm 3 is compared with all the
minterms in the first column. However, it is only necessary to apply the comparison
process to the remaining minterms below the one currently being studied, in this
case minterm 3, due to the commutative property of a Boolean algebra. As a result,
minterms 3 and 4 are compared and then minterms 3 and 5. In both cases they differ
in more than a single literal and the relationship AB + A'B = B is not applicable.
Next, minterms 3 and 7 are used to generate the term w'yz shown in the second col-
umn. Furthermore, minterm 7 is checked off, since it subsumes the new term. Con-
tinuing as above, after all comparisons of the minterms in the first column are com-
pleted, the terms of the second column result and the minterms successfully used in
the comparison process are those with check marks.
The above comparison process is now carried out on the second column of
Table 4.3. Starting with the first term in the second column, terms w'x'z and w'xz are
the first pair of terms that satisfy the relationship AB + A'B = B where A corre-
sponds to x and B corresponds to w'z. As a consequence, these two terms are used to
form the term w'z shown in the third column. The two terms w'x'z and w'xz are
checked to indicate that they are not prime implicants, since they both subsume the
generated term w'z. Continuing the comparison process on all of the remaining pairs
of terms in the second column, the only other successful application of the relation-
*The check mark next to minterm 4 in Table 4.3 is the result of a future comparison.

ship AB + A'B = B involves terms w'y'z and w'yz. These two terms again generate the
term w'z, and hence the duplicate is not entered in the third column. However, the
two terms w'y'z and w'yz are checked in the second column to again indicate they are
not prime implicants.
The process is now continued by comparing all pairs of terms in the third col-
umn. Since only one term appears, no new terms are generated. Since a fourth col-
umn is not formed, the comparison process terminates. All the terms appearing in
Table 4.3 are implicants of the original function, while those that are not checked
correspond to its prime implicants.
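The comparison passes of Table 4.3 can be mimicked mechanically. The following Python sketch is illustrative only (not the author's code); terms are written in the 0-1-dash notation introduced below, with the minterms of Σm(1,3,4,5,7,8,15) as 4-bit strings:

```python
# One comparison pass over a list of terms: combine every pair that differs
# in exactly one bit position (with dashes in matching positions), and
# report which terms remain unchecked.
from itertools import combinations

def one_pass(terms, nvars=4):
    new, used = set(), set()
    for a, b in combinations(sorted(terms), 2):
        diff = [i for i in range(nvars) if a[i] != b[i]]
        if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
            new.add(a[:diff[0]] + "-" + a[diff[0] + 1:])
            used.update((a, b))
    return new, terms - used          # combined terms, unchecked terms

minterms = {format(m, "04b") for m in [1, 3, 4, 5, 7, 8, 15]}
col2, unchecked1 = one_pass(minterms)
assert col2 == {"00-1", "0-01", "0-11", "010-", "01-1", "-111"}
assert unchecked1 == {"1000"}          # minterm 8 is already prime

col3, unchecked2 = one_pass(col2)
assert col3 == {"0--1"}                # w'z, generated twice, entered once
assert unchecked2 == {"010-", "-111"}  # w'xy' and xyz
```

The unchecked terms across all passes, {1000, 010-, -111, 0--1}, are exactly the prime implicants wx'y'z', w'xy', xyz, and w'z found in Table 4.3.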
Before stating the above process formally as an algorithm, some general obser-
vations are appropriate that reduce and simplify the mechanics of the procedure.
Any product term of an n-variable function can be represented without ambiguity
by 0's, 1's, and dashes if the ordering of the variables is specified, where 0 is used
to represent a complemented variable, 1 is used to represent an uncomplemented
variable, and a dash (–) is used to represent the absence of a variable. For example,
if w'y is a term of the function f(v,w,x,y,z), then it is represented as -0-1- where the
ordering of the variables in the term is given by the arrangement of the variables in
the function notation. The first dash signifies that variable v does not appear in the
term, while the second and third dashes represent the absence of the x and z vari-
ables. The 0 represents the literal w', and the 1 represents the literal y. When repre-
senting minterms with this notation, there are no dashes, and this procedure simply
yields the binary representation of the minterm, as was discussed in the previous
chapter.
Now let the index of a term be defined as the number of 1's appearing in the
0-1-dash representation of the term. Any two minterms which satisfy the relationship
AB + A'B = B where A is a single variable must have the same binary representa-
tion except for the one position corresponding to A. In this position one binary rep-
resentation has a 0 and the other a 1. Thus, the indices of the two terms differ by ex-
actly 1. Furthermore, when one term is formed from combining two minterms, the
literals that are the same in both minterms are the literals appearing in the resulting
term. A representation for this combined term has the same 0's and 1's as the origi-
nal minterms except for that position in which the minterms differed. This position
has a dash.
In the comparison technique introduced above for generating prime implicants,
all pairs of minterms are inspected to see if they combine by the relationship AB +
A'B = B. If the minterms of the function are divided into sets such that the minterms
in each set have the same index, then it is only necessary to compare the minterms
in those sets whose indices differ by exactly 1. Two terms whose indices are the
same or differ by more than 1 can never result in their combining into a single term.
This enables a reduction in the number of comparisons that must be made.
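Grouping by index is a one-line bookkeeping step. An assumed Python illustration (not from the text), using the minterms of Σm(0,5,6,7,9,10,13,14,15):

```python
# Group minterms by index (the number of 1s in the binary representation);
# only sets whose indices differ by exactly one need to be compared.
from collections import defaultdict

by_index = defaultdict(list)
for m in [0, 5, 6, 7, 9, 10, 13, 14, 15]:
    by_index[bin(m).count("1")].append(m)

print(dict(by_index))   # {0: [0], 2: [5, 6, 9, 10], 3: [7, 13, 14], 4: [15]}
```

Note that no minterm of this function has index one, so minterm 0 can never combine with anything, exactly as observed for Table 4.4 below.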
It has been seen that two product terms of a function can combine into a single
term if they have the same variables and differ in exactly one literal. In terms of the
0-1-dash notation, this implies that the dashes in the representations for both terms
must appear in the same relative positions, and that in all but one of the remaining
positions the two representations must have the same 0’s and 1’s. For example, the

terms vw'y' and vwy' of a function f(v,w,x,y,z) can combine to form the term vy'. The
first term is represented by 10-0- and the second term by 11-0-. Since the
two 0-1-dash representations have their dashes in the same relative positions and
the representations differ only in the second position, they can combine to form
1--0-, which represents the term vy'. The dash now appearing in the second posi-
tion denotes the elimination of the w variable as the result of the two terms being
combined.
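The combining rule for 0-1-dash strings is short enough to state as code. The following is an assumed Python helper, not from the text:

```python
# Combine two 0-1-dash representations when their dashes are in the same
# relative positions and they differ in exactly one bit position; return
# None when the two terms cannot combine.
def combine(a, b):
    diff = [i for i, (p, q) in enumerate(zip(a, b)) if p != q]
    if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

print(combine("10-0-", "11-0-"))   # 1--0-
```

A pair whose dashes are misaligned, such as 10-0- and 1-00-, differs in two positions and is correctly rejected.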

4.8.2 Algorithm for Generating Prime Implicants


The Quine-McCluskey algorithm for determining the prime implicants of a function
can now be stated.

1. Express each minterm of the function in its binary representation.
2. List the minterms by increasing index.
3. Separate the sets of minterms of equal index with lines.
4. Let i = 0.
5. Compare each term of index i with each term of index i + 1. For each pair of
   terms that can combine, i.e., differ in exactly one bit position, place the newly
   formed term (also in 0-1-dash notation) in the section of index i of a new list,
   unless it is already present. In either event, place a check mark next to the two
   terms that combined (if not already checked). In the comparison process, a
   check mark does not disqualify a term from further comparisons. After all pairs of
   terms with indices i and i + 1 are inspected in the original list, a line is drawn
   under the last term in the new list.
6. Increase i by 1 and repeat Step 5. The increase of i is continued until all terms
   are compared. The new list contains all those implicants of the function that
   have one less variable than those implicants in the generating list.
7. Each section of the new list formed has terms of equal index. Steps 4, 5, and 6
   are repeated on this list to form another list. Recall that two terms combine
   only if they have their dashes in the same relative positions and if they differ in
   exactly one bit position.
8. The process terminates when no new list is formed.
9. All terms without check marks are prime implicants.
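The nine steps above can be sketched in Python (an illustrative implementation, not the author's code); terms are kept as 0-1-dash strings, and the "check marks" become membership in a `used` set:

```python
# Quine-McCluskey prime-implicant generation: repeatedly combine pairs of
# terms that differ in exactly one bit position; terms never used in a
# combination (the unchecked terms) are the prime implicants.
from itertools import combinations

def prime_implicants(minterms, nvars):
    terms = {format(m, f"0{nvars}b") for m in minterms}
    primes = set()
    while terms:
        used, nxt = set(), set()
        for a, b in combinations(sorted(terms), 2):
            # combine when dashes align and exactly one digit differs
            diff = [i for i in range(nvars) if a[i] != b[i]]
            if len(diff) == 1 and "-" not in (a[diff[0]], b[diff[0]]):
                i = diff[0]
                nxt.add(a[:i] + "-" + a[i + 1:])
                used.update((a, b))
        primes |= terms - used        # unchecked terms are prime implicants
        terms = nxt
    return primes

print(sorted(prime_implicants({0, 5, 6, 7, 9, 10, 13, 14, 15}, 4)))
# ['-1-1', '-11-', '0000', '1-01', '1-10']
```

For the function of Table 4.4 this yields the primes w'x'y'z', wy'z, wyz', xz, and xy in 0-1-dash form. (This sketch compares all pairs rather than only adjacent index sets; the index partition is purely an efficiency refinement.)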
To illustrate the algorithm, consider the function

f(w,x,y,z) = w'x'y'z' + w'xy'z + w'xyz' + w'xyz + wx'y'z + wx'yz' + wxy'z + wxyz' + wxyz
           = Σm(0,5,6,7,9,10,13,14,15)

The process for obtaining the prime implicants using the above procedure is shown
in Table 4.4. To begin, all minterms with index zero are placed in the first column.
Only minterm 0 has index zero. This is indicated in Table 4.4 by the first entry in
the first column where minterm 0 is represented in both its decimal and binary
forms. (The decimal representation is included for easy reference and is not really
CHAPTER 4 Simplification of Boolean Expressions 171

Table 4.4 Obtaining the prime implicants of the function f(w,x,y,z) =
          Σm(0,5,6,7,9,10,13,14,15)

       wxyz                 wxyz                      wxyz
   0   0000   E      5,7    01-1 ✓      5,7,13,15    -1-1   A
  ----------         5,13   -101 ✓      6,7,14,15    -11-   B
   5   0101 ✓        6,7    011- ✓
   6   0110 ✓        6,14   -110 ✓
   9   1001 ✓        9,13   1-01   C
  10   1010 ✓       10,14   1-10   D
  ----------        ---------------
   7   0111 ✓        7,15   -111 ✓
  13   1101 ✓       13,15   11-1 ✓
  14   1110 ✓       14,15   111- ✓
  ----------
  15   1111 ✓

necessary in the actual process.) A line is drawn after this minterm. Next all
minterms of index one are listed. Since there are no minterms for this function hav-
ing index one, no entries are made for it in the first column. Again a line is drawn.
All minterms of index two are next listed. These are minterms 5, 6, 9, and 10. A line
is drawn after the last minterm of this set. In a similar manner, minterms 7, 13, and
14, having index three, are added to the list. Finally, minterm 15, having index four,
is added. This completes the first three steps of the algorithm. Next, the minterm of
index zero is compared with all minterms of index one to see if they can combine
by the rule A'B + AB = B. Since there are no minterms of index one, minterm 0
cannot be used in comparisons with another minterm. Similarly, no minterms of
index one combine with minterms of index two since the set of minterms having
index one is empty. Next the minterms of index two are compared with those
of index three. Minterms 5, represented by 0101, and 7, represented by 0111, com-
bine since they differ in exactly one bit position to form the term w'xz, which is rep-
resented by 01-1 in the second column. Check marks are then placed after
minterms 5 and 7. For convenience, the 01-1 in the second column is labeled with
the decimal numbers 5,7 to indicate that these are the minterms that combined to
form the term represented by 01—1. Next minterms 5 and 13 are compared. Notice
that a check mark does not disqualify a minterm from further comparisons. The
minterms 5 and 13 form the term represented by -101 in the second column. Since
minterm 5 already has a check mark, it is not checked again. However, a check
mark is placed after minterm 13. Finally, minterms 5 and 14 are compared. Since
they cannot be combined because they differ in three bit positions, no new entry is
made in the second column as a result of this comparison and minterm 14 is not
checked (at this time). Next minterms 6 and 7, 6 and 13, and 6 and 14 are compared.
These comparisons account for the entries 011- and -110 in the second column and
check marks being placed next to minterms 6 and 14 in the first column. The
process is repeated until minterms 10 and 14 are compared. A line is then drawn
under the partially completed list of the second column. Next the minterms in the
172 DIGITAL PRINCIPLES AND DESIGN

first column of index three are compared with those of index four. These compar-
isons yield three terms in the second column. At this point all minterms, except
minterm 0, in the first column have been checked off, and no additional compar-
isons are possible. This completes steps 4, 5, and 6 of the algorithm.
Using the second column as a new list, each term in a group is compared with
each term in its adjacent group. First, term 01-1 is compared with -111. Since their
dashes are in different positions, they cannot be combined. Next term 01-1 is com-
pared with 11-1. This comparison forms a new term -1-1 in the third column since
their 0-1-dash representations differ in exactly one bit position. The two terms
which combined, i.e., 01-1 and 11-1 in the second column, are then checked. Term
01-1 is next compared with term 111-. These terms cannot be combined since their
dashes do not align. Next terms -101 and -111 are compared. This comparison also
yields the term -1-1 in the third column. Since this term already is written once, it
is not entered again. However, terms -101 and -111 must be checked. After all
comparisons are made, the newly formed list, column 3, has only two terms, both of
which appear in the same group since they have the same index. Since no compar-
isons are possible with the terms of the third column, the process terminates. It is
seen that five terms have no check marks. These terms, labeled A through E, are the
prime implicants of the given function. Replacing the symbols 0 and 1 by their as-
sociated complemented and uncomplemented literals, respectively, the algebraic
forms of the prime implicants are xz, xy, wy'z, wyz', and w'x'y'z'.

Table 4.5 Obtaining the prime implicants of the incomplete Boolean function
          f(v,w,x,y,z) = Σm(4,5,9,11,12,14,15,27,30) + dc(1,17,25,26,31)

       vwxyz                  vwxyz                       vwxyz
   1   00001 ✓       1,5    00-01   F     1,9,17,25    --001   A
   4   00100 ✓       1,9    0-001 ✓      ------------------------
  ------------       1,17   -0001 ✓       9,11,25,27   -10-1   B
   5   00101 ✓       4,5    0010-   G    ------------------------
   9   01001 ✓       4,12   0-100   H    11,15,27,31   -1-11   C
  12   01100 ✓      -----------------    14,15,30,31   -111-   D
  17   10001 ✓       9,11   010-1 ✓      26,27,30,31   11-1-   E
  ------------       9,25   -1001 ✓
  11   01011 ✓      12,14   011-0   I
  14   01110 ✓      17,25   1-001 ✓
  25   11001 ✓      -----------------
  26   11010 ✓      11,15   01-11 ✓
  ------------      11,27   -1011 ✓
  15   01111 ✓      14,15   0111- ✓
  27   11011 ✓      14,30   -1110 ✓
  30   11110 ✓      25,27   110-1 ✓
  ------------      26,27   1101- ✓
  31   11111 ✓      26,30   11-10 ✓
                    -----------------
                    15,31   -1111 ✓
                    27,31   11-11 ✓
                    30,31   1111- ✓
As a second example, consider the incomplete Boolean function
f(v,w,x,y,z) = Σm(4,5,9,11,12,14,15,27,30) + dc(1,17,25,26,31)
As was mentioned in the discussion of Karnaugh maps, the prime implicants of an in-
complete Boolean function are the prime implicants of the complete Boolean func-
tion in which all the don’t-care conditions are regarded as having functional values of
1. Thus, to determine the prime implicants of the above incomplete Boolean function,
the Quine-McCluskey procedure is carried out on the complete Boolean function

f(v,w,x,y,z) = Σm(1,4,5,9,11,12,14,15,17,25,26,27,30,31)

This is illustrated in Table 4.5. The nine prime implicants are labeled A through I,
i.e., x'y'z, wx'z, wyz, wxy, vwy, v'w'y'z, v'w'xy', v'xy'z', and v'wxz'.
Although the procedure just presented is very tedious for hand computation, the
intent of the Quine-McCluskey method is to provide an algorithmic procedure for
obtaining prime implicants that can be programmed for a digital computer. This ob-
jective has been achieved.

4.8.3 Prime Implicates and the Quine-McCluskey Method
If it is desired to obtain the prime implicates of a function using the Quine-
McCluskey method, then two approaches are possible. By the duality principle, the
Quine-McCluskey method can be applied directly to the set of maxterms. In this
case, the comparison procedure makes use of the relationship (A + B)(A' + B) = B
where A is a single variable and B is a sum of variables. Each sum term is repre-
sented in 0-1-dash notation where 0 is used to denote an uncomplemented variable,
1 is used to denote a complemented variable, and a dash is used to denote the ab-
sence of a variable.* For example, if v' + w + z is a sum term of the function
f(v,w,x,y,z), then it is represented as 10--0. If the function is incompletely speci-
fied, then the don't-care conditions are regarded as having 0 functional values.
The second approach for obtaining the prime implicates of a function is based
on the fact that the complements of the prime implicants of the complement of a
Boolean function are the prime implicates of the function. Thus, given a Boolean
function, the prime implicants of the complement function are first determined by
the Quine-McCluskey method. DeMorgan’s law is then applied to each of the prime
implicants to form the prime implicates of the original function. Since the Quine-
McCluskey method is being applied to product terms in this case, 0 is used to repre-
sent a complemented variable and 1 is used to represent an uncomplemented vari-
able in the 0-1-dash notation.

*It should be noted that this is consistent with the binary notation for maxterms introduced in the
previous chapter.
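As an illustration of the second approach, the following Python sketch finds the prime implicates of a function by brute force rather than by the tabulation method: it enumerates the candidate product terms, keeps the prime implicants of the complement, and then applies DeMorgan's law to each. All names here are illustrative choices, not notation from the text.

```python
from itertools import product

VARS = "wxyz"   # a 4-variable example; the variable names are assumptions

def covers(term, m, n):
    """Does the 0-1-dash product term cover minterm m?"""
    bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
    return all(t is None or t == b for t, b in zip(term, bits))

def prime_implicates(minterms, n):
    on = set(minterms)
    comp = [m for m in range(2 ** n) if m not in on]  # minterms of f'
    # all implicants of f': brute force over the 3^n candidate terms
    imps = []
    for t in product((0, 1, None), repeat=n):
        cov = [m for m in range(2 ** n) if covers(t, m, n)]
        if cov and all(m in comp for m in cov):
            imps.append(t)
    def inside(a, b):         # cube of a strictly contained in cube of b
        return a != b and all(y is None or x == y for x, y in zip(a, b))
    primes = [t for t in imps if not any(inside(t, u) for u in imps)]
    # DeMorgan: each prime implicant of f' becomes a prime implicate of
    # f; a 0 (complemented literal in the product) turns into an
    # uncomplemented literal in the sum, and a 1 into a complemented one
    return sorted(" + ".join(v if bit == 0 else v + "'"
                             for bit, v in zip(t, VARS) if bit is not None)
                  for t in primes)
```

For the completely specified example used earlier, Σm(0,5,6,7,9,10,13,14,15), the sketch yields five prime implicates, w + x + z' among them.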

4.9 PRIME-IMPLICANT/PRIME-IMPLICATE
TABLES AND IRREDUNDANT
EXPRESSIONS
Having obtained the set of prime implicants of a function, the next step is to deter-
mine the minimal sums. Since minimal sums consist of prime implicants, it now be-
comes a matter of determining which prime implicants should be used. To do this, a
subset of the prime implicants is selected, subject to some cost criterion, such that
each minterm of the function subsumes at least one prime implicant in the selected
subset. This is analogous to the selection of a set of subcubes on a Karnaugh map
such that each 1-cell is included in at least one subcube.
A convenient way to show the relationship between the minterms and prime
implicants of a function is the prime-implicant table. The minterms are placed along
the abscissa of the table and the prime implicants are placed along the ordinate. At
the intersection of a row and column, an X is entered if the minterm of that column
subsumes the prime implicant of that row. For an incompletely specified Boolean
function, only those minterms which describe rows of the truth table where the
function equals 1 are placed along the abscissa. That is, the minterms describing the
don’t-care conditions are not included along the abscissa. Ignoring the don’t-care
terms is valid since the minimal-cost selection of prime implicants does not require
that don’t-care terms subsume at least one prime implicant.
The prime implicant table for the example of Table 4.4 is given in Table 4.6;
while that for the example of Table 4.5 is given in Table 4.7. In Tables 4.6 and 4.7
the single letter designating the prime implicant is also shown. For example, prime
implicant xz in Table 4.6 is also referred to as prime implicant A. The decimal num-
bers appearing next to the prime implicants of Tables 4.4 and 4.5 are the minterms
that subsume that prime implicant, since these were the minterms that combined to
form the prime implicant. These numbers simplify the process of determining the
entries in the prime-implicant table. For any particular prime implicant, these num-
bers indicate which columns of the prime-implicant table should contain an X. For
example, prime implicant A in Table 4.4 is the result of combining minterms 5, 7,
13, and 15. Thus, X’s appear in precisely these columns of the first row of the
prime-implicant table given in Table 4.6. In Table 4.7 it should be noted that those
conditions associated with don’t-cares do not appear along the abscissa. Corre-

Table 4.6 Prime-implicant table for the example of Table 4.4

                   m0    m5    m6    m7    m9    m10   m13   m14   m15
  A: xz                   X           X                 X           X
  B: xy                         X     X                       X     X
  C: wy'z                                   X           X
  D: wyz'                                         X           X
  E: w'x'y'z'       X

Table 4.7 Prime-implicant table for the example of Table 4.5

                   m4    m5    m9    m11   m12   m14   m15   m27   m30
  A: x'y'z                      X
  B: wx'z                       X     X                       X
  C: wyz                              X                 X     X
  D: wxy                                          X     X           X
  E: vwy                                                      X     X
  F: v'w'y'z              X
  G: v'w'xy'        X     X
  H: v'xy'z'        X                       X
  I: v'wxz'                                 X     X

spondingly, even though prime implicant A in Table 4.5 results from combining
minterms 1, 9, 17, and 25, an X only appears in the m9 column of Table 4.7.
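The construction just described can be sketched in Python as a mapping from each minterm column to the set of prime-implicant rows in which an X appears; the prime implicants transcribe Table 4.4/4.6 in 0-1-dash form (None is the dash), and the chart-as-dictionary representation is an illustrative choice.

```python
def covers(term, m, n):
    """Does minterm m subsume (lie inside) the 0-1-dash term?"""
    bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
    return all(t is None or t == b for t, b in zip(term, bits))

def pi_table(minterms, primes, n):
    """Map each minterm column to the set of row labels holding an X."""
    return {m: {name for name, t in primes.items() if covers(t, m, n)}
            for m in minterms}

primes = {                     # the prime implicants of Table 4.4
    "A": (None, 1, None, 1),   # xz
    "B": (None, 1, 1, None),   # xy
    "C": (1, None, 0, 1),      # wy'z
    "D": (1, None, 1, 0),      # wyz'
    "E": (0, 0, 0, 0),         # w'x'y'z'
}
table = pi_table([0, 5, 6, 7, 9, 10, 13, 14, 15], primes, 4)
# each single-element entry marks an essential prime implicant
```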
In Sec. 4.2 an irredundant disjunctive normal formula was defined as a sum of
prime implicants such that no prime implicant from the sum can be removed with-
out changing the function being described. In addition, it was shown that minimal
sums are irredundant disjunctive normal formulas. In terms of the prime-implicant
table, any set of rows such that each column of the table has an X in at least one row
of the set and subject to the constraint that the removal of any row from the set re-
sults in at least one column no longer having an X in at least one row of the reduced
set defines an irredundant expression. The irredundant expression is formed by sum-
ming those prime implicants that are in the set that satisfy the above requirement.

4.9.1 Petrick's Method of Determining Irredundant Expressions
At this time a general method is given that was suggested by Petrick to determine
all the irredundant disjunctive normal formulas of a function. In this way, the mini-
mal sums of the function are also determined since they are irredundant. In the next
section it is shown that certain manipulations on the prime-implicant table can be
performed to simplify the process of obtaining the irredundant expressions and min-
imal sums.
From the way the prime-implicant table is constructed, each X entry signifies
that the minterm of its column subsumes the prime implicant of its row. Alterna-
tively, it can be stated that an X entry signifies that the prime implicant of its row
covers the minterm of its column. The entire prime-implicant table is said to be cov-
ered by a subset of the prime implicants if and only if each minterm of the table is
covered at least once by the prime implicants of the subset. The problem of deter-
mining a subset of the prime implicants that covers the table is commonly referred
to as the covering problem. An irredundant cover is a cover from which no prime
implicant corresponding to a row can be deleted and still remain a cover. Each irre-
dundant cover corresponds to an irredundant disjunctive normal formula of the

function. A minimal cover of a prime-implicant table is an irredundant cover that


corresponds to a minimal sum of the function.
For simplicity in future discussions, no distinction is made between a row of a
prime-implicant table and the prime implicant associated with that row. Similarly, a
column of a prime-implicant table is considered synonymous with the minterm as-
sociated with the column. Thus, occasionally a row is said to cover a column in-
stead of the prime implicant corresponding to a row covers the minterm correspond-
ing to a column.
Consider now the conditions under which Table 4.7 is covered. The first col-
umn of the table indicates that either prime implicant G or H must be selected to
cover minterm 4. Similarly, prime implicants F or G must be selected to cover
minterm 5. In a like manner, by noting the X entries in each column, it is a simple
matter to determine which prime implicants must be selected to cover a given
minterm. Furthermore, to determine a cover of the prime-implicant table, all of the
covering conditions for its minterms must be satisfied.
If a variable is now associated with each row of the prime-implicant table such
that the variable has the value of | if the corresponding prime implicant is selected
and the value 0 if it is not selected, then an algebraic expression can be written,
called a p-expression, that describes the conditions for covering a prime-implicant
table. Let that variable be the single-letter designator of the prime implicant. Then,
the p-expression is a product of sum terms in which each sum term corresponds to
one column of the table and consists of the sum of the prime implicant designators
in which the column has X’s. Thus, for Table 4.7 we can write

p = (G + H)(F + G)(A + B)(B + C)(H + I)(D + I)(C + D)
    · (B + C + E)(D + E)                                       (4.2)
The p-expression equals 1 only when a sufficient subset of the prime implicants is
selected, by assigning the value | to a subset of the variables in the p-expression, to
cover the prime-implicant table. When the p-expression has the value of 0, indicat-
ing the prime-implicant table is not covered, at least one sum term must be 0, which
in turn implies that the corresponding column is not covered.
To illustrate the interpretation of a p-expression, let B = D = G = H = 1 and
A = C = E = F = I = 0 in Eq. (4.2). In this case, it is seen that each sum term
equals 1 and hence p = 1. This means that if rows B, D, G, and H of the prime-
implicant table are selected, then each column has at least one X. Furthermore, select-
ing only three of these four rows in the table results in some column not having an X.
This is easily verified using Eq. (4.2) by noting that if only three of the four variables
B, D, G, and H are assigned the value | and all the other variables are assigned the
value 0, then the p-expression has the value 0. Thus, prime implicants B, D, G, and H
form an irredundant cover of Table 4.7, and the sum of these prime implicants corre-
sponds to an irredundant disjunctive normal formula describing the Boolean function
having the minterms listed along the abscissa of the prime-implicant table.
To determine all the irredundant disjunctive normal formulas of a Boolean func-
tion, it is necessary to determine all the irredundant covers of its prime-implicant
table. This can be achieved by using the p-expression. The p-expression is in itself a

Boolean expression, and hence can be manipulated according to the rules of a


Boolean algebra. If a p-expression is manipulated into its sum-of-products form
using the distributive law, duplicate literals deleted in each resulting product term,
and subsuming product terms deleted, then each remaining product term of the re-
sulting expression represents an irredundant cover of the prime-implicant table.
When any product term in the sum-of-products form of the p-expression has the
value of 1, p also has the value of 1. Thus, by selecting those prime implicants repre-
sented by the variables in a single product term of the sum-of-products form of the
p-expression, a cover of the prime-implicant table is obtained. Furthermore, since all
subsuming product terms have been deleted, the resulting product terms must each
describe an irredundant cover. Correspondingly, each product term in the sum-of-
products form of the p-expression suggests an irredundant disjunctive normal for-
mula. The formula is obtained by summing the prime implicants indicated by the
variables in a product term.
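The expansion of the p-expression into sum-of-products form, with duplicate literals and subsuming products deleted, can be sketched as follows. Set union deletes duplicate literals for free, and the absorption test deletes subsuming products; the column sets transcribe Eq. (4.2), and the function name is an illustrative choice.

```python
def petrick(columns):
    """Return all irredundant covers, each a frozenset of row labels."""
    products = [frozenset()]
    for col in columns:
        # distribute the next sum term over the partial products
        expanded = {p | {r} for p in products for r in col}
        # absorption: a product subsuming a smaller product is dropped
        products = [p for p in expanded
                    if not any(q < p for q in expanded)]
    return products

# one sum term per column of Table 4.7, as in Eq. (4.2)
columns = [set("GH"), set("FG"), set("AB"), set("BC"), set("HI"),
           set("DI"), set("CD"), set("BCE"), set("DE")]
covers = petrick(columns)   # the ten irredundant covers
```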
Consider again the p-expression for Table 4.7, i.e., Eq. (4.2), then*

p = (G + H)(F + G)(A + B)(B + C)(H + I)(D + I)(C + D)(B + C + E)(D + E)
  = (G + H)(F + G)(A + B)(B + C)(H + I)(D + I)(C + D)(D + E)
  = (G + FH)(B + AC)(I + DH)(D + CE)
  = (BG + ACG + BFH + ACFH)(DI + CEI + DH + CDEH)
  = (BG + ACG + BFH + ACFH)(DI + CEI + DH)
  = BDGI + BCEGI + BDGH + ACDGI + ACEGI + ACDGH + BDFHI
    + BCEFHI + BDFH + ACDFHI + ACEFHI + ACDFH
  = BDGI + BCEGI + BDGH + ACDGI + ACEGI + ACDGH + BCEFHI
    + BDFH + ACEFHI + ACDFH

This last expression, which was obtained from the expression above it by dropping
the subsuming terms BDFHI and ACDFHI, implies that there are 10 irredundant dis-
junctive normal formulas for the Boolean function. The first term suggests that one
irredundant expression is the sum of prime implicants B, D, G, and I. Referring to
Table 4.5 or Table 4.7, this expression is

f1(v,w,x,y,z) = B + D + G + I = wx'z + wxy + v'w'xy' + v'wxz'

The second term of the p-expression describes the irredundant expression

f2(v,w,x,y,z) = B + C + E + G + I
             = wx'z + wyz + vwy + v'w'xy' + v'wxz'

Notice that f2(v,w,x,y,z) is irredundant even though it has more terms than
f1(v,w,x,y,z), since if any term from f2(v,w,x,y,z) is deleted, then it no longer de-
scribes the function of Table 4.5. Thus, not all irredundant disjunctive normal

*The term (B + C + E) is deleted in the second line, and the subsuming product terms later, as a
consequence of the absorption property of a Boolean algebra, i.e., Theorem 3.6.

expressions are minimal sums. If each of the 10 irredundant expressions is now


evaluated by the cost criterion proposed in Sec. 4.1 involving the total number of
gate inputs, then the minimal sums are obtained since a minimal expression is irre-
dundant. For this particular example, there are three minimal sums. These cor-
respond to the first, third, and eighth terms in the sum-of-products form of the
p-expression. The minimal sums are
f1(v,w,x,y,z) = wx'z + wxy + v'w'xy' + v'wxz'
f3(v,w,x,y,z) = wx'z + wxy + v'w'xy' + v'xy'z'
f8(v,w,x,y,z) = wx'z + wxy + v'w'y'z + v'xy'z'
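The cost screening just performed can be sketched as follows. The ten covers transcribe the product terms of the p-expression, the literal counts follow the nine prime implicants of Table 4.5, and the cost of a multi-literal term is its literal count plus one (the second criterion); the names are illustrative.

```python
# gate-input cost of each irredundant cover under the second criterion
literals = {"A": 3, "B": 3, "C": 3, "D": 3, "E": 3,
            "F": 4, "G": 4, "H": 4, "I": 4}
covers = ["BDGI", "BCEGI", "BDGH", "ACDGI", "ACEGI",
          "ACDGH", "BCEFHI", "BDFH", "ACEFHI", "ACDFH"]
cost = {c: sum(literals[p] + 1 for p in c) for c in covers}
minimal = [c for c in covers if cost[c] == min(cost.values())]
# minimal holds the least-cost covers: the first, third, and eighth terms
```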

4.9.2 Prime-Implicate Tables and Irredundant Conjunctive Normal Formulas
When dealing with prime implicates for the purpose of determining irredundant
conjunctive normal formulas and minimal products, a prime-implicate table is
formed in a completely analogous manner as a prime-implicant table. In such a
case, the prime implicates are placed along the ordinate and the maxterms associ-
ated with functional values of 0 along the abscissa. Again a p-expression is written.
Since a p-expression indicates how a table is covered, it can be used to determine
the irredundant covers of a prime-implicate table. Thus, each product term in the
sum-of-products form of the p-expression obtained by using the distributive law (in
which subsuming terms are dropped) corresponds to an irredundant cover that, in
turn, suggests an irredundant conjunctive normal formula. The formula is obtained
by forming the product of the prime implicates indicated by the variables in a prod-
uct term of the sum-of-products form of the p-expression. From the set of irredun-
dant conjunctive normal formulas, the minimal products are determined by apply-
ing an appropriate cost criterion.
As an alternative for determining irredundant conjunctive normal formulas and
minimal products, the prime implicants of the complement of a given Boolean func-
tion are obtained as indicated at the end of the previous section. In this case, the
prime-implicant table is formed and the p-expression written. The p-expression is
then manipulated into its sum-of-products form using the distributive law and sub-
suming terms are dropped. From the product terms in the resulting p-expression the
irredundant disjunctive normal formulas for the complement of the original Boolean
function are determined. If DeMorgan’s law is applied to each of these expressions,
then the irredundant conjunctive normal formulas for the originally given Boolean
function are obtained. Upon applying a cost criterion to each irredundant expres-
sion, the minimal products are established.

4.10 PRIME-IMPLICANT/PRIME-IMPLICATE
TABLE REDUCTIONS
As was seen in the previous section, once a prime-implicant table for a Boolean
function is obtained, the irredundant disjunctive normal formulas and, in particular,
the minimal sums are readily determined. However, the amount of work necessary

to do this is dependent upon the size of the prime-implicant table. It is frequently


possible to reduce the number of columns and rows of a prime-implicant table, thus
allowing irredundant disjunctive normal formulas and minimal sums to be more
quickly obtained.

4.10.1 Essential Prime Implicants


In Sec. 4.5 it was established that every essential prime implicant of a function
must appear in all its minimal sums. In actuality, from the definition of essential
prime implicants, they must also appear in all the irredundant disjunctive normal
formulas of the function. Essential prime implicants are easily detectable in the
prime-implicant table.
An essential prime implicant was defined, relative to the Karnaugh map of a
function, as a product term corresponding to a prime-implicant subcube that con-
tains a 1-cell that cannot be a member of any other prime-implicant subcube. Real-
izing that a 1-cell on a Karnaugh map corresponds to a minterm of a function, an es-
sential prime implicant can then be redefined as a prime implicant that is subsumed
by a minterm of the function that subsumes no other prime implicant of the func-
tion. From the construction of the prime-implicant table, if a column has only a sin-
gle X, then the minterm associated with that column subsumes only the prime im-
plicant of the row in which the X appears. Thus, by definition, the prime implicant
is essential. The row in which the X appears is called an essential row. All essential
rows must be selected when forming an irredundant cover of a prime-implicant
table.
Once it is known that a given prime implicant must be selected for a cover, it is
possible to reduce the size of the prime-implicant table in order to establish which
additional prime implicants should be selected in order to obtain an irredundant or
minimal cover of the table. To determine the additional prime implicants, a reduced
table is formed in which (1) all columns corresponding to the minterms that sub-
sume the selected prime implicant are deleted and (2) the row corresponding to the
selected prime implicant is deleted. All the irredundant covers of this reduced
prime-implicant table, along with the selected prime implicant, are the irredundant
covers of the original table. The validity of this procedure follows from the fact that
in order to obtain a cover of a prime-implicant table, each minterm corresponding to
a column of the table must subsume at least one prime implicant in the selected set
of rows that forms the cover. However, once the minterm of a column is covered, it
is not necessary to cover it again. Thus, the reduced table indicates only those
minterms that remain to be covered.
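The essential-row reduction can be sketched on the chart form used earlier (each minterm column mapped to the set of rows holding an X); the data transcribe Table 4.6. Repeating the step handles any secondary essential rows exposed by a deletion; the names are illustrative.

```python
def reduce_by_essentials(table):
    """Select essential rows and delete the columns they cover."""
    essential = set()
    table = dict(table)
    while True:
        # a column with a single X forces the selection of its row
        forced = {next(iter(rows)) for rows in table.values()
                  if len(rows) == 1}
        if not forced:
            return essential, table
        essential |= forced
        # delete every column covered by a newly selected row
        table = {m: rows for m, rows in table.items()
                 if not rows & forced}

table_4_6 = {0: {"E"}, 5: {"A"}, 6: {"B"}, 7: {"A", "B"}, 9: {"C"},
             10: {"D"}, 13: {"A", "C"}, 14: {"B", "D"}, 15: {"A", "B"}}
essential, remaining = reduce_by_essentials(table_4_6)
# here every prime implicant is essential and the table is fully covered
```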
To illustrate the usefulness of the above results, again consider Table 4.6. In
this table, five columns have single X's, and the rows of these X's correspond to
the essential prime implicants of the function. Upon the selection of these prime im-
plicants, all the columns of the table are covered. Thus, the sum of just the essential
prime implicants represents the only irredundant expression for this example. Con-
sequently, this expression is also a minimal sum. In general, whenever the set of es-
sential rows forms a cover of a prime-implicant table, there is only one minimal
sum for the function. This minimal sum is given by the sum of the prime implicants

associated with the essential rows. Although in this particular example all the prime
implicants are essential, in general, this is not the case, even when the minimal sum
is unique.*

4.10.2 Column and Row Reductions


Besides the table reduction due to essential prime implicants, additional reductions
of a prime-implicant table are frequently possible. However, before considering
these reductions, it is necessary to review the two criteria of minimality introduced
in Sec. 4.1. The first criterion defined a minimal sum as a disjunctive normal for-
mula having the minimal number of terms with more than one literal. Since only
terms having more than one literal can contribute to the cost of an expression, for
the purpose of cost evaluation, a cost of 1 is assigned to each prime implicant hav-
ing more than one literal and a cost of 0 to each prime implicant having exactly one
literal.
In the second criterion of minimality, the cost of an expression is related to the
total number of gate inputs in a two-level realization. In this case, for terms having
more than one literal, the cost associated with each prime implicant is one greater
than the number of literals in the term since the number of literals corresponds to
the number of input terminals of the first-level and-gate and the additional one
corresponds to the input terminal of the second-level or-gate. If the term has only a
single literal, then the cost is simply one that corresponds to the direct input to the
second-level or-gate. Once a cost criterion is established, the appropriate cost for a
prime implicant can be appended to each row of a prime-implicant table. This is il-
lustrated in Table 4.8, where a cost column is added to the prime-implicant table for
the function f(v,w,x,y,z) = Σm(1,9,10,11) + dc(0,3,14,25,27) under the assumption
of the second criterion of minimality.
Two columns of a prime-implicant table having their X’s in exactly the same
rows are said to be equal. Furthermore, a column c_i of a prime-implicant table is
said to dominate column c_j of the same table if column c_i has X's in all the rows in
which column c_j has X's and if, in addition, column c_i has at least one X in a row in
which column c_j does not have an X. For the prime-implicant table shown in Table
4.8, the column for minterm m11 dominates the column for minterm m9.
In a prime-implicant table, a column c_i can be removed without affecting the ir-
redundant covers being sought if (1) there is another column c_j in the same table
that is equal to column c_i or (2) there is another column c_j in the same table that is
dominated by column c_i. The reason a dominating column can be removed from the
prime-implicant table follows from the fact that any set of rows that covers the
dominated column must also cover the dominating column. Hence, it is only neces-
sary to ensure the covering of the dominated column. Similarly, when two columns
are equal, the cover of one column is sufficient to guarantee the cover of the other
column. Therefore, only one of the two columns is needed in the prime-implicant

“Example 4.2 in Sec. 4.5 illustrates a function having a unique minimal sum in which not all its prime
implicants are essential.

Table 4.8 Prime-implicant table for the function
          f(v,w,x,y,z) = Σm(1,9,10,11) + dc(0,3,14,25,27)

                   m1    m9    m10   m11   Cost
  A: v'x'z          X     X           X      4
  B: v'w'x'y'       X                        5
  C: wx'z                 X           X      4
  D: v'wx'y                     X     X      5
  E: v'wyz'                     X            5
table. Applying this result, the reduced prime-implicant table of Table 4.9 can be
used to determine the irredundant covers of Table 4.8.
The concepts of dominance and equality can also be applied to rows of a
prime-implicant table. Two rows of a prime-implicant table are said to be equal if
they have X's in exactly the same columns. A row r_i of a prime-implicant table is
said to dominate another row r_j of the same table if row r_i has X's in all the columns
in which row r_j has X's and if, in addition, row r_i has at least one X in a column in
which row r_j does not have an X. Referring again to Table 4.8, it is seen that row A
dominates both rows B and C, while row D dominates row E.
The row dominance and equality concepts can also be used for prime-implicant
table reductions. Assume that there is some irredundant cover of a prime-implicant
table and that this cover contains a row 1; that is dominated by or that is equal to a row
r;. Those columns covered by row r; are also covered by row r;, by definition of a dom-
inating or equal row. Hence, an irredundant cover still results if row r; is used instead
of r;. Furthermore, if the cost of row 7; is not greater than the cost of row r;, then the ex-
pression obtained by summing the prime implicants associated with the cover having
row r; does not have a higher cost than the disjunctive normal formula obtained if the
prime implicant associated with r; is used instead. Since a minimal cover is an irredun-
dant cover, a row r; of a prime-implicant table can be removed and at least one mini-
mal cover of the original table, from which a minimal sum is written, is obtainable
from the reduced table if (1) there is another row r; of the same table that is equal to
row r, and does not have a higher cost than r; or (2) there is another row 7, of the same
table which dominates row r; and that does not have a higher cost than row 1;.
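The column- and row-dominance tests just defined can be sketched on the same chart form (minterm column mapped to the set of rows holding an X). The X pattern below reproduces Table 4.8, though which of the two single-column rows is labeled B and which C is an assumption, as are the helper names and the tie-breaking rules for equal columns and equal-cost equal rows.

```python
def drop_dominating_columns(table):
    # a column goes if it dominates, or (tie-break) equals, another
    return {m: rows for m, rows in table.items()
            if not any(other < rows or (other == rows and m2 < m)
                       for m2, other in table.items() if m2 != m)}

def drop_dominated_rows(table, cost):
    labels = {r for rows in table.values() for r in rows}
    cols = {r: {m for m, rows in table.items() if r in rows}
            for r in labels}
    # a row goes if some no-costlier row dominates it (or equals it,
    # using an alphabetic tie-break so only one of an equal pair is kept)
    doomed = {r for r in labels
              if any(cost[q] <= cost[r]
                     and (cols[r] < cols[q]
                          or (cols[r] == cols[q] and q < r))
                     for q in labels if q != r)}
    return {m: rows - doomed for m, rows in table.items()}

cost = {"A": 4, "B": 5, "C": 4, "D": 5, "E": 5}
table_4_8 = {1: {"A", "B"}, 9: {"A", "C"},
             10: {"D", "E"}, 11: {"A", "C", "D"}}
reduced = drop_dominated_rows(drop_dominating_columns(table_4_8), cost)
# only single-X columns remain; rows A and D then form a minimal cover
```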

Table 4.9 Table 4.8 after deleting the dominating column

                   m1    m9    m10   Cost
  A: v'x'z          X     X            4
  B: v'w'x'y'       X                  5
  C: wx'z                 X            4
  D: v'wx'y                     X      5
  E: v'wyz'                     X      5

It is important to note that the removal of an equal or dominated row is only ap-
plicable if it is not necessary to obtain all irredundant disjunctive normal formulas
or all minimal sums of a function. This is usually not a serious restriction, since in
most cases only one minimal sum is needed. In addition, the column and row reduc-
tion procedures can be applied any number of times to a prime-implicant table and
in any order.
To illustrate the application of the above column and row reduction concepts to
obtain minimal sums, again consider Table 4.7. The table is redrawn in Table 4.10a
with a cost column according to the second criterion for minimality. In particular,
since each prime implicant has more than a single literal, the cost assigned to each
row is the number of literals in its prime implicant plus one. Next, it is noted that
the column for m27 dominates the column for m11. Since dominating columns can be
removed, Table 4.10b results. In the reduced table, row B dominates row A, row D
dominates row E, and row G dominates row F. Since in each case the dominated
row has a cost equal to its dominating row, the dominated rows are deleted. This re-
sults in Table 4.10c. Even though column and row reductions have been used, a
minimal sum can still be obtained by finding a minimal cover of Table 4.10c. It is
now noted that three columns of Table 4.10c have a single X. These X’s are cir-
cled. The rows in which the X’s appear must be selected for the minimal cover to
guarantee that the corresponding columns are covered. Rows B, D, and G have dou-
ble asterisks placed next to them to indicate that they are selected. The selection of
rows B, D, and G results in minterms m4, m5, m9, m11, m14, m15, and m30 being cov-
ered. Hence, these rows and columns are deleted from the table for the purpose of
determining which additional rows must be selected to obtain a minimal cover. The
resulting table is shown in Table 4.10d. Notice that row C does not appear in the
table since it does not cover any of the remaining minterms. Upon selection of ei-
ther row H or row J, a minimal cover is obtained. Therefore, one minimal cover
consists of rows B, D, G, and H, and another minimal cover consists of rows B, D,
G, and /. These two covers were also obtained in the previous section when Pet-
rick’s method was applied to Table 4.7. However, as was mentioned previously, not
all minimal covers are necessarily obtained when row reduction is applied, but at
least one minimal cover is guaranteed. For this example, Petrick’s method did yield
a third minimal cover. Finally, once a minimal cover is established, the minimal
sum expression can be written.
In general, it is not always possible to obtain a minimal cover solely by applying the
table reduction procedures. A prime-implicant table in which each column contains
at least two X’s and in which no column or row can be deleted as a result of domi-
nance or equality is said to be a cyclic table. When this condition occurs, one can
revert to Petrick’s method for completing the determination of a minimal cover of
the table. In this way, columns and rows are deleted until a cyclic table results.
Once the reduced prime-implicant table becomes cyclic, Petrick’s method is ap-
plied. The rows selected by Petrick’s method plus any rows selected previously dur-
ing the prime-implicant table reduction procedures form a minimal cover of the
original table.
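The dominance reductions just described are mechanical enough to sketch in code. In the Python sketch below, a prime-implicant table is a map from row names to the minterm columns they cover; the function names and the small cover table are illustrative inventions, not Table 4.7.

```python
def column_reduce(rows):
    """Delete dominating (and duplicate) columns: if every row covering
    column d also covers column c, column c dominates d and is deleted."""
    cols = sorted(set().union(*rows.values()))
    covering = {c: frozenset(r for r, s in rows.items() if c in s) for c in cols}
    removed = set()
    for c in cols:
        for d in cols:
            if c != d and d not in removed and covering[d] <= covering[c]:
                removed.add(c)          # keep the dominated column d
                break
    return {r: set(s) - removed for r, s in rows.items()}

def row_reduce(rows, cost):
    """Delete row a if some other row b dominates it (covers a superset
    of a's columns) at no greater cost."""
    removed = set()
    for a in rows:
        for b in rows:
            if (a != b and b not in removed
                    and rows[a] <= rows[b] and cost[b] <= cost[a]):
                removed.add(a)
                break
    return {r: s for r, s in rows.items() if r not in removed}

# Hypothetical cover table: prime implicant -> set of minterm columns.
table = {"A": {0}, "B": {0, 1, 3}, "C": {1, 3}, "D": {2, 3}}
cost = {"A": 4, "B": 4, "C": 4, "D": 4}
reduced = row_reduce(column_reduce(table), cost)
```

A complete reducer would alternate these passes with the secondary-essential selection until the table is covered or becomes cyclic, at which point Petrick's method takes over.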
CHAPTER 4 Simplification of Boolean Expressions 183

Table 4.10 Prime-implicant table reduction applied to Table 4.7. (a) Table 4.7 with a
cost column. (b) After deleting dominating column. (c) After deleting
dominated rows. (d) After selecting rows B, D, and G

4.10.3 A Prime-implicant Selection Procedure


When a single minimal sum is to be obtained, the following procedure can be
applied to a prime-implicant table:
1. Find all the essential prime implicants by checking the table for columns with a
single X. Place an asterisk next to each essential prime implicant. Rule a line
through the essential rows and all columns which have an X in an essential
row. If all columns are ruled out, then the minimal sum is the sum of the
essential prime implicants.
2. Rule a line through all dominating columns and dominated rows, keeping in
mind the cost restriction for deleting rows, until no further reductions are
possible.
3. Check to see if any unruled column has a single X. If there are no such
columns, then the table is cyclic. If there are some columns with a single X,
then place a double asterisk next to the rows in which these X's appear. These
are called secondary essential rows. Rule a line through each secondary
essential row and each column in which an X appears in a secondary essential
row.
4. If all columns are ruled out, then the minimal sum is given by the sum of all
the prime implicants which are associated with rows that have asterisks next to
them. If all columns are not ruled out, then repeat Steps 2 and 3 until either
there are no columns to be ruled or a cyclic table results.
5. If a cyclic table results, then Petrick's method is applied to the cyclic table, and
a minimal cover is obtained for it. The sum of all prime implicants that are
marked with asterisks plus the prime implicants for the minimal cover of the
cyclic table as determined by Petrick's method is a minimal sum.
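Step 1 of the procedure can be sketched as follows; the table data and function name are a hypothetical illustration, not an example from the book.

```python
def select_essentials(rows):
    """Step 1: a column with a single X makes its row essential;
    rule out every column an essential row covers."""
    cols = set().union(*rows.values())
    essential = set()
    for c in cols:
        covers = [r for r, s in rows.items() if c in s]
        if len(covers) == 1:              # single X in this column
            essential.add(covers[0])
    ruled = set().union(*(rows[r] for r in essential)) if essential else set()
    remaining = {r: s - ruled for r, s in rows.items()
                 if r not in essential and s - ruled}
    return essential, remaining

# Hypothetical table: columns 0 and 3 each have a single X.
table = {"A": {0, 1}, "B": {1, 2}, "C": {2, 3}}
essential, remaining = select_essentials(table)
```

Here every column is ruled out by the essential rows, so by Step 1 the minimal sum is the sum of the essential prime implicants alone.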

Although the discussion in this section involved prime-implicant tables, the
procedures introduced can equally well be applied to prime-implicate tables for ob-
taining minimal products.

4.11 DECIMAL METHOD FOR OBTAINING PRIME IMPLICANTS
In Sec. 4.8 the prime implicants of a function were obtained by comparing the
0-1-dash representations of the minterms and generated product terms. This process
can also be carried out using the decimal representations of the minterms. So that
the reader can compare the decimal procedure with that of Sec. 4.8, the process is
applied to the function f(w,x,y,z) = Σm(0,5,6,7,9,10,13,14,15), which was previ-
ously studied in Table 4.4. The generation of the prime implicants using the decimal
method is given in Table 4.11.
In the first column of Table 4.11, the decimal numbers are arranged in groups
such that their associated minterms have equal index within each group, and the
groups are ordered by increasing index. This part of the decimal procedure is identi-
cal to that of Sec. 4.8.

Table 4.11 Obtaining prime implicants using decimal numbers

   0  E         5,7 (2)    ✓      5,7,13,15 (2,8)  A
  ------        5,13 (8)   ✓      6,7,14,15 (1,8)  B
   5  ✓         6,7 (1)    ✓
   6  ✓         6,14 (8)   ✓
   9  ✓         9,13 (4)   C
  10  ✓         10,14 (4)  D
  ------        ------
   7  ✓         7,15 (8)   ✓
  13  ✓         13,15 (2)  ✓
  14  ✓         14,15 (1)  ✓
  ------
  15  ✓

The next step is to compare the minterms in adjacent groups. In general, two
minterms represented by decimal numbers can combine to form a single product
term if and only if their decimal difference is a power of 2 and the smaller decimal
number represents the minterm with index i and the larger decimal number repre-
sents a minterm with index i + 1, where i = 0, 1, 2, . . . . Thus, two minterms m_a and
m_b of index i and i + 1, respectively, can combine if b − a is positive and a power
of 2. The combined term is written as a,b(c), where c is the power-of-2 difference.
Again referring to Table 4.11, since the second group of minterms, i.e., those
having index 1, is an empty set, no comparisons are possible between the first and
second groups and the second and third groups. However, applying the above rule
to the third and fourth groups of the first column generates the first group in the sec-
ond column. For example, minterms 5 and 7 combine, since 7 − 5 = 2, to form the
second-column entry 5,7(2). On the other hand, minterms 6 and 13 cannot combine
since the difference 13 − 6 = 7 is not a power of 2; whereas minterms m9 and m7
cannot combine since the difference 7 − 9 is negative. Similarly, the fourth and
fifth groups of the first column yield the second group in the second column. As in
Sec. 4.8, terms that combine are checked. It should be noted that the power of 2 ap-
pearing in parentheses is the weight of the variable, under a binary representation,
which is eliminated.
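The pairing rule has a compact bitwise reading: b − a must be a positive power of 2, and the bit of that weight must be 0 in a. That second check is exactly what the index-i versus index-(i + 1) requirement enforces; it rules out, for instance, minterms 6 and 10, whose difference is 4 but whose indices are equal. A sketch (function name mine):

```python
def can_combine(a, b):
    """Decimal pairing rule: minterms a and b combine iff b - a is a
    positive power of 2 and that weight's bit is 0 in a."""
    d = b - a
    return d > 0 and d & (d - 1) == 0 and a & d == 0

# Examples from the Table 4.11 discussion:
# can_combine(5, 7)  -> True   (7 - 5 = 2)
# can_combine(6, 13) -> False  (13 - 6 = 7, not a power of 2)
# can_combine(9, 7)  -> False  (7 - 9 is negative)
```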
To continue the comparison process, special attention must be given to the
numbers in parentheses since the combining of terms is only possible when they
have the same variables. Let a1,a2,...,ap(c1,c2,...,cq) denote a term, referred to as
term a, which is the result of combining minterms m_a1, m_a2, . . . , m_ap. That is, a1, a2,
. . . , ap are the decimal numbers of the minterms that combined, and c1, c2, . . . , cq
are the power-of-2 differences that represent the variables eliminated. Further-
more, let the decimal numbers a1, a2, . . . , ap be in increasing numerical order;
similarly for the decimal numbers c1, c2, . . . , cq. In an analogous manner, let

b1,b2,...,br(d1,d2,...,dq) denote another term, referred to as term b. Term a and
term b can combine to form a single term if and only if the numbers in parentheses
for the two terms are the same (i.e., c1 = d1, c2 = d2, . . . , cq = dq), the difference
b1 − a1 is positive and a power of 2, and term a has index i while term b has index
i + 1. It should be noted that only the leading (i.e., smallest) decimal designators of
terms a and b are subtracted. The newly combined term consists of all the minterm
decimal designators of terms a and b. The numbers in parentheses for the new term
are the same as terms a and b along with the new difference b1 − a1. Again, the
terms which enter into combination, i.e., term a and term b, are checked.
Returning to Table 4.11, the third column is obtained from the second column.
Term 5,7(2) combines with term 13,15(2) in its adjacent group to form the term
5,7,13,15(2,8) since the two terms have the same numbers in parentheses and the
difference between each term’s smallest designator (i.e., 13 — 5 = 8) is a power of
2. Terms 5,7(2) and 13,15(2) are both checked. The difference 13 — 5 = 8 is in-
cluded in the parentheses of the combined term since this number indicates which
new variable is eliminated as a result of the combination. Next it is noted that term
5,13(8) combines with term 7,15(8). The result of this combination is the term
5,7,13,15(2,8), which was obtained previously and hence is not written a second
time. However, any time two terms combine, they must be checked to indicate that
they are not prime implicants. As a result, terms 5,13(8) and 7,15(8) in the second
column are checked. After all comparisons between the two groups of the second
column in Table 4.11 are completed, the resulting third column consists of only a
single group having two terms. With only a single group in the third column, no
comparisons are possible, and the prime-implicant generation process is completed.
The five unchecked terms are the prime implicants.
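The whole comparison process can be sketched as a short program. The representation below (a term as its set of covered minterms plus its set of eliminated weights) and the function name are mine, not the book's; applied to Σm(0,5,6,7,9,10,13,14,15), the sketch reproduces the five prime implicants of Table 4.11.

```python
def decimal_prime_implicants(minterms):
    """Decimal Quine-McCluskey sketch.  A term is a pair
    (covered minterms, eliminated power-of-2 weights).  Two terms combine
    when their eliminated weights agree, the difference of their smallest
    minterms is a power of 2, and one kernel is the other shifted by it."""
    current = {(frozenset([m]), frozenset()) for m in minterms}
    primes = set()
    while current:
        generated, checked = set(), set()
        for a in current:
            for b in current:
                (ka, ea), (kb, eb) = a, b
                if ea != eb:                      # different variables eliminated
                    continue
                d = min(kb) - min(ka)             # leading-designator difference
                if (d > 0 and d & (d - 1) == 0
                        and all(m & d == 0 for m in ka)
                        and {m + d for m in ka} == kb):
                    generated.add((ka | kb, ea | frozenset([d])))
                    checked |= {a, b}
        primes |= current - checked               # unchecked terms are prime
        current = generated
    return primes

primes = decimal_prime_implicants({0, 5, 6, 7, 9, 10, 13, 14, 15})
```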
In order to write the prime implicants in algebraic form, it is necessary to
transform the decimal representation of a term into its 0-1-dash representation.
Consider prime implicant C in the second column of Table 4.11, i.e., term 9,13(4).
The steps for the transformation are shown in Table 4.12a. First, the variables of
the function are listed along with the weights of these variables when a binary rep-
resentation is used. In the decimal representation of a term, the numbers in paren-
theses indicate the weights of the eliminated variables. Thus, since the weight of
the eliminated variable in the term 9,13(4) is 2^2 = 4, a dash is placed under the x
variable in Table 4.12a. Next, the 1's of the 0-1-dash representation are deter-
mined. The weights which sum to the smallest minterm number in the term deter-
mine the positions where 1's appear in the 0-1-dash representation. Since 9 is the
smallest minterm number in the term 9,13(4), 1's are entered in the columns of
weights 2^0 = 1 and 2^3 = 8 in Table 4.12a. Finally, all the remaining positions that
do not have dashes or 1's must have 0's in the 0-1-dash representation. Thus, the
0-1-dash representation of term 9,13(4) is 1-01, which corresponds to the alge-
braic form wy'z.
As a second example, consider prime implicant A, i.e., term 5,7,13,15(2,8).
Table 4.12b shows the steps leading to its 0-1-dash representation. In this particular
case, after the positions for the dashes and 1's are determined, no positions are

Table 4.12 Transforming decimal representation of terms into 0-1-dash
representation. (a) 9,13(4). (b) 5,7,13,15(2,8)

   w    x    y    z    <- variables
  2^3  2^2  2^1  2^0   <- weights of the binary representation
        -              <- dash for position of weight 4
   1              1    <- 1's for the positions that sum to the smallest minterm
                          number (i.e., 9)
             0         <- 0's for all remaining positions
   1    -    0    1    <- 0-1-dash representation

(a)

   w    x    y    z    <- variables
  2^3  2^2  2^1  2^0   <- weights of the binary representation
   -         -         <- dashes for positions of weights 2 and 8
        1         1    <- 1's for the positions that sum to the smallest minterm
                          number (i.e., 5)
   -    1    -    1    <- 0-1-dash representation

(b)

available for 0's. Hence, no 0's appear in the 0-1-dash representation. Since -1-1 is
the 0-1-dash representation of the prime implicant, its algebraic form is xz. The
reader can easily verify that if all the entries of Table 4.11 are transformed into
0-1-dash representation (including the checked terms), then Table 4.4 results.
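The transformation of Table 4.12 is easy to mechanize: a dash at each eliminated weight, 1's at the weights summing to the smallest minterm number, 0's elsewhere. A sketch (function name mine):

```python
def dash_form(term, eliminated, nvars=4):
    """0-1-dash string of a decimal term: a dash at each eliminated
    weight, 1's at the weights that sum to the smallest minterm number,
    and 0's at every remaining position."""
    smallest = min(term)
    out = []
    for pos in range(nvars - 1, -1, -1):       # most significant weight first
        weight = 1 << pos
        if weight in eliminated:
            out.append("-")
        elif smallest & weight:
            out.append("1")
        else:
            out.append("0")
    return "".join(out)

# dash_form({9, 13}, {4})           -> "1-01"  (algebraic form wy'z)
# dash_form({5, 7, 13, 15}, {2, 8}) -> "-1-1"  (algebraic form xz)
```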
The decimal process introduced in this section yields all the prime implicants
of a Boolean function. Once the prime implicants are obtained, the prime-implicant
table is drawn, and a minimal sum is found by the techniques of Secs. 4.9 and 4.10.
Although the above discussion pertained to a decimal method for obtaining
prime implicants, by starting with a set of decimal maxterms the procedure pro-
duces the prime implicates of a function.

4.12 THE MULTIPLE-OUTPUT SIMPLIFICATION PROBLEM
Up to this point, the simplification problem has been directed to combinational
networks having a single output. However, general combinational networks can
have several output terminals as illustrated in Fig. 4.31. In such a case, the out-
put behavior of the network is described by a set of functions, f1, f2, . . . , fm, one
for each output terminal, each involving the same input variables, x1, x2, . . . , xn.
This set of functions is represented by a truth table with n + m columns. The
first n columns denote all the combinations of values of the n input variables,
and the last m columns represent the values at the m output terminals for each
input combination.

Figure 4.31 A multiple-output combinational network.

With the objective of designing a multiple-output network of minimal cost, one
might approach the design by simply constructing a realization based on the mini-
mal expressions for each output function independently of the others. For example,
Table 4.13 gives a truth table for a combinational network having two outputs.
Using the Karnaugh maps in Fig. 4.32a, the minimal sum for each function is

    f1(x,y,z) = x'y' + yz
    f2(x,y,z) = yz + xy

The corresponding realization is shown in Fig. 4.32b, which has a total of 6 gates
and 12 gate inputs. However, since the two expressions share a common term, i.e.,
yz, a more economical realization is shown in Fig. 4.32c having only 5 gates and 10
gate inputs.
Unfortunately, the multiple-output simplification problem is normally more
difficult than simply sharing common terms in the independently obtained minimal
expressions. For example, consider the pair of functions

fiey.z) = 2m ,3.5)
folx%y,z) = Xm(3,6,7)

Table 4.13 Truth table for a multiple-output combinational network described by
the pair of functions f1(x,y,z) = Σm(0,1,3,7) and f2(x,y,z) = Σm(3,6,7)

  x  y  z  |  f1  f2
  0  0  0  |  1   0
  0  0  1  |  1   0
  0  1  0  |  0   0
  0  1  1  |  1   1
  1  0  0  |  0   0
  1  0  1  |  0   0
  1  1  0  |  0   1
  1  1  1  |  1   1

Figure 4.32 Realization of Table 4.13. (a) Karnaugh maps for minimal sums.
(b) Realization of individual minimal sums. (c) Realization based
on shared term.

Figure 4.33a shows the Karnaugh maps for the independently obtained minimal
sums that suggest the realization of Fig. 4.33b. This realization uses 6 gates with 12
gate inputs. On the other hand, if the 1-cells of the maps are grouped as shown in
Fig. 4.33c to obtain the expressions

    f1(x,y,z) = y'z + x'yz
    f2(x,y,z) = xy + x'yz

then the corresponding realization of Fig. 4.33d results, which uses 5 gates and 11
gate inputs. Certainly, this is a lower-cost realization if cost is measured by either
the number of gates or the number of gate input terminals. Furthermore, it was
achieved without every term in the expressions being prime implicants of the indi-
vidual functions. In particular, the term x'yz is a prime implicant of neither f1 nor f2.
However, since the two functions have a term in common, it should be suspected
that a relationship exists between the common term and the product function f1 · f2,
that is, the function obtained when the two functions are and'ed together. In this
particular case, the product function has the single prime implicant x'yz.
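The suspected relationship is easy to check mechanically, since the minterms of the product function are just those common to both functions. A minimal sketch (variable names mine):

```python
# f1(x,y,z) = sum m(1,3,5) and f2(x,y,z) = sum m(3,6,7) as minterm sets
f1 = {1, 3, 5}
f2 = {3, 6, 7}

# The product function f1 . f2 is 1 exactly where both outputs are 1.
product = f1 & f2   # {3}; minterm 3 = 011, i.e., the shared term x'yz
```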

Figure 4.33 Realization of the pair of functions f1(x,y,z) = Σm(1,3,5) and f2(x,y,z) = Σm(3,6,7). (a) Karnaugh
maps for minimal sums of individual functions. (b) Realization of independently obtained minimal
sums. (c) Karnaugh maps for alternate groupings. (d) Realization based on alternate map
groupings.

From an algebraic standpoint, a minimal representation of a multiple-output


logic network is defined as a set of normal expressions that has associated with it a
minimal cost as given by some cost criterion. As in the case of single-output net-
works, the cost measure can be based on either the number of gates in the realiza-
tion* or the number of gate input terminals in the realization. If the minimal repre-
sentation consists of all disjunctive normal expressions, then the representation is
called a multiple-output minimal sum. On the other hand, if the minimal representa-
tion consists of all conjunctive normal formulas, then the representation is called a
multiple-output minimal product.

*In the case of multiple-output networks, the number of gates is closely related to the number of distinct
terms appearing in the set of output expressions.

4.12.1 Multiple-Output Prime Implicants


A prime implicant was defined previously for a single Boolean function as a prod-
uct term which implies the function and subsumes no shorter product term that also
implies the function. However, when dealing with a set of functions in which prod-
uct terms can be shared, it is necessary to extend this definition. A multiple-output
prime implicant for a set of Boolean functions f1, f2, . . . , fm, involving the same set
of variables, is a product term that is either (1) a prime implicant of one of the indi-
vidual functions f_i, for i = 1, 2, . . . , m, or (2) a prime implicant of one of the prod-
uct functions

    f_i·f_j,  f_i·f_j·f_k,  . . . ,  f1·f2···fm    where 1 ≤ i < j < k < ··· ≤ m

Thus, a multiple-output prime implicant is a prime implicant of any of the m single
functions f1, f2, . . . , fm, as well as a prime implicant of any of the possible product func-
tions, i.e., a prime implicant of any two of the functions f_i · f_j, any three of the functions
f_i · f_j · f_k, up to and including a prime implicant of all m functions f1 · f2 · · · fm. For the
special case of a single function, only the first part of the definition applies.
The significance of multiple-output prime implicants is given by the following
generalization to Theorem 4.1.

Theorem 4.5
Let the cost, assigned by some criterion, for a multiple-output minimal
sum be such that decreasing the number of literals in the set of disjunctive
normal formulas does not increase its cost. Then there is at least one set of
formulas for the multiple-output minimal sum that consists only of sums of
multiple-output prime implicants such that all the terms in the expression
for f_i are prime implicants of f_i or of a product function involving f_i.

Although the above discussion involved multiple-output prime implicants and
multiple-output minimal sums, in an analogous manner multiple-output prime im-
plicates are defined for multiple-output minimal products. In the next section, the
Quine-McCluskey method is generalized to accommodate the multiple-output sim-
plification problem.

4.13 OBTAINING MULTIPLE-OUTPUT MINIMAL SUMS AND PRODUCTS
To obtain a multiple-output minimal sum with Karnaugh maps is, in general, a diffi-
cult process since it is necessary to work with several maps simultaneously. For ex-
ample, for a set of three functions f1, f2, and f3, it is necessary to consider the prime
implicants on seven different maps, i.e., the maps for f1, f2, f3, f1·f2, f1·f3, f2·f3,
and f1·f2·f3, and determine an optimal collection of these prime implicants for the
minimal sum. As a variation, one might first obtain all the multiple-output prime

implicants from the seven maps and then use the multiple-output prime-implicant
table, which is described later in this section, to perform the selection process. The
approach that is taken here is to extend the more formal Quine-McCluskey proce-
dure since it is intended for computer-aided logic design.
When considering a set of Boolean functions, it is possible that one or more
functions in the set are incomplete. For the purpose of determining the multiple-
output prime implicants, all don’t-cares are assigned a functional value of 1, as was
done with single functions earlier in this chapter. In this way, the incomplete functions
are transformed into complete functions. The prime implicants of these newly ob-
tained complete functions are also the prime implicants of the incomplete functions.

4.13.1 Tagged Product Terms


In order to describe product terms that are shared in the expressions for more than one
function, the concept of tagged product terms is introduced. These terms consist of
two parts, a kernel and a tag. The kernel is a product term involving the variables of
the function and corresponds to an implicant. The tag is an appended entity used for
"bookkeeping" purposes and denotes which functions are implied by its kernel. For
example, given a truth table involving a set of functions, a tagged product term is con-
structed for each row in which at least one of the functional values in the row is not 0.
In this case, the kernel is the minterm corresponding to the row of the table, and the
tag denotes which complete functions include this minterm. However, for conve-
nience in applying the comparison process of the Quine-McCluskey method, it is
common to use the binary representation of the kernel or its decimal equivalent, as
was done for single functions, rather than the algebraic form. Furthermore, to facilitate
readability, the tags in this discussion consist of the same number of symbols as there
are functions in the set. Thus, for the jth row of the truth table, the ith symbol of the
tag is f_i if the minterm for the jth row implies the function f_i (that is, if the functional
value in the f_i column is 1 or dash); otherwise, the ith symbol of the tag is a dash.
Table 4.14 gives the truth table for a two-output combinational network. The
minterm associated with the fourth row is x'yz, its binary representation is 011, and
its decimal equivalent is 3. The tag is written as -f2 to indicate that this minterm im-
Table 4.14 A truth table for a multiple-output combinational network

  x  y  z  |  f1  f2
  0  0  0  |  1   0
  0  0  1  |  1   1
  0  1  0  |  1   1
  0  1  1  |  0   1
  1  0  0  |  0   0
  1  0  1  |  1   -
  1  1  0  |  0   0
  1  1  1  |  1   1

Table 4.15 The tagged product terms for Table 4.14. (a) Kernels in algebraic
form. (b) Kernels in binary form. (c) Kernels in decimal form

  x'y'z' f1-       000 f1-       0 f1-
  x'y'z  f1f2      001 f1f2      1 f1f2
  x'yz'  f1f2      010 f1f2      2 f1f2
  x'yz   -f2       011 -f2       3 -f2
  xy'z   f1f2      101 f1f2      5 f1f2
  xyz    f1f2      111 f1f2      7 f1f2

  (a)              (b)           (c)

plies the f2 function but not the f1 function. The entire tagged product term is there-
fore written as x'yz -f2, 011 -f2, or 3 -f2. In a similar manner, the sixth row of Table
4.14 is written as xy'z f1f2, 101 f1f2, or 5 f1f2. Even though the functional value of f2
for the 3-tuple (x,y,z) = (1,0,1) is unspecified, a 1 is assigned to the functional value for
the purpose of determining the multiple-output prime implicants. The complete list
of tagged product terms for Table 4.14 is shown in Table 4.15 in all three forms. It
should be noted that no tagged product terms are written for those rows in which
both functional values are 0 since the kernels for these rows imply no functions.

4.13.2 Generating the Multiple-Output Prime Implicants
Through the use of tagged product terms, the Quine-McCluskey method is easily
modified for the determination of the tagged multiple-output prime implicants. For-
mally, a tagged multiple-output prime implicant is a tagged product term in which
the kernel implies each of the functions indicated by the tag, with the provision that
there is no other tagged product term simultaneously having a shorter kernel that is
subsumed and a tag with at least the same function symbols. To obtain the multiple-
output prime implicants, the comparison process of the Quine-McCluskey method
is applied to the kernels of the tagged product terms and a simple and-type opera-
tion is applied to the tags. To do this, the tagged product terms are first written from
the truth table for a given set of Boolean functions. From these terms, additional
tagged product terms are generated. Two tagged product terms generate a new
tagged product term by the following rules: (1) The kernel of the generated term is
the result of the relationship AB + A'B = B if it can be successfully applied to the
kernels of the two generating terms as prescribed by the Quine-McCluskey method,
and (2) the tag of the generated term has an f_i if and only if f_i appears in both the
tags of the generating terms; otherwise, the ith position of the tag is a dash. During
this process, a generating term is checked, as in the Quine-McCluskey method, if
and only if the kernel of the generating term subsumes the kernel of the generated
term and the tag of the generating term is the same as the tag of the generated term.
If the tag of a generated term has no f_i symbols, then the term is not added to the list

of newly generated terms. The comparison process is then repeated on all the gener-
ated tagged product terms until all comparisons have been made and no new tagged
product terms are generated, at which time the unchecked terms are the tagged
multiple-output prime implicants.
It should be noted in the above procedure that there are only two modifications to
the Quine-McCluskey method when dealing with tagged product terms. First, there is
a tag which must be considered. When combining terms, the tag of the new term con-
sists of only those f_i's that are common to the two terms being combined. Second,
when two terms are combined, the two generating terms are not necessarily checked.
A generating term is checked only if its tag is identical to the tag of the generated term.
To illustrate the above procedure, consider the binary representations of the
tagged product terms shown in Table 4.15b. These terms are grouped according to
the index of the kernels. This results in the first column of Table 4.16a. Terms
whose index differs by one are now compared. For example, consider the terms
000 f1- and 001 f1f2. By the Quine-McCluskey method, the kernels are combined
since they differ in exactly 1 bit position. Furthermore, since f1 appears in both tags,
these two terms generate the term 00- f1- shown in the second column of Table
4.16a. At this time, however, only the term 000 f1- is checked since its kernel sub-
Table 4.16 Obtaining the tagged multiple-output prime implicants for the functions of
Table 4.14. (a) Using 0-1-dash notation. (b) Using decimal notation

  000 f1-   ✓       00- f1-   B        --1 -f2   A
  ------            0-0 f1-   C
  001 f1f2  ✓       ------
  010 f1f2  G       0-1 -f2   ✓
  ------            -01 f1f2  D
  011 -f2   ✓       01- -f2   E
  101 f1f2  ✓       ------
  ------            -11 -f2   ✓
  111 f1f2  ✓       1-1 f1f2  F

  (a)

  0 f1-    ✓        0,1 (1) f1-    B       1,3,5,7 (2,4) -f2   A
  ------            0,2 (2) f1-    C
  1 f1f2   ✓        ------
  2 f1f2   G        1,3 (2) -f2    ✓
  ------            1,5 (4) f1f2   D
  3 -f2    ✓        2,3 (1) -f2    E
  5 f1f2   ✓        ------
  ------            3,7 (4) -f2    ✓
  7 f1f2   ✓        5,7 (2) f1f2   F

  (b)
(b)

sumes the kernel of the generated term and its tag is the same as the tag of the gen-
erated term. The term 0OIf,,f, is checked at a later time when it and the term 101f,f,
are used to generate the term —O1If,f;. After all comparisons of terms whose index
differs by one in the first column of Table 4.16a are completed, the comparison
process is applied to the second column to construct the third column. It should be
noticed in the second column that the two terms 0—-Of,— and 0-1-f, do not generate a
new term, even though the kernels can be combined by the Quine-McCluskey
method, since there is no f; symbol common to both of these terms. The seven
tagged product terms in Table 4.16a with a letter after them are the tagged multiple-
output prime implicants, and their kernels are simply the multiple-output prime im-
plicants. A tagged multiple-output prime implicant having more than one function
symbol in its tag corresponds to a multiple-output prime implicant for the product
function indicated by the tag.
The above procedure can also be carried out using the decimal representation
of the tagged product terms. In this case, the decimal Quine-McCluskey method, as
explained in Sec. 4.11, is applied to the kernels, while the tags are determined as
with the 0-1-dash notation. The procedure is illustrated in Table 4.16b.
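The tagged comparison process extends the decimal sketch of Sec. 4.11 with the two tag rules above. The following Python sketch (the representation and function name are mine) reproduces the seven tagged multiple-output prime implicants of Table 4.16, with the f2 don't-care at minterm 5 already raised to 1:

```python
def tagged_prime_implicants(functions):
    """Tagged decimal Quine-McCluskey sketch.  functions maps each
    output name to its minterm set (don't-cares already set to 1).
    A term is (kernel minterms, eliminated weights, tag)."""
    current = set()
    for m in set().union(*functions.values()):
        tag = frozenset(f for f, ms in functions.items() if m in ms)
        current.add((frozenset([m]), frozenset(), tag))
    primes = set()
    while current:
        generated, checked = set(), set()
        for a in current:
            for b in current:
                (ka, ea, ta), (kb, eb, tb) = a, b
                if ea != eb:                       # different variables eliminated
                    continue
                d = min(kb) - min(ka)
                tag = ta & tb                      # and-type operation on tags
                if (tag and d > 0 and d & (d - 1) == 0
                        and all(m & d == 0 for m in ka)
                        and {m + d for m in ka} == kb):
                    generated.add((ka | kb, ea | frozenset([d]), tag))
                    # a generating term is checked only when its tag
                    # equals the tag of the generated term
                    if ta == tag:
                        checked.add(a)
                    if tb == tag:
                        checked.add(b)
        primes |= current - checked
        current = generated
    return primes

terms = tagged_prime_implicants({"f1": {0, 1, 2, 5, 7},
                                 "f2": {1, 2, 3, 5, 7}})
```

Note how minterm 2 survives uncombined with its full tag f1f2 (entry G of Table 4.16): both of its combinations shrink the tag, so neither checks it.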

4.13.3 Multiple-Output Prime-Implicant Tables


Having established the multiple-output prime implicants, it is next necessary to se-
lect a subset of them for the purpose of obtaining a multiple-output minimal sum.
This can be done by extending the concept of the prime-implicant table to handle a
set of Boolean functions.
The multiple-output prime-implicant table is a direct extension of the prime-
implicant table introduced in Sec. 4.9. However, since the multiple-output prime
implicants are associated with the set of Boolean functions f1, f2, . . . , fm, provi-
sion is made for each of the Boolean functions in the set. This is done by con-
structing a table whose abscissa is partitioned into m sections, one section for
each function f_i. Within each section, the abscissa is labeled with the minterms
associated with the function of that section; that is, those minterms of f_i which
have a functional value of 1. Don't-care terms are not included. Along the ordi-
nate of the table, the multiple-output prime implicants are listed. For conve-
nience, the ordinate can also be partitioned in accordance with the tags associated
with the multiple-output prime implicants. An X is placed at the intersection of a
row and column in the multiple-output prime-implicant table if the minterm of
the column for the function f_i subsumes the multiple-output prime implicant of
the row and the multiple-output prime implicant implies the function f_i. Thus,
those multiple-output prime implicants whose tags consist of a single function
symbol provide X's only for that section of the multiple-output prime-implicant
table associated with the function. On the other hand, the multiple-output prime
implicants having more than a single function symbol in their tags provide X's
for each of the sections of their associated functions.
For the multiple-output prime implicants obtained in Table 4.16, the multiple-
output prime-implicant table shown in Table 4.17 is constructed. This table has two

Table 4.17 Prime-implicant table for the functions of Table 4.14

sections, one for f1 and the other for f2. The minterms associated with each function
are listed along the abscissa. As in Sec. 4.9, any minterms associated with don't-
care conditions are not listed. Also in Table 4.17, the ordinate is partitioned accord-
ing to whether the multiple-output prime implicant implies f1, f2, or f1 · f2. Those
multiple-output prime implicants associated only with f1 or f2 have X's just in their
respective sections of the table; the multiple-output prime implicants associated
with f1 · f2 have X's in both sections.

4.13.4 Minimal Sums Using Petrick's Method


Once the multiple-output prime-implicant table is constructed, an extension of Pet-
rick’s method can be applied to determine the irredundant covers from which the
multiple-output minimal sums are obtained. To do this, again the p-expression is
written as a product-of-sum terms in which each sum term corresponds to a single
column of the multiple-output prime implicant table. However, since the various
prime implicants are associated with different functions, this information must also
be included. This is achieved by subscripting the variables that denote the multiple-
output prime implicants to indicate which function utilizes them. That is, each variable
in a sum term written for a column in the fᵢ section of the multiple-output prime-
implicant table has the subscript i. For example, referring to the first column of
Table 4.17, prime implicant B or C must be used to cover minterm m₀ of function f₁.
Algebraically this is written as (B₁ + C₁).
Once the p-expression is written, it is then manipulated into its sum-of-products
form using the distributive law and the subsuming terms are dropped in the result-
ing expression. However, it should be realized that a variable with different sub-
scripts must be regarded as corresponding to different prime implicants. For exam-
ple, if a term in the sum-of-products form of the p-expression has both an Rᵢ and
an Rⱼ, then they are considered to be distinct, since in one case the multiple-output
prime implicant R is related to function fᵢ and in the other case it is related to func-
tion fⱼ. Finally, by evaluating each product term in the sum-of-products form of
the p-expression according to some cost criterion, a multiple-output minimal sum is
obtained.
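The subscripted-variable bookkeeping lends itself to a direct sketch. In the following illustration (my own; each subscripted variable is represented as a (letter, function) pair, and the column data are those of the running example), the p-expression is multiplied out by the distributive law and subsuming terms are dropped:

```python
# Columns of the multiple-output prime-implicant table, each written as the
# list of (prime implicant, function) choices that can cover it.
columns = [
    [("B", 1), ("C", 1)], [("B", 1), ("D", 1)], [("C", 1), ("G", 1)],
    [("D", 1), ("F", 1)], [("F", 1)],                       # f1 section
    [("A", 2), ("D", 2)], [("E", 2), ("G", 2)],
    [("A", 2), ("E", 2)], [("A", 2), ("F", 2)],             # f2 section
]

# Multiply out the product of sums, keeping each partial product as a set.
covers = [frozenset()]
for column in columns:
    covers = list({cover | {choice} for cover in covers for choice in column})

# Drop subsuming terms: keep only covers minimal under set inclusion.
irredundant = [c for c in covers if not any(o < c for o in covers)]
```

Each member of `irredundant` corresponds to one product term of the resulting p-expression; for instance, `frozenset({("A", 2), ("B", 1), ("F", 1), ("G", 1), ("G", 2)})` stands for the term A₂B₁F₁G₁G₂.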
CHAPTER 4 Simplification of Boolean Expressions 197

As an illustration of this procedure, again consider Table 4.17. The p-expression


for this table is written as

p(A, B, …, G) = (B₁ + C₁)(B₁ + D₁)(C₁ + G₁)(D₁ + F₁)(F₁)
                (A₂ + D₂)(E₂ + G₂)(A₂ + E₂)(A₂ + F₂)

Upon manipulating it into its sum-of-products form using the distributive law and
dropping subsuming terms, the p-expression becomes

p(A, B, …, G) = A₂B₁C₁E₂F₁ + A₂B₁C₁F₁G₂ + B₁C₁D₂E₂F₁F₂
                + A₂B₁E₂F₁G₁ + A₂B₁F₁G₁G₂ + B₁D₂E₂F₁F₂G₁
                + A₂C₁D₁E₂F₁ + A₂C₁D₁F₁G₂ + C₁D₁D₂E₂F₁F₂          (4.3)
Each product term in Eq. (4.3) corresponds to an irredundant cover of the two func-
tions. Depending upon the cost criterion being used, each of these covers is evalu-
ated for minimality. Two cost criteria were introduced in the previous section. In
the first criterion, the cost of the realization is associated with the number of distinct
terms in the set of disjunctive normal formulas. With respect to Eq. (4.3), this im-
plies the multiple-output minimal sums correspond to the product terms in the
p-expression having the fewest number of distinct letters, i.e., without regard to the
subscripts. The reason for this is that no additional cost is associated with using a
prime implicant (which is denoted by the letter) more than once. For Eq. (4.3), two
terms satisfy this criterion, namely, A₂B₁F₁G₁G₂ and C₁D₁D₂E₂F₁F₂. In each case,
the minimal sum consists of four distinct terms. Using the multiple-output prime
implicants with the subscript 1 in the expression for f₁ and those with the subscript 2
in the expression for f₂, the term A₂B₁F₁G₁G₂ in the p-expression suggests the ex-
pressions for the multiple-output minimal sum

f₁(x,y,z) = x̄ȳ + x̄yz̄ + xz

f₂(x,y,z) = z + x̄yz̄

while the term C₁D₁D₂E₂F₁F₂ suggests the expressions

f₁(x,y,z) = x̄z̄ + ȳz + xz
f₂(x,y,z) = ȳz + x̄y + xz
If a multiple-output minimal sum is to be based on the fewest number of gate
input terminals as suggested by the second cost criterion introduced in the previ-
ous section, then it is necessary to evaluate each of the product terms in Eq. (4.3)
using this criterion. The number of gate input terminals is calculated as follows:
Let f₁, f₂, …, fₘ be the set of normal Boolean expressions describing a multiple-
output combinational network and let t₁, t₂, …, tₚ be the set of all distinct terms
appearing in the m output expressions. Now let αᵢ equal the number of terms in fᵢ
unless there is only a single term, in which case let αᵢ equal 0. Also, let βⱼ equal
the number of literals in the term tⱼ unless the term consists of a single literal, in
which case let βⱼ equal 0. The number of gate input terminals in the realization of
the multiple-output combinational network is given by the numerical quantity
Σᵢ₌₁ᵐ αᵢ + Σⱼ₌₁ᵖ βⱼ. Under the assumption of the second cost criterion, only one

multiple-output minimal sum is indicated by Eq. (4.3). This corresponds to the
fifth term, A₂B₁F₁G₁G₂, which has an associated cost of 12. The set of expres-
sions for this multiple-output minimal sum is

f₁(x,y,z) = x̄ȳ + x̄yz̄ + xz
f₂(x,y,z) = z + x̄yz̄
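The counting rule just stated can be sketched as a small helper (my own illustration; terms are written as tuples of literals, with a trailing ' marking a complemented variable):

```python
def gate_input_count(outputs):
    """Gate inputs of a two-level realization: the sum of the alpha_i
    (OR-gate inputs, 0 for a single-term output) plus the beta_j
    (AND-gate inputs, 0 for a single-literal term), counting each
    distinct term only once across all outputs."""
    distinct_terms = {term for f in outputs for term in f}
    alpha = sum(len(f) for f in outputs if len(f) > 1)
    beta = sum(len(term) for term in distinct_terms if len(term) > 1)
    return alpha + beta

# The pair of expressions suggested by the term A2B1F1G1G2:
f1 = [("x'", "y'"), ("x'", "y", "z'"), ("x", "z")]
f2 = [("z",), ("x'", "y", "z'")]
cost = gate_input_count([f1, f2])   # 12
```

Note how the shared term x̄yz̄ contributes its AND-gate inputs only once, while each use of it still adds an OR-gate input through the αᵢ count.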

4.13.5 Minimal Sums Using Table Reduction Techniques
The prime-implicant table reduction techniques given in Sec. 4.10 can also be ap-
plied to multiple-output prime-implicant tables. To begin with, in the fᵢ section of
the multiple-output prime-implicant table there may be a column with a single X. In
such a case, the row in which the X appears corresponds to an essential multiple-
output prime implicant for the function fᵢ, since only this prime implicant covers the
particular minterm associated with the column having the single X. It should be
noted that if the multiple-output prime implicant implies more than one function,
then it is not necessarily essential for all the functions that it implies, but rather for
only those functions in which the prime implicant is the only cover for a minterm of
the function. Certainly, an essential multiple-output prime implicant for the function fᵢ
must appear in the expression describing the function. When a multiple-output
prime implicant is essential for a function fᵢ, all columns in the fᵢ section of the
multiple-output prime-implicant table are deleted which have X's in the row corre-
sponding to the prime implicant. The row itself is deleted only if the prime impli-
cant is essential for each function it implies.
As for single-function prime-implicant tables, a dominated or equal row is
deleted when its associated cost is not less than the cost associated with a dominat-
ing or equal row. A column associated with a function fᵢ is removed if it dominates
or equals another column associated with the same function fᵢ. By deleting columns
and rows and selecting multiple-output prime implicants that must cover minterms
of a function, the size of a multiple-output prime-implicant table is reduced until
either it becomes cyclic, that is, cannot be reduced any further, or a multiple-output
minimal sum is determined.
To illustrate the prime-implicant table reduction techniques, let us now obtain a
multiple-output minimal sum for Table 4.17 under the input terminals count crite-
rion. The table is redrawn in Table 4.18a, where a column is added to indicate the
cost associated with the multiple-output prime implicants. For example, prime
implicant x̄ȳ consists of two literals and hence requires an and-gate with two input
terminals in any realization. If this term is used in a realization for f₁, then the output
of the and-gate must go to a terminal of the output or-gate for f₁. Thus, a total of
three input terminals is associated with the realization of the term x̄ȳ, and hence a
cost of 3 is assigned to this prime implicant. The cost associated with prime impli-
cant z is simply 1. If it appears in a realization for f₂, then no and-gate is necessary
to generate the term and only the single terminal on the output or-gate for f₂ must be

Table 4.18 Obtaining a multiple-output minimal sum by prime-implicant table
reduction using the minimal input terminals count criterion

                     f₁                   f₂
               m₀  m₁  m₂  m₅  m₇    m₁  m₂  m₃  m₇    Cost
   B: x̄ȳ       X   X                                    3
   C: x̄z̄       X       X                                3
   A: z                              X       X   X      1
   E: x̄y                                 X   X          3
   D: ȳz           X       X         X                  3,4
   F: xz                   X   X                 X      3,4
   G: x̄yz̄              X                 X              4,5

(a)

                     f₁           f₂
               m₀  m₁  m₂    m₁  m₂  m₃  m₇    Cost
   B: x̄ȳ       X   X                            3
   C: x̄z̄       X       X                        3
   A: z                      X       X   X      1
   E: x̄y                         X   X          3
   D: ȳz           X         X                  3,4
*1 F: xz                                 X      1
   G: x̄yz̄              X         X              4,5

f₁(x,y,z) = xz + ⋯
f₂(x,y,z) = ⋯

(b)

                     f₁           f₂
               m₀  m₁  m₂    m₁  m₂  m₃  m₇    Cost
   B: x̄ȳ       X   X                            3
   C: x̄z̄       X       X                        3
*2 A: z                      X       X   X      1
   E: x̄y                         X   X          3
   D: ȳz           X         X                  3,4
   G: x̄yz̄              X         X              4,5

f₁(x,y,z) = xz + ⋯
f₂(x,y,z) = z + ⋯

(c)

                     f₁       f₂
               m₀  m₁  m₂    m₂    Cost
   B: x̄ȳ       X   X                3
   C: x̄z̄       X       X            3
   E: x̄y                     X      3
   D: ȳz           X                3
   G: x̄yz̄              X     X      4,5

(d)

                     f₁       f₂
               m₀  m₁  m₂    m₂    Cost
*3 B: x̄ȳ       X   X                3
   C: x̄z̄       X       X            3
   E: x̄y                     X      3
   G: x̄yz̄              X     X      4,5

f₁(x,y,z) = xz + x̄ȳ + ⋯
f₂(x,y,z) = z + ⋯

(e)

               f₁    f₂
               m₂    m₂    Cost
   C: x̄z̄       X            3
   E: x̄y             X      3
   G: x̄yz̄      X     X      4,5

f₁(x,y,z) = xz + x̄ȳ + ⋯
f₂(x,y,z) = z + ⋯

(f)

counted. Two costs are associated with prime implicant ȳz. The first cost corre-
sponds to the situation in which the term is used for f₁ or f₂ but not both; the second
cost pertains to the situation in which it is used in the realization for both functions.
It is now noted in Table 4.18a that the fifth column has a single X. Thus, the
multiple-output prime implicant xz is essential for the function f₁. The fourth and
fifth columns can then be removed since the selected prime implicant xz covers
minterms m₅ and m₇ of f₁. However, the m₇ column in the f₂ section of the table can-
not be removed since xz is not essential for f₂. At this point, the reduced version of
Table 4.18a appears as shown in Table 4.18b. The symbol *1 next to row F indi-
cates that prime implicant xz is used in the expression for f₁. The partial multiple-
output minimal sum thus far established is given beneath the table. Furthermore, the

cost for prime implicant xz is changed to 1 to indicate that if the term is also to be
used for f₂, then only the additional single input terminal of the output or-gate for f₂
is needed.
Table 4.18b is now searched for dominated rows and dominating columns. It is
seen that row A dominates row F. Furthermore, since the cost associated with row A
is not greater than the cost associated with row F, it follows that row F can be
deleted. Once this is done, Table 4.18c results. It is now observed that the multiple-
output prime implicant z must appear in the expression for f₂ since this is the only
term that covers m₇ of f₂. Again the partial expressions for the multiple-output mini-
mal sum are given beneath the table. Upon selecting prime implicant z, those
columns in which X's appear in row A are deleted. This results in Table 4.18d.
The cost associated with each multiple-output prime implicant is recalculated
and included in Table 4.18d. The only change from Table 4.18c is that prime impli-
cant ȳz now can be used only for f₁; hence, its cost is simply 3. At this time row D can
be deleted since it is dominated by row B and since the cost associated with row B is
not greater than the cost associated with row D. However, row E cannot be deleted
even though it is dominated by row G, since the cost associated with row G is greater
than the cost associated with row E. The reduced table appears as Table 4.18e.
It is now seen that the multiple-output prime implicant x̄ȳ is needed in the ex-
pression for f₁. This results in the partial expressions given beneath the table. After
the columns with X's in row B are deleted, Table 4.18f is obtained. In this table it is
not permissible to delete the dominated rows C and E since their associated costs
are lower than the cost associated with their dominating row G. Thus, this table is
cyclic since no further reductions are possible. Formally, Petrick's method can be
applied at this point to determine the remaining covers. However, by observation it
is seen that cost is minimized by letting the multiple-output prime implicant x̄yz̄ ap-
pear in the expressions for both f₁ and f₂, with a cost of 5, rather than having x̄z̄ in
the expression for f₁ and x̄y in the expression for f₂, with a total cost of 6. This re-
sults in the multiple-output minimal sum

f₁(x,y,z) = xz + x̄ȳ + x̄yz̄
f₂(x,y,z) = z + x̄yz̄

which was obtained previously by Petrick's method.


Although table reduction was illustrated using the input terminals count crite-
rion, the distinct terms criterion could have been used instead, the difference being
how the cost column is calculated. In this case, the cost of each prime implicant is
initially 1. If a prime implicant is used for additional functions after being selected
the first time, then its future cost is 0.
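The row-dominance rule used above can be sketched as follows (my own formulation: a row is deleted when its set of X's is contained in another surviving row's and its cost is not lower):

```python
def delete_dominated_rows(rows, cost):
    """rows maps a prime-implicant label to the set of columns it covers
    (columns tagged by section, e.g. 'm7@f2'); cost maps labels to costs."""
    doomed = set()
    for r, covered in rows.items():
        for s, other in rows.items():
            if (s != r and s not in doomed
                    and covered <= other and cost[s] <= cost[r]):
                doomed.add(r)
                break
    return {r: c for r, c in rows.items() if r not in doomed}

# Table 4.18b: row A dominates row F at equal cost, so F is deleted.
rows = {"A": {"m1@f2", "m3@f2", "m7@f2"}, "F": {"m7@f2"},
        "B": {"m0@f1", "m1@f1"}}
remaining = delete_dominated_rows(rows, cost={"A": 1, "F": 1, "B": 3})
```

The `s not in doomed` guard prevents a pair of equal rows with equal cost from deleting each other; exactly one of them survives.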

4.13.6 Multiple-Output Minimal Products


The discussion thus far in this section was concerned with determining multiple-
output minimal sums for a set of Boolean functions. However, it is possible that a
lower-cost two-level multiple-output realization exists based on a multiple-output
minimal product.

Multiple-output minimal products can be obtained by determining the multiple-


output minimal sums for the complementary functions and applying DeMorgan’s
law. In this approach, tagged product terms are again written from the truth table.
The kernel of a tagged product term is the minterm corresponding to a row, and its
tag indicates the functions not implied by the minterm; that is, the tag contains an f̄ᵢ
if the functional value for fᵢ is 0 in the row. In this case, don't-care conditions are
considered to have 0 functional values. Once the initial list of tagged product terms
is formed, the multiple-output prime implicants are obtained by the procedure given
earlier in this section. A multiple-output prime-implicant table is then constructed,
and Petrick’s method or prime-implicant table reduction techniques are applied to
the multiple-output prime-implicant table. This results in a multiple-output minimal
sum for the complementary functions. If DeMorgan’s law is applied to each expres-
sion obtained, then the multiple-output minimal product results.
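The final DeMorgan step is mechanical: each product term of the complementary minimal sum becomes a sum term of the minimal product, with every literal complemented. A minimal sketch (my own notation, with a trailing ' for a complemented literal):

```python
def complement_literal(lit):
    # x -> x',  x' -> x
    return lit[:-1] if lit.endswith("'") else lit + "'"

def sop_of_complement_to_pos(sop):
    """Apply DeMorgan's law to f' in sum-of-products form, yielding the
    sum terms of f in product-of-sums form."""
    return [tuple(complement_literal(l) for l in term) for term in sop]

# If f' = x'y + z', then f = (x + y')(z).
pos = sop_of_complement_to_pos([("x'", "y"), ("z'",)])
```

Each tuple in the result is read as a sum term, so the product of those sums is the minimal product.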
As an alternate approach, tagged sum terms are written from the truth table.
The kernel denotes the maxterm corresponding to a row in which at least one func-
tion has the value of 0, and the tag contains an fᵢ for each function in that row hav-
ing a 0 functional value. Again, don’t-care conditions are assumed to have 0 func-
tional values. Once the initial list of tagged sum terms is formed, the comparison
process previously explained is carried out to generate the multiple-output prime
implicates. Upon constructing a multiple-output prime-implicate table, a multiple-
output minimal product is obtained by using either Petrick’s method or table reduc-
tion techniques.
In closing, it should be emphasized that the Quine-McCluskey method for sin-
gle functions and sets of functions along with Petrick’s method and table reduction
techniques are not intended for hand computation. Rather, these are algorithmic
procedures that are applicable to computer-aided design. Their inclusion in this
chapter has been to present concepts that are computer programmable and provide a
solution to the simplification problem.

4.14 VARIABLE-ENTERED KARNAUGH MAPS


In the study of Karnaugh maps, the entries within the cells were limited to 0’s, 1’s,
and don’t-cares. An interesting and useful extension of the map concept allows
functions of one or more Boolean variables as map entries. Such Karnaugh maps
are referred to as variable-entered maps. The variables associated with the entries in
these maps are called the map-entered variables.
The significance of variable-entered maps is that they provide for map com-
pression. Let the order of a Karnaugh map be defined as the total number of vari-
ables associated with its column and row labels. Up to this point, an n-variable
function was associated with a Karnaugh map of order n. However, by letting func-
tions of one variable appear as entries within the map, a map of order n − 1 is used
to represent an n-variable Boolean function. In general, by permitting entries corre-
sponding to m-variable functions, a map representation of an n-variable function is
possible with a Karnaugh map of order n − m, where m < n.

A useful application for variable-entered maps arises in problems that have in-
frequently appearing variables. In such a situation, it is convenient to have the func-
tions of the infrequently appearing variables be the entries within a map, allowing a
high-order Boolean function to be represented by a low-order map.

4.14.1 Constructing Variable-Entered Maps


To understand the construction of variable-entered Karnaugh maps, consider the
generic truth table shown in Fig. 4.34a, where the functional value for row i is de-
noted by fᵢ.* From the truth table, the Karnaugh map given in Fig. 4.34b is con-
structed. The entries within the cells are the fᵢ's which, in turn, correspond to the
0’s, 1’s, and don’t-cares that normally appear in the last column of the truth table.
Alternately, a generic minterm canonical formula for the truth table of Fig. 4.34a
is written as

f(x,y,z) = f₀·x̄ȳz̄ + f₁·x̄ȳz + f₂·x̄yz̄ + f₃·x̄yz
           + f₄·xȳz̄ + f₅·xȳz + f₆·xyz̄ + f₇·xyz                    (4.4)

If this expression is manipulated according to the rules of Boolean algebra, then a


possible factored form of Eq. (4.4) is

f(x,y,z) = x̄ȳ(f₀·z̄ + f₁·z) + x̄y(f₂·z̄ + f₃·z)
           + xȳ(f₄·z̄ + f₅·z) + xy(f₆·z̄ + f₇·z)
Since this equation consists of the four combinations of the x and y variables in their
complemented and uncomplemented form, a map is constructed from the equation
by having the x and y variables appear as the row and column labels and the expres-
sions within parentheses as cell entries. This is illustrated in Fig. 4.34c. It is seen,
therefore, that a Karnaugh map of order 2 is now being used to represent a three-
variable function. Hence, map compression is achieved. In this case, x and y are
called the map variables. The cell entries are functions of the single variable z.
Thus, z is referred to as the map-entered variable.
The above factored form of Eq. (4.4) is only one of three possibilities. Another
factored form of Eq. (4.4) is

f(x,y,z) = x̄z̄(f₀·ȳ + f₂·y) + x̄z(f₁·ȳ + f₃·y) + xz̄(f₄·ȳ + f₆·y) + xz(f₅·ȳ + f₇·y)

This expression suggests the compressed Karnaugh map shown in Fig. 4.34d. In
this case, x and z are the map variables and y is the map-entered variable. The third
possible factored form of Eq. (4.4) is

f(x,y,z) = ȳz̄(f₀·x̄ + f₄·x) + ȳz(f₁·x̄ + f₅·x) + yz̄(f₂·x̄ + f₆·x) + yz(f₃·x̄ + f₇·x)


and the corresponding compressed Karnaugh map is given in Fig. 4.34e. Here, y
and z are the map variables and x is the map-entered variable.

*Note the different use of fᵢ from the previous section.



[Figure 4.34 appears here. In panel (f), the order-1 map has the cell entries
f₀·ȳz̄ + f₁·ȳz + f₂·yz̄ + f₃·yz (for x = 0) and f₄·ȳz̄ + f₅·ȳz + f₆·yz̄ + f₇·yz (for x = 1).]

Figure 4.34 Map compressions of a three-variable function. (a) A generic three-variable truth
table. (b) Conventional three-variable Karnaugh map. (c) Compressed Karnaugh
map of order 2 with x and y as the map variables and z as the map-entered
variable. (d) Compressed Karnaugh map of order 2 with x and z as the map
variables and y as the map-entered variable. (e) Compressed Karnaugh map of
order 2 with y and z as the map variables and x as the map-entered variable.
(f) Compressed Karnaugh map of order 1 with x as the map variable and y and z
as the map-entered variables.
CHAPTER 4 Simplification of Boolean Expressions 205

In the three cases illustrated above, a three-variable Boolean function is repre-


sented by a Karnaugh map of order 2. Conceptually, further compression is possible
by having the cell entries of the variable-entered map be functions of two variables.
For example, if Eq. (4.4) is factored as

f(x,y,z) = x̄(f₀·ȳz̄ + f₁·ȳz + f₂·yz̄ + f₃·yz) + x(f₄·ȳz̄ + f₅·ȳz + f₆·yz̄ + f₇·yz)
then the variable-entered map of Fig. 4.34f is constructed. Now, only x is the map
variable and y and z are the map-entered variables associated with the two-variable
functions that appear as cell entries. Thus, the three-variable function is represented
by a Karnaugh map of order 1. As is discussed shortly, the degree of difficulty in in-
terpreting compressed maps lies in the complexity of the entered functions.
In the above discussion, the concept of variable-entered maps was developed
from the factorization of a Boolean expression describing a truth table. An alter-
nate way of arriving at variable-entered maps is via a partitioning of the truth table
itself. For example, Table 4.19 shows a generic three-variable truth table. The
rows are paired such that they correspond to equal values of the x and y variables.
The two possible values of the z variable appear within each pair. This pairing of
the z variable corresponds to the single-variable functions of z given in the last col-
umn of Table 4.19. Since within each partition the x and y variables have fixed val-
ues, this partitioned truth table can now be used to form the variable-entered map
of Fig. 4.34c. That is, for each of the four combinations of values of the x and y
variables, the cell entries become the single-variable functions of z. In a similar
way, partitioning a truth table around the y and x variables, respectively, leads to
the corresponding maps of Fig. 4.34d and e.
At this time, let us turn our attention to the entries of a variable-entered map
when they correspond to single-variable functions. As seen in Fig. 4.34c—e, these
entries always have the form fᵢ·v̄ + fⱼ·v, where fᵢ and fⱼ correspond to the functional
values in the ith and jth rows of the truth table and v is the map-entered variable.
Assuming the Boolean function is completely specified, the values of fᵢ and fⱼ are
Table 4.19 A generic three-variable truth table partitioned around the z variable

   x  y  z   f
   0  0  0   f₀
   0  0  1   f₁     f₀·z̄ + f₁·z
   0  1  0   f₂
   0  1  1   f₃     f₂·z̄ + f₃·z
   1  0  0   f₄
   1  0  1   f₅     f₄·z̄ + f₅·z
   1  1  0   f₆
   1  1  1   f₇     f₆·z̄ + f₇·z

restricted to only 0's and 1's. Table 4.20 tabulates the four possible value assign-
ments to fᵢ and fⱼ, the evaluation of fᵢ·v̄ + fⱼ·v, and the corresponding entries for a
variable-entered map. Later in this section the effects of don't-cares are considered.

Table 4.20 Single-variable map entries for complete Boolean functions

   fᵢ  fⱼ    fᵢ·v̄ + fⱼ·v    Map entry
   0   0     0               0
   0   1     v               v
   1   0     v̄               v̄
   1   1     v̄ + v = 1       1
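The rule of Table 4.20 can be sketched as a small routine (my own illustration) that compresses a completely specified three-variable truth table, given as the list of values f₀, …, f₇, into the order-2 map of Fig. 4.34c:

```python
# Entry for a pair of rows (fi with z = 0, fj with z = 1), per Table 4.20.
ENTRY = {(0, 0): "0", (0, 1): "z", (1, 0): "z'", (1, 1): "1"}

def variable_entered_map(f):
    """Compress f(x,y,z), given as [f0, ..., f7], into a map keyed by (x, y)."""
    cells = {}
    for xy in range(4):
        fi, fj = f[2 * xy], f[2 * xy + 1]   # the z = 0 and z = 1 rows
        cells[(xy >> 1, xy & 1)] = ENTRY[(fi, fj)]
    return cells

# Example: f(x,y,z) with functional values f0..f7 = 0,1,0,0,1,0,1,1.
cells = variable_entered_map([0, 1, 0, 0, 1, 1, 1, 1])
```

Here the cell for x = 1, y = 0 receives the entry 1 because both of its rows have functional value 1, while the cell for x = 0, y = 0 receives z.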
Figure 4.35 illustrates a three-variable truth table and the corresponding variable-
entered map. From the truth table we can write

f(x,y,z) = x̄ȳ(⋯) + x̄y(⋯) + xȳ(⋯)

The expressions within parentheses correspond to the single-variable functions
that serve as the map entries. Furthermore, since the functional value is 0 for rows
(x,y,z) = (1,1,0) and (x,y,z) = (1,1,1), no term appears in the equation for these two
cases, i.e., the term xy does not appear. This is equivalent to the term xy(0). Thus,
the entry in the cell for x = 1 and y = 1 is 0. Equivalently, the map of Fig. 4.35b
could have been obtained by comparing Table 4.19 and Fig. 4.35a.

[Figure 4.35 appears here.]

Figure 4.35 A three-variable function. (a) Truth table. (b) Variable-entered map.

[Figure 4.36 appears here.]

Figure 4.36 An example of a variable-entered map with infrequently appearing variables.

As a second example, consider the Boolean expression


f(A,B,x,y,z) = A·x̄ȳz̄ + x̄ȳz + x̄yz̄ + B·xȳz̄ + xyz̄

In this expression, the variables A and B appear infrequently, while the variables x,
y, and z appear in each term. By using x, y, and z as map variables and A and B as
map-entered variables, the variable-entered map of Fig. 4.36 is easily constructed.
The entries are simply the coefficients of the terms x*y*z*, where the asterisks indicate
various combinations of the complemented and uncomplemented forms of the vari-
ables x, y, and z. In this case, a five-variable function is represented by a map of
order 3.

4.14.2 Reading Variable-Entered Maps


for Minimal Sums
Just as minimal sums and minimal products are obtained by proper interpretation of
regular Karnaugh maps, these types of minimal expressions can also be obtained
from variable-entered maps. Again rectangular groupings having dimensions 2ᵘ × 2ᵛ,
i.e., subcubes, are formed. However, when obtaining a minimal sum, it is necessary
to form subcubes involving the map-entered variables in addition to the 1’s. Simi-
larly, when obtaining a minimal product, the map-entered variables are grouped as
well as the 0’s.
Consider first the problem of getting a minimal sum of a completely specified
Boolean function from a variable-entered map. To understand how subcubes are
formed on these maps, three cases must be considered. The map of Fig. 4.37a
shows two adjacent cells having a z literal. These cells correspond to the two terms
x̄ȳz and xȳz. Furthermore, the sum of these two terms describes the Boolean func-
tion associated with this map, i.e., f(x,y,z) = x̄ȳz + xȳz. Factoring the ȳz from each
term gives x̄ȳz + xȳz = ȳz(x̄ + x) = ȳz. This same result is obtained by grouping
the two corresponding cells on the map. The resulting product term is formed by
noting which variables along the map's axes do not change value. In this case, the
product term contains the ȳ literal since y = 0 for the cells within the subcube. This

Figure 4.37 Variable-entered map grouping techniques. (a) Grouping cells with the same literal.
(b) Grouping a 1-cell with both the z literal and the z̄ literal. (c) Grouping a 1-cell with the z literal.

literal is then and-ed with the z literal occurring within the subcube, i.e., the map
entry. In general, normal Karnaugh map techniques are applied to form subcubes of
cells that contain the same literal. The described product term is obtained by and-
ing the map variables that do not change values with the literal used to form the
subcube.
As indicated in Table 4.20, four entries are possible in variable-entered maps
for complete Boolean functions, i.e., a literal, its complement, 0, and 1. Literals and
their complements are grouped separately as just described. Furthermore, 0's are
never grouped when forming minimal sums. Now consider how 1-cells are handled.
The first map in Fig. 4.37b shows a situation in which z, z̄, and 1 appear as cell en-
tries. The cells containing the literals z and z̄ correspond to the product terms x̄ȳz
and xyz̄. Now consider the 1-cell. From the laws of Boolean algebra, the constant 1
can be written as z + z̄. This equivalent form of 1 is shown as an entry in the second
map of Fig. 4.37b. In this case, the 1-cell is regarded as the expression xȳz + xȳz̄.
Combining these results, the maps of Fig. 4.37b correspond to the expression x̄ȳz +
xȳz + xȳz̄ + xyz̄. This expression can be factored as ȳz(x̄ + x) + xz̄(ȳ + y) =
ȳz + xz̄, which is a minimal sum. As indicated in Fig. 4.37b, the same result is
achieved by grouping the z-cell with the z portion of the (z + z̄)-cell to form the
term ȳz and the z̄-cell with the z̄ portion of the (z + z̄)-cell to form the term xz̄. With
both the z and z̄ portions being used, the 1-cell is said to be completely covered. In
general, 1-cells can be used when forming subcubes involving literals. Furthermore,
if a 1-cell appears in a subcube for a literal and in another subcube for the comple-
ment of the same literal, then no further consideration of the 1-cell is needed when
obtaining a minimal sum.
The final case that needs to be discussed is shown in the first map of Fig. 4.37c.
Here there is a 1-cell and a z-cell, but there are no z̄-cells. Again the 1-cell is re-
garded as a (z + z̄)-cell in the second map of Fig. 4.37c. This map corresponds to
the expression x̄ȳz + xȳz + xȳz̄. The first two terms in the expression can be com-
bined to form the term ȳz. This is analogous to forming the vertical subcube shown
in the second map of Fig. 4.37c. But what about the xȳz̄ term in the above expres-
sion? It is noted that the (z + z̄)-cell in the second map of Fig. 4.37c is not com-
pletely covered since the z̄ portion is not used. To complete the covering, the xȳz̄
portion of the (z + z̄)-cell is grouped with the xȳz portion of the same cell. This re-
sults in the term xȳ, which is equivalent to simply regarding the (z + z̄)-cell as a
1-cell and grouping it as on a regular Karnaugh map. The two subcubes yield the
expression ȳz + xȳ, which is the minimal sum of x̄ȳz + xȳz + xȳz̄.
It is important to note in the last two cases that 1-cells must also be considered
when forming subcubes on a variable-entered map. In particular, they must be
grouped with a map-entered literal and its complement or they must be used as
basic 1-cells. Although the above discussion was restricted to subcubes of two cells,
any rectangular grouping of dimensions 2ᵘ × 2ᵛ involving a common literal denotes
a single product term.
As a result of the above discussion, a two-step procedure for obtaining minimal
sums for completely specified Boolean functions from a variable-entered map with
a single map-entered variable can now be stated:

Step 1. Consider each map entry having the literals v and v̄. Form an optimal
collection of subcubes involving the literal v using the cells containing
1's as don't-care cells and the cells containing the literal v̄ as 0-cells.
Next form an optimal collection of subcubes involving the literal v̄
using the cells containing 1's as don't-care cells and the cells
containing the literal v as 0-cells. As in the case of regular Karnaugh
maps, by an optimal collection it is meant that the size of the subcubes
should be maximized and the number of subcubes should be
minimized.
Step 2. Having grouped the cells containing the literals v and v̄, an optimal
collection of subcubes involving the 1-cells not completely covered in
Step 1, i.e., 1-cells that were not used for both a subcube involving the
v literal and a subcube involving the v̄ literal in Step 1, is next
determined. One approach for doing this is to let all cells containing
the literals v and v̄ become 0-cells and all 1-cells that were completely
covered in Step 1 become don't-care cells. Another way of handling
the not completely covered 1-cells is to use v-cells or v̄-cells from Step
1 that ensure that the 1-cells now become completely covered.*
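Any collection of subcubes produced by the two steps can be checked by expansion. The sketch below (my own code) expands a variable-entered map and a candidate sum of products into minterm sets and compares them, using the map of Fig. 4.38a as reconstructed here together with the minimal sum read from it:

```python
def map_minterms(cells):
    """Expand a variable-entered map (entries '0', '1', 'z', or "z'")
    into the set of (w, x, y, z) minterms it represents."""
    ms = set()
    for (w, x, y), entry in cells.items():
        if entry in ("1", "z"):
            ms.add((w, x, y, 1))
        if entry in ("1", "z'"):
            ms.add((w, x, y, 0))
    return ms

def cover_minterms(terms):
    """Expand a sum of products over w, x, y, z (literals like "x'")."""
    ms = set()
    for m in range(16):
        point = {"w": m >> 3 & 1, "x": m >> 2 & 1, "y": m >> 1 & 1, "z": m & 1}
        if any(all(point[l[0]] == (not l.endswith("'")) for l in t)
               for t in terms):
            ms.add((point["w"], point["x"], point["y"], point["z"]))
    return ms

# Fig. 4.38a as reconstructed, keyed by (w, x, y), and its minimal sum.
cells = {(0, 0, 0): "z", (0, 0, 1): "1", (0, 1, 1): "1", (0, 1, 0): "0",
         (1, 0, 0): "0", (1, 0, 1): "z'", (1, 1, 1): "1", (1, 1, 0): "0"}
minimal_sum = [("w'", "x'", "z"), ("y", "z'"), ("x", "y")]
```

Equality of the two minterm sets confirms that the three subcubes cover the function exactly; a strict superset on the cover side would reveal an illegal grouping.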

Figure 4.38 illustrates how the above procedure is applied to obtain a minimal
sum from a variable-entered map. The compressed map is shown in Fig. 4.38a. The
first step is to consider each of the distinct literals z and z̄, in turn, and form optimal
subcube collections for each case. This step is shown in Fig. 4.38b. In the first map,
in which the z̄-cell is replaced by 0 since it cannot be used in z-subcubes, the z-cell
is grouped with the 1-cell to its right since all 1-cells are regarded as don't-cares in
this step. The resulting term is w̄x̄z. At this point all cells containing z entries ap-
pear in at least one subcube. Next, the cells containing z̄ entries are considered. This
is shown in the second map of Fig. 4.38b, where the z-cell is replaced by 0. Again
1-cells are used as don't-cares for the purpose of maximizing the size of the sub-
cubes. The indicated subcube corresponds to the term yz̄. This completes Step 1 of
*This second case is illustrated shortly when Fig. 4.40 is discussed.


[Figure 4.38 appears here. In panel (a), with w as the row variable and the columns
labeled xy = 00, 01, 11, 10, the map entries are

w = 0:   z   1   1   0
w = 1:   0   z̄   1   0 ]

Figure 4.38 Obtaining a minimal sum from a map having
single-variable map entries. (a) Variable-entered map.
(b) Step 1. (c) Step 2.

the map-reading procedure. Step 2 involves those 1-cells that were not completely
covered in Step 1. Note that the 1-cell at location (w,x,y) = (0,0,1) in Fig. 4.38b was
used for the grouping of both the z literal and the z̄ literal. Hence, this 1-cell is re-
garded as a don't-care cell in Step 2. All the remaining 1-cells must next be grouped
optimally since they were not completely covered. This is shown in Fig. 4.38c. Here
the 1-cells are placed into a single subcube, yielding the term xy. Collecting the
three terms corresponding to the three subcubes that were formed, the resulting
minimal sum is

f(w,x,y,z) = w̄x̄z + yz̄ + xy
For illustrative purposes, the above example was done using three separate maps.
However, the entire process can readily be carried out on the variable-entered map.
Some care must be taken in applying the above two-step process if a mini-
mal sum is to be obtained. To see this, consider the variable-entered map shown
in Fig. 4.39, where all the subcubes are shown on a single map. Starting with the
z entry, the only possible subcube is with the 1-cell to its right, since z̄-cells are

[Figure 4.39 appears here. With w as the row variable and the columns labeled
xy = 00, 01, 11, 10, the map entries are

w = 0:   z̄   0   z̄   1
w = 1:   z̄   z   1   0 ]

Figure 4.39 Illustrating optimal groupings on a variable-entered map.

regarded as 0-cells, to form the term wyz. Now consider the z̄ entries. Since the
z̄-cell in the lower left corner can only be grouped with the cell above it, the two
cells that comprise the first column of the map yield the term x̄ȳz̄. With a z̄ entry
remaining ungrouped, another subcube is necessary. Two equal-sized subcubes
are possible in this case: the subcube shown that corresponds to the term xyz̄ and
the subcube consisting of the z̄ entry with the 1-cell to its right that corresponds
to the term w̄xz̄. Although both subcubes appear to be equally good since they
consist of the same number of cells, it should be noted that the first possibility
uses a 1-cell that was previously grouped with the z entry. Anticipating Step 2 of
the process, if the xyz̄ subcube is selected rather than the w̄xz̄ subcube, then the
1-cell corresponding to (w,x,y) = (1,1,1) becomes a don't-care cell in Step 2,
since it is completely covered, and only the one remaining 1-cell needs to be
grouped at that time. If the alternate possibility is elected, then two 1-cells must
be grouped in Step 2. This results in an extra term at that time. Thus, the xyz̄ sub-
cube must be selected. Finally, according to Step 2, the not completely covered
1-cell is grouped. This results in the term w̄xȳ. The minimal sum is

f(w,x,y,z) = wyz + x̄ȳz̄ + xyz̄ + w̄xȳ


As a final example, consider the variable-entered map in Fig. 4.40a. Step 1 requires
the three subcubes shown in Fig. 4.40b. As required by Step 2, it is still



(a) (b) (c)

Figure 4.40 Obtaining a minimal sum from a map having single-variable map entries. (a) Variable-entered
map. (b) Step 1. (c) Step 2.
212 DIGITAL PRINCIPLES AND DESIGN

necessary to either form a subcube for the not completely covered 1-cell or ensure
that the 1-cell becomes completely covered by using it with some of the z-cells. In
the first case, the 1-cell must be grouped alone, resulting in the term wxy. However,
as illustrated in Fig. 4.40c, the 1-cell can be grouped with a collection of z-cells.
This results in the term xz. If this is done, then the 1-cell becomes completely covered
since it was previously grouped when forming the wyz term. Because the term
xz has one less literal than the term wxy, the xz subcube should be selected. The
minimal sum is

f(w,x,y,z) = wz + yz + wyz + xz

4.14.3 Minimal Products


By duality, it is a simple matter to modify the above two-step process to obtain a
minimal product from a variable-entered map for a completely specified function
with a single map-entered variable. In this case, the 0-cells are regarded as don't-care
cells when grouping the v and v̄ literals in Step 1. Step 2 requires the grouping
of all not completely covered 0-cells. Completely covered 0-cells are used as don't-cares
in Step 2. The subcubes formed denote sum terms. When writing the sum
terms in Step 1, the literals appearing in the subcubes are or-ed with the literals normally
read from the axes of the map.* A formal statement of the procedure is obtained
by replacing every occurrence of "1" by "0" and every occurrence of "0" by
"1" in the above algorithm for obtaining minimal sums.
An example of obtaining a minimal product from a variable-entered map is
shown in Fig. 4.41. Two subcubes are necessary to optimally cover the z literals entered
in the map. These correspond to the sum terms x + y + z and w + y + z. It


Figure 4.41 Obtaining a minimal product from a map having single-variable map entries.

*Recall that when reading sum terms from a Karnaugh map, 0’s on the map axes denote
uncomplemented variables and 1’s denote complemented variables.

should be noted that a 0-cell was used in one of the subcubes as a don't-care cell.
The z̄ literal is covered by a single subcube that again uses the 0-cell as a don't-care
cell. This results in the sum term x + y + z̄. Finally, only one 0-cell is not completely
covered. Hence, an additional subcube is needed, corresponding to the sum
term w + x + y. The resulting minimal product is

f(w,x,y,z) = (x + y + z)(w + y + z)(x + y + z̄)(w + x + y)

4.14.4 Incompletely Specified Functions


Up to this point it has been assumed that the Boolean function represented by a
variable-entered map was completely specified. However, incompletely specified
Boolean functions, i.e., those having don’t-cares, commonly occur in logic-design
problems. It is possible to generalize the construction and reading of variable-
entered maps to handle don’t-care conditions.
Again assume that the map entries in a variable-entered map correspond to
single-variable functions. It was previously shown that these entries correspond to
the evaluation of the expression fᵢ·v̄ + fⱼ·v, where fᵢ and fⱼ are the functional values
in the ith and jth rows of the truth table and v is the map-entered variable. Since the
Boolean function is now assumed to be incompletely specified, the values of fᵢ and fⱼ
are 0, 1, or don't-care. Table 4.21 lists the nine possible assignments to fᵢ and fⱼ, the
evaluation of fᵢ·v̄ + fⱼ·v in each case, and the corresponding entries for a variable-entered
map. For the purpose of this table, the don't-care dash (—) is assumed to be a
legitimate symbol in a Boolean expression.
It is seen from the table that double entries may appear in a map. For example,
consider the case when fᵢ = 0 and fⱼ = —. The reduced form of fᵢ·v̄ + fⱼ·v is —·v.
Since the dash represents either a 0 or a 1, the expression —·v evaluates to 0 when
the dash is a 0 and evaluates to v when the dash is a 1. Hence, the map entry v,0 signifies
that the map cell can be regarded either as a v-cell, i.e., a cell having the entry
v, or as a 0-cell. Similarly, when fᵢ = — and fⱼ = 1, the reduced form of fᵢ·v̄ + fⱼ·v
is —·v̄ + v. Under the assumption that the dash denotes a 0, 0·v̄ + v = 0 + v = v;
while when the dash denotes a 1, 1·v̄ + v = v̄ + v = 1. In this case, the double

Table 4.21 Single-variable map entries for incompletely specified Boolean functions

fᵢ   fⱼ   fᵢ·v̄ + fⱼ·v                      Map entry
0    0    0·v̄ + 0·v = 0 + 0 = 0            0
0    1    0·v̄ + 1·v = 0 + v = v            v
0    —    0·v̄ + —·v = 0 + —·v = —·v        v,0
1    0    1·v̄ + 0·v = v̄ + 0 = v̄            v̄
1    1    1·v̄ + 1·v = v̄ + v = 1            1
1    —    1·v̄ + —·v = v̄ + —·v              v̄,1
—    0    —·v̄ + 0·v = —·v̄ + 0 = —·v̄        v̄,0
—    1    —·v̄ + 1·v = —·v̄ + v              v,1
—    —    —·v̄ + —·v                        —

entry v,1 is used to signify that the map cell can be a v-cell, i.e., a cell having the
entry v, or a 1-cell. In the following discussion, the first part of a double entry is referred
to as the literal part and the second part of a double entry is referred to as the
constant part.
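Table 4.21 amounts to a nine-entry lookup from the pair (fᵢ, fⱼ) to a map entry. As an illustrative sketch (not from the text), with None standing for the don't-care dash and v' for the complemented literal:

```python
# Sketch of Table 4.21: the map entry as a function of (fi, fj), where each
# value is 0, 1, or None (the don't-care dash). Double entries pair a
# literal part with a constant part, e.g. "v',1" is a v'-cell or a 1-cell.

TABLE_4_21 = {
    (0, 0): "0",       (0, 1): "v",       (0, None): "v,0",
    (1, 0): "v'",      (1, 1): "1",       (1, None): "v',1",
    (None, 0): "v',0", (None, 1): "v,1",  (None, None): "-",
}

print(TABLE_4_21[(None, 1)])   # a dash over a 1 gives the double entry v,1
```

Note how a dash paired with a constant always yields a double entry, while a dash paired with a dash leaves the cell a full don't-care.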
The process of reading a variable-entered map for incompletely specified
Boolean functions is more complex since the double-entry cells in the map provide
flexibility. Again, reading the map is done as a two-step process. For obtaining minimal
sums, all cells containing the v and v̄ literals alone are grouped separately in
the first step, as was done previously. Cells containing 1's and —'s alone are used as
don't-cares. In addition, the cells with double entries must also be considered. A
double-entry cell having a 0 constant part can be used as a don't-care for subcubes
involving its literal part during Step 1. These cells, regardless of how they are used in
Step 1, become 0-cells in Step 2. On the other hand, any double-entry cell having a 1
constant part can be used as a don't-care in Step 1 regardless of the literal part. However,
how it is used in Step 1 determines how it must be used in Step 2. To illustrate
this, consider a cell with the double entry v,1. This cell can be used optionally for
subcubes involving the v literal in Step 1, in which case it becomes a don't-care in
Step 2. Alternately, it can be used as a 1-cell in Step 1. Since 1-cells are normally
regarded as don't-care cells in Step 1, the v,1-cell can then also be used to group
v̄ literals in Step 1. If this option is used, it must be either completely covered in
Step 1, in which case it becomes a don't-care in Step 2, or, if not completely covered,
it must be considered a 1-cell in Step 2, as was done previously when reading
variable-entered maps of completely specified functions.
The two-step process for obtaining minimal sums for incompletely specified
Boolean functions from a variable-entered map with a single map-entered variable
is summarized as follows:

Step 1. Form an optimal collection of subcubes for all entries that consist of
only a single literal, i.e., v and v̄, using the —'s, 1's, and double entries
having a 1 constant part as don't-cares. In addition, double entries
having a 0 constant part can be used as don't-cares for subcubes that
agree with the literal part of the double entry. The subcubes must be
rectangular and have dimensions 2ᵃ × 2ᵇ and should be minimized in
number and maximized in size.
Step 2. Form a Step 2 map as follows:
a. Replace the single literal entries, i.e., v and v̄, by 0.
b. Retain the single 0 and — entries.
c. Replace each single 1 entry by a — if it was completely covered in
Step 1; otherwise, retain the single 1 entry.
d. Replace the double entries having a 0 constant part, i.e., v,0 and
v̄,0, by 0.
e. Replace each double entry having a 1 constant part by a — if the cell
was used in Step 1 to form at least one subcube agreeing with the
literal part; otherwise, replace the double entry having a 1 constant
part by a 1. (It should be noted that the second case corresponds to
the cell not being covered at all or only being used in subcubes involving
the complement of the literal part of the double entry.)
The resulting Step 2 map only has 0, 1, and — entries. An optimal
collection of subcubes for the 1-cells should be determined using the
cells containing —'s as don't-care cells.
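The Step 2 rules (a) through (e) translate directly into a small dispatch function. In this sketch (the function name and flag encoding are assumptions, not from the text), two booleans record how the cell was used in Step 1:

```python
# Sketch of Step 2 rules (a)-(e) for minimal sums. "completely_covered"
# applies to single 1 entries (rule c); "used_agreeing" applies to double
# entries with a 1 constant part (rule e).

def step2_entry(entry, completely_covered=False, used_agreeing=False):
    if entry in ("v", "v'"):          # (a) single literals become 0
        return "0"
    if entry in ("0", "-"):           # (b) 0 and dash are retained
        return entry
    if entry == "1":                  # (c) 1 -> dash only if fully covered
        return "-" if completely_covered else "1"
    if entry in ("v,0", "v',0"):      # (d) 0-constant doubles become 0
        return "0"
    if entry in ("v,1", "v',1"):      # (e) 1-constant doubles
        return "-" if used_agreeing else "1"
    raise ValueError(entry)

# A v',1 cell used only in a v-subcube disagrees with its literal part:
print(step2_entry("v',1", used_agreeing=False))
```

The function makes the asymmetry of rules (d) and (e) explicit: 0-constant doubles become 0-cells unconditionally, while 1-constant doubles depend on how Step 1 used them.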

To illustrate the above process, consider the Boolean function f(w,x,y,z) =
Σm(3,5,6,7,8,9,10) + dc(4,11,12,14,15). The corresponding truth table is given in
Fig. 4.42a. Arbitrarily selecting w, x, and y as the map variables and z as the map-entered
variable, the truth table is partitioned as indicated by the dashed lines. For
corresponding pairs of rows within a partition, the expression fᵢ·z̄ + fⱼ·z is evaluated
and the map entries are determined. The variable-entered map is then constructed
as shown in Fig. 4.42b. Figure 4.42c shows how the map entries are
grouped in Step 1. It is observed that there is only one z-cell. Since both 1-cells and


Figure 4.42 Obtaining a minimal sum for the incompletely specified Boolean function
f(w,x,y,z) = Σm(3,5,6,7,8,9,10) + dc(4,11,12,14,15) using a variable-entered
map. (a) Truth table. (b) Variable-entered map. (c) Step 1 map
and subcubes. (d) Step 2 map and subcubes.

z̄,1-cells can be used as don't-cares when grouping z-cells, the single subcube
shown in the figure is formed. This results in the term yz. With no other cells having
single-literal entries remaining ungrouped, Step 1 is completed. The Step 2 map is
shown in Fig. 4.42d. It should be noted that Step 2e requires the z̄,1-cell to be
replaced by a 1 rather than by a — since this cell was previously used only in a
z-subcube and not also in a z̄-subcube, and the z,1-cell to be replaced by a 1 since it
was not used as a z-cell. The 1-cells are now grouped on the Step 2 map, resulting in
the terms w̄x and wx̄. The minimal sum is thus given as

f(w,x,y,z) = yz + w̄x + wx̄
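As a quick sanity check (an aside, with the two Step 2 terms read as w̄x and wx̄), the sum can be evaluated exhaustively: it must be 1 on every required minterm and 0 everywhere outside the minterms and don't-cares.

```python
# Exhaustive check of yz + w'x + wx' against Σm(3,5,6,7,8,9,10) with
# don't-cares dc(4,11,12,14,15); the don't-care rows may come out either way.

MINTERMS = {3, 5, 6, 7, 8, 9, 10}
DONT_CARES = {4, 11, 12, 14, 15}

def f(w, x, y, z):
    return (y & z) | ((1 - w) & x) | (w & (1 - x))   # yz + w'x + wx'

for m in range(16):
    w, x, y, z = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    if m in MINTERMS:
        assert f(w, x, y, z) == 1, m
    elif m not in DONT_CARES:
        assert f(w, x, y, z) == 0, m
print("minimal sum covers the specification")
```

The check does not prove minimality, only that the expression realizes the incompletely specified function.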
As a second example, consider the incomplete function f(w,x,y,z) =
Σm(0,4,5,6,13,14,15) + dc(2,7,8,9). The partitioned truth table under the assumption
that z is the map-entered variable is shown in Fig. 4.43a, and the corresponding
variable-entered map is shown in Fig. 4.43b. The z̄-cell is covered by grouping
the first row of the map. This is permissible since the z̄,0-cell, the z̄,1-cell, and the
1-cell can all be regarded as don't-cares when grouping a z̄-cell. This gives the term
w̄z̄. The z-cell is next put into a 2 × 2 subcube. Here the 1-cells and z̄,1-cell are


Figure 4.43 Obtaining a minimal sum for the incompletely specified Boolean function
f(w,x,y,z) = Σm(0,4,5,6,13,14,15) + dc(2,7,8,9) using a variable-entered
map. (a) Truth table. (b) Step 1 map and subcubes. (c) Step 2 map and
subcubes.

used as don't-cares. The term associated with this subcube is xz. At this point the
Step 2 map is formed as shown in Fig. 4.43c. Since the z̄,1-cell in Fig. 4.43b was
used in a z̄-subcube, the corresponding cell is a don't-care in Fig. 4.43c. In addition,
the 1-cell in the upper right corner of Fig. 4.43b was completely covered during
Step 1. Hence, it also becomes a don't-care cell in the Step 2 map. However,
since the other 1-cell in Fig. 4.43b was not completely covered, it remains a 1-cell
in Fig. 4.43c. The single 1-cell in Fig. 4.43c is now grouped with a don't-care cell,
resulting in the term xy. The minimal sum is

f(w,x,y,z) = w̄z̄ + xz + xy
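The same kind of exhaustive check (an aside, reading the first term as w̄z̄) confirms this result:

```python
# Exhaustive check of w'z' + xz + xy against Σm(0,4,5,6,13,14,15) with
# don't-cares dc(2,7,8,9).

MINTERMS = {0, 4, 5, 6, 13, 14, 15}
DONT_CARES = {2, 7, 8, 9}

def f(w, x, y, z):
    return ((1 - w) & (1 - z)) | (x & z) | (x & y)   # w'z' + xz + xy

for m in range(16):
    w, x, y, z = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    if m in MINTERMS:
        assert f(w, x, y, z) == 1, m
    elif m not in DONT_CARES:
        assert f(w, x, y, z) == 0, m
print("minimal sum covers the specification")
```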
By duality, the above two-step procedure for obtaining minimal sums of incompletely
specified Boolean functions can be restated for minimal products. To restate
the procedure, all occurrences of "1" should be replaced by "0" and all occurrences
of "0" replaced by "1" in the algorithm for determining minimal sums.
As an illustration of obtaining a minimal product, again consider the partitioned
truth table given in Fig. 4.43a from which the variable-entered map of Fig. 4.44a is
constructed. Step 1 requires grouping the z-cell and the z̄-cell individually using —'s,
0's, double entries having a 0 constant part, and double entries having a 1 constant part with
an agreeing literal part as don't-cares. This results in the two subcubes shown in
Fig. 4.44a and the corresponding sum terms x + z̄ and w̄ + y + z. The Step 2 map
is then constructed as shown in Fig. 4.44b using the dual of the construction procedure
previously stated for Step 2 minimal-sum maps, i.e., upon interchanging all
occurrences of 1's and 0's. On the Step 2 map, the 0's are optimally grouped using
dashes as don't-cares. In this case, there is only one 0 entry. Since it can be grouped
in two ways, one minimal product is

f(w,x,y,z) = (x + z̄)(w̄ + y + z)(x + ȳ)

and another is

f(w,x,y,z) = (x + z̄)(w̄ + y + z)(w̄ + x)
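Both products can likewise be checked exhaustively (an aside, with the sum terms read as x+z̄, w̄+y+z, x+ȳ, and w̄+x):

```python
# Exhaustive check of both minimal products against Σm(0,4,5,6,13,14,15)
# with don't-cares dc(2,7,8,9).

MINTERMS = {0, 4, 5, 6, 13, 14, 15}
DONT_CARES = {2, 7, 8, 9}

def f1(w, x, y, z):
    return (x | (1 - z)) & ((1 - w) | y | z) & (x | (1 - y))

def f2(w, x, y, z):
    return (x | (1 - z)) & ((1 - w) | y | z) & ((1 - w) | x)

for g in (f1, f2):
    for m in range(16):
        w, x, y, z = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        if m in MINTERMS:
            assert g(w, x, y, z) == 1, (g.__name__, m)
        elif m not in DONT_CARES:
            assert g(w, x, y, z) == 0, (g.__name__, m)
print("both minimal products cover the specification")
```

The two products differ only on don't-care rows, which is exactly why both are acceptable.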


Figure 4.44 Obtaining a minimal product for the incompletely specified Boolean
function f(w,x,y,z) = Σm(0,4,5,6,13,14,15) + dc(2,7,8,9) using a
variable-entered map. (a) Step 1 map and subcubes. (b) Step 2
map and subcubes.

4.14.5 Maps Whose Entries Are Not Single-Variable Functions
In the discussion on constructing variable-entered maps, it was shown that concep-
tually the cell entries can be functions of more than a single variable. This was illus-
trated in Figs. 4.34f and 4.36. In general, the analysis of variable-entered maps is
rather difficult when complex expressions appear as cell entries. However, for the
case of infrequently appearing variables, as in Fig. 4.36, the cell entries are nor-
mally simple enough expressions so that map analysis is manageable. Three cases
are particularly common on variable-entered maps having infrequently appearing
variables: the expressions within the cells are single literals, the sum of single liter-
als, and the product of single literals. These cases are now considered in detail for
obtaining only minimal sums of completely specified Boolean functions. The reader
should be able to extend these concepts to incompletely specified functions. Obtain-
ing minimal products from the same map is a rather difficult problem when the en-
tries are more complex than simply single literals, and is not discussed.
Figure 4.45a shows a variable-entered map in which the map entries involve
two variables. However, the variable entries themselves are simply single literals. In
such cases, Step 1 of the two-step process discussed previously is carried out on
each literal individually. That is, by setting all the literals to 0 except one, optimal
subcubes for each literal, in turn, are obtained. Again, all 1-cells are regarded as
don't-cares in Step 1. In actuality, however, a 1-cell is a (v + v̄)-cell. If a 1-cell does
not become completely covered in Step 1, i.e., functionally equal to 1, then it must
be grouped in a Step 2 map. For the map of Fig. 4.45a, the z literal is set to 0 and
the y-cell is grouped with the 1-cell, which is regarded as a (y + ȳ)-cell. This is
shown in Fig. 4.45b. The resulting term is wy. Next, the y literal is set to 0 and the
appropriate subcube for the z-cell is formed as shown in Fig. 4.45c, where the 1-cell
is now considered a (z + z̄)-cell. This subcube corresponds to the term xz. Since
each of the individual literals is grouped, it is next necessary to consider all the
1-cells that were not completely covered. A 1-cell is completely covered if the covered
part is functionally equal to 1. In this example, since the first subcube covered
a y and the second subcube covered a z, the covered part of the cell is algebraically

Figure 4.45 Maps having entries involving more than one variable. (a) Variable-entered
map. (b) Grouping the y literal. (c) Grouping the z literal.
(d) Grouping the not completely covered 1-cell.


Figure 4.46 Obtaining a minimal sum from a variable-entered map
having several single-literal map entries. (a) Variable-entered
map. (b) Optimal collection of subcubes.

described by y + z, which is not functionally equal to 1. Any 1-cells that are not
completely covered in Step 1 become 1-cells in a Step 2 map. Thus, the Step 2 map
of Fig. 4.45d must be constructed for this example and a minimal covering obtained,
which is wx. This results in the minimal sum

f(w,x,y,z) = wy + xz + wx

Although separate maps were drawn in this example to illustrate the various subcubes,
the procedure can be carried out on a single map.
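The claim that the 1-cell is not completely covered can itself be checked mechanically (an aside): the covered part y + z is not identically 1.

```python
# The 1-cell is really a (y + y')-cell, i.e., it must be covered for both
# values of the entered variables. The subcubes above cover only y + z,
# which fails at y = z = 0, so a Step 2 map is required.

covered = [(y | z) for y in (0, 1) for z in (0, 1)]
print(all(v == 1 for v in covered))   # False: not functionally equal to 1
```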
Another example of a variable-entered map with two-variable map entries is
shown in Fig. 4.46a. In Fig. 4.46b the Step 1 subcubes, i.e., subcubes for each literal,
are shown. The 1-cell is viewed as a (y + ȳ)-cell when grouping the y and ȳ literals,
and as a (z + z̄)-cell when grouping the z literal. The covered portion of this
1-cell can be written as y + ȳ + z. Since y + ȳ + z is functionally equal to 1, i.e.,
y + ȳ + z = 1 + z = 1, this cell is completely covered and no Step 2 map is required.
The minimal sum is

f(v,w,x,y,z) = vwy + wxy + vxz


The situation when cells consist of sum terms, i.e., the or-ing of single literals, is
illustrated in Fig. 4.47a. This case is handled in essentially the same manner as the


Figure 4.47 Maps having sum terms as entries. (a) Variable-entered map.
(b) Grouping the y literal. (c) Grouping the z literal. (d) Grouping the not
completely covered 1-cell.


Figure 4.48 Maps having product terms as entries. (a) Variable-entered map.
(b) Grouping the yz term. (c) Grouping the y literal. (d) Grouping the
not completely covered 1-cell.

previous case. In Step 1 each literal in the map is considered, in turn, by setting all
the other literals to 0. The optimal subcubes are formed using 1-cells as don't-cares.
Again Step 2 is used to group all 1-cells that are not completely covered in Step 1. In
Fig. 4.47a it is noted that two distinct literals appear within the cells, i.e., y and z.
Setting the z literal to 0 yields the map of Fig. 4.47b. The subcube involving the y literal
results in the term wy. Next the y literal in the map of Fig. 4.47a is set to 0. Figure
4.47c illustrates the resulting map where the 1-cell is rewritten as a (z + z̄)-cell
for emphasis. A subcube involving the z literal is then formed that results in the term
xz. Since all the literals of the original map are grouped, it is next necessary to group
all 1-cells that are not completely covered. The Step 2 map is formed by replacing
all literals by 0's, completely covered 1-cells by —'s, and not completely covered
1-cells by 1's. For this example, the map shown in Fig. 4.47d results. The grouping
of the 1-cell corresponds to the term wx. Thus, the minimal sum is given by

f(w,x,y,z) = wy + xz + wx
Again individual maps were drawn to illustrate the process. However, normally all
subcubes can be formed on a single map.
The third case to be considered involves cells containing product terms, i.e., the
and-ing of single literals. An example of this is shown in Fig. 4.48a. Each distinct
product term, in turn, is grouped as an entity while setting the literals that comprise
the product term to 1 and those not contained within the product term to 0 in all the
remaining cells that contain a different product term.* For Fig. 4.48a the product
term is yz. Thus, all cells having the y and z literals alone are replaced by 1's. This
results in the map shown in Fig. 4.48b. Using 1-cells as don't-cares, a yz-subcube is
formed, resulting in the term wyz. Since the map of Fig. 4.48a also has a cell with a
single-literal product term, this cell must also be grouped. This is done by setting all
literals, except y, equal to 0. As a consequence, the cell containing the yz term is replaced
by 0. The resulting map is shown in Fig. 4.48c, where the 1-cell is rewritten
as a (y + ȳ)-cell for emphasis. Grouping the y literal results in the term xy. Finally,
Step 2 is performed to cover all 1-cells in the original map that are not completely
covered in Step 1. All map entries involving product terms are replaced by 0's and

*Recall that a product term can consist of a single literal.



completely covered 1-cells by don't-cares. In this example, Fig. 4.48d is the Step 2
map and the cover is given by wx. The minimal sum is

f(w,x,y,z) = wyz + xy + wx

When both sum and product terms appear within the same map, the analysis
needed to obtain a minimal sum becomes more difficult since greater attention must
be given to the functional covering of cells. In all the previous examples, the concept
of functional covering only involved the 1-cells. In general, a cell is functionally
covered if the sum of the coverings from the subcubes involving the cell is
equal to the function specified within the cell. To illustrate this point, consider the
map of Fig. 4.49a. To obtain a minimal sum, the ȳz-cell is first considered. As was
done previously, the occurrences of the ȳ and z literals in the remaining cells are replaced
by 1's and the necessary subcubes established. This results in the map shown
in Fig. 4.49b, from which the product term xȳz is obtained. Since the original
(y + z)-cell was included in the subcube, it now has become partially covered.
The expression in the (y + z)-cell of Fig. 4.49a can be written as y + z = y +
(y + ȳ)z = y + ȳz + yz. It is noted that the second term of this expression is covered
by the ȳz-subcube. The remaining two terms simplify to y + yz = y. Thus,
this cell may now be regarded as simply a y-cell for the purpose of determining
the remaining subcubes for the map. That is, the simplification problem is now
reduced to the map shown in Fig. 4.49c. From this map, the y-subcube results in
the term wy. Finally, since no 1-cells appeared in the original map, the Step 2 part
of the process is not needed and the resulting minimal sum is

f(w,x,y,z) = xȳz + wy

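The algebraic steps behind the partial-covering argument hold for every assignment of y and z, as a brute-force check shows (an aside, written with ' for complement):

```python
# Check the identities used above:
#   y + z = y + y'z + yz   (expand z against y + y')
#   y + yz = y             (absorption, once the y'z part is covered)

for y in (0, 1):
    for z in (0, 1):
        assert (y | z) == (y | ((1 - y) & z) | (y & z))
        assert (y | (y & z)) == y
print("identities verified")
```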
Although obtaining minimal expressions from variable-entered maps is rather
difficult at times, these maps provide a convenient solution to obtaining "good" expressions
for functions having a large number of variables. This is particularly true
when dealing with problems having infrequently occurring variables that would
normally require the use of high-order maps. Such situations commonly occur when
designing controller-oriented synchronous sequential systems. The application of
variable-entered maps to these design problems is further discussed in Chapter 8.

Figure 4.49 Maps having product and sum terms as entries.
(a) Variable-entered map. (b) Grouping the ȳz
term. (c) Grouping the y literal.

CHAPTER 4 PROBLEMS
4.1 Relative to the Boolean function
f(w,x,y,z) = Σm(5,8,9,10,11,12,14)
classify each of the following terms as to whether it is (1) a prime implicant,
(2) an implicant but not prime, (3) a prime implicate, (4) an implicate but not
prime, (5) both an implicant and an implicate, or (6) neither an implicant nor
an implicate.
au Wz ee Saye areeZ Cc wtx
d. wxz 6.x f, Wisk ¥ Zz
e. Week h. wxyz
4.2 Represent each of the following Boolean functions on a Karnaugh map.
a. f(w,x,y,z) = wxyz + wxyz + wxyz + wxyz + wxyz + wxyZz
bi fwisyD = wrx yt owt xy + Ze Paty tz)
GW Fae ei GW aia tyr eZ Cs ak evecare)
c. f(w,x,y,z) = Σm(1,6,7,8,10,12,14)
d. f(w,x,y,z) = ΠM(0,3,4,7,9,13,14)
e. f(%,y,z) = xy + xy + yz
{ fiey2) =O OY + 2 +z)
4.3 Using a Karnaugh map, determine all the implicants of the function
f(w,x,y,z) = Σm(0,1,2,5,10,11,14,15). Which of these are prime
implicants?
4.4 Using Karnaugh maps, determine all the prime implicants of each of the
following functions. In each case, indicate the essential prime
implicants.
a. f(w,x,y,z) = Σm(0,1,2,5,6,7,8,9,10,13,14,15)
b. f(w,x,y,z) = ΠM(0,2,3,8,9,10,12,14)
c. f(w,x,y,z) = wyz + wyz + xyz + wxy + wxyz
d TWO WS 2 ea RO ee eet Cee eye)
NG Ei ay ae)

4.5 The flowchart shown in Fig. 4.14 for determining prime implicants on a
Karnaugh map can equally well be used to determine the prime implicates.
In this case the 0-cells are inspected and sum terms are written. For each of
the functions in Problem 4.4, determine all the prime implicates and indicate
which are essential prime implicates.
4.6 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(x,y,z) = Σm(1,3,4,5,6,7)
b. f(x,y,z) = Σm(2,3,4,5,7)
c. f(x,y,z) = ΠM(,4,7)
d. f(x,y,z) = ΠM(1,2,5,6,7)

4.7 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(x,y,z) = Σm(2,4,5,6,7)
b. f(x,y,z) = Σm(0,1,2,3,4,6,7)
c. f(x,y,z) = ΠM(,4,5,6)
d. f(x,y,z) = ΠM(1,4,5)

4.8 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(w,x,y,z) = Σm(0,1,6,7,8,14,15)
b. f(w,x,y,z) = Σm(3,4,6,9,11,12,13,14,15)
c. f(w,x,y,z) = Σm(1,3,4,6,7,9,11,13,15)
d. f(w,x,y,z) = ΠM(1,3,4,5,10,11,12,14)
e. f(w,x,y,z) = ΠM(,4,5,6,14)
f. f(w,x,y,z) = ΠM(4,6,7,8,12,14)
g. f(w,x,y,z) = wxz + xyz + wxz + xyz
h. f(w,x,y,z) = xz + xyz + wxy + wyz
Py VEG) i— Waa Core ey VCR iz CW eae)
H(i kee are)
j. fwx%y.z)=wreyt+tzaxtyt+zwtetyweetytsz
oy se See Pe ND ae oe ae WP ae Z))

4.9 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following Boolean functions.
a. f(w,x,y,z) = Σm(0,2,6,7,9,10,15)
b. f(w,x,y,z) = Σm(0,1,2,4,5,6,7,8,9)
c. f(w,x,y,z) = Σm(0,1,5,6,7,8,15)
d. f(w,x,y,z) = ΠM(0,2,6,8,10,12,14,15)
e. f(w,x,y,z) = ΠM(0,2,3,5,6,9,10,11,13)
f. f(w,x,y,z) = ΠM(1,5,10,14)
g. f(w,x,y,z) = wx + yz + wxy + wxyz
h. f(w,x,y,z) = xy + yz + xyz + xyz + wxy + wyz
i. fow.xy,2) = Ow + Hw ty t QW ++ DW + H+]
j. SW,%Y,Z =(wt y =P z)(w Seu els y )(w Siaeeatn y ateZ)

HC a eer haya)

4.10 Using Karnaugh maps, determine all the prime implicants and prime
implicates for each of the following incomplete Boolean functions. In each
case, indicate which are essential.
a. f(w,x,y,z) = Σm(0,2,5,7,8,10,13,15) + dc(1,4,11,14)
b. f(w,x,y,z) = Σm(1,3,5,7,8,10,12,13,14) + dc(4,6,15)
c. f(w,x,y,z) = ΠM(0,1,4,5,8,9,11) + dc(2,10)

4.11 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following incomplete Boolean functions.
a. f(w,x,y,z) = Σm(0,1,2,5,8,15) + dc(6,7,10)
b. f(w,x,y,z) = Σm(2,8,9,10,12,13) + dc(7,11)
c. f(w,x,y,z) = Σm(1,7,9,10,12,13,14,15) + dc(4,5,8)
d. f(w,x,y,z) = Σm(7,9,11,12,13,14) + dc(3,5,6,15)
e. f(w,x,y,z) = Σm(0,2,6,8,10) + dc(1,4,7,11,13,14)
f. f(w,x,y,z) = Σm(1,4,6,8,9,10,11,12,13) + dc(3,15)
g. f(w,x,y,z) = Σm(2,6,7,8,9,10,12,13) + dc(0,1,4)
h. f(w,x,y,z) = ΠM(0,8,10,11,14) + dc(6)
i. f(w,x,y,z) = ΠM(2,8,11,15) + dc(3,12,14)
j. f(w,x,y,z) = ΠM(0,2,6,11,13,15) + dc(1,9,10,14)
4.12 Using Karnaugh maps, determine all the minimal sums and minimal
products for each of the following incomplete Boolean functions.
a. f(w,x,y,z) = Σm(6,7,9,10,13) + dc(1,4,5,11,15)
b. f(w,x,y,z) = Σm(1,5,8,14) + dc(4,6,9,11,15)
c. f(w,x,y,z) = Σm(0,2,4,5,8,13,15) + dc(1,10,14)
d. f(w,x,y,z) = Σm(1,3,4,6,12) + dc(0,9,13)
e. f(w,x,y,z) = Σm(0,1,2,3,4,9,13) + dc(5,10,11,14)
f. f(w,x,y,z) = Σm(1,2,3,4,6,9,12,14) + dc(5,7,15)
g. f(w,x,y,z) = Σm(2,3,6,8,13,14,15) + dc(4,5,12)
h. f(w,x,y,z) = ΠM(,4,9,11,13) + dc(0,14,15)
i. f(w,x,y,z) = ΠM(1,2,3,4,9,10) + dc(0,14,15)
j. f(w,x,y,z) = ΠM(0,3,4,11,13) + dc(2,6,8,9,10)
4.13 Let g(w,x,y,z) = Σm(1,3,4,12,13) and f₁(w,x,y,z) =
Σm(0,1,3,4,6,8,10,11,12,13). Determine a minimal sum and a
minimal product for the function f₂(w,x,y,z) such that g(w,x,y,z) =
f₁(w,x,y,z)·f₂(w,x,y,z).
4.14 Using a Karnaugh map, determine a minimal sum and a minimal product for
each of the following functions.
a. f(v,w,x,y,z) = Σm(1,5,9,11,13,20,21,26,27,28,29,30,31)
b. f(v,w,x,y,z) = Σm(3,7,8,9,11,12,13,15,16,19,20,23,27,30,31)
c. f(v,w,x,y,z) = Σm(1,3,4,5,11,14,15,16,17,19,20,24,26,28,30)
d. f(v,w,x,y,z) = ΠM(0,2,4,6,8,12,14,15,16,18,20,22,30,31)

4.15 Using a Karnaugh map, determine a minimal sum and a minimal product for
the function
f(u,v,w,x,y,z) = Σm(4,5,6,7,8,10,12,14,36,37,38,39,40,42,44,
46,48,49,50,51,52,53,54,55,56,58,60,62)

4.16 Design a three-input, one-output minimal two-level gate combinational


network that has a logic-1 output when the majority of its inputs are logic-1
and has a logic-0 output when the majority of its inputs are logic-0.
4.17 In Sec. 3.8 the truth table for generating the odd-parity bit for decimal digits
in 8421 code was developed. Verify that the minimal sum stated in that
section is correct. What is the minimal product? Can a simpler network be
obtained if exclusive-or gates are used?
4.18 A network appearing in many digital systems is the binary full adder. This
network has as its inputs 1 bit from the addend (xᵢ), 1 bit from the augend
(yᵢ), and 1 bit corresponding to the carry (cᵢ) from the previous order
addition. The outputs from the adder are a sum bit (sᵢ) and a carry bit (cᵢ₊₁) to
be used in adding the next pair of addend and augend bits. By letting a one-to-one
correspondence exist between a binary symbol and a logic symbol, a
truth table can be constructed for a binary full adder. Design a three-input,
two-output minimal two-level gate combinational network to generate the
sum and carry bits of the adder. Treat each output independently.
4.19 A network appearing in many digital systems is the binary full subtracter.
This network has as its inputs 1 bit from the minuend (xᵢ), 1 bit from the
subtrahend (yᵢ), and 1 bit corresponding to the borrow (bᵢ) from the previous
order subtraction. The outputs from the subtracter are a difference bit (dᵢ)
and a borrow bit (bᵢ₊₁) to be used in subtracting the next pair of minuend and
subtrahend bits. By letting a one-to-one correspondence exist between a
binary symbol and a logic symbol, a truth table can be constructed for a
binary full subtracter. Design a three-input, two-output minimal two-level
gate combinational network to generate the difference and borrow bits of the
subtracter. Treat each output independently.
4.20 Design a minimal two-level gate combinational network that detects the
presence of any of the six illegal code groups in the 8421 code by providing
a logic-1 output.
4.21 A panel light in the control room at the launching of a satellite is to go on if
and only if the pressure in both the fuel and oxidizer tanks is equal to or
above a required minimum and there are 10 min or less to liftoff, or if the
pressure in the oxidizer tank is equal to or above a required minimum and
the pressure in the fuel tank is below a required minimum but there are more
than 10 min to liftoff, or if the pressure in the oxidizer tank is below a
required minimum but there are more than 10 min to liftoff. Design a
minimal two-level gate combinational network to control the panel light.
4.22 Design a four-input, four-output gate combinational network that converts
the decimal digits in 2421 code (as previously given in Table 2.7) into their
equivalent forms in 8421 code. Treat each output independently.
4.23 Design a four-input, one-output gate combinational network that has the
7536 code groups as inputs (as previously given in Table 2.7) and has an
output of logic-1 if the input digit D is in the range 0 ≤ D ≤ 3.
226 DIGITAL PRINCIPLES AND DESIGN

Figure P4.24 (a) Combinational network (decoder) structure. (b) Seven-segment display and segment labeling. (c) Segment patterns for the decimal digits.

4.24 A BCD-to-seven-segment decoder is a combinational network that accepts a
decimal digit expressed in the 8421 code and generates outputs for controlling
the segments of a seven-segment display that shows the corresponding
decimal digit. The decoder has the structure shown in Fig. P4.24a where the
four input variables w, x, y, and z correspond to the four bits of the 8421 code.
The general form of the seven-segment display and the relationship between
the segments and the outputs of the decoder are shown in Fig. P4.24b. When a
logic-1 appears as a network output, the corresponding segment is lit. Since
only the decimal digits (in binary form) can appear as inputs to the decoder,
the patterns shown in Fig. P4.24c are chosen to represent the decimal digits.
Note that segments b and c are used to indicate the decimal digit 1. Design a
logic network based on minimal sum expressions for controlling each of the
seven segments. Treat each output of the network independently.
4.25 Using the Quine-McCluskey method, obtain all the prime implicants for
each of the following Boolean functions.
a. f(w,x,y,z) = Σm(0,2,3,4,8,10,12,13,14)
b. f(v,w,x,y,z) = Σm(4,5,6,7,9,10,14,19,26,30,31)
c. f(w,x,y,z) = Σm(7,9,12,13,14,15) + dc(4,11)
d. f(w,x,y,z) = Σm(0,1,2,6,7,9,10,12) + dc(3,5)
4.26 Using the Quine-McCluskey method, obtain all the prime implicates for
each of the following Boolean functions.
a. f(v,w,x,y,z) = ΠM(,3,6,10,11,12,14,15,17,19,20,22,24,29,30)
b. f(w,x,y,z) = ΠM(0,2,3,4,5,12,13) + dc(8,10)
4.27 Using the Quine-McCluskey and Petrick methods, determine all the
irredundant disjunctive normal formulas for the following Boolean
functions. Indicate which expressions are minimal sums.
a. f(w,x,y,z) = Σm(4,5,7,12,14,15)
b. f(w,x,y,z) = Σm(4,5,8,9,12,13) + dc(0,3,7,10,11)
CHAPTER 4 Simplification of Boolean Expressions 227

4.28 Using the Quine-McCluskey and Petrick methods, determine all the
irredundant conjunctive normal formulas for the following Boolean
functions. Indicate which expressions are minimal products.
a. f(w,x,y,z) = ΠM(,4,5,8,9,11,13,14,15)
b. f(w,x,y,z) = ΠM(0,6,7,8,9,13) + dc(5,15)

4.29 For each of the prime-implicant tables shown in Table P4.29, determine a
minimal cover. The cost column indicates the cost associated with each row.
State your reasons for deleting any rows or columns.

Table P4.29
[Prime-implicant tables (a)–(d): rows r1–r7, columns C1–C7, and a cost column for each row; the × entries of the tables are not recoverable from this reproduction.]

4.30 Using the Quine-McCluskey method and prime-implicant table reductions,
determine a minimal sum for the incomplete Boolean function
f(w,x,y,z) = Σm(3,4,5,7,10,12,14,15) + dc(2)
4.31 Using the decimal Quine-McCluskey method and prime-implicant table
reductions, determine a minimal sum for the incomplete Boolean function
f(w,x,y,z) = Σm(1,3,6,8,9,10,12,14) + dc(7,13)

4.32 For the following set of Boolean functions, apply Petrick’s method to
determine all the multiple-output minimal sums based on the number of
distinct terms and on the number of gate input terminals in the realization.

f1(x,y,z) = Σm(1,2,4,6)
f2(x,y,z) = Σm(0,1,3,4,7)
f3(x,y,z) = Σm(1,4,5,7)
4.33 For each of the following sets of Boolean functions, determine a multiple-
output minimal sum based on the number of gate input terminals in the
realization.
a. f1(x,y,z) = Σm(0,2,3,4,6)
f2(x,y,z) = Σm(0,2,5)
f3(x,y,z) = Σm(3,4,5,6)
b. f1(x,y,z) = Σm(1,2,5) + dc(,7)
f2(x,y,z) = Σm(3,5,6,7) + dc(1,4)
f3(x,y,z) = Σm(1,4,6) + dc(0)
4.34 For each of the following Boolean functions, determine a minimal sum and a
minimal product using variable-entered maps where w, x, and y are the map
variables.
a. f(w,x,y,z) = Σm(2,3,4,5,10,12,13)
b. f(w,x,y,z) = Σm(0,3,4,5,8,9,11,12,13)
c. f(w,x,y,z) = Σm(1,3,8,9,10,11,12,14,15)
d. f(w,x,y,z) = Σm(4,7,8,12,13,15)
e. f(w,x,y,z) = Σm(3,4,5,7,8,11,12,13,15)
f. f(w,x,y,z) = Σm(0,2,5,8,9,10,11,13,15)

4.35 For each of the following Boolean functions, determine a minimal sum and a
minimal product using variable-entered maps where w, x, and y are the map
variables.
a. f(w,x,y,z) = Σm(2,3,5,12,14) + dc(0,4,8,10,11)
b. f(w,x,y,z) = Σm(,5,6,7,9,11,12,13) + dc(0,3,4)
c. f(w,x,y,z) = Σm(1,5,7,10,11) + dc(2,3,6,13)
d. f(w,x,y,z) = Σm(5,6,7,12,13,14) + dc(3,8,9)
e. f(w,x,y,z) = Σm(2,3,4,10,13,14,15) + dc(7,9,11)

Figure P4.37

4.36 For each of the following Boolean functions, determine a minimal sum using
variable-entered maps where x, y, and z are the map variables.
a. f(A,B,x,y,z) = Axyz + Bxyz + Bxyz + xyz
b. f(A,B,x,y,z) = Axyz + Axyz + Axyz + Bxyz + Bxyz + xyz + xyz
c. f(A,B,x,y,z) = Axyz + Axyz + ABxyz + ABxyz + xyz
4.37 For each of the variable-entered maps in Fig. P4.37, determine a minimal
sum.
Logic Design with MSI Components and Programmable Logic Devices

It is possible to obtain fabricated circuit chips, or packages, that have from a small
set of individual gates to a highly complex interconnection of gates corresponding
to an entire logic network. The complexity of a single chip is known as the scale
of integration. As a rough rule of thumb, circuit chips containing from 1 to 10 gates
are said to be small-scale integrated (SSI) circuits, those having from 10 to 100
gates as medium-scale integrated (MSI) circuits, those having 100 to 1,000 gates as
large-scale integrated (LSI) circuits, and those having more than 1,000 gates as
very-large-scale integrated (VLSI) circuits.
Chapter 4 was concerned with obtaining optimal logic networks. At that time,
emphasis was placed on minimizing the number of gates and the number of gate
input terminals. Thus, the realization cost was based on chips having single gates.
However, if more than one gate is included on a chip, then the cost more realisti-
cally should be associated with the entire package rather than the individual gates.
In such a case, a good realization from a cost point of view does not necessarily re-
quire the use of a minimal expression according to the previous criteria but rather
requires one that does not exceed the capacity of the circuit package.
Another occurring situation in logic design is that certain gate configurations
have become so common and useful that manufacturers fabricate these networks on a
single chip using medium-scale and large-scale integration. These configurations nor-
mally provide a high degree of flexibility, allowing them to be used as logic-design
components. Again, good realizations of logic networks are achieved by proper use
of these generalized circuits without having to form minimal expressions.
This chapter first introduces some specialized MSI components that have extensive use in digital systems. These include adders, comparators, decoders, encoders, and multiplexers. Their principle of operation and, in some cases, how they
can be used as logic-design components are presented.
Unlike the MSI circuits which are designed to perform specific functions, LSI
technology introduced highly generalized circuit structures known as programma-
ble logic devices (PLDs). In their simplest form, programmable logic devices con-
sist of an array of and-gates and an array of or-gates. However, they must be modi-
fied for a specific application. Modification involves specifying the connections
within these arrays using a hardware procedure. This procedure is known as pro-
gramming. As a result of programming the arrays, it is possible to achieve realiza-
tions of specific functions using generalized components.
In the second part of this chapter, three programmable logic device structures are
studied. In particular, the programmable read-only memory (PROM), the program-
mable logic array (PLA), and the programmable array logic (PAL)* are discussed. @

5.1 BINARY ADDERS AND SUBTRACTERS


The most fundamental computational process encountered in digital systems is that
of binary addition. In Sec. 2.3 the concept of binary addition was introduced. As
was seen at that time, when two binary numbers are added, in general, it is neces-
sary to consider at each bit position an augend bit, x_i, an addend bit, y_i, and a carry-
in from the previous bit position, c_i. The result of the addition at each bit position is
a sum bit, s_i, and a carry-out bit, c_{i+1}, which is used when adding at the next higher-
order bit position. Table 5.1 summarizes the addition process for each bit position,
i.e., the values of s_i and c_{i+1} for all the possible assignments of values to x_i, y_i, and c_i.
Although Table 5.1 was constructed as an addition table, it can also be regarded
as the truth table for a logic network that performs addition at a single bit position.
Such a network is referred to as a binary full adder.

Table 5.1 Truth table for a binary full adder

x_i  y_i  c_i  |  c_{i+1}  s_i
 0    0    0  |    0       0
 0    0    1  |    0       1
 0    1    0  |    0       1
 0    1    1  |    1       0
 1    0    0  |    0       1
 1    0    1  |    1       0
 1    1    0  |    1       0
 1    1    1  |    1       1

*PAL is a registered trademark of Advanced Micro Devices, Inc.



Figure 5.1 Karnaugh maps for the binary full adder.

Having obtained a truth table, let us determine a logic network realization. Karnaugh maps for the sum and carry-out outputs of the binary full adder are shown in
Fig. 5.1. The corresponding minimal sums are

s_i = x_i'y_i'c_i + x_i'y_ic_i' + x_iy_i'c_i' + x_iy_ic_i
c_{i+1} = x_iy_i + x_ic_i + y_ic_i        (5.1)
Although the minimal sum for the sum output is just its minterm canonical formula,
a possible simplification of the sum equation is achieved by making use of the
exclusive-or operation. In particular,

s_i = x_i'y_i'c_i + x_i'y_ic_i' + x_iy_i'c_i' + x_iy_ic_i
    = c_i(x_i'y_i' + x_iy_i) + c_i'(x_i'y_i + x_iy_i')
    = c_i(x_i ⊕ y_i)' + c_i'(x_i ⊕ y_i)
    = c_i ⊕ (x_i ⊕ y_i)        (5.2)

In arriving at Eq. (5.2), it should be noted that the form of the expression on the line
above it is A'B + AB', where A = c_i and B = (x_i ⊕ y_i), which corresponds to A ⊕ B.
The logic diagram for the binary full adder based on Eqs. (5.1) and (5.2) is shown in
Fig. 5.2.
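The truth table and the two equivalent sum expressions are easy to cross-check by simulation. The sketch below (illustrative Python, not part of the text) models one full-adder stage with 0/1 integers and confirms that the pair (c_{i+1}, s_i) encodes the arithmetic sum x_i + y_i + c_i:

```python
from itertools import product

def full_adder(x, y, c):
    """One-bit binary full adder: returns (sum, carry-out)."""
    s = c ^ (x ^ y)                      # Eq. (5.2): s_i = c_i xor (x_i xor y_i)
    c_out = (x & y) | (x & c) | (y & c)  # Eq. (5.1): c_{i+1} = x_i y_i + x_i c_i + y_i c_i
    return s, c_out

# Check all eight rows of Table 5.1 against ordinary arithmetic.
for x, y, c in product((0, 1), repeat=3):
    s, c_out = full_adder(x, y, c)
    assert 2 * c_out + s == x + y + c    # (c_{i+1}, s_i) is the 2-bit sum of the inputs
```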
Figure 5.2 A realization of the binary full adder.

Figure 5.3 Parallel (ripple) binary adder.

The binary full adder is only capable of handling one bit each of an augend and
addend along with a carry-in generated as a carry-out from the addition of the previous lower-order bit position. Consider now the addition of two binary numbers each
consisting of n bits, i.e., x_{n-1}x_{n-2}···x1x0 and y_{n-1}y_{n-2}···y1y0. This, in general, results in an (n + 1)-bit sum s_ns_{n-1}···s1s0. A direct approach for designing a binary
adder in this case is to write a truth table with 2^{2n} rows corresponding to all the
combinations of values assignable to the 2n operand bits, and specifying the values
for the n + 1 sum bits. Clearly, this is a formidable task.
As an alternate approach, n binary full adders, e.g., of the type shown in Fig. 5.2,
can be cascaded as illustrated in Fig. 5.3, where c_n, the carry-out from the highest-order
bit position, becomes the highest-order sum bit, s_n.* Since for the least-significant-bit
position there is no carry-in, a 0 is entered on the corresponding input line. When in-
puts are applied simultaneously to a logic network, as in Fig. 5.3, it is commonly re-
ferred to as a parallel input. Thus, the adder network shown in Fig. 5.3 is called a par-
allel binary adder. Although the inputs to this adder are applied simultaneously, the
output sum bits do not necessarily occur simultaneously due to the propagation delays
associated with the gates. In particular, the network of Fig. 5.3 is prone to a ripple ef-
fect in that a carry-out generated at the ith-bit position can affect the sum bits at higher-
order bit positions. Hence, the value for a higher-order sum bit is not produced until
the carry at its previous order bit position is established. Consequently, this logic net-
work is also referred to as a ripple binary adder.
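The cascade of Fig. 5.3 can be sketched in the same style; `ripple_add` is a name invented here, and the bit lists are taken LSB first so that list index i matches stage i:

```python
def ripple_add(x_bits, y_bits):
    """Parallel (ripple) binary adder: x_bits and y_bits are equal-length
    lists of 0/1 values, least-significant bit first (as in Fig. 5.3)."""
    carry, sum_bits = 0, []              # c_0 = 0 on the lowest-order carry-in
    for x, y in zip(x_bits, y_bits):
        s = carry ^ x ^ y
        carry = (x & y) | (x & carry) | (y & carry)
        sum_bits.append(s)
    return sum_bits + [carry]            # carry-out c_n becomes the sum bit s_n

# 0111 + 0001 = 1000; LSB first the operands are [1,1,1,0] and [1,0,0,0].
print(ripple_add([1, 1, 1, 0], [1, 0, 0, 0]))  # -> [0, 0, 0, 1, 0]
```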
As was discussed in Chapter 2, binary numbers can be signed or unsigned, in
which case the output of the adder must be interpreted accordingly as a signed or
unsigned result. Another factor affecting the interpretation of the output of the adder
is if a final carry-out occurs, i.e., s_n, since it may correspond to an overflow. The
reader is referred back to Chapter 2 for the details of binary arithmetic with signed
and unsigned numbers and the concept of overflow.

5.1.1 Binary Subtracters


Binary subtraction was also discussed in Sec. 2.3. A binary subtracter can be de-
signed using the same approach as that for a binary adder. The binary subtraction
process is summarized in Table 5.2. Again, in general, three bits are involved at each
bit order, a minuend bit, x_i, a subtrahend bit, y_i, and a borrow-in bit from the previous bit-order position, b_i. The result of the subtraction is a difference bit, d_i, and a

*Networks consisting of a cascade connection of identical subnetworks are frequently referred to as iterative networks.

Table 5.2 Truth table for a binary full subtracter

x_i  y_i  b_i  |  b_{i+1}  d_i
 0    0    0  |    0       0
 0    0    1  |    1       1
 0    1    0  |    1       1
 0    1    1  |    1       0
 1    0    0  |    0       1
 1    0    1  |    0       0
 1    1    0  |    0       0
 1    1    1  |    1       1

borrow-out bit, b_{i+1}. The difference bit at each order is obtained by subtracting both
the subtrahend and borrow-in bits from the minuend bit. To achieve this result, however, a borrow-out from the next higher-order bit position may be necessary.
For the purpose of obtaining a realization, Table 5.2 can also be viewed as a
truth table for a binary full subtracter. Since the d_i column of Table 5.2 is identical
to the s_i column of Table 5.1, it is immediately concluded that the difference equation for a binary full subtracter is

d_i = x_i ⊕ (y_i ⊕ b_i)

By using a Karnaugh map, the minimal-sum expression for the borrow-out is readily determined as

b_{i+1} = x_i'y_i + x_i'b_i + y_ib_i

These results can be used to construct a logic diagram for a binary full subtracter.
As was done for addition, by cascading n binary full subtracters, a ripple binary
subtracter is realized for handling two n-bit operands. The structure of such a realization is shown in Fig. 5.4, where x_{n-1}x_{n-2}···x1x0 is the n-bit minuend and
y_{n-1}y_{n-2}···y1y0 is the n-bit subtrahend.
Recalling from Secs. 2.8 and 2.9, subtraction can be replaced by addition
through the use of complements. For example, adding the 2’s-complement of the
subtrahend to the minuend results in the difference between the two numbers.

Figure 5.4 Parallel (ripple) binary subtracter.



Figure 5.5 Parallel binary subtracter constructed using a parallel binary adder.

Furthermore, the 2's-complement of a binary number is readily obtained by
adding one to its 1's-complement. Figure 5.5 shows the design of a subtracter
using this approach. The 1's-complement of the subtrahend is formed by inverting each of its bits, and a carry-in of 1 in the least-significant-bit position provides for the addition of 1 to the 1's-complement.
By making use of the fact that y_i ⊕ 1 = y_i', Fig. 5.6 gives a realization of a parallel adder/subtracter. The behavior of this network is determined by the control
signal Add/Sub. The subtraction operation is obtained by letting Add/Sub = 1,
in which case 1’s are appropriately entered into the exclusive-or-gates to provide
the 1’s-complement of the subtrahend and the initial carry-in of 1. The parallel bi-
nary adder then produces the difference. On the other hand, when the two
operands are to be added, Add/Sub = 0. The bits of the addend are not modified

Figure 5.6 Parallel binary adder/subtracter.



by the exclusive-or-gates prior to entering the parallel binary adder and the neces-
sary initial carry-in of 0 is provided.
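The Add/Sub conditioning of Fig. 5.6 reduces to XORing each subtrahend bit with the control signal and feeding that same signal in as the initial carry. A sketch under that reading (the function name is invented here; bit lists are LSB first):

```python
def add_sub(x_bits, y_bits, sub):
    """Parallel adder/subtracter in the style of Fig. 5.6.
    sub = 1: each y_i is replaced by y_i xor 1 and the initial carry-in is 1,
    so the 2's-complement of y is added; sub = 0: ordinary addition."""
    carry, out = sub, []                 # initial carry-in = Add/Sub control
    for x, y in zip(x_bits, y_bits):
        y ^= sub                         # exclusive-or gate conditions the subtrahend
        s = carry ^ x ^ y
        carry = (x & y) | (x & carry) | (y & carry)
        out.append(s)
    return out                           # final carry-out discarded here; see Chapter 2

# 0110 - 0010 = 0100 (6 - 2 = 4), LSB first
print(add_sub([0, 1, 1, 0], [0, 1, 0, 0], sub=1))  # -> [0, 0, 1, 0]
```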
Since the operands in subtraction can be either signed or unsigned, the output
of a binary subtracter must be interpreted appropriately. For example, for unsigned
operands, the output from the binary subtracter of Fig. 5.4 is the true difference if
the minuend is greater than or equal to the subtrahend. However, the output from
the subtracter is the 2’s-complement representation of the difference if the minuend
is less than the subtrahend. Again the reader is referred back to Chapter 2 for the de-
tails of binary arithmetic with signed and unsigned numbers.

5.1.2 Carry Lookahead Adder


In view of the fact that subtraction is readily achievable through the addition of
complements, further discussion of the addition/subtraction process is restricted
only to the realization of binary adders.
Although the operands are applied in parallel, all the networks illustrated thus
far in this section are subject to a ripple effect. The ripple effect dictates the overall
speed at which the network operates. To see this, consider the ripple binary adder of
Fig. 5.3. It is possible that a carry is generated in the least-significant-bit-position
stage, and, owing to the operands, this carry must propagate through all the remain-
ing stages to the highest-order-bit-position stage. For example, such a situation oc-
curs when the two n-bit operands are 01···11 and 00···01 so that the n-bit sum
10···00 is produced. Assuming the binary full adder of Fig. 5.2, two levels of logic
are needed to generate the carry at the least-significant-bit-position stage, two levels
of logic are needed to propagate the carry through each of the next n—2 higher-order
stages, and two levels of logic are needed to form the sum or carry at the highest-
order-bit-position stage. If each gate is assumed to introduce a unit time of propaga-
tion delay, then the maximum propagation delay for the ripple adder becomes 2n
units of time. Of course, this is a worst-case condition. However, since normally all
signals must complete their propagations through a network before new inputs are
applied, this worst-case condition becomes a limiting factor in the network’s overall
speed of operation. To decrease the time required to perform addition, an effort must
be made to speed up the propagation of the carries. One approach for doing this is to
reduce the number of logic levels in the path of the propagated carries. Adders de-
signed with this consideration in mind are called high-speed adders.
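The worst case described above can be made concrete by counting the stages whose carry-out is 1 during a ripple addition; with roughly two gate levels per stage, this count is what drives the 2n-unit delay. An illustrative sketch (helper name invented here; LSB-first lists as before):

```python
def carry_chain_length(x_bits, y_bits):
    """Count how many stages produce a carry-out of 1 when the two
    operands are added by rippling (least-significant bit first)."""
    carry, chain = 0, 0
    for x, y in zip(x_bits, y_bits):
        carry = (x & y) | (x & carry) | (y & carry)
        chain += carry                   # one more stage passing a carry onward
    return chain

# Worst case cited in the text for n = 8: 01111111 + 00000001; the carry
# generated at stage 0 must propagate through every remaining stage.
n = 8
x = [1] * (n - 1) + [0]                  # 0111...1, LSB first
y = [1] + [0] * (n - 1)                  # 0000...1, LSB first
print(carry_chain_length(x, y))          # -> 7
```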
Equations (5.1) and (5.2) are the sum and carry equations for the outputs at the
ith stage of a binary adder. As seen by these equations, the sum and carry outputs at a
given stage are a function of the output carry from the previous stage, which, in turn,
is a function of the output carry from still another previous stage, etc. This corre-
sponds to the undesirable ripple effect. If the input carry at a given stage is expressed
in terms of the operand variables themselves, i.e., x0, x1, ..., x_{n-1} and y0, y1, ...,
y_{n-1}, then the ripple effect is eliminated and the overall speed of the adder increased.
To see how this is done, again consider Eq. (5.1) for the output carry at the ith
stage, i.e.,

c_{i+1} = x_iy_i + x_ic_i + y_ic_i
        = x_iy_i + (x_i + y_i)c_i

The first term in the last equation, x_iy_i, is called the carry-generate function since it
corresponds to the formation of a carry at the ith stage. The second term, (x_i + y_i)c_i,
corresponds to a previously generated carry c_i that must propagate past the ith stage
to the next stage. The x_i + y_i part of this term is called the carry-propagate function.
Letting the carry-generate function be denoted by the Boolean variable g_i and the
carry-propagate function by p_i, i.e.,

g_i = x_iy_i        (5.3)
p_i = x_i + y_i        (5.4)

the output carry equation for the ith stage is given by

c_{i+1} = g_i + p_ic_i

Using this general result, the output carry at each of the stages can be written in
terms of just the carry-generate functions, the carry-propagate functions, and the
initial input carry c0 as follows:

c1 = g0 + p0c0        (5.5)
c2 = g1 + p1c1
   = g1 + p1(g0 + p0c0)
   = g1 + p1g0 + p1p0c0        (5.6)
c3 = g2 + p2c2
   = g2 + p2(g1 + p1g0 + p1p0c0)
   = g2 + p2g1 + p2p1g0 + p2p1p0c0        (5.7)
c4 = g3 + p3c3
   = g3 + p3(g2 + p2g1 + p2p1g0 + p2p1p0c0)
   = g3 + p3g2 + p3p2g1 + p3p2p1g0 + p3p2p1p0c0        (5.8)
⋮
c_{i+1} = g_i + p_ig_{i-1} + p_ip_{i-1}g_{i-2} + ··· + p_ip_{i-1}···p1g0 + p_ip_{i-1}···p0c0        (5.9)

Since each carry-generate function and carry-propagate function is itself only a
function of the operand variables as indicated by Eqs. (5.3) and (5.4), the output
carry and, correspondingly, the input carry, at each stage can be expressed as a
function of the operand variables and the initial input carry c0. In addition, since
the output sum bit at any stage is also a function of the previous stage output carry
as indicated by Eq. (5.2), it also can be expressed in terms of just the operand
variables and c0 by the substitution of an appropriate carry equation having the
form of Eq. (5.9).
form of Eq. (5.9). Parallel adders whose realizations are based on the above equa-
tions are called carry lookahead adders. The general organization of a carry
lookahead adder is shown in Fig. 5.7a where the carry lookahead network corre-
sponds to a logic network based on Eqs. (5.5) to (5.9). The sigma blocks corre-
spond to the logic needed to form the sum bit, the carry-generate function, and the

Figure 5.7 A carry lookahead adder. (a) General organization. (b) Sigma block.

carry-propagate function at each stage. A sigma block based on Eqs. (5.2) to (5.4)
is shown in Fig. 5.7b.
From the above discussion, the logic diagram of a carry lookahead adder which
handles two 4-bit operands is shown in Fig. 5.8. Generalizing from this figure, the
path length from the generation of a carry to its appearance as an input at any
higher-order stage, i.e., the path length through any stage of the carry lookahead
network, is two levels of logic. Thus, with one level of logic to form g;, two levels
of logic for the carry to propagate between any two stages, and one level of logic to
have the carry effect a sum output, the maximum propagation delay for a carry
lookahead adder is 4 units of time under the assumption that each gate introduces a
unit time of propagation delay.
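Equations (5.5) through (5.9) can be evaluated directly. The sketch below (illustrative, not from the text) builds each carry as a two-level function of the g and p terms alone, with no ripple chain, and reproduces the ripple result for the worst-case operands used earlier:

```python
def lookahead_carries(x_bits, y_bits, c0=0):
    """Evaluate the expanded carry equations (5.5)-(5.9) for LSB-first
    operand bit lists; returns [c_1, c_2, ..., c_n]."""
    g = [x & y for x, y in zip(x_bits, y_bits)]   # carry-generate, Eq. (5.3)
    p = [x | y for x, y in zip(x_bits, y_bits)]   # carry-propagate, Eq. (5.4)
    carries = []
    for i in range(len(g)):
        c = g[i]                  # g_i term
        prod = p[i]               # running product p_i p_{i-1} ...
        for j in range(i - 1, -1, -1):
            c |= prod & g[j]      # p_i ... p_{j+1} g_j term
            prod &= p[j]
        c |= prod & c0            # p_i ... p_0 c_0 term
        carries.append(c)
    return carries

# 0111 + 0001 (LSB first): the carry generated at stage 0 reaches c_3.
print(lookahead_carries([1, 1, 1, 0], [1, 0, 0, 0]))  # -> [1, 1, 1, 0]
```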

5.1.3 Large High-Speed Adders Using the Carry Lookahead Principle
The basic carry lookahead principle involves minimizing the propagation delay
time between the generation of a carry and its utilization at any higher-order stage
of an adder. Essentially this is done by having the carry input to each stage be a di-
rect function of the operand bits. Thus, the idea of having sum and carry outputs of
the ith stage be a function of x_i, y_i, and c_i is replaced by having these outputs be a
function of x0, x1, ..., x_i, y0, y1, ..., y_i, and c0. In this way, the rippling effect of
generated carries to higher-order stages is alleviated.
Although the carry lookahead adder of Fig. 5.8 performs high-speed addition
based on the carry lookahead principle, it presents a limitation in the realization of
large high-speed adders. The carry lookahead network can get quite large in terms
of gates and gate inputs as the number of bits in the operands increases. One ap-
proach to circumvent this problem is to divide the bits of the operands into blocks.

Figure 5.8 A 4-bit carry lookahead adder.

Then, by using carry lookahead adders for each block, their cascade connection re-
sults in a large adder. Figure 5.9 illustrates this approach by cascading 4-bit carry
lookahead adders. In this case, ripple carries occur between the cascaded 4-bit carry
lookahead adders.
Another approach to realizing large high-speed adders again relies on the parti-
tioning of the operands into blocks. However, use is made of generic carry look-
ahead networks called carry lookahead generators. Figure 5.10a shows a possible
4-bit carry lookahead generator. It is the same as the first three stages of the carry
lookahead network of Fig. 5.8 with two additional outputs G and P, described by
the expressions

G = g3 + p3g2 + p3p2g1 + p3p2p1g0

and

P = p3p2p1p0

Figure 5.9 Cascade connection of 4-bit carry lookahead adders.

These outputs provide for a block carry-generate signal and a block carry-propagate
signal. Using this 4-bit carry lookahead generator, the 16-bit high-speed adder shown
in Fig. 5.10b is realized where the Σ-blocks correspond to the network of Fig. 5.7b.
Both of the above compromises, i.e., cascading carry lookahead adders or
utilizing block carry lookahead generators, result in a large parallel adder much
faster than that of the ripple parallel adder.
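The block signals G and P can be sketched in the same style; `block_gp` and `block_carries` are names invented here, and the second function mimics the c = G + Pc chaining between the 4-bit blocks of a 16-bit adder, so a carry crosses a block in two gate levels instead of eight:

```python
def block_gp(x4, y4):
    """Block carry-generate and carry-propagate for a 4-bit group
    (LSB first), per the expressions for Fig. 5.10a:
    G = g3 + p3 g2 + p3 p2 g1 + p3 p2 p1 g0,  P = p3 p2 p1 p0."""
    g = [x & y for x, y in zip(x4, y4)]
    p = [x | y for x, y in zip(x4, y4)]
    G = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
    P = p[3] & p[2] & p[1] & p[0]
    return G, P

def block_carries(x16, y16, c0=0):
    """Carry out of each 4-bit block of a 16-bit adder: c_next = G + P c."""
    carries, c = [], c0
    for i in range(0, 16, 4):
        G, P = block_gp(x16[i:i + 4], y16[i:i + 4])
        c = G | (P & c)
        carries.append(c)
    return carries

x = [1] * 15 + [0]           # 0111...1, LSB first
y = [1] + [0] * 15           # 0000...1, LSB first
print(block_carries(x, y))   # -> [1, 1, 1, 0]
```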

Figure 5.10 (a) A carry lookahead generator. (b) A 16-bit high-speed adder.

5.2 DECIMAL ADDERS


At times, digital systems are required to handle decimal numbers. As was men-
tioned in Chapter 2, owing to the availability and reliability of two-valued circuits,
the decimal digits are represented by groups of binary digits. Numerous codes for
this purpose were given in Sec. 2.10. When performing arithmetic in these digital
systems, the system must be capable of accepting the operands in some binary-
coded form and producing results also in the same coding scheme.
Again only addition is considered in this section since subtraction can be
achieved by means of complements. The use of complements with decimal numbers
was also discussed in Chapter 2.
The general form of a single-decade decimal adder, i.e., an adder corresponding to a single-order digit position, is given in Fig. 5.11. In this figure it is assumed
that a 4-bit code is used for each decimal digit. Here the two decimal digits serving
as operands are denoted by A3A2A1A0 and B3B2B1B0. A carry, denoted by C_in, also
appears as an input from the addition of the previous decade. The outputs from the
adder are a single sum digit, Z3Z2Z1Z0, and a carry, C_out. Although the signal values
denoting the decimal digits depend upon the code being used, the carries, C_in and
C_out, are only 0 or 1. Thus, only a single line is shown in the figure for each of them.
From Fig. 5.11 it is seen that the five output variables Z3, Z2, Z1, Z0, and C_out are
Boolean functions of the nine input variables A3, A2, A1, A0, B3, B2, B1, B0, and C_in.
To design the single-decade adder, a truth table for the output functions can be constructed in which the desired sum digit and output carry are given for each possible
pair of input digits and input carry. This truth table has 2^9 = 512 rows. However,
since each of the decimal digits has only 10 code groups and since the carry from
the previous decade is only 0 or 1, it follows that the output variables have specified
values for only 200 of the 512 rows. Even so, this is an extremely large table with
which to work, so an alternate approach should be considered.
Figure 5.11 Organization of a single-decade decimal adder.

The 8421 weighted coding scheme is the most commonly occurring in digital
systems and is frequently referred to as simply BCD for binary-coded decimal.
When using BCD, a single-decade decimal adder can be constructed by first performing conventional binary addition on the two binary-coded operands and then
applying a corrective procedure. This approach is illustrated in Fig. 5.12.

Figure 5.12 Organization of a single-decade BCD adder.

The code
groups for the two decimal digits are added using a 4-bit binary adder as discussed
in the previous section to produce intermediate results KP3P2P1P0. These results are
then modified so as to obtain the appropriate output carry and code group for the
sum digit, i.e., C_out Z3Z2Z1Z0. Since each operand digit has a decimal value from 0 to
9, along with the fact that a carry from a previous digit position is at most 1, the decimal sum at each digit position must be in the range from 0 to 19. Table 5.3 summarizes the various outputs from the 4-bit binary adder and the required outputs from
the single-decade decimal adder. As shown in the table, if the sum of the two decimal digits and input carry is less than 10, then the code group for the required BCD
sum and output carry digits appear at the outputs of the 4-bit binary adder. In this
case no corrective procedure is necessary since KP3P2P1P0 = C_out Z3Z2Z1Z0. On the
other hand, when the two decimal-digit operands and carry from the previous
decade produce an output from the 4-bit binary adder of KP3P2P1P0 = 01010,
01011, ..., 10011, which corresponds to the decimal sums of 10 through 19, corrective action must be taken to get the appropriate values for C_out Z3Z2Z1Z0.
The need for a correction is divided into two cases as indicated by the dashed
lines in Table 5.3. Consider first the situation when the decimal sums are in the
range from 16 to 19. Here, the outputs from the 4-bit binary adder appear as
KP3P2P1P0 = 10000, 10001, 10010, or 10011; while the required outputs from the
single-decade decimal adder should be C_out Z3Z2Z1Z0 = 10110, 10111, 11000, or
11001, respectively. In each of these cases, it is immediately recognized that the occurrence of the carry K indicates that a carry C_out also is necessary. Furthermore, if
the binary quantity 0110 is added to the output P3P2P1P0, then the correct sum digit,
Z3Z2Z1Z0, is obtained. That is, the addition of a decimal 6, i.e., binary 0110, to the
output from the 4-bit binary adder is the necessary correction whenever the carry bit
K is 1.

Table 5.3 Comparing binary and BCD sums

              Binary sum           Required BCD sum
Decimal sum   K  P3 P2 P1 P0      C_out  Z3 Z2 Z1 Z0
  0           0  0  0  0  0        0     0  0  0  0
  1           0  0  0  0  1        0     0  0  0  1
  2           0  0  0  1  0        0     0  0  1  0
  3           0  0  0  1  1        0     0  0  1  1
  4           0  0  1  0  0        0     0  1  0  0
  5           0  0  1  0  1        0     0  1  0  1
  6           0  0  1  1  0        0     0  1  1  0
  7           0  0  1  1  1        0     0  1  1  1
  8           0  1  0  0  0        0     1  0  0  0
  9           0  1  0  0  1        0     1  0  0  1
 10           0  1  0  1  0        1     0  0  0  0
 11           0  1  0  1  1        1     0  0  0  1
 12           0  1  1  0  0        1     0  0  1  0
 13           0  1  1  0  1        1     0  0  1  1
 14           0  1  1  1  0        1     0  1  0  0
 15           0  1  1  1  1        1     0  1  0  1
 16           1  0  0  0  0        1     0  1  1  0
 17           1  0  0  0  1        1     0  1  1  1
 18           1  0  0  1  0        1     1  0  0  0
 19           1  0  0  1  1        1     1  0  0  1

Now consider the situation when the output from the 4-bit binary adder corresponds
to the decimal sums 10 to 15. These outputs appear as KP3P2P1P0 = 01010,
01011, ..., 01111 and the required outputs are CoutZ3Z2Z1Z0 = 10000, 10001, ...,
10101, respectively. In each of these cases, it is necessary to have Cout = 1 even
though K = 0. Again it is immediately recognized that the addition of decimal 6,
i.e., binary 0110, to the output from the 4-bit binary adder, P3P2P1P0, results in the
correct sum digit. That is, whenever the six binary combinations P3P2P1P0 = 1010,
1011, ..., 1111 occur, the corrective procedure is to add the decimal quantity 6.
These six binary combinations correspond to the invalid code groups in the 8421
code. To obtain a Boolean expression to detect these six binary combinations, a
Karnaugh map is constructed as shown in Fig. 5.13. Obtaining the minimal sum
from the map, it is seen that a correction is needed to the binary sum whenever the
Boolean expression P3P2 + P3P1 has the value of 1.
In summary, to design a single-decade BCD adder having the organization of
Fig. 5.12, the two decimal digits are added as binary numbers. No correction to the
binary sum is necessary when KP3P2P1P0 ≤ 01001, but the binary equivalent of the
decimal 6 must be added to P3P2P1P0 when KP3P2P1P0 > 01001. The Boolean
expression describing the need for a correction is

Add 6 = K + P3P2 + P3P1        (5.10)


CHAPTER 5 Logic Design with MSI Components and Programmable Logic Devices 245

Figure 5.13 Karnaugh map to detect the combinations P3P2P1P0 = 1010, 1011, ..., 1111.

The first term corresponds to the situation when 10000 ≤ KP3P2P1P0 ≤ 10011, i.e.,
whenever the carry bit K is 1, and the remaining two terms correspond to the situation
when 01010 ≤ KP3P2P1P0 ≤ 01111, i.e., whenever the code group for the sum digit
is invalid. It is also noted from Table 5.3 that whenever a corrective action is necessary,
a carry Cout should be sent to the next decade. Thus, Eq. (5.10) also describes
the conditions for the generation of a carry. Figure 5.14 shows the logic diagram of

[Figure: an upper 4-bit binary adder sums X3X2X1X0 and Y3Y2Y1Y0 with the carry-in
Cin; its sum outputs (P3P2P1P0) and carry K feed the correction logic of Eq. (5.10),
whose output Cout enables a lower 4-bit binary adder (carry-in 0) to add 0110 and
produce Z3Z2Z1Z0.]

Figure 5.14 A single-decade BCD adder.


246 DIGITAL PRINCIPLES AND DESIGN

a single-decade BCD adder. In this diagram whenever Cout = 0, the outputs from
the upper 4-bit binary adder are sent to the lower 4-bit binary adder and the decimal
quantity of zero is added to it, which results in no corrective action. However,
whenever Cout = 1, decimal 6, i.e., binary 0110, is added to the outputs from the
upper 4-bit binary adder so that the correct sum digit is obtained.
The above discussion was concerned with the design of a single-decade BCD
adder. A decimal adder for two n-digit BCD numbers can be constructed by cascad-
ing the network of Fig. 5.14 in much the same way as was done for the ripple binary
adder.
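The correction rule of Eq. (5.10) is easy to check numerically. The following Python sketch (illustrative only, not from the text; the function name and variable names are ours) models one decade of Fig. 5.14 behaviorally:

```python
def bcd_digit_add(x, y, c_in):
    """Add two BCD digits (0-9) plus a carry, mirroring Fig. 5.14:
    a 4-bit binary add followed by the add-6 correction of Eq. (5.10)."""
    assert 0 <= x <= 9 and 0 <= y <= 9 and c_in in (0, 1)
    s = x + y + c_in                      # upper 4-bit binary adder
    k = s >> 4                            # carry K out of the binary adder
    p = s & 0xF                           # P3 P2 P1 P0
    p3, p2, p1 = (p >> 3) & 1, (p >> 2) & 1, (p >> 1) & 1
    add6 = k | (p3 & p2) | (p3 & p1)      # Eq. (5.10): K + P3P2 + P3P1
    z = (p + 6) & 0xF if add6 else p      # lower adder adds 0110 when correcting
    return z, add6                        # BCD sum digit and carry C_out

# Two-decade example: 57 + 68 = 125, with the carry rippled between decades
z0, c1 = bcd_digit_add(7, 8, 0)   # units decade: 7 + 8 = 15 -> digit 5, carry 1
z1, c2 = bcd_digit_add(5, 6, c1)  # tens decade: 5 + 6 + 1 = 12 -> digit 2, carry 1
print(c2, z1, z0)                 # hundreds carry, tens digit, units digit: 1 2 5
```

Note that the model treats each adder arithmetically rather than gate by gate; only the correction term follows Eq. (5.10) literally.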

5.3 COMPARATORS
A commonly encountered situation in logic design is the need for a network to com-
pare the magnitudes of two binary numbers for the purpose of establishing whether
one is greater than, equal to, or less than the other. A conceptually simple approach
to the design of such a network, called a comparator, makes use of a cascade con-
nection of identical subnetworks in much the same way as was done in the design of
the parallel adder.*
To see how such a subnetwork is designed, consider two n-bit binary numbers
A = An-1 ··· Ai Ai-1 ··· A1A0 and B = Bn-1 ··· Bi Bi-1 ··· B1B0. For the purpose of
this design, assume that only one bit of corresponding order from each number is
entering the subnetwork, say, Ai and Bi, and that the two binary numbers are to be
analyzed from right to left. This subnetwork is called a 1-bit comparator. The function
of the 1-bit comparator is to establish whether AiAi-1 ··· A1A0 is greater than,
equal to, or less than BiBi-1 ··· B1B0 given the values of Ai, Bi, and whether Ai-1 ···
A1A0 is greater than, equal to, or less than Bi-1 ··· B1B0. The three conditions
describing the relative magnitudes of Ai-1 ··· A1A0 and Bi-1 ··· B1B0 are assigned to
three variables Gi, Ei, and Li where Gi = 1 denotes Ai-1 ··· A1A0 > Bi-1 ··· B1B0,
Ei = 1 denotes Ai-1 ··· A1A0 = Bi-1 ··· B1B0, and Li = 1 denotes Ai-1 ··· A1A0 <
Bi-1 ··· B1B0. Thus, the 1-bit comparator is a 5-input, 3-output network as shown in
Fig. 5.15.
Having obtained the organization of the 1-bit comparator, it is now necessary
to develop a rule for specifying the values of Gi+1, Ei+1, and Li+1 given the values of
Ai, Bi, Gi, Ei, and Li. Upon a little thought it should become clear that, regardless of

[Figure: the 1-bit comparator has inputs Ai, Bi and the stage inputs Gi
(Ai-1 ··· A1A0 > Bi-1 ··· B1B0), Ei (=), and Li (<); its outputs are Gi+1
(AiAi-1 ··· A1A0 > BiBi-1 ··· B1B0), Ei+1 (=), and Li+1 (<).]

Figure 5.15 Organization of a 1-bit comparator.

*This design of a comparator is another example of an iterative network.



Table 5.4 Truth table for a 1-bit comparator

 Ai Bi   Gi Ei Li | Gi+1 Ei+1 Li+1
 0  0    1  0  0  |  1    0    0
 0  0    0  1  0  |  0    1    0
 0  0    0  0  1  |  0    0    1
 0  1    1  0  0  |  0    0    1
 0  1    0  1  0  |  0    0    1
 0  1    0  0  1  |  0    0    1
 1  0    1  0  0  |  1    0    0
 1  0    0  1  0  |  1    0    0
 1  0    0  0  1  |  1    0    0
 1  1    1  0  0  |  1    0    0
 1  1    0  1  0  |  0    1    0
 1  1    0  0  1  |  0    0    1

(All input combinations in which Gi, Ei, and Li do not contain exactly a single 1
are don't-care conditions.)

the relative magnitudes of Ai-1 ··· A1A0 and Bi-1 ··· B1B0, if Ai = 1 and Bi = 0 then
Ai ··· A1A0 > Bi ··· B1B0; while if Ai = 0 and Bi = 1 then Ai ··· A1A0 < Bi ··· B1B0.
However, if Ai and Bi are the same, then the relative magnitudes of Ai ··· A1A0 and
Bi ··· B1B0 are the same as the relative magnitudes of Ai-1 ··· A1A0 and Bi-1 ···
B1B0. From this analysis, the truth table shown in Table 5.4 is constructed. The large
number of don't-care conditions should be noted. This is a consequence of the fact
that one and only one of the three variables Gi, Ei, and Li has the value of logic-1 at
any time. The minimal sum Boolean expressions for Table 5.4 are

Gi+1 = AiBi' + AiGi + Bi'Gi
Ei+1 = AiBiEi + Ai'Bi'Ei
Li+1 = Ai'Bi + BiLi + Ai'Li
and the corresponding logic network is shown in Fig. 5.16a. Cascading n 1-bit
comparators, as shown in Fig. 5.16b, results in a network capable of determining the
relative magnitudes of two n-bit binary numbers A and B. Particular attention should
be given to the 1-bit comparator having the bit-pair A0 and B0 as inputs. In order to
commence the comparison process, it is necessary to indicate that no previous digits
exist. This is achieved by assigning the values E0 = 1 and G0 = L0 = 0 to the first
1-bit comparator.
For the purpose of illustrating the concept of binary comparison, the above dis-
cussion was based on the design of a 1-bit comparator. Several MSI comparators are
commercially available. A typical commercial comparator provides for 4 bits of each
number to be compared within each subnetwork. This allows for a more efficient

[Figure: (a) the gate-level network for one 1-bit comparator stage, realizing the
equations for Gi+1, Ei+1, and Li+1; (b) n stages cascaded from bit-pair A0, B0 up to
An-1, Bn-1, with the final outputs Gn, En, Ln indicating A > B, A = B, and A < B.]

Figure 5.16 Comparing two binary numbers A and B. (a) 1-bit comparator network.
(b) Cascade connection of 1-bit comparators.

design of the subnetwork. Numbers consisting of more than 4 bits are then compared
by cascading these 4-bit comparator subnetworks in the same manner as was illus-
trated above for the 1-bit comparators.
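The behavior of the cascade in Fig. 5.16b can be sketched in Python (an illustrative model of ours, not from the text), evaluating the three minimal-sum equations derived from Table 5.4 bit by bit from right to left:

```python
def comparator_bit(a, b, g, e, l):
    """One 1-bit comparator stage, using the minimal-sum equations
    Gi+1 = AiBi' + AiGi + Bi'Gi, Ei+1 = AiBiEi + Ai'Bi'Ei,
    Li+1 = Ai'Bi + BiLi + Ai'Li."""
    na, nb = 1 - a, 1 - b
    g_next = (a & nb) | (a & g) | (nb & g)
    e_next = (a & b & e) | (na & nb & e)
    l_next = (na & b) | (b & l) | (na & l)
    return g_next, e_next, l_next

def compare(a_bits, b_bits):
    """Cascade the stages, seeded with G0 = L0 = 0, E0 = 1.
    a_bits/b_bits are tuples (A_{n-1}, ..., A1, A0)."""
    g, e, l = 0, 1, 0
    for a, b in zip(reversed(a_bits), reversed(b_bits)):  # least significant first
        g, e, l = comparator_bit(a, b, g, e, l)
    return g, e, l    # (A > B, A = B, A < B)

print(compare((1, 0, 1, 1), (1, 1, 0, 0)))  # 11 vs. 12 -> (0, 0, 1), i.e., A < B
```

Exactly one of the three returned values is 1, matching the one-hot convention of the G, E, L variables.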

5.4 DECODERS
Frequently, digital information represented in some binary form must be converted
into some alternate binary form. This is achieved by a multiple-input, multiple-output
logic network referred to as a decoder. The most commonly used decoder is the

[Figure: a decoder block with n input lines, numbered 0 to n-1, and 2^n output
lines, numbered 0 to 2^n - 1.]

Figure 5.17 An n-to-2^n-line decoder symbol.

n-to-2^n-line decoder. This digital network has n input lines and 2^n output lines with
the property that only one of the 2^n output lines responds, say with a logic-1, to a
given input combination of values on its n input lines. A symbol for such a device is
shown in Fig. 5.17.
The realization of the n-to-2^n-line decoder is straightforward. Figure 5.18
shows the logic diagram, truth table, and symbol of a 3-to-8-line decoder. In this
figure the three input lines are assigned the variables x0, x1, and x2; while the
eight output lines are assigned the variables z0, z1, ..., z7. As shown in the truth
table, only one output line responds, i.e., is at logic-1, for each of the input
combinations.
To further understand the labels in the symbol of Fig. 5.18c, let a binary 0 be
associated with a logic-0 and a binary 1 be associated with a logic-1. In addition,
let the ith input line be weighted by 2^i for i = 0, 1, 2. In this way, the input
combinations can be regarded as binary numbers with the consequence that the jth
output line is at logic-1, for j = 0, 1, ..., 7, only when input combination j is
applied.
The n-to-2^n-line decoder is only one of several types of decoders. Function-specific
decoders exist having fewer than 2^n outputs. For example, a decoder having
4 inputs and 10 outputs in which a single responding output line corresponds to a
combination of the 8421 code is referred to as a BCD-to-decimal decoder. There are
also function-specific decoders in which more than one output line responds to a
given input combination. For example, there is a four-input-line, seven-output-line
decoder that accepts the 4 bits of the 8421 code and is used to drive a seven-segment
display. However, the n-to-2^n-line decoders are more flexible than the
function-specific decoders. It is now shown that they can be used as a general
component for logic design.

5.4.1 Logic Design Using Decoders


In Fig. 5.18, the Boolean expressions describing the outputs of the decoder are
also written. Each of these output expressions corresponds to a single minterm.

[Figure: logic diagram, truth table, and symbol of the 3-to-8-line decoder; each
output is a single minterm of the inputs:
z0 = x2'x1'x0' = m0, z1 = x2'x1'x0 = m1, z2 = x2'x1x0' = m2, z3 = x2'x1x0 = m3,
z4 = x2x1'x0' = m4, z5 = x2x1'x0 = m5, z6 = x2x1x0' = m6, z7 = x2x1x0 = m7.]

Figure 5.18 A 3-to-8-line decoder. (a) Logic diagram. (b) Truth table.
(c) Symbol.

Hence, an n-to-2^n-line decoder is a minterm generator. Recall that any Boolean
function is describable by a sum of minterms. Thus, by using or-gates in conjunction
with an n-to-2^n-line decoder, realizations of Boolean functions are possible.
Although these realizations do not correspond to minimal sum-of-products
expressions, the realizations are simple to produce due to the nature of the
n-to-2^n-line decoder. This is particularly convenient when several functions of

the same variables have to be realized. To illustrate this, consider the pair of
expressions

f1(x2,x1,x0) = Σm(1,2,4,5)
f2(x2,x1,x0) = Σm(1,5,7)

Using a single 3-to-8-line decoder and two or-gates, the realization shown in Fig. 5.19
is immediately obtained.
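This decoder-plus-or-gate scheme can be sketched behaviorally in Python (an illustrative model of ours, not from the text):

```python
def decoder_3to8(x2, x1, x0):
    """3-to-8-line decoder: output line j is 1 exactly when the inputs,
    read as a binary number, equal j (a minterm generator)."""
    j = 4 * x2 + 2 * x1 + x0
    return [1 if i == j else 0 for i in range(8)]

def f1(x2, x1, x0):
    """f1 = Σm(1,2,4,5): an or-gate summing the listed decoder outputs."""
    z = decoder_3to8(x2, x1, x0)
    return z[1] | z[2] | z[4] | z[5]

def f2(x2, x1, x0):
    """f2 = Σm(1,5,7)."""
    z = decoder_3to8(x2, x1, x0)
    return z[1] | z[5] | z[7]

rows = [(x2, x1, x0) for x2 in (0, 1) for x1 in (0, 1) for x0 in (0, 1)]
print([f1(*r) for r in rows])  # 1 exactly at minterms 1, 2, 4, 5
```

Because the decoder produces every minterm at once, adding a third function of the same variables costs only one more or-gate.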
In the realization of Fig. 5.19, the number of input terminals required of each
or-gate is equal to the number of minterms that must be summed by the gate. When
more than one-half the total number of minterms must be or-ed, it is usually more
convenient to use nor-gates rather than or-gates to perform the summing. This re-
sults in a net reduction in the total number of input terminals required of the sum-
ming gates. For example, consider the pair of expressions

f1(x2,x1,x0) = Σm(0,1,3,4,5,6)
f2(x2,x1,x0) = Σm(1,2,3,4,6)

It is possible to realize these expressions with a 3-to-8-line decoder and two


or-gates having a total of 11 input terminals between them. However, recalling
that the complement of a minterm canonical formula is the sum of those min-
terms not appearing in the original formula, the complementary expressions are
written as

f1'(x2,x1,x0) = Σm(2,7)
f2'(x2,x1,x0) = Σm(0,5,7)

[Figure: a 3-to-8-line decoder driving two or-gates.]

Figure 5.19 Realization of the Boolean expressions f1(x2,x1,x0) = Σm(1,2,4,5) and
f2(x2,x1,x0) = Σm(1,5,7) with a 3-to-8-line decoder and two or-gates.

[Figure: a 3-to-8-line decoder driving two nor-gates.]

Figure 5.20 Realization of the Boolean expressions f1(x2,x1,x0) = Σm(0,1,3,4,5,6) =
[Σm(2,7)]' and f2(x2,x1,x0) = Σm(1,2,3,4,6) = [Σm(0,5,7)]' with a 3-to-8-line
decoder and two nor-gates.

Finally, complementing these expressions by DeMorgan's law gives

f1(x2,x1,x0) = [f1'(x2,x1,x0)]' = [Σm(2,7)]'
f2(x2,x1,x0) = [f2'(x2,x1,x0)]' = [Σm(0,5,7)]'

This final pair of expressions corresponds to the realization shown in Fig. 5.20.
Here, a total of only five gate-input terminals are required.
It is also possible to obtain realizations of maxterm canonical formulas using
n-to-2^n-line decoders. In Chapter 3 it was shown that any maxterm canonical
formula can be converted into an equivalent minterm canonical formula. For example,
consider the pair of expressions
f1(x2,x1,x0) = ΠM(0,1,3,5)
f2(x2,x1,x0) = ΠM(1,3,6,7)

Using the transformation technique introduced in Sec. 3.6, these expressions are
rewritten as

f1(x2,x1,x0) = ΠM(0,1,3,5) = Σm(2,4,6,7)
f2(x2,x1,x0) = ΠM(1,3,6,7) = Σm(0,2,4,5)

These expressions lead to the realization shown in Fig. 5.21a. Alternately, from the
above discussion on using nor-gates as summing devices, the expressions can also
be written as

f1(x2,x1,x0) = ΠM(0,1,3,5) = [Σm(0,1,3,5)]'
f2(x2,x1,x0) = ΠM(1,3,6,7) = [Σm(1,3,6,7)]'
which suggests the realization of Fig. 5.21b.

[Figure: two variants of the decoder network.]

Figure 5.21 A decoder realization of f1(x2,x1,x0) = ΠM(0,1,3,5) and f2(x2,x1,x0) = ΠM(1,3,6,7).
(a) Using output or-gates. (b) Using output nor-gates.

Frequently, n-to-2^n-line decoders are constructed from nand-gates. An example
of a 3-to-8-line decoder using nand-gates, along with its truth table and symbol, is
shown in Fig. 5.22. The Boolean expressions of the outputs are also given in the
figure. In this case, for each input combination the single responding output line is
associated with a logic-0, as is readily seen by the truth table. Since each output is
logic-0 for only one input combination, it follows that each output is describable by
a single maxterm. Thus, a nand-gate realization of a decoder is a maxterm generator.
Particular attention should be given to the output terminals in the symbol of the
decoder where bubble notation is used to indicate complementation is occurring. It
should also be recalled from Sec. 3.6 that mi' = Mi.
Since any Boolean function is describable by a product of maxterms, a nand-
gate decoder, along with an and-gate, can serve as the basis of a maxterm canonical
formula realization. For example, the realization of the pair of maxterm canonical
expressions

f1(x2,x1,x0) = ΠM(0,3,5)
f2(x2,x1,x0) = ΠM(2,3,4)

is shown in Fig. 5.23.


When more than one-half of the total possible maxterms occur in a Boolean ex-
pression, the output and-gate can be replaced by a nand-gate so as to reduce the
number of inputs needed to the output gate. To illustrate this, consider the pair of
expressions

f1(x2,x1,x0) = ΠM(0,1,3,4,7)
f2(x2,x1,x0) = ΠM(1,2,3,4,5,6)

[Figure: nand-gate 3-to-8-line decoder with its logic diagram, truth table, and
symbol; each output is a single maxterm:
z0 = (x2'x1'x0')' = x2 + x1 + x0 = M0, ..., z7 = (x2x1x0)' = x2' + x1' + x0' = M7.
The symbol shows bubbles on the output lines.]

Figure 5.22 A 3-to-8-line decoder using nand-gates. (a) Logic diagram.
(b) Truth table. (c) Symbol.

Figure 5.23 Realization of the pair of maxterm canonical expressions f1(x2,x1,x0) =
ΠM(0,3,5) and f2(x2,x1,x0) = ΠM(2,3,4) with a 3-to-8-line decoder and two
and-gates.

Using the fact that the complement of a maxterm canonical formula is the product of
those maxterms not appearing in the original formula, the complementary formulas are

f1'(x2,x1,x0) = ΠM(2,5,6)
f2'(x2,x1,x0) = ΠM(0,7)

Complementing both sides of each equation results in

f1(x2,x1,x0) = [f1'(x2,x1,x0)]' = [ΠM(2,5,6)]'
f2(x2,x1,x0) = [f2'(x2,x1,x0)]' = [ΠM(0,7)]'


These expressions suggest the realization of Fig. 5.24.
Although the nand-gate version of an n-to-2^n-line decoder is a maxterm generator,
it can also be used to realize expressions in minterm canonical form. This is
done by simply transforming the minterm canonical formula into its equivalent
maxterm canonical formula, as was discussed in Sec. 3.6. Again, either and-gates or
nand-gates are used to collect the maxterms. For example, the pair of minterm
canonical formulas
canonical formulas
f1(x2,x1,x0) = Σm(0,2,6,7)
f2(x2,x1,x0) = Σm(3,5,6,7)

can be written as

f1(x2,x1,x0) = Σm(0,2,6,7) = ΠM(1,3,4,5)
f2(x2,x1,x0) = Σm(3,5,6,7) = ΠM(0,1,2,4)

Figure 5.24 Realization of the Boolean expressions f1(x2,x1,x0) = ΠM(0,1,3,4,7) =
[ΠM(2,5,6)]' and f2(x2,x1,x0) = ΠM(1,2,3,4,5,6) = [ΠM(0,7)]' with a 3-to-8-line
decoder and two nand-gates.

which has the realization shown in Fig. 5.25a, or written as

f1(x2,x1,x0) = Σm(0,2,6,7) = [ΠM(0,2,6,7)]'
f2(x2,x1,x0) = Σm(3,5,6,7) = [ΠM(3,5,6,7)]'

which has the realization shown in Fig. 5.25b.

5.4.2 Decoders with an Enable Input


Normally decoders have one or more additional input lines that are referred to as
enable inputs. This is illustrated in Fig. 5.26, where a single enable input, E, is used
in an and-gate realization of a 2-to-4-line decoder, and in Fig. 5.27, where a single
enable input, E, is used in a nand-gate realization of a 2-to-4-line decoder. In each
figure, a truth table and symbol are included. These truth tables are said to be
compressed since not all input combinations explicitly appear. In compressed truth
tables the X's indicate don't-care conditions. In this way, several rows of a normal
truth table are replaced by a single row in a compressed truth table.
To function as the previously explained decoders, a logic-1 is applied to the enable
input E of the decoder in Fig. 5.26; while in Fig. 5.27 a logic-0 is applied to the
enable input E. In such cases, the decoders are said to be enabled. On the other
hand, when the enable inputs are such as to prevent the decoding process, the
decoders are said to be disabled. In the case of Fig. 5.26, all outputs of the decoder are
at logic-0 when it is disabled; while in the case of Fig. 5.27, all outputs of the
decoder are at logic-1 when it is disabled. Particular attention should be given to the
symbol of Fig. 5.27c, where the bubble on the enable input line indicates that the
decoder is enabled when the input on this line is at logic-0.

[Figure: two 3-to-8-line nand-gate decoder networks, one with output and-gates
and one with output nand-gates.]

Figure 5.25 A decoder realization of f1(x2,x1,x0) = Σm(0,2,6,7) and f2(x2,x1,x0) =
Σm(3,5,6,7). (a) Using output and-gates. (b) Using output nand-gates.

[Figure: and-gate 2-to-4-line decoder; each output is a minterm of x1, x0 and-ed
with the enable, e.g., z3 = x1x0E.]

Figure 5.26 And-gate 2-to-4-line decoder with an enable input. (a) Logic diagram.
(b) Compressed truth table. (c) Symbol.

Il = &S ie)
aS +

Xo

mine
Zz} = X\XpE =

= aparenr

Enable
(®)
po
_Inputs Outputs
Ex, X% Z % 2% % Xy ——

Enable
(E)

Figure 5.27 Nand-gate 2-to-4-line decoder with an enable input. (a) Logic
diagram. (b) Compressed truth table. (c) Symbol.

The enable input provides the decoder with additional flexibility. For example,
suppose a digital network is to be designed which accepts data information and
must channel it to one of four outputs. This is achieved using a decoder in the
configuration shown in Fig. 5.28. Here, the data are applied to the enable input. By
entering a binary combination on the other two input lines, labeled as select lines in
the figure, precisely one output line is selected to receive the information appearing
on the data input line. In particular, if x1 = 0 and x0 = 1, then the output line labeled
[Figure: a 2-to-4-line decoder with the data input applied to its enable line and
x1, x0 serving as select lines; the four outputs are z0 = x1'x0'E, z1 = x1'x0E,
z2 = x1x0'E, and z3 = x1x0E.]

Figure 5.28 Demultiplexer.



z1, which is described by the Boolean expression x1'x0E, corresponds to 1·1·E = E
and hence follows the data on the input line E; i.e., the output line z1 has the same
bit value as on the data input line E. All the other output lines are at logic-0 during
this time. This process is known as demultiplexing. For this reason, decoders with
enable inputs are also referred to as decoders/demultiplexers.
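A behavioral sketch of the demultiplexing action (illustrative Python of ours, not from the text):

```python
def demux_1to4(data, x1, x0):
    """Decoder used as a demultiplexer: the data bit is applied to the
    enable input and routed to the output selected by x1 x0; all other
    outputs stay at logic-0."""
    j = 2 * x1 + x0
    return [data if i == j else 0 for i in range(4)]

print(demux_1to4(1, 0, 1))  # data 1 appears on z1: [0, 1, 0, 0]
print(demux_1to4(0, 0, 1))  # z1 also follows a 0:  [0, 0, 0, 0]
```

Note that when the data bit is 0, the selected output is indistinguishable from the unselected ones; the select lines determine only which output is allowed to follow the data.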
Decoders with enable inputs are also used to construct larger decoders from
smaller decoders. An example of this is shown in Fig. 5.29, where all the decoder

[Figure: a first-level 2-to-4-line decoder on x3, x2 (enabled by E = 1) drives the
enable inputs of four second-level 2-to-4-line decoders on x1, x0; the 16 outputs
are the minterms x3'x2'x1'x0' through x3x2x1x0.]

Figure 5.29 A 4-to-16-line decoder constructed from 2-to-4-line decoders.



output lines are labeled with their corresponding Boolean expressions. Here the
first-level decoder is used to generate the four combinations of the x2 and x3 variables
since E = 1. Each of these combinations is applied to the enable input of a
second-level decoder that introduces the four combinations of the x0 and x1 variables.
The net result is a network that generates the 16 minterms of four variables
or, equivalently, serves as a 4-to-16-line decoder.
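The two-level structure of Fig. 5.29 can be modeled in a few lines of Python (an illustrative sketch of ours, not from the text):

```python
def dec_2to4(x1, x0, enable):
    """2-to-4-line decoder with an active-high enable: all outputs are 0
    when disabled, otherwise line (2*x1 + x0) responds."""
    if not enable:
        return [0, 0, 0, 0]
    j = 2 * x1 + x0
    return [1 if i == j else 0 for i in range(4)]

def dec_4to16(x3, x2, x1, x0):
    """A first-level decoder on x3, x2 enables one of four second-level
    decoders on x1, x0, yielding all 16 minterms."""
    first = dec_2to4(x3, x2, 1)     # first level is permanently enabled
    out = []
    for e in first:                 # each first-level line enables one decoder
        out.extend(dec_2to4(x1, x0, e))
    return out

print(dec_4to16(1, 0, 1, 1).index(1))  # input 1011 -> output line 11 responds
```

Only one second-level decoder is enabled at a time, so exactly one of the 16 outputs is at logic-1 for any input combination.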

5.5 ENCODERS
Like decoders, encoders also provide for the conversion of binary information from
one form to another. Encoders are essentially the inverse of decoders. Normally
decoders have more output lines than input lines. On the other hand, networks that
have more input lines than output lines are usually called encoders.
Perhaps the simplest encoder is the 2^n-to-n-line encoder in which an assertive
logic value, say, logic-1, on one of its 2^n input lines causes the corresponding binary
code to appear at the output lines. If it is assumed that at most one input line is
asserted* at any time, then the 2^n-to-n-line encoder is simply a collection of or-gates.
Figure 5.30 shows a 2^n-to-n-line encoder symbol and Fig. 5.31 shows the logic
diagram for an 8-to-3-line encoder. The equations for the three outputs of Fig. 5.31 are

z0 = x1 + x3 + x5 + x7
z1 = x2 + x3 + x6 + x7
z2 = x4 + x5 + x6 + x7

In general, the Boolean expression for the output zj is the sum of each input xi
in which the binary representation of i has a 1 in the 2^j-bit position.

[Figure: an encoder block with 2^n input lines, numbered 0 to 2^n - 1, and n
output lines, numbered 0 to n-1.]

Figure 5.30 A 2^n-to-n-line encoder symbol.

*When a named input signal to a logic network is to cause an action when at logic-1, the signal is said to
be active high. Similarly, when a named input signal to a logic network is to cause an action when at
logic-0, the signal is said to be active low. When a signal is at its active level, it is said to be asserted.

[Figure: three or-gates forming the encoder outputs z0, z1, z2 from inputs x0
through x7.]

Figure 5.31 An 8-to-3-line encoder.

The assumption that at most a single input to the 2^n-to-n-line encoder is
asserted at any time is significant in its operation. For example, in the encoder of
Fig. 5.31, assume that both x2 and x5 are simultaneously logic-1. Logic-1's then
appear at all three output terminals, implying that x7 must have been logic-1. For
this reason, priority encoders have been developed. In a priority encoder, a priority
scheme is assigned to the input lines so that whenever more than one input
line is asserted at any time, the output is determined by the input line having the
highest priority. For example, Table 5.5 is a condensed truth table specifying the
behavior of a priority encoder where the output is determined by the asserted
input having the highest index, i.e., xi has higher priority than xj if i > j. Thus,
referring to Table 5.5, if x4 = x5 = x6 = x7 = 0 and x3 = 1, then z2z1z0 = 011
regardless of the values of the x0, x1, and x2 inputs.
An output is also included in Table 5.5, labeled valid, to indicate that at least
one input line is asserted. This is done so as to distinguish the situation that no
input line is asserted from when the x0 input line is asserted, since in both cases
z2z1z0 = 000.

Table 5.5 Condensed truth table for an 8-to-3-line priority encoder

             Inputs              |    Outputs
 x7 x6 x5 x4 x3 x2 x1 x0        | z2 z1 z0 Valid
  0  0  0  0  0  0  0  0        |  0  0  0   0
  0  0  0  0  0  0  0  1        |  0  0  0   1
  0  0  0  0  0  0  1  X        |  0  0  1   1
  0  0  0  0  0  1  X  X        |  0  1  0   1
  0  0  0  0  1  X  X  X        |  0  1  1   1
  0  0  0  1  X  X  X  X        |  1  0  0   1
  0  0  1  X  X  X  X  X        |  1  0  1   1
  0  1  X  X  X  X  X  X        |  1  1  0   1
  1  X  X  X  X  X  X  X        |  1  1  1   1
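The highest-index-wins behavior is captured by the following Python sketch (an illustrative model of ours, not from the text):

```python
def priority_encoder_8to3(x):
    """8-to-3-line priority encoder: the highest-indexed asserted input
    determines the output code; 'valid' distinguishes the all-zero input
    case from x0 alone being asserted."""
    for i in range(7, -1, -1):                       # scan highest priority first
        if x[i]:
            return (i >> 2) & 1, (i >> 1) & 1, i & 1, 1   # z2, z1, z0, valid
    return 0, 0, 0, 0                                # nothing asserted

# x3 asserted along with lower-priority inputs x0 and x1: output is still 011
print(priority_encoder_8to3([1, 1, 0, 1, 0, 0, 0, 0]))  # (0, 1, 1, 1)
```

The list index here plays the role of the input subscript, so x[3] corresponds to the input line x3 of Table 5.5.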

5.6 MULTIPLEXERS
Another very useful MSI device is the multiplexer. Multiplexers are also called data
selectors. The basic function of this device is to select one of its 2^n data input lines and
place the corresponding information appearing on this line onto a single output line.
Since there are 2^n data input lines, n bits are needed to specify which input line is to be
selected. This is achieved by placing the binary code for a desired data input line onto
its n select input lines. A symbol for a 2^n-to-1-line multiplexer is shown in Fig. 5.32.
Typically an enable, or strobe, line is also included to provide greater flexibility as in
the case of decoders. The multiplexer shown in Fig. 5.32 is enabled by applying a
logic-1 to the E input terminal. Some commercial multiplexers require a logic-0 for
enabling. In such a case an inversion bubble appears in the symbol at the E input terminal.
A realization of a 4-to-1-line multiplexer is given in Fig. 5.33 along with its
compressed truth table and symbol. The X's in the compressed truth table denote
irrelevant, i.e., don't-care, conditions. As shown in the figure, each data input line Ii
goes to its own and-gate. The select lines are used to uniquely select one of the
and-gates. Thus, if the multiplexer is enabled, then the output corresponds to the value
on the data input line of the selected and-gate. As in the case of decoders, the 0-1
combinations on the select lines are regarded as binary numbers. The decimal
equivalents of these numbers determine which data input lines are selected and
serve to identify the corresponding input terminals in the symbol.
Table 5.6 provides an alternate description of the behavior of the 4-to-1-line
multiplexer. This description is frequently referred to as a function table. Here,
rather than listing the functional values on the output lines, the input that appears at
the output is listed for each combination of values on the select lines. Again an X in
the table indicates an irrelevant condition. From either the logic diagram or function
table, an algebraic description of the multiplexer can immediately be written as

f = (I0S1'S0' + I1S1'S0 + I2S1S0' + I3S1S0)E        (5.11)
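Eq. (5.11) can be checked directly with a gate-level Python sketch (illustrative only; the function name and argument names are ours):

```python
def mux_4to1(i0, i1, i2, i3, s1, s0, e):
    """Gate-level form of Eq. (5.11):
    f = (I0·S1'S0' + I1·S1'S0 + I2·S1S0' + I3·S1S0)·E."""
    ns1, ns0 = 1 - s1, 1 - s0                       # complemented select lines
    return ((i0 & ns1 & ns0) | (i1 & ns1 & s0) |
            (i2 & s1 & ns0) | (i3 & s1 & s0)) & e

print(mux_4to1(0, 1, 0, 1, s1=0, s0=1, e=1))  # select code 01 picks I1 -> 1
print(mux_4to1(0, 1, 0, 1, s1=0, s0=1, e=0))  # disabled: output forced to 0
```

Exactly one product term can be 1 for a given select code, which is why the or-gate output simply follows the selected data input.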

[Figure: a multiplexer block with 2^n data input lines, n select input lines, an
enable line, and a single output line.]

Figure 5.32 A 2^n-to-1-line multiplexer symbol.



[Figure: gate-level 4-to-1-line multiplexer: four and-gates, one per data input,
selected by S1, S0 and gated by E, feeding a single or-gate.]

Figure 5.33 A 4-to-1-line multiplexer. (a) Logic diagram. (b) Compressed
truth table. (c) Symbol.

Table 5.6 Function table for a 4-to-1-line multiplexer

 E  S1 S0 | f
 0  X  X  | 0
 1  0  0  | I0
 1  0  1  | I1
 1  1  0  | I2
 1  1  1  | I3

[Figure: five 4-to-1-line multiplexers: four first-level multiplexers select among
I0 through I15 via S1, S0, and a second-level multiplexer selects among their
outputs via S3, S2.]

Figure 5.34 A multiplexer tree to form a 16-to-1-line multiplexer.

In addition to the 4-to-1-line multiplexer shown in Fig. 5.33, 2-to-1-line, 8-to-1-line,
and 16-to-1-line multiplexers are also commercially available. By interconnecting
several multiplexers in a treelike structure, it is possible to produce a larger
multiplexer. For example, Fig. 5.34 illustrates how five 4-to-1-line multiplexers are
used to construct a 16-to-1-line multiplexer. Particular attention should be given to
the select lines. As a 16-to-1-line multiplexer, there are four select inputs S3, S2, S1,
and S0. It should be noted that S3 is the most significant select line in that its input
is most heavily weighted when viewed as a binary digit; while S0 is the least
significant select line. In this way, if i is the binary combination on the S3S2S1S0-lines,
then data line Ii is selected to appear at the output. In actuality, the S1S0-inputs
select one data input from each of the first-level multiplexers. The second-level
multiplexer, via its S3S2-lines, then selects which data input reaches the output of the
multiplexer tree.
One of the primary applications of multiplexers is to provide for the transmis-
sion of information from several sources over a single path. This process is known
as multiplexing. When a multiplexer is used in conjunction with a demultiplexer,
i.e., a decoder with an enable input, an effective means is provided for connecting
information from several source locations to several destination locations. This
basic application of multiplexers and demultiplexers is illustrated in Fig. 5.35.* In
this figure, one bit of information from any of four sources is selected according to
the source address lines. This information is then placed on a wire, known as a bus,
that connects to a demultiplexer similar to the one described in Sec. 5.4. The bit-
combination on the destination address lines then determines on which of the four
output lines of the demultiplexer the data information is placed. By using n of the

[Figure: a 4-to-1-line multiplexer, addressed by the source address lines, drives a
one-bit bus that feeds a demultiplexer addressed by the destination address lines.]

Figure 5.35 A multiplexer/demultiplexer arrangement for information
transmission.

*In Fig. 5.35 the demultiplexer symbol was modified from Fig. 5.28 to emphasize the multiplexer/
demultiplexer arrangement.

structures shown in Fig. 5.35 in parallel, an n-bit word from any of four source loca-
tions is transferred to any of four destination locations.

5.6.1 Logic Design with Multiplexers


Multiplexers are also used as general logic-design devices for realizing Boolean
functions. Let us begin this discussion by considering the most direct way this is done.
Consider a three-variable Boolean function and its truth table as shown in Fig. 5.36a.
The Boolean expression corresponding to this truth table can be written as

f(x,y,z) = f0·x'y'z' + f1·x'y'z + f2·x'yz' + f3·x'yz
         + f4·xy'z' + f5·xy'z + f6·xyz' + f7·xyz        (5.12)

where fi denotes the functional values 0 and 1.* The Boolean expression for a 4-to-1-line
multiplexer was previously written as Eq. (5.11). In an analogous manner to
Eq. (5.11), an 8-to-1-line multiplexer is described by the Boolean expression

f = (I0S2'S1'S0' + I1S2'S1'S0 + I2S2'S1S0' + I3S2'S1S0
   + I4S2S1'S0' + I5S2S1'S0 + I6S2S1S0' + I7S2S1S0)E        (5.13)

[Figure: (a) a three-variable truth table with functional values f0 through f7;
(b) an 8-to-1-line multiplexer with fi applied to data input Ii, x, y, z applied to
select lines S2, S1, S0, and E = 1.]

Figure 5.36 Realization of a three-variable function
using an 8-to-1-line multiplexer.
(a) Three-variable truth table.
(b) General realization.

*Note that fi denotes a functional value in this presentation and not an entire function as in other sections of this book.

If E is assumed to be logic-1, then Eq. (5.13) is transformed into Eq. (5.12) by replacing Ii with fi, S2 with x, S1 with y, and S0 with z. In other words, by placing x, y, and z on select lines S2, S1, and S0, respectively, and placing the functional values fi on data input lines Ii, an enabled 8-to-1-line multiplexer realizes a general three-variable truth table. This realization is shown in Fig. 5.36b.
As a specific example, consider the truth table of Fig. 5.37a. By placing a logic-1 on the enable input line, the eight functional values on the eight data input lines of an 8-to-1-line multiplexer, and connecting the select lines S2,S1,S0 to x,y,z, respectively, the configuration of Fig. 5.37b becomes a realization of the given truth table.
Rather than working from a truth table, one could start with a minterm canoni-
cal formula to obtain a realization with a multiplexer. Since each minterm in an ex-
pression algebraically describes a row of a truth table having a functional value of
1, the realization is obtained by simply applying a 1 input to the Ii line if minterm mi
appears in the expression and applying a 0 input to the Ii line if mi does not appear
in the expression. For example, consider the minterm canonical formula

f(x,y,z) = Σm(0,2,3,5)

The realization is obtained by placing x, y, and z on the S2, S1, and S0 lines, respectively, logic-1 on data input lines I0, I2, I3, and I5, and logic-0 on the remaining data input lines, i.e., I1, I4, I6, and I7. In addition, the multiplexer must be enabled by setting E = 1. This again is the realization shown in Fig. 5.37b.
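This construction is easy to check by simulation. The following Python sketch (helper names are mine, not the book's) models an enabled 8-to-1-line multiplexer and confirms that placing the functional values of f(x,y,z) = Σm(0,2,3,5) on its data inputs reproduces the truth table:

```python
# Sketch: an 8-to-1-line multiplexer with functional values on its data
# inputs realizes a three-variable truth table directly.

def mux8(data, s2, s1, s0, enable=1):
    """Output is the data input addressed by S2 S1 S0 when enabled."""
    return data[s2 * 4 + s1 * 2 + s0] if enable else 0

minterms = {0, 2, 3, 5}
data_inputs = [1 if i in minterms else 0 for i in range(8)]   # f0 .. f7

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):                 # x, y, z drive S2, S1, S0
            assert mux8(data_inputs, x, y, z) == (4*x + 2*y + z in minterms)
print("multiplexer output matches Σm(0,2,3,5) for all eight input rows")
```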
If at least one input variable of a Boolean function is assumed to be available
in both its complemented and uncomplemented form, or, equivalently, a not-gate

x y z | f
0 0 0 | 1
0 0 1 | 0
0 1 0 | 1
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 0
1 1 1 | 0

(a)

Figure 5.37 Realization of f(x,y,z) = Σm(0,2,3,5). (a) Truth table. (b) 8-to-1-line multiplexer realization.

is used to generate the complement of a variable, then any n-variable function is realizable with a 2^(n-1)-to-1-line multiplexer. For example, in the case of a three-variable function, this implies that only a 4-to-1-line multiplexer is needed for a realization. To see this, again consider Eq. (5.12). Doing some simple factoring, Eq. (5.12) becomes

f(x,y,z) = (f0·z' + f1·z)x'y' + (f2·z' + f3·z)x'y + (f4·z' + f5·z)xy' + (f6·z' + f7·z)xy
Furthermore, when E = 1, Eq. (5.11) has the form

f = I0·S1'S0' + I1·S1'S0 + I2·S1S0' + I3·S1S0


Comparing these last two equations, it immediately follows that a realization of any three-variable Boolean function is obtained by placing the x and y variables on the S1 and S0 select lines of a 4-to-1-line multiplexer, the single-variable functions f2i·z' + f2i+1·z on the data input lines Ii, and letting E = 1 as shown in Fig. 5.38. In any particular situation, the single-variable functions f2i·z' + f2i+1·z reduce to 0, 1, z, or z', depending upon the values of f2i and f2i+1.
As an illustration, again consider the truth table in Fig. 5.37a. Since f0 = 1 and f1 = 0, f0·z' + f1·z evaluates to z'. Similarly, with f2 = 1 and f3 = 1, then f2·z' + f3·z = 1; with f4 = 0 and f5 = 1, then f4·z' + f5·z = z; and with f6 = 0 and f7 = 0, then f6·z' + f7·z = 0. Thus, the realization is obtained by placing x and y on the S1 and S0 select lines, respectively, z' on the I0 line, logic-1 on the I1 line, z on the I2 line, and logic-0 on the I3 line. In addition, the multiplexer must be enabled. The resulting realization is shown in Fig. 5.39.
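The same reduction can be tabulated mechanically. The sketch below (my own naming) derives the four data-line inputs for the 4-to-1-line multiplexer realization, with x and y on S1 and S0:

```python
# Sketch: for a 4-to-1 multiplexer realization of a three-variable
# function, data line Ii carries f(2i)·z' + f(2i+1)·z, which always
# reduces to one of 0, 1, z, or z'.

def residue(f_even, f_odd):
    """Reduce f_even·z' + f_odd·z to 0, 1, z, or z'."""
    return {(0, 0): "0", (1, 1): "1", (0, 1): "z", (1, 0): "z'"}[(f_even, f_odd)]

minterms = {0, 2, 3, 5}                               # f(x,y,z) = Σm(0,2,3,5)
f = [1 if i in minterms else 0 for i in range(8)]     # f0 .. f7
inputs = [residue(f[2*i], f[2*i + 1]) for i in range(4)]
print(inputs)   # -> ["z'", '1', 'z', '0'], i.e., I0 = z', I1 = 1, I2 = z, I3 = 0
```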
Alternatively, the minterm canonical formula for the truth table in Fig. 5.37a is

f(x,y,z) = x'y'z' + x'yz' + x'yz + xy'z

Figure 5.38 A general realization of a 3-variable Boolean function using a 4-to-1-line multiplexer.

Figure 5.39 Realization of f(x,y,z) = Σm(0,2,3,5) using a 4-to-1-line multiplexer.

When the expression is factored into the following form

f(x,y,z) = x'y'(z') + x'y(z' + z) + xy'(z)
         = x'y'(z') + x'y(1) + xy'(z) + xy(0)

the realization of Fig. 5.39 again results, where the entities in parentheses appear on the data input lines. The last term, xy(0), was included to indicate what input must appear on the I3 line to provide for the appropriate output when selected with x = y = 1.
Although in the above discussion the x and y variables appear on the select
lines and functions of z appear on the data lines, by appropriate factoring of Eq.
(5.12) realizations are possible where other variables appear on the select and data
lines. In this way, if only one variable is available in its complemented and uncom-
plemented form, then it should be used for the data lines; while the remaining vari-
ables are used for the select lines. Furthermore, it should be noted that the order in
which variables are assigned to the select lines affects the order in which the single-
variable functions appear as inputs to the data input lines.
Karnaugh maps provide a convenient tool for obtaining multiplexer realiza-
tions. First it is necessary to establish which variables to assign to the select lines.
Once this is done, the inputs for the Ii data lines are read directly from the map.
To illustrate this, again consider a three-variable Boolean function of x, y, and z.
Assume x is placed on the S1 line and y is placed on the S0 line. Figure 5.40a shows a three-variable Karnaugh map along with this assignment indicated by double arrows. Applying this assignment to Eq. (5.11) and letting E = 1, Eq. (5.11) becomes

f = I0·x'y' + I1·x'y + I2·xy' + I3·xy

Figure 5.40 Obtaining multiplexer realizations using Karnaugh maps. (a) Cell groupings corresponding to the data line functions. (b) Karnaugh maps for the Ii subfunctions.

Now consider each term in this expression. The first term, I0·x'y', corresponds to those cells in which x = 0 and y = 0. These are the two upper left cells of the Karnaugh map in Fig. 5.40 labeled as I0. These two cells can be regarded as a submap for the z variable as indicated in Fig. 5.40b. Thus, depending upon the 0-1 entries within this submap, the expression for I0 is readily obtained. In a similar manner, the second term, I1·x'y, corresponds to those cells in which x = 0 and y = 1. These are the two upper right cells of the map. The entries within these cells correspond to the I1 input. The cells associated with I2 and I3 are obtained in a like manner and are also shown in Fig. 5.40a.
As an example, again consider the truth table of Fig. 5.37a. The Karnaugh map is drawn in Fig. 5.41a. For emphasis, the four pairs of cells corresponding to the data inputs are redrawn as single-variable submaps in Fig. 5.41b. It should be noted that the axis labels for the I2 and I3 submaps are shown in reverse order to be consistent with the Karnaugh map of Fig. 5.41a. Grouping the 1-cells, the expressions for the subfunctions are now written. In particular, I0 = z', I1 = 1, I2 = z, and I3 = 0. This again leads to the realization shown in Fig. 5.39. Although submaps were drawn in Fig. 5.41b, the expressions for the subfunctions are obtained from the original map by noting the patterns within the appropriate pair of cells. When both cells contain 0's or 1's, then the subfunctions are 0 or 1, respectively. When one cell contains a 0 and the other a 1, Ii = z if the 1 occurs in the cell in which z = 1; while Ii = z' if the 1 occurs in the cell in which z = 0.

Figure 5.41 Realization of f(x,y,z) = Σm(0,2,3,5). (a) Karnaugh map. (b) I0, I1, I2, and I3 submaps.

Karnaugh maps can readily handle other assignments of the input variables to the select lines. For example, Fig. 5.42 illustrates the Ii submaps under two additional assignments. In Fig. 5.42a, input variable y is applied to select line S1 and input variable z is applied to select line S0. In Fig. 5.42b, input variable x is applied to select line S0 and input variable y is applied to select line S1. Depending upon the assignment, the submaps for functions of the third variable are located differently.

Figure 5.42 Using Karnaugh maps to obtain multiplexer realizations under various assignments to the select inputs. (a) Applying input variables y and z to the S1 and S0 select lines. (b) Applying input variables x and y to the S0 and S1 select lines.

Figure 5.43 Alternative realizations of f(x,y,z) = Σm(0,2,3,5). (a) Applying input variables y and z to the S1 and S0 select lines. (b) Applying input variables x and y to the S0 and S1 select lines.

However, in each case, the submaps correspond to the four combinations of values
to the variables on the select lines. Realizations of the truth table of Fig. 5.37a using
the two assignments of Fig. 5.42 are shown in Fig. 5.43.
An 8-to-1-line multiplexer can be used to realize any four-variable Boolean
function. Three of the variables are placed on the select lines. The inputs to the data
lines are then the possible single-variable functions of the fourth variable, namely,
0, 1, the variable, and its complement. Figure 5.44 shows the relationships between
the map cells and the data-line inputs under the assumption that the input variables

Figure 5.44 A select line assignment and corresponding data line functions for a multiplexer realization of a four-variable function.

w, x, and y are applied to select lines S2, S1, and S0, respectively. In this case, the eight Ii inputs are determined by pairs of cells associated with the eight combinations of values to the w, x, and y variables. An example of a four-variable function on a Karnaugh map, along with the multiplexer realization, is given in Fig. 5.45. Particular attention should be given to I7 since z = 0 corresponds to the right cell and z = 1 corresponds to the left cell of the I7 submap. As in the case of the three-variable Karnaugh map, it is a simple matter to reinterpret a four-variable map for different assignments of the input variables to the select lines.
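The same pairing rule extends directly to four variables. As a sketch (again with names of my own), the eight data-line inputs for the function of Fig. 5.45 can be computed from the minterm pairs (2i, 2i+1), which differ only in z:

```python
# Sketch: data-line inputs for an 8-to-1 multiplexer realization of
# f(w,x,y,z) = Σm(0,1,5,6,7,9,12,15), with w, x, y on S2, S1, S0.

minterms = {0, 1, 5, 6, 7, 9, 12, 15}

def data_input(i):
    """Ii is fixed by the minterm pair (2i, 2i+1): the z = 0 and z = 1 cells."""
    pair = (2*i in minterms, 2*i + 1 in minterms)
    return {(False, False): "0", (True, True): "1",
            (False, True): "z", (True, False): "z'"}[pair]

print([data_input(i) for i in range(8)])
# -> ['1', '0', 'z', '1', 'z', '0', "z'", 'z']
```

Note that I6 comes out as z' and I7 as z; I7 is exactly the submap whose cells appear in reverse order on the map.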
In the above discussion, 2^n-to-1-line multiplexers were used to realize functions of n + 1 variables. This was achieved by applying functions of a single variable to the data input lines. By allowing realizations of m-variable functions as inputs to the data input lines, 2^n-to-1-line multiplexers can be used in the realization of (n + m)-variable functions. To illustrate this, Fig. 5.46 shows a four-variable Karnaugh map in which it is assumed that the input variables w and x are applied to the S1 and S0 select inputs, respectively, of a 4-to-1-line multiplexer. This implies that functions of the y and z variables must appear at the data input lines in the overall realization.
To determine these functions, it is necessary to consider the four cases corresponding to the four assignments of 0's and 1's to the variables on the select lines. As indicated in Fig. 5.46, there are four cells corresponding to wx = 00. These four cells form the submap for the function at the I0 terminal. Similarly, the input to the I1 terminal is described by the four cells in which wx = 01, the input to the I2 terminal is

Figure 5.45 Realization of f(w,x,y,z) = Σm(0,1,5,6,7,9,12,15). (a) Karnaugh map. (b) Multiplexer realization.

described by the four cells in which wx = 10, and the input to the I3 terminal is described by the four cells in which wx = 11. By analyzing these submaps, appropriate logic is readily determined for these input terminals.
As an example, consider the Karnaugh map of Fig. 5.47a. Although the four submaps can be interpreted directly on the Karnaugh map itself, they are redrawn in Fig. 5.47b to e for clarity. These are two-variable Karnaugh maps where it is as-
Figure 5.46 Using a four-variable Karnaugh map to obtain a Boolean function realization with a 4-to-1-line multiplexer.

Figure 5.47 Realization of the Boolean function f(w,x,y,z) = Σm(0,1,5,6,7,9,13,14). (a) Karnaugh map. (b) I0 submap. (c) I1 submap. (d) I2 submap. (e) I3 submap. (f) Realization using a 4-to-1-line multiplexer. (g) Realization using a multiplexer tree.

sumed that the left and right edges are connected. From the four submaps, it immediately follows that

I0 = y'
I1 = y + z
I2 = y'z
I3 = yz' + y'z = y ⊕ z

The realization of the Boolean function is given in Fig. 5.47f.


As a further variation in using multiplexers to realize functions of n + m vari-
ables, each of the functions involving the m variables can itself be realized with
multiplexers creating a treelike structure of multiplexers. For example, each two-
variable function at the data input lines in Fig. 5.47f can be realized with 2-to-1-line
multiplexers. This results in the realization shown in Fig. 5.47g.
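The tree idea can be sketched as nested multiplexers in Python (function names are mine; the subfunctions are those derived above):

```python
# Sketch of a multiplexer tree: a 4-to-1 multiplexer selects on w, x,
# and each data input is itself a 2-to-1 multiplexer selecting on y with
# functions of z on its data lines.

def mux2(d0, d1, s):
    return d1 if s else d0

def mux4(data, s1, s0):
    return data[s1 * 2 + s0]

def f(w, x, y, z):
    i0 = mux2(1, 0, y)          # I0 = y'
    i1 = mux2(z, 1, y)          # I1 = y + z
    i2 = mux2(z, 0, y)          # I2 = y'·z
    i3 = mux2(z, 1 - z, y)      # I3 = y ⊕ z
    return mux4([i0, i1, i2, i3], w, x)

minterms = {0, 1, 5, 6, 7, 9, 13, 14}     # f(w,x,y,z) = Σm(0,1,5,6,7,9,13,14)
assert all((8*w + 4*x + 2*y + z in minterms) == bool(f(w, x, y, z))
           for w in (0, 1) for x in (0, 1) for y in (0, 1) for z in (0, 1))
print("tree of multiplexers matches the truth table")
```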

5.7 PROGRAMMABLE LOGIC DEVICES (PLDs)
With the advent of large-scale integration technology, it has become feasible to fabricate large circuits within a single chip. One consequence of this technology is the programmable logic device (PLD). Three such devices are studied in the remainder of this chapter: the programmable read-only memory (PROM), the programmable logic array (PLA), and the programmable array logic* (PAL) device.
The general structure of programmable logic devices is illustrated in Fig. 5.48.
The inputs to the PLD are applied to a set of buffer/inverters. The logic equivalent
of the buffer/inverter is shown in Fig. 5.49. These devices have both the true value
of the input as well as the complemented value of the input as its outputs. In addi-

n p
buffer/ product-term
inverters lines

m
input output
lines lines

Figure 5.48 General structure of PLDs.

*Programmable array logic is a registered trademark of Monolithic Memories, Inc., a division of Advanced Micro Devices, Inc.

Figure 5.49 Buffer/inverter. (a) Symbol. (b) Logic equivalent.

tion, these devices produce the necessary drive for the and-array which follows
since, in general, the outputs from these devices serve as inputs to a very large
number of gates. The array of and-gates accepts the n input variables and their
complements and is used to generate a set of p product terms. These product terms,
in turn, serve as inputs to an array of or-gates to realize a set of m sum-of-product
expressions.
In PLDs, one or both of the gate arrays are programmable in the sense that the
logic designer can specify the connections within an array. In this way, PLDs serve
as general circuits for the realization of a set of Boolean functions. Table 5.7 sum-
marizes which arrays are programmable for the various PLDs. In the case of the
programmable read-only memory (PROM) and the programmable array logic
(PAL) devices, only one array is programmable; while both arrays are programma-
ble in the case of the programmable logic array (PLA).
In a programmable array, the connections to each gate can be modified. One
simple approach to fabricating a programmable gate is to have each of its inputs
connected to a fuse as illustrated in Fig. 5.50a. In this figure, the gate realizes the
product term abcd. Assume, however, that the product term bc is to be generated.
To do this, the gate is programmed by removing the a and d connections. This is
done by blowing the corresponding fuses. The net result is to have a gate with the
desired connections as illustrated in Fig. 5.50b. It is assumed in this discussion that
an open input to an and-gate is equivalent to a constant logic-1 input and that an

Table 5.7 Types of PLDs

PLD     And-array       Or-array
PROM    Fixed           Programmable
PLA     Programmable    Programmable
PAL     Programmable    Fixed

Figure 5.50 Programming by blowing fuses. (a) Before programming. (b) After programming.

Figure 5.51 PLD notation. (a) Unprogrammed and-gate. (b) Unprogrammed or-gate. (c) Programmed and-gate realizing the term ac. (d) Programmed or-gate realizing the term a + b. (e) Special notation for an and-gate having all its input fuses intact. (f) Special notation for an or-gate having all its input fuses intact. (g) And-gate with nonfusible inputs. (h) Or-gate with nonfusible inputs.

open input to an or-gate is equivalent to a constant logic-0 input. Although other


schemes are used in addition to simple fuse inputs, for the purpose of this presenta-
tion, this simple approach is assumed.
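These two conventions can be captured in a small model. The sketch below (my own, not from the text) represents each programmable gate as a fuse mask over its input literals:

```python
# Sketch: fuse-programmable gates. A blown fuse leaves an input open,
# which behaves as logic-1 at an and-gate and as logic-0 at an or-gate.

def and_gate(literals, fuses):
    """fuses: 1 = intact, 0 = blown (open input acts as logic-1)."""
    return int(all(lit or not fuse for lit, fuse in zip(literals, fuses)))

def or_gate(literals, fuses):
    """fuses: 1 = intact, 0 = blown (open input acts as logic-0)."""
    return int(any(lit and fuse for lit, fuse in zip(literals, fuses)))

a, b, c, d = 1, 1, 1, 0
print(and_gate([a, b, c, d], [1, 1, 1, 1]))  # all fuses intact: realizes abcd -> 0
print(and_gate([a, b, c, d], [0, 1, 1, 0]))  # a, d fuses blown: realizes bc -> 1
```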
It should be noted that programming is really a hardware procedure. Special-
ized equipment, called programmers, is needed to carry out the programming of a
PLD by an end-user. Clearly, fused-programmable devices are programmed only
once. However, manufacturers offer devices that are reprogrammable, called
erasable PLDs. In this case, the connections can be reset to their original conditions
and then reprogrammed. Depending upon the type of device, erasing is achieved by
exposing the PLD to ultraviolet light or using electrical signals.
In the above discussion it was stated that the PLD is programmed by the
user. These PLDs are said to be field programmable. Alternatively, the user can
specify the desired connections and supply the information to the manufacturer.
The manufacturer then prepares an overlay that is used to complete the connec-
tions as the last step in the fabrication process. Such PLDs are referred to as mask
programmable.

5.7.1 PLD Notation

To indicate the connections in the and-array and or-array of a PLD, a simplified


notation is frequently used. This notation is illustrated in Fig. 5.51. Rather than
drawing all the inputs to the and-gates and or-gates, the gates are drawn with a sin-
gle input line. The inputs themselves are indicated by lines at right angles to the
single gate line. The intersections between the input lines and the single gate line
correspond to the types of connections. A cross at the intersection denotes a fusible
link that is intact; while the lack of a cross indicates the fuse is blown or no con-
nection exists. The occurrence of a hard-wired connection, i.e., one that is not fusible, is indicated by a junction dot. Figure 5.51a and b illustrates the notation for an and-gate and an or-gate prior to being programmed; while Fig. 5.51c and d shows examples of the notation for these gates after programming. For the special case when all the input fuses to a gate are kept intact, instead of showing a cross at the intersection between each input line and the single gate line, a cross is simply placed inside the gate symbol as indicated in Fig. 5.51e and f. Finally, an and-gate and or-gate with nonfusible inputs, but, rather, having hard-wired connections, are illustrated in Fig. 5.51g and h.

5.8 PROGRAMMABLE READ-ONLY MEMORIES (PROMs)
The basic structure of a programmable read-only memory (PROM) is shown in
Fig. 5.52 and its equivalent logic diagram in Fig. 5.53a. As a PLD, it consists of an
and-array with a set of buffer/inverters and an or-array. The and-array with buffer/
inverters is really an n-to-2^n-line decoder and the or-array is simply a collection of programmable or-gates. The or-array is also called the memory array. The decoder serves as a minterm generator. The n-variable minterms appear on the 2^n lines at the
280 DIGITAL PRINCIPLES AND DESIGN

Figure 5.52 Structure of a PROM.

decoder output. These are known as word lines. As is seen in Fig. 5.53a, all 2^n outputs of the decoder are connected to each of the m gates in the or-array via programmable fusible links. The n input lines are called the address lines and the m output lines the bit lines. A PROM is characterized by the number of output lines of the decoder and the number of output lines from the or-array. Hence, the PROM of Fig. 5.53a is referred to as a 2^n × m PROM.
The logic diagram of Fig. 5.53a is redrawn in Fig. 5.53b using the PLD notation introduced in the previous section. Since the and-array is fixed, i.e., not programmable, connections are shown by junction dots. The fusible connections in the or-array, however, are shown by crosses since this array is programmable.
The realization of a set of Boolean expressions using a decoder and or-gates
was discussed in Sec. 5.4. The very same approach is applicable in using a PROM
since a PROM is a device that includes both the decoder and or-gates within the
same network. Given a set of Boolean expressions in minterm canonical form or a
set of Boolean functions in truth table form, it is only necessary to determine which
programmable links of a PROM to retain and which to open. The programming of
the PROM is then carried out by blowing the appropriate fuses. PROMs are typi-
cally used for code conversions, generating bit patterns for characters, and as
lookup tables for arithmetic functions.
As a simple example of using a PROM for combinational logic design, con-
sider the Boolean expressions

f1(x2,x1,x0) = Σm(0,1,2,5,7)
f2(x2,x1,x0) = Σm(1,2,4,6)
The corresponding truth table is given in Fig. 5.54a. Since these are functions of three input variables, a PROM having a 3-to-8-line decoder is needed. In addition, since there are two functions being realized, the or-array must consist of two gates. Hence, an 8 × 2 PROM is needed for the realization. The realization is shown in Fig. 5.54b using the PLD notation. A blown fusible link on the input of an or-gate is equivalent to a logic-0 input. It should be emphasized that this example is for illustrative purposes only. From a practical point of view, PROMs are intended for combinational networks having a large number of inputs and outputs.
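Functionally, the programmed PROM behaves exactly like a lookup table, which the following sketch (helper names mine) makes explicit for the 8 × 2 example above:

```python
# Sketch: an 8×2 PROM as a lookup table. The decoder selects one word
# line per address; the programmed or-array supplies the stored 2-bit
# word (f1, f2), with f1 = Σm(0,1,2,5,7) and f2 = Σm(1,2,4,6).

f1_minterms = {0, 1, 2, 5, 7}
f2_minterms = {1, 2, 4, 6}

# "Programming": one 2-bit word per address (intact fuse = stored 1).
memory = [(int(i in f1_minterms), int(i in f2_minterms)) for i in range(8)]

def prom_read(x2, x1, x0):
    address = 4*x2 + 2*x1 + x0     # decoder raises exactly one word line
    return memory[address]         # stored word appears on the bit lines

print(prom_read(1, 0, 0))   # -> (0, 1): the word stored at address 100
```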
Figure 5.53 A 2^n × m PROM. (a) Logic diagram. (b) Representation in PLD notation.

x2 x1 x0 | f1 f2
0  0  0  | 1  0
0  0  1  | 1  1
0  1  0  | 1  1
0  1  1  | 0  0
1  0  0  | 0  1
1  0  1  | 1  0
1  1  0  | 0  1
1  1  1  | 1  0

(a)                    (b)

Figure 5.54 Using a PROM for logic design. (a) Truth table. (b) PROM realization.

It may seem strange that the structure of Fig. 5.52, as a logic-design device, is
called a read-only memory. Read-only memory devices were originally developed
to store permanent data in a digital system. In these devices each piece of data,
called a word, is accessible by specifying an address.
To see how the structure of Fig. 5.52 is viewed as a memory device, again consider Fig. 5.54. By applying a 3-bit combination to the x0, x1, and x2 lines, precisely one and-gate in the decoder is selected in the sense that its output line, i.e., word line, is logic-1. Thus, each input combination is regarded as an address of one of the word lines. As a consequence of selecting a given word line, a pattern of 0's and 1's, i.e., a word, as determined by the fusible connections to the selected word line, appears at the output terminals, i.e., the bit lines, of the device. This 0-1 pattern is considered the word stored at the address associated with the selected word line. For example, the word stored at address x2x1x0 = 100 in Fig. 5.54 is f1f2 = 01. Finally, the fact that the connections associated with the fusible links normally cannot be altered once they are formed makes the term read-only appropriate for this device. Hence, the realization shown in Fig. 5.54 is a read-only memory storing eight words each consisting of 2 bits.
For each additional input line to a PROM, the number of gates in the decoder
and the number of inputs to each gate in the or-array double. This is because all pos-
sible minterms are generated by the decoder and all the minterms appear as inputs
to the gates in the or-array. However, in many applications, not all the minterms are
necessary. In such cases, the and-array is not utilized efficiently. Also, as was seen
in the discussion on minimization, collections of minterms can frequently be re-
placed by a single product term. If the and-array is made programmable so that only
necessary product terms are generated, then its size can be controlled. As is seen in
the next two sections, programmable and-arrays occur in the PLA and PAL devices.

5.9 PROGRAMMABLE LOGIC ARRAYS (PLAs)


Another type of programmable logic device is the programmable logic array (PLA).
The PLA has the general structure of Fig. 5.48 where both the and-array and the or-
array are programmable. A logic diagram for a general PLA is given in Fig. 5.55.
For proper operation it is assumed that open input terminals to an and-gate, i.e., terminals connected to blown fuses, behave as logic-1's; while open input terminals to an or-gate behave as logic-0's. PLAs are characterized by three numbers: the number of input lines n, the number of product terms that can be generated p, i.e., the number of and-gates, and the number of output lines m. Consequently, they are designated as n×p×m PLAs. A typical PLA is 16×48×8.
As was mentioned in the previous section, in many logic design situations, not all
the minterms are needed for a realization. This is particularly true in problems involv-
ing a large number of don’t-care conditions, since minterms denoting these conditions
do not have to appear in the implementation. For n input variables, there are 2^n minterms. This is also the number of gates in the and-array of a PROM. However, in a PLA the number of gates in the and-array is significantly less than 2^n. To see the extent of the reduction in the size of the and-array, consider functions of 16 input variables.

Figure 5.55 Logic diagram of an n×p×m PLA.



In this case there are 2^16 = 65,536 minterms. However, in a 16×48×8 PLA, provision is made to realize only 48 product terms. Referring to Fig. 5.55, it should be noted that both complemented and uncomplemented inputs, for a total of 2n inputs, appear at each and-gate to provide maximum flexibility in product-term generation.
Since all minterms are generated in a PROM, the realization of a set of Boolean
functions is based on minterm canonical expressions. It is never necessary to mini-
mize these expressions prior to obtaining a realization with a PROM. On the other
hand, in the case of PLAs, depending upon how the fuses are programmed, the and-
gates are capable of generating product terms that are not necessarily minterms. As
a consequence, a realization using a PLA is based on sum-of-product expressions
that may not be canonical. However, what is significant is that the logic designer is constrained by the number of product terms that are realizable by the and-array. This
implies that it is necessary to obtain a set of expressions in which the total number
of distinct product terms does not exceed the number of gates in the and-array.
Thus, some degree of equation simplification generally is appropriate. Techniques
for minimizing a set of Boolean expressions using the criterion of minimal number
of distinct terms were previously discussed in Chapter 4.
To illustrate the use of a PLA for combinational logic design, consider the
expressions

f1(x,y,z) = Σm(0,1,3,4)
f2(x,y,z) = Σm(1,2,3,4,5)

Assume that a 3×4×2 PLA is available for the realization of the expressions. Before continuing, however, the reader should be well aware that this is not a practical application of the use of PLAs due to its simplicity, but it does serve the purpose of showing the concept of PLA combinational logic design. It is now noted that the size of the or-array in the available PLA is sufficient since it has two output or-gates. However, there are six distinct minterms between the two expressions. A realization based on the canonical expressions is therefore not possible with the assumed PLA since only four and-gates appear in the and-array. A formal approach to obtaining a pair of equivalent expressions, hopefully having at most four distinct terms, is to first establish the multiple-output prime implicants using the Quine-McCluskey method and then, using a multiple-output prime-implicant table, to find a multiple-output minimal sum having the fewest terms as discussed in Secs. 4.12 and 4.13. Of course, for real-world problems the minimization mechanics is done by specialized software written for this purpose. However, at this time let us attempt to obtain a solution using simple observations. When dealing with two output functions, it is known from Chapter 4 that the complete set of multiple-output prime implicants consists of all the prime implicants of the individual functions f1 and f2 as well as the prime implicants of the product function f1·f2. It was also established in Chapter 4 that there exists a multiple-output minimal sum consisting of just multiple-output prime implicants. A subset of the prime implicants of f1 and f1·f2 are used in the multiple-output minimal sum for f1; while a subset of the prime implicants of f2 and f1·f2 are used in the multiple-output minimal sum for f2. Figure 5.56a shows the prime implicants of f1, f2, and f1·f2 as they
Prime implicants of f1: x'y', x'z, y'z'. Prime implicants of f2: x'z, x'y, xy', y'z. Prime implicants of f1·f2: x'z, xy'z'.

Figure 5.56 Example of combinational logic design using a PLA. (a) Maps showing the multiple-output prime implicants. (b) Partial covering of the f1 and f2 maps. (c) Maps for the multiple-output minimal sum. (d) Realization using a 3×4×2 PLA.


appear on Karnaugh maps.* There are a total of seven distinct prime implicants. Re-
ferring to thef,andf, : f;maps to determine the terms for the minimized f, expression,
it is now noted that of the four distinct prime implicants in these maps only prime im-
plicant xz covers the xyz = 011 I-cell of f,. Similarly, referring to the f, and f, « f, maps
to determine the terms for the minimizedf, expression, xy is the only prime implicant
of the five distinct prime implicants in these maps that covers the xyz = 010 1-cell of
fy. Hence, these two prime implicants must occur in the multiple-output minimal sum.
Furthermore, it is next noted that prime implicant xz, which is being used for f;, can
also be used forf, to cover the xyz = 001 1I-cell. Figure 5.56b shows the covering of
the f, and f; maps at this point, along with the incomplete multiple-output minimal
sum having two distinct product terms. From these maps it is immediately seen that
using one additional prime implicant subcube for each of the functions, as shown in
Fig. 5.56c, results in a multiple-output minimal sum having four distinct terms, 1.e.,

f1(x,y,z) = x'z + yz
f2(x,y,z) = x'y + x'z + xy
The corresponding 3×4×2 PLA realization is shown in Fig. 5.56d.
Although, in the above example, the final expressions for f1 and f2 could have been obtained using the prime implicants of the individual functions and ignoring the product function f1·f2, it should not be concluded that simply minimizing the individual expressions always results in a multiple-output minimal sum. A second example illustrates this point. Consider the expressions

f1(x,y,z) = Σm(0,1,3,5)
f2(x,y,z) = Σm(3,5,7)

Again a realization with a 3×4×2 PLA is attempted. The Karnaugh maps displaying the multiple-output prime implicants are shown in Fig. 5.57a. Using an analysis similar to the previous example, Fig. 5.57b shows the covering for the multiple-output minimal sum

f1(x,y,z) = x'y' + x'z + xy'z
f2(x,y,z) = yz + xy'z

which consists of only four distinct product terms. Hence, a realization using a 3×4×2 PLA is possible. An alternative covering, shown in Fig. 5.57c, corresponds to the multiple-output minimal sum

f1(x,y,z) = x'y' + y'z + x'yz
f2(x,y,z) = xz + x'yz
The realization based on the expressions obtained from Fig. 5.57b is shown in Fig. 5.57d using the PLD notation. It should be noted that a realization would not be possible with the assumed 3×4×2 PLA if the expressions were individually minimized.

*Recall that the minterms of the product function f1·f2 are the minterms common to both f1 and f2.
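The covering of Fig. 5.57b can be checked mechanically. The sketch below is my own verification aid (the helper `minterms` and the dictionary of term names are invented; a prime denotes a complemented literal):

```python
# Verify that the multiple-output minimal sum fits a 3x4x2 PLA:
# f1 = x'y' + x'z + xy'z and f2 = yz + xy'z share the term xy'z,
# so only four distinct product terms are needed.

def minterms(f):
    """Return the set of minterm indices of a 3-variable function."""
    return {4 * x + 2 * y + z
            for x in (0, 1) for y in (0, 1) for z in (0, 1)
            if f(x, y, z)}

# The four distinct product terms of the covering.
terms = {
    "x'y'": lambda x, y, z: (not x) and (not y),
    "x'z":  lambda x, y, z: (not x) and z,
    "xy'z": lambda x, y, z: x and (not y) and z,
    "yz":   lambda x, y, z: y and z,
}

f1 = lambda x, y, z: any(terms[t](x, y, z) for t in ("x'y'", "x'z", "xy'z"))
f2 = lambda x, y, z: any(terms[t](x, y, z) for t in ("yz", "xy'z"))

print(sorted(minterms(f1)))   # [0, 1, 3, 5]
print(sorted(minterms(f2)))   # [3, 5, 7]
```

Both printed minterm lists agree with the given Σm specifications, confirming the covering.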
Prime implicants of f1: x'y', x'z, y'z.  Prime implicants of f2: xz, yz.  Prime implicants of f1·f2: x'yz, xy'z.

(a)

f1 = x'y' + x'z + xy'z        f2 = yz + xy'z

(d)

Figure 5.57 Example of combinational logic design using a PLA. (a) Maps showing the multiple-output prime implicants. (b) A multiple-output minimal sum covering. (c) Alternative multiple-output minimal sum covering. (d) Realization using a 3×4×2 PLA.


Figure 5.58 Exclusive-or-gate with a programmable fuse. (a) Circuit diagram. (b) Symbolic representation.

For greater flexibility, PLAs normally make provision for either a true output or a complemented output. One way in which this is achieved is illustrated in Fig. 5.58a. The output from each gate in the or-array, fi, feeds into one input of an exclusive-or-gate. The other input to the exclusive-or-gate, having a programmable fuse to ground, is connected to a pull-up resistor as shown in the figure. Assuming positive logic, when the fuse is left intact, the lower input to the exclusive-or-gate is at ground, which is equivalent to a logic-0. Since fi ⊕ 0 = fi, it follows that the output of the exclusive-or-gate is the same as the upper input. That is, the output corresponds to the true realization of fi. On the other hand, when the fuse is blown, a positive voltage, i.e., logic-1, is applied to the lower input of the exclusive-or-gate. Since fi ⊕ 1 = fi', the net result is that the output of the exclusive-or-gate corresponds to the complemented realization of fi. The symbolic representation of the programmable exclusive-or-gate is given in Fig. 5.58b. The general structure of a PLA with true or complemented output capability is shown in Fig. 5.59.
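A one-line behavioral model makes the fuse's effect concrete (the function name is mine, not the text's):

```python
# Behavioral model of the output exclusive-or gate: the or-array output
# f is combined with the fuse-controlled input (0 when the fuse is
# intact and grounds the input, 1 when it is blown and the pull-up wins).

def pla_output(f, fuse_blown):
    lower = 1 if fuse_blown else 0   # lower exclusive-or input
    return f ^ lower                 # f XOR 0 = f;  f XOR 1 = f'

for f in (0, 1):
    assert pla_output(f, fuse_blown=False) == f       # true realization
    assert pla_output(f, fuse_blown=True) == 1 - f    # complemented realization
```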
Now consider the Boolean functions

f1(x,y,z) = Σm(1,2,3,7)
f2(x,y,z) = Σm(0,1,2,6)
The Karnaugh maps of these functions are given in Fig. 5.60. The upper two maps are used to obtain a multiple-output minimal sum for f1 and f2, while the lower two maps are used to obtain the multiple-output minimal sum for f1' and f2'. Again assume a realization of these functions using a 3×4×2 PLA is to be attempted. As in the previous examples, realizations of functions of this simplicity are not justified using PLAs. However, the interest here is to illustrate the use of complemented functions. If a 3×4×2 PLA is to be used, then only four product terms can be generated. Thus, a realization is not possible using the subcubes of 1-cells as indicated in the upper two maps of Fig. 5.60. On the other hand, the indicated
CHAPTER 5 Logic Design with MSI Components and Programmable Logic Devices 289

Figure 5.59 General structure of a PLA having true and complemented output
capability.


Figure 5.60 Karnaugh maps for the functions f1(x,y,z) = Σm(1,2,3,7) and f2(x,y,z) = Σm(0,1,2,6).

subcubes of the 1-cells for f1 and the subcubes of the 0-cells for f2 in Fig. 5.60 result in the expressions

f1(x,y,z) = x'z + x'y + yz
f2'(x,y,z) = xy' + yz

For these two expressions there are only four distinct product terms: x'z, x'y, yz, and xy'. Thus, the fuses in the and-array and or-array can be programmed for the f1 and f2' expressions. If the 3×4×2 PLA has provisions for complementing its outputs as was illustrated in Fig. 5.58, then by leaving the fuse for the f1 output exclusive-or-gate intact and blowing the fuse for the f2 output, the desired realization is possi-


Figure 5.61 Two realizations of f1(x,y,z) = Σm(1,2,3,7) and f2(x,y,z) = Σm(0,1,2,6). (a) Realization based on f1 and f2'. (b) Realization based on f1' and f2'.

ble. This is shown in Fig. 5.61a. It should be noted that f2' really occurs at one of the outputs of the or-array. By programming the corresponding exclusive-or-gate fuse, (f2')' = f2 appears at the output of the PLA.
In Fig. 5.60, it is also observed that there are only four distinct product terms in the expressions for f1' and f2'. Hence, an alternative realization using a 3×4×2 PLA with output complementation capability can be based on these expressions. In this case, both output exclusive-or-gate fuses must be blown. This results in complementing the expressions so that the original functions are realized. The corresponding realization is shown in Fig. 5.61b.
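As a sanity check of the first of these realizations (my own sketch, not from the book), the complemented-output trick can be simulated directly:

```python
# Confirm that taking f1 in true form and complementing the or-array
# output yz + xy' yields f2, so a 3x4x2 PLA with output complementation
# (as in Fig. 5.61a) suffices.

def minterms(f):
    """Return the set of minterm indices of a 3-variable function."""
    return {4 * x + 2 * y + z
            for x in (0, 1) for y in (0, 1) for z in (0, 1)
            if f(x, y, z)}

f1       = lambda x, y, z: ((not x) and z) or ((not x) and y) or (y and z)
f2_array = lambda x, y, z: (x and (not y)) or (y and z)   # or-array output f2'
f2       = lambda x, y, z: not f2_array(x, y, z)          # fuse blown: complemented

print(sorted(minterms(f1)))   # [1, 2, 3, 7]
print(sorted(minterms(f2)))   # [0, 1, 2, 6]
```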
A common way of specifying the connections in a PLA is via the PLA table. PLA tables for the two realizations of Fig. 5.61 are given in Table 5.8. In general, the PLA table has three sections for indicating connections: an input section, an output section, and a T/C section. Each product term is assigned a row in the table. The input section is used to specify the connections between the inputs and the gates in the and-array, thereby describing the connections needed to generate the product terms. The input variables are listed across the top of the input section. A 1 entry in this section indicates that a connection is to exist between the uncomplemented form of the input variable listed in the column heading and the and-gate associated with the row. On the other hand, a 0 entry in the input section indicates that a connection is to exist between the complemented form of the input variable listed in the column heading and the and-gate associated with the row. Finally, a dash indicates that there are no connections for the associated variable and the corresponding and-gate.

Table 5.8 PLA tables for the realizations of the functions given by the Karnaugh maps of Fig. 5.60. (a) PLA table for Fig. 5.61a. (b) PLA table for Fig. 5.61b.

The output section of the PLA table is used to specify the connections between the outputs of the and-gates and the inputs to the or-gates. The column headings correspond to the functions being realized. Here a 1 entry indicates that a connection is to exist between the and-gate associated with the row and the or-gate associated with the column. A dash entry in the output section indicates that the and-gate associated with the row is not connected to the or-gate associated with the column.
The T/C section indicates how the exclusive-or-gate fuses are programmed. A T entry means that the true output is used, thereby implying the fuse should be kept intact; while a C entry means that the output should be complemented, implying the fuse should be blown.
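The three-section convention just described can be interpreted mechanically. The sketch below is a hedged illustration (the interpreter and its rows are mine; they are patterned after the realization of Fig. 5.61a rather than copied from Table 5.8):

```python
# A small interpreter for PLA tables in the convention just described.
# Input entries 1/0/- select the true literal, the complemented literal,
# or no connection; output entries 1/- connect an and-gate to an or-gate;
# the T/C entry chooses the true or complemented output.

def eval_pla(variables, in_rows, out_cols, tc, assignment):
    """variables: ordered input names; in_rows[i]: '1'/'0'/'-' per variable;
    out_cols[j][i]: '1' if product i feeds or-gate j, '-' otherwise;
    tc[j]: 'T' (true output) or 'C' (complemented output)."""
    products = []
    for row in in_rows:
        p = True
        for var, entry in zip(variables, row):
            if entry == '1':
                p = p and assignment[var]          # uncomplemented literal
            elif entry == '0':
                p = p and (not assignment[var])    # complemented literal
        products.append(bool(p))
    outputs = []
    for col, polarity in zip(out_cols, tc):
        s = any(p for p, c in zip(products, col) if c == '1')
        outputs.append(s if polarity == 'T' else not s)
    return outputs

in_rows = ["0-1", "01-", "-11", "10-"]            # x'z, x'y, yz, xy'
out_f1  = "111-"                                  # f1 = x'z + x'y + yz (T)
out_f2  = "--11"                                  # f2 = (yz + xy')'   (C)
result = eval_pla("xyz", in_rows, [out_f1, out_f2], "TC",
                  {'x': 0, 'y': 0, 'z': 0})
print(result)   # [False, True]
```

Running the interpreter over all eight input assignments reproduces f1 = Σm(1,2,3,7) and f2 = Σm(0,1,2,6).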
The above examples were contrived so that multiple-output minimal expres-
sions were required to obtain the desired PLA realizations. However, PLAs are avail-
able in a variety of sizes. Nothing is gained by performing minimization if the mini-
mized and nonminimized expressions result in using the same size PLA. PLAs are
intended to provide for convenient realizations. For this reason, complete minimiza-
tion becomes a secondary consideration when obtaining a PLA realization, since no
simplification or only slight simplification of expressions may be sufficient for a real-
ization using a PLA of a specified size. For example, simply minimizing the individ-
ual expressions and making use of any common terms might be sufficient to obtain
an efficient realization without the need for determining the multiple-output minimal
sum that involves the prime implicants of the product functions. In Chapter 8, PLAs
are used without regard to determining multiple-output minimal sums. It will be seen
that the networks being designed at that time are modeled in a form that immediately
suggests a PLA realization.

5.10 PROGRAMMABLE ARRAY LOGIC (PAL) DEVICES
The final PLD to be discussed is the programmable array logic (PAL) device. In this
type of device, only the and-array is programmable. The or-array is fixed by the
manufacturer of the device. This makes the PAL device easier to program and less
expensive than the PLA. On the other hand, since the or-array is fixed, it is less flex-
ible than the PLA.
To illustrate the structure of a PAL device, a simple four-input, three-output
PAL device is shown in Fig. 5.62. The reader should be aware that this PAL device
is for illustrative purposes only and does not represent one that is commercially
available. Commercial PAL devices can handle 10 or more inputs and may provide
complemented outputs. In the figure, particular attention should be given to the
fixed or-array. Here, two of the or-gates have three inputs each, while the third or-
gate has only two inputs. All the input variables and their complements appear at
the inputs to each of the and-gates. This allows each and-gate to generate any prod-
uct term up to four variables. However, the output of each and-gate serves as an
input to only one or-gate. For this simple, illustrative PAL device, three Boolean
expressions can be realized in which two expressions can have at most three prod-
uct terms and one expression can have at most two product terms.

Figure 5.62 A simple four-input, three-output PAL device.

As an illustration of using a PAL device to realize combinational logic, consider the two functions

f1(x,y,z) = Σm(1,2,4,5,7)
f2(x,y,z) = Σm(0,1,3,5,7)

The corresponding Karnaugh maps are drawn in Fig. 5.63a from which the minimal sums are found to be

f1(x,y,z) = x'yz' + xy' + y'z + xz
f2(x,y,z) = x'y' + z
To use the illustrative PAL device of Fig. 5.62, a problem occurs with the realization of f1 since the minimal expression consists of four product terms, while no or-gate in this device has more than three inputs. However, a realization is achievable if the realization is based upon the three expressions

f1 = f1A + y'z + xz
f1A = x'yz' + xy'
f2 = x'y' + z

This realization is shown in Fig. 5.63b. Here the first two product terms of f1 are generated as the subfunction f1A. The f1A subfunction is then fed back into an input terminal and combined with the remaining product terms of f1 to produce the desired realization of f1. To realize f2, only two terms need to be generated. Since a three-input or-gate is used, the third input must correspond to a logic-0 so as not


Figure 5.63 An example of using a PAL device to realize two Boolean functions. (a) Karnaugh maps. (b) Realization.

to affect the f2 output. This is achieved by keeping all the fuses intact to the and-gate that serves as the third input to the f2 or-gate. With a variable and its complement as inputs to an and-gate, the output of the gate is always at logic-0. As was mentioned in Sec. 5.7, the X in the gate symbol indicates that all its fuses are kept intact.
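The feedback trick can be verified quickly. In this sketch (my own check; the subfunction is written f1a here, mirroring the reconstructed f1A), the first two product terms are collected, fed back, and or-ed with the remaining two:

```python
# Check the feedback realization: f1 = f1a + y'z + xz with
# f1a = x'yz' + xy' realizes Sum m(1,2,4,5,7), and f2 = x'y' + z
# realizes Sum m(0,1,3,5,7).

def minterms(f):
    """Return the set of minterm indices of a 3-variable function."""
    return {4 * x + 2 * y + z
            for x in (0, 1) for y in (0, 1) for z in (0, 1)
            if f(x, y, z)}

f1a = lambda x, y, z: ((not x) and y and (not z)) or (x and (not y))  # x'yz' + xy'
f1  = lambda x, y, z: f1a(x, y, z) or ((not y) and z) or (x and z)    # f1a + y'z + xz
f2  = lambda x, y, z: ((not x) and (not y)) or z                      # x'y' + z

print(sorted(minterms(f1)))   # [1, 2, 4, 5, 7]
print(sorted(minterms(f2)))   # [0, 1, 3, 5, 7]
```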

CHAPTER 5 PROBLEMS
5.1 Assume an adder/subtracter of the type shown in Fig. 5.6 is capable
of handling two 5-bit operands. For each of the following sets of unsigned operands, X and Y, and control input, Add/Sub, determine

the output. Check your answers by converting the binary numbers into
decimal.
a. X = 10111, Y = 00110, Add/Sub = 0
b. X= 11010, Y = 01101, Add/Sub = 0
c. X = 11001, Y = 00101, Add/Sub = 1
d. X= 10011, Y = 11010, Add/Sub = 1
5.2 Assume the binary adder/subtracter shown in Fig. 5.6 is to handle signed binary numbers in which x_{n-1} and y_{n-1} are the sign bits. Two methods were given in Sec. 2.8 for the detection of an overflow condition, one based on the sign bits of the operands and the other based on the carries into and from the sign digit position during addition.
a. Determine the additional logic needed if an overflow condition is to be
detected based on the sign bits of the operands.
b. Determine the additional logic needed if an overflow condition is to be
detected based on the carries into and from the sign digit position during
addition.
5.3 Consider the cascade connection illustrated in Fig. 5.9 of 4-bit carry
lookahead adders to obtain a large parallel adder. For this configuration,
calculate the maximum propagation delay time, assuming each gate
introduces a unit time of propagation delay, for a parallel adder handling
a. 8 bits.
b. 20 bits.
c. 40 bits.
d. n bits where n is divisible by 4.
5.4 Consider the 16-bit adder using carry lookahead generators shown in Fig.
5.10b. Calculate the maximum propagation delay time assuming each gate
introduces a unit time of propagation delay.
5.5 a. Using a 4-bit binary adder, design a network to convert a decimal digit
in 8421 code into a decimal digit in excess-3 code.
b. Using a 4-bit binary adder, design a network to convert a decimal digit
in excess-3 code into a decimal digit in 8421 code.
5.6 Using an approach similar to that for the design of a single decade 8421
BCD adder, design a single decade 8421 BCD subtracter incorporating 4-bit
binary subtracters.
5.7 Using an approach similar to that for the design of a single decade 8421
BCD adder, design a single decade adder in which the operand digits are in
excess-3 code.
5.8 Design a specialized comparator for determining if two n-bit numbers are
equal. To do this, design the necessary 1-bit comparator that can be cascaded
to achieve this task.
5.9 In the design of the 1-bit comparator in Sec. 5.3, conditions A > B, A = B, and A < B corresponded to GEL = 100, 010, and 001, respectively. Another approach to the design of a 1-bit comparator is to code the three conditions.

Figure P5.9

One possible code is S1S0 = 10, 00, and 01 for A > B, A = B, and A < B, respectively. This implies that only two output lines occur from each 1-bit comparator. However, at the output of the last 1-bit comparator, an additional network must be designed to convert the end result into terms of G, E, and L. This approach is illustrated in Fig. P5.9. Design a 1-bit comparator and output network for this approach.
5.10 Using or-gates and/or nor-gates along with a 3-to-8-line decoder of the type
shown in Fig. 5.18, realize the following pairs of expressions. In each case, the
gates should be selected so as to minimize their total number of input terminals.
a. f1(x2,x1,x0) = Σm(3)
f2(x2,x1,x0) = Σm(3,6,7)
b. f1(x2,x1,x0) = Σm(0,1,5,6,7)
f2(x2,x1,x0) = Σm(1,2,3,6,7)
c. f1(x2,x1,x0) = Σm(0,2,4)
f2(x2,x1,x0) = Σm(2,4)
5.11 Using or-gates and/or nor-gates along with a 3-to-8-line decoder of the type
shown in Fig. 5.18, realize the following pairs of expressions. In each case, the
gates should be selected so as to minimize their total number of input terminals.
a. f1(x2,x1,x0) = ΠM(0,3,5,6,7)
f2(x2,x1,x0) = ΠM(2,3,4,5,7)
b. f1(x2,x1,x0) = ΠM(0,1,7)
f2(x2,x1,x0) = ΠM(1,5,7)
c. f1(x2,x1,x0) = ΠM(1,2,5)
f2(x2,x1,x0) = ΠM(0,1,3,5,7)
5.12 Using and-gates and/or nand-gates along with a 3-to-8-line decoder of the
type shown in Fig. 5.22, realize the pairs of expressions of Problem 5.11. In
each case, the gates should be selected so as to minimize their total number
of input terminals.
5.13 Using and-gates and/or nand-gates along with a 3-to-8-line decoder of the
type shown in Fig. 5.22, realize the pairs of expressions of Problem 5.10. In
each case, the gates should be selected so as to minimize their total number
of input terminals.
5.14 Using a 4-to-16-line decoder constructed from nand-gates and having an
enable input E, design an excess-3 to 8421 code converter. Select gates so as
to minimize their total number of input terminals.

5.15 Using two 2-to-4-line decoders of the type shown in Fig. 5.26 along with
any necessary gates, construct a 3-to-8-line decoder.
5.16 Write the condensed truth table for a 4-to-2-line priority encoder with a valid
output where the highest priority is given to the input having the highest
index. Determine the minimal sum equations for the three outputs.
5.17 Repeat Problem 5.16 where the highest priority is given to the input having
the lowest index.
5.18 Figure 5.34 showed the structure of a 16-to-1-line multiplexer constructed
from only 4-to-1-line multiplexers. Other structures are possible depending
upon the type of multiplexers used. Construct a multiplexer tree for a 16-to-
1-line multiplexer
a. Using only 2-to-1-line multiplexers.
b. Using 2-to-1-line and 4-to-1-line multiplexers. (Note: three different
structures are possible.)
c. Using 2-to-1-line and 8-to-1-line multiplexers. (Note: two different
structures are possible.)
5.19 Determine a Boolean expression in terms of the input variables that corresponds to each of the multiplexer realizations shown in Fig. P5.19.

4-to-1
0——\40 MUX
(= I,

| — iD) dhru
0-4

me lay car oe. 4-to-|


FoaiPalaat IG MLK
Ww Xx Doone!
fan)
_ 2) if lf
1——+ I
i 4-to-] 3
v 1) MUX i E
v if S; So
o—_ 1, f—— ye

=| 75
ON
ON
N&

1 —“E
S, Spo

Ww x

(a) (b)

Figure P5.19

5.20 For each of the following assignments to the select lines of an 8-to-1-line multiplexer, show the location of the I_i submaps, for i = 0,1,...,7, on a 4-variable Karnaugh map having the variables w, x, y, and z.
a. x, y, and z on select lines S2, S1, and S0, respectively.
b. w, y, and z on select lines S2, S1, and S0, respectively.
c. y, x, and w on select lines S2, S1, and S0, respectively.
5.21 Realize each of the following Boolean expressions using an 8-to-1-line multiplexer where w, x, and y appear on select lines S2, S1, and S0, respectively.
a. f(w,x,y,z) = Σm(1,2,6,7,9,11,12,14,15)
b. f(w,x,y,z) = Σm(2,5,6,7,9,12,13,15)
c. f(w,x,y,z) = Σm(1,2,4,5,8,10,11,15)
d. f(w,x,y,z) = Σm(0,4,6,8,9,11,13,14)
5.22 Repeat Problem 5.21 where x, y, and z appear on select lines S2, S1, and S0, respectively.
5.23 For the function given by the Karnaugh map in Fig. 5.47a, determine a realization using a 4-to-1-line multiplexer and external gates if the w and x variables are applied to the S0 and S1 select lines, respectively.
5.24 Realize the Boolean expression
f(w,x,y,z) = Σm(4,5,7,8,10,12,15)
using a 4-to-1-line multiplexer and external gates.
a. Let w and x appear on the select lines S1 and S0, respectively.
b. Let y and z appear on the select lines S1 and S0, respectively.
5.25 Realize the Boolean expression
f(w,x,y,z) = Σm(0,2,4,5,7,9,10,14)
using a multiplexer tree structure. The first level should consist of two 4-to-1-line multiplexers with variables w and z on their select lines S1 and S0, respectively, and the second level should consist of a single 2-to-1-line multiplexer with the variable y on its select line.
5.26 A shifter is a combinational network capable of shifting a string of 0's and 1's to the left or right, leaving vacancies, by a fixed number of places as a result of a control signal. For example, assuming vacated positions are replaced by 0's, the string 0011 when shifted right by 1 bit position becomes 0001 and when shifted left by 1 bit position becomes 0110. A shifter to handle an n-bit string can be readily designed with n multiplexers. Bits from the string are applied to the data input lines. The control signals for the various actions are applied to the select input lines. The shifted string appears on the output lines. Design a shifter for handling a 4-bit string where Table P5.26 indicates the control signals and the desired actions. Vacated positions should be filled with 0's.
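The desired behavior can be sketched as a reference model before designing the multiplexer network. This sketch is mine, and it assumes the select encoding S1 S0 = 00, 01, 10, 11 follows the row order of Table P5.26:

```python
# Behavioral reference model of the 4-bit shifter; vacated positions
# are filled with '0', as the problem requires.

def shift4(bits, s1, s0):
    """bits: 4-character string of '0'/'1'; returns the shifted string."""
    sel = 2 * s1 + s0
    if sel == 0:
        return bits                 # no change
    if sel == 1:
        return '0' + bits[:3]       # shift right 1 bit position
    if sel == 2:
        return '00' + bits[:2]      # shift right 2 bit positions
    return bits[1:] + '0'           # shift left 1 bit position

print(shift4('0011', 0, 1))   # '0001'
print(shift4('0011', 1, 1))   # '0110'
```

The two printed cases match the worked example in the problem statement.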

Table P5.26

Action

No change, i.e., pass input string to output
Shift right 1 bit position
Shift right 2 bit positions
Shift left 1 bit position

5.27 For the PROM realization shown in Fig. P5.27, determine the corresponding
Boolean expressions for the outputs.

Figure P5.27

5.28 An application of PROMs is to perform code conversion. Using a PROM of an appropriate size, draw the logic diagram in PLD notation for a PROM realization to convert 4-bit binary numbers into Gray code. (Refer to Table 2.9 for the Gray code.)
5.29 An application of PROMs is to realize lookup tables for arithmetic functions. Using a PROM of the smallest appropriate size, draw the logic diagram in PLD notation for a PROM realization of the lookup table corresponding to the decimal arithmetic expression F(X) = 3X + 2 for 0 ≤ X ≤ 7 where F(X) and X are expressed in binary.
5.30 The pair of Boolean functions
f1(w,x,y,z) = Σm(2,4,5,10,12,13,14)
f2(w,x,y,z) = Σm(2,9,10,11,13,14,15)
are to be realized with a PLA having only true outputs. By considering just the prime implicants of each individual function and the product function,

determine the minimal number of product terms needed for a realization. Draw the logic diagram of the realization in PLD notation and show the corresponding PLA table.
5.31 The following sets of Boolean functions are to be realized with PLAs having both true and complemented outputs. By considering just the prime implicants of the individual functions and their complements, determine the minimal number of product terms needed for each realization. In each case, draw the logic diagram of the realization in PLD notation and show the corresponding PLA table.
a. f1(x,y,z) = Σm(1,6,7)
f2(x,y,z) = Σm(0,1,2,6,7)
f3(x,y,z) = Σm(0,1,3,4,5)
b. f1(x,y,z) = Σm(0,1,2,5,7)
f2(x,y,z) = Σm(3,4,5)
f3(x,y,z) = Σm(3,4,5,6)
c. f1(x,y,z) = Σm(1,3,4,6)
f2(x,y,z) = Σm(0,2,4,5,7)
f3(x,y,z) = Σm(1,3,5,6,7)
5.32 Using the PAL device in Fig. 5.62, draw the logic diagram of a realization in PLD notation for the following set of Boolean functions.
f1(x,y,z) = Σm(1,2,4,6,7)
f2(x,y,z) = Σm(2,4,5,6)
f3(x,y,z) = Σm(1,4,6)
CHAPTER 6

Flip-Flops and Simple Flip-Flop Applications

The logic networks studied thus far are combinational networks. A combinational network is defined as a two-valued network in which the outputs at any instant are dependent only upon the inputs present at that instant. As a consequence of this definition, it is possible to describe each output of a combinational network by a single algebraic expression whose variables are the inputs to the network.
The above behavioral definition established only one class of logic networks.
At this time, attention is turned to another class of logic networks. A sequential net-
work is defined as a two-valued network in which the outputs at any instant are de-
pendent not only upon the inputs present at that instant but also upon the past his-
tory (or sequence) of inputs. The past history of inputs must be preserved by the
network. For this reason, sequential networks are said to have memory. The mecha-
nism which is used to explain and represent the information preserved is referred to
as the internal state, secondary state, or simply state, of the network. Physically,
the internal state is a collection of signals at a set of points within the network. In
this way, the inputs at any time to a sequential network, along with its present inter-
nal state, determine the outputs of the network.
There are two basic types of sequential networks. They are distinguished by the
timing of the signals within the network. A synchronous sequential network is one in
which its behavior is determined by the values of the signals at only discrete instants
of time. These networks typically have a master-clock generator which produces a
sequence of clock pulses. It is these clock pulses that effectively sample the input
signals to determine the network behavior. This type of network is formally studied
in Chapters 7 and 8. The second type of sequential network is the asynchronous se-
quential network. In this case the behavior of the network is immediately affected by
the input signal changes. Asynchronous sequential networks are studied in Chapter 9.
The basic logic element that provides memory in many sequential networks is
the flip-flop. Actually, the flip-flop itself is a simple sequential network. It can be


shown that all sequential networks require the existence of feedback. In Sec. 6.1 it
is seen that feedback is present in flip-flop circuits. A flip-flop has two stable condi-
tions. To each of these stable conditions is associated a state, or, equivalently, the
storage of a binary symbol. This chapter is concerned with the structure and opera-
tion of several types of flip-flops and some simple networks, e.g., registers and
counters, that are constructed using them.

6.1 THE BASIC BISTABLE ELEMENT


Central to all flip-flop circuits is the basic bistable element, which is shown in Fig. 6.1. This circuit has two outputs, Q and Q'. As seen from the figure, it consists of two cross-coupled not-gates, i.e., the output of the first not-gate serving as the input to the second and the output of the second not-gate serving as the input to the first. Clearly, this structure involves feedback.
As its name implies, the basic bistable element is a circuit having two stable conditions (or states). To see this, first assume x = 0 in Fig. 6.1. The output of the upper not-gate is then 1, i.e., Q = x' = 1. Since the output of the upper not-gate is the input to the lower not-gate, x' = y = 1. Consequently, the output of the lower not-gate, i.e., y', is 0. However, since the output of the lower not-gate is connected to the input of the upper not-gate, Q' = y' = x = 0. This is precisely what was assumed to be the value of x. Thus, the circuit is stable with Q' = x = y' = 0 and Q = x' = y = 1.
Using a similar argument, it is easy to show that if it is assumed that x = 1, then the basic bistable element is stable with Q' = x = y' = 1 and Q = x' = y = 0. This is the second stable condition associated with the basic bistable element.
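The argument above can be replayed in a few lines of Python (my own sketch, not from the text):

```python
# Iterate the two cross-coupled not-gates until the signals stop
# changing; both initial values settle into a stable condition.

def settle(q):
    """q: assumed initial value at the upper not-gate's output Q."""
    q_bar = 1 - q                             # lower gate inverts Q
    while True:
        new_q, new_q_bar = 1 - q_bar, 1 - q   # each gate inverts the other
        if (new_q, new_q_bar) == (q, q_bar):
            return q, q_bar                   # stable: no signal changes
        q, q_bar = new_q, new_q_bar

print(settle(0))   # (0, 1): the element is storing a 0
print(settle(1))   # (1, 0): the element is storing a 1
```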
As a result of having two stable conditions, the basic bistable element is used to store binary symbols. In the case of positive logic, when the output line Q is 1, the element is said to be storing a 1; while when the output line Q is 0, the element is said to be storing a 0. It should be noted that the two outputs are complementary. That is, when Q = 0, Q' = 1; and when Q = 1, Q' = 0.
The binary symbol that is stored in the basic bistable element is referred to as the content or state of the element. The state of the basic bistable element is given by the signal value at the Q output terminal. Hence, the Q output termi-


Figure 6.1 Basic bistable element.



nal is called the normal output, while the Q' output is referred to as the complementary output. When the device is storing a 1, it is said to be in its 1-state or set. On the other hand, when the device is storing a 0, it is said to be in its 0-state or reset.
Although the bistable element is normally in one of its two stable conditions,
there is one more equilibrium condition that can exist. This occurs when the two
output signals are about halfway between those associated with logic-0 and logic-1.
Thus, the output is not a valid logic signal. This is known as the metastable state.
However, a small change in any of the internal signal values of the circuit, say, due
to circuit noise, quickly causes the basic bistable element to leave the metastable
state and enter one of its two stable states. Unfortunately, the amount of time a de-
vice can stay in its metastable state, if it should occur, is unpredictable. For this rea-
son, the metastable state should be avoided. To avoid the metastable state, certain
restrictions are placed on the operation of the basic bistable element. This is further
discussed in Sec. 6.3.
The basic bistable element of Fig. 6.1 has no inputs. When power is applied, it
becomes stable in one of its two stable states. It remains in this state until power is
removed. For the circuit to be useful, provisions must be made to force the device
into a particular state. A flip-flop is a bistable device, with inputs, that remains in a
given state as long as power is applied and until input signals are applied to cause
its output to change. It consists of a basic bistable element in which appropriate
logic is added in order to control its state. The process of storing a 1 into a flip-flop
is called setting or presetting the flip-flop; while the process of storing a 0 into a
flip-flop is called resetting or clearing the flip-flop.
The inputs to a flip-flop are of two types. An asynchronous or direct input is one in which a signal change of sufficient magnitude and duration essentially produces an immediate change in the state of the flip-flop. In physical circuits, the response actually occurs after a very short time delay. This point is elaborated upon
in Sec. 6.3 when the timing of signals is discussed in greater detail. On the other
hand, a synchronous input does not immediately affect the state of the flip-flop, but
rather affects the state of the flip-flop only when some control signal, usually called
an enable or clock input, also occurs. In the next several sections, various input
schemes to the basic bistable element are introduced that result in different types of
flip-flops.

6.2 LATCHES
The storage devices called latches form one class of flip-flops. This class is charac-
terized by the fact that the timing of the output changes is not controlled. That is,
the output essentially responds immediately to changes on the input lines, although
a special control signal, called the enable or clock, might also need to be present.
Thus, the input lines are continuously being interrogated. In Secs. 6.4 and 6.5 flip-flops in which the timing of the output changes is controlled are studied. In this case, the inputs are normally sampled and not interrogated continuously.

6.2.1 The SR Latch


Figure 6.2a shows the SR (or set-reset) latch that consists of two cross-coupled nor-
gates. It has two inputs, S and R, referred to as the set and reset inputs, and two out-
puts, Q and Q. As is immediately evident from the second logic diagram in Fig. 6.2a,
when S = R = 0, the logic diagram simplifies to the basic bistable element described
in the previous section, i.e., the cross-coupling of two not-gates. Thus, the latch is in
one of its two stable states when these inputs are applied. This condition corresponds
to the first row of the function table given in Fig. 6.2b. In the table, Q denotes the
present state of the latch. That is, Q is the state of the device at the time the input sig-
nals are applied. The response of the latch at the Q and Q output terminals as a con-
sequence of applying the various inputs is denoted by Q~ and Q’, respectively.
Thus, Q* is called the next state of the latch. For S = R = 0, the entries Q and QO in
the Q* and Q* columns, respectively, are interpreted to mean that the next state of
the device is the same as its present state. That is, the outputs do not change and the
present state is retained.
Now assume a 1 is applied to the R input of the upper nor-gate in Fig. 6.2a and
a 0 is applied to the S input of the lower nor-gate. Regardless of the second input to
the upper nor-gate, the output Q must become 0 since R = 1. This signal, which is
fed back to the lower nor-gate along with the 0 on the S input, causes the output of
the lower nor-gate, Q̄, to become 1. Thus it is seen that a 1 on the R input and a 0 on
the S input results in the latch being reset. This is given by the second row of the
function table in Fig. 6.2b. If the input R is subsequently returned to 0, then the
latch retains its present reset state as described by the first row of the function table

[Figure 6.2 SR latch. (a) Logic diagrams. (b) Function table, where Q+ denotes the output Q in response to the inputs; unpredictable behavior will result if the S = R = 1 inputs return to 0 simultaneously. (c) Two logic symbols.]
CHAPTER 6 Flip-Flops and Simple Flip-Flop Applications 305

since the Q̄ = 1 signal applied to the lower input of the upper nor-gate maintains the
outputs Q = 0 and Q̄ = 1.
By a similar argument, if a 1 is applied to the S input and a 0 is applied to the R
input, then the latch becomes set regardless of its present state. That is, the new out-
puts are Q = 1 and Q̄ = 0. This corresponds to the third row of the function table.
Furthermore, the latch remains in the 1-state when the S input returns to 0.
For the three situations just discussed, the outputs Q and Q̄ are complementary.
Consider now the case when S = R = 1. This causes the outputs of both nor-gates to
become 0 as indicated in the function table, and, consequently, they are not comple-
mentary outputs. Difficulty is encountered when the inputs return to 0. If one input
should return to 0 before the other, then the final state of the latch is determined by
the order in which the inputs are changed. In particular, the last input to stay at 1 de-
termines the final state. In the event of both inputs returning to 0 simultaneously, the
device may enter its metastable state. This is a condition that should be avoided as
discussed previously. Eventually, the device becomes stable, but its final state is un-
predictable since it is based on such things as construction differences and thermal
noise. For this reason, along with the fact that the outputs are not complementary, the
S = R = 1 input is frequently regarded as a forbidden input condition.
From the function table of the SR latch it should be noted that a 1 serves as the
activation signal of the device. That is, a 1 on either the S or R input terminal causes
the device to set or reset, respectively. Furthermore, since changes on the S and R
inputs can immediately affect the outputs of the latch, the S and R inputs are re-
garded as asynchronous (or direct) inputs.
Two logic symbols for the SR latch are given in Fig. 6.2c. In the second sym-
bol, the output bubble indicates the inversion of the normal state of the latch. Thus
the output terminal with the bubble corresponds to Q̄.
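The function table of Fig. 6.2b can be captured as a small next-state function. The sketch below is our own illustration (the function name and the use of None for the forbidden case are not from the text); it returns the next state Q+ given S, R, and the present state Q:

```python
def sr_latch_next(s, r, q):
    """Next state Q+ of a nor-gate SR latch, per the function table of Fig. 6.2b."""
    if s == 0 and r == 0:
        return q        # hold: the next state equals the present state
    if s == 0 and r == 1:
        return 0        # reset
    if s == 1 and r == 0:
        return 1        # set
    return None         # S = R = 1: forbidden (outputs not complementary)

# A set followed by S returning to 0 leaves the latch in the 1-state.
q = sr_latch_next(1, 0, 0)   # set: q becomes 1
q = sr_latch_next(0, 0, q)   # hold: q stays 1
```

Note how the first row of the table (hold) is what gives the latch its memory: with both inputs at 0, the next state depends only on the present state.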

6.2.2 An Application of the SR Latch: A Switch Debouncer
A common problem involving switches is the occurrence of contact bounce. This is
illustrated in Fig. 6.3a. As indicated by the waveforms, with the center contact of
the switch in its lower position, the voltage at terminal B is +V volts, while the volt-
age at terminal A is zero. Now if the center contact is moved from its lower position
to its upper position, then it is noted that the voltage at terminal B first becomes
zero, followed by the voltage at terminal A becoming +V volts when the center
contact reaches the upper terminal. However, as a result of contact bounce, the cen-
ter contact of the switch leaves terminal A, causing the output voltage at that termi-
nal to return to zero, and then upon returning to terminal A, causing the voltage at
terminal A to become +V volts again. This opening and closing effect, due to the
springiness of the contacts, may occur several times before the center contact of the
switch remains in its upper position. It is important to note that during contact
bounce, the center contact does not return all the way to terminal B. Similarly, as in-
dicated by the waveforms of Fig. 6.3a, contact bounce again occurs when the
switch is moved from its upper position to its lower position. The effect of contact

[Figure 6.3 An application of the SR latch. (a) Effects of contact bounce: waveforms VA and VB versus time. (b) A switch debouncer and its waveforms.]

bounce is normally undesirable. For example, in the case of push-button keys on a
keyboard, contact bounce may cause a system to respond as though a key had been
depressed several times in succession.
A very simple, but important, application of the SR latch is to eliminate the ef-
fect of contact bounce. A switch debouncer circuit and corresponding waveforms

are shown in Fig. 6.3b. Assume positive logic so that +V volts corresponds to
logic-1 and ground to logic-0. By use of the two pull-down resistors, logic-0 values
are ensured at the S and R terminals of the SR latch whenever the center contact of
the switch is not connected to either terminals A or B, i.e., whenever the switch is
open. Thus, when the center contact moves from its lower position to its upper posi-
tion, the SR latch remains in its reset state until the center contact reaches terminal
A, at which time the Q output of the SR latch becomes 1. If the switch now opens, as
a result of contact bounce, then the 0 input to the S and R terminals of the latch
causes the Q and Q̄ outputs to remain unchanged. Hence, by use of the SR latch, the
effect of contact bounce is eliminated. In a similar manner, the effect of contact
bounce is also eliminated when the switch moves from its upper position to its
lower position.
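The debouncing action can be checked with a short simulation. In the sketch below (our own illustration, not from the text), each sample is the (S, R) pair seen by the latch; while the center contact is in flight during bounce, the pull-down resistors force S = R = 0 and the latch simply holds:

```python
def debounce(samples, q=0):
    """Pass (S, R) samples from a bouncing SPDT switch through an SR latch."""
    out = []
    for s, r in samples:
        if s == 1 and r == 0:
            q = 1                  # contact at terminal A: set
        elif s == 0 and r == 1:
            q = 0                  # contact at terminal B: reset
        # s == r == 0: switch open (bounce in progress) -- latch holds
        out.append(q)
    return out

# Bounce on the way up: A closes, opens, closes again; Q rises once, cleanly.
print(debounce([(0, 1), (0, 0), (1, 0), (0, 0), (1, 0)]))  # [0, 0, 1, 1, 1]
```

The key observation is that the bouncing contact never swings all the way back to terminal B, so the reset condition never recurs and the output shows a single clean transition.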

6.2.3 The S̄R̄ Latch


Another type of latch, the S̄R̄ latch, is constructed by cross-coupling two nand-
gates. Such a latch is shown in Fig. 6.4a and its function table is given in
Fig. 6.4b. From the second logic diagram of Fig. 6.4a, it is immediately seen that
when S̄ = R̄ = 1 the logic diagram reverts to the basic bistable element of Sec. 6.1,
i.e., the cross-coupling of two not-gates. Thus, the device has two stable states. This
is indicated by the last row of the function table.
If just one of the inputs to the S̄R̄ latch is made 0 while the other is 1, then the
output of the nand-gate having the 0 input becomes 1. This, in turn, is applied as an
input to the second nand-gate that also has a 1 as its other input. Consequently, the

[Figure 6.4 S̄R̄ latch. (a) Logic diagrams. (b) Function table, where Q+ denotes the output Q in response to the inputs; unpredictable behavior will result if the inputs return to 1 simultaneously. (c) Two logic symbols.]

output of the second nand-gate becomes 0. Thus, if R̄ = 0 and S̄ = 1, then the latch
resets; while if R̄ = 1 and S̄ = 0, then the latch sets. These conditions are described
by the two middle rows of the function table. In either case, when the 0 input re-
turns to 1, the S̄R̄ latch retains its present state.
Similar to the SR latch, the fourth possible input combination causes difficulty.
In this case, if 0 is applied to both the S̄ and R̄ inputs, then both outputs become 1.
Now if the inputs subsequently return to 1 simultaneously, then unpredictable be-
havior results in a similar way, as was discussed for the SR latch. Thus, the applica-
tion of S̄ = R̄ = 0 is normally not recommended.
Referring to the function table of Fig. 6.4b, it is readily seen that 0 serves to ini-
tiate action in the S̄R̄ latch. That is, a 0 on the S̄ terminal causes the latch to set;
while a 0 on the R̄ terminal causes it to reset.
Two symbols for the S̄R̄ latch are shown in Fig. 6.4c. It should be noted that in-
version bubbles appear at the input terminals of the symbols since the latch re-
sponds to 0’s on the inputs.
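Because the S̄R̄ latch is activated by 0's rather than 1's, its next-state function is the active-low mirror of the SR latch's. A sketch of the table of Fig. 6.4b (our own rendering; s_n and r_n stand for the active-low S̄ and R̄ inputs):

```python
def sbar_rbar_latch_next(s_n, r_n, q):
    """Next state of a nand-gate S'R' latch (active-low inputs), per Fig. 6.4b."""
    if s_n == 1 and r_n == 1:
        return q        # hold: both inputs inactive
    if s_n == 0 and r_n == 1:
        return 1        # set: 0 on the S' terminal
    if s_n == 1 and r_n == 0:
        return 0        # reset: 0 on the R' terminal
    return None         # S' = R' = 0: not recommended
```

Comparing this with sr_latch_next above row by row shows the duality: the hold and forbidden rows swap input values, and a 0 now plays the role a 1 played in the nor-gate latch.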

6.2.4 The Gated SR Latch


The inputs for both the SR latch and the S̄R̄ latch just described are asynchronous
(or direct). That is, a change in value of these inputs causes an immediate change of
the outputs. It is frequently desirable to prevent input activation signals from affect-
ing the state of the latch immediately, but rather to have the effect occur at some de-
sirable time or, alternatively, to allow the input changes to be effective only during
a prescribed period of time. For these situations, a gated SR latch is used. The gated
SR latch is also called an SR latch with enable.
A gated SR latch is shown in Fig. 6.5a. It consists of the S̄R̄ latch along with
two additional nand-gates and a control input, C, referred to as the enable, gate, or
clock input. The enable input, C, determines when the S and R inputs become effec-
tive. As long as the enable input is 0, the outputs of nand-gates A and B are 1,
which, according to the S̄R̄-latch function table of Fig. 6.4b, keeps the S̄R̄ latch in
its current stable state. In this case, any changes on the S and R lines are blocked
and the output is said to be latched in its present state. Equivalently, the latch is said
to be disabled. This is indicated by the last row of the function table in Fig. 6.5b.
The crosses in the table under the S and R inputs are interpreted as "regardless of
the value" or, simply, "irrelevant."
The remaining four rows of the function table correspond to those situations
when the enable signal, C, is 1. In these cases the gated latch is said to be en-
abled. Here the latch behaves as a regular SR latch. The nand-gates A and B
serve to invert the signals on the S and R input lines when the latch is enabled.
Thus, a 1 on just one S or R input, in turn, becomes a 0 to the cross-coupled
nand-gates and causes the latch to set or reset, respectively. Applying 1 simulta-
neously to both the S and R input terminals when C = 1 is not recommended in
order to avoid the possibility of an unpredictable state if the activation signals
are subsequently removed simultaneously or if C is changed to 0 while both the

[Figure 6.5 Gated SR latch. (a) Logic diagram. (b) Function table, where Q+ denotes the output Q in response to the inputs; unpredictable behavior will result if S and R return to 0 simultaneously or C returns to 0 while S and R are 1. (c) Two logic symbols.]

activation signals are present. Since the effects of the S and R inputs are depen-
dent upon the presence of an enable signal, these inputs are classified as syn-
chronous inputs.
Two symbols for the gated SR latch are given in Fig. 6.5c. Again, the output
terminal having the bubble corresponds to Q̄.
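As an illustration (ours, not the book's), the function table of Fig. 6.5b can be written as a next-state function in which the enable input simply gates the ordinary SR behavior:

```python
def gated_sr_next(c, s, r, q):
    """Next state of a gated SR latch (Fig. 6.5b): S and R matter only when C = 1."""
    if c == 0:
        return q        # disabled: S and R are irrelevant ("X" in the table)
    if s == 0 and r == 0:
        return q        # enabled, no activation: hold
    if s == 0 and r == 1:
        return 0        # reset
    if s == 1 and r == 0:
        return 1        # set
    return None         # S = R = 1 with C = 1: not recommended

# With C = 0, even S = 1 is blocked and the state is unchanged.
q = gated_sr_next(0, 1, 0, 0)   # q stays 0
```

The structure of the function mirrors the circuit: the first test plays the role of nand-gates A and B blocking the inputs, and the rest is the plain SR latch.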

6.2.5 The Gated D Latch


The three latches discussed thus far each have an input combination that is not rec-
ommended. In particular, the situation in which both of the noncontrol inputs, i.e.,
S and R or S̄ and R̄, are simultaneously active. The gated D (or data) latch, whose
logic diagram and function table are shown in Fig. 6.6a-b, does not have this
problem.
The gated D latch is a gated SR latch in which a not-gate is connected between
the S and R terminals. Thus, the latch consists of a single input D that determines its
next state and a control, i.e., enable, input C that determines when the D input is ef-
fective. As indicated in the function table, when the latch is enabled, i.e., C = 1, the
output of the latch follows the values applied to the D input terminal. In particular,
if D = 0, then the latch switches to or remains in the 0-state; while if D = 1, then
the latch switches to or remains in the 1-state. However, when the latch is disabled,
i.e., C = 0, the latch remains in the state prior to the enable signal going to 0. Two
logic symbols for the gated D latch are given in Fig. 6.6c.
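Since the not-gate between S and R removes the problem input combination, the gated D latch's behavior reduces to a one-line rule. A minimal sketch of Fig. 6.6b (our own rendering):

```python
def gated_d_next(c, d, q):
    """Gated D latch (Fig. 6.6b): transparent while C = 1, opaque while C = 0."""
    return d if c == 1 else q

# While enabled, the output follows D; once C drops, D changes are ignored.
q = gated_d_next(1, 1, 0)   # q becomes 1 (follows D)
q = gated_d_next(0, 0, q)   # q stays 1 (latched)
```

Note that every input combination has a defined next state; there is no forbidden row in the table.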

[Figure 6.6 Gated D latch. (a) Logic diagram. (b) Function table, where Q+ denotes the output Q in response to the inputs. (c) Two logic symbols.]

6.3 TIMING CONSIDERATIONS


The function tables for the various latches introduced specify the state outputs as a
result of applying the input signals. However, the responses to the inputs are not re-
ally immediate, but rather occur after some appropriate time delay. This is due to
the time delays associated with the gates themselves, as was discussed in Sec. 3.10.
Furthermore, to achieve the desired responses, certain timing constraints must nor-
mally be satisfied. These timing constraints are presented in this section with refer-
ence to latches; however, they also pertain to the additional types of flip-flops that
are discussed in the next two sections.
A convenient way of showing the terminal behavior of a flip-flop is the timing
diagram. A timing diagram is a graph that depicts the input and output transitions of
a flip-flop as a function of time.

6.3.1 Propagation Delays


The propagation delay is the time it takes a change in an input signal to produce a
change in an output signal. In general, the propagation delay between each pair of
input and output terminals is different, as well as whether the change causes the out-
put to go from low to high, i.e., from 0 to 1 in positive logic, or from high to low,
i.e., from 1 to 0 in positive logic. The various propagation delays of a flip-flop are
specified by the manufacturer.
Propagation delays in an SR latch are illustrated in Fig. 6.7. Finite slopes of
the rising and falling edges of the signals are shown since their midpoints are used
in the specifications of the delay times. This figure shows the effect of first setting
and then resetting an SR latch. It should be noted that the outputs do not change in-

Figure 6.7 Propagation delays in an SR latch.

stantaneously to an input change nor do the outputs change simultaneously. Thus,
the propagation delays from low to high, i.e., from 0 to 1 in positive logic, denoted
by tPLH, and from high to low, i.e., from 1 to 0 in positive logic, denoted by tPHL,
are, in general, different as well as whether the Q or Q̄ output terminals are being
viewed.
Figure 6.8 shows the outputs of an SR latch as various signal values are applied
to the S and R inputs. This timing diagram is based on the function table of Fig. 6.2b.
For simplicity, the finite slopes of the rising and falling edges of the signals are not
shown and the propagation delays are assumed to be all equal. It should be noted in
the figure that when S = R = 1, both the Q and Q̄ outputs become 0. In addition,
special attention should be given to the response of the latch at time t1. Here it is as-
sumed that the signals on the S and R input terminals are simultaneously changed
from 1 to 0. As a consequence, the response of the latch is unpredictable as indicated
by the shaded area. The latch may be in its 0-state, 1-state, or metastable state. At

Figure 6.8 Timing diagram for an SR latch.



[Figure 6.9 Minimum pulse width constraint. Waveforms for S, R, and Q, with the set pulse narrower than tw(min).]

time t2, however, the application of the 1 on the set input terminal returns the latch
to predictable behavior after a short propagation delay time.
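The fact that Q and Q̄ do not change at the same instant can be reproduced with a unit-delay gate-level model. In the sketch below (our own simplification; each simulation step stands for one gate delay, and qn plays the role of Q̄), setting the latch from its reset state drives Q̄ low one gate delay before Q goes high:

```python
def nor(a, b):
    """Two-input nor-gate."""
    return 1 - (a | b)

def settle_sr(s, r, q, qn, steps=6):
    """Cross-coupled nor-gates with one unit of delay per gate.
    Both gates are evaluated on the previous step's values, and the
    sequence of (Q, Q') pairs is returned, one pair per gate delay."""
    trace = []
    for _ in range(steps):
        q, qn = nor(r, qn), nor(s, q)
        trace.append((q, qn))
    return trace

# Setting from the reset state (Q = 0, Q' = 1): Q' falls first, then Q rises.
print(settle_sr(1, 0, 0, 1))
# [(0, 0), (1, 0), (1, 0), (1, 0), (1, 0), (1, 0)]
```

The intermediate (0, 0) pair is the brief interval during which neither output has yet reached its final value, which is exactly the staggered transition the timing diagram of Fig. 6.7 depicts.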

6.3.2 Minimum Pulse Width


Another specification stated by the manufacturers of latches is that of a minimum
pulse width tw(min). This is the minimum amount of time a signal must be applied in
order to produce a desired result. Failure to satisfy this constraint may not produce the
intended change or may cause the latch to enter its metastable state. In either event,
unpredictable behavior results. The minimum pulse width constraint is illustrated in
Fig. 6.9. The shaded area of the output signal Q indicates that the state of the latch is
unpredictable since the set signal did not satisfy the minimum pulse width constraint.

6.3.3 Setup and Hold Times


A timing diagram for a gated D latch whose function table was given in Fig. 6.6b is
shown in Fig. 6.10. Again, for simplicity, the finite slopes of the rising and falling
edges of the signals are not shown and all propagation delays are assumed to be
equal. In accordance with the function table, and as illustrated in the figure, the
Q-output of the latch follows the input signal at the D terminal whenever the enable
signal, C, is 1. Whenever the enable signal is 0, changes at the D terminal are ig-

Figure 6.10 Timing diagram for a gated D latch.



[Figure 6.11 Illustration of an unpredictable response in a gated D latch.]

nored and the Q-output retains the state of the latch just prior to the 1 to 0 change in
the enable signal.
To achieve this operation of a gated latch, constraints are normally placed on the
time intervals between input changes. Consider the times in Fig. 6.10 at which the
enable signal C is returned to 0, causing the output to latch onto its current state.
To guarantee this latching action, a constraint is placed upon
the D signal for some minimum time before and after the enable signal goes from 1
to 0. This is shown as the shaded areas in Fig. 6.10. For proper operation, the D sig-
nal must not change during this period. The minimum time the D signal must be held
fixed before the latching action, tsu, is called the setup time; while the minimum time
the D signal must be held fixed after the latching action, th, is called the hold time.
Failure to satisfy the setup time and hold time constraints can result in unpre-
dictable output behavior, including metastability. This is illustrated in Fig. 6.11,
where unpredictable behavior occurs when latching is attempted just after a change
on the D line, i.e., when the change is assumed to have occurred within
the setup time of the gated D latch.
Setup and hold time constraints are very important properties when considering
the behavior of all types of flip-flops. In the next two sections master-slave and
edge-triggered flip-flops are discussed. The need to satisfy setup and hold times in
these types of flip-flops relative to changes in their control signal must be adhered to
for their proper operation.
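A setup/hold check amounts to verifying that no data transition falls inside the forbidden window around the latching edge. A minimal sketch (our own; the function name and time units are assumptions, not from the text):

```python
def meets_setup_hold(d_edges, c_fall, t_su, t_h):
    """True if no D transition violates the setup/hold window around a falling
    enable edge at time c_fall. All times are in the same units (e.g., ns);
    the forbidden window is (c_fall - t_su, c_fall + t_h)."""
    return all(t <= c_fall - t_su or t >= c_fall + t_h for t in d_edges)

# D changed 1 ns before the latching edge; with t_su = 3 ns that is a violation.
print(meets_setup_hold([99.0], 100.0, 3.0, 1.0))  # False
```

This is the same check a static timing analyzer performs for every flip-flop in a synchronous design, with c_fall replaced by the active clock edge.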

6.4 MASTER-SLAVE FLIP-FLOPS (PULSE-TRIGGERED FLIP-FLOPS)
In addition to latches, there are two other general categories of flip-flops. These
are the master-slave flip-flops (also called pulse-triggered flip-flops) and the edge-
triggered flip-flops. The first of these two categories is considered in this section
and the second is presented in the next section.
It was observed in Sec. 6.2 that latches have the common property of immedi-
ate output responses (to within the propagation delay times) while enabled caused

by changes on the information input lines, i.e., the S, R, and D lines. This property is
referred to as transparency. In certain applications this is an undesirable property.
Rather, it is necessary that the output changes occur only coincident with changes
on a control input line. This is particularly the case when it is necessary to sense the
current state of a flip-flop while simultaneously allowing new state information to
be entered as determined by the information lines. The property of having the tim-
ing of a flip-flop response being related to a control input signal is achieved with
master-slave and edge-triggered flip-flops.
A master-slave flip-flop consists of two cascaded sections, each capable of stor-
ing a binary symbol. The first section is referred to as the master and the second
section as the slave. Information is entered into the master on one edge or level of a
control signal and is transferred to the slave on the next edge or level of the control
signal. In its simplest form, each section is a latch.

6.4.1 The Master-Slave SR Flip-Flop


Figure 6.12a shows the master-slave SR flip-flop as constructed from two gated SR
latches and an inverter. The information input lines S and R are used to set and reset
the flip-flop. A clock signal, C, is applied to the control input line. The timing behav-
ior of the master-slave flip-flop is referenced to the control signal. This behavior is il-
lustrated in Fig. 6.12b. The transition of the control signal from its low to high value,
i.e., 0 to 1 in positive logic, is called the rising, leading, or positive edge of the control
signal; while the transition of the control signal from its high to low value, i.e., 1 to 0
in positive logic, is called the falling, trailing, or negative edge of the control signal.
Referring to Fig. 6.12, as long as C = 0 the master, being a gated SR latch, is dis-
abled and any changes on the S and R input lines are ignored. At the same time, the
slave is enabled due to the presence of the inverter. Hence, the slave is in the same state
as that of the master since the QM and Q̄M outputs of the master are connected to the S
and R inputs, respectively, of the slave. As the control signal starts to rise, the slave is
disabled, by design, at time t1 while the master remains disabled. Thus, the slave be-
comes disconnected from the master but retains the state of the master. The control sig-
nal continues to rise, and it is at time t2 that the master is enabled. While
C = 1, the master, being a gated SR latch, responds to the inputs on the S and R lines,
as was discussed in Sec. 6.2. Meanwhile, since the slave is disabled due to the presence
of the inverter, any changes to the state of the master are not reflected to the slave. The
control signal is subsequently returned to its low level at time t3. At this time, the mas-
ter is disabled, causing it to latch onto its new state. However, it is not until time t4 that
the slave is enabled. This results in the slave taking on the state of the master as the
connection is made. It is important to note that for very short periods during the rising
and falling edges of the control signal both the master and slave latches are disabled.
This is critical to the operation of a master-slave flip-flop.
It should be observed that although the master can change its state (and, corre-
spondingly, its output) at any time while the control signal is 1, it is only as the con-
trol signal goes from 1 to 0 that the slave changes its state. Thus, the output change
of the master-slave flip-flop is synchronized to the falling edge of the control signal.

[Figure 6.12 Master-slave SR flip-flop. (a) Logic diagram using gated SR latches. (b) Flip-flop action during the control signal. (c) Function table, where Q+ denotes the output Q in response to the inputs:

S R C     | Q+  Q̄+
0 0 pulse | Q   Q̄   (no change)
0 1 pulse | 0   1   (reset)
1 0 pulse | 1   0   (set)
1 1 pulse | undefined
X X 0     | Q   Q̄   (no change)

(d) Two logic symbols.]

This controlling of the output change to be coincident with the change on the con-
trol input line is precisely the property being sought by flip-flops not in the latch
category. The master-slave principle is one way in which this property is achieved.
The behavior of the master-slave SR flip-flop is summarized by the function
table in Fig. 6.12c. The pulse symbol in the C column, _| |_, indicates that the
master is enabled while the control signal is high and that the state of the master is
transferred to the slave, and, correspondingly, to the output of the flip-flop, at the
end of the pulse period. Special attention should be given to the fourth row of the
function table. This row corresponds to the situation of both S and R being 1 when
the control signal goes from high to low. Since the master is a latch, it enters an un-
predictable state, including the possibility of the metastable state. This state value is
then subsequently transferred to the slave. Hence, the output of the master-slave SR
flip-flop itself becomes unpredictable. Such a condition should be avoided. Since the
behavior of master-slave flip-flops constructed from latches is dependent upon the
rising and falling edges of the control signal as well as the period of time in which
the control signal is high, they are also referred to as pulse-triggered flip-flops.
Two logic symbols for the master-slave SR flip-flop are given in Fig. 6.12d. The
⌐ symbol, called the postponed-output indicator, at the output terminals is used to
imply that the output change is postponed until the end of the pulse period. For the
master-slave flip-flop of Fig. 6.12a, this corresponds to the time when the control sig-
nal goes from high to low. Also, as in the case of latches, bubble notation is used to in-
dicate the complementary output Q̄ of the flip-flop in the second logic symbol shown.
Figure 6.13 shows a timing diagram for the input and output terminals of a
master-slave SR flip-flop along with a timing diagram for the output terminals of the
master section of the flip-flop. For simplicity, the finite slopes of the rising and

Figure 6.13 Timing diagram for a master-slave SR flip-flop.



falling edges of the signals are not shown and the propagation delays are assumed to
be all equal. However, the sequence of events during the rise and fall times of the
control signal as indicated in Fig. 6.12 is still occurring.
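Abstracting away the four intermediate times, one full clock pulse can be modeled as "master follows S and R, then slave copies master on the falling edge." The class below is a sketch of our own (names and the use of None for the undefined state are assumptions, not from the text):

```python
class MasterSlaveSR:
    """Master-slave SR flip-flop built from two gated SR latches (Fig. 6.12a).
    One call to pulse() models a complete 0 -> 1 -> 0 excursion of the clock."""

    def __init__(self, q=0):
        self.qm = q   # master latch state
        self.qs = q   # slave latch state (the flip-flop output Q)

    def pulse(self, s, r):
        # While C = 1 the master, being a gated SR latch, responds to S and R.
        if s == 1 and r == 1:
            self.qm = None        # undefined, possibly metastable
        elif s == 1:
            self.qm = 1
        elif r == 1:
            self.qm = 0
        # On the falling edge the slave takes on the state of the master.
        self.qs = self.qm
        return self.qs

ff = MasterSlaveSR()
ff.pulse(1, 0)   # Q becomes 1, but only at the end of the pulse
```

The two-attribute structure makes the key property visible: between the rising and falling edges, qm may change while qs (the externally visible output) does not.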

6.4.2 The Master-Slave JK Flip-Flop


Since the output state of a master-slave SR flip-flop is undefined upon returning the
control input to 0 when S = R = 1, it is necessary to avoid this condition. The master-
slave JK flip-flop, on the other hand, does allow its two information input lines to be
simultaneously 1. This results in the toggling of the output of the flip-flop. That is, if
the present state is 0, then the next state is 1; while if the present state is 1, then the
next state is 0. The logic diagram of a master-slave JK flip-flop is shown in Fig. 6.14a.
The J and K inputs have the effect of setting and resetting the flip-flop, respectively,
and hence are analogous to the S and R inputs of the master-slave SR flip-flop. In ad-
dition, two and-gates are used to sense and steer the state of the slave.
To see how this flip-flop works, assume the master-slave JK flip-flop of Fig. 6.14a
is in its 1-state, the control signal, i.e., the clock, is 0, and that J = K = 1. Thus, the
master and slave latches are both in the 1-state with Q = QS = 1 and Q̄ = Q̄S = 0. As
a result of the feedback lines, the output of the J-input and-gate is logic-0 and the out-
put of the K-input and-gate is logic-1. The net effect is that S = 0 and R = 1 at the in-
puts to the master latch, although these inputs cannot affect the state of the master at
this time since C = 0. If the clock is now changed from 0 to 1, then the master resets;
while the slave, being disabled, remains in its 1-state. However, upon the clock return-
ing to 0, the content of the master is transferred to the slave, causing the new state of
the master-slave JK flip-flop to become the 0-state. Thus, the output of the master-
slave JK flip-flop toggled when J = K = 1 as the result of the control signal.
Now assume the master-slave JK flip-flop is in its 0-state, again J = K = 1, and
the control signal, i.e., the clock, is low. Thus, Q = QS = 0 and Q̄ = Q̄S = 1. In this
case the output of the J-input and-gate is logic-1; while the output of the K-input and-
gate is logic-0. At the master input terminals, S = 1 and R = 0. Hence, when the clock
is changed from 0 to 1, the master enters its 1-state, which is subsequently transferred
to the slave when the clock changes from 1 to 0. Again, the state of the master-slave
JK flip-flop toggled. The toggling behavior of the flip-flop when J = K = 1 is indi-
cated by the fourth row in the function table shown in Fig. 6.14b.
Consider now the third row of the function table that indicates that a 1 on just
the J input line has the effect of setting the flip-flop. To see this, assume the master-
slave JK flip-flop is in its 1-state when the clock is low. Thus, Q = QS = 1 and
Q̄ = Q̄S = 0. Since the slave is enabled and in its 1-state, the master must also be in
its 1-state, i.e., QM = 1 and Q̄M = 0. If J = 1 and K = 0, then the outputs of both
and-gates are logic-0 since they each have a 0 on one of their inputs, i.e., the upper
and-gate has Q̄S = 0 and the lower and-gate has K = 0. Consequently, at the input
terminals of the master, S = R = 0. When the clock becomes 1, the state of the mas-
ter does not change, i.e., it remains in its 1-state. Upon returning the clock to 0, the
slave, which in turn takes on the value of the master, also remains in its 1-state.
On the other hand, if the master and slave latches are in their 0-states when
J = 1, K = 0, and the clock is low, then Q = QM = QS = 0 and Q̄ = Q̄M = Q̄S = 1.

[Figure 6.14 Master-slave JK flip-flop. (a) Logic diagram using gated SR latches. (b) Function table, where Q+ denotes the output Q in response to the inputs:

J K C     | Q+
0 0 pulse | Q   (no change)
0 1 pulse | 0   (reset)
1 0 pulse | 1   (set)
1 1 pulse | Q̄   (toggle)
X X 0     | Q   (no change)

(c) Two logic symbols.]

The output of the J-input and-gate is logic-1 and the output of the K-input and-gate
is logic-0. Thus, S = 1 and R = 0 at the inputs to the master latch. When the clock
goes high, the master is set. The 1-state of the master is then subsequently trans-
ferred to the slave when the clock returns to 0. In summary, regardless of its present
state when J = 1 and K = 0, the master-slave JK flip-flop enters or remains in its
1-state upon the occurrence of the pulse signal on the control line. This corresponds
to the third row of the function table.
By a similar argument, if J = 0 and K = 1, then the master-slave JK flip-flop
enters or remains in its 0-state after a clock pulse has occurred. This resetting effect
is described by the second row of the function table.

Considering the remaining rows of the function table, the first row indicates
that the master-slave JK flip-flop retains its current state when J = K = 0 during a
clock pulse. Similarly, the last row indicates that whenever the clock is low, i.e.,
C = 0, the state of the flip-flop does not change.
In Fig. 6.14c two symbols are shown for the master-slave JK flip-flop. Again,
the postponed-output indicator is used to symbolize that the output change occurs
coincident with the falling edge of the control signal, i.e., when the control signal
changes from 1 to 0.
For ease of the above analysis, the logic-1 values on the J and K lines were as-
sumed to be applied prior to the application of the clock pulse. In actuality, these
values can occur anytime while the control signal is 1 since the master, being a
latch, is enabled during that time.
A timing diagram illustrating the behavior of a master-slave JK flip-flop is
shown in Fig. 6.15. Again, for simplicity, propagation delays are assumed to be
equal and the finite slopes of the rising and falling edges of the signals are not
shown. In addition, manufacturer’s constraints regarding minimum width of the sig-
nals, i.e., minimum time durations that signals are applied, and setup and hold times
of the information signals relative to the control signal must be adhered to for
proper operation of master-slave flip-flops. It is assumed these constraints are satis-
fied in the timing diagram of Fig. 6.15.

6.4.3 0's and 1's Catching


As was indicated above and illustrated in Fig. 6.15, the master of the master-slave
JK flip-flop, being a latch, is enabled during the entire period the control signal is 1.

Figure 6.15 Timing diagram for a master-slave JK flip-flop.


320 DIGITAL PRINCIPLES AND DESIGN

Thus, if the slave latch is in its 1-state, then a logic-1 on the K input line while the
control signal is 1 causes the master latch to reset. This subsequently results in the
slave becoming reset when the control signal returns to 0. An example of this oc-
curred during the second clock pulse in Fig. 6.15. This behavior is known as 0's
catching. It should be noted that once the master latch is reset by a logic-1 signal on
the K input line, a subsequent logic-1 signal on the J input line during the same pe-
riod in which C = 1 does not cause the master to again become set. This is due to the
fact that since the slave does not change its state until C returns to 0, the feedback
signal from the slave, i.e., Q̄ = 0, keeps the output of the J-input and-gate at logic-0.
In a similar manner, if the slave is storing a 0, then a logic-1 on the J input line
while the control signal is 1 causes the master latch to be set, which subsequently
results in the setting of the slave upon the occurrence of the falling edge of the con-
trol signal. This behavior occurred during the third clock pulse in Fig. 6.15 and is
known as 1's catching.
In many applications, the 0’s and 1’s catching behavior is undesirable. Hence, it
is normally recommended that the J and K input values should be held fixed during
the entire interval that the master is enabled. To satisfy this constraint, any changes in
the J and K inputs must occur while the control signal is 0. This was done during the
first and fourth clock pulses in Fig. 6.15. The function table of Fig. 6.14b does not ac-
count for 0's and 1's catching but, rather, assumes the J and K inputs are held fixed
during the entire period the control signal is 1. The problem of 0’s and 1’s catching is
also solved by the use of another class of flip-flops called edge-triggered flip-flops.
This class of flip-flops is studied in Sec. 6.5. Alternatively, a variation of the master-
slave flip-flop, called the master-slave flip-flop with data lockout, is available that is
not subject to 0’s and 1’s catching. This variation also is discussed in the next section.
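The catching behavior can be reproduced with a small behavioral model. In this sketch (all names and the sample-based timing are illustrative assumptions), the master is treated as a level-sensitive latch whose J and K gates are qualified by the slave's feedback, and the slave copies the master on the falling edge of C:

```python
def master_slave_jk(samples, q=0):
    """samples: (C, J, K) triples in time order.
    Returns the slave output Q after the last sample."""
    master, prev_c = q, 0
    for c, j, k in samples:
        if c == 1:                       # master enabled while C = 1
            if j and (1 - q):            # J gated by feedback Q-bar
                master = 1
            if k and q:                  # K gated by feedback Q
                master = 0
        if prev_c == 1 and c == 0:       # falling edge: slave copies master
            q = master
        prev_c = c
    return q

# 0's catching: with the slave at 1, a momentary K = 1 while C = 1 resets
# the master, and the slave follows when C falls -- even though K has
# already returned to 0:
assert master_slave_jk([(1, 0, 1), (1, 0, 0), (0, 0, 0)], q=1) == 0
# A later J = 1 in the same clock period cannot set the master again,
# since the feedback Q-bar = 0 disables the J-input and-gate:
assert master_slave_jk([(1, 0, 1), (1, 1, 0), (0, 0, 0)], q=1) == 0
```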

6.4.4 Additional Types of Master-Slave Flip-Flops


So far the master-slave SR and JK flip-flops have been discussed. From these, addi-
tional types of master-slave flip-flops can be constructed. For example, by placing
an inverter between the S and R inputs of a master-slave SR flip-flop, as shown in
Fig. 6.16, a master-slave D flip-flop is obtained.
Another type of master-slave flip-flop is shown in Fig. 6.17a, where the J and
K input terminals are tied together so that T = J = K. In this case the flip-flop

(a) (b)

Figure 6.16 Master-slave D flip-flop. (a) Logic diagram using a master-slave SR flip-
flop. (b) Two logic symbols.


Figure 6.17 Master-slave T flip-flop. (a) Logic diagram using a master-slave JK flip-flop. (b) Function table
where Q⁺ denotes the output Q in response to the inputs. (c) Two logic symbols.

changes state, or toggles, with each control pulse if T = 1 and retains its current
state with each control pulse if T = 0. This is called a master-slave T flip-flop.
The function table of the master-slave T flip-flop and its logic symbols are given
in Fig. 6.17b-c.
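The toggling action amounts to an exclusive-or of T with the present state. A minimal sketch (the function name is a hypothetical choice, not from the text):

```python
def t_next_state(t, q):
    """Master-slave T flip-flop: Q+ after one control pulse."""
    return t ^ q        # toggle when T = 1, hold when T = 0

q = 0
for _ in range(3):      # three control pulses with T = 1
    q = t_next_state(1, q)
assert q == 1           # the state toggled 0 -> 1 -> 0 -> 1
```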

6.5 EDGE-TRIGGERED FLIP-FLOPS


In basic master-slave flip-flops, the master is enabled during the entire period the
control input is 1. As was mentioned previously, this can result in 0’s and 1’s catch-
ing. To avoid the catching problem, the signals on the information lines are re-
stricted from changing during the time the master is enabled. In this way the state of
the master is established during the positive edge of the control signal and then
transferred to the slave on the negative edge of the control signal. As a consequence
of this process, the effect of the information signals appears delayed at the output of
the master-slave flip-flop.
Edge-triggered flip-flops use just one of the edges of the control, i.e., clock, sig-
nal to affect the reading of the information input lines. This is referred to as the trig-
gering edge. These flip-flops are designed to use either the positive or negative tran-
sition of the control signal for this purpose. The response to the triggering edge at
the outputs of the flip-flop is almost immediate since it is dependent only on the
propagation delay times of its components. Once the triggering edge occurs, the
flip-flop remains unresponsive to information input changes until the next triggering
edge of the control signal.

6.5.1 The Positive-Edge-Triggered D Flip-Flop


The logic diagram of a positive-edge-triggered D flip-flop is shown in Fig. 6.18a,
where D is the information input and C is the control, or clock, input. By positive-
edge-triggered it is meant that the setting or resetting of the flip-flop is established
by the rising, or positive, edge of the control signal. The behavior of the positive-
edge-triggered D flip-flop, given in Fig. 6.18b, is similar to that of the D latch, with
the major difference being that the value of the D input is transferred to the output
only as a consequence of the rising edge of the signal on the control line. Thus, the

Figure 6.18 Positive-edge-triggered D flip-flop. (a) Logic diagram. (b) Function table where Q⁺ denotes the
output Q in response to the inputs. (c) Two logic symbols.

positive edge of the control input has the effect of sampling the D input line. This is
indicated in the function table by the ↑ symbol. At all other times, including the
time while the clock is at 1, the D input is inhibited and the state of the flip-flop can-
not change.
To see how the positive-edge-triggered D flip-flop operates, consider the logic
diagram in Fig. 6.18a. Nand-gates 5 and 6 serve as an SR latch whose behavior was
previously described by the function table in Fig. 6.4b. Thus, as long as S = R = 1,
the state of the latch cannot change; while whenever either S or R is 0, but not both,
the latch sets or resets, respectively.
Assume the control input, i.e., clock, C, is 0. Regardless of the input at D, the
outputs of nand-gates 2 and 3 are 1. These signals are applied to the SR output
latch, causing it to hold its current state. Now assume that D is also 0. This holds the
output of gate 4 at 1. In turn, the output of gate 1 is 0 since the outputs of gates 2
and 4 are 1's. When the clock goes from 0 to 1, i.e., the positive edge of the control
signal, all three inputs to gate 3 become 1, causing the output of the gate to change
to 0. Meanwhile, the output of gate 2, S, remains at 1 since the output of gate 1 is
still 0. The 0 on the R line and the 1 on the S line cause the SR latch to enter or re-
main in its reset state, i.e., Q = 0 and Q̄ = 1. In addition, the output of gate 3, which
is currently 0, is also fed back as an input to gate 4. This now keeps the output of
gate 4 at 1, and any subsequent changes in the D input while C is 1 have no effect
upon the output of gate 4 and, correspondingly, gate 1. Thus, after the occurrence of

the positive edge of the clock signal when D = 0, the flip-flop is in its 0-state and
any changes in the D input are inhibited even though the clock is 1.
Again assume C = 0, but now let D = 1. As before, the outputs of gates 2 and
3 are 1, causing the SR latch to hold its current state. However, the D = 1 input
causes the output of gate 4 to be 0, and this output, in turn, causes the output of gate
1 to be 1. Now when the clock changes to 1, both inputs to gate 2 are 1 and, conse-
quently, its output, S, becomes 0. Since the output of gate 4 is 0, the output of gate
3, R, remains at 1. The S = 0 and R = 1 results in the setting of the SR latch con-
sisting of gates 5 and 6. The 0 output from gate 2 serves as an input to both gates 1
and 3 that, in turn, guarantees that their outputs remain at 1. Thus, if D should sub-
sequently change from 1 to 0 while the clock is 1, causing the output of gate 4 to
change, then the outputs of gates 1 and 3 do not change. Therefore, once the posi-
tive edge of the clock has occurred, changes in the D input while C = 1 have no ef-
fect upon the state of the flip-flop.
In summary, only upon the occurrence of the positive edge of the clock signal
does the flip-flop respond to the value of the D input. Once the new output state is
established, changes in the D input while C = 1 are ineffectual. When the clock sig-
nal returns to 0, both S and R become 1, and the SR latch retains the state entered as
a consequence of sampling the D input by the positive edge of the control signal.
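The gate-by-gate argument above can be checked with a small simulation. In this sketch, the six nand-gates of Fig. 6.18a are re-evaluated until the network stabilizes after each input change; the gate numbering follows the text, while the settling loop and the assumed starting state are simplifications of this model:

```python
def nand(*inputs):
    return 0 if all(inputs) else 1

def settle(state, c, d):
    """Evaluate the six-gate network of Fig. 6.18a to a stable point."""
    g1, g2, g3, g4, q, qb = state
    for _ in range(10):
        g1 = nand(g2, g4)
        g2 = nand(g1, c)        # gate 2 output S (active low)
        g3 = nand(g2, c, g4)    # gate 3 output R (active low)
        g4 = nand(g3, d)
        q = nand(g2, qb)        # gate 5
        qb = nand(q, g3)        # gate 6
    return (g1, g2, g3, g4, q, qb)

state = (0, 1, 1, 1, 0, 1)      # an assumed initial (reset, clock-low) state
trace = []
for c, d in [(0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]:
    state = settle(state, c, d)
    trace.append(state[4])      # record Q

# Q is set by the first rising edge (D = 1), ignores D falling while
# C = 1, holds when C returns to 0, and resets on the next rising edge:
assert trace == [0, 1, 1, 1, 0]
```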
Two logic symbols for the positive-edge-triggered D flip-flop are shown in
Fig. 6.18c. Since the outputs of the flip-flop respond essentially immediately to
the positive edge of the control signal, postponed-output indicators do not appear.
To signify that the output change can only occur as a consequence of the transi-
tion of the control signal, a triangular symbol, called the dynamic-input indicator,
is used at the control input of the logic symbol.
Figure 6.19 shows a timing diagram for the positive-edge-triggered D flip-flop.
For simplicity, the finite slopes of the rising and falling edges of the signals are not
shown and all propagation delays are assumed to be equal. Indicated in Fig. 6.19 are
the setup, t_su, and hold, t_h, times with respect to the triggering edge of the control
signal that need to be satisfied. During these times the D input must not change;
otherwise, an unpredictable output, including the metastable state, is possible.

Figure 6.19 Timing diagram for a positive-edge-triggered D flip-flop.



Figure 6.20 Negative-edge-triggered D flip-flop. (a) Function table
where Q⁺ denotes the output Q in response to the inputs.
(b) Two logic symbols.

6.5.2 Negative-Edge-Triggered D Flip-Flops


A slight variation of the positive-edge-triggered D flip-flop is the negative-edge-
triggered D flip-flop. In this case the falling edge, i.e., a high to low transition, of
the control signal is used to sample the D input line rather than the rising edge.
This can be achieved by simply placing an inverter at the control input of the flip-
flop shown in Fig. 6.18a. The function table and logic symbols for this type of flip-
flop are given in Fig. 6.20. It should be noted that an inversion bubble appears at
the control input of the symbol in addition to the dynamic-input indicator. This in-
version bubble and dynamic-input indicator combination denotes negative-edge
triggering.

6.5.3 Asynchronous Inputs


Earlier in this chapter, the information inputs of flip-flops were categorized into
two types: synchronous and asynchronous. These inputs are distinguished by
whether or not they require the presence of a control signal to make them effec-
tive. All the information inputs of the edge-triggered and master-slave flip-flops
that have been presented thus far are synchronous inputs. To provide greater
flexibility, many flip-flops have both asynchronous and synchronous inputs
within the same device. The asynchronous inputs, usually called preset (denoted
by PR) and clear (denoted by CLR), are used to forcibly set and reset the flip-
flop, respectively, independently of the control input. These inputs are particu-
larly useful for bringing a flip-flop into a desired initial state prior to normal
clocked operation.

A logic diagram for a positive-edge-triggered D flip-flop with asynchronous
preset and clear inputs is shown in Fig. 6.21a and its corresponding function
table is given in Fig. 6.21b. In this network, a logic-0 initiates action on the
asynchronous lines. This logic-0 activation is indicated by the bubbles on the
asynchronous inputs of the logic symbols shown in Fig. 6.21c. Hence, a logic-0
on the PR input line causes the flip-flop to enter its 1-state, i.e., to be set, while a
logic-0 on the CLR input line causes the flip-flop to enter its 0-state, i.e., to be
reset.

Inputs              Outputs

PR  CLR  C  D       Q⁺  Q̄⁺
0   1    ×  ×       1   0
1   0    ×  ×       0   1
0   0    ×  ×       1*  1*
1   1    ↑  0       0   1
1   1    ↑  1       1   0
1   1    0  ×       Q   Q̄
1   1    1  ×       Q   Q̄

*Unpredictable behavior will result if PR and CLR return
to 1 simultaneously

Figure 6.21 Positive-edge-triggered D flip-flop with asynchronous inputs. (a) Logic diagram.
(b) Function table where Q⁺ denotes the output Q in response to the inputs. (c) Two logic
symbols.

Referring to the function table, the first two rows indicate the fact that a 0 on just
the PR or CLR input lines causes the flip-flop to set or reset asynchronously, i.e., re-
gardless of the values on the D and C lines as denoted by crosses in the D and C
columns. The third row corresponds to the situation when both the PR and CLR inputs
are simultaneously active. This condition is not recommended since unpredictable be-
havior results if both asynchronous inputs return to 1 simultaneously. Only when both
asynchronous inputs are 1's does the flip-flop behave as a positive-edge-triggered D
flip-flop. This corresponds to the last four rows of the function table.
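The priority of the asynchronous inputs over the clocked behavior can be summarized in a short sketch (the function name and argument convention are assumptions of this model; PR and CLR are active low, as in the function table):

```python
def d_ff_with_async(pr, clr, q, rising_edge=False, d=0):
    """Next state Q+ for the flip-flop of Fig. 6.21."""
    if (pr, clr) == (0, 0):
        raise ValueError("PR = CLR = 0: not recommended")
    if pr == 0:
        return 1                     # asynchronous preset
    if clr == 0:
        return 0                     # asynchronous clear
    if rising_edge:
        return d                     # normal edge-triggered D behavior
    return q                         # no triggering edge: hold
```

For example, `d_ff_with_async(0, 1, 0)` returns 1 regardless of the values on the D and C lines.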
Again consider the logic diagram of Fig. 6.21a. If the asynchronous lines are
removed, then the logic diagram of Fig. 6.18a results. This is analogous to having
1’s on both the PR and CLR input lines. In this case the operation of the device is as
discussed previously for the positive-edge-triggered D flip-flop.
If PR = 0 and CLR = 1 while the control signal C is 0, then the output of nand-
gate 5 becomes 1 and the output of nand-gate 6 becomes 0. Thus, the SR latch por-
tion of the flip-flop is forced into the 1-state, i.e., to be set. Similarly, if PR = 1 and
CLR = 0 is applied, then the SR latch portion of the flip-flop is forced into the
0-state, i.e., to be reset. The PR and CLR inputs are also applied to nand-gates 1, 2,
and 4. This is done to ensure the effect of an asynchronous input on the flip-flop
outputs while the control signal C is 1. That is, if either PR or CLR becomes 0 while
the clock is 1, then the flip-flop accordingly responds immediately.
Although the above discussion was concerned with asynchronous inputs in a
positive-edge-triggered D flip-flop, asynchronous inputs also may occur in negative-
edge-triggered D flip-flops as well as in the other types of edge-triggered flip-flops
that are discussed shortly. In addition, asynchronous inputs also occur in pulse-
triggered flip-flops. Occasionally, however, only one asynchronous input appears in
commercial flip-flops.

6.5.4 Additional Types of Edge-Triggered Flip-Flops


Thus far, only positive-edge-triggered and negative-edge-triggered D flip-flops
have been considered. Other types of edge-triggered flip-flops are possible. Since
the hold time of an edge-triggered D flip-flop is less than its propagation delay
times, by using an edge-triggered D flip-flop, along with additional gates, other
flip-flop types can be constructed. For example, Fig. 6.22 shows a logic diagram,
function table, and logic symbols for a positive-edge-triggered JK flip-flop. Edge-
triggered JK flip-flops are not subject to the 0's and 1's catching phenomenon since
they respond to the values on the information input lines only at the time of the
triggering edge.
Figure 6.23 shows two possible ways of constructing a positive-edge-triggered
T flip-flop. One approach is to simply tie together the J and K inputs of a positive-
edge-triggered JK flip-flop. The second approach involves the use of an exclusive-
or-gate with a positive-edge-triggered D flip-flop.
As in the case of all previously discussed flip-flops, setup and hold time re-
quirements on the J, K, and T inputs relative to the active triggering edge of the con-
trol signal must be satisfied to ensure proper operation.
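Both constructions reduce to computing the D input combinationally from the present state. A hedged sketch follows: D = JQ̄ + K̄Q is the standard gating for the JK case, and it is an assumption of this sketch that Fig. 6.22 uses exactly this arrangement; the T case uses the exclusive-or gate of Fig. 6.23 directly:

```python
def jk_from_d(j, k, q):
    """Positive-edge-triggered JK built around a D flip-flop:
    D = J*Q' + K'*Q, sampled on the triggering edge."""
    return (j and (1 - q)) or ((1 - k) and q)

def t_from_d(t, q):
    """Positive-edge-triggered T: an exclusive-or gate drives D."""
    return t ^ q

assert jk_from_d(1, 1, 0) == 1 and jk_from_d(1, 1, 1) == 0   # toggle
assert t_from_d(1, 0) == 1 and t_from_d(0, 1) == 1           # toggle / hold
```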

Figure 6.22 Positive-edge-triggered JK flip-flop. (a) Logic diagram. (b) Function table where Q⁺ denotes the
output Q in response to the inputs. (c) Two logic symbols.

Figure 6.23 Positive-edge-triggered T flip-flop. (a) Logic diagrams. (b) Function
table where Q⁺ denotes the output Q in response to the inputs.
(c) Two logic symbols.

6.5.5 Master-Slave Flip-Flops with Data Lockout


There are situations in which delayed outputs from flip-flops are desirable. In such
cases a master-slave configuration is appropriate. However, to avoid the 0’s and 1’s
catching behavior, the master should respond to the information lines only on one
edge of the control signal and then transfer its content to the slave on the next oppo-
site edge of the control signal. Master-slave flip-flops having this property are said
to have data lockout.
A possible construction for a master-slave JK flip-flop with data lockout is
shown in Fig. 6.24a. Here a positive-edge-triggered JK flip-flop is used for the mas-
ter and an SR latch is used for the slave. Because of the presence of the inverter be-
tween the two sections, information only enters the master on the positive edge of
the control signal. Since the master is an edge-triggered flip-flop, any changes on
the J or K information lines while the control signal is | are disregarded. The con-
tent of the master is subsequently transferred to the slave during the negative-edge
transition of the control signal. Hence, the desired output delay is achieved. For
proper operation, set and hold time requirements relative to the triggering edge of
the control signal must be satisfied.

Figure 6.24 Master-slave JK flip-flop with data lockout. (a) Logic diagram. (b) Two
logic symbols.

Symbols for the master-slave JK flip-flop with data lockout are given in
Fig. 6.24b. The dynamic-input indicator is used since the information input lines
are sampled on the positive edge of the control signal. Postponed-output indica-
tors appear in the symbols since the output change is delayed as a consequence
of the master-slave configuration.

6.6 CHARACTERISTIC EQUATIONS


Three classes of flip-flops have been presented in this chapter: latches, pulse-triggered
flip-flops, and edge-triggered flip-flops. The timing scheme utilized by the flip-flops
served as the basis for the classifications. Furthermore, within each class, several
types of flip-flops were described. The types were associated with the information in-
puts of the flip-flops.
The first class of flip-flops consisted of the latches. This class is character-
ized by their outputs responding immediately at all times while enabled. The sec-
ond class was the pulse-triggered flip-flops using the master-slave structure. In
this class, information enters the master on the first edge of the control signal, in
the case of those with data lockout, or during an entire pulse period, in the case
of those without data lockout. In both cases, the content of the master is trans-
ferred to the slave at the time of the second edge of the triggering pulse. Thus,
this class of flip-flops is characterized by the outputs being postponed until the
end of the triggering pulse period. In the third class of flip-flops, i.e., the edge-
triggered flip-flops, inputs from the information lines are accepted only upon the
occurrence of one of the control input edges. At all other times, the informa-
tion inputs are effectively disconnected. Thus, the control input edge serves to
sample the information input lines. The outputs of edge-triggered flip-flops re-
spond immediately.
The four types of flip-flops that have been described in this chapter are the SR
flip-flop, the JK flip-flop, the D flip-flop, and the T flip-flop. Simplified forms of
their function tables are given in Table 6.1. In these versions of the function tables
the control signal is not explicitly shown, but rather is implicitly assumed. In this
way, these tables serve to summarize the functional behavior of all four types of
flip-flops regardless of the flip-flop class. Special attention should be given to the
last row of the simplified SR flip-flop function table. The dash indicates that the
input combination S = R = 1 is not permitted since the output is subject to unpre-
dictable behavior if S and R should return to 0 simultaneously. Finally, only the
next-state values of Q, i.e., Q⁺, are shown in the tables. It is assumed the next-state
values of Q̄ are the opposite of those of Q.
A variation of the simplified function tables, called the next-state tables, is
given in Table 6.2 for the four types of flip-flops. The next-state tables show the
value of the next state of the flip-flops for each combination of values to the present
state of the flip-flops and their information input lines. Since Q can have two values,
each row of the simplified function table becomes two rows in the next-state table.
For each table, the appropriate interpretation is that for a given present state Q and

Table 6.1 Simplified flip-flop function tables. Q denotes the current state and Q⁺
denotes the resulting state as a consequence of the information inputs
and the control signal. (a) SR flip-flop. (b) D flip-flop. (c) JK flip-flop.
(d) T flip-flop.

(a) S R | Q⁺          (b) D | Q⁺
    0 0 | Q               0 | 0
    0 1 | 0               1 | 1
    1 0 | 1
    1 1 | —

(c) J K | Q⁺          (d) T | Q⁺
    0 0 | Q               0 | Q
    0 1 | 0               1 | Q̄
    1 0 | 1
    1 1 | Q̄

inputs, the application of a control signal causes the flip-flop to change to the next
state Q⁺.
The algebraic description of the next-state table of a flip-flop is called the char-
acteristic equation of the flip-flop. This description is easily obtained by construct-
ing the Karnaugh map for Q⁺ in terms of the present state and information input
variables. An example of such a Karnaugh map for an SR flip-flop is shown in Fig.
6.25a. For the purpose of this map, the inputs that cause an undefined output are re-
garded as don't-cares since these inputs are assumed to not occur. From the Kar-
naugh map of Fig. 6.25a, the characteristic equation

Q⁺ = S + R̄Q

immediately follows. In the case of SR flip-flops, a constraining equation, SR = 0, is


included to signify that S and R should not be 1 simultaneously for proper opera-
tion. The characteristic equations for the four types of flip-flops presented in this
chapter are given in Fig. 6.25b.
As in the case of the next-state tables, the characteristic equations only specify
the functional behavior of the flip-flops. Thus, it is implied that the stated functional
response is a consequence of an appropriate control signal, i.e., an enable signal in
the case of latches, a pulse in the case of pulse-triggered flip-flops, and a specific
edge in the case of edge-triggered flip-flops.
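The characteristic equations can also be written directly as Boolean functions and cross-checked against one another. A sketch in Python (the 0/1 integer encoding and function names are assumptions of this sketch):

```python
def sr_next(s, r, q):
    assert not (s and r)                         # constraining equation SR = 0
    return s or ((1 - r) and q)                  # Q+ = S + R'Q

def d_next(d, q):
    return d                                     # Q+ = D

def jk_next(j, k, q):
    return (j and (1 - q)) or ((1 - k) and q)    # Q+ = JQ' + K'Q

def t_next(t, q):
    return t ^ q                                 # Q+ = TQ' + T'Q = T xor Q

# JK agrees with SR wherever SR is defined, and reduces to T when
# the J and K inputs are tied together:
for q in (0, 1):
    for s, r in [(0, 0), (0, 1), (1, 0)]:
        assert jk_next(s, r, q) == sr_next(s, r, q)
    for t in (0, 1):
        assert jk_next(t, t, q) == t_next(t, q)
```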

Table 6.2 Flip-flop next-state tables. Q denotes the current state and Q⁺ denotes
the resulting state as a consequence of the information inputs and
the control signal. (a) SR flip-flop. (b) D flip-flop. (c) JK flip-flop.
(d) T flip-flop.

(a) S R Q | Q⁺            (b) D Q | Q⁺
    0 0 0 | 0                 0 0 | 0
    0 0 1 | 1                 0 1 | 0
    0 1 0 | 0                 1 0 | 1
    0 1 1 | 0                 1 1 | 1
    1 0 0 | 1
    1 0 1 | 1
    1 1 0 | —  } inputs not
    1 1 1 | —  } allowed

(c) J K Q | Q⁺            (d) T Q | Q⁺
    0 0 0 | 0                 0 0 | 0
    0 0 1 | 1                 0 1 | 1
    0 1 0 | 0                 1 0 | 1
    0 1 1 | 0                 1 1 | 0
    1 0 0 | 1
    1 0 1 | 1
    1 1 0 | 1
    1 1 1 | 0

Flip-flop type | Characteristic equation
SR             | Q⁺ = S + R̄Q   (SR = 0)
D              | Q⁺ = D
JK             | Q⁺ = JQ̄ + K̄Q
T              | Q⁺ = TQ̄ + T̄Q = T ⊕ Q

Figure 6.25 Characteristic equations. (a) Derivation of characteristic
equation for an SR flip-flop. (b) Summary of
characteristic equations.

For many of the networks encountered in this book, it is necessary that the out-
put changes occur coincident with the changes on the control input line. Either
edge-triggered or pulse-triggered flip-flops can be used for this purpose. Thus, they
are categorically referred to as simply clocked flip-flops.

6.7 REGISTERS
In the remaining sections of this chapter, attention is turned to some simple applica-
tions involving clocked flip-flops. These applications are examples of sequential
networks. Sequential networks are formally discussed in the next three chapters of
this book. However, the intention at this time is to illustrate the use of clocked flip-
flops as network devices. Sequential networks, unlike combinational networks, pos-
sess a memory property. This can be achieved with flip-flops since they have the ca-
pability of storing the symbols 0 and 1, whether they correspond to the binary digits
or the logic values.
A register is simply a collection of flip-flops taken as an entity. The basic func-
tion of a register is to hold information within a digital system so as to make it
available to the logic elements during the computing process. However, a register
may also have additional capabilities associated with it.
Since a register consists of a finite number of flip-flops and since each flip-flop
is capable of storing a 0 or a 1 symbol, there are only a finite number of 0-1 combi-
nations that can be stored in a register. Each of these combinations is known as the
state or content of the register.
Registers that are capable of moving information positionwise upon the occur-
rence of a clock signal are called shift registers. These registers are normally classi-
fied by whether they can move the information in one or two directions, i.e., unidi-
rectional or bidirectional.
The manner in which information is entered into and outputted from a register is
another way in which registers are categorized. There are two basic ways in which these
transfers are done: serially or in parallel. When information is transferred in a paral-
lel manner, all the 0-1 symbols that comprise the information are handled simultane-
ously as an entity in a single unit of time. Such information transfers require as many
lines as symbols being transferred. On the other hand, the serial handling of informa-
tion involves the symbol-by-symbol availability of the information in a time se-
quence. These information transfers only require a single line to perform the transfer.
Thus, there are four possible ways registers can transfer information: serial-in/
serial-out, serial-in/parallel-out, parallel-in/parallel-out, and parallel-in/serial-out.
Figure 6.26 illustrates the serial-in, serial-out unidirectional shift register con-
structed from positive-edge-triggered D flip-flops. The Q output of each flip-flop is
connected to the D input of the flip-flop to its right. The control inputs of all the flip-
flops are connected together to a common synchronizing signal called the clock.
Thus, upon the occurrence of a positive edge of the clock signal, the content of each
flip-flop is shifted one position to the right. The content of the leftmost flip-flop
after the clock signal depends upon the signal value on the serial-data-in line, and
the content of the rightmost flip-flop prior to the clock signal is lost. The output
from the shift register occurs at the rightmost flip-flop on the serial-data-out line.

Figure 6.26 Serial-in, serial-out unidirectional shift register.

For the register of Fig. 6.26, if the initial content of the four flip-flops is 1011 and a
logic-0 is applied to the serial-data-in line prior to the positive edge of the clock sig-
nal, then the content of the register becomes 0101 after the positive edge of the
clock signal. The signal value that is shifted in, i.e., the logic-0, becomes available
as an output on the serial-data-out line after four clock pulses.
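The worked example can be replayed with a list model of the register (leftmost flip-flop first; the names are illustrative):

```python
def shift_right(reg, serial_in):
    """One positive clock edge: new register content and serial data out."""
    return [serial_in] + reg[:-1], reg[-1]

reg = [1, 0, 1, 1]
reg, out = shift_right(reg, 0)
assert reg == [0, 1, 0, 1]          # 1011 becomes 0101, as described

# The shifted-in 0 reaches the serial-data-out line (the rightmost
# flip-flop's Q) after four clock pulses:
reg = [1, 0, 1, 1]
for _ in range(4):
    reg, out = shift_right(reg, 0)
assert reg[-1] == 0
```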
In some applications, the information within a register must be preserved, but
only a reorientation of the information is desired. To achieve this, the serial-data-out
line of Fig. 6.26 is connected to the serial-data-in line. In this way the content of the
register is again shifted one position to the right upon the occurrence of each clock
signal, but the state of the leftmost flip-flop is replaced by the state of the rightmost
flip-flop. For example, again assume the initial content of the register is 1011, but the
output of the rightmost flip-flop is connected to the input of the leftmost flip-flop.
Then after the occurrence of the positive edge of the clock signal, the register contains
1101. Shift registers having this type of connection are called circular shift registers.
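The circular connection amounts to a rotation of the register content. A minimal sketch of the example just given:

```python
def rotate_right(reg):
    """Circular shift register: serial-data-out feeds serial-data-in."""
    return [reg[-1]] + reg[:-1]

assert rotate_right([1, 0, 1, 1]) == [1, 1, 0, 1]   # matches the text

# Four rotations of a 4-bit register restore the original content:
reg = [1, 0, 1, 1]
for _ in range(4):
    reg = rotate_right(reg)
assert reg == [1, 0, 1, 1]
```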
It is important to note that the flip-flops of a register are subject to a change in
state while they are being interrogated by the next flip-flop in the cascade connec-
tion. That is, a flip-flop is simultaneously being read into while being read by an-
other flip-flop. Thus, edge-triggered or pulse-triggered, i.e., master-slave, flip-flops
are used. Latches are not appropriate in such an application since their outputs are
subject to changes during the entire period in which they are enabled.
The serial-in, parallel-out unidirectional shift register is illustrated in Fig. 6.27.
In this case, outputs are provided from each flip-flop. Once information is shifted


Figure 6.27 Serial-in, parallel-out unidirectional shift register.



into the register, i.e., serial in, the information is available as a single entity, i.e.,
parallel out, at the flip-flop output terminals. Since information is transferred into
this register serially and, after an appropriate number of shifts, made available in
parallel, this type of register provides for the serial-to-parallel conversion of
information.
The register shown in Fig. 6.28 is used as a parallel-in, serial-out unidirec-
tional shift register. The operation of the register is controlled by the Load/Shift
line. When a logic-0 signal appears on this line, the signals on the parallel-data-in
lines I_A, I_B, I_C, and I_D are transferred into the register upon the occurrence of
a positive-edge clock signal. Then, when a logic-1 signal appears on the Load/Shift line, the D flip-
flops become a cascade connection that functions as a unidirectional shift register
providing the serial output. In this way, the register of Fig. 6.28 provides for the
parallel-to-serial conversion of information. By taking the outputs from the indi-
vidual flip-flops, the register functions as a parallel-in, parallel-out unidirectional
shift register. It should be noted that the register illustrated in Fig. 6.28 can also
function as a serial-in, parallel-out unidirectional shift register and as a serial-in,
serial-out unidirectional shift register.
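The Load/Shift control described above can be sketched as follows (a behavioral model with assumed names; the encoding 0 = load, 1 = shift follows the text):

```python
def clock_edge(reg, load_shift, parallel_in=None, serial_in=0):
    """One positive clock edge of the register of Fig. 6.28."""
    if load_shift == 0:
        return list(parallel_in)        # parallel load
    return [serial_in] + reg[:-1]       # shift toward the serial output

reg = clock_edge([0, 0, 0, 0], 0, parallel_in=[1, 0, 1, 1])   # load 1011
bits_out = []
for _ in range(4):                      # then shift the word out serially
    bits_out.append(reg[-1])            # serial-data-out is the rightmost Q
    reg = clock_edge(reg, 1)
assert bits_out == [1, 1, 0, 1]         # rightmost bit emerges first
```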
The second general classification of shift registers consists of the bidirectional
shift registers. These types of registers are capable of shifting their contents either
left or right depending upon the signals present on appropriate control input lines.
An example of a bidirectional shift register is shown in Fig. 6.29. This register
is also known as the universal shift register. Depending upon the signal values on
the select lines of the multiplexers, 1.e., the mode control lines, the register can re-
tain its current state, shift right, shift left, or be loaded in parallel. Each of these op-
erations is the result of the occurrence of a positive edge on the clock line. In addi-
tion, the register is cleared asynchronously if a logic-0 is applied to the line labeled
CLEAR.
As an illustration of the operation of the universal shift register, according to
the table in Fig. 6.29b the register performs the shift-right operation when the
logic values on the select lines S1S0 of the multiplexers are 01. Under this condi-
tion the I1 input of each multiplexer is connected to its output f. Thus, as seen in
Fig. 6.29a, the input to the leftmost D flip-flop is the signal on the serial-input-
for-shift-right line, the input to the second leftmost D flip-flop is the output of
the leftmost D flip-flop, the input to the third leftmost D flip-flop is the output
of the second leftmost D flip-flop, and the input to the fourth leftmost D flip-
flop is the output of the third leftmost D flip-flop. Upon the occurrence of the
positive-edge signal on the clock line, the register shifts its content one position
to the right. The remaining three register operations listed in Fig. 6.29b are easily
verified in a similar manner. A symbol for the universal shift register is given in
Fig. 6.29c.
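The mode-control table of Fig. 6.29b can also be modeled behaviorally. The following sketch is ours (the function name and argument order are illustrative assumptions, not the book's notation):

```python
# Hypothetical behavioral model of the universal shift register of
# Fig. 6.29: each stage's D input is selected by a 4-to-1 multiplexer
# whose select lines S1 S0 are the mode control.
def universal_clock(q, s1, s0, sir=0, sil=0, data=None):
    """Next state [QA, QB, QC, QD] after one positive clock edge."""
    if (s1, s0) == (0, 0):          # hold
        return list(q)
    if (s1, s0) == (0, 1):          # shift right: SIR enters leftmost stage
        return [sir] + q[:-1]
    if (s1, s0) == (1, 0):          # shift left: SIL enters rightmost stage
        return q[1:] + [sil]
    return list(data)               # (1, 1): parallel load

q = [0, 0, 0, 0]
q = universal_clock(q, 1, 1, data=[1, 0, 0, 1])   # parallel load 1001
q = universal_clock(q, 0, 1, sir=0)               # shift right
```

Each clock edge performs exactly one of the four operations of Fig. 6.29b, matching the multiplexer selection described in the text.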
Registers are available commercially as MSI components. In these circuits,
the control lines for the clock inputs of the flip-flops are connected together and
appropriate logic is included to provide various capabilities, e.g., unidirectional
or bidirectional shifting, and handling ability of the information input and out-
put lines.
[Figure 6.28 Parallel-in unidirectional shift register: logic diagram with parallel data-in lines, serial data in, serial data out, clock, and the Load/Shift control.]
336 DIGITAL PRINCIPLES AND DESIGN

[Figure 6.29a: logic diagram. Four D flip-flops with parallel outputs QA, QB, QC, and QD, an asynchronous CLEAR input, a common clock, and a 4-to-1 multiplexer on each D input selected by the mode-control lines S1 and S0; serial input for shift right (SIR), serial input for shift left (SIL), and parallel inputs IA, IB, IC, and ID.]

(a)

Select lines | Register
 S1   S0     | operation
 0    0      | Hold
 0    1      | Shift right
 1    0      | Shift left
 1    1      | Parallel load

(b)

Figure 6.29 Universal shift register. (a) Logic diagram. (b) Mode control. (c) Symbol.
CHAPTER 6 Flip-Flops and Simple Flip-Flop Applications 337

6.8 COUNTERS
A counter is another example of a register. Its primary function is to produce a spec-
ified output pattern sequence. For this reason, it is also a pattern generator. This
pattern sequence might correspond to the number of occurrences of an event or it
might be used to control various portions of a digital system. In this latter case, each
pattern is associated with a distinct operation that the digital system must perform.
As in the case of a register, each of the 0-1 combinations that are stored in the
collection of flip-flops that comprise the counter, i.e., the output pattern, is known as
a state of the counter. The total number of states is called its modulus. Thus, if a
counter has m distinct states, then it is called a modulus-m counter or mod-m counter
for short. The order in which the states appear is referred to as its counting sequence.
The counting sequence is often depicted by a directed graph called a state dia-
gram. Figure 6.30 shows a state diagram for a mod-m counter where each node, Si,
denotes one of the states of the counter and the arrows in the graph denote the order
in which the states occur.
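In other words, a mod-m counter simply steps cyclically through its m states. A minimal sketch of this behavior (our own notation, with the states of Fig. 6.30 numbered 0 through m − 1):

```python
# Sketch: a mod-m counter visits its m states cyclically, as in the
# state diagram of Fig. 6.30.
def next_state(i, m):
    return (i + 1) % m      # S(m-1) wraps back around to S0

# Stepping a mod-6 counter twice around its state diagram:
seq = []
s = 0
for _ in range(12):
    seq.append(s)
    s = next_state(s, 6)
```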

6.8.1. Binary Ripple Counters


Counters whose counting sequence corresponds to that of the binary numbers are called
binary counters. The modulus of a binary counter is 2^n, where n is the number of flip-
flops in the counter. This follows from the fact that there are 2^n combinations of 0's and
1's consisting of n bits. For the case of a binary up-counter, the counting sequence is
from 00···0(2) to 11···1(2), which is equivalent to 0(10) to (2^n - 1)(10). After reaching
its maximum count, the counting sequence is then repeated. The counting sequence for a
binary down-counter is in reverse order, i.e., 11···1(2) to 00···0(2).
Figure 6.31a shows a four-bit binary up-counter implemented with positive-
edge triggered T flip-flops. Recall this type of flip-flop is readily constructed by
connecting together the J and K terminals of a positive-edge triggered JK flip-
flop and labeling this common terminal as 7. In this way, each positive transi-
tion, i.e., from logic-0 to logic-1, on the C terminal causes the flip-flop to toggle.

Figure 6.30 State diagram of a counter.

[Figure 6.31a: logic diagram. Four positive-edge-triggered T flip-flops, Q0 through Q3, with a count enable input; the count pulses are applied to the Q0 stage.]

(a)

[Figure 6.31b: timing diagram of the count pulses and the outputs Q0 through Q3 versus time.]

(b)

Q3 Q2 Q1 Q0
 0  0  0  0
 0  0  0  1
 0  0  1  0
 ·  ·  ·  ·
 1  1  1  1
 0  0  0  0
etc.

(c)

Figure 6.31 Four-bit binary ripple counter. (a) Logic diagram. (b) Timing diagram.
(c) Counting sequence.

Since this is a 4-bit up-counter, its modulus is 2^4 = 16 and its counting se-
quence is from 0000(2) to 1111(2). The output of the counter appears at the Q out-
put terminals of the four flip-flops where the flip-flop output Qi corresponds to
the ith-order bit of the binary number. The input to the counter is a count enable
signal and a series of count pulses applied to the flip-flop associated with the
lowest-order binary digit. In this way, as long as the count enable signal is
logic-1, the Q0 flip-flop changes state on each positive edge of a count pulse.
The control input, i.e., C, of the remaining flip-flops is connected to the Q̄ output
of its previous-order flip-flop. Thus, when the Qi-1 flip-flop changes from its
1-state to its 0-state, thereby causing the Q̄i-1 output to change from logic-0
to logic-1, a positive triggering edge occurs at the control input of the Qi flip-
flop, causing it to toggle.
Figures 6.31b-c illustrate the counter's behavior. Although propagation de-
lays are associated with each flip-flop, i.e., the output changes occur after an
input change, these delays are not included in the timing diagram for simplicity.
The counter is assumed to be initially in its 0000 state and the count enable sig-
nal is logic-1. Upon the occurrence of the positive edge of the first count pulse,
the Q0 flip-flop changes to its 1-state. Since the Q̄0 output terminal goes from
logic-1 to logic-0, flip-flop Q1 is not affected by the input pulse. The state of the
counter is now 0001. When the positive edge of the second count pulse arrives,
the Q0 flip-flop is again toggled. This time it returns to its 0-state. Furthermore,
since the Q̄0 output goes from logic-0 to logic-1, a positive edge appears at the
control input of the Q1 flip-flop and causes it to toggle. The change in state of the
Q1 flip-flop does not affect the Q2 flip-flop since a negative edge occurs at its
control input. Hence, at the end of the second count pulse, the state of the
counter is 0010. The third count pulse causes only the Q0 flip-flop to change
state, and the count to become 0011. When the positive edge of the fourth pulse
occurs, the Q0 flip-flop returns to its 0-state. This causes a positive edge to occur
at the Q̄0 terminal. Thus, the Q1 flip-flop is toggled, returning it to its 0-state. In
addition, when the Q1 flip-flop changes its state, the Q2 flip-flop is toggled by
the logic-0 to logic-1 transition appearing at the Q̄1 output terminal. The counter
now stores the binary number 0100. The binary counting sequence continues
until the count 1111 is reached. At that time, a count pulse causes the Q0 flip-
flop to return to its 0-state. This, in turn, causes the Q1 flip-flop to return to its 0-
state. A consequence of this change causes the Q2 flip-flop to return to its 0-state
and, finally, this change returns the Q3 flip-flop to its 0-state. Thus, the state of
the counter becomes 0000. If any further count pulses are applied to the counter,
then it repeats its counting sequence.
The binary counter of Fig. 6.31a is known as a ripple counter since a change in
state of the Qi-1 flip-flop is used to toggle the Qi flip-flop. Thus, the effect of a count
pulse must ripple through the counter. Ripple counters are also referred to as asyn-
chronous counters. Recalling there is a propagation delay between the input and
output of a flip-flop, this rippling behavior affects the overall time delay between
the occurrence of a count pulse and when the stabilized count appears at the output
terminals. The worst case occurs when the counter goes from its 11···1 state

to its 00···0 state since toggle signals must propagate through the entire length
of the counter. For an n-stage binary ripple counter, the worst-case settling time
becomes n × tpd, where tpd is the propagation delay time associated with each
flip-flop.
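The ripple behavior and its worst-case settling depth can be illustrated with a small behavioral model. This sketch is ours, not the book's circuit; it counts how many stages toggle in sequence rather than modeling real gate delays:

```python
# Behavioral sketch of the n-bit binary ripple up-counter of Fig. 6.31:
# stage i toggles when stage i-1 falls from 1 to 0 (a rising edge on its
# complemented output). The returned depth is the number of stages the
# pulse rippled through, proportional to the settling time.
def ripple_pulse(q):
    """q[0] is the lowest-order bit; applies one count pulse in place."""
    stages = 0
    i = 0
    while i < len(q):
        stages += 1
        q[i] ^= 1
        if q[i] == 1:       # this stage rose: no carry into the next stage
            break
        i += 1              # it fell 1 -> 0: the next stage toggles too
    return stages

q = [1, 1, 1, 1]            # state 1111
depth = ripple_pulse(q)     # rolls over to 0000 through all four stages
```

Starting from 1111, a single pulse ripples through every stage, illustrating the worst-case settling time n × tpd discussed above.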

6.8.2 Synchronous Binary Counters


The settling time problem associated with ripple counters is avoided in synchro-
nous counters. For synchronous counters, the count pulses are applied directly to
the control inputs, C, of all the clocked flip-flops. This causes all the flip-flops to
change simultaneously after the appropriate propagation delay associated with a
single flip-flop.
Referring to the binary counting sequence given in Fig. 6.31c, it is noted that for
each count pulse, the lowest-order flip-flop, Q0, must toggle. Furthermore, for each of
the remaining flip-flops, a flip-flop Qi must toggle upon the occurrence of a count
pulse if all its lower-order flip-flops, i.e., Qi-p for p = 1, 2, . . . , i, are in their 1-states.
Figure 6.32 shows a synchronous binary up-counter based on this observation.
As is characteristic of a synchronous counter, the count pulses are applied to the
control input, C, of each clocked flip-flop. Furthermore, as long as the counter is en-
abled, i.e., the count-enable signal is logic-1, the counter follows the binary count-
ing sequence. In particular, the lowest-order flip-flop, Q0, toggles on the positive
edge of each count pulse. The and-gate preceding each T input terminal of the re-
maining flip-flops detects if all the lower-order flip-flops are in their 1-states. If this
condition is satisfied, then the flip-flop toggles upon the occurrence of the positive
edge of the count pulse. Since the count pulses are applied directly to each flip-flop,
the only delay incurred between the application of a count pulse and the availability
of the new count output is the propagation delay time of a flip-flop.
The synchronous counter of Fig. 6.32 does have its drawbacks. In particular,
there are p inputs to the and-gate connected to the pth flip-flop. Thus, if the counter
consists of a large number of flip-flops, then and-gates having a large number of in-
puts are required. In addition, if there are n stages to the counter, then the output of
the pth flip-flop must appear as inputs to n - p and-gates. Again, if the number of
flip-flops is large, then the low-order flip-flops must drive a large number of gates,
which may introduce loading complications.
It is observed in Fig. 6.32 that the and-gate preceding the Qi flip-flop has as its
inputs precisely the inputs to the and-gate preceding the Qi-1 flip-flop plus
the output of the Qi-1 flip-flop. This observation leads to the variation of the synchro-
nous binary counter shown in Fig. 6.33. Now each and-gate only requires two inputs,
and the output of each flip-flop is only needed as an input to the next-stage and-gate.
In this variation, propagation delays are incurred between the positive edges of the
count pulses due to the serial connection of and-gates. This puts a constraint on the
count pulse rate. However, as typical with synchronous counters, all the flip-flops
change state simultaneously after the propagation delay time of a flip-flop.
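The serial and-gate chain of Fig. 6.33 can be sketched behaviorally as follows (an illustrative model of ours; all toggle enables are computed from the old state, mirroring the fact that every flip-flop is clocked simultaneously):

```python
# Sketch of the synchronous binary up-counter variation of Fig. 6.33: a
# chain of two-input and-gates forms the toggle enables, and every
# flip-flop sees the same clock edge.
def sync_pulse(q, enable=1):
    """q[0] is the lowest-order bit; applies one common clock edge."""
    t = []                       # T inputs, computed from the OLD state
    carry = enable
    for bit in q:
        t.append(carry)          # T_i = enable AND q0 AND ... AND q(i-1)
        carry = carry & bit      # next two-input and-gate in the chain
    for i, ti in enumerate(t):   # all stages toggle simultaneously
        q[i] ^= ti
    return q

q = [1, 1, 1, 0]                 # state 0111, i.e., a count of 7
sync_pulse(q)                    # one pulse advances the count to 8
```

Because the T inputs are all evaluated before any flip-flop changes, the new count appears after only one flip-flop propagation delay, as the text notes; the and-gate chain instead limits how fast successive count pulses may arrive.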
It should be noted in the above discussion on asynchronous and synchronous
binary counters, the counter speed was based on the availability of the next count at

Count
enable

Count pulses

Figure 6.32 Four-bit synchronous binary counter.

the output terminals. In this sense, synchronous counters are faster than asynchro-
nous counters. It is the gate delays and the flip-flop propagation delay in a synchro-
nous counter that determine the rate at which count pulses can be applied. In an
asynchronous counter, the allowable count pulse rate is determined by simply the
first stage of the counter. This implies that an asynchronous counter is very fast rel-
ative to its input. That is, once the first flip-flop changes state it can accept the next
count pulse even though the change has not propagated through the rest of the
counter. However, it is not until the rippling effect is completed that the count is
available for use.
If provisions are made to the synchronous counter structures of Figs. 6.32 or
6.33 so that they are loaded in parallel with an initial binary number prior to the
counting operation, then the mod-2^n counter can be used as a mod-m counter where
m < 2^n. The counter structure of Fig. 6.33 modified to provide for parallel loading is
shown in Fig. 6.34a. JK flip-flops, rather than T flip-flops, are used in this network
to facilitate the handling of the parallel load inputs. Two enable signals are utilized.
One is to allow the parallel loading of the data inputs D0, D1, D2, and D3, and a sec-
ond to provide for counting. Both of these operations are synchronized with the pos-
itive edges of the count pulses. The load function takes precedence over the count

[Figure 6.33: logic diagram with count enable, count pulses, and a serial chain of two-input and-gates feeding the T inputs.]

Figure 6.33 Four-bit synchronous binary counter variation.

function* so that if a logic-1 is placed on the load enable line, regardless of the sig-
nal value on the count enable line, then the signal values on the data input lines, i.e.,
D0, D1, D2, and D3, are entered into the four flip-flops of the counter upon the occur-
rence of the positive edge of the count pulse. If a logic-0 is applied to the load enable
line and a logic-1 is applied to the count enable line, then the network of Fig. 6.34a
behaves as a binary up-counter in the same way as the counter of Fig. 6.33. Finally,
a logic-0 applied to both the load enable and count enable lines causes the count
pulses to be ignored and the counter to retain its current state since logic-0's appear
at the J and K terminals of each flip-flop. A symbol for the counter of Fig. 6.34a is
given in Fig. 6.34b.
Figure 6.35a shows how the counter of Fig. 6.34a is converted to function as a
mod-10, i.e., decimal, counter having the counting sequence given in Fig. 6.35b.
The normal counting sequence for the counter of Fig. 6.34a is that of a 4-bit binary
up-counter when enabled with a logic-1 on the count enable input. To limit the
counting sequence to the first 10 binary numbers, an and-gate is used to detect the

*This is due to the not-gate connected to the load enable line.



[Figure 6.34a: logic diagram of the 4-bit counter with count enable, load enable, data inputs D0 through D3, outputs Q0 through Q3, a carry output (CO), and count pulses. Figure 6.34b: symbol, labeled "4-bit count," showing the D inputs, load, count, clock, Q outputs, and CO.]

(a) (b)

Figure 6.34 Four-bit synchronous binary counter with parallel load inputs. (a) Logic diagram. (b) Symbol.

[Figure 6.35a: the 4-bit counter symbol with its D inputs wired to 0000 and a two-input and-gate on Q0 and Q3 driving the load enable input. Figure 6.35b: counting sequence 0000, 0001, . . . , 1001, 0000, etc.]

(a) (b)

Figure 6.35 Synchronous mod-10 counter. (a) Connections. (b) Counting sequence.

count of 1001. Starting from the 0000 state, the first occurrence of Q0 = Q3 = 1
causes the output of the and-gate to be logic-1. Since the load function takes prece-
dence over the count function, by connecting the and-gate output to the load enable
input the counter is loaded with 0000, i.e., the values on the Di inputs, upon the next
occurrence of a positive edge of a count pulse. In this way, the counting sequence is
0000, 0001, ... , 1001, 0000, etc.
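This load-based mod-10 scheme can be sketched behaviorally (our own model of the connection, not the book's circuit):

```python
# Sketch of the mod-10 connection of Fig. 6.35: an and-gate detects
# Q0 = Q3 = 1 (the count 1001) and drives the load enable, so the next
# pulse loads 0000 instead of counting on to 1010.
def mod10_pulse(q):
    """q = [Q0, Q1, Q2, Q3], with Q0 the least significant bit."""
    load = q[0] & q[3]          # and-gate detecting the count 1001
    if load:                    # load takes precedence over count
        return [0, 0, 0, 0]     # the D inputs are wired to 0000
    n = (q[3] << 3) | (q[2] << 2) | (q[1] << 1) | q[0]
    n += 1                      # otherwise count up as usual
    return [(n >> i) & 1 for i in range(4)]

seq = []
q = [0, 0, 0, 0]
for _ in range(10):
    seq.append((q[3] << 3) | (q[2] << 2) | (q[1] << 1) | q[0])
    q = mod10_pulse(q)
```

Running ten pulses from 0000 visits the decimal counts 0 through 9 and then wraps back to 0000, matching the counting sequence of Fig. 6.35b.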
Also incorporated into the counter of Fig. 6.34a is a carry output, CO, in
which a logic-1 appears whenever the counter state is 1111 and the counter is in
its count mode, i.e., when the count enable signal is logic-1 and the load enable
signal is logic-0. This output is used for constructing larger binary counters by
cascading two or more 4-bit binary counters. Figure 6.36 shows the connections
necessary to construct an 8-bit binary counter. When the state of the upper 4-bit
binary counter is 1111, which corresponds to the four least significant binary dig-
its of the 8-bit binary counter, its carry output signal is logic-1. This signal is ap-
plied to the count enable input of the lower 4-bit counter that is used for the four
most significant binary digits of the 8-bit binary counter. In this way, upon the oc-
currence of the positive edge of the next count pulse, the upper 4-bit binary
counter of Fig. 6.36 returns to its 0000 state, while the lower 4-bit binary counter
is incremented by 1.
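The cascade connection can be modeled at the count level (an illustrative sketch with a hypothetical function name):

```python
# Sketch of the 8-bit cascade of Fig. 6.36: the carry output of the
# low-order 4-bit counter (logic-1 only at state 1111 while counting)
# enables the high-order counter for the same clock edge.
def cascade_pulse(low, high):
    """low, high: the 4-bit counts (0-15) of the two counters."""
    carry_out = 1 if low == 15 else 0   # CO of the low-order counter
    low = (low + 1) % 16                # always enabled
    if carry_out:                       # count enable of the high counter
        high = (high + 1) % 16
    return low, high

low, high = 15, 3                       # 8-bit count 0011 1111 = 63
low, high = cascade_pulse(low, high)    # next edge advances to 64
```

Because CO is already logic-1 before the edge, both counters respond to the same pulse: the low counter rolls over to 0000 while the high counter increments, exactly as described for Fig. 6.36.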
Many different types of MSI counters are commercially available. These in-
clude counters of both the asynchronous and synchronous types. Commercial coun-
ters may provide for downward counting as well as upward counting.

[Figure 6.36: two 4-bit counter symbols receiving the same count pulses; the carry output (CO) of the counter holding the four least significant bits drives the count enable of the counter holding the four most significant bits.]

Figure 6.36 8-bit synchronous binary counter constructed from two 4-bit synchronous binary counters.

6.8.3 Counters Based on Shift Registers


As will become evident in the future chapters, the overall operation of a digital sys-
tem is divided into a sequence of time periods. In order to determine the various
time periods, one approach is to assign them to states of a binary counter. Then, by
incorporating a decoder along with the counter, the various states, and, correspond-
ingly, time periods, are identified. Rather than using a decoder, a nonbinary counter
could be used whose counting sequence provides a series of patterns that simplify
the detection of its states, possibly at the cost of increasing the number of flip-flops
used. Examples of such nonbinary counters, based on the structure of the shift regis-
ter, are the ring counter and the switch-tail counter.
A ring counter is a circular shift register that is initialized so that only one of
its flip-flops is in the 1-state, while the others are in their 0-states. Then, upon the

[Figure 6.37a: circular shift-right register of four D flip-flops, with the output QD fed back to the leftmost D input.]

(a)

QA QB QC QD
 1  0  0  0
 0  1  0  0
 0  0  1  0
 0  0  0  1
 1  0  0  0
etc.

(b)

Figure 6.37 Mod-4 ring counter. (a) Logic diagram. (b) Counting sequence.

occurrence of each count pulse, the single 1 is shifted to its adjacent flip-flop. As
a consequence, a ring counter consisting of n flip-flops has only n states in its
counting sequence. Figure 6.37a shows a circular shift-right register. This con-
figuration is capable of serving as a mod-4 ring counter. If it is assumed that the
counter is initialized to its QAQBQCQD = 1000 state, then the counting sequence
given in Fig. 6.37b results.
Although the ring counter is not efficient in the number of flip-flops used, it
provides a decoded output. That is, to detect any particular state in the counting se-
quence, it is only necessary to interrogate the output of a single flip-flop. For exam-
ple, the 0001 state is readily detected by observing the output terminal QD. When-
ever a logic-1 value appears at this terminal, the state of the counter is known to be
0001. Similarly, the determination of any other state only requires observing the
output of a single flip-flop.
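A behavioral sketch of the mod-4 ring counter (ours; the one-line shift models the circular register):

```python
# Sketch of the mod-4 ring counter of Fig. 6.37: a circular shift-right
# register initialized with a single 1, so every state is decoded by
# looking at one flip-flop output.
def ring_pulse(q):
    return [q[-1]] + q[:-1]     # rightmost output feeds the left stage

q = [1, 0, 0, 0]                # QA QB QC QD = 1000
states = []
for _ in range(4):
    states.append(list(q))
    q = ring_pulse(q)
```

The four visited states each contain exactly one 1, so observing a single flip-flop output suffices to detect any state, as noted above.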
A variation of the ring counter is the switch-tail counter, also known as the
twisted-ring counter or Johnson counter. This counter is illustrated in Fig. 6.38a
and its counting sequence is given in Fig. 6.38b assuming the counter starts in the
QAQBQCQD = 0000 state. In this counter, the complement of the rightmost flip-flop
serves as the input to the leftmost flip-flop in the shift-right register configuration.

[Figure 6.38a: shift-right register of four D flip-flops, with the complement Q̄D of the rightmost stage fed back to the leftmost D input.]

(a)

              And-gate
QA QB QC QD   inputs
 0  0  0  0   Q̄A Q̄D
 1  0  0  0   QA Q̄B
 1  1  0  0   QB Q̄C
 1  1  1  0   QC Q̄D
 1  1  1  1   QA QD
 0  1  1  1   Q̄A QB
 0  0  1  1   Q̄B QC
 0  0  0  1   Q̄C QD
etc.

(b)
Figure 6.38 Mod-8 twisted-ring counter. (a) Logic diagram. (b) Counting sequence.

[Figure 6.39a: shift-right register of four D flip-flops, with the and of Q̄C and Q̄D fed back to the leftmost D input.]

(a)

              And-gate
QA QB QC QD   inputs
 0  0  0  0   Q̄A Q̄D
 1  0  0  0   QA Q̄B
 1  1  0  0   QB Q̄C
 1  1  1  0   QC Q̄D
 0  1  1  1   Q̄A QB
 0  0  1  1   Q̄B QC
 0  0  0  1   Q̄C QD
etc.

(b)

Figure 6.39 Mod-7 twisted-ring counter. (a) Logic diagram. (b) Counting sequence.

As a result of this connection, 2n states occur in the counting sequence of an n-stage


counter.
Unlike the ring counter, to detect any particular state in the counting sequence
of a twisted-ring counter it is necessary to incorporate some logic elements. Refer-
ring to the counting sequence of Fig. 6.38b, it is readily noted that the underlined
pairs of bits uniquely determine a state. Thus, only a single two-input and-gate,
whose logic expression is also given in Fig. 6.38, is required to obtain a decoded
output.
A twisted-ring counter having the above structure always has an even number
of states in its counting sequence. A twisted-ring counter having an odd number of
states is shown in Fig. 6.39a and its counting sequence is given in Fig. 6.39b. In
this variation, the state consisting of all 1’s is eliminated from the counting se-
quence. This is achieved by connecting the and of Q̄C and Q̄D to the input of the
leftmost D flip-flop. Again, each state is detectable by use of a single two-input
and-gate as indicated in Fig. 6.39b.
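Both twisted-ring variations can be sketched behaviorally (our own models; the feedback functions follow the connections described above):

```python
# Sketch of the twisted-ring (Johnson) counters of Figs. 6.38 and 6.39.
def johnson_pulse(q):
    """Mod-2n: the complement of the rightmost stage feeds the left."""
    return [1 - q[-1]] + q[:-1]

def johnson_odd_pulse(q):
    """Mod-(2n-1) variation: feed back (not QC) and (not QD), which
    skips the all-1's state."""
    return [(1 - q[-2]) & (1 - q[-1])] + q[:-1]

# The mod-8 counter cycles through 8 states starting from 0000:
q, n = [0, 0, 0, 0], 0
while True:
    q = johnson_pulse(q)
    n += 1
    if q == [0, 0, 0, 0]:
        break
```

Counting the pulses until the state returns to 0000 gives cycle lengths of 2n = 8 for the first feedback and 2n − 1 = 7 for the second, confirming that the odd variation drops exactly one state.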

6.9 DESIGN OF SYNCHRONOUS COUNTERS


The synchronous counter was introduced in the previous section. This type of
counter is characterized by the count pulses being applied directly to the control
inputs, C, of the clocked flip-flops that comprise the counter. As a result, all the
flip-flops change simultaneously and the new state of the counter is observable
in a minimum amount of time. At that time, emphasis was on the binary counter
and counters based on shift registers. However, depending upon the application,
other types of counters might be desirable, e.g., one that counts according to the
Gray code.
It was also previously mentioned that a counter is a pattern generator in which
the counting sequence serves as the order in which a series of various patterns is
produced. These patterns can then be used to enable or disable various portions of a
logic network so as to control its behavior. The use of counters as pattern generators

Table 6.3 Counting sequence for a
mod-6 counter

Q1 Q2 Q3
 0  0  0
 0  1  0
 ·  ·  ·
 0  0  0
etc.

is further explored in the next two chapters. In any event, nonbinary counting se-
quences are often desirable.
At this time a general procedure is developed for designing synchronous coun-
ters having a prespecified output pattern sequence. For illustrative purposes, syn-
chronous mod-6 counters having the counting sequence shown in Table 6.3 are de-
signed using the four types of clocked flip-flops introduced in this chapter.

6.9.1 Design of a Synchronous Mod-6 Counter


Using Clocked JK Flip-Flops
To begin the design of the synchronous mod-6 counter, consider its general
structure assuming the use of clocked JK flip-flops. This is shown in Fig. 6.40.
The three clocked JK flip-flops have the count pulses applied directly to their
control inputs, C. The count pulses may be clock signals or they may originate

Count
pulses

Logic network

Figure 6.40 General structure of a synchronous mod-6 counter using positive-
edge-triggered JK flip-flops.

from some other source. The current state of the counter is applied to a logic net-
work. The function of the logic network is to generate the appropriate signals for
the J and K terminals of the clocked flip-flops so that the specified next state in
the counting sequence results upon the occurrence of the triggering edge of a
count pulse. What needs to be designed is the appropriate logic network. In this
case, the logic structure of this network can be described by six Boolean expres-
sions, one for each of the six inputs to the three flip-flops, in terms of the
Boolean variables Q1, Q2, and Q3 that correspond to the present state of the
counter. To obtain these expressions, a truth table for the logic network, called an
excitation table, is first developed and then the simplified Boolean expressions
are obtained.
Table 6.4 shows the excitation table for the synchronous mod-6 counter. It is
divided into three sections labeled present state, next state, and flip-flop inputs. At
this point, the first two sections can be completed. The counting sequence is listed
in the present-state section and the desired next state for each present state is en-
tered in the next-state section.
Before the third section can be filled-in, it is necessary to consider the terminal
behavior of a clocked JK flip-flop. In general, there are four distinct actions that a
flip-flop can undergo as a consequence of a triggering signal at its control input. In
particular, a flip-flop should remain in its 0-state, a flip-flop should remain in its
1-state, a flip-flop should go from its 0-state to its 1-state, and, finally, a flip-flop
should go from its 1-state to its 0-state. To see how these four actions are achieved,
it is necessary to consider the flip-flop next-state tables previously established in
Table 6.2. Using the table for the JK flip-flop, it is seen that the conditions for a JK
flip-flop to remain in its 0-state are given by the first and third rows where Q = 0
and Q+ = 0. In the first row J = 0, K = 0 and in the third row J = 0, K = 1. Thus,
for a JK flip-flop to remain in its 0-state upon the occurrence of a triggering signal
on its control input, a logic-0 must appear at its J input terminal but either a logic-0
or a logic-1 may appear at its K input terminal. This is summarized by the first row
of the JK flip-flop application table given in Table 6.5, where the dash denotes a
don’ t-care.

Table 6.4 Excitation table for a synchronous mod-6 counter using clocked
JK flip-flops
Present state Next state Flip-flop inputs

Table 6.5 Application table for a clocked
JK flip-flop

Q  Q+ | J  K
0  0  | 0  -
0  1  | 1  -
1  0  | -  1
1  1  | -  0

Continuing this analysis, the fifth and seventh rows of the JK flip-flop next-state
table shown in Table 6.2 indicate the necessary conditions at the J and K terminals
when it is required to have it change from its present 0-state to its 1-state upon the
occurrence of a triggering signal. From the fifth row it is seen that this occurs when
J = 1, K = 0 and from the seventh row it is seen that this occurs when J = 1, K = 1.
Thus, it immediately follows that to change a JK flip-flop from its 0-state to its
1-state, it is necessary that J = 1 and that K can be either logic-0 or logic-1. This
condition is given by the second row of Table 6.5.
The remaining two rows of Table 6.5 are again obtained from Table 6.2. From
the fourth and eighth rows of the JK flip-flop next-state table, it follows that the
action of changing a JK flip-flop from its 1-state to its 0-state requires that K = 1
and that either a logic-0 or a logic-1 appear at the J input terminal. Finally, the
second and sixth rows of the JK flip-flop next-state table lead to the last row of the
JK application table shown in Table 6.5, denoting the conditions needed for a JK
flip-flop to remain in its 1-state. In this case, K = 0 and J is either a logic-0 or a
logic-1.
Returning to Table 6.4 for the synchronous mod-6 counter, it is now a simple
matter to determine the logic signals that must be applied to the three JK flip-flops
in order to produce the present-state to next-state transitions specified in each row.
For example, when the present state of the counter is Q1Q2Q3 = 000, its next state is
to be Q1+Q2+Q3+ = 010. Flip-flop Q1 must remain in its 0-state. As indicated in Table
6.5, this is achieved by having J1 = 0 and K1 = -. Thus, these become the first two
entries in the first row of the flip-flop inputs section of Table 6.4. Similarly, since
flip-flop Q2 must go from its 0-state to its 1-state, this is achieved by having J2 = 1
and K2 = - according to Table 6.5. Finally, the last pair of entries in the first row of
the mod-6 counter excitation table, J3 = 0 and K3 = -, corresponds to the necessary
conditions for flip-flop Q3 to remain in its 0-state. The remaining entries in the flip-
flop inputs section are determined row by row in a similar manner, thus completing
Table 6.4.
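The application-table lookup just described is mechanical enough to sketch in a few lines (an illustrative model of ours; None stands for a don't-care dash):

```python
# Sketch: the JK application table of Table 6.5 as a lookup, used to
# fill in the flip-flop inputs section of an excitation table.
JK_APP = {
    (0, 0): (0, None),   # remain in 0-state: J = 0, K = don't-care
    (0, 1): (1, None),   # 0 -> 1:            J = 1, K = don't-care
    (1, 0): (None, 1),   # 1 -> 0:            J = don't-care, K = 1
    (1, 1): (None, 0),   # remain in 1-state: J = don't-care, K = 0
}

def jk_inputs(present, nxt):
    """Per-flip-flop (J, K) pairs for one present/next state row."""
    return [JK_APP[(p, n)] for p, n in zip(present, nxt)]

# First row of Table 6.4: present state 000, next state 010.
row = jk_inputs((0, 0, 0), (0, 1, 0))
```

For the first row this yields J1 = 0, J2 = 1, J3 = 0 with all three K inputs don't-cares, matching the entries derived in the text.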
Referring to Fig. 6.40 and Table 6.4, the inputs to the logic network corre-
spond to the present-state section of the excitation table and the outputs from the
logic network correspond to the flip-flop inputs section of the excitation table.
Thus, the first and third sections of the counter's excitation table are really the
truth table for the logic network. Using just these two sections, the six Karnaugh

[Figure 6.41: six Karnaugh maps, one for each of the flip-flop input functions J1, K1, J2, K2, J3, and K3, with a minimal-sum expression written beneath each map.]

Figure 6.41 Determination of the minimal-sum expressions for a
synchronous mod-6 counter using clocked JK flip-flops.

maps of Fig. 6.41 are drawn. The six maps correspond to the six flip-flop input
functions and a cell of the map corresponds to a present state of the counter.
Thus, for example, from the first row of Table 6.4, Q1Q2Q3 = 000, entries are
made into the upper left cell of each map for the appropriate values of J1, K1, J2,
K2, J3, and K3. After considering the remaining five rows of Table 6.4, six of the
eight cells of each map have entries. Finally, the two cells Q1Q2Q3 = 100 and
111 correspond to the two states that do not occur in the counting sequence.
Hence, dashes are placed in these two cells of each map since these present states
should never occur. A minimal-sum expression for each map is also included in
Fig. 6.41. These expressions lead to the logic diagram of Fig. 6.42 for the syn-
chronous mod-6 counter. Although minimal-sum expressions were written, minimal-
product expressions could have been obtained instead by grouping the 0's
and the don’t-cares.

Count
pulses

Figure 6.42 Logic diagram of a synchronous mod-6 counter.

6.9.2 Design of a Synchronous Mod-6 Counter


Using Clocked D, T, or SR Flip-Flops
The mod-6 counter of Table 6.3 can equally well be designed using clocked D, T,
or SR flip-flops. The structure of Fig. 6.40 is still applicable except the JK flip-
flops are replaced by some other type of clocked flip-flops. To describe the logic
network, an excitation table again is constructed. The first two sections, i.e., the
present-state and next-state sections, are the same as those of Table 6.4. How-
ever, the third section, i.e., the flip-flop inputs section, must correspond to the
type of flip-flop being used. To complete the third section, it is necessary to first
determine how the four possible state transitions are achieved using the specified
type of flip-flop. Once this is done, the third section is readily completed. The
first and third sections again correspond to a truth table from which minimal ex-
citation expressions for the flip-flop inputs can be obtained and the logic diagram
drawn.
Assume that clocked D flip-flops are to be used to design the synchronous
mod-6 counter of Table 6.3. Referring to the D flip-flop next-state table shown in
Table 6.2, the D flip-flop application table given in Table 6.6 immediately follows.

Table 6.6 Application table for a clocked
D flip-flop

Q  Q+ | D
0  0  | 0
0  1  | 1
1  0  | 0
1  1  | 1

Table 6.7 Excitation table for a synchronous mod-6 counter using clocked
D flip-flops

Present state | Next state | Flip-flop inputs

This table simply states that whatever next-state value is needed of a clocked D flip-
flop, that logic value should appear at its D input terminal upon the occurrence of a
triggering edge at its control input. Using the information of Table 6.6, the third
section of the mod-6 counter excitation table is completed in a manner analogous to
that done previously for JK flip-flops. This is shown in Table 6.7.* Finally, regard-
ing the first and third sections of Table 6.7 as a truth table, the Karnaugh maps for
the three output functions, D1, D2, and D3, are constructed as shown in Fig. 6.43.
Again don’t-cares occur in the two cells of each map corresponding to the two un-
used states of the counting sequence, i.e., Q1Q2Q3 = 100 and 111. From these maps,
minimal expressions for the logic network are obtained. In this case, a minimal-sum
expression describing the logic preceding each D flip-flop is written beneath its
Karnaugh map. Once the expressions for the logic network are established, the logic
diagram can be drawn.
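The mechanical part of this procedure is easy to automate as a cross-check. The sketch below is illustrative only: Table 6.3 is not reproduced in this excerpt, so the counting sequence 000, 010, 011, 110, 101, 001 (with 100 and 111 unused) is an assumption inferred from the transitions discussed later in this section. For D flip-flops, the flip-flop inputs section simply duplicates the next-state section.

```python
# Sketch: generate the excitation table for the mod-6 counter built from
# clocked D flip-flops. ASSUMPTION: the Table 6.3 counting sequence is
# 000, 010, 011, 110, 101, 001 (100 and 111 unused); see the lead-in above.
SEQUENCE = ["000", "010", "011", "110", "101", "001"]

def d_excitation_table(sequence):
    """Rows of (present state, next state, D1 D2 D3); for D flip-flops Di = Qi+."""
    rows = []
    for i, present in enumerate(sequence):
        nxt = sequence[(i + 1) % len(sequence)]
        rows.append((present, nxt, nxt))  # flip-flop inputs equal the next state
    return rows

for present, nxt, d in d_excitation_table(SEQUENCE):
    print(f"Q1Q2Q3 = {present}   Q1+Q2+Q3+ = {nxt}   D1D2D3 = {d}")
```

Each printed row corresponds to one line of the excitation table; regarding the first and third columns as a truth table reproduces the inputs to the Karnaugh maps.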
The above procedure is readily modified when clocked T flip-flops are used for
the counter. As was done previously, the application table for a clocked T flip-flop
is first obtained and then the excitation table for the synchronous mod-6 counter is
constructed. Table 6.8 gives the T flip-flop application table. This table again imme-
diately follows from the T flip-flop next-state table given in Table 6.2 by noting
what logic value should be applied to the T input terminal for each of the four
present-state/next-state combinations. According to the clocked T flip-flop applica-
tion table, a logic-1 is needed at the T input terminal if the flip-flop is to change
state upon the occurrence of a triggering edge at its control input; otherwise, a
logic-0 should occur at the T input terminal. Using Table 6.8, the third section of the
synchronous mod-6 counter excitation table shown in Table 6.9 is completed. From
the first and third sections of Table 6.9, the Karnaugh maps for the synchronous
mod-6 counter with clocked T flip-flops, given in Fig. 6.44, are obtained. Using
these maps, minimal sums are easily written.
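The T flip-flop application table reduces to a one-line rule: T must be logic-1 exactly when the flip-flop changes state, i.e., Ti = Qi XOR Qi+. A sketch of the third-section computation follows, using the same assumed counting sequence as before, since Table 6.3 is not reproduced in this excerpt.

```python
# Sketch: compute the T flip-flop inputs column of the excitation table.
# Ti = Qi XOR Qi+ (logic-1 exactly when flip-flop i must change state).
# ASSUMPTION: counting sequence of Table 6.3 taken as 000, 010, 011,
# 110, 101, 001, as inferred elsewhere in this section.
SEQUENCE = ["000", "010", "011", "110", "101", "001"]

def t_inputs(present, nxt):
    """T1 T2 T3 string for one present-state/next-state row."""
    return "".join(str(int(p) ^ int(n)) for p, n in zip(present, nxt))

for i, present in enumerate(SEQUENCE):
    nxt = SEQUENCE[(i + 1) % len(SEQUENCE)]
    print(present, "->", nxt, "  T1T2T3 =", t_inputs(present, nxt))
```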

*Since Di = Qi+ for each flip-flop, it should be no surprise that the next-state and flip-flop inputs sections of Table 6.7 are identical.

D3 = Q1 + Q2Q3'

Figure 6.43 Determination of the minimal-sum expressions for a synchronous mod-6 counter using clocked D flip-flops.

Table 6.8 Application table for a clocked T flip-flop

Table 6.9 Excitation table for a synchronous mod-6 counter using clocked T flip-flops

Present state   Next state    Flip-flop inputs
Q1 Q2 Q3        Q1+ Q2+ Q3+   T1 T2 T3
0  0  0         0   1   0     0  1  0
0  1  0         0   1   1     0  0  1
0  1  1         1   1   0     1  0  1
1  1  0         1   0   1     0  1  1
1  0  1         0   0   1     1  0  0
0  0  1         0   0   0     0  0  1
T3 = Q2 + Q1'Q3

Figure 6.44 Determination of the minimal-sum expressions for a synchronous mod-6 counter using clocked T flip-flops.

Table 6.10 Application table for a clocked SR flip-flop

Finally, consider the design of the synchronous mod-6 counter, having the
counting sequence given in Table 6.3, with clocked SR flip-flops. Table 6.10 gives
the necessary SR flip-flop application table from which the flip-flop inputs section of
the counter excitation table is completed. Table 6.2 is used to construct the SR flip-
flop application table by using the same type of analysis previously used to obtain the
JK flip-flop application table. However, since an SR flip-flop has nonallowable input
combinations, these combinations must not be used in forming Table 6.10. Thus,
when a clocked SR flip-flop is to change from its 0-state to its 1-state, according to
Table 6.2 this is achieved only if S = 1 and R = 0. Table 6.11, the excitation table
for the synchronous mod-6 counter using SR flip-flops, is next constructed. From this
table the Karnaugh maps of Fig. 6.45 are formed and the minimal sums written.
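The SR application table can likewise be encoded as a small lookup, with '-' standing for a don't-care; note that S = R = 1, the nonallowable combination, never appears. The mapping below follows the rules stated in the text (for example, a 0-to-1 transition requires S = 1 and R = 0); the helper function is a hypothetical convenience, not something from the book.

```python
# Sketch of the SR application table (Table 6.10): required S and R values
# for each single-flip-flop transition; '-' marks a don't-care. The
# nonallowable combination S = R = 1 never occurs.
SR_APPLICATION = {
    ("0", "0"): ("0", "-"),  # hold 0: S must be 0, R is free
    ("0", "1"): ("1", "0"),  # set:    S = 1, R = 0 (as stated in the text)
    ("1", "0"): ("0", "1"),  # reset:  S = 0, R = 1
    ("1", "1"): ("-", "0"),  # hold 1: R must be 0, S is free
}

def sr_inputs(present, nxt):
    """(S1 S2 S3, R1 R2 R3) strings for a multi-bit state transition."""
    pairs = [SR_APPLICATION[(p, n)] for p, n in zip(present, nxt)]
    return "".join(s for s, _ in pairs), "".join(r for _, r in pairs)

print(sr_inputs("000", "010"))  # ('010', '-0-'): only flip-flop 2 is set
```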

6.9.3 Self-Correcting Counters


The counting sequence of the mod-6 counter given in Table 6.3 did not include
states 100 and 111. Once the counter is designed, definite next states result if either
of these states should occur.
When a system is initially started, i.e., when power is first applied to the net-
work, the initial states of the flip-flops are unpredictable. Consequently, either of
the two states 100 and 111 can occur. As a solution to this problem, the counter
could be initialized prior to its use. One way in which this can be done is by ap-
plying appropriate signals to the asynchronous input terminals of the flip-flops to
force them into the first state of the counting sequence. However, noise signals

Table 6.11 Excitation table for a synchronous mod-6 counter using clocked SR flip-flops

Present state   Next state    Flip-flop inputs
Q1 Q2 Q3        Q1+ Q2+ Q3+   S1 R1   S2 R2   S3 R3
0  0  0         0   1   0     0  -    1  0    0  -
0  1  0         0   1   1     0  -    -  0    1  0
0  1  1         1   1   0     1  0    -  0    0  1
1  1  0         1   0   1     -  0    0  1    1  0
1  0  1         0   0   1     0  1    0  -    -  0
0  0  1         0   0   0     0  -    0  -    0  1

S3 = Q2Q3'    R3 = Q1'Q3

Figure 6.45 Determination of the minimal-sum expressions for a synchronous mod-6 counter using clocked SR flip-flops.

can also cause the counter to enter one of its initially unused states. Thus, it is of
interest to consider the behavior of a counter under the assumption that the unused
states of a counting sequence occur. A counter in which all the states not included
in the original counting sequence eventually lead to the normal counting sequence
after one or more count pulses are applied to the control inputs, C, is said to be
self-correcting. To avoid having the counter “hang up,” it should always be de-
signed as self-correcting.
Again consider the synchronous mod-6 counter previously designed with JK flip-flops. State Q1Q2Q3 = 100 was not included in the counting sequence. Substituting these values into the flip-flop input equations obtained in Fig. 6.41 for the realization, it is seen that J1 = 0, K1 = 1, J2 = 1, K2 = 1, J3 = 0, and K3 = 0. Consequently, upon the occurrence of the count pulse, flip-flop Q1 resets, flip-flop Q2 toggles, and flip-flop Q3 remains unchanged, with the net result that the next state of the counter is Q1+Q2+Q3+ = 010. Hence, a valid state of the counting sequence is

Figure 6.46 Complete state diagram for the synchronous mod-6 counter of Fig. 6.42.

reached if the initially unused state should occur. In a similar manner, it is easily
checked that Q1Q2Q3 = 111 leads to the valid next state Q1+Q2+Q3+ = 101 in the
counting sequence. Figure 6.46 shows the complete state diagram for the synchro-
nous mod-6 counter whose realization was given in Fig. 6.42. Included are the ef-
fects of the two initially unused states. Since these states lead to the normal count-
ing sequence, the realization is that of a self-correcting counter. In an analogous
manner, it can be shown that the other three realizations of the synchronous mod-6
counter are also self-correcting.
Counters having no unused states are always self-correcting. It is the assignment
to the don’t-cares associated with the unused states when the logic expressions are
obtained that can cause a counter not to be self-correcting. By actually specifying the
next states for each of the unused states in the counting sequence prior to construct-
ing the Karnaugh maps, a self-correcting counter realization can be guaranteed.
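This check can also be carried out mechanically: given the complete next-state map of a realization, including the next states the logic actually produces for the unused states, confirm that every state eventually re-enters the counting cycle. The sketch below uses the two unused-state transitions derived above for the JK realization of Fig. 6.42; the function name is an illustrative choice.

```python
# Sketch: mechanical self-correction check. NEXT includes the unused-state
# transitions quoted in the text for the JK realization (100 -> 010 and
# 111 -> 101) alongside the normal counting cycle.
NEXT = {"000": "010", "010": "011", "011": "110",
        "110": "101", "101": "001", "001": "000",
        "100": "010", "111": "101"}
CYCLE = {"000", "010", "011", "110", "101", "001"}

def is_self_correcting(next_map, cycle):
    """True if every state eventually reaches the normal counting cycle."""
    for state in next_map:
        seen, s = set(), state
        while s not in cycle:       # follow count pulses until the cycle...
            if s in seen:
                return False        # ...unless trapped among unused states
            seen.add(s)
            s = next_map[s]
    return True

print(is_self_correcting(NEXT, CYCLE))  # True: the counter cannot hang up
```

Replacing the two unused-state entries with transitions that loop between 100 and 111 would make the same function return False, which is exactly the "hang up" condition the text warns against.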

CHAPTER 6 PROBLEMS
6.1 Design a switch debouncer using an SR latch.
6.2 The input signals shown in Fig. P6.2 are applied to the SR latch of Fig. 6.2a
when initially in its 0-state. Sketch the Q and Q' output signals. Assume all
timing constraints are satisfied.

Figure P6.2

6.3 The input signals shown in Fig. P6.3 are applied to the SR latch of Fig. 6.4a
when initially in its 0-state. Sketch the Q and Q' output signals. Assume all
timing constraints are satisfied.

Time

Figure P6.3

6.4 The input signals shown in Fig. P6.4 are applied to the gated SR latch of Fig.
6.5a when initially in its 0-state. Sketch the Q and Q' output signals. Assume
all timing constraints are satisfied.

Figure P6.4

6.5 The input signals shown in Fig. P6.5 are applied to the gated D latch of Fig.
6.6a when initially in its 0-state. Sketch the Q and Q' output signals. Assume
all timing constraints are satisfied.

Figure P6.5

6.6 The input signals shown in Fig. P6.6 are applied to the master-slave SR flip-
flop of Fig. 6.12a when initially in its 0-state. Sketch the QM, QM', QS, and QS'
output signals. Assume all timing constraints are satisfied.

Figure P6.6

6.7 The input signals shown in Fig. P6.7 are applied to the master-slave JK flip-
flop of Fig. 6.14a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.

Figure P6.7

6.8 The input signals shown in Fig. P6.8 are applied to the master-slave D flip-
flop of Fig. 6.16a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.

Figure P6.8

6.9 The input signals shown in Fig. P6.9 are applied to the master-slave T flip-
flop of Fig. 6.17a when initially in its 0-state. Sketch the QM and QS output
signals. Assume all timing constraints are satisfied.

Time

Figure P6.9

6.10 A logic diagram and function table for a proposed gated JK latch are shown in
Fig. P6.10. Discuss the problems that can be encountered with this network
and under what constraints proper JK flip-flop behavior is achieved.


Figure P6.10

6.11 The input signals shown in Fig. P6.8 are applied to the positive-edge-
triggered D flip-flop of Fig. 6.18a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.12 The input signals shown in Fig. P6.7 are applied to the positive-edge-
triggered JK flip-flop of Fig. 6.22a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.13 The input signals shown in Fig. P6.9 are applied to the positive-edge-
triggered T flip-flop of Fig. 6.23a when initially in its 0-state. Sketch the Q
output signal. Assume all timing constraints are satisfied.
6.14 The input signals shown in Fig. P6.7 are applied to the master-slave JK flip-
flop with data lockout of Fig. 6.24a when initially in its 0-state. Sketch the
QM and QS output signals. Assume all timing constraints are satisfied.

6.15 The positive-edge-triggered D flip-flop shown in Fig. P6.15a has the signals
of Fig. P6.15 applied when initially in its 0-state. Sketch the Q output
signal. Assume all timing constraints are satisfied.


Figure P6.15

6.16 Show that for the positive-edge-triggered D flip-flop of Fig. 6.21a, it resets
when PR = 1 and CLR = 0 and that the C and D inputs have no effect on its
behavior. Show that the flip-flop sets when PR = 0 and CLR = 1.
6.17 Show that the master-slave configuration involving two gated D latches as
given in Fig. P6.17 is best described by the positive-edge-triggered D flip-
flop function table of Fig. 6.18.

Figure P6.17

6.18 Verify the characteristic equations for the JK, D, and T flip-flops given in
Fig. 6.25b by constructing the appropriate Karnaugh maps and obtaining the
minimal sums.
6.19 Assume the shift register of Fig. 6.26 initially contains 1101. What is the
content of the register after the positive edge of each clock signal if the
values occurring on the serial-data-in line are 1, 1, 0, 1, 0, 1, and 0 in that
order?

6.20 Modify the register of Fig. 6.26 so that it is synchronously cleared. That is,
incorporate a Clear/shift control input line in which, upon the occurrence of
the clock signal, all the flip-flops enter their 0-state when the control signal is
0 and the register behaves as a shift-right register when the control signal is 1.
6.21 Design a register, incorporating four multiplexers and four positive-edge-
triggered D flip-flops, having the behavior specified in Table P6.21.

Table P6.21

Select lines
S1 S0    Register operation
0  0     Hold
0  1     Synchronous clear
1  0     Complement contents
1  1     Circular shift right

6.22 For a 3-bit binary ripple up-counter similar to the one in Fig. 6.31, draw the
Count pulse, Q0, Q1, and Q2 signals assuming a propagation delay of t for
each flip-flop.
6.23 Design a 4-bit binary ripple up-counter using negative-edge-triggered JK
flip-flops.
6.24 Design a 4-bit binary ripple up-counter using positive-edge-triggered D flip-
flops. Do not include a count-enable line.
6.25 Design a 4-bit binary ripple down-counter using positive-edge-triggered T
flip-flops.
6.26 Using a structure similar to that of Fig. 6.33, design a 4-bit synchronous
binary down-counter.
6.27 a. Using a structure similar to that of Fig. 6.33, design a 4-bit synchronous
binary up/down-counter having a count enable input line and an
up/down input line. When the signal value on the up/down input line is
logic-1, the counter should behave as a binary up counter; when the
signal value on the up/down input line is logic-0, the counter should
behave as a binary down counter.
b. Repeat part (a) using a structure similar to that of Fig. 6.34. The CO output
should be logic-1 when the counter is going down and the counter state is
0000 and when the counter is going up and the counter state is 1111.
6.28 Using the counter of Fig. 6.34, design a mod-5 counter
a. whose counting sequence consists of its first five states, i.e., 0000, 0001,
..., 0100, 0000, etc.
b. whose counting sequence consists of its last five states, i.e., 1011, 1100,
..., 1111, 1011, etc.

6.29 Modify the synchronous mod-256 binary counter shown in Fig. 6.36 to
become a synchronous mod-77 binary counter.

6.30 Realize the 4-bit ring counter of Fig. 6.37 using the universal shift register of
Fig. 6.29. Use the parallel load capability of the register to initialize the
counter.

6.31 Realize the 4-bit twisted-ring counter of Fig. 6.38 using the universal shift
register of Fig. 6.29. Use the asynchronous clear capability of the register to
initialize the counter.

6.32 Using the general design procedure of Sec. 6.9, design a synchronous
mod-16 binary counter by obtaining its minimal-sum equations. Use
positive-edge-triggered D flip-flops.

6.33 Using the general design procedure of Sec. 6.9, design a synchronous mod-
10 binary counter, i.e., one whose counting sequence corresponds to the first
10 binary numbers, by obtaining its minimal-sum equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.
c. Use positive-edge-triggered T flip-flops.
d. Use positive-edge-triggered SR flip-flops.
e. For the design of part (a), determine if the counter is self-correcting by
constructing the complete state diagram.

6.34 Design a synchronous mod-10 counter whose counting sequence


corresponds to the 5421 code (see Table 2.7) by obtaining its minimal-sum
equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.
c. Use positive-edge-triggered T flip-flops.
d. Use positive-edge-triggered SR flip-flops.

6.35 Design a synchronous mod-10 counter whose counting sequence


corresponds to the 7536 code (see Table 2.7) by obtaining its minimal-sum
equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.
c. Use positive-edge-triggered T flip-flops.

6.36 Design a synchronous mod-6 counter whose counting sequence is 000, 001,
100, 110, 111, 101, 000, etc., by obtaining its minimal-sum equations.
a. Use positive-edge-triggered JK flip-flops.
b. Use positive-edge-triggered D flip-flops.

c. Use positive-edge-triggered T flip-flops.


d. Use positive-edge-triggered SR flip-flops.
For each of the above designs, determine if the counter is self-correcting
by drawing the complete state diagram.

Logic-1
Count
pulses

Figure P6.37

6.37 Consider the synchronous counter shown in Fig. P6.37 constructed with
positive-edge-triggered flip-flops. Assuming it is initialized to 000 prior to
the first count pulse, determine the counting sequence. Is this counter self-
correcting?
6.38 From the Karnaugh maps of Fig. 6.43, it is seen that the synchronous mod-6
counter could also be realized from the equations
D1 = Q2Q3 + Q1Q3'

D2 = Q1'Q3' + Q2Q3

D3 = Q1 + Q2Q3'
Determine the effect of the two unused states 100 and 111 based on such a
realization.
6.39 It is a simple matter to formally convert any type of clocked flip-flop into
another type using the same triggering scheme. For example, Fig. P6.39a
shows the necessary structure to convert a positive-edge-triggered JK flip-
flop into a positive-edge-triggered AB flip-flop whose simplified function
table is given in Fig. P6.39b. To do this, the three-section table shown in Fig.
P6.39c is completed. For each combination of the AB flip-flop inputs and
present state, the Q+ section indicates the required next state of the AB flip-flop and the JK section indicates how the state transition is achieved using a
JK flip-flop. The first and third sections of the completed table provide a

truth table for the logic network. Using this approach, design a positive-
edge-triggered AB flip-flop.


Figure P6.39

6.40 Using the approach suggested in Problem 6.39,


a. convert a positive-edge-triggered D flip-flop into a positive-edge-
triggered AB flip-flop.
b. convert a positive-edge-triggered T flip-flop into a positive-edge-
triggered AB flip-flop.
c. convert a positive-edge-triggered D flip-flop into a positive-edge-
triggered JK flip-flop.
6.41 Using the positive-edge-triggered AB flip-flop described in Problem 6.39,
design a synchronous mod-6 counter whose counting sequence is 000, 110,
101, 011, 010, 001, 000, etc., by obtaining its minimal-sum equations.
Synchronous Sequential Networks

In the previous chapter, several examples of logic networks incorporating flip-flops
were studied. Those networks are examples of synchronous sequential networks.
Also at that time, a general design procedure was introduced for counters. In this
chapter, a more detailed and formal study of synchronous sequential networks is
undertaken.
What distinguishes sequential networks from combinational networks is the ex-
istence of memory. In a combinational network, the outputs at any instant of time
are dependent only upon the inputs present at that instant. However, in the case of
sequential networks, in addition to the outputs at any instant being dependent upon
the inputs present at that instant, they are also dependent upon the past history, i.e.,
the sequence, of inputs. At any time, the entire past history of inputs is preserved
by the network as internal information that is referred to as the present state, or,
simply, state, of the network. In this way, the outputs of a sequential network are
only a function of the present external inputs and its present state. Furthermore,
upon the arrival of a new set of external input signals, the network must enter a
new state since the past history of inputs must now include the last previous input
signals. Thus, the external inputs and present state determine the next state of the
network.
Figure 7.1 shows a general model of a sequential network corresponding to
the above discussion. The box labeled “memory” provides for the preservation of
information about the past history of inputs. In a physical system, flip-flops are
frequently used for holding this information. The current outputs of the flip-flops
denote the present state of the system. The inputs to the memory box are associ-
ated with the next state that, in turn, becomes available as a present state at some
appropriate later time. Thus, it is observed that a delay time is associated with the
memory.
As seen from the above discussion, the operation of a sequential network involves
a time sequence of inputs, outputs, and states. The nature of the timing relationship

367
368 DIGITAL PRINCIPLES AND DESIGN

Figure 7.1 General model of a sequential network.

classifies sequential networks into two broad categories: synchronous and asynchro-
nous. In the case of synchronous sequential networks it is assumed that the behavior of
the system is totally determined by the values of the present state and external input
signals at discrete instants of time. Special timing signals are used to define these time
instants. In addition, it is only at these discrete instants of time that the memory of the
system is allowed to undergo changes. In this way it is possible to describe the behav-
ior of the system relative to the set of ordinal numbers that are assigned to the timing
signals. The counters of the previous chapter are examples of synchronous sequential
networks where the discrete instants of time are associated with the occurrences of the
triggering edge of the count pulses.
The second category of sequential networks is the asynchronous sequential net-
works. In these networks, it is the order in which input signals change that affects
the network behavior. Furthermore, these changes are allowed to occur at any in-
stants of time. This class of sequential networks is studied in Chapter 9.

7.1 STRUCTURE AND OPERATION OF CLOCKED SYNCHRONOUS SEQUENTIAL NETWORKS
In synchronous sequential networks, the network behavior is defined at specific in-
stants of time associated with special timing signals. The most common method of
providing timing in a synchronous sequential network is by means of a single mas-
ter clock that appears at the control inputs of all the flip-flops that make up the
memory portion of the network. In this way, all the flip-flops receive the common
clock signal simultaneously. Such sequential networks are referred to as clocked
synchronous sequential networks. The structure of clocked synchronous sequential
networks is shown in Fig. 7.2.*

*Although edge-triggered flip-flops are shown in the figure, master-slave flip-flops can also be used.
CHAPTER 7 Synchronous Sequential Networks 369

Figure 7.2 Structure of a clocked synchronous sequential network.

The clock signal is a periodic waveform having one positive edge and one neg-
ative edge during each period. Thus, during part of the period the clock signal has
the value of logic-1 and during the other part of the period it has the value of logic-
0. Since this control signal is applied to clocked flip-flops, one edge is used for trig-
gering in the case of edge-triggered flip-flops. Alternatively, the time duration in
which the clock signal is in one of its logic states, along with its associated edges, is
used to achieve a pulse for pulse-triggered flip-flops. For simplicity, no distinction
is made in this chapter between these edges or pulses, and their occurrence is here-
after referred to as the triggering times or active times of the clock signal.
The use of a single master clock provides for network synchronization and has
the advantage of preventing many timing problems. The basic operation of clocked
synchronous sequential networks proceeds as follows. After the input and new
present state signals in Fig. 7.2 are applied to the combinational logic, the effects of
the signals must propagate through the logic network since gates have finite propa-
gation delay times. As a result, the final values at the flip-flop inputs occur at differ-
ent times depending upon the number of gates involved in the signal paths and the
actual propagation delays of each gate. In any event, it is only after the final values
are reached that the active time of the clock signal is allowed to occur and cause any
state changes. Since the clock signal is applied simultaneously to all the flip-flops in

the memory portion of the network, all state changes of the flip-flops occur at the
same time.* The process is then repeated. That is, new inputs are applied and then
the synchronizing clock signal affects the state changes. It is important to note that
the flip-flops are only allowed to undergo at most a single state change for each
clock period. That is, any changes in the outputs of the flip-flops incurred as a result
of the clock signal cannot cause another state change until the next clock period. As
was seen in the previous chapter, edge-triggered and pulse-triggered, i.e., master-
slave, flip-flops provide this type of behavior.
As shown in Fig. 7.2, combinational logic is used to generate the next-state and
output signals. The present state of the network is the current content of the flip-
flops that comprise the memory portion of the network. The next state corresponds
to the updated information about the past history of inputs that must be preserved by
the system. Thus, if X denotes the collective external input signals and Q the collec-
tive present states of the flip-flops, then the next state of the network, denoted by
Q+, is functionally given by
Q+ = f(X, Q) (7.1)
Similarly, if Z is regarded as the collective output signals of the network, then under
the assumption that the outputs are a function of both the inputs and present state, it
immediately follows that

Z = g(X, Q) (7.2)
Equations (7.1) and (7.2) suggest the general structure of a clocked synchro-
nous sequential network shown in Fig. 7.3. This model is frequently referred to as
the Mealy model or Mealy machine.
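Equations (7.1) and (7.2) translate directly into code. In the sketch below, make_mealy and the example lambdas are hypothetical placeholders standing in for a particular network's combinational logic; one call to the returned function models one clock period, with the output computed from X and Q and a single state change at the triggering time.

```python
# Sketch of the Mealy model of Fig. 7.3: Q+ = f(X, Q), Z = g(X, Q).
def make_mealy(f, g, initial_state):
    state = [initial_state]                 # present state held by the "memory"
    def step(x):
        q = state[0]
        z = g(x, q)                         # output depends on input AND state
        state[0] = f(x, q)                  # next state loaded at the clock edge
        return z
    return step

# Hypothetical one-flip-flop example: remember the last input bit and
# output 1 when the current input matches it.
step = make_mealy(f=lambda x, q: x, g=lambda x, q: int(x == q), initial_state=0)
print([step(x) for x in [0, 1, 1, 0]])  # [1, 0, 1, 0]
```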
A variation to the Mealy model occurs when the outputs are only a function of
the present state and not of the external inputs. In this case

Z = g(Q) (7.3)

Figure 7.3 Mealy model of a clocked synchronous sequential network.

*As is seen in Chapter 9, this is in contrast to asynchronous sequential networks, which are allowed to
respond to signal changes on the inputs as they occur.

Figure 7.4 Moore model of a clocked synchronous sequential network.

Again the next state of the network, Q+, is a function of the external inputs and the
present state as given by Eq. (7.1). This variation of a clocked synchronous sequen-
tial network is referred to as the Moore model or Moore machine. The general struc-
ture suggested by Eqs. (7.1) and (7.3) is illustrated in Fig. 7.4.
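In code, the only change from the Mealy sketch is the signature of the output function: g sees the present state alone. Again, everything named below is an illustrative placeholder, not a construct from the text.

```python
# Sketch of the Moore model of Fig. 7.4: Q+ = f(X, Q), but Z = g(Q).
def make_moore(f, g, initial_state):
    state = [initial_state]
    def step(x):
        z = g(state[0])                 # output is a function of state only
        state[0] = f(x, state[0])       # next state still depends on X and Q
        return z
    return step

# Hypothetical example: a mod-4 up-counter whose output flags state 3.
step = make_moore(f=lambda x, q: (q + x) % 4, g=lambda q: int(q == 3),
                  initial_state=0)
print([step(1) for _ in range(5)])  # [0, 0, 0, 1, 0]
```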
Although the use of a master clock is the most common method of synchroniz-
ing sequential networks, there are other timing mechanisms. For example, pulses of
controlled duration from one or more sources can be used to trigger the flip-flops.
However, such pulse-type timing introduces complications in realizations that do
not occur when a single master clock is used. The concepts of Mealy and Moore
models are still applicable to such synchronous sequential networks if these pulses
only can cause single state changes. In this chapter, however, the study of synchro-
nous sequential networks is restricted to only those that are single master-clock
controlled.
In the general models of Figs. 7.1 to 7.4, the inputs to the memory portion of
the sequential network are labeled “next state.” The interpretation here is that ap-
propriate signals are applied to the clocked flip-flops so that after the triggering
edge or pulse of the clock signal, the states of the flip-flops reflect the updated state
information. In a realization, the actual signals that must be applied to the flip-flops
are really excitation signals that achieve the appropriate next state. These signals
are dependent upon the type of clocked flip-flops used for the memory. That is, the
necessary excitation signals depend upon whether JK, SR, D, or T flip-flops are
used. In the previous chapter, relationships were established between the present-
state/next-state transitions of a flip-flop and the logic values needed at its input ter-
minals to achieve these transitions.

7.2 ANALYSIS OF CLOCKED SYNCHRONOUS SEQUENTIAL NETWORKS
There are two main reasons for beginning the study of clocked synchronous se-
quential networks with analysis. First, analysis provides tabular descriptions of se-
quential networks. These are particularly useful when sequential networks are to
be designed. The steps involved in the synthesis of clocked synchronous sequen-
tial networks are basically the reverse of those involved in the analysis procedure.

Second, analysis provides a means for studying the terminal behavior of clocked
synchronous sequential networks. In particular, given a time sequence of inputs,
the time sequence of outputs and next states is readily determined.
In the previous section, two models of clocked synchronous sequential net-
works were presented: the Mealy model as defined by Eqs. (7.1) and (7.2) and the
Moore model as defined by Eqs. (7.1) and (7.3). For both models, the next states
(and, correspondingly, excitation) of the flip-flops are a function of the external in-
puts and the present states of the flip-flops. However, the two models differed in the
case of the network outputs. The outputs of Mealy sequential networks are also a
function of both the external inputs and the present states of the flip-flops; while for
Moore sequential networks the outputs are a function of just the present states of the
flip-flops.
Two clocked synchronous sequential networks are analyzed in this section.
They are shown in Figs. 7.5 and 7.6 and are referred to as Examples 7.1 and 7.2, re-
spectively, during the course of this discussion. As required for clocked synchro-
nous sequential networks, a clock signal is applied simultaneously to each of the
flip-flops for synchronization. In both examples, positive-edge-triggered flip-flops
are used for the memory portion of the network. Thus, the flip-flops change state
only at the occurrence of a leading edge of the clock signal. In order to explain vari-
ations to the analysis procedure, however, one network utilizes D flip-flops and the
other JK flip-flops.
The realization of Fig. 7.5 corresponds to a Mealy network. The diagram has
been laid out so as to resemble Fig. 7.3. The present state of the sequential network
corresponds to the signals at the output terminals of the flip-flops. These signals are

Figure 7.5 Logic diagram for Example 7.1.




Figure 7.6 Logic diagram for Example 7.2.

fed back to the combinational logic that precedes the flip-flop input terminals. It is
this present state along with the external input signal x that serves as the inputs to
the combinational logic that provides the excitation signals to the D flip-flops. As
easily seen in the figure, the output portion of the sequential network is also a func-
tion of the external input x and the present states of the flip-flops. Thus, the two
combinational subnetworks satisfy the functional relationships given by Eqs. (7.1)
and (7.2) for a Mealy network.
The logic diagram for Example 7.2, i.e., Fig. 7.6, is for a Moore network. The
diagram has been laid out to correspond to the general structure shown in Fig. 7.4.
As previously noted, the difference between Mealy and Moore models involves the
output portion of the network. In the case of Fig. 7.6, it is seen that the two outputs,
z1 and z2, are only a function of the present states of the flip-flops and not a function
of the external inputs x and y. On the other hand, the excitations to the J and K ter-
minals of the flip-flops are indeed a function of both the external inputs and the
present states of the flip-flops.

7.2.1 Excitation and Output Expressions


Throughout the study of combinational networks, algebraic expressions served as
mathematical representations of the networks. It is also possible to write algebraic
expressions for sequential networks. To do this, it is first necessary to assign present-
state variables to each of the output terminals of the flip-flops. For the two examples
374 DIGITAL PRINCIPLES AND DESIGN

under discussion, the upper flip-flop outputs are assigned the variables Q1 and Q1' for
the true and complemented outputs, respectively; while the lower flip-flops have
outputs Q2 and Q2'. In addition, it is necessary to assign excitation variables to the in-
puts of the flip-flops. This is done by defining the excitation variables to be the same
as the input terminal designators of the flip-flops, subscripted according to the num-
ber assigned to the flip-flop. Thus, D1 is the excitation variable for the upper flip-flop
of Example 7.1, and J1 and K1 are the excitation variables for the upper flip-flop of
Example 7.2. In a similar manner, excitation variables are assigned to the lower flip-
flops of the two examples. Since the clock signal is applied directly to the clock
input terminal of all the flip-flops, it is not necessary to write expressions for the
clock inputs. Once the state and excitation variables are defined, Boolean expres-
sions for the flip-flop excitations are readily written in terms of the present-state
variables and the external input variables.
The excitations to the flip-flops of Fig. 7.5 correspond to the logic values that
appear at the D input terminals of flip-flops FF1 and FF2. Algebraically,

D1 = x'Q2' + Q1'Q2     (7.4)

D2 = xQ1' + Q1'Q2     (7.5)
To complete the algebraic description of a sequential network, it is also neces-
sary to write algebraic expressions for the network outputs. For the case of Fig. 7.5,
the output is given by

z = x'Q1 + xQ1'Q2'     (7.6)
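Written out as executable Boolean functions, the three expressions for Example 7.1 are easy to spot-check against the tables that follow; a Python sketch (the function names d1, d2, and z are illustrative, and complementation is modeled as 1 − v):

```python
# Excitation and output expressions for Example 7.1 (Eqs. 7.4-7.6).
# Signal values are the integers 0 and 1.

def d1(x, q1, q2):
    # D1 = x'Q2' + Q1'Q2
    return ((1 - x) & (1 - q2)) | ((1 - q1) & q2)

def d2(x, q1, q2):
    # D2 = xQ1' + Q1'Q2
    return (x & (1 - q1)) | ((1 - q1) & q2)

def z(x, q1, q2):
    # z = x'Q1 + xQ1'Q2'
    return ((1 - x) & q1) | (x & (1 - q1) & (1 - q2))
```

Evaluating these functions over all combinations of x, Q1, and Q2 reproduces the entries of the transition table developed later in this section.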


The algebraic description for Example 7.2, i.e., Fig. 7.6, is done in a similar
manner. In this case, the flip-flop excitations correspond to the logic values that ap-
pear at the J and K terminals of flip-flops FF1 and FF2. From Fig. 7.6, the excita-
tion expressions are immediately written as

J1 = y     (7.7)
K1 = y + xQ2'     (7.8)
J2 = xQ1' + x'yQ2'     (7.9)
K2 = xy' + yQ1     (7.10)
Finally, the outputs of the sequential network are given by

z1 = Q1Q2'     (7.11)
z2 = Q1     (7.12)
7.2.2 Transition Equations
The general structures for Mealy and Moore models given in Figs. 7.3 and 7.4 show
the inputs to the memory portion of the sequential network as next states rather than
excitation signals. Effectively, these structures are independent of the flip-flop types
used in a realization. To convert excitation expressions into next-state expressions,
it is necessary to use the characteristic equations of the flip-flops. Characteristic

equations for the various types of flip-flops were previously developed in Chapter 6.
As indicated in Fig. 6.25, the characteristic equation for a D flip-flop is

Q+ = D

and for a JK flip-flop it is

Q+ = JQ' + K'Q
These equations indicate the next state of a flip-flop for given excitations at its input
terminals. By substituting the excitation expressions for a flip-flop into its character-
istic equation, an algebraic description of the next state of the flip-flop is obtained.
These expressions are referred to as transition equations.
Since Example 7.1 consists of D flip-flops, the next states of the flip-flops are
given by

Q1+ = D1
Q2+ = D2
Substituting Eqs. (7.4) and (7.5) into the above equations gives the transition equations

Q1+ = x'Q2' + Q1'Q2     (7.13)

Q2+ = xQ1' + Q1'Q2     (7.14)
For the case of Example 7.2, the characteristic equations for the two flip-flops are

Q1+ = J1Q1' + K1'Q1
Q2+ = J2Q2' + K2'Q2

The transition equations, obtained by substituting Eqs. (7.7) to (7.10), become

Q1+ = yQ1' + (y + xQ2')'Q1
    = yQ1' + y'(x' + Q2)Q1
    = yQ1' + x'y'Q1 + y'Q1Q2     (7.15)

Q2+ = (xQ1' + x'yQ2')Q2' + (xy' + yQ1)'Q2
    = (xQ1' + x'yQ2')Q2' + (x' + y)(y' + Q1')Q2
    = xQ1'Q2' + x'yQ2' + x'y'Q2 + x'Q1'Q2 + yQ1'Q2     (7.16)
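Substitutions like these can be verified mechanically by comparing the characteristic equation against the simplified result over all input and state combinations; a Python sketch of such an exhaustive check for Example 7.2 (variable names are illustrative):

```python
# Verify Eqs. (7.15) and (7.16) against Q+ = JQ' + K'Q for all
# combinations of x, y, Q1, Q2 (Example 7.2).
from itertools import product

def not_(v):
    return 1 - v

for x, y, q1, q2 in product((0, 1), repeat=4):
    # Excitation expressions, Eqs. (7.7)-(7.10)
    j1, k1 = y, y | (x & not_(q2))
    j2 = (x & not_(q1)) | (not_(x) & y & not_(q2))
    k2 = (x & not_(y)) | (y & q1)
    # Characteristic equation of a JK flip-flop
    q1_next = (j1 & not_(q1)) | (not_(k1) & q1)
    q2_next = (j2 & not_(q2)) | (not_(k2) & q2)
    # Transition equations (7.15) and (7.16)
    eq715 = (y & not_(q1)) | (not_(x) & not_(y) & q1) | (not_(y) & q1 & q2)
    eq716 = ((x & not_(q1) & not_(q2)) | (not_(x) & y & not_(q2))
             | (not_(x) & not_(y) & q2) | (not_(x) & not_(q1) & q2)
             | (y & not_(q1) & q2))
    assert q1_next == eq715 and q2_next == eq716
```

The loop completes without raising, confirming that the two transition equations agree with the characteristic equation for every present-state/input combination.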

7.2.3 Transition Tables


Rather than using the algebraic descriptions for the next state and outputs of a se-
quential network, it is more convenient and useful to express the information in tab-
ular form. The transition table is the tabular representation of the transition and out-
put equations. This table consists of three sections, one each for the present-state
variables, the next-state variables, and the output variables.
The present-state section lists all the possible combinations of values for the state
variables. Thus, if there are p state variables, then this section consists of 2^p rows. The
length of the present-state section determines the length of the transition table.

The next-state section has one column for each combination of values of the
external input variables. Hence, if there are n external input variables, then this sec-
tion consists of 2^n columns. Each entry in this section is a p-tuple corresponding to
the next state for each combination of present state (as indicated by its row) and ex-
ternal input (as indicated by its column).
The structure of the third section of the transition table depends upon whether
the network is of the Mealy or Moore type. In the case of a Mealy sequential net-
work, the outputs of the network are a function of both the present state and external
inputs. Thus, as in the next-state section, there is one column for each combination
of values of the input variables, and the entries within the section indicate the out-
puts for each present-state/input combination. On the other hand, since the outputs
of Moore sequential networks are only a function of the present state, the output
section of the transition table has only a single column. The entries within this col-
umn correspond to the outputs for the associated entries given in the present-state
section of the table.
Table 7.1 shows the transition table for Example 7.1. In the present-state sec-
tion, the four combinations of values of Q, and Q, are listed. Next, the next-state
section is constructed. Since there is only one external input variable, x, there are
2^1 = 2 columns in this section. One column is for x = 0 and the other is for x = 1.
The entries within this section correspond to the pair of Q1+ and Q2+ values for the
various values of x, Q1, and Q2 given by the column and row labels. These entries
are obtained by evaluating Eqs. (7.13) and (7.14). Attention should be paid to the
order of the pair of elements for each entry in the table. To illustrate the determina-
tion of the first entry of each pair from Eq. (7.13), Q1+ = 1 when x'Q2' = 1, i.e., when
x = 0 and Q2 = 0. Thus, 1's occur as the first element in the first column, x = 0, and
first and third rows, Q2 = 0, of the next-state section. In addition, Q1+ = 1 when
Q1'Q2 = 1, i.e., when Q1 = 0 and Q2 = 1. This accounts for additional 1's as the first
element in both columns of the second row of the next-state section. This completes
all present-state/input combinations that cause Q1+ = 1. Thus, all the remaining
first elements in the next-state section of the transition table are 0’s. In a similar
manner, Eq. (7.14) is used to determine the second element for each entry in the
next-state section of the transition table.
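Because this evaluation is purely mechanical, the next-state section can be generated by a short script; a Python sketch using the transition equations of Example 7.1 (the dictionary layout is an assumption, not from the text):

```python
# Build the next-state section of Table 7.1 by evaluating
# Q1+ = x'Q2' + Q1'Q2 and Q2+ = xQ1' + Q1'Q2 for every row and column.
def next_state(x, q1, q2):
    q1n = ((1 - x) & (1 - q2)) | ((1 - q1) & q2)
    q2n = (x & (1 - q1)) | ((1 - q1) & q2)
    return q1n, q2n

table = {}
for q1, q2 in ((0, 0), (0, 1), (1, 0), (1, 1)):        # present-state rows
    table[(q1, q2)] = [next_state(x, q1, q2) for x in (0, 1)]   # x = 0, 1 columns

# e.g. present state 00 goes to 10 under x = 0 and to 01 under x = 1
```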
Since Example 7.1 corresponds to a Mealy sequential network, the output sec-
tion also has two columns to correspond to the two possible values of the input vari-

Table 7.1 Transition table for Example 7.1

Present state    Next state       Output
(Q1Q2)           (Q1+Q2+)         (z)
                 Input (x)        Input (x)
                 0      1         0     1
00               10     01        0     1
01               11     11        0     0
10               10     00        1     0
11               00     00        1     0

Table 7.2 Transition table for Example 7.2

Present state    Next state                    Output
(Q1Q2)           (Q1+Q2+)                      (z1z2)
                 Inputs (xy)
                 00     01     10     11
00               00     11     01     11      00
01               01     11     00     11      00
10               10     01     00     00      11
11               11     00     10     00      01

able x. The entries within this section of the table are obtained by evaluating Eq.
(7.6) for the eight combinations of values of x, Q1, and Q2.
The transition table for Example 7.2 is given in Table 7.2. Again the present-state
section consists of four rows that correspond to all the combinations of values of Q1
and Q2. The next-state section has four columns since there are 2^2 = 4 combinations
of values of the two external input variables x and y. The entries within this section
consist of pairs of elements corresponding to Q1+ and Q2+. These entries are obtained
by evaluating Eqs. (7.15) and (7.16). In this example, the output section of the transition
table has only a single column since the logic diagram is of a Moore sequential net-
work. The entries in this column correspond to the evaluation of Eqs. (7.11) to (7.12)
for the four combinations of values of Q, and Q, given in the present-state section.
It should be noted that the transition table is really the truth table for the transi-
tion and output equations. The only difference lies in the fact that it is represented as a
two-dimensional array where the rows denote the values of the present-state vari-
ables and the columns denote the values of the external input variables.

7.2.4 Excitation Tables


The transition table was constructed as the result of substituting excitation expressions
into the flip-flop characteristic equations. An alternate approach to the construction of
the transition table is to first construct the excitation table directly from the excitation
and output expressions. The excitation table consists of three sections: the present-state
section, the excitation section, and the output section. The present-state and output sec-
tions of the excitation table are constructed the same way as the corresponding sections
of the transition table. In particular, the present-state section lists all combinations of
values of the state variables and the output section corresponds to the evaluation of the
output expressions of the network. However, the excitation expressions are used to
form the excitation section in an analogous way as the transition expressions were used
to form the next-state section of the transition table. The excitation section consists of
one column for each combination of values of the external input variables. The entries
in this section are r-tuples corresponding to the evaluation of the r excitation equations.
The excitation table for Example 7.1 is shown in Table 7.3. For this example
there are two excitation equations, given by Eqs. (7.4) and (7.5), for D, and D,. The
evaluation of this pair of equations for the eight combinations of values of the
present-state and input variables leads to the excitation section shown in Table 7.3.

Table 7.3 Excitation table for Example 7.1

Present state    Excitation       Output
(Q1Q2)           (D1D2)           (z)
                 Input (x)        Input (x)
                 0      1         0     1
00               10     01        0     1
01               11     11        0     0
10               10     00        1     0
11               00     00        1     0

However, in view of the fact that the characteristic equation for a D flip-flop is
Q* = D, it is readily seen that the excitation section of Table 7.3 is the same as the
next-state section of Table 7.1. Hence, for sequential networks using D flip-flops,
the excitation table and the transition table are identical except for the label assign-
ment to the entries in the second section.
Table 7.4 gives the excitation table for Example 7.2. Equations (7.7) to (7.10)
are the excitation equations for this example. These four expressions are used to de-
termine the 4-tuples appearing as entries in the excitation section by evaluating the
expressions in the same manner as the transition equations were previously evalu-
ated. The comma in each 4-tuple is used just to delineate the excitations of flip-flop
FF1 from flip-flop FF2.
In order to obtain the transition table from the excitation table, it is necessary to
analyze each entry of the excitation table to determine the effect of the indicated ex-
citation values. The effects of excitation signals on the states of the various types of
flip-flops were previously given in Table 6.2.
To illustrate the construction of the transition table from an excitation table,
consider the entry in the fourth column, first row of the excitation section of Table
7.4, i.e., J1K1,J2K2 = 11,10. The present state associated with the first row of the
table is Q1Q2 = 00. Thus, for flip-flop FF1, J1 = 1, K1 = 1, and Q1 = 0. From Table
6.2 it is immediately seen that under this condition Q1+ = 1. That is, when a logic
value of 1 appears at both the J and K terminals of a JK flip-flop at the triggering
time of the clock signal, the state of the flip-flop is complemented. For flip-flop
FF2, J2 = 1, K2 = 0, and Q2 = 0. Again from Table 6.2 it is seen that Q2+ = 1. That

Table 7.4 Excitation table for Example 7.2

Present state    Excitation                               Output
(Q1Q2)           (J1K1,J2K2)                              (z1z2)
                 Inputs (xy)
                 00       01       10       11
00               00,00    11,10    01,11    11,10         00
01               00,00    11,00    00,11    11,10         00
10               00,00    11,11    01,01    11,01         11
11               00,00    11,01    00,01    11,01         01

is, a logic value of 1 on just the J terminal of a JK flip-flop at the triggering time of
the clock signal causes it to set. Hence, the next state of the sequential network
when xy = 11 and Q1Q2 = 00 is Q1+Q2+ = 11. This is precisely the entry in the
fourth column, first row of the next-state section of Table 7.2. By repeating this pro-
cedure on each entry in the excitation section of Table 7.4, the next-state section of
Table 7.2 is obtained. The present-state and output sections of both the excitation
table and transition table are always the same.
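The hold/set/reset/toggle rules of Table 6.2 translate directly into a small helper for converting an excitation entry into a next state; a Python sketch (the function name jk_next is illustrative):

```python
def jk_next(q, j, k):
    # Next state of a JK flip-flop for given excitations (Table 6.2).
    if (j, k) == (0, 0):
        return q          # hold
    if (j, k) == (1, 0):
        return 1          # set
    if (j, k) == (0, 1):
        return 0          # reset
    return 1 - q          # toggle

# The entry J1K1,J2K2 = 11,10 in present state Q1Q2 = 00:
# FF1 toggles (0 -> 1) and FF2 sets (0 -> 1), giving next state 11.
```

Applying jk_next to each flip-flop of every entry in the excitation section reproduces the next-state section of the transition table.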

7.2.5 State Tables


In studying the output terminal behavior of a sequential network, the actual binary
codes used to represent the states are not important. Hence, alphanumeric symbols
can be assigned to represent these states. When this relabeling is done to the transi-
tion table, the resulting table is called the state table.
The state table consists of three sections: the present-state section, the next-
state section, and the output section. Each combination of values of the state vari-
ables, which, in turn, corresponds to a state of the memory portion of the sequential
network, is assigned a unique alphanumeric symbol. Then, the present-state and
next-state sections of the state table are obtained by simply replacing the binary
code for each state in the transition table by the newly defined symbol. The output
section of the state table is identical with the output section of the transition table.
Again consider Examples 7.1 and 7.2 and their corresponding transition tables
given in Tables 7.1 and 7.2. In both of these examples, the sequential networks con-
sist of four states. If the assignments A = 00, B = 01, C = 10, and D = 11 are made
to the four states, then the state tables shown in Tables 7.5a and 7.6 result.

Table 7.5 State table for Example 7.1

Present state    Next state       Output (z)
                 Input (x)        Input (x)
                 0      1         0     1
A                C      B         0     1
B                D      D         0     0
C                C      A         1     0
D                A      A         1     0
(a)

Present state    Next state, Output (z)
                 Input (x)
                 0        1
A                C,0      B,1
B                D,0      D,0
C                C,1      A,0
D                A,1      A,0
(b)

Table 7.6 State table for Example 7.2

Present state    Next state                    Output (z1z2)
                 Inputs (xy)
                 00     01     10     11
A                A      D      B      D       00
B                B      D      A      D       00
C                C      B      A      A       11
D                D      A      C      A       01

For Mealy sequential networks, frequently alternate, more compact forms of


the transition tables, excitation tables, and state tables are used. This is illustrated in
Table 7.5b for the state table of Example 7.1. In this variation, the next-state and
output sections of Table 7.5a are superimposed since they have exactly the same
column headings. Similarly, alternate forms of the excitation and transition tables
are obtained by superimposing their second and third sections. Such alternate forms
are not possible for Moore sequential networks.

7.2.6 State Diagrams


There is also a graphical representation of the state table. This is called the state dia-
gram. Each state of the network is represented by a labeled node. Directed branches
connect the nodes to indicate transitions between states. The directed branches are la-
beled according to the values of the external input variables that permit the transition
to exist. The outputs of the sequential network are also entered on a state diagram.
For Mealy sequential networks, the outputs appear on the directed branches along
with the external inputs. In this case, the label for a branch leaving a node consists of
a present-input/output combination for the state associated with the node. For Moore
networks, the outputs are included within the nodes along with their associated states.
Consider first the construction of a state diagram for a Mealy sequential net-
work. A node is drawn and labeled for each state. If a transition is possible between
two, possibly the same, states, then a directed branch is drawn connecting the two
corresponding nodes. The branch is labeled with the input values that cause the
transition, a slash, and the outputs that are associated with the present-state/input
combination. To illustrate this construction procedure, consider the state table for
Example 7.1, i.e., Table 7.5. The corresponding state diagram is shown in Fig. 7.7.
The four states are represented by four nodes. A branch is directed from node A to
node C since, according to the next-state section of Table 7.5, the input x = 0 ap-
plied to the network when in present-state A results in the next-state C. In addition,
the same present-state/input combination has a 0 output as shown in the output sec-
tion of the state table. Hence, the branch is labeled as 0/0 to indicate an input of 0
produces an output of 0 while in state A. Similarly, a branch connects node A to
node B since B is the next state for present-state A when x = 1. As indicated in the
output section of Table 7.5, z = 1 for this present-state/input combination. Thus, the
branch is labeled as 1/1. The remaining branches of Fig. 7.7 are obtained from

Figure 7.7 State diagram for Example 7.1. (Each branch is labeled
present input (x)/present output (z); the arrow indicates the next state.)

Table 7.5 in exactly the same way. For simplicity, if more than one input causes a
transition to occur between two states, then multiple labels are used on a single
branch. This is seen in Fig. 7.7 for present-state B where D is the next state for
x = 0 and x = 1. In both cases, the output is 0.
When state diagrams for Moore sequential networks are constructed, the
outputs associated with each state are entered in the node along with the state
designator. The branch labels in this case are just the input combinations that af-
fect the state transitions. Figure 7.8 shows the state diagram for Table 7.6. Each

Figure 7.8 State diagram for Example 7.2. (Each node is labeled
present state/present outputs (z1z2); each branch is labeled with the
inputs (xy); the arrow indicates the next state.)



node is labeled with a state and its associated outputs separated by a slash. The
multiple label on the branch connecting nodes B and D corresponds to the situa-
tion in which two input combinations affect a transition between the same two
states.
In Sec. 8.2 another diagrammatic form describing sequential network behavior
is discussed. This form is a chart bearing some resemblance to a flowchart fre-
quently encountered in computer programming. However, it is time-oriented rather
than task-oriented.

7.2.7 Network Terminal Behavior


As was mentioned at the beginning of this section, one objective of sequential
network analysis is to describe the time response of a network to a sequence of
inputs. Although this can be done from the logic diagram by tracing signals, the
state table or state diagram simplifies the process. For example, again consider
Example 7.1. Assume the flip-flops are both in their 0-states, which corresponds
to state A in Table 7.5 or Fig. 7.7, and the input sequence x = 0011011101 is ap-
plied to the network.* From the general discussion on the operation of clocked
synchronous sequential networks, the inputs are assumed to be applied prior to
the triggering time of the clock signal that can affect a state transition, and the
effects of the inputs have propagated through the combinational logic so that
final values appear at the network outputs and flip-flop inputs. Therefore, ac-
cording to Table 7.5 or Fig. 7.7 it is seen that when the first x = 0 input is ap-
plied to the network when in state A, the network produces a z = 0 output.
Furthermore, upon receipt of the positive edge of the clock signal, the memory
portion of the network goes to state C. Next, another x = 0 is applied. Since the
network is now in state C, a current output of | is produced and the network re-
mains in state C upon receipt of the next positive edge of the clock signal. It can
readily be checked that the input sequence x = 0011011101 applied to the net-
work of Example 7.1 when initially in state A produces the following state and
output sequences:

Input sequence x  = 0 0 1 1 0 1 1 1 0 1
State sequence    = A C C A B D A B D A B
Output sequence z = 0 1 0 1 0 0 1 0 1 1
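Tracing a sequence through a state table is exactly a table-driven simulation; a Python sketch of the trace above, with Table 7.5 encoded as a dictionary (an illustrative encoding):

```python
# State table for Example 7.1: (present state, x) -> (next state, z).
step = {
    ('A', 0): ('C', 0), ('A', 1): ('B', 1),
    ('B', 0): ('D', 0), ('B', 1): ('D', 0),
    ('C', 0): ('C', 1), ('C', 1): ('A', 0),
    ('D', 0): ('A', 1), ('D', 1): ('A', 0),
}

def trace(state, inputs):
    states, outputs = [state], []
    for x in inputs:
        state, z = step[(state, x)]
        states.append(state)
        outputs.append(z)
    return states, outputs

states, outputs = trace('A', [0, 0, 1, 1, 0, 1, 1, 1, 0, 1])
# states  -> A C C A B D A B D A B
# outputs -> 0 1 0 1 0 0 1 0 1 1
```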

The state diagram for a Mealy sequential network is a little misleading rel-
ative to the outputs. Although the outputs are shown on the directed branches
of the state diagram, this does not mean that the outputs are produced during
the transition between two states. Rather, the outputs appearing on the branches
are continuously available while in a present state and the indicated inputs are
applied.

*Whenever a time sequence of symbols is given, it is assumed to occur from left to right. In this case,
the first input for x is 0, then another 0, then a 1, etc.

The fact that the outputs from a Mealy sequential network are a function of
both the external inputs and the present state introduces the possibility of false
outputs or glitches. When the state and output sequences are determined from a
state table or state diagram, as was done above, the values of the external input
variables only at the triggering time of the clock signal are considered. However,
the external input variables may change values any time during the clock pe-
riod. Although these input changes can continuously affect the network outputs,
the consequences of these input changes do not appear in the listing of the output
sequences.
To illustrate this problem of false outputs, a timing diagram for the input se-
quence x = 0011011101 applied to state A of Example 7.1 is shown in Fig. 7.9.
For simplicity, propagation delays are assumed to be zero so that the effects of
all signal changes occur immediately. Since positive-edge-triggered flip-flops
are used in the realization, any state changes occur coincident with the positive
edges of the clock signal. Previously it was seen that this input sequence pro-
duced the output sequence z = 0101001011. However, as can be seen in the fig-
ure, the actual output sequence is z = 01010(1)0101(0)1 where the two outputs
in parentheses are false outputs. Consider the first of these false outputs. An
input of x = 0 applied to the network when in present-state B causes the network
to go to state D upon the occurrence of the positive edge of the clock signal.

Figure 7.9 Timing diagram for Example 7.1.



However, immediately after the state transition, the input x is still 0. Referring
to the state table or the state diagram for Example 7.1, it is seen that when in
present-state D, the input x = 0 produces a | output. It is not until the input x
changes to | that the output for present-state D becomes 0. Hence, for the period
of time in which the network is in its new state and the old input is still being ap-
plied, there is a false logic-1 output. The second false output shown in Fig. 7.9 is
the result of x still being 0 for a short time after the state transition from state D
to state A.
The above discussion only considered one possible cause of false outputs. False
outputs can also occur as a result of propagation delays. This topic is further investi-
gated in Chapter 9.
Terminal behavior of a Moore sequential network is also readily determinable
from its state table or state diagram. For the Moore sequential network of Example

Figure 7.10 The analysis procedure. [Flowchart: logic diagram →
assign state and excitation variables to each flip-flop → excitation and
output equations → either transition equations (obtained by applying the
flip-flop characteristic equations) or the excitation table → transition
table → state table → state diagram.]



7.2, the state table shown in Table 7.6 or the state diagram in Fig. 7.8 is used. If it is
assumed the network begins operation in state A, then the following is an example
of input, state, and output sequences:

Input sequence x   = 0 1 0 0 1 0 1
               y   = 1 0 1 0 1 1 1
State sequence     = A D C B B D A D
Output sequence z1 = 0 0 1 0 0 0 0 0
                z2 = 0 1 1 0 0 1 0 1
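A Moore trace differs only in that the outputs are read from the present state alone; a Python sketch with Table 7.6 encoded as nested dictionaries (the particular input sequence is one consistent illustrative choice):

```python
# Table 7.6: (present state, (x, y)) -> next state; outputs read per state.
nxt = {
    'A': {(0, 0): 'A', (0, 1): 'D', (1, 0): 'B', (1, 1): 'D'},
    'B': {(0, 0): 'B', (0, 1): 'D', (1, 0): 'A', (1, 1): 'D'},
    'C': {(0, 0): 'C', (0, 1): 'B', (1, 0): 'A', (1, 1): 'A'},
    'D': {(0, 0): 'D', (0, 1): 'A', (1, 0): 'C', (1, 1): 'A'},
}
out = {'A': (0, 0), 'B': (0, 0), 'C': (1, 1), 'D': (0, 1)}  # (z1, z2)

def trace(state, inputs):
    states = [state]
    for xy in inputs:
        state = nxt[state][xy]
        states.append(state)
    return states, [out[s] for s in states]

states, outputs = trace('A', [(0, 1), (1, 0), (0, 1), (0, 0),
                              (1, 1), (0, 1), (1, 1)])
# states -> A D C B B D A D;  z1 per state -> 0 0 1 0 0 0 0 0
```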

The analysis procedure presented in this section is summarized in Fig. 7.10.


Here the order in which the various steps of the analysis are performed is diagram-
matically illustrated.

7.3 MODELING CLOCKED SYNCHRONOUS


SEQUENTIAL NETWORK BEHAVIOR
In this and the remaining sections of this chapter, an approach to the synthesis of
clocked synchronous sequential networks is undertaken. The synthesis procedure
involves the establishment of a network realization that satisfies a set of input-
output specifications.
Basically, the procedure leading to the network realization is the reverse of the
analysis procedure introduced in the previous section. From the word specifications
of a network, a state table or state diagram is constructed. However, in the course of
constructing the state table, more states may be introduced than are really neces-
sary. By means of a state reduction technique it is possible to obtain a state table
with a minimum number of states. From this reduced state table a transition table is
formed by coding the states of the state table with a sequence of binary symbols.
Then, an excitation table is constructed based on the flip-flop types to be used in the
realization. From the excitation table, the excitation and output expressions for the
network are determined. Finally, the logic diagram is drawn.
The first step in the synthesis of clocked synchronous sequential networks is
the establishment of a formal description of the network specifications. This is an
abstract model of the network behavior that is obtained from the sometimes am-
biguous natural language description. Such a model is the state table or state dia-
gram. There is no standard technique for obtaining these models from the state-
ments of the desired network behavior. Rather, by understanding the network
specifications and knowing how to interpret a state table or state diagram, an appro-
priate model for the network to be designed is constructed. This modeling step of
the synthesis procedure is presented in this section via a series of examples.

7.3.1 The Serial Binary Adder as a Mealy Network


One approach to the modeling of a set of input-output specifications is to define
a priori a set of states needed by the network to preserve the information regarding
the past history of inputs. One of these states should correspond to an initial state

that signifies that no past inputs have been applied. This preliminary analysis of list-
ing the states may not produce all the necessary states of the network. However, it
does serve as a starting point for the construction of the model. As the state table or
state diagram is formed, additional states are added to the proposed collection if
none of the initially defined states adequately describes the information to be pre-
served at some point in time. It is also possible that too many states are proposed
and used in the model. This problem is readily handled, as is seen in Sec. 7.4, by ap-
plying a systematic reduction technique to the state table. An important point to re-
member, however, is that the state table must be finite in length. Thus, although
states can be continually added to the proposed listing, for a realization to exist,
only a finite number of states are allowed.
The first network to be modeled is the serial binary adder. Let us assume that
the realization is to be a Mealy network having the general structure shown in
Fig. 7.11. Two binary sequences, corresponding to the two operands being added,
are applied to the network inputs x and y, least significant bits first. The values on
the x and y inputs are applied prior to the triggering time of the clock signal that is
used for synchronization. The binary sum of the two numbers appears as a time se-
quence, also least significant bit first, on the single output line z.
At any time, there are four possible input combinations for the two external in-
puts x and y, i.e., 00, 01, 10, and 11. As was discussed previously, the state of a se-
quential network must preserve any necessary past history in order to determine a
present output and a next state. From the discussion on binary addition in Sec. 2.3,
it was seen that with the exception of the addition of the least significant pair of bits,
the sum bit for any order position is determined by the two operand bits of that
order as well as whether or not any carry was generated from the addition of the
previous order of bits. Thus the existence or nonexistence of a carry must be the re-
quired internally preserved information needed to perform the addition process
upon any pair of operand bits. Two states can now be defined for the network re-
flecting this fact. The first state, say, A, is associated with the past history “no carry
was generated from the previous order addition,” and a state, say, B, is associated
with the past history “a carry was generated from the previous order addition.”
When the first pair of bits is added, i.e., the least significant pair of bits, the correct
sum bit is obtained if it is assumed that the carry from the previous order addition

Figure 7.11 The serial binary adder.

was 0. It is now seen that knowledge of the state information and the current pair of
operand bits is sufficient to determine a sum bit and an appropriate next state with
state A serving as the initial state of the network.
Having defined a set of states describing the information that must be preserved
about the past history, the construction of a state diagram is started by introducing a
node for each state. Since there are two input bits present at any time, corresponding
to the bit values on the x and y lines, four input-bit combinations must be considered
for each node. Since state A denotes no carry was generated from the previous order
addition, the appropriate sum bit to be produced while in state A is simply the bi-
nary sum of the two bits on the x and y lines. Furthermore, the next state must corre-
spond to whether or not a carry results from the current two-bit addition. No carry
occurs upon the bit-pair addition of the input combinations 00, 01, and 10. The first
of these three input combinations results in a sum bit of 0, while the other two com-
binations result in a sum bit of 1. This behavior is illustrated in Fig. 7.12a as the
loop on node A. The letters “I.S.” next to node A indicate that it is the initial state.
The fourth input combination, 11, also results in a sum bit of 0, but since a carry is
generated, the network must go to state B so as to preserve this information about
the past history. This appears in Fig. 7.12a as the directed branch from node A to
node B.
Next it is necessary to describe the behavior of the network under the four
input conditions while in state B. When the network is in state B, it is remembering
that a carry was produced from the previous order addition. Thus, the appropriate
sum bit for this state must be one greater than the binary sum of the two input bits.
Hence, the xy inputs 01 and 10 must produce a zero sum bit plus a carry. Since state
B corresponds to the remembering of a carry, an arc is directed from node B to
node B for these two cases as shown in Fig. 7.12b. Similarly, the arc also is labeled
with the input combination 11 since again the sum of these two bits and a “remem-
bered” carry results in a carry. However, the output in this case is a 1. The final

Figure 7.12 Obtaining the state diagram for a Mealy serial binary adder.
(a) Partial state diagram. (b) Completed state diagram.
388 DIGITAL PRINCIPLES AND DESIGN

Table 7.7 State table for a Mealy serial binary adder

Present state        Next state                Output (z)
                     Inputs (xy)               Inputs (xy)
                     00    01    10    11      00    01    10    11
*A                   A     A     A     B       0     1     1     0
B                    A     B     B     B       1     0     0     1

input combination for the network while in state B corresponds to xy = 00. In this
case, the appropriate sum bit is a 1 due to the remembered carry. The next state,
however, must correspond to a state that preserves the information of “no carry was
generated for the previous order addition.” This is precisely the meaning of state A.
Hence, a transition from state B to state A must occur for the input combination of
00 as shown in Fig. 7.12b.
Using two binary numbers, the reader can easily check that the state diagram of
Fig. 7.12b models the behavior of a serial binary adder assuming the process is
started in state A. Since state A is the initial state, provision must be made to start
the network in this state when the realization is completed. The initialization
process is discussed further in Sec. 7.6.
In many cases, such as the example just completed, the state diagram is a con-
venient way of formalizing the network behavior of a synchronous sequential net-
work. However, for the remaining steps of the synthesis procedure, the state table is
more useful. Since there is a one-to-one correspondence between a state diagram
and a state table, it is a simple matter to construct a state table. As was explained in
the previous section, for a Mealy sequential network, both the next-state and output
sections of the state table have one column for each possible input combination.
The rows of the state table, listed in the present-state section, correspond to the
states of the network. The entries in the remaining two sections of the table are
readily determined by noting the next state and output for each present-state/input
combination appearing on the state diagram. The state table for the serial binary
adder is shown in Table 7.7. An asterisk is placed next to state A in the present-state
section to indicate that it is the initial state.
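As a check on the state table just constructed, the two-state machine can be simulated directly. The following Python sketch is illustrative only (the dictionary encoding and the name serial_add are not from the text); it transcribes Table 7.7 and applies the operand bits least-significant bit first, starting in the initial state A:

```python
# A minimal simulation sketch (not from the text): the two-state Mealy
# serial adder of Table 7.7.  State A means "no carry from the previous
# order addition"; state B means "a carry was generated".
NEXT = {'A': {(0,0):'A', (0,1):'A', (1,0):'A', (1,1):'B'},
        'B': {(0,0):'A', (0,1):'B', (1,0):'B', (1,1):'B'}}
OUT  = {'A': {(0,0):0, (0,1):1, (1,0):1, (1,1):0},
        'B': {(0,0):1, (0,1):0, (1,0):0, (1,1):1}}

def serial_add(a, b, n_bits):
    """Apply the operand bits least-significant first, starting in state A."""
    state, sum_bits = 'A', []
    for i in range(n_bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        sum_bits.append(OUT[state][(x, y)])   # Mealy output: state and inputs
        state = NEXT[state][(x, y)]           # state transition at the clock
    return sum(bit << i for i, bit in enumerate(sum_bits))

print(serial_add(11, 6, 5))  # → 17
```

Note that n_bits must include one position beyond the longer operand so that a final carry, if any, can appear as the most significant sum bit.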

7.3.2 The Serial Binary Adder as a Moore Network


Let us now repeat the design of the serial binary adder under the assumption that the
realization is to be a Moore sequential network. Unlike Mealy sequential networks,
the outputs from a Moore sequential network are a function of only the present
state. This makes the modeling process more difficult when the network specifica-
tions indicate that the outputs are also a function of the current external inputs. In
this example, the correct sum bit from the adder cannot be established until the next
state is reached or, equivalently, until the next clock period, since in a Moore se-
quential network the current external inputs do not affect the present output. In ef-
fect, this imposes a unit delay on the output of the network.
CHAPTER 7 Synchronous Sequential Networks 389
Since an output is normally specified for the initial state, this first output must be ignored and only the
subsequent outputs are relevant as the output sequence. Furthermore, in a Moore se-
quential network, each state must be associated with both an output and information
regarding how the past history of inputs causes that output. For this reason two
states are not sufficient for modeling the serial binary adder as a Moore sequential
network since either a 0 or 1 sum bit is possible both with and without the existence
of a carry from the previous order addition. A little reflection upon the problem
leads to the conclusion that four states are necessary, one state for each combination
of sum bit and carry bit from the previous order addition. Thus, the following four
states are defined:

A: The sum bit is 0 and no carry was generated from the previous order
addition.
B: The sum bit is 0 and a carry was generated from the previous order
addition.
C: The sum bit is 1 and no carry was generated from the previous order
addition.
D: The sum bit is 1 and a carry was generated from the previous order
addition.

Having defined the states for the network, the construction of the state diagram
for the serial binary adder under the Moore sequential network assumption proceeds
in much the same way as was done previously. Appropriate next states are deter-
mined for each state under the four input combinations of values occurring on the x
and y input lines of the network. The corresponding state diagram is shown in Fig.
7.13 and the state table is given in Table 7.8. It is important to realize that the first
valid sum bit occurs immediately after entering the first state from the initial state.
After that, the sum bits are produced in sequential order. Thus, either state A or state
C can serve as the initial state since the first output must be ignored. In this exam-
ple, state A is chosen as the initial state.

Figure 7.13 State diagram for a Moore


serial binary adder.

Table 7.8 State table for a Moore serial binary adder

Present state        Next state                Output (z)
                     Inputs (xy)
                     00    01    10    11
*A                   A     C     C     B       0
B                    C     B     B     D       0
C                    A     C     C     B       1
D                    C     B     B     D       1

In summary, every Mealy sequential network has a corresponding Moore sequential
network under the assumption that the output for the initial state from the
Moore network is to be ignored. However, Moore sequential networks inherently
require more states than a Mealy sequential network since outputs are associated
with the states and not with the inputs. Finally, since the outputs of a Moore sequen-
tial network are not a function of the current inputs, the output sequence from a
Moore sequential network has an inherent delay of one clock period.
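The one-clock-period delay can be observed by simulating Table 7.8. In the hypothetical sketch below (the encoding and function name are illustrative, not from the text), reading the output after each transition corresponds to ignoring the initial state's output and collecting only the delayed, valid sum bits:

```python
# Sketch (not from the text): the four-state Moore serial adder of
# Table 7.8.  Each state pairs a sum bit with a carry status, so the
# output depends on the present state alone.
NEXT = {'A': {(0,0):'A', (0,1):'C', (1,0):'C', (1,1):'B'},
        'B': {(0,0):'C', (0,1):'B', (1,0):'B', (1,1):'D'},
        'C': {(0,0):'A', (0,1):'C', (1,0):'C', (1,1):'B'},
        'D': {(0,0):'C', (0,1):'B', (1,0):'B', (1,1):'D'}}
OUT = {'A': 0, 'B': 0, 'C': 1, 'D': 1}   # output attached to each state

def moore_serial_add(a, b, n_bits):
    """Serial addition with the Moore model; the initial-state output is ignored."""
    state, sum_bits = 'A', []
    for i in range(n_bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        state = NEXT[state][(x, y)]      # transition first ...
        sum_bits.append(OUT[state])      # ... then read the delayed output
    return sum(bit << i for i, bit in enumerate(sum_bits))

print(moore_serial_add(11, 6, 5))  # → 17
```

Both simulations produce the same sums; the Moore version simply delivers each sum bit one clock period after the corresponding input pair is applied.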
As was illustrated by the above two examples, one can approach clocked syn-
chronous sequential network modeling under the assumption of either a Mealy net-
work or a Moore network. However, one model is normally more appropriate than
the other since the two models are not really equivalent. It was seen in the above
that it was necessary to make assumptions about the network behavior in order that
the Moore model could be applied, i.e., the acceptance of a delayed output and the
ignoring of the first output. Such assumptions may not always be appropriate. The
required network behavior should be taken into consideration when determining a
model. In general, if the outputs of a synchronous sequential network are also a
function of the current external inputs, then the Mealy network model is the more
appropriate; while if the outputs are only a function of the past history of inputs,
then the Moore network model should be considered. In the ensuing examples, only
one model is established in each case.
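The Mealy-to-Moore correspondence summarized above can also be carried out mechanically by splitting each Mealy state according to the output produced on entering it. The sketch below is a hypothetical illustration (the function name and encodings are assumptions, not the book's procedure); applied to the two-state Mealy adder of Table 7.7, it yields four reachable states, matching Table 7.8 up to naming:

```python
# Hypothetical sketch: derive a Moore machine from a Mealy machine by
# splitting each Mealy state into (state, entering-output) pairs.
def mealy_to_moore(next_s, out, inputs, start):
    """next_s[s][i] / out[s][i] describe a Mealy machine; returns Moore tables."""
    moore_next, moore_out = {}, {}
    frontier = [(start, 0)]                  # output of the start pair is arbitrary
    while frontier:
        s = frontier.pop()
        if s in moore_next:
            continue
        state, z = s
        moore_out[s] = z                     # output now attached to the state
        moore_next[s] = {}
        for i in inputs:
            t = (next_s[state][i], out[state][i])
            moore_next[s][i] = t
            frontier.append(t)
    return moore_next, moore_out

# The two-state Mealy adder of Table 7.7:
NEXT = {'A': {(0,0):'A', (0,1):'A', (1,0):'A', (1,1):'B'},
        'B': {(0,0):'A', (0,1):'B', (1,0):'B', (1,1):'B'}}
OUT  = {'A': {(0,0):0, (0,1):1, (1,0):1, (1,1):0},
        'B': {(0,0):1, (0,1):0, (1,0):0, (1,1):1}}
mn, mo = mealy_to_moore(NEXT, OUT, [(0,0),(0,1),(1,0),(1,1)], 'A')
print(len(mn))  # → 4
```

The four pairs (A,0), (A,1), (B,0), (B,1) correspond to states A, C, B, and D of Table 7.8, respectively.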

7.3.3 A Sequence Recognizer


Another approach to the modeling of a word description involves the step-by-step
construction of a state table or state diagram under the assumption that inputs are
being applied to the network when in its various states, rather than defining the
states prior to the construction. To do this, a state is initially defined as the starting
point. From this initial state, additional entries in the state table or nodes in the state
diagram are introduced as needed to reflect the different past histories that must be
retained by the network. The process is continued until no new entries in the state
table or no new nodes in the state diagram are created. This approach is illustrated
by the following two examples.
Consider the design of a clocked synchronous sequential network having a single
input line x in which the symbols 0 and 1 are applied and a single output line z.

Figure 7.14 A sequence recognizer.

An output of 1 is to be produced if and only if the three input symbols following
two consecutive input 0’s consist of at least one 1. At all other times the output is to
be 0. Once the consecutive pair of 0’s is detected, the network must analyze the
next three input symbols to determine if at least one of them is a 1. The output of 1,
when appropriate, is to be coincident with the third input symbol of the three-input-
symbol sequence. Upon completing the analysis of the three input symbols follow-
ing the pair of 0 inputs, the network is to reset itself and wait for another pair of 0’s
and then at least one 1 in the following sequence of three input symbols. Since the
output is to be coincident with the third input symbol of the three-input-symbol se-
quence, a Mealy network is implied. The general structure of the network is shown
in Fig. 7.14. An example of an input sequence and the desired responding output se-
quence for the recognizer is
x = 0100010010010010000000011
z = 0000100000100000101010100
The three-symbol sequences that must be analyzed are those following each pair of
consecutive 0’s. It should be noted that one of these sequences does not satisfy the
conditions for producing a 1 output.
Let A be the initial state of the network. It is first necessary to detect two con-
secutive 0 inputs. This phase is illustrated in Fig. 7.15a. The network remains in
state A as long as 1’s are applied. When the first 0 occurs, the network goes to state
B. If a 1 should occur at this time, then the network must return to state A to again
wait for a pair of consecutive 0 inputs. However, when the network is in state B, a
second 0 input sends it to state C to indicate that the initial pair of 0 inputs has been
detected. The outputs are 0’s for all these inputs since all the conditions of the net-
work specifications have not been satisfied. The definitions of the first three states,
along with those states to be introduced later, are given in Fig. 7.15d.
Once the network is in state C, it must analyze the next three inputs. If at least
one of these inputs is a 1, then an output of 1 is to be produced coincident with the
third input and the network is to return to its initial state A. The state diagram is
continued as shown in Fig. 7.15b. If the first input is 1 when the network is in
state C, then the detection conditions of the network specifications are satisfied.
However, the network goes to state D with an output of 0 since, also according to
the network specifications, it must produce an output of 1 coincident with the
third input after entering state C. Similarly, a state E is introduced after state D to

A: Waiting to detect two consecutive 0’s.
B: First 0 detected.
C: Two consecutive 0’s detected.
D: A 1 detected in three-symbol sequence, but two more inputs must still be applied.
E: At least one 1 detected in three-symbol sequence, but one more input must still be applied.
F: First input after two consecutive 0’s was 0.
G: First two inputs after two consecutive 0’s were 00.

Figure 7.15 State diagram for a sequence recognizer. (a) Detection of two consecutive 0’s. (b) Partial analysis
of the three-symbol sequence. (c) Completed state diagram. (d) Definition of states.

continue the waiting time needed before producing the 1 output. When the net-
work is in state E, the third input occurs. Since at least one 1 has occurred during
the last three inputs, the output is 1, regardless of the current input, and the net-
work returns to state A to repeat the recognition procedure.
The state diagram of Fig. 7.15b is still not complete since the situation of a
0 input occurring while in state C must be considered. As shown in Fig. 7.15c,
the network in this case goes to state F, with a 0 output, to indicate that the first
input after the initial pair of 0’s was also a 0. When in state F, the occurrence of
a 1 input implies that the conditions of the network specifications are satisfied,
but the network must still wait one more time period before producing a 1 out-
put. This is precisely the meaning of state E. Thus a directed branch, for the
1 input, is drawn from state F to state E with a 0 output. However, if the input is
0 while in state F, then two consecutive 0’s have occurred in the sequence of
three inputs that is being analyzed. The network enters state G to signify this fact
and a 0 output is produced. One more input must be considered when the net-
work is in state G. Regardless of this input, however, the network must return to
the initial state A. If this third input is a 1, then a 1 output is produced according
to the network specifications. On the other hand, a 0 input implies that the three
inputs that followed the initial pair of 0 inputs were also 0’s and, consequently,
the output is 0.

Table 7.9 State table for a sequence recognizer

Present state        Next state          Output (z)
                     Input (x)           Input (x)
                     0     1             0     1
*A                   B     A             0     0
B                    C     A             0     0
C                    F     D             0     0
D                    E     E             0     0
E                    A     A             1     1
F                    G     E             0     0
G                    A     A             0     1

The state table corresponding to Fig. 7.15c is shown in Table 7.9. An asterisk is
placed next to state A to signify that this is the initial state of the network.
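The behavior recorded in Table 7.9 can be checked against the example input/output pair given earlier. The Python sketch below is illustrative only (the dictionary encoding and function name are assumptions); it transcribes the completed state diagram and replays the example sequence:

```python
# Sketch (not from the text): the seven-state sequence recognizer of
# Fig. 7.15c / Table 7.9.  Entries are (next_state, output) pairs
# indexed by the input bit; A is the initial state.
FSM = {'A': {0: ('B', 0), 1: ('A', 0)},
       'B': {0: ('C', 0), 1: ('A', 0)},
       'C': {0: ('F', 0), 1: ('D', 0)},
       'D': {0: ('E', 0), 1: ('E', 0)},
       'E': {0: ('A', 1), 1: ('A', 1)},
       'F': {0: ('G', 0), 1: ('E', 0)},
       'G': {0: ('A', 0), 1: ('A', 1)}}

def run(x):
    """Apply the bit string x symbol by symbol; return the output string z."""
    state, z = 'A', []
    for bit in x:
        state, out = FSM[state][int(bit)]
        z.append(str(out))
    return ''.join(z)

# Replaying the example input sequence from the text:
print(run('0100010010010010000000011'))
# → '0000001000000100000000001'
```

The simulated output matches the example z sequence, confirming the table entry by entry.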

7.3.4 A 0110/1001 Sequence Recognizer


Now consider the design of another sequence recognizer having the block diagram of
Fig. 7.14. In this case, it is desired that the network produce a 1 output if and only if
the current input and the previous three inputs correspond to either of the sequences
0110 or 1001. The 1 output is to occur at the time of the fourth input of the recog-
nized sequence. Outputs of 0 are to be produced at all other times. Again a Mealy net-
work model is developed since the output is a function of the current input x. It should
be noted in this example that the network is not required to reset upon the occurrence
of the fourth input. Thus, the sequences are allowed to overlap. An example of an
input sequence and the appropriate responding output sequence for this network is
x = 1011000100111011001100100
z = 0000100000100000101010100

The sequences to be detected according to the network specifications end at the
positions in which a 1 output is produced.
Figure 7.16a shows the first step in constructing the state diagram by just consid-
ering the two sequences that lead to a 1 output. State A is the initial state. States are
added to record the past history of received inputs that eventually lead to an output of
1. Since the output is to be coincident with the fourth input of a detected sequence,
states are needed to record the various input sequences up to 011 and 100. These two
sequences lead to states F and G, respectively. From the network specifications it im-
mediately follows that an input of 0 when in state F or an input of 1 when in state G
should produce 1 outputs as indicated in the figure. Figure 7.16b gives the definitions
of the various states that were introduced. A question that remains is what are the
next-states for the branches leaving states F and G in Fig. 7.16a. The exit branch from
state F corresponds to the input sequence 0110. Referring to Fig. 7.16b, it is seen that

A: No inputs received (initial state).
B: Last input received was 0.
C: Last input received was 1.
D: Last two inputs received were 01.
E: Last two inputs received were 10.
F: Last three inputs received were 011.
G: Last three inputs received were 100.

Figure 7.16 A 0110/1001 sequence recognizer. (a) Beginning the detection of the sequences 0110 or 1001.
(b) Definition of states. (c) Completing the detection of the two sequences 0110 or 1001.
(d) Completed state diagram.

state E was initially introduced to record the occurrence of the input sequence 10.
This 10 sequence can also be the first two inputs of a sequence that must produce a 1
output if it is followed by 01. Since the two sequences that must be recognized, i.e.,
0110 and 1001, are allowed to overlap and 10 are the last two inputs of the 0110
sequence, the 0/1 branch from state F should be directed to state E as shown in
Fig. 7.16c to record the fact that another potentially detectable sequence has started.
Using a similar analysis, the next state for state G under a 1 input is state D.
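The "similar analysis" used above can be stated mechanically: after any input history, the appropriate state corresponds to the longest suffix of that history that can still begin a detectable sequence. The helper below is a hypothetical illustration (not the book's procedure); it finds the longest suffix of the history that is a proper prefix of 0110 or 1001:

```python
# Hypothetical helper (not from the text): for overlap-allowed recognition,
# the next state after any input history corresponds to the longest suffix
# of that history that is also a proper prefix of one of the targets.
PATTERNS = ('0110', '1001')

def longest_useful_suffix(history):
    """Return the longest suffix of `history` that begins a target pattern."""
    for n in range(min(len(history), 3), -1, -1):   # proper prefixes only
        suffix = history[-n:] if n else ''
        if any(p.startswith(suffix) for p in PATTERNS):
            return suffix
    return ''

# After detecting 0110, the useful history is '10', i.e., state E:
print(longest_useful_suffix('0110'))  # → '10'
# After detecting 1001, the useful history is '01', i.e., state D:
print(longest_useful_suffix('1001'))  # → '01'
```

Under the state definitions of Fig. 7.16b, each possible result ('', '0', '1', '01', '10', '011', '100') names exactly one of the states A through G.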
The state diagram of Fig. 7.16c is still incomplete since each node must have
exit branches for both 0 and 1 inputs. A 1 input when in state F corresponds to an
input sequence of 0111. This last 1 input could be the first 1 of a 1001 sequence.
From Fig. 7.16b, state C was defined to record such a situation. Hence, the next state
from F should be C if the input is 1. This is shown in Fig. 7.16d. By a similar argu-
ment the next state from state G with a 0 input should be state B. Next the second
input during states D and E is studied. State D records the fact that the last two inputs
were 01 and a 0 input during state D implies the last three inputs are 010. The 10
portion of this sequence can be the beginning of the sequence 1001 that results in a 1

Table 7.10 State table for the 0110/1001 sequence recognizer

Present state        Next state          Output (z)
                     Input (x)           Input (x)
                     0     1             0     1
*A                   B     C             0     0
B                    B     D             0     0
C                    E     C             0     0
D                    E     F             0     0