BCA-Digital Computer Organization
Unit I:-
Data Representation - Data Types and Number Systems, Binary Number System, Octal & Hexadecimal Number Systems, Fixed Point Representation, 1's & 2's Complement, Arithmetic Operations on Binary Numbers, Overflow & Underflow, Floating Point Representation, Codes: ASCII, EBCDIC, Gray Code, Excess-3 & BCD, Error Detecting & Correcting Codes, Binary Storage and Registers.
Unit II:-
Boolean Algebra and Digital Logic Circuits - Logic Gates: AND, OR, NOT, NOR, NAND & XOR Gates and their Truth Tables, Boolean Algebra - Basic Definitions and Properties, Basic Boolean Laws, De Morgan's Theorem, Minimization Techniques, K Map - Two, Three and More Variable Maps, Sum of Products & Product of Sums, Don't Care Conditions.
Unit III:-
Combinational Circuits - Half Adder & Full Adder, Half Subtractor, Full Subtractor, Code Conversion, Multilevel NAND and NOR Circuits, Decimal Adder, Decoders, Multiplexers and Demultiplexers.
Unit IV:-
Unit V:-
Registers, Counters and the Memory Unit, Shift Registers, Ripple Counters and Synchronous Counters, Inter-register Transfer, Arithmetic, Logic and Shift Micro-operations, Conditional Control Statements, Instruction Codes, Processor Organization, Design of a Simple Computer.
In a positional number system a number is written as a string of digits
a_n a_(n-1) a_(n-2) ... a_1 a_0 . a_(-1) a_(-2) ... a_(-m)
where each a_i is a digit of the system and its subscript gives its position relative to the radix point.
Base or Radix:-
The base (or radix) of a number system is the number of distinct digit symbols it uses; the value of the number above is the sum of each digit multiplied by the base raised to the power of its position.
Radix Conversion
Positional decimal systems include a zero and use symbols (called digits) for the ten values (0, 1,
2, 3, 4, 5, 6, 7, 8, and 9) to represent any number, no matter how large or how small. These digits
are often used with a decimal separator which indicates the start of a fractional part, and with a
symbol such as the plus sign + (for positive) or minus sign − (for negative) adjacent to the
numeral to indicate whether it is greater or less than zero, respectively.
NOTES:-
Name          Base      Symbol
Binary        Base 2    B
Octal         Base 8    Q or O
Decimal       Base 10   none or D
Hexadecimal   Base 16   H

Operator symbols: + is used for addition, - for subtraction, * for multiplication, / for division, and ^ to raise to a power.
You have been using the decimal (base 10) numbering system for so long that you often take it
for granted. When you see a number like "123", you don't think about the value 123. Instead, you
generate a mental image of how many items this value represents. In reality, however, the
number 123 represents:
1 * 100 + 2 * 10 + 3 * 1 =
100 + 20 + 3 =
123
Each digit appearing to the left of the decimal point represents a value between zero and nine times the power of ten corresponding to its position in the number. Digits appearing to the right of the decimal point represent a value between zero and nine times an increasingly negative power of ten.
For example, the value 725.194 is represented as follows:
7 * 10^2 + 2 * 10^1 + 5 * 10^0 + 1 * 10^-1 + 9 * 10^-2 + 4 * 10^-3 = 725.194
Computers and electronics work with bytes, or eight-digit binary numbers. Each byte holds encoded information that a computer is able to understand. Many bytes are strung together to form digital data that can be stored for later use.
Counting in binary begins with the only two symbols available:
0, 1, …
From here, there are no more symbols. We do not go to 2 because in binary a 2 doesn't exist. Instead, we use 10. In binary, 10 is equal to 2 in decimal.
Just like in decimal, we know that the more digits there are, the larger the number. However, in
binary, we use powers of two. In the binary number 1001101, we can create a chart to find out
what this really means.
2^6  2^5  2^4  2^3  2^2  2^1  2^0
 1    0    0    1    1    0    1
64 + 0 + 0 + 8 + 4 + 0 + 1 = 77
Since this is base two, however, a number with a given number of digits does not grow as large as it does in decimal. Even so, a binary number with 10 digits can represent values up to 1023, which is larger than 1000 in decimal.
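For illustration, here is a minimal Python sketch (not part of the original notes; the function name is arbitrary) of the positional evaluation used in the chart above.

    # Positional evaluation: each binary digit is weighted by a power of two,
    # exactly as in the 1001101 chart above.
    def binary_to_decimal(bits: str) -> int:
        value = 0
        for digit in bits:               # scan from the most significant bit
            value = value * 2 + int(digit)
        return value

    print(binary_to_decimal("1001101"))  # 77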
Binary Fractions
When you are converting a decimal fraction to binary you multiply the fractional part by 2 and record the carry (1 or 0) at each step:
0.85 × 2 = 1.70 → 1
0.70 × 2 = 1.40 → 1
0.40 × 2 = 0.80 → 0
0.80 × 2 = 1.60 → 1
0.60 × 2 = 1.20 → 1
0.20 × 2 = 0.40 → 0
hence (0.85)10 ≈ (0.110110)2
Double-Dabble Method
To convert a decimal integer to binary, divide repeatedly by 2 and record the remainders; reading the remainders from bottom to top gives the binary number. For example, converting 99:
99 ÷ 2 = 49 remainder 1
49 ÷ 2 = 24 remainder 1
24 ÷ 2 = 12 remainder 0
12 ÷ 2 = 6 remainder 0
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
1 ÷ 2 = 0 remainder 1
hence (99)10 = (1100011)2
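The same repeated-division idea can be sketched in Python (illustrative code only; the function name is arbitrary):

    # Double-dabble: divide by 2 repeatedly; the remainders, read from last to
    # first, are the binary digits.
    def decimal_to_binary(n: int) -> str:
        if n == 0:
            return "0"
        remainders = []
        while n > 0:
            n, r = divmod(n, 2)          # quotient and remainder
            remainders.append(str(r))
        return "".join(reversed(remainders))

    print(decimal_to_binary(99))         # 1100011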
0, 1, 2, 3, 4, 5, 6, 7
A number system that uses the eight digits 0, 1, 2, 3, 4, 5, 6 and 7 is called the octal number system.
Octal Odometer
Can you think of a way to convert octal numbers to decimal numbers? In the octal number system each digit position corresponds to a power of 8, as shown below.
Suppose you want to change the octal number 23 to decimal. You proceed as follows:
2(8^1) + 3(8^0) = 16 + 3 = 19
Fractions
When you convert a decimal fraction to an octal fraction you multiply instead of divide.
Now suppose you want to convert decimal 0.23 into an octal fraction:
0.23 × 8 = 1.84 → 1
0.84 × 8 = 6.72 → 6
0.72 × 8 = 5.76 → 5
0.76 × 8 = 6.08 → 6
hence (0.23)10 ≈ (0.1656)8
When you convert from octal to binary, each octal digit becomes a group of three binary digits. For conversion from binary to octal you reverse the process: group the bits in threes, starting at the binary point. For example, the binary number 1011.01101 can be converted into octal as follows:
001 011 . 011 010
 ↓   ↓     ↓   ↓
 1   3  .  3   2
As you can see, leading zeros are added to the leftmost group (and a trailing zero to the last fractional group) to make complete groups of three bits, so (1011.01101)2 = (13.32)8.
Hexadecimal Digits
The hexadecimal system uses sixteen digits: 0-9 and A-F, where A = 10, B = 11, C = 12, D = 13, E = 14 and F = 15. Each hexadecimal digit corresponds to a group of four binary digits. For example, to convert the hexadecimal number 9AF to binary, replace each digit by its four-bit equivalent:
  9    A    F
  ↓    ↓    ↓
1001 1010 1111
so (9AF)16 = (100110101111)2. Conversely, to convert the binary number 1110100011010110 to hexadecimal, group the bits in fours from the right:
1110 1000 1101 0110
  ↓    ↓    ↓    ↓
  E    8    D    6
so (1110100011010110)2 = (E8D6)16.
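These grouping rules can be checked with Python's built-in base conversions (illustrative only; only the integer parts of the worked examples are shown):

    # Grouping bits in threes gives octal digits, in fours hexadecimal digits.
    print(format(int("1011", 2), "o"))          # '13'  (integer part of 1011.01101)
    print(format(int("100110101111", 2), "x"))  # '9af'
    print(format(int("e8d6", 16), "b"))         # '1110100011010110'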
In eight-bit sign-magnitude form, -127 = 1111 1111: the leading bit is the sign bit and the remaining seven bits hold the magnitude 127.
One’s complement is found by inverting all bits with the exception of the sign bit. For two’s
complement, we perform the same operation and in addition, ‘1’ is added to the least significant
bit position.
1. Sign bit
2. Exponent (Characteristic)
3. Mantissa (Significand or Fraction)
To represent a real number using single precision, 4 bytes are used, i.e. 4*8 = 32 bits.
For the IEEE 754 single-precision standard, the mantissa is quoted as 24 bits, as indicated in the table below.
Field      Bits
Mantissa   24
Exponent   8
Sign       1
Simple arithmetic indicates that it would take 33 bits to store a 4-byte single precision number
but we are only using 32.
Although the mantissa is quoted as 24 bits, only 23 bits are actually stored. Normalization allows us to discard the single '1' bit to the left of the radix point (i.e. the binary point for base 2), since a normalized binary number always has a leading 1.
To illustrate, let us convert the decimal number 12 to binary and then normalize:
(12)10 = (1100)2 = 1.100 × 2^3
Another alteration is made to the Real number representation. In this instance, the
exponent is stored as a Biased Exponent. A bias of +127 is added to the exponent.
Let us now look at a few examples of how real numbers are represented.
1. +12 (decimal) = 1.100 × 2^3 (binary, normalized)
Sign = 0 (+ve)
Biased exponent = 3 + 127 = 130 = 1000 0010
Mantissa = .100 0000 0000 0000 0000 0000 (the implicit leading 1 is not stored)
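The same breakdown can be checked with a short Python sketch using the standard struct module (illustrative only):

    import struct

    # Pack +12.0 as an IEEE 754 single-precision value and split the 32 bits
    # into sign (1), biased exponent (8) and stored mantissa (23).
    bits = int.from_bytes(struct.pack(">f", 12.0), "big")
    pattern = format(bits, "032b")
    sign, exponent, mantissa = pattern[0], pattern[1:9], pattern[9:]
    print(sign)      # 0         -> positive
    print(exponent)  # 10000010  -> 130 = 3 + bias of 127
    print(mantissa)  # 10000000000000000000000 -> hidden leading 1 not stored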
To represent a real number using double precision, 8 bytes are used, i.e. 8*8 = 64 bits.
There are 2 exceptions to the rules for floating point numbers (whether single or double precision): the value zero, which is stored with the exponent and mantissa fields all 0s, and special values such as infinity and NaN, which are stored with the exponent field all 1s.
Binary Arithmetic
Addition
Subtraction
Binary Addition
0+0=0
0+1=1
1+0=1
1 + 1 = 0, and carry 1 to the next more significant bit
For example,
  0 0 1 0 0 1 1 0 = 38 (base 10)
+ 0 1 0 1 0 0 0 1 = 81 (base 10)
  0 1 1 1 0 1 1 1 = 119 (base 10)
Note: The rules of binary addition (without carries) are the same as the truths of the XOR gate.
Binary Subtraction
0 - 0 = 0
0 - 1 = 1, and borrow 1 from the next more significant bit
1 - 0 = 1
1 - 1 = 0
For example, 10 - 1 = 01 (the 1 is borrowed from the next more significant bit).
For example, subtracting 20 from 29:
  0 0 0 1 1 1 0 1 = 29 (base 10)
- 0 0 0 1 0 1 0 0 = 20 (base 10)
  0 0 0 0 1 0 0 1 = 9 (base 10)
1's complement:-
The 1's complement of a binary number is obtained by changing every 1 to a 0 and every 0 to a 1. For example:
1011 ----------------> 0100
110110 -------------> 001001
1) Complement the subtrahend by converting all 1’s to 0’s and all 0’s to 1’s.
2) Proceed as in addition.
3) Disregard the carry and add 1 to the total (end –around -carry)
Example:- Perform a subtraction using 1's complement addition of the given numbers.
Solution:- After adding the 1's complement of the subtrahend, the end-around carry (+1) is added to the total, giving the result 000101.
1) Complement the subtrahend by converting all 1’s to 0’s and all 0’s to 1’s.
2) Proceed as in addition.
3) Complement the result and place a negative sign in front of the result.
Example:- Perform the subtraction using 1's complement addition of the given numbers.
Solution:-
Example:- Perform the subtractions using 2’s complement addition of the given
number
Solution:-
Example:- Perform the subtractions using 2’s complement addition of the given number
Solution:-
  1001        1001
- 1011      + 0101  (2's complement of 1011)
            = 1110  (no carry)
Since there is no carry, the result is negative and is in 2's complement form: the 2's complement of 1110 is 0010, so the answer is -0010 (i.e. -2).
a) The MSB is the sign bit: 0 for a + sign and 1 for a - sign.
b) The negative numbers shown represent the 2's complements of the corresponding positive numbers.
Note:- Here -22 will be in its 2’s complement form. Thus +22 (0001 0110) must be converted to
-22 (1110 1010)
Note:- Here -47 will be in its 2's complement form. Thus +47 (0010 1111) must be converted to -47 (1101 0001).
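A minimal Python sketch of the hand method above (illustrative only; function names are arbitrary):

    # n-bit 2's complement, and subtraction by adding the complement and
    # discarding any carry out of the word.
    def twos_complement(value: int, bits: int = 8) -> int:
        return ((~value) + 1) & ((1 << bits) - 1)

    def subtract(minuend: int, subtrahend: int, bits: int = 8) -> int:
        return (minuend + twos_complement(subtrahend, bits)) & ((1 << bits) - 1)

    print(format(twos_complement(0b00010110), "08b"))  # 11101010 (-22)
    print(format(subtract(0b1001, 0b1011, 4), "04b"))  # 1110 (-2 in 4-bit 2's complement)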
Logic Gates
Logic gates process signals which represent true or false. Normally the positive
supply voltage +Vs represents true and 0V represents false. Other terms which are used for the
true and false states are shown in the table on the right. It is best to be familiar with them all.
Gates are identified by their function: NOT, AND, NAND, OR, NOR, EX-OR and EX-NOR.
Capital letters are normally used to make it clear that the term refers to a logic gate.
Note that logic gates are not always required because simple logic functions can be performed
with switches or diodes:
Switches in series perform the AND function; switches in parallel perform the OR function.
Logic states
True    False
1       0
HIGH    LOW
ON      OFF
+Vs     0V
Inverter
The inverter (NOT circuit) performs a basic logic function called inversion or
complementation. The purpose of the inverter is to change one logic level (HIGH / LOW) to the
opposite logic level. In terms of bits, it changes a ‘1’ to a ‘0’ and vice versa. The standard logic
symbol for the inverter and a Venn diagram illustrating the relationship between the variables
and the logic gate operation, are shown in Figure 1-1 and Figure 1-2, respectively.
We generally express the logical operation of a gate with a truth table which lists all input
combinations and the corresponding outputs. The truth table for the NOT gate is shown in Table
1-1.
INPUT   OUTPUT
A       X
0       1
1       0
Note: The total number of possible input combinations (N) for a gate with n input variables is determined by the mathematical formula:
N = 2^n      (1-1)
The AND gate, which is composed of two or more inputs and a single
output, performs logical multiplication. The standard symbol for the AND gate is shown in
Figure 1-3 and its truth table listed in Table 1-2. The logical operation of the AND gate is such
that the output is HIGH (1) when all the inputs are HIGH, otherwise it is LOW (0). The Venn
diagram shown in Figure 1-4 provides an insight into the AND function. The highlighted area
represents the function X=AB.
INPUT OUTPUT
A B
0 0 0
0 1 0
1 0 0
1 1 1
The OR gate
The OR gate, which is composed of two or more inputs and a single output,
performs logical addition. The standard symbol for the OR gate is shown in Figure 1-5 and its
truth table listed in Table 1-3. The logical operation of the OR gate is such that the output is
HIGH (1) when any of the inputs are HIGH, otherwise it is LOW (0). The Venn diagram shown
in Figure 1-6 provides an insight into the OR function. The highlighted area represents the
function X=A+B.
INPUT OUTPUT
A B
0 0 0
0 1 1
1 0 1
1 1 1
The NAND gate
The NAND gate, which is composed of two or more inputs and a single output, has a universal property. The term NAND is formed by the concatenation NOT-AND and implies an AND function with an inverted output: the output is LOW (0) only when all the inputs are HIGH. Its truth table is listed below.
INPUT OUTPUT
A B
0 0 1
0 1 1
1 0 1
1 1 0
The NOR gate, which is composed of two or more inputs and a single
output, also has a universal property. The term NOR is formed by the concatenation NOT-OR
and implies an OR function with an inverted output. The standard symbol for the NOR gate is
shown in Figure 1-7 and its truth table listed in Table 1-5. The logical operation of the NOR gate
is such that the output is HIGH (1) only when all the inputs are LOW.
Figure 1-7 Standard logic symbol for NOR gate
INPUT OUTPUT
A B
0 0 1
0 1 0
1 0 0
1 1 0
The exclusive-OR (XOR) and exclusive-NOR (XNOR) gates can be formed from combinations of the gates already discussed. However, because of their functional importance, these gates are treated as basic gates with their own unique symbols. The truth tables for the XOR and XNOR gates, whose symbols are shown in Figure 1-8, are listed in Table 1-6. The Exclusive-OR is an "inequality" function and the output is HIGH (1) when the inputs are not equal to each other. Conversely, the Exclusive-NOR is an "equality" function and the output is HIGH (1) when the inputs are equal to each other.
Figure 1-8 Standard logic symbols for: (a) XOR (b) XNOR
A B XOR XNOR
0 0 0 1
0 1 1 0
1 0 1 0
1 1 0 1
Table 1-6 Truth table for XOR and XNOR logic gates
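The truth tables above can be reproduced with a few one-line Python functions (illustrative only):

    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NOT(a):     return 1 - a
    def NAND(a, b): return NOT(AND(a, b))
    def NOR(a, b):  return NOT(OR(a, b))
    def XOR(a, b):  return a ^ b
    def XNOR(a, b): return NOT(XOR(a, b))

    # Print every input combination: reproduces Tables 1-2 to 1-6.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b), XNOR(a, b))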
Any gate can be built from NAND or NOR gates (Universal Gate)
As well as making a NOT
gate, NAND or NOR gates can be combined to create any type of gate! This enables a circuit to
be built from just one type of gate, either NAND or NOR. For example an
AND gate is a NAND gate then a NOT gate (to undo the inverting function). Note that AND and
OR gates cannot be used to create other gates because they lack the inverting (NOT) function.
To change the type of gate, such as changing OR to AND, you must do three things: invert (NOT) each input, change the gate type (OR to AND, or AND to OR), and invert (NOT) the output.
For example, an OR gate can be built from NOTed inputs fed into a NAND (AND + NOT) gate.
(Figure: NAND/NOR implementations of the NOT, AND, OR and NOR functions.)
Earlier we stated that the NAND and NOR are universal logic gates. Using De Morgan's theorem and the rules and laws of Boolean algebra, proving this should be an easy task. Figure 1-11 shows
the equivalency between the basic logic gates and their NAND logic circuits counterpart.
Similarly, Figure 1-12 shows the equivalency between the basic logic gates and their NOR logic
circuits counterpart.
Figure 1-11 NAND Equivalent Circuits
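The universal property can also be checked exhaustively with a short Python sketch (illustrative only):

    # NOT, AND and OR built from NAND gates only, compared with the ordinary
    # operators for every input combination.
    def NAND(a, b):  return 1 - (a & b)
    def NOT_n(a):    return NAND(a, a)
    def AND_n(a, b): return NOT_n(NAND(a, b))
    def OR_n(a, b):  return NAND(NOT_n(a), NOT_n(b))

    for a in (0, 1):
        for b in (0, 1):
            assert NOT_n(a) == 1 - a
            assert AND_n(a, b) == (a & b)
            assert OR_n(a, b) == (a | b)
    print("NOT, AND and OR reproduced from NAND gates only")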
Boolean Algebra: A mathematical system for formulating logical statements with symbols so that problems can be solved in a manner similar to ordinary algebra.
In short, Boolean algebra is the mathematics of digital systems. The basic rules for Boolean
addition and multiplication are presented in Table 1-9
0 + 0 = 0        0 . 0 = 0
0 + 1 = 1        0 . 1 = 0
1 + 0 = 1        1 . 0 = 0
1 + 1 = 1        1 . 1 = 1
Commutative Laws
A+B=B+A
AB = BA
In summary, the order in which the variables are ORed or ANDed makes no difference.
Associative Laws
The associative law of addition of three variables is expressed as
A + (B + C) = (A + B) + C
A(BC) = (AB)C
In summary, ORing or ANDing a grouping of variables produces the same result regardless of
the grouping of the variables.
Distributive Law
A (B+C) = AB + AC
This law states that ORing several variables and then ANDing the result with a single variable is equivalent to ANDing that variable with each of the variables in the group and then ORing the results.
Rules of Boolean Algebra
In addition to the laws above, a set of basic rules (rules 1-12) is used when simplifying Boolean expressions; here A' denotes the complement of A. Rules 1-9 are the elementary identities of a variable combined with 0, with 1, with itself and with its complement. The rules most often used in minimisation are:
10. A + AB = A
11. A + A'B = A + B
12. (A + B)(A + C) = A + BC
All these rules, in particular rules 1-9, are easily verified using truth tables.
Let us examine two methods by which we can prove the relationships of rules 10-12. First we
use the laws and rules of Boolean algebra. Second we employ the use of truth tables.
Method 1: A + AB = A(1 + B) = A . 1 = A
Method 2:
A B AB A + AB
0 0 0 0
0 1 0 0
1 0 0 1
1 1 1 1
As the columns for A and A + AB are equal, the rule has been shown to be correct.
Similarly for Rule 12, we can apply the same two methods to prove the relationship.
Method 1: (A + B)(A + C) = AA + AC + AB + BC = A + AC + AB + BC = A(1 + C + B) + BC = A . 1 + BC = A + BC
Method 2:
A  B  C  A+B  A+C  (A+B)(A+C)  BC  A+BC
0 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0
0 1 0 1 0 0 0 0
0 1 1 1 1 1 1 1
1 0 0 1 1 1 0 1
1 0 1 1 1 1 0 1
1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1
Again, as the columns for (A+B)(A+C) and A + BC are equal, the rule has been shown to be correct.
Boolean Function
A binary variable can have two possible states, namely '0' and '1'. A Boolean function is an expression formed with binary variables and logical operators, e.g. X = AB + CD + AD. In essence, a truth table is a list which defines a Boolean function. For example, let us consider the truth table shown in Table 1-8. Note that the function (X) is equal to 1 if A=0, B=0, C=1; otherwise X=0. The algebraic expression representing this function is therefore X = A'B'C. Accordingly, the logic circuit is as shown in Figure 1-10.
Input Output
A B C X
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 0
DE Morgan’s Theorems
Theorem 1:- (A . B)' = A' + B'      (1-2)
That is, the complement of the product is equivalent to the sum of the complements.
Theorem 2:- (A + B)' = A' . B'      (1-3)
Similarly, the complement of the sum is equivalent to the product of the complements.
Applications of DeMorgan’s theorems range from deriving the composite of functions to
providing alternative design to logic circuits. The objective of using these theorems in circuit
design is to minimise the number of ICs required in a logic circuit.
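Both theorems can be verified exhaustively with a small Python check (illustrative only):

    # De Morgan's theorems (1-2) and (1-3) over all combinations of A and B.
    for a in (0, 1):
        for b in (0, 1):
            assert (1 - (a & b)) == ((1 - a) | (1 - b))   # (AB)' = A' + B'
            assert (1 - (a | b)) == ((1 - a) & (1 - b))   # (A+B)' = A'B'
    print("De Morgan's theorems hold for every input combination")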
Canonical Form
A  B  C  Minterm  Maxterm
0  0  0  P0       S0
0  0  1  P1       S1
0  1  0  P2       S2
0  1  1  P3       S3
1  0  0  P4       S4
1  0  1  P5       S5
1  1  0  P6       S6
1  1  1  P7       S7
In short,
minterms and maxterms may be used to define the two standard forms for
logic expressions, namely the sum of products (SOP), or sum of
minterms, and the product of sums (POS), or product of maxterms. These
standard forms of expression aid the logic circuit designer by simplifying
the derivation of the function to be implemented.
SUM OF PRODUCTS
From the truth table below, X = 1 for input combinations 3, 5 and 6, so the sum-of-products (sum of minterms) expression is
X = P3 + P5 + P6, or X = A'BC + AB'C + ABC'
Number A B C X
0 0 0 0 0
1 0 0 1 0
2 0 1 0 0
3 0 1 1 1
4 1 0 0 0
5 1 0 1 1
6 1 1 0 1
7 1 1 1 0
PRODUCT OF SUMS
X = 0 for input combinations 0, 1, 2, 4 and 7, so the product-of-sums (product of maxterms) expression is
X = S0 S1 S2 S4 S7
Note:
Further examples:
Derive the SOP and POS expressions for the truth table shown below:
Number  A  B  X
0       0  0  1
1       0  1  0
2       1  0  1
3       1  1  0
SOP:
X = P0 + P2, or X = A'B' + AB'
POS:
X = S1 S3, or X = (A + B')(A' + B')
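The minterm and maxterm numbers can be read off a truth table mechanically; here is a short Python sketch (illustrative only, arbitrary function name):

    # The truth table is given as the list of output values, indexed by the
    # input-combination number.
    def sop_pos(outputs):
        minterms = [i for i, x in enumerate(outputs) if x == 1]
        maxterms = [i for i, x in enumerate(outputs) if x == 0]
        return minterms, maxterms

    # Two-variable example above: X = 1 for rows 0 and 2, X = 0 for rows 1 and 3.
    print(sop_pos([1, 0, 1, 0]))   # ([0, 2], [1, 3]) -> X = P0 + P2 = S1 S3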
Karnaugh Map
Input Output
A B C X
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 0
1 1 0 1
1 1 1 1
Let us now use the K map technique to simplify the SOP Boolean expression X = A'B'C' + A'B'C + AB'C' + AB'C + ABC. The associated truth table and K map are presented in Table 1-19 and Table 1-20, respectively.
Input Output
A B C X
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 0
1 1 1 1
Table 1-19 Truth table of the expression above
Suppressed Variables
Input Output
A B C X
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 1
From the three groupings shown in the K map we form the expression X = AB + AC' + BC'.
To illustrate, let us consider the function specified by Table 1-23 (where an X in the output column denotes a don't-care condition) and its corresponding K map shown in Table 1-24. Note that the two groupings, formed with the help of the don't-care cells, determine that the simplified expression is X = C + A'B'.
Input Output
A B C X
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 X
1 1 0 X
1 1 1 X
Recall that the basic rules of binary addition are as indicated below in Table
2-9. A circuit known as the half-adder carries out these basic operations. The half-adder,
illustrated in Figure 2-1, accepts two binary digits and produces the sum and carry digits.
Addition Rules
A + B   Carry   Sum
0 + 0   0       0
0 + 1   0       1
1 + 0   0       1
1 + 1   1       0
Observe from Table 2-9 that we may derive logical expressions for the sum (S) and carry output (CO) bits. Namely, CO = AB and S = A ⊕ B (i.e. A'B + AB'). From these two expressions, the implementation of the half-adder function is developed as illustrated in Figure 2-2.
Figure 2-2 Logic Circuit for half-adder
Full Adder
Input Output
A B CI CO S
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1
By the very nature of the full adder we know that the two input bits must be added to the carry input bit. Recall that for the half-adder the sum of A and B is the XOR of those two variables, S = A ⊕ B. Similarly, for the three variables A, B and CI the sum becomes S = A ⊕ B ⊕ CI.
This is illustrated in Figure 2-4.
By examining Table 2-10, observe that the carry output (CO) is '1' when both inputs to the first XOR gate are '1' or when both inputs to the second XOR gate are '1'. Consequently, CO = AB + (A ⊕ B)CI. Note that the complete logic circuit for the full adder is illustrated in Figure 2-5.
Figure 2-5 Logic Circuit for the full-adder
NB: An alternative design of this circuit may be found by forming the SOP expression for the
sum and carry functions and then implementing the results.
Figure 2-6 shows how two half-adders can be arranged to form a full adder.
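A behavioural Python sketch of that arrangement (illustrative only):

    # A half-adder, and a full adder made from two half-adders plus an OR gate
    # as in Figure 2-6: S = A xor B xor CI, CO = AB + (A xor B)CI.
    def half_adder(a, b):
        return a ^ b, a & b            # sum, carry

    def full_adder(a, b, ci):
        s1, c1 = half_adder(a, b)
        s, c2 = half_adder(s1, ci)
        return s, c1 | c2              # sum, carry out

    for a in (0, 1):
        for b in (0, 1):
            for ci in (0, 1):
                print(a, b, ci, full_adder(a, b, ci))   # reproduces Table 2-10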
Half subtractor
Truth table
X Y D B
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
From the above table one can draw the Karnaugh map for "difference" and "borrow".
Full subtractor
As in the case of the addition using logic gates, a full subtractor is made by
combining two half-subtractors and an additional OR-gate. A full subtractor has the borrow in
capability (denoted as BORIN in the diagram below) and so allows cascading which results in
the possibility of multi-bit subtraction. The circuit diagram for a full subtractor is given below.
Truth table
X Y Z D B
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
Binary Adder
The use of one half-adder or one full-adder alone is great for adding two binary numbers with a length of one bit each, but what happens when the computer needs to add two binary numbers of greater length? Well, there are several ways of doing this. The
up two binary numbers with a longer length? Well, there are several ways of doing this. The
fastest way by far is to use the Parallel Binary Adder. The parallel binary adder uses one half-
adder, along with one or more full adders. The number of total adders needed depends on the
length of the largest of the two binary numbers that are to be added. For example, if we were to
add up the binary numbers 1011 and 1, we would need four adders in total, because the length of
the larger number is four. Keeping this in mind, here is a demonstration of how a four-bit parallel
binary adder works, using 1101 and 1011 as the two numbers to add:
Just like when we add without the computer, in the parallel binary adder the computer adds from right to left. Here is a step-by-step list showing what happens in the parallel binary adder:
1. In the half-adder, the two least significant bits, 1 and 1, give us a 0 with a carry of 1.
2. In the first full-adder (going from right to left), the inputs of 1 and 0 plus the carry of 1 from the half-adder give us a 0 with a carry of 1.
3. In the second full-adder, the inputs of 0 and 1 plus the carry of 1 from the previous full-adder give us a 0 with a carry of 1.
4. In the third and final full-adder, the inputs of 1 and 1 plus the carry of 1 from the previous full-adder give us a 1 with a carry of 1.
5. Since there are no more bits to add and there is still a carry of 1, the carry becomes the most significant bit.
The result is therefore 11000, which is 24 in decimal (1101 = 13 plus 1011 = 11).
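The whole four-bit addition can be sketched in Python as a ripple of full-adder stages (illustrative only):

    # 4-bit parallel (ripple-carry) adder: the carry from each stage feeds the
    # next, and a final carry becomes the most significant bit.
    def parallel_add(a_bits, b_bits):
        carry, result = 0, []
        for a, b in zip(reversed(a_bits), reversed(b_bits)):   # right to left
            result.append(a ^ b ^ carry)
            carry = (a & b) | (carry & (a ^ b))
        result.append(carry)                                   # final carry = MSB
        return list(reversed(result))

    print(parallel_add([1, 1, 0, 1], [1, 0, 1, 1]))   # [1, 1, 0, 0, 0] -> 11000 (24)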
Multiplexer
A multiplexer (data selector) routes one of several data inputs to a single output Y according to the binary value on its data-select lines. The data-select table for a four-input multiplexer is shown below.
S1  S0  Output Y
0   0   D0
0   1   D1
1   0   D2
1   1   D3
Note that if a binary zero appears on the data-select lines then data on input line D0 will appear
on the output. Thus, data output Y is equal to D0 if and only if S1=0 and S0=0
Demultiplexer
A demultiplexer performs the reverse operation of a multiplexer: the single data input D is routed to one of the outputs Y0-Y3, selected by the address lines S1 and S0, as shown in the table below.
Data  S1  S0   Y0  Y1  Y2  Y3
D 0 0 D 0 0 0
D 0 1 0 D 0 0
D 1 0 0 0 D 0
D 1 1 0 0 0 D
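Both tables can be modelled with two small Python functions (illustrative only):

    # 4-to-1 multiplexer and 1-to-4 demultiplexer driven by select lines S1, S0.
    def mux4(d, s1, s0):
        return d[s1 * 2 + s0]          # route the selected input to Y

    def demux4(data, s1, s0):
        y = [0, 0, 0, 0]
        y[s1 * 2 + s0] = data          # route D to the selected output
        return y

    print(mux4([5, 6, 7, 8], 1, 0))    # 7  (input D2 selected)
    print(demux4(1, 0, 1))             # [0, 1, 0, 0]  (output Y1 selected)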
Encoders
Recall that with BCD the ten decimal digits (0,1,…,9) are
represented by their four-digit binary counterparts. Consequently, we expect the Decimal-to-
BCD encoder to have 10 inputs and 4 outputs. The logic symbol for this 10-line-to-4-line
decoder is presented in Figure 2-11 and the associated conversion table listed in Table 2-13 with
A3 representing the most significant bit.
Figure 2-11 Logic symbol for Decimal-to-BCD Encoder
BCD Code
Decimal
Digit
A3 A2 A1 A0
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
From the table, the most significant output bit A3 is HIGH (1) only when decimal input 8 or 9 is HIGH, so A3 = 8 + 9 (where + denotes OR).
Similarly, A2, A1 and A0 are defined by the functions
A2 = 4 + 5 + 6 + 7
A1 = 2 + 3 + 6 + 7
A0 = 1 + 3 + 5 + 7 + 9
Note that a connection for the decimal digit zero is not required as in this case the BCD outputs
are all LOW when there are no HIGH inputs.
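The four OR expressions translate directly into a Python sketch of the encoder (illustrative only; d[0]..d[9] stand for the ten decimal input lines):

    # Decimal-to-BCD encoder: A3 = 8+9, A2 = 4+5+6+7, A1 = 2+3+6+7,
    # A0 = 1+3+5+7+9 (here + means OR).
    def decimal_to_bcd(d):
        a3 = d[8] | d[9]
        a2 = d[4] | d[5] | d[6] | d[7]
        a1 = d[2] | d[3] | d[6] | d[7]
        a0 = d[1] | d[3] | d[5] | d[7] | d[9]
        return a3, a2, a1, a0

    lines = [0] * 10
    lines[6] = 1                      # decimal input 6 is HIGH
    print(decimal_to_bcd(lines))      # (0, 1, 1, 0)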
Decoder
Decoders can detect a code and activate a single output to signal the presence of
that code. Decoders have many applications, from producing system alerts in alarm systems to
performing the task of driving multiple devices in microprocessor systems (e.g. memory).
Digit  A2 A1 A0   D0 D1 D2 D3 D4 D5 D6 D7
0      0  0  0    1  0  0  0  0  0  0  0
1      0  0  1    0  1  0  0  0  0  0  0
2      0  1  0    0  0  1  0  0  0  0  0
3      0  1  1    0  0  0  1  0  0  0  0
4      1  0  0    0  0  0  0  1  0  0  0
5      1  0  1    0  0  0  0  0  1  0  0
6      1  1  0    0  0  0  0  0  0  1  0
7      1  1  1    0  0  0  0  0  0  0  1
Table 2-11 Decoding functions and truth tables for the 3-line-to-8-line decoder
Now, we can develop a decoder based on each logic function and implement the SOP logic
circuit. This is illustrated below in Figure 2-8.
Figure 2-8 Internal Circuitry for 3-line-to-8-line decoder
It is far more convenient to use the logic symbol for the 3-line-to-8-line decoder, as illustrated in Figure 2-9, rather than repeating the complex internal circuitry each time.
Decimal Digit   Binary Inputs (A3 A2 A1 A0)
0               0 0 0 0
1               0 0 0 1
2               0 0 1 0
3               0 0 1 1
4               0 1 0 0
5               0 1 0 1
6               0 1 1 0
7               0 1 1 1
8               1 0 0 0
9               1 0 0 1
The decoding (logic) function for each digit is the AND of the four inputs, with each input complemented where the corresponding code bit is 0.
Now, we can develop a decoder based on each logic function and implement the SOP logic
circuit. This is illustrated below in Figure 2-10.
Figure 2-10 Logic Circuit for BCD decoder
Sequential Circuits
One important parameter to consider when analyzing sequential circuits is the time taken for the output of the circuit to respond to a change in its inputs; this is referred to as the propagation delay (tp). In practice the output waveforms of a device can be drawn to indicate the presence of tp; however, for many circuits this delay time is ignored when determining the basic logic operation of the circuit.
FLIP-FLOP:
A flip-flop is a bistable device: it can store a single bit, either 0 or 1. It has two outputs, which are complements of each other. Today, the term flip-flop has come to mostly
denote non-transparent (clocked or edge-triggered) devices, while the simpler transparent ones
are often referred to as latches; however, as this distinction is quite new, the two words are
sometimes used interchangeably. A flip-flop is usually controlled by one or two control signals
and/or a gate or clock signal. The output often includes the complement as well as the normal
output. As flip-flops are implemented electronically, they require power and ground connections.
Clock Signal:
A clock is a periodic signal that synchronises when flip-flops are allowed to change state; a clocked flip-flop responds to its inputs only on the triggering edge (or level) of the clock. In these notes the D, T and master-slave flip-flops are treated as negative-edge-triggered flip-flops.
The possible operating conditions of a flip-flop are: Set, Reset, No change, Toggle and Forbidden.
A flip-flop has two outputs; they are commonly named Q and Q' (the complement of Q).
Set
The output Q is forced to 1 (and Q' to 0) by the present input combination.
Reset
The output Q is forced to 0 (and Q' to 1) by the present input combination.
No change
The previous output does not change for the present input combination.
Toggle
In this condition the flip-flop produces the complement of the previous output.
Forbidden
The flip-flop outputs must be complements of each other; any input condition that does not satisfy this is called a forbidden condition.
Uses
A single flip-flop can be used to store one bit, or binary digit, of data.
Any one of the flip-flop types can be used to build any of the others.
Many logic synthesis tools will not use any other type than D flip-flop and D latch.
Level sensitive latches cause problems with Static Timing Analysis (STA) tools and
Design For Test (DFT). Therefore, their usage is often discouraged.
Many FPGA devices contain only edge-triggered D flip-flops
The data contained in several flip-flops may represent the state of a sequencer, the value
of a counter, an ASCII character in a computer's memory or any other piece of
information.
One use is to build finite state machines from electronic logic. The flip-flops remember
the machine's previous state, and digital logic uses that state to calculate the next state.
Types of Flip-Flop:
S-R Flip-Flop
J-K Flip-Flop
J-K MS Flip-Flop
D Flip-Flop [Data or Delay FF]
T Flip-Flop [Toggle FF]
S-R Flip-Flop
The logic symbol for the S-R flip-flop is shown in Figure 3-5 and its operation
outlined in Table 3-3.
Figure 3-5 Logic Symbols for S-R flip-flop (rising edge triggered)
S  R  C             Q  Q'  Operation
0  1  Rising edge   0  1   Reset
1  0  Rising edge   1  0   Set
1  1  Rising edge   ?  ?   Unstable
Let us now examine the output waveforms from the S-R flip-flop given the inputs shown in
Figure 3-6. Assume that Q is HIGH initially.
The Set-Reset latch, commonly called the S-R latch, has two inputs (S and R),
one true output (Q) and one complemented output (Q'), as shown in Figure 3-2. The crossing of
the outputs is known as cross coupling and this circuit is said to employ cross-coupled feed back.
When the output at Q is HIGH (1) the latch is said to be in the set state, similarly when Q = 0 the
latch is said to be in the clear (or Reset) state.
The logic symbol and the corresponding truth table for the S-R latch are presented in Figure 3-3
and Table 3-1 respectively.
S  R  Q  Q'  Operation
0  1  0  1   Reset
1  0  1  0   Set
1  1  ?  ?   Unstable
By introducing a time variable in the truth table, we may use the present input variable
combinations and the latch state, at time t (Qt), to determine and next state of the latch at time
t+1 (Qt+1). This type of table, referred to as the characteristic table, is illustrated in Table 3-2.
S  R  Qt  Qt+1         Operation
0  0  0   0            Hold
0  0  1   1            Hold
0  1  0   0            Reset
0  1  1   0            Reset
1  0  0   1            Set
1  0  1   1            Set
1  1  0   Oscillating  Invalid
1  1  1   Oscillating  Invalid
Note that the circuit designer is responsible for ensuring that S=1 and R=1 do not occur at the
same time on the inputs of the S-R latch.
The S-R latch Circuit with active LOW inputs may be formed by replacing both of the NOR
gates with NAND gates.
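The cross-coupled NOR behaviour can be imitated with a small Python sketch that iterates until the feedback settles (behavioural illustration only):

    # Cross-coupled NOR S-R latch: Q = NOR(R, Q'), Q' = NOR(S, Q).
    def sr_latch(s, r, q, q_bar):
        for _ in range(4):                 # let the feedback settle
            q = 1 - (r | q_bar)            # NOR
            q_bar = 1 - (s | q)            # NOR
        return q, q_bar

    q, q_bar = 0, 1
    q, q_bar = sr_latch(1, 0, q, q_bar)    # Set
    print(q, q_bar)                        # 1 0
    q, q_bar = sr_latch(0, 0, q, q_bar)    # Hold
    print(q, q_bar)                        # 1 0
    q, q_bar = sr_latch(0, 1, q, q_bar)    # Reset
    print(q, q_bar)                        # 0 1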
The Gated S-R latch has some additional circuitry and an enable (EN) input. This type of
latch will not operate unless the enable is active.
The D Flip-Flop
The logic symbol for the D flip-flop is shown in Figure 3-7 and its
operation outlined in Table 3-4. Notice that this flip-flop only has one input in addition to the
clock called the D-input. Note that whatever is on the D-input when the trigger occurs is output
at Q.
D  C             Q  Q'  Operation
0  Rising edge   0  1   Reset
1  Rising edge   1  0   Set
Notice that a D flip-flop can be made from an S-R flip-flop by ensuring that the S and R inputs are the complement of each other at all times.
J-K Flip-Flop
The J-K flip-flop is perhaps the most widely used type of flip-flop. Its
function is identical to that of the S-R flip flop in the SET, RESET and HOLD conditions of
operation. The difference is that the J-K flip-flop does not have any invalid states. The logic
symbol for the J-K flip-flop is presented in Figure 3-8 and its corresponding truth table is listed
in Table 3-5. Notice that for J=1 and K=1 the output toggles, that is to say that the output at time
t is complemented at time t+1.
Figure 3-8 Logic Symbols for J-K flip-flop (rising edge triggered)
J  K  C             Q        Q'       Operation
0  1  Rising edge   0        1        Reset
1  0  Rising edge   1        0        Set
1  1  Rising edge   Toggles  Toggles  Toggle
Let us now consider the J-K flip-flop operation as illustrated by the waveform diagrams in
Figure 3-9. Again, we assume that Q is HIGH initially.
Figure 3-9 J-K waveform diagram
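A behavioural Python sketch of the J-K operation on each triggering edge (illustrative only):

    # J-K flip-flop next state: hold, reset, set or toggle (Table 3-5).
    def jk_flip_flop(j, k, q):
        if j == 0 and k == 0:
            return q          # no change
        if j == 0 and k == 1:
            return 0          # reset
        if j == 1 and k == 0:
            return 1          # set
        return 1 - q          # toggle

    q = 1                     # Q assumed HIGH initially, as in Figure 3-9
    for j, k in [(0, 1), (1, 0), (1, 1), (1, 1), (0, 0)]:
        q = jk_flip_flop(j, k, q)
        print(j, k, q)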
Recall that the flip-flops discussed so far are called synchronous because the transfer of the data
from the input to the output lines is synchronized with the triggering edge of the clock pulse.
Most integrated circuit flip-flops also have asynchronous inputs that affect state of the flip-flop
independent of the clock. These inputs are normally labeled preset (PRE) and clear (CLR) by
the manufacturers. An active level on the preset will SET (1) the flip-flop, similarly an active
level on the clear will RESET (0) the flip-flop. The logic symbol and the operation of a positive
edge-triggered J-K flip-flop with active-LOW preset (PRE') and clear (CLR') inputs are shown
in Figure 3-10 and Figure 3-11, respectively. Note that we assume that Q is HIGH initially.
Figure 3-10 Logic Symbols for J-K flip-flop with PRE' and CLR'
Pulse-Triggered Master-Slave
A major restriction of the pulse-triggered flip-flop is that the data inputs must not change
while the clock pulse is HIGH, because the flip-flop is sensitive to any changes of input
levels during this time.
Master-Slave J-K Flip-Flop
Recall the truth table for the J-K flip-flop, shown below in Table 3-5.
J  K  C      Q        Q'       Operation
0  1  Pulse  0        1        Reset
1  0  Pulse  1        0        Set
1  1  Pulse  Toggles  Toggles  Toggle
T- Flip Flop
The T or "toggle" flip-flop changes its output on each clock edge, giving an
output which is half the frequency of the signal to the T input. It is useful for constructing binary
counters, frequency dividers, and general binary addition devices. It can be made from a J-K flip-
flop by tying both of its inputs high.
Symbol:
Connection Diagram:
When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if clock
frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This
"divide by" feature has application in various types of digital counters. A T flip-flop can also be
built using a JK flip-flop (J & K pins are connected together and act as T) or D flip-flop (T input
and Qprevious is connected to the D input through an XOR gate).
Excitation table
In electronics design, an excitation table shows the minimum inputs that are
necessary to generate a particular next state when the current state is known. They are similar to
truth tables and state tables, but rearrange the data so that the current state and next state are next
to each other on the left-hand side of the table, and the inputs needed to make that state change
happen are shown on the right side of the table.
Method to draw the excitation table of a flip-flop: list Q(t) and Q(t+1) for all possible cases, i.e. 00, 01, 10 and 11, and then work out the flip-flop input value that produces the desired Q(t+1) from Q(t). For example, in a T flip-flop, when we want Q(t+1) to be the complement of Q(t) we make T = 1, and when Q(t+1) = Q(t) we make T = 0; thus we get the T column as below:
Q(t)  Q(t+1)  T
0     0       0
0     1       1
1     0       1
1     1       0
Race condition
A race condition or race hazard is a flaw in an electronic system or process whereby the output
or result of the process is unexpectedly and critically dependent on the sequence or timing of
other events. The term originates with the idea of two signals racing each other to influence the
output first.
Race conditions can occur in electronics systems, especially logic circuits, and in computer
software, especially multithreaded or distributed programs.
Latch or flip-flop
In electronics, a flip-flop or latch is a circuit that has two stable states and can
be used to store state information. The circuit can be made to change state by signals applied to
one or more control inputs and will have one or two outputs. It is the basic storage element in
sequential logic. Flip-flops and latches are a fundamental building block of digital electronics
systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements. Such data storage can be used for
storage of state, and such a circuit is described as sequential logic. When used in a finite-state
machine, the output and next state depend not only on its current input, but also on its current
state (and hence, previous inputs). It can also be used for counting of pulses, and for
synchronizing variably-timed input signals to some reference timing signal.
Introduction
This section begins our study of designing an important class of clocked sequential logic circuits-
synchronous finite-state machines. Like all sequential circuits, a finite-state machine determines
its outputs and its next state from its current inputs and current state. A synchronous finite-state
machine changes state only on the clocking event.
Counter design is a good place for you to start understanding the design process for finite-state
machines. Counters are the simplest possible finite-state machines. They typically have only a
single input instructing them to count (often just the clock), and their outputs are nothing more
than their current state.
Three-Bit Up-Counter: State Transition Diagram
Let's start with a simple counter, a 3-bit binary up-counter. We begin the design process by understanding how the counter is to operate.
A convenient way to describe this is with a graphical specification called a state transition
diagram.
The state transition diagram is a graph with nodes and directed arcs. Each node represents a
unique state of the counter. A directed arc connects two nodes representing the present state and
the next state. If the counter is in the state at the tail of the arc, it will advance to the state at the
head of the arc at the next count request.
For the example design, we will dispense with a "count" input, and simply allow the counter to
advance to the next state on each clock pulse.
Figure 7.14(a) shows a state transition diagram for the example. The nodes are labeled with the
counter state they represent, and the arcs connect the nodes in the sequence implemented by the
counter. We can describe the behavior of any finite-state machine with a state transition diagram,
although the diagrams are typically more complex than those for counters.
Each bit of the state is held by a single storage element. In this example, the counter proceeds
through eight states. To assign binary codes to these states, we need exactly three storage
elements. We have named the storage elements C, B, and A, from the highest- to the lowest-order
bit.
Three-Bit Up-Counter: Flip-Flop Choice
The next step is to choose a kind of flip-flop to implement the counter's storage elements. A close look at the state transition table suggests that
toggle flip-flops might be an attractive choice. In essence, A toggles on every clock pulse, B on
every second clock pulse, and C on every fourth clock pulse. This is a binary counter, after all.
The rightmost column of the state transition table of Figure 7.14(b) shows the inputs that must be
presented to toggle flip-flops to implement the desired state transitions. For example, consider
the state transition from 011 to 100. To get toggle flip-flops to implement these state changes, we
must set the toggle input of each flip-flop to one. The transition will take place on the appropriate
clock edge after the toggle inputs are set.
Three-Bit Up-Counter: Next-State Logic
Our task now is to design combinational logic whose input is the current state of the counter and whose output is the toggle inputs to the state flip-
flops. For this simple example, we can determine the logic just by examining the transition table.
Flip-flop A toggles on each state transition, B toggles whenever A is asserted, and C toggles
whenever A and B are asserted.
For more complex examples, we can view the transition table as a truth table that specifies the
flip-flops' inputs as a function of C, B, and A. We would use standard K-map methods to obtain
the reduced Boolean expressions. The K-maps for TC, TB, and TA are shown in Figure 7.15.
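A quick behavioural check of that next-state logic in Python (illustrative only), using the toggle inputs TA = 1, TB = A and TC = AB read from the transition table:

    # Each toggle flip-flop complements its output when its T input is 1.
    def next_state(c, b, a):
        ta, tb, tc = 1, a, a & b
        return c ^ tc, b ^ tb, a ^ ta

    state = (0, 0, 0)
    for _ in range(8):
        print(state)                    # counts 000, 001, ..., 111 and wraps
        state = next_state(*state)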
The design procedure just illustrated can be summarised in four steps.
Step 1: We draw the state transition diagram that describes the desired count sequence.
Step 2: We next derive the state transition table from the state diagram, tabulating the current state with the next state in the count sequence. Each state bit is implemented by its own flip-flop.
Step 3: We express each next-state bit as a combinational logic function of the current
state bits.
Step 4: We choose a flip-flop for implementation. Then we must "remap" the next-state
mapping derived in step 3 to obtain the desired behavior from the selected flip-flop.
Example Generalized Counter Design
To see this four-step process in action, let's look at another implementation of a counter. We will design a 3-bit counter that advances through the sequence 000, 010, 011, 101, 110, 000 and repeats. Not all of the possible combinations of the 3 bits
represent a valid state. The unused states, 001, 100, and 111, can be used as don't-care conditions
to simplify the logic.
Step 1: Draw the state transition diagram. This is shown in Figure 7.18.
Step 2: Derive the state transition table. This is shown in Figure 7.19.
Step 3: Express each next-state bit as a combinational logic function of three current-state bits.
Figure 7.20 shows the appropriate K-maps.
Step 4: Choose a flip-flop type for implementation. Since this is almost a straight binary
sequence, toggle flip-flops seem like a reasonable choice. We use the toggle flip-flop excitation
table in Figure 7.22 to derive new next-state maps. Then we replace the desired state bits in the
K-map with the values needed to control the selected flip-flops to perform the necessary state
changes.
Figure 7.21 shows the toggle inputs needed to implement the state transitions of Figure 7.18.
For example, counter state 000 advances to 010, so the inputs to the toggle flip-flops should be 0
(don't toggle) for C, 1 (toggle) for B, and 0 (don't toggle) for A. Similarly, state 110 returns to
000. In this case, the control for C, B, A is toggle, toggle, don't toggle, respectively, or 110.
Reflecting this remapping of functions, the K-maps become those of Figure 7.23.
Counter Classification
Asynchronous counters do
not have a common clock that controls all the flip-flop stages. The control clock is input to the
first stage, or the LSB stage of the counter. The clock for each subsequent stage is obtained from the flip-flop of the prior stage.
Let us analyse the 2-bit counter shown in Figure 3-15 and its corresponding waveform diagram
in Figure 3-16. Note that we assume that Q0 and Q1 are LOW initially.
The counter has two flip-flops and two output bits, therefore it is a two-stage counter.
The input clock does not trigger both flip-flops, therefore it is an asynchronous counter.
The J and K inputs are tied together and kept HIGH, so the flip-flops are configured to toggle.
The flip-flops are positive edge triggered.
The waveform analysis reveals that Q0 is the LSB and that its frequency is ½ the input clock frequency. Furthermore, Q1 is the MSB and its frequency is ¼ the input clock frequency.
The count sequence is 00, 01, 10, 11 where the LSB is Q0. Thus it is a MOD-4 binary up counter.
The state transition diagram for this MOD-4 binary up counter is shown in figure 3-17.
Asynchronous counters are also known as ripple counters because the effect of the input clock pulse, which is first "felt" by the flip-flop at the first stage, cannot immediately be "felt" by the flip-flop at the second stage. This is due to the propagation delay. Thus, the effect of the input clock "ripples" through the counter until it reaches the final stage.
In some texts the waveform diagram is also referred to as a timing diagram
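A behavioural Python sketch of this 2-bit ripple counter (illustrative only; it assumes the second stage is effectively clocked by the falling edge of Q0, i.e. the rising edge of Q0', which gives the stated 00, 01, 10, 11 sequence):

    q0, q1 = 0, 0
    for pulse in range(8):
        old_q0 = q0
        q0 = 1 - q0                     # the first stage toggles on every clock pulse
        if old_q0 == 1 and q0 == 0:     # falling edge of Q0 clocks the second stage
            q1 = 1 - q1
        print(q1, q0)                   # 01, 10, 11, 00, 01, ...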
To design an asynchronous counter:
Specify the operational requirements - number of stages, modulus, trigger and characteristics.
Graph the desired output waveforms.
Determine the necessary output to use as the clock input to the following stage. Either the true or complemented output could serve as the clocking signal.
Verify the circuit through analysis and testing.
Synchronous digital counters have a common clock which results in all the flip-flops being
triggered simultaneously. Consequently, there are no cumulative delays that result because the
clock signal must ripple through the stages as in the asynchronous counters. Synchronous
counters can be designed to count up and down in numerical order. In addition, they may be used
to produce count sequences of non-consecutive numbers. The count sequence produced by
synchronous counters is not dependent on the trigger characteristics of the flip-flops that
comprise the count stages. The count sequence is achieved by applying the required logic
function into the flip-flops.
Verify that the counter is indeed synchronous (i.e. identify the common clock feature).
Binary Up Counters
Take for example A4 A3 A2 A1 = 0011. On the next count, A4 A3 A2 A1 = 0100. A1, the lowest-order bit, is always complemented. A2 is complemented because all the lower-order positions (A1 only in this case) are 1's. A3 is also complemented because all the lower-order positions, A2 and A1, are 1's. But A4 is not complemented because the lower-order positions, A3 A2 A1 = 011, do not give an all-1 condition.
To implement a synchronous counter, we need a flip-flop for every bit and an AND gate for
every bit except the first and the last bit. The diagram below shows the implementation of a 4-bit
synchronous up-counter.
4-bit Synchronous Binary Up-Counter
From the diagram above, we can see that although the counter is synchronous and is supposed to
change simultaneously, we have a propagation delay through the AND gates which add up to
give an overall propagation delay which is proportional to the number of bits of the counter. To
overcome this problem, we can feed the outputs from the flip-flops directly to a many-input
AND gate as follows :
This method does overcome the problem of additive propagation delay but introduces some other
problem of its own. From the diagram above, we can see that the third flip-flop gets its J-K input
from the output of a 2-input AND gate and the fourth flip-flop gets its input from a 3-input AND
gate and so on. If we have a counter that counts up to, for example, 16 bits, we will need to have:
1 * 15-input AND gate,
1 * 14-input AND gate,
...
...
...
1 * 3-input AND gate and
1 * 2-input AND gate.
This method obviously uses a lot more resources than the first method. Not only that, in the first method, the output from each flip-flop is only used as an input to one AND gate. In the second method, the output from each flip-flop is used as an input to all the higher-order bits. If we have a 12-bit counter, the output of the first flip-flop will have to drive 10 gates (this is called fan-out). The output from the flip-flop may not have the drive capability to do this.
The "solution" to this is to use a compromise between the two methods. Say we have a 12-bit
counter, we can organize it into 3 groups of 4. Within each group of 4, we use the second method
and between the 3 groups, use the first method. This way, we only have an overall gate
propagation delay and a maximum fan-out of 3 instead of 10 using the first and second method
respectively.
There are many variations to the basic binary counter. The one described above is the binary up
counter (counts upwards). Besides the up counter, there is the binary down counter, the binary
up/down counter, binary-coded-decimal (BCD) counter etc. Any counter that counts in binary is
called a binary counter.
In a binary up counter, a particular bit, except for the first bit, toggles if all the lower-order bits
are 1's. The opposite is true for binary down counters. That is, a particular bit toggles if all the
lower-order bits are 0's and the first bit toggles on every pulse.
Taking an example, A4 A3 A2 A1 = 0100. On the next count, A4 A3 A2 A1 = 0011. A1, the lowest-order bit, is always complemented. A2 is complemented because all the lower-order positions (A1 only in this case) are 0's. A3 is also complemented because all the lower-order positions, A2 and A1, are 0's. But A4 is not complemented because the lower-order positions, A3 A2 A1 = 100, do not give an all-0 condition.
4-bit Synchronous Binary Down Counter
The implementation of a synchronous binary down counter is exactly the same as that of a synchronous binary up counter except that the inverted output from each flip-flop is used. All the methods used to improve a binary up counter can be similarly applied here.
The similarities between the implementation of a binary up counter and a binary down counter
leads to the possibility of a binary up/down counter, which is a binary up counter and a binary
down counter combined into one. Since the difference is only in which output of the flip-flop to
use, the normal output or the inverted one, we use two AND gates for each flip-flop to "choose"
which of the output to use.
From the diagram, we can see that COUNT-UP and COUNT-DOWN are used as control inputs
to determine whether the normal flip-flop outputs or the inverted ones are fed into the J-K inputs
of the following flip-flops. If neither is at logic level 1, the counter doesn't count and if both are
at logic level 1, all the bits of the counter toggle at every clock pulse. The OR gate allows either
of the two outputs which have been enabled to be fed into the next flip-flop. As with the binary
up and binary down counter, the speed up techniques apply.
MOD-N/Divide-by-N Counters
A normal binary counter counts from 0 to 2^N - 1, where N is the number of bits/flip-flops in the counter. In some cases, we want it to count to numbers other than 2^N - 1. This can be done by
allowing the counter to skip states that are normally part of the counting sequence. There are a
few methods of doing this. One of the most common methods is to use the CLEAR input on the
flip-flops.
In the example above, we have a MOD-6 counter. Without the NAND gate, it is a MOD-8
counter. Now, with the NAND gate, the output from the NAND gate is connected to the
asynchronous CLEAR inputs of each flip-flop. The inputs to the NAND gate are the outputs of
the B and C flip-flops. So, all the flip-flops will be cleared when B = C = 1 (110 in binary = 6 in decimal). When
the counter goes from state 101 to state 110, the NAND output will immediately clear the
counter to state 000. Once the flip-flops have been cleared, the B = C = 1 condition no longer
exists and the NAND output goes back to high. The counter will therefore count from 000 to
101, and for a very short period of time, be in state 110 before the counter is cleared. This state is
called the temporary state and the counter usually only remains in a temporary state for a few
nanoseconds. We can essentially say that the counter skips 110 and 111 so that it goes only six
different states; thus, it is a MOD-6 counter. We also have to note that the temporary state causes
a spike or glitch on the output waveform of B. This glitch is very narrow and will not normally
be a problem unless it is used to drive other circuitry outside the counter. The 111 state is the
unused state here. In a state machine with unused states, we need to make sure that the unused
states do not cause the system to hang, ie. no way to get out of the state. We don't have to worry
about this here because even if the system does go to the 111 state, it will go to state 000, a valid
state) on the next clock pulse.
Ring Counters
Ring counters are implemented using shift registers. It is essentially a circulating shift register
connected so that the last flip-flop shifts its value into the first flip-flop. There is usually only a
single 1 circulating in the register, as long as clock pulses are applied.
4-bit Synchronous Ring Counter
In the diagram above, assuming a starting state of Q3 = 1 and Q2 = Q1 = Q0 = 0. At the first pulse,
the 1 shifts from Q3 to Q2 and the counter is in the 0100 state. The next pulse produces the 0010
state and the third, 0001. At the fourth pulse, the 1 at Q0 is transferred back to Q3, resulting in the
1000 state, which is the initial state. Subsequent pulses will cause the sequence to repeat, hence
the name ring counter.
The ring counter above functions as a MOD-4 counter since it has four distinct states and each
flip-flop output waveform has a frequency equal to one-fourth of the clock frequency. A ring
counter can be constructed for any MOD number. A MOD-N ring counter will require N flip-flops connected in the same arrangement as the diagram above.
A ring counter requires more flip-flops than a binary counter for the same MOD number. For
example, a MOD-8 ring counter requires 8 flip-flops while a MOD-8 binary counter only requires 3 (2^3 = 8). So if a ring counter is less efficient in the use of flip-flops than a binary
counter, why do we still need ring counters? One main reason is because ring counters are much
easier to decode. In fact, ring counters can be decoded without the use of logic gates. The
decoding signal is obtained at the output of its corresponding flip-flop.
For the ring counter to operate properly, it must start with only one flip-flop in the 1 state and all
the others at 0. Since it is not possible to expect the counter to come up to this state when power
is first applied to the circuit, it is necessary to preset the counter to the required starting state
before the clock pulses are applied. One way to do this is to apply a pulse to the PRESET input
of one of the flip-flops and the CLEAR inputs of all the others. This will place a single 1 in the
ring counter.
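The circulating 1 can be sketched in a few lines of Python (illustrative only):

    # 4-bit ring counter: the last flip-flop feeds the first, so the single 1
    # rotates Q3 -> Q2 -> Q1 -> Q0 -> Q3 (MOD-4).
    state = [1, 0, 0, 0]                 # preset: Q3 = 1, the others cleared
    for pulse in range(6):
        print(state)
        state = state[-1:] + state[:-1]  # rotate one position per clock pulse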
Johnson/Twisted-Ring Counters
The Johnson counter, also known as the twisted-ring counter, is exactly the same as the ring
counter except that the inverted output of the last flip-flop is connected to the input of the first
flip-flop.
4-bit Synchronous Johnson Counter
The Johnson counter works in the following way : Take the initial state of the counter to be 000.
On the first clock pulse, the inverse of the last flip-flop will be fed into the first flip-flop,
producing the state 100. On the second clock pulse, since the last flip-flop is still at level 0,
another 1 will be fed into the first flip-flop, giving the state 110. On the third clock pulse, the
state 111 is produced. On the fourth clock pulse, the inverse of the last flip-flop, now a 0, will be
shifted to the first flip-flop, giving the state 011. On the fifth and sixth clock pulse, using the
same reasoning, we will get the states 001 and 000, which is the initial state again. Hence, this
Johnson counter has six distinct states : 000, 100, 110, 111, 011 and 001, and the sequence is
repeated so long as there is input pulse. Thus this is a MOD-6 Johnson counter.
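The same six-state sequence falls out of a short Python sketch (illustrative only):

    # 3-bit Johnson counter: the inverted output of the last flip-flop feeds
    # the first, giving 000, 100, 110, 111, 011, 001 (MOD-6).
    state = [0, 0, 0]
    for pulse in range(7):
        print("".join(map(str, state)))
        state = [1 - state[-1]] + state[:-1]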
The MOD number of a Johnson counter is twice the number of flip-flops. In the example above,
three flip-flops were used to create the MOD-6 Johnson counter. So for a given MOD number, a
Johnson counter requires only half the number of flip-flops needed for a ring counter. However,
a Johnson counter requires decoding gates whereas a ring counter doesn't. As with the binary
counter, one logic gate (AND gate) is required to decode each state, but with the Johnson
counter, each gate requires only two inputs, regardless of the number of flip-flops in the counter.
Note that we are comparing with the binary counter using the speed up technique discussed
above. The reason for this is that for each state, two of the N flip-flops used will be in a unique combination of states. In the example above, the combination Q2 = Q0 = 0 occurs only once in the counting sequence, at the count of 0 (the state 010 does not occur). Thus, an AND gate with inputs (not Q2) and (not Q0) can be used to decode this state. The same characteristic is shared by all the other states in the sequence.
Johnson counters represent a middle ground between ring counters and binary counters. A
Johnson counter requires fewer flip-flops than a ring counter but generally more than a binary
counter; it has more decoding circuitry than a ring counter but less than a binary counter. Thus, it
sometimes represents a logical choice for certain applications.
Registers
A register is a group of flip-flops used to store or manipulate data, or both. Each flip-flop is capable of storing one bit of information. An n-bit register has n flip-flops and is capable of storing any binary information containing n bits.
The register is a type of sequential circuit and an important building block used in digital systems like multipliers, dividers, memories, microprocessors etc.
A register stores a sequence of 0's and 1's. Registers that are used only to store information are known as memory registers. If they are used to process information, they are called shift registers.
Shift Register
A register that is capable of shifting data one bit at a time is called a shift
register. The logical configuration of a serial shift register consists of a chain of flip-flops
connected in cascade, with the output of one flip-flop being connected to the input of its
neighbour. The operation of the shift register is synchronous; thus each flip-flop is connected to a
common clock. Using D flip-flops forms the simplest type of shift-registers.
The basic data movements possible within a four-bit shift register are shown in Figure 3-22.
Figure 3-22 Data movement diagrams
Figure 3-23 shows the circuit diagram for a four-bit serial in-serial out shift register implemented using D flip-flops. Assuming that the register is initially clear, Figure 3-24 shows the waveform diagram when '1000' is shifted through the register.
Serial-in, serial-out shift registers delay data by one clock time for each stage. They will store a
bit of data for each register. A serial-in, serial-out shift register may be one to 64 bits in length,
longer if registers or packages are cascaded.
Below is a single stage shift register receiving data which is not synchronized to the register
clock. The "data in" at the D pin of the type D FF (Flip-Flop) does not change levels when the
clock changes for low to high. We may want to synchronize the data to a system wide clock in a
circuit board to improve the reliability of a digital logic circuit.
The obvious point (as compared to the figure below) illustrated above is that whatever "data in" is present at the D pin of a type D FF is transferred from D to output Q at clock time. Since our
example shift register uses positive edge sensitive storage elements, the output Q follows the D
input when the clock transitions from low to high as shown by the up arrows on the diagram
above. There is no doubt what logic level is present at clock time because the data is stable well
before and after the clock edge. This is seldom the case in multi-stage shift registers. But, this
was an easy example to start with. We are only concerned with the positive, low to high, clock
edge. The falling edge can be ignored. It is very easy to see Q follow D at clock time above.
Compare this to the diagram below where the "data in" appears to change with the positive clock
edge.
Since "data in" appears to changes at clock time t1 above, what does the type D FF see at clock
time? The short over simplified answer is that it sees the data that was present at D prior to the
clock. That is what is transfered to Q at clock time t1. The correct waveform is QC. At t1 Q goes
to a zero if it is not already zero. The D register does not see a one until time t2, at which time Q
goes high.
Since data, above, present at D is clocked to Q at clock time, and Q cannot change until the next
clock time, the D FF delays data by one clock period, provided that the data is already
synchronized to the clock. The QA waveform is the same as "data in" with a one clock period
delay.
A more detailed look at what the input of the type D Flip-Flop sees at clock time follows. Refer
to the figure below. Since "data in" appears to changes at clock time (above), we need further
information to determine what the D FF sees. If the "data in" is from another shift register stage,
another same type D FF, we can draw some conclusions based on data sheet information.
Manufacturers of digital logic make available information about their parts in data sheets,
formerly only available in a collection called a data book. Data books are still available, though the manufacturer's web site is the modern source.
The following data was extracted from the CD4006b data sheet for operation at 5VDC, which
serves as an example to illustrate timing.
tS=100ns
tH=60ns
tP=200-400ns typ/max
tS is the setup time, the time data must be present before clock time. In this case data must be
present at D 100ns prior to the clock. Furthermore, the data must be held for hold time tH=60ns
after clock time. These two conditions must be met to reliably clock data from D to Q of the
Flip-Flop.
There is no problem meeting the setup time of 100ns, as the data at D has been there for the whole previous clock period if it comes from another shift register stage. For example, at a clock frequency of 1 MHz the clock period is 1 µs (1000 ns), plenty of time. Data will actually be present for nearly 1000 ns prior to the clock, which is much greater than the minimum required tS of 100ns.
The hold time tH=60ns is met because D connected to Q of another stage cannot change any
faster than the propagation delay of the previous stage tP=200ns. Hold time is met as long as the
propagation delay of the previous D FF is greater than the hold time. Data at D driven by another
stage Q will not change any faster than 200ns for the CD4006b.
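The same reasoning can be written down as a quick numerical check (a sketch using the example figures quoted above; not a substitute for the data sheet):

# Setup/hold sanity check for cascaded D flip-flops (CD4006b example values at 5 VDC).
t_setup = 100e-9        # tS: data must be stable this long before the clock edge
t_hold  = 60e-9         # tH: data must be held this long after the clock edge
t_prop  = 200e-9        # tP: earliest the previous stage's Q can change after the edge
clock_period = 1e-6     # 1 MHz clock -> 1 us period

setup_time_available = clock_period - t_prop   # previous Q settled tP after the last edge
hold_time_available  = t_prop                  # previous Q cannot change sooner than tP after this edge

print("setup ok:", setup_time_available >= t_setup)   # True: ~800 ns available vs 100 ns needed
print("hold ok:",  hold_time_available  >= t_hold)    # True: 200 ns vs 60 ns needed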
To summarize, output Q follows input D at nearly clock time if Flip-Flops are cascaded into a
multi-stage shift register.
Three type D Flip-Flops are cascaded Q to D and the clocks paralleled to form a three stage shift
register above.
Type JK FFs cascaded Q to J, Q' to K with clocks in parallel to yield an alternate form of the
shift register above.
A serial-in/serial-out shift register has a clock input, a data input, and a data output from the last
stage. In general, the other stage outputs are not available. Otherwise, it would be a serial-in, parallel-out shift register.
The waveforms below are applicable to either one of the preceding two versions of the serial-in,
serial-out shift register. The three pairs of arrows show that a three stage shift register
temporarily stores 3-bits of data and delays it by three clock periods from input to output.
At clock time t1 a "data in" of 0 is clocked from D to Q of all three stages. In particular, D of
stage A sees a logic 0, which is clocked to QA where it remains until time t2.
At clock time t2 a "data in" of 1 is clocked from D to QA. At stages B and C, a 0, fed from the preceding stages, is clocked to QB and QC.
At clock time t3 a "data in" of 0 is clocked from D to QA. QA goes low and stays low for the
remaining clocks due to "data in" being 0. QB goes high at t3 due to a 1 from the previous stage.
QC is still low after t3 due to a low from the previous stage.
QC finally goes high at clock t4 due to the high fed to D from the previous stage QB. All earlier stages have 0s shifted into them. And, after the next clock pulse at t5, all logic 1s will have been shifted out, replaced by 0s.
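The t1-t5 walkthrough above can be replayed with a tiny standalone sketch (illustrative only), printing QA, QB and QC after each clock edge:

# Three-stage serial shift: same "data in" sequence as the waveform discussion above.
q = [0, 0, 0]                                  # QA, QB, QC, initially cleared
for t, data_in in enumerate([0, 1, 0, 0, 0], start=1):
    q = [data_in] + q[:-1]                     # one shift per clock edge
    print(f"t{t}: QA={q[0]} QB={q[1]} QC={q[2]}")

The printout shows QA high only at t2, QB high only at t3, and QC high only at t4, after which the single 1 has been shifted out.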
The purpose of the parallel-in/ parallel-out shift register is to take in parallel data, shift it, then
output it as shown below. A universal shift register is a do-everything device in addition to the
parallel-in/ parallel-out function.
Above we apply four bits of data to a parallel-in/ parallel-out shift register at DA DB DC DD. The
mode control, which may be multiple inputs, controls parallel loading vs shifting. The mode
control may also control the direction of shifting in some real devices. The data will be shifted
one bit position for each clock pulse. The shifted data is available at the outputs QA QB QC QD .
The "data in" and "data out" are provided for cascading of multiple stages. Though, above, we
can only cascade data for right shifting. We could accommodate cascading of left-shift data by
adding a pair of left pointing signals, "data in" and "data out", above.
The internal details of a right shifting parallel-in/ parallel-out shift register are shown below. The
tri-state buffers are not strictly necessary to the parallel-in/ parallel-out shift register, but are part
of the real-world device shown below.
The 74LS395 so closely matches our concept of a hypothetical right shifting parallel-in/ parallel-out shift register that we use an overly simplified version of the data sheet details above. See the link to the full data sheet, later in this chapter, for more details.
LD/SH' controls the AND-OR multiplexer at the data input to the FF's. If LD/SH'=1, the upper
four AND gates are enabled allowing application of parallel inputs DA DB DC DD to the four FF
data inputs. Note the inverter bubble at the clock input of the four FFs. This indicates that the
74LS395 clocks data on the negative going clock, which is the high to low transition. The four
bits of data will be clocked in parallel from DA DB DC DD to QA QB QC QD at the next negative
going clock. In this "real part", OC' must be low if the data needs to be available at the actual
output pins as opposed to only on the internal FFs.
The previously loaded data may be shifted right by one bit position if LD/SH'=0 for the
succeeding negative going clock edges. Four clocks would shift the data entirely out of our 4-bit
shift register. The data would be lost unless our device was cascaded from QD' to SER of
another device.
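The load/shift behaviour can be sketched behaviourally (a simplified, hypothetical model of a 74LS395-like right shifter; the function name and arguments are invented for illustration):

# One clock edge of a 4-bit parallel-in/parallel-out right shifter with LD/SH' control.
def clock_pipo(q, ld_sh, d, ser=0):
    if ld_sh:                     # LD/SH' = 1: parallel load DA DB DC DD -> QA QB QC QD
        return list(d)
    return [ser] + q[:-1]         # LD/SH' = 0: shift right, SER enters at QA

q = [0, 0, 0, 0]
q = clock_pipo(q, ld_sh=1, d=[1, 0, 1, 1])   # load 1011
print(q)                                      # [1, 0, 1, 1]
q = clock_pipo(q, ld_sh=0, d=[1, 0, 1, 1])   # then shift right one place
print(q)                                      # [0, 1, 0, 1]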
Right shifting of data is provided above for reference to the previous right shifter.
If we need to shift left, the FFs need to be rewired. Compare to the previous right shifter. Also,
SI and SO have been reversed. SI shifts to QC. QC shifts to QB. QB shifts to QA. QA leaves on
the SO connection, where it could cascade to another shifter SI. This left shift sequence is
backwards from the right shift sequence.
Above we shift the same data pattern left by one bit.
There is one problem with the "shift left" figure above. There is no market for it. Nobody
manufactures a shift-left part. A "real device" which shifts one direction can be wired externally
to shift the other direction. Or, should we say there is no left or right in the context of a device
which shifts in only one direction. However, there is a market for a device which will shift left or
right on command by a control line. Of course, left and right are valid in that context.
What we have above is a hypothetical shift register capable of shifting in either direction under the control of L'/R. It is set up with L'/R=1 to shift the normal direction, right. L'/R=1 enables the multiplexer AND gates labeled R. This allows data to follow the path illustrated by the arrows, when a clock is applied. The connection path is the same as the "too simple" "shift right" figure above.
Data shifts in at SR, to QA, to QB, to QC, where it leaves at SR cascade. This pin could drive SR
of another device to the right.
With L'/R=0, the multiplexer AND gates labeled L are enabled, yielding a path, shown by the
arrows, the same as the above "shift left" figure. Data shifts in at SL, to QC, to QB, to QA, where
it leaves at SL cascade. This pin could drive SL of another device to the left.
The prime virtue of the above two figures illustrating the "shift left/ right register" is simplicity.
The operation of the left/right control L'/R is easy to follow. A commercial part needs the parallel data loading implied by the section title. This appears in the figure below.
Now that we can shift both left and right via L'/R, let us add SH/LD', shift/ load, and the AND
gates labeled "load" to provide for parallel loading of data from inputs DA DB DC. When
SH/LD'=0, AND gates R and L are disabled and AND gates "load" are enabled to pass data DA DB DC to the FF data inputs. The next clock CLK will clock the data to QA QB QC. As long as the
same data is present it will be re-loaded on succeeding clocks. However, data present for only
one clock will be lost from the outputs when it is no longer present on the data inputs. One
solution is to load the data on one clock, then proceed to shift on the next four clocks. This
problem is remedied in the 74ALS299 by the addition of another AND gate to the multiplexer.
If SH/LD' is changed to SH/LD'=1, the AND gates labeled "load" are disabled, allowing the
left/ right control L'/R to set the direction of shift on the L or R AND gates. Shifting is as in the
previous figures.
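The three-way multiplexer in front of each flip-flop can be sketched the same way (a hypothetical model following the SH/LD' and L'/R controls described above, for the 3-bit example QA QB QC):

# One clock edge of a shift-left/right register with parallel load.
def clock_univ(q, sh_ld, l_r, d=None, sr_in=0, sl_in=0):
    if sh_ld == 0:                 # SH/LD' = 0: "load" gates pass DA DB DC to the FFs
        return list(d)
    if l_r == 1:                   # L'/R = 1: shift right, SR -> QA -> QB -> QC
        return [sr_in] + q[:-1]
    return q[1:] + [sl_in]         # L'/R = 0: shift left, SL -> QC -> QB -> QA

q = clock_univ([0, 0, 0], sh_ld=0, l_r=1, d=[1, 0, 0])   # load 100
q = clock_univ(q, sh_ld=1, l_r=1)                        # shift right: [0, 1, 0]
q = clock_univ(q, sh_ld=1, l_r=0)                        # shift left:  [1, 0, 0]
print(q)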
The only thing needed to produce a viable integrated device is to add the fourth AND gate to the multiplexer, as alluded to for the 74ALS299. This is shown in the next section for that part.
A serial-in/parallel-out shift register is similar to the serial-in/ serial-out shift register in that it
shifts data into internal storage elements and shifts data out at the serial-out, data-out, pin. It is
different in that it makes all the internal stages available as outputs. Therefore, a serial-
in/parallel-out shift register converts data from serial format to parallel format. If four data bits
are shifted in by four clock pulses via a single wire at data-in, below, the data becomes available
simultaneously on the four Outputs QA to QD after the fourth clock pulse.
The practical application of the serial-in/parallel-out shift register is to convert data from serial
format on a single wire to parallel format on multiple wires. Perhaps, we will illuminate four
LEDs (Light Emitting Diodes) with the four outputs (QA QB QC QD ).
The above details of the serial-in/parallel-out shift register are fairly simple. It looks like a serial-
in/ serial-out shift register with taps added to each stage output. Serial data shifts in at SI (Serial
Input). After a number of clocks equal to the number of stages, the first data bit in appears at SO
(QD) in the above figure. In general, there is no SO pin. The last stage (QD above) serves as SO
and is cascaded to the next package if it exists.
The shift register has been cleared prior to any data by CLR', an active low signal, which clears
all type D Flip-Flops within the shift register. Note the serial data 1011 pattern presented at the
SI input. This data is synchronized with the clock CLK. This would be the case if it is being
shifted in from something like another shift register, for example, a parallel-in/ serial-out shift
register (not shown here). On the first clock at t1, the data 1 at SI is shifted from D to Q of the
first shift register stage. After t2 this first data bit is at QB. After t3 it is at QC. After t4 it is at QD.
Four clock pulses have shifted the first data bit all the way to the last stage QD. The second data
bit a 0 is at QC after the 4th clock. The third data bit a 1 is at QB. The fourth data bit another 1 is
at QA. Thus, the serial data input pattern 1011 is contained in (QD QC QB QA). It is now available
on the four outputs.
It will be available on the four outputs from just after clock t4 to just before t5. This parallel data
must be used or stored between these two times, or it will be lost due to shifting out the QD stage
on following clocks t5 to t8 as shown above.
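A short illustrative sketch of the serial-to-parallel conversion just described, shifting the pattern 1011 in over four clocks:

# Serial-in/parallel-out: after four clocks the whole word is on QA..QD at once.
q = [0, 0, 0, 0]                      # QA, QB, QC, QD, cleared by CLR' beforehand
for bit in [1, 0, 1, 1]:              # serial data at SI, one bit per clock t1..t4
    q = [bit] + q[:-1]
print("QA QB QC QD =", q)             # [1, 1, 0, 1]; read as (QD QC QB QA) it is 1011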
Ring counters
If the output of a shift register is fed back to the input, a ring counter results. The data pattern
contained within the shift register will recirculate as long as clock pulses are applied. For
example, the data pattern will repeat every four clock pulses in the figure below. However, we
must load a data pattern. All 0's or all 1's doesn't count. Is a continuous logic level from such a
condition useful?
We make provisions for loading data into the parallel-in/ serial-out shift register configured as a
ring counter below. Any random pattern may be loaded. The most generally useful pattern is a
single 1.
Loading binary 1000 into the ring counter, above, prior to shifting yields a viewable pattern. The
data pattern for a single stage repeats every four clock pulses in our 4-stage example. The
waveforms for all four stages look the same, except for the one clock time delay from one stage
to the next. See figure below.
The circuit above is a divide by 4 counter. Comparing the clock input to any one of the outputs,
shows a frequency ratio of 4:1. How many stages would we need for a divide by 10 ring counter?
Ten stages would recirculate the 1 every 10 clock pulses.
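A minimal ring-counter sketch (illustrative) shows the recirculation and the divide-by-N behaviour:

# Ring counter: the last stage's output is fed back to the first stage's input.
def ring_counter(n_stages, n_clocks):
    q = [1] + [0] * (n_stages - 1)    # load a single 1, e.g. 1000 for four stages
    for _ in range(n_clocks):
        print(q)
        q = [q[-1]] + q[:-1]          # feedback path: last stage -> data in

ring_counter(4, 8)                    # the pattern repeats every 4 clocks (divide by 4)
# ring_counter(10, 10) would recirculate the 1 once every 10 clocks (divide by 10)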
An alternate method of initializing the ring counter to 1000 is shown above. The shift waveforms
are identical to those above, repeating every fourth clock pulse. The requirement for initialization
is a disadvantage of the ring counter over a conventional counter. At a minimum, it must be
initialized at power-up since there is no way to predict what state flip-flops will power up in. In
theory, initialization should never be required again. In actual practice, the flip-flops could
eventually be corrupted by noise, destroying the data pattern. A "self correcting" counter, like a
conventional synchronous binary counter would be more reliable.
The above binary synchronous counter needs only two stages, but requires decoder gates. The
ring counter had more stages, but was self decoding, saving the decode gates above. Another
disadvantage of the ring counter is that it is not "self starting". If we need the decoded outputs,
the ring counter looks attractive, in particular, if most of the logic is in a single shift register
package. If not, the conventional binary counter is less complex without the decoder.
The waveforms decoded from the synchronous binary counter are identical to the previous ring
counter waveforms. The counter sequence is (QA QB) = (00 01 10 11).
Arithmetic Micro-operation
The arithmetic micro-operations modify the data during transfer by using basic operations such as addition, subtraction, increment, decrement, and shift.
R3 <- R1 + R2
In the above example, the content of R1 is added to the content of R2 and the result is transferred
to the register R3.
R1 <- R2 - R3
In the above example, the content of R3 is subtracted from the content of R2 and the result will
be transferred to the register R1.
The subtraction operation can also be accomplished by using the complement and add operations.
R1 <- R2 + R3' + 1
In the above example, R3' represents the 1's complement of R3; adding 1 to it produces the 2's complement, so the micro-operation computes R2 - R3 and transfers the result to R1.
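A short worked sketch (4-bit values, illustrative only) shows the complement-and-add method in action:

# Subtraction by complement and add: R1 <- R2 + R3' + 1, on 4-bit registers.
MASK = 0b1111                          # keep results to 4 bits
R2, R3 = 0b1001, 0b0011                # example contents: 9 and 3
ones_complement = R3 ^ MASK            # 1's complement of R3: 0011 -> 1100
R1 = (R2 + ones_complement + 1) & MASK # add the 2's complement of R3
print(format(R1, "04b"))               # 0110, i.e. 6 = 9 - 3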
The increment and decrement operators are implemented by using plus-one and minus-one operations respectively. The following table summarizes the arithmetic micro-operations.
Notation              Meaning
R3 <- R1 + R2         The content of R1 is added to the content of R2 and the result is transferred to R3.
R3 <- R1 - R2         The content of R2 is subtracted from the content of R1 and the result is transferred to R3.
R1 <- R1'             1's complement of the content of R1 (complement each bit).
R2 <- R2' + 1         2's complement of the content of R2 (negate).
R3 <- R1 + R2' + 1    The content of R1 is added to the 2's complement of R2 (i.e. R1 - R2) and the result is transferred to R3.
R1 <- R1 + 1          The content of R1 is incremented by one.
R1 <- R1 - 1          The content of R1 is decremented by one.
Add and shift micro-operations are used to implement the multiplication operation. Subtract and shift micro-operations are used to implement the division operation. The CPU uses combinations of these micro-operations to perform more complex operations.
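As an illustrative sketch (not a particular hardware multiplier), multiplication really can be built from nothing but add and shift micro-operations:

# Unsigned multiplication using only add and shift.
def multiply_add_shift(multiplicand, multiplier, bits=4):
    product = 0
    for i in range(bits):
        if (multiplier >> i) & 1:                     # inspect one multiplier bit per step
            product += multiplicand << i              # add the suitably shifted multiplicand
    return product

print(multiply_add_shift(0b0101, 0b0011))             # 5 * 3 = 15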
Logic micro-operations specify binary operations on the strings of bits stored in registers. Each bit of a register is treated as a separate binary variable; in other words, each individual bit of one register operates with the corresponding bit of the other register. Logical operations such as OR, AND and NAND can be applied, and special symbols are used for representing the OR and AND operators, v and ^ respectively.
Let us consider two registers R1 and R2, each four bits wide. Let register R1 hold 1010 and R2 hold 1100. If the OR micro-operation is performed on these registers and the result is stored in register P, this operation can be represented as:
P <- R1 v R2
which gives P = 1110.
A table can similarly be drawn listing all the logic micro-operations that can be performed on any two registers; here, registers R1 and R2 are used to represent these operations.
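A tiny sketch (illustrative) of how logic micro-operations act bit-by-bit on the register contents used above:

# Logic micro-operations on 4-bit register contents (each bit treated independently).
R1, R2 = 0b1010, 0b1100
print(format(R1 | R2, "04b"))               # OR   (P <- R1 v R2)        -> 1110
print(format(R1 & R2, "04b"))               # AND  (the text's ^ symbol) -> 1000
print(format(~(R1 & R2) & 0b1111, "04b"))   # NAND                       -> 0111
print(format(R1 ^ R2, "04b"))               # XOR                        -> 0110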
Micro operations
1. Register transfer i.e. the transfer of binary information from one register to another.
Register transfer
Sometimes it happens that we only need to transfer data from one register to another. The reason could be prolonged storage or an out-of-memory-like situation. In order to accomplish such tasks we simply transfer data from one register to another. The example given below shows how to transfer data from one register to another.
R1 <- R2
This shows that the data in R2 is transferred to R1.
Arithmetic micro-operations are operations of manipulation, i.e. the data in such operations is modified and then stored. The basic arithmetic micro-operations are ADD, SUBTRACT, MULTIPLY, and DIVIDE. Examples are listed below.
R1 <- R1 + R2
Here, the data in R1 and R2 is first added and then stored in R1.
R1 <- R1 * R2
Here, the data in R1 and R2 is first multiplied and then stored in R1.
R1 <- R1 - R2
Here, the data in R2 is subtracted from the data in R1 and the result is stored in R1.
R1 <- R1 / R2
Here, the data in R1 is divided by the data in R2 and the result is stored in R1.
Shift micro-operations are the operations in which the contents of a register are shifted to the left or right. Shift micro-operations are used for serial transfer of data. They can also be used in conjunction with arithmetic, logic and other data-processing operations. There are three types of shift operations:
· Logical shift
· Circular shift
· Arithmetic shift
Logical shift
A logical shift moves the bits of a register serially to the right or left, with a 0 entering the vacated bit position. Let us denote a logical right shift by shr and a logical left shift by shl. The following examples illustrate the notation:
R1 <- shl R1
R2 <- shr R2
In the first case the contents of R1 are shifted one bit position to the left; in the second, the contents of R2 are shifted one bit position to the right.
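The same logical shifts can be sketched on a fixed 4-bit register width (illustrative; the mask keeps the result to four bits and a 0 enters the vacated position):

# Logical shift micro-operations on a 4-bit register value.
MASK = 0b1111
R1 = 0b1001
print(format((R1 << 1) & MASK, "04b"))   # shl: 1001 -> 0010 (MSB is lost)
R2 = 0b1001
print(format(R2 >> 1, "04b"))            # shr: 1001 -> 0100 (LSB is lost)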
Circular shift
A circular shift, also called a rotate, circulates the bits around the two ends of the register without losing information. We can achieve this by connecting the serial output terminal of the register to its serial input terminal. Like the logical shift, it moves the contents by one bit position at a time. For example:
R <- cir R (circular shift right)
R <- cil R (circular shift left)
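A matching sketch (illustrative) of the circular shifts, where the bit that falls off one end re-enters at the other:

# Circular shift (rotate) micro-operations on a 4-bit register value.
N, MASK = 4, 0b1111

def cir(r):                               # rotate right: LSB wraps to the MSB position
    return (r >> 1) | ((r & 1) << (N - 1))

def cil(r):                               # rotate left: MSB wraps to the LSB position
    return ((r << 1) & MASK) | (r >> (N - 1))

print(format(cir(0b1001), "04b"))         # 1100
print(format(cil(0b1001), "04b"))         # 0011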
Instruction Code
We all know that without our instructions a computer can do nothing. Hence we need to instruct
the computer to perform a specific operation.
1. The collection of bits that instruct the computer to perform a specific operation is called
an instruction code.
2. Operation part is the most basic part of an instruction code. The operation code of an
instruction is a group of bits that define such operations as add, subtract, multiply, shift
and complement.
3. The total number of operations available in the computer determines the number of bits required for the operation code of an instruction. The operation code must consist of at least n bits for 2^n (or fewer) distinct operations.
4. An 'operation' is a binary code that instructs the computer to perform a specific operation.
The control unit gets the instruction from memory and interprets the operation code bits.
It then issues a sequence of control signals to initiate micro-operations in internal
computer registers. For every operation code, the control issues a sequence of micro-
operations required for the hardware implementation of the specified operation.
This operation must be performed on some data stored in processor registers or in memory. Hence an instruction code must specify both the operation and the registers or memory words where the operands are to be found, as well as the register or memory word where the result is to be stored.
Memory words can be specified in instruction codes by their address. Processor registers can be specified by assigning to the instruction another binary code of K bits that specifies one of 2^K registers. There are many variations for arranging the binary code of instructions. Each computer has its own particular instruction code format, called its Instruction Set.
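As a hedged sketch (the 16-bit word and the 4-bit/12-bit split are assumptions chosen only for illustration, not any real machine's format), an instruction code can be separated into its operation code and address fields:

# Illustrative decode of a 16-bit instruction into a 4-bit opcode and a 12-bit address.
OP_BITS, ADDR_BITS = 4, 12

def decode(instruction):
    opcode  = (instruction >> ADDR_BITS) & ((1 << OP_BITS) - 1)
    address = instruction & ((1 << ADDR_BITS) - 1)
    return opcode, address

print(decode(0b0010_000000000101))        # -> (2, 5): operation code 2, operand address 5
print(2 ** OP_BITS, "distinct operations fit in", OP_BITS, "opcode bits")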
Conditional Control Statement
A micro-operation can be made conditional on a status bit. For example,
Z: RD1 <- RA - RB
means the transfer takes place only when the status bit Z is 1. What should the branch control do when Z is 0 but the branch instruction requires Z to be 1 in order to take effect (that is, to modify the PC register)?
One option is to perform the computation but not write the result to the destination register when Z is not asserted; logic driven by the instruction decoder disables the register write-back when appropriate. Alternatively, if the architecture requires that something always be written to the register file, a "write" with no effect can be used: if register R0 always returns 0, anything written to R0 is discarded, so the logic could redirect the destination register to R0.
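A very rough sketch of the write-back idea (a hypothetical datapath with invented names, not a real design): the decoder information and the Z flag together decide whether the computed result reaches the destination register.

# Suppress the register write when a conditional operation's condition is not met.
def writeback(registers, dest, value, is_conditional, z_flag):
    write_enable = (not is_conditional) or z_flag    # decoder info gates the write
    if write_enable and dest != 0:                   # R0 assumed hard-wired to zero
        registers[dest] = value
    return registers

regs = [0, 7, 3, 0]                                  # R0..R3
writeback(regs, dest=3, value=regs[1] - regs[2], is_conditional=True, z_flag=False)
print(regs)                                          # R3 unchanged: Z was not asserted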