Lecture Note 3: Instruction Execution Cycle, Interrupt Cycles and System Interconnection
In the previous note, we studied the von Neumann architecture and the IAS machine. We examined the instruction
execution cycle of the IAS machine and the associated registers (MBR, MAR, PC, AC, etc.) involved in the process
of instruction execution by the CPU.
In this note we will take a general view of the instruction execution cycle by using a state diagram.
The processor reads (fetches) instructions from memory one at a time and executes each instruction. Program
execution consists of repeating the process of instruction fetch and instruction execution. The instruction
execution may involve several operations and depends on the nature of the instruction (see, for example, the
lower portion of Figure 3.3).
The processing required for a single instruction is called an instruction cycle. It consists of two steps, referred to
as the fetch cycle and the execute cycle, as depicted in Figure 3.1.
Figure 3.1 : Basic Instruction Cycle
The Fetch Cycle: Generally, the fetch cycle involves the following steps:
• At the beginning of each instruction cycle, the processor fetches an instruction from memory.
• The program counter (PC) holds the address of the instruction to be fetched next.
• The processor increments the PC after each instruction fetch so that it will fetch the next instruction in sequence.
• The fetched instruction is loaded into the instruction register (IR).
Execute Cycle: The processor interprets the instruction in the IR and performs the required action.
In general, the actions that the processor can perform while interpreting and executing instructions fall into four
categories:
• Processor-memory: Data may be transferred from processor to memory or from memory to processor.
• Processor-I/O: Data may be transferred to or from a peripheral device by transferring between the processor
and an I/O module.
• Data processing: The processor may perform some arithmetic or logic operation on data.
• Control: An instruction may specify that the sequence of execution be altered.
For example, the processor may fetch an instruction from location 149, which specifies that the next instruction
be from location 182. The processor will remember this fact by setting the program counter to 182. Thus, on the
next fetch cycle, the instruction will be fetched from location 182 rather than 150. An instruction’s execution may
involve a combination of these actions.
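To make the fetch-execute loop concrete, here is a minimal sketch in Python. The machine, opcodes, and memory contents are invented for illustration; the point is that the PC is incremented on every fetch and that a control instruction (like the one at location 149 above) simply overwrites the PC.

```python
# Minimal sketch of the fetch-execute cycle (hypothetical machine).
# Each memory word holds an (opcode, operand) pair for simplicity.

memory = {
    149: ("JUMP", 182),   # instruction at 149 alters the sequence of execution
    150: ("HALT", None),
    182: ("LOAD", 300),   # instruction actually fetched after the jump
    183: ("HALT", None),
    300: 42,              # a data word
}

pc = 149                  # program counter: address of next instruction
ac = 0                    # accumulator
running = True

while running:
    ir = memory[pc]       # fetch cycle: load instruction into the IR
    pc += 1               # increment PC to the next instruction in sequence
    opcode, operand = ir  # execute cycle: interpret and perform the action
    if opcode == "LOAD":      # processor-memory transfer
        ac = memory[operand]
    elif opcode == "JUMP":    # control: alter the sequence of execution
        pc = operand
    elif opcode == "HALT":
        running = False

print("AC =", ac)         # prints 42: the jump was taken, then LOAD executed
```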
Figure 3.2 : Action Categories
Instruction Cycle State Diagram
The execution cycle for a particular instruction may involve more than one reference to memory.
Figure 3.3 : Instruction Cycle State Diagram
Also, instead of memory references, an instruction may specify an I/O operation. With these additional
considerations in mind, Figure 3.3 provides a more detailed look at the basic instruction cycle of Figure 3.1. The
figure is in the form of a state diagram.
For any given instruction cycle, some states may be null and others may be visited more than once. The states can
be described as follows:
• Instruction address calculation (iac): Determine the address of the next instruction to be executed. Usually, this
involves adding a fixed number to the address of the previous instruction. For example, if each instruction is 16
bits long and memory is organized into 16-bit words, then add 1 to the previous address. If, instead, memory is
organized as individually addressable 8-bit bytes, then add 2 to the previous address.
• Instruction fetch (if): Read instruction from its memory location into the processor.
• Instruction operation decoding (iod): Analyze instruction to determine type of operation to be performed and
operand(s) to be used.
• Operand address calculation (oac): If the operation involves reference to an operand in memory or available via
I/O, then determine the address of the operand.
• Operand fetch (of): Fetch the operand from memory or read it in from I/O.
• Data operation (do): Perform the operation indicated in the instruction.
• Operand store (os): Write the result into memory or out to I/O.
States in the upper part of Figure 3.3 involve an exchange between the processor and either memory or an I/O
module. States in the lower part of the diagram involve only internal processor operations. The oac state appears
twice, because an instruction may involve a read, a write, or both. However, the action performed during that
state is fundamentally the same in both cases, and so only a single state identifier is needed.
Also note that the diagram allows for multiple operands and multiple results, because some instructions on some
machines require this. For example, the PDP-11 instruction ADD A,B results in the following sequence of states:
iac, if, iod, oac, of, oac, of, do, oac, os.
Finally, on some machines, a single instruction can specify an operation to be performed
on a vector (one-dimensional array) of numbers or a string (one-dimensional array) of characters. As Figure 3.3
indicates, this would involve repetitive operand fetch and/or store operations.
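As a small illustration of how these states compose, the sketch below (a hypothetical helper, not part of any real instruction set definition) lists the state sequence for a memory-to-memory add such as ADD A,B, and for an assumed vector add that repeats the operand fetch and store states once per element.

```python
# Sketch: build the instruction-cycle state sequence for two kinds of add.
# State identifiers follow the text: iac, if, iod, oac, of, do, os.

def add_states():
    """Memory-to-memory ADD A,B: fetch two operands, add, store one result."""
    return ["iac", "if", "iod",
            "oac", "of",      # fetch first operand
            "oac", "of",      # fetch second operand
            "do",
            "oac", "os"]      # store result

def vector_add_states(n):
    """Vector add of n elements: repeat operand fetch/store per element."""
    states = ["iac", "if", "iod"]
    for _ in range(n):
        states += ["oac", "of", "oac", "of", "do", "oac", "os"]
    return states

print(add_states())
print(vector_add_states(3))   # repetitive operand fetch/store operations
```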
Interrupt Cycle
Virtually all computers provide a mechanism by which other modules (I/O, memory) may interrupt the normal
processing of the processor. Table 3.1 lists the most common classes of interrupts. The specific nature of these
interrupts will be examined later in the course. However, we need to introduce the concept now to understand
more clearly the nature of the instruction cycle and the implications of interrupts on the interconnection
structure.
Table 3.1 Classes of Interrupts
Interrupts are provided primarily as a way to improve processing efficiency. For example, most external devices
are much slower than the processor. Suppose that the processor is transferring data to a printer using the
instruction cycle scheme of Figure 3.3. After each write operation, the processor must pause and remain idle until
the printer catches up. The length of this pause may be on the order of many hundreds or even thousands of
instruction cycles that do not involve memory. Clearly, this is a very wasteful use of the processor.
Figure 3.4a illustrates this state of affairs. The user program performs a series of WRITE calls interleaved with
processing. Code segments 1, 2, and 3 refer to sequences of instructions that do not involve I/O. The WRITE calls
are to an I/O program that is a system utility and that will perform the actual I/O operation. The I/O program
consists of three sections:
• A sequence of instructions, labeled 4 in the figure, to prepare for the actual I/O operation. This may include
copying the data to be output into a special buffer and preparing the parameters for a device command.
• The actual I/O command. Without the use of interrupts, once this command is issued, the program must wait
for the I/O device to perform the requested function (or periodically poll the device). The program might wait by
simply repeatedly performing a test operation to determine if the I/O operation is done.
• A sequence of instructions, labeled 5 in the figure, to complete the operation. This may include setting a flag
indicating the success or failure of the operation.
Because the I/O operation may take a relatively long time to complete, the I/O program is hung up waiting for the
operation to complete; hence, the user program is stopped at the point of the WRITE call for some considerable
period of time.
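The cost of this waiting can be seen in a short sketch of programmed (non-interrupt) I/O. The printer interface below is invented for illustration; what matters is the busy-wait loop in which the processor repeatedly tests a status flag and performs no useful work while the device catches up.

```python
import time

# Hypothetical printer interface, for illustration only.
class Printer:
    def __init__(self):
        self.busy_until = 0.0
    def command_write(self, data):           # the actual I/O command
        print("printing:", data)
        self.busy_until = time.time() + 0.1  # the device is slow
    def ready(self):                         # status test
        return time.time() >= self.busy_until

def write_without_interrupts(printer, data):
    # Segment 4: prepare for the I/O operation (copy data to a buffer, etc.)
    buffer = str(data)
    printer.command_write(buffer)
    # Busy-wait: the processor is idle here, repeatedly polling the device.
    while not printer.ready():
        pass
    # Segment 5: complete the operation (e.g., set a success flag).
    return True

printer = Printer()
write_without_interrupts(printer, "page 1")  # processor waits ~0.1 s doing nothing
```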
Figure 3.4 Program Flow of Control Without and With Interrupts
Interrupts and the Instruction Cycle
With interrupts, the processor can be engaged in executing other instructions while an I/O operation is in
progress.
Consider the flow of control in Figure 3.4b. As before, the user program reaches a point at which it makes a
system call in the form of a WRITE call. The I/O program that is invoked in this case consists only of the
preparation code and the actual I/O command. After these few instructions have been executed, control returns
to the user program. Meanwhile, the external device is busy accepting data from computer memory and printing
it. This I/O operation is conducted concurrently with the execution of instructions in the user program.
When the external device becomes ready to be serviced—that is, when it is ready to accept more data from the
processor—the I/O module for that external device sends an interrupt request signal to the processor. The
processor responds by suspending operation of the current program, branching off to a program to service that
particular I/O device, known as an interrupt handler, and resuming the original execution after the device is
serviced. The points at which such interrupts occur are indicated by an asterisk in Figure 3.4b.
Let us try to clarify what is happening in Figure 3.4. We assume we have a user program that contains two WRITE
commands (Figure 3.5). There is a segment of code at the beginning, then one WRITE command, then a second
segment of code, then a second WRITE command, then a third and final segment of code. The WRITE command
invokes the I/O program provided by the OS. Similarly, the I/O program consists of a segment of code, followed by
an I/O command, followed by another segment of code. The I/O command invokes a hardware I/O operation.
Figure 3.5 Program with WRITE call
From the point of view of the user program, an interrupt is just that: an interruption of the normal sequence of
execution. When the interrupt processing is completed, execution resumes (Figure 3.6). Thus, the user program
does not have to contain any special code to accommodate interrupts; the processor and the operating system
are responsible for suspending the user program and then resuming it at the same point.
Figure 3.6 Transfer of Control via Interrupts
Instruction Cycle with Interrupts
Figure 3.7 Instruction Cycle with Interrupts
To accommodate interrupts, an interrupt cycle is added to the instruction cycle, as shown in Figure 3.7. In the
interrupt cycle, the processor checks to see if any interrupts have occurred, indicated by the presence of an
interrupt signal. If no interrupts are pending, the processor proceeds to the fetch cycle and fetches the next
instruction of the current program. If an interrupt is pending, the processor does the following:
• It suspends execution of the current program being executed and saves its context. This means saving the
address of the next instruction to be executed (current contents of the program counter) and any other data
relevant to the processor’s current activity.
• It sets the program counter to the starting address of an interrupt handler routine. The processor now proceeds
to the fetch cycle and fetches the first instruction in the interrupt handler program, which will service the
interrupt. The interrupt handler program is generally part of the operating system. Typically, this program
determines the nature of the interrupt and performs whatever actions are needed.
In the example we have been using, the handler determines which I/O module generated the interrupt and may
branch to a program that will write more data out to that I/O module. When the interrupt handler routine is
completed, the processor can resume execution of the user program at the point of interruption.
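A minimal sketch of this interrupt cycle is given below. The handler address, the shape of the saved context (just the PC here), and the way the interrupt request is raised are all simplifying assumptions for illustration.

```python
# Sketch: instruction cycle with an interrupt cycle appended (hypothetical machine).

HANDLER_ADDRESS = 1000          # assumed starting address of the interrupt handler

interrupt_pending = False       # set by an I/O module (simulated below)
saved_context = None            # minimal context: just the PC in this sketch
pc = 0

def check_for_interrupt():
    """Interrupt cycle: if an interrupt is pending, save context and branch."""
    global pc, saved_context, interrupt_pending
    if interrupt_pending:
        saved_context = pc               # save address of next instruction
        pc = HANDLER_ADDRESS             # next fetch comes from the handler
        interrupt_pending = False

def return_from_interrupt():
    """Executed at the end of the handler: resume the interrupted program."""
    global pc, saved_context
    pc = saved_context

# One pass through the cycle:
pc = 57                         # user program is about to fetch instruction 57
interrupt_pending = True        # an I/O module raises an interrupt request
check_for_interrupt()
print(pc)                       # 1000: the handler's first instruction is fetched next
return_from_interrupt()
print(pc)                       # 57: the user program resumes where it left off
```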
It is clear that there is some overhead involved in this process. Extra instructions must be executed (in the
interrupt handler) to determine the nature of the interrupt and to decide on the appropriate action.
Nevertheless, because of the relatively large amount of time that would be wasted by simply waiting on an I/O
operation, the processor can be employed much more efficiently with the use of interrupts.
Assignment: Servicing interrupts requires extra overhead, such as saving the program context and executing the
interrupt handler program, all of which takes processor time. At the same time, not using interrupts leaves the
processor idle during I/O operations. Why does the use of interrupts make the processor work more efficiently?
Figure 3.8 shows a revised instruction cycle state diagram that includes interrupt cycle processing.
Figure 3.8: Instruction Cycle State Diagram with Interrupts
I/O Module, CPU and Memory Interconnections
A computer consists of a set of components or modules of three basic types (processor, memory, I/O) that
communicate with each other. In effect, a computer is a network of basic modules. Thus, there must be paths for
connecting the modules.
The collection of paths connecting the various modules is called the interconnection structure. The design of this
structure will depend on the exchanges that must be made among modules. Figure 3.9 suggests the types of
exchanges that are needed by indicating the major forms of input and output for each module type:
• Memory: Typically, a memory module will consist of N words of equal length. Each word is assigned a unique
numerical address (0, 1, …, N - 1). A word of data can be read from or written into the memory. The nature of the
operation is indicated by read and write control signals. The location for the operation is specified by an address.
• I/O module: From an internal (to the computer system) point of view, I/O is functionally similar to memory.
There are two operations, read and write. Further, an I/O module may control more than one external device.
We can refer to each of the interfaces to an external device as a port and give each a unique address (e.g., 0, 1, …,
M - 1). In addition, there are external data paths for the input and output of data with an external device. Finally,
an I/O module may be able to send interrupt signals to the processor.
• Processor: The processor reads in instructions and data, writes out data after processing, and uses control
signals to control the overall operation of the system. It also receives interrupt signals.
The preceding list defines the data to be exchanged (i.e., address, data, and control signals). The interconnection
structure must support the following types of transfers:
• Memory to processor: The processor reads an instruction or a unit of data from memory.
• Processor to memory: The processor writes a unit of data to memory.
• I/O to processor: The processor reads data from an I/O device via an I/O module.
• Processor to I/O: The processor sends data to the I/O device.
• I/O to or from memory: For these two cases, an I/O module is allowed to exchange data directly with memory,
without going through the processor, using direct memory access.
Over the years, a number of interconnection structures have been tried. By far the most common are:
(1) the bus and various multiple-bus structures, and
(2) point-to-point interconnection structures with packetized data transfer.
The Bus
The bus was the dominant means of computer system component interconnection for decades. For general-
purpose computers, it has gradually given way to various point-to-point interconnection structures, which now
dominate computer system design. However, bus structures are still commonly used for embedded systems,
particularly microcontrollers. In this section, we give a brief overview of bus structure.
A bus is a communication pathway connecting two or more devices. A key characteristic of a bus is that it is a
shared transmission medium. Multiple devices connect to the bus, and a signal transmitted by any one device is
available for reception by all other devices attached to the bus. If two devices transmit during the same time
period, their signals will overlap and become garbled. Thus, only one device at a time can successfully transmit.
Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of transmitting signals
representing binary 1 and binary 0. Over time, a sequence of binary digits can be transmitted across a single line.
Taken together, several lines of a bus can be used to transmit binary digits simultaneously (in parallel). For
example, an 8-bit unit of data can be transmitted over eight bus lines.
Computer systems contain a number of different buses that provide pathways between components at various
levels of the computer system hierarchy. A bus that connects major computer components (processor, memory,
I/O) is called a system bus. The most common computer interconnection structures are based on the use of one
or more system buses.
A system bus typically consists of from about fifty to hundreds of separate lines.
The data lines provide a path for moving data among system modules. These lines, collectively, are called the
data bus. The data bus may consist of 32, 64, 128, or even more separate lines, the number of lines being
referred to as the width of the data bus. Because each line can carry only 1 bit at a time, the number of lines
determines how many bits can be transferred at a time. The width of the data bus is a key factor in determining
overall system performance. For example, if the data bus is 32 bits wide and each instruction is 64 bits long, then
the processor must access the memory module twice during each instruction cycle.
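The example above amounts to a one-line calculation; the sketch below simply restates it in code (the function name is ours, not a standard one).

```python
import math

def accesses_per_instruction(instruction_bits, data_bus_width_bits):
    """Number of bus transfers needed to fetch one instruction."""
    return math.ceil(instruction_bits / data_bus_width_bits)

print(accesses_per_instruction(64, 32))   # 2: the memory module is accessed twice
print(accesses_per_instruction(32, 32))   # 1
```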
The address lines are used to designate the source or destination of the data on the data bus. For example, if the
processor wishes to read a word (8, 16, or 32 bits) of data from memory, it puts the address of the desired word
on the address lines.
Clearly, the width of the address bus determines the maximum possible memory capacity of the system.
Furthermore, the address lines are generally also used to address I/O ports. Typically, the higher-order bits are
used to select a particular module on the bus, and the lower-order bits select a memory location or I/O port
within the module. For example, on an 8-bit address bus, addresses 01111111 and below might reference locations
in a memory module (module 0) with 128 words of memory, and addresses 10000000 and above might refer to
devices attached to an I/O module (module 1).
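The address-line arithmetic can be sketched the same way: an n-bit address bus can distinguish 2^n locations, and in the 8-bit example the high-order bit selects the module while the remaining bits select the word or port within it. The split below mirrors that example under the stated assumptions (8-bit addresses, one module-select bit).

```python
def max_addressable_locations(address_bus_width_bits):
    """An n-bit address bus can address 2**n distinct locations."""
    return 2 ** address_bus_width_bits

def decode(address, bus_width=8, module_select_bits=1):
    """Split an address into (module, offset): high-order bits select the module."""
    offset_bits = bus_width - module_select_bits
    module = address >> offset_bits
    offset = address & ((1 << offset_bits) - 1)
    return module, offset

print(max_addressable_locations(8))    # 256 locations on an 8-bit address bus
print(decode(0b01111111))              # (0, 127): memory module, word 127
print(decode(0b10000000))              # (1, 0): I/O module, port 0
```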
The control lines are used to control the access to and the use of the data and address lines. Because the data
and address lines are shared by all components, there must be a means of controlling their use. Control signals
transmit both command and timing information among system modules. Timing signals indicate the validity of
data and address information. Command signals specify operations to be performed. Typical control lines include:
• Memory write: causes data on the bus to be written into the addressed location
• Memory read: causes data from the addressed location to be placed on the bus
• I/O write: causes data on the bus to be output to the addressed I/O port
• I/O read: causes data from the addressed I/O port to be placed on the bus
• Transfer ACK: indicates that data have been accepted from or placed on the bus
• Bus request: indicates that a module needs to gain control of the bus
• Bus grant: indicates that a requesting module has been granted control of the bus
• Interrupt request: indicates that an interrupt is pending
• Interrupt ACK: acknowledges that the pending interrupt has been recognized
• Clock: is used to synchronize operations
• Reset: initializes all modules
Each line is assigned a particular meaning or function. Although there are many different bus designs, on any bus
the lines can be classified into three functional groups (Figure 3.10): data, address, and control lines. In addition,
there may be power distribution lines that supply power to the attached modules.
Figure 3.10 Bus Interconnection Scheme
The operation of the bus is as follows. If one module wishes to send data to another, it must do two things:
(1) obtain the use of the bus, and (2) transfer data via the bus.
If one module wishes to request data from another module, it must (1) obtain the use of the bus, and (2) transfer
a request to the other module over the appropriate control and address lines. It must then wait for that second
module to send the data.
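The two steps can be sketched as a toy exchange between a processor and a memory module. The class names and the trivial arbiter below are assumptions for illustration; a real bus would also carry timing signals indicating when the address and data on the lines are valid.

```python
# Sketch: one module reading a word from another over a shared bus.
# Control lines follow the list above: bus request, bus grant,
# memory read, transfer ACK.

class Bus:
    def __init__(self):
        self.owner = None
        self.address_lines = None
        self.data_lines = None

    def request(self, module):           # Bus request / Bus grant
        if self.owner is None:           # trivial arbiter: grant if the bus is free
            self.owner = module
            return True
        return False

    def release(self, module):
        if self.owner is module:
            self.owner = None

class Memory:
    def __init__(self, contents):
        self.contents = contents
    def memory_read(self, bus):          # Memory read + Transfer ACK
        bus.data_lines = self.contents[bus.address_lines]
        return True                      # ACK: data has been placed on the bus

bus = Bus()
memory = Memory({0x2000: 99})

# (1) obtain the use of the bus
assert bus.request("processor")
# (2) transfer the request over the address and control lines, then wait for data
bus.address_lines = 0x2000
ack = memory.memory_read(bus)
value = bus.data_lines if ack else None
bus.release("processor")
print(hex(0x2000), "->", value)          # 0x2000 -> 99
```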
Point-to-point Interconnect
The shared bus architecture was the standard approach to interconnection between the processor and other
components (memory, I/O, and so on) for decades. But contemporary systems increasingly rely on point-to-point
interconnection rather than shared buses.
The principal reason driving the change from bus to point-to-point interconnect was the electrical constraints
encountered with increasing the frequency of wide synchronous buses. At higher and higher data rates, it
becomes increasingly difficult to perform the synchronization and arbitration functions in a timely fashion.
Further, with the advent of multi-core chips, with multiple processors and significant memory on a single chip, it
was found that the use of a conventional shared bus on the same chip magnified the difficulties of increasing bus
data rate and reducing bus latency to keep up with the processors. In comparison to the shared bus, a point-to-point
interconnect has lower latency, a higher data rate, and better scalability.
An important example of the point-to-point interconnect approach is Intel’s QuickPath Interconnect (QPI), which
was introduced in 2008.
The following are significant characteristics of QPI and other point-to-point interconnect schemes:
• Multiple direct connections: Multiple components within the system enjoy direct pairwise connections to other
components. This eliminates the need for arbitration found in shared transmission systems.
• Layered protocol architecture: As found in network environments, such as TCP/IP-based data networks, these
processor-level interconnects use a layered protocol architecture, rather than the simple use of control signals
found in shared bus arrangements.
• Packetized data transfer: Data are not sent as a raw bit stream. Rather, data are sent as a sequence of packets,
each of which includes control headers and error control codes.
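A minimal sketch of packetized transfer follows. The packet layout here is invented for illustration and is not the actual QPI or PCIe packet format; it only shows the idea of a control header plus an error control code computed over the payload, so the receiver can detect corruption.

```python
import zlib

# Sketch of packetized data transfer: invented packet layout, NOT the real
# QPI/PCIe format. Each packet = header (sequence number, length) + payload
# + an error control code (CRC-32 over the payload).

def make_packet(seq, payload: bytes) -> bytes:
    header = seq.to_bytes(2, "big") + len(payload).to_bytes(2, "big")
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return header + payload + crc

def check_packet(packet: bytes):
    seq = int.from_bytes(packet[0:2], "big")
    length = int.from_bytes(packet[2:4], "big")
    payload = packet[4:4 + length]
    crc = int.from_bytes(packet[4 + length:8 + length], "big")
    ok = zlib.crc32(payload) == crc
    return seq, payload, ok

pkt = make_packet(7, b"hello")
print(check_packet(pkt))                     # (7, b'hello', True)
corrupted = pkt[:4] + b"jello" + pkt[9:]     # payload corrupted in transit
print(check_packet(corrupted))               # (7, b'jello', False)
```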
PCI to PCIe
The peripheral component interconnect (PCI) is a popular high-bandwidth, processor-independent bus that can
function as a mezzanine or peripheral bus. Compared with other common bus specifications, PCI delivers better
system performance for high-speed I/O subsystems (e.g., graphic display adapters, network interface controllers,
and disk controllers).
Intel began work on PCI in 1990 for its Pentium-based systems. Intel soon released all the patents to the public
domain and promoted the creation of an industry association, the PCI Special Interest Group (SIG), to develop
further and maintain the compatibility of the PCI specifications. The result is that PCI has been
widely adopted and is finding increasing use in personal computer, workstation, and server systems. Because the
specification is in the public domain and is supported by a broad cross section of the microprocessor and
peripheral industry, PCI products built by different vendors are compatible.
As with the system bus discussed in the preceding sections, the bus-based PCI scheme has not been able to keep
pace with the data rate demands of attached devices. Accordingly, a new version, known as PCI Express (PCIe),
has been developed.
PCIe, as with QPI, is a point-to-point interconnect scheme intended to replace bus-based schemes such as PCI.
A key requirement for PCIe is high capacity to support the needs of higher data rate I/O devices, such as Gigabit
Ethernet. Another requirement deals with the need to support time-dependent data streams. Applications such
as video-on-demand and audio redistribution are putting real-time constraints on servers too. Many
communications applications and embedded PC control systems also process data in real-time.
Today’s platforms must also deal with multiple concurrent transfers at ever-increasing data rates. It is no longer
acceptable to treat all data as equal—it is more important, for example, to process streaming data first since late
real-time data is as useless as no data. Data needs to be tagged so that an I/O system can prioritize its flow
throughout the platform.
Review Questions
1. What is the primary purpose of the instruction cycle?
a) Increase memory capacity b) Perform I/O operations c) Execute program instructions d) Manage interrupts
2. The instruction cycle consists of:
a) Fetch and decode b) Fetch and execute c) Decode and store d) Execute and write
3. Which register holds the address of the next instruction?
a) MBR b) MAR c) AC d) PC
4. After fetching an instruction, it is loaded into the:
a) MAR b) PC c) IR d) MBR
5. The processor performs which of the following during instruction execution?
a) Formatting the memory b) Encrypting data c) Data processing d) Reading keyboard input only
6. Which state calculates the address of the next instruction?
a) if b) iac c) iod d) do
7. The ‘do’ state in the instruction cycle refers to:
a) Operand storage b) Data operation c) Instruction decoding d) Address calculation
8. Interrupts are mainly used to:
a) Slow down CPU b) Reduce memory c) Improve processing efficiency d) Increase I/O delay
9. What does the interrupt handler do?
a) Skips the instruction b) Handles memory allocation c) Services the I/O device d) Powers down the processor
10. Which of the following best describes the role of the interrupt cycle?
a) Halts the processor permanently b) Clears the instruction queue c) Checks and handles pending interrupts d) Deletes program code
11. In the absence of interrupts, I/O processing is:
a) More efficient b) Slower and wasteful c) Unaffected d) Secure
12. During interrupt servicing, the processor must:
a) Skip the current instruction b) Save program context c) Clear the system bus d) Increase bus speed
13. A system bus typically consists of:
a) Cache, CPU, and I/O b) Data, address, and control lines c) ROM, RAM, and SSD d) ALU, IR, and MAR
14. Which line is used to write data to memory?
a) Memory read b) I/O read c) Memory write d) Transfer ACK
15. The address bus width determines:
a) Data rate b) Clock speed c) Power usage d) Maximum memory capacity
16. A shared bus allows:
a) Simultaneous transmissions b) Data corruption c) One device to transmit at a time d) No transmission
17. An I/O module can send ______ to the processor.
a) Cache flush b) Clock signal c) Interrupt signal d) Power signal
18. The main limitation of shared bus structures is:
a) Power consumption b) Limited scalability at high frequencies c) Software complexity d) Physical size
19. Point-to-point interconnect provides:
a) More latency b) Reduced data rates c) Lower latency and better scalability d) Unreliable communication
20. PCIe is a replacement for:
a) RAM b) PCI c) SATA d) ROM
21. PCI uses a ______ bus scheme.
a) Wireless b) Point-to-point c) Bus-based d) Serial-only
22. What does the I/O write signal do?
a) Reads data from I/O b) Writes data to I/O c) Shuts down the I/O d) Initializes I/O module
23. What component controls the bus?
a) Memory b) Processor c) Any attached device d) ALU
24. What is the main advantage of interrupts?
a) Security enhancement b) Simpler code c) Reduced power usage d) Processor efficiency
25. The instruction fetch phase involves:
a) Reading data from I/O b) Fetching operand c) Reading instruction from memory d) Storing results
26. An instruction like “ADD A, B” involves how many operand fetch states?
a) 0 b) 1 c) 2 d) 3
27. Vector instructions require:
a) Fewer operations b) No memory access c) Repetitive operand fetch/store d) Only fetch cycle
28. What is the purpose of Transfer ACK?
a) Interrupt the CPU b) Confirm data transfer c) Clear the address line d) Set PC to zero
29. A bus is:
a) A single dedicated line b) An exclusive memory unit c) A communication pathway d) Part of the ALU
30. Which bus line controls access to shared resources?
a) Data line b) Address line c) Control line d) Power line