Unit 3
Basic Computer Organization and Central Processing Unit
PART I (BASIC COMPUTER ORGANIZATION)
What is an Instruction?
An instruction is a command given to the computer’s processor (CPU) to
perform a specific task.
These tasks can include:
Adding two numbers
Moving data from one place to another
Comparing values
Making decisions (like if-else conditions)
Think of instructions as orders given to the CPU to perform actions during
program execution.
Structure of an Instruction
An instruction typically consists of two main parts:
Opcode (Operation Code): Tells the CPU what action to perform (like add, move, compare).
Operand(s): Tells the CPU on what or where to perform the operation (i.e., the data or the location of data).
Example:
ADD R1, R2
Opcode: ADD (means addition)
Operands: R1 and R2 (registers containing data to be added)
What is an Instruction Code?
An Instruction Code is the binary representation of an instruction that the
CPU understands directly.
Computers only understand binary (0s and 1s). So, each instruction in
assembly language (like ADD, SUB, MOV) has a corresponding binary
instruction code.
Example of Instruction Code
Suppose we have this instruction in assembly:
ADD A, B
This might be translated into a binary instruction like:
0001 1000
Here:
0001 → is the Opcode for ADD
1000 → is the Operand Code indicating registers A and B
Note: Actual codes depend on the CPU design.
How Instruction Codes Work (Step-by-step):
1. The program gives an instruction (like ADD R1, R2).
2. The instruction is translated into machine code (binary).
3. The CPU reads the opcode part to know the operation.
4. The CPU reads the operand(s) part to get data or addresses.
5. CPU performs the operation and stores the result.
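To make steps 2 to 4 concrete, here is a minimal Python sketch that decodes the toy 8-bit format used in the example above (a hypothetical 4-bit opcode followed by a 4-bit operand field; the opcode table is made up for illustration, and real CPUs define their own encodings):

# Toy decoder: hypothetical 8-bit instruction = 4-bit opcode + 4-bit operand field.
OPCODES = {0b0001: "ADD", 0b0010: "SUB", 0b0011: "MOV"}   # assumed encoding, illustration only

def decode(instruction):
    opcode = (instruction >> 4) & 0b1111      # upper 4 bits tell the CPU what to do
    operand = instruction & 0b1111            # lower 4 bits say what to do it on
    return OPCODES.get(opcode, "UNKNOWN"), operand

print(decode(0b00011000))                     # -> ('ADD', 8), matching the 0001 1000 example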
Types of Instructions
Instructions are generally classified into the following categories:
Data Transfer Instructions: Move data from one place to another. Example: MOV A, B
Arithmetic Instructions: Perform mathematical operations. Example: ADD A, B
Logical Instructions: Perform logic operations (AND, OR, NOT). Example: AND A, B
Control Instructions: Change the sequence of execution. Example: JMP, CALL
Input/Output Instructions: Deal with input and output devices. Example: IN, OUT
Types of Instruction Codes
Zero Address Instructions
These instructions do not include any explicit operands or memory addresses.
Instead, they work on data that is implicitly stored in predefined locations, such
as the top elements of a stack. For instance, a zero-address instruction might
perform an addition using the top two values from the stack without mentioning
their specific locations.
One Address Instructions
These instructions include a single operand, usually referring to a memory
location or a register. The operation is performed using this operand and an
implied accumulator, which serves as the default location for storing
intermediate results. The result can be saved back into the accumulator or
another specified location. Since the CPU assumes one of the operands is
already in the accumulator, there's no need to explicitly mention it in the
instruction.
Two-Address Instructions
These instructions involve two operands or addresses, which can be either
memory locations or registers. The operation is performed using the contents of
both operands, and the result can be saved in either the same or a different
location. For instance, a two-address instruction may add the values of two
registers and store the result in one of them.
This format is frequently used in commercial computer systems. Unlike one-
address instructions—where the result is typically stored in an accumulator—
two-address instructions allow the result to be stored in various locations.
However, they require more bits in the instruction format to represent both
addresses.
Three-Address Instructions
These instructions involve three operands or addresses, which can refer to either
memory locations or registers. The operation is carried out using the contents of
all three operands, and the result can be saved in the same or a different
location. For example, a three-address instruction might multiply the values of
two registers, add the value of a third register, and store the result in a fourth
register.
This format includes three address fields to identify registers or memory
locations. Although programs written using this instruction type are generally
shorter in size, each instruction occupies more bits due to the extra address
fields. These instructions simplify program development; however, this doesn’t
necessarily translate into faster execution. Each instruction still executes one
micro-operation per cycle, such as updating a register or loading an address
onto the address bus.
Advantages
Zero-Address Instructions
Ideal for Stack-Based Systems: These instructions are widely used in
stack-based architectures, utilizing the top elements of the stack
implicitly.
Simplified Instruction Set: Leads to a more straightforward CPU design,
potentially increasing system reliability.
Lower Decoding Complexity: Fewer bits are needed for operand specification, simplifying the decoding process.
Efficient for Nested Calculations: Useful for recursive or nested functions, which are prevalent in function calls and mathematical operations.
Supports Compiler Optimizations: The stack-oriented nature of these
systems allows optimization techniques to reorder operations more
effectively.
One-Address Instructions
Moderate Complexity: Offers a balance between flexibility and
simplicity—easier to implement than multi-address instructions while
being more capable than zero-address formats.
Simplified Operand Management: Only one explicit operand reduces
the complexity of operand fetching compared to two- or three-address
types.
Use of Implicit Accumulator: Typically leverages an accumulator
register, accelerating some operations and simplifying hardware design.
Compact Code: Smaller instruction size compared to multi-address
instructions leads to better memory and cache usage.
Versatile Addressing Modes: Can support different addressing schemes
(direct, indirect, indexed), improving flexibility without adding much
overhead.
Two-Address Instructions
Higher Efficiency: Enables direct operations on memory or registers,
reducing the number of instructions needed for specific tasks.
More Operand Flexibility: Offers diverse operand combinations and
supports varied addressing modes.
Supports Intermediate Result Storage: Facilitates efficient handling of
temporary data during complex computations.
Better Code Clarity: The structure of two-address instructions often
results in more understandable and maintainable code.
Performance Gains: Reduces memory access in certain scenarios,
enhancing overall performance.
Three-Address Instructions
Direct Expression Evaluation: Enables complex expressions to be
computed directly without needing temporary variables.
Enables Parallel Execution: Multiple operands can be accessed
simultaneously, aiding instruction-level parallelism.
Improves Compiler Optimization: Allows advanced compiler
techniques like instruction scheduling and reordering to boost runtime
efficiency.
Fewer Instructions Needed: Though individual instructions are larger,
the total number of instructions may be reduced for complex operations.
Better Pipeline Utilization: Richer instruction detail improves how well
the CPU pipeline is used, increasing throughput.
Efficient Register Usage: Can manipulate multiple registers in one
instruction, improving register allocation.
Disadvantages
Zero-Address Instructions
Reliance on Stack: The heavy dependency on stack operations can lead
to inefficiencies compared to register-based systems.
Stack Operation Overhead: Frequent use of push and pop may degrade
performance.
Limited Addressing Flexibility: Makes working with complex data
structures or direct memory access more difficult.
Harder to Optimize: Implied operand handling in stacks complicates
code optimization.
Difficult to Debug: Stack-based execution flows are generally harder to
trace and troubleshoot than register-based ones.
One-Address Instructions
Accumulator Limitations: The use of a single accumulator can become
a bottleneck, limiting efficiency and parallelism.
Higher Instruction Count: Complex tasks may need multiple
instructions, increasing program size.
Less Optimal Operand Access: Addressing only one explicit operand
can create inefficiencies in data access.
Complexity in Addressing Modes: Supporting multiple addressing types
can complicate decoding and instruction design.
Data Movement Overhead: Moving data between memory and the
accumulator often requires additional instructions, adding to execution
time.
Two-Address Instructions
Operand Overwriting Issue: One operand often holds the result,
overwriting the original data and sometimes requiring extra instructions.
Larger Instruction Format: Requires more bits, increasing memory
usage compared to simpler instruction formats.
Handling Temporary Results: Intermediate values must be managed
carefully, potentially complicating programming.
Decoding Complexity: The need to decode two addresses increases CPU
design complexity.
Not Always Efficient: Some operations still need multiple instructions,
reducing potential gains in specific cases.
Three-Address Instructions
Largest Memory Footprint: Consumes the most memory per
instruction, potentially impacting cache and memory efficiency.
Complex Instruction Decoding: Decoding three addresses adds to CPU
complexity and can influence performance and power usage.
Slower Operand Fetching: Retrieving three operands may slow down
instruction execution.
Higher Hardware Demands: Requires more advanced hardware,
increasing development cost and power usage.
Greater Power Consumption: More complex operations and memory
activity increase power needs, which is a drawback for battery-powered
devices.
CPU Registers
CPU registers are ultra-fast memory elements that are vital for the smooth and
efficient execution of programs, allowing immediate access to frequently used
data during processing. They are central to tasks such as data handling, memory
referencing, and monitoring the CPU’s operational state. Although accessing
data from RAM is significantly quicker than from storage devices like hard
drives, it still doesn’t meet the speed requirements of the CPU—hence, registers
are used to provide even faster data access. Working alongside the CPU’s
memory system, registers help optimize overall performance. Cache memory
serves as the next fastest storage tier but remains slower than registers. Different
types of CPU registers, including general-purpose, status, and control registers,
are designed to manage specific functions, contributing to the efficient and
seamless operation of the processor.
Types of CPU Registers
The CPU contains various registers, each designed for specific tasks. Below is a
breakdown of the primary types of CPU registers:
Accumulator:
This is one of the most commonly used registers. It stores data retrieved
from memory and is essential for performing arithmetic and logic
operations. The number of accumulators varies across different
microprocessor designs.
Memory Address Register (MAR):
MAR holds the memory address of the location the CPU intends to read
from or write to. Together with the Memory Data Register (MDR), it
facilitates data transfer between the CPU and main memory.
Memory Data Register (MDR):
This register temporarily holds the data being transferred to or from a
memory address specified by the MAR. It either stores data fetched from
memory or data ready to be written back.
General Purpose Registers (GPRs):
These registers—commonly labeled R0, R1, R2, ..., Rn—are used for
storing temporary values during program execution. They can be
accessed directly using assembly language. Modern CPUs often include
more GPRs to allow efficient register-to-register operations, which are
faster than memory-based access methods.
Program Counter (PC):
The PC tracks the current position of program execution by storing the
address of the next instruction to be executed. After each instruction is
fetched, the PC updates to point to the subsequent one. In a 32-bit system,
for example, the PC typically increments by 4 with each instruction cycle.
Instruction Register (IR):
Once the CPU fetches an instruction from the address stored in the PC, it
is placed into the IR. This register holds the instruction currently being
decoded and executed.
Stack Pointer (SP):
The SP holds the address of the top of the stack—a memory structure
used for storing return addresses, local variables, and control information
during function calls and other operations.
Flag Register:
Also known as the status or condition code register, this register reflects
the outcome of various CPU operations. It contains several individual
flags such as:
o Zero Flag (Z): Set if the result of an operation is zero.
o Carry Flag (C): Set if an addition generates a carry or subtraction
causes a borrow.
o Sign Flag (S/N): Indicates if the result of a signed operation is
negative.
o Overflow Flag (V): Set if the result of a signed operation exceeds
the allowed range.
o Parity Flag (P): Set based on whether the number of 1s in the
result is even or odd.
o Auxiliary Carry Flag (AC): Useful in binary-coded decimal
operations.
o Interrupt Enable Flag (IE): Controls the acceptance of interrupts
by the CPU.
Condition Code Register (CCR):
This is another name for a flag/status register that holds individual bits
representing the outcome of recent operations. Some of the typical flags
include:
o Carry (C): Indicates a carry out or borrow.
o Overflow (V): Relevant in signed operations.
o Zero (Z): Set when the result equals zero.
o Negative (N): Set when the result is negative in signed calculations.
o Extend (X): Used in extended precision arithmetic operations.
These flags are usually set by the Arithmetic Logic Unit (ALU).
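As a rough illustration of how such flags can be derived (not tied to any particular processor), the Python sketch below computes the Zero, Negative, Carry and Overflow flags for an 8-bit addition:

# Illustrative flag computation for an 8-bit addition (two's-complement signs assumed).
def add_with_flags(a, b):
    result = (a + b) & 0xFF                        # keep only 8 bits
    flags = {
        "Z": result == 0,                          # Zero: result is zero
        "N": (result & 0x80) != 0,                 # Negative/Sign: most significant bit is 1
        "C": (a + b) > 0xFF,                       # Carry: carry out of bit 7
        "V": ((a ^ result) & (b ^ result) & 0x80) != 0,  # Overflow: signed range exceeded
    }
    return result, flags

print(add_with_flags(0x7F, 0x01))                  # 127 + 1 overflows in 8-bit signed arithmetic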
CPU Register Sizes
The number and size of CPU registers are determined by the processor's
architecture and directly influence the system's performance and capabilities.
Below is a breakdown of the various register sizes found in CPUs:
8-bit Registers:
These can store 8 bits (1 byte) of data. They're typically used for basic
data handling and arithmetic operations.
16-bit Registers:
These hold 16 bits (2 bytes) of data and are commonly found in older
processors or systems that perform 16-bit operations.
32-bit Registers:
Capable of storing 32 bits (4 bytes), these registers are widely used in
standard processors, allowing them to handle more data and perform
more advanced calculations than 8- or 16-bit registers.
64-bit Registers:
These registers store 64 bits (8 bytes) of data and are standard in most
modern CPUs. They provide greater processing power and enhanced
memory addressing capabilities.
Most of today’s computers are either 32-bit or 64-bit systems, referring to the
width of their registers and the amount of data the processor can manage in a
single operation.
128-bit and 256-bit Registers:
These are found in specialized processors designed for tasks such as
vector processing, cryptography, and parallel computing, where large
data sets or high-speed operations are required.
Purpose of CPU Registers
CPU registers serve critical functions during program execution. Here's how
they are typically used:
Instruction Storage:
Registers temporarily hold instructions fetched from memory before the
CPU executes them, enabling faster instruction handling.
Temporary Result Storage:
During computations or operations, intermediate results are stored in
registers until they are needed or finalized.
Fast Data Access:
Registers act as the CPU's high-speed "workbench," providing instant
access to frequently used data. Instead of retrieving data from slower
main memory, the CPU uses registers to boost efficiency—similar to
keeping essential tools within arm’s reach.
Computer Instructions
A computer stores programs in its RAM using binary code—combinations of 1s
and 0s—which the CPU interprets as machine-level instructions. Each word in
RAM typically represents a single instruction in machine language.
These instructions are fetched one at a time by the CPU, where they are
decoded and then executed. In a basic computer system, there are three main
types of instruction formats: memory-reference instructions, register-
reference instructions, and input-output instructions.
Memory Reference Instructions
A Memory Reference Instruction is a type of instruction that accesses data
stored in the main memory (RAM). These instructions involve reading from
or writing to memory locations.
These instructions are part of the instruction set used by a basic computer to
perform operations like loading, storing, or performing arithmetic/logical tasks
using data from memory.
Register Reference Instruction
Register reference instructions are recognized by the operation code 111, along
with a 0 in the most significant bit (bit 15) of the instruction. These instructions
are used to perform operations on, or test the status of, the Accumulator (AC)
register. Since no data needs to be fetched from memory, the remaining 12 bits
of the instruction are used to specify the exact operation or test to be carried out.
Input-Output Instruction
Input-output instructions do not involve memory references and are identified
by the operation code 111, along with a 1 in the most significant bit (bit 15) of
the instruction. The remaining 12 bits specify the particular I/O operation or test
to be performed.
Instruction Type Identification
The type of instruction is determined by the control unit based on the four
highest-order bits (bits 12 through 15). Here's how the classification works:
If the 3-bit opcode (bits 12–14) is anything other than 111, the
instruction is treated as a memory-reference instruction, and bit 15
indicates the addressing mode (I).
If the opcode is 111, the control then checks bit 15:
o If bit 15 is 0, the instruction is a register-reference type.
o If bit 15 is 1, it is classified as an input-output instruction.
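The same decision can be written as a small Python sketch (assuming the 16-bit instruction layout described above, with bit 15 as the I bit and bits 14 to 12 as the opcode):

# Classify a 16-bit basic-computer instruction word: bit 15 = I, bits 14-12 = opcode.
def classify(word):
    i_bit = (word >> 15) & 1
    opcode = (word >> 12) & 0b111
    if opcode != 0b111:
        return "memory-reference (" + ("indirect" if i_bit else "direct") + " addressing)"
    return "register-reference" if i_bit == 0 else "input-output"

print(classify(0x2045))   # opcode 010 -> memory-reference, direct
print(classify(0x7800))   # opcode 111, bit 15 = 0 -> register-reference
print(classify(0xF400))   # opcode 111, bit 15 = 1 -> input-output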
Timing and Control
In computer architecture, timing and control refers to the coordination and
management of all operations inside the CPU. It ensures that every operation
happens in the correct sequence and at the right time.
Imagine a computer as a busy kitchen with multiple chefs (components like
ALU, registers, memory). The control unit acts like the head chef, giving
instructions on what to do, when, and in what order.
Components of Timing and Control
1. Control Unit (CU):
o The brain of the CPU's coordination.
o It interprets instruction codes and generates control signals.
o It directs the data flow between the CPU, memory, and I/O
devices.
2. Timing Signals (Clock Pulses):
o Every operation in the CPU is synced with a clock.
o The clock sends regular pulses that trigger the execution of micro-
operations.
3. Control Signals:
o These are binary signals sent by the CU to other components.
o They enable or disable various parts of the CPU like registers,
ALU, or buses.
Types of Control
There are two main types of control mechanisms:
Hardwired Control: Control signals are generated using fixed logic circuits. Fast, but hard to modify or update.
Microprogrammed Control: Control signals are generated by executing micro-instructions stored in control memory. Easier to modify and debug, but slower than hardwired.
Timing and Control in Basic Computer Organization
Control and timing are critical in computer organization when carrying out the Fetch-Decode-Execute cycle, the sequence of operations that a computer follows to run a program. The cycle supports step-by-step processing of instructions, with synchronization and correct sequencing achieved through appropriate timing signals.
Instruction Cycle:
In computer organization, timing and control are fundamental in executing the
Fetch-Decode-Execute cycle. This cycle consists of three main phases:
Fetch instructions: The control unit sends timing signals to fetch an
instruction from memory, which is then loaded into the instruction
register.
Decode instruction: The fetched instruction is decoded to determine
what operation needs to be performed. This step may involve control
signals to set the ALU or other components into specific modes.
Fetch operands or effective addresses from memory if needed: If an
operand resides in memory, the system initiates memory read cycles to
transfer it into CPU registers. The effective address (EA) refers to the
memory location of an operand. Retrieval can be expressed as: Register
← Memory[EA].
Execute: The decoded instruction is executed, with control signals
directing the data flow between the ALU, memory, and other
components.
Types of Control Units
There are two main types of control units in a computer's CPU:
1. Hardwired Control Unit
2. Microprogrammable control unit
1. Hardwired Control Unit
Hardwired control units generate control signals using fixed logic circuits that are designed for specific operations.
The key advantage of this approach is speed because control signals are
generated directly through physical circuits, and they can operate very quickly.
However, hardwired control units are less flexible, making them suitable for
small or specialised systems that do not require frequent updates or complex
operations. An example is a basic embedded system or a small CPU for a
specific application.
2. Microprogrammed Control Unit
Microprogrammed control units use a set of stored micro-instructions (organized into microprograms) to generate control signals. Compared to hardwired units, this approach offers far more flexibility because modifying the microprograms is straightforward. This makes microprogrammed control units well suited to more complex systems and general-purpose CPUs, in which flexibility and the ability to modify behaviour are essential. They are most commonly used in larger systems such as general-purpose processors (for example, in PCs or smartphones).
Single-Level Control Store
In a single-level control store, the opcode of the current instruction selects a microinstruction that contains the control signals and the address of the next microinstruction. The microinstruction also includes an addressing/branch field whose evaluation can be influenced by condition flags, and the final microinstruction of the sequence initiates the fetch of the next machine instruction from memory.
Two-Level Control Store
In a two-level control store, the microinstructions mainly hold the addresses of nano-instructions, while the control signals themselves are stored in the nano-instructions (typically one bit per signal). Because identical control-signal patterns are not repeated in the microprogram, this reduces the size of each microinstruction and the total control memory consumed.
Instruction Cycle in Computer Architecture
A program stored in a computer's memory is made up of a sequence of
instructions. To execute this program, the computer processes each instruction
through a repeating cycle.
In a basic computer system, the instruction cycle consists of the following
steps:
1. Fetch the instruction from memory.
2. Decode the instruction to determine the operation to be performed.
3. Read the effective address from memory if the instruction uses indirect
addressing.
4. Execute the instruction based on the decoded operation.
Once these four steps are completed for one instruction, the control unit loops
back to the first step to process the next instruction. This cycle continues
repeatedly until a Halt instruction is encountered, signaling the end of the
program.
A diagram typically illustrates these phases of the instruction cycle for clarity.
As shown in the figure, the halt condition occurs when the machine is turned off, when an unrecoverable error is encountered, and so on.
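Putting the four steps together, a minimal Python sketch of this repeating cycle might look like the following (the instruction set and memory layout are made up purely for illustration):

# Minimal fetch-decode-execute loop for a made-up machine (illustrative, not a real ISA).
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
          10: 5, 11: 7, 12: 0}                 # small program plus its data
pc, ac = 0, 0                                  # program counter and accumulator

while True:
    opcode, address = memory[pc]               # fetch the instruction at PC
    pc += 1                                    # point PC at the next instruction
    if opcode == "LOAD":                       # decode and execute
        ac = memory[address]
    elif opcode == "ADD":
        ac += memory[address]
    elif opcode == "STORE":
        memory[address] = ac
    elif opcode == "HALT":
        break

print(memory[12])                              # -> 12 (5 + 7)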
Fetch Cycle
The instruction to be executed is located at the address stored in the Program
Counter (PC). The processor retrieves the instruction from the memory
location indicated by the PC.
After fetching, the PC is incremented to point to the address of the next
instruction. The fetched instruction is then loaded into the Instruction Register
(IR). The processor reads this instruction and prepares for execution by
initiating the necessary operations.
Execute Cycle
During the execute cycle, the processor performs the operations specified by the
instruction. Data transfers involved in execution typically occur in two main
ways:
Processor-to-Memory / Memory-to-Processor:
Data is moved between the CPU and memory, either for reading or
writing purposes.
Processor-to-I/O / I/O-to-Processor:
Data is transferred between the processor and peripheral devices (input or
output).
In this phase, the processor carries out the required actions on the data.
Additionally, the control unit may alter the execution sequence depending on
the instruction. These data transfer methods work together to complete the
execute cycle.
State Diagram for Instruction Cycle
The diagram illustrates a broad overview of the instruction cycle in a basic
computer, represented as a state diagram. Within this cycle, some states may
remain unused (null) during specific operations, while others may be revisited
multiple times, depending on the instruction being executed.
Instruction Address Calculation:
The address of the next instruction is determined by adding a fixed value
to the address of the current instruction.
Instruction Fetch:
The instruction is retrieved from its designated memory location and
brought into the processor.
Instruction Operation Decoding:
The processor decodes the instruction to identify the type of operation to
perform and the operands involved.
Operand Address Calculation:
If the instruction references an operand in memory or through an I/O
device, its address is computed.
Operand Fetch:
The operand is then retrieved either from memory or via the I/O interface.
Data Operation:
The specified operation in the instruction is carried out using the fetched
operand(s).
Store Operands:
The result of the operation is either stored back in memory or sent to an
I/O device.
Memory Reference Instructions
Memory Reference Instructions are a type of command used to access data
stored in memory. These instructions generate memory addresses, allowing a
program to locate and interact with the required data. They define where the
information is stored and how it can be retrieved during execution.
There are seven types of memory reference instructions, which include the
following:
AND
The AND instruction performs a bitwise logical AND between the contents of a
register and the memory word located at the effective address. The result is then
stored back into the register.
ADD
The ADD instruction adds the value stored at the effective memory address to
the current content of the register. The result is placed back into the register.
LDA (Load Accumulator)
The LDA instruction loads the memory word specified by the effective address
directly into the register, replacing its previous content.
STA (Store Accumulator)
The STA instruction stores the value in the register into the memory location
defined by the effective address. The value is first placed onto the common bus
and then written to memory through the data input line. This operation requires
just one micro-operation.
BUN (Branch Unconditionally)
The BUN instruction causes the program to jump to a new instruction located at
the effective address. Since the Program Counter (PC) typically holds the
address of the next instruction, this command is used when execution needs to
continue at a non-sequential memory location.
BSA (Branch and Save Return Address)
The BSA instruction is used to call subroutines. It stores the address of the next
instruction (from the PC) into the memory location specified by the effective
address. This allows the program to return to the original sequence after the
subroutine is executed.
ISZ (Increment and Skip if Zero)
The ISZ instruction increases the value of the memory word located at the
effective address. If the resulting value becomes zero, the PC is incremented to
skip the next instruction. This is useful for loop control and conditional
execution based on counters.
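A rough Python sketch of the effect of these seven instructions, written at the register-transfer level only (the effective address ea is assumed to be already computed, and bus timing is ignored), could look like this:

# Register-transfer sketch of the seven memory-reference instructions.
def execute(op, ea, ac, pc, mem):
    if op == "AND":   ac &= mem[ea]
    elif op == "ADD": ac = (ac + mem[ea]) & 0xFFFF        # 16-bit wraparound assumed
    elif op == "LDA": ac = mem[ea]
    elif op == "STA": mem[ea] = ac
    elif op == "BUN": pc = ea
    elif op == "BSA":                                      # save return address, branch to ea + 1
        mem[ea] = pc
        pc = ea + 1
    elif op == "ISZ":                                      # increment memory word, skip if zero
        mem[ea] = (mem[ea] + 1) & 0xFFFF
        if mem[ea] == 0:
            pc += 1
    return ac, pc

mem = {20: 3, 30: 0xFFFF}
ac, pc = execute("ADD", 20, 4, 101, mem)                   # AC = 4 + 3 = 7
ac, pc = execute("ISZ", 30, ac, pc, mem)                   # word wraps to 0, so PC skips to 102
print(ac, pc, mem)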
What is an Input-Output (I/O) Interrupt?
When a CPU executes programs, it also needs to communicate with
input/output (I/O) devices like keyboards, printers, or disk drives. However,
these devices are slower compared to the CPU. So instead of making the CPU
wait for these devices to complete tasks, the system uses interrupts.
Definition:
An I/O interrupt is a signal sent by an I/O device to the CPU indicating that it
needs CPU attention or has completed an operation.
Why Use I/O Interrupts?
Without interrupts, the CPU would have to continuously check (poll) each I/O
device to see if it needs service — wasting time.
With interrupts:
CPU performs other tasks.
I/O device sends interrupt when ready.
CPU pauses current task, handles the I/O, then resumes.
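Conceptually, the CPU only checks a pending-interrupt flag between instructions instead of polling every device. The Python sketch below illustrates the idea; the class and method names are made up for illustration:

# Conceptual sketch of interrupt handling: the CPU checks a pending flag between instructions
# rather than polling each device continuously.
class CPU:
    def __init__(self):
        self.interrupt_enabled = True       # IE flag
        self.pending_device = None          # set by a device when it needs service

    def raise_interrupt(self, device):      # called by an I/O device
        self.pending_device = device

    def run_one_instruction(self):
        pass                                # normal fetch-decode-execute work would go here

    def cycle(self):
        self.run_one_instruction()
        if self.interrupt_enabled and self.pending_device is not None:
            print("servicing interrupt from", self.pending_device)   # save PC, jump to handler
            self.pending_device = None

cpu = CPU()
cpu.raise_interrupt("keyboard")
cpu.cycle()                                 # -> servicing interrupt from keyboard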
Types of I/O Interrupts
1. Hardware Interrupt: Generated by I/O devices (e.g., keyboard, mouse) when they need CPU attention.
2. Software Interrupt: Initiated by a running program (using instructions like INT in assembly).
3. Maskable Interrupt: Can be ignored/disabled by the CPU. Useful for lower-priority tasks.
4. Non-Maskable Interrupt: Cannot be ignored. Usually indicates serious problems (e.g., hardware failure).
In computer systems, input-output (I/O) interrupts help manage communication
between the CPU and external devices efficiently. Based on their source and
how they are handled, I/O interrupts can be classified into several types:
1. Hardware Interrupts: These are generated by external hardware devices
such as keyboards, mice, printers, or network cards. When a hardware device
completes a task or requires attention, it sends a signal (interrupt request) to the
CPU. For example, when a key is pressed on the keyboard, the keyboard
controller sends an interrupt to inform the CPU to read the key. Hardware
interrupts ensure that the CPU is alerted only when the device is ready, rather
than constantly checking its status.
2. Software Interrupts: These are initiated by a running program rather than
external hardware. They are typically used to request system-level services from
the operating system. For instance, when a program wants to perform
input/output or access hardware resources, it can use a specific instruction (like
INT in assembly language) to generate a software interrupt. These interrupts are
part of system calls and are used to switch from user mode to kernel mode
securely.
3. Maskable Interrupts: Maskable interrupts are those that the CPU can enable
or disable (mask) as needed. If a maskable interrupt occurs when interrupts are
disabled, the CPU ignores it temporarily. These are generally used for non-
critical operations, like disk I/O or peripheral communications. The CPU may
decide to service these interrupts only when it's not busy with more important
tasks.
4. Non-Maskable Interrupts (NMI): These are high-priority interrupts that
cannot be disabled or ignored by the CPU. They are used for critical events such
as hardware failures, power issues, or memory errors. When an NMI occurs, the
CPU immediately suspends its current activity and handles the interrupt. This
ensures that serious problems are addressed without delay to prevent system
crashes or data loss.
Together, these types of interrupts allow the system to function smoothly by
handling both regular I/O tasks and emergency situations efficiently.
Understanding them helps in grasping how operating systems and hardware
work together to manage processes and devices.
Components of a Computer / Design of a Basic Computer
A digital computer is composed of several functional units that work together to
receive, process, store, and display data. These core components include the
Input Unit, Central Processing Unit (CPU), Memory Unit, Output Unit, and
the Bus System. Each plays a distinct role in ensuring the computer functions
efficiently.
Functional Components of a Computer
The main building blocks of a computer system include:
Input Unit – Receives data and instructions from users or external
sources.
CPU (Central Processing Unit) – Processes data and manages
operations.
Memory Unit – Temporarily or permanently stores data and instructions.
Output Unit – Displays or delivers processed results.
Bus System – Connects all components and facilitates data transfer.
These components operate in coordination to carry out tasks and deliver desired
outputs.
1. Input Unit
Purpose: To collect and transmit user or external data to the computer.
Function: Transforms user inputs into binary signals understandable by
the system.
Common Devices (as of 2025):
o Keyboard, Mouse, Touchscreens
o Scanners, Sensors, Stylus Pens
o Voice Assistants (e.g., Siri, Alexa)
o Biometric Scanners (facial/fingerprint recognition)
o IoT-enabled smart inputs
2. Central Processing Unit (CPU) – The Computer’s Brain
The CPU is responsible for executing instructions and coordinating operations
within the computer. By 2025, most CPUs support multi-core and multi-
threading, enhancing performance through parallel processing.
CPU Components:
a. Arithmetic Logic Unit (ALU):
o Handles arithmetic operations (addition, subtraction, etc.)
o Performs logical operations (comparisons, decisions)
o In modern CPUs, supports AI/ML workloads through
vector/matrix processing.
b. Control Unit (CU):
o Manages instruction decoding and execution.
o Controls the flow of data among components.
o Sends timing and control signals to other parts of the computer.
c. Registers:
o Small, high-speed storage locations inside the CPU.
o Temporarily hold instructions, data, and addresses.
o Examples: Accumulator, Program Counter, Instruction Register.
o Modern CPUs often feature 64-bit or even 128-bit registers for
high-speed processing.
3. Memory / Storage Unit
Stores data and instructions before, during, and after processing.
a. Primary Memory (Main Memory):
o RAM: Temporarily stores data and is volatile.
Types in 2025: DDR5, LPDDR5X, MRAM
o ROM: Non-volatile, used for firmware and boot instructions.
o Cache Memory: Extremely fast storage located between the CPU
and RAM, typically in L1, L2, and L3 layers.
b. Secondary Storage:
o Long-term storage solutions like SSDs (especially NVMe), HDDs,
USB drives, and cloud services.
o 2025 trend: Hybrid storage systems with integrated cloud
functionality.
4. Output Unit
Purpose: Converts processed binary data into human-readable or
perceivable formats.
Examples:
o Visual: LED/OLED monitors, 4K/8K displays
o Print: Laser, Inkjet, and 3D printers
o Audio: Speakers and headphones
o Haptic: Vibration and touch feedback devices
o Emerging Tech: AR/VR headsets, voice outputs, Braille displays
for accessibility
Interconnection of Functional Components
All computer components communicate via a bus system, which is a collection
of signal-carrying pathways made up of electrical wires.
Types of Buses:
o Address Bus: Carries memory addresses of data or instructions.
o Data Bus: Transfers actual data between components.
o Control Bus: Carries control signals for coordinating operations.
The system bus connects the CPU, memory, and I/O devices. I/O devices
interact with the system bus through controller circuits, which manage
communication and ensure proper data exchange.
Design of Accumulator Logic
What is an Accumulator?
In computer architecture, an accumulator is a special-purpose register used to
store intermediate arithmetic and logic results during program execution.
Most instructions in a simple CPU (like addition, subtraction, etc.) use the
accumulator as one of the operands and also store the result back in it.
For example, if you add two numbers, the result is usually stored in the
accumulator.
Purpose of Accumulator Logic
The accumulator logic is a part of the Arithmetic Logic Unit (ALU) system
that:
Takes inputs (from memory, registers, etc.)
Performs operations (like ADD, SUB, AND, etc.)
Stores the result temporarily in the accumulator
Sends the result back to memory or another register if needed
Components in the Design of Accumulator Logic
1. Accumulator Register (AC)
A flip-flop-based register that stores intermediate results.
2. Input Bus
Carries data from memory or another register to the accumulator.
3. Arithmetic Logic Unit (ALU)
Performs arithmetic (ADD, SUB) and logic (AND, OR) operations using
accumulator contents and inputs.
4. Control Unit
Sends signals to perform the correct operation and update the
accumulator.
5. Output Bus
Sends the result from the accumulator to memory or another register if
needed.
Working of Accumulator Logic (Step-by-Step)
1. Input Fetched: Data is fetched from memory or another register and sent
to the ALU via the input bus.
2. Operation Selected: The control unit sends a signal to the ALU
specifying the operation (e.g., ADD, SUB).
3. Perform Operation: ALU performs the operation using the current value
of the accumulator and the input.
4. Store Result: The result is stored back into the accumulator register.
5. Output If Needed: The accumulator can also send the result to another
register or back to memory if required.
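A minimal Python sketch of this behaviour, with made-up control-signal names, shows how the ALU combines the current accumulator value with the value on the input bus and writes the result back into AC:

# Sketch of accumulator logic: the ALU combines the current AC with the value on the input bus,
# and the control signal selects which micro-operation to perform (illustrative names only).
def accumulator_step(ac, bus_input, control):
    if control == "ADD": return (ac + bus_input) & 0xFF
    if control == "SUB": return (ac - bus_input) & 0xFF
    if control == "AND": return ac & bus_input
    if control == "OR":  return ac | bus_input
    if control == "LOAD": return bus_input          # replace AC with the bus value
    if control == "CLEAR": return 0
    return ac                                       # no operation selected

ac = 0
ac = accumulator_step(ac, 25, "LOAD")
ac = accumulator_step(ac, 17, "ADD")
print(ac)                                           # -> 42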
Part II (Central Processing Unit)
Central Processing Unit (CPU)
General Register Organization in CPU
Introduction
In computer architecture, registers play a crucial role in executing instructions
efficiently. A register is a small, high-speed storage location located inside the
CPU that temporarily holds data and instructions. General Register
Organization refers to a CPU architecture where multiple general-purpose
registers (GPRs) are used to store operands, intermediate results, and other
temporary data during program execution. This design is widely used in modern
CPUs to increase processing speed and flexibility.
Need for General Register Organization
In earlier computers, most arithmetic and logic operations used a single
accumulator register. While this was simple, it had limitations in terms of
speed and flexibility. As computing needs grew, it became essential to allow
more operations to be performed simultaneously or in quick succession without
always accessing the slower main memory.
General register organization addresses this by providing multiple registers that
can be used by instructions to store and manipulate data. This reduces the
number of memory accesses, speeds up instruction execution, and provides a
more versatile platform for complex operations.
Structure of General Register Organization
In a typical general register organization, the CPU contains several components
working together:
1. General-Purpose Registers (GPRs): These are used by the CPU to hold
temporary data and results. Commonly labeled as R0, R1, R2, ..., Rn,
they are directly accessible by instructions.
2. Arithmetic Logic Unit (ALU): The ALU performs all arithmetic and
logic operations such as addition, subtraction, logical AND, OR, etc. It
takes its operands from the registers and stores the result back into a
register.
3. Control Unit: This unit manages the execution of instructions by
controlling the movement of data between registers, ALU, and memory.
It decodes instructions and generates control signals.
4. Instruction Register (IR): Holds the current instruction being executed.
5. Program Counter (PC): Keeps track of the memory address of the next
instruction to be executed.
6. Buses (Data and Address): These are used to transfer data between
registers, memory, and other parts of the CPU.
Working Process
When a CPU executes a program using general register organization, it follows
these steps:
The instruction is fetched from memory and stored in the Instruction
Register.
The operands required for the instruction are loaded from the general-
purpose registers.
The ALU performs the specified operation on the operands.
The result is stored back into one of the registers.
The Program Counter is updated to fetch the next instruction.
For example, the instruction ADD R1, R2, R3 adds the values in R2 and R3 and
stores the result in R1.
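A small Python sketch of this idea (the register file is modelled as a dictionary; names are illustrative) shows the three-address ADD taking both operands from registers and writing the result back to a register:

# Sketch of a register file executing a three-address ADD (ADD R1, R2, R3 means R1 = R2 + R3 here).
registers = {"R0": 0, "R1": 0, "R2": 10, "R3": 32}

def execute_add(dest, src1, src2):
    registers[dest] = registers[src1] + registers[src2]   # ALU operands come from registers,
                                                           # and the result goes back to a register
execute_add("R1", "R2", "R3")
print(registers["R1"])                                     # -> 42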
Advantages
Increased Speed: Reduces dependency on slower main memory.
Better Performance: Allows more complex and multiple operations to
be handled easily.
Efficient Use of Resources: Registers can be reused effectively within
programs.
Programming Flexibility: Makes it easier for compilers to generate
efficient code.
Stack-Based CPU Organization
Introduction
In computer architecture, a stack-based CPU organization is a type of CPU
design where most of the operations are performed using a structure called a
stack. A stack is a special type of data structure that follows the Last In, First
Out (LIFO) principle — the last item pushed (added) onto the stack is the first
one to be popped (removed).
This design simplifies instruction sets, reduces the need for multiple general-
purpose registers, and makes expression evaluation easier, especially in
arithmetic and logical operations.
What is a Stack?
A stack is an area in memory where data is stored and retrieved in a particular
order. The two main operations used in a stack are:
PUSH: Adds (stores) an item on the top of the stack.
POP: Removes (retrieves) the top item from the stack.
In stack-based CPU organization, operands are pushed onto the stack,
operations are performed on the top elements, and results are pushed back.
Structure of Stack-Based CPU Organization
The main components of a stack-based CPU include:
1. Stack Memory: A reserved portion of memory used to store data in
LIFO order.
2. Stack Pointer (SP): A special register that always points to the top of the
stack. It gets updated with each push or pop operation.
3. Instruction Register (IR): Holds the instruction currently being
executed.
4. Program Counter (PC): Holds the address of the next instruction to be
fetched from memory.
5. Control Unit: Decodes instructions and controls stack operations and
instruction execution.
6. Arithmetic Logic Unit (ALU): Performs arithmetic and logical
operations using the topmost elements of the stack.
How It Works
Here’s a step-by-step example of how instructions work in stack-based
organization:
Let’s say we want to compute:
A+B
The instructions in a stack machine would look like:
1. PUSH A → Push value of A onto the stack
2. PUSH B → Push value of B onto the stack
3. ADD → Pop top two values (A and B), add them, push the result back
onto the stack
No registers are needed to store A or B, as all operations happen on the stack.
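A tiny Python sketch of such a stack machine (the memory contents and instruction list are made up for illustration) shows how PUSH, ADD and POP manipulate only the stack:

# Tiny stack-machine sketch: all operands live on the stack, so ADD needs no explicit operands.
stack = []
memory = {"A": 6, "B": 9, "C": 0}

def run(program):
    for op, arg in program:
        if op == "PUSH": stack.append(memory[arg])
        elif op == "POP": memory[arg] = stack.pop()
        elif op == "ADD":                      # pop the top two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)

run([("PUSH", "A"), ("PUSH", "B"), ("ADD", None), ("POP", "C")])
print(memory["C"])                             # -> 15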
Advantages
Simple Instruction Set: Fewer operands needed per instruction (zero-
address format).
Efficient Expression Evaluation: Good for evaluating arithmetic
expressions and recursive function calls.
Less Register Usage: No need for general-purpose registers as all data is
handled via the stack.
Good for Recursive Programming: Easily supports function call stacks
and return addresses.
Disadvantages
Slower Access: Memory-based stack operations can be slower than
using registers.
Limited Flexibility: Not suitable for all types of programs; less efficient
for register-heavy operations.
Difficult to Optimize: Compiler optimization is harder compared to
register-based architectures.
Instruction Formats
Introduction
In computer architecture, an instruction is a binary command that tells the CPU
what operation to perform. Every instruction is represented in a specific layout
or structure known as the instruction format. The instruction format defines
how the instruction is divided into different fields, such as the operation code
(opcode), operands, addressing modes, and more. This structure allows the CPU
to correctly decode and execute instructions.
Importance of Instruction Format
Instruction formats are essential because:
They ensure consistent communication between the CPU and memory.
They help in decoding operations properly.
They determine the instruction length, which impacts memory usage
and speed.
They define how operands are accessed — from registers, memory, or
immediate values.
Basic Components of an Instruction Format
Most instruction formats include the following fields:
1. Opcode (Operation Code):
This field specifies the operation to be performed, such as ADD, SUB,
LOAD, STORE, etc.
2. Operand(s):
The operand field specifies the data to be operated on. It could be a
register, a memory location, or an immediate (constant) value.
3. Addressing Mode (optional):
This field indicates how the operand should be interpreted — whether it's
a direct address, indirect, or based on register content.
4. Register Field (optional):
Specifies the register(s) involved in the operation.
5. Immediate Field (optional):
Used when the instruction contains constant data to be used directly.
Types of Instruction Formats
1. Zero-Address Instruction Format
The zero-address format is used in stack-based CPU organizations, where
operands are implicitly taken from the stack. In this format, instructions do
not specify any operands directly. Instead, the CPU assumes that operands are at
the top of the stack. The result of an operation is also pushed back onto the
stack.
For example, to calculate C = A + B, the instructions would be:
PUSH A
PUSH B
ADD
POP C
Here, ADD adds the top two values from the stack, and POP stores the result in
C.
Advantage: Very simple instruction format.
Disadvantage: Frequent memory access due to stack operations.
2. One-Address Instruction Format
The one-address format is based on accumulator-based architecture. It uses
one explicit operand and assumes the second operand is stored in the
accumulator, a special register. The result is also stored in the accumulator.
For example:
LOAD A ; Load A into Accumulator
ADD B ; Add B to Accumulator
STORE C ; Store the result in C
Advantage: Simple and compact format.
Disadvantage: Limited by dependency on the accumulator.
3. Two-Address Instruction Format
This format uses two operands – one source and one destination. The result of
the operation is stored in the destination operand.
Example:
ADD A, B ; A = A + B
The contents of B are added to A, and the result is stored in A.
Advantage: Fewer instructions needed for arithmetic operations.
Disadvantage: Somewhat less flexible than three-address format.
4. Three-Address Instruction Format
In this format, all three operands are explicitly mentioned: two source
operands and one destination.
Example:
ADD C, A, B ; C = A + B
Here, A and B are added, and the result is stored in C. This format is commonly
used in RISC (Reduced Instruction Set Computer) architectures.
Advantage: Highly flexible and reduces the total number of instructions.
Disadvantage: Instructions are longer and require more memory.
Addressing Modes in Computer Architecture
Introduction
In computer architecture, when the CPU executes an instruction, it often needs
to access data stored in memory or registers. The way in which the address of
the operand (data) is specified within an instruction is known as the
addressing mode. In simpler terms, addressing modes define how and where
the CPU should find the data required for an operation.
Different addressing modes provide flexibility, efficiency, and simplicity in
writing machine-level instructions. They also help in implementing features like
loops, arrays, pointers, and constants.
Types of Addressing Modes
Addressing modes define how the operand of an instruction is selected or
accessed during program execution. Different modes offer flexibility in
programming and improve execution efficiency. Below are the common
addressing modes used in computer architecture:
1. Implied Addressing Mode
In this mode, the operand is implicitly defined by the instruction itself—no
operand field is needed. For example, in the instruction "Complement
Accumulator," the operand (accumulator) is assumed, so it’s not explicitly
stated. Many register reference instructions that operate on the accumulator
use implied addressing.
2. Immediate Addressing Mode
Here, the operand is directly provided within the instruction. Instead of
pointing to a memory location, the instruction contains the actual value to be
used. This mode is efficient for initializing registers with constant values.
Example: LOAD #10 loads the value 10 directly into a register.
3. Register Addressing Mode
In this mode, the operand is stored in one of the CPU’s internal registers. The
instruction includes a field that specifies which register to use. A k-bit register
field can identify one of 2^k registers. This is a fast and efficient way to access
data.
4. Register Indirect Addressing Mode
The instruction specifies a register that holds the address of the actual
operand in memory. Thus, instead of accessing the data directly, the register
provides a pointer to the memory location where the data resides. This mode
requires fewer bits in the instruction compared to specifying a full memory
address.
5. Autoincrement / Autodecrement Mode
This mode is an extension of register indirect addressing. The difference is that
the register used to point to memory is automatically incremented or
decremented after (or before) accessing the operand. This is particularly useful
for working with data tables or arrays, where sequential memory access is
needed.
6. Direct Addressing Mode
In direct addressing, the address field of the instruction holds the memory
location of the operand. The effective address (EA) is exactly the address
given in the instruction.
Example: If the instruction contains address 500, then the operand is fetched
from memory location 500.
7. Indirect Addressing Mode
This mode uses the address field of the instruction to point to a memory
location, which in turn contains the effective address of the operand. This
requires an extra memory access, but offers flexibility when dealing with
variable or pointer-based data.
8. Indexed Addressing Mode
In indexed mode, the effective address is obtained by adding the contents of
an index register to the address field in the instruction. The index register
usually holds an offset value, and the address field typically represents the base
address of an array or data block.
This mode is especially useful for array processing and loop structures.
These various addressing modes provide the flexibility and efficiency needed in
modern computer systems to access data in different ways, depending on the
structure and logic of the program.
Data Transfer and Manipulation in Computer Architecture
Introduction
In the intricate world of computer architecture, data transfer and manipulation
stand as pillars that uphold the efficiency and functionality of computing
systems. These processes are integral to the seamless operation of computers,
enabling them to perform a wide range of tasks from executing simple
instructions to running complex applications. Understanding how data moves
within a system and how it can be manipulated is crucial for anyone looking to
delve deeper into the inner workings of computer technology.
Data transfer involves the movement of data between different components of
a computer system, such as from the processor to memory, between registers, or
across network interfaces. This movement is essential for fetching instructions,
storing results, and communicating between various parts of a system.
Data manipulation, on the other hand, refers to the processes that transform
and operate on data to produce desired outcomes. This includes arithmetic
operations, logical operations, bit-level manipulations, and data shifting. Each
of these operations plays a vital role in how computers process information and
execute tasks.
Types of Data Manipulation Instructions
Data manipulation instructions can be divided into three categories:
1) Arithmetic instructions
2) Logical and bit manipulation instructions
3) Shift instructions
Arithmetic Instructions
Arithmetic instructions include increment, decrement, add, subtract, multiply,
divide, add with carry, subtract with borrow, and negate (two's complement).
The negate instruction forms the two's complement of a number, which is
equivalent to changing its sign.
Most computers provide instructions for all four basic operations: addition,
subtraction, multiplication, and division. If a computer has only addition (ADD)
and possibly subtraction (SUB) instructions, the remaining two operations,
multiplication (MUL) and division (DIV), must be generated by software
subroutines. These four basic arithmetic operations are sufficient for solving
scientific problems when they are expressed by numerical analysis methods.
The table below shows typical arithmetic instructions and their mnemonics:
Name                      Mnemonic
Increment                 INC
Decrement                 DEC
Add                       ADD
Subtract                  SUB
Multiply                  MUL
Divide                    DIV
Add with carry            ADDC
Subtract with borrow      SUBB
Negate (2's complement)   NEG
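When MUL (or DIV) is not provided by the hardware, it has to be synthesized from simpler instructions. The following Python sketch shows the shift-and-add idea a software multiplication subroutine would use; it is only an illustration, since a real subroutine would be written in the machine's own ADD and shift instructions.

def software_multiply(a, b):
    # Multiplication built only from addition and shifting, the way a
    # subroutine can emulate a missing MUL instruction.
    result = 0
    while b > 0:
        if b & 1:          # lowest bit of the multiplier is 1
            result += a    # add the (shifted) multiplicand
        a <<= 1            # shift the multiplicand left
        b >>= 1            # shift the multiplier right
    return result

print(software_multiply(13, 11))   # 143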
Logical and Bit Manipulation Instructions
The logical and bit manipulation instructions include clear (clear the contents
of the accumulator or a register), complement the accumulator, AND, OR,
exclusive-OR, clear carry, set carry, complement carry, enable interrupts, and
disable interrupts.
Logical instructions treat each bit of an operand individually as a Boolean
variable, which makes them useful for performing binary operations on the
strings of bits stored in registers.
The clear instruction sets all the bits of a register to '0'.
The AND instruction is sometimes referred to as a bit-clear or mask instruction,
because ANDing with a 0 clears the selected bit positions.
The OR instruction is sometimes referred to as a bit-set instruction, because
ORing with a 1 sets the selected bit positions.
The XOR instruction is sometimes referred to as a bit-complement instruction,
because XORing with a 1 inverts the selected bit positions.
The set instruction makes all the bits of a register '1'.
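A minimal Python sketch of these three uses, with a made-up 8-bit register value and mask:

reg  = 0b10110110   # contents of a hypothetical 8-bit register
mask = 0b00001111   # selects the low nibble

masked   = reg & mask   # AND as a mask: bits are cleared wherever the mask has 0s
set_bits = reg | mask   # OR as bit-set: bits are set wherever the mask has 1s
toggled  = reg ^ mask   # XOR as bit-complement: bits are inverted wherever the mask has 1s

print(f"{masked:08b} {set_bits:08b} {toggled:08b}")   # 00000110 10111111 10111001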
Name Mnemonic Example Explanation
Clear CLR CLR R1 Sets the value in register R1 to 0.
Inverts all the bits in the value
Complement COM COM R2
stored in register R2.
Performs a bitwise AND between
AND R3,
AND AND values in R3 and R4, stores the
R4
result in R3.
Performs a bitwise OR between
OR R5,
OR OR values in R5 and R6, stores the
R6
result in R5.
Performs a bitwise XOR between
XOR R7,
Exclusive-OR XOR values in R7 and R8, stores the
R8
result in R7.
Clear carry CLRC CLRC Clears the carry flag (sets it to 0).
Set Carry SETC SETC Sets the carry flag to 1.
Complement Inverts the carry flag (if it was 1, it
COMC COMC
Carry becomes 0, and vice versa).
Enable Enables the interrupt system,
EI EI
Interrupt allowing interrupts to occur.
Disables the interrupt system,
Disable
DI DI preventing interrupts from
Interrupt
occurring.
Shift Instructions
Shift instructions move the bits of a memory byte or register one bit position
to the right or to the left.
There are two basic types of shift instructions: arithmetic and logical.
Arithmetic shifts consider the contents of the memory byte or register to be a
signed number. So, when the shift is made, the number is arithmetically divided
by two (right shift) or multiplied by two (left shift). Logical shifts consider the
contents of the register or memory byte to be just a bit pattern when the shift is
made.
A shift instruction typically contains the following fields:
OP: the opcode field.
RL: tells whether to shift right or left.
REG: specifies which register is to be shifted.
COUNT: gives the number of bit positions to be shifted.
TYPE: specifies the kind of shift to perform (for example, a logical shift, an
arithmetic shift, or a rotate).
In a logical right shift, zeros are shifted into the vacated high-order
positions, and in a logical left shift, zeros are shifted into the vacated
low-order positions. In an arithmetic right shift, the sign bit is normally
copied into the vacated high-order positions so that the sign of the number is
preserved.
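A small Python sketch of the difference, using an 8-bit value (the width and the example value are chosen only for illustration):

def logical_shift_right(value, count, width=8):
    # Logical shift: zeros enter the vacated high-order positions.
    return (value & ((1 << width) - 1)) >> count

def arithmetic_shift_right(value, count, width=8):
    # Arithmetic shift: the value is treated as a signed number, so the sign
    # bit is replicated and the result equals division by 2**count.
    value &= (1 << width) - 1
    if value & (1 << (width - 1)):     # negative in two's complement
        value -= 1 << width
    return (value >> count) & ((1 << width) - 1)

x = 0b10010100   # 148 unsigned, or -108 as an 8-bit two's-complement number
print(f"{logical_shift_right(x, 1):08b}")      # 01001010 (74)
print(f"{arithmetic_shift_right(x, 1):08b}")   # 11001010 (-54, i.e. -108 divided by 2)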
Program Control in Computer Architecture
Introduction
In computer architecture, program control refers to how the flow of execution
of instructions is managed by the CPU. A typical program consists of many
instructions, and program control determines which instruction will be
executed next, and how the CPU responds to different conditions during
execution.
Program control includes sequencing, branching, and decision-making
operations, and is an essential part of instruction execution in any computer
system.
Purpose of Program Control
The main goals of program control are:
To maintain proper sequence of instruction execution
To allow conditional and unconditional branching
To handle interrupts or subroutine calls
To support logical decision-making during execution
Instruction Sequencing
In a normal case (without branching), the CPU executes instructions
sequentially, one after the other. The Program Counter (PC) holds the
address of the next instruction. After each instruction is executed, the PC is
automatically incremented to point to the next instruction.
Example:
Instruction 1 → Instruction 2 → Instruction 3 → ...
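As a rough illustration, the following Python sketch mimics sequential execution: the program counter selects each instruction in turn and is incremented after every fetch (the instruction strings are placeholders, not a real instruction set).

program = ["LOAD R1, 100", "ADD R1, R2", "STORE R1, 200"]   # placeholder instructions

pc = 0                              # program counter starts at the first instruction
while pc < len(program):
    instruction = program[pc]       # fetch the instruction the PC points to
    pc += 1                         # PC automatically advances to the next instruction
    print("executing:", instruction)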
Types of Program Control Instructions
1. Branch Instructions
These instructions change the sequence of execution by modifying the
Program Counter (PC).
There are two types of branches:
a. Unconditional Branch
Directly jumps to a new instruction address.
Example: JMP 2000 → Jump to instruction at address 2000
b. Conditional Branch
The CPU jumps to a new address only if a condition is true (like Zero,
Negative, Overflow, etc.).
Example: JZ 3000 → Jump to 3000 if the Zero flag is set
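A minimal Python sketch of how the program counter is updated in these two cases (JMP and JZ mirror the examples above; the addresses are illustrative):

def next_pc(pc, opcode, target, zero_flag):
    # Branch handling: JMP always loads the target address into the PC,
    # JZ loads it only when the zero flag is set; otherwise execution
    # continues with the next sequential instruction.
    if opcode == "JMP":
        return target
    if opcode == "JZ" and zero_flag:
        return target
    return pc + 1

print(next_pc(10, "JMP", 2000, False))   # 2000 (always taken)
print(next_pc(10, "JZ", 3000, True))     # 3000 (taken, Z flag set)
print(next_pc(10, "JZ", 3000, False))    # 11   (not taken, fall through)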
2. Subroutine Call and Return
Subroutines are reusable blocks of code (like functions).
CALL instruction: Transfers control to the subroutine.
RET instruction: Returns control to the instruction after the CALL.
The return address is usually saved on a stack, so the CPU knows where to
return after subroutine execution.
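The stack-based return mechanism can be sketched in Python as follows (the tiny instruction format and the program itself are invented for the example):

def run(program):
    pc, stack = 0, []
    while pc < len(program):
        op, operand = program[pc]
        if op == "CALL":
            stack.append(pc + 1)    # save the return address (instruction after CALL)
            pc = operand            # jump to the subroutine
        elif op == "RET":
            pc = stack.pop()        # resume at the saved return address
        elif op == "HALT":
            break
        else:
            print("exec:", op, operand)
            pc += 1

run([
    ("LOAD", 5),    # 0
    ("CALL", 3),    # 1: call the subroutine that starts at index 3
    ("HALT", 0),    # 2: executed after the subroutine returns
    ("ADD", 1),     # 3: subroutine body
    ("RET", 0),     # 4: return to index 2
])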
3. Interrupts
Interrupts are signals that pause the current program and execute a service
routine (like handling I/O or errors).
After handling the interrupt, the program resumes from where it left off.
Status Flags for Control Decisions
The CPU uses status flags (in a flag register) to make decisions in conditional
branching:
Zero (Z) Flag: Set if the result is zero
Carry (C) Flag: Set if there’s a carry/borrow in arithmetic
Sign (S) Flag: Shows if the result is negative
Overflow (O) Flag: Set when the result of a signed operation overflows
These flags are used with branch instructions to guide the flow.
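The following Python sketch shows one plausible way these flags could be derived from an 8-bit addition (the register width and operand values are assumptions for the example, not a specific CPU's rules):

def add_and_set_flags(a, b, width=8):
    mask = (1 << width) - 1
    result = (a + b) & mask
    flags = {
        "Z": result == 0,                        # zero result
        "C": (a + b) > mask,                     # carry out of the most significant bit
        "S": bool(result & (1 << (width - 1))),  # sign (MSB) of the result
        # Signed overflow: both operands share a sign that differs from the result's sign.
        "O": bool((a ^ result) & (b ^ result) & (1 << (width - 1))),
    }
    return result, flags

print(add_and_set_flags(0x7F, 0x01))   # (128, {'Z': False, 'C': False, 'S': True, 'O': True})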
Program Control Flow Diagram
Start → Instruction 1 → Instruction 2
↓ (if condition met)
Branch to Instruction 5
↓ (if not)
Continue to Instruction 3
RISC and CISC in Computer Organization
In computer architecture, RISC (Reduced Instruction Set Computer) and
CISC (Complex Instruction Set Computer) are two primary approaches to
CPU design. While RISC focuses on simplifying the hardware by using a
smaller set of instructions, CISC emphasizes minimizing the number of
instructions in a program by making each instruction more powerful.
1. Reduced Instruction Set Computer (RISC)
The core idea behind RISC is to make the hardware simpler by using a small set
of instructions that perform basic operations like load, store, and arithmetic.
Each instruction is designed to execute quickly and efficiently.
Characteristics of RISC:
Uses simple instructions for faster decoding and execution.
Each instruction is typically one word in size.
Most instructions execute in a single clock cycle.
Includes more general-purpose registers.
Supports simple addressing modes.
Has fewer data types.
Supports pipelining for faster instruction processing.
Advantages of RISC:
Faster execution due to simpler instructions.
Lower power consumption, making it suitable for mobile and embedded
devices.
Simplified CPU design, which aids in easier testing and debugging.
Disadvantages of RISC:
More instructions are needed to perform complex operations.
Increased memory usage due to longer programs.
Potentially higher development costs for optimized compilers and
hardware.
2. Complex Instruction Set Computer (CISC)
CISC architecture is designed to minimize the number of instructions per
program by using more complex instructions that can perform multiple
operations (e.g., loading, computing, and storing) in a single command.
Characteristics of CISC:
Instructions are complex and may take multiple cycles to execute.
Instructions are variable in length and may be larger than one word in size.
Makes use of fewer general-purpose registers, relying more on
memory.
Includes complex addressing modes.
Supports a wide range of data types.
Advantages of CISC:
Smaller program size, since one instruction can perform multiple tasks.
Efficient memory usage, with fewer instructions needed.
Well-established architecture, widely supported by software and
hardware tools.
Disadvantages of CISC:
Slower instruction execution due to more complex decoding.
More difficult to design and manufacture, increasing hardware
complexity.
Higher power consumption, which may be unsuitable for battery-
powered devices.
RISC and CISC Architectures – A Comparison
In the world of computer architecture, RISC (Reduced Instruction Set
Computer) and CISC (Complex Instruction Set Computer) represent two
different philosophies for designing the instruction sets that a CPU uses to
perform operations. These architectures have played a major role in the
evolution of modern processors and affect how efficiently programs run on a
computer.
RISC architecture is based on the principle of using a small set of simple
instructions that can execute very quickly—typically, within a single clock
cycle. The idea behind RISC is that simple instructions can be executed faster,
and complex tasks can be broken down into a series of these fast, simple steps.
As a result, RISC processors require more lines of code to perform a task, but
each instruction is easy to decode and execute. Common features of RISC
include fixed-length instructions, a large number of general-purpose registers,
and separate instructions for memory access (Load/Store architecture).
Examples of RISC-based processors include ARM, MIPS, and SPARC.
On the other hand, CISC architecture uses a large and complex set of
instructions, many of which can perform multi-step tasks within a single
instruction. For example, a single CISC instruction might load data from
memory, perform a calculation, and store the result—all in one command. This
reduces the number of instructions required in a program, making the code
smaller and potentially easier to write. However, CISC instructions are more
complex, require more decoding time, and may take multiple clock cycles to
execute. Key features of CISC include variable-length instructions, multiple
addressing modes, and the ability to perform operations directly on memory.
The most well-known CISC architecture is the Intel x86 family of processors.
There are several important differences between RISC and CISC. RISC
emphasizes hardware simplicity and execution speed, making it well-suited
for applications that require high performance and low power consumption,
such as mobile phones and embedded systems. CISC, however, focuses on
software simplicity and code density, making it more favorable for general-
purpose computing where memory space is limited and backward compatibility
is important.
For example, to perform an addition operation in RISC, multiple simple
instructions like LOAD, ADD, and STORE may be used. In contrast, a CISC
processor might perform the same operation using a single instruction like ADD
A, B, C, which adds two values from memory and stores the result—without
needing separate load or store commands.
Both architectures have their pros and cons. RISC is easier to pipeline, easier to
debug, and consumes less power. However, it often leads to longer programs.
CISC allows for more compact code and easier programming but suffers from
slower execution and complex hardware.
In conclusion, RISC and CISC are two different approaches to solving the
same problem—how to execute instructions effectively. Today, modern
processors often use a hybrid approach, combining the best of both worlds.
For example, Intel’s modern CISC processors internally translate complex
instructions into simpler RISC-like micro-operations. Understanding RISC and
CISC helps students appreciate how CPU design affects system performance,
efficiency, and programming.
RISC vs CISC: Key Differences
Feature               RISC                           CISC
Instruction Set       Small and simple               Large and complex
Instruction Length    Fixed                          Variable
Execution Time        One cycle per instruction      Multiple cycles possible
Hardware Complexity   Simple                         Complex
Code Size             Larger                         Smaller
Memory Access         Only Load/Store instructions   Memory can be accessed by most instructions