
Computer Architecture

The document provides a comprehensive overview of computer architecture, covering basic organization, instruction sets (RISC vs. CISC), and various processing techniques such as pipelining and vector processing. It explains the roles of the CPU, memory, and I/O devices, as well as the execution of instructions and the importance of registers. Additionally, it delves into computer arithmetic, including binary operations and algorithms for multiplication and division.


Computer Architecture
For B.Sc. CS 1st Year, BCA 1st Sem, M.Sc. Computer Science

Rayyan Anwar

Unit I: Basic Computer Organization & Design

1. Introduction
A computer is an electronic machine that accepts data (input), processes it, and produces
meaningful results (output). To perform these tasks, it must be properly organized and
designed.
• Organization → The way operational units (CPU, memory, I/O devices) are
connected and interact.
• Design → The details of implementation, like control signals, data paths, micro-
operations, and logic.
Thus, understanding basic computer organization & design is essential to know how
hardware executes software instructions.

2. Basic Computer Organization


A computer system consists of 3 main units:
1. Central Processing Unit (CPU)
2. Memory Unit
3. Input/Output Devices

Block Diagram (Conceptual)


+---------------------+
|     Input Unit      |
+---------------------+
          |
          v
+---------------------+
|     Memory Unit     |
+---------------------+
          |
          v
+---------------------+
|   CPU (ALU + CU)    |
+---------------------+
          |
          v
+---------------------+
|     Output Unit     |
+---------------------+
• Input Unit: Accepts data/instructions (keyboard, mouse, scanner).
• Memory Unit: Stores data/instructions (RAM, ROM).
• CPU: Processes data (ALU for operations, CU for control, Registers for storage).
• Output Unit: Displays results (monitor, printer).

3. Instruction & Instruction Codes


Instruction
• A command to the CPU to perform a specific operation.
• Example: ADD R1, R2 → add contents of R2 to R1.

Instruction Code
• Binary representation of an instruction.
• Consists of two parts:
1. Opcode (Operation Code) → Specifies operation (ADD, SUB, MOV).
2. Operand/Address field → Specifies data or memory location.
Format Example (16-bit Instruction):
| Opcode (4 bits) | Address (12 bits) |
• 1010 000011001010 → Opcode = 1010 (ADD), Address = 00CAh.
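The opcode/address split can be sketched in Python with shifts and masks (the 16-bit format is the one above; reading opcode 1010 as ADD is just the illustration's assumption):

```python
# Decode a hypothetical 16-bit instruction: 4-bit opcode, 12-bit address.
def decode(word):
    opcode = (word >> 12) & 0xF    # top 4 bits
    address = word & 0xFFF         # low 12 bits
    return opcode, address

op, addr = decode(0b1010000011001010)
print(bin(op), hex(addr))  # 0b1010 0xca
```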

4. Timing & Control Unit


• Every instruction execution requires a sequence of control signals.
• The Control Unit (CU) generates these signals and synchronizes activities using a
clock pulse.
Types of Control Unit
1. Hardwired Control
o Uses fixed digital circuits (logic gates, flip-flops).
o Faster but less flexible.
2. Microprogrammed Control
o Uses microinstructions stored in control memory.
o Flexible, easier to modify.

5. Instruction Cycle
The process of executing an instruction = Instruction Cycle.
It consists of:
1. Fetch – Get instruction from memory.
2. Decode – CU decodes opcode.
3. Execute – Perform operation.
4. Interrupt Check – If interrupt occurs, handle it.
Diagram (conceptual flow):
Fetch → Decode → Execute → Interrupt → Next Instruction

6. Registers
Registers are small, high-speed storage units inside CPU.
Types of Registers
1. General Purpose Registers (GPRs)
o Temporary storage for data & results.
o Example: AX, BX, CX, DX in x86 processors.
2. Special Purpose Registers
o Program Counter (PC) → Holds address of next instruction.
o Instruction Register (IR) → Holds current instruction.
o Memory Address Register (MAR) → Holds memory address.
o Memory Data Register (MDR) → Holds data fetched/written.
o Accumulator (ACC) → Stores intermediate results.
o Index Register → Used for indexed addressing, supports array access.
o Status Register / Flags → Holds condition codes (Zero, Carry, Sign, Overflow).

7. Register Transfer & Micro-operations


• Register Transfer: Moving data from one register to another.
o Example: R1 ← R2 (content of R2 copied to R1).
• Micro-operations: Fundamental operations on data in registers.
o Arithmetic: ADD, SUB, INC, DEC.
o Logic: AND, OR, XOR, NOT.
o Shift: Left shift, right shift.
o Data Transfer: MOV, LOAD, STORE.
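These micro-operations can be modeled on an 8-bit register by masking every result to 8 bits; a minimal Python sketch (the function names are illustrative, not a real instruction set):

```python
# Model micro-operations on an 8-bit register: mask results to 8 bits.
MASK = 0xFF

def inc(r):  return (r + 1) & MASK      # arithmetic micro-op: INC (wraps at 8 bits)
def shl(r):  return (r << 1) & MASK     # shift micro-op: left shift (MSB lost)
def shr(r):  return (r >> 1) & MASK     # shift micro-op: right shift

r2 = 0b1000_0001
r1 = r2                                 # register transfer: R1 <- R2
print(bin(shl(r1)))                     # 0b10  (the MSB is shifted out)
print(bin(inc(0xFF)))                   # 0b0   (increment wraps around)
```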

8. Register Transfer Instructions


• Instructions that explicitly transfer data between registers or between memory and
registers.
• Examples:
o MOV R1, R2 → Transfer R2 → R1.
o LOAD R1, [1000H] → Load memory content at 1000H into R1.
o STORE R2, [2000H] → Store R2 contents into memory address 2000H.

9. Input/Output & Interrupts


Input/Output Organization
• CPU communicates with external devices using I/O instructions.
• Two main approaches:
1. Programmed I/O – CPU waits for device.
2. Interrupt-driven I/O – Device signals CPU when ready.
Interrupts
• An interrupt is a signal that temporarily halts the CPU's normal execution to handle urgent tasks.
• Types:
1. Hardware Interrupts – Generated by I/O devices.
2. Software Interrupts – Generated by instructions (e.g., INT 21H).
Interrupt Cycle:
1. Current instruction completes.
2. CPU saves PC & status.
3. Jumps to Interrupt Service Routine (ISR).
4. Executes ISR.
5. Returns to normal program.

10. Memory Reference Instructions


• Operate on data stored in memory.
• Typical types:
o LOAD – Copy data from memory → register.
o STORE – Copy data from register → memory.
o ADD – Add contents of memory to register.
o SUB – Subtract memory contents.
Example (in pseudocode):
ADD 2000H → AC ← AC + [2000H]
where AC = Accumulator, [2000H] = contents of memory at 2000H.

11. Worked Example


Suppose we want to compute: C = A + B, where
• A stored at 1000H
• B stored at 1001H
• Result stored at 1002H
Steps:
1. LOAD R1, [1000H] → Load A into R1.
2. LOAD R2, [1001H] → Load B into R2.
3. ADD R1, R2 → Add A+B.
4. STORE R1, [1002H] → Store result into memory.
This shows how instructions + registers + memory work together.
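The four steps above can be simulated with a toy memory/register model in Python (the sample values A = 7, B = 5 are assumptions for illustration):

```python
# Toy machine: memory as a dict keyed by address, registers as a dict.
memory = {0x1000: 7, 0x1001: 5}     # A = 7, B = 5 (sample values)
reg = {}

reg["R1"] = memory[0x1000]          # LOAD R1, [1000H]
reg["R2"] = memory[0x1001]          # LOAD R2, [1001H]
reg["R1"] = reg["R1"] + reg["R2"]   # ADD R1, R2
memory[0x1002] = reg["R1"]          # STORE R1, [1002H]

print(memory[0x1002])               # 12
```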

12. Summary
• A computer is organized into CPU, memory, I/O.
• Instructions are represented as binary codes (opcode + operand).
• The Control Unit manages instruction execution through the instruction cycle.
• Registers are fast storage units → GPRs (AX, BX, etc.), SPRs (PC, IR, MAR, MDR, ACC,
Flags).
• Register transfers and micro-operations define internal data flow.
• I/O & interrupts allow efficient device communication.
• Memory reference instructions form the foundation of programming.

Unit II: RISC–CISC, Parallel Processing, Pipelining, Vector Processing & Array Processor

✦ 1. Introduction
In modern computer architecture, performance depends not only on faster hardware but
also on how efficiently instructions are executed. Two dominant instruction set philosophies
exist: CISC (Complex Instruction Set Computer) and RISC (Reduced Instruction Set
Computer).
Additionally, parallel processing techniques such as pipelining, vector processing, and array
processors are used to enhance execution speed and efficiency. This unit explores these
concepts in detail.

✦ 2. RISC vs. CISC


(a) CISC (Complex Instruction Set Computer)
• Contains a large set of complex instructions.
• Single instruction may perform multiple low-level operations.
• Example: Intel x86 processors.
Characteristics:
1. Large instruction set (100–250 instructions).
2. Complex addressing modes.
3. Microprogrammed control unit.
4. Variable-length instruction format.
5. Emphasis on hardware complexity.
Advantages:
• Easier programming due to powerful instructions.
• Smaller code size.
• Well-suited for systems with limited memory.
Disadvantages:
• Complex hardware leads to higher cost.
• Slower execution per instruction due to multiple cycles.
• Difficult to pipeline due to irregular instruction lengths.

(b) RISC (Reduced Instruction Set Computer)


• Focuses on a small set of simple instructions.
• Each instruction performs a single operation.
• Example: ARM, MIPS, SPARC.
Characteristics:
1. Small instruction set (30–50 instructions).
2. Fixed-length instructions → easier decoding.
3. Load/Store architecture (memory access only via load/store).
4. Hardwired control unit.
5. Emphasis on software optimization.
Advantages:
• High speed due to simple instructions.
• Easy pipelining.
• Lower hardware complexity and cost.
• Energy efficient.
Disadvantages:
• Larger program size (more instructions needed).
• Compiler design is complex.

(c) Comparison Table: RISC vs CISC

| Feature            | RISC                   | CISC                     |
|--------------------|------------------------|--------------------------|
| Instruction Set    | Small, simple (30–50)  | Large, complex (100–250) |
| Instruction Length | Fixed                  | Variable                 |
| Execution Time     | One clock cycle        | Multiple cycles          |
| Control Unit       | Hardwired              | Microprogrammed          |
| Pipelining         | Easy to implement      | Difficult to implement   |
| Examples           | MIPS, ARM, SPARC       | Intel x86, VAX           |

✦ 3. Berkeley RISC Architecture


• Developed at University of California, Berkeley (1980s).
• Designed to maximize instruction throughput.
Key Features:
1. One instruction per cycle execution.
2. Large number of general-purpose registers.
3. Simple addressing modes.
4. Load/Store architecture.
5. Support for pipelining.
Impact:
• Forms the foundation for many modern processors (like ARM).

✦ 4. Parallel Processing
Parallel processing allows execution of multiple instructions simultaneously to increase
performance.
(a) Flynn’s Taxonomy
1. SISD (Single Instruction, Single Data): Traditional uniprocessor.
2. SIMD (Single Instruction, Multiple Data): Same instruction operates on multiple
data (e.g., vector processors).
3. MISD (Multiple Instruction, Single Data): Rarely used.
4. MIMD (Multiple Instruction, Multiple Data): Used in multiprocessors and clusters.

✦ 5. Pipelining
Pipelining is a technique where multiple instruction stages are overlapped.
• Similar to an assembly line in manufacturing.
• Increases instruction throughput without increasing clock speed.


(a) Stages of Instruction Pipeline
1. Instruction Fetch (IF)
2. Instruction Decode (ID)
3. Operand Fetch (OF)
4. Execution (EX)
5. Write Back (WB)
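With the stages above, an ideal k-stage pipeline completes n instructions in k + (n − 1) cycles instead of n × k; a small sketch of the speedup arithmetic (the sample sizes are assumptions):

```python
def pipelined_cycles(n, k):
    """Ideal k-stage pipeline: first result after k cycles, then one per cycle."""
    return k + (n - 1)

n, k = 100, 5                        # sample values: 100 instructions, 5 stages
nonpipelined = n * k                 # 500 cycles without overlap
pipelined = pipelined_cycles(n, k)   # 104 cycles with overlap
print(round(nonpipelined / pipelined, 2))  # 4.81 (approaches k for large n)
```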

(b) Pipeline Hazards


1. Structural Hazards – hardware resource conflict.
2. Data Hazards – dependency between instructions (RAW, WAR, WAW).
3. Control Hazards – caused by branches/jumps that change the instruction flow.

(c) Arithmetic Pipeline


• Used in arithmetic units like floating-point addition/multiplication.
• Example: Multiply pipeline with stages for exponent, mantissa, normalization.

(d) RISC Pipeline vs CISC Pipeline

| Aspect             | RISC Pipeline          | CISC Pipeline             |
|--------------------|------------------------|---------------------------|
| Instruction Length | Fixed → easier pipeline | Variable → complex pipeline |
| Execution Speed    | Faster                 | Slower                    |
| Hazards            | Easier to handle       | Complex hazard management |

✦ 6. Vector Processing
• Vector Processor: Executes a single instruction on multiple data elements
simultaneously.
• Example: Adding two arrays with one instruction.
Applications:
• Scientific computing.
• Engineering simulations.
• Image & signal processing.
Advantages:
• High performance in data-parallel tasks.
• Reduces instruction fetch overhead.

✦ 7. Array Processor
• A type of processor that has multiple ALUs operating in parallel.
• Each processor element executes the same instruction on different data.
Types:
1. Attached Array Processor – connected to a host computer.
2. SIMD Array Processor – independent processor with control unit.
Applications:
• Weather forecasting.
• Real-time image processing.
• Large-scale scientific simulations.

✦ 8. Summary
• CISC: Complex instructions, hardware-focused, slower pipelines.
• RISC: Simple instructions, software-focused, faster pipelines.
• Berkeley RISC revolutionized CPU design.
• Parallel Processing improves performance using SISD, SIMD, MIMD.
• Pipelining enhances throughput with stages & hazard management.
• Vector & Array Processors enable large-scale data-parallel computations.

✦ Hand-drawn Style Diagrams to include


1. Block diagram of RISC vs CISC.
2. Instruction pipeline stages.
3. Flynn’s taxonomy classification chart.
4. Vector processor working.
5. Array processor architecture.

Unit III: Computer Arithmetic

✦ 1. Introduction
Computers perform arithmetic operations in binary number system using logic circuits and
algorithms. Arithmetic operations form the backbone of processor design. Unlike humans,
computers cannot directly handle decimal operations, so binary and floating-point
representations are used.
This unit covers addition, subtraction, multiplication, division algorithms, floating-point
arithmetic, and decimal arithmetic operations.

✦ 2. Binary Addition & Subtraction


(a) Binary Addition Rules

| A | B | SUM | Carry |
|---|---|-----|-------|
| 0 | 0 | 0   | 0     |
| 0 | 1 | 1   | 0     |
| 1 | 0 | 1   | 0     |
| 1 | 1 | 0   | 1     |

• Implemented using Half Adder and Full Adder circuits.


Example:
1011 (11)
+ 1101 (13)
=11000 (24)
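The addition table maps directly onto half/full adder logic; a minimal Python sketch of a ripple-carry adder built from the full-adder equations (an illustration, not a hardware description):

```python
def full_adder(a, b, cin):
    """One full adder: sum and carry-out from the standard Boolean equations."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, width=5):
    """Chain full adders bit by bit, carrying into the next position."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(bin(ripple_add(0b1011, 0b1101)))  # 0b11000  (11 + 13 = 24)
```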

(b) Binary Subtraction Rules

| A | B | DIFFERENCE | Borrow |
|---|---|------------|--------|
| 0 | 0 | 0          | 0      |
| 0 | 1 | 1          | 1      |
| 1 | 0 | 1          | 0      |
| 1 | 1 | 0          | 0      |

• Implemented using Half Subtractor and Full Subtractor.


Example:
1101 (13)
- 1011 (11)
= 0010 (2)

✦ 3. Multiplication Algorithms
Binary multiplication is repetitive addition.
(a) Booth’s Algorithm
• Handles both positive & negative numbers.
• Efficient for signed multiplication.
Steps:
1. Represent numbers in 2’s complement.
2. Use a multiplier register & accumulator.
3. Scan multiplier bits → perform add/subtract/shift operations.
Example: Multiply 3 × (–4).
• 3 = 0011, –4 = 1100.
• Apply Booth’s rules → result = –12 (11110100).
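The 3 × (–4) example can be traced with a bit-level sketch of Booth's algorithm in Python (a 4-bit illustration, not a hardware description; the add/subtract decisions follow the Q0/Q–1 bit pairs described above):

```python
def booth_multiply(m, q, n=4):
    """Booth's algorithm on n-bit two's-complement operands."""
    mask = (1 << n) - 1
    M, Q, A, q_1 = m & mask, q & mask, 0, 0
    for _ in range(n):
        pair = (Q & 1, q_1)
        if pair == (1, 0):
            A = (A - M) & mask          # 10 pair: A <- A - M
        elif pair == (0, 1):
            A = (A + M) & mask          # 01 pair: A <- A + M
        # arithmetic shift right of the combined A:Q:q_1
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = (A >> 1) | (A & (1 << (n - 1)))  # keep the sign bit
    result = (A << n) | Q
    if result & (1 << (2 * n - 1)):     # interpret 2n-bit result as signed
        result -= 1 << (2 * n)
    return result

print(booth_multiply(3, -4))  # -12
```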

(b) Unsigned Multiplication


• Repeated shift-and-add method.
• Similar to manual multiplication.

✦ 4. Division Algorithms
Binary division is repeated subtraction and shifting.
(a) Restoring Division Algorithm
1. Initialize dividend and divisor.
2. Shift left and subtract divisor.
3. If result < 0 → restore by adding divisor back.
4. Repeat for all bits.

(b) Non-Restoring Division Algorithm


• More efficient, avoids repeated restoration.
• Uses add/subtract decisions based on previous step.
Example: 13 ÷ 3
• Step through shift–subtract/add → result quotient = 4, remainder = 1.
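The 13 ÷ 3 example can be stepped through with a Python sketch of restoring division on unsigned operands (an illustration of the register-level algorithm, not a hardware description):

```python
def restoring_divide(dividend, divisor, n=4):
    """Restoring division on unsigned n-bit operands: returns (quotient, remainder)."""
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        # shift the combined A:Q pair left one bit
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & ((1 << n) - 1)
        A -= M                  # trial subtraction
        if A < 0:
            A += M              # result negative: restore, quotient bit stays 0
        else:
            Q |= 1              # result non-negative: quotient bit is 1
    return Q, A

print(restoring_divide(13, 3))  # (4, 1)
```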

✦ 5. Floating-Point Arithmetic
Used to represent very large or very small numbers.
(a) IEEE 754 Standard
• 32-bit (single precision) format:
o 1 bit → sign
o 8 bits → exponent
o 23 bits → mantissa
Representation:
Value = (–1)^sign × (1.mantissa) × 2^(exponent–127)
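The field split and the value formula can be checked in Python by unpacking the raw bits of a real 32-bit float with the standard struct module:

```python
import struct

def decode_float32(x):
    """Split a float's IEEE 754 single-precision bits into sign/exponent/mantissa."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    # reconstruct the value from the fields (normalized numbers only)
    value = (-1) ** sign * (1 + mantissa / 2**23) * 2 ** (exponent - 127)
    return sign, exponent, mantissa, value

print(decode_float32(-6.5))  # -6.5 = -1.625 x 2^2, so exponent field = 2 + 127 = 129
```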

(b) Floating-Point Addition/Subtraction


1. Align exponents.
2. Add/subtract mantissas.
3. Normalize the result.
4. Round if necessary.
(c) Floating-Point Multiplication/Division
• Multiply/divide mantissas.
• Add/subtract exponents.
• Normalize result.

✦ 6. Decimal Arithmetic
Computers work in binary, but many applications need decimal (BCD) arithmetic.
(a) Decimal Addition in BCD
• If sum > 9, add 6 (0110) to adjust.
Example:
0101 (5)
+ 1001 (9)
= 1110 → invalid BCD → add 0110 → 0001 0100 (14)
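The add-6 correction can be sketched for a single BCD digit in Python (digit-at-a-time only; a full multi-digit BCD adder would chain the carry):

```python
def bcd_add_digit(a, b, carry_in=0):
    """Add two BCD digits; apply the 0110 correction when the raw sum exceeds 9."""
    s = a + b + carry_in
    if s > 9:
        s += 6                          # 0110 correction
    return s & 0xF, (s >> 4) & 1        # (result digit, carry out)

print(bcd_add_digit(5, 9))  # (4, 1) -> BCD 0001 0100 = 14
```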

(b) Decimal Subtraction in BCD


• Uses 9’s complement or 10’s complement methods.

✦ 7. Decimal Arithmetic Unit (DAU)


• Special hardware unit designed to handle decimal arithmetic directly.
• Used in financial/business computers.

✦ 8. Summary
1. Addition & Subtraction implemented using adder–subtractor circuits.
2. Booth’s Algorithm efficient for signed multiplication.
3. Restoring & Non-Restoring Division methods handle binary division.
4. Floating-point arithmetic (IEEE 754) supports large/small values.
5. Decimal arithmetic & DAU are essential for real-world applications.

✦ 9. Diagrams to Draw in Notebook


1. Half adder & full adder circuits.
2. Flowchart of Booth’s Algorithm.
3. Restoring vs Non-Restoring Division.
4. IEEE 754 single precision format.

Unit IV: Input/Output Organization

✦ 1. Introduction
The performance of a computer depends not only on its CPU and memory but also on how
efficiently it communicates with Input/Output (I/O) devices.
I/O devices (like keyboards, monitors, disks, printers) are slower than the CPU, so proper I/O
organization ensures smooth data transfer without wasting CPU cycles.
This unit explains I/O interfaces, asynchronous transfer, different modes of I/O, interrupts,
DMA, and Input–Output processors (IOP).

✦ 2. Input/Output Interface
An I/O interface acts as a bridge between the CPU and I/O devices.
• CPU and devices work at different speeds → interface synchronizes them.
• Provides control, status, and data registers.
• Manages communication through ports (serial, parallel, USB).
Functions of I/O Interface:
1. Data buffering between CPU and device.
2. Device status reporting.
3. Error detection (parity, checksum).
4. Control signal handling (handshaking).
Diagram to draw:
CPU ↔ I/O Interface ↔ Device

✦ 3. Asynchronous Data Transfer


I/O devices are much slower than CPU → synchronization is needed.
(a) Synchronous Transfer
• Uses a common clock.
• Faster but requires precise timing.
(b) Asynchronous Transfer
• No common clock; communication uses handshaking signals.


• Control signals:
o Ready/Busy signal from device.
o Acknowledge signal from CPU.
• Flexible and widely used.

✦ 4. Modes of Data Transfer


There are different ways of transferring data between CPU and I/O devices:
(a) Programmed I/O
• CPU actively waits for device to complete operation.
• Inefficient → wastes CPU cycles.
• Suitable for simple/slow devices (like keyboard input).
Steps:
1. CPU checks device status register.
2. If ready → CPU reads/writes data.
3. Repeat for next operation.

(b) Interrupt-Initiated I/O


• Device notifies CPU when it is ready via interrupt signal.
• CPU can perform other tasks instead of waiting.
Advantages:
• Better CPU utilization.
• Faster response.
Diagram to draw: Flow of interrupt request → CPU → ISR → Resume program.

(c) Priority Interrupts


• When multiple devices request service, priority decides which device is handled first.
• Priority can be fixed or dynamic.
Methods:
1. Software polling – CPU checks devices sequentially.
2. Daisy chaining – devices connected in series; the first has highest priority.
3. Vectored interrupts – device sends ISR address directly.

(d) Direct Memory Access (DMA)


• Allows device to transfer data directly to memory without CPU intervention.
• CPU only initiates the transfer, DMA controller manages it.
Steps in DMA:
1. CPU sends DMA request to controller.
2. DMA controller takes control of bus.
3. Data transferred directly to memory.
4. DMA sends interrupt to CPU after completion.
Advantages:
• High-speed transfer (used in disk I/O, graphics).
• Frees CPU from data transfer tasks.

✦ 5. Input–Output Processor (IOP)


An IOP is a special processor dedicated to handling I/O operations.
• Offloads CPU from I/O management.
• Can execute its own I/O instruction set.
• Works in parallel with CPU.
Features:
• Independent control unit.
• Direct communication with memory and devices.
• Efficient for systems with heavy I/O (mainframes, servers).
Diagram: CPU ↔ Memory ↔ IOP ↔ Devices.

✦ 6. Interrupt System
(a) Types of Interrupts
1. Hardware Interrupts – triggered by I/O devices.
2. Software Interrupts – generated by program instructions.
3. Maskable Interrupts – can be ignored.
4. Non-Maskable Interrupts (NMI) – cannot be ignored (e.g., power failure).

(b) Interrupt Cycle


1. CPU completes current instruction.
2. Saves PC & status.
3. Jumps to ISR (Interrupt Service Routine).
4. Executes ISR.
5. Restores state & resumes program.

✦ 7. Comparison of Data Transfer Methods

| Mode           | CPU Role             | Efficiency | Use Case                 |
|----------------|----------------------|------------|--------------------------|
| Programmed I/O | Actively waits       | Low        | Simple devices           |
| Interrupt I/O  | Responds to requests | Medium     | Keyboard, mouse, printer |
| DMA            | Initiates only       | High       | Disk, graphics, network  |
| IOP            | Almost none          | Very high  | Mainframes, servers      |

✦ 8. Real-World Examples
• Keyboard Input: Interrupt-driven I/O.
• Hard Disk Data Transfer: DMA for high speed.
• Supercomputer I/O: IOP for parallel handling of multiple devices.

✦ 9. Summary
1. I/O interface is essential for CPU–device communication.

2. Asynchronous transfer uses handshaking, while synchronous relies on a common clock.
3. Programmed I/O wastes CPU cycles; Interrupt I/O improves efficiency.
4. DMA enables direct memory transfer, ideal for large data.
5. Priority interrupts decide service order.
6. IOP handles I/O independently, improving performance in large systems.

✦ 10. Diagrams to Sketch in Notebook


1. I/O interface block diagram.
2. Handshaking signals for asynchronous transfer.
3. Programmed vs Interrupt vs DMA flow.
4. Daisy chain priority interrupts.
5. CPU + IOP + Device architecture.

Unit V: 8085 to Pentium Processors & Assembly Language Programming

✦ 1. Introduction
Computer processors have evolved from simple 8-bit microprocessors (8085) to today’s
superscalar, multi-core processors (Pentium and beyond). Alongside hardware evolution,
assembly language was developed to allow programmers to interact directly with hardware.
This unit covers:
• Evolution of Intel processors (8085 → Pentium).
• Assembly language basics, assemblers, and macros.
• Programming constructs (loops, arithmetic & logic).

✦ 2. Overview of Intel Processors


(a) Intel 8085 (8-bit microprocessor)
• Released in 1976.
• 8-bit data bus, 16-bit address bus (64 KB memory).
• Instruction set included arithmetic, logic, control, branching.
• Simple architecture → foundation for microprocessor study.

(b) Intel 8086 (16-bit)


• Intel's first 16-bit microprocessor (1978).
• 1 MB addressable memory space.
• Segmented memory model (CS, DS, SS, ES registers).
• Instruction queue for pipelining.

(c) Intel 80286 (286)


• 24-bit address bus (16 MB memory).
• Introduced protected mode → multitasking OS support.
• Improved performance over 8086.

(d) Intel 80386 (386)


• 32-bit processor.
• 4 GB address space.
• Paging & virtual memory support.
• Basis for modern operating systems.

(e) Intel 80486 (486)


• Integrated floating-point unit (FPU).
• On-chip cache memory.
• Tightly pipelined → approaches one instruction per clock cycle (superscalar execution arrived with the Pentium).

(f) Intel Pentium (1993)


• Superscalar architecture.
• Dual pipelines (U & V pipe).
• 64-bit data bus.
• Higher clock speeds (up to hundreds of MHz).

✦ 3. Assembly Language Programming


(a) What is Assembly Language?
• Low-level programming language close to machine code.
• Uses mnemonics for instructions (e.g., MOV, ADD, SUB).
• Requires an assembler to translate into machine code.

(b) Assembler & Levels of Instructions


1. Assembler: Converts assembly code → object code.
2. Levels of Instructions:
o Data transfer: MOV AX, BX
o Arithmetic: ADD AX, BX
o Logic: AND AL, BL
o Control/Branching: JMP, CALL, RET
o I/O: IN, OUT

(c) Use of Macros in Assembly


• Macro = Group of instructions under a single name.
• Expands inline during assembly (not like functions).
Example:
SUM MACRO X, Y
MOV AX, X
ADD AX, Y
ENDM
• Makes programs modular, easy to reuse.

✦ 4. Program Loops in Assembly


Loops are achieved using conditional & unconditional jumps.
Example: Sum 10 numbers
MOV CX, 10 ; Counter
MOV AX, 0 ; Initialize sum
LOOP_START:
ADD AX, [SI] ; add current element
ADD SI, 2 ; advance one word (16-bit elements)
LOOP LOOP_START
• Uses CX register as loop counter.
• LOOP instruction automatically decrements CX and repeats until CX = 0.

✦ 5. Arithmetic Programming in Assembly


(a) Addition Example
MOV AX, 05H
MOV BX, 07H
ADD AX, BX ; AX = AX + BX
(b) Subtraction Example
MOV AX, 09H
MOV BX, 03H
SUB AX, BX ; AX = AX – BX
(c) Multiplication Example
MOV AL, 06H
MOV BL, 02H
MUL BL ; AX = AL × BL
(d) Division Example
MOV AX, 0010H
MOV BL, 02H
DIV BL ; AL = Quotient, AH = Remainder
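The register conventions of MUL/DIV above can be mirrored in Python (unsigned 8-bit forms only; real DIV raises a divide-error exception when the quotient does not fit in AL, which this sketch ignores):

```python
def mul8(al, bl):
    """8086 MUL BL semantics: AX = AL * BL (unsigned)."""
    return (al * bl) & 0xFFFF

def div8(ax, bl):
    """8086 DIV BL semantics: AL = quotient, AH = remainder (unsigned).
    Note: real DIV faults if the quotient exceeds 0FFH; ignored here."""
    return ax // bl, ax % bl        # (AL, AH)

print(mul8(0x06, 0x02))             # 12  (AX after MUL BL)
print(div8(0x0010, 0x02))           # (8, 0)  (AL, AH after DIV BL)
```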

✦ 6. Logic Programming in Assembly


(a) AND Example
MOV AL, 0F0H
AND AL, 0FH ; Result = 0000 0000
(b) OR Example
MOV AL, 0A0H
OR AL, 0FH ; Result = 0AFH
(c) XOR Example
MOV AL, 0FFH
XOR AL, AL ; Result = 00H (clear register)
(d) NOT Example
MOV AL, 0F0H
NOT AL ; Result = 0FH

✦ 7. Evolution Summary

| Processor | Year | Data Width | Memory | Key Features              |
|-----------|------|------------|--------|---------------------------|
| 8085      | 1976 | 8-bit      | 64 KB  | Basic microprocessor      |
| 8086      | 1978 | 16-bit     | 1 MB   | Segmented memory          |
| 80286     | 1982 | 16-bit     | 16 MB  | Protected mode            |
| 80386     | 1985 | 32-bit     | 4 GB   | Paging, VM support        |
| 80486     | 1989 | 32-bit     | 4 GB   | FPU, cache, pipelining    |
| Pentium   | 1993 | 32-bit     | 4 GB+  | Dual pipelines, 64-bit bus |

✦ 8. Real-World Relevance
• 8085/8086 still used in microcontroller training kits.
• Pentium & successors → foundation for modern Intel Core CPUs.
• Assembly language critical in embedded systems, OS development, device drivers.

✦ 9. Summary
1. 8085 → Pentium shows evolution from simple 8-bit to superscalar processors.
2. Assembly language bridges machine code and high-level programming.
3. Macros simplify repetitive code.
4. Loops, arithmetic, and logic operations can be programmed efficiently at low-level.
5. These foundations are essential for computer architecture & system programming.

✦ 10. Diagrams to Draw


1. Evolution timeline of Intel processors.
2. 8085 block diagram (ALU, registers, control, buses).
3. Pentium dual pipeline (U & V pipe).

