Computer Organization & Architecture -
10 Marks Answers
1. Amdahl's Law
Amdahl's Law is used to find the maximum improvement in system performance when only
a part of the system is improved. It is especially useful in parallel computing to estimate the
potential speedup using multiple processors.
Formula:
Speedup = 1 / ((1 - P) + (P / N))
Where,
P = fraction of the program that can be parallelized
N = number of processors
Explanation:
- If part of a program cannot be parallelized, the speedup is limited.
- As the number of processors increases, the speedup approaches 1 / (1 - P).
- Demonstrates diminishing returns of adding processors.
Example:
If 40% of a program can be parallelized (P=0.4), and we use 4 processors (N=4),
Speedup = 1 / ((1 - 0.4) + (0.4 / 4)) = 1 / (0.6 + 0.1) = 1 / 0.7 ≈ 1.43 times faster.
Conclusion:
Amdahl’s Law helps in understanding the limits of parallelization in improving
performance.
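The formula and worked example above can be checked with a short sketch (the function name is illustrative):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: overall speedup when a fraction p of the work
    is parallelized across n processors."""
    return 1.0 / ((1.0 - p) + (p / n))

# The worked example above: P = 0.4, N = 4
print(round(amdahl_speedup(0.4, 4), 2))          # 1.43

# As N grows, speedup approaches the 1 / (1 - P) limit (here about 1.667)
print(round(amdahl_speedup(0.4, 1_000_000), 3))  # 1.667
```

Note how even a million processors cannot push the speedup past 1 / (1 - P), which is the diminishing-returns point made above.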
2. Restoring and Non-Restoring Division
Restoring and Non-Restoring Division are methods used for binary division in computer
arithmetic.
Restoring Division:
- At each step, the partial remainder is shifted left (bringing in the next dividend bit) and the divisor is subtracted from it.
- If the result is non-negative, the quotient bit is set to 1.
- If the result is negative, the previous remainder is restored by adding the divisor back, and the quotient bit is set to 0.
- Continues until all bits of the dividend are processed.
Non-Restoring Division:
- Shifts the partial remainder left, then adds or subtracts the divisor depending on the sign of the previous remainder, without restoring it immediately.
- If the remainder is negative, the divisor is added in the next step; if non-negative, it is subtracted.
- The quotient bit is set to 1 for a non-negative result and 0 for a negative one.
- More efficient because the separate restore step is avoided; a single final correction (adding the divisor back) is applied if the last remainder is negative.
Comparison:
- Non-Restoring is faster because it performs one add/subtract per step instead of up to two.
- Both methods produce the same final quotient and remainder (after the final correction in Non-Restoring Division).
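The restoring algorithm described above can be sketched as follows (an illustrative software model, not hardware-exact):

```python
def restoring_divide(dividend, divisor, n_bits=8):
    """Restoring division on unsigned n-bit integers.
    Returns (quotient, remainder)."""
    remainder = 0
    quotient = 0
    for i in range(n_bits - 1, -1, -1):
        # Shift left, bringing in the next dividend bit
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor          # trial subtraction
        if remainder < 0:
            remainder += divisor      # restore the previous remainder
            quotient = (quotient << 1) | 0
        else:
            quotient = (quotient << 1) | 1
    return quotient, remainder

print(restoring_divide(29, 5))   # (5, 4), since 29 = 5*5 + 4
```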
3. IEEE 754 Single and Double Precision Format
IEEE 754 Standard defines formats for floating-point representation.
Single Precision (32 bits):
- Sign bit: 1 bit
- Exponent: 8 bits (bias 127)
- Mantissa (Fraction): 23 bits
Format: [Sign][Exponent][Mantissa]
Double Precision (64 bits):
- Sign bit: 1 bit
- Exponent: 11 bits (bias 1023)
- Mantissa (Fraction): 52 bits
Format: [Sign][Exponent][Mantissa]
Details:
- Number = (-1)^Sign × 1.Mantissa × 2^(Exponent - Bias)
- Allows representation of very large and very small numbers.
Example:
- Single precision bias = 127 means exponent 130 represents 130 - 127 = 3.
- Double precision bias = 1023 means exponent 1030 represents 1030 - 1023 = 7.
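The field layout above can be inspected directly using Python's standard `struct` module; the helper name is illustrative:

```python
import struct

def decompose_single(x):
    """Split a float into its IEEE 754 single-precision fields.
    Returns (sign, biased_exponent, fraction_bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF       # 8-bit biased exponent
    fraction = bits & 0x7FFFFF           # 23-bit fraction field
    return sign, exponent, fraction

# 1.0 is stored with biased exponent = 127 (the bias itself) and zero fraction
print(decompose_single(1.0))    # (0, 127, 0)

# -8.0 = (-1)^1 x 1.0 x 2^3, so biased exponent = 127 + 3 = 130
print(decompose_single(-8.0))   # (1, 130, 0)
```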
4. Memory Hierarchy & Characteristics of Memory
Memory Hierarchy arranges different types of memory based on speed, cost, and size:
Levels (from fastest to slowest):
1. Registers (fastest, smallest, most expensive)
2. Cache Memory (small, fast, expensive)
3. Main Memory (RAM) (larger, slower, cheaper)
4. Secondary Storage (HDD, SSD) (largest, slowest, cheapest)
5. Tertiary Storage (tapes, optical discs)
Characteristics of Memory:
- Speed: Access time to read/write data.
- Cost: Price per bit of memory.
- Size: Amount of data it can store.
- Volatility: Whether data is lost when power is off (volatile/non-volatile).
- Access Method: How data is accessed (random/sequential/direct).
5. Cache Mapping Technique
Cache Mapping Techniques determine how main memory blocks are placed in cache.
Types:
1. Direct Mapping:
- Each main memory block maps to exactly one cache line.
- Simple and fast but can lead to conflicts.
2. Fully Associative Mapping:
- Any block can be placed in any cache line.
- Flexible but requires searching entire cache (slower).
3. Set Associative Mapping:
- Combination of direct and fully associative.
- Cache divided into sets; a block maps to one set but can be placed anywhere within the set.
- Balances speed and flexibility.
Conclusion:
Choosing mapping technique impacts cache performance and hit ratio.
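The direct mapping scheme can be sketched by splitting a byte address into tag, line index, and offset (the cache geometry below is an assumed example, not from the text):

```python
def direct_map(address, block_size=16, num_lines=64):
    """For a direct-mapped cache, split a byte address into
    (tag, line index, byte offset). block_size and num_lines
    are illustrative values, assumed to be powers of two."""
    offset = address % block_size
    block_number = address // block_size
    index = block_number % num_lines      # which cache line
    tag = block_number // num_lines       # identifies the block in that line
    return tag, index, offset

# Two addresses exactly num_lines * block_size (1024 bytes) apart map to
# the same line with different tags -> the conflict noted above.
print(direct_map(0))      # (0, 0, 0)
print(direct_map(1024))   # (1, 0, 0)
```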
6. Numerical on Memory
Numerical problems on memory usually involve calculations of:
- Memory size (bytes, KB, MB, GB)
- Addressing capability (number of address lines)
- Cache hits and misses
- Access time calculations
Example:
If a memory has 16 address lines, total memory size = 2^16 = 65,536 locations.
If each location stores 1 byte, total memory = 64 KB.
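The address-line calculation above is a one-liner:

```python
# 2^(address lines) addressable locations; with 1 byte per location
# this is also the total memory in bytes.
address_lines = 16
locations = 2 ** address_lines
print(locations)            # 65536
print(locations // 1024)    # 64  -> 64 KB, matching the example
```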
7. Data Transfer Technique
Data Transfer Techniques are methods to move data between CPU, memory, and I/O
devices.
Types:
1. Programmed I/O (Polling):
- CPU actively checks device status and transfers data.
- Simple but inefficient.
2. Interrupt-Driven I/O:
- Device interrupts CPU when ready.
- CPU handles interrupt and transfers data.
- More efficient than polling.
3. Direct Memory Access (DMA):
- Special DMA controller transfers data without CPU intervention.
- CPU starts transfer and continues other tasks.
- Best for large data transfer.
Comparison:
Technique            | CPU Involvement              | Efficiency
Programmed I/O       | CPU busy-waits on status     | Low
Interrupt-Driven I/O | CPU acts only on interrupt   | Medium
DMA                  | CPU only initiates transfer  | High (best for large blocks)
8. Direct Access Memory
Direct access is an access method in which data is reached by moving directly to its address (or its vicinity) rather than reading through all preceding data in sequence.
Characteristics:
- Faster than sequential access.
- Used in devices like Hard Disk, SSD, CDs.
- For disks, access time depends on seek time, rotational latency, and transfer time.
Comparison with Sequential Access:
- Direct access allows random jumps to data location.
- Sequential requires reading data in order.
Applications include file systems, databases, and OS memory management.
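A typical disk access-time calculation can be sketched as follows; the seek time, RPM, and transfer rate used are illustrative assumptions, not figures from the text:

```python
def disk_access_time_ms(seek_ms, rpm, transfer_rate_mb_s, bytes_read):
    """Average disk read time: seek + average rotational latency
    (half a revolution) + transfer time. Inputs are assumed values."""
    rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    transfer_ms = bytes_read / (transfer_rate_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_latency_ms + transfer_ms

# e.g. 9 ms average seek, 7200 RPM, 100 MB/s, reading a 4 KB sector
print(round(disk_access_time_ms(9, 7200, 100, 4096), 2))   # 13.21
```

Note that seek and rotational latency dominate for small reads, which is why random (direct) access to a disk is far slower than its sequential transfer rate suggests.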
9. I/O Techniques
I/O Techniques manage data transfer between CPU, memory, and I/O devices.
Types:
1. Programmed I/O:
- CPU waits and polls device status.
2. Interrupt-Driven I/O:
- Device interrupts CPU when ready.
3. Direct Memory Access (DMA):
- DMA controller handles data transfer independently.
Each technique varies in CPU involvement and efficiency.
10. Flynn's Classification
Flynn’s Classification categorizes computer architectures based on instruction and data
streams:
1. SISD (Single Instruction Single Data):
- Traditional sequential processors.
2. SIMD (Single Instruction Multiple Data):
- One instruction operates on multiple data elements simultaneously.
3. MISD (Multiple Instruction Single Data):
- Multiple instructions operate on the same data (rare).
4. MIMD (Multiple Instruction Multiple Data):
- Multiple processors execute different instructions on different data.
Used to understand parallelism in architectures.
11. Six Stage Instruction Pipeline
Six-Stage Instruction Pipeline breaks instruction execution into six stages:
1. Instruction Fetch (IF)
2. Instruction Decode (ID)
3. Operand Fetch (OF)
4. Execution (EX)
5. Memory Access (MEM)
6. Write Back (WB)
Each stage works on different instructions concurrently, increasing throughput.
Advantages:
- Increased CPU performance
- Efficient resource use
Hazards include structural, data, and control hazards.
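The throughput gain from pipelining can be quantified with the standard ideal-pipeline timing model (a sketch assuming no hazard stalls):

```python
def pipeline_cycles(n_instructions, n_stages=6):
    """Cycles to complete n instructions on an ideal k-stage pipeline:
    k cycles to fill, then one instruction completes per cycle."""
    return n_stages + (n_instructions - 1)

n = 100
ideal = pipeline_cycles(n)        # 6 + 99 = 105 cycles
unpipelined = n * 6               # 600 cycles without overlap
print(ideal, round(unpipelined / ideal, 2))   # 105 5.71
```

For large instruction counts the speedup approaches the number of stages (6 here); the hazards listed above reduce this in practice by inserting stall cycles.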