MUNEEBA NAWAB
41056
BSSE 3RD SEM
COMPUTER ORGANIZATION &
ASSEMBLY LANGUAGE
ASSIGNMENT: 02
SUBMITTED TO: MAM AIMA AZMAT
• You can be creative – add a logo or sketch for NeuroBolt-X, or a mini-story
about the Mars mission.
NeuroBolt-X on Mars:
The sun rose slowly over the red land of Mars. NeuroBolt-X, a smart robot from Earth, moved
carefully across the rocky ground. Its wheels left marks in the dust.
NeuroBolt-X was not just any robot. It could think and learn by itself. It was sent to find new
things on Mars.
As it reached the top of a small hill, the robot stopped. Its sensors found something strange
under the ground. It wasn’t a rock. It was smooth and shiny—like metal.
NeuroBolt-X took a photo and sent a message to Earth:
“I found something strange. What should I do?”
Mars was quiet… but maybe not alone.
Part A: Cache Memory Management
1. In your own words, explain why cache memory is critical in fast systems like
NeuroBolt-X.
2. Identify and explain any 3 problems that can occur if cache memory is not managed
properly (e.g., cache miss, replacement confusion, write delay).
3. Which replacement policy will you choose for your system? (LRU, FIFO, Random,
etc.)
❖ Justify your choice based on space mission conditions like limited energy, speed,
and memory space.
Part A: Answers
• Why is Cache Memory Important in Fast Systems like NeuroBolt-X?
Answer:
Cache memory plays a big role in fast computers because it keeps the most used data close to
the processor. This means the processor doesn't have to waste time looking for data in the
main memory, which is slower.
For a mission like NeuroBolt-X on Mars, every moment is important, so the system needs to
work quickly and smoothly.
In NeuroBolt-X:
• Cache makes data transfer faster by storing important data nearby.
• It saves power by avoiding long searches in slow memory.
In short, cache acts like the brain’s shortcut: it helps the processor get the needed data quickly
and easily.
• Identify and explain any 3 problems that can occur if cache memory is not
managed properly (e.g., cache miss, replacement confusion, write delay).
Answer:
Problems That Can Happen If Cache Is Not Handled Properly
1. Cache Miss
This happens when the processor checks the cache for data, but it’s not there. As a result,
the system has to get the data from the main memory, which is slower. This causes the
system to slow down.
2. Wrong Data Replacement
If the system doesn’t use a good rule to choose which data to remove when the cache is full,
it might delete important data by mistake. This leads to confusion and poor performance.
3. Delay in Writing Back Data
When data is changed in the cache, it should also be updated in the main memory. If this
step is skipped or delayed, the processor may use old or incorrect data, which can cause
errors.
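The cost of a cache miss (problem 1 above) can be shown with a tiny average-memory-access-time calculation. The hit time and miss penalty below are assumed example numbers for illustration, not NeuroBolt-X specifications:

```python
# Illustrative sketch: how the miss rate affects average memory access time (AMAT).
# The timing numbers are assumptions chosen only for this example.
HIT_TIME = 1        # ns: time to read data found in the cache
MISS_PENALTY = 100  # ns: extra time to fetch the data from main memory

def amat(miss_rate):
    """Average memory access time = hit time + miss rate x miss penalty."""
    return HIT_TIME + miss_rate * MISS_PENALTY

print(amat(0.05))  # well-managed cache (5% misses): 6.0 ns on average
print(amat(0.50))  # poorly managed cache (50% misses): 51.0 ns on average
```

Even with these rough numbers, a badly managed cache makes the average access almost ten times slower, which is why the problems above matter.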
• Which replacement policy will you choose for your system? (LRU, FIFO, Random, etc.)
• Justify your choice based on space mission conditions like limited energy,
speed, and memory space.
Answer:
Replacement Policy: LRU (Least Recently Used)
Why I Choose LRU:
• In space missions, we have limited speed, memory, and energy.
• LRU removes the data that hasn’t been used for the longest time.
• It helps keep important and recently used data in the cache.
• This reduces wasteful replacements and keeps things running fast and smooth.
• It avoids storing old and unused data, which is perfect for space missions where every
second and byte matters.
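The LRU policy described above can be sketched in a few lines of Python. This is an illustrative model, not flight software; the `LRUCache` class and its capacity are invented for the example:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU model: evicts the least recently used block when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # keys ordered from least to most recent

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block
        self.blocks[block] = True
        return "miss"

cache = LRUCache(capacity=3)
for b in [1, 2, 3, 1, 4, 2]:
    print(b, cache.access(b))
```

Notice that block 1 is a hit on its second access because LRU kept it, while the oldest unused blocks (2, then 3) are the ones evicted.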
FLOWCHART:
START
↓
Check mission needs: speed, energy, and memory limits
↓
Choose replacement policy:
• Random → not suitable: replaces unpredictable data
• FIFO → not suitable: may replace recently used data
• LRU → best choice: removes the least recently used data
↓
Retain frequently used data
↓
Improved speed and saved energy
↓
END
Part B: Elements of Cache Design
1. List and explain 5 important elements of cache design. (Hint: block size, mapping
method, associativity, replacement policy, write policy).
2. Choose one cache mapping method (Direct, Associative, or Set-Associative) and:
• Explain how it works.
• Draw a labeled diagram showing memory blocks being mapped into cache.
3. Create a simple memory access example (5–6 addresses of your choice) and show:
• How data enters the cache.
• What happens when a block is replaced.
• Support your answer using a table or diagram only (no code)
Part B: Answers
• Five Important Elements of Cache Design
Block Size:
The amount of data (in bytes or words) transferred between cache and main memory in one
operation. Larger blocks exploit spatial locality but increase the miss penalty.
Mapping Method:
Determines how memory blocks are placed in cache lines. Main types are Direct Mapping,
Associative Mapping, and Set-Associative Mapping.
Associativity:
Refers to how many places a block from memory can be placed in the cache. It affects flexibility
and cache hit rate. More associativity reduces conflict misses.
Replacement Policy:
Decides which cache block to replace when the cache is full. Common policies: LRU (Least
Recently Used), FIFO (First In First Out), Random.
Write Policy:
Defines how data is written to memory when a write operation occurs. Two types: Write-
through (updates both cache and memory) and Write-back (updates memory only when block
is replaced).
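The difference between the two write policies can be shown with a small Python sketch. The classes below are simplified models invented for this example; real caches track a dirty bit per block:

```python
class WriteThroughCache:
    """Write-through model: every write updates both cache and main memory."""
    def __init__(self):
        self.cache = {}
        self.memory_writes = 0

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory_writes += 1  # main memory updated on every single write

class WriteBackCache:
    """Write-back model: writes stay in cache until the block is evicted."""
    def __init__(self):
        self.cache = {}
        self.memory_writes = 0

    def write(self, addr, value):
        self.cache[addr] = value  # block is now "dirty"; memory update deferred

    def evict(self, addr):
        if addr in self.cache:
            del self.cache[addr]
            self.memory_writes += 1  # dirty block written back to memory once

wt, wb = WriteThroughCache(), WriteBackCache()
for v in range(5):       # five writes to the same address
    wt.write(100, v)
    wb.write(100, v)
wb.evict(100)
print(wt.memory_writes)  # 5 memory writes (write-through)
print(wb.memory_writes)  # 1 memory write (write-back)
```

Write-back saves memory traffic (and energy) when the same block is written many times, which is why the choice of write policy matters for a mission like NeuroBolt-X.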
• Choose one cache mapping method
Direct Mapping:
• How It Works
Each memory block maps to exactly one cache line.
Formula:
Cache Line Number = (Block Address) MOD (Number of Cache Lines)
Diagram: Direct Mapping
Let’s suppose:
• Cache has 4 lines.
• Memory has 8 blocks (Block 0 to Block 7).
Memory Access Sequence: We’ll access these blocks in order:
Block 0 → Block 3 → Block 4 → Block 1 → Block 7 → Block 0
Table: Direct Mapping – Cache Status Example
Memory Block | Cache Line (Block % 4) | Cache Status (Before) | Action Taken | Cache Status (After)
0 | 0 | Empty | Load Block 0 | Line 0 ← Block 0
3 | 3 | Empty | Load Block 3 | Line 3 ← Block 3
4 | 0 | Block 0 | Replace Block 0 | Line 0 ← Block 4
1 | 1 | Empty | Load Block 1 | Line 1 ← Block 1
7 | 3 | Block 3 | Replace Block 3 | Line 3 ← Block 7
0 | 0 | Block 4 | Replace Block 4 | Line 0 ← Block 0
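The steps in the table above can be checked with a short Python sketch of the Block MOD 4 rule (an illustrative simulation only, using the same 4-line cache and access sequence):

```python
NUM_LINES = 4
cache = [None] * NUM_LINES  # each cache line holds one block number (None = empty)

for block in [0, 3, 4, 1, 7, 0]:   # same access sequence as the table
    line = block % NUM_LINES       # direct mapping: Cache Line = Block MOD 4
    if cache[line] == block:
        action = "hit"
    elif cache[line] is None:
        action = f"load block {block} into line {line}"
    else:
        action = f"replace block {cache[line]} with block {block} in line {line}"
    cache[line] = block
    print(block, line, action)

print(cache)  # final contents: [0, 1, None, 7]
```

The final state matches the table: Line 0 holds Block 0 again, Line 1 holds Block 1, Line 2 stays empty, and Line 3 holds Block 7.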
• Create a simple memory access example (5–6 addresses of your choice) and show:
• How data enters the cache.
Memory Access Example
• Let’s assume we have a cache with 4 lines: 0, 1, 2, 3.
• And we are accessing memory addresses: 12, 8, 4, 12, 16, 20
• To find the cache line: Address MOD 4
Step | Memory Address | Cache Line (Address % 4) | Hit/Miss | Action
1 | 12 | 0 | Miss | Load 12 into Line 0
2 | 8 | 0 | Miss | Replace 12 with 8
3 | 4 | 0 | Miss | Replace 8 with 4
4 | 12 | 0 | Miss | Replace 4 with 12
5 | 16 | 0 | Miss | Replace 12 with 16
6 | 20 | 0 | Miss | Replace 16 with 20
All addresses map to Cache Line 0, since:
• 12 % 4 = 0
• 8 % 4 = 0
• 4 % 4 = 0
• 16 % 4 = 0
• 20 % 4 = 0
So each new access replaces the previous one in Line 0.
Diagram: