Background
• A program must be brought (from disk) into memory and placed within a process for it to run
• Main memory and registers are the only storage the CPU can access directly
• Register access takes one CPU clock cycle (or less)
• A cache sits between main memory and the CPU registers
• Protection of memory is required to ensure correct operation
Storage Hierarchy
• Storage systems are organized in a hierarchy by
– Speed
– Cost
– Volatility
• As we move down the hierarchy, cost per bit decreases and access time increases
• The storage systems above electronic disk are volatile, whereas those below it are non-volatile
• Caching – copying information into a faster storage system; main memory can be viewed as a cache for secondary storage (a sketch follows below)
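To make the caching idea concrete, here is a minimal C sketch (not from the slides; the block counts, block size, and the read_block helper are illustrative assumptions) in which main memory holds copies of secondary-storage blocks and serves repeat reads from the faster copy:

#include <stdio.h>
#include <string.h>

#define DISK_BLOCKS 8
#define BLOCK_SIZE  16

static char disk[DISK_BLOCKS][BLOCK_SIZE];    /* models slow secondary storage   */
static char memory[DISK_BLOCKS][BLOCK_SIZE];  /* faster copy kept in main memory */
static int  cached[DISK_BLOCKS];              /* 1 if block already copied in    */

/* Read a block: serve it from main memory, copying it in first if needed. */
const char *read_block(int b) {
    if (!cached[b]) {                         /* not yet in memory: fetch it */
        memcpy(memory[b], disk[b], BLOCK_SIZE);
        cached[b] = 1;
    }
    return memory[b];                         /* later reads are served fast */
}

int main(void) {
    strcpy(disk[3], "hello");
    puts(read_block(3));   /* first read: copied from "disk" into memory */
    puts(read_block(3));   /* second read: served directly from memory   */
    return 0;
}

The first read of a block pays the slow-storage cost; later reads of the same block are served from memory, which is the caching effect described above.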
Memory hierarchy
• A large memory with low access time is required
• A single memory cannot serve this purpose, so a hierarchical memory is used
• There is a tradeoff among access time, size, and per-unit cost
• Down the hierarchy, access time increases and size increases
• Video link
– https://www.youtube.com/watch?v=n0evkeNZKM4
Storage-Device Hierarchy
How cache memory works
• Prefetch data into the cache before the processor needs it
• Cache miss
– The request is not satisfied by the cache
– A failed attempt to read or write a piece of data in the cache
– Increases access time
• Cache hit
– The request is satisfied by the cache
– Data is transferred at maximum speed (a hit/miss counting sketch follows this list)
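As a rough illustration of hits and misses, the following C sketch (the 4-line cache and the reference stream are assumptions, not figures from the slides) runs a stream of block addresses through a tiny direct-mapped cache and counts how many requests the cache satisfies:

#include <stdio.h>

#define LINES 4

int main(void) {
    unsigned tag[LINES];
    int valid[LINES] = {0};
    unsigned refs[] = {0, 1, 0, 1, 4, 0, 1};      /* example block address stream   */
    int n = sizeof refs / sizeof refs[0];
    int hits = 0, misses = 0;

    for (int i = 0; i < n; i++) {
        unsigned block = refs[i];
        unsigned line  = block % LINES;           /* direct mapping: block mod lines */
        if (valid[line] && tag[line] == block) {
            hits++;                               /* cache hit: full-speed access    */
        } else {
            misses++;                             /* cache miss: slower level used   */
            valid[line] = 1;
            tag[line]   = block;                  /* block brought into the cache    */
        }
    }
    printf("hits=%d misses=%d\n", hits, misses);  /* prints: hits=3 misses=4 */
    return 0;
}

Each miss forces a slower access to the next level and refills the line; each hit is served at cache speed.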
Cache mapping
• Direct
– Maps each memory block to a single, fixed cache line
– Simplest to implement
• Set associative
– Maps each memory block to a set of cache lines
• Associative
– No restriction
– Any cache line can be used for any memory block (placement computations are sketched below)
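The placement rules can be expressed with simple modular arithmetic. The sketch below (the 8-line, 2-way geometry and the example block number are assumptions) computes where one memory block may be placed under each scheme:

#include <stdio.h>

#define LINES 8   /* total cache lines                           */
#define WAYS  2   /* lines per set for the set-associative case  */

int main(void) {
    unsigned block = 27;                 /* example memory block number */
    unsigned sets  = LINES / WAYS;

    printf("direct mapped    : line %u only\n", block % LINES);
    printf("2-way set assoc. : set %u (any of its %d lines)\n", block % sets, WAYS);
    printf("fully associative: any of the %d lines\n", LINES);
    return 0;
}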
Direct mapping
Fully associative
• Greatest flexibility
• Any block can go into any line of the cache
• Most expensive, due to the cost of associative memory (one comparator per cache line); see the lookup sketch below
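The cost of full associativity comes from comparing the requested tag against every line. The sketch below (the line count and tag values are assumptions) models that search as a loop, which the hardware performs in parallel with one comparator per line:

#include <stdio.h>

#define LINES 8

struct line { int valid; unsigned tag; };

/* Return the index of the line holding 'tag', or -1 on a miss. */
int lookup(const struct line cache[], unsigned tag) {
    for (int i = 0; i < LINES; i++)       /* hardware compares all lines at once */
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    return -1;
}

int main(void) {
    struct line cache[LINES] = {{0, 0}};
    cache[5].valid = 1;
    cache[5].tag   = 42;

    printf("lookup(42) -> line %d\n", lookup(cache, 42));   /* hit in line 5 */
    printf("lookup(7)  -> line %d\n", lookup(cache, 7));    /* miss: -1      */
    return 0;
}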