UNIT 5
Memory Organization: Types and capacity of memory, Memory Hierarchy, Cache Memory,
Virtual Memory.
What is Computer Memory?
Computer memory works much like the human brain: it is used to store data/information
and instructions. It is a data storage unit or device where the data to be processed
and the instructions required for processing are kept. Both input and output can be
stored here.
Characteristics of Computer Memory
It is faster than secondary memory.
It is a semiconductor memory.
It is usually volatile and serves as the main memory of the computer.
A computer system cannot run without primary memory.
How Does Computer Memory Work?
When you open a program, it is loaded from secondary memory into primary memory: for
example, a program is moved from a solid-state drive (SSD) to RAM. Because primary
storage is accessed more quickly, the opened software can communicate with the
computer's processor more quickly, and its data is readily accessible to the CPU.
Memory is volatile, which means that data is only kept temporarily in memory. Data
saved in volatile memory is automatically destroyed when a computing device is turned off.
When you save a file, it is sent to secondary memory for storage.
There are various kinds of memory available, and how memory operates depends on the type of
primary memory used. Normally, though, semiconductor-based memory is what is meant by
"memory". Semiconductor memory is made up of integrated circuits (ICs) built from
silicon-based metal-oxide-semiconductor (MOS) transistors.
Types of Computer Memory
In general, computer memory is of three types:
Primary memory
Secondary memory
Cache memory
Now we discuss each type of memory one by one in detail:
1. Primary Memory
It is also known as the main memory of the computer system. It is used to store data
and programs or instructions during computer operations. Primary memory is a segment of
computer memory that can be accessed directly by the processor. In the memory hierarchy,
primary memory has an access time less than secondary memory and greater than cache memory.
Generally, primary memory has a storage capacity smaller than secondary memory and larger
than cache memory. It uses semiconductor technology and hence is commonly called
semiconductor memory.
Need of primary memory
In order to enhance the efficiency of the system, memory is organized in such a way that access
time for the ready process is minimized. The following approach is followed to minimize access
time for the ready process.
All programs, files, and data are stored in secondary storage, which is larger and hence
has a greater access time.
Secondary memory cannot be accessed directly by the CPU.
In order to execute any process, the operating system loads the process into primary
memory, which is smaller and can be accessed directly by the CPU.
Since only processes that are ready to be executed are loaded into primary memory,
the CPU can access them efficiently, which optimizes the performance of the
system.
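The loading approach above can be sketched in a few lines of Python (all process names and "code" strings here are hypothetical, purely for illustration):

```python
# Hypothetical sketch: only processes that are ready to run are copied
# from slow secondary storage into fast primary memory before execution.

SECONDARY = {            # all programs live here permanently (slow, large)
    "editor": "editor-code",
    "browser": "browser-code",
    "game": "game-code",
}

primary = {}             # small, fast, directly CPU-accessible

def load_ready(process_name):
    """Simulate the OS loading a ready process into primary memory."""
    primary[process_name] = SECONDARY[process_name]  # disk -> RAM copy

def execute(process_name):
    """The CPU can only run code that is already in primary memory."""
    if process_name not in primary:
        load_ready(process_name)     # bring the process in first
    return f"running {primary[process_name]}"

print(execute("browser"))   # loaded on demand, then executed
```

The key point the sketch captures is that `execute` never reads `SECONDARY` directly: everything passes through `primary` first, mirroring how the CPU only addresses primary memory.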
Primary Memory Example
Primary Memory examples are RAM, ROM, cache, PROM, EPROM, registers, etc.
Classification of Primary Memory
Primary memory can be broadly classified into two parts:
1. Read-Only Memory (ROM)
2. Random Access Memory (RAM)
RAM (Random Access Memory): It is a volatile memory: it retains information only while
power is supplied. If the power supply fails, is interrupted, or is switched off, all data
and information in this memory are lost. RAM is used for booting up (starting) the
computer, and it temporarily stores the programs/data that are to be executed by the processor.
Any process in the system that needs to be executed is loaded into RAM, where it is
processed by the CPU as per the instructions in the program. For example, if we click on an
application like a browser, the operating system first loads the browser code into RAM,
after which the CPU executes it and opens the browser. RAM is of two types:
SRAM (Static RAM): SRAM uses transistors, and its circuits are capable of
retaining their state as long as power is applied. It has a lower access
time and hence is faster. Static RAM keeps the data as long as power is
supplied to the system. SRAM uses sequential circuits (flip-flops) to
store each bit and hence need not be periodically refreshed. SRAM is expensive and
is therefore only used where speed is the utmost priority.
DRAM (Dynamic RAM): DRAM uses capacitors and transistors and stores the
data as a charge on the capacitors. It contains thousands of memory cells.
Because electric charge leaks from the capacitors, DRAM needs to be refreshed
periodically, every few milliseconds, to retain its data. This memory is
slower than SRAM. DRAM is widely used in home PCs and servers because it is
cheaper than SRAM.
| DRAM | SRAM |
| Constructed of tiny capacitors that leak electricity. | Constructed of circuits similar to D flip-flops. |
| Requires a recharge every few milliseconds to maintain its data. | Holds its contents as long as power is available. |
| Inexpensive. | Expensive. |
| Slower than SRAM. | Faster than DRAM. |
| Can store many bits per chip. | Cannot store many bits per chip. |
| Uses less power. | Uses more power. |
| Generates less heat. | Generates more heat. |
| Used for main memory. | Used for cache. |
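The refresh requirement in the table above can be illustrated with a toy model (the leak rate, threshold, and refresh interval below are invented for illustration and do not reflect real DRAM timing):

```python
LEAK_PER_TICK = 0.2      # illustrative: fraction of charge lost per time step
THRESHOLD = 0.5          # below this, a stored 1 is misread as 0

def read_bit(charge):
    """A DRAM cell reads as 1 only while enough charge remains."""
    return 1 if charge >= THRESHOLD else 0

charge = 1.0             # a freshly written 1
for tick in range(4):
    charge -= LEAK_PER_TICK            # charge leaks away each tick
print(read_bit(charge))  # without refresh, the stored 1 is lost

charge = 1.0
for tick in range(4):
    charge -= LEAK_PER_TICK
    if tick % 2 == 1:                  # periodic refresh rewrites full charge
        charge = 1.0
print(read_bit(charge))  # with refresh, the stored 1 survives
```

SRAM needs no such loop: its flip-flops hold their state for as long as power is applied.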
RAM chips are available in a variety of sizes and are used as per the system requirement. The
following block diagram demonstrates the chip interconnection in a 128 * 8 RAM chip.
o A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one byte) per
word. This requires a 7-bit address and an 8-bit bidirectional data bus.
o The 8-bit bidirectional data bus allows the transfer of data either from memory to CPU
during a read operation or from CPU to memory during a write operation.
o The read and write inputs specify the memory operation, and the two chip select (CS)
control inputs are for enabling the chip only when the microprocessor selects it.
o The bidirectional data bus is constructed using three-state buffers.
o The output generated by three-state buffers can be placed in one of three possible
states: a signal equivalent to logic 1, a signal equivalent to logic 0, or a
high-impedance state.
Note: The logic 1 and 0 are standard digital signals whereas the high-impedance state behaves
like an open circuit, which means that the output does not carry a signal and has no logic
significance.
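The bus widths described above follow directly from the chip's organization: selecting one of 128 words requires log2(128) = 7 address lines, and one byte per word requires 8 data lines. A small sketch of this arithmetic:

```python
import math

def address_lines(num_words):
    """Number of address bits needed to select one of num_words locations."""
    return math.ceil(math.log2(num_words))

# 128 x 8 RAM chip: 128 words of 8 bits each
print(address_lines(128))   # 7 address lines
print(address_lines(512))   # a 512-word chip would need 9 address lines
```

Doubling the number of words therefore costs exactly one extra address line.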
The following function table specifies the operations of a 128 * 8 RAM chip.

| CS1 | CS2 | RD | WR | Memory function | State of data bus |
| 0 | 0 | x | x | Inhibit | High-impedance |
| 0 | 1 | x | x | Inhibit | High-impedance |
| 1 | 0 | 0 | 0 | Inhibit | High-impedance |
| 1 | 0 | 0 | 1 | Write | Input data to RAM |
| 1 | 0 | 1 | x | Read | Output data from RAM |
| 1 | 1 | x | x | Inhibit | High-impedance |
From the functional table, we can conclude that the unit is in operation only when CS1 = 1
and CS2 = 0. The bar on top of the second select variable indicates that this input is enabled
when it is equal to 0.
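The enable condition can be modeled as a small function (a sketch of the behavior described above, not a datasheet-accurate model of any particular chip):

```python
def ram_operation(cs1, cs2, rd, wr):
    """Return the memory operation selected by the control inputs.

    The chip is enabled only when CS1 = 1 and the active-low CS2 = 0;
    otherwise the data bus stays in the high-impedance state.
    """
    if not (cs1 == 1 and cs2 == 0):
        return "inhibit (high-impedance)"
    if rd == 1:
        return "read"
    if wr == 1:
        return "write"
    return "inhibit (high-impedance)"

print(ram_operation(1, 0, 1, 0))  # chip selected, read requested
print(ram_operation(0, 0, 1, 0))  # chip not selected: bus floats
```

Checking the chip-select inputs before the read/write inputs mirrors how the real hardware gates everything behind the select logic.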
ROM (Read-Only Memory): It is a non-volatile memory: it retains information
even when the power supply fails, is interrupted, or is switched off. ROM is used
to store information needed to operate the system. As its name (read-only
memory) suggests, we can only read the programs and data stored on it. ROM holds the
programs that run when the system boots (such as the bootstrap program that
initializes the OS) along with data the OS requires. In its basic form, anything
stored in ROM cannot be altered or changed. It contains electronic fuses that can be
programmed for a specific piece of information. The information is stored in ROM in
binary format. It is also known as permanent memory. ROM is of four types:
MROM (Masked ROM): The first ROMs were hard-wired devices containing a
pre-programmed collection of data or instructions. Masked ROMs are a type of
low-cost ROM that works this way.
PROM (Programmable Read-Only Memory): This read-only memory can be
modified once by the user. The user purchases a blank PROM and uses
a PROM programmer to put the required contents into it. Its contents
cannot be erased once written.
EPROM (Erasable Programmable Read Only Memory): EPROM is an
extension to PROM where you can erase the content of ROM by exposing it to
Ultraviolet rays for nearly 40 minutes.
EEPROM (Electrically Erasable Programmable Read Only Memory): Here
the written contents can be erased electrically. You can delete and
reprogram EEPROM up to 10,000 times. Erasing and programming take very
little time, i.e., nearly 4 -10 ms(milliseconds). Any area in an EEPROM can be
wiped and programmed selectively.
ROM chips are also available in a variety of sizes and are also used as per the system
requirement. The following block diagram demonstrates the chip interconnection in a 512 * 8
ROM chip.
o A ROM chip has a similar organization to a RAM chip. However, a ROM can only
perform read operations; the data bus can only operate in output mode.
o The 9-bit address lines in the ROM chip specify any one of the 512 bytes stored in it.
o The value for chip select 1 and chip select 2 must be 1 and 0 for the unit to operate.
Otherwise, the data bus is said to be in a high-impedance state.
2. Secondary Memory
It is also known as auxiliary memory and backup memory. It is a non-volatile memory and used
to store a large amount of data or information. The data or information stored in secondary
memory is permanent, and it is slower than primary memory. A CPU cannot access secondary
memory directly. The data/information from the auxiliary memory is first transferred to the main
memory, and then the CPU can access it.
Characteristics of Secondary Memory
It is a slow memory but reusable.
It is a reliable and non-volatile memory.
It is cheaper than primary memory.
The storage capacity of secondary memory is large.
A computer system can run without secondary memory.
In secondary memory, data is stored permanently even when the power is off.
Types of Secondary Memory
Secondary memory is of two types:
1. Fixed storage
2. Removable storage
1. Fixed storage
In secondary memory, a fixed storage is an internal media device that is used to store data in a
computer system. Fixed storage is generally known as fixed disk drives or hard drives.
Generally, the data of a computer system is stored in a built-in fixed storage device.
Fixed storage does not mean that you cannot remove the device from the computer system;
it can be removed for repair, upgrade, or maintenance, etc., with the help of an expert
or engineer.
Types of fixed storage:
Following are the types of fixed storage:
Internal flash memory (rare)
SSD (solid-state disk)
Hard disk drives (HDD)
2. Removable storage
In secondary memory, removable storage is an external media device that is used to store
data in a computer system. Removable storage is generally known as external disk drives or
external drives. It is a storage device that can be inserted into or removed from the
computer according to our requirements, and it can easily be removed while the computer
system is running. Removable storage devices are portable, so we can easily transfer data
from one computer to another, and some of them also offer fast data transfer rates.
Types of Removable Storage:
Optical discs (like CDs, DVDs, Blu-ray discs, etc.)
Memory cards
Floppy disks
Magnetic tapes
Disk packs
Paper storage (like punched tapes, punched cards, etc.)
The following are commonly used secondary memory devices:
1. Floppy Disk: A floppy disk consists of a magnetic disc in a square plastic case. It is
used to store data and to transfer data from one device to another. Floppy disks are
available in two sizes: (a) 3.5 inches, with a storage capacity of 1.44 MB; (b) 5.25
inches, with a storage capacity of 1.2 MB. To use a floppy disk, a computer needs a floppy
disk drive. This storage device is now obsolete and has been replaced by CDs, DVDs, and
flash drives.
2. Compact Disc: A Compact Disc (CD) is a commonly used secondary storage device. It
contains tracks and sectors on its surface. It is circular in shape and is made of
polycarbonate plastic. The storage capacity of a CD is up to 700 MB of data. A CD may also
be called a CD-ROM (Compact Disc Read-Only Memory): computers can read the data present on
a CD-ROM but cannot write new data onto it. To read a CD-ROM, we require a CD-ROM drive.
CDs are of two types:
CD-R (compact disc recordable): Once the data has been written onto it cannot be
erased, it can only be read.
CD-RW (compact disc rewritable): It is a special type of CD in which data can be
erased and rewritten as many times as we want. It is also called an erasable CD.
3. Digital Versatile Disc: A Digital Versatile Disc (DVD) looks just like a CD,
but its storage capacity is greater: it stores up to 4.7 GB of data. A DVD-ROM
drive is needed to use a DVD on a computer. Video files, such as movies or video
recordings, are generally stored on DVD, and you can play a DVD using a DVD player.
DVDs are of three types:
DVD-ROM(Digital Versatile Disc Readonly): In DVD-ROM the manufacturer writes
the data in it and the user can only read that data, cannot write new data in it. For
example movie DVD, movie DVD is already written by the manufacturer we can only
watch the movie but we cannot write new data into it.
DVD-R(Digital Versatile Disc Recordable): In DVD-R you can write the data but only
one time. Once the data has been written onto it cannot be erased, it can only be read.
DVD-RW(Digital Versatile Disc Rewritable and Erasable): It is a special type of
DVD in which data can be erased and rewritten as many times as we want. It is also
called an erasable DVD.
4. Blu-ray Disc: A Blu-ray disc looks just like a CD or a DVD, but it can store up to
25 GB of data. If you want to use a Blu-ray disc, you need a Blu-ray reader. The
name Blu-ray is derived from the technology used to read the disc: 'Blu' from the
blue-violet laser and 'ray' from optical ray.
5. Hard Disk: A hard disk is part of a unit called a hard disk drive. It is used to store
large amounts of data. Hard disks or hard disk drives come in different storage capacities
(like 256 GB, 500 GB, 1 TB, and 2 TB, etc.). A hard disk is built from a collection of
discs known as platters. The platters are placed one below the other and are coated with
magnetic material. Each platter consists of a number of invisible concentric circles,
called tracks, all having the same centre. Hard disks are of two types: (i) internal hard
disks and (ii) external hard disks.
6. Flash Drive: A flash drive or pen drive comes in various storage capacities, such as 1
GB, 2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, up to 1 TB. A flash drive is used to transfer
and store data. To use a flash drive, we need to plug it into a USB port on a computer.
Because a flash drive is easy to use and compact in size, it is very popular nowadays.
7. Solid-State Drive: Also known as an SSD, it is a non-volatile storage device used to
store and access data. It is faster, operates noiselessly (because it does not contain any
moving parts like a hard disk), consumes less power, etc. Where its price is affordable,
it is a great replacement for standard hard drives in computers and laptops, and it is
also suitable for tablets, notebooks, etc., because they do not require large storage.
8. SD Card: An SD (Secure Digital) Card is generally used in portable devices like
mobile phones, cameras, etc., to store data. It is available in different sizes like 1 GB,
2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, etc. To view the data stored on an SD card, you can
remove it from the device and insert it into a computer with the help of a card reader.
The data on an SD card is stored in memory chips (present in the SD card), and it does not
contain any moving parts like a hard disk.
Advantages:
1. Large storage capacity: Secondary memory devices typically have a much larger
storage capacity than primary memory, allowing users to store large amounts of data and
programs.
2. Non-volatile storage: Data stored on secondary memory devices is typically
non-volatile, meaning it is retained even when the computer is turned off.
3. Portability: Many secondary memory devices are portable, making it easy to transfer
data between computers or devices.
4. Cost-effective: Secondary memory devices are generally more cost-effective than
primary memory.
Disadvantages:
1. Slower access times: Accessing data from secondary memory devices typically takes
longer than accessing data from primary memory.
2. Mechanical failures: Some types of secondary memory devices, such as hard disk
drives, are prone to mechanical failures that can result in data loss.
3. Limited lifespan: Secondary memory devices have a limited lifespan, and can only
withstand a certain number of read and write cycles before they fail.
4. Data corruption: Data stored on secondary memory devices can become corrupted due
to factors such as electromagnetic interference, viruses, or physical damage.
Memory Hierarchy:
In the Computer System Design, Memory Hierarchy is an enhancement to organize the memory
such that it can minimize the access time. The Memory Hierarchy was developed based on a
program behavior known as locality of references. The figure below clearly demonstrates the
different levels of the memory hierarchy.
Why Memory Hierarchy is Required in the System?
Memory Hierarchy is one of the most important aspects of computer memory, as it helps in
optimizing the use of the memory available in the computer. There are multiple levels of
memory, each with a different size, cost, and speed. Some types of memory, like cache and
main memory, are faster than other types, but they are smaller and more costly, whereas
other types have a higher storage capacity but are slower. Access speed also differs
across types: some have faster access, whereas some have slower access.
Apart from the basic classifications of a memory unit, the memory hierarchy consists of
all of the storage devices available in a computer system, ranging from the slow but
high-capacity auxiliary memory to the relatively faster main memory.
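One way to see why the hierarchy pays off is to compute an average access time across the levels. The sketch below uses invented hit rates and access times (not measurements of real hardware); a fast level is consulted first, and slower levels are reached only on a miss:

```python
# (level name, access time in ns, probability the data is found here
#  given that it was not found in any faster level) -- illustrative values
levels = [
    ("cache", 1, 0.90),
    ("main memory", 100, 0.95),
    ("secondary storage", 10_000_000, 1.00),  # always found eventually
]

def average_access_time(levels):
    """Expected access time when levels are searched fastest-first."""
    total, p_reach = 0.0, 1.0   # p_reach: probability this level is consulted
    for name, time_ns, hit_rate in levels:
        total += p_reach * time_ns          # every consultation pays this cost
        p_reach *= (1 - hit_rate)           # only misses go further down
    return total

print(average_access_time(levels))
```

With these numbers, the average is dominated by the rare trips to secondary storage, which is exactly why keeping hit rates high at the fast levels matters so much.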
Types of Memory Hierarchy
This Memory Hierarchy Design is divided into 2 main types:
External Memory or Secondary Memory: Comprising magnetic disks, optical disks,
and magnetic tape, i.e., peripheral storage devices that are accessible to the processor
via an I/O module.
Internal Memory or Primary Memory: Comprising main memory, cache memory,
and CPU registers. This is directly accessible by the processor.
Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to store the
most frequently used data and instructions. Registers have the fastest access time and the
smallest storage capacity, typically ranging from 16 to 64 bits.
In computer architecture, registers are very fast computer memory used to
execute programs and operations efficiently. They do this by giving fast access to
commonly used values, i.e., the values that are at the point of operation/execution at
that time. For this purpose, there are several different classes of CPU registers that
work in coordination with the computer memory to run operations efficiently.
The sole purpose of having registers is the fast retrieval of data for processing by the
CPU. Though accessing instructions from RAM is fast compared to a hard drive, it still
isn't fast enough for the CPU. For even better processing, there are memories inside the
CPU that can fetch, ahead of time, the data from RAM that is about to be executed. After
registers we have cache memory, which is faster than RAM but slower than registers.
These are classified as given below.
Accumulator:
This is the most frequently used register, used to store data taken from memory.
Different microprocessors have different numbers of accumulators.
Memory Address Registers (MAR):
It holds the address of the location to be accessed from memory. MAR and MDR
(Memory Data Register) together facilitate the communication of the CPU and the main
memory.
Memory Data Registers (MDR):
It contains data to be written into or to be read out from the addressed location.
General Purpose Registers:
These are numbered R0, R1, R2, ..., Rn-1 and are used to store temporary data during any
ongoing operation. Their contents can be accessed through assembly programming. Modern CPU
architectures tend to use more GPRs so that register-to-register addressing, which is
faster than other addressing modes, can be used more often.
Program Counter (PC):
The Program Counter (PC) is used to keep track of the execution of the program. It
contains the memory address of the next instruction to be fetched. The PC points to the
address of the next instruction to be fetched from main memory once the previous
instruction has been successfully completed. How the PC is incremented depends on the
type of architecture being used. On a 32-bit architecture with 4-byte instructions, the
PC gets incremented by 4 every time to fetch the next instruction.
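A minimal fetch loop illustrates the PC's increment-by-4 behavior on a 32-bit machine (the memory contents and instruction names below are hypothetical):

```python
WORD_SIZE = 4                      # bytes per instruction on a 32-bit machine
memory = {0: "LOAD", 4: "ADD", 8: "STORE", 12: "HALT"}  # address -> instruction

pc = 0                             # program counter starts at address 0
executed = []
while memory[pc] != "HALT":
    instruction = memory[pc]       # fetch the instruction the PC points to
    pc += WORD_SIZE                # PC now points to the next instruction
    executed.append(instruction)   # "execute" (just record it here)

print(executed)   # instructions run in address order
print(pc)         # PC ends at the HALT instruction's address
```

Note that the PC is advanced during the fetch, before the instruction is "executed"; this is why, mid-instruction, the PC already points at the next instruction.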
Instruction Register (IR):
The IR holds the instruction that is just about to be executed. The instruction pointed to
by the PC is fetched and stored in the IR. As soon as the instruction is placed in the IR,
the CPU starts executing it, and the PC points to the next instruction to be executed.
Condition Code Register (CCR):
The condition code register contains different flags that indicate the status of an
operation. For instance, if an operation produces a negative result or a zero, the
corresponding flags are set accordingly. The flags are:
1. Carry C: Set to 1 if an add operation produces a carry or a subtract operation produces a
borrow; otherwise cleared to 0.
2. Overflow V: Useful only during operations on signed integers.
3. Zero Z: Set to 1 if the result is 0, otherwise cleared to 0.
4. Negative N: Meaningful only in signed-number operations. Set to 1 if a negative result
is produced.
5. Extend X: Functions as a carry for multiple precision arithmetic operations.
These flags are generally set by the ALU.
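As an illustration, the C, V, Z, and N flags can be computed for an 8-bit addition (a sketch assuming a two's-complement signed interpretation; the X flag is omitted for brevity):

```python
def add_with_flags(a, b, bits=8):
    """Add two unsigned bit patterns and return (result, flags dict)."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    raw = a + b
    result = raw & mask
    flags = {
        "C": int(raw > mask),                     # carry out of the MSB
        "Z": int(result == 0),                    # result is zero
        "N": int(bool(result & sign)),            # result is negative (signed)
        # overflow: operands share a sign but the result's sign differs
        "V": int((a & sign) == (b & sign) and (a & sign) != (result & sign)),
    }
    return result, flags

print(add_with_flags(0x7F, 0x01))  # 127 + 1 overflows the signed 8-bit range
print(add_with_flags(0xFF, 0x01))  # 255 + 1 carries out and wraps to zero
```

The two examples show why C and V are distinct flags: 127 + 1 sets V but not C, while 255 + 1 (i.e. -1 + 1 signed) sets C but not V.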
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores frequently used
data and instructions that have been recently accessed from the main memory. Cache memory is
designed to minimize the time it takes to access data by providing the CPU with quick access to
frequently used data.
A faster and smaller segment of memory whose access time is close to that of the registers
is known as cache memory. In the memory hierarchy, cache memory has a lower access time
than primary memory. Generally, cache memory is very small and hence is used as a buffer.
Data in primary memory can be accessed faster than data in secondary memory, but the
access times of primary memory are still generally on the order of microseconds, whereas
the CPU is capable of performing operations in nanoseconds. Due to this time lag between
accessing data and acting on it, the performance of the system decreases, as the CPU is
not utilized properly and may remain idle for some time. In order to minimize this time
gap, a new segment of memory was introduced, known as cache memory.
Role of Cache Memory
The role of cache memory is explained below.
Cache memory plays a crucial role in computer systems.
It provides faster access to data.
It acts as a buffer between the CPU and main memory (RAM).
Its primary role is to reduce the average time taken to access data, thereby improving
overall system performance.
Benefits of Cache Memory
Various benefits of cache memory are:
1. Faster access: Cache is faster than main memory. It resides closer to the CPU,
typically on the same chip or in close proximity, and stores a subset of the data and
instructions.
2. Reducing memory latency: Memory access latency refers to the time taken for the
processor to retrieve data from memory. Caches are designed to exploit the principle of
locality.
3. Lowering bus traffic: Accessing data from main memory involves transferring it over the
system bus. The bus is a shared resource, and excessive traffic can lead to congestion and
slower data transfers. By utilizing cache memory, the processor can reduce the frequency
of accesses to main memory, resulting in less bus traffic and improved system efficiency.
4. Increasing effective CPU utilization: Cache memory allows the CPU to operate at a
higher effective speed. The CPU can spend more time executing instructions rather than
waiting for memory accesses. This leads to better utilization of the CPU's processing
capabilities and higher overall system performance.
5. Enhancing system scalability: Cache memory helps improve system scalability by reducing
the impact of memory latency on overall system performance.
In order to understand the working of the cache, we must understand a few points:
Cache memory is faster; it can be accessed very quickly.
Cache memory is smaller; a large amount of data cannot be stored in it.
Whenever the CPU needs any data, it first searches for the corresponding data in the cache
(a fast process). If the data is found, it is processed according to the instructions;
however, if the data is not found in the cache, the CPU searches for it in primary memory
(a slower process) and loads it into the cache. This ensures that frequently accessed data
is usually found in the cache and hence minimizes the time required to access it.
Cache Performance
On searching the cache, if the data is found, a cache hit has occurred.
On searching the cache, if the data is not found, a cache miss has occurred.
The performance of a cache is measured by the ratio of the number of cache hits to the
number of searches. This measure of performance is known as the hit ratio.
Hit ratio = (Number of cache hits) / (Number of searches)
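The hit ratio can be measured directly in a toy simulation. The sketch below uses an invented access pattern and an unbounded cache, so it only illustrates the bookkeeping of hits and misses, not a real replacement policy:

```python
main_memory = {addr: f"data@{addr}" for addr in range(100)}  # slow backing store
cache = {}                                                   # fast, starts empty
hits = misses = 0

def access(addr):
    """Look in the cache first; on a miss, fetch from main memory."""
    global hits, misses
    if addr in cache:
        hits += 1                        # cache hit: fast path
    else:
        misses += 1                      # cache miss: slow path
        cache[addr] = main_memory[addr]  # load the data into the cache
    return cache[addr]

# locality of reference: the same few addresses are accessed repeatedly
for addr in [1, 2, 3, 1, 2, 3, 1, 2]:
    access(addr)

hit_ratio = hits / (hits + misses)
print(hit_ratio)   # 5 hits out of 8 accesses -> 0.625
```

Only the first access to each address misses; every repeat is a hit, which is exactly the behavior locality of reference relies on.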
Types of Cache Memory
L1 or Level 1 Cache: It is the first level of cache memory that is present inside the processor. It
is present in a small amount inside every core of the processor separately. The size of this
memory ranges from 2KB to 64 KB.
L2 or Level 2 Cache: It is the second level of cache memory, which may be present inside
or outside the CPU. If not present inside the core, it can be shared between two cores,
depending upon the architecture, and is connected to the processor with a high-speed bus.
The size of this memory ranges from 256 KB to 512 KB.
L3 or Level 3 Cache: It is the third level of cache memory, present outside the CPU and
shared by all the cores of the CPU. Some high-end processors may have this cache. This
cache is used to increase the performance of the L2 and L1 caches. The size of this memory
ranges from 1 MB to 8 MB.
Cache vs RAM
Although Cache and RAM both are used to increase the performance of the system there exists a
lot of differences in which they operate to increase the efficiency of the system.
| RAM | Cache |
| RAM is larger in size; memory ranges from 1 MB to 16 GB. | The cache is smaller in size; memory ranges from 2 KB to a few MB generally. |
| It stores data that is currently being processed by the processor. | It holds frequently accessed data. |
| The OS interacts with secondary memory to get data to be stored in primary memory (RAM). | The OS interacts with primary memory to get data to be stored in the cache. |
| It is ensured that data in RAM is loaded before the CPU accesses it; this eliminates RAM misses. | The CPU searches for data in the cache; if it is not found, a cache miss occurs. |
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary memory of a
computer system. It has a larger storage capacity than cache memory, but it is slower. Main
memory is used to store data and instructions that are currently in use by the CPU.
Types of Main Memory
Static RAM: Static RAM stores the binary information in flip-flops, and the information
remains valid as long as power is supplied. It has a faster access time and is used in
implementing cache memory.
Dynamic RAM: It stores the binary information as a charge on the capacitor. It requires
refreshing circuitry to maintain the charge on the capacitors after a few milliseconds. It
contains more memory cells per unit area as compared to SRAM.
4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives (SSD), is a non-volatile
memory unit that has a larger storage capacity than main memory. It is used to store data and
instructions that are not currently in use by the CPU. Secondary storage has the slowest access
time and is typically the least expensive type of memory in the memory hierarchy.
Hard Disk Drives(HDD) and Solid State Drives(SSD) both are data storage devices. Whereas
HDDs are more traditional storage mechanisms, SSDs are newer and more sophisticated. The
primary distinction between HDD and SSD is in how data is stored and accessed. Let’s look at
the fundamental distinctions between HDD and SSD.
What is a Hard Disk Drive(HDD)?
An HDD consists of a spinning disk (platter) coated with a magnetic material and a read/write
head that reads and writes data on the disk’s surface. The read/write head moves back and forth
across the spinning disk to access different parts of the data stored on the disk. HDDs have been
around for decades and are the more traditional type of storage device.
Features of Hard Disk Drive (HDD)
High storage capacity: HDDs offer a high storage capacity, with some models capable of
storing up to 16TB of data.
Lower cost: HDDs are generally less expensive than SSDs, making them a more cost-
effective option for storing large amounts of data.
Larger size: HDDs are physically larger and heavier than SSDs, making them less
suitable for portable devices.
Slower performance: HDDs are slower than SSDs when it comes to data access and
transfer speeds.
Mechanical parts: HDDs contain mechanical parts that can wear out over time, making
them less durable than SSDs.
What is Solid State Drive(SSD)?
SSDs, on the other hand, use flash memory to store data instead of a spinning disk. SSDs have
no moving parts, making them much faster, more durable, and less susceptible to mechanical
failure than HDDs.
Features of Solid State Drive (SSD)
Fast performance: SSDs offer much faster data access and transfer speeds than HDDs.
Compact size: SSDs are smaller and lighter than HDDs, making them an ideal option for
use in portable devices such as laptops and tablets.
Lower power consumption: SSDs consume less power than HDDs, making them more
energy-efficient.
Higher cost: SSDs are generally more expensive than HDDs, making them a less
cost-effective option for storing large amounts of data.
No mechanical parts: SSDs have no moving parts, making them more durable and less
susceptible to mechanical failure than HDDs.
Similarities Between HDD and SSD
HDD and SSD are both used to store data.
Hard Disk Drive (HDD) and Solid State Drive (SSD) are used to boot the system.
Hard Disk Drive (HDD) and Solid State Drive (SSD) Both are I/O devices.
Differences Between Hard Disk Drive (HDD) and Solid State Drive (SSD)
| HDD | SSD |
| HDD stands for Hard Disk Drive. | SSD stands for Solid State Drive. |
| HDD contains moving mechanical parts, like the actuator arm. | SSD does not contain mechanical parts, only electronic parts like ICs. |
| HDD has a longer R/W time. | SSD has a shorter R/W time. |
| HDD has higher latency. | SSD has lower latency. |
| HDD supports fewer I/O operations per second. | SSD supports more I/O operations per second. |
| HDD is heavier in weight. | SSD is lighter in weight. |
| HDD is larger in size. | SSD is more compact in size. |
| In HDD the data transfer is sequential. | In SSD the data transfer is random access. |
| HDD is less reliable due to the possibility of mechanical failure, like head crashes, and susceptibility to strong magnets. | SSD is more reliable. |
| HDD is cheaper per unit storage. | SSD is costlier per unit storage. |
| HDD is older and more traditional. | SSD is a more recent kind of storage drive. |
| HDD can produce noise due to mechanical movements. | SSD does not produce noise. |
| HDD is available in a wide variety of storage capacities. | SSD is available in a limited variety of storage capacities as compared to HDD. |
| HDD is more likely to break down with use because of its magnetic platters and moving mechanical parts. | SSD is less likely to break down because it has no moving parts. |
| HDDs are more reliable for long-term storage. | SSDs are comparatively less reliable for long-term storage due to data leakage that can occur if they are kept unpowered for more than a year. |
| The data access speed is slower compared to SSD. | The data access speed is much higher compared to HDD. |
| HDD suffers from fragmentation, so performance degrades because of fragmentation. | SSD does not suffer from fragmentation, so performance does not degrade because of it. |
| HDDs are suitable for extensive storage and long-term storage. | SSDs are suitable for fast data retrieval, and for laptops or desktops because of their low power consumption and size. |
5. Magnetic Disk
Magnetic disks are circular plates fabricated from metal, plastic, or another magnetizable material. They rotate at high speed inside the computer and are frequently used.
A magnetic disk is a type of secondary memory: a flat disc covered with a magnetic coating to hold information. It is used to store various programs and files. Information polarized in one direction is represented by 1, and information polarized in the opposite direction by 0.
Magnetic disks are less expensive than RAM and can store large amounts of data, but because they are secondary memory, the data access rate is slower than main memory. Data on a magnetic disk can easily be modified or deleted, and the disk allows random access to data.
Figure – Magnetic Disk
There are various advantages and disadvantages of magnetic disk memory.
Advantages:-
1. These are economical memories.
2. Easy and direct access to data is possible.
3. They can store large amounts of data.
4. They have a better data transfer rate than magnetic tapes.
5. They are less prone to data corruption than tapes.
Disadvantages:-
1. They are less expensive than RAM but more expensive than magnetic tape memories.
2. They need a clean and dust-free environment for storage.
3. They are not suitable for sequential access.
6. Magnetic Tape
Magnetic Tape is simply a magnetic recording device covered with a plastic film. It is generally used for the backup of data. With magnetic tape, access time is a little slower because some time is required to wind the tape to the required position on the strip.
Magnetic drums, magnetic tapes, and magnetic disks are all types of magnetic memory. These memories use the property of magnetism to store data.
In magnetic tape, only one side of the ribbon is used for storing data. It is a sequential-access memory consisting of a thin plastic ribbon coated with magnetic oxide. Data read/write speed is slower because of sequential access. It is highly reliable and requires a magnetic tape drive for writing and reading data.
Figure Magnetic Tape Memory
The width of the ribbon varies from 4 mm to 1 inch, and its storage capacity ranges from 100 MB to 200 GB.
Advantages :
1. These are inexpensive, i.e., low-cost memories.
2. They provide backup or archival storage.
3. They can be used for large files.
4. They can be used for copying from disk files.
5. They are reusable memories.
6. They are compact and easy to store on racks.
Disadvantages :
1. Sequential access is the main disadvantage: data cannot be accessed randomly or directly.
2. They require careful storage, i.e., they are vulnerable to humidity and dust and need a suitable environment.
3. Stored data cannot be easily updated or modified, i.e., it is difficult to make updates to the data.
Characteristics of Memory Hierarchy
Capacity: It is the global volume of information the memory can store. As we move
from top to bottom in the Hierarchy, the capacity increases.
Access Time: It is the time interval between the read/write request and the availability of
the data. As we move from top to bottom in the Hierarchy, the access time increases.
Performance: Earlier when the computer system was designed without a Memory
Hierarchy design, the speed gap increased between the CPU registers and Main Memory
due to a large difference in access time. This results in lower performance of the system
and thus, enhancement was required. This enhancement was made in the form of
Memory Hierarchy Design because of which the performance of the system increases.
One of the most significant ways to increase system performance is minimizing how far
down the memory hierarchy one has to go to manipulate data.
Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases, i.e., internal memory is costlier than external memory.
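The capacity/access-time trade-off above can be made concrete with a small calculation. The sketch below estimates the average access time of a hypothetical three-level hierarchy; the access times and hit rates are illustrative assumptions, not measurements of any real machine.

```python
# Hypothetical (name, access time in ns, hit rate) per hierarchy level.
# All numbers are assumed for illustration only.
levels = [
    ("cache",       1,         0.90),
    ("main memory", 100,       0.99),
    ("disk",        5_000_000, 1.0),   # always found at the lowest level
]

def average_access_time(levels):
    """Expected time per reference: each level is probed in turn,
    and its access cost is paid whether it hits or misses."""
    total, reach_prob = 0.0, 1.0
    for name, t, hit in levels:
        total += reach_prob * t        # cost of probing this level
        reach_prob *= (1.0 - hit)      # probability we must go deeper
    return total

print(f"{average_access_time(levels):.1f} ns")   # -> 5011.0 ns
```

Even with a 99% main-memory hit rate, the rare disk accesses dominate the average, which is why minimizing trips down the hierarchy matters so much.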
Advantages of Memory Hierarchy
It helps in organizing the memory so that it can be managed in a better way.
It distributes the data across the different storage levels of the computer system.
It saves cost and time for the user.
System-Supported Memory Standards
According to the memory Hierarchy, the system-supported memory standards are defined below:
Level 1 (Register): Size: < 1 KB | Implementation: multi-port registers | Access time: 0.25 to 0.5 ns | Bandwidth: 20,000 to 100,000 MB/s | Managed by: compiler | Backing mechanism: from cache
Level 2 (Cache): Size: < 16 MB | Implementation: on-chip SRAM | Access time: 0.5 to 25 ns | Bandwidth: 5,000 to 15,000 MB/s | Managed by: hardware | Backing mechanism: from main memory
Level 3 (Main Memory): Size: < 16 GB | Implementation: DRAM (capacitor memory) | Access time: 80 to 250 ns | Bandwidth: 1,000 to 5,000 MB/s | Managed by: operating system | Backing mechanism: from secondary memory
Level 4 (Secondary Memory): Size: > 100 GB | Implementation: magnetic | Access time: about 5,000,000 ns (5 ms) | Bandwidth: 20 to 150 MB/s | Managed by: operating system | Backing mechanism: from end user
Virtual memory:
Virtual Memory (VM) is similar in concept to cache memory. While cache solves the speed requirements of memory access by the CPU, virtual memory solves the main memory (MM) capacity requirements with a mapping association to secondary memory, i.e., the hard disk. Both cache and virtual memory are based on the principle of locality of reference. Virtual memory provides an illusion of unlimited memory being available to processes/programmers.
In a VM implementation, a process looks at the resources with a logical view, while the CPU looks at them from a physical or real view. Every program or process begins with its starting address as '0' (logical view). However, there is only one real '0' address in main memory. Further, at any instant, many processes reside in main memory (physical view). Memory management hardware provides the mapping between the logical and physical views.
VM is a hardware implementation assisted by the OS's memory management task. The basic facts of VM are:
All memory references made by a process are logical and are dynamically translated by hardware into physical addresses.
The whole program code or data need not be present in physical memory, nor need the data or program occupy contiguous locations of physical main memory. Similarly, every process may be broken up into pieces and loaded as necessitated.
The storage in secondary memory need not be contiguous. (Remember that a single file may be stored in different sectors of the disk, as you may observe while defragmenting.)
However, the logical view is contiguous. The rest of the views are transparent to the user.
Figure Storage views
Virtual Memory Design factors
Any VM design has to address the following factors choosing the options available.
Type of implementation – Segmentation, Paging, Segmentation with Paging
Address Translation – Logical to Physical
Address Translation Type – Static or Dynamic Translation
o Static Translation – A few simpler programs are loaded once and may be executed many times. During the lifetime of these programs, little changes, and hence the address space can be fixed.
o Dynamic Translation – Complex user programs and system programs use stacks, queues, pointers, etc., which require growing space at run time. Space is allotted as the requirement comes up. In such cases, dynamic address translation is used.
In this chapter, we discuss only Dynamic Address Translation Methods.
A Page/Segment table to be maintained as to what is available in MM
Identification of the Information in MM as a Hit or Page / Segment Fault
Page/Segment Fault handling Mechanism
Protection of pages/ Segments in Memory and violation identification
Allocation / Replacement Strategy for Page/Segment in MM – same as for cache memory.
FIFO, LIFO, LRU and Random are a few examples.
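As an illustration of the replacement strategies just listed, here is a minimal sketch of FIFO page replacement, counting page faults for a reference string. The reference string and frame count are arbitrary example values.

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement: on a fault with all
    frames occupied, evict the page that has been resident longest."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

# A classic textbook reference string, run with 3 page frames.
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # -> 15
```

LRU or random replacement would differ only in which resident page the eviction step picks.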
Segmentation
A Segment is a logically related contiguous allocation of words in MM. Segments vary in length.
A segment corresponds to logical entities like a Program, stack, data, etc. A word in a segment is
addressed by specifying the base address of the segment and the offset within the segment as in
figure.
Figure Example of allotted Segments in Main Memory
A segment table is required to be maintained with the details of those segments in MM and their status. The figure shows typical entries in a segment table. The segment table resides in the OS area in MM. A part of a segment that is sharable with other programs/processes is created as a separate segment, and the access rights for that segment are set accordingly. The Presence bit indicates that the segment is available in MM. The Change bit indicates that the content of the segment has been changed after it was loaded into MM and is no longer a copy of the disk version. (Recall that in a multilevel hierarchical memory, the lower level has to be in coherence with the immediately higher level.) The address translation in a segmentation implementation is as shown in the figure: the virtual address generated by the program is converted into a physical address in MM, and the segment table helps achieve this translation.
Figure Address Translation in Segmentation Mechanism
Generally, a Segment size coincides with the natural size of the program/data. Although this is
an advantage on many occasions, there are two problems to be addressed in this regard.
1. Identifying a contiguous area in MM for the required segment size is a complex process.
2. Chunks of memory are identified and allotted as per requirement. As a result, gaps of memory may be left in chunks too small to be allotted to a new segment, while the sum of such gaps may grow large enough to be considered undesirable. These gaps are called external fragmentation. External fragments are cleared by a special OS process such as compaction.
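The base-plus-offset translation described above can be sketched as follows. The segment table contents are hypothetical, chosen only to illustrate the limit check and the base + offset computation.

```python
# Hypothetical segment table: segment number -> (base address, length).
segment_table = {
    0: (1000, 400),    # e.g. a code segment
    1: (5000, 1200),   # e.g. a data segment
    2: (9000, 300),    # e.g. a stack segment
}

def translate(segment, offset):
    """Logical (segment, offset) -> physical address.
    The offset is checked against the segment length, as the
    segment-violation discussion above requires."""
    base, length = segment_table[segment]
    if offset >= length:
        raise MemoryError("segment violation: offset exceeds segment length")
    return base + offset

print(translate(1, 100))   # -> 5100
```

A real segment-table entry would also carry the presence, change, and access-rights bits described above; they are omitted here to keep the translation step visible.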
Paging
Paging is another implementation of virtual memory. The logical storage is marked off into pages of some size, say 4KB. The MM is viewed and numbered as page frames, each page frame equal to the size of a page. The pages from the logical view are fitted into the empty page frames in MM. This is analogous to placing a book on a bookshelf, and the concept is similar to cache blocks and their placement. The figure explains how two programs' pages are fitted into page frames in MM. As you see, any page can be placed into any available page frame. Unallotted page frames are shown in white.
Figure Virtual Memory Pages to MM Page Frame Mapping
This mapping must be maintained in a page table, and it is used during address translation. Typically a page table entry contains the virtual page address, the corresponding physical frame number where the page is stored, a Presence bit, a Change bit, and Access rights (refer to the figure). The page table is consulted to check whether the desired page is available in MM. The page table resides in a part of MM. Thus every memory access requested by the CPU refers to memory twice – once to read the page table and a second time to get the data from the accessed location. This is called the address translation process and is detailed in the figure.
Figure: Typical Page Table Entry
Virtual Page Address | Page Frame Number | Presence bit (P) | Change bit (C) | Access Rights
A                    | 0004000           | 1                | 0              | R, X
B                    | 0645728           | 0                | 1              | R, W, X
D                    | 0010234           | 1                | 1              | R, W, X
F                    | 0060216           | 0                | 0              | R
Figure. Virtual Memory Address Translation in Paging Implementation
Page size determination is an important factor in obtaining maximum page hits and minimum thrashing. Thrashing is very costly in VM, as it means fetching data from disk, which is likely to be about 1000 times slower than MM.
In the paging mechanism, page frames of fixed size are allotted. There is a possibility that some pages have contents smaller than the page size, as we find in printed books. This causes unutilized space (a fragment) in a page frame, which cannot be used for any other purpose. Since these fragments are inside the allotted page frame, this is called internal fragmentation.
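A minimal sketch of the paging translation and of internal fragmentation, assuming 4 KB pages and a hypothetical page table:

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the example above

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_address):
    """Split the virtual address into (page number, offset), then map
    the page number to a frame number via the page table."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table[page]     # a KeyError here would model a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 9 -> 36868

# Internal fragmentation: a 10,000-byte program needs 3 whole page
# frames, wasting the unused tail of the last frame.
program_size = 10_000
frames_needed = -(-program_size // PAGE_SIZE)   # ceiling division -> 3
wasted = frames_needed * PAGE_SIZE - program_size
print(wasted)   # -> 2288 bytes of internal fragmentation
```

Because the offset field is untouched by translation, no limit check is needed within a page, unlike the segment case.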
Additional Activities in Address Translation
During address translation, a few more activities happen, as listed below; they are not shown in the figures, for simplicity of understanding.
In segmentation, the length of the segment mentioned in the segment table is compared with the offset. If the offset exceeds the segment length, it is a segment violation and an error is generated to this effect.
The control bits are meant to be used during Address Translation.
o The presence bit is verified to know that the requested segment/page is available
in the MM.
o The Change bit indicates that the segment/page in main memory is not a true copy
of that in Disk; if this segment/page is a candidate for replacement, it is to be
written onto the disk before replacement. This logic is part of the Address
Translation mechanism.
o Segment/Page access rights are checked to verify any access violation. Ex: one
with Read-only attribute cannot be allowed access for WRITE, or so.
If the requested segment/page is not in the respective table, it is not available in MM and a segment/page fault is generated. Subsequently:
o The OS takes over to READ the segment/page from DISK.
o A segment needs to be allotted from the available free space in MM. In paging, an empty page frame needs to be identified.
o In case the free space/page frame is unavailable, the page replacement algorithm plays its role to identify the candidate segment/page frame.
o The data from disk is written onto the MM.
o The segment/page table is updated with the necessary information that a new block is available in MM.
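The fault-handling steps above can be sketched as a small simulation. This is an illustrative model, not an OS implementation: disk I/O is omitted, and LRU is assumed as the replacement policy.

```python
from collections import OrderedDict

class PagedMemory:
    """Toy model of page-fault handling: consult the page table; on a
    fault, take a free frame or evict the least-recently-used page."""
    def __init__(self, num_frames):
        self.page_table = OrderedDict()   # page -> frame, kept in LRU order
        self.free_frames = list(range(num_frames))
        self.faults = 0

    def access(self, page):
        if page in self.page_table:               # page hit
            self.page_table.move_to_end(page)     # mark as recently used
            return self.page_table[page]
        self.faults += 1                          # page fault: "OS takes over"
        if self.free_frames:
            frame = self.free_frames.pop()        # a free frame is available
        else:                                     # replacement algorithm (LRU)
            _, frame = self.page_table.popitem(last=False)
        # reading the page from disk into the frame would happen here
        self.page_table[page] = frame             # update the page table
        return frame

mem = PagedMemory(num_frames=2)
for p in [1, 2, 1, 3, 2]:
    mem.access(p)
print(mem.faults)   # -> 4
```

A real handler would also honor the change bit, writing a dirty victim page back to disk before reusing its frame.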
Translation Look-aside Buffer (TLB)
Every Virtual address Translation requires two memory references,
once to read the segment/page table and
once more to read the requested memory word.
TLB is a hardware functionality designed to speed up page table lookup by avoiding one extra access to MM. A TLB is a fully associative cache of the page table; its entries correspond to the most recently used translations. The TLB is sometimes referred to as an address cache. It is part of the Memory Management Unit (MMU), and the MMU is present in the CPU block. TLB entries are similar to those of the page table. With the inclusion of a TLB, every virtual address is first checked in the TLB for address translation. If it is a TLB miss, then the page table in MM is looked into. Thus a TLB miss does not by itself cause a page fault; a page fault is generated only if the translation misses in the page table too. Since the TLB is an associative address cache in the CPU, a TLB hit provides the fastest possible address translation; the next best is a page hit in the page table; the worst is a page fault.
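The ranking above (TLB hit fastest, then page-table hit) can be quantified with a standard effective-access-time calculation; the timings and TLB hit rate below are illustrative assumptions.

```python
# Illustrative timing assumptions, not figures for any real CPU.
TLB_TIME = 1        # ns for the associative TLB lookup in the CPU
MEM_TIME = 100      # ns for one main-memory access
tlb_hit_rate = 0.95

# TLB hit:  TLB lookup + one memory access for the data itself.
# TLB miss: TLB lookup + one access to the page table + one for the data.
eat = (tlb_hit_rate * (TLB_TIME + MEM_TIME)
       + (1 - tlb_hit_rate) * (TLB_TIME + 2 * MEM_TIME))
print(f"{eat:.1f} ns")   # -> 106.0 ns, vs. 200 ns if every access walked the page table
```

Even a modest TLB hit rate removes most of the second memory reference that address translation would otherwise cost.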
Having discussed the various individual Address translation options, it is to be understood that in
a Multilevel Hierarchical Memory all the functional structures coexist. i.e. TLB, Page Tables,
Segment Tables, Cache (Multiple Levels), Main Memory and Disk. Page Tables can be many
and many levels too, in which case, few Page tables may reside in Disk. In this scenario, what is
the hierarchy of verification of tables for address translation and data service to the CPU? Refer
figure.
Figure: Address translation sequence in a Multilevel Memory with TLB
Address Translation verification sequence starts from the lowest level i.e.
TLB -> Segment / Page Table Level 1 -> Segment / Page Table Level n
Once the address is translated into a physical address, then the data is serviced to CPU. Three
possibilities exist depending on where the data is.
Case 1 - TLB or PT hit and also Cache Hit - Data returned from Cache to CPU
Case 2 - TLB or PT hit and Cache Miss - Data returned from MM to CPU and Cache
Case 3 - Page Fault - Data from disk loaded into a segment/page frame in MM; MM returns data to CPU and Cache
It is simple: in the case of a page hit, either the cache or MM readily provides the data to the CPU, and the existing protocol between cache and MM remains intact. If it is a segment/page fault, the OS handles the routine of loading the required data into main memory. In this case the data is not in the cache either; therefore, while returning the data to the CPU, the cache is updated, treating it as a case of cache miss.
Advantages of Virtual Memory
Generality - the ability to run programs that are larger than the size of physical memory.
Storage management - allocation/deallocation by either segmentation or paging mechanisms.
Protection - regions of the address space in MM can selectively be marked as Read-Only, Execute, etc.
Flexibility - portions of a program can be placed anywhere in main memory without relocation.
Storage efficiency - only the most important portions of the program need be retained in memory.
Concurrent I/O - other processes can execute while a page is being loaded/dumped, which increases overall performance.
Expandability - programs/processes can grow in the virtual address space.
Seamless and better performance for users.