Module 4

The document covers key concepts in memory management within operating systems, including address spaces, swapping, contiguous memory allocation, segmentation, and paging. It explains the importance of efficient memory utilization, process isolation, and the role of the Memory Management Unit (MMU) in translating logical to physical addresses. Additionally, it discusses dynamic loading and linking, fragmentation issues, and various memory allocation strategies.


23 CST 206

OPERATING SYSTEMS
MODULE-IV
Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, 'Operating System
Concepts', 9th Edition, Wiley India, 2015
Memory Management
4.1 Concept of Address spaces
4.2 Swapping
4.3 Contiguous memory allocation, fixed and variable partitions
4.4 Segmentation
4.5 Paging
4.6 TLBs, TLB hits and misses, TLB structure
4.7 Virtual memory, Demand Paging
4.8 Page replacement algorithms
Memory Management
• Memory management is the functionality of an operating system that
handles primary memory.

Objectives:
• Efficient utilization of memory
• Process isolation
• Fast access to memory
• Support for multiprogramming
• Virtual memory implementation
Background
• A program must be brought (from disk) into memory and placed within a
process for it to be run.
• Main memory and registers are the only storage the CPU can access directly.
• The memory unit sees only a stream of addresses: an address plus a read
request, or an address plus data and a write request.
• Register access takes one CPU clock cycle (or less).
• Main memory access can take many cycles, causing a stall.
• A cache sits between main memory and the CPU registers to speed things up.
• Protection of memory is required to ensure correct operation.

Address Space
• Each process should have a separate memory space.
• To ensure this, we use base and limit registers.
• A pair of base and limit registers defines the logical address space.
• The base register holds the smallest legal physical memory address.
• The limit register specifies the size of the range.
• Protection of the memory space is accomplished by having the CPU hardware
compare every address generated in user mode with these registers.
• The base and limit registers can be loaded only by the operating system
(in kernel mode, where both OS and user memory are accessible), which uses
a special privileged instruction.
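The comparison the hardware performs on every user-mode access can be sketched as follows. This is a minimal model, not hardware; the function name and the register values in the comment are illustrative.

```python
# Minimal model of hardware address protection with base/limit registers.
# A legal address satisfies base <= address < base + limit; anything else
# traps to the operating system (modeled here as a raised exception).

def check_access(address: int, base: int, limit: int) -> int:
    """Model the CPU comparing a user-mode address against base/limit."""
    if base <= address < base + limit:
        return address  # legal access: passed through to memory
    raise MemoryError("addressing error: trap to operating system")

# With base = 300040 and limit = 120900 (illustrative values), legal
# addresses run from 300040 up to, but not including, 420940.
```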
Hardware Address Protection
Address Binding
• A program initially resides on disk as a binary executable file.
• To execute the program, it must be loaded into memory and assigned a
memory address within a process.
• During execution, a process may move between disk and memory due to
memory-management strategies.
• The input queue holds processes waiting to be loaded into memory.
• A process executes by accessing instructions and data in memory; when it
terminates, its memory space is released.
Address Representation During Execution
A user program undergoes multiple stages before execution, each with a
different form of address representation:

• Symbolic addresses – found in source code (e.g., the variable count).
• Relocatable addresses – generated by the compiler (e.g., 14 bytes from
the start of the module).
• Absolute addresses – the final memory address after linking and loading
(e.g., 74014).

Each step maps addresses from one form to the next.

Binding of Instructions and Data to Memory
Address binding can occur at different stages:

Compile-Time Binding
• If the process's location in memory is known at compile time, absolute code is generated.
• Example: MS-DOS .COM files are bound at compile time.
• Disadvantage: if the program's memory location changes, it must be recompiled.

Load-Time Binding
• If the memory location is unknown at compile time, the compiler generates relocatable code.
• Final binding occurs at load time, when the program is placed in memory.
• Advantage: the program can be loaded at different locations without recompilation.
• Disadvantage: if the location changes, the program must be reloaded.

Execution-Time Binding
• If a process may move between memory locations during execution, binding is delayed until run time.
• This requires hardware support (a memory-management unit for address maps, e.g., base and
limit registers).
• Most modern operating systems use this method, as it enables dynamic memory allocation and swapping.
Multistep Processing of a User Program
Logical vs. Physical Address Space
• The concept of a logical address space bound to a separate physical
address space is central to proper memory management.
• Logical address – generated by the CPU; also referred to as a virtual address.
• Physical address – the address seen by the memory unit.
• Logical and physical addresses are the same in compile-time and load-time
address-binding schemes.
• Logical (virtual) and physical addresses differ in the execution-time
address-binding scheme.
• The logical address space is the set of all logical addresses generated by
a program.
• The physical address space is the set of all physical addresses
corresponding to those logical addresses.
Memory-Management Unit (MMU)
• A hardware device that maps virtual to physical addresses at run time.
• To start, consider a simple scheme in which the value in the relocation
register is added to every address generated by a user process at the time
it is sent to memory.
• The base register is now called the relocation register.
• MS-DOS on the Intel 80x86 used 4 relocation registers.
• The user program deals with logical addresses; it never sees the real
physical addresses.
• Execution-time binding occurs when a reference is made to a location in
memory; only then is the logical address bound to a physical address.
• The MMU translates logical addresses to physical addresses dynamically,
using a relocation register (base register).
• Example:
  • Logical address: 346
  • Relocation register: 14000
  • Physical address = 14000 + 346 = 14346
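The example above can be reproduced with a one-line model of the relocation scheme (the function name is illustrative, not from the slides):

```python
# Minimal model of dynamic relocation: the MMU adds the relocation
# (base) register to every logical address the CPU generates.

def mmu_translate(logical_address: int, relocation_register: int) -> int:
    return relocation_register + logical_address

mmu_translate(346, 14000)  # -> 14346, matching the example above
```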
Dynamic Loading
• A routine is not loaded until it is called.
• Better memory-space utilization; an unused routine is never loaded.
• All routines are kept on disk in relocatable load format.
• Useful when large amounts of code are needed to handle infrequently
occurring cases.
• No special support from the operating system is required; dynamic loading
is implemented through program design.
• The OS can help by providing libraries that implement dynamic loading.
Dynamic Linking
• Traditional (static) linking: libraries are combined with the program at
load time.
• Dynamic linking: linking is delayed until execution time.
• Purpose: saves memory and disk space by allowing multiple programs to
share a single copy of a library.

Static vs. Dynamic Linking

Feature             Static Linking              Dynamic Linking
Library inclusion   Included in each program    Loaded at runtime
Memory usage        More memory needed          Shared among processes
Disk space          Large executable files      Smaller executables
Update mechanism    Requires recompilation      Libraries can be updated independently

The stub is a small piece of code that indicates how to locate the
appropriate memory-resident library routine, or how to load the library if
the routine is not already present.
How Dynamic Linking Works
• The program executable contains stubs instead of full library functions.
• When a stub is executed, it:
  • checks whether the required library is already in memory;
  • loads the library if it is not present;
  • replaces itself with the address of the actual routine.
• Subsequent calls to the function directly execute the loaded routine.
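The stub behaviour above can be sketched as a toy model. All names here (the library name, the routine, the `Stub` class) are illustrative stand-ins, not a real dynamic-linker API; the "patching" step is modeled by caching the resolved routine.

```python
# Toy model of stub-based dynamic linking. The first call through the
# stub loads the library routine and caches its address; subsequent
# calls go straight to the loaded routine.

loaded_libraries = {}  # stand-in for memory-resident shared libraries

def load_library(name):
    """Pretend to map a shared library into memory."""
    loaded_libraries[name] = lambda x: x * 2  # a dummy routine
    return loaded_libraries[name]

class Stub:
    def __init__(self, lib_name):
        self.lib_name = lib_name
        self.routine = None            # not yet resolved

    def __call__(self, arg):
        if self.routine is None:       # first call: resolve and "patch"
            self.routine = (loaded_libraries.get(self.lib_name)
                            or load_library(self.lib_name))
        return self.routine(arg)       # later calls: direct dispatch

double = Stub("libdemo")
double(21)  # first call loads "libdemo", then runs the routine
```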
Swapping
Swapping Cont..
• A process must be in memory to be executed.
• A process can, however, be swapped temporarily out of memory to a backing
store and then brought back into memory for continued execution.
• The memory manager starts to swap out the process that just finished and
to swap another process into the memory space that has been freed.
• The CPU scheduler will allocate a time slice to some other process in memory.
• Classic examples:
  • Round-robin: the process whose quantum expired is swapped out and
another one is swapped in.
  • Priority scheduling: a lower-priority process is rolled out and a
higher-priority process is rolled in.
• The memory manager is the component that performs swapping.
Swapping Cont..
• The major part of swap time is transfer time.
• Total transfer time is directly proportional to the amount of memory swapped.
• The system maintains a ready queue of ready-to-run processes whose memory
images are on disk.

Address Binding and Swapping Behaviour
• If a process is bound to a memory address at assembly or load time, it
must be swapped back into the same memory location.
• If execution-time binding (via a memory-management unit, MMU) is used,
the process can be swapped into a different memory location, allowing more
flexibility.
Swapping Cont..
Backing Store for Swapping
• Swapping requires a fast disk (backing store) to temporarily hold processes.
• The backing store should be large and provide direct access to process images.
• The CPU scheduler selects a process to execute, and the dispatcher ensures
it is in memory. If it is not, another process is swapped out to bring it in.

Context-Switch Time in Swapping
• Swapping introduces significant overhead due to data transfer.
• Example:
  • If a process is 100 MB and the disk transfers 50 MB/sec, one transfer
takes 2 seconds.
  • Adding an 8 ms latency per transfer, the total swap time (swap-out +
swap-in) is about 4 seconds.
  • Larger processes take much longer (e.g., a 3 GB process needs about
60 seconds per transfer).
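The arithmetic in the example can be checked with a small helper (the function name is illustrative; the figures are the ones from the text):

```python
# Swap-time arithmetic from the example: each direction costs
# (size / transfer rate) + latency, and a full swap is out + in.

def swap_time_seconds(process_mb, transfer_mb_per_s, latency_s=0.008):
    one_way = process_mb / transfer_mb_per_s + latency_s
    return 2 * one_way  # swap-out plus swap-in

swap_time_seconds(100, 50)  # 2 * (2 s + 8 ms) = 4.016 s, "about 4 seconds"
```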
Swapping Cont..
Optimizing Swapping
• Since actual memory usage varies among processes, swapping entire
allocated memory is inefficient.
• The system should track real memory usage and only swap used
memory pages, significantly reducing swap time.
• Dynamic memory allocation using system calls allows processes to
request and release memory as needed, improving efficiency.

Practical Implications
• Execution-time binding improves flexibility in memory allocation.
• Minimizing swap size improves system responsiveness and reduces
context-switch overhead.
• Using paging and demand paging instead of full swapping can
further enhance efficiency.
Contiguous Memory Allocation
• Main memory must support both the OS and user processes.
• It is usually divided into two partitions: one for the resident operating
system and one for the user processes.
• We can place the operating system in either low memory (lower-order
address space) or high memory (higher-order address space).
• The resident operating system is usually held in low memory, together with
the interrupt vector.
• User processes are then held in high memory.
• Each process is contained in a single contiguous section of memory.

Contiguous Memory Allocation Cont..
Memory Mapping and Protection
Contiguous Memory Allocation Cont..
Memory Allocation
• The simplest method for allocating memory is to divide memory into
several fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.

Multiple-partition allocation
Contiguous Memory Allocation Cont..
Multiple-partition allocation
• The degree of multiprogramming is limited by the number of partitions.
• Variable-sized partitions improve efficiency (each is sized to a given
process's needs).
• Hole – a block of available memory; holes of various sizes are scattered
throughout memory.
• When a process arrives, it is allocated memory from a hole large enough
to accommodate it.
• When a process exits, it frees its partition; adjacent free partitions
are combined.
• The operating system maintains information about:
  a) allocated partitions  b) free partitions (holes)
Contiguous Memory Allocation Cont..
Dynamic Storage-Allocation Problem
• How do we satisfy a request of size n from a list of free holes?
• First fit: allocate the first hole that is big enough.
• Best fit: allocate the smallest hole that is big enough; the entire list
must be searched, unless it is ordered by size. Produces the smallest
leftover hole.
• Worst fit: allocate the largest hole; the entire list must also be
searched. Produces the largest leftover hole.
• First fit and best fit are better than worst fit in terms of speed and
storage utilization.
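The three strategies can be sketched over a free-hole list as follows. The hole list and request sizes are made up for illustration; each function returns the index of the chosen hole, or None if no hole is large enough.

```python
# Placement strategies over a list of free holes, each a (start, size) pair.

def first_fit(holes, n):
    """Return the index of the first hole of size >= n."""
    for i, (_, size) in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole of size >= n."""
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Return the index of the largest hole."""
    candidates = [(size, i) for i, (_, size) in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
# For a request of 150 KB:
#   first fit picks index 1 (the 500 KB hole, first big enough),
#   best fit picks index 2 (the 200 KB hole, smallest adequate),
#   worst fit picks index 1 (the 500 KB hole, largest overall).
```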
Contiguous Memory Allocation Cont..
Fragmentation
• External fragmentation – enough total memory space exists to satisfy a
request, but it is not contiguous.
• Internal fragmentation – allocated memory may be slightly larger than the
requested memory; this size difference is memory internal to a partition
that goes unused.
• Statistical analysis of first fit reveals that, even with some
optimization, given N allocated blocks, another 0.5 N blocks will be lost
to fragmentation. That is, one-third of memory may be unusable. This
property is known as the 50-percent rule.
• External fragmentation can be reduced by compaction:
  • Shuffle memory contents to place all free memory together in one large
block.
  • Compaction is possible only if relocation is dynamic and is done at
execution time.
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
• A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
Segmentation Cont…
User’s View of a Program
Segmentation Cont…
Logical View of Segmentation
[Figure: numbered segments in the user space mapped into physical memory space.]
• Segmentation divides a program's memory into segments, each with a
specific purpose (e.g., code, data, stack).
Segmentation Cont…
Segmentation Architecture
• A logical address consists of a two-tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical addresses to one-dimensional
physical addresses. Each table entry has:
  • base – the starting physical address where the segment resides in memory
  • limit – the length of the segment
• The segment-table base register (STBR) points to the segment table's
location in memory.
• The segment-table length register (STLR) indicates the number of segments
used by a program; a segment number s is legal if s < STLR.
Segmentation Cont…
Protection
• With each entry in the segment table, associate:
  • a validation bit (validation bit = 0 ⇒ illegal segment)
  • read/write/execute privileges
• Protection bits are associated with segments; code sharing occurs at the
segment level.
• Since segments vary in length, memory allocation is a dynamic
storage-allocation problem.
• A segmentation example is shown in the following diagram.
Segmentation Cont…
• Segmentation Hardware
Segmentation Cont…
• For segment 2, starting at base 4300 and with a length of 400 bytes:
  • Byte 53 maps to 4300 + 53 = 4353.
• For segment 3, starting at base 3200:
  • Byte 852 maps to 3200 + 852 = 4052.
• If the offset exceeds the segment's limit, it triggers a trap to the
operating system (e.g., byte 1222 in segment 0, which is limited to
1000 bytes).
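The translations above can be reproduced with a small segment-table model. The base and limit for segments 2 and 3 and the 1000-byte limit of segment 0 come from the example; segment 0's base (1400) and segment 3's limit (1100) are assumed here for illustration.

```python
# Segment-table translation: physical address = base + offset,
# with a trap if the offset reaches the segment's limit.

segment_table = {
    0: (1400, 1000),  # (base, limit); base 1400 is an assumed value
    2: (4300, 400),
    3: (3200, 1100),  # limit 1100 is an assumed value
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit: trap to OS")
    return base + offset

translate(2, 53)   # -> 4353, as in the example
translate(3, 852)  # -> 4052
# translate(0, 1222) would trap: segment 0 is limited to 1000 bytes
```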
Segmentation Cont…
Advantages:
• It aligns with how users view memory (logical divisions like functions
or arrays).
• It enables sharing and protection of segments between processes.

Disadvantages:
• It can lead to external fragmentation.
• It requires additional overhead for maintaining segment tables.