
Operating System Notes

Chapter 6 covers CPU scheduling concepts, including the role of the CPU scheduler, dispatcher, and various scheduling algorithms such as FCFS, SJF, and Round Robin, emphasizing their advantages and disadvantages. Chapter 8 discusses memory management, focusing on address binding, the Memory-Management Unit (MMU), and techniques like paging and fragmentation, along with their respective solutions. Key takeaways include the importance of preemptive scheduling and the efficiency of paging in reducing fragmentation.

Uploaded by

5124rabia

🎯 Chapter 6: CPU Scheduling – Detailed Summary

🔹 1. Basic Concepts
- Goal: Achieve maximum CPU utilization through multiprogramming.
- CPU-I/O Burst Cycle:
  - A process alternates between:
    - CPU bursts (computation)
    - I/O bursts (waiting for input/output)
  - This behavior is predictable and can be charted as a histogram of CPU-burst lengths.

🔹 2. CPU Scheduler
- Definition: Selects a process from the ready queue in memory and allocates the CPU to it.
- Scheduling decisions can occur when a process switches:
  1. Running → Waiting (e.g., I/O request)
  2. Running → Ready (preemption)
  3. Waiting → Ready (I/O completion)
  4. Running → Terminated
- Preemptive: cases 2 & 3
- Non-preemptive: cases 1 & 4

🔹 3. Dispatcher
- Role: Gives control of the CPU to the process selected by the scheduler.
- Performs:
  - Context switch
  - Switch to user mode
  - Jump to the proper location in the user program to resume it
- Dispatch latency: the time taken to perform the above steps.

🔹 4. Scheduling Criteria (What Makes a Good Scheduler?)


Criteria            Goal
CPU Utilization     Maximize
Throughput          Maximize
Turnaround Time     Minimize
Waiting Time        Minimize
Response Time       Minimize (esp. for time-sharing systems)

🔹 5. Scheduling Algorithms
✅ a. First-Come, First-Served (FCFS)

- Non-preemptive.
- Processes are served in order of arrival.
- Problem: long processes delay short ones (the convoy effect).
- Example:
  - Order P1(24), P2(3), P3(3) → avg wait = 17 ms
  - Order P2, P3, P1 → avg wait = 3 ms
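The two orderings above can be checked with a short sketch (not from the notes), assuming all processes arrive at t = 0:

```python
# Average waiting time under FCFS, assuming all processes arrive at t = 0.
def fcfs_avg_wait(bursts):
    elapsed, total_wait = 0, 0
    for b in bursts:
        total_wait += elapsed  # this process waited for everything before it
        elapsed += b
    return total_wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # order P1, P2, P3 -> 17.0
print(fcfs_avg_wait([3, 3, 24]))  # order P2, P3, P1 -> 3.0
```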

✅ b. Shortest Job First (SJF)

- Non-preemptive.
- Chooses the process with the smallest next CPU burst.
- Gives the optimal (minimum) average waiting time.
- Requires predicting the next burst, e.g., by exponential averaging:

  τ_{n+1} = α · t_n + (1 − α) · τ_n

  where:
  - τ_n: predicted length of the n-th burst
  - t_n: actual length of the n-th burst
  - α: prediction weight (0 ≤ α ≤ 1)
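The exponential-averaging recurrence can be sketched as follows (the initial guess τ₀ = 10 and the burst sequence are illustrative, not from the notes):

```python
# Exponential averaging of CPU-burst lengths:
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_bursts(tau0, actual_bursts, alpha=0.5):
    tau = tau0
    predictions = []
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau  # blend actual with old prediction
        predictions.append(tau)
    return predictions

print(predict_bursts(10, [6, 4, 6, 4]))  # [8.0, 6.0, 6.0, 5.0]
```

With α = 0.5, each prediction is the midpoint of the last actual burst and the previous prediction; α = 1 would trust only the most recent burst, and α = 0 would never update.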

✅ c. Shortest Remaining Job First (SRJF)

- Preemptive version of SJF (also called Shortest Remaining Time First, SRTF).
- If a newly arrived process has a shorter remaining time than the running one, the running process is preempted.
- Example:
  - Processes: P1(8), P2(4), P3(9), P4(5)
  - Arrival times cause preemption → avg wait = 6.5 ms
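A tick-by-tick sketch of SRJF. The notes give the bursts but not the arrival times; t = 0, 1, 2, 3 is assumed here, the standard textbook arrangement that yields an average wait of 6.5 ms:

```python
# SRJF/SRTF simulation: at every time unit, run the ready process
# with the shortest remaining burst (arrivals 0,1,2,3 are assumed).
def srtf_avg_wait(procs):  # procs: list of (arrival, burst)
    remaining = {i: b for i, (a, b) in enumerate(procs)}
    t, finish = 0, {}
    while remaining:
        ready = [i for i in remaining if procs[i][0] <= t]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1  # run it for one tick
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            del remaining[i]
    waits = [finish[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(waits)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 6.5
```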

✅ d. Priority Scheduling

- Each process is assigned a priority number.
- Lower number = higher priority (by this convention).
- Can be:
  - Preemptive (a high-priority arrival interrupts the running low-priority process)
  - Non-preemptive
- Starvation risk for low-priority processes.
- Aging: gradually increase a waiting process's priority over time to prevent starvation.

✅ e. Round Robin (RR)

- Each process gets a time quantum q, typically 10–100 ms.
- If not finished within q, it is preempted and sent to the back of the ready queue.
- Ensures fairness; good for time-sharing.
- q too small → too many context switches → high overhead.
- q too large → degenerates into FCFS.
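A sketch of RR on the FCFS example P1(24), P2(3), P3(3). The quantum q = 4 ms is hypothetical (the notes say q is typically 10–100 ms); it is chosen small so the preemption is visible:

```python
from collections import deque

# Round Robin: run each process for at most q, then move it to the
# back of the ready queue. All processes are assumed to arrive at t = 0.
def rr_waits(bursts, q):
    queue = deque(bursts.items())
    t, finish = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        t += run
        if rem - run > 0:
            queue.append((name, rem - run))  # preempted: back of the queue
        else:
            finish[name] = t
    # arrival at t = 0, so waiting time = finish time - burst length
    return {name: finish[name] - b for name, b in bursts.items()}

print(rr_waits({"P1": 24, "P2": 3, "P3": 3}, q=4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```

Average wait is 17/3 ≈ 5.67 ms here, better than FCFS's 17 ms for the same arrival order.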

✅ f. Multilevel Queue Scheduling

- Multiple queues based on process type, each with its own algorithm:
  - Foreground (interactive) → RR
  - Background (batch) → FCFS
- Fixed priority across queues → starvation possible.
- Alternatively, time-slice across queues: e.g., 80% of CPU to foreground, 20% to background.

✅ g. Multilevel Feedback Queue

- Processes move between queues based on behavior:
  - A process that uses too much CPU is demoted to a lower queue.
  - A process that waits too long is promoted (aging).
- Example:
  - Q0: RR (q = 8 ms), Q1: RR (q = 16 ms), Q2: FCFS

🎯 Chapter 8: Memory Management – Detailed Summary

🔹 1. Background
- Only main memory (RAM) and registers can be accessed directly by the CPU.
- Cache sits between registers and main memory to speed up access.
- Programs must be loaded into memory before execution.

🔹 2. Address Binding
Binding Time     Address Type         Behavior
Compile-time     Physical             Absolute addresses assigned at compile time
Load-time        Physical             Addresses assigned when the program is loaded
Execution-time   Logical → Physical   Requires MMU for run-time mapping

- Logical Address: the address seen by the program
- Physical Address: the actual address in memory

🔹 3. Memory-Management Unit (MMU)


- Translates logical addresses to physical addresses dynamically at run time.
- Uses:
  - Base Register (Relocation Register): starting physical address of the process
  - Limit Register: size of the valid logical address range
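A minimal sketch of base/limit translation (the register values 14000 and 1000 are hypothetical):

```python
# MMU translation with a relocation (base) register and a limit register:
# a logical address is valid only if it is below the limit.
def translate(logical, base, limit):
    if not (0 <= logical < limit):
        raise MemoryError(f"trap: address {logical} outside limit {limit}")
    return base + logical  # relocate into physical memory

print(translate(300, base=14000, limit=1000))  # 14300
# translate(1200, base=14000, limit=1000) would trap to the OS
```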

🔹 4. Contiguous Memory Allocation


- One contiguous block of memory per process.
- Memory is divided into:
  - OS (low memory)
  - User processes (high memory)
- Dynamic allocation strategies for choosing a hole:
  1. First-Fit: first hole big enough
  2. Best-Fit: smallest hole big enough
  3. Worst-Fit: largest hole
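The three strategies can be sketched on a hypothetical free list (the hole sizes and the 212-byte request are illustrative, not from the notes):

```python
# Each function returns the index of the chosen hole, or None if no fit.
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None  # smallest hole that fits

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None  # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# -> 1 3 4  (the holes of size 500, 300, and 600)
```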

🔹 5. Fragmentation
Type       Cause                                Fix
External   Enough total space, but scattered    Compaction (shuffle memory contents)
Internal   Allocated block > requested size     Smaller allocation units (e.g., paging)

- Compaction is only possible if dynamic (execution-time) relocation is supported.

🔹 6. Paging
- Non-contiguous memory allocation.
- Divide:
  - Logical memory → pages
  - Physical memory → frames
- Pages and frames are the same size; a page table maps pages to frames.

✅ Paging Example

- Logical address: m bits
- Page size: 2^n
- Split the logical address:
  - Page number (p) = high-order m − n bits
  - Offset (d) = low-order n bits
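The split above is just bit arithmetic. A sketch, assuming m = 16 address bits and a page size of 2^12 = 4096 bytes (so n = 12):

```python
N = 12  # offset bits (page size 2^12 = 4096 bytes)

def split(addr):
    page = addr >> N                 # high-order m - n bits
    offset = addr & ((1 << N) - 1)   # low-order n bits
    return page, offset

print(split(0x1ABC))  # (1, 2748): page 1, offset 0xABC
```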

✅ Page Table Details

- Stored in main memory.
- Each memory reference then needs two accesses:
  - 1 to read the page-table entry
  - 1 for the actual data → slow
- Fix: TLB (Translation Lookaside Buffer)
  - A fast hardware cache that holds recent page-table entries
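The cost of the extra access can be quantified as an effective access time (EAT). A sketch with hypothetical timings (100 ns memory access, 20 ns TLB lookup, 80% hit ratio; none of these numbers are from the notes):

```python
# Effective access time with a TLB: a hit needs one memory access,
# a miss needs a page-table access plus the data access.
def eat(hit_ratio, mem_ns, tlb_ns):
    hit_cost = tlb_ns + mem_ns        # TLB hit: translation + data
    miss_cost = tlb_ns + 2 * mem_ns   # miss: page table + data
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(eat(0.80, mem_ns=100, tlb_ns=20))  # 140.0 ns
```

Without the TLB, every reference would cost 200 ns; the TLB brings the average down to 140 ns in this example.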

🔹 7. Protection & Validity


- Each page-table entry has:
  - Protection bits (read, write, execute)
  - Valid/Invalid bit:
    - Valid → the page is in the process's legal address space
    - Invalid → outside the process's address space

🔹 8. Shared Pages
- Shared code: one copy shared among processes
  - Must appear at the same location in each process's logical address space
- Private code/data: separate copies per process

🔹 9. Page Table Structures


✅ a. Hierarchical Paging

- Breaks a large page table into multiple levels
- Example: two-level paging (an outer page table indexes the inner page tables)

✅ b. Hashed Page Tables

- Ideal for large address spaces (> 32 bits)
- The virtual page number is hashed into a table whose entries map to physical frames

✅ c. Inverted Page Table

- One entry per physical frame, not per virtual page
- Each entry contains:
  - Process ID
  - Virtual page number
- Reduces the memory used by page tables (at the cost of slower lookups)

🔹 10. Linux Memory Management


- 3-level paging (as presented for 32-bit systems):
  - Page Global Directory (PGD)
  - Page Middle Directory (PMD) – may be omitted on hardware without three levels
  - Page Table
- Data structures:
  - mm_struct → per-process memory descriptor
  - task_struct → process control block
💡 Final Pro-Tips:
- Preemptive algorithms: SRJF, Priority (preemptive), RR
- Non-preemptive: FCFS, SJF (non-preemptive)
- TLB = speed boost for address translation
- Paging solves external fragmentation (though it can introduce internal fragmentation in a process's last page)
- Segmentation (not discussed deeply here) matches the program's logical structure but suffers from external fragmentation
