Operating System Notes

Uploaded by vansh tomar

Operating System Notes – Index

Basic Concepts

1. Main Purpose of Operating System (OS)

2. Types of Operating Systems

3. Socket, Kernel, and Monolithic Kernel

4. Difference Between Process, Program, and Thread

5. Types of Processes

6. Virtual Memory, Thrashing, and Threads

7. RAID – Definition and Types

8. Deadlock – Definition and Conditions

9. Fragmentation – Definition and Types

Memory Management

10. Spooling

11. Semaphore and Mutex (with Differences)

12. Binary Semaphore

13. Belady’s Anomaly

14. Starvation and Aging

15. Thrashing – Causes and Effects

16. Paging – Concept and Need

17. Demand Paging

18. Segmentation

19. Real-Time Operating System (RTOS) and Its Types

20. Main Memory vs Secondary Memory

21. Dynamic Binding

CPU Scheduling Algorithms

22. FCFS Scheduling (First-Come, First-Served)

23. SJF Scheduling (Shortest Job First)

24. SRTF Scheduling (Shortest Remaining Time First)

25. LRTF Scheduling (Longest Remaining Time First)


26. Priority Scheduling

27. Round Robin Scheduling

Process Synchronization and Deadlock Avoidance

28. Producer-Consumer Problem

29. Banker’s Algorithm

Memory and System Architecture

30. Cache Memory – Definition, Purpose, and Levels

31. Direct Mapping vs Associative Mapping

32. Multitasking vs Multiprocessing

1. What is the main purpose of an Operating System (OS)?

The main purpose of an operating system is to act as an intermediary between users and hardware.
It manages hardware resources, provides a user interface, and allows execution of programs
efficiently.

Key functions:

• Process management

• Memory management

• File system management

• Device management

• Security and access control

• User interface

2. Types of Operating Systems:

• Batch OS: Executes batches of jobs with no user interaction (e.g., early IBM OS).

• Time-Sharing OS: Multiple users access the system simultaneously via time slots.

• Distributed OS: Manages a group of independent systems and makes them appear as a
single system.

• Network OS: Allows systems to communicate and share resources over a network.

• Real-Time OS: Processes data instantly; used in critical systems like avionics.

• Mobile OS: Optimized for smartphones and tablets (e.g., Android, iOS).
3. What is a Socket, Kernel, and Monolithic Kernel?

• Socket:
An endpoint for communication between two machines (or processes) over a network. It
allows programs to send/receive data via IP and port combinations.

• Kernel:
The core component of the OS responsible for interacting with hardware and managing
system resources like CPU, memory, and devices.

• Monolithic Kernel:
A type of kernel where the entire OS runs in kernel space, including drivers, file system, etc.
Examples: Linux, Unix.

4. Difference Between Process, Program, and Thread:

Concept | Description
Program | A passive set of instructions stored on disk (e.g., a .exe file).
Process | A running instance of a program with its own memory and resources.
Thread | A lightweight process within a process; shares memory with other threads in the same process.

5. Types of Processes:

• Foreground Process: Requires user interaction (e.g., browser).

• Background Process: Runs without user interaction (e.g., antivirus).

• Daemon Process: Background process on Unix-like systems (e.g., sshd).

• Zombie Process: Completed process that still has an entry in the process table.

• Orphan Process: Parent process dies while child continues running.

6. Define: Virtual Memory, Thrashing, Threads

• Virtual Memory:
Technique that uses disk space as extra RAM, allowing large programs to run even with
limited physical memory.

• Thrashing:
Occurs when the OS spends more time swapping between RAM and disk than executing
processes, reducing performance.
• Threads:
The smallest unit of execution within a process. Threads in a process share the same address
space but execute independently.

7. What is RAID? Types of RAID

RAID (Redundant Array of Independent Disks): A method of combining multiple hard drives to
improve performance and/or data redundancy.

Types of RAID:

• RAID 0: Striping, no redundancy, faster (no fault tolerance).

• RAID 1: Mirroring, high reliability, duplicate copies.

• RAID 5: Block-level striping with parity, balanced performance and fault tolerance.

• RAID 6: Like RAID 5 but with double parity, can tolerate two disk failures.

• RAID 10 (1+0): Combines mirroring and striping, high performance and redundancy.

8. What is a Deadlock? Conditions for Deadlock

Deadlock: A situation where a set of processes are waiting indefinitely for resources held by each
other.

Four Necessary Conditions:

1. Mutual Exclusion – Only one process can use a resource at a time.

2. Hold and Wait – A process holds resources while waiting for others.

3. No Preemption – Resources cannot be forcibly taken from a process.

4. Circular Wait – A set of processes are waiting in a circular chain.

9. What is Fragmentation? Types of Fragmentation

Fragmentation: Wastage of memory due to inefficient allocation.

Types:

• Internal Fragmentation: Wasted space within allocated memory blocks (e.g., block too big
for data).

• External Fragmentation: Wasted space between allocated memory blocks, making large
contiguous blocks unavailable.

10. What is Spooling?

Spooling (Simultaneous Peripheral Operations Online) is a technique where data meant for
input/output (I/O) devices is temporarily stored in a buffer or disk to be accessed and executed
sequentially by the I/O device.
Example: When you print multiple documents, they are first queued (spooled) in the print
spooler and sent one by one to the printer.

11. What is Semaphore and Mutex? (Differences)

Semaphore

• A synchronization tool that uses an integer variable to control access to shared resources.

• Can be binary (0 or 1) or counting (>1).

• Allows multiple threads/processes access (depending on the count).

• Used with wait() and signal() operations.

Mutex (Mutual Exclusion Object)

• A lock that ensures only one thread accesses a critical section at a time.

• Strict ownership: only the thread that locked it can unlock it.

• Mostly used in multi-threaded environments.

Key Differences:

Aspect | Semaphore | Mutex
Value Range | Binary or counting | Always binary
Ownership | No ownership | Owned by the thread that locked it
Access Control | Allows multiple (count-based) | Only one at a time
Use Case | Process & thread synchronization | Thread locking in critical sections
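The difference is easy to see with Python's threading primitives. Below is a minimal sketch (the thread count, semaphore count of 2, and sleep duration are arbitrary illustrative choices): the counting semaphore admits up to two workers at once, while the mutex guards the shared counters one thread at a time.

```python
import threading
import time

sem = threading.Semaphore(2)   # counting semaphore: admits up to 2 threads
mutex = threading.Lock()       # mutex: strictly one holder at a time
active = 0                     # threads currently inside the semaphore section
peak = 0                       # highest concurrency observed

def worker():
    global active, peak
    with sem:                  # up to two workers may be in here simultaneously
        with mutex:            # the counters themselves need exclusive access
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)       # simulate work while holding the semaphore
        with mutex:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                    # never exceeds the semaphore's count of 2
```

Even with five workers, `peak` stays at or below 2: the semaphore's count, not a single lock, bounds the concurrency.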

12. What is a Binary Semaphore?

A binary semaphore is a type of semaphore that can only take two values:

• 0: Locked or unavailable

• 1: Unlocked or available

It is similar to a mutex but has no ownership, so any thread can release it.

13. What is Belady’s Anomaly?

Belady’s Anomaly occurs when increasing the number of page frames in memory increases the
number of page faults, which is unexpected.

Most commonly seen in FIFO (First-In-First-Out) page replacement algorithm.
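The anomaly can be reproduced with a short FIFO simulation. The reference string below is the classic textbook example: with 3 frames it causes 9 page faults, but with 4 frames it causes 10.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with a given frame count."""
    memory = deque()               # FIFO queue of resident pages
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the page loaded earliest
            memory.append(page)
    return faults

# Classic reference string that exhibits Belady's Anomaly
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more frames, yet more faults
```

Algorithms such as LRU do not exhibit this anomaly, because the set of pages kept with k frames is always a subset of the pages kept with k+1 frames.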


14. What is Starvation and Aging in OS?

• Starvation: A condition where a low-priority process never gets scheduled because higher-priority processes keep executing.

• Aging: A technique to prevent starvation by gradually increasing the priority of waiting processes over time.

15. Why Does Thrashing Occur?

Thrashing happens when a system is spending more time swapping pages in and out than executing
actual processes.

Causes:

• Too many processes

• Not enough RAM

• High degree of page faults

Result: Drastic drop in system performance.

16. What is Paging and Why Do We Need It?

Paging is a memory management scheme where:

• Logical memory is divided into pages.

• Physical memory is divided into frames.

• Pages are mapped to frames using a page table.

Why it's needed:

• Eliminates external fragmentation.

• Allows non-contiguous memory allocation.

• Simplifies memory management.

How it works:

• When a process runs, its pages are loaded into available frames.

• If a page is not in memory, a page fault occurs and it's fetched from secondary storage.
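The page-table lookup above can be sketched as a simple address translation. The 4 KiB page size and the page-to-frame mappings below are hypothetical values chosen for illustration:

```python
PAGE_SIZE = 4096                   # assumed 4 KiB pages
page_table = {0: 5, 1: 9, 2: 1}    # page number -> frame number (hypothetical)

def translate(logical_addr):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    if page not in page_table:
        raise KeyError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 9 -> 36868
```

Because only the page number changes in the mapping, the offset passes through untouched, which is why pages and frames must be the same size.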

17. What is Demand Paging?

Demand Paging is a memory management technique where pages are loaded into memory only
when they are required (on demand), instead of loading the entire process at once.

How it works:
• If a required page is not in memory → page fault occurs → OS loads it from disk.

Benefits:

• Saves memory.

• Reduces load time.

• Allows larger programs to run with less RAM.

18. What is Segmentation?

Segmentation is a memory management technique where memory is divided into variable-sized logical segments based on program structure (such as code, stack, heap, and data).

Each logical address consists of:

• A segment number

• An offset within that segment

A segment table maps each segment to its base address and limit in physical memory.

Advantage: Provides logical separation and protection between parts of a program.

19. What is a Real-Time Operating System (RTOS)?

A Real-Time Operating System is an OS designed to process data and execute tasks within a
guaranteed time constraint.

Types of RTOS:

1. Hard RTOS:

o Strict timing constraints.

o Missing a deadline is a system failure.

o Examples: Aircraft control systems, medical devices.

2. Soft RTOS:

o Timing constraints are less strict.

o Occasional deadline misses are tolerable.

o Examples: Multimedia systems, online games.

20. Difference between Main Memory and Secondary Memory:

Feature | Main Memory (RAM) | Secondary Memory (HDD/SSD)
Speed | Very fast | Slower
Volatility | Volatile (data lost on shutdown) | Non-volatile
Cost | Expensive per GB | Cheaper per GB
Use | Temporary storage for execution | Permanent data storage
Accessibility | Directly accessed by CPU | Accessed via I/O interfaces

21. What is Dynamic Binding?

Dynamic Binding (Late Binding) is when the method or function to be executed is determined at
runtime, not at compile time.

Common in object-oriented languages (e.g., C++, Java) using polymorphism.

22. FCFS Scheduling (First-Come, First-Served):

• Simplest CPU scheduling algorithm

• Processes are executed in the order they arrive.

• Non-preemptive.

Pros: Easy to implement.

Cons: Poor average waiting time; Convoy Effect (short processes wait behind long ones).
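The convoy effect is easy to demonstrate with a small simulation. The (arrival, burst) pairs below are hypothetical: one long job arriving first forces the short jobs to wait almost its entire burst.

```python
def fcfs(processes):
    """processes: list of (arrival, burst) pairs; returns average waiting time."""
    processes = sorted(processes)       # serve strictly in arrival order
    time, total_wait = 0, 0
    for arrival, burst in processes:
        time = max(time, arrival)       # CPU may sit idle until the job arrives
        total_wait += time - arrival    # waiting time = start time - arrival
        time += burst
    return total_wait / len(processes)

# Convoy effect: a 24-unit job arriving first makes two 3-unit jobs wait
print(fcfs([(0, 24), (1, 3), (2, 3)]))   # 16.0
```

Reordering the same workload so the short jobs run first would cut the average waiting time dramatically, which is exactly the idea behind SJF below.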

23. SJF Scheduling (Shortest Job First):

• The process with the shortest burst time is executed first.

• Can be preemptive or non-preemptive.

Pros: Gives the best (minimum) average waiting time.

Cons: Burst times are difficult to know in advance; longer jobs may starve.

24. SRTF Scheduling (Shortest Remaining Time First):

• Preemptive version of SJF

• If a new process arrives with less remaining time than the current process, preemption
occurs.

Pros: Very efficient at reducing average waiting time.

Cons: May cause starvation of longer processes.

25. LRTF Scheduling (Longest Remaining Time First):


• Opposite of SRTF: process with the longest remaining burst time is selected next.

• Preemptive.

Rarely used due to inefficiency: long processes dominate the CPU, and short processes suffer poor turnaround time.

26. Priority Scheduling

Priority Scheduling is a CPU scheduling algorithm where each process is assigned a priority, and the
CPU is allocated to the process with the highest priority.

• Can be preemptive or non-preemptive

• Lower numerical value often means higher priority

Pros:

• Important tasks can be handled faster


Cons:

• Starvation: low-priority processes may never execute

• Aging is used to solve starvation

27. Round Robin Scheduling

Round Robin (RR) is a preemptive scheduling algorithm where:

• Each process gets a fixed time slice (quantum) in a cyclic order

• After its turn, the process goes to the end of the queue if not completed

Pros:

• Fair and simple

• Ideal for time-sharing systems


Cons:

• Performance depends on time quantum size

o Too small → high overhead

o Too large → behaves like FCFS
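The cyclic behaviour can be sketched with a short simulation. The burst times and quantum below are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes all arriving at t=0; return completion times."""
    ready = deque(enumerate(bursts))          # (pid, burst) in arrival order
    remaining = list(bursts)
    completion = [0] * len(bursts)
    time = 0
    while ready:
        pid, _ = ready.popleft()
        run = min(quantum, remaining[pid])    # run for one quantum at most
        time += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append((pid, remaining[pid]))   # back of the queue
        else:
            completion[pid] = time
    return completion

print(round_robin([5, 3, 8], quantum=2))   # [12, 9, 16]
```

Re-running with `quantum=10` makes every job finish in a single turn, so the schedule degenerates into FCFS, exactly as noted above.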

28. Producer-Consumer Problem

This is a classic synchronization problem involving:

• A producer that creates data and adds it to a bounded buffer

• A consumer that removes data from the buffer and processes it

Constraints:
• Producer must wait if the buffer is full

• Consumer must wait if the buffer is empty

Solution involves:

• Semaphores or mutexes to ensure mutual exclusion and synchronization

• Common semaphores used:

o mutex → to access buffer

o full → count of full slots

o empty → count of empty slots
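The three semaphores above translate directly into Python's `threading` primitives. This is a minimal single-producer, single-consumer sketch; the buffer size of 5 and the 10 items are arbitrary illustrative choices:

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
mutex = threading.Lock()                  # mutual exclusion on the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts empty slots
full = threading.Semaphore(0)             # counts full slots
results = []

def producer(items):
    for item in items:
        empty.acquire()        # wait if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # signal: one more full slot

def consumer(count):
    for _ in range(count):
        full.acquire()         # wait if the buffer is empty
        with mutex:
            item = buffer.popleft()
        empty.release()        # signal: one more empty slot
        results.append(item)

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(results)   # all ten items, in production order
```

Note the acquire order: taking `mutex` before `empty`/`full` could deadlock, since a blocked producer would hold the lock the consumer needs.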

29. Banker’s Algorithm

The Banker’s Algorithm is a deadlock avoidance method. It ensures a safe state before allocating
resources by checking whether the system can still allocate resources to every process in some safe
sequence.

Key Concepts:

• Each process must declare its maximum resource needs in advance

• The system allocates resources only if it doesn’t lead to an unsafe state

Data Structures Used:

• Available[]: Available instances of each resource

• Max[][]: Maximum demand of each process

• Allocation[][]: Resources currently allocated

• Need[][]: Remaining resource needs = Max - Allocation

Safe State: the system can find at least one safe sequence in which every process can finish.
Unsafe State: not necessarily a deadlock, but it may lead to one if resources are granted without checking.
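The safety check at the heart of the algorithm can be sketched as follows. The resource values in the example are the common textbook illustration (5 processes, 3 resource types), used here purely as sample data:

```python
def is_safe(available, max_need, allocation):
    """Return a safe sequence of process indices if one exists, else None."""
    n, m = len(max_need), len(available)   # processes, resource types
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]             # Need = Max - Allocation
    work = available[:]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases everything
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                    # no process can proceed: unsafe state
    return sequence

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, max_need, allocation))   # [1, 3, 4, 0, 2] is one safe sequence
```

A resource request is then granted only if tentatively applying it still leaves `is_safe` returning a sequence; otherwise the process must wait.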

30. What is Cache?

Cache is a small, high-speed memory placed between the CPU and main memory (RAM) to store
frequently accessed data and instructions.

Why use cache?

• Faster access than main memory

• Reduces average memory access time

• Takes advantage of temporal and spatial locality

Levels of Cache:

• L1 Cache: Fastest and smallest (inside CPU)


• L2 Cache: Slightly slower, larger

• L3 Cache: Shared across cores, larger and slower than L2

31. Difference between Direct Mapping and Associative Mapping

Feature | Direct Mapping | Associative Mapping
Definition | Each block of main memory maps to only one cache line | Any block can go into any cache line
Flexibility | Low – fixed position | High – can be placed anywhere in cache
Mapping Formula | Cache line = (Block Number) mod (Number of lines) | No fixed formula – the whole cache is searched
Search Time | Fast – single location | Slower – the full cache must be searched
Conflict Misses | Higher | Lower
Hardware Cost | Low | High – needs complex comparison hardware
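The direct-mapping formula from the table can be tried out directly. The 8-line cache size below is a hypothetical choice; the point is that distinct blocks whose numbers differ by a multiple of the line count collide:

```python
NUM_LINES = 8   # hypothetical direct-mapped cache with 8 lines

def direct_mapped_line(block_number):
    """Direct mapping: each block has exactly one possible cache line."""
    return block_number % NUM_LINES

# Blocks 3 and 11 conflict: both map to line 3, so they evict each other
print(direct_mapped_line(3), direct_mapped_line(11))   # 3 3
```

An associative cache has no such formula: it would compare the block's tag against every line in parallel, which is why its hardware cost is higher.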

32. Difference between Multitasking and Multiprocessing

Feature | Multitasking | Multiprocessing
Definition | Running multiple tasks (processes/threads) on a single CPU | Using two or more CPUs to execute processes simultaneously
Type | Logical parallelism | Physical parallelism
Used In | Single-processor systems | Multi-core/multi-CPU systems
Example | Running a browser + music player on one CPU | Two CPUs running independent processes
CPU Requirement | One CPU | Two or more CPUs
Speed | Slower compared to multiprocessing | Faster due to parallel execution
