Chapter 3
Operating System
Support
Syllabus (3 hours)
3.1 The operating system layer
3.2 Protection
3.3 Process and threads
3.4 Communication and invocation
3.5 Operating system architecture
Introduction
● An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs.
● Essentially, it acts as an intermediary between computer hardware and the
applications that run on it.
● Examples of popular operating systems include Windows, macOS, Linux, and
Android.
Roles of Operating System (OS)
● Resource Management: The OS manages computer hardware resources such
as CPU (central processing unit), memory (RAM), storage (hard drives, SSDs),
and input/output devices (keyboard, mouse, printers).
● Process Management: It controls and manages processes or programs running
on the computer. This includes creating, scheduling, and terminating processes,
as well as providing mechanisms for communication and synchronization
between processes.
● Memory Management: The OS allocates memory to processes, ensuring that each process
has enough memory to execute and that memory is used efficiently.
● File System Management: The OS provides a file system that organizes and manages files
and directories on storage devices. It handles tasks such as file creation, deletion, reading,
and writing, as well as maintaining file permissions and security.
● Device Management: It manages input/output devices such as keyboards, mice, displays,
printers, and network interfaces. This involves device detection, device drivers installation,
and providing an interface for applications to interact with devices.
● User Interface: The OS provides a user interface, this can be a graphical user interface (GUI)
like Windows or macOS, or a command-line interface (CLI) like Unix/Linux.
● Security: The OS implements security mechanisms to protect the system and user data from
unauthorized access, viruses, malware, and other threats. This includes user authentication,
access control, encryption, and firewall services.
● Error Handling: It detects and handles errors that occur during system operation, such as
hardware failures, software crashes, and invalid user inputs. The OS may provide error
messages, logging, and recovery mechanisms to help diagnose and resolve issues.
Distributed Operating System
● A Distributed Operating System (DOS) is a type of operating system that runs
on a network of interconnected computers and provides a unified interface to
users and applications.
● It manages a collection of independent computers and makes them appear to
users of the system as if they were a single computer.
● Distributed Computing Systems commonly use two types of Operating
Systems.
○ Network OS
○ Distributed OS
DOS vs NOS
Scope
● A DOS extends the functionality of traditional operating systems to manage
resources and provide services across multiple interconnected computers.
● A NOS primarily focuses on providing network services such as file sharing,
printer sharing, and communication services within a local area network
(LAN) or wide area network (WAN).
Resource Management
● DOS manages resources such as CPU, memory, and storage across multiple
computers in the network. It provides mechanisms for process management,
file system management, and communication between processes running on
different computers.
● NOS primarily facilitates resource sharing within the network, such as file
sharing and printer sharing.
Focus
● The focus of a DOS is on providing a transparent and integrated environment
for distributed computing.
● NOS focuses on providing network services and facilitating resource sharing
within a networked environment.
Examples
● Examples of distributed operating systems include Amoeba, Plan 9 from Bell
Labs, and Google's distributed operating system used in their data centers.
● Examples of network operating systems include Microsoft Windows Server
(with features like Active Directory), Novell NetWare, and Linux-based
network operating systems like Samba.
The Operating System Layer
● Figure shows how the operating system layer at each of two nodes supports a
common middleware layer in providing a distributed infrastructure for
applications and services.
● Kernels and server processes are the components that manage resources and
present clients with an interface to the resources.
● The OS facilitates:
○ Encapsulation
○ Protection
○ Concurrent processing
● An invocation mechanism is a means of accessing an encapsulated resource.
Process manager: handles the creation of and operations upon processes.
Thread manager: thread creation, synchronization and scheduling. Threads are
schedulable activities attached to processes.
Communication manager: manages communication between threads attached to
different processes on the same computer; some kernels also support
communication between threads in remote processes.
Memory manager: management of physical and virtual memory.
Supervisor: dispatching of interrupts, system call traps and other exceptions;
control of the memory management unit and hardware caches; processor and
floating-point unit register manipulations.
Protection
● Resources require protection from illegitimate access.
● Files are protected by assigning access privileges.
● Hardware can support the protection of modules, e.g., a hardware lock (a tiny
device attached to the computer).
● The kernel can control the memory management unit and the set of processor
registers so that no other code may access the machine’s physical resources
except in acceptable ways.
Processes and Threads
● A process can be thought of as a program in execution.
● It includes the program code, data, and resources (such as memory and CPU
time) allocated to it.
● Each process operates independently of other processes, and the OS manages
them to ensure they don't interfere with each other.
● Processes have their own address space, which means they cannot directly
access the memory of other processes unless facilitated by the OS through
mechanisms like inter-process communication (IPC).
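The isolation described above can be sketched in Python: a child process mutates its own copy of a variable, so the parent must receive the value through an IPC channel. (An illustrative sketch, not from the slides; it assumes a POSIX system so the `fork` start method is available.)

```python
import multiprocessing

counter = 0  # lives in the parent's address space

def child(q):
    global counter
    counter += 1      # mutates only the child's private copy
    q.put(counter)    # the value must cross back to the parent via IPC

def isolation_demo():
    ctx = multiprocessing.get_context("fork")  # assumes a POSIX system
    q = ctx.Queue()                            # an OS-supported IPC channel
    p = ctx.Process(target=child, args=(q,))
    p.start()
    child_value = q.get()
    p.join()
    # The parent's counter is unchanged: the processes do not share memory.
    return counter, child_value
```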
● A thread is a basic unit of CPU utilization.
● It represents a single sequence of execution within a process.
● Multiple threads can exist within a single process, sharing the same resources
like memory space, file descriptors, and other process-related attributes.
● Threads within the same process can communicate directly with each other,
making communication and data sharing more efficient compared to
inter-process communication (IPC).
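Because threads share one address space, they can update a common variable directly, but concurrent updates must be synchronized. A minimal sketch with a lock (illustrative; the names are my own):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # the lock guards the shared variable
            counter += 1

def run(threads=4, per_thread=10_000):
    ts = [threading.Thread(target=increment, args=(per_thread,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter  # all threads saw and updated the same memory
```

Without the lock, the interleaved read-modify-write steps could lose updates.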
Process
● A process consists of an execution environment together with one or more
threads.
● A thread is the operating system abstraction of an activity.
● An execution environment is the unit of resource management: a collection of
local kernel managed resources to which its threads have access.
● An execution environment consists of :
○ An address space
○ Thread synchronization and communication resources (e.g., semaphores, sockets)
○ Higher-level resources (e.g., file systems, windows)
Threads
● Threads are schedulable activities attached to processes.
● The aim of having multiple threads of execution is :
○ To maximize degree of concurrent execution between operations
○ To enable the overlap of computation with input and output
○ To enable concurrent processing on multiprocessors.
● Threads can be helpful within servers: concurrent processing of clients’
requests can reduce the tendency for servers to become a bottleneck.
● E.g. one thread can process a client’s request while a second thread serving
another request waits for a disk access to complete.
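The server scenario above can be sketched with Python’s stdlib `socketserver`, which spawns one thread per connection so a request blocked on I/O does not stall other clients. (A sketch under my own naming; the slides don’t prescribe an API.)

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    # Each client connection runs in its own thread, so one handler
    # waiting on I/O does not block service to other clients.
    def handle(self):
        data = self.request.recv(1024)
        self.request.sendall(data)

def start_server():
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def echo(port, payload):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(payload)
        return s.recv(1024)
```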
Processes vs Threads
● Computer processes are independent program instances with their own
memory space and resources, operating in isolation.
● In contrast, threads are smaller units within processes that share the same
memory, making communication easier but requiring careful synchronization.
● Threads are “lightweight”.
● Processes are expensive to create but threads are easier to create and destroy.
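The creation-cost difference can be measured directly; a rough sketch (my own, assuming a POSIX system so `fork` is available; absolute numbers vary by machine):

```python
import threading
import time
from multiprocessing import get_context

def creation_cost(n=10):
    """Average wall-clock time to create, run, and reap n threads vs n processes."""
    ctx = get_context("fork")  # assumes a POSIX system
    noop = lambda: None

    start = time.perf_counter()
    for _ in range(n):
        t = threading.Thread(target=noop)
        t.start()
        t.join()
    thread_cost = (time.perf_counter() - start) / n

    start = time.perf_counter()
    for _ in range(n):
        p = ctx.Process(target=noop)
        p.start()
        p.join()
    process_cost = (time.perf_counter() - start) / n

    # process_cost is typically much larger: each process gets its own
    # address space, while a thread reuses the parent's.
    return thread_cost, process_cost
```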
Thread Programming
Client and server with threads
Communication and Invocation
● Inter-Process Communication (IPC): In a distributed OS, processes running on
different nodes need to communicate with each other. IPC mechanisms
facilitate this communication. Common IPC mechanisms include message
passing, remote procedure calls (RPC), and shared memory.
● Message Passing: Processes exchange messages through a messaging system.
This can be synchronous or asynchronous and may involve direct messaging or
message queues.
● Remote Procedure Calls (RPC): RPC allows a process to execute procedures or
functions on remote nodes as if they were local. It abstracts away the details of
network communication, making remote invocation appear similar to local
function calls.
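The asynchronous message-passing style above can be sketched with a message queue between a sender and a receiver (threads stand in for processes here; the names are my own):

```python
import queue
import threading

def producer(q):
    for i in range(3):
        q.put(f"msg-{i}")   # asynchronous send: does not wait for the receiver
    q.put(None)             # sentinel marking end of stream

def consume():
    q = queue.Queue()       # the message queue between sender and receiver
    threading.Thread(target=producer, args=(q,)).start()
    received = []
    while (m := q.get()) is not None:   # blocking (synchronous) receive
        received.append(m)
    return received
```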
Communication and Invocation
● Shared Memory: Processes can communicate by sharing a common memory
space. In a distributed environment, this typically involves memory-mapped
files or distributed shared memory systems.
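A minimal shared-memory sketch using Python’s stdlib `multiprocessing.shared_memory` (both handles are opened in one process here for brevity; a second process would attach by the same name):

```python
from multiprocessing import shared_memory

def shared_demo():
    # Create a named shared-memory segment backed by the OS.
    shm = shared_memory.SharedMemory(create=True, size=4)
    try:
        shm.buf[:4] = b"ping"  # write through the first handle
        # A second process could attach with SharedMemory(name=shm.name)
        # and see the same bytes with no message exchange at all.
        attached = shared_memory.SharedMemory(name=shm.name)
        data = bytes(attached.buf[:4])
        attached.close()
        return data
    finally:
        shm.close()
        shm.unlink()  # release the segment
```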
Invocation
● Local Invocation: Processes invoke procedures or functions locally within the
same node. This is similar to traditional function calls within a single program.
Communication and Invocation
● Remote Invocation: Processes invoke procedures or functions located on
remote nodes. This is often achieved through RPC mechanisms, where the
caller sends a request to the remote node, which executes the requested
procedure and returns the result.
● Service Invocation: In distributed systems, services are often hosted on
different nodes. Invocation of these services involves locating the service,
sending a request, and receiving a response. This can be done through various
communication protocols such as HTTP, REST, or custom protocols tailored for
the distributed OS.
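Remote invocation can be sketched with Python’s stdlib XML-RPC, which hides the marshalling and message exchange behind a proxy so the call reads like a local one. (One protocol choice among many; the slides don’t prescribe it.)

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def start_rpc_server():
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    # Register a procedure that remote callers may invoke by name.
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def call_add(port, a, b):
    proxy = ServerProxy(f"http://127.0.0.1:{port}")
    # Looks like a local call; the proxy marshals the arguments, sends a
    # request message, and unmarshals the result.
    return proxy.add(a, b)
```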
Invocation Performance
● Clients and servers may perform many millions of invocation-related
operations in their lifetimes.
● However, although network bandwidth has improved, invocation times have not
decreased in proportion.
● A null RPC is defined as an RPC without parameters that executes a null
procedure and returns no values.
● Its execution involves an exchange of messages carrying some system data but
no user data.
● Null invocation costs are important because they measure a fixed overhead:
the latency.
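The fixed overhead can be approximated by timing a “null” round trip over the loopback interface: one byte out, one byte back, no user data. (A rough sketch with my own names; real null-RPC measurements also include marshalling and dispatch costs.)

```python
import socket
import threading
import time

def null_server(listener):
    conn, _ = listener.accept()
    with conn:
        while conn.recv(1):        # echo one byte back: a "null" exchange
            conn.sendall(b"\x00")

def measure_null_rtt(rounds=100):
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    threading.Thread(target=null_server, args=(listener,), daemon=True).start()
    with socket.create_connection(listener.getsockname()) as c:
        c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(rounds):
            c.sendall(b"\x00")     # no user data, only the framing byte
            c.recv(1)
        elapsed = time.perf_counter() - start
    listener.close()
    return elapsed / rounds        # the fixed per-call overhead (latency)
```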
Invocation Performance
● Invocation costs increase with the size of arguments and results.
● RPC bandwidth (throughput) is also a concern when data has to be transferred
in bulk.
● Marshalling and unmarshalling, which involve copying and converting data,
become significant as the amount of data grows.
● Data copying: message data is copied several times in the course of an RPC,
e.g., across the user-kernel boundary, across each protocol layer, and
between the network interface and kernel buffers.
● Packet initialization: initializing protocol headers and trailers, including
checksums.
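The growth of marshalling cost with data size is easy to demonstrate; `pickle` stands in here for whichever marshalling scheme an RPC system uses (an illustrative choice, not from the slides):

```python
import pickle
import time

def marshal_time(obj, rounds=50):
    """Average time to marshal and unmarshal obj once."""
    start = time.perf_counter()
    for _ in range(rounds):
        data = pickle.dumps(obj)   # marshal: convert to a byte stream
        pickle.loads(data)         # unmarshal: copy and convert back
    return (time.perf_counter() - start) / rounds

small = list(range(10))
large = list(range(100_000))
# Marshalling the large argument costs far more than the small one,
# so copy/convert time dominates bulk-data RPCs.
```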
Invocation Performance
● Thread scheduling is the process of selecting the next thread to run on a CPU
core when the current thread relinquishes the CPU, either voluntarily (e.g., by
yielding) or involuntarily (e.g., due to a blocking operation or time slice
expiration).
● Context switching is the process of saving the current state of a thread or
process (its context) and loading the state of another thread or process so that
it can run on the CPU.
Operating System Architecture
The major kernel architectures:
● Monolithic kernels
● Micro-kernels
Monolithic Kernels
● Monolithic kernels represent a traditional approach to designing operating
systems where the entire operating system runs as a single large program in
kernel mode.
● A monolithic kernel can contain some server processes that execute within its
address space, including file servers and some networking.
● The code that these processes execute is part of the standard kernel
configuration.
Monolithic Kernels - Design
● Single Address Space: In a monolithic kernel, the entire operating system
resides in a single address space.
● Tight Integration: All operating system services, including device drivers, file
system management, memory management, and scheduling, are tightly
integrated into a single executable.
● Direct Communication: Components communicate with each other directly
through function calls and data structures, without the need for inter-process
communication mechanisms.
● Efficiency: Monolithic kernels are often efficient in terms of performance
because there's little overhead in communication between different
components.
Monolithic Kernels - Advantages
● Efficiency: Monolithic kernels typically have high performance because there's
minimal overhead in accessing system services.
● Simplicity: The design of monolithic kernels is often simpler compared to other
kernel architectures like microkernels.
● Direct Access: Since all components run in kernel mode, they have direct access
to hardware resources, which can lead to better performance in certain
scenarios.
● Ease of Development: Developing for monolithic kernels can be
straightforward since all system services are available within a single address
space.
Monolithic Kernels - Disadvantages
● Lack of Modularity: Monolithic kernels lack modularity, making them less
flexible and harder to maintain and extend. A change in one component may
require modifications to the entire kernel.
● Difficulty in Debugging: Since all components run in kernel mode, a bug in one
component can potentially crash the entire system.
● Security Vulnerabilities: Monolithic kernels are more susceptible to security
vulnerabilities because a bug or exploit in one component can compromise the
entire system.
● Limited Fault Isolation: A failure in one component of the kernel can potentially
affect the stability of the entire system.
Microkernel
● A microkernel is an alternative kernel design to the monolithic kernel, aiming to
provide a minimalistic approach to operating system construction.
● The microkernel appears as a layer between the hardware layer and a layer
consisting of major system components.
● Minimalism: The microkernel design philosophy emphasizes minimalism, with only the
most essential functions implemented in kernel space.
● Modularity: Unlike monolithic kernels, which integrate all operating system services
into a single kernel space, a microkernel keeps only the most fundamental functions
such as memory management, thread scheduling, and inter-process communication
(IPC) within the kernel space. Other services, like device drivers, file systems, and
networking protocols, are implemented as user-space processes or servers.
Microkernel - Design
● Inter-Process Communication (IPC): IPC mechanisms are crucial in
microkernels to facilitate communication between user-space servers and to
enforce isolation between components.
● Fault Isolation: By keeping the kernel small and modular, microkernel designs
aim to improve fault isolation. If a user-space component crashes, it's less likely
to affect other parts of the system.
● Extensibility: Microkernel architectures are often designed to be extensible,
allowing developers to add or replace components without modifying the core
kernel.
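The microkernel pattern of user-space servers reached purely via IPC can be sketched as a tiny “name server” process answering requests over a pipe. (My own illustrative names; it assumes a POSIX system so the `fork` start method is available.)

```python
from multiprocessing import get_context

def name_server(conn):
    # A user-space server: it keeps its own state and is reached only
    # through IPC messages, never through direct function calls.
    table = {"printer": 7001, "files": 7002}  # hypothetical service table
    while True:
        req = conn.recv()
        if req is None:          # shutdown message
            break
        conn.send(table.get(req))

def lookup(service):
    ctx = get_context("fork")    # assumes a POSIX system
    parent, child = ctx.Pipe()   # the IPC channel between client and server
    p = ctx.Process(target=name_server, args=(child,))
    p.start()
    parent.send(service)         # request message
    result = parent.recv()       # reply message
    parent.send(None)
    p.join()
    return result
```

A crash of such a server takes down one process, not the kernel, which is the fault-isolation benefit described above.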
Microkernel - Advantages
● Modularity: Microkernels offer better modularity compared to monolithic
kernels, making them easier to maintain, extend, and debug.
● Security: The reduced kernel size and clear separation between kernel and user
space improve security. Security-critical components can be isolated in user
space, reducing the attack surface.
● Fault Isolation: Microkernels provide better fault isolation. If a component fails,
it's less likely to crash the entire system.
● Portability: Microkernels often facilitate portability since they provide a small
and well-defined interface between the kernel and user space, making it easier
to adapt the operating system to different hardware architectures.
Microkernel - Disadvantages
● Performance Overhead: The use of IPC mechanisms for communication
between user-space components can introduce performance overhead
compared to direct function calls in monolithic kernels.
● Complexity: Designing and implementing a microkernel-based operating
system can be more complex than building a monolithic kernel due to the need
for careful management of inter-process communication and the coordination
of user-space servers.
● Limited Hardware Access: Since device drivers and other hardware-related
functions run in user space, accessing hardware may incur additional overhead
and complexity.
Hybrid Approaches
● Hybrid kernel architectures combine elements of both monolithic and
microkernel designs, aiming to leverage the benefits of each while mitigating
their respective drawbacks.
● Hybrid kernels offer a compromise between the simplicity and security of
microkernels and the performance and familiarity of monolithic kernels.
● They are widely used in commercial operating systems where a balance
between flexibility, performance, and security is desired.
● Pure microkernel operating systems such as Chorus and Mach have changed over
time to allow servers to be loaded dynamically, either into the kernel
address space or into a user-level address space.
● In some operating systems, such as SPIN, the kernel and all dynamically
loaded modules grafted onto the kernel execute within a single address space.
That’s all for Chapter 3
Thank you!