OS Assignment 2

The document provides a comprehensive overview of processes in operating systems, including definitions, states, and the structure of a Process Control Block (PCB). It explains process scheduling, types of schedulers, context switching, and inter-process communication methods such as shared memory and message passing. Additionally, it discusses threads, their benefits, and various multithreading models, along with essential process management commands in Linux/Unix environments.
ASSIGNMENT NO.

1. Define Processes
Informally, a process is a program in execution.
A process is an active entity that has a program counter specifying the
next instruction to execute and a set of associated resources. In contrast, a
program is a passive entity, such as an executable file containing a list of
instructions stored on disk.
The structure of a process in memory includes:
• Text Section: The program code. A process also includes its current
activity, represented by the program counter and the contents of the
processor's registers.
• Stack: Contains temporary data such as function parameters, return
addresses, and local variables.
• Data Section: Contains global variables.
• Heap: Memory that is dynamically allocated during process run time.

2. Explain the Process State


As a process executes, it changes state, which is defined in part by its
current activity. Although names may vary across operating systems, the
states found on all systems include:

• New: The process is being created.
• Ready: The process is waiting to be assigned to a processor.
• Running: Instructions are currently being executed.
• Waiting: The process is waiting for some event to occur (e.g., I/O
completion or reception of a signal).
• Terminated: The process has finished execution.

State Transitions:
• A process moves from the New state to the Ready state upon being
admitted.
• A process in the Ready state is moved to the Running state by the
scheduler dispatch.
• A process in the Running state can move to the:
o Waiting state if it issues an I/O request or waits for an event.
o Ready state due to an interrupt (e.g., time slice expired).
o Terminated state upon exit.
• A process moves from the Waiting state back to the Ready state
upon I/O or event completion.
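The transitions above can be modelled as a small lookup table. The sketch below is an illustrative Python model only; the state names follow the list above, and the helper function is an assumption for demonstration, not a real kernel interface.

```python
# Illustrative model of the five-state process lifecycle described above.
# The transition table is an assumption for demonstration, not a kernel API.
VALID_TRANSITIONS = {
    "new": {"ready"},                               # admitted by the OS
    "ready": {"running"},                           # scheduler dispatch
    "running": {"waiting", "ready", "terminated"},  # I/O wait, interrupt, exit
    "waiting": {"ready"},                           # I/O or event completion
    "terminated": set(),                            # no further transitions
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if a process may move directly from src to dst."""
    return dst in VALID_TRANSITIONS.get(src, set())
```

Note that there is no direct path from Waiting back to Running: a process whose I/O completes must first re-enter the Ready queue and be dispatched again.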

3. Describe Process Control Block (PCB)


Each process is represented in the operating system by a Process Control
Block (PCB), also called a task control block. The PCB contains all the
information associated with a specific process:

• Process State: The current state of the process (e.g., new, ready,
running, waiting, halted).
• Program Counter: The address of the next instruction to be executed
for this process.
• CPU Registers: The contents of all CPU registers (accumulators, index
registers, stack pointers, etc.), which must be saved during an
interrupt.
• CPU-Scheduling Information: Includes the process priority, pointers to
scheduling queues, and other scheduling parameters.
• Memory-Management Information: May include values such as the base and
limit registers, page tables, or segment tables.
• Accounting Information: Includes the amount of CPU and real time used,
time limits, account numbers, and job/process numbers.
• I/O Status Information: Includes the list of I/O devices allocated to
the process and a list of open files.
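As a rough illustration, the categories above can be collected into a simple Python dataclass. A real PCB is a kernel-internal C structure (e.g., Linux's task_struct), so every field name here is an assumption for demonstration only.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the PCB fields listed above, for illustration;
# real PCBs are C structures maintained inside the kernel.
@dataclass
class PCB:
    pid: int
    state: str = "new"             # process state
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    base: int = 0                  # memory-management information
    limit: int = 0
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42, priority=5)
```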

4. Process Scheduling
Process Scheduling is the activity performed by the process scheduler to
select an available process for execution on the CPU.
The objectives of process scheduling are:
• Multiprogramming: To have some process running at all times to
maximize CPU utilization.
• Time Sharing: To switch the CPU among processes so frequently that
users can interact with each program while it is running.
In a single-processor system, only one process can run at a time; others
must wait until the CPU is free and can be rescheduled.
5. Explain Scheduling Queues
The operating system manages processes in various queues, which are
typically implemented as linked lists using pointers in the Process Control
Blocks (PCBs).
• Job Queue: This queue consists of all processes in the system as
they enter.
• Ready Queue: This queue holds processes that are residing in main
memory and are ready and waiting to execute. The CPU is allocated
to a process selected from this queue.
• Device Queues: A device queue is a list of processes that are
waiting for a particular I/O device. Each I/O device has its own
device queue.
A process flows through these queues in a lifecycle represented by a
queueing diagram. Once a process is executing on the CPU, it may move
back to the ready queue (due to an interrupt), move to an I/O queue (due to
an I/O request), or create a child process and wait for its termination.
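A minimal sketch of this queueing lifecycle, assuming plain PIDs stand in for PCB pointers and Python's deque stands in for the kernel's linked lists; all names are illustrative.

```python
from collections import deque

# Illustrative ready and device queues holding PIDs; in a real OS these
# are linked lists threaded through the PCBs themselves.
ready_queue = deque()
device_queues = {"disk": deque(), "keyboard": deque()}

def dispatch():
    """Give the CPU to the process at the head of the ready queue."""
    return ready_queue.popleft() if ready_queue else None

def request_io(pid, device):
    """A running process issues an I/O request and joins that device's queue."""
    device_queues[device].append(pid)

def io_complete(device):
    """On I/O completion, move the waiting process back to the ready queue."""
    pid = device_queues[device].popleft()
    ready_queue.append(pid)
```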

6. Explain Types of Schedulers


Operating systems use different types of schedulers based on the
frequency and purpose of their scheduling decisions:
A. Long-Term Scheduler (Job Scheduler)
• Function: Controls the degree of multiprogramming (the number of
processes in memory). It is responsible for selecting which
processes to load from the job queue into the ready queue (main
memory).
• Frequency: Executes much less frequently (minutes may separate
executions) and may only be invoked when a process leaves the
system.
• Selection Criterion: Must select a good process mix of I/O-bound
(spends more time doing I/O) and CPU-bound (spends more time
doing computations) processes to ensure the system remains
balanced and efficient.
B. Short-Term Scheduler (CPU Scheduler)
• Function: Selects from among the processes that are ready to
execute and allocates the CPU to one of them.
• Frequency: Must select a new process frequently (often at least
once every 100 milliseconds), as a process may execute for only a
few milliseconds before waiting for an I/O request.
• Performance: Must be very fast due to its high execution frequency,
as time spent on scheduling is overhead.
C. Medium-Term Scheduler
• Function: The key idea is to remove a process from memory
(swapping out) to reduce the degree of multiprogramming, and later
reintroduce it (swapping in) to continue execution where it left off.
• Use: Often introduced in time-sharing systems. Swapping may be
necessary to improve the process mix or to free up memory due to
overcommitted resources.

7. Define Context Switching


Context switching is the task of switching the CPU to another process. It
involves two main steps:
1. State Save: Performing a state save of the current process (saving
the contents of the CPU registers, program counter, and other
machine state into its PCB).
2. State Restore: Performing a state restore of a different process
(retrieving the saved state of the next process from its PCB and
loading the registers).
Context switches occur when an interrupt or system call causes the
operating system kernel to take control from the currently running process.
• Overhead: Context switch time is considered pure overhead
because the system does not perform any useful work during this
time. The time required is highly dependent on hardware support.
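The two steps can be sketched as follows; the cpu dictionary and the PCB dictionaries are toy stand-ins for real machine state, assumed purely for illustration.

```python
# Toy illustration of the two steps of a context switch.
cpu = {"pc": 0, "regs": {}}

def state_save(pcb):
    """Step 1: save the running process's machine state into its PCB."""
    pcb["pc"] = cpu["pc"]
    pcb["regs"] = dict(cpu["regs"])

def state_restore(pcb):
    """Step 2: load the next process's saved state from its PCB."""
    cpu["pc"] = pcb["pc"]
    cpu["regs"] = dict(pcb["regs"])

def context_switch(old_pcb, new_pcb):
    state_save(old_pcb)       # no useful work happens during these steps:
    state_restore(new_pcb)    # the switch time is pure overhead
```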

8. Define Inter Process Communication (IPC)


Inter Process Communication (IPC) is a mechanism that allows
cooperating processes to exchange data and information.
• Cooperating Process: A process that can affect or be affected by
other processes executing in the system (i.e., any process that shares
data with others).
• Independent Process: A process that cannot affect or be affected by
other processes (i.e., one that does not share data).
Reasons for allowing process cooperation include: information sharing,
computation speedup (on multicore systems), modularity, and user
convenience.

9. Explain Shared Memory System and Message Passing System


The two fundamental models of Interprocess Communication (IPC) are
shared memory and message passing.
A. Shared Memory System
• Mechanism: A region of memory is established that is shared by
cooperating processes. Processes exchange information by reading
and writing data to this shared region.
• Setup: System calls are required only to establish the shared
memory regions and attach them to the address spaces of the
communicating processes. The processes must agree to remove the
operating system's normal restriction that prevents one process from
accessing another's memory.
• Performance: Can be faster than message passing because once
established, all accesses are treated as routine memory accesses,
requiring no assistance from the kernel.
• Responsibility: The communicating processes are responsible for
determining the form and location of the data, as well as ensuring
they are not writing to the same location simultaneously.
B. Message Passing System
• Mechanism: Communication takes place by means of messages
exchanged between the cooperating processes. It allows processes
to communicate and synchronize their actions
without sharing the same address space.
• Operations: Provides at least two operations: send(message) and
receive(message).
• Usefulness: Particularly useful in a distributed environment where
communicating processes may reside on different computers. It is
easier to implement in a distributed system than shared memory.
• Performance: Message passing is often implemented using system
calls, which typically require time-consuming kernel intervention.
• Buffering: Messages exchanged by communicating processes reside
in a temporary queue (buffer), which can have zero capacity
(sender blocks until receipt), bounded capacity (sender blocks if
full), or unbounded capacity (sender never blocks).
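The send/receive operations and the bounded-buffer blocking behaviour can be illustrated in-process with Python's thread-safe queue.Queue, used here as a stand-in for a kernel message queue rather than a real cross-process IPC primitive.

```python
import queue
import threading

# A bounded buffer of capacity 2: put() blocks when the queue is full and
# get() blocks when it is empty, mirroring the semantics described above.
mailbox = queue.Queue(maxsize=2)

def send(message):
    mailbox.put(message)       # blocks if the buffer is full

def receive():
    return mailbox.get()       # blocks until a message arrives

def producer():
    for msg in ["hello", "world", "done"]:
        send(msg)

t = threading.Thread(target=producer)
t.start()
received = [receive() for _ in range(3)]
t.join()
```

With maxsize=0 (the default) the queue behaves like the unbounded case, where the sender never blocks.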

10. Define Threads, Enlist Benefits, and Compare User and Kernel Level
Threads
A. Definition of Thread
A thread is the basic unit of CPU utilization. It comprises a thread ID, a
program counter, a register set, and a stack. A thread
shares its code section, data section, and other OS resources (like open
files and signals) with other threads belonging to the same process.
A traditional (or heavyweight) process has a single thread of control; if a
process has multiple threads, it can perform more than one task at a time.
B. Benefits of Threads
1. Responsiveness: Multithreading an interactive application allows
the program to continue running and remain responsive to the user
even if part of it is blocked or performing a lengthy operation.
2. Resource Sharing: Threads share the memory and resources of the
process they belong to by default, which is a benefit compared to
processes that must explicitly arrange sharing via IPC mechanisms.
3. Economy: Creating and managing threads is generally more
economical (faster) than creating and managing processes, as
threads share resources.
4. Scalability: Threads can be run in parallel on different processing
cores in a multiprocessor architecture, providing greater benefits
than a single-threaded process, which can only run on one
processor.
C. Comparison of User-Level and Kernel-Level Threads

• Management: User-level threads are managed in user space by a thread
library; the kernel is not aware of their existence. Kernel-level
threads are managed by the kernel in kernel space; the kernel is aware
of and directly supports them.
• Switching: Transfer of control between user-level threads in the same
process does not require kernel-mode privileges. Switching kernel-level
threads requires a mode switch to the kernel.
• Speed: User-level threads are fast to create and manage. Kernel-level
threads are generally slower to create and manage than user threads.
• Scheduling: With user-level threads, scheduling can be
application-specific. Kernel-level threads are scheduled by the kernel
on a per-thread basis.
• Blocking: If one user-level thread makes a blocking system call, the
entire process is blocked. If one kernel-level thread blocks, the
kernel can schedule another thread of the same process.
• Multiprocessing: User-level threads cannot take advantage of
multiprocessing. Kernel-level threads can be scheduled simultaneously
on multiple processors.
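Resource sharing between threads can be seen in a short Python sketch: all workers write into one shared list with no IPC mechanism required. (CPython's threads are kernel-backed, though its global interpreter lock limits true parallelism; the example demonstrates sharing only.)

```python
import threading

# Threads share their process's data section by default: every worker
# appends to the same list without any explicit IPC setup.
results = []
lock = threading.Lock()

def worker(name):
    with lock:                 # coordinate access to the shared resource
        results.append(name)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```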

11. Enlist and Explain Multithreading Models


Multithreading models determine how user-level threads are mapped to
kernel-level threads.
A. Many-to-One Model
• Mapping: Maps many user-level threads to one kernel-level
thread.
• Management: Thread management is performed in user space by
the thread library.
• Concurrency Issues: Only one thread can access the Kernel at a
time, so multiple threads cannot run in parallel on multiprocessors.
• Blocking Issue: When one user thread makes a blocking system call,
the entire process will be blocked.
B. One-to-One Model
• Mapping: Establishes a one-to-one relationship between each
user-level thread and a corresponding kernel-level thread.
• Concurrency: Provides more concurrency than the many-to-one
model and allows multiple threads to execute in parallel on
multiprocessors.
• Blocking Solution: Allows another thread to run when one thread
makes a blocking system call.
• Disadvantage: Creating a user thread requires creating a
corresponding Kernel thread.
C. Many-to-Many Model
• Mapping: Multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
• Concurrency: Provides a high degree of concurrency.
Developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor
machine.
• Blocking Solution: When a thread performs a blocking system call,
the kernel can schedule another thread for execution.

12. Execute Process Commands like: top, ps, kill, wait, sleep, exit, nice
These commands are essential for managing processes and monitoring
system performance, typically used in Linux/Unix environments.
• top: Displays a dynamic, real-time view of running processes and
system information (CPU/memory usage). Example: top (press q to exit).
• ps: Provides a snapshot of currently running processes (PID, TTY, CPU
time). Example, to view all processes in detailed format: ps -ef
• kill: Sends a signal to a process (requires a PID), typically to
terminate it. Example, to force-terminate (SIGKILL): kill -9 PID
• wait: Used in scripts to suspend execution until a specified
background process or job finishes. Example, to wait for a specific
process: wait PID
• sleep: Pauses the execution of a script or command for a specified
duration. Example, to pause for 2 minutes: sleep 2m
• exit: Terminates the current shell session or script. Example, to
exit with a status code (e.g., for success): exit 0
• nice: Launches a process with an altered scheduling priority (nice
values range from -20 to 19). Example, to launch with lower priority:
nice -n 10 command
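Several of these commands have programmatic counterparts. A minimal sketch, assuming a Linux/Unix system with sleep on the PATH: Python's subprocess module starts the child, and send_signal and wait parallel the kill and wait shell commands.

```python
import signal
import subprocess

# Start a 'sleep 30' child process, then terminate it with a signal --
# the programmatic equivalent of 'sleep 30 &' followed by 'kill PID'.
child = subprocess.Popen(["sleep", "30"])
child.send_signal(signal.SIGTERM)   # like: kill PID  (default signal)
status = child.wait()               # like: wait PID
# On POSIX, a return code of -N means the child was killed by signal N.
```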
