Q1.
Consider the following statements about process state transitions for a system using
preemptive scheduling. I. A running process can move to ready state. II. A ready process
can move to running state. III. A blocked process can move to running state. IV. A blocked
process can move to ready state. Which are TRUE? (a) I, II, III (b) II, III (c) I, II, IV (d) I, II,
III, IV
● I – True: With preemption, a running process can be interrupted and placed back into the
ready queue.
● II – True: The short-term scheduler dispatches a ready process to the CPU, so ready → running is a standard transition.
● III – False: A blocked process does not jump directly to running; it must first become ready
after I/O completion.
● IV – True: On I/O completion, a blocked process becomes ready and competes for the CPU.
● Correct option: (c) I, II and IV only.
Q2. The following C program is executed on a Unix/Linux system: #include <unistd.h> int main(){int i; for(i=0;i<10;i++) if(i%2==0) fork(); return 0;} The total number of child processes created is _____.
● Forks occur at i = 0, 2, 4, 6, 8 — that is, 5 times in total.
● Each fork() doubles the number of processes executing from that point onward.
● Starting from one process, after 5 forks the total processes become 2^5 = 32.
● Child processes created = total processes − 1 original = 32 − 1.
● Answer: 31 child processes (verified by the instrumented sketch below).
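For readers who want to see the count directly, here is a small instrumented variant (a sketch, not part of the original question): each process prints one line after the loop, so the number of output lines equals the total number of processes, 2^5 = 32, i.e. the original plus 31 children.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int i;
    for (i = 0; i < 10; i++)
        if (i % 2 == 0)
            fork();                         /* forks at i = 0, 2, 4, 6, 8 */
    printf("pid=%d ppid=%d\n", getpid(), getppid());
    while (wait(NULL) > 0)                  /* reap any children before exiting */
        ;
    return 0;
}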
Q3. CPU-bound processes with unequal CPU bursts, all arriving together: which scheduling minimizes average waiting time? (a) Shortest remaining time first (b) Round-robin with tiny quantum (c) Uniform random (d) Highest priority proportional to CPU burst length
● Optimal policy (for average waiting time) is to run the shortest jobs first.
● With all processes present at time 0, preemptive SRTF behaves like non-preemptive SJF.
● Round-robin incurs frequent context switches and does not minimize average waiting.
● Random scheduling is non-optimal by design and may be arbitrarily bad.
● Answer: (a) Shortest remaining time first (SRTF); a small numeric comparison follows below.
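A quick numeric check, using assumed sample bursts (7, 5, 2, 9 ms, all arriving at t = 0): sorting the jobs by burst length before running them back to back is exactly SJF/SRTF for this case, and it yields a lower average waiting time than running them in arrival order.

#include <stdio.h>
#include <stdlib.h>

/* Average waiting time when jobs run back to back in the given order. */
double avg_wait(const int burst[], int n) {
    double total = 0, t = 0;
    for (int i = 0; i < n; i++) { total += t; t += burst[i]; }
    return total / n;
}

int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void) {
    int fcfs[] = { 7, 5, 2, 9 };            /* arrival order (assumed sample) */
    int sjf[]  = { 7, 5, 2, 9 };
    int n = 4;
    qsort(sjf, n, sizeof(int), cmp);        /* shortest job first             */
    printf("FCFS average wait = %.2f ms\n", avg_wait(fcfs, n));   /* 8.25 */
    printf("SJF  average wait = %.2f ms\n", avg_wait(sjf, n));    /* 5.75 */
    return 0;
}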
Q4. In a multiprogramming environment, consider the program: int main(){int x=0,i=0; scanf("%d",&x); for(i=0;i<20;i++){ x = x+20; printf("%d\n",x);} return 0; } The process moves to the I/O queue whenever it performs I/O, with a context switch on every I/O request and completion. How many times does the process enter the ready queue during its lifetime (excluding the initial entry)?
● Each I/O completion returns the process to the ready queue (and triggers a context switch).
● There is one input operation via scanf → 1 ready-queue reentry after completion.
● There are 20 calls to printf (treated as I/O) → 20 more ready-queue reentries.
● Total reentries (excluding the very first entry when the process starts) = 1 + 20.
● Answer: 21 (a compilable version of the program is sketched below).
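For reference, a compilable version of the Q4 program (with the stray semicolon inside scanf removed and stdio.h included): the single scanf is the one input request and each of the 20 printf calls is treated as an output request, giving the 1 + 20 = 21 ready-queue reentries counted above.

#include <stdio.h>

int main(void) {
    int x = 0, i = 0;
    scanf("%d", &x);                 /* 1 input operation  -> 1 reentry      */
    for (i = 0; i < 20; i++) {
        x = x + 20;
        printf("%d\n", x);           /* 20 output operations -> 20 reentries */
    }
    return 0;
}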
Q5. Explain differences: (1) Long-term vs short-term scheduler; (2) Preemptive vs non-preemptive; (3) CPU-bound vs I/O-bound; (4) Time-sharing vs batch.
● Long-term vs Short-term: The long-term (admission) scheduler controls the degree of multiprogramming and selects jobs for the ready pool. The short-term (CPU) scheduler picks the next ready process for the CPU frequently (millisecond scale).
● Preemptive vs Non-preemptive: Preemptive scheduling can interrupt a running process (e.g., on a timer interrupt) for fairness/latency; non-preemptive scheduling lets a process run until it blocks or terminates.
● CPU-bound vs I/O-bound: A CPU-bound process spends most of its time computing, with long CPU bursts; an I/O-bound process issues frequent I/O and has short CPU bursts.
● Time-sharing vs Batch: Time-sharing emphasizes interactivity/response time with preemption and many users; batch emphasizes throughput with little or no interaction.
Q6. List key CPU scheduling criteria.
● CPU utilization (keep CPU busy).
● Throughput (completed processes per unit time).
● Turnaround time (submission to completion).
● Waiting time (time in ready queue).
● Response time (first response latency, important for interactive tasks).
● Fairness/starvation avoidance.
● Predictability and low variance.
Q7. Advantages of: (A) Aging techniques (B) RAG (C) PCB (D) Cooperating processes
(E) System call in OS.
● Aging: Gradually increases priority of waiting processes to prevent starvation and improve
fairness.
● RAG (Resource Allocation Graph): Visual model to reason about resource
requests/allocations; helps detect/prevent deadlocks in simple cases.
● PCB (Process Control Block): Central metadata (state, registers, PC, accounting, open files) enabling context switching and process management (an illustrative layout is sketched after this list).
● Cooperating processes: Enable modularity, parallel speedups, and information sharing via
IPC; improve throughput on multiprocessors.
● System call: Safe interface from user space to kernel services (I/O, process, memory, IPC),
enforcing protection and abstraction.
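To make the PCB point concrete, here is an illustrative layout; the field names and sizes are assumptions for this sketch and do not correspond to any particular kernel.

#include <stdio.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, BLOCKED, TERMINATED };

struct pcb {
    int             pid;              /* process identifier                 */
    enum proc_state state;            /* current scheduling state           */
    uint64_t        program_counter;  /* saved PC for the next dispatch     */
    uint64_t        registers[16];    /* saved general-purpose registers    */
    int             priority;         /* scheduling priority                */
    uint64_t        cpu_time_used;    /* accounting information             */
    int             open_files[16];   /* open file descriptor table         */
    struct pcb     *next;             /* link in a ready/blocked queue      */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = READY, .priority = 3 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}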
Q8. What are the deadlock conditions?
● Mutual exclusion: at least one non-sharable resource.
● Hold and wait: processes hold resources while waiting for additional ones.
● No preemption: resources cannot be forcibly taken.
● Circular wait: a closed chain of processes, each waiting for a resource held by the next (illustrated in the sketch below).
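A minimal sketch (assuming POSIX threads; compile with -pthread) of how the four conditions combine: thread A takes lock1 then lock2 while thread B takes lock2 then lock1; if each grabs its first lock before the other grabs its second, both hold and wait on mutually exclusive, non-preemptible mutexes in a circular chain, and the program hangs.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock1);   /* holds lock1 ...                        */
    sleep(1);                     /* widen the race window                  */
    pthread_mutex_lock(&lock2);   /* ... and waits for lock2                */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock2);   /* holds lock2 ...                        */
    sleep(1);
    pthread_mutex_lock(&lock1);   /* ... and waits for lock1: circular wait */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);        /* typically never returns: deadlock      */
    pthread_join(b, NULL);
    printf("no deadlock this run\n");
    return 0;
}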
Q9. Methods for handling deadlocks.
● Ignore (ostrich approach) if deadlocks are rare and cost is low.
● Prevention: structurally negate one of the necessary conditions (e.g., no circular wait).
● Avoidance: make safe-state decisions using advance knowledge (e.g., Banker’s algorithm).
● Detection and recovery: let deadlocks occur, detect periodically, then recover
(preempt/rollback/kill).
Q10. How can we prevent the occurrence of deadlocks?
● Break hold and wait: require all resources at once or release before requesting new ones.
● Eliminate no preemption: allow resource preemption or rollback on denial.
● Remove circular wait: impose a strict global resource ordering and request in order (see the lock-ordering sketch after this list).
● Relax mutual exclusion where possible by making resources sharable (e.g., read-only locks).
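The circular-wait rule in code: a sketch (assuming POSIX threads; compile with -pthread) in which every pair of mutexes is always acquired in one fixed global order (by address here), so the wait-for cycle from the Q8 sketch cannot form.

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

/* Acquire two mutexes in one fixed global order, regardless of how the
   caller orders its arguments. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}

int main(void) {
    /* Both call sites lock in the same underlying order, so even if they ran
       in different threads the circular-wait condition could not arise. */
    lock_pair(&lock1, &lock2);
    unlock_pair(&lock1, &lock2);
    lock_pair(&lock2, &lock1);
    unlock_pair(&lock2, &lock1);
    return 0;
}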
Q11. How can we avoid deadlocks?
● Use safe-state scheduling: only grant requests that keep the system in a safe state.
● Apply Banker’s algorithm using declared maximum demands and current allocations (a compact safety-check sketch follows below).
● Perform need and available checks before allocation; deny or delay unsafe requests.
● Maintain accurate resource accounting and run the safety checks efficiently at run time.
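A compact sketch of the Banker's safety check, using assumed illustrative data (3 processes, 3 resource types): it repeatedly looks for a process whose remaining Need fits within the currently Available resources and, if every process can finish in some order, declares the state safe.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes      */
#define R 3   /* resource types */

/* Returns true if some completion order exists in which every process can
   obtain its remaining Need, i.e. the state is safe. */
bool is_safe(const int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++) work[r] = avail[r];

    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { fits = false; break; }
            if (fits) {                          /* p can run to completion ...   */
                for (int r = 0; r < R; r++)      /* ... and return its allocation */
                    work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* nobody can proceed: unsafe    */
    }
    return true;
}

int main(void) {
    /* Assumed illustrative snapshot, not taken from any question above. */
    int avail[R]    = { 3, 3, 2 };
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2} };
    int need[P][R]  = { {1,2,2}, {0,1,1}, {4,3,1} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}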
Q12. Explain deadlock detection and recovery.
● Detection: Periodically run a wait-for graph cycle test (single-instance resources) or the general detection algorithm (multi-instance) to find deadlocked sets; a DFS cycle-test sketch follows after this list.
● Victim selection: Choose processes to preempt/abort minimizing cost (priority, progress,
resources held).
● Recovery: Preempt resources, rollback to checkpoints, or abort one/more processes to break
the cycle.
● After recovery, ensure starvation is prevented via aging or limits on how often a process is
chosen as victim.
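A DFS cycle test over a wait-for graph, as used for single-instance detection. The adjacency matrix below is an assumed example: P0 waits for P1, P1 waits for P2, and P2 waits back for P1, so P1 and P2 are deadlocked.

#include <stdio.h>

#define N 4

/* Adjacency matrix of the wait-for graph: edge[i][j] = 1 means
   "process i waits for a resource held by process j" (assumed example). */
int edge[N][N] = {
    { 0, 1, 0, 0 },
    { 0, 0, 1, 0 },
    { 0, 1, 0, 0 },   /* P2 -> P1 closes the cycle P1 -> P2 -> P1 */
    { 0, 0, 0, 0 },
};

int state[N];   /* 0 = unvisited, 1 = on current DFS path, 2 = done */

int has_cycle_from(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!edge[u][v]) continue;
        if (state[v] == 1) return 1;                 /* back edge: deadlock */
        if (state[v] == 0 && has_cycle_from(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    int deadlocked = 0;
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && has_cycle_from(u)) { deadlocked = 1; break; }
    printf("%s\n", deadlocked ? "deadlock detected" : "no deadlock");
    return 0;
}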
Q13. Find average waiting time and average turnaround time using FCFS for:
Processes: P0=7, P1=5, P2=2, P3=9 (all arrive at time 0).
● Order: P0 → P1 → P2 → P3 (all arrive together).
● Waiting times: P0=0, P1=7, P2=12, P3=14 → Average waiting = (33)/4 = 8.25.
● Turnaround: add burst to waiting → P0=7, P1=12, P2=14, P3=23 → Average TAT = (56)/4 =
14.00.
● Answer: Average waiting = 8.25 ms; Average turnaround = 14.00 ms (a short check program follows below).
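A short check of the Q13 arithmetic: under FCFS each process waits for the total burst time of everything ahead of it, and turnaround adds its own burst on top.

#include <stdio.h>

int main(void) {
    int burst[] = { 7, 5, 2, 9 };            /* P0..P3, all arriving at t = 0     */
    int n = 4, t = 0, wait_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        int wait = t;                        /* time already spent in ready queue */
        int tat  = wait + burst[i];          /* submission to completion          */
        printf("P%d: wait=%d ms, turnaround=%d ms\n", i, wait, tat);
        wait_sum += wait;
        tat_sum  += tat;
        t += burst[i];
    }
    printf("avg wait = %.2f ms, avg turnaround = %.2f ms\n",
           wait_sum / (double)n, tat_sum / (double)n);
    return 0;
}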
Q14. P1, P2, P3, P4 arrive at times 0,1,2,8 ms with bursts 10,13,6,9 ms. Using SRTF
(ignore context switch), what is the average turnaround time?
● SRTF always runs the process with the smallest remaining time among arrived jobs.
● Schedule sketch: P1 runs [0,2); P3 preempts at t=2 and finishes at 8; P1 resumes over [8,16); P4 runs [16,25); P2 runs last over [25,38).
● Completion times: P3=8, P1=16, P4=25, P2=38.
● Turnaround = completion − arrival → P3=6, P1=16, P4=17, P2=37 → average =
(6+16+17+37)/4.
● Answer: Average turnaround time = 19 ms (a tick-by-tick simulation sketch follows below).
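A tick-by-tick SRTF simulation of the Q14 data (arrivals 0, 1, 2, 8 ms; bursts 10, 13, 6, 9 ms): every millisecond it runs the arrived process with the least remaining time, then reports each turnaround and the average.

#include <stdio.h>

int main(void) {
    int arrive[] = { 0, 1, 2, 8 };
    int remain[] = { 10, 13, 6, 9 };
    int finish[4] = { 0 };
    int n = 4, left = 4;

    for (int t = 0; left > 0; t++) {
        int pick = -1;
        for (int i = 0; i < n; i++)            /* choose smallest remaining time */
            if (arrive[i] <= t && remain[i] > 0 &&
                (pick < 0 || remain[i] < remain[pick]))
                pick = i;
        if (pick < 0) continue;                /* CPU idle this tick             */
        if (--remain[pick] == 0) {             /* run pick for 1 ms              */
            finish[pick] = t + 1;
            left--;
        }
    }

    int tat_sum = 0;
    for (int i = 0; i < n; i++) {
        int tat = finish[i] - arrive[i];
        printf("P%d: completion=%d ms, turnaround=%d ms\n", i + 1, finish[i], tat);
        tat_sum += tat;
    }
    printf("average turnaround = %.2f ms\n", tat_sum / (double)n);
    return 0;
}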