
CP25C03 Advanced Operating Systems

CHAPTER I
Advanced Process and Thread Management: Multithreading models, thread pools, context
switching, Synchronization issues and solutions: semaphores, monitors, lock-free data structures,
CPU scheduling in multi-core systems

Activity: CPU scheduler simulation for multicore systems.

2 Marks Questions

1. Define process and thread in modern operating systems.
2. List the advantages of multithreading in multi-core environments.
3. Differentiate between user-level and kernel-level threads.
4. What is a thread pool?
5. Define context switching and its overhead.
6. What are synchronization primitives? Give examples.
7. State the role of semaphores in thread synchronization.
8. What is a monitor in concurrent programming?
9. Differentiate between binary semaphore and counting semaphore.
10. Define lock-free data structures. Why are they important?
11. Mention two common CPU scheduling algorithms used in multi-core systems.
12. What is load balancing in multi-core scheduling?
13. Define race condition and give an example.
14. Explain the need for critical section protection.
15. What is the difference between preemptive and non-preemptive scheduling?
16. Define affinity scheduling in multi-core processors.
17. What is NUMA awareness and how does it relate to thread management?
18. List any two advantages of thread pools over dynamic thread creation.
19. What is the purpose of thread synchronization?
20. Write a short note on real-time scheduling in multi-core systems.

16 Marks Questions

1. Explain in detail the multithreading models with neat diagrams. Compare user-level,
kernel-level, and hybrid threading models.
2. Discuss the design and implementation of thread pools. Explain how they improve
performance and resource utilization in modern OS.
3. Describe the process of context switching between threads and processes. Explain how
hardware and software jointly manage context.
4. Explain synchronization mechanisms using semaphores and monitors. Compare their
advantages and disadvantages with examples.
5. Discuss lock-free data structures and their role in high-performance concurrent
programming. Provide an example of a lock-free queue or stack.
6. Write short notes on the following:
a) Thread lifecycle
b) Thread cancellation
c) Thread safety
d) Thread-local storage
7. Explain the design of CPU scheduling algorithms for multi-core systems. Compare
global scheduling and per-core scheduling policies.
8. Analyze synchronization issues in multi-core systems and discuss software-level
solutions for minimizing contention.
9. Explain multi-core scheduling techniques such as load balancing, work stealing, and
cache-aware scheduling with examples.
10. Write a detailed note on thread synchronization problems such as race conditions,
deadlocks, and priority inversion.
11. Discuss the implementation of monitors in operating systems. Compare their use with
semaphores in managing critical sections.
12. Explain how operating systems handle multithreaded applications in distributed or
cloud environments.
13. Describe the architecture and mechanism of thread pools. How are threads created,
assigned, and destroyed efficiently?
14. Compare and contrast various CPU scheduling algorithms (Round Robin, Multilevel
Queue, Priority, CFS) in multi-core systems.
15. Explain the concept of parallelism and concurrency. How does thread-level
parallelism improve CPU throughput?
16. Case Study: Design a simulation for multi-core CPU scheduling where multiple
threads compete for CPU cycles. Discuss fairness, load balance, and throughput.

Activity (Practical Component)

Objective: To simulate CPU scheduler behavior in multi-core environments.


Tasks:

 Implement a simple CPU scheduler simulation using Round Robin or Priority Scheduling.
 Simulate multiple cores processing threads concurrently.
 Analyze load distribution, waiting time, and turnaround time.

Tools Suggested: Python, C++, or Java with threading libraries.
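The tasks above can be sketched in Python as a discrete-time simulation (a minimal sketch only: the task names, burst times, core count, and quantum are illustrative, and the cores are approximated as advancing in lock-step quanta rather than with true parallel timing):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    burst: int          # total CPU time required
    remaining: int = 0
    finish: int = 0     # completion time, filled in by the simulation

def simulate_rr(tasks, cores=2, quantum=3):
    """Round Robin over one shared ready queue serving `cores` CPUs."""
    for t in tasks:
        t.remaining = t.burst
    ready = deque(tasks)
    time = 0
    while ready:
        # Each core picks at most one task per scheduling round.
        running = [ready.popleft() for _ in range(min(cores, len(ready)))]
        for t in running:
            used = min(quantum, t.remaining)
            t.remaining -= used
            if t.remaining == 0:
                t.finish = time + used   # core idles for the rest of the quantum
        time += quantum                  # cores run one quantum in parallel
        ready.extend(t for t in running if t.remaining > 0)

tasks = [Task("T1", 7), Task("T2", 4), Task("T3", 9), Task("T4", 5)]
simulate_rr(tasks, cores=2, quantum=3)
for t in tasks:   # all tasks arrive at time 0, so turnaround = finish time
    print(t.name, "turnaround:", t.finish, "waiting:", t.finish - t.burst)
```

Varying `cores` and `quantum` lets you compare load distribution, waiting time, and turnaround time as the activity asks.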

CHAPTER II
Memory and Resource Management in Modern OS: Virtual memory, demand paging, page
replacement policies-Huge pages, NUMA-aware memory management-Resource allocation in
cloud-native environments

Activity: Simulate demand paging and page replacement algorithms.

2 Marks Questions

1. Define virtual memory.
2. What is demand paging?
3. State the difference between physical memory and logical memory.
4. What is a page fault?
5. Name two page replacement algorithms.
6. What is a TLB (Translation Lookaside Buffer)?
7. Define huge pages.
8. Mention one advantage of NUMA-aware memory management.
9. What is resource allocation in cloud-native environments?
10. Give an example of cloud-native memory management technique.
11. What is local memory allocation in NUMA systems?
12. Define interleaved memory allocation.
13. What is the purpose of page tables?
14. Name two common page replacement policies.
15. Define working set in memory management.
16. What is lazy loading in demand paging?
17. Mention one disadvantage of FIFO page replacement.
18. What is page migration in NUMA systems?
19. Define affinity in memory allocation.
20. What is the goal of simulating demand paging and page replacement?

16 Marks Questions

1. Explain virtual memory with its advantages, structure, and operation. Include a diagram
showing logical to physical memory mapping.
2. Discuss demand paging and illustrate the step-by-step process when a page fault occurs.
3. Explain page replacement policies in detail, including FIFO, LRU, Optimal, and Clock
algorithms. Compare their performance and use cases.
4. Describe huge pages and their impact on TLB misses and system performance. Include
examples of page sizes used in modern OS.
5. Explain NUMA architecture and discuss NUMA-aware memory management strategies
like local allocation, interleaved allocation, and page migration.
6. Analyze resource allocation in cloud-native environments, including resource quotas,
auto-scaling, and affinity/anti-affinity scheduling.
7. Compare and contrast traditional memory management versus cloud-native resource
management in multi-core systems.
8. Explain working set model and its significance in virtual memory management.
9. Case Study: Simulate demand paging in a system with a fixed number of frames. Show
page faults using FIFO, LRU, and Optimal algorithms, and compare results.
10. Explain memory allocation policies for containers in cloud environments, including
best-fit, worst-fit, and dynamic allocation.
11. Discuss the role of TLBs in speeding up memory access and their interaction with page
tables.
12. Explain how page migration works in NUMA systems and its effect on performance.
13. Describe advanced page replacement algorithms like LFU and aging algorithms.
Compare with standard FIFO and LRU.
14. Analyze trade-offs between memory overhead and performance in using huge pages
versus standard pages.
15. Explain simulation activity for demand paging and page replacement algorithms,
including expected outcomes.
16. Discuss challenges in cloud-native resource management, such as contention,
overcommitment, and scalability.

Activity Component

Objective: To simulate demand paging and evaluate different page replacement algorithms.

Tasks:

1. Implement a simulation using a programming language (Python, C, or Java).
2. Create a reference string for page requests.
3. Simulate at least three page replacement policies (FIFO, LRU, Optimal).
4. Measure and compare the number of page faults.
5. Discuss the impact of frame count on page fault rate.
6. Extend simulation to NUMA-aware allocation if possible.
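Tasks 1–4 can be sketched as one Python function that replays a reference string under each policy (a minimal sketch: the reference string below is the classic textbook example, and "OPT" denotes the Optimal policy):

```python
def page_faults(refs, frames, policy="FIFO"):
    """Count page faults for a reference string under FIFO, LRU, or OPT."""
    memory = []          # resident pages; meaning of ordering depends on policy
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":           # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:          # free frame available
            memory.append(page)
            continue
        if policy in ("FIFO", "LRU"):
            # FIFO: front = oldest loaded; LRU: front = least recently used
            memory.pop(0)
        else:  # OPT: evict the page used farthest in the future (or never)
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future
                         else float("inf"))
            memory.remove(victim)
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, page_faults(refs, frames=3, policy=policy))
```

Rerunning with different `frames` values addresses Task 5, the effect of frame count on the page-fault rate.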

Expected Outcome: Students will understand virtual memory operation, the behavior of page
replacement algorithms, and performance optimization techniques in modern OS.

CHAPTER III
Virtualization and Containerization: Hypervisors (Type I & II), KVM, QEMU, Xen-
Containers: Docker, LXC, systemd-nspawn-OS-level virtualization and namespaces
Activity: Deploy and configure Docker containers with various images.

2 Marks Questions

1. Define virtualization.
2. Differentiate between Type I and Type II hypervisors.
3. What is KVM?
4. Define QEMU.
5. What is Xen hypervisor?
6. Define containerization.
7. What is Docker?
8. Define LXC (Linux Containers).
9. What is systemd-nspawn?
10. Define OS-level virtualization.
11. What is a namespace in Linux?
12. Name two types of namespaces used for container isolation.
13. Define cgroups and their purpose.
14. Differentiate between container and VM.
15. What is a container image?
16. Mention one advantage of Docker over traditional VMs.
17. What is container orchestration?
18. Define hypervisor-based virtualization.
19. State one disadvantage of Type II hypervisors.
20. Mention one example of cloud-native container deployment.

16 Marks Questions

1. Explain the concept of virtualization and describe the architecture of Type I and Type
II hypervisors with diagrams.
2. Discuss KVM, QEMU, and Xen in detail. Compare their performance and use cases.
3. Explain containerization and compare it with traditional virtualization.
4. Describe Docker architecture, including Docker Engine, images, containers, and
registries.
5. Explain LXC and systemd-nspawn. How do they differ from Docker?
6. Discuss OS-level virtualization and explain how namespaces and cgroups provide
isolation.
7. Describe Linux namespaces in detail: PID, NET, MNT, UTS, IPC, and USER
namespaces.
8. Explain container orchestration and resource management in cloud environments
using Docker/Kubernetes.
9. Case Study: Deploy multiple Docker containers with different images. Discuss
networking, volumes, and isolation.
10. Compare hypervisor-based virtualization and container-based virtualization in
terms of performance, scalability, and resource usage.
11. Explain the advantages and limitations of Docker containers for modern application
deployment.
12. Discuss the security mechanisms in containers and hypervisors.
13. Explain the process of building a custom Docker image using a Dockerfile. Include an
example.
14. Discuss the role of cgroups and namespaces in container resource allocation and
isolation.
15. Describe how virtualization and containerization support cloud-native and
microservices architectures.
16. Activity Simulation: Deploy and configure multiple Docker containers, test inter-
container communication, and manage container lifecycle.

Activity Component

Objective: Deploy and manage Docker containers to understand container lifecycle and
isolation.

Tasks:

1. Install Docker on Linux or Windows.
2. Pull container images from Docker Hub using docker pull <image_name>.
3. Run containers with docker run -it <image_name>.
4. List running containers using docker ps.
5. Stop and remove containers using docker stop and docker rm.
6. Build a custom Docker image using a Dockerfile.
7. Explore namespaces and resource limits using Docker options.

Expected Outcome: Students will understand container deployment, image management, isolation mechanisms, and resource allocation.

CHAPTER IV
Distributed Operating Systems and File Systems: Distributed scheduling, communication, and
synchronization-Distributed file systems: NFS, GFS, HDFS-Transparency issues and fault
tolerance
Activity: Simulate distributed process synchronization.

2 Marks Questions

1. Define distributed operating system.
2. What is distributed scheduling?
3. Define distributed process synchronization.
4. Name two distributed file systems.
5. What is NFS?
6. What is GFS (Google File System)?
7. Define HDFS (Hadoop Distributed File System).
8. What is transparency in distributed systems?
9. List types of transparency in distributed OS.
10. Define fault tolerance.
11. What is communication latency in distributed systems?
12. Mention one advantage of distributed scheduling.
13. Define remote procedure call (RPC).
14. What is deadlock in distributed systems?
15. What is a replica in distributed file systems?
16. Name one challenge in distributed process communication.
17. What is client-server model in distributed file systems?
18. Define process migration.
19. What is atomic broadcast?
20. Mention one method to achieve fault tolerance.

16 Marks Questions

1. Explain distributed scheduling with algorithms like centralized, decentralized, and hierarchical scheduling. Include examples.
2. Discuss distributed process communication and synchronization. Explain message
passing, logical clocks, and mutual exclusion algorithms.
3. Describe distributed file systems: NFS, GFS, and HDFS. Include architecture, key
features, and differences.
4. Explain transparency issues in distributed systems: access, location, replication,
concurrency, and fault transparency.
5. Discuss fault tolerance techniques in distributed systems. Explain checkpointing,
replication, and recovery protocols.
6. Case Study: Design a distributed system that synchronizes multiple processes. Discuss
communication and synchronization mechanisms used.
7. Compare NFS, GFS, and HDFS in terms of scalability, reliability, and performance.
8. Explain process synchronization algorithms in distributed systems, including
Lamport’s algorithm and Ricart–Agrawala algorithm.
9. Discuss distributed mutual exclusion techniques and their importance.
10. Explain distributed system communication models, including synchronous and
asynchronous messaging.
11. Discuss challenges in achieving fault tolerance in large-scale distributed file systems.
12. Explain how replication and consistency mechanisms work in distributed file systems.
13. Describe the role of distributed scheduling in cloud-native and high-performance
computing environments.
14. Analyze transparency trade-offs in distributed OS design.
15. Explain logical and vector clocks for maintaining consistency across distributed
processes.
16. Activity Simulation: Simulate distributed process synchronization using message
passing or shared variables. Analyze deadlock prevention and fault recovery.

Activity Component

Objective: To simulate distributed process synchronization and understand communication,
coordination, and fault tolerance mechanisms.

Tasks:

1. Implement a distributed process synchronization algorithm (Lamport’s or Ricart–Agrawala) in Python, Java, or C++.
2. Simulate multiple processes accessing a shared resource.
2. Simulate multiple processes accessing a shared resource.
3. Demonstrate communication and synchronization between processes.
4. Introduce simulated failures and demonstrate fault tolerance mechanisms.
5. Measure performance metrics like waiting time, message count, and throughput.
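As a starting point for Task 1, the core of Lamport's logical-clock rules can be sketched in Python with in-memory queues standing in for network links (the process IDs and message pattern are illustrative; a full Lamport or Ricart–Agrawala mutual-exclusion protocol would build request/reply ordering on top of these clocks):

```python
import queue

class Process:
    """A simulated process with a Lamport logical clock."""
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0
        self.inbox = queue.Queue()   # stands in for the network link

    def local_event(self):
        self.clock += 1              # rule 1: tick on every local event

    def send(self, dest):
        self.clock += 1              # sending counts as an event
        dest.inbox.put((self.pid, self.clock))

    def receive(self):
        sender, ts = self.inbox.get()
        # rule 2: jump past the sender's timestamp, then tick
        self.clock = max(self.clock, ts) + 1
        return sender, ts

p1, p2 = Process(1), Process(2)
p1.local_event()            # p1.clock = 1
p1.send(p2)                 # p1.clock = 2; message carries timestamp 2
p2.local_event()            # p2.clock = 1
p2.receive()                # p2.clock = max(1, 2) + 1 = 3
print("p1:", p1.clock, "p2:", p2.clock)
```

Because every receive advances past the sender's timestamp, the clocks give a total order consistent with the happened-before relation, which is what distributed mutual-exclusion algorithms rely on.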

Expected Outcome: Students will understand distributed process coordination, synchronization algorithms, transparency issues, and fault tolerance in distributed OS environments.

CHAPTER V
Security and Trust in Operating Systems: Access control models: DAC, MAC, RBAC-OS
hardening techniques, sandboxing, SELinux, AppArmor-Secure boot, rootkit detection, trusted
execution environments
Activity: Implement Role-Based Access Control (RBAC) using Linux user and group
permissions.

2 Marks Questions

1. Define access control in operating systems.
2. What is DAC (Discretionary Access Control)?
3. Define MAC (Mandatory Access Control).
4. What is RBAC (Role-Based Access Control)?
5. Mention two OS hardening techniques.
6. What is sandboxing in OS security?
7. Define SELinux.
8. Define AppArmor.
9. What is secure boot?
10. Define rootkit.
11. What is a trusted execution environment (TEE)?
12. Name two types of access control models.
13. Define principle of least privilege.
14. What is a security context in SELinux?
15. Mention one benefit of RBAC.
16. What is the purpose of file permissions in Linux?
17. Define capabilities in Linux.
18. What is security auditing?
19. Name one method for rootkit detection.
20. Define policy enforcement in OS security.

16 Marks Questions

1. Explain access control models in detail: DAC, MAC, and RBAC, with examples and
use cases.
2. Discuss OS hardening techniques including patch management, configuration policies,
and minimizing attack surface.
3. Explain sandboxing and its role in isolating processes. Give examples of sandboxed
applications.
4. Describe SELinux and AppArmor. Compare their features, modes, and enforcement
policies.
5. Explain secure boot mechanisms and how they protect the system from unauthorized
code execution.
6. Discuss rootkit detection techniques in Linux and Windows environments.
7. Explain Trusted Execution Environments (TEE) and their role in modern OS security.
8. Analyze RBAC implementation in Linux using user and group permissions. Include a
step-by-step approach.
9. Discuss the principle of least privilege and its application in OS security.
10. Explain security auditing and logging mechanisms in operating systems.
11. Case Study: Implement RBAC in Linux. Assign roles to users, enforce permissions, and
verify access control.
12. Discuss challenges and solutions for enforcing access control in multi-user and cloud
environments.
13. Compare DAC, MAC, and RBAC in terms of flexibility, scalability, and security.
14. Explain OS hardening for servers hosting critical applications. Include SELinux,
AppArmor, and secure boot in your explanation.
15. Describe methods to protect against privilege escalation attacks in Linux.
16. Activity Simulation: Implement RBAC in Linux using users, groups, and permissions.
Demonstrate access control enforcement and auditing.

Activity Component

Objective: Implement Role-Based Access Control (RBAC) using Linux user and group
permissions.

Tasks:

1. Create Linux users and groups for different roles (e.g., admin, developer, guest).
2. Assign appropriate permissions to each group.
3. Create files and directories and enforce group-based access controls.
4. Verify access control by switching users and testing read/write/execute permissions.
5. Enable auditing to log permission violations.
6. Demonstrate RBAC enforcement through sample use cases.
Expected Outcome: Students will understand role-based access control, Linux permission
management, auditing, and secure user access enforcement.

CHAPTER VI
Real-Time and Embedded Operating Systems: Real-time scheduling algorithms (EDF, RM)-
POSIX RT extensions, RTOS architecture-TinyOS, FreeRTOS case studies
Activity: Analyze FreeRTOS task scheduling behavior.

2 Marks Questions

1. Define real-time operating system (RTOS).
2. What is the difference between hard and soft real-time systems?
3. Define task in RTOS.
4. What is EDF (Earliest Deadline First) scheduling?
5. Define Rate Monotonic (RM) scheduling.
6. What are POSIX RT extensions?
7. Define RTOS architecture.
8. Mention two features of TinyOS.
9. Mention two features of FreeRTOS.
10. What is a priority inversion problem?
11. Define preemptive scheduling.
12. Define non-preemptive scheduling.
13. What is task latency?
14. Define jitter in real-time systems.
15. Name two embedded operating systems.
16. What is tickless RTOS operation?
17. Define semaphores in RTOS.
18. What is a task control block (TCB)?
19. Define context switching in RTOS.
20. Mention one use case of FreeRTOS in embedded systems.

16 Marks Questions

1. Explain real-time operating systems and their key characteristics. Include differences
between hard and soft RTOS.
2. Discuss real-time scheduling algorithms: EDF and RM. Include mathematical analysis
of schedulability.
3. Explain POSIX RT extensions and their role in real-time task management.
4. Describe RTOS architecture with key components: kernel, scheduler, inter-task
communication, and device drivers.
5. Discuss TinyOS and FreeRTOS architectures and compare their features and use cases.
6. Explain priority inversion problem and describe solutions like priority inheritance and
priority ceiling protocols.
7. Analyze preemptive vs non-preemptive scheduling in real-time systems with
examples.
8. Explain inter-task communication mechanisms in RTOS: semaphores, queues, and
mutexes.
9. Case Study: Implement task scheduling in FreeRTOS and analyze task execution order
and timing behavior.
10. Discuss context switching in RTOS and its impact on real-time performance.
11. Explain the concept of jitter and methods to minimize it in embedded systems.
12. Discuss scheduling of periodic and aperiodic tasks in embedded RTOS.
13. Compare TinyOS, FreeRTOS, and other embedded OS in terms of memory footprint,
scheduling, and real-time performance.
14. Analyze task latency and response time in FreeRTOS using example tasks.
15. Explain tickless RTOS operation and its benefits in low-power embedded systems.
16. Activity Simulation: Analyze FreeRTOS task scheduling behavior. Measure timing,
priorities, and response under load.

Activity Component

Objective: Analyze task scheduling behavior in FreeRTOS.

Tasks:

1. Set up a FreeRTOS environment on a supported microcontroller or simulator.
2. Create multiple tasks with different priorities.
3. Configure EDF or RM scheduling if available.
4. Monitor task execution order, response times, and CPU utilization.
5. Simulate high-priority and low-priority task preemption.
6. Analyze results for latency, jitter, and priority inversion scenarios.

Expected Outcome: Students will understand real-time scheduling, task priorities, and
performance behavior in embedded RTOS environments.

CHAPTER VII
Edge and Cloud OS: Future Paradigms: Serverless OS, unikernels, lightweight OS for edge
computing-Mobile OS internals (Android, iOS)-OS for quantum and neuromorphic computing
(intro)
Activity: Analyze Android’s system architecture using emulator tools.

2 Marks Questions

1. Define serverless operating system.
2. What is a unikernel?
3. Mention one advantage of lightweight OS for edge computing.
4. Define mobile operating system.
5. Name the two major mobile OS platforms.
6. What is meant by Android OS internals?
7. What is meant by iOS internals?
8. What is a hypervisor-based unikernel?
9. Define quantum computing OS.
10. Define neuromorphic OS.
11. Mention one key challenge of edge OS design.
12. What is container-based OS deployment?
13. Define microkernel in OS context.
14. What is Android Emulator used for?
15. Define app sandboxing in mobile OS.
16. Mention one future trend in cloud OS.
17. Define serverless function execution.
18. What is resource isolation in edge computing OS?
19. What is the boot-time advantage of unikernels?
20. Mention one feature of iOS security model.

16 Marks Questions

1. Explain serverless OS architectures and their advantages in cloud computing.


2. Discuss unikernels: design principles, advantages, and deployment scenarios.
3. Explain lightweight operating systems for edge computing and how they handle
resource constraints.
4. Describe Android OS internals: Linux kernel, runtime, libraries, application
framework, and system services.
5. Discuss iOS OS internals and its layered architecture including kernel, frameworks, and
security.
6. Compare Android and iOS system architectures, focusing on process management,
memory, and security.
7. Explain microkernel design for cloud and edge OS. Include benefits and limitations.
8. Introduce OS support for quantum computing: challenges and possible architectures.
9. Discuss OS concepts for neuromorphic computing: event-driven scheduling and
hardware abstraction.
10. Analyze serverless function execution and resource allocation strategies.
11. Explain app sandboxing and security mechanisms in Android and iOS.
12. Case Study: Use Android emulator to analyze system architecture, processes, and
memory management.
13. Discuss resource isolation and scheduling in lightweight edge OS.
14. Explain the role of unikernels in secure cloud deployments.
15. Discuss trends in mobile OS evolution toward edge and cloud integration.
16. Activity Simulation: Analyze Android’s system architecture using emulator tools,
including process, memory, and security management.

Activity Component

Objective: Analyze Android system architecture using emulator tools.

Tasks:

1. Set up Android Studio and Android Emulator on your system.
2. Launch an emulator with a virtual device (AVD).
3. Examine system processes, memory usage, and application framework components.
4. Explore process scheduling and priority using adb shell commands.
5. Analyze security features such as app sandboxing, permissions, and SELinux policies.
6. Document the architecture layers: Linux kernel, runtime, libraries, frameworks, and
applications.
7. Compare behavior across different Android versions if possible.

Expected Outcome: Students will understand Android OS internals, system architecture, process management, memory handling, and security enforcement in a simulated environment.
