1.
a) Why could Cloud Computing be successful when other paradigms have
failed? What are the characteristics of cloud computing?
1. **Scalability**: Cloud computing allows systems to scale up or down based
on demand easily.
2. **Cost Efficiency**: Users pay only for what they use, reducing infrastructure
and maintenance costs.
3. **Accessibility**: Services can be accessed from anywhere with an internet
connection.
4. **Reliability**: Cloud providers offer high uptime and data backup features.
5. **Flexibility**: Supports a wide range of services (storage, processing, apps)
without hardware setup.
6. **Automatic Updates**: Cloud platforms regularly update and maintain
systems without user effort.
7. **On-Demand Service**: Resources are available instantly when required.
### ✅ Characteristics of Cloud Computing:
1. **On-Demand Self-Service**: Users can access computing resources as
needed without human interaction.
2. **Broad Network Access**: Services are available over the internet via
various devices (PC, mobile, etc.).
3. **Resource Pooling**: Resources are shared among multiple users using
multi-tenant models.
4. **Rapid Elasticity**: Resources can be scaled quickly to meet changing
demands.
5. **Measured Service**: Usage is monitored, controlled, and billed based on
consumption.
6. **High Availability**: Cloud services ensure continuous operation and
minimal downtime.
7. **Security**: Data protection, access control, and backup are key features.
b) What is Platform-as-a-Service (PaaS)? Explain the backend architecture of
cloud computing.
Definition:
Platform-as-a-Service (PaaS) is a cloud computing service model that provides
developers with a platform to build, deploy, and manage applications without
dealing with the underlying infrastructure.
Purpose:
It allows users to focus on coding and development rather than server
management, storage, or networking.
Examples:
Google App Engine
Microsoft Azure App Services
Heroku
Backend architecture of cloud computing
The backend architecture of cloud computing is responsible for
processing and managing user requests. It includes:
Cloud Infrastructure (Hardware Layer) – Physical servers, storage devices, and
network components in data centers.
Virtualization Layer – Uses hypervisors (like VMware, Xen, or KVM) to create
and manage virtual machines (VMs).
Resource Management – Allocates and optimizes resources (CPU, RAM,
storage) dynamically based on demand.
Security & Compliance – Implements firewalls, encryption, and
authentication mechanisms to ensure data protection.
APIs & Middleware – Provides interfaces for users and developers to
interact with the cloud services.
Storage Management – Manages block storage, object storage,
and database storage for efficient data handling.
This backend infrastructure ensures high availability, scalability, and security in
cloud computing environments.
What is Infrastructure-as-a-Service?
Infrastructure-as-a-Service (IaaS) is a cloud computing model that provides
virtualized computing resources over the internet. It offers fundamental
infrastructure such as virtual machines, storage, networking, and operating
systems on a pay-as-you-go basis.
Key Features of IaaS:
Scalability – Users can scale computing power up or down as needed.
Cost-Efficiency – No need for physical hardware investment; resources are rented.
On-Demand Resources – Provides servers, storage, and networking on demand.
Flexibility – Users can install and manage their own software and applications.
Managed Infrastructure – The cloud provider manages the hardware, while users control the OS and applications.
Examples of IaaS Providers:
Amazon Web Services (AWS) – EC2
Microsoft Azure – Virtual Machines
Google Cloud – Compute Engine
2.
a) What are the infrastructural constraints in cloud computing? Explain
Multiple-Instruction Multiple-Data (MIMD) systems.
**Infrastructural Constraints in Cloud Computing:**
1. **Network Latency**: Delay in data transfer affects real-time application
performance.
2. **Bandwidth Limitations**: Insufficient bandwidth leads to slow data access
and upload/download issues.
3. **Power and Cooling Needs**: Data centers require high power and efficient
cooling systems to operate.
4. **Data Security and Privacy**: Protecting large amounts of sensitive data is
challenging.
5. **Hardware Failures**: Physical components (servers, disks) may fail,
affecting uptime.
6. **Scalability Issues**: Difficulty in scaling infrastructure during peak loads.
7. **Regulatory Compliance**: Following different national and international IT
laws can be complex.
8. **Resource Allocation**: Efficient distribution of virtual resources can be
hard to manage.
**Multiple-Instruction Multiple-Data (MIMD) Systems:**
1. **Definition**: MIMD is a type of parallel computing architecture where
multiple processors execute different instructions on different data
simultaneously.
2. **Working**: Each processor has its own control unit and can perform
independent tasks in parallel.
3. **Use Cases**:
* Supercomputers
* Cloud servers
* High-performance computing tasks
4. **Advantages**:
* High processing power
* Efficient for complex and diverse workloads
5. **Example**: Cloud platforms using multiple virtual machines (VMs) to
perform independent tasks.
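The MIMD idea can be illustrated with a small Python sketch (plain multiprocessing, no cloud APIs assumed): two worker processes execute different instructions on different data at the same time.

```python
# Illustrative MIMD sketch: independent workers run *different*
# instructions on *different* data simultaneously.
from multiprocessing import Pool

def sum_squares(nums):          # instruction stream 1
    return sum(n * n for n in nums)

def count_evens(nums):          # instruction stream 2
    return sum(1 for n in nums if n % 2 == 0)

def run_mimd():
    with Pool(processes=2) as pool:
        # Each worker gets its own task AND its own data.
        r1 = pool.apply_async(sum_squares, ([1, 2, 3],))
        r2 = pool.apply_async(count_evens, ([4, 5, 6, 7],))
        return r1.get(), r2.get()

if __name__ == "__main__":
    print(run_mimd())  # (14, 2)
```

Cloud VMs extend the same principle: each VM runs its own instruction stream on its own data, scheduled across physical processors.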
b) What is a hypervisor? How can microservices eliminate the problem of
monolith architecture?
A hypervisor is a software or firmware layer that allows multiple virtual
machines (VMs) to run on a single physical machine by managing hardware
resources.
Microservices architecture addresses several key problems associated with
monolithic architectures by breaking down applications into smaller,
independent services. Here’s how microservices eliminate or mitigate common
monolithic challenges:
1. Scalability Issues
- **Monolith Problem:** Scaling a monolithic app requires replicating the
entire application, even if only one module faces high demand.
- **Microservices Solution:** Individual services can be scaled
independently based on demand (e.g., scaling only the payment service during
peak shopping hours).
2. Slow Development & Deployment
- **Monolith Problem:** Large codebases lead to slower builds, testing, and
deployments. A single change requires redeploying the entire app.
- **Microservices Solution:** Smaller, decoupled services allow teams to
develop, test, and deploy independently (CI/CD pipelines per service).
3. Technology Lock-in
- **Monolith Problem:** A monolithic app is usually built with a single tech
stack, making it hard to adopt new technologies.
- **Microservices Solution:** Each service can use a different
language/framework (e.g., Python for ML, Node.js for APIs, Java for backend).
4. Fault Isolation & Resilience
- **Monolith Problem:** A single bug or failure can crash the entire
application.
- **Microservices Solution:** Failures are isolated (e.g., if the user service
crashes, the product service remains operational). Circuit breakers and retries
improve resilience.
5. Team Collaboration Bottlenecks
- **Monolith Problem:** Large teams working on a single codebase cause
merge conflicts and coordination delays.
- **Microservices Solution:** Small, cross-functional teams own specific
services (aligned with DevOps principles), enabling parallel development.
6. Performance & Efficiency
- **Monolith Problem:** A bloated codebase leads to slow startup times and
inefficient resource usage.
- **Microservices Solution:** Lightweight services start faster and optimize
resource allocation (e.g., memory-heavy services can run on dedicated
instances).
7. Database Scalability & Flexibility
- **Monolith Problem:** A single database becomes a bottleneck and forces
a one-size-fits-all schema.
- **Microservices Solution:** Each service has its own database (Polyglot
Persistence), allowing SQL for transactions and NoSQL for analytics.
8. Long-Term Maintainability
- **Monolith Problem:** Over time, monoliths become harder to refactor due
to tight coupling.
- **Microservices Solution:** Loose coupling allows incremental updates
without full rewrites.
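The circuit-breaker pattern mentioned under fault isolation can be sketched in a few lines of Python (class and threshold names are illustrative, not from any specific library):

```python
# Minimal circuit-breaker sketch: after `max_failures` consecutive
# errors the breaker "opens" and calls fail fast instead of hitting
# the broken downstream service.
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1   # count the failure, then propagate it
            raise
        self.failures = 0        # any success resets the counter
        return result
```

Production systems add a timeout after which the breaker "half-opens" and retries, but the fail-fast idea is the same.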
3.
a) How Can Attackers Attack Hypervisors?
1. **VM Escape**:
* Attackers exploit vulnerabilities to break out of a virtual machine and gain
control over the host hypervisor or other VMs.
2. **Hypervisor Exploits**:
* Using malware or known software bugs to gain unauthorized access to
the hypervisor layer.
3. **Denial of Service (DoS) Attacks**:
* Overloading the hypervisor with traffic or requests to crash or disable
virtual machines.
4. **Unauthorized VM Access**:
* Weak access controls or misconfigurations allow attackers to access or
control virtual machines.
5. **Inter-VM Attacks**:
* Compromising one VM to spy on or attack other VMs sharing the same
hypervisor (also known as side-channel attacks).
6. **Management Interface Attack**:
* Targeting the hypervisor’s management console or API using stolen
credentials or brute force attacks.
7. **Malicious VM Injection**:
* Deploying a compromised or malicious VM into the environment to
monitor or manipulate other VMs.
8. **Insecure Hypervisor Configuration**:
* Default passwords, open ports, or outdated software can be exploited by
attackers.
b) What is horizontal scaling? How can IT resource over-utilization be avoided?
Horizontal scaling (also called scale-out) is the process of adding more
machines or servers to handle increased load.
How Can IT Resource Over-Utilization Be Avoided?
Auto-Scaling: Automatically add or remove resources based on system load or
traffic.
Load Balancing: Distribute workloads evenly across multiple servers to avoid
overloading one system.
Monitoring & Alerts: Use tools to monitor CPU, memory, and disk usage; set
alerts to act early.
Resource Quotas: Set limits for users or applications to prevent excessive
resource usage.
Virtualization: Use virtual machines or containers to optimize usage and isolate
workloads.
Scheduled Tasks: Run heavy processes during off-peak hours to balance
system load.
Efficient Coding: Optimize software to use fewer resources through better
algorithms and data handling.
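The load-balancing idea above can be sketched with a minimal round-robin balancer in Python (server names are illustrative):

```python
# Round-robin load balancing: requests are spread evenly across
# servers so no single machine is over-utilized.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)  # endless rotation over the pool

    def next_server(self):
        return next(self._servers)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.next_server() for _ in range(4)])
# ['web-1', 'web-2', 'web-3', 'web-1']
```

Real balancers also weigh servers by capacity or current load, but round-robin is the simplest even-distribution policy.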
4.
a) What is a threat agent? Explain the MapReduce algorithm in the cloud.
A threat agent is any entity (human or non-human) that can exploit
vulnerabilities in a system, network, or application to cause harm.
MapReduce Algorithm in Cloud Computing:
Definition:
MapReduce is a programming model used in cloud computing for processing
large data sets across distributed clusters (like Hadoop).
Components:
Map Function: Processes input data and produces intermediate key-value pairs.
Reduce Function: Aggregates or summarizes data with the same key from the
Map phase.
Example (Word Count):
Map: (“cloud”, 1), (“computing”, 1), (“cloud”, 1)
Reduce: (“cloud”, 2), (“computing”, 1)
Advantages in Cloud:
Highly scalable and fault-tolerant
Can run on commodity hardware
Ideal for big data analytics
Automatically manages parallelization and distribution
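The word-count example above can be sketched as a toy, single-process Python program; real frameworks such as Hadoop run the same two phases across a distributed cluster:

```python
# Toy word-count MapReduce: map emits (word, 1) pairs, reduce sums
# the counts for each distinct key.
from collections import defaultdict

def map_phase(text):
    # Emit an intermediate (word, 1) pair for every word.
    return [(word, 1) for word in text.split()]

def reduce_phase(pairs):
    # Aggregate all pairs that share the same key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = map_phase("cloud computing cloud")
print(reduce_phase(pairs))  # {'cloud': 2, 'computing': 1}
```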
b) What is cloud computing open architecture? Explain Jericho cloud cube
model.
Cloud computing open architecture refers to a design model that allows
interoperability, flexibility, and integration of various cloud services and
platforms, regardless of vendor.
Key Features:
Vendor-Neutral: Can work across different cloud providers.
Modular Design: Components can be added or removed easily.
Open Standards: Uses widely accepted protocols and APIs.
Interoperability: Ensures different systems can communicate with each other.
Jericho Cloud Cube Model:
Definition:
The Jericho Cloud Cube Model is a framework developed by the Jericho Forum
to classify cloud services based on four dimensions related to security and
deployment.
Four Dimensions of the Cube:
1. Internal vs. External
Internal Cloud: Hosted within an organization.
External Cloud: Hosted by third-party providers.
2. Proprietary vs. Open
Proprietary: Uses closed, vendor-specific technologies.
Open: Uses open standards for better compatibility.
3. Perimeterised vs. De-perimeterised
Perimeterised: Secured by traditional network boundaries (firewalls).
De-perimeterised: Security depends on identity, encryption, and endpoint
protection, not physical boundaries.
4. Insourced vs. Outsourced
Insourced: Managed by the organization’s internal team.
Outsourced: Managed by an external vendor or cloud provider.
Purpose:
Helps organizations analyze risks and select cloud solutions based on their
security needs and business goals.
5.
a) Explain service level agreement (SLA) with its types.
An SLA is a formal agreement between a cloud service provider and a customer
that defines the expected level of service, responsibilities, and performance
standards.
Components:
Uptime Guarantee (e.g., 99.9% availability)
Performance Metrics (e.g., response time, latency)
Support Availability (e.g., 24/7 support)
Disaster Recovery Time
Compensation or Penalty for SLA violations
Types of SLA in Cloud Computing:
1. Customer-Based SLA
Definition: Agreement tailored for an individual customer covering all services
used by that customer.
Example: A company signing one SLA covering email, storage, and hosting
services.
2. Service-Based SLA
Definition: A standard SLA for a particular service, applied to all customers
using that service.
Example: A cloud storage service offering the same 99.9% uptime SLA to all
users.
3. Multi-Level SLA
Definition: Combines elements of both customer-based and service-based
SLAs.
Levels:
Corporate-Level: General issues like security, compliance for all services
Customer-Level: Specific to a customer group
Service-Level: Specific service performance
Benefits of SLA:
Builds trust and transparency
Defines clear expectations
Helps in monitoring and improving service
Provides legal protection in case of service failure
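As a quick worked example, a 99.9% uptime guarantee like the one cited above permits roughly 43 minutes of downtime in a 30-day month:

```python
# How much downtime does a 99.9% uptime SLA allow per 30-day month?
minutes_per_month = 30 * 24 * 60                    # 43,200 minutes
allowed_downtime = minutes_per_month * (1 - 0.999)  # the 0.1% not covered
print(round(allowed_downtime, 1))  # 43.2 minutes
```

This is why "three nines" vs. "four nines" (4.3 minutes/month) is a meaningful difference when negotiating an SLA.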
b)
6.
a) What is containerization? A company wants to launch a ride-sharing
application. As a software developer, create the service-oriented architecture
for the ride-sharing application.
Containerization is a lightweight virtualization method where applications are
packaged along with their dependencies, libraries, and runtime into a single unit
called a container.
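As a hedged illustration of that packaging, a hypothetical Dockerfile for a small Python service might look like this (the image tag, file names, and entry point are assumptions):

```dockerfile
# Hypothetical Dockerfile: the app plus its dependencies become one
# portable container image that runs the same everywhere.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```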
Service-Oriented Architecture (SOA) for Ride-Sharing Application in Cloud
Computing
As a software developer, here is how you can design an SOA-based cloud
architecture for a ride-sharing application:
Key Microservices in the SOA Model:
1. **User Service**
* Handles user registration, login, profile management
* Authenticated via OAuth2 or JWT tokens
2. **Driver Service**
* Driver registration, verification, vehicle details
* Availability status and location updates
3. **Ride Matching Service**
* Matches passengers with nearby available drivers using GPS
* Uses real-time data and maps API
4. **Booking Service**
* Handles ride requests, confirmation, cancellation, and trip history
5. **Payment Service**
* Processes payments, generates invoices
* Integrates with payment gateways (e.g., Khalti, eSewa, Stripe)
6. **Notification Service**
* Sends SMS, email, and push notifications for ride updates
7. **Ratings & Review Service**
* Allows users and drivers to rate each other
* Stores feedback for quality monitoring
8. **Analytics Service**
* Collects ride data, user behavior for business insights and reporting
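The Ride Matching Service's core logic can be sketched in Python as a nearest-driver lookup (a real service would use geospatial indexes and a maps API; names and coordinates here are illustrative):

```python
# Sketch of ride matching: choose the closest available driver by
# straight-line distance to the rider.
import math

def nearest_driver(rider_pos, drivers):
    """drivers: list of (driver_id, (x, y)) for available drivers."""
    return min(drivers, key=lambda d: math.dist(rider_pos, d[1]))[0]

drivers = [("d1", (0.0, 5.0)), ("d2", (1.0, 1.0)), ("d3", (9.0, 9.0))]
print(nearest_driver((0.0, 0.0), drivers))  # d2
```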
Benefits of SOA in Cloud for Ride-Sharing App:
Scalability: Services can scale independently (e.g., more driver services during
rush hour)
Resilience: If one service fails (like rating), others continue running
Faster Development: Teams can work on different services in parallel
Easy Deployment with Containers: Fast updates and rollback using
Docker/Kubernetes
b) Explain different types of cloud security threats.
### 1. 🛑 **Data Breaches**
* **Definition**: Unauthorized access, theft, or leakage of sensitive data stored
in the cloud.
* **Cause**: Weak access controls, misconfigurations, or poor encryption.
* **Impact**: Loss of customer trust, legal issues, and financial damage.
### 2. 🐞 **Insecure APIs**
* **Definition**: APIs that lack proper security controls can be exploited by
attackers.
* **Risk**: Attackers can gain unauthorized access or manipulate services.
* **Prevention**: Use secure tokens, rate limiting, and input validation.
### 3. 🧑‍💻 **Insider Threats**
* **Definition**: Employees or contractors who misuse their access to harm the
organization.
* **Examples**: Data theft, sabotage, leaking credentials.
* **Mitigation**: Role-based access control and activity monitoring.
### 4. 📡 **Account Hijacking**
* **Definition**: Attackers gain control of cloud user accounts.
* **Method**: Phishing, weak passwords, or credential leaks.
* **Result**: Full access to cloud services and data.
### 5. ⚠ **Misconfigured Cloud Storage**
* **Definition**: Improper setup of cloud buckets or databases, making them
publicly accessible.
* **Common with**: AWS S3, Google Cloud Storage.
* **Fix**: Use encryption, access control lists (ACLs), and security policies.
### 6. 🦠 **Malware Injection**
* **Definition**: Attacker injects malicious code into cloud applications or
services.
* **Effect**: Compromises data integrity, control, or availability.
* **Solution**: Use antivirus scanners and application firewalls.
### 7. 🌐 **Denial of Service (DoS/DDoS) Attacks**
* **Definition**: Flooding a cloud service with excessive traffic to make it
unavailable.
* **Target**: Web servers, APIs, or databases.
* **Protection**: Use traffic filtering and auto-scaling.
### 8. 🔄 **Data Loss**
* **Definition**: Permanent loss of data due to accidental deletion,
ransomware, or system failure.
* **Fix**: Use regular backups, replication, and disaster recovery strategies.
7.
a) Public Cloud
1. **Definition**:
A **public cloud** is a cloud computing model where services such as
servers, storage, and applications are delivered over the internet by **third-party
providers**.
2. **Examples**:
* Amazon Web Services (AWS)
* Microsoft Azure
* Google Cloud Platform (GCP)
3. **Key Features**:
* **Shared Infrastructure**: Resources are shared among multiple users
(multi-tenant environment).
* **Scalable**: Easily scale up or down based on demand.
* **Pay-as-you-go**: Users pay only for the resources they use.
* **Accessible Anywhere**: Services can be accessed via the internet from
any location.
4. **Advantages**:
* Low initial cost
* No need to maintain hardware
* Fast deployment
* High availability and reliability
5. **Disadvantages**:
* Less control over security
* Data privacy concerns
* Dependent on internet connectivity
b) Identity and access management
Identity and Access Management (IAM) refers to the policies, processes, and
technologies that ensure the right individuals (users, devices, applications)
have the appropriate access to systems and data within an organization. IAM
helps enforce security protocols and complies with regulations.
Key Components of IAM:
• Authentication: Verifies the identity of users, typically through passwords,
biometrics, or multi-factor authentication (MFA).
• Authorization: Grants or denies access to resources based on the user's
permissions or roles.
• Roles & Permissions: Defines the level of access a user has to different
systems or data (e.g., admin, user, guest).
• Single Sign-On (SSO): Allows users to authenticate once and gain access to
multiple systems without re-entering credentials.
• Multi-Factor Authentication (MFA): Requires users to provide additional
authentication factors, enhancing security.
Advantages of IAM:
• Improved Security: Minimizes unauthorized access and prevents data
breaches.
• Compliance: Helps meet regulatory requirements like GDPR and HIPAA.
• Centralized Management: Provides a single interface to manage
access for multiple systems.
• User Convenience: Simplifies the login process for end-users, particularly
with SSO and MFA.
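The authorization component can be sketched as a minimal role-based access control (RBAC) check in Python (role and permission names are illustrative):

```python
# Minimal RBAC sketch: each role maps to a set of allowed actions.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "user":  {"read", "write"},
    "guest": {"read"},
}

def is_authorized(role, action):
    # Authorization: check the requested action against the role's
    # permission set; unknown roles get no access.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("guest", "read"))   # True
print(is_authorized("user", "delete"))  # False
```

In a real IAM system this check runs after authentication and is usually backed by a directory service or policy engine rather than a hard-coded table.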
c) Kubernetes
Kubernetes is an open-source container orchestration platform that automates
the deployment, scaling, and management of containerized applications. It
works with tools like Docker and container runtimes to manage containers at
scale.
Key Features of Kubernetes:
• Pods: A group of one or more containers that are deployed together on the
same host machine.
• Replication Controllers: Ensures the desired number of pods are running.
• Services: Provides a stable IP address and DNS name for accessing pods.
• Scaling: Allows automatic scaling of applications based on demand.
• Self-Healing: Automatically restarts containers if they fail and
reschedules them on healthy nodes.
• Resource Management: Efficiently manages resources like CPU and memory.
Advantages of Kubernetes:
• Scalability: Handles large-scale containerized workloads effectively.
• Portability: Supports multi-cloud and hybrid cloud deployments.
• Automated Deployment & Updates: Simplifies application deployment,
scaling, and updates.
• High Availability: Ensures high availability of applications by distributing
containers across nodes.
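A minimal, illustrative Deployment manifest ties several of these features together (replicas for scaling and self-healing, resource requests for resource management); the name and image are assumptions:

```yaml
# Illustrative Kubernetes Deployment: three replicas of a containerized
# web app, rescheduled automatically if a container or node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
```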
Set 2
1
a) Why is cloud computing considered an evolution rather than
an innovation? What are the advantages of grid computing?
Cloud computing is considered an evolution rather than a pure innovation
because it builds upon existing technologies such as distributed computing,
virtualization, and the internet. It enhances and refines these technologies
rather than introducing something entirely new.
The key reasons are:
1. Foundation on Existing Technologies – Cloud computing evolved
from earlier concepts like mainframe computing, grid computing,
and utility computing.
2. Advancement of Virtualization – Virtualization, a core aspect of
cloud computing, existed before but was improved for scalable cloud solutions.
3. Improved Resource Utilization – Cloud computing refines resource-sharing
models from grid and cluster computing.
4. Internet & Web Services Expansion – The rise of the internet and web-based
services accelerated the shift to cloud-based solutions.
Advantages of Grid Computing
Grid computing is a form of distributed computing where multiple computers
work together to solve complex problems. Its advantages include:
1. High Computational Power – Utilizes multiple systems to perform tasks that
require large processing power.
2. Cost Efficiency – Reduces the need for expensive supercomputers by using
available networked resources.
3. Resource Sharing – Allows different organizations or users to
share computing resources effectively.
4. Fault Tolerance – If one node fails, the task can be reassigned to another,
ensuring reliability.
5. Scalability – Easily expands by adding more nodes, making it flexible for
different workloads.
b) Pokhara University has a significant number of students and conducts various
examinations throughout the year. The university wants to deploy cloud
computing for its exam department. Create a scenario that outlines which
deployment model can be used.
Pokhara University plans to integrate cloud computing to
streamline its examination department, ensuring secure, scalable, and
efficient management of exam-related activities. Given the university's
requirements, the most suitable deployment model is the Private Cloud.
Justification for Private Cloud
1. Data Security & Confidentiality – Exam-related data, including question
papers, student records, and results, must be kept confidential. A private cloud
ensures restricted access to authorized personnel only.
2. Customizability & Control – The university can tailor the cloud environment
based on its needs, such as online examination platforms, result processing
systems, and secure storage.
3. Compliance with Regulations – Universities must adhere to education and
data protection policies. A private cloud allows better compliance with
government regulations and internal policies.
4. Dedicated Resources – Since a private cloud is used
exclusively by the university, there are no resource-sharing issues, ensuring
high performance and reliability during peak exam times.
Cloud-Based Examination Scenario
Pokhara University sets up a private cloud for its examination department,
where:
• Question Paper Management – Faculty securely upload and manage question
papers in an encrypted cloud repository.
• Online Exam Platform – Students can access secured online exams through a
university-authenticated login.
• Automated Result Processing – Cloud-based systems process exam results,
reducing manual workload and errors.
• Data Backup & Disaster Recovery – Cloud storage ensures automatic
backups, preventing data loss due to system failures.
• Scalability for Exam Periods – During peak exam seasons, computing power
can be dynamically allocated to handle high traffic.
2
a) Why is Service-Oriented Architecture considered the emergence
of flexible application architecture? [8]
Service-Oriented Architecture (SOA) as the Emergence of Flexible Application Architecture
Service-Oriented Architecture (SOA) is considered the emergence of flexible
application architecture because it enables modular, reusable, and scalable
application development. It allows different software components (services)
to communicate over a network, making applications more adaptable and
efficient.
Key Reasons Why SOA Provides Flexibility (4 Marks)
● Loose Coupling – Services operate independently, meaning changes in one
service do not impact others, ensuring flexibility in development and
maintenance.
● Reusability – Common functionalities (e.g., authentication, payment
processing) are developed once and reused across multiple applications,
reducing redundancy.
● Interoperability – SOA allows services to communicate across different
platforms and programming languages using standardized protocols like
SOAP and REST.
● Scalability – Applications can scale easily by adding or modifying services
without overhauling the entire system.
SOA and the Evolution of Application Architecture
● Before SOA: Applications were developed as monolithic structures,
meaning all components were tightly integrated, making updates difficult.
● With SOA: Applications are broken into independent, reusable services,
which can be updated or replaced without affecting the entire system.
● Foundation for Cloud & Microservices: SOA laid the groundwork
for cloud computing and microservices architecture, allowing
businesses to develop dynamic, cloud-based applications.
SOA revolutionized application architecture by promoting modularity,
reusability, and platform independence, making applications more adaptive and
scalable in modern IT environments.
3. ShopEasy is a popular online retailer known for offering a wide range of
products, from electronics to clothing. As the business grew, the monolithic
architecture that powered its e-commerce platform faced several challenges.
ShopEasy wants to transform its monolithic e-commerce platform
into a scalable and agile system using a microservices
architecture. Design a microservices architectural journey of
ShopEasy to improve flexibility, scalability, and faster development cycles.
ShopEasy's Microservices Architectural Journey
1. Challenges with Monolithic Architecture (5 Marks)
As ShopEasy's business expanded, its monolithic architecture created
bottlenecks:
Scalability Issues – A single application structure made it hard to scale
individual services.
Slow Development & Deployment – Changes in one module required
redeploying the entire application.
Reliability Risks – A failure in one part of the system could bring down the
entire platform.
Technology Lock-in – Limited flexibility to adopt new technologies.
2. Microservices Architecture Design for ShopEasy (10 Marks)
Step 1: Breaking Down the Monolith into Microservices
ShopEasy's platform is decomposed into independent microservices based on
business functionalities:
User Service – Manages customer registration, authentication, and profiles.
Product Catalog Service – Handles product listings, categories, and inventory.
Order Management Service – Processes orders, payments, and shipping.
Cart Service – Manages shopping cart operations.
Payment Service – Processes transactions securely.
Recommendation Service – Uses AI to suggest products to users.
Notification Service – Sends emails and SMS notifications for orders and
offers.
Each microservice is developed, deployed, and scaled independently.
Step 2: Technology Stack Selection
Backend: Node.js (for API development), Python (for AI-based
recommendations)
Databases: MySQL for orders & users, MongoDB for product catalog, Redis for
caching
Communication: REST APIs & gRPC for internal service communication
Containerization: Docker & Kubernetes for deployment
API Gateway: Manages authentication, load balancing, and routing
Step 3: Deployment & Scaling Strategy
Containerization & Orchestration: Docker + Kubernetes for efficient deployment
and scaling.
Load Balancing: Implemented using NGINX and API Gateway to distribute
traffic.
Auto-Scaling: Kubernetes Horizontal Pod Autoscaler (HPA) scales services
dynamically.
Database Sharding: Distributed databases for improved query performance.
Step 4: Continuous Integration & Deployment (CI/CD)
CI/CD Pipeline: Jenkins/GitHub Actions automates testing and deployment.
Service Monitoring: Prometheus & Grafana for real-time insights and alerts
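The API Gateway's routing role from Step 2 can be sketched as a toy path-prefix lookup in Python (service names, URLs, and ports are illustrative):

```python
# Toy API-gateway routing table: map URL path prefixes to the
# backend microservice that owns that part of the domain.
ROUTES = {
    "/users":    "http://user-service:8001",
    "/products": "http://catalog-service:8002",
    "/orders":   "http://order-service:8003",
}

def route(path):
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    return None  # unknown route -> the gateway returns 404

print(route("/orders/42"))  # http://order-service:8003/orders/42
```

A production gateway (e.g., NGINX or a managed API gateway) adds authentication, rate limiting, and load balancing on top of this routing step.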
4. How can the capacity of physical IT resources be used to its full potential?
Explain the threats of hypervisors used in virtualization.
Utilizing Physical IT Resources to Their Full Potential (5 Marks)
To maximize the capacity of physical IT resources, organizations use various
optimization
techniques, including:
Virtualization – Allows multiple virtual machines (VMs) to run on a single
physical server, improving hardware utilization.
Load Balancing – Distributes workloads evenly across servers to prevent
underutilization and overloading.
Resource Pooling – Combines computing, storage, and network resources
dynamically based on demand.
Containerization – Uses lightweight containers (e.g., Docker, Kubernetes) to run
multiple applications efficiently on a single machine.
Dynamic Resource Allocation – Implements autoscaling to allocate CPU, RAM,
and storage based on real-time workload needs.
These techniques ensure higher efficiency, reduced costs, and better
performance of IT resources.
Threats to Hypervisors in Virtualization (10 Marks)
A hypervisor is a critical component in virtualization that allows multiple VMs to
run on a single physical machine. However, it introduces several security
threats, including:
Hyperjacking – Attackers install a malicious hypervisor to gain complete
control over the virtual machines.
VM Escape – A vulnerability that allows a compromised VM to break out and
access the hypervisor or other VMs.
Denial of Service (DoS) Attacks – Overloading the hypervisor with requests can
cause service failures and downtime.
Data Leakage – Unauthorized access to virtual machine memory or storage
can expose sensitive data.
Side-Channel Attacks – Attackers analyze shared resource usage (CPU cache,
network traffic) to extract confidential information.
Weak Authentication & Privilege Escalation – Poorly configured access
controls can allow unauthorized users to gain administrative privileges.
Rootkit Infections – Attackers may install persistent malware at the hypervisor
level, making it difficult to detect and remove.
Insider Threats – Employees with high-level access can exploit hypervisor
vulnerabilities for malicious purposes.
Insecure VM Migration – If VM data is not properly encrypted during migration,
it can be intercepted by attackers.
Firmware & Patch Vulnerabilities – Unpatched hypervisors can be exploited
using known security flaws.
Mitigation Strategies:
Implement strong access controls & multi-factor authentication (MFA).
Use hypervisor security tools to detect abnormal activities.
Regularly update and patch the hypervisor software.
Isolate critical VMs from less secure environments.
Encrypt VM migrations to prevent data interception.
By addressing these threats, organizations can secure their virtualization
environments and ensure safe IT operations
5
a) Manually preparing or extending IT resources in response to workload
fluctuations is time-intensive and unacceptably inefficient. How can IT
resources be scaled automatically in response to fluctuating demand?
Automatic Scaling of IT Resources in Response to Fluctuating Demand
Manual scaling of IT resources is slow and inefficient. To address this,
automatic scaling (autoscaling) mechanisms ensure resources are allocated
dynamically based on demand, improving efficiency and cost-effectiveness.
Methods of Automatic Scaling (4 Marks)
Vertical Scaling (Scaling Up/Down)
Increases or decreases the power (CPU, RAM) of an existing server.
Example: A cloud database increases RAM when queries spike and reduces it
during low traffic.
Horizontal Scaling (Scaling Out/In)
Adds or removes instances (servers, containers) dynamically.
Example: An e-commerce website adds more web servers during sales events
and removes them afterward.
Load Balancing
Distributes traffic across multiple servers to prevent overload.
Example: A cloud-based application directs requests to the least busy server.
Containerization & Orchestration (Kubernetes, Docker Swarm)
Manages containers, ensuring applications scale automatically.
Example: Kubernetes automatically launches more containers when user
requests increase.
Autoscaling Strategies (4 Marks)
Threshold-Based Scaling
Resources scale up or down based on pre-defined CPU, memory, or network
usage limits.
Example: AWS Auto Scaling adds servers when CPU usage exceeds 80%.
Scheduled Scaling
Resources scale based on predictable demand patterns.
Example: An educational platform scales up at 9 AM when students log in and
scales down at night.
Predictive Scaling (AI/ML-Based)
Uses machine learning to anticipate future demand and scale resources
accordingly.
Example: Cloud AI predicts website traffic surges before a product launch and
scales infrastructure proactively.
Event-Driven Scaling
Resources scale dynamically based on real-time triggers (e.g., user logins, order
placements).
Example: A ride-sharing app scales up its backend when demand spikes during
peak hours.
By implementing autoscaling techniques, organizations ensure high
performance, cost efficiency, and a seamless user experience while reducing
manual intervention.
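The threshold-based strategy above can be sketched as a simple decision loop. This is a hypothetical in-memory simulation, not a real cloud API; production systems would use services such as AWS Auto Scaling or the Kubernetes Horizontal Pod Autoscaler. The thresholds and instance limits below are illustrative assumptions.

```python
# Minimal sketch of threshold-based autoscaling (hypothetical simulation;
# thresholds and limits are illustrative, not from any real provider).

def scale_decision(cpu_percent, current_instances,
                   scale_up_at=80, scale_down_at=30,
                   min_instances=1, max_instances=10):
    """Return the new instance count for one evaluation cycle."""
    if cpu_percent > scale_up_at and current_instances < max_instances:
        return current_instances + 1   # scale out: demand is high
    if cpu_percent < scale_down_at and current_instances > min_instances:
        return current_instances - 1   # scale in: capacity is idle
    return current_instances           # within thresholds: no change

# Simulated monitoring samples (CPU %) driving the decision loop
instances = 2
for cpu in [45, 85, 92, 60, 20, 15]:
    instances = scale_decision(cpu, instances)
print(instances)  # back to 2 after the traffic spike subsides
```

Real autoscalers add cooldown periods between decisions to avoid "flapping" (rapidly adding and removing instances around the threshold).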
6
a) Explain the key features and advantages of the Hadoop Distributed File
System (HDFS) in the context of big data processing.
Hadoop Distributed File System (HDFS) is a scalable, fault-tolerant,
high-throughput storage system designed to handle large-scale data processing
in distributed computing environments.
Key Features of HDFS (4 Marks)
● Scalability – HDFS supports horizontal scaling, allowing data to be
distributed across thousands of machines.
● Fault Tolerance – Data is automatically replicated (default: 3 copies)
across multiple nodes, ensuring recovery from failures.
● High Throughput – Optimized for batch processing, enabling efficient
handling of large files in parallel.
● Write-Once, Read-Many Model – Once written, files cannot be modified,
ensuring data integrity and faster processing.
● Distributed Storage – Large files are split into blocks (default: 128 MB or
256 MB) and distributed across multiple nodes.
● Master-Slave Architecture – The NameNode manages metadata, while
DataNodes store the actual data.
● Support for Large Files – HDFS is designed to handle petabytes of
structured and unstructured data efficiently.
Advantages of HDFS in Big Data Processing (3 Marks)
● Cost-Effective Storage – HDFS runs on commodity hardware, reducing
infrastructure costs.
● Efficient Data Locality – Processing occurs close to data (Data Locality
Principle), reducing network congestion.
● Seamless Integration with Big Data Tools – Works well with MapReduce,
Apache Spark, Hive, and Pig for large-scale data analytics.
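The block-splitting and replication features above are easy to quantify. The sketch below assumes HDFS defaults (128 MB blocks, replication factor 3) and a hypothetical 1 GB file to show how block count and raw storage cost are derived.

```python
# Illustrative arithmetic for HDFS block splitting and replication,
# assuming the default 128 MB block size and replication factor 3.

BLOCK_SIZE_MB = 128      # HDFS default block size
REPLICATION = 3          # HDFS default replication factor

def hdfs_blocks(file_size_mb, block_size_mb=BLOCK_SIZE_MB):
    """Number of blocks needed (the last block may be smaller)."""
    return -(-file_size_mb // block_size_mb)   # ceiling division

file_size_mb = 1000      # a ~1 GB file, for illustration
blocks = hdfs_blocks(file_size_mb)             # 8 blocks
raw_storage = file_size_mb * REPLICATION       # 3000 MB on disk
print(blocks, raw_storage)
```

This shows why HDFS favors large files: a 1 GB file needs only 8 blocks of metadata on the NameNode, but triples its raw disk footprint for fault tolerance.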
b) The given documents are divided into three blocks in HDFS. Design
Hadoop's MapReduce framework to count the frequency of each word in the
total document.
• D1: "the quick brown fox jumps over the lazy dog. a red apple hangs from the
tree."
• D2: "mountains cast long shadows during sunset. the tree provides shade for
the red apple."
• D3: "the sunsets behind the mountains. the lazy dog barks at the quick brown
fox."
## 🗺 1. **Map Phase**
Each mapper processes one block/document and emits `<word, 1>` pairs.
### 🧾 Mapper Output (Example):
#### From D1:
```
<the, 1>
<quick, 1>
<brown, 1>
<fox, 1>
<jumps, 1>
<over, 1>
<the, 1>
<lazy, 1>
<dog, 1>
<a, 1>
<red, 1>
<apple, 1>
<hangs, 1>
<from, 1>
<the, 1>
<tree, 1>
```
#### From D2:
```
<mountains, 1>
<cast, 1>
<long, 1>
<shadows, 1>
<during, 1>
<sunset, 1>
<the, 1>
<tree, 1>
<provides, 1>
<shade, 1>
<for, 1>
<the, 1>
<red, 1>
<apple, 1>
```
#### From D3:
```
<the, 1>
<sunsets, 1>
<behind, 1>
<the, 1>
<mountains, 1>
<the, 1>
<lazy, 1>
<dog, 1>
<barks, 1>
<at, 1>
<the, 1>
<quick, 1>
<brown, 1>
<fox, 1>
```
## 🔄 2. **Shuffle and Sort Phase**
All intermediate key-value pairs are grouped by **key (word)** and sent to
reducers:
Example:
```
<the, [1, 1, 1, 1, 1, 1, 1, 1, 1]>
<quick, [1, 1]>
<brown, [1, 1]>
<fox, [1, 1]>
<lazy, [1, 1]>
<dog, [1, 1]>
<red, [1, 1]>
<apple, [1, 1]>
<tree, [1, 1]>
<mountains, [1, 1]>
```
(each remaining word maps to a single-element list, e.g. `<jumps, [1]>`)
## 🧮 3. **Reduce Phase**
Each reducer sums up the list of counts for each word.
### 🧾 Reducer Output:
```
<the, 9>
<quick, 2>
<brown, 2>
<fox, 2>
<jumps, 1>
<over, 1>
<lazy, 2>
<dog, 2>
<a, 1>
<red, 2>
<apple, 2>
<hangs, 1>
<from, 1>
<tree, 2>
<mountains, 2>
<cast, 1>
<long, 1>
<shadows, 1>
<during, 1>
<sunset, 1>
<provides, 1>
<shade, 1>
<for, 1>
<sunsets, 1>
<behind, 1>
<barks, 1>
<at, 1>
```
## ✅ Final Word Count Output:
| Word      | Frequency |
| --------- | --------- |
| the       | 9         |
| quick     | 2         |
| brown     | 2         |
| fox       | 2         |
| jumps     | 1         |
| over      | 1         |
| lazy      | 2         |
| dog       | 2         |
| a         | 1         |
| red       | 2         |
| apple     | 2         |
| hangs     | 1         |
| from      | 1         |
| tree      | 2         |
| mountains | 2         |
| cast      | 1         |
| long      | 1         |
| shadows   | 1         |
| during    | 1         |
| sunset    | 1         |
| provides  | 1         |
| shade     | 1         |
| for       | 1         |
| sunsets   | 1         |
| behind    | 1         |
| barks     | 1         |
| at        | 1         |
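The three phases above can be sketched in plain Python. This is a minimal simulation of the map → shuffle/sort → reduce pipeline, not a real Hadoop job; an actual deployment would use Hadoop Streaming or a library such as mrjob, with the framework handling the shuffle across nodes.

```python
# Sketch of the word-count job above, mirroring Hadoop's
# map -> shuffle/sort -> reduce phases in a single process.
import re
from collections import defaultdict

D1 = "the quick brown fox jumps over the lazy dog. a red apple hangs from the tree."
D2 = "mountains cast long shadows during sunset. the tree provides shade for the red apple."
D3 = "the sunsets behind the mountains. the lazy dog barks at the quick brown fox."

def mapper(document):
    """Map phase: emit a <word, 1> pair for every word in one block."""
    for word in re.findall(r"[a-z]+", document.lower()):
        yield word, 1

def shuffle(mapped_pairs):
    """Shuffle/sort phase: group intermediate pairs by key (word)."""
    grouped = defaultdict(list)
    for word, count in mapped_pairs:
        grouped[word].append(count)
    return grouped

def reducer(word, counts):
    """Reduce phase: sum the list of counts for each word."""
    return word, sum(counts)

pairs = [kv for doc in (D1, D2, D3) for kv in mapper(doc)]
result = dict(reducer(w, c) for w, c in shuffle(pairs).items())
print(result["the"], result["quick"])  # 9 2
```

In a real cluster each mapper runs on the node holding its block (data locality), and the shuffle moves intermediate pairs over the network so that all counts for a given word reach the same reducer.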