Concepts of Abstraction and Virtualization
In cloud computing, abstraction refers to the process of hiding the complex underlying details of IT infrastructure from users, presenting them with simplified, virtualized resources and services. This allows users to interact with the cloud without needing to understand the intricacies of the physical hardware or specific software configurations.
In cloud computing, a hypervisor is a crucial software layer that enables
virtualization, allowing multiple virtual machines (VMs) to run on a single physical
server. It acts as a virtual machine monitor (VMM), abstracting hardware resources
and sharing them among different VMs. This allows cloud providers to efficiently
utilize their hardware and offer on-demand computing resources to users.
Virtualization has become increasingly popular in recent
years for several reasons:
Increased Efficiency: Virtualization allows organizations to make more efficient use
of their hardware resources by consolidating multiple servers onto a single physical
machine. This reduces the amount of physical hardware that is required, which can
lead to significant cost savings.
Flexibility: Virtualization makes it easy to provision and de-provision VMs as
needed, which makes it easier to adapt to changing business needs. This means that
organizations can quickly and easily spin up new VMs to support new applications or
services, or retire VMs that are no longer needed.
Improved Security: Virtualization can help improve security by isolating
applications and services from one another, which reduces the risk of one
compromised system affecting others.
Disaster Recovery: Virtualization can help simplify disaster recovery by allowing
organizations to easily replicate VMs to a secondary site or cloud-based environment.
This means that in the event of a disaster, organizations can quickly spin up VMs in
another location to ensure business continuity.
Virtualization in Cloud Computing and Types
Virtualization is a way to use one computer as if it were many. Before virtualization, most computers did only one job at a time, and much of their power was wasted. Virtualization lets you run several virtual computers on one real computer, so you can use its full power and do more tasks at once.
In cloud computing, this idea is taken further. Cloud providers use virtualization to split one big server into many smaller virtual ones, so businesses can use just what they need: no extra hardware, no extra cost.
Server Virtualization: This splits a physical server into multiple virtual servers, each functioning independently. It helps improve performance, cuts costs, and makes tasks like server migration and energy management easier.
Example: A startup company has a powerful physical server. This company can use server virtualization
software like VMware vSphere, Microsoft Hyper-V or KVM to create more virtual machines (VMs) on that
one server.
Each VM here is an isolated server that runs its own operating system (such as Windows or Linux) and its own applications. For example, a company might run a web server on one VM, a database server on another VM, and a file server on a third VM, all on the same physical machine. This reduces costs, makes it easier to manage and back up servers, and allows quick recovery if one VM fails.
Load Balancing
Load balancing in cloud computing is the process of distributing incoming network traffic and workload across multiple servers or resources to ensure no single server becomes overwhelmed. It optimizes resource use, improves performance, and ensures high availability and reliability of applications or services.
Types of Load Balancing
•Static Load Balancing
•Dynamic Load Balancing
•Hardware Load Balancing
•Software Load Balancing
•Global Load Balancing
Migration types in cloud computing, virtualization, or IT
infrastructure transitions:
1. P2V (Physical to Virtual)
• Migrating a physical machine to a virtual machine.
• Example: Converting a physical Windows server into a VM using VMware or Hyper-V.
2. V2V (Virtual to Virtual)
• Migrating a virtual machine from one platform or hypervisor to another.
• Example: Moving a VM from VMware ESXi to Microsoft Hyper-V.
3. V2P (Virtual to Physical)
• Converting a virtual machine back to a physical server.
• Less common, but used when virtual environments no longer meet performance or
hardware needs.
4. P2P (Physical to Physical)
• Migration between two physical systems.
• Often seen during hardware upgrades or OS installations on new servers.
5. D2C (Data to Cloud)
• Moving data from on-premises storage to a cloud environment.
• Example: Uploading backup files to Amazon S3 or Azure Blob Storage.
6. C2C (Cloud to Cloud)
• Data or service migration between two cloud providers.
• Example: Moving from AWS to Azure or from Google Cloud to Oracle Cloud.
7. C2D (Cloud to Data Center)
• Bringing data or services from cloud back to a private on-premises data center.
• Often used in hybrid or reverse cloud adoption strategies.
8. D2D (Data Center to Data Center)
• Transferring data between two on-premises data centers.
• Used in large enterprises for redundancy, disaster recovery, or regional
distribution.
Static Load Balancing
Definition: The distribution of tasks among servers is decided before runtime (predefined rules). Once assigned, the load doesn’t change much during execution.
Characteristics: Predefined strategy (e.g., Round Robin, Weighted Round Robin). Does not consider current server state (CPU usage, memory, traffic). Best when servers are homogeneous (same power) and tasks are predictable.
Advantages: Simple to implement. Low overhead (no need for real-time monitoring).
Disadvantages: Poor performance under uneven or unpredictable workloads. Can overload some servers while others stay underutilized.
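A static strategy such as Weighted Round Robin can be sketched in a few lines of Python. The server names and weights below are illustrative assumptions, not values from any real deployment; the point is that the schedule is fixed before runtime and never reacts to server state.

```python
# Weighted Round Robin sketch: each server receives a fixed share of
# requests proportional to a predefined weight (a static policy).
def weighted_round_robin(weights):
    # Expand the server list according to its static weights, then
    # cycle through that fixed schedule forever.
    schedule = [server for server, w in weights.items() for _ in range(w)]
    i = 0
    while True:
        yield schedule[i % len(schedule)]
        i += 1

# Hypothetical setup: "big" can take twice the traffic of "small".
gen = weighted_round_robin({"big": 2, "small": 1})
print([next(gen) for _ in range(6)])
# → ['big', 'big', 'small', 'big', 'big', 'small']
```

Because the schedule is computed once, this has essentially no runtime overhead, which is exactly the trade-off described above: simple and cheap, but blind to actual load.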
Dynamic Load Balancing
Definition: The distribution of tasks is adjusted at runtime, based on the current load of servers.
Characteristics: Decisions are made on the fly using metrics (CPU usage, response time, active connections, etc.). Algorithms include Least Connections, Least Response Time, Resource-Based, and ML-based balancing. Suitable for heterogeneous servers or variable workloads.
Advantages: Efficient resource utilization. Handles traffic spikes better. Improves fault tolerance (can reroute if a server fails).
Disadvantages: More complex to implement. Requires real-time monitoring (extra overhead).
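As a rough illustration of a dynamic policy, here is a minimal Least Connections sketch in Python. The server names are hypothetical, and a real balancer would gather connection counts from live monitoring rather than an in-memory dictionary.

```python
# Least Connections sketch: route each request to the server that
# currently has the fewest active connections (a dynamic policy).
class LeastConnectionsBalancer:
    def __init__(self, servers):
        # Track active connections per server (all start idle).
        self.active = {s: 0 for s in servers}

    def assign(self):
        # Choose the server with the fewest active connections;
        # ties are broken by insertion order.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Called when a request finishes and frees a connection.
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["srv-a", "srv-b"])
first = lb.assign()    # srv-a (both idle, tie broken by order)
second = lb.assign()   # srv-b (srv-a is now busier)
lb.release(first)      # srv-a finishes its request
third = lb.assign()    # srv-a again, since it now has fewer connections
```

Unlike the static case, the decision changes as connections open and close, which is why dynamic balancing needs continuous monitoring.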
Load Balancing Algorithms
Load balancing is the process of distributing workloads across multiple servers or
resources to optimize performance, reliability, and availability. The algorithm you
choose depends on the system’s goals (speed, fault tolerance, fairness, cost, etc.).
Round Robin (RR) Load Balancing
Distributes incoming requests sequentially to a list of servers, then loops back to the first server after reaching the last. It is best used when servers have equal capacity and tasks are similar in cost/duration.
•Advantages:
• Simple to implement
• Ensures fairness in request distribution
•Limitations:
• Doesn’t consider server load or request complexity
• Can overload slower servers
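The sequential rotation described above can be sketched in a few lines of Python; the server names are hypothetical placeholders, not part of any specific product.

```python
# Round Robin sketch: hand out servers in order, wrapping around
# to the first server after the last one is reached.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = list(servers)
        self._index = 0

    def next_server(self):
        # Pick the current server, then advance the index modulo
        # the number of servers so the rotation wraps around.
        server = self._servers[self._index]
        self._index = (self._index + 1) % len(self._servers)
        return server

balancer = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
print([balancer.next_server() for _ in range(5)])
# → ['srv-a', 'srv-b', 'srv-c', 'srv-a', 'srv-b']
```

Note that the sketch never inspects server load, which is exactly the limitation listed above: a slow server receives the same share of requests as a fast one.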
Sticky Round Robin: an enhanced version of Round Robin

What is Sticky Round Robin?
Normal Round Robin (RR): requests are distributed sequentially among servers, without caring whether the same client connects again.
Sticky Round Robin: each client (user/session) is "stuck" to the same server for all requests during a session. This is important when the application stores session data locally on the server (e.g., shopping cart, login state).

How It Works
On the first request, the load balancer assigns the client to a server using Round Robin. The client’s IP address or session ID is mapped to that server. On subsequent requests, the same client is always routed to the same server until the session ends.
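The sticky-session routing described above can be sketched in Python as follows; the client and server names are hypothetical, and a real balancer would expire the session mapping when a session ends.

```python
from itertools import cycle

# Sticky Round Robin sketch: the first request from a client is
# assigned by Round Robin; later requests reuse the stored mapping.
class StickyRoundRobin:
    def __init__(self, servers):
        self._rotation = cycle(servers)
        self._sessions = {}  # client id -> pinned server

    def route(self, client_id):
        # New clients get the next server in rotation; known clients
        # stay "stuck" to the server chosen for their session.
        if client_id not in self._sessions:
            self._sessions[client_id] = next(self._rotation)
        return self._sessions[client_id]

lb = StickyRoundRobin(["srv-a", "srv-b"])
print(lb.route("alice"))  # srv-a (first assignment via Round Robin)
print(lb.route("bob"))    # srv-b (next server in rotation)
print(lb.route("alice"))  # srv-a again (sticky to her session's server)
```

The session dictionary is what makes locally stored state (such as a shopping cart) safe: every request from the same client lands on the server that holds that state.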