Unit-I
Introduction: The emergence of cloud computing, Benefits of Using a Cloud Model, what about
Legal issues when using cloud models, what are the key characteristics of cloud computing,
Challenges for the cloud.
The Evolution of Cloud Computing: Hardware Evolution, Internet Software Evolution, Server
Virtualization.
UNIT 1 - INTRODUCTION AND EVOLUTION OF CLOUD
COMPUTING
TOPIC 1 : INTRODUCTION
DEFINITION 1 : Cloud computing is the on-demand delivery of IT resources over the Internet with
pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you
can access technology services, such as computing power, storage, and databases, from a cloud provider
like Amazon Web Services (AWS) and Microsoft Azure.
DEFINITION 2 : Cloud computing is the delivery of computing services—such as storage, software,
processing power, and databases—over the internet (“the cloud”) instead of using local computers or
servers. It allows users to access these services anytime, anywhere, using an internet connection.
Cloud computing works like renting services instead of owning them. Users only pay for what they use,
and the cloud provider handles maintenance, upgrades, and security. This makes it easier, faster, and more
cost-effective for individuals and businesses to store data, run applications, and work collaboratively.
Examples: Google Drive, Dropbox, Amazon Web Services (AWS), Microsoft Azure.
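As a simple illustration of pay-as-you-go billing, the short Python sketch below estimates a monthly bill from usage figures; the services and unit prices are invented placeholders, not real provider rates.

# Hypothetical pay-as-you-go bill: every figure here is an assumed example rate,
# not a real provider's price list.
usage = {
    "compute_hours": 120,   # hours a virtual machine ran this month
    "storage_gb": 50,       # average GB of data stored
    "bandwidth_gb": 30,     # GB of data transferred out
}
unit_price = {              # assumed prices per unit (illustrative only)
    "compute_hours": 0.05,
    "storage_gb": 0.02,
    "bandwidth_gb": 0.08,
}
total = sum(usage[item] * unit_price[item] for item in usage)
print(f"Estimated monthly bill: ${total:.2f}")  # pay only for what was actually used

If usage drops next month, the bill drops with it, which is the essence of the "renting instead of owning" model described above.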
APPLICATIONS :
1. Healthcare companies are using the cloud to develop more personalized treatments for patients.
2. Financial services companies are using the cloud to power real-time fraud detection and
prevention.
3. Video game makers are using the cloud to deliver online games to millions of players around the
world.
DIFFERENCE BETWEEN CLOUD AND CLOUD COMPUTING
CLOUD :
- Cloud refers to a collection of servers, storage systems, databases, and networking resources that are hosted on the internet instead of on local computers.
- In short, the Cloud is the infrastructure or the “place” where data and applications are stored and accessed remotely.
- E.g., Google Drive and Dropbox (because your files are stored on remote servers).
CLOUD COMPUTING :
- Cloud Computing is the process of using the resources of the cloud (like storage, computing power, databases, or software) through the internet.
- It means delivering IT services such as Infrastructure (IaaS), Platform (PaaS), and Software (SaaS) on demand.
- E.g., using Amazon Web Services (AWS) to host a website, or Microsoft Azure.
TOPIC 2 : EMERGENCE OF CLOUD COMPUTING
The roots of cloud computing go back to the 1950s, when organizations began sharing access to large, centralized computers, and the idea evolved from distributed computing into the modern technology known as cloud computing. Cloud services are today provided by companies such as Amazon, Google, and Microsoft, and cloud computing allows users to access a wide range of services stored in the cloud, that is, on the Internet.
Mainframe Computing (1950-1970) :
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines capable of handling large volumes of data and massive input-output operations. Even today they are used for bulk processing tasks such as online transaction processing. These systems have very high fault tolerance and almost no downtime, but they were very expensive. To reduce this cost, cluster computing later came as an alternative to mainframe technology.
Distributed Systems (1970-1980) :
A distributed system is a composition of multiple independent systems that appear to users as a single entity. Distributed systems share resources and use them effectively and efficiently, and they offer scalability, concurrency, continuous availability, and independent handling of failures. The main problem with these systems was that all the machines had to be located at the same geographical site.
Cluster Computing (1980-1990) :
Cluster computing came as an alternative to mainframe computing. Each machine in the cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems while being equally capable of high computation, and new nodes could easily be added to the cluster when required.
Grid Computing and Utility Computing (1990-2000) :
In the 1990s, grid computing was introduced. Different systems were placed at entirely different geographical locations and connected via the internet. These systems belonged to different organizations, so the grid consisted of heterogeneous nodes. New problems, such as limited bandwidth and high latency, emerged as the distance between the nodes increased; cloud computing later addressed many of these issues, which is why it is often referred to as the "successor of grid computing". Utility computing is a computing model that defines service-provisioning techniques for compute services, along with other major services such as storage and infrastructure, which are provisioned on a pay-per-use basis.
TOPIC 3 : BENEFITS OF USING A CLOUD MODEL
CLOUD MODEL DIAGRAM
1. Accessibility : Access anywhere with any device. Access data and applications from anywhere, at
any time, using an internet connection. Encourages remote work and global collaboration. Cloud
storage enables you to make data available anywhere, anytime you need it.
2. Pay and Use : We only pay for the resources and services we actually use, instead of buying and
owning costly hardware or software in advance. Works like electricity or mobile data billing —
charges are based on usage (e.g., storage space, processing time, bandwidth). Helps avoid
wastage of money on unused capacity. Makes it easier for businesses to start small and grow as
needed without large investments.
3. Scalability and flexibility : Easily scale up or scale down resources depending on demand (e.g.,
seasonal business peaks). Companies don’t need to pay for or build the infrastructure needed to
support their highest load levels. Likewise, they can quickly scale down if resources aren’t being
used.
4. Cost savings : Whatever cloud service model you choose, you only pay for the resources you
actually use. This helps in avoiding overbuilding and overprovisioning your data center and gives
your IT teams back valuable time to focus on more strategic work. No heavy upfront investment
in hardware or software; you pay only for what you use (pay-as-you-go). Reduces expenses on
system maintenance and IT staff.
5. Centralised data security : In Centralised Data Security, all the organization’s data is stored,
managed, and protected in a single central location (usually a cloud data center) rather than
scattered across multiple devices or offices. For example , a company storing all customer records
in one secure cloud database , instead of each branch keeping its own copy.
6. Increased system capacity and performance : Cloud can handle more workloads and deliver
faster processing without the need for physical upgrades by the user.
- On-Demand Resources: Extra computing power (CPU, RAM, storage) can be added instantly
when workload increases.
- High-Speed Infrastructure: Cloud providers use powerful servers and optimized networks to
improve performance.
- Load Balancing: Distributes work evenly across servers to prevent slowdowns.
- Regular Upgrades: Providers keep hardware and software up to date for better speed and
efficiency.
7. Reliability & Availability : The cloud services are dependable and accessible almost all the time,
ensuring smooth business operations without frequent interruptions.
- High Uptime: Most cloud providers guarantee 99.9% or more uptime through Service Level
Agreements (SLAs).
- Redundancy: Data is stored in multiple servers/locations, so if one fails, another takes over.
- Disaster Recovery: Built-in backup systems restore services quickly after failures.
- 24/7 Accessibility: Users can access resources from anywhere, anytime.
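To see what the uptime percentages above mean in practice, the short Python calculation below converts an SLA uptime figure into the downtime it allows per year (the percentages are commonly quoted SLA levels).

# Convert an SLA uptime percentage into the maximum downtime it allows per year.
HOURS_PER_YEAR = 365 * 24   # 8760 hours

for uptime_percent in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_percent / 100)
    print(f"{uptime_percent}% uptime allows about {downtime_hours:.1f} hours of downtime per year")

So even a 99.9% SLA still permits roughly 8.8 hours of downtime in a year, which is why redundancy and disaster recovery remain important.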
TOPIC 4 : THE KEY CHARACTERISTICS OF CLOUD COMPUTING
1. Multi-tenancy: Cloud computing providers can support multiple tenants (users or organizations) on a single set of shared resources. Each tenant's data and applications are kept logically separate and secure, even though they share the same physical resources. Sharing resources lowers costs for all users, and multiple tenants can expand usage without affecting each other's performance (a small sketch of this logical separation follows this list).
2. Reliability: Cloud systems consistently perform well, deliver services without unexpected interruptions, and ensure that data and applications are always available when needed. Providers often guarantee 99.9%+ uptime, and backup servers and storage ensure service continues even if one component fails: if one server or data center goes down, another instantly takes over. Data backups are done regularly to prevent data loss.
3. On-demand self-service: Cloud computing services do not require human administrators; users can access and manage computing resources whenever they need to, without requiring assistance from the service provider. Users can create, configure, or delete resources like storage, virtual machines, or applications at any time. No manual approval is required, i.e., there is no need to contact support or wait for setup. Dashboards or APIs allow users to manage resources directly, and resources can be increased or decreased based on current needs, which offers flexibility.
4. Resource pooling: The cloud provider shares its computers, storage, and networks among many users at the same time, but keeps each user's data separate and safe. Resources (servers, storage, networks) are shared, each customer's data stays private, hardware and software are used efficiently, and many users are supported at once without problems.
( Like a library — many people use the same building, shelves, and books, but each person chooses what
they need, and their borrowed books are kept separate from others. )
5. Security : Security in cloud computing means protecting data, applications, and systems stored in the
cloud from unauthorized access, attacks, and loss.
- Data Encryption: Information is scrambled so only authorized people can read it.
- Access Control: Only approved users can access certain data or services.
- Firewalls & Threat Detection: Block hackers and detect suspicious activity.
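The logical separation described under multi-tenancy and resource pooling above can be pictured with a minimal Python sketch; the class and method names are invented for this illustration and do not correspond to any real cloud API.

# Many tenants share one storage pool, but each tenant's records stay logically separate.
# All names here are hypothetical.
class SharedStorage:
    def __init__(self):
        self._data = {}                          # one shared pool for all tenants

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # a tenant can only ever read its own partition of the shared pool
        return self._data.get(tenant_id, {}).get(key)

storage = SharedStorage()
storage.put("tenant_a", "invoice_1", "paid")
storage.put("tenant_b", "invoice_1", "unpaid")
print(storage.get("tenant_a", "invoice_1"))      # paid
print(storage.get("tenant_b", "invoice_1"))      # unpaid -> same key, isolated data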
TOPIC 5 : CHALLENGES FOR THE CLOUD
1. Data Security and Privacy : Storing sensitive data on remote servers (owned and managed by a cloud
provider) creates risks of unauthorized access, breaches, and legal complications. It means protecting
stored and processed data from unauthorized access, misuse, and loss, while also ensuring that personal
and sensitive information is handled according to privacy laws and user consent.
A) Hackers may target cloud servers because they hold valuable data.
B) Multiple users share the same hardware, increasing exposure if isolation fails.
C) Different countries have different privacy laws; storing data in the wrong region can break
regulations.
D) Cloud provider staff with high-level access could potentially misuse data.
2. Service reliability : Ensuring that cloud services are always available and function correctly can be difficult, because even the best providers can experience outages or performance issues. Power failures, network issues, or technical faults can cause temporary service unavailability. Shared infrastructure may slow down during peak usage times, and natural disasters or large-scale cyberattacks could disrupt services.
3. Cost Management : Cost Management as a challenge in cloud computing means controlling and
predicting expenses can be difficult because cloud services often use a pay-as-you-go model, where costs
depend on actual usage. Almost all cloud service providers offer a "Pay As You Go" model, which reduces the overall cost of the resources being used, but when servers are not used to their full potential the unused capacity adds up to hidden costs. As the company grows, cloud expenses can rise quickly, and sudden spikes in usage can cause unexpected costs.
4. Performance Challenges : The applications and services hosted in the cloud may not always run at
top speed due to shared resources, network delays, or infrastructure limitations. Performance is an
important factor while considering cloud-based solutions. Delay in data transfer between the user and the cloud data center can slow response times, and performance may drop during peak traffic times.
5. High dependence on Network : As cloud computing deals with provisioning resources in real-time, it
deals with huge amounts of data transfer to and from the servers. This is only made possible due to the
availability of the high-speed network. The cloud services can only work well if there is a fast, stable,
and reliable internet connection. If the network is slow or unavailable, access to cloud resources is
affected.
6. Interoperability and Flexibility : When an organization uses a specific cloud service provider and
wants to switch to another cloud-based solution, it often turns out to be a tedious procedure, since applications built for one cloud provider may not easily run on another. Providers use different formats, interfaces, and technologies, making integration harder. Hence, moving data and apps between clouds can be complex and time-consuming, and there is little flexibility in switching from one cloud to another due to the complexities involved.
TOPIC 6 : The Evolution of Cloud Computing: HARDWARE AND
SOFTWARE:
1. 1930 : In 1930, Vannevar Bush built the Differential Analyzer. It was a very large mechanical
machine that could solve complex mathematical equations automatically, which was very useful
for engineers and scientists.
2. 1936 : In 1936, Alan Turing introduced the idea of the Turing Machine. This idea became the
base for all modern computer designs. This was not a real machine but a theoretical model that
explained how a computer could follow step-by-step instructions to solve any problem.
3. 1937 : In 1937, George Stibitz came up with the concept of using binary numbers (only 0s and
1s) in calculators. This was a big step toward digital computing because modern computers still
use binary to store and process data.
4. 1939 : In 1939, Bill Hewlett and Dave Packard started the company Hewlett-Packard (HP). At
first, they made electronic measuring devices, but later HP became one of the world’s biggest
computer and electronics companies.
5. 1941 : In 1941, Konrad Zuse built the Z3, which is known as the world’s first programmable
digital computer. It could take instructions from punched tape and perform calculations
automatically without human intervention. This period was important because these inventions
and ideas laid the foundation for the computers we use today.
Hardware Evolution :
1. First Generation Computers
2. Second Generation Computers
3. Third Generation Computers
4. Fourth Generation Computers
5. Fifth Generation Computers
1. First Generation (1940s–1950s) – The first generation of computers used vacuum tubes for circuitry
and punch cards for input and output. These machines were huge, consumed a lot of electricity, and
generated heat. Example : ENIAC and UNIVAC. At this stage, computing was limited to basic
number-crunching. There was no concept of networking or remote access, so the cloud was not yet
possible.
2. Second Generation (1950s–1960s) – In 1946, another general-purpose computer, ENIAC, was built; it was one of the first digital computers and could be programmed to solve a full range of computing problems (chronologically, ENIAC is a first-generation, vacuum-tube machine). ENIAC had about 18,000 thermionic valves (vacuum tubes), weighed over 60,000 pounds, and consumed about 25 kilowatts of electric power. It operated at roughly 100,000 pulses per second.
The invention of the transistor replaced vacuum tubes, making computers smaller, faster,
cheaper, and more reliable. Languages like COBOL and FORTRAN became popular. Examples include
IBM 1401 and CDC 1604. Computing became more efficient, but still relied on batch processing. Even though the cloud did not exist yet, this era established the idea of many users sharing one computer.
3. Third Generation (1960s–1970s) – In 1971, Intel released the world's first commercially available microprocessor, the Intel 4004, which placed an entire CPU on a single chip. Integrated circuits allowed many transistors to be placed on a single chip,
reducing size and cost while increasing performance. Mainframes like the IBM System/360 became
common. Time-sharing was an early step toward the multi-tenancy concept used in cloud computing
today. Users could access computing power remotely via terminals.
4. Fourth Generation (1970s–Present) – Intel 4004 was capable of only 60,000 instructions per second.
So, new processors were invented with more speed and computing capability. Microprocessors placed the
CPU on a single chip, leading to the development of personal computers. Networking technologies
(LANs, WANs) and the internet emerged. Examples include Intel 4004 processors and IBM PCs.
Networking and distributed systems provided the technical foundation for the internet-based cloud
services we know today.
5. Fifth Generation (Present & Future) – AI & High-Performance Systems : Current and future
computing uses AI chips, quantum processors, and massively parallel processing to handle huge
workloads. Data centers are equipped with virtualization technologies, enabling multiple virtual machines
to run on one physical server. This generation powers modern cloud platforms, enabling on-demand
services, big data analytics, AI-powered cloud apps, and global scalability with high reliability.
TOPIC 7 :INTERNET SOFTWARE EVOLUTION
Cloud computing became possible not only because of powerful hardware, but also because of the
evolution of internet-based software. Over time, the internet moved from simple communication tools to
delivering full computing services online.
A) Early Internet Applications (1970s–1980s) were mainly used for basic communication and file
sharing. Protocols like FTP (File Transfer Protocol) and Telnet allowed users to connect to remote
computers and transfer data.
B) World Wide Web & Web Applications (1990s): With Tim Berners-Lee's invention of the World Wide Web (WWW) in 1991, the internet became easier to use through websites, and simple web applications like email (Hotmail, Yahoo Mail) started to appear.
C) Web 2.0 & Interactive Services (2000s): Websites became dynamic and interactive, allowing users to create, share, and collaborate online. Services like Google Docs, YouTube, and Facebook emerged, and companies like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure began offering cloud-based infrastructure.
1. Establishing a common protocol for the internet : In the early 1980s, TCP/IP became the universal
standard for internet communication. TCP is responsible for breaking data into packets, sending them, and
then reassembling them in the correct order at the destination. IP ensures each packet is given the correct
address and is delivered to the right computer. In cloud computing, TCP/IP allows users anywhere in the
world to access resources and applications hosted in distant data centers as if they were on their local
machines.
For example, when you send an email from Gmail to someone using Outlook, both services can
communicate seamlessly because they follow the same TCP/IP protocol, even though they are made by
different companies.
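As a small, concrete illustration of TCP/IP at work, the sketch below uses Python's standard socket module to open a TCP connection to a web server and send a minimal HTTP request; the host name is only an example, and the sketch assumes an internet connection is available.

import socket

# TCP handles breaking data into packets, ordering and reassembly;
# IP handles addressing and delivery to the right machine.
host = "example.com"                      # example host; any reachable web server works
with socket.create_connection((host, 80), timeout=5) as conn:
    request = f"HEAD / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode("ascii"))
    reply = conn.recv(1024)               # first chunk of the server's response
    print(reply.decode("ascii", errors="replace").splitlines()[0])   # e.g. "HTTP/1.1 200 OK"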
2. Evolution of IPv6 : IPv4 uses 32-bit addresses, allowing about 4.3 billion unique IP addresses. With
the rapid growth of the internet in the 1990s and 2000s , including personal computers, mobile devices,
and IoT devices, the pool of available IPv4 addresses started running out. This shortage led to the
development of IPv6 (Internet Protocol version 6) by the Internet Engineering Task Force (IETF).
IPv6 uses 128-bit addresses, which allows for an almost unlimited number of unique addresses.
This means every device on Earth can have its own unique IP address. IPv6 offers simpler address configuration (automatic device setup), better security, more efficient routing and packet handling, and support for modern services like mobile networks and IoT.
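The size difference between the two address spaces, and the two address formats themselves, can be checked with Python's standard ipaddress module; the addresses used below are reserved documentation/example addresses.

import ipaddress

# Address-space sizes: 32-bit IPv4 versus 128-bit IPv6.
print(2 ** 32)     # 4,294,967,296 possible IPv4 addresses (about 4.3 billion)
print(2 ** 128)    # about 3.4 x 10^38 possible IPv6 addresses

# Parsing both address families with the standard library.
v4 = ipaddress.ip_address("192.0.2.1")        # IPv4 documentation address
v6 = ipaddress.ip_address("2001:db8::1")      # IPv6 documentation address
print(v4.version, v6.version)                 # 4 6
print(v6.exploded)                            # full 128-bit form written out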
3. Building a common interface to the internet : The world's first web browser was developed by Tim Berners-Lee at CERN and completed in December 1990; it was both the inaugural web browser and a page editor. In 1994, Netscape released the first beta version of its browser, called Mozilla 0.96b, over the internet.
4. Appearance of Cloud Formations : The evolution of internet software began with cluster computing
in the 1980s, which overcame limitations of expensive mainframes by connecting multiple inexpensive,
similar machines via high-speed local networks. Clusters were constrained by their locality—machines
are physically close to one another. In the 1990s, grid computing linked heterogeneous systems across
different organizations and locations through middleware and the Internet, creating powerful virtual
supercomputers.
The concept of utility computing emerged, presenting “pay-as-you-go” models where computing
resources were rented rather than owned. In the 1990s, Service-Oriented Architecture (SOA) advanced
modular, distributed applications as web services—an architectural principle that paved the way for SaaS
and cloud-based services. In 1999, Salesforce.com launched its Software as a Service (SaaS) offering, allowing users to access enterprise applications via the web, without local installation.
Amazon Web Services (AWS) began in 2002 and launched Amazon S3 and EC2 (Elastic Compute Cloud) in
2006, delivering scalable storage and compute resources online.
TOPIC 8 : SERVER VIRTUALIZATION
Server Virtualization is a technology that allows one physical server to be divided into multiple virtual servers. Each virtual server behaves like an independent machine with its own operating system, applications, and resources, but they all run on the same physical hardware. This is achieved using a software layer called a hypervisor (also known as a Virtual Machine Monitor). The hypervisor manages and allocates the physical resources (CPU, memory, storage, and network) to each virtual server, and it can hide details such as the number and identity of operating systems, processors, and individual physical servers from users.
By having each physical server divided into multiple virtual servers, server virtualization allows
each virtual server to act as a unique physical device. Each virtual server can run its own applications and
operating system. This process increases the utilization of resources by making each virtual server act as a
physical server and increases the capacity of each physical machine.
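The hypervisor's role of slicing one physical server into several virtual servers can be pictured with a toy model; the Python class below is purely illustrative (all names and capacities are invented) and is not the interface of any real hypervisor.

# Toy model: a "hypervisor" hands out shares of one physical server's CPU and RAM.
class ToyHypervisor:
    def __init__(self, total_cpus, total_ram_gb):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError("not enough physical resources left")
        self.free_cpus -= cpus               # the share given to this VM is no longer free
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = ToyHypervisor(total_cpus=16, total_ram_gb=64)
host.create_vm("web-server", cpus=4, ram_gb=8)
host.create_vm("database", cpus=8, ram_gb=32)
print(host.vms, "| still free:", host.free_cpus, "CPUs,", host.free_ram_gb, "GB RAM")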
1. PARALLEL PROCESSING :
Parallel processing means executing program instructions simultaneously across multiple
processors. Earlier, computers could run only one program at a time. For example, if a computation-heavy
program took X minutes, and another I/O (Input/Output) program took Y minutes, both together would
take X + Y minutes. With parallel processing, while one program is waiting for I/O operations, the CPU
can execute another program. This reduces the overall execution time and improves performance.
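The time saving described above can be demonstrated with Python threads: while one task waits on (simulated) I/O, the CPU keeps executing the other task; the sleep call simply stands in for a real I/O wait.

import threading
import time

def io_task():
    time.sleep(2)                                   # simulated I/O wait (disk or network)

def compute_task():
    return sum(i * i for i in range(2_000_000))     # CPU-bound work

start = time.time()
t = threading.Thread(target=io_task)
t.start()              # the I/O program waits in the background...
compute_task()         # ...while the CPU executes the other program
t.join()
print(f"Overlapped total: {time.time() - start:.1f} s (close to max(X, Y), not X + Y)")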
The next improvement was multiprogramming. In multiprogramming, multiple programs are
submitted, and each gets a short time slot with the processor. This creates the illusion that all programs are
running simultaneously. A common technique used here is Round Robin Scheduling. In this method, each
process gets a time slice (or quantum) to execute.
For example, if the scheduler must cycle through 5 processes once every second, each process gets a 200 ms time slice in rotation. The CPU scheduler manages this by assigning the CPU to each process for its time slice. If a process doesn't finish in its time slice, it goes to the end of the queue, and the next process gets the CPU. If a process finishes early, the CPU immediately moves to the next one. A
deadlock occurs when multiple processes request resources and wait indefinitely because each one is
holding a resource that the other needs. Deadlocks must be detected and resolved in multiprogramming
systems to avoid freezing.
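A minimal simulation makes the round-robin time-slice idea concrete; the burst times and the 200 ms quantum below are arbitrary example values.

from collections import deque

# Each process gets a fixed time slice (quantum); if it is not finished,
# it goes to the back of the queue. All numbers are example data.
quantum = 200                                            # time slice in ms
queue = deque([("P1", 500), ("P2", 300), ("P3", 150)])   # (name, remaining burst in ms)

clock = 0
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)
    clock += run                                         # the process uses its slice
    if remaining - run > 0:
        queue.append((name, remaining - run))            # unfinished: back of the queue
    else:
        print(f"{name} finished at t={clock} ms")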
2. VECTOR PROCESSING : Vector Processing is a technique developed to improve processing performance by handling multiple data items at once. This is especially useful in
applications where data appears in the form of vectors or matrices. Vector processing works by executing
operations on all elements of a vector simultaneously, rather than one by one. This reduces overhead and
speeds up computation, as it can process numerous data components at the same time.
The main features of vector processing include simultaneous operations—using specialized
hardware that processes multiple data elements in parallel—and high performance through data
parallelism and reduced memory access. This means that vector processors can handle large datasets more
efficiently than traditional processors.
There are two important architectures in vector processing:
1. SIMD (Single Instruction, Multiple Data) – This architecture executes the same instruction on
multiple datasets at the same time. In other words, all processors perform the same operation but
on different pieces of data.
2. MIMD (Multiple Instruction, Multiple Data) – This architecture allows multiple processors to
execute different instructions on different datasets simultaneously. Each processor has its own
program counter and instruction set, making it operate independently from the others in the
system.
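The SIMD idea of applying one instruction to many data elements is what array libraries expose to programmers; the small comparison below assumes the third-party NumPy library is installed.

import numpy as np

# SIMD-style data parallelism: one operation applied to whole vectors at once,
# instead of an explicit element-by-element loop.
a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

c_loop = [a[i] + b[i] for i in range(len(a))]    # scalar: one element at a time
c_vec = a + b                                    # vectorized: whole arrays in one step

print(np.allclose(c_loop, c_vec))                # True: same result, far fewer Python steps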
3. SYMMETRIC MULTI PROCESSING SYSTEMS : Symmetric Multi-Processing (SMP) systems
were developed to solve the problem of managing resources efficiently. In an SMP system, all processors
are equally capable and share the same responsibilities for managing workflow. The main goal of SMP is
to achieve sequential consistency, which means the system behaves as if only one processor is working,
even though multiple processors are used.
However, when more processors are added, problems like increased data propagation time and
message-passing delays occur. To reduce this, SMP systems use careful messaging techniques between processors: for example, when one processor changes data, the change is communicated only to the processors that need to know it, instead of being sent to everyone.
Closely related to SMP is multiprocessing, in which two or more processors work together on the same workload. Early versions of multiprocessing used a
master-slave model. Here, one processor (the master) managed all the tasks and distributed them to other
processors (slaves). The slaves only executed the given tasks. This model helped in increasing the overall
performance of the system by dividing the workload.
4. MASSIVELY PARALLEL PROCESSING : In MPP systems, a computer is built using many
independent processors, sometimes hundreds or even thousands, all working together in parallel. Each
processor has its own memory and works on a part of the problem. All processors are interconnected,
making the whole system act like one very large computer. This approach is similar to distributed
computing, where many separate computers are connected to solve a single large problem. Early MPP
systems used many serial computers combined together.
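On a much smaller scale, Python's standard multiprocessing module shows the same pattern as MPP: independent worker processes, each with its own memory, work on parts of one large problem; the workload below is only an example.

from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))        # each worker handles its own slice in its own memory

if __name__ == "__main__":
    # Split one large problem (summing 0..9,999,999) across 4 independent processes.
    chunks = [(i * 2_500_000, (i + 1) * 2_500_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(10_000_000)))       # True: the combined result matches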
Types of Server Virtualization:
1. Type 1 Hypervisor (Bare-Metal Virtualization) : Runs directly on the physical server hardware
without needing any host operating system. Provides better performance, efficiency, and security
because it communicates directly with the hardware. Mostly used in data centers and enterprise
environments.
Examples: VMware ESXi, Microsoft Hyper-V (bare-metal), Citrix XenServer, KVM.
2. Type 2 Hypervisor (Hosted Virtualization) : Runs on top of a host operating system, just like a
normal software application. Easier to install and use but has lower performance compared to
Type 1 because it depends on the host OS. Suitable for personal use, software testing, and
small-scale setups.
Examples: VMware Workstation, Oracle VirtualBox, Parallels Desktop.