Cloud Computing Basics

UNIT -I & II

Basics of Cloud Computing

What is cloud computing?


Simply put, cloud computing is the delivery of computing services—including servers, storage,
databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to
offer faster innovation, flexible resources, and economies of scale. You typically pay only for
cloud services you use, helping you lower your operating costs, run your infrastructure more
efficiently, and scale as your business needs change.

Cloud Computing
The term cloud refers to a network or the internet. It is a technology that uses remote servers
on the internet to store, manage, and access data online rather than local drives. The data can
be anything such as files, images, documents, audio, video, and more.

The following operations can be performed using cloud computing:

o Developing new applications and services


o Storage, back up, and recovery of data
o Hosting blogs and websites
o Delivery of software on demand
o Analysis of data
o Streaming video and audio
Traditionally, both small and large IT companies provision their own IT infrastructure.
That means every IT company needs a server room, the basic requirement of such a
setup.

That server room needs a database server, a mail server, networking equipment, firewalls,
routers, modems, switches, sufficient QPS capacity (Queries Per Second, a measure of how
much load the server can handle), configurable systems, a high-speed connection, and
maintenance engineers.

Establishing such IT infrastructure requires significant investment. Cloud computing came
into existence to overcome these problems and reduce IT infrastructure costs.

Characteristics of Cloud Computing


The characteristics of cloud computing are given below:

1) Agility

The cloud works in a distributed computing environment. It shares resources among users
and works very fast.

2) High availability and reliability

Servers are highly available and more reliable because the chances of infrastructure
failure are minimal.

3) High Scalability

Cloud offers "on-demand" provisioning of resources on a large scale, without requiring
engineers to plan capacity for peak loads.
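The on-demand scaling described above can be sketched as a simple scaling rule. The function name, the 60% target utilization, and the instance cap below are illustrative assumptions, not any provider's actual autoscaling API:

```python
import math

def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.6, max_instances: int = 20) -> int:
    """Return how many instances are needed to bring average CPU
    utilization down to the target (a toy autoscaling rule)."""
    if cpu_utilization <= 0:
        return 1  # always keep at least one instance running
    # total load = current * utilization; spread it so each instance
    # carries at most `target` load
    needed = math.ceil(current * cpu_utilization / target)
    return max(1, min(needed, max_instances))

# A spike to 90% CPU on 4 instances asks for 6 (4 * 0.9 / 0.6 = 6).
print(desired_instances(4, 0.9))
```

Real autoscalers add cooldown periods and hysteresis so the fleet does not flap between sizes, but the core decision is this proportional rule.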

4) Multi-Sharing
With the help of cloud computing, multiple users and applications can work more
efficiently with cost reductions by sharing common infrastructure.

5) Device and Location Independence

Cloud computing enables users to access systems using a web browser regardless of their
location or device (e.g., PC, mobile phone). As the infrastructure is off-site (typically
provided by a third party) and accessed via the Internet, users can connect from anywhere.

6) Maintenance

Maintenance of cloud computing applications is easier, since they do not need to be installed
on each user's computer and can be accessed from different places. This also reduces cost.

7) Low Cost

Cloud computing reduces cost because a company need not set up its own infrastructure; it
simply pays for resources as they are used.

8) Services in the pay-per-use mode

Application Programming Interfaces (APIs) are provided so that users can access services on
the cloud and pay charges according to their usage.
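The pay-per-use idea can be illustrated with a minimal metering sketch. The rate table and unit names below are made up for illustration and do not reflect any real provider's pricing:

```python
# Hypothetical pay-per-use rates: charge only for what was consumed.
RATES = {
    "compute_hours": 0.05,     # $ per instance-hour (illustrative)
    "storage_gb_month": 0.02,  # $ per GB-month (illustrative)
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage times the per-unit rate; no flat fee."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

# One instance running all month (720 h) plus 50 GB of storage.
print(monthly_bill({"compute_hours": 720, "storage_gb_month": 50}))
```

The key contrast with traditional licensing is that an empty usage dictionary yields a zero bill: if nothing is consumed, nothing is charged.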

Top benefits of cloud computing


Cloud computing is a big shift from the traditional way businesses think about IT resources.
Here are seven common reasons organizations are turning to cloud computing services:

Cost
 Moving to the cloud helps companies optimize IT costs. This is because cloud
computing eliminates the capital expense of buying hardware and software and setting
up and running onsite datacenters—the racks of servers, the round-the-clock electricity
for power and cooling, and the IT experts for managing the infrastructure. It adds up
fast.

Speed
 Most cloud computing services are provided self service and on demand, so even vast
amounts of computing resources can be provisioned in minutes, typically with just a
few mouse clicks, giving businesses a lot of flexibility and taking the pressure off
capacity planning.
Global scale
 The benefits of cloud computing services include the ability to scale elastically. In cloud
speak, that means delivering the right amount of IT resources—for example, more or
less computing power, storage, bandwidth—right when they’re needed, and from the
right geographic location.

Productivity
 Onsite datacenters typically require a lot of “racking and stacking”—hardware setup,
software patching, and other time-consuming IT management chores. Cloud computing
removes the need for many of these tasks, so IT teams can spend time on achieving
more important business goals.

Performance
 The biggest cloud computing services run on a worldwide network of secure
datacenters, which are regularly upgraded to the latest generation of fast and efficient
computing hardware. This offers several benefits over a single corporate datacenter,
including reduced network latency for applications and greater economies of scale.

Reliability
 Cloud computing makes data backup, disaster recovery, and business continuity easier
and less expensive because data can be mirrored at multiple redundant sites on the cloud
provider’s network.

Security
 Many cloud providers offer a broad set of policies, technologies, and controls that
strengthen your security posture overall, helping protect your data, apps, and
infrastructure from potential threats.

Advantages and Disadvantages of Cloud Computing


Advantages of Cloud Computing
Cloud computing is now a mainstream technology; many companies have moved their services to
the cloud to support their growth.

Here, we are going to discuss some important advantages of Cloud Computing-


1) Back-up and restore data

Once the data is stored in the cloud, it is easier to back up and restore that data using the
cloud.

2) Improved collaboration

Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.


3) Excellent accessibility

Cloud allows us to quickly and easily access stored information anywhere in the world, at any
time, using an internet connection. Cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.

4) Low maintenance cost

Cloud computing reduces both hardware and software maintenance costs for organizations.

5) Mobility

Cloud computing allows us to easily access all cloud data via mobile.

6) Services in the pay-per-use model

Cloud computing offers Application Programming Interfaces (APIs) through which users access
services on the cloud and pay charges according to their usage.
7) Unlimited storage capacity

Cloud offers us a huge amount of storage capacity for keeping our important data, such as
documents, images, audio, and video, in one place.

8) Data security

Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced
features related to security and ensures that data is securely stored and handled.

Disadvantages of Cloud Computing


A list of the disadvantages of cloud computing is given below -

1) Internet Connectivity

In cloud computing, all data (images, audio, video, etc.) is stored on the cloud and accessed
over an internet connection. Without good internet connectivity, you cannot access this data,
and there is no other way to reach it.

2) Vendor lock-in

Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face
problems when transferring their services from one vendor to another. As different vendors
provide different platforms, that can cause difficulty moving from one cloud to another.

3) Limited Control

As we know, cloud infrastructure is completely owned, managed, and monitored by the service
provider, so the cloud users have less control over the function and execution of services within
a cloud infrastructure.

4) Security

Although cloud service providers implement strong security standards for storing important
information, before adopting cloud technology you should be aware that you will be sending
all your organization's sensitive information to a third party, i.e., a cloud computing
service provider. While the data is being sent to the cloud, there is a chance that your
organization's information could be intercepted by hackers.
History of Cloud Computing
Before cloud computing emerged, there was client/server computing: a centralized model in
which all software applications, all data, and all controls reside on the server side.

If a user wants to access specific data or run a program, they must first connect to the
server and gain appropriate access; only then can they proceed.

Later, distributed computing came into the picture, in which computers are networked together
and share their resources when needed.

On the basis of these models, the concepts of cloud computing emerged and were later
implemented.

“Around 1961, John McCarthy suggested in a speech at MIT that computing could be sold
like a utility, just like water or electricity. It was a brilliant idea, but like many
brilliant ideas it was ahead of its time: for the next few decades, despite interest in
the model, the technology simply was not ready for it.”

In time, the technology caught up with the idea:

In 1999, Salesforce.com started delivering applications to users through a simple website.
The applications were delivered to enterprises over the Internet, and in this way the dream
of computing sold as a utility came true.

In 2002, Amazon started Amazon Web Services, providing services such as storage, computation,
and even human intelligence. However, a truly commercial service open to everybody only
arrived with the launch of the Elastic Compute Cloud in 2006.

In 2009, Google Apps also started to provide cloud computing enterprise applications.

Of course, all the big players are present in the cloud computing evolution, some were earlier,
some were later. In 2009, Microsoft launched Windows Azure, and companies like Oracle and
HP have all joined the game. This proves that today, cloud computing has become mainstream.

Evolution of Cloud Computing

Cloud computing is all about renting computing services. This idea first came in the 1950s. In
making cloud computing what it is today, five technologies played a vital role: distributed
systems and their peripherals, virtualization, Web 2.0, service orientation, and utility
computing.

 Distributed Systems:

A distributed system is a composition of multiple independent systems that appear to users
as a single entity. The purpose of distributed systems is to share resources and use them
effectively and efficiently. Distributed systems possess characteristics such as
scalability, concurrency, continuous availability, heterogeneity, and independence of
failures. The main problem with this model was that all the systems had to be present at
the same geographical location. To solve this problem, distributed computing evolved into
three further types of computing: mainframe computing, cluster computing, and grid
computing.

 Mainframe computing:

Mainframes, which first came into existence in 1951, are highly powerful and reliable
computing machines. They are responsible for handling large workloads such as massive
input-output operations. Even today they are used for bulk processing tasks such as
online transactions. These systems have almost no downtime and high fault tolerance.
They increased the processing capabilities of distributed systems, but they were very
expensive. To reduce this cost, cluster computing emerged as an alternative to
mainframe technology.
 Cluster computing:

In the 1980s, cluster computing came as an alternative to mainframe computing. Each
machine in the cluster was connected to the others by a high-bandwidth network. Clusters
were far cheaper than mainframe systems yet equally capable of heavy computation, and new
nodes could easily be added when required. Thus the cost problem was solved to some
extent, but the geographical restriction remained. To solve this, the concept of grid
computing was introduced.

 Grid computing:

In the 1990s, the concept of grid computing was introduced: systems placed at entirely
different geographical locations, all connected via the internet. These systems belonged
to different organizations, so the grid consisted of heterogeneous nodes. Although this
solved some problems, new ones emerged as the distance between nodes increased, chiefly
the low availability of high-bandwidth connectivity and the network issues that come with
it. Thus, cloud computing is often referred to as the "successor of grid computing".

 Virtualization:

Virtualization was introduced nearly 40 years ago. It refers to creating a virtual layer
over the hardware that allows multiple instances to run simultaneously on the same
hardware. It is a key technology in cloud computing and the base on which major cloud
services such as Amazon EC2 and VMware vCloud run. Hardware virtualization is still one
of the most common types of virtualization.

 Web 2.0:

It is the interface through which the cloud computing services interact with the clients.
It is because of Web 2.0 that we have interactive and dynamic web pages. It also
increases flexibility among web pages. Popular examples of web 2.0 include Google
Maps, Facebook, Twitter, etc. Needless to say, social media is possible because of this
technology only. It gained major popularity in 2004.
 Service orientation:

It acts as a reference model for cloud computing. It supports low-cost, flexible, and
evolvable applications. Two important concepts were introduced in this computing
model. These were Quality of Service (QoS) which also includes the SLA (Service
Level Agreement) and Software as a Service (SaaS).

 Utility computing:

Utility computing is a model that defines techniques for provisioning services, such as
compute, storage, and infrastructure, on a pay-per-use basis.
Thus, the above technologies contributed to the making of cloud computing.

Cloud Computing Architecture

What is cloud architecture?


Cloud architecture is a key element of building in the cloud. It refers to the layout that
connects all the components and technologies required for cloud computing.

Cloud architecture dictates how components are integrated so that you can pool, share, and
scale resources over a network. Think of it as a building blueprint for running and deploying
applications in cloud environments.

Cloud architecture defined


Cloud architecture refers to how various cloud technology components, such as hardware,
virtual resources, software capabilities, and virtual network systems, interact and connect
to create cloud computing environments. It acts as a blueprint that defines the best way to
strategically combine resources to build a cloud environment for a specific business need.
Cloud architecture components

Cloud architecture components include:

 A frontend platform

 A backend platform

 A cloud-based delivery model

 A network (internet, intranet, or intercloud)

In cloud computing, frontend platforms contain the client infrastructure—user interfaces,
client-side applications, and the client device or network that enables users to interact
with and access cloud computing services. For example, you can open the web browser on your
mobile phone and edit a Google Doc. All three of these things are frontend cloud
architecture components.

On the other hand, the back end refers to the cloud architecture components that make up the
cloud itself, including computing resources, storage, security mechanisms, management, and
more.

Below is a list of the main backend components:

Application: The backend software or application the client is accessing from the front end to
coordinate or fulfill client requests and requirements.

Service: The service is the heart of cloud architecture, taking care of all the tasks being run on
a cloud computing system. It manages which resources you can access, including storage,
application development environments, and web applications.

Runtime cloud: The runtime cloud provides the environment where services run, acting as an
operating system that handles the execution and management of service tasks. Runtimes rely
on virtualization technology (hypervisors) to host your services, including apps, servers,
storage, and networking.

Storage: The storage component in the back end is where data to operate applications is stored.
While cloud storage options vary by provider, most cloud service providers offer flexible
scalable storage services that are designed to store and manage vast amounts of data in the
cloud. Storage may include hard drives, solid-state drives, or persistent disks in server bays.

Infrastructure: Infrastructure is probably the most commonly known component of cloud
architecture. In fact, you might have thought that cloud infrastructure is cloud
architecture. However, cloud infrastructure comprises all the major hardware components
that power cloud services, including the CPU, graphics processing unit (GPU), network
devices, and other hardware needed for systems to run smoothly. Infrastructure also refers
to all the software needed to run and manage everything.

Cloud architecture, on the other hand, is the plan that dictates how cloud resources and
infrastructure are organized.

Management: Cloud service models require that resources be managed in real time according
to user requirements. It is essential to use management software, also known as middleware, to
coordinate communication between the backend and frontend cloud architecture components
and allocate resources for specific tasks. Beyond middleware, management software will also
include capabilities for usage monitoring, data integration, application deployment, and
disaster recovery.

Security: As more organizations continue to adopt cloud computing, implementing cloud
security features and tools is critical to securing data, applications, and platforms. It's
essential to plan and design data security and network security to provide visibility,
prevent data loss and downtime, and ensure redundancy. This may include regular backups,
debugging, and virtual firewalls.

How does cloud architecture work?


In cloud architecture, each of the components works together to create a cloud computing
platform that provides users with on-demand access to resources and services.

The back end contains all the cloud computing resources, services, data storage, and
applications offered by a cloud service provider. A network is used to connect the frontend and
backend cloud architecture components, enabling data to be sent back and forth between them.
When users interact with the front end (or client-side interface), it sends queries to the back
end using middleware where the service model carries out the specific task or request.

The types of services available to use vary depending on the cloud-based delivery model or
service model you have chosen. There are three main cloud computing service models:

 Infrastructure as a service (IaaS): This model provides on-demand access to cloud
infrastructure, such as servers, storage, and networking. This eliminates the need to
procure, manage, and maintain on-premises infrastructure.

 Platform as a service (PaaS): This model offers a computing platform with all the
underlying infrastructure and software tools needed to develop, run, and manage
applications.

 Software as a service (SaaS): This model offers cloud-based applications that are
delivered and maintained by the service provider, eliminating the need for end users to
deploy software locally.
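One common way to see the difference between the three service models is to compare who manages which layer. The layer names and the split below are a widely used simplification, not any provider's official responsibility matrix:

```python
# Layers the PROVIDER manages under each service model (illustrative split).
MANAGED_BY_PROVIDER = {
    "IaaS": {"hardware", "virtualization"},
    "PaaS": {"hardware", "virtualization", "operating system", "runtime"},
    "SaaS": {"hardware", "virtualization", "operating system", "runtime",
             "application"},
}

ALL_LAYERS = {"hardware", "virtualization", "operating system", "runtime",
              "application"}

def customer_manages(model: str) -> set:
    """Whatever the provider does not manage falls to the customer."""
    return ALL_LAYERS - MANAGED_BY_PROVIDER[model]

print(sorted(customer_manages("PaaS")))  # the customer only brings the application
```

Moving from IaaS to SaaS, the customer's set shrinks to nothing: with SaaS the provider manages every layer, which is exactly why end users need not deploy anything locally.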
Cloud architecture layers
A simpler way of understanding how cloud architecture works is to think of all these
components as various layers placed on top of each other to create a cloud platform.

Here are the basic cloud architecture layers:

1. Hardware: The servers, storage, network devices, and other hardware that power the
cloud.

2. Virtualization: An abstraction layer that creates a virtual representation of physical
computing and storage resources. This allows multiple applications to use the same
resources.

3. Application and service: This layer coordinates and supports requests from the
frontend user interface, offering different services based on the cloud service model,
from resource allocation to application development tools to web-based applications.

Cloud infrastructure components :


The components of cloud infrastructure support the computing requirements of a cloud
computing model. Cloud infrastructure has a number of key components, including (but not
limited to) servers, software, networking, and storage devices. In general, cloud
infrastructure is categorized into three parts:

1. Computing

2. Networking

3. Storage

The most important point is that cloud infrastructure should satisfy some basic
infrastructural requirements, such as transparency, scalability, security, and intelligent
monitoring.

The below figure represents components of cloud infrastructure

Components of Cloud Infrastructure


1. Hypervisor :

A hypervisor is firmware or a low-level program that is key to enabling virtualization. It
is used to divide and allocate cloud resources among several customers. Because it monitors
and manages cloud services and resources, the hypervisor is also called a VMM (Virtual
Machine Monitor, or Virtual Machine Manager).
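The hypervisor's job of dividing host resources among customers can be sketched with a toy allocator. The class, method names, and resource numbers below are illustrative only, not any real VMM's API:

```python
class Hypervisor:
    """Toy VMM: divides a host's CPU and memory among tenant VMs."""

    def __init__(self, total_cpus: int, total_mem_gb: int):
        self.free_cpus = total_cpus
        self.free_mem = total_mem_gb
        self.vms = {}

    def allocate(self, name: str, cpus: int, mem_gb: int) -> bool:
        """Carve out a VM if the host still has enough free resources."""
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            return False  # refuse: the host cannot satisfy this request
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

hv = Hypervisor(total_cpus=16, total_mem_gb=64)
print(hv.allocate("tenant-a", cpus=8, mem_gb=32))   # fits
print(hv.allocate("tenant-b", cpus=12, mem_gb=16))  # rejected: only 8 CPUs left
```

A real hypervisor adds scheduling, isolation, and overcommit policies on top, but the bookkeeping of slicing one physical host into per-customer shares is the core idea.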

2. Management Software :
Management software helps in maintaining and configuring the infrastructure. Cloud
management software monitors and optimizes resources, data, applications and services.

3. Deployment Software :

Deployment software helps in deploying and integrating the application on the cloud. So,
typically it helps in building a virtual computing environment.

4. Network :

The network is one of the key components of cloud infrastructure, responsible for connecting
cloud services over the internet. A network is essential for transmitting data and resources
both externally and internally.

5. Server :

The server represents the computing portion of the cloud infrastructure. It is responsible
for managing and delivering cloud services to various users and partners, maintaining
security, and so on.

6. Storage :

Storage represents the storage facility provided to different organizations for storing and
managing data. Because it keeps multiple copies of the data, it can fall back on another
copy if one storage resource fails.

Along with this, virtualization is also considered an important component of cloud
infrastructure, because it abstracts the available data storage and computing power away
from the actual hardware; users then interact with their cloud infrastructure through a
GUI (Graphical User Interface).

Client-Server Model
Client-Server Model – The client-server model describes the communication between two
computing entities over a network. Clients are the ones requesting a resource or service and
Servers are the ones providing that resource or service. Note, the server can be running one
or more programs and involved in multiple communications with multiple clients at the
same time. The client initiates the communication and awaits a response from the server.
This model was developed in the ‘70s at Xerox Palo Alto Research Center (PARC).

How does the Client-Server Model work?

In this section we take a closer look at the client-server model and at how the Internet
works via web browsers. This gives a solid foundation of the web and makes working with
web technologies easier.
 Client: In everyday use, a client is a person or organization using a particular
service. Similarly, in the digital world a client is a computer (host) capable of
receiving information or using a particular service from a service provider (server).

 Server: Likewise, in everyday use a server is a person or medium that serves something.
In the digital world, a server is a remote computer that provides information (data) or
access to particular services.

So, essentially, the client requests something and the server serves it, as long as it is
present in the database.

How does the browser interact with the server?

A client follows a few steps to interact with a server:

 The user enters the URL (Uniform Resource Locator) of the website or file. The browser
then queries a DNS (Domain Name System) server.

 The DNS server looks up the address of the web server.

 The DNS server responds with the IP address of the web server.

 The browser sends an HTTP/HTTPS request to the web server's IP address (provided by the
DNS server).

 The server sends back the necessary files of the website.

 The browser then renders the files and the website is displayed. Rendering is done with
the help of the DOM (Document Object Model) interpreter, the CSS interpreter, and the
JavaScript engine, which typically includes a JIT (Just-in-Time) compiler.
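The request/response exchange in the steps above can be demonstrated locally with Python's standard `socket` module. This echo server is a stand-in for a real web server, not an HTTP implementation; the port is chosen by the OS and all names are illustrative:

```python
import socket
import threading

def run_echo_server(host: str = "127.0.0.1") -> int:
    """Start a one-shot server thread; return the port it listens on."""
    srv = socket.socket()
    srv.bind((host, 0))   # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        data = conn.recv(1024)           # the client's "request"
        conn.sendall(b"ECHO: " + data)   # the server's "response"
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

# The client initiates the communication and awaits the server's response.
port = run_echo_server()
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"GET /index.html")
reply = client.recv(1024)
client.close()
print(reply.decode())
```

Note that, as in the model, only the client initiates: the server passively listens until a request arrives, answers it, and the roles never swap.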
Advantages of Client-Server model:

 Centralized system with all data in a single place.

 Cost efficient: requires less maintenance, and data recovery is possible.

 The capacity of the Client and Servers can be changed separately.

Disadvantages of Client-Server model:

 Clients are prone to viruses, Trojans, and worms if these are present on the server or
uploaded to the server.

 Servers are prone to Denial of Service (DoS) attacks.

 Data packets may be spoofed or modified during transmission.

 Phishing, capturing of login credentials or other user information, and MITM
(Man-in-the-Middle) attacks are common.

Categories of Client-Server Computing


There are four main categories of the client-server model:

 One-Tier architecture: consists of a simple program running on a single computer
without requiring access to the network. User requests don't involve any network
protocols, so the code is simple and the network is relieved of extra traffic.

 Two-Tier architecture: consists of the client, the server, and the protocol that links the
two tiers. The Graphical User Interface code resides on the client host and the domain
logic resides on the server host. The client-server GUI is written in high-level languages
such as C++ and Java.
 Three-Tier architecture: consists of a presentation tier, which is the User Interface
layer, the application tier, which is the service layer that performs detailed processing,
and the data tier, which consists of a database server that stores information.

 N-Tier architecture: divides an application into logical layers, which separate
responsibilities and manage dependencies, and physical tiers, which run on separate
machines, improve scalability, and add latency from the additional network
communication. N-Tier architecture can be closed-layer, in which a layer can only
communicate with the next layer down, or open-layer, in which a layer can
communicate with any layer below it.
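The three-tier split described above can be sketched in a few lines. The in-memory dictionary stands in for a database server, and all names and data are invented for illustration:

```python
# Data tier: stands in for a database server.
_DB = {"alice": {"plan": "pro"}, "bob": {"plan": "free"}}

# Application tier: the service layer that performs the domain logic.
def get_plan(username: str) -> str:
    record = _DB.get(username)
    return record["plan"] if record else "unknown"

# Presentation tier: formats the result for the user interface.
def render_profile(username: str) -> str:
    return f"{username}: {get_plan(username)} plan"

print(render_profile("alice"))
```

Each tier calls only the one below it, the closed-layer style; in a real deployment the three functions would run on separate machines and communicate over the network, which is where the extra latency of physical tiers comes from.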

Benefits of the Client-Server Model


There are numerous advantages of the client-server model:


 A single server hosting all the required data in a single place facilitates easy protection
of data and management of user authorization and authentication.

 Resources such as network segments, servers and computers can be added to a client-
server network without any significant interruptions.

 Data can be accessed efficiently without requiring clients and the server to be in close
proximity.

 All nodes in the client-server system are independent, requesting data only from the
server, which facilitates easy upgrades, replacements, and relocation of the nodes.

 Data that is transferred through client-server protocols are platform-agnostic.

Peer to Peer Computing


The peer-to-peer computing architecture contains nodes that are equal participants in data
sharing. All tasks are divided equally between the nodes, which interact with each other as
required and share resources.
Characteristics of Peer to Peer Computing
The different characteristics of peer to peer networks are as follows −

 Peer to peer networks are usually formed by groups of a dozen or fewer computers.
These computers all store their data under individual security but also share data
with all the other nodes.
 The nodes in peer to peer networks both use resources and provide resources. So,
if the nodes increase, then the resource sharing capacity of the peer to peer
network increases. This is different than client server networks where the server
gets overwhelmed if the nodes increase.
 Since nodes in peer to peer networks act as both clients and servers, it is difficult
to provide adequate security for the nodes. This can lead to denial of service
attacks.
 Most modern operating systems such as Windows and Mac OS contain software to
implement peer to peer networks.
Advantages of Peer to Peer Computing
Some advantages of peer to peer computing are as follows −

 Each computer in the peer to peer network manages itself. So, the network is quite easy to
set up and maintain.
 In the client server network, the server handles all the requests of the clients. This
provision is not required in peer to peer computing and the cost of the server is saved.
 It is easy to scale the peer to peer network and add more nodes. This only increases the
data sharing capacity of the system.
 None of the nodes in the peer to peer network are dependent on the others for their
functioning.
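The defining property above, every node acting as both client and server, can be sketched with a toy peer class. The class, file names, and lookup logic are illustrative, not a real P2P protocol:

```python
class Peer:
    """Toy peer: both provides resources (serves files) and requests them."""

    def __init__(self, name: str, files: set):
        self.name = name
        self.files = set(files)
        self.neighbors = []

    def connect(self, other: "Peer") -> None:
        # Connections are symmetric: there is no dedicated server.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def fetch(self, filename: str) -> bool:
        """Act as a client: ask neighbors for a file and copy it locally."""
        if filename in self.files:
            return True
        for peer in self.neighbors:
            if filename in peer.files:  # the neighbor acts as a server here
                self.files.add(filename)
                return True
        return False

a = Peer("a", {"song.mp3"})
b = Peer("b", set())
a.connect(b)
print(b.fetch("song.mp3"))
```

Note that after the fetch, `b` holds a copy too, so it can now serve the same file to other peers; this is why adding nodes increases, rather than strains, the network's sharing capacity.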

Disadvantages of Peer to Peer Computing


Some disadvantages of peer to peer computing are as follows −
 It is difficult to back up the data, as it is stored on different computer systems and
there is no central server.
 It is difficult to provide overall security in the peer to peer network as each system
is independent and contains its own data.

Difference between Grid Computing and Utility Computing

1. Grid Computing :
Grid computing, as the name suggests, combines resources from various administrative domains
to achieve a common goal. Its main aim is to virtualize resources, applying the resources of
several networked computers to a single technical or scientific problem at the same time.
2. Utility Computing:
Utility computing, as the name suggests, provides services and computing resources to
customers. It is basically a facility provided to users on demand, charging them for their
specific usage. It is similar to cloud computing and therefore requires cloud-like
infrastructure.

Difference between Grid Computing and Utility Computing :


o Grid computing is a processing architecture that combines computing resources from
multiple locations to achieve a desired, common goal. Utility computing is a processing
architecture that provides on-demand computing resources and infrastructure on a
pay-per-use basis.
o Grid computing distributes the workload across multiple systems and lets computers
contribute their individual resources to a common goal. Utility computing allows an
organization to allocate and segregate computing resources and infrastructure among
various users on the basis of their requirements.
o Grid computing makes better use of existing resources, addresses rapid fluctuations in
customer demand, improves computational capabilities, and provides flexibility. Utility
computing reduces IT costs, is easier to manage, and provides greater flexibility,
compatibility, and convenience.
o Grid computing mainly focuses on sharing computing resources, while utility computing
mainly focuses on acquiring computing resources.
o Grid computing is of three types: computational grid, data grid, and collaborative grid.
Utility computing is of two types: internal utility and external utility.
o Grid computing is used in ATMs, back-end infrastructures, marketing research, etc.
Utility computing is used by large organizations such as Amazon and Google, which
establish their own utility services for computing, storage, and applications.
o The main purpose of grid computing is to integrate the use of computer resources from
cooperating partners in the form of Virtual Organizations (VOs). The main purpose of
utility computing is to make computing resources and infrastructure management
available to customers as per their needs, charging them for specific usage rather than a
flat rate.
o The characteristics of grid computing include resource coordination and transparent,
dependable access. The characteristics of utility computing include scalability, demand
pricing, standardized utility computing services, and automation.

Difference between Cloud Computing and Grid Computing

Cloud Computing
Cloud computing uses a client-server architecture to deliver computing resources such as
servers, storage, databases, and software over the cloud (Internet) with pay-as-you-go pricing.

Cloud computing has become a very popular option for organizations because it provides various
advantages, including cost savings, increased productivity, efficiency, performance, data
backups, disaster recovery, and security.

Grid Computing
Grid computing is also called "distributed computing." It links multiple computing
resources (PCs, workstations, servers, and storage elements) together and provides a
mechanism to access them.

The main advantages of grid computing are that it increases user productivity by providing
transparent access to resources, and that work can be completed more quickly.

Let's understand the difference between cloud computing and grid computing.

o Cloud computing follows a client-server computing architecture, while grid computing
follows a distributed computing architecture.
o In cloud computing, scalability is high; in grid computing, scalability is normal.
o Cloud computing is more flexible than grid computing.
o The cloud operates as a centralized management system, while the grid operates as a
decentralized management system.
o In cloud computing, cloud servers are owned by infrastructure providers. In grid
computing, grids are owned and managed by the organization.
o Cloud computing uses services like IaaS, PaaS, and SaaS. Grid computing uses systems
like distributed computing, distributed information, and distributed pervasive systems.
o Cloud computing is service-oriented, whereas grid computing is application-oriented.
o Cloud computing is accessible through standard web protocols, whereas grid computing
is accessible through grid middleware.
How does cloud computing work
Assume that you are an executive at a very big corporation. Your responsibilities
include making sure that all of your employees have the hardware and software they need
to do their jobs. Buying computers for everyone is not enough. You also have to purchase
software as well as software licenses, and then provide this software to your employees as
they require it. Whenever you hire a new employee, you need to buy more software or make sure
your current software license allows another user. It is stressful, and you have to spend a lot
of money.

But there may be an alternative for executives like you. Instead of installing a suite of
software on each computer, you just need to load one application. That application allows
employees to log in to a Web-based service which hosts all the programs the user
requires for his or her job. Remote servers owned by another company run
everything from e-mail to word processing to complex data analysis programs. This is called cloud
computing, and it could change the entire computer industry.

In a cloud computing system, there is a significant workload shift. Local computers no
longer have to do all the heavy lifting when it comes to running applications; the cloud
handles that load automatically. Hardware and software demands on the
user's side decrease. The only thing the user's computer needs to run is the cloud
computing system's interface software, which can be as simple as a Web browser; the
cloud's network takes care of the rest.
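The thin-client interaction described above amounts to the user's machine packaging a request while the provider's servers do the heavy work. A minimal sketch (the host and endpoint below are hypothetical placeholders, not a real service):

```python
# Sketch of a thin cloud client: the local machine only packages data and
# sends an HTTP request; the provider's servers do the computation.
# The host and endpoint are hypothetical placeholders.
import json
import urllib.request

def build_request(numbers):
    """Package the client's data as JSON; the cloud does the real work."""
    return json.dumps({"values": numbers}).encode("utf-8")

def run_in_cloud(numbers, host="https://analysis.example.com"):
    req = urllib.request.Request(
        host + "/api/v1/sum",                      # placeholder endpoint
        data=build_request(numbers),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:      # requires a live service
        return json.load(resp)

print(build_request([1, 2]))  # b'{"values": [1, 2]}'
```

The client-side code stays this small no matter how heavy the server-side computation is, which is exactly the workload shift the paragraph describes.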

Cloud Computing Applications


Cloud service providers provide various applications in the field of art, business, data storage
and backup services, education, entertainment, management, social networking, etc.

The most widely used cloud computing applications are given below -
1. Art Applications
Cloud computing offers various art applications for quickly and easily designing attractive cards,
booklets, and images. Some commonly used cloud art applications are given below:

i. Moo

Moo is one of the best cloud art applications. It is used for designing and printing business
cards, postcards, and mini cards.

ii. Vistaprint

Vistaprint allows us to easily design various printed marketing products such as business cards,
Postcards, Booklets, and wedding invitations cards.

iii. Adobe Creative Cloud

Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative
professionals. It is a suite of apps which includes the Photoshop image editing program,
Illustrator, InDesign, Typekit, Dreamweaver, XD, and Audition.

2. Business Applications
Business applications are based on cloud service providers. Today, every organization requires
cloud business applications to grow its business. The cloud also ensures that business applications
are available to users 24*7.

There are the following business applications of cloud computing -

i. MailChimp

MailChimp is an email publishing platform which provides various options to design,
send, and save templates for emails.

ii. Salesforce

Salesforce provides tools for sales, service, marketing, e-commerce, and more. It also
provides a cloud development platform.

iii. Chatter

Chatter helps us to share important information about the organization in real time.

iv. Bitrix24

Bitrix24 is a collaboration platform which provides communication, management, and social
collaboration tools.

v. PayPal

PayPal offers a simple and easy online payment mode using a secure internet account.
PayPal accepts payments through debit cards, credit cards, and PayPal account
balances.

vi. Slack

Slack stands for Searchable Log of All Conversation and Knowledge. It provides a user-
friendly interface that helps us to create public and private channels for communication.

vii. QuickBooks

QuickBooks works on the terminology "Run Enterprise anytime, anywhere, on any device."
It provides online accounting solutions for businesses and allows more than 20 users to work
simultaneously on the same system.

3. Data Storage and Backup Applications


Cloud computing allows us to store information (data, files, images, audio, and video) on the
cloud and access it using an internet connection. As the cloud provider is
responsible for security, providers also offer various backup and recovery applications for
retrieving lost data.

A list of data storage and backup applications in the cloud are given below -

i. Box.com

Box provides an online environment for secure content management, workflow, and
collaboration. It allows us to store different files such as Excel, Word, PDF,
and image files on the cloud. The main advantage of using Box is that it provides drag & drop
service for files and easily integrates with Office 365, G Suite, Salesforce, and more than 1400
tools.

ii. Mozy

Mozy provides powerful online backup solutions for our personal and business data. It
schedules automatically back up for each day at a specific time.

iii. Joukuu

Joukuu provides the simplest way to share and track cloud-based backup files. Many users
use joukuu to search files, folders, and collaborate on documents.

iv. Google G Suite

Google G Suite is one of the best cloud storage and backup applications. It includes Google
Calendar, Docs, Forms, Google+, and Hangouts, as well as cloud storage and tools for managing
cloud apps. The most popular app in Google G Suite is Gmail, which offers free email
services to users.
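The store-and-retrieve interface that backup services like those above expose can be sketched with a toy in-memory model (the CloudStore class and its methods are invented for illustration; real services such as Box or Mozy expose similar upload/download operations through their APIs):

```python
# Toy sketch of a cloud storage service's store/retrieve interface.
import hashlib

class CloudStore:
    def __init__(self):
        self._objects = {}   # stands in for the provider's remote storage

    def upload(self, name, data: bytes):
        # Backup providers typically record a checksum to detect corruption.
        digest = hashlib.sha256(data).hexdigest()
        self._objects[name] = (data, digest)
        return digest

    def download(self, name):
        data, digest = self._objects[name]
        # Verify integrity on retrieval, as backup services do.
        assert hashlib.sha256(data).hexdigest() == digest
        return data

store = CloudStore()
store.upload("notes.txt", b"meeting at 10")
print(store.download("notes.txt"))  # b'meeting at 10'
```

The checksum step is what lets a backup service promise that the bytes you retrieve are the bytes you stored, even years later.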

4. Education Applications
Cloud computing has become very popular in the education sector. It offers students various
online distance learning platforms and student information portals. The advantages
of using the cloud in education are that it offers strong virtual classroom environments,
ease of accessibility, secure data storage, scalability, greater reach for students, and
minimal hardware requirements for the applications.

There are the following education applications offered by the cloud -

i. Google Apps for Education

Google Apps for Education is the most widely used platform for free web-based email,
calendar, documents, and collaborative study.

ii. Chromebooks for Education

Chromebook for Education is one of Google's most important projects. It is designed
to enhance innovation in education.

iii. Tablets with Google Play for Education


It allows educators to quickly implement the latest technology solutions into the classroom and
make it available to their students.

iv. AWS in Education

AWS cloud provides an education-friendly environment to universities, community colleges,


and schools.

5. Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud
computing offers various entertainment applications such as online games and video
conferencing.

i. Online games

Today, cloud gaming has become one of the most important entertainment media. It offers various
online games that run remotely from the cloud. The best-known cloud gaming services are Shadow,
GeForce Now, Vortex, Project xCloud, and PlayStation Now.

ii. Video Conferencing Apps

Video conferencing apps provide a simple and instantly connected experience. They allow us to
communicate with our business partners, friends, and relatives using cloud-based video
conferencing. The benefits of video conferencing are that it reduces cost, increases
efficiency, and removes interoperability issues.

6. Management Applications
Cloud computing offers various cloud management tools which help admins to manage all
types of cloud activities, such as resource deployment, data integration, and disaster recovery.
These management tools also provide administrative control over the platforms, applications,
and infrastructure.

Some important management applications are -

i. Toggl
Toggl helps users to track allocated time period for a particular project.

ii. Evernote
Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one
convenient place. It is available in both free and paid versions.
It runs on platforms including Windows, macOS, Android, iOS, the browser, and Unix.

iii. Outright
Outright is used by management users for accounting purposes. It helps to track income,
expenses, profits, and losses in real time.

iv. GoToMeeting
GoToMeeting provides video conferencing and online meeting apps, which allow you to
start a meeting with your business partners anytime, anywhere, using mobile phones or
tablets. Using the GoToMeeting app, you can perform management tasks such as
joining meetings in seconds, viewing presentations on the shared screen, and getting alerts
for upcoming meetings.

7. Social Applications
Social cloud applications allow a large number of users to connect with each other using social
networking applications such as Facebook, Twitter, LinkedIn, etc.

There are the following cloud based social applications -

i. Facebook

Facebook is a social networking website which allows active users to share files, photos,
videos, status updates, and more with their friends, relatives, and business partners using the
cloud storage system. On Facebook, we always get notifications when our friends like or
comment on our posts.

ii. Twitter

Twitter is a social networking and microblogging site. It allows users to follow
high-profile celebrities, friends, and relatives, and to receive news. Users send and receive
short posts called tweets.

iii. Yammer

Yammer is the best team collaboration tool that allows a team of employees to chat, share
images, documents, and videos.

iv. LinkedIn

LinkedIn is a social network for students, freshers, and professionals.


What are the Security Risks of Cloud Computing
Cloud computing provides various advantages, such as improved collaboration, excellent
accessibility, mobility, and storage capacity. But there are also security risks in cloud
computing.

Some most common Security Risks of Cloud Computing are given below-

Data Loss
Data loss is the most common security risk of cloud computing. It is also known as data
leakage. Data loss is the process in which data is deleted or corrupted, or becomes unreadable
by a user, software, or application. In a cloud computing environment, data loss occurs when
sensitive data falls into somebody else's hands, when one or more data elements can no longer
be utilized by the data owner, when a hard disk fails, or when software is not updated.

Hacked Interfaces and Insecure APIs


As we all know, cloud computing depends completely on the Internet, so it is compulsory to
protect the interfaces and APIs that are used by external users. APIs are the easiest way to
communicate with most cloud services. In cloud computing, a few services are available
in the public domain. These services can be accessed by third parties, so there is a chance
that they can be harmed and hacked by attackers.

Data Breach
A data breach is the process in which confidential data is viewed, accessed, or stolen by a
third party without any authorization, so the organization's data is compromised by hackers.

Vendor lock-in
Vendor lock-in is one of the biggest security risks in cloud computing. Organizations may face
problems when transferring their services from one vendor to another. Because different vendors
provide different platforms, moving from one cloud to another can be difficult.

Increased complexity strains IT staff


Migrating, integrating, and operating cloud services is complex for IT staff, who
need extra capabilities and skills to manage, integrate, and maintain data in the
cloud.
Spectre & Meltdown
Spectre and Meltdown allow programs to view and steal data which is currently being processed
on a computer. They can be exploited on personal computers, mobile devices, and in the cloud.
They can expose passwords and personal information such as images, emails, and business
documents held in the memory of other running programs.

Denial of Service (DoS) attacks


Denial of service (DoS) attacks occur when a system receives more traffic than the server can
buffer. DoS attackers mostly target the web servers of large organizations such as banks,
media companies, and government organizations. Recovering from an attack costs the victim
a great deal of time and money.
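One common mitigation for the traffic floods described above is rate limiting at the server's edge. A minimal token-bucket sketch (the capacity and refill rate are illustrative values, not recommendations):

```python
# Minimal token-bucket rate limiter, a common defense against traffic
# floods; the capacity and refill rate below are illustrative values.
import time

class TokenBucket:
    def __init__(self, capacity=10, refill_rate=5.0):
        self.capacity = capacity          # max burst size
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1              # spend a token for this request
            return True
        return False                      # over the limit: drop the request

bucket = TokenBucket(capacity=3, refill_rate=0.0)
print([bucket.allow() for _ in range(5)])  # [True, True, True, False, False]
```

Legitimate clients within the limit are unaffected, while a flood quickly exhausts its tokens and gets dropped before it can overwhelm the server.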

Account hijacking
Account hijacking is a serious security risk in cloud computing. It is the process in which an
individual user's or an organization's cloud account (a bank account, e-mail account, or social
media account) is stolen by hackers, who then use the stolen account to perform
unauthorized activities.

Types of Cloud(Deployment Model)


There are the following 4 types of cloud that you can deploy according to the organization's
needs-

o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud
Public Cloud
Public cloud is open to all; anyone can store and access information via the Internet on a
pay-per-usage basis.

In public cloud, computing resources are managed and operated by the Cloud Service Provider
(CSP).

Example: Amazon elastic compute cloud (EC2), IBM SmartCloud Enterprise, Microsoft,
Google App Engine, Windows Azure Services Platform.

Characteristics of Public Cloud


The public cloud has the following key characteristics:

o Accessibility: Public cloud services are available to anyone with an internet connection. Users
can access their data and programs at any time and from anywhere.
o Shared Infrastructure: Several users share the infrastructure in public cloud settings. Cost
reductions and effective resource use are made possible by this.
o Scalability: By using the public cloud, users can easily adjust the resources they need based on
their requirements, allowing for quick scaling up or down.
o Pay-per-Usage: When using the public cloud, payment is based on usage, so users only pay
for the resources they actually use. This helps optimize costs and eliminates the need for upfront
investments.
o Managed by Service Providers: Cloud service providers manage and maintain public cloud
infrastructure. They handle hardware maintenance, software updates, and security tasks,
relieving users of these responsibilities.
o Reliability and Redundancy: Public cloud providers ensure high reliability by implementing
redundant systems and multiple data centers. By doing this, the probability of losing data and
experiencing service disruptions is reduced.
o Security Measures: Public cloud providers implement robust security measures to protect user
data. These include encryption, access controls, and regular security audits.
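The pay-per-usage characteristic above can be made concrete with a small billing calculation (the resource names and rates are invented for illustration and do not reflect any real provider's pricing):

```python
# Toy pay-per-usage bill: charge only for resources actually consumed.
# The rates below are invented for illustration, not real provider prices.

RATES = {
    "vm_hours": 0.05,      # dollars per VM-hour
    "storage_gb": 0.02,    # dollars per GB-month
    "egress_gb": 0.08,     # dollars per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage times unit rate, as a public cloud meter would."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

print(monthly_bill({"vm_hours": 720, "storage_gb": 100, "egress_gb": 10}))  # 38.8
```

Because the bill is purely usage times rate, scaling resources down immediately lowers cost; there is no upfront investment to amortize.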

Advantages of Public Cloud


There are the following advantages of Public Cloud -
o Public cloud costs less to use than private and hybrid clouds.
o Public cloud is maintained by the cloud service provider, so there is no need to worry about
maintenance.
o Public cloud is easier to integrate with, hence it offers consumers greater flexibility.
o Public cloud is location independent because its services are delivered through the internet.
o Public cloud is highly scalable as per the requirement of computing resources.
o It is accessible by the general public, so there is no limit to the number of users.

Disadvantages of Public Cloud


o Public cloud is less secure because resources are shared publicly.
o Performance depends upon the high-speed internet link to the cloud provider.
o The client has no control over the data.

Private Cloud
Private cloud is also known as an internal cloud or corporate cloud. It is used by
organizations to build and manage their own data centers, either internally or through a third
party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.

Based on location and management, the National Institute of Standards and Technology (NIST)
divides private cloud into the following two parts-

o On-premise private cloud: An on-premise private cloud is situated within the physical
infrastructure of the organization. It involves setting up and running a specific data
center that offers cloud services just for internal usage by the company. The
infrastructure is still completely under the hands of the organization, which gives them
the freedom to modify and set it up in any way they see fit. Organizations can
successfully manage security and compliance issues with this degree of control.
However, on-premise private cloud setup and management necessitate significant
hardware, software, and IT knowledge expenditures.
o Outsourced private cloud: An outsourced private cloud involves partnering with a
third-party service provider to host and manage the cloud infrastructure on behalf of
the organization. The provider may operate the private cloud in their data center or a
colocation facility. In this arrangement, the organization benefits from the expertise and
resources of the service provider, alleviating the burden of infrastructure management.
The outsourced private cloud model offers scalability, as the provider can adjust
resources based on the organization's needs. Due to its flexibility, it is a desirable choice
for businesses that desire the advantages of a private cloud deployment without the
initial capital outlay and ongoing maintenance expenses involved with an on-premise
implementation.
Compared to public cloud options, both on-premise and external private clouds give businesses
more control over their data, apps, and security. Private clouds are particularly suitable for
organizations with strict compliance requirements, sensitive data, or specialized workloads that
demand high levels of customization and security.

Characteristics of Private Cloud


The private cloud has the following key characteristics:

o Exclusive Use: Private cloud is dedicated to a single organization, ensuring the


resources and services are tailored to its needs. It is like having a personal cloud
environment exclusively for that organization.
o Control and Security: Private cloud offers organizations higher control and security
than public cloud options. Organizations have more control over data governance,
access controls, and security measures.
o Customization and Flexibility: Private cloud allows organizations to customize the
infrastructure according to their specific requirements. They can configure resources,
networks, and storage to optimize performance and efficiency.
o Scalability and Resource Allocation: The private cloud can scale and allocate
resources. According to demand, businesses may scale up or down their infrastructure,
effectively using their resources.
o Performance and dependability: Private clouds give businesses more control over the
infrastructure at the foundation, improving performance and dependability.
o Compliance and Regulatory Requirements: Organizations may more easily fulfill
certain compliance and regulatory standards using the private cloud. It provides the
freedom to put in place strong security measures, follow data residency laws, and follow
industry-specific norms.
o Hybrid Cloud Integration: Private cloud can be integrated with public cloud services,
forming a hybrid cloud infrastructure. This integration allows organizations to leverage
the benefits of both private and public clouds.

Advantages of Private Cloud


There are the following advantages of the Private Cloud -
o Private cloud provides a high level of security and privacy to the users.
o Private cloud offers better performance with improved speed and space capacity.
o It allows the IT team to quickly allocate and deliver on-demand IT resources.
o The organization has full control over the cloud because it is managed by the organization
itself, so there is no need for the organization to depend on anyone.
o It is suitable for organizations that require a separate cloud for their personal use and data
security is the first priority.

Disadvantages of Private Cloud


o Skilled people are required to manage and operate cloud services.
o Private cloud is accessible within the organization, so the area of operations is limited.
o Private cloud is not suitable for organizations with a high user base, or for organizations that
lack the prebuilt infrastructure or sufficient manpower to maintain and manage the cloud.

Hybrid Cloud
Hybrid Cloud is a combination of the public cloud and the private cloud. We can say:

Hybrid Cloud = Public Cloud + Private Cloud

Hybrid cloud is partially secure because the services which are running on the public cloud can
be accessed by anyone, while the services which are running on a private cloud can be accessed
only by the organization's users.

Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365
(MS Office on the Web and One Drive), Amazon Web Services.

Characteristics of Hybrid Cloud


o Integration of Public and Private Clouds: Hybrid cloud seamlessly integrates public and
private clouds, allowing organizations to leverage both advantages. It provides a unified
platform where workloads and data can be deployed and managed across both environments.
o Flexibility and Scalability: Hybrid cloud offers resource allocation and scalability flexibility.
Organizations can dynamically scale their infrastructure by utilizing additional resources from
the public cloud while maintaining control over critical workloads on the private cloud.
o Enhanced Security and Control: Hybrid cloud allows organizations to maintain higher
security and control over their sensitive data and critical applications. Private cloud components
provide a secure and dedicated environment, while public cloud resources can be used for non-
sensitive tasks, ensuring a balanced approach to data protection.
o Cost Optimization: Hybrid cloud enables organizations to optimize costs by utilizing the cost-
effective public cloud for non-sensitive workloads while keeping mission-critical applications
and data on the more cost-efficient private cloud. This approach allows for efficient resource
allocation and cost management.
o Data and Application Portability: Organizations can move workloads and data between
public and private clouds as needed with a hybrid cloud. This portability offers agility and the
ability to adapt to changing business requirements, ensuring optimal performance and
responsiveness.
o Compliance and Regulatory Compliance: Hybrid cloud helps organizations address
compliance and regulatory requirements more effectively. Sensitive data and applications can
be kept within the private cloud, ensuring compliance with industry-specific regulations while
leveraging the public cloud for other non-sensitive operations.
o Disaster Recovery and Business Continuity: Hybrid cloud facilitates robust disaster recovery
and business continuity strategies. Organizations can replicate critical data and applications
between the private and public clouds, ensuring redundancy and minimizing the risk of data
loss or service disruptions.
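The placement logic implied by the characteristics above (sensitive workloads stay private, elastic non-sensitive work goes public) can be sketched as a simple policy function; the field names and rules are invented for illustration:

```python
# Toy hybrid-cloud placement policy: keep sensitive or compliance-bound
# workloads private, send elastic non-sensitive work to the public cloud.
# Field names and rules are invented for illustration.

def place_workload(workload):
    if workload.get("sensitive") or workload.get("regulated"):
        return "private"        # control and compliance take priority
    if workload.get("bursty"):
        return "public"         # elastic capacity suits variable demand
    return "public"             # default: pay-per-use public resources

jobs = [
    {"name": "payroll", "sensitive": True},
    {"name": "web-frontend", "bursty": True},
    {"name": "batch-report"},
]
print({j["name"]: place_workload(j) for j in jobs})
# {'payroll': 'private', 'web-frontend': 'public', 'batch-report': 'public'}
```

Real hybrid deployments encode a richer version of this policy (data residency, latency, cost), but the core decision is the same split between the two environments.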

Advantages of Hybrid Cloud


There are the following advantages of Hybrid Cloud -

o Hybrid cloud is suitable for organizations that require more security than the public cloud.
o Hybrid cloud helps you to deliver new products and services more quickly.
o Hybrid cloud provides an excellent way to reduce the risk.
o Hybrid cloud offers flexible resources because of the public cloud and secure resources because
of the private cloud.
o Hybrid facilitates seamless integration between on-premises infrastructure and cloud
environments.
o Hybrid provides greater control over sensitive data and compliance requirements.
o Hybrid enables efficient workload distribution based on specific needs and performance
requirements.
o Hybrid offers cost optimization by allowing organizations to choose the most suitable cloud
platform for different workloads.
o Hybrid enhances business continuity and disaster recovery capabilities with private and public
cloud resources.
o Hybrid supports hybrid cloud architecture, allowing applications and data to be deployed across
multiple cloud environments based on their unique requirements.

Disadvantages of Hybrid Cloud


o In the hybrid cloud, the security features are not as good as in the private cloud.
o Managing a hybrid cloud is complex because it is difficult to manage more than one type of
deployment model.
o In the hybrid cloud, the reliability of the services depends on the cloud service providers.

Community Cloud
Community cloud allows systems and services to be accessible to a group of organizations,
which share information among themselves and with a specific community. It
is owned, managed, and operated by one or more organizations in the community, a third party,
or a combination of them.

Example: Health Care community cloud

Characteristics of Community Cloud


o Shared Infrastructure: Community cloud provides a shared infrastructure accessible to a
specific community of organizations. The participating organizations can leverage this common
cloud infrastructure to meet their shared computing needs and objectives.
o Community-specific Services: The community cloud provides resources, apps, and services
adapted to the participating organizations' demands. These services are created to meet the
community's specific requirements and difficulties while promoting effective communication
and information exchange.
o Community Ownership and Management: The community cloud is owned, managed, and
operated by one or more organizations from the community, a third party, or a combination of
both. The involved organizations have a say in the governance and decision-making procedures
to ensure that the cloud infrastructure meets their shared objectives.
o Enhanced Security and Compliance: Community cloud emphasizes security and compliance
measures relevant to the specific community. It allows for implementing robust security
controls, access management, and compliance frameworks that meet the community's
regulatory requirements and industry standards.
o Cost Sharing and Efficiency: Participating organizations in a community cloud benefit from
cost sharing. By sharing the infrastructure and resources, the costs associated with establishing
and maintaining the cloud environment are distributed among the community members. This
leads to cost efficiency and reduced financial burden for individual organizations.
o Collaboration and Knowledge Sharing: The community cloud encourages communication
and information exchange amongst participating organizations. It gives community members a
forum for project collaboration, information sharing, and resource utilization. This
encourages creativity, learning, and effectiveness within the community.
o Scalability and Flexibility: Community cloud enables organizations to scale up or reduce their
resources in response to demand. This allows the community to adjust to shifting computing
requirements and efficiently use cloud resources as needed.

Advantages of Community Cloud


There are the following advantages of Community Cloud -

o Community cloud is cost-effective because the whole cloud is being shared by several
organizations or communities.
o Community cloud is suitable for organizations that want to have a collaborative cloud with
more security features than the public cloud.
o It provides better security than the public cloud.
o It provides a collaborative and distributive environment.
o Community cloud allows us to share cloud resources, infrastructure, and other capabilities
among various organizations.

Disadvantages of Community Cloud


o Community cloud is not a good choice for every organization.
o Security features are not as good as the private cloud.
o It is not suitable if there is no collaboration.
o The fixed amount of data storage and bandwidth is shared among all community members.
Difference between public cloud, private cloud, hybrid cloud, and
community cloud -
The below table shows the difference between public cloud, private cloud, hybrid cloud, and
community cloud.

Parameter | Public Cloud     | Private Cloud            | Hybrid Cloud             | Community Cloud
Host      | Service provider | Enterprise (third party) | Enterprise (third party) | Community (third party)
Users     | General public   | Selected users           | Selected users           | Community members
Access    | Internet         | Internet, VPN            | Internet, VPN            | Internet, VPN
Owner     | Service provider | Enterprise               | Enterprise               | Community

What is the Right Choice for Cloud Deployment Model?


There is no single approach that fits every case when picking a cloud deployment model; the best
model always depends on your requirements. Here are some factors that should be considered
before choosing a deployment model.

 Cost: Cost is an important factor for the cloud deployment model as it tells how much amount
you want to pay for these things.

 Scalability: Scalability tells about the current activity status and how much we can scale it.

 Ease of use: It tells how well your staff are trained and how easily you can manage
these models.

 Compliance: Compliance tells about the laws and regulations which impact the
implementation of the model.

 Privacy: Privacy tells about what data you gather for the model.

Each model has some advantages and some disadvantages, and the selection of the best is only done
on the basis of your requirement. If your requirement changes, you can switch to any other model.

Overall Analysis of Cloud Deployment Models


The overall Analysis of these models with respect to different factors is described below.

Factors                     | Public Cloud   | Private Cloud                         | Community Cloud                       | Hybrid Cloud
Initial Setup               | Easy           | Complex, requires a professional team | Complex, requires a professional team | Complex, requires a professional team
Scalability and Flexibility | High           | High                                  | Fixed                                 | High
Cost Comparison             | Cost-effective | Costly                                | Cost distributed among members        | Between public and private cloud
Reliability                 | Low            | Low                                   | High                                  | High
Data Security               | Low            | High                                  | High                                  | High
Data Privacy                | Low            | High                                  | High                                  | High

Cloud Service Models


There are the following three types of cloud service models -

1. Infrastructure as a Service (IaaS)


2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Infrastructure as a Service (IaaS)


IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure
managed over the internet. The main advantage of using IaaS is that it helps users to avoid the
cost and complexity of purchasing and managing the physical servers.

Infrastructure as a Service (IaaS) delivers computing infrastructure on an outsourced basis to
support operations. Generally, IaaS provides networking equipment, storage devices, databases,
and web servers as services.

Infrastructure as a Service (IaaS) helps large organizations, and large enterprises in managing and
building their IT platforms. This infrastructure is flexible according to the needs of the client.
Characteristics of IaaS
There are the following characteristics of IaaS -

o Resources are available as a service


o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks

Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute
Engine (GCE), Rackspace, and Cisco Metacloud.

Advantages of IaaS

 IaaS is cost-effective as it eliminates capital expenses.

 The IaaS provider is responsible for securing the underlying physical infrastructure.

 IaaS provides remote access.

Disadvantages of IaaS

 In IaaS, users have to secure their own data and applications.

 Cloud computing is not accessible in some regions of the World.

Platform as a Service (PaaS)


The PaaS cloud computing platform is created for programmers to develop, test, run, and manage
applications.

Platform as a Service (PaaS) is a type of cloud computing that helps developers to build applications
and services over the Internet by providing them with a platform.

PaaS helps organizations maintain control over their business applications while the provider manages the underlying platform.

Characteristics of PaaS
There are the following characteristics of PaaS -

o Accessible to various users via the same development application.


o Integrates with web services and databases.
o Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.
o Support multiple languages and frameworks.
o Provides an ability to "Auto-scale".
Example: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine,
Apache Stratos, Magento Commerce Cloud, and OpenShift.
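The "auto-scale" capability listed among the characteristics above can be sketched as a simple control loop: given the current load, compute how many instances are needed to keep each one under a target utilization. The function name and thresholds below are illustrative assumptions, not any provider's actual API.

```python
# Toy sketch of auto-scaling (illustrative only): pick the smallest
# instance count that keeps average load per instance at or below a target.

def autoscale(total_load, target_per_instance=70, min_instances=1):
    # Ceiling division: enough instances so no instance exceeds the target.
    needed = -(-total_load // target_per_instance)
    return max(min_instances, needed)

# Load rising from 100 to 500 units, then dropping to 60:
counts = [autoscale(load) for load in (100, 500, 60)]  # → [2, 8, 1]
```

A real PaaS auto-scaler works on the same principle, but reacts to live metrics such as CPU utilization or request rate, and adds hysteresis so it does not thrash between sizes.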

Advantages of PaaS

 PaaS is simple and convenient for the user, as it can be accessed via a web browser.

 PaaS can efficiently manage the application lifecycle.

Disadvantages of PaaS

 PaaS offers limited control over the infrastructure: users have less control over the
environment and cannot make certain customizations.

 PaaS has a high dependence on the provider.

Software as a Service (SaaS)


SaaS is also known as "on-demand software". It is software in which the applications are
hosted by a cloud service provider. Users can access these applications with the help of an
internet connection and a web browser.

Software as a Service (SaaS) is a cloud computing model in which applications and services are
delivered over the Internet. SaaS applications are also called Web-Based Software or Hosted
Software.

SaaS accounts for around 60 percent of cloud solutions, which makes it the model most
preferred by companies.

Characteristics of SaaS
There are the following characteristics of SaaS -

o Managed from a central location


o Hosted on a remote server
o Accessible over the internet
o Users are not responsible for hardware and software updates. Updates are applied automatically.
o The services are purchased on the pay-as-per-use basis

Example: BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco WebEx,
Slack, and GoToMeeting.

Advantages of SaaS
 SaaS users can access app data from anywhere over the Internet.

 SaaS provides easy access to features and services.

Disadvantages of SaaS

 SaaS solutions have limited customization, which means they have some restrictions within
the platform.

 SaaS gives users little control over their own data.

 Since SaaS solutions are cloud-based, they require a stable internet connection to work properly.

Characteristics Of Service Model of cloud computing


 Multi-Tenant: Multi-tenancy is an architecture in which a single instance of a software
application serves multiple customers. Each customer is called a tenant.

 Self-Service: Self-service cloud computing is a private cloud service where the customer
provisions storage and launches applications without an external cloud service provider. With
a self-service cloud, users access a web-based portal to request or configure servers and
launch applications.

 Elastic (Scale-Up | Scale-Down): Elasticity is the ability to grow or shrink infrastructure


resources dynamically as needed to adapt to workload changes in an autonomic manner,
maximizing the use of resources. This can result in savings in infrastructure costs overall.

 Web-Based: It means you can access your resources via Web-Based applications.

 Automated: Most of the things in the Cloud are automated, and human intervention is less.

 Pay As You Go Model: You only have to pay when utilizing cloud resources.

 Modern Web-Based Integration: It allows you to configure multiple application programs


to share data in the cloud. In a network that incorporates cloud integration, diverse
applications communicate either directly or through third-party software.

 Secure: Cloud services create a copy of the data that you want to store to prevent any form of
data loss. If one server loses the data by any chance, the copy version is restored from the
other server.
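The multi-tenant characteristic above — a single application instance serving many customers — can be sketched in a few lines. The class and method names below are hypothetical; real SaaS platforms isolate tenants at the database, schema, or row level.

```python
# Minimal multi-tenancy sketch: one shared store keeps each
# tenant's data in its own logical partition.

class MultiTenantStore:
    def __init__(self):
        self._data = {}  # tenant_id -> {key: value}

    def put(self, tenant_id, key, value):
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only ever see its own partition.
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()           # one application instance...
store.put("acme", "plan", "gold")    # ...serving tenant "acme"
store.put("globex", "plan", "free")  # ...and tenant "globex"
```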
A popular car-transportation analogy summarizes these service models:

 On-premises: is like owning your cars - you can go anywhere you want
at anytime (full control), in a car make/model/color/trim of your choice,
but you own the car and you’re responsible for its maintenance
 IaaS: is like a car rental service - you still can go anywhere you want at
anytime, with some limits in car choices, but you don’t have to maintain
the vehicles; just take the keys and go
 PaaS: is like public transportation - you can go to places as
defined/limited by available routes and schedules, but it’s easy to use and
pay-per-use (full economies of scale)

Difference between IaaS, PaaS, and SaaS


The below table shows the difference between IaaS, PaaS, and SaaS -

IaaS | PaaS | SaaS
It provides a virtual data center to store information and create platforms for app development, testing, and deployment. | It provides virtual platforms and tools to create, test, and deploy apps. | It provides web software and apps to complete business tasks.
It provides access to resources such as virtual machines, virtual storage, etc. | It provides runtime environments and deployment tools for applications. | It provides software as a service to the end-users.
It is used by network architects. | It is used by developers. | It is used by end users.
IaaS provides only Infrastructure. | PaaS provides Infrastructure + Platform. | SaaS provides Infrastructure + Platform + Software.

What is Cloud API?


A cloud application programming interface (cloud API) enables applications to communicate
and transfer information to one another in the cloud. Cloud APIs essentially enable you to
develop applications and services in the cloud. APIs also connect multiple clouds or connect
cloud and on-premises apps.
Using cloud APIs gives businesses a competitive advantage.

That’s why APIs play a critical role in cloud computing. And that role is only going to expand
as organizations’ cloud deployments expand and grow more complex.

What Is the Role of APIs in Cloud Computing?


APIs enable cloud computing in the enterprise. You can use enterprise APIs in cloud computing
to enable access to different platforms and efficiently manage security. You can even use API
gateways to move workloads across different cloud environments.
There’s no way around it. Cloud is the future of API management. And you’ll need cloud APIs
in order to build a thriving digital ecosystem, ensure governance, and integrate your
applications.

How Does a Cloud API Work?


Here’s how cloud APIs work. Developers access cloud APIs to connect services. The API
communicates with the cloud service and brings the information back to the developer and their
application. As a result, the cloud API enables communication and integration between cloud
services.
As you build and manage cloud APIs, there are some considerations you’ll need to make.

Architecture
First, there’s your API architecture. How will you design your API architecture for the cloud?
There are four key layers you’ll need to consider when planning your cloud architecture:
1. Information management layer — your data repositories.
2. Application layer — where your applications live.
3. Integration layer — where APIs connect your services.

4. Interaction layer — where your API gateway enables interaction between employees,
customers, and partners.

The API gateway plays a critical role in your cloud deployment. It allows you to deploy across
multiple clouds, enforce security policies, and control access.
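The gateway's two jobs described above — enforcing a security policy and routing requests to backends — can be sketched as follows. The routes, API keys, and response shapes here are invented for illustration and are not any real gateway product's API.

```python
# Toy API gateway: check the caller's API key, then dispatch the
# request path to the matching backend handler.

VALID_KEYS = {"partner-key-123"}  # hypothetical issued credentials

ROUTES = {
    "/orders":    lambda req: {"status": 200, "body": "order service"},
    "/customers": lambda req: {"status": 200, "body": "customer service"},
}

def gateway(path, api_key):
    if api_key not in VALID_KEYS:      # policy enforcement
        return {"status": 401, "body": "unauthorized"}
    handler = ROUTES.get(path)         # routing
    if handler is None:
        return {"status": 404, "body": "no such route"}
    return handler({"path": path})

response = gateway("/orders", "partner-key-123")  # → status 200
```

A production gateway layers on rate limiting, request transformation, and metrics, but the control flow is the same: authenticate, then route.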

Orchestration
Next, you’ll need to consider API orchestration. How will you orchestrate your individual API
calls to cloud services?

To do this, you’ll need to use your API gateway. You can design your API catalog and use the
gateway to decide how to service each request in the cloud.

Integration
Finally, you’ll want to consider API integration. How will you use APIs to integrate your cloud
native applications?

To do this, you’ll need an API platform like Akana. Using Akana ensures you can connect your
cloud applications easily, create new cloud APIs, and work with your existing data sources.

Why Are Cloud APIs Important?


Cloud APIs are important to drive digital transformation success. Using cloud APIs helps you
accelerate time-to-market and gain a competitive advantage.

Create New Markets


APIs enable innovation. They empower you to create new markets and opportunities for your
business.

Take John Deere, for example. John Deere uses APIs to create smart, connected products. This
helps them build a new type of marketplace for the food system.

Deliver Better Customer Experiences


APIs allow you to deliver better customer experiences — without sacrificing security. That’s
because cloud APIs allow you to connect applications and expose services.

Take this Fortune 500 company, for example. They use cloud APIs to ensure security and
protect back-end data, while exposing services to customers. This ensures customer satisfaction
and better experiences.

Create New Partnerships


APIs enable partnerships to flourish. That’s because you can use APIs to make services
available to your partners — and vice versa — so you can innovate faster.
Take this North American Bank, for example. They use cloud APIs to create new, innovative
partnerships with disruptive IoT and FinTech companies.

Modernize Mainframe Applications


APIs help you modernize mainframe applications and leverage their data in the cloud.

Cloud Service Provider


What is a cloud service provider?

A cloud service provider, or CSP, is a company that offers components of cloud computing --
typically, infrastructure as a service (IaaS), software as a service (SaaS) or platform as a
service (PaaS).

Cloud service providers use their own data centers and compute resources to host cloud
computing-based infrastructure and platform services for customer organizations. Cloud
services typically are priced using various pay-as-you-go subscription models. Customers are
charged only for resources they consume, such as the amount of time a service is used or the
storage capacity or virtual machines used.
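The pay-as-you-go model described above amounts to summing usage times a unit rate over each metered resource. The rates below are invented for illustration and match no real provider's price list.

```python
# Sketch of pay-as-you-go billing: bill = sum(usage * unit_rate).

RATES = {
    "vm_hours":   0.05,  # $ per VM-hour (illustrative)
    "storage_gb": 0.02,  # $ per GB-month (illustrative)
}

def monthly_bill(usage):
    return round(sum(qty * RATES[resource] for resource, qty in usage.items()), 2)

# One VM running the whole month (720 h) plus 100 GB of storage:
bill = monthly_bill({"vm_hours": 720, "storage_gb": 100})  # → 38.0
```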

For SaaS products, cloud service providers may host and deliver their own managed services
to users. Or they can act as a third party, hosting the app of an independent software vendor.

The most well-known cloud service platforms are Amazon Web Services (AWS), Google
Cloud (formerly Google Cloud Platform or GCP) and Microsoft Azure.

What are the benefits and challenges of using a cloud service


provider?
Using a cloud provider has benefits and challenges. Companies considering using these
services should think about how these factors would affect their priorities and risk profile, for
both the present and long term. Individual CSPs have their own strengths and weaknesses,
which are worth considering.
Benefits
 Cost and flexibility. The pay-as-you-go model of cloud services enables
organizations to only pay for the resources they consume. Using a cloud service
provider also eliminates the need for IT-related capital equipment purchases.
Organizations should review the details of cloud pricing to accurately break down
cloud costs.

 Scalability. Customer organizations can easily scale up or down the IT resources


they use based on business demands.

 Mobility. Resources and services purchased from a cloud service provider can
be accessed from any physical location that has a working network connection.

 Disaster recovery. Cloud computing services typically offer quick and reliable
disaster recovery.

Challenges
 Hidden costs. Cloud use may incur expenses not factored into the initial return
on investment analysis. For example, unplanned data needs can force a customer
to exceed contracted amounts, leading to extra charges. To be cost-effective,
companies also must factor in additional staffing needs for monitoring and
managing cloud use. Terminating use of on-premises systems also has costs, such
as writing off assets and data cleanup.

 Cloud migration. Moving data to and from the cloud can take time.
Companies might not have access to their critical data for weeks, or even months,
while large amounts of data are first transferred to the cloud.

 Cloud security. When trusting a provider with critical data, organizations risk
security breaches, compromised credentials and other substantial security risks.
Also, providers may not always be transparent about security issues and practices.
Companies with specific security needs may rely on open source cloud security
tools, in addition to the provider's tools.

 Performance and outages. Outages, downtime and technical issues on the


provider's end can render necessary data and resources inaccessible during critical
business events.
 Complicated contract terms. Organizations contracting cloud service
providers must actively negotiate contracts and service-level agreements (SLAs).
Failure to do so can result in the provider charging high prices for the return of
data, high prices for early service termination and other penalties.

 Vendor lock-in. High data transfer costs or use of proprietary cloud


technologies that are incompatible with competitor services can make it difficult
for customers to switch CSPs. To avoid vendor lock-in, companies should have a
cloud exit strategy before signing any contracts.

Types of cloud service providers


Customers are purchasing an increasing variety of services from cloud service providers. As
mentioned above, the three most common categories of cloud-based services are IaaS, SaaS
and PaaS.

 IaaS providers. In the IaaS model, the cloud service provider delivers
infrastructure components that would otherwise exist in an on-premises data
center. These components include servers, storage, networking and the
virtualization layer, which the IaaS provider hosts in its own data center. CSPs
may also complement their IaaS products with services such as monitoring,
automation, security, load balancing and storage resiliency.

 SaaS providers. SaaS vendors offer a variety of business technologies, such as


productivity suites, customer relationship management software, human resources
management software and data management software, all of which the SaaS
vendor hosts and provides over the internet. Many traditional software vendors
sell cloud-based versions of their on-premises software products. Some SaaS
vendors will contract a third-party cloud provider, while other vendors -- usually
larger companies -- will host their own cloud services.

 PaaS providers. The third type of cloud service provider, PaaS vendors, offers
cloud infrastructure and services that users can access to perform various
functions. PaaS products are commonly used in software development. In
comparison to an IaaS provider, PaaS providers will add more of the application
stack, such as operating systems and middleware, to the underlying infrastructure.
Cloud providers are also categorized by whether they deliver public cloud, private cloud
or hybrid cloud services.

Understand the similarities and differences between the public cloud, private cloud and hybrid
cloud models.

Common characteristics and services


In general, cloud service providers make their offerings available as an on-demand, self-
provisioning purchase. Customers can pay for the cloud-based services on a subscription
basis -- for example, under a monthly or quarterly billing structure.

Some cloud service providers differentiate themselves by tailoring their offerings to a vertical
market's requirements. Their cloud-based services might deliver industry-specific
functionality and tools or help users meet certain regulatory requirements. For instance,
several healthcare cloud products let healthcare providers store, maintain, optimize and back
up personal health information. Industry-specific cloud offerings encourage organizations to
use multiple cloud service providers.
Amazon and Microsoft lead the cloud infrastructure market.

Major cloud service providers and offerings


The cloud services market has a range of providers, but AWS, Microsoft and Google are the
established leaders in the public cloud market.

Amazon was the first major cloud provider, with the 2006 offering of Amazon Simple
Storage Service. Since then, the growing cloud market has seen rapid development of
Amazon's cloud platform, as well as Microsoft's Azure platform and Google Cloud. These
three vendors continue to jockey for the lead on a variety of cloud fronts. The vendors are
developing cloud-based services around emerging technologies, such as machine learning,
artificial intelligence, containerization and Kubernetes.

Other major cloud service providers in the market include the following:

 Adobe

 Akamai Technologies

 Alibaba Cloud

 Apple

 Box

 Citrix

 DigitalOcean
 IBM Cloud

 Joyent

 Oracle Cloud

 Rackspace Cloud

 Salesforce

How to choose a cloud service provider


Organizations evaluating potential cloud partners should consider the following factors:

 Cost. The cost is usually based on a per-use utility model, but all subscription
details and provider-specific variations must be reviewed. Cost is often considered
one of the main reasons to adopt a cloud service platform.

 Tools and features. An overall assessment of a provider's features, including data


management and security features, is important to ensure it meets current and
future IT needs.

 Physical location of the servers. Server location may be an important factor for
sensitive data, which must meet data storage regulations.

 Reliability. Reliability is crucial if customers' data must be accessible. For


example, a typical cloud storage provider's SLA specifies precise levels of service
-- such as 99.9% uptime -- and the recourse or compensation the user is entitled to
should the provider fail to deliver the service as described. However, it's important
to understand the fine print in SLAs, because some providers discount outages of
less than 10 minutes, which may be too long for some businesses.

 Security. Cloud security should top the list of cloud service provider
considerations. Organizations such as the Cloud Security Alliance offer
certification to cloud providers that meet its criteria.

 Business strategy. An organization's business requirements should align with the


offerings and technical capabilities of a potential cloud provider to meet both
current and long-term enterprise goals.
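The uptime figures mentioned under Reliability translate directly into a downtime budget. A quick calculation (assuming a 30-day month) shows why the difference between 99.9% and 99.99% matters:

```python
# How much outage does a given uptime percentage allow per month?

def allowed_downtime_minutes(uptime_pct, days=30):
    total_minutes = days * 24 * 60          # 43,200 minutes in 30 days
    return total_minutes * (1 - uptime_pct / 100)

# 99.9%  -> about 43.2 minutes of allowed downtime per month
# 99.99% -> about 4.3 minutes
```

This is also why SLA fine print matters: a provider that discounts outages shorter than 10 minutes can consume much of a 99.9% budget without owing any compensation.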
Cloud Service Provider Companies
Cloud service providers (CSPs) offer various services such as Software as a
Service, Platform as a Service, Infrastructure as a Service, network
services, business applications, mobile applications, and infrastructure in the
cloud. The cloud service providers host these services in data centers, and users can
access them through an internet connection.

There are the following Cloud Service Providers Companies -

1. Amazon Web Services (AWS)


AWS (Amazon Web Services) is a secure cloud service platform provided
by Amazon. It offers various services such as database storage, computing power,
content delivery, Relational Database, Simple Email, Simple Queue, and other
functionality to increase the organization's growth.

Features of AWS

AWS provides various powerful features for building scalable, cost-effective, enterprise
applications. Some important features of AWS is given below-

o AWS is scalable because it can scale computing resources up or down
according to the organization's demand.
o AWS is cost-effective as it works on a pay-as-you-go pricing model.
o It provides various flexible storage options.
o It offers various security services such as infrastructure security, data encryption,
monitoring & logging, identity & access control, penetration testing, and DDoS attacks.
o It can efficiently manage and secure Windows workloads.
2. Microsoft Azure
Microsoft Azure was formerly known as Windows Azure. It supports various operating
systems, databases, programming languages, frameworks that allow IT professionals
to easily build, deploy, and manage applications through a worldwide network. It also
allows users to create different groups for related utilities.

Features of Microsoft Azure


o Microsoft Azure provides scalable, flexible, and cost-effective services.
o It allows developers to quickly manage applications and websites.
o It manages each resource individually.
o Its IaaS infrastructure allows us to launch a general-purpose virtual machine in different
platforms such as Windows and Linux.
o It offers a Content Delivery Network (CDN) for delivering images, videos, audio,
and applications.

3. Google Cloud Platform


Google cloud platform is a product of Google. It consists of a set of physical devices,
such as computers, hard disk drives, and virtual machines. It also helps organizations
to simplify the migration process.
Features of Google Cloud
o Google cloud includes various big data services such as Google BigQuery, Google
CloudDataproc, Google CloudDatalab, and Google Cloud Pub/Sub.
o It provides various services related to networking, including Google Virtual Private
Cloud (VPC), Content Delivery Network, Google Cloud Load Balancing, Google Cloud
Interconnect, and Google Cloud DNS.
o It offers various scalable and high-performance computing services.
o GCP provides various serverless services such as Messaging, Data Warehouse,
Database, Compute, Storage, Data Processing, and Machine Learning (ML).
o It provides a free cloud shell environment with Boost Mode.

4. IBM Cloud Services


IBM Cloud is an open, fast, and reliable platform built with a suite of advanced data
and AI tools. It offers various services such as Infrastructure as a Service, Software
as a Service, and Platform as a Service. You can access its services, such as compute
power, cloud data & analytics, cloud use cases, and storage networking, using an
internet connection.
Feature of IBM Cloud
o IBM cloud improves operational efficiency.
o Its speed and agility improve the customer's satisfaction.
o It offers Infrastructure as a Service (IaaS), Platform as a Service (PaaS), as well as
Software as a Service (SaaS)
o It offers various cloud communications services to our IT environment.

5. VMware Cloud
VMware Cloud is a Software-Defined Data Center (SDDC) unified platform for the
hybrid cloud. It allows cloud providers to build agile, flexible, efficient, and robust
cloud services.

Features of VMware
o VMware Cloud works on a pay-as-you-go model with monthly subscription plans.
o It provides better customer satisfaction by protecting the user's data.
o It can easily create a new VMware Software-Defined Data Center (SDDC) cluster on
AWS cloud by utilizing a RESTful API.
o It provides flexible storage options. We can manage our application storage on a per-
application basis.
o It provides a dedicated high-performance network for managing the application traffic
and also supports multicast networking.
o It eliminates the time and cost complexity.

6. Oracle cloud
The Oracle Cloud platform is offered by Oracle Corporation. It combines Platform as
a Service, Infrastructure as a Service, Software as a Service, and Data as a Service with
cloud infrastructure. It is used to perform tasks such as moving applications to the
cloud, managing development environments in the cloud, and optimizing connection
performance.

Features of Oracle cloud


o Oracle Cloud provides various tools to build, integrate, monitor, and secure
applications.
o Its infrastructure supports various languages, including Java, Ruby, PHP, and Node.js.
o It integrates with Docker, VMware, and other DevOps tools.
o Oracle database not only provides unparalleled integration between IaaS, PaaS, and
SaaS, but also integrates with the on-premises platform to improve operational
efficiency.
o It maximizes the value of IT investments.
o It offers customizable Virtual Cloud Networks, firewalls, and IP addresses to securely
support private networks.

7. Red Hat
Red Hat virtualization is an open standard and desktop virtualization platform
produced by Red Hat. It is very popular for the Linux environment to provide various
infrastructure solutions for virtualized servers as well as technical workstations. Most
of the small and medium-sized organizations use Red Hat to run their organizations
smoothly. It offers higher density, better performance, agility, and security to the
resources. It also improves the organization's economy by providing cheaper and
easier management capabilities.

Features of Red Hat


o Red Hat provides secure, certified, and updated container images via the Red Hat
Container catalog.
o Red Hat cloud includes OpenShift, an app development platform that allows
developers to access, modernize, and deploy apps.
o It supports up to 16 virtual machines, each having up to 256GB of RAM.
o It offers better reliability, availability, and serviceability.
o It provides flexible storage capabilities, including very large SAN-based storage, better
management of memory allocations, high availability of LVMs, and support for
roll-back.
o In the desktop environment, it includes features such as a new on-screen keyboard
and GNOME Software, which allows us to install and update applications, as well as
extended device support.

8. DigitalOcean
DigitalOcean is a cloud provider that offers computing services to organizations. It
was founded in 2011 by Ben and Moisey Uretsky. It is one of the best-known cloud
providers for deploying and managing web applications.
Features of DigitalOcean
o It uses the KVM hypervisor to allocate physical resources to the virtual servers.
o It provides high-quality performance.
o It offers a digital community platform that helps answer queries and gather
feedback.
o It allows developers to use cloud servers to quickly create new virtual machines for
their projects.
o It offers one-click apps for Droplets. These apps include MySQL, Docker, MongoDB,
WordPress, phpMyAdmin, LAMP stack, Ghost, and Machine Learning.

9. Rackspace
Rackspace offers cloud computing services such as hosting web applications, Cloud
Backup, Cloud Block Storage, Databases, and Cloud Servers. Rackspace is designed to
make private and public cloud deployments easy to manage. Its data centers operate
in the USA, UK, Hong Kong, and Australia.
Features of Rackspace
o Rackspace provides various tools that help organizations to collaborate and
communicate more efficiently.
o We can access files stored on the Rackspace cloud drive anywhere, anytime, using
any device.
o It operates six data centers globally.
o It can manage both virtual servers and dedicated physical servers on the same network.
o It provides better performance at a lower cost.

10. Alibaba Cloud


Alibaba Cloud provides highly scalable cloud computing and data management
services. It offers various services, including Elastic Computing, Storage,
Networking, Security, Database Services, Application Services, Media Services, Cloud
Communication, and the Internet of Things.

Features of Alibaba Cloud


o Alibaba cloud offers a suite of global cloud computing services for both international
customers and Alibaba Group's e-commerce ecosystem.
o Its services are available on a pay-as-per-use basis.
o It operates 14 data centers globally.
o It offers scalable and reliable data storage.
Virtual Clusters & Resource Management

• What is a physical cluster?


• What is a virtual cluster?
• live migration of VMs, memory and file
migrations, and dynamic deployment of virtual
clusters.

GCC- Virtualization: Rajeev Wankar 122


Virtual Cluster

• As with traditional physical servers, virtual
machines (VMs) can also be clustered. A VM
cluster starts with two or more physical
servers; we'll call them Server A and Server B.
• In simple deployments, if Server A fails, its
workloads restart on Server B.



Virtual Cluster features

• HA (High Availability): virtual machines can be
restarted on another host if the host where the
virtual machine is running fails.
• DRS (Distributed Resource Scheduler): virtual
machines can be load balanced so that no host
in the cluster is overloaded or left mostly idle.
• Live migration: moving virtual machines from
one host to another while they are running.



Virtual Clusters & Resource Management

• In a traditional VM initialization, the
administrator manually writes the configuration
information or specifies the configuration sources.
• With many VMs, an inefficient configuration
often causes problems such as overloading or
underutilization.



Virtual Clusters & Resource Management

• Amazon’s EC2 provides elastic computing


power in a cloud. EC2 permits customers to
create VMs and to manage user accounts over
the time of their use (resizable capacity).
• XenServer and VMware ESXi Server support
a bridging mode which allows all domains to
appear on the network as individual hosts.
• With this mode, VMs can communicate with
one another freely through their virtual network
interface cards and configure the network
automatically.
Virtual Clusters

• Virtual clusters are built with VMs installed at


distributed servers from one or more physical
clusters.

• The VMs in a virtual cluster are


interconnected logically by a virtual
network across several physical networks



Virtual Clusters

[Figure: VMs hosted on Physical Clusters 1 and 2 are grouped into Virtual
Clusters 1–4, each spanning physical machine boundaries.
Courtesy of Fan Zhang, Tsinghua University]



Provisioning of VMs in Virtual Clusters

• The provisioning of VMs to a virtual cluster is


done dynamically to have some interesting
properties:



Provisioning of VMs in Virtual Clusters Conti…

1. The virtual cluster nodes can be either


physical or virtual machines. Multiple VMs
running with different OSes can be deployed
on the same physical node.
2. A VM runs with a guest OS, which is often
different from the host OS.
3. The purpose of using VMs is to consolidate
multiple functionalities on the same server.
This will greatly enhance server utilization and
application flexibility.



Provisioning of VMs in Virtual Clusters Conti…

4. VMs can be colonized (replicated) in multiple


servers for the purpose of promoting
distributed parallelism, fault tolerance, and
disaster recovery.
5. The size of a virtual cluster can grow or shrink
dynamically.
6. The failure of any physical nodes may disable
some VMs installed on the failing nodes. But
the failure of VMs will not pull down the host
system.



Virtual Clusters Management

• It is necessary to effectively manage VMs


running on virtual clusters and consequently
build a high-performance virtualized computing
environment
• This involves
– virtual cluster deployment,
– monitoring and management over large-scale
clusters, resource scheduling, load balancing,
– server consolidation, fault tolerance, and other
techniques



Virtual cluster based on application partitioning
[Figure: four virtual cluster nodes, one per application (A–D), each running
several VMs on top of a VMM and connected by a System Area Network]
Virtual Clusters Management Conti…

• Since a large number of VM images might be
present, the most important consideration is
how to store those images in the system
efficiently.
• In addition, there are installations common to
most users or applications, such as the OS or
user-level programming libraries.
• These software packages can be preinstalled
as templates (called template VMs).



Deployment

• There are four steps to deploy a group of VMs


onto a target cluster:

– preparing the disk image,


– configuring the VMs,
– choosing the destination nodes, and
– executing the VM deployment command on
every host.



Deployment Conti…

• Many systems use templates to simplify the


disk image preparation process.
• A template is a disk image that includes a
preinstalled operating system with or without
certain application software.



Deployment Conti…

• Users choose a proper template according to


their requirements and make a duplicate of it
as their own disk image.
• Templates could implement the COW (Copy on
Write) format. A new COW backup file is very
small and easy to create and transfer.
• Therefore, it definitely reduces disk space
consumption.



Copy-on-write

• An optimization strategy in which, if multiple
callers ask for resources that are initially
indistinguishable, they are given pointers to the
same resource.
• This sharing can be maintained until a caller
tries to modify its "copy" of the resource, at
which point a true private copy is created to
prevent the changes from becoming visible to
everyone else.




• All of this happens transparently to the callers.


• The primary advantage is that if a caller never
makes any modifications, no private copy need
ever be created.
• All changes are recorded in a separate file
preserving the original image. Several COW
files can point to the same image to test
several configurations simultaneously without
jeopardizing the basic system.




• Unlike the snapshot, copy-on-write uses
multiple files and allows multiple instances of
the base machine to run simultaneously.
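The sharing-until-first-write behavior described above can be sketched in a few lines of Python. This is a conceptual illustration with hypothetical class and block names, not a real hypervisor or image-format interface:

```python
# Conceptual sketch of copy-on-write image sharing (illustrative only).
class CowImage:
    def __init__(self, base):
        self.base = base          # shared base blocks, never modified
        self.delta = {}           # this clone's overlay of changed blocks

    def read(self, block):
        # Prefer the private copy if one exists, else fall back to the base.
        return self.delta.get(block, self.base[block])

    def write(self, block, data):
        # A true private copy is created only on the first write.
        self.delta[block] = data

base = ["boot", "os", "libs"]     # one template image shared by every clone
vm1, vm2 = CowImage(base), CowImage(base)
vm1.write(2, "libs-patched")
print(vm1.read(2))                # vm1 sees its private copy: libs-patched
print(vm2.read(2))                # vm2 still reads the untouched base: libs
```

Several clones can point to the same base image, mirroring how multiple COW files can reference one template without jeopardizing the basic system.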



Deployment Conti…

• In addition, VM deployment time is much


shorter than that of copying the whole raw
image file.
• VM is configured with a name, disk image,
network setting, and allocated CPU and
memory.
• One needs to record each VM configuration
into a file. However, this method is inefficient
when managing a large group of VMs.



Deployment Conti…

• VMs with the same configurations could use


pre-edited profiles to simplify the process. In
this scenario, the system configures the VMs
according to the chosen profile.
• Most configuration items use the same
settings, while other items, such as UUID, VM
name, and IP address, are assigned with
automatically calculated values
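Profile-based configuration can be sketched as follows; the profile fields, naming scheme, and address range are illustrative assumptions, not part of any particular platform:

```python
import uuid

# A pre-edited profile holds the settings shared by every VM in the group.
PROFILE = {"cpu": 2, "memory_mb": 2048, "disk_image": "template.qcow2"}

def configure_vm(index, profile=PROFILE):
    vm = dict(profile)                      # items common to all VMs
    vm["uuid"] = str(uuid.uuid4())          # automatically calculated values
    vm["name"] = "vm-{:03d}".format(index)
    vm["ip"] = "10.0.0.{}".format(10 + index)
    return vm

cluster = [configure_vm(i) for i in range(4)]
print(cluster[3]["name"], cluster[3]["ip"])   # vm-003 10.0.0.13
```

Only the per-VM items (UUID, name, IP) differ; everything else comes from the chosen profile, which is what makes configuring a large group of VMs tractable.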



Live VM Migration Steps and
Performance Effects

• When a VM fails, its role could be taken over by
another VM on a different node, as long as
they both run the same guest OS.
• A VM must stop playing its role if its residing
host node fails.
• This problem can be mitigated with VM live
migration.
• The migration copies the VM state file from the
storage area to the host machine.




• There are four ways to manage a virtual cluster.
– The first way is to use a guest-based manager,
by which the cluster manager resides on a guest
system. In this case, multiple VMs form a virtual
cluster.
• Ex. openMosix is an open source Linux cluster
running different guest systems on top of the Xen
hypervisor.




– The second way is to build a cluster manager
on the host systems. The host-based manager
supervises the guest systems and can restart
a guest system on another physical machine.
• Ex. A good example is the VMware HA system, which
can restart a guest system after failure.




– The third way is to use an independent cluster
manager on both the host and guest systems.
This makes infrastructure management more
complex.




– Finally, one can use an integrated cluster
manager on the guest and host systems. This
means the manager must be designed to
distinguish between virtualized resources and
physical resources.
– Various cluster management schemes can be
greatly enhanced when VM live migration is
enabled with minimal overhead.




• Virtual clustering plays a key role in cloud


computing.
• VMs can be live-migrated from one physical
machine to another in case of failure.
• When a VM runs a live service, it is necessary
to make a trade-off to ensure that the migration
occurs in a manner that minimizes all three
metrics listed below.




• The motivation is to design a live VM migration


scheme with
– negligible downtime,
– the lowest network bandwidth consumption
possible, and
– a reasonable total migration time



• A VM can be in one of the following four states.
– An inactive state is defined by the virtualization
platform, under which the VM is not enabled.
– An active state refers to a VM that has been
instantiated at the virtualization platform to perform
a real task.
– A paused state corresponds to a VM that has been
instantiated but disabled to process a task or
paused in a waiting state.
– A VM enters the suspended state if its machine file
and virtual resources are stored back to the disk.
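The four states can be captured as an enum; the transition table below is an illustrative assumption inferred from the definitions, not part of the original text:

```python
from enum import Enum

class VMState(Enum):
    INACTIVE = "inactive"    # defined at the platform but not enabled
    ACTIVE = "active"        # instantiated and performing a real task
    PAUSED = "paused"        # instantiated but disabled or waiting
    SUSPENDED = "suspended"  # machine file and resources stored to disk

# Plausible transitions between the states (assumed for illustration).
TRANSITIONS = {
    VMState.INACTIVE:  {VMState.ACTIVE},
    VMState.ACTIVE:    {VMState.PAUSED, VMState.SUSPENDED, VMState.INACTIVE},
    VMState.PAUSED:    {VMState.ACTIVE},
    VMState.SUSPENDED: {VMState.ACTIVE},
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]
```

For example, a suspended VM must be resumed to the active state before it can be paused again.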




• Live Migration of a VM consists of the following
six steps:
– Steps 0 and 1: Start migration. This step makes
preparations for the migration, including
determining the migrating VM and the
destination host.
• Although users could manually make a VM migrate to
an appointed host, in most circumstances, the
migration is automatically started by strategies such
as load balancing and server consolidation.




– Step 2: Transfer memory.
– Since the whole execution state of the VM is
stored in memory, sending the VM’s memory to
the destination node ensures continuity of the
service provided by the VM.




– All of the memory data is transferred in the first
round, and then the migration controller
recopies the memory data which is changed in
the last round.
– These steps keep iterating until the dirty portion
of the memory is small enough to handle the
final copy.
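The iterative pre-copy described above can be sketched as follows. The page sets here are simulated; a real hypervisor tracks dirty pages in hardware, and the names are illustrative:

```python
def live_migrate(all_pages, get_dirty_pages, threshold=2, max_rounds=10):
    """Simulate iterative pre-copy; returns total pages transferred."""
    transferred = 0
    to_send = set(all_pages)            # round 1: copy all of memory
    for _ in range(max_rounds):
        transferred += len(to_send)
        to_send = get_dirty_pages()     # pages dirtied during the last round
        if len(to_send) <= threshold:
            break                       # small enough: suspend VM, final copy
    return transferred + len(to_send)   # final copy happens during downtime

# Simulated dirty sets shrinking over successive rounds:
rounds = iter([{1, 2, 3, 4}, {1, 2}])
total = live_migrate(range(100), lambda: next(rounds))
print(total)   # 100 + 4 + 2 = 106 pages transferred
```

The `threshold` controls the trade-off: a smaller final dirty set means shorter downtime but more pre-copy rounds and bandwidth.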




– Step 3: Suspend the VM and copy the last
portion of the data.
– The migrating VM’s execution is suspended
when the last round’s memory data is
transferred. Other non-memory data such as
CPU and network states should be sent as well.




– Here the VM is stopped and its applications will
no longer run. This “service unavailable” time is
called the “downtime” of migration, which should
be as short as possible so that it can be
negligible to users.




– Steps 4 and 5: Commit and activate the new
host.
– After all the needed data is copied, on the
destination host, the VM reloads the states and
recovers the execution of programs in it, and
the service provided by this VM continues.




– Then the network connection is redirected to the
new VM and the dependency to the source host
is cleared.
– The whole migration process finishes by
removing the original VM from the source host.



[Figure: live migration process of a VM from one host to another]



Unit V- Cloud Security
Cloud Security Mechanisms

1 Encryption
2 Hashing
3 Digital Signature

4 Public Key Infrastructure (PKI)


5 Identity and Access Management (IAM)
6 Single Sign-On (SSO)
7 Cloud-Based Security Groups
8 Hardened Virtual Server Images

This chapter establishes a set of fundamental cloud security mechanisms, several of which can
be used to counter the security threats described in Chapter 6.

1. Encryption
Data, by default, is coded in a readable format known as plaintext. When transmitted over a
network, plaintext is vulnerable to unauthorized and potentially malicious access.
The encryption mechanism is a digital coding system dedicated to preserving the
confidentiality and integrity of data. It is used for encoding plaintext data into a protected and
unreadable format.
Encryption technology commonly relies on a standardized algorithm called a cipher to
transform original plaintext data into encrypted data, referred to as ciphertext. Access to
ciphertext does not divulge the original plaintext data, apart from some forms of metadata,
such as message length and creation date. When encryption is applied to plaintext data, the
data is paired with a string of characters called an encryption key, a secret message that is
established by and shared among authorized parties. The encryption key is used to decrypt
the ciphertext back into its original plaintext format.
The encryption mechanism can help counter the traffic eavesdropping, malicious
intermediary, insufficient authorization, and overlapping trust boundaries security threats. For
example, malicious service agents that attempt traffic eavesdropping are unable to decrypt
messages in transit if they do not have the encryption key (Figure 1).
Figure 1 A malicious service agent is unable to retrieve data from an encrypted message. The
retrieval attempt may furthermore be revealed to the cloud service consumer. (Note the use
of the lock symbol to indicate that a security mechanism has been applied to the message
contents.)

There are two common forms of encryption known as symmetric encryption and asymmetric
encryption.

Symmetric Encryption
Symmetric encryption uses the same key for both encryption and decryption, both of which
are performed by authorized parties that use the one shared key. Also known as secret key
cryptography, messages that are encrypted with a specific key can be decrypted by only that
same key. Parties that rightfully decrypt the data are provided with evidence that the original
encryption was performed by parties that rightfully possess the key. A basic authentication
check is always performed, because only authorized parties that own the key can create
messages. This maintains and verifies data confidentiality.
Note that symmetrical encryption does not have the characteristic of non-repudiation, since
determining exactly which party performed the message encryption or decryption is not
possible if more than one party is in possession of the key.

Asymmetric Encryption
Asymmetric encryption relies on the use of two different keys, namely a private key and a
public key. With asymmetric encryption (which is also referred to as public key cryptography),
the private key is known only to its owner while the public key is commonly available. A
document that was encrypted with a private key can only be correctly decrypted with the
corresponding public key. Conversely, a document that was encrypted with a public key can
be decrypted only using its private key counterpart. As a result of two different keys being
used instead of just the one, asymmetric encryption is almost always computationally slower
than symmetric encryption.
The level of security that is achieved is dictated by whether a private key or public key was
used to encrypt the plaintext data. As every asymmetrically encrypted message has its own
private-public key pair, messages that were encrypted with a private key can be correctly
decrypted by any party with the corresponding public key. This method of encryption does
not offer any confidentiality protection, even though successful decryption proves that the
text was encrypted by the rightful private key owner. Private key encryption therefore offers
integrity protection in addition to authenticity and non-repudiation. A message that was
encrypted with a public key can only be decrypted by the rightful private key owner, which
provides confidentiality protection. However, any party that has the public key can generate
the ciphertext, meaning this method provides neither message integrity nor authenticity
protection due to the communal nature of the public key.
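The public/private key relationship can be demonstrated with textbook RSA on deliberately tiny numbers. This is an insecure toy (real deployments use 2048-bit keys with padding); the parameters are illustrative:

```python
# Toy RSA key pair from two small primes (insecure; illustration only).
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (requires Python 3.8+)

message = 65                       # toy "plaintext": an integer smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key holder can decrypt
print(recovered)                   # 65
```

Swapping the roles of `e` and `d` gives the other mode described above: encrypting with the private key lets any public-key holder decrypt, proving authorship rather than providing confidentiality.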

NOTE
The encryption mechanism, when used to secure Web-based data transmissions, is most
commonly applied via HTTPS, which refers to the use of SSL/TLS as an underlying encryption
protocol for HTTP. TLS (transport layer security) is the successor to the SSL (secure sockets
layer) technology. Because asymmetric encryption is usually more time-consuming than
symmetric encryption, TLS uses the former only for its key exchange method. TLS systems then
switch to symmetric encryption once the keys have been exchanged.
Most TLS implementations primarily support RSA as the chief asymmetrical encryption cipher,
while ciphers such as RC4, Triple-DES, and AES are supported for symmetrical encryption.

Case Study Example


Innovartus has recently learned that users who access their User Registration Portal via public
Wi-Fi hot zones and unsecured LANs may be transmitting personal user profile details via
plaintext. Innovartus immediately remedies this vulnerability by applying the encryption
mechanism to its Web portal via the use of HTTPS (Figure 2).
Figure 2 The encryption mechanism is added to the communication channel between outside
users and Innovartus’ User Registration Portal. This safeguards message confidentiality via the
use of HTTPS.

2. Hashing
The hashing mechanism is used when a one-way, non-reversible form of data protection is
required. Once hashing has been applied to a message, it is locked and no key is provided for
the message to be unlocked. A common application of this mechanism is the storage of
passwords.
Hashing technology can be used to derive a hashing code or message digest from a message,
which is often of a fixed length and smaller than the original message. The message sender
can then utilize the hashing mechanism to attach the message digest to the message. The
recipient applies the same hash function to the message to verify that the produced message
digest is identical to the one that accompanied the message. Any alteration to the original
data results in an entirely different message digest and clearly indicates that tampering has
occurred.
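Python's standard `hashlib` module can illustrate the digest comparison described above (the message content is, of course, made up):

```python
import hashlib

message = b"quarterly figures: 1.2M"
digest = hashlib.sha256(message).hexdigest()   # sender attaches this digest

# The recipient recomputes the digest over what was received:
received = b"quarterly figures: 1.2M"
print(hashlib.sha256(received).hexdigest() == digest)   # True: message intact

# Any alteration yields an entirely different digest:
tampered = b"quarterly figures: 9.2M"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False: tampering detected
```

Note that hashing alone proves integrity only if the digest itself is delivered over a trusted channel; otherwise an attacker could alter both message and digest, which is what digital signatures (next section) prevent.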
In addition to its utilization for protecting stored data, the cloud threats that can be mitigated
by the hashing mechanism include malicious intermediary and insufficient authorization. An
example of the former is illustrated in Figure 3.

Figure 3 A hashing function is applied to protect the integrity of a message that is intercepted
and altered by a malicious service agent, before it is forwarded. The firewall can be configured
to determine that the message has been altered, thereby enabling it to reject the message
before it can proceed to the cloud service.

Case Study Example


A subset of the applications that have been selected to be ported to ATN’s PaaS platform
allows users to access and alter highly sensitive corporate data. This information is being
hosted on a cloud to enable access by trusted partners who may use it for critical calculation
and assessment purposes. Concerned that the data could be tampered with, ATN decides to
apply the hashing mechanism as a means of protecting and preserving the data’s integrity.
ATN cloud resource administrators work with the cloud provider to incorporate a digest-
generating procedure with each application version that is deployed in the cloud. Current
values are logged to a secure on-premise database and the procedure is regularly repeated
with the results analyzed. Figure 4 illustrates how ATN implements hashing to determine
whether any non-authorized actions have been performed against the ported applications.
Figure 4 A hashing procedure is invoked when the PaaS environment is accessed (1). The
applications that were ported to this environment are checked (2) and their message digests
are calculated (3). The message digests are stored in a secure on-premise database (4), and a
notification is issued if any of their values are not identical to the ones in storage.

3. Digital Signature
The digital signature mechanism is a means of providing data authenticity and integrity
through authentication and non-repudiation. A message is assigned a digital signature prior
to transmission, which is then rendered invalid if the message experiences any subsequent,
unauthorized modifications. A digital signature provides evidence that the message received
is the same as the one created by its rightful sender.

Both hashing and asymmetrical encryption are involved in the creation of a digital signature,
which essentially exists as a message digest that was encrypted by a private key and appended
to the original message. The recipient verifies the signature validity and uses the
corresponding public key to decrypt the digital signature, which produces the message digest.
The hashing mechanism can also be applied to the original message to produce this message
digest. Identical results from the two different processes indicate that the message
maintained its integrity.
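Combining the two mechanisms, the hash-then-sign flow can be sketched with tiny toy RSA parameters (insecure and for illustration only; real signatures use padded, full-size RSA or ECDSA, and the digest reduction below is purely a toy device to fit the small modulus):

```python
import hashlib

# Toy RSA parameters (insecure, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def toy_digest(msg):
    # Reduce the SHA-256 digest modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

msg = b"approve invoice 4711"
signature = pow(toy_digest(msg), d, n)  # sender signs digest with the private key

# Recipient decrypts the signature with the public key and recomputes the digest:
print(pow(signature, e, n) == toy_digest(msg))   # True: integrity and authenticity
```

Identical results from the two processes (decrypting the signature, and hashing the received message) show the message is unaltered and was signed by the private key holder.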
The digital signature mechanism helps mitigate the malicious intermediary, insufficient
authorization, and overlapping trust boundaries security threats (Figure 5).
Figure 5 Cloud Service Consumer B sends a message that was digitally signed but was altered
by trusted attacker Cloud Service Consumer A. Virtual Server B is configured to verify digital
signatures before processing incoming messages even if they are within its trust boundary.
The message is revealed as illegitimate due to its invalid digital signature, and is therefore
rejected by Virtual Server B.

Case Study Example


With DTGOV’s client portfolio expanding to include public-sector organizations, many of its
cloud computing policies have become unsuitable and require modification. Considering that
public-sector organizations frequently handle strategic information, security safeguards need
to be established to protect data manipulation and to establish a means of auditing activities
that may impact government operations.
DTGOV proceeds to implement the digital signature mechanism specifically to protect its Web-
based management environment (Figure 6). Virtual server self-provisioning inside the IaaS
environment and the tracking functionality of real-time SLA and billing are all performed via
Web portals. As a result, user error or malicious actions could result in legal and financial
consequences.
Figure 6 Whenever a cloud consumer performs a management action that is related to IT
resources provisioned by DTGOV, the cloud service consumer program must include a digital
signature in the message request to prove the legitimacy of its user.
Digital signatures provide DTGOV with the guarantee that every action performed is linked to
its legitimate originator. Unauthorized access is expected to become highly improbable, since
digital signatures are only accepted if the encryption key is identical to the secret key held by
the rightful owner. Users will not have grounds to deny attempts at message adulteration
because the digital signatures will confirm message integrity.

4. Public Key Infrastructure (PKI)


A common approach for managing the issuance of asymmetric keys is based on the public key
infrastructure (PKI) mechanism, which exists as a system of protocols, data formats, rules, and
practices that enable large-scale systems to securely use public key cryptography. This system
is used to associate public keys with their corresponding key owners (known as public key
identification) while enabling the verification of key validity. PKIs rely on the use of digital
certificates, which are digitally signed data structures that bind public keys to certificate owner
identities, as well as to related information, such as validity periods. Digital certificates are
usually digitally signed by a third-party certificate authority (CA), as illustrated in Figure 7.
Figure 7 The common steps involved during the generation of certificates by a certificate
authority.
Other methods of generating digital certificates can be employed, even though the majority of
digital certificates are issued by only a handful of trusted CAs like VeriSign and Comodo. Larger
organizations, such as Microsoft, can act as their own CA and issue certificates to their clients
and the public, since even individual users can generate certificates as long as they have the
appropriate software tools.

Building up an acceptable level of trust for a CA is time-intensive but necessary. Rigorous


security measures, substantial infrastructure investments, and stringent operational
processes all contribute to establishing the credibility of a CA. The higher its level of trust and
reliability, the more esteemed and reputable its certificates. The PKI is a dependable method
for implementing asymmetric encryption, managing cloud consumer and cloud provider
identity information, and helping to defend against the malicious intermediary and insufficient
authorization threats.
The PKI mechanism is primarily used to counter the insufficient authorization threat.

Case Study Example


DTGOV requires that its clients use digital signatures to access its Web-based management
environment. These are to be generated from public keys that have been certified by a
recognized certificate authority (Figure 8).
Figure 8 An external cloud resource administrator uses a digital certificate to access the Web-
based management environment. DTGOV's digital certificate, which was signed by a trusted
CA, is used in the HTTPS connection.

5. Identity and Access Management (IAM)


The identity and access management (IAM) mechanism encompasses the components and
policies necessary to control and track user identities and access privileges for IT resources,
environments, and systems.

Specifically, IAM mechanisms exist as systems comprised of four main components:


• Authentication – Username and password combinations remain the most common forms of
user authentication credentials managed by the IAM system, which also can support digital
signatures, digital certificates, biometric hardware (fingerprint readers), specialized software
(such as voice analysis programs), and locking user accounts to registered IP or MAC
addresses.
• Authorization – The authorization component defines the correct granularity for access
controls and oversees the relationships between identities, access control rights, and IT
resource availability.
• User Management – Related to the administrative capabilities of the system, the user
management program is responsible for creating new user identities and access groups,
resetting passwords, defining password policies, and managing privileges.
• Credential Management – The credential management system establishes identities and
access control rules for defined user accounts, which mitigates the threat of insufficient
authorization.
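A minimal sketch of the authorization component might map identities to access-control rights like this; the users, groups, and resource names are illustrative, not a real IAM system's API:

```python
# Illustrative access-control data: identity -> group -> per-resource rights.
USER_GROUPS = {"alice": "dev-group", "bob": "admin-group"}
ACCESS_POLICY = {
    "dev-group":   {"vm-pool-a": {"read"},
                    "db-staging": {"read", "write"}},
    "admin-group": {"vm-pool-a": {"read", "write"},
                    "db-staging": {"read", "write"}},
}

def is_authorized(user, resource, action):
    # Unknown users or resources yield no rights at all (deny by default).
    rights = ACCESS_POLICY.get(USER_GROUPS.get(user), {})
    return action in rights.get(resource, set())

print(is_authorized("alice", "db-staging", "write"))  # True
print(is_authorized("alice", "vm-pool-a", "write"))   # False
```

Defining rights at the group level rather than per user is the "correct granularity" decision the authorization component is responsible for.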
Although its objectives are similar to those of the PKI mechanism, the IAM mechanism’s scope
of implementation is distinct because its structure encompasses access controls and policies
in addition to assigning specific levels of user privileges.
The IAM mechanism is primarily used to counter the insufficient authorization, denial of
service, overlapping trust boundaries, virtualization attack, and containerization attack
threats.

Case Study Example


As a result of several past corporate acquisitions, ATN’s legacy landscape has become complex
and highly heterogeneous. Maintenance costs have increased due to redundant and similar
applications and databases running concurrently. Legacy repositories of user credentials are
just as assorted.
Now that ATN has ported several applications to a PaaS environment, new identities are
created and configured in order to grant users access. The CloudEnhance consultants suggest
that ATN capitalize on this opportunity by starting a pilot IAM system initiative, especially since
a new group of cloud-based identities is needed.
ATN agrees, and a specialized IAM system is designed specifically to regulate the security
boundaries within their new PaaS environment. With this system, the identities assigned to
cloud-based IT resources differ from corresponding on-premise identities, which were
originally defined according to ATN’s internal security policies.
6. Single Sign-On (SSO)
Propagating the authentication and authorization information for a cloud service consumer
across multiple cloud services can be a challenge, especially if numerous cloud services or
cloud-based IT resources need to be invoked as part of the same overall runtime activity.
The single sign-on (SSO) mechanism enables one cloud service consumer to be authenticated
by a security broker, which establishes a security context that is persisted while the cloud
service consumer accesses other cloud services or cloud-based IT resources. Otherwise, the
cloud service consumer would need to re-authenticate itself with every subsequent request.

The SSO mechanism essentially enables mutually independent cloud services and IT resources
to generate and circulate runtime authentication and authorization credentials. The
credentials initially provided by the cloud service consumer remain valid for the duration of a
session, while its security context information is shared (Figure 9). The SSO mechanism’s
security broker is especially useful when a cloud service consumer needs to access cloud
services residing on different clouds (Figure 10).
Figure 9 A cloud service consumer provides the security broker with login credentials (1). The
security broker responds with an authentication token (message with small lock symbol) upon
successful authentication, which contains cloud service consumer identity information (2) that
is used to automatically authenticate the cloud service consumer across Cloud Services A,
and C (3).
Figure 10 The credentials received by the security broker are propagated to ready-made
environments across two different clouds. The security broker is responsible for selecting the
appropriate security procedure with which to contact each cloud.
The SSO mechanism primarily enhances the usability of cloud-based environments for access
and management of distributed IT resources and solutions.
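As a rough sketch of this idea (not the actual mechanism's protocol), the toy model below shows a security broker verifying credentials once and issuing a signed token that mutually independent services can validate without re-authenticating the consumer. The shared key, user store, and token format are all invented for illustration:

```python
import base64
import hashlib
import hmac
import json
import time

# Invented shared secret between the security broker and each cloud service.
BROKER_KEY = b"demo-secret"

def broker_authenticate(user, password, directory):
    """Security broker: verify credentials once, then issue a signed token."""
    if directory.get(user) != password:
        return None  # authentication failed: no security context is created
    claims = json.dumps({"sub": user, "iat": int(time.time())}).encode()
    sig = hmac.new(BROKER_KEY, claims, hashlib.sha256).digest()
    return base64.b64encode(claims) + b"." + base64.b64encode(sig)

def service_accepts(token):
    """Any cloud service: trust the broker's signature instead of re-authenticating."""
    claims_b64, sig_b64 = token.split(b".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(BROKER_KEY, claims, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(sig_b64))

# One login, then the same token is honored by Cloud Services A, B, and C.
token = broker_authenticate("admin", "pw123", {"admin": "pw123"})
print(service_accepts(token))  # True
```

Real SSO systems (e.g., SAML- or OAuth-based) add expiry, audiences, and asymmetric signatures, but the pattern is the same: the credentials are exchanged once for a security context that persists across services.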
Case Study Example
The migration of applications to ATN’s new PaaS platform was successful, but also raised a
number of new concerns pertaining to the responsiveness and availability of PaaS-hosted IT
resources. ATN intends to move more applications to a PaaS platform, but decides to do so by
establishing a second PaaS environment with a different cloud provider. This will allow them
to compare cloud providers during a three-month assessment period.
To accommodate this distributed cloud architecture, the SSO mechanism is used to establish
a security broker capable of propagating login credentials across both clouds (Figure 10). This
enables a single cloud resource administrator to access IT resources on both PaaS
environments without having to log in separately to each one.
7. Cloud-Based Security Groups
Similar to constructing dykes and levees that separate land from water, data protection is
increased by placing barriers between IT resources. Cloud resource segmentation is a process
by which separate physical and virtual IT environments are created for different users and
groups. For example, an organization’s WAN can be partitioned according to individual
network security requirements. One network can be established with a resilient firewall for
external Internet access, while a second is deployed without a firewall because its users are
internal and unable to access the Internet.
Resource segmentation is used to enable virtualization by allocating a variety of physical IT
resources to virtual machines. It needs to be optimized for public cloud environments, since
organizational trust boundaries from different cloud consumers overlap when sharing the
same underlying physical IT resources.
The cloud-based resource segmentation process creates cloud-based security
group mechanisms that are determined through security policies. Networks are segmented
into logical cloud-based security groups that form logical network perimeters. Each cloud-
based IT resource is assigned to at least one logical cloud-based security group. Each logical
cloud-based security group is assigned specific rules that govern the communication between
the security groups.
Multiple virtual servers running on the same physical server can become members of different
logical cloud-based security groups (Figure 11). Virtual servers can further be separated into
public-private groups, development-production groups, or any other designation configured
by the cloud resource administrator.
Figure 11 Cloud-Based Security Group A encompasses Virtual Servers A and D and is assigned
to Cloud Consumer A. Cloud-Based Security Group B is comprised of Virtual Servers B, C, and
E and is assigned to Cloud Consumer B. If Cloud Service Consumer A’s credentials are
compromised, the attacker would only be able to access and damage the virtual servers in
Cloud-Based Security Group A, thereby protecting Virtual Servers B, C, and E.
Cloud-based security groups delineate areas where different security measures can be
applied. Properly implemented cloud-based security groups help limit unauthorized access to
IT resources in the event of a security breach. This mechanism can be used to help counter
the denial of service, insufficient authorization, overlapping trust boundaries, virtualization
attack and container attack threats, and is closely related to the logical network perimeter
mechanism.
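The rule-governed communication between groups can be pictured with a small sketch that mirrors the layout of Figure 11. The group names, member servers, and permitted ports below are invented for the example:

```python
# Hypothetical rule set: each logical cloud-based security group lists the
# virtual servers it contains and the inbound ports it permits from outside.
SECURITY_GROUPS = {
    "group_a": {"members": {"vs_a", "vs_d"}, "allow_inbound": {"443"}},
    "group_b": {"members": {"vs_b", "vs_c", "vs_e"}, "allow_inbound": {"22", "443"}},
}

def is_allowed(src_server, dst_server, port):
    """Traffic crossing a group boundary must match the destination group's rules."""
    src = next(g for g in SECURITY_GROUPS.values() if src_server in g["members"])
    dst = next(g for g in SECURITY_GROUPS.values() if dst_server in g["members"])
    if src is dst:                       # same logical network perimeter
        return True
    return port in dst["allow_inbound"]  # cross-group traffic is rule-governed

print(is_allowed("vs_a", "vs_d", "8080"))  # True  (same group)
print(is_allowed("vs_a", "vs_b", "8080"))  # False (blocked between groups)
```

This is the same containment property described above: a compromise inside one group cannot reach servers in another group except over explicitly allowed ports.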
Case Study Example
Now that DTGOV has itself become a cloud provider, security concerns are raised pertaining
to its hosting of public-sector client data. A team of cloud security specialists is brought in to
define cloud-based security groups together with the digital signature and PKI mechanisms.
Security policies are classified into levels of resource segmentation before being integrated
into DTGOV’s Web portal management environment. Consistent with the security
requirements guaranteed by its SLAs, DTGOV maps IT resource allocation to the appropriate
logical cloud-based security group (Figure 12), which has its own security policy that clearly
stipulates its IT resource isolation and control levels.
Figure 12 When an external cloud resource administrator accesses the Web portal to allocate
a virtual server, the requested security credentials are assessed and mapped to an internal
security policy that assigns a corresponding cloud-based security group to the new virtual
server.
DTGOV informs its clients about the availability of these new security policies. Cloud
consumers can optionally choose to utilize them, though doing so results in increased fees.
8. Hardened Virtual Server Images
As previously discussed, a virtual server is created from a template configuration called a
virtual server image (or virtual machine image). Hardening is the process of stripping
unnecessary software from a system to limit potential vulnerabilities that can be exploited by
attackers. Removing redundant programs, closing unnecessary server ports, and disabling
unused services, internal root accounts, and guest access are all examples of hardening.
A hardened virtual server image is a template for virtual service instance creation that has
been subjected to a hardening process (Figure 13). This generally results in a virtual server
template that is significantly more secure than the original standard image.
Figure 13 A cloud provider applies its security policies to harden its standard virtual server
images. The hardened image template is saved in the VM images repository as part of a
resource management system.
Hardened virtual server images help counter the denial of service, insufficient authorization,
and overlapping trust boundaries threats.
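The hardening steps above (removing redundant programs, closing unused ports, disabling guest access) can be pictured as a filter applied to an image description before the template is saved. Everything here, from package names to the "required" sets, is invented for illustration:

```python
# Hypothetical description of a standard virtual server image.
image = {
    "packages": ["sshd", "httpd", "telnetd", "ftpd"],
    "open_ports": [22, 80, 23, 21],
    "accounts": ["root", "guest", "appuser"],
}

# Assumption for this example: only these are needed by the workload.
REQUIRED_PACKAGES = {"sshd", "httpd"}
REQUIRED_PORTS = {22, 80}

def harden(img):
    """Return a hardened copy: redundant software removed, unused server
    ports closed, and guest access disabled."""
    return {
        "packages": [p for p in img["packages"] if p in REQUIRED_PACKAGES],
        "open_ports": [p for p in img["open_ports"] if p in REQUIRED_PORTS],
        "accounts": [a for a in img["accounts"] if a != "guest"],
    }

hardened = harden(image)
print(hardened["open_ports"])  # [22, 80]
```

In practice hardening is done with OS tooling and security baselines rather than a script like this, but the principle is identical: the hardened template exposes strictly less attack surface than the standard image.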
Case Study Example
One of the security features made available to cloud consumers as part of DTGOV's adoption of
cloud-based security groups is an option to have some or all virtual servers within a given
group hardened (Figure 14). Each hardened virtual server image results in an extra fee but
spares cloud consumers from having to carry out the hardening process themselves.
Figure 14 The cloud resource administrator chooses the hardened virtual server image option
for the virtual servers provisioned for Cloud-Based Security Group B.
Issues in Cloud Computing
Cloud computing is a new name for an old concept: the delivery of computing services from
a remote location. Cloud computing is Internet-based computing, where shared resources,
software, and information are provided to computers and other devices on demand.
These are the major issues in cloud computing:
1. Privacy: The user data can be accessed by the host company with or without permission.
The service provider may access the data that is on the cloud at any point in time. They could
accidentally or deliberately alter or even delete information.
2. Compliance: There are many regulations in place related to data and hosting. To comply
with regulations (Federal Information Security Management Act, Health Insurance Portability
and Accountability Act, etc.) the user may have to adopt deployment modes that are
expensive.
3. Security: Cloud-based services involve third parties for storage and security. Can one assume
that a cloud-based company will protect and secure one's data if one is using their services at
a very low price or for free? They may share users' information with others. Security presents a real
threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the
environment. Because data centers consume large amounts of energy, countries with favorable
conditions, where the climate favors natural cooling and renewable electricity is readily
available, such as Finland, Sweden, and Switzerland, are trying to attract cloud computing
data centers. But beyond nature's favors, would these countries have enough technical
infrastructure to sustain high-end clouds?
5. Abuse: While providing cloud services, it should be ascertained that the client is not
purchasing cloud computing services for a nefarious purpose. In 2009, a banking Trojan
illegally used the popular Amazon service as a command and control channel that issued
software updates and malicious instructions to PCs infected by the malware. Hosting
companies and their servers should therefore have proper measures in place to address these issues.
6. Higher Cost: To use cloud services without interruption, you need a powerful network with
higher bandwidth than an ordinary Internet connection, and if your organization is large, an
ordinary cloud service subscription will not suit it. Otherwise, you might face hassles in
utilizing an ordinary cloud service while working on complex projects and applications. This
is a major problem for small organizations and restricts them from diving into cloud
technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service provider, go
through all norms and documentation, and check whether their services match your
requirements and whether the provider maintains a sufficient, well-kept resource
infrastructure. Once you subscribe to the service, you essentially hand your data over to a
third party. If you choose a proper cloud service, you will not need to worry about the
recovery of lost data in any contingency.
8. Upkeep (management) of the cloud: Maintaining a cloud is a herculean task because a cloud
architecture contains a large resource infrastructure and presents other challenges and risks
as well, such as user satisfaction. Users usually pay for the resources they have consumed, so
it sometimes becomes hard to decide how much should be charged when a user wants
scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and
enterprises face today is the lack of resources and skilled employees. Every second
organization seems interested in, or has already moved to, cloud services. The workload in
the cloud is therefore increasing, and cloud service hosting companies need continuous,
rapid advancement. Due to these factors, organizations are having a tough time keeping up
to date with the tools. As new tools and technologies emerge every day, more skilled/trained
employees are needed. These challenges can only be minimized through additional training
of IT and development staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services: a user can
extend or compress the volume of resources as needed and pays only for what is consumed.
This makes it difficult to define a fixed cost for a particular quantity of services. Such ups and
downs in price make budgeting for cloud computing difficult and intricate. It is not easy for a
firm's owner to predict demand, which fluctuates with the seasons and various events, so it is
hard to build a budget for a service that could consume several months' worth of budget in a
few days of heavy use.
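A toy calculation illustrates why pay-per-use bills are hard to budget for. The rates below are made up for the example; real provider pricing is far more complex:

```python
# Illustrative pay-per-use rates (invented; real provider pricing differs).
RATE_PER_VM_HOUR = 0.05   # currency units per virtual-server hour
RATE_PER_GB_MONTH = 0.02  # currency units per GB of storage per month

def monthly_bill(vm_hours, storage_gb):
    """Usage-based charge: scale up in a busy month, pay more; scale down, pay less."""
    return vm_hours * RATE_PER_VM_HOUR + storage_gb * RATE_PER_GB_MONTH

quiet_month = monthly_bill(vm_hours=2 * 720, storage_gb=100)   # 2 servers all month
busy_month = monthly_bill(vm_hours=20 * 720, storage_gb=100)   # burst to 20 servers
print(quiet_month, busy_month)  # 74.0 722.0
```

The same subscription costs roughly ten times more in the burst month, which is exactly the budgeting difficulty described above.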
Unit 6: Future of Cloud Computing

How the cloud will change operating systems:

https://www.youtube.com/watch?v=qFZy41XcCbc

Open the link and watch the video for better understanding.
Key Components of OS

There are different flavors of operating systems: from real-time OS and desktop OS all the
way to mainframe OS. The most recent is the cloud OS.

In general, every OS has these common components:

– The kernel, which manages memory, processes, etc.
– Device drivers, which drive different hardware from different vendors.
– User interfaces, including the command line shell and window system.
– File system, which provides a hierarchical way to persist data.
– Security, which authenticates users and protects information.
Depending on the type of OS, you may miss something here or have something extra.
For example, an embedded OS may not have a user interface and everything is
controlled remotely. For the desktop OS, you may have extra commonly used
applications such as a calculator, a calendar, a browser, and so on.

Location-aware applications:
Location-aware applications use the geographical position of a mobile worker or an
asset to execute a task. Position is detected mainly through satellite technologies, such
as a GPS, or through mobile location technologies in cellular networks and mobile
devices. Examples include fleet management applications with mapping, navigation
and routing functionalities, government inspections and integration with geographic
information system applications.

Location-aware applications not only build an incredibly focused marketing potential for
retailers, but also offer improved social connectivity and superior environmental awareness,
providing users with a location-based filter for online data.
Location tools could be browser plug-ins installed in gadgets like smartphones or
other Web-enabled devices. Mobile phone towers, wireless access points, GPS
satellites or a combination of these can be used to determine the physical location of
the user. When it comes to access points and cell towers, physical location is decided
according to the connectivity to the independent connection point. This information is
then mapped and logged into databases that are constantly updated.

When a user with a compatible mobile device opts for a location-based service, that
info is delivered to location-aware applications, which aim to present resources
according to the spot where the user presently is. On the other hand, a location-aware
application may forward the physical location of a user to other location-aware or
social media applications. Users are able to define which application should get the
information and how detailed the information should be, or they could bypass all
other data simply by manually entering the location coordinates.

A location-aware application offers the following advantages:

- Presents an affordable implementation without resorting to extra hardware like that needed for GPS-centered systems
- Offers location awareness within buildings or areas where GPS cannot be used
- Offers the convenience of defining user-specific locations, which helps build a fully customized map
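As a toy illustration of "presenting resources according to the spot where the user presently is", the sketch below picks the nearest point of interest from a user's coordinates. All names and coordinates are made up:

```python
import math

# Made-up points of interest (name, latitude, longitude) for illustration.
POIS = [("Cafe", 40.7580, -73.9855), ("Museum", 40.7794, -73.9632),
        ("Library", 40.7532, -73.9822)]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_resource(user_lat, user_lon):
    """A location-aware service filters resources by the user's position."""
    return min(POIS, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))[0]

print(nearest_resource(40.7549, -73.9840))  # Library
```

In a real application the position would come from GPS, cell towers, or access points as described above, and the resource list would come from a constantly updated cloud database.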

Intelligent fabrics :

https://www.youtube.com/watch?v=spFAUlslssg

Open this link for better understanding.
Intelligent Fabric benefits

- Straightforward deployment: unpack the boxes, mount, connect, power up
- Provisioning happens faster: centralized management groups together devices with common policies
- Cheaper operations: reduce complexity, eliminate the need for IT involvement
- Comprehensive visibility: see overlay applications and performance in real time
Intelligent fabrics and paints: The ability to connect devices to the cloud from any place, at
any time, will open the door to a wide range of cutting-edge applications. Devices that once
had to be read by utility or city employees, such as electric meters and parking meters, will
connect to the web and create reports. Intelligence will be built into the fabrics of our
clothes, bedding, and furniture. These intelligent fabrics will provide a wide range of
services, including the following:
- Automatically adjust room temperature when body temperature becomes too warm or too cold.
- Notify rooms when we enter or leave so that lights, music, and other devices are automatically controlled.
- Monitor body functions such as blood pressure, blood sugar levels, and stress, and notify the person and adjust the environment to affect those functions.
- Notify others when an elderly person has fallen.
- Provide deterrence against mosquitoes and other insects.

Future of cloud TV:
What is Cloud TV?
Cloud TV is a new type of satellite TV that offers a number of benefits over traditional
satellite TV. These benefits include scalability, the ability to add or remove channels,
lower prices for premium features, and greater flexibility in terms of where you can watch
your favorite shows.
With Cloud TV, you no longer need to install expensive and cumbersome equipment in
your home. Instead, all you need is an Internet connection and a compatible device such
as a smart TV, computer, or mobile phone.
To watch Cloud TV, simply log in to your account and select the channels that you want to
watch. You can then watch them live or recorded, wherever you are. All you need is an
Internet connection.
Cloud TV is the future of satellite TV. It offers a number of advantages that make it an
attractive option for both homes and businesses. If you are looking for a better, more
affordable, and more flexible way to watch TV, then Cloud TV is definitely worth considering.

Future of C-TV
The future of Cloud TV is looking very bright. With its many benefits over traditional
satellite TV, it is no wonder that this technology is becoming more and more popular
every day. If you are looking for a better, more affordable and more flexible way to watch
TV, then Cloud TV is definitely the way to go.
One of the biggest benefits of Cloud TV is that it is much more scalable than traditional
satellite TV. This means that you can add or remove channels as you please without having
to go through a long and costly process of installing new equipment.

You can also get packages that include HD channels, DVR service and other premium
features for a lower price than you would pay for these services from a traditional satellite
TV provider. This also depends on the platform you choose for Cloud TV.

Finally, Cloud TV is also much more flexible when it comes to where you can watch your
favorite shows. With traditional satellite TV, you are limited to watching TV in your
home. With Cloud TV, you can watch TV anywhere that you have an Internet connection.

Future of cloud-based smart devices:

First there was the computer, then came the smartphone, and now there are many devices
that people use as computing platforms. Tablets, TVs, and eBook readers are already old
news. Now there are Google Glass, the Samsung watch, smart TVs, and many more. The list
of devices with processors for computation that can be connected to the Internet is
increasing every day.

Businesses are opting to use smart devices to increase quality output. Following are a
couple of interesting uses of Google Glass

Fast training for employees: It takes a lot of training to operate a laser cutter. This can be
taught more efficiently by overlaying visual aids onto the machine, enabling employees to
learn how to use the equipment faster than conventional tutorials.

Museum Tours: Audio recordings used currently will be enhanced with visual
components. It’d be great to look at any painting hanging at the Met, have a software to
recognize it and retrieve additional information on demand by a simple gesture.

If you are worried about having to maintain software for 3 different platforms (iPhone,
Android and Web), you will have to maintain software for 30 different platforms (smart
phone, smart watch, smart glass, smart TV etc…) tomorrow.

Don’t worry. We have cloud computing as the savior.

Cloud computing has become a great enabler of cross platform applications, i.e.
applications that can run on multiple platforms.

Technologies Behind Cross Platform Cloud Applications:
Cross Platform: This is a common framework that enables the same programming code to
get executed on different platforms
Cloud Applications: In Cloud Application, most or all the data is stored on remote servers
and delivered over a network.

Faster time to market for application software:
1: Collaborate
Agile development methodologies, like scrum, encourage ongoing collaboration in
application requirements definition and development between end users and IT. The more
you can keep end users actively engaged in the application development process, the less
you will have to worry about the application drifting from what the business expected.
When you meet business expectations dead-on the first time, your applications can be
placed into production without delay.

2: Prototype often
Application developers now have app prototyping tools that enable users and developers
to see the flows and the looks of applications as the apps are being built. This is important
in terms of user acceptance and ultimate app readiness. Every time you incorporate a new
application element, create a working prototype for end users to test drive and comment
on. It is easier to make adjustments in earlier stages of app development than right before
the app is scheduled to be moved into production.

3: Virtualize development and test environments
It takes time to configure physical hardware and software for application testing and
development. A better approach is to use a cloud service or to virtualize your own
development and test environments so that your developers can have dedicated test and
development systems. With virtualization, the strain on your DBA and system
programmers will also be reduced, since configuration and deployment of virtual systems
is quicker.

4: Hold users accountable
Users get busy too, so there is always a tendency for them to walk away from the
development and testing process after they feel they have given IT all their app
requirements. Don’t let this happen. Ensuring that applications stay on course with
requirements during development should be as much of an end-business responsibility as
it is an IT responsibility.

5: Work on usability as much as you work on features and functions
You’d be surprised at how many data errors and end user trouble reports are generated
because of poor navigation and screen or report design. Giving equal time to usability as
well as to technical design can go a long way toward ensuring that apps are accepted and
placed into production the first time.
6: Implement a standard library of routines you can reuse
The easiest way to ensure app compatibility with other software you use is to standardize
routines (e.g., a date routine) so that they can be pulled from a common library and used
over and over again.
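For instance, a shared date-formatting routine might be one entry in such a library. The function name and format are hypothetical examples, not a prescribed standard:

```python
from datetime import date

# Hypothetical entry in a team's common routine library: every application
# calls this one function instead of re-implementing date formatting.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def format_report_date(d):
    """Render a date in the organization's single agreed-upon style."""
    return f"{d.day:02d}-{MONTHS[d.month - 1]}-{d.year}"

# Reused over and over by different applications:
print(format_report_date(date(2024, 3, 1)))   # 01-Mar-2024
print(format_report_date(date(2024, 12, 25))) # 25-Dec-2024
```

Because every application pulls the same routine, a formatting change is made once in the library rather than in every program, which is exactly the compatibility benefit described above.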

7: Don’t forget quality assurance
It is important to thoroughly QA an application–from both a usability and a technical
performance standpoint. Organizations are still seeing 50% of IT’s programming time
being committed to software maintenance–which happens because apps fail or don’t do
what they are supposed to do. You can help prevent this by designing apps that work
correctly the first time and every time, thereby freeing up maintenance staff so you can
redirect those resources into more new development.

8: Regression test for performance
Organizations continue to unit test applications and then try to rush them into production
without performing a full regression test to ensure that the new app will handle the
transaction load it is supposed to be able to handle–or that it is compatible with all the
other software it must run with. When an app breakdown occurs in production because
regression testing wasn’t done, it can become a major embarrassment for a company.

9: Train your support staff and your users
User training should be a project task for any new application. If business users aren’t
trained in how to use an app, they will get frustrated and end up calling your support staff.
Before any app goes live, the IT support staff should also be thoroughly trained. If they’re
not knowledgeable and can’t respond to user questions quickly, it could reflect negatively
on an application to the point where it must be pulled from production.

10: Design for simplicity
Applications should always use a modular design structure. This enables developers to
test and debug individual routines without having to read through an entire program.

Home-based cloud computing:
Home cloud computing is the process of using a remote server to store, manage and
access data and applications from home. It allows users to access their files, applications,
and other digital content from any device with an internet connection, whether it be a
computer, phone, or tablet. Private cloud computing can also be used to back up data and
protect the information in case of emergencies.

A lot of individuals and small businesses use home/private cloud computing to browse,
search through files and even work on projects from any device. It’s a great alternative to
setting up a server in your house because it eliminates the need for physical storage
devices that contain data.

Mobile cloud:
MCC stands for Mobile Cloud Computing, which is defined as a combination of mobile
computing, cloud computing, and wireless networks that come together to provide rich
computational resources to mobile users, network operators, and cloud computing
providers. Mobile cloud computing makes it possible for rich mobile applications to be
executed on a wide range of mobile devices. In this technology, data processing and data
storage happen outside of mobile devices. Mobile cloud computing applications leverage
this IT architecture to generate the following advantages:

1. Extended battery life.
2. Improvement in data storage capacity and processing power.
3. Improved synchronization of data due to the "store in one place, accessible from anywhere" platform theme.
4. Improved reliability and scalability.
5. Ease of integration.

Characteristics of Mobile Cloud Computing Applications:

1) Cloud infrastructure: a specific form of information architecture that is used to store data.
2) Data cache: data can be cached locally.
3) User accommodation: scope for accommodating different user requirements in cloud app development is available in mobile cloud computing.
4) Easy access: easily accessed from desktop or mobile devices alike.
5) Cloud apps: facilitate access to a whole new range of services.

Autonomic cloud engine :
Autonomic computing is a type of visionary computing initiative started by IBM. It is
designed to make adaptive decisions using high-level policies, and it constantly upgrades
itself through optimization and adaptation.

Need Of Autonomic Computing :

With the increase in the demand for computers, computer-related problems are also
increasing. They are becoming more and more complex. The complexity has become so
much that there is a spike in demand for skilled workers. This has fostered the need for
autonomic computers that would do computing operations without the need for manual
intervention.

Characteristics :

1. The autonomic system knows itself. This means that it knows its components,
specifications, capacity, and real-time status. It also has knowledge about its
own, borrowed, and shared resources.
2. It can configure itself again and again and run its setup automatically as and when
required.
3. It has the capability of optimizing itself by fine-tuning workflows.
4. It can heal itself. This is a way of mentioning that it can recover from failures.
5. It can protect itself by detecting and identifying various attacks on it.
6. It can open itself. This means that it must not be a proprietary solution and must
implement open standards.
7. It can hide. This means that it has the ability to allow resource optimization, by
hiding its complexity.
8. According to IBM, an autonomic system must be able to know or anticipate what kind of
demand is going to arise for its resources, and make this information transparently
visible to users.
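Several of these characteristics (self-configuration, self-optimization, self-healing) are often described as a monitor-analyze-plan-execute control loop. The minimal sketch below, with invented component names, shows that loop recovering from a simulated failure without manual intervention:

```python
# Minimal monitor-analyze-plan-execute loop: the control pattern behind
# self-healing systems. Component names and states are invented.
class AutonomicManager:
    def __init__(self):
        self.components = {"web": "up", "db": "up"}

    def monitor(self):
        """Know itself: observe the real-time status of each component."""
        return dict(self.components)

    def analyze_and_plan(self, status):
        """High-level policy: any failed component should be restarted."""
        return [name for name, state in status.items() if state == "down"]

    def execute(self, restarts):
        """Heal itself: recover from failures without manual intervention."""
        for name in restarts:
            self.components[name] = "up"
        return restarts

mgr = AutonomicManager()
mgr.components["db"] = "down"       # a simulated failure
healed = mgr.execute(mgr.analyze_and_plan(mgr.monitor()))
print(healed, mgr.components["db"]) # ['db'] up
```

Real autonomic managers add knowledge bases, thresholds, and predictive models, but the loop structure is the same.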

Multimedia cloud computing:
The Internet is having a significant impact on the media-related industries, which
are using it as a medium to enable delivery of their content to end-users. Rich
web pages, software downloads, interactive communications, and ever-
expanding universe of digital media require a new approach to content
delivery. Size and volume of multimedia content is growing exponentially. For
example, more than 30 billion pieces of content such as web links, news
stories, blog posts, notes, and photo albums are shared each month on
Facebook. On the other hand, Twitter users are tweeting an average 55 million
tweets a day that includes web links and photo albums. Web pages and other
multimedia content are being delivered through content delivery networks
(CDN) technologies. These technologies optimize network usage through
dedicated network links, caching servers and by increasingly using peer-to-
peer technologies. The concept of a CDN was conceived in the early days of
Internet but it took until the end of 1990’s before CDNs from Akamai and
other commercial providers managed to deliver Web content (i.e., web pages,
text, graphics, URLs and scripts) anywhere in the world and at the same time
meet the high availability and quality expected by their end users. For
example, Akamai delivers between fifteen to thirty percent of all Web traffic,
reaching more than 4 Terabits per second. Commercial CDNs achieved this by
deploying a private collection of servers and by using distributed CDN
software system in multiple data centres around the world.
A different variant of CDN technology appeared in the mid 2000’s to support
the streaming of hundreds of high definition channels to paid customers.
These CDNs had to deal with more stringent Quality of Service (QoS)
requirements to support users’ experience pertaining to high definition video.
This required active management of the underlying network resources and the
use of specialized set-top boxes that included video recorders (providing
stop/resume and record/playback functionality) and hardware decoders (e.g.,
providing MPEG 4 video compression/decompression). Major video CDNs
were developed by telecommunications companies that owned the required
network and had Operation Support Systems (OSSs) to manage the network
QoS as required by the CDN to preserve the integrity of high definition video
content. Just like the original CDNs, video CDN also utilize a private collection
of servers distributed around the network of video service provider. The first
notable CDNs in this category include Verizon’s FiOS and AT&T’s U-verse.
Some CDN providers such as Limelight Networks invested billions of dollars
in building dedicated network links (media-grade fiber-optic backbone) for
delivering and moving content from servers to end-users.

A more recent variant of video CDNs involves caching video content in
cloud storage and the distribution of such content using third-party network
services that are designed to meet QoS requirements of caching and streaming
high definition video. For example, Netflix’s video CDN has been developed on
top of Amazon AWS. CloudFront is Amazon’s own CDN that uses Amazon
AWS and provides streaming video services using Microsoft Xboxes. While
Cloud-based CDNs have made a remarkable progress in the past five years,
they are still limited in the following aspects:

- CDN service providers either own all the services they use to run their
CDN services or they outsource this to a single cloud provider. A specialized
legal and technical relationship is required to make the CDN work in the
latter case.
- Video CDNs are not designed to manage content (e.g., find and play
high definition movies). This is typically done by CDN applications. For
example, CDNs do not provide services that allow an individual to create a
streaming music video service combining music videos from an existing
content source on the Internet (e.g., YouTube), his/her personal collection,
and from live performances he/she attends using his/her smart phone to
capture such content. This can only be done by an application managing
where and when the CDN will deliver the video component of his/her
music program.
 CDNs are designed for streaming staged content but do not perform
well when content is produced dynamically. This is typically the case
when content is produced, managed, and consumed in collaborative
activities. For example, an art teacher may find and discuss movies from
different film archives; the selected movies may then be edited by
students, and parts of them may be used to produce new movies that are
sent to the students' friends for comments and suggestions. Current
CDNs do not support such collaborative activities involving dynamic
content creation.
Energy-Aware Cloud Computing:
Green Computing
Green computing is the Eco-friendly use of computers and their resources. It is also
defined as the study and practice of designing, engineering, manufacturing and
disposing computing resources with minimal environmental damage.

Figure – Green Cloud Architecture

Green cloud computing is using Internet computing services from a service provider
that has taken measures to reduce their environmental effect and also green cloud
computing is cloud computing with less environmental impact.

Some measures taken by the Internet service providers to make their services more
green are:

Use renewable energy sources.


Make the data center more energy efficient, for example by minimizing power usage
effectiveness (PUE) — the ratio of total facility energy to IT equipment energy, where 1.0 is ideal.
Reuse waste heat from computer servers (e.g. to heat nearby buildings).
Make sure that all hardware is properly recycled at the end of its life.
Use hardware that has a long lifespan and contains little to no toxic materials.
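To make the PUE measure concrete, here is a small illustrative Python sketch. The power figures are made-up sample numbers, not measurements from any real data center:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the ideal value (every watt goes to computing); lower is
    better, and efficient modern data centers approach 1.1.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1200 kW total draw, of which 800 kW powers servers.
# The remaining 400 kW feeds cooling, lighting, power conversion, etc.
print(round(pue(1200, 800), 2))  # 1.5
```

Reducing cooling overhead (e.g., by reusing waste heat, as mentioned above) lowers the numerator and therefore the PUE.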
Jungle Computing :
Jungle Computing is the practice of using untapped, surplus computing resources and
services to solve specific problems. It is a 21st-century solution to a 20th-century
problem: namely, that there are more computing needs than there are computing
resources available to meet them.

The term "jungle computing" emerged as the Internet expanded and the boundaries
between computing environments fell away. In the context of the Internet, jungle
computing describes the use of distributed, heterogeneous, and decentralized
computer resources to solve a given problem.

Examples of Jungle Computing Applications


i. Image Recognition - One of the most popular applications of jungle computing.
   Untapped CPU resources are used to analyze images, providing insights into
   weather patterns, crop conditions, and more.
ii. Voice Recognition - Another common type of jungle computing application, in
   which computing resources are used to convert human speech (often from a
   telephone call) into text.
iii. Natural Language Processing - A jungle computing application in which
   unprocessed text is broken down into structured data.
iv. Medical Imaging Analysis - A jungle computing application in which
   unprocessed medical images are broken down into data.
v. Robotics - Distributed computing resources can support compute-intensive
   robotics workloads such as perception, mapping, and motion planning.

Docker at a Glance:
Docker in cloud computing is a tool used to automate the deployment of
applications in an environment designed to manage containers. It is a container
management service. Containers help applications keep working while they are being
shifted from one platform to another. Docker's technology is distinctive because it
focuses on the requirements of developers and operations teams. This modern technology
enables enterprises to create and run any product from any geographic location.

There are several problems associated with cloud environments, and Docker tries to
solve them by providing a systematic way to distribute and run applications. It
isolates each application from other containers, resulting in smooth operation. With
Docker, it becomes possible to manage our infrastructure in the same way we manage
our applications.

Process Simplification
Docker can simplify both workflows and communication, and that usually starts with
the deployment story. Traditionally, the cycle of getting an application to production
often looks something like the following (illustrated in Figure 2-1):

1. Application developers request resources from operations engineers.

2. Resources are provisioned and handed over to developers.

3. Developers script and tool their deployment.

4. Operations engineers and developers tweak the deployment repeatedly.

5. Additional application dependencies are discovered by developers.

6. Operations engineers work to install the additional requirements.

7. Loop over steps 5 and 6 N more times.

8. The application is deployed.


Figure 2-1. ...

Support and adoption :


Container technology is one of the most rapidly evolving technologies
in the software industry's recent history. There has been a seismic
shift toward more and more organizations adopting containerization
for their applications.

Containers offer a lightweight, portable, and more efficient
alternative to virtual machines and help us run software securely
and reliably across different server environments. The technology
has existed for more than a decade, but it was made easily
accessible and practical by Docker – an open-source platform that
makes it simple to package, distribute, and manage containers.

In this post, we are going to look at what gives Docker the edge
over other virtualization technologies, its impact on businesses
across the world, and why migrating your application to a container
setup could be the best thing you could do for it.

Docker architecture
Docker uses a client-server architecture. The Docker client talks
to the Docker daemon, which does the heavy lifting of building,
running, and distributing your Docker containers. The Docker client
and daemon can run on the same system, or you can connect a Docker
client to a remote Docker daemon. The Docker client and daemon
communicate using a REST API, over UNIX sockets or a network
interface. Another Docker client is Docker Compose, which lets you
work with applications consisting of a set of containers.

Docker Architecture Diagram

Workflow:
For a better understanding, refer to this video:
https://www.youtube.com/watch?v=dpKUBnSoVNM


UNIT – VI
Future of Cloud Computing

Almost everything in the digital world is connected to the cloud in some way or
other, unless it is specifically kept in local storage for security reasons. Let's
take a look at the advantages of cloud computing before discussing the future of the cloud.
 One of the greatest advantages is accessibility of resources. Users can
access their data from any place, at any time, and from any type of device,
as long as they are connected to the internet.
 Services are completely flexible (pay-per-use model) and can be
adjusted at any time, which is referred to as scalability in cloud
computing.
 The Cloud Service Provider (CSP) takes care of all maintenance work,
which allows us to concentrate more efficiently on our own tasks and in
turn helps us optimize productivity.
 Cloud computing can provide increased security compared with
traditional in-house infrastructure, offering strong security systems
and services with proper auditing, passwords, and encryption.
Cloud computing has many features that make its future bright in almost all
sectors of the world. But it will not stand alone: the Internet of Things (IoT) and
Big Data will add more to it. Let's explore this article to understand the future of the cloud.

Cloud with Operating System:


Operating systems allow users to run programs and to store and retrieve data from
one user session to the next. Through virtualization, most server operating systems
now support (and will continue to support) hypervisors that allow multiple, possibly
different, operating systems to run simultaneously. Virtualized servers will
continue to play a huge role in driving cloud operations. Many organizations are
opting for an on-demand operating system model, in which servers download a user's
operating system, applications, and environment settings to whichever computer the
user logs in to. With the advent of more programs that run within the browser, there
may be much less need for powerful desktop operating systems, such as Windows and
macOS.

Cloud with Internet of Things:


1. Cloud-based location-tracking applications – A location-tracking application
uses data from the Global Positioning System (GPS) capabilities built into mobile
devices to integrate an individual's location into the processing it performs. As
GPS capabilities are built into more devices, applications will deliver more
location-tracking solutions. Using cloud and location-tracking solutions, you will
be able to track not only packages you ship, but also stolen cars, lost luggage,
misplaced cell phones, missing pets, and more.

2. Cloud-based smart fabrics and paints – The ability to connect devices to the
cloud from any place, at any time, will open the door to a wide range of
cutting-edge applications. Devices that once had to be read by utility or city
employees, such as electric meters and parking meters, will connect to the web and
report automatically. Intelligence will be built into the fabrics of our clothes,
bedding, and furniture. These intelligent fabrics will provide a wide range of
services, including the following:
 Automatically adjust room temperature when body temperature
becomes too warm or too cold.
 Notify rooms when we enter or leave so that lights, music, and other
devices are automatically controlled.
 Monitor body functions such as blood pressure, blood sugar levels,
stress, and more, and notify the person and adjust the environment to
affect those functions.
 Notify others when an elderly person has fallen.
 Provide deterrence against mosquitoes and other insects.
Similarly, new paints being developed change form based on environmental
conditions. Currently, paints on roads can change color to indicate the presence of
ice. In the future, intelligent paint may report driving conditions back to the cloud.

3. Cloud TV – A few companies are changing the way consumers watch TV. With
greater bandwidth available everywhere, DVDs have fallen by the wayside. TV
viewers will not just watch shows on demand in their homes, in their cars, and on
airplanes; a new breed of projection devices will also make any flat surface a TV
screen.

4. Cloud-based smart devices – The cloud's ability to provide internet access from
any place, at any time, makes such processing a reality. Some devices may initially
be intelligent only with respect to their ability to regulate power consumption,
possibly avoiding power use during peak times and costs. Using the cloud for
communication, devices can coordinate activities. For example, your car may notify
your home automation system that you are a block away and instruct it to light the
house, turn on your favorite music, and prompt the refrigerator for a list of
ready-to-cook meals.

Home-based Cloud Computing: Today most households have wireless network
capabilities that allow family members to connect to the web and access the sites
and content they desire. With the arrival of smart devices, intelligent fabrics, and
greater use of radio frequency identification (RFID) devices, families will expect
on-demand, personalized technology solutions. Families will use cloud devices to
customize their environments and experiences. Within such an environment, families
will want to restrict processing to within the home, meaning that they will not want
neighbors to receive signals generated by their devices and clothing. That implies
the ability to encrypt a wide range of signals within the home. To that end, you
should expect to see cloud-based in-home devices that store family files, maintain
appliance settings, download and store movies and TV shows, and more.

Modular Software: With cloud computing, companies no longer have to raise the
capital required to fund a large data center; instead, they can leverage a PaaS
solution. Furthermore, companies no longer have to pay expensive licensing fees for
software tools such as database management systems; they can leverage
pay-on-demand solutions. Hence developers will release software solutions at a
faster rate, bringing to market solutions with high functionality at lower cost.
85% of software developed since 2012 is cloud-enabled, and growing data
requirements will enable more services through the cloud. Every state and the
central government will have their own cloud platforms for providing basic services
in health, agriculture, social welfare, and so on. The Aadhaar card is a major
example of a cloud computing project, and banking platforms are moving toward
serving the world's 7 billion people. Stock exchanges will have to move toward cloud
computing to provide efficient, real-time stock details.

Conclusion:
Cloud computing is beginning to transform the way enterprises buy and use
technology resources and will become even more prominent in the coming years. In
the next generation, cloud computing will play an integral role in the life of every
human being, because the cloud is a single place where all software, hardware, and
devices can connect.

What is Docker?

Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level
virtualization to deliver software in packages called containers. Containers are isolated from
one another and bundle their own software, libraries, and configuration files; they can
communicate with each other through well-defined channels. All containers are run by a
single operating system kernel and therefore use fewer resources than virtual machines.

Docker is also described as an open-source containerization platform with which you can pack
your application and all its dependencies into a standardized unit called a container.
Containers are lightweight, which makes them portable, and they are isolated from the
underlying infrastructure and from each other. You can run a Docker image as a Docker
container on any machine where Docker is installed, regardless of the host operating system.

Docker is popular because of the following:

1. Portability.

2. Reproducibility.

3. Efficiency.

4. Scalability.

What is Dockerfile?

A Dockerfile uses a DSL (Domain-Specific Language) and contains the instructions for
generating a Docker image; it defines the steps needed to quickly produce that image. When
creating your application's Dockerfile, write the instructions in order, since the Docker
daemon executes them from top to bottom.

 It is a text document that contains the commands which, on execution, assemble a
Docker image.

 A Docker image is created using a Dockerfile.

The Dockerfile is the source code of the image.

To know more about the Dockerfile, refer to Docker – Concept of Dockerfile.
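As a small illustration, a minimal Dockerfile for a Python script might look like the following sketch. The file name `app.py` and the base image tag are example choices, not requirements:

```dockerfile
# Base image: an official Python image from Docker Hub
FROM python:3.12-slim

# Copy the application source into the image
COPY app.py /app/app.py

# Command executed when a container starts from this image
CMD ["python", "/app/app.py"]
```

Running `docker build` against this file would execute each instruction from top to bottom, producing one image layer per instruction.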

How Docker Works

Docker makes use of a client-server architecture. The Docker client talks to the Docker
daemon, which builds, runs, and distributes the Docker containers. The Docker client can
run on the same system as the daemon, or it can connect to a Docker daemon remotely. The
client and daemon interact with each other through a REST API over a UNIX socket or a
network. To know more about the working of Docker, refer to the Architecture of Docker.

What is Docker Image?

A Docker image is a file, composed of multiple layers, used to execute code in a Docker
container. It is a set of instructions used to create Docker containers: an executable package
of software that includes everything needed to run an application. The image determines how a
container should be instantiated, including which software components will run and how. A
Docker container, in turn, is a virtual environment that bundles application code with all the
dependencies required to run the application, so the application runs quickly and reliably
from one computing environment to another.

What is Docker Container?

A Docker container is a runtime instance of an image. It allows developers to package
applications with all the parts needed, such as libraries and other dependencies. Containers
carry the whole kit required by an application, so the application can run in an isolated
way. For example, suppose there is an image of Ubuntu OS with an NGINX server; when this
image is run with the docker run command, a container is created and the NGINX server runs
on Ubuntu.

What is Docker Hub?

Docker Hub is a cloud-based repository service where people push their Docker container
images and pull them back anytime, anywhere via the internet. It makes it easy to find and
reuse images, and it lets you store and share Docker images in public or private
repositories.

DevOps teams are the main users of Docker Hub. It is freely available for all operating
systems and acts as storage where we keep images and pull them when required. To push or
pull images from Docker Hub, a person needs a basic knowledge of Docker.

What is Docker Compose?

Docker Compose runs multi-container applications defined in a YAML file. The YAML file
contains all the configuration needed to deploy the containers. Docker Compose integrates
with Docker Swarm and provides directions for building and deploying containers. With
Docker Compose, all the containers of an application run on a single host.
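For illustration, a minimal docker-compose.yml might look like this. The service names, port numbers, and images are example choices, not part of any real project:

```yaml
# Two-service application: a web app built locally, plus a cache
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8000:8000"     # host:container port mapping
    depends_on:
      - cache           # start the cache before the web service
  cache:
    image: redis:7      # pull the official Redis image from Docker Hub
```

Running `docker compose up` in the directory containing this file would build and start both containers on the current host.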

How to Download Docker Desktop?


Docker Desktop provides a GUI to work with Docker containers, images, and networks. It
bundles a complete environment containing the Docker Engine, Docker CLI, Docker Compose,
Kubernetes, and the other tools needed to build, ship, and run applications as containers,
which makes Docker more user friendly. To know more about installing Docker Desktop, refer
to Docker Desktop Sample Image.

Docker Commands

There are many commands in Docker; the following are some of the most frequently used:

1. docker run

2. docker pull

3. docker ps

4. docker stop

5. docker start

6. docker rm

7. docker rmi

8. docker images

9. docker exec

10. docker login

To know more about the Docker commands, refer to Docker – Instruction Commands.

Docker Engine

The software that hosts the containers is named Docker Engine. Docker Engine is a
client-server application with three main components:

1. Server: a daemon process responsible for creating and managing Docker images,
containers, networks, and volumes.

2. REST API: specifies how applications can interact with the server and instruct it
what to do.

3. Client: the Docker command-line interface (CLI), which allows us to interact with
Docker using docker commands.

Why use Docker

Docker can be used to package an application and its dependencies, which makes it
lightweight and lets teams ship code faster and more reliably. Docker makes it simple to run
the application in a production environment: a Docker container is platform independent as
long as the Docker Engine is installed on the machine.

What is Docker For AWS?

Docker is a powerful tool for running applications in the form of containers. Docker
containers are lightweight and can run on any operating system.

AWS provides Amazon Elastic Container Service (Amazon ECS), a fully managed container
service with which you can deploy, scale, and manage Docker containers. Amazon ECS is a
reliable, high-performance platform and can be integrated with other AWS services such as
load balancing, service discovery, and container health monitoring. To know more, see
Amazon Elastic Container Service (Amazon ECS).

Difference Between Docker Containers and Virtual Machines

Docker Containers:
- Contain binaries, libraries, and configuration files along with the application itself.
- Do not contain a guest OS for each container; they rely on the underlying OS kernel,
  which makes containers lightweight.
- Share resources with other containers on the same host OS and provide OS-level process
  isolation.

Virtual Machines:
- Run on hypervisors, which allow multiple virtual machines to run on a single machine,
  each with its own operating system.
- Each VM has its own copy of an operating system along with the application and necessary
  binaries, which makes it significantly larger and requires more resources.
- Provide hardware-level process isolation and are slow to boot.

Install Docker On Ubuntu

1. Remove old version of Docker

$ sudo apt-get remove docker docker-engine docker.io containerd runc

2. Installing Docker Engine

$ sudo apt-get update

$ sudo apt-get install ca-certificates curl gnupg lsb-release

$ sudo mkdir -p /etc/apt/keyrings

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

$ sudo apt-get update

$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

$ sudo groupadd docker

$ sudo usermod -aG docker $USER

Check if docker is successfully installed in your system

$ sudo docker run hello-world

Sample Example: Containerizing Application Using Docker

1. Create a folder with 2 files (Dockerfile and main.py file) in it.

 Dockerfile

 main.py

2. Edit main.py with the below code.

#!/usr/bin/env python3

print("Docker and GFG rock!")

3. Edit Dockerfile with the below commands.

# Use the latest official Python image as the base
FROM python:latest

# Copy the script into the image's root directory
COPY main.py /

# Run the script when a container starts from this image
CMD [ "python", "./main.py" ]

4. Create a Docker image.

Once you have created and edited the main.py file and the Dockerfile, create your image to
contain your application.

$ sudo docker build -t python-test .

The ‘-t’ option allows you to define the name of your image; ‘python-test’ is the name we
have chosen for the image.

5. Run the Docker image

Once the image is created, your code is ready to launch.

$ sudo docker run python-test

Sample Example to Push an image to Docker Hub

1. Create an account on Docker Hub or use an existing one if you already have one.

2. Click on the “Create Repository” button, put the name of the file, and click on “Create”.

3. Now we will tag our image and push it to the Docker Hub repository which we just created.

Now, run the below command to list docker images:

$ docker images

The above will give us a result like this:

REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
afrozchakure/python-test   latest   c7857f97ebbd   2 hours ago   933MB

Image ID is used to tag the image. The syntax to tag the image is:

docker tag <image-id> <your dockerhub username>/python-test:latest

$ docker tag c7857f97ebbd afrozchakure/python-test:latest

4. Push image to Docker Hub repository

$ docker push afrozchakure/python-test

Fetch and run the image from Docker Hub

1. To remove all versions of a particular image from our local system, we use the Image ID for
it.

$ docker rmi -f af939ee31fdc

2. Now run the image, it will fetch the image from the docker hub if it doesn’t exist on your
local machine.

$ docker run afrozchakure/python-test

Conclusion
You have now learned the basics of Docker, the difference between virtual machines and
Docker containers, and some common Docker terminology. We also went through installing
Docker on our systems, created an application using Docker, and pushed its image to Docker
Hub. Lastly, we learned how to remove a particular image from our local system and later
pull the image from Docker Hub when it doesn't exist locally.
