Cloud Computing Basics
Cloud Computing
The term cloud refers to a network or the internet. Cloud computing is a technology that uses remote servers
on the internet to store, manage, and access data online rather than on local drives. The data can
be anything, such as files, images, documents, audio, video, and more.
Using cloud computing, we can perform operations such as storing, backing up, and recovering data, developing and testing applications, streaming audio and video, and delivering software on demand.
Traditionally, hosting such workloads yourself meant building your own server room. In that server room, there would need to be a database server, a mail server, networking equipment, firewalls,
routers, a modem, switches, sufficient QPS capacity (queries per second, i.e., how much query load the server can handle), configurable systems, high network speed, and maintenance engineers.
Establishing such IT infrastructure requires spending a lot of money. To overcome these
problems and reduce IT infrastructure costs, cloud computing came into existence.
1) Agility
The cloud works in a distributed computing environment. It shares resources among users
and works very fast.
2) High Availability and Reliability
The availability of servers is high and more reliable because the chances of infrastructure
failure are minimal.
3) High Scalability
The cloud offers "on-demand" provisioning of resources on a large scale, without requiring engineers to plan for peak loads.
4) Multi-Sharing
With the help of cloud computing, multiple users and applications can work more
efficiently with cost reductions by sharing common infrastructure.
5) Device and Location Independence
Cloud computing enables users to access systems using a web browser regardless of their
location or the device they use, e.g., a PC or mobile phone. As the infrastructure is off-site
(typically provided by a third party) and accessed via the Internet, users can connect
from anywhere.
6) Maintenance
Maintenance of cloud computing applications is easier, since they do not need to be installed
on each user's computer and can be accessed from different places. This also reduces cost.
7) Low Cost
Using cloud computing reduces cost because an IT company does not need to set up its own
infrastructure; it pays only for the resources it actually uses.
8) Services in the Pay-Per-Use Model
Application Programming Interfaces (APIs) are provided to users so that they can access
services on the cloud and pay charges according to their usage of those services.
Cost
Moving to the cloud helps companies optimize IT costs. This is because cloud
computing eliminates the capital expense of buying hardware and software and setting
up and running onsite datacenters—the racks of servers, the round-the-clock electricity
for power and cooling, and the IT experts for managing the infrastructure. It adds up
fast.
Speed
Most cloud computing services are provided self-service and on demand, so even vast
amounts of computing resources can be provisioned in minutes, typically with just a
few mouse clicks, giving businesses a lot of flexibility and taking the pressure off
capacity planning.
Global scale
The benefits of cloud computing services include the ability to scale elastically. In cloud
speak, that means delivering the right amount of IT resources—for example, more or
less computing power, storage, bandwidth—right when they’re needed, and from the
right geographic location.
Productivity
Onsite datacenters typically require a lot of “racking and stacking”—hardware setup,
software patching, and other time-consuming IT management chores. Cloud computing
removes the need for many of these tasks, so IT teams can spend time on achieving
more important business goals.
Performance
The biggest cloud computing services run on a worldwide network of secure
datacenters, which are regularly upgraded to the latest generation of fast and efficient
computing hardware. This offers several benefits over a single corporate datacenter,
including reduced network latency for applications and greater economies of scale.
Reliability
Cloud computing makes data backup, disaster recovery, and business continuity easier
and less expensive because data can be mirrored at multiple redundant sites on the cloud
provider’s network.
Security
Many cloud providers offer a broad set of policies, technologies, and controls that
strengthen your security posture overall, helping protect your data, apps, and
infrastructure from potential threats.
1) Back-up and restore data
Once data is stored in the cloud, it is easier to back up and restore that data using the
cloud.
2) Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.
3) Excellent accessibility
Cloud allows us to quickly and easily access stored information anywhere in the world, at any
time, using an internet connection. An internet cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.
4) Low maintenance cost
Cloud computing reduces both hardware and software maintenance costs for organizations.
5) Mobility
Cloud computing allows us to easily access all cloud data via mobile devices.
6) Services in the pay-per-use model
Cloud computing offers Application Programming Interfaces (APIs) that let users access
services on the cloud and pay charges according to their usage of those services.
7) Unlimited storage capacity
Cloud offers us a huge amount of storage capacity for storing our important data, such as
documents, images, audio, and video, in one place.
8) Data security
Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced
features related to security and ensures that data is securely stored and handled.
1) Internet Connectivity
As you know, in cloud computing, all data (images, audio, video, etc.) is stored on the cloud,
and we access this data over an internet connection. If you do not have good internet
connectivity, you cannot access this data, and there is no other way to retrieve data from
the cloud.
2) Vendor lock-in
Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face
problems when transferring their services from one vendor to another. As different vendors
provide different platforms, that can cause difficulty moving from one cloud to another.
3) Limited Control
As we know, cloud infrastructure is completely owned, managed, and monitored by the service
provider, so the cloud users have less control over the function and execution of services within
a cloud infrastructure.
4) Security
Although cloud service providers implement the best security standards to store important
information, you should be aware before adopting cloud technology that you will be sending
all of your organization's sensitive information to a third party, i.e., a cloud computing
service provider. While sending data to the cloud, there is a chance that your organization's
information could be hacked.
History of Cloud Computing
Before the emergence of cloud computing, there was client/server computing, which is essentially
centralized storage in which all the software applications, all the data, and all the controls
reside on the server side.
If a single user wants to access specific data or run a program, he/she needs to connect to the
server, gain appropriate access, and then do his/her business.
After that, distributed computing came into the picture, where all the computers are networked
together and share their resources when needed.
On the basis of the above computing models, the concept of cloud computing emerged and was
later implemented.
“Around 1961, John McCarthy suggested in a speech at MIT that computing could
be sold like a utility, just like water or electricity. It was a brilliant idea, but like all
brilliant ideas, it was ahead of its time; for the next few decades, despite interest in the
model, the technology simply was not ready for it.”
But of course time passed, technology caught up with that idea, and a few years later we saw
the following:
In 1999, Salesforce.com started delivering applications to users through a simple website. The
applications were delivered to enterprises over the Internet, and in this way the dream of
computing sold as a utility came true.
In 2002, Amazon started Amazon Web Services, providing services like storage, computation,
and even human intelligence. However, only with the launch of the Elastic Compute
Cloud in 2006 did a truly commercial service open to everybody exist.
In 2009, Google Apps also started to provide cloud computing enterprise applications.
Of course, all the big players are present in the cloud computing evolution, some were earlier,
some were later. In 2009, Microsoft launched Windows Azure, and companies like Oracle and
HP have all joined the game. This proves that today, cloud computing has become mainstream.
Cloud computing is all about renting computing services. This idea first came in the 1950s. In
making cloud computing what it is today, five technologies played a vital role: distributed
systems and their peripherals, virtualization, Web 2.0, service orientation, and utility
computing.
Distributed Systems:
A distributed system is a collection of independent systems that appear to users as a single coherent system, sharing resources over a network.
Mainframe computing:
Mainframes, which first came into existence in 1951, are highly powerful and reliable
computing machines. They are responsible for handling large-scale data, such as massive
input-output operations. Even today, they are used for bulk processing tasks such as
online transactions. These systems have almost no downtime and high fault tolerance.
After distributed computing, mainframes increased the processing capability of the
system, but they were very expensive. To reduce this cost, cluster computing emerged
as an alternative to mainframe technology.
Cluster computing:
In the 1980s, cluster computing emerged as a cheaper alternative: each machine in a cluster was connected to the others over a high-bandwidth network, providing comparable computing power at much lower cost. However, clusters still could not solve problems related to geographical distance.
Grid computing:
In the 1990s, the concept of grid computing was introduced: different systems placed at
entirely different geographical locations were connected via the internet. These systems
belonged to different organizations, so the grid consisted of heterogeneous nodes. Although
it solved some problems, new problems emerged as the distance between nodes increased,
chiefly the low availability of high-bandwidth connectivity and other network-related
issues. Thus, cloud computing is often referred to as the "successor of grid computing".
Virtualization:
It was introduced nearly 40 years ago. It refers to the process of creating a virtual layer
over the hardware that allows the user to run multiple instances simultaneously on that
hardware. It is a key technology used in cloud computing and the base on which major
cloud computing services such as Amazon EC2 and VMware vCloud work. Hardware
virtualization is still one of the most common types of virtualization.
Web 2.0:
It is the interface through which the cloud computing services interact with the clients.
It is because of Web 2.0 that we have interactive and dynamic web pages. It also
increases flexibility among web pages. Popular examples of web 2.0 include Google
Maps, Facebook, Twitter, etc. Needless to say, social media is possible because of this
technology only. It gained major popularity in 2004.
Service orientation:
It acts as a reference model for cloud computing. It supports low-cost, flexible, and
evolvable applications. Two important concepts were introduced in this computing
model. These were Quality of Service (QoS) which also includes the SLA (Service
Level Agreement) and Software as a Service (SaaS).
Utility computing:
It is a computing model that defines service provisioning techniques for services such
as compute, along with other major services such as storage and infrastructure,
which are provisioned on a pay-per-use basis.
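To make the pay-per-use idea concrete, here is a toy billing sketch in Python; the rates and resource categories are invented purely for illustration:

```python
# Toy sketch of utility computing's pay-per-use billing model.
RATE_PER_CPU_HOUR = 0.05   # hypothetical $/vCPU-hour
RATE_PER_GB_MONTH = 0.02   # hypothetical $/GB-month of storage

def monthly_bill(cpu_hours: float, storage_gb: float) -> float:
    """Charge only for what was actually consumed, not a flat rate."""
    return cpu_hours * RATE_PER_CPU_HOUR + storage_gb * RATE_PER_GB_MONTH

# One server running for a 720-hour month plus 100 GB of storage:
print(f"${monthly_bill(cpu_hours=720, storage_gb=100):.2f}")  # $38.00
```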
Thus, the above technologies contributed to the making of cloud computing.
Cloud architecture dictates how components are integrated so that you can pool, share, and
scale resources over a network. Think of it as a building blueprint for running and deploying
applications in cloud environments.
Cloud architecture has two main components:
A frontend platform
A backend platform
The front end includes everything the client interacts with, such as the user interface accessed through a web browser or client-side application.
On the other hand, the back end refers to the cloud architecture components that make up the
cloud itself, including computing resources, storage, security mechanisms, management, and
more.
Application: The backend software or application the client is accessing from the front end to
coordinate or fulfill client requests and requirements.
Service: The service is the heart of cloud architecture, taking care of all the tasks being run on
a cloud computing system. It manages which resources you can access, including storage,
application development environments, and web applications.
Runtime cloud: Runtime cloud provides the environment where services are run, acting as an
operating system that handles the execution of service tasks and management. Runtimes use
virtualization technology to create hypervisors that represent all your services, including apps,
servers, storage, and networking.
Storage: The storage component in the back end is where data to operate applications is stored.
While cloud storage options vary by provider, most cloud service providers offer flexible
scalable storage services that are designed to store and manage vast amounts of data in the
cloud. Storage may include hard drives, solid-state drives, or persistent disks in server bays.
Management: Cloud service models require that resources be managed in real time according
to user requirements. It is essential to use management software, also known as middleware, to
coordinate communication between the backend and frontend cloud architecture components
and allocate resources for specific tasks. Beyond middleware, management software will also
include capabilities for usage monitoring, data integration, application deployment, and
disaster recovery.
The back end contains all the cloud computing resources, services, data storage, and
applications offered by a cloud service provider. A network is used to connect the frontend and
backend cloud architecture components, enabling data to be sent back and forth between them.
When users interact with the front end (or client-side interface), it sends queries to the back
end using middleware where the service model carries out the specific task or request.
The types of services available to use vary depending on the cloud-based delivery model or
service model you have chosen. There are three main cloud computing service models:
Infrastructure as a service (IaaS): This model offers on-demand access to fundamental
computing resources, such as servers, storage, and networking.
Platform as a service (PaaS): This model offers a computing platform with all the
underlying infrastructure and software tools needed to develop, run, and manage
applications.
Software as a service (SaaS): This model offers cloud-based applications that are
delivered and maintained by the service provider, eliminating the need for end users to
deploy software locally.
Cloud architecture layers
A simpler way of understanding how cloud architecture works is to think of all these
components as various layers placed on top of each other to create a cloud platform.
1. Hardware: The servers, storage, network devices, and other hardware that power the
cloud.
2. Virtualization: An abstraction layer that creates a virtual representation of the
hardware, allowing resources to be pooled and shared.
3. Application and service: This layer coordinates and supports requests from the
frontend user interface, offering different services based on the cloud service model,
from resource allocation to application development tools to web-based applications.
The basic kinds of resources that cloud infrastructure pools and delivers are:
1. Computing
2. Networking
3. Storage
The most important point is that cloud infrastructure should satisfy some basic
infrastructural constraints, such as transparency, scalability, security, and intelligent monitoring.
1. Hypervisor :
A hypervisor is a firmware or low-level program that enables virtualization; it divides and
allocates the physical resources of the cloud among several customers.
2. Management Software :
Management software helps in maintaining and configuring the infrastructure. Cloud
management software monitors and optimizes resources, data, applications, and services.
3. Deployment Software :
Deployment software helps in deploying and integrating the application on the cloud. So,
typically it helps in building a virtual computing environment.
4. Network :
It is one of the key components of cloud infrastructure, responsible for connecting
cloud services over the internet. A network is essential for transmitting data and
resources both externally and internally.
5. Server :
The server, which represents the computing portion of the cloud infrastructure, is responsible
for managing and delivering cloud services to various consumers and partners, maintaining
security, and so on.
6. Storage :
Storage represents the storage facility provided to different organizations for
storing and managing data. Because it keeps many copies of the data, it can recover by
drawing on another resource if one resource fails.
Along with this, virtualization is also considered an important component of cloud
infrastructure, because it abstracts the available data storage and computing power away
from the actual hardware; users then interact with their cloud infrastructure through a GUI
(Graphical User Interface).
Client-Server Model
Client-Server Model – The client-server model describes the communication between two
computing entities over a network. Clients are the ones requesting a resource or service, and
servers are the ones providing that resource or service. Note that a server can run one
or more programs and be involved in multiple communications with multiple clients at the
same time. The client initiates the communication and awaits a response from the server.
This model was developed in the '70s at Xerox Palo Alto Research Center (PARC).
In this article, we are going to take a dive into the client-server model and look at
how the Internet works via web browsers. This will give us a solid foundation in the Web
and help us work with web technologies with ease.
Client: When we use the word client in everyday speech, we mean a person or organization
using a particular service. Similarly, in the digital world, a client is a computer (host)
capable of receiving information or using a particular service from the service
providers (servers).
Servers: Likewise, when we say server, we mean a person or medium that serves something.
In the digital world, a server is a remote computer that provides information (data) or
access to particular services.
So, it is basically the client requesting something and the server serving it, as long as it is
present in the database.
A client follows a few steps to interact with a server:
The user enters the URL (Uniform Resource Locator) of the website or file, and the browser
sends a request to the DNS (Domain Name System) server.
The DNS server looks up the IP address of the web server, the browser sends an HTTP/HTTPS
request to that address, and the server responds with the website's files.
The browser then renders the files and the website is displayed. This rendering is done with
the help of the DOM (Document Object Model) interpreter, the CSS interpreter, and the JS
engine, collectively known as JIT (Just-in-Time) compilers.
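To make the model concrete, here is a minimal, self-contained sketch of a client-server exchange using only Python's standard library. The port number and response text are arbitrary choices for illustration, not part of any real deployment.

```python
# A minimal sketch of the client-server model: an HTTP server answers GET
# requests, and a client requests a resource and prints the response.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server provides the requested resource.
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

server = HTTPServer(("localhost", 8000), HelloHandler)  # port 8000 is arbitrary
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client initiates the communication and awaits the server's response.
with urllib.request.urlopen("http://localhost:8000/") as response:
    print(response.read().decode())  # "Hello from the server"

server.shutdown()
```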
Advantages of Client-Server model:
It is centralized and cost efficient, requires lower maintenance costs, and makes data recovery possible.
Disadvantages of Client-Server model:
Clients are prone to viruses, Trojans, and worms if these are present on the server or
uploaded to the server.
Phishing, the capture of login credentials or other useful user information, and
MITM (Man-in-the-Middle) attacks are common.
Two-Tier architecture: consists of the client, the server, and the protocol that links the
two tiers. The Graphical User Interface code resides on the client host and the domain
logic resides on the server host. The client-server GUI is written in high-level languages
such as C++ and Java.
Three-Tier architecture: consists of a presentation tier, which is the User Interface
layer, the application tier, which is the service layer that performs detailed processing,
and the data tier, which consists of a database server that stores information.
A single server hosting all the required data in a single place facilitates easy protection
of data and management of user authorization and authentication.
Resources such as network segments, servers and computers can be added to a client-
server network without any significant interruptions.
Data can be accessed efficiently without requiring clients and the server to be in close
proximity.
All nodes in the client-server system are independent, requesting data only from the
server, which facilitates easy upgrades, replacements, and relocation of the nodes.
Peer to peer networks are usually formed by groups of a dozen or fewer computers.
These computers all store their data using individual security but also share data
with all the other nodes.
The nodes in peer to peer networks both use resources and provide resources. So,
if the nodes increase, then the resource sharing capacity of the peer to peer
network increases. This is different than client server networks where the server
gets overwhelmed if the nodes increase.
Since nodes in peer to peer networks act as both clients and servers, it is difficult
to provide adequate security for the nodes. This can lead to denial of service
attacks.
Most modern operating systems such as Windows and Mac OS contain software to
implement peer to peer networks.
Advantages of Peer to Peer Computing
Some advantages of peer to peer computing are as follows −
Each computer in the peer to peer network manages itself. So, the network is quite easy to
set up and maintain.
In the client server network, the server handles all the requests of the clients. This
provision is not required in peer to peer computing and the cost of the server is saved.
It is easy to scale the peer to peer network and add more nodes. This only increases the
data sharing capacity of the system.
None of the nodes in the peer to peer network are dependent on the others for their
functioning.
1. Grid Computing :
Grid computing, as the name suggests, is a type of computing that combines resources from various
administrative domains to achieve a common goal. Its main goal is to virtualize resources to
solve problems simply, applying the resources of several networked computers to a single
technical or scientific problem at the same time.
2. Utility Computing:
Utility computing, as the name suggests, is a type of computing that provides services and computing
resources to customers. It is basically a facility provided to users on demand, charging
them for their specific usage. It is similar to cloud computing and therefore requires cloud-
like infrastructure.
Grid Computing: Its main purpose is to integrate the usage of computer resources from
cooperating partners in the form of VOs (Virtual Organizations). Its characteristics include
resource coordination, transparent access, dependable access, etc.
Utility Computing: Its main purpose is to make computing resources and infrastructure
management available to customers as per their need, and to charge them for specific usage
rather than a flat rate. Its characteristics include scalability, demand pricing, standardized
utility computing services, automation, etc.
Cloud Computing
Cloud computing uses a client-server architecture to deliver computing resources such as
servers, storage, databases, and software over the cloud (Internet) with pay-as-you-go pricing.
Cloud computing has become a very popular option for organizations by providing various
advantages, including cost savings, increased productivity, efficiency, performance, data
backups, disaster recovery, and security.
Grid Computing
Grid computing is also called "distributed computing." It links multiple computing
resources (PCs, workstations, servers, and storage elements) together and provides a
mechanism to access them.
The main advantages of grid computing are that it increases user productivity by providing
transparent access to resources, and work can be completed more quickly.
Let's understand the difference between cloud computing and grid computing.
Cloud computing is more flexible than grid computing, while grid computing is less flexible
than cloud computing.
In cloud computing, cloud servers are owned by infrastructure providers; in grid computing,
grids are owned and managed by the organization.
Cloud computing uses service models like IaaS, PaaS, and SaaS, whereas grid computing uses
systems like distributed computing, distributed information, and distributed pervasive systems.
But there may be an alternative for executives like you. Instead of installing a suite of
software on each computer, you just need to load one application. That application allows
employees to log in to a web-based service that hosts all the programs each user
requires for his/her job. Remote servers owned by another company run everything
from e-mail to word processing to complex data analysis programs. This is called cloud
computing, and it could change the entire computer industry.
In a cloud computing system, there is a significant workload shift. Local computers no
longer have to do all the heavy lifting when it comes to running applications; the cloud can
handle that heavy load easily and automatically. Hardware and software demands on the
user's side decrease. The only thing the user's computer needs to run is the system's cloud
computing interface software, which can be as simple as a web browser; the cloud's network
takes care of the rest.
The most widely used cloud computing applications are given below -
1. Art Applications
Cloud computing offers various art applications for quickly and easily designing attractive
cards, booklets, and images. Some of the most commonly used cloud art applications are given below:
i. Moo
Moo is one of the best cloud art applications. It is used for designing and printing business
cards, postcards, and mini cards.
ii. Vistaprint
Vistaprint allows us to easily design various printed marketing products such as business cards,
postcards, booklets, and wedding invitation cards.
iii. Adobe Creative Cloud
Adobe Creative Cloud is made for designers, artists, filmmakers, and other creative
professionals. It is a suite of apps that includes Photoshop (an image editing program),
Illustrator, InDesign, TypeKit, Dreamweaver, XD, and Audition.
2. Business Applications
Business applications are based on cloud service providers. Today, every organization requires
cloud business applications to grow its business. The cloud also ensures that business
applications are available to users 24*7.
i. MailChimp
MailChimp is an email publishing platform which provides various options to design,
send, and save templates for emails.
iii. Salesforce
Salesforce platform provides tools for sales, service, marketing, e-commerce, and more. It also
provides a cloud development platform.
iv. Chatter
Chatter helps us to share important information about the organization in real time.
v. Bitrix24
Bitrix24 is a collaboration platform that provides communication, management, and social collaboration tools.
vi. Paypal
Paypal offers the simplest and easiest online payment mode using a secure internet account.
Paypal accepts payments through debit cards, credit cards, and also from Paypal account
holders.
vii. Slack
Slack stands for Searchable Log of all Conversation and Knowledge. It provides a user-
friendly interface that helps us to create public and private channels for communication.
viii. Quickbooks
Quickbooks works on the motto "Run Enterprise anytime, anywhere, on any device."
It provides online accounting solutions for businesses. It allows more than 20 users to work
simultaneously on the same system.
3. Data Storage and Backup Applications
A list of data storage and backup applications in the cloud is given below -
i. Box.com
Box provides an online environment for secure content management,
workflow, and collaboration. It allows us to store different files such as Excel, Word, PDF,
and images on the cloud. The main advantage of using Box is that it provides a drag-and-drop
service for files and easily integrates with Office 365, G Suite, Salesforce, and more than 1,400
tools.
ii. Mozy
Mozy provides powerful online backup solutions for our personal and business data. It
automatically schedules a backup each day at a specific time.
iii. Joukuu
Joukuu provides the simplest way to share and track cloud-based backup files. Many users
use Joukuu to search files and folders and to collaborate on documents.
iv. Google G Suite
Google G Suite is one of the best cloud storage and backup applications. It includes Google
Calendar, Docs, Forms, Google+, Hangouts, as well as cloud storage and tools for managing
cloud apps. The most popular app in Google G Suite is Gmail, which offers free email
services to users.
4. Education Applications
Cloud computing has become very popular in the education sector. It offers various online
distance learning platforms and student information portals to students. The advantages
of using the cloud in education are strong virtual classroom environments,
ease of accessibility, secure data storage, scalability, greater reach for students, and
minimal hardware requirements for the applications.
i. Google Apps for Education
Google Apps for Education is the most widely used platform for free web-based email,
calendar, documents, and collaborative study.
ii. Chromebook for Education
Chromebook for Education is one of Google's most important projects. It is designed to
enhance education innovation.
5. Entertainment Applications
Entertainment industries use a multi-cloud strategy to interact with the target audience. Cloud
computing offers various entertainment applications such as online games and video
conferencing.
i. Online games
Today, cloud gaming has become one of the most important entertainment media. It offers various
online games that run remotely from the cloud. The best cloud gaming services include Shadow,
GeForce Now, Vortex, Project xCloud, and PlayStation Now.
ii. Video Conferencing Apps
Video conferencing apps provide a simple and instant connected experience. They allow us to
communicate with our business partners, friends, and relatives using cloud-based video
conferencing. The benefits of video conferencing are that it reduces cost, increases
efficiency, and removes interoperability issues.
6. Management Applications
Cloud computing offers various cloud management tools which help admins to manage all
types of cloud activities, such as resource deployment, data integration, and disaster recovery.
These management tools also provide administrative control over the platforms, applications,
and infrastructure.
i. Toggl
Toggl helps users track the time allocated to particular projects.
ii. Evernote
Evernote allows you to sync and save your recorded notes, typed notes, and other notes in one
convenient place. It is available for both free as well as a paid version.
It is available on platforms including Windows, macOS, Android, iOS, the browser, and Unix.
iii. Outright
Outright is used by management users for accounting purposes. It helps track income,
expenses, profits, and losses in real time.
iv. GoToMeeting
GoToMeeting provides video conferencing and online meeting apps, which allow you to
start a meeting with your business partners anytime, anywhere, using mobile phones or
tablets. Using the GoToMeeting app, you can perform management-related tasks such as
joining meetings in seconds, viewing presentations on a shared screen, and getting alerts
for upcoming meetings.
7. Social Applications
Social cloud applications allow a large number of users to connect with each other using social
networking applications such as Facebook, Twitter, LinkedIn, etc.
i. Facebook
Facebook is a social networking website that allows active users to share files, photos,
videos, statuses, and more with their friends, relatives, and business partners using the cloud
storage system. On Facebook, we always get notifications when our friends like or comment on
our posts.
ii. Twitter
Twitter is a microblogging social networking site on which users post and interact with short messages and follow friends, celebrities, and news sources.
iii. Yammer
Yammer is one of the best team collaboration tools; it allows a team of employees to chat
and to share images, documents, and videos.
iv. LinkedIn
LinkedIn is a professional social network for students, fresh graduates, and working professionals to build networks and showcase their profiles.
Some most common Security Risks of Cloud Computing are given below-
Data Loss
Data loss is the most common security risk of cloud computing. It is also known as data
leakage. Data loss occurs when data is deleted, corrupted, or rendered unreadable by
a user, software, or application. In a cloud computing environment, data loss can occur when
sensitive data ends up in somebody else's hands, when one or more data elements can no longer
be used by the data owner, when a hard disk fails, or when software is not updated.
Data Breach
A data breach is the process in which confidential data is viewed, accessed, or stolen by a
third party without authorization, such that the organization's data is compromised by hackers.
Vendor lock-in
Vendor lock-in is one of the biggest security risks in cloud computing. Organizations may face
problems when transferring their services from one vendor to another. Because different vendors
provide different platforms, moving from one cloud to another can be difficult.
Account hijacking
Account hijacking is a serious security risk in cloud computing. It is the process in which
an individual user's or organization's cloud account (a bank account, e-mail account, or social
media account) is stolen by hackers, who then use the stolen account to perform
unauthorized activities.
There are the following types of cloud deployment models -
o Public Cloud
o Private Cloud
o Hybrid Cloud
o Community Cloud
Public Cloud
Public cloud is open to all to store and access information via the Internet using the pay-per-
usage method.
In public cloud, computing resources are managed and operated by the Cloud Service Provider
(CSP).
Example: Amazon elastic compute cloud (EC2), IBM SmartCloud Enterprise, Microsoft,
Google App Engine, Windows Azure Services Platform.
o Accessibility: Public cloud services are available to anyone with an internet connection. Users
can access their data and programs at any time and from anywhere.
o Shared Infrastructure: Several users share the infrastructure in public cloud settings. Cost
reductions and effective resource use are made possible by this.
o Scalability: By using the public cloud, users can easily adjust the resources they need based on
their requirements, allowing for quick scaling up or down.
o Pay-per-Usage: When using the public cloud, payment is based on usage, so users only pay
for the resources they actually use. This helps optimize costs and eliminates the need for upfront
investments.
o Managed by Service Providers: Cloud service providers manage and maintain public cloud
infrastructure. They handle hardware maintenance, software updates, and security tasks,
relieving users of these responsibilities.
o Reliability and Redundancy: Public cloud providers ensure high reliability by implementing
redundant systems and multiple data centers. By doing this, the probability of losing data and
experiencing service disruptions is reduced.
o Security Measures: Public cloud providers implement robust security measures to protect user
data. These include encryption, access controls, and regular security audits.
Private Cloud
Private cloud is also known as an internal cloud or corporate cloud. Organizations use it to
build and manage their own data centers, either internally or through a third party. It can
be deployed using open-source tools such as OpenStack and Eucalyptus.
Based on location and management, the National Institute of Standards and Technology (NIST)
divides private clouds into the following two parts-
o On-premise private cloud: An on-premise private cloud is situated within the physical
infrastructure of the organization. It involves setting up and running a dedicated data
center that offers cloud services solely for internal company use. The
infrastructure remains completely in the organization's hands, which gives them
the freedom to modify and configure it in any way they see fit. Organizations can
successfully manage security and compliance issues with this degree of control.
However, on-premise private cloud setup and management require significant
expenditures on hardware, software, and IT expertise.
o Outsourced private cloud: An outsourced private cloud involves partnering with a
third-party service provider to host and manage the cloud infrastructure on behalf of
the organization. The provider may operate the private cloud in their data center or a
colocation facility. In this arrangement, the organization benefits from the expertise and
resources of the service provider, alleviating the burden of infrastructure management.
The outsourced private cloud model offers scalability, as the provider can adjust
resources based on the organization's needs. Due to its flexibility, it is a desirable choice
for businesses that desire the advantages of a private cloud deployment without the
initial capital outlay and ongoing maintenance expenses involved with an on-premise
implementation.
Compared to public cloud options, both on-premise and external private clouds give businesses
more control over their data, apps, and security. Private clouds are particularly suitable for
organizations with strict compliance requirements, sensitive data, or specialized workloads that
demand high levels of customization and security.
Hybrid Cloud
Hybrid Cloud is a combination of the public cloud and the private cloud. In other words:
Hybrid Cloud = Public Cloud + Private Cloud
Hybrid cloud is partially secure because the services which are running on the public cloud can
be accessed by anyone, while the services which are running on a private cloud can be accessed
only by the organization's users.
Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365
(MS Office on the Web and One Drive), Amazon Web Services.
o Hybrid cloud is suitable for organizations that require more security than the public cloud.
o Hybrid cloud helps you to deliver new products and services more quickly.
o Hybrid cloud provides an excellent way to reduce the risk.
o Hybrid cloud offers flexible resources because of the public cloud and secure resources because
of the private cloud.
o Hybrid facilitates seamless integration between on-premises infrastructure and cloud
environments.
o Hybrid provides greater control over sensitive data and compliance requirements.
o Hybrid enables efficient workload distribution based on specific needs and performance
requirements.
o Hybrid offers cost optimization by allowing organizations to choose the most suitable cloud
platform for different workloads.
o Hybrid enhances business continuity and disaster recovery capabilities with private and public
cloud resources.
o Hybrid supports hybrid cloud architecture, allowing applications and data to be deployed across
multiple cloud environments based on their unique requirements.
Community Cloud
Community cloud allows systems and services to be accessed by a group of several
organizations in order to share information between those organizations and a specific
community. It is owned, managed, and operated by one or more organizations in the community,
a third party, or a combination of them.
o Community cloud is cost-effective because the whole cloud is being shared by several
organizations or communities.
o Community cloud is suitable for organizations that want to have a collaborative cloud with
more security features than the public cloud.
o It provides better security than the public cloud.
o It provides a collaborative and distributed environment.
o Community cloud allows us to share cloud resources, infrastructure, and other capabilities
among various organizations.
Cost: Cost is an important factor when choosing a cloud deployment model, as it indicates how
much you are willing to pay for it.
Scalability: Scalability tells about the current activity status and how much we can scale it.
Easy to use: It tells how well trained your staff are and how easily they can manage
these models.
Compliance: Compliance tells about the laws and regulations which impact the
implementation of the model.
Privacy: Privacy tells about what data you gather for the model.
Each model has some advantages and some disadvantages, and the selection of the best is only done
on the basis of your requirement. If your requirement changes, you can switch to any other model.
Infrastructure as a Service (IaaS) helps in delivering computer infrastructure on an external basis for
supporting operations. Generally, IaaS provides services such as networking equipment, devices,
databases, and web servers.
Infrastructure as a Service (IaaS) helps large organizations, and large enterprises in managing and
building their IT platforms. This infrastructure is flexible according to the needs of the client.
Characteristics of IaaS
There are the following characteristics of IaaS -
o Resources are available as a service
o Services are highly scalable
o Dynamic and flexible
o GUI and API-based access
o Automated administrative tasks
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google Compute
Engine (GCE), Rackspace, and Cisco Metacloud.
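To illustrate how IaaS is consumed programmatically, the following is a hedged sketch that provisions a virtual server on AWS using the boto3 SDK. The AMI ID below is a placeholder, and the sketch assumes the boto3 package is installed and valid AWS credentials are configured.

```python
# A sketch of IaaS in practice: renting a virtual server via an API call.
# Assumes: pip install boto3, plus configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an example

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t2.micro",          # a small, pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)
# The provider returns metadata about the infrastructure it just provisioned.
print(response["Instances"][0]["InstanceId"])
```

The same pattern, an authenticated API call that returns provisioned resources, applies to other IaaS providers such as DigitalOcean or Google Compute Engine, each through its own SDK.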
Advantages of IaaS
IaaS cloud providers typically provide better security than an organization could achieve with its own in-house infrastructure.
Disadvantages of IaaS
Platform as a Service (PaaS) is a type of cloud computing that helps developers to build applications
and services over the Internet by providing them with a platform.
Characteristics of PaaS
There are the following characteristics of PaaS -
Advantages of PaaS
PaaS is simple and very convenient for the user, as it can be accessed via a web browser.
Disadvantages of PaaS
PaaS gives users limited control over the infrastructure: they have less control over the
environment and cannot make certain customizations.
Software as a Service (SaaS) is a type of cloud computing model in which services and
applications are delivered over the Internet. SaaS applications are called Web-Based Software
or Hosted Software.
SaaS accounts for around 60 percent of cloud solutions, and due to this, it is the model most
preferred by companies.
Characteristics of SaaS
There are the following characteristics of SaaS -
Advantages of SaaS
SaaS users can access app data from anywhere on the Internet.
Disadvantages of SaaS
SaaS solutions have limited customization, which means they have some restrictions within
the platform.
Because SaaS solutions are generally cloud-based, they require a stable internet connection to
work properly.
Self-Service: Self-service cloud computing is a private cloud service where the customer
provisions storage and launches applications without an external cloud service provider. With
a self-service cloud, users access a web-based portal to request or configure servers and
launch applications.
Web-Based: It means you can access your resources via Web-Based applications.
Automated: Most of the things in the Cloud are automated, and human intervention is less.
Pay As You Go Model: You only have to pay when utilizing cloud resources.
Secure: Cloud services create a copy of the data that you want to store to prevent any form of
data loss. If one server loses the data by any chance, the copy version is restored from the
other server.
The high-level talking points for this analogy were:
On-premises: is like owning your car - you can go anywhere you want
at any time (full control), in a car make/model/color/trim of your choice,
but you own the car and you're responsible for its maintenance
IaaS: is like a car rental service - you still can go anywhere you want at
any time, with some limits on car choices, but you don't have to maintain
the vehicles; just take the keys and go
PaaS: is like public transportation - you can go to places as
defined/limited by available routes and schedules, but it’s easy to use and
pay-per-use (full economies of scale)
IaaS: It provides a virtual data center to store information and create platforms for app
development, testing, and deployment.
PaaS: It provides virtual platforms and tools to create, test, and deploy apps.
SaaS: It provides web software and apps to complete business tasks.
That’s why APIs play a critical role in cloud computing. And that role is only going to expand
as organizations’ cloud deployments expand and grow more complex.
Architecture
First, there’s your API architecture. How will you design your API architecture for the cloud?
There are four key layers you’ll need to consider when planning your cloud architecture:
1. Information management layer — your data repositories.
2. Application layer — where your applications live.
3. Integration layer — where APIs connect your services.
4. Interaction layer — where your API gateway enables interaction between employees,
customers, and partners.
The API gateway plays a critical role in your cloud deployment. It allows you to deploy across
multiple clouds, enforce security policies, and control access.
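As a rough illustration of that role, a client request routed through a gateway might look like the sketch below. The gateway URL, resource path, and API-key header are hypothetical; each gateway product (Akana, Amazon API Gateway, and others) defines its own conventions.

```python
# A sketch of calling a backend service through an API gateway.
# The endpoint and header below are invented for illustration.
import json
import urllib.request

request = urllib.request.Request(
    "https://gateway.example.com/orders/v1/status",  # hypothetical endpoint
    headers={"x-api-key": "YOUR_API_KEY"},           # gateway-enforced access control
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))  # the gateway routed the call to the right service
```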
Orchestration
Next, you’ll need to consider API orchestration. How will you orchestrate your individual API
calls to cloud services?
To do this, you’ll need to use your API gateway. You can design your API catalog and use the
gateway to decide how to service each request in the cloud.
Integration
Finally, you’ll want to consider API integration. How will you use APIs to integrate your cloud
native applications?
To do this, you’ll need an API platform like Akana. Using Akana ensures you can connect your
cloud applications easily, create new cloud APIs, and work with your existing data sources.
Take John Deere, for example. John Deere uses APIs to create smart, connected products. This
helps them build a new type of marketplace for the food system.
Take this Fortune 500 company, for example. They use cloud APIs to ensure security and
protect back-end data, while exposing services to customers. This ensures customer satisfaction
and better experiences.
A cloud service provider, or CSP, is a company that offers components of cloud computing --
typically, infrastructure as a service (IaaS), software as a service (SaaS) or platform as a
service (PaaS).
Cloud service providers use their own data centers and compute resources to host cloud
computing-based infrastructure and platform services for customer organizations. Cloud
services typically are priced using various pay-as-you-go subscription models. Customers are
charged only for resources they consume, such as the amount of time a service is used or the
storage capacity or virtual machines used.
For SaaS products, cloud service providers may host and deliver their own managed services
to users. Or they can act as a third party, hosting the app of an independent software vendor.
The most well-known cloud service platforms are Amazon Web Services (AWS), Google
Cloud (formerly Google Cloud Platform or GCP) and Microsoft Azure.
Mobility. Resources and services purchased from a cloud service provider can
be accessed from any physical location that has a working network connection.
Disaster recovery. Cloud computing services typically offer quick and reliable
disaster recovery.
Challenges
Hidden costs. Cloud use may incur expenses not factored into the initial return
on investment analysis. For example, unplanned data needs can force a customer
to exceed contracted amounts, leading to extra charges. To be cost-effective,
companies also must factor in additional staffing needs for monitoring and
managing cloud use. Terminating use of on-premises systems also has costs, such
as writing off assets and data cleanup.
Cloud migration. Moving data to and from the cloud can take time.
Companies might not have access to their critical data for weeks, or even months,
while large amounts of data are first transferred to the cloud.
Cloud security. When trusting a provider with critical data, organizations risk
security breaches, compromised credentials and other substantial security risks.
Also, providers may not always be transparent about security issues and practices.
Companies with specific security needs may rely on open source cloud security
tools, in addition to the provider's tools.
IaaS providers. In the IaaS model, the cloud service provider delivers
infrastructure components that would otherwise exist in an on-premises data
center. These components include servers, storage, networking and the
virtualization layer, which the IaaS provider hosts in its own data center. CSPs
may also complement their IaaS products with services such as monitoring,
automation, security, load balancing and storage resiliency.
SaaS providers. SaaS vendors use the cloud to host and deliver software applications
that users access over the internet, typically through a web browser, without installing
or maintaining the software locally.
PaaS providers. The third type of cloud service provider, PaaS vendors, offers
cloud infrastructure and services that users can access to perform various
functions. PaaS products are commonly used in software development. In
comparison to an IaaS provider, PaaS providers add more of the application
stack, such as operating systems and middleware, to the underlying infrastructure.
Cloud providers are also categorized by whether they deliver public cloud, private cloud
or hybrid cloud services.
Some cloud service providers differentiate themselves by tailoring their offerings to a vertical
market's requirements. Their cloud-based services might deliver industry-specific
functionality and tools or help users meet certain regulatory requirements. For instance,
several healthcare cloud products let healthcare providers store, maintain, optimize and back
up personal health information. Industry-specific cloud offerings encourage organizations to
use multiple cloud service providers.
Amazon was the first major cloud provider, with the 2006 offering of Amazon Simple
Storage Service. Since then, the growing cloud market has seen rapid development of
Amazon's cloud platform, as well as Microsoft's Azure platform and Google Cloud. These
three vendors continue to jockey for the lead on a variety of cloud fronts. The vendors are
developing cloud-based services around emerging technologies, such as machine learning,
artificial intelligence, containerization and Kubernetes.
Other major cloud service providers in the market include the following:
Adobe
Akamai Technologies
Alibaba Cloud
Apple
Box
Citrix
DigitalOcean
IBM Cloud
Joyent
Oracle Cloud
Rackspace Cloud
Salesforce
Cost. The cost is usually based on a per-use utility model, but all subscription
details and provider-specific variations must be reviewed. Cost is often considered
one of the main reasons to adopt a cloud service platform.
Physical location of the servers. Server location may be an important factor for
sensitive data, which must meet data storage regulations.
Security. Cloud security should top the list of cloud service provider
considerations. Organizations such as the Cloud Security Alliance offer
certification to cloud providers that meet its criteria.
1. Amazon Web Services (AWS)
Features of AWS
AWS provides various powerful features for building scalable, cost-effective enterprise
applications. Some important features of AWS are given below-
o AWS is scalable because it has an ability to scale the computing resources up or down
according to the organization's demand.
o AWS is cost-effective as it works on a pay-as-you-go pricing model.
o It provides various flexible storage options.
o It offers various security services such as infrastructure security, data encryption,
monitoring & logging, identity & access control, penetration testing, and protection
against DDoS attacks.
o It can efficiently manage and secure Windows workloads.
2. Microsoft Azure
Microsoft Azure is also known as Windows Azure. It supports various operating
systems, databases, programming languages, frameworks that allow IT professionals
to easily build, deploy, and manage applications through a worldwide network. It also
allows users to create different groups for related utilities.
5. VMware Cloud
VMware Cloud is a Software-Defined Data Center (SDDC) unified platform for the
hybrid cloud. It allows cloud providers to build agile, flexible, efficient, and robust
cloud services.
Features of VMware
o VMware Cloud works on a pay-per-use model and monthly subscription plans.
o It provides better customer satisfaction by protecting the user's data.
o It can easily create a new VMware Software-Defined Data Center (SDDC) cluster on
AWS cloud by utilizing a RESTful API.
o It provides flexible storage options. We can manage our application storage on a per-
application basis.
o It provides a dedicated high-performance network for managing the application traffic
and also supports multicast networking.
o It eliminates the time and cost complexity.
6. Oracle cloud
The Oracle Cloud platform is offered by the Oracle Corporation. It combines Platform as
a Service, Infrastructure as a Service, Software as a Service, and Data as a Service with
cloud infrastructure. It is used for tasks such as moving applications to the
cloud, managing development environments in the cloud, and optimizing connection
performance.
7. Red Hat
Red Hat Virtualization is an open-standard server and desktop virtualization platform
produced by Red Hat. It is very popular in Linux environments for providing
infrastructure solutions for virtualized servers as well as technical workstations. Many
small and medium-sized organizations use Red Hat to run their operations
smoothly. It offers higher density, better performance, agility, and security for
resources. It also improves the organization's economy by providing cheaper and
easier management capabilities.
8. DigitalOcean
DigitalOcean is a unique cloud provider that offers computing services to
organizations. It was founded in 2011 by Ben and Moisey Uretsky. It is one of the best
cloud providers, allowing us to manage and deploy web applications.
Features of DigitalOcean
o It uses the KVM hypervisor to allocate physical resources to the virtual servers.
o It provides high-quality performance.
o It offers a digital community platform that helps answer queries and collect
feedback.
o It allows developers to use cloud servers to quickly create new virtual machines for
their projects.
o It offers one-click apps for droplets. These apps include MySQL, Docker, MongoDB,
WordPress, phpMyAdmin, the LAMP stack, Ghost, and machine learning tools.
9. Rackspace
Rackspace offers cloud computing services such as hosting web applications, Cloud
Backup, Cloud Block Storage, Databases, and Cloud Servers. Rackspace's main aim is to
make it easy to manage private and public cloud deployments. Its data centers
operate in the USA, UK, Hong Kong, and Australia.
Features of Rackspace
o Rackspace provides various tools that help organizations to collaborate and
communicate more efficiently.
o We can access files stored on the Rackspace cloud drive anywhere, anytime,
using any device.
o It offers six data centers globally.
o It can manage both virtual servers and dedicated physical servers on the same network.
o It provides better performance at a lower cost.
[Figure: Virtual clusters mapped onto physical clusters. Virtual Clusters 1-4 are provisioned across the physical clusters, and each physical machine runs a VMM (virtual machine monitor) that hosts virtual machines VM1-VM4.]
1. Encryption
2. Hashing
3. Digital Signature
This chapter establishes a set of fundamental cloud security mechanisms, several of which can
be used to counter the security threats described in Chapter 6.
1. Encryption
Data, by default, is coded in a readable format known as plaintext. When transmitted over a
network, plaintext is vulnerable to unauthorized and potentially malicious access.
The encryption mechanism is a digital coding system dedicated to preserving the
confidentiality and integrity of data. It is used for encoding plaintext data into a protected and
unreadable format.
Encryption technology commonly relies on a standardized algorithm called a cipher to
transform original plaintext data into encrypted data, referred to as ciphertext. Access to
ciphertext does not divulge the original plaintext data, apart from some forms of metadata,
such as message length and creation date. When encryption is applied to plaintext data, the
data is paired with a string of characters called an encryption key, a secret message that is
established by and shared among authorized parties. The encryption key is used to decrypt
the ciphertext back into its original plaintext format.
The encryption mechanism can help counter the traffic eavesdropping, malicious
intermediary, insufficient authorization, and overlapping trust boundaries security threats. For
example, malicious service agents that attempt traffic eavesdropping are unable to decrypt
messages in transit if they do not have the encryption key (Figure 1).
Figure 1 A malicious service agent is unable to retrieve data from an encrypted message. The
retrieval attempt may furthermore be revealed to the cloud service consumer. (Note the use
of the lock symbol to indicate that a security mechanism has been applied to the message
contents.)
There are two common forms of encryption known as symmetric encryption and asymmetric
encryption.
Symmetric Encryption
Symmetric encryption uses the same key for both encryption and decryption, both of which
are performed by authorized parties that use the one shared key. Also known as secret key
cryptography, messages that are encrypted with a specific key can be decrypted by only that
same key. Parties that rightfully decrypt the data are provided with evidence that the original
encryption was performed by parties that rightfully possess the key. A basic authentication
check is always performed, because only authorized parties that own the key can create
messages. This maintains and verifies data confidentiality.
Note that symmetrical encryption does not have the characteristic of non-repudiation, since
determining exactly which party performed the message encryption or decryption is not
possible if more than one party is in possession of the key.
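As an illustrative sketch of the shared-key round trip (not part of the original text), the following Python snippet uses the third-party cryptography package, an assumed dependency whose Fernet construction is a symmetric, AES-based cipher chosen here only for illustration:

from cryptography.fernet import Fernet  # assumed dependency: pip install cryptography

# The single shared secret: any party holding it can both encrypt and decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"plaintext message")  # plaintext -> protected ciphertext
plaintext = cipher.decrypt(ciphertext)             # only a holder of key can reverse it
assert plaintext == b"plaintext message"

Because the same key performs both operations, any key holder could have produced a given message, which is exactly why symmetric encryption alone cannot provide non-repudiation.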
Asymmetric Encryption
Asymmetric encryption relies on the use of two different keys, namely a private key and a
public key. With asymmetric encryption (which is also referred to as public key cryptography),
the private key is known only to its owner while the public key is commonly available. A
document that was encrypted with a private key can only be correctly decrypted with the
corresponding public key. Conversely, a document that was encrypted with a public key can
be decrypted only using its private key counterpart. As a result of two different keys being
used instead of just the one, asymmetric encryption is almost always computationally slower
than symmetric encryption.
The level of security that is achieved is dictated by whether a private key or public key was
used to encrypt the plaintext data. As every asymmetrically encrypted message has its own
private-public key pair, messages that were encrypted with a private key can be correctly
decrypted by any party with the corresponding public key. This method of encryption does
not offer any confidentiality protection, even though successful decryption proves that the
text was encrypted by the rightful private key owner. Private key encryption therefore offers
integrity protection in addition to authenticity and non-repudiation. A message that was
encrypted with a public key can only be decrypted by the rightful private key owner, which
provides confidentiality protection. However, any party that has the public key can generate
the ciphertext, meaning this method provides neither message integrity nor authenticity
protection due to the communal nature of the public key.
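As a hedged sketch of the two-key scheme (again assuming the cryptography package), the snippet below encrypts with the public key so that only the private key owner can decrypt, which is the confidentiality-protecting direction described above:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key stays with its owner; the public key may be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone with the public key can create ciphertext (no authenticity)...
ciphertext = public_key.encrypt(b"confidential data", oaep)
# ...but only the private key owner can recover the plaintext (confidentiality).
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"confidential data"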
NOTE
The encryption mechanism, when used to secure Web-based data transmissions, is most
commonly applied via HTTPS, which refers to the use of SSL/TLS as an underlying encryption
protocol for HTTP. TLS (transport layer security) is the successor to the SSL (secure sockets
layer) technology. Because asymmetric encryption is usually more time-consuming than
symmetric encryption, TLS uses the former only for its key exchange method. TLS systems then
switch to symmetric encryption once the keys have been exchanged.
Most TLS implementations primarily support RSA as the chief asymmetrical encryption cipher,
while ciphers such as RC4, Triple-DES, and AES are supported for symmetrical encryption.
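To observe this two-phase behavior from a client's perspective, here is a small sketch using Python's standard ssl and socket modules to perform a TLS handshake; the host name example.com is a placeholder:

import socket
import ssl

context = ssl.create_default_context()
# The handshake performs the asymmetric key exchange; the connection then
# switches to the negotiated symmetric cipher for the actual data transfer.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g., 'TLSv1.3'
        print(tls.cipher())   # the negotiated cipher suite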
2. Hashing
The hashing mechanism is used when a one-way, non-reversible form of data protection is
required. Once hashing has been applied to a message, it is locked and no key is provided for
the message to be unlocked. A common application of this mechanism is the storage of
passwords.
Hashing technology can be used to derive a hashing code or message digest from a message,
which is often of a fixed length and smaller than the original message. The message sender
can then utilize the hashing mechanism to attach the message digest to the message. The
recipient applies the same hash function to the message to verify that the produced message
digest is identical to the one that accompanied the message. Any alteration to the original
data results in an entirely different message digest and clearly indicates that tampering has
occurred.
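The digest-and-compare procedure can be sketched in a few lines of Python using the standard hashlib module; SHA-256 and the message contents are assumed choices for the example:

import hashlib

message = b"transfer 100 to account A"               # hypothetical message
digest = hashlib.sha256(message).hexdigest()         # sender attaches this digest

received = b"transfer 900 to account A"              # altered in transit
if hashlib.sha256(received).hexdigest() != digest:
    print("digest mismatch: the message was tampered with")

Note the one-way property: the digest reveals nothing usable about the original message and cannot be "decrypted" back into it.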
In addition to its utilization for protecting stored data, the cloud threats that can be mitigated
by the hashing mechanism include malicious intermediary and insufficient authorization. An
example of the former is illustrated in Figure 3.
Figure 3 A hashing function is applied to protect the integrity of a message that is intercepted
and altered by a malicious service agent, before it is forwarded. The firewall can be configured
to determine that the message has been altered, thereby enabling it to reject the message
before it can proceed to the cloud service.
3. Digital Signature
The digital signature mechanism is a means of providing data authenticity and integrity
through authentication and non-repudiation. A message is assigned a digital signature prior
to transmission, which is then rendered invalid if the message experiences any subsequent,
unauthorized modifications. A digital signature provides evidence that the message received
is the same as the one created by its rightful sender.
Both hashing and asymmetrical encryption are involved in the creation of a digital signature,
which essentially exists as a message digest that was encrypted by a private key and appended
to the original message. The recipient verifies the signature validity and uses the
corresponding public key to decrypt the digital signature, which produces the message digest.
The hashing mechanism can also be applied to the original message to produce this message
digest. Identical results from the two different processes indicate that the message
maintained its integrity.
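Combining the previous two mechanisms, a hedged sketch of signing and verification with the cryptography package (RSA with PSS padding is an assumed algorithm choice, not one named in the text) looks like this:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

message = b"signed payload"
# sign() hashes the message and encrypts the digest with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

try:
    # verify() recomputes the digest and checks it against the decrypted signature.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: integrity and authenticity confirmed")
except InvalidSignature:
    print("signature invalid: message altered or not signed by the key owner")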
The digital signature mechanism helps mitigate the malicious intermediary, insufficient
authorization, and overlapping trust boundaries security threats (Figure 5).
Figure 5 Cloud Service Consumer B sends a message that was digitally signed but was altered
by trusted attacker Cloud Service Consumer A. Virtual Server B is configured to verify digital
signatures before processing incoming messages even if they are within its trust boundary.
The message is revealed as illegitimate due to its invalid digital signature, and is therefore
rejected by Virtual Server B.
6. Single Sign-On (SSO)
The SSO mechanism essentially enables mutually independent cloud services and IT resources
to generate and circulate runtime authentication and authorization credentials. The
credentials initially provided by the cloud service consumer remain valid for the duration of a
session, while its security context information is shared (Figure 9). The SSO mechanism’s
security broker is especially useful when a cloud service consumer needs to access cloud
services residing on different clouds (Figure 10).
Figure 9 A cloud service consumer provides the security broker with login credentials (1). The
security broker responds with an authentication token (message with small lock symbol) upon
successful authentication, which contains cloud service consumer identity information (2) that
is used to automatically authenticate the cloud service consumer across Cloud Services A, B,
and C (3).
Figure 10 The credentials received by the security broker are propagated to ready-made
environments across two different clouds. The security broker is responsible for selecting the
appropriate security procedure with which to contact each cloud.
It primarily enhances the usability of cloud-based environments for access and management
of distributed IT resources and solutions.
The migration of applications to ATN’s new PaaS platform was successful, but also raised a
number of new concerns pertaining to the responsiveness and availability of PaaS-hosted IT
resources. ATN intends to move more applications to a PaaS platform, but decides to do so by
establishing a second PaaS environment with a different cloud provider. This will allow them
to compare cloud providers during a three-month assessment period.
To accommodate this distributed cloud architecture, the SSO mechanism is used to establish
a security broker capable of propagating login credentials across both clouds (Figure 10). This
enables a single cloud resource administrator to access IT resources on both PaaS
environments without having to log in separately to each one.
DTGOV informs its clients about the availability of these new security policies. Cloud
consumers can optionally choose to utilize them, and doing so results in increased fees.
8. Hardened Virtual Server Images
As previously discussed, a virtual server is created from a template configuration called a
virtual server image (or virtual machine image). Hardening is the process of stripping
unnecessary software from a system to limit potential vulnerabilities that can be exploited by
attackers. Removing redundant programs, closing unnecessary server ports, and disabling
unused services, internal root accounts, and guest access are all examples of hardening.
A hardened virtual server image is a template for virtual service instance creation that has
been subjected to a hardening process (Figure 13). This generally results in a virtual server
template that is significantly more secure than the original standard image.
Figure 13 A cloud provider applies its security policies to harden its standard virtual server
images. The hardened image template is saved in the VM images repository as part of a
resource management system.
Hardened virtual server images help counter the denial of service, insufficient authorization,
and overlapping trust boundaries threats.
2. Compliance: There are many regulations in place related to data and hosting. To comply
with regulations (the Federal Information Security Management Act, the Health Insurance
Portability and Accountability Act, etc.), the user may have to adopt deployment modes that
are expensive.
3. Security: Cloud-based services involve a third party for storage and security. Can one
assume that a cloud-based company will protect and secure one's data if one is using their
services at a very low cost or for free? They may share users' information with others.
Security presents a real threat to the cloud.
4. Sustainability: This issue refers to minimizing the effect of cloud computing on the
environment. Countries with favorable conditions, such as Finland, Sweden, and Switzerland,
where the climate favors natural cooling and renewable electricity is readily available, are
trying to attract cloud computing data centers. But beyond nature's favors, would these
countries have enough technical infrastructure to sustain high-end clouds?
5. Abuse: While providing cloud services, it should be ascertained that the client is not
purchasing cloud computing services for a nefarious purpose. In 2009, a banking Trojan
illegally used the popular Amazon service as a command-and-control channel that issued
software updates and malicious instructions to PCs infected by the malware. So hosting
companies and their servers should have proper measures in place to address these issues.
6. Higher Cost: If you want to use cloud services uninterruptedly, you need a powerful
network with higher bandwidth than an ordinary internet connection, and if your
organization is broad and large, an ordinary cloud service subscription won't suit it.
Otherwise, you might face hassles using an ordinary cloud service while working on complex
projects and applications. This is a major problem for small organizations, restricting them
from diving into cloud technology for their business.
7. Recovery of lost data in contingency: Before subscribing to any cloud service provider, go
through all the norms and documentation and check whether their services match your
requirements and whether they maintain a sufficient, well-maintained resource infrastructure
with proper upkeep. Once you subscribe to the service, you essentially hand your data over
to a third party. If you choose a proper cloud service, you will not need to worry about
recovering lost data in any contingency.
8. Upkeeping (management) of Cloud: Maintaining a cloud is a herculean task because a
cloud architecture contains a large resource infrastructure and brings other challenges and
risks as well, such as user satisfaction. Since users usually pay for the resources they have
consumed, it sometimes becomes hard to decide how much to charge when a user wants
scalability and extended services.
9. Lack of resources/skilled expertise: One of the major issues that companies and
enterprises face today is the lack of resources and skilled employees. Every second
organization seems interested in, or has already moved to, cloud services. As a result, the
workload in the cloud is increasing, and cloud service hosting companies need continuous,
rapid advancement. Due to these factors, organizations are having a tough time keeping up
to date with the tools. As new tools and technologies emerge every day, more skilled and
trained employees are needed. These challenges can only be minimized through additional
training of IT and development staff.
10. Pay-per-use service charges: Cloud computing services are on-demand services: a user
can extend or compress the volume of resources as needed, and pays for what has been
consumed. This makes it difficult to define a fixed cost for a particular quantity of services.
Such ups and downs and price variations make budgeting for cloud computing difficult and
intricate. It is not easy for a firm's owner to anticipate consistent demand and fluctuations
across seasons and events, so it is hard to build a budget for a service that could consume
several months of the budget in a few days of heavy use.
Unit 6: Future of Cloud Computing
Open the link and watch the video for better understanding.
Key Components of OS
There are different flavors of operating systems, from real-time OS and desktop OS all
the way to mainframe OS. The most recent is the Cloud OS.
Depending on the type of OS, you may miss something here or have something extra.
For example, an embedded OS may not have a user interface and everything is
controlled remotely. For the desktop OS, you may have extra commonly used
applications such as a calculator, a calendar, a browser, and so on.
When a user with a compatible mobile device opts for a location-based service, that
information is delivered to location-aware applications, which aim to present resources
according to the user's current location. A location-aware application may in turn forward
the user's physical location to other location-aware or social media applications. Users can
define which applications should receive this information and how detailed it should be, or
they can bypass automatic detection altogether by manually entering location coordinates.
Intelligent fabrics:
https://www.youtube.com/watch?v=spFAUlslssg
Future of cloud TV:
What is Cloud TV:
Cloud TV is a new type of satellite TV that offers a number of benefits over traditional
satellite TV. These benefits include scalability, the ability to add or remove channels,
lower prices for premium features, and greater flexibility in terms of where you can watch
your favorite shows.
With Cloud TV, you no longer need to install expensive and cumbersome equipment in
your home. Instead, all you need is an Internet connection and a compatible device such
as a smart TV, computer, or mobile phone.
To watch Cloud TV, simply log in to your account and select the channels that you want to
watch. You can then watch them live or recorded, wherever you are.
Cloud TV is the future of satellite TV. It offers a number of advantages that make it an
attractive option for both homes and businesses. If you are looking for a better, more
affordable, and more flexible way to watch TV, then Cloud TV is definitely worth considering.
Future of C-TV
The future of Cloud TV is looking very bright. With its many benefits over traditional
satellite TV, it is no wonder that this technology is becoming more and more popular
every day. If you are looking for a better, more affordable and more flexible way to watch
TV, then Cloud TV is definitely the way to go.
One of the biggest benefits of Cloud TV is that it is much more scalable than traditional
satellite TV. This means that you can add or remove channels as you please without going
through the long and costly process of installing new equipment.
You can also get packages that include HD channels, DVR service and other premium
features for a lower price than you would pay for these services from a traditional satellite
TV provider. This also depends on the platform you choose for Cloud TV.
Finally, Cloud TV is also much more flexible when it comes to where you can watch your
favorite shows. With traditional satellite TV, you are limited to watching TV in your
home. With Cloud TV, you can watch TV anywhere that you have an Internet connection.
Businesses are opting to use smart devices to increase quality output. Following are a
couple of interesting uses of Google Glass:
Fast training for employees: It takes a lot of training to operate a laser cutter. This can be
taught more efficiently by overlaying visual aids onto the machine, enabling employees to
learn how to use the equipment faster than with conventional tutorials.
Museum tours: The audio recordings used today will be enhanced with visual components.
It would be great to look at any painting hanging at the Met and have software recognize
it and retrieve additional information on demand with a simple gesture.
If you are worried about having to maintain software for 3 different platforms (iPhone,
Android and Web), you will have to maintain software for 30 different platforms (smart
phone, smart watch, smart glass, smart TV etc…) tomorrow.
Cloud computing has become a great enabler of cross platform applications, i.e.
applications that can run on multiple platforms.
2: Prototype often
Application developers now have app prototyping tools that enable users and developers
to see the flows and the looks of applications as the apps are being built. This is important
in terms of user acceptance and ultimate app readiness. Every time you incorporate a new
application element, create a working prototype for end users to test drive and comment
on. It is easier to make adjustments in earlier stages of app development than right before
the app is scheduled to be moved into production.
Home based CC:
Home cloud computing is the process of using a remote server to store, manage and
access data and applications from home. It allows users to access their files, applications,
and other digital content from any device with an internet connection, whether it be a
computer, phone, or tablet. Private cloud computing can also be used to back up data and
protect the information in case of emergencies.
A lot of individuals and small businesses use home/private cloud computing to browse,
search through files and even work on projects from any device. It’s a great alternative to
setting up a server in your house because it eliminates the need for physical storage
devices that contain data.
Mobile cloud:
MCC stands for Mobile Cloud Computing, which is defined as a combination of mobile
computing, cloud computing, and wireless networks that come together to bring rich
computational resources to mobile users, network operators, and cloud computing
providers. Mobile Cloud Computing is meant to make it possible for rich mobile
applications to be executed on a wide range of mobile devices. In this technology, data
processing and data storage happen outside of the mobile device. Mobile Cloud
Computing applications leverage this IT architecture to generate several advantages.
With the increase in demand for computers, computer-related problems are also
increasing, and they are becoming more and more complex. The complexity has grown so
much that there is a spike in demand for skilled workers. This has fostered the need for
autonomic computers that can perform computing operations without manual
intervention.
Characteristics :
1. The autonomic system knows itself. This means that it knows its components,
specifications, capacity, and real-time status. It also has knowledge of its own,
borrowed, and shared resources.
2. It can configure itself again and again and run its setup automatically as and when
required.
3. It has the capability of optimizing itself by fine-tuning workflows.
4. It can heal itself. This is a way of mentioning that it can recover from failures.
5. It can protect itself by detecting and identifying various attacks on it.
6. It can open itself. This means that it must not be a proprietary solution and must
implement open standards.
7. It can hide. This means that it has the ability to allow resource optimization, by
hiding its complexity.
8. According to IBM, an autonomic system must be able to anticipate the demand
that will arise for its resources and make this information transparent to users.
Multimedia CC:
The Internet is having a significant impact on media-related industries, which are
using it as a medium to deliver their content to end users. Rich web pages, software
downloads, interactive communications, and an ever-expanding universe of digital
media require a new approach to content delivery. The size and volume of multimedia
content are growing exponentially. For example, more than 30 billion pieces of content
such as web links, news stories, blog posts, notes, and photo albums are shared each
month on Facebook, while Twitter users post an average of 55 million tweets a day,
including web links and photo albums. Web pages and other
multimedia content are being delivered through content delivery networks
(CDN) technologies. These technologies optimize network usage through
dedicated network links, caching servers and by increasingly using peer-to-
peer technologies. The concept of a CDN was conceived in the early days of the
Internet, but it took until the end of the 1990s before CDNs from Akamai and
other commercial providers managed to deliver Web content (i.e., web pages,
text, graphics, URLs, and scripts) anywhere in the world while also meeting the
high availability and quality expected by their end users. For example, Akamai
delivers between fifteen and thirty percent of all Web traffic, reaching more than
4 Terabits per second. Commercial CDNs achieved this by deploying a private
collection of servers and by using distributed CDN software systems in multiple
data centres around the world.
A different variant of CDN technology appeared in the mid 2000’s to support
the streaming of hundreds of high definition channels to paid customers.
These CDNs had to deal with more stringent Quality of Service (QoS)
requirements to support users’ experience pertaining to high definition video.
This required active management of the underlying network resources and the
use of specialized set-top boxes that included video recorders (providing
stop/resume and record/playback functionality) and hardware decoders (e.g.,
providing MPEG-4 video compression/decompression). Major video CDNs
were developed by telecommunications companies that owned the required
network and had Operation Support Systems (OSSs) to manage the network
QoS as required by the CDN to preserve the integrity of high definition video
content. Just like the original CDNs, video CDNs also utilize a private collection
of servers distributed around the network of the video service provider. The first
notable CDNs in this category include Verizon’s FiOS and AT&T’s U-verse.
Some CDN providers such as Limelight Networks invested billions of dollars
in building dedicated network links (media-grade fiber-optic backbone) for
delivering and moving content from servers to end-users.
A more recent variant of video CDNs involves caching video content in
cloud storage and distributing that content using third-party network
services that are designed to meet QoS requirements of caching and streaming
high definition video. For example, Netflix’s video CDN has been developed on
top of Amazon AWS. CloudFront is Amazon’s own CDN that uses Amazon
AWS and provides streaming video services using Microsoft Xboxes. While
Cloud-based CDNs have made a remarkable progress in the past five years,
they are still limited in the following aspects:
CDN service providers either own all the services they use to run their
CDN services or they outsource this to a single cloud provider. A specialized
legal and technical relationship is required to make the CDN work in the
latter case.
Video CDNs are not designed to manage content (e.g., find and play
high definition movies). This is typically done by CDN applications. For
example, CDNs do not provide services that allow an individual to create a
streaming music video service combining music videos from an existing
content source on the Internet (e.g., YouTube), his/her personal collection,
and from live performances he/she attends using his/her smart phone to
capture such content. This can only be done by an application managing
where and when the CDN will deliver the video component of his/her
music program.
CDNs are designed for streaming staged content but do not perform
well in situations where content is produced dynamically. This is typically
the case when content is produced, managed and consumed in
collaborative activities. For example, an art teacher may find and discuss
movies from different film archives, the selected movies may then be edited
by students. Parts of them may be used in producing new movies that can
be sent to the students’ friends for comments and suggestions. Current
CDNs do not support such collaborative activities that involve dynamic
content creation.
Energy aware cloud computing:
Green Computing
Green computing is the eco-friendly use of computers and their resources. It is also
defined as the study and practice of designing, engineering, manufacturing, and
disposing of computing resources with minimal environmental damage.
Green cloud computing means using Internet computing services from a service provider
that has taken measures to reduce their environmental effect; in other words, it is cloud
computing with less environmental impact. Internet service providers have adopted a
number of such measures to make their services greener.
Jungle computing:
The term "jungle computing" came into use as the Internet was expanding and the
boundaries between computing environments were falling away. In the context of the
Internet, jungle computing describes the use of distributed, heterogeneous, and
decentralized computer resources to solve a given problem.
Docker at a glance:
Docker in cloud computing is a tool that is used to automate the deployment of
applications in an environment designed to manage containers. It is a container
management service. These containers help applications to work while they are being
shifted from one platform to another. Docker’s technology is distinctive because it
focuses on the requirements of developers and systems. This modern technology
enables enterprises to create and run any product from any geographic location.
There are several problems associated with cloud environments, and Docker tries to solve
those issues by creating a systematic way to distribute and augment applications. It helps
to separate applications from other containers, resulting in a smooth flow. With the help
of Docker, it is possible to manage our infrastructure in the same way we manage our
applications.
Process Simplification
Docker can simplify both workflows and communication, and that usually starts with
the deployment story. Traditionally, the cycle of getting an application to production
often looks something like the following (illustrated in Figure 2-1):
Docker architecture
Docker uses a client-server architecture. The Docker client talks
to the Docker daemon, which does the heavy lifting of building,
running, and distributing your Docker containers. The Docker
client and daemon can run on the same system, or you can
connect a Docker client to a remote Docker daemon. The
Docker client and daemon communicate using a REST API,
over UNIX sockets or a network interface. Another Docker
client is Docker Compose, which lets you work with applications
consisting of a set of containers.
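To make the client-daemon interaction concrete, here is a small sketch using the Docker SDK for Python (an assumed dependency, installable as the docker package); every call below is translated by the client into a REST request to the daemon:

import docker  # assumed dependency: pip install docker

# Connect to the daemon using environment defaults (e.g., the local UNIX socket).
client = docker.from_env()

# Ask the daemon to pull the image if needed, create a container, run it,
# capture its output, and remove the container afterwards.
output = client.containers.run("alpine", ["echo", "hello from the daemon"], remove=True)
print(output.decode())

# List the images the daemon currently stores.
for image in client.images.list():
    print(image.tags)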
Workflow:
https://www.youtube.com/watch?v=dpKUBnSoVNM
3. Cloud TV – A few companies are changing the way consumers watch TV. With
greater bandwidth available everywhere, DVDs have fallen by the wayside. TV
viewers will not just watch shows on demand in their homes, in their cars, and on
airplanes; a new breed of projection devices will make any flat surface a TV
screen.
Home based Cloud Computing: Today most households have wireless network
capabilities that allow family members to connect to the Web and access the sites and
content they desire. With the arrival of smart devices, intelligent fabrics, and greater
use of radio frequency identification (RFID) devices, families will expect on-demand
personalized technology solutions. Families will use cloud devices to customize
their environments and experiences. Within such an environment, families will want
to restrict processing to within the home, meaning that they will not want neighbors
to receive signals generated by their devices and clothing. That implies the ability to
encrypt a wide range of signals within the home. To that end, you should expect to see
cloud-based in-home devices that store family files, maintain appliance settings,
download and store movies and TV shows, and more.
Conclusion:
Cloud computing is beginning to transform the way enterprises buy and use
technology resources and will become even more prominent in the coming years. In
the next generation, cloud computing technology is going to be an integral element
in the life of every human being, because the cloud is the only place where all
software, hardware, and devices can connect in a single place.
What is Docker?
Docker is a set of platform-as-a-service (PaaS) products that use operating-system-level
virtualization to deliver software in packages called containers. Containers are isolated from
one another and bundle their own software, libraries, and configuration files; they can
communicate with each other through well-defined channels. All containers are run by a
single operating system kernel and therefore use fewer resources than virtual machines.
In other words, Docker is an open-source containerization platform with which you can pack
your application and all of its dependencies into a standardized unit called a container.
Containers are lightweight, which makes them portable, and they are isolated from the
underlying infrastructure and from each other. You can run a Docker image as a Docker
container on any machine where Docker is installed, without depending on the operating
system. The key benefits are:
1. Portability.
2. Reproducibility.
3. Efficiency.
4. Scalability.
What is Dockerfile?
A Dockerfile uses a DSL (Domain-Specific Language) and contains the instructions for
generating a Docker image; it defines the steps needed to quickly produce that image. When
creating your application, write the Dockerfile instructions in order, since the Docker daemon
runs them from top to bottom.
Docker makes use of a client-server architecture. The Docker client talks with the Docker
daemon, which handles building, running, and distributing the Docker containers. The Docker
client can run with the daemon on the same system, or it can connect to a remote Docker
daemon. The client and daemon interact with each other through a REST API over a UNIX
socket or a network.
What is a Docker Image?
A Docker image is a file, comprised of multiple layers, used to execute code in a Docker
container. It is a set of instructions used to create Docker containers: an executable package
of software that includes everything needed to run an application. The image determines how
a container should instantiate, specifying which software components will run and how.
What is a Docker Container?
A Docker container is a virtual environment that bundles application code with all the
dependencies required to run the application. The application then runs quickly and reliably
from one computing environment to another.
What is Docker Hub?
Docker Hub is a cloud-based repository service where people push their Docker container
images and pull them anytime, anywhere via the internet. It makes it easy to find and reuse
images, and it lets you push images to a private or public registry where you can store and
share them.
Docker Hub is used mainly by DevOps teams. It is freely available for all operating systems
and acts as storage where we keep images and pull them when required. To push or pull
images from Docker Hub, a basic knowledge of Docker is required.
Docker Compose executes a YAML-based multi-container application. The YAML file contains
all the configuration needed to deploy the containers. Docker Compose, which integrates
with Docker Swarm, provides directions for building and deploying containers. With Docker
Compose, each container is constructed to run on a single host.
Docker Commands
Docker provides many commands; the following are some of the most commonly used:
1. docker run
2. docker pull
3. docker ps
4. docker stop
5. docker start
6. docker rm
7. docker rmi
8. docker images
9. docker exec
Docker Engine
The software that hosts the containers is named Docker Engine. Docker Engine is a client-
server application with three main components:
1. Server: A long-running daemon process (dockerd) that builds, runs, and manages
Docker objects such as images, containers, networks, and volumes.
2. REST API: It specifies how applications can interact with the server and instructs it
what to do.
3. Client: The Docker command-line interface (CLI), which allows us to interact with
Docker using docker commands.
Docker can be used to pack an application and its dependencies, which makes it lightweight
and easy to ship the code faster and more reliably. Docker makes it simple to run applications
in a production environment; a Docker container is platform-independent as long as the
Docker Engine is installed on the machine.
Docker is a powerful tool for running applications in the form of containers. Docker
containers are lightweight and can run on any operating system.
AWS provides the Amazon Elastic Container Service (Amazon ECS), a fully managed
container service with which you can deploy, scale, and manage Docker containers. Amazon
ECS is a reliable, high-performance platform, and it integrates with other AWS services such
as load balancing, service discovery, and container health monitoring.
Docker Containers: They don't contain a guest OS for each container and rely on the
underlying OS kernel, which makes the containers lightweight.
Virtual Machines: Each VM has its own copy of an operating system along with the
application and necessary binaries, which makes it significantly larger and requires more
resources.
To install Docker Engine on Ubuntu, first install the prerequisite packages, then add Docker's
official GPG key and repository, and finally install Docker itself:
$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
As a minimal example, the application consists of a single Python script (main.py) and a
Dockerfile that packages it:
main.py:
#!/usr/bin/env python3
print("Hello from Docker!")  # example body (assumed); any Python entry point works
Dockerfile:
FROM python:latest
COPY main.py /
CMD ["python3", "/main.py"]  # run the script when the container starts
Once you have created and edited the main.py file and the Dockerfile, build your image to
contain your application:
$ docker build -t python-test .
The -t option lets you define the name of your image; 'python-test' is the name we have
chosen for the image.
1. Create an account on Docker Hub, or use an existing one if you already have it.
2. Click the "Create Repository" button, enter the repository name, and click "Create".
3. Now we will tag our image and push it to the Docker Hub repository we just created.
$ docker images
The Image ID is used to tag the image. The syntax to tag an image is:
$ docker tag <Image ID> <username>/<repository-name>:<tag>
$ docker push <username>/<repository-name>:<tag>
1. To remove all versions of a particular image from our local system, we use its Image ID:
$ docker rmi -f <Image ID>
2. Now run the image; Docker will fetch it from Docker Hub if it does not exist on your local
machine:
$ docker run <username>/<repository-name>:<tag>
Conclusion
So you have learned the basics of Docker, the difference between virtual machines and
Docker containers, and some common Docker terminology. We also went through the
installation of Docker on our systems, created an application with Docker, and pushed our
image to Docker Hub. Lastly, we learned how to remove a particular image from our local
system and later pull it from Docker Hub if it doesn't exist locally.