Introduction to IBM Cloud
-AI
-IoT
-Blockchain
-Analytics
These technologies work with massive amounts of data and need huge storage space and computational power.
MODULE 1
Cloud computing delivers on-demand computing resources over the internet:
1.networks
2.servers
3.storage
4.applications
5.services
-5 essential characteristics
-3 deployment models
-3 service models
5 ESSENTIAL CHARACTERISTICS
1.on-demand self-service: users can provision resources as needed, automatically, without human interaction with the provider.
2.broad network access: cloud resources are available over the network and accessed through standard mechanisms and platforms.
3.resource pooling, is what gives cloud providers economies of scale, which they pass on to their customers, making cloud cost-efficient. Using a multi-tenant model, computing resources are pooled to serve multiple consumers; cloud resources are dynamically assigned and reassigned according to demand, without customers needing to concern themselves with the physical location of these resources.
4.rapid elasticity: capabilities can be elastically provisioned and released to scale rapidly with demand.
5.measured service: resource usage is metered, monitored, and reported, so you pay only for what you use.
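To make resource pooling and rapid elasticity concrete, here is a minimal, purely illustrative Python sketch (the class and numbers are invented for illustration and are not any real cloud provider's API) of a shared pool that dynamically assigns and reclaims capacity for multiple tenants:

```python
# Toy multi-tenant resource pool; names and numbers are hypothetical.

class ResourcePool:
    def __init__(self, total_vcpus: int):
        self.total_vcpus = total_vcpus   # pooled provider capacity
        self.allocations = {}            # tenant -> vCPUs currently assigned

    def scale(self, tenant: str, vcpus: int) -> None:
        """Elastically grow or shrink one tenant's share of the shared pool."""
        others = sum(v for t, v in self.allocations.items() if t != tenant)
        if others + vcpus > self.total_vcpus:
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = vcpus  # dynamically (re)assigned on demand

pool = ResourcePool(total_vcpus=128)
pool.scale("tenant-a", 16)  # scale up on demand
pool.scale("tenant-a", 4)   # scale back; freed capacity returns to the pool
pool.scale("tenant-b", 64)  # another tenant draws from the same pool
```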
Cloud computing is about utilizing technology "as a service": leveraging remote systems on-demand over the open internet, scaling up and scaling back, and paying for what you use.
It is a revolution in that it has changed the way the world consumes compute services by making them more cost-efficient while also making organizations more agile in responding to change.
3 DEPLOYMENT MODELS
1.Public cloud is when you leverage cloud services over the open internet on hardware owned by the cloud provider, but whose usage is shared by other companies.
2.Private cloud means that the cloud infrastructure is provisioned for exclusive use by a single organization. It could run on-premises, or it could be owned, managed, and operated by a service provider.
3.And when you use a mix of both public and private clouds, working together seamlessly, that is the hybrid cloud model.
3 SERVICE MODELS
based on the three layers in a computing stack - Infrastructure, Platform, and Applications.
These cloud computing models are aptly referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
In an Infrastructure as a Service model, you get access to infrastructure and physical computing resources such as servers, networking, storage, and data center space - without the need to purchase, manage, and operate them yourself.
In a Platform as a Service model, you get access to the platform, that is the hardware
and software tools, usually those needed to develop and deploy applications to users over
the Internet.
Software as a Service is a software licensing and delivery model in which software and applications are centrally hosted and licensed on a subscription basis, and sometimes also referred to as "on-demand software."
The concept of cloud computing dates to the 1950s when large-scale mainframes with high-
volume processing power became available.
In order to make efficient use of the computing power of mainframes, the practice of time-sharing, or resource pooling, evolved.
Using dumb terminals, whose sole purpose was to facilitate access to the mainframes, multiple
users were able to access the same data storage layer and CPU power from any terminal.
In the 1970s, with the release of an operating system called Virtual Machine (VM), it became
possible for mainframes to have multiple virtual systems, or virtual machines, on a single
physical node.
The virtual machine operating system evolved the 1950s practice of shared access to physical hardware.
Each virtual machine hosted guest operating systems that behaved as though they had their
own memory, CPU, and hard drives, even though these were shared resources.
Virtualization thus became a technology driver and a huge catalyst for some of the biggest evolutions in communications and computing.
With the internet becoming more accessible, and the need to make hardware costs more viable,
servers were virtualized into shared hosting environments, virtual private servers, and
virtual dedicated servers, using the same types of functionality provided by the virtual machine operating system.
So, for example, if a company needed ‘x’ number of physical systems to run their applications,
they could take one physical node and split it into multiple virtual systems.
A hypervisor is a small software layer that enables multiple operating systems to run alongside each other, sharing the same physical computing resources.
A hypervisor also separates the Virtual Machines logically, assigning each its own slice of the underlying computing power, memory, and storage, preventing the virtual machines from interfering with each other.
So if, for example, one operating system suffers a crash or a security compromise, the others keep running.
As technologies and hypervisors improved and were able to share and deliver resources reliably,
some companies decided to make the cloud’s benefits accessible to users who didn’t
have an abundance of physical servers to create their own cloud computing infrastructure.
Since the servers were already online, the process of spinning up a new instance was
instantaneous.
Users could now order cloud resources they needed from a larger pool of available resources,
and they could pay for them on a per-use basis, also known as Pay-As-You-Go.
This pay-as-you-go or utility computing model became one of the key drivers behind cloud computing.
The pay-per-use model allowed companies and even individual developers to pay for the
computing resources as and when they used them, just like units of electricity.
This allowed them to switch to a more cash-flow friendly OpEx model from a CapEx model.
This model appealed to all sizes of companies, those who had little or no hardware, and even
those that had lots of hardware, because now, instead of making huge capital expenditures
in hardware, they could pay for compute resources as and when needed.
It also allowed them to scale their workloads during usage peaks, and scale down when usage
subsided.
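As a rough illustration of the CapEx-to-OpEx shift described above, the following Python sketch compares paying per hour only while a workload runs against buying a server up front. All prices are made-up assumptions; real cloud rates vary widely:

```python
# Hypothetical prices, for illustration only.
HOURLY_RATE = 0.10      # $/hour for a pay-as-you-go VM (assumed)
SERVER_CAPEX = 3000.00  # upfront cost of buying a comparable server (assumed)

def cloud_cost(hours_per_month: float, months: int) -> float:
    """Pay only for the hours the workload actually runs (OpEx)."""
    return HOURLY_RATE * hours_per_month * months

# A workload that runs 8 hours a day, ~22 business days a month, for a year:
opex = cloud_cost(hours_per_month=8 * 22, months=12)
print(f"cloud (OpEx): ${opex:,.2f} vs owned hardware (CapEx): ${SERVER_CAPEX:,.2f}")
# cloud (OpEx): $211.20 vs owned hardware (CapEx): $3,000.00
```

The same arithmetic runs the other way for workloads that are busy around the clock, which is one reason usage patterns matter when weighing the move.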
Agility, flexibility, and competitiveness are key drivers for moving to the cloud, provided it can be done without compromising security and performance.
Let's look at some key considerations that organizations can use as a guide while working out their cloud adoption strategies:
-The cost of building and operating data centers can become astronomical.
-On the other hand, the low initial costs and pay-as-you-go attributes of cloud computing can add up to significant savings.
-Also, a point to consider is that not all workloads may be ready for the cloud, as-is.
-Organizations need to consider if paying for application access is a more viable option than purchasing and maintaining the software themselves.
-Organizations also need to consider speed and productivity: what it means for them to get a new application up and running in 'x' hours on the cloud versus a couple of weeks, or even months, on traditional platforms.
-And the person-hour cost efficiencies they gain from using cloud dashboards, real-time statistics, and analytics.
-Organizations also need to weigh their risk exposure.
Is it riskier, for example, for them to invest in the hardware and software or rent by the hour?
Is it safer for them to work on a 12-month plan to build, write, test, and release the application themselves?
And is it better for them to "try" something new paying-as-you-go rather than making long-term commitments and capital investments?
Let's look at some of the benefits of cloud adoption, categorized broadly into Flexibility, Efficiency, and Strategic Value.
1.Flexibility
-Users can select from a menu of pre-built tools and features to build a solution that fits their specific needs.
-And Virtual Private Clouds, encryption, and API keys help keep data secure.
2.Efficiency
-Enterprise users can get applications to market quickly without worrying about underlying infrastructure costs or maintenance.
-Cloud-based applications and data are accessible from virtually any internet-connected device.
-Cloud computing uses remote resources, saving organizations the cost of servers and other equipment.
3.Strategic Value
-Cloud services give enterprises a competitive advantage by providing the most innovative technologies available while managing the underlying infrastructure, thus enabling organizations to focus on their core business.
-While cloud brings great opportunity, it also introduces challenges for business leaders
and IT departments.
-Lack of standardization in how the constantly evolving technologies integrate and interoperate;
-Choosing the right deployment and service models to serve specific needs.
With the right cloud adoption strategies, technologies, services, and service providers, organizations can successfully navigate these challenges.
Gartner predicts:
The question for businesses today is no longer “if” they need to adopt the cloud, rather
“what” their cloud adoption strategy should be to best serve their businesses and customers.
Keeping up with this technological wave, and driving it forward, are the Cloud Service Providers.
1.ALIBABA CLOUD
While relatively new, it is the largest Chinese cloud computing service provider.
Aliyun provides a comprehensive suite of global cloud computing services to power not just
their customers’ online businesses but also the Alibaba Group’s own e-commerce ecosystem.
Its offerings include:
-compute,
-network,
-storage,
-security,
-analytics,
-IoT,
-application development,
-data migration, and more.
2.AWS Cloud
The Amazon Cloud provides a wide range of products, services, and solutions ranging from Compute, DevOps, Data, Analytics, IoT, Machine Learning, Networking, and Content Delivery, to Security and more.
3.Google Cloud Platform (GCP)
Google also uses GCP internally for their end-user products such as Google Search and
YouTube.
Google Cloud includes G Suite with products for communication, productivity, and collaboration.
The Google App Engine is a platform for developing and hosting web applications in Google-
managed data centers, automatically allocating and de-allocating resources to handle demand.
4.IBM Cloud
a full stack cloud platform that spans public, private, and hybrid environments
with products and services covering compute, network, storage, management, security, DevOps,
and databases.
Some of their prominent offerings include their Bare Metal Servers, VMware, Cloud Paks for Application Modernization, Virtual Private Cloud, and a suite of emerging technologies such as AI, IoT, and Blockchain.
With the acquisition of Red Hat, IBM is also positioning itself as the leading hybrid cloud provider.
5. Microsoft Azure
With its data centers spread out in many regions, Azure provides a global reach with a local
presence.
6.Oracle cloud
Oracle's SaaS offering includes wide-ranging applications such as ERP, SCM, HCM, and Marketing.
And the Oracle Data Cloud provides one of the largest cloud-based data management platforms
helping customers personalize their online, offline, and mobile marketing campaigns for targeted audiences.
Oracle Cloud also provides some cloud Infrastructure and Platform services.
7.Salesforce
Salesforce specializes in customer relationship management, supporting businesses to better connect with their customers, partners, and prospects.
Salesforce offers multiple cloud services such as Sales Cloud, Service Cloud, and Marketing Cloud, helping customers track analytics in real-time and manage customer success and support.
8.SAP
SAP is known for Enterprise software and applications such as ERP, CRM, HR, and Finance, running in the cloud.
There is also an SAP Cloud Platform for building and extending business applications rapidly.
From a single individual to a global multi-billion-dollar enterprise, anybody can access the
computing capacity they need on the cloud.
The lag time from decision to value is no longer a journey of years with high upfront capital; cloud makes it possible for businesses to experiment, fail, and learn much faster.
Businesses today have greater freedom to change course than to live with the consequences of dated decisions.
According to an IBM Institute for Business Value study, more than three-quarters of enterprises report using cloud as a strategic enabler: 74% have adopted cloud to improve customer experience; and 71% use cloud to create enhanced products and services, while simultaneously downsizing legacy systems and reducing costs.
Cloud lets businesses respond quickly to market changes, use analytics to understand customer experience, and apply that understanding to improve their offerings.
Product lifecycles have shortened, and barriers to entry have become lower.
The power, scalability, flexibility, and pay-as-you-go economics of cloud provide the foundation for digital transformation, including cognitively-enabled workflows.
The International Data Corporation, IDC, predicts that by 2025, the total amount of digital
data created worldwide will rise to 163 zettabytes (where one zettabyte is equivalent to a trillion
gigabytes).
Considering the unprecedented amounts of data being produced daily, and the ability to make
data-driven decisions crucial to any business, cloud computing becomes essential for businesses today.
A cloud strategy, more than just an IT strategy, is the core component of any business strategy
today.
Businesses that haven't already, or are not currently, integrating cloud into their business strategy run the risk of lacking the speed, agility, innovation, and decision-making capabilities needed to stay competitive.
1.American Airlines
Challenges:
Solution:
American Airlines adopted a new technology platform and a new approach to development that would help it deliver digital self-service tools and customer value more rapidly across its enterprise.
The airline recognized the opportunity to remove the constraints of their existing customer-facing applications by rebuilding them on the cloud.
Results:
-Cost savings by avoiding existing upgrade costs via migration to the IBM Cloud.
2.UBANK
3.Bitly
Bitly grew from a startup that offered intelligent link-shortening technology adopted by users to compress lengthy URLs.
Their need was to have a cloud-based model with pay-as-you-go pricing, the ability to scale up and scale down, a more global presence, and the ability to geodistribute into more POPs.
Solution:
Bitly migrated to an IBM Cloud environment, establishing a scalable hosting platform, moving from one hosting site to Cloud infrastructure with data center locations worldwide.
Results:
-1 billion user interaction data set stored and managed in a flexible, cost-effective infrastructure.
-Transformed IT operations to scale for growth, control costs, and focus valuable resources on strategic initiatives.
4.ACTIVTRADES-Accelerating growth
As a leading online broker in forex, commodities, equities, cryptocurrencies, indices, and other
financial instruments, ActivTrades enables investors to buy and sell on numerous financial
markets.
Investors need reliable access to accurate market information, combined with the ability
As its client base grew, ActivTrades wanted to cut latency, accelerate execution, and support future growth.
ActivTrades migrated three major trading systems from on-premises infrastructure to IBM Cloud
for VMware solutions, backed by data storage, networking, and security offerings on the
IBM Cloud.
Results:
-Hours, not days, to fire up new resources, for faster response to emerging requirements.
IOT in CLOUD
The Internet of Things is a giant network of connected things and people that has changed much of how we live our daily lives - from the way we drive, to how we make purchases, to monitoring our personal health, and even how we get energy for our homes.
Smart devices and sensors are continuously tracking and collecting data. For example, a smart building could have thousands of sensors measuring all kinds of data.
The cloud is what connects the IoT device user to these services - be it for device registration, data collection, or device management.
Data collected through IoT devices is stored and processed on the cloud, since IoT devices themselves have limited storage and processing capacity.
So, from IoT platforms running entirely on the cloud to the interfaces used by consumers to manage and interact with their devices, cloud underpins IoT.
Cloud service providers also offer specialized IoT services designed to help speed up the development of IoT solutions.
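Device-to-cloud flows like these are commonly built on a lightweight publish/subscribe protocol such as MQTT. Here is a minimal sketch using the open-source paho-mqtt Python client; the broker hostname and topic are placeholders, not a specific provider's endpoint:

```python
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" (1.x API shown)

BROKER = "broker.example.com"   # placeholder; a real deployment uses its
TOPIC = "building/floor3/temp"  # provider's endpoint and topic scheme

client = mqtt.Client()        # paho-mqtt 2.x additionally requires a CallbackAPIVersion
client.connect(BROKER, 1883)  # plain MQTT; production devices would use TLS (port 8883)

# One sensor reading, published as JSON for the cloud side to store and analyze.
reading = {"sensor_id": "temp-42", "celsius": 21.7}
client.publish(TOPIC, json.dumps(reading), qos=1)  # qos=1: deliver at least once
client.disconnect()
```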
Let's look at a case study that demonstrates the use of the IoT on the cloud.
The rhino has become one of the key species endangered due to poaching.
Up until now, poachers have been increasing in numbers, and they become
more militarized with weapons. And so of course we've had to do the same. This is
not sustainable. The only way to do this better, is to bring in technology and
things that they do not have. This endangered species is getting help from
some unexpected friends, the zebra and antelope. They're wearing IoT sensors
connected to the IBM cloud. When poachers enter the area, the animals run for it,
which alerts Rangers who can track their movements and help stop them before any
harm is done. It's a smart way to help increase the Rhino population and turn
the poachers into the endangered species.
AI on CLOUD
Making sense of the endless streams of data is where Artificial Intelligence, or AI,
comes in.
--> Many of the applications where we apply AI today simply wouldn't have been
possible without the scalable, on-demand computing offered by the cloud.
Just as AI consumes the data produced by IoT devices, the IoT devices' behavior can be adjusted based on the insights AI generates.
For example, Smart Assistants, a common type of IoT device, continue to learn about
the user’s preferences as usage grows, such as the songs they like, their home
temperature settings, preferred meal times, and over time they anticipate their actions
based on the user’s past history.
So, what we see is a symbiotic relationship between IoT, AI, and Cloud.
IoT delivers the data, AI powers the insights, and both these emerging technologies
leverage cloud’s scalability and processing power to provide value to individuals and
businesses alike.
Let's look at how the United States Tennis Association, USTA, is using AI on the Cloud to enhance the fan experience.
For two weeks at the end of every summer, tennis fans around the world turn their eyes to the US Open.
IBM integrates and analyzes the data flowing from the court.
And delivers unique digital experiences to more than ten million tennis fans around the
world.
And it delivers a consistent experience to our fans all around the globe.
And with Watson on the IBM Cloud, we can engage fans in unique ways, year after year.
Slam Tracker analyzes more than twenty-six million historical data points.
It gives fans deep insight into featured matches, and it can see the momentum of a
match shifting in real time.
And this year we're putting the power of AI Highlights into the hands of US players and
coaches.
so coaches can quickly find the footage they need to guide the development of their
players.
And if you need to know where to park, find a good burger, or grab the latest US Open
gear, you can find the answers with the Guest Information feature in the US Open app.
We work with IBM because they keep us on the cutting edge of the fan experience.
They help us to adopt the latest technology, like Cloud and AI.
And they bring data to life in a way that's accessible and engaging for our fans.
Blockchain on cloud
Blockchain provides a secure, distributed, open ledger for recording the history of transactions in transactional applications.
The more open, diverse, and distributed the network, the stronger the trust and transparency in the data and transactions.
Most businesses today use multiple clouds to serve their needs, with more than 70% using more than three. These businesses need to be able to move applications and data across multiple clouds easily and securely, leading to the emerging demand to build and manage business applications, such as blockchain, that span multiple clouds.
Blockchain, IoT, and AI, powered by the cloud, have a three-way relationship: IoT delivers the data, AI powers the analytics and decision-making from the data collected, and cloud provides the scalability to support both the unprecedented amounts of data being collected and the computing power needed to act on it.
Blockchain serves to make AI more understandable by recording the data and variables that go into its decisions.
Analytics on cloud
Organizations have invested in cloud and IoT technologies to power data analytics and predictive maintenance solutions.
Smart devices and sensors - in equipment such as elevators, escalators, moving walkways, and doors - produce streams of data that we are collecting.
In order to process those streams, we need a scalable way of handling the amount of data that is coming in. And that's where cloud functions fit in perfectly: they let us process that data and generate further events on that data, that are then utilized downstream.
On the platform, we analyze the data and generate predictive value, in the sense that we can predict, to a certain percentage, the failure rate that is about to happen in the future for our equipment. And this allows us to perform predictive maintenance. This is the whole concept behind our solution: equipment is connected to the cloud and we are monitoring it.
At the moment we use almost all aspects of the IBM cloud: we use storage from the cloud, and we use IoT services. So a number of services, and the platform, are already in use.
Module 3:
Overview
IAAS- IaaS is a set of compute, networking, and storage resources that have been virtualized by a vendor so that a user can access and configure them any way they want. The persona for IaaS is typically a system admin.
SAAS- With Software as a Service, the provider manages the application for you: you don't have to install it on your machine and you don't have to manually update it.
And so the user for Software as a Service could be anyone. In fact, if you're watching this on YouTube right now, then you're a user of Software as a Service.
It's usually charged on a subscription model rather than a one-time license fee.
PAAS- Platform as a Service takes the virtualized resources from IaaS and then abstracts them away, so the user doesn't have to worry about managing those underlying resources. The user for PaaS is not a system admin, usually. It's usually a dev.
-In the PaaS model, the provider manages the physical resources, data centers, cooling, power, network and security, as well as the platform, which includes the operating systems, development tools, databases, and business analytics.
-In the SaaS model, in addition to the infrastructure and the platform resources, the provider also hosts and manages the applications and data.
IAAS-INFRASTRUCTURE AS A SERVICE
-is a form of cloud computing that delivers fundamental compute, network, and storage resources to consumers on-demand, over the internet, on a pay-as-you-go basis.
The cloud provider hosts the infrastructure components traditionally present in an on-premises data center.
Customers can provision virtual machines (VMs) in their choice of Region and Zone available from the Cloud Provider.
These VMs typically come pre-installed with the customer's choice of operating system.
The customers can then deploy middleware, install applications, and run workloads on
these VMs.
They can also create storage for their workloads and backups.
Cloud providers often provide customers the ability to track and monitor the performance and usage of their cloud resources.
1.Physical data centers: IaaS providers manage large data centers that contain the physical machines required to power the various layers of abstraction.
In most IaaS models, end users do not interact directly with the physical infrastructure, but experience it as a service provided to them.
2.Compute: IaaS providers manage the hypervisors, and end-users programmatically provision virtual instances with the desired amounts of compute, memory, and storage.
Cloud compute typically comes with supporting services like auto scaling and load balancing, provisioned through APIs (a sketch of such an API call follows this list).
3.Network: Networking resources such as routers and switches are made available programmatically, typically through APIs, as Software-Defined Networking.
4.Storage: There are three types of cloud data storage: object, file, and block storage.
Object storage is the most common mode of storage in the cloud, given that it is highly distributed and resilient.
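As item 2 above notes, IaaS resources are provisioned programmatically. Below is a sketch of what such an API call can look like in Python using the requests library; the endpoint, payload fields, and token are invented for illustration and do not match any specific provider's API:

```python
import requests  # pip install requests

# Hypothetical IaaS endpoint and fields; real providers each define their own
# API shape and authentication scheme.
API = "https://iaas.example.com/v1/instances"
TOKEN = "..."  # credential obtained out of band

resp = requests.post(
    API,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "web-01",
        "profile": "2vcpu-4gb",   # a predefined size, as described above
        "zone": "us-south-1",     # Region/Zone placement
        "image": "ubuntu-22-04",  # pre-installed operating system
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # the provider returns an identifier for the new VM
```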
Organizations today are using cloud infrastructure services to enable their teams to set up test
and development environments faster, helping create new applications more quickly.
By abstracting the low-level components, cloud infrastructure is helping developers focus on business logic rather than infrastructure management.
Compared to traditional hosting, IaaS is helping organizations reduce cost and make applications and data accessible from anywhere.
Organizations are using cloud infrastructure to deploy their web applications faster and scale them with demand, and to solve complex problems involving millions of variables and calculations, such as climate and weather prediction or financial modeling.
Mining massive data sets to locate valuable patterns, trends, and associations requires enormous processing power.
Cloud infrastructure not only provides the required high-performance computing but also makes it affordable.
While there are some concerns regarding the lack of transparency in the cloud infrastructure's configuration and management, the benefits of IaaS continue to drive its adoption.
PLATFORM AS A SERVICE
- a cloud computing model that provides customers a complete platform—hardware, software, and
infrastructure—to develop,deploy, manage, and run applications created by them or acquired from
a third-party.
->The PaaS provider hosts everything—servers, networks, storage, operating system, application
runtimes, APIs, middleware, databases, and other tools—at their data center.
The provider also takes responsibility for the installation, configuration, and operation of the application infrastructure, leaving the user responsible for only the application code and data.
Customers pay for this service on a usage basis and purchase resources on-demand.
With IaaS, the cloud provider offers access to ‘raw’ computing resources, such as
servers, storage, and networking, while the user is responsible for the platform and application
software.
With PaaS, the cloud service provider delivers and manages the entire platform infrastructure, abstracting away the lower-level details of the environment.
PaaS clouds are distinguished by the high level of abstraction they provide to the users, eliminating complexity in deploying applications.
PaaS clouds provide services and APIs that help simplify the job of developers in delivering elastically scalable, highly available cloud applications.
These services typically include a variety of capabilities such as APIs for distributed caching, queuing and messaging, file and data storage, workload management, user identity, and analytics.
The PaaS runtime environment executes end-user code according to policies set by the application owner.
Many PaaS offerings provide developers with rapid deployment mechanisms, or "push and run" mechanisms for deploying and running applications, along with middleware such as application servers, database management systems, business analytics services, mobile back-end services, integration services, business process management systems, rules engines, and more.
The most important use case for PaaS is strategic—build, test, deploy, enhance, and scale
applications rapidly and cost-effectively.
1.API development and management: Organizations are using PaaS to develop, run, manage, and secure APIs and microservices, which are loosely coupled, independently deployable services.
2.Internet of Things, or IoT: PaaS clouds support a broad range of application environments, programming languages, and tools for IoT deployments.
3.Business analytics/intelligence: PaaS tools let organizations analyze their data to find business insights that enable more informed business decisions and predictions.
4.Business Process Management, or BPM: Organizations are using the PaaS cloud to access BPM platforms delivered as a service.
5.Master data management, or MDM: Organizations are leveraging the PaaS cloud to provide a single point of reference for critical business data such as information about customer transactions and analytical data.
All these use cases benefit from the rapid allocation and deallocation of resources with the pay-as-you-use model offered by PaaS.
Benefits of PaaS:
1.The APIs, support services, and middleware capabilities that PaaS clouds provide assist developers in focusing their efforts on application development and testing, resulting in faster time to market for their products and services.
2.Middleware capabilities also reduce the amount of code that needs to be written while expanding an application's capabilities.
3.Greater agility and innovation because using PaaS platforms means that you can experiment
with multiple operating systems, languages, and tools without having to invest in these
resources.
4.You can evaluate and prototype ideas with very low risk exposure, resulting in faster, cheaper experimentation.
5.Some of the key PaaS offerings available in the market today include AWS Elastic Beanstalk,
Cloud Foundry on IBM Cloud, IBM Cloud Paks, Windows Azure, RedHat OpenShift, Magento
Commerce Cloud, Force.com, and Apache Stratos.
PaaS clouds do come with some risks—risks that all cloud offerings have in general,
such as information security threats and dependency on the service provider’s infrastructure.
Services can get impacted when a service provider’s infrastructure experiences downtime.
Customers also don't have any direct control over the changes that may take place when a provider updates its infrastructure or offering.
But the benefits can far outweigh these risks; PaaS continues to experience strong growth and adoption.
SAAS
SaaS providers maintain the servers, databases, and code that constitute an application.
They also manage access to the application, including security, availability, and performance.
Applications reside on a remote cloud network, and users use these applications without having to manage or maintain them.
Common SaaS offerings include:
-email and collaboration via offerings such as Microsoft's Office 365 and Google's Gmail,
-Customer Relationship Management via services such as NetSuite CRM and Salesforce,
-Human Resource Management via services from Workday and SAP SuccessFactors, and more.
Solutions once available with several different deployment options are now SaaS-only.
Infrastructure and code are maintained centrally and accessed by all users.
SaaS makes it easy for users to manage privileges, monitor data use, and ensure everyone sees the same information at the same time.
Users can customize applications to fit their business processes with point-and-click ease.
Users can customize the UI to work with their branding guidelines; they can modify data fields to suit their needs.
Users pay for the use of the services via a subscription model.
2.Organizations can adopt applications without upfront capital and assistance from IT, greatly reducing the time from decision to value.
3.Users can access core business apps from wherever they are; they can also buy and deploy apps in minutes, reducing the typical obstacles enterprises face in testing the products they might use.
4.Using SaaS applications, individuals and small enterprises can spread out their software costs over time.
Let's look at some use cases for SaaS:
-Organizations are moving to SaaS for their core business needs as part of their strategic transformation to reduce on-premises IT infrastructure and capital expenditure.
-Organizations are leveraging SaaS to avoid the need for ongoing upgrades, maintenance, and patching, done traditionally by internal IT resources; applications run reliably with minimal input, for example, email servers and office collaboration and productivity tools.
-Organizations are increasingly opting for SaaS eCommerce Platforms to manage their websites, sales, and operations.
-With SaaS, organizations are able to take advantage of the resilience and business continuity offered by cloud providers.
Enterprises are now developing SaaS integration platforms (or SIPs) for building additional
SaaS applications, moving SaaS beyond standalone software functionality to a platform for mission-critical applications.
Some concerns around SaaS relate to data ownership and the safety of business-critical data.
And application access relies on a good internet connection; if you're not connected, you lose access.
But the benefits far outweigh the concerns, with SaaS making up the largest segment of the cloud market.
DEPLOYMENT MODELS
Deployment models indicate where the infrastructure resides, who owns and manages it, and
how cloud resources and services are made available to users.
PUBLIC CLOUD
In a public cloud model, users get access to servers, storage, network, security, and applications as services delivered by cloud providers over the internet.
Using web consoles and APIs, users can provision the resources and services they need.
The cloud provider owns, manages, provisions, and maintains the infrastructure, renting it out to users either for a subscription or on a usage basis.
Users don’t own the servers their applications run on or storage their data consumes, or
manage the operations of the servers, or even determine how the platforms are maintained.
In very much the same way that we consume and pay for utilities such as water, electricity,
or gas in our everyday lives, we don’t own any of these cloud resources—we make an
agreement with the service provider, use the resources, and pay for what we use within
a certain period.
Public clouds offer significant cost savings as the provider bears all the capital, operational,
and maintenance expenses for the infrastructure and the facilities they are hosted in.
However, with a public cloud, the user does not have any control over the computing
environment and is subject to the performance and security of the cloud provider’s infrastructure.
There are several public cloud providers in the market today, such as Amazon Web Services,
Microsoft Azure, IBM Cloud, Google Cloud Platform, and Alibaba Cloud.
While all providers include a common set of core services, such as servers, storage, network,
security, and databases, they also offer a wide spectrum of niche services with varied
payment options.
The cloud provider's pool of resources, including infrastructure, platforms, and software, is shared by multiple tenants.
Public clouds have significant benefits; we’ll go over some of these benefits here:
Considering the large number of users that share the centralized cloud resources on-demand, public clouds offer the greatest economies of scale.
The sheer number of server and network resources available on the public cloud means that a public cloud is highly reliable; if one physical component fails, the service still runs unaffected on the remaining available infrastructure.
It's also important to note some concerns users have regarding public clouds:
Security issues such as data breaches, data loss, account hijacking, insufficient due
diligence, and system and application vulnerability seem to be some of the fears users continue to have about the public cloud.
With data being stored in different locations and accessed across national borders, it has also become increasingly critical for companies to be compliant with data sovereignty regulations governing where data can be stored and how it can be transferred.
A service provider's ability to not just keep up with the regulations, but also adapt quickly as they change, is an important consideration.
Let's look at some public cloud use cases:
Organizations are building and testing applications in the public cloud, where their teams can focus on the applications themselves, reducing time-to-market for their products and services.
Businesses with fluctuating capacity and resourcing needs are opting for the public cloud.
Organizations are using public cloud computing resources to build secondary infrastructures for disaster recovery, data protection, and business continuity.
More and more organizations are using cloud storage and data management services for greater accessibility and easy distribution of their data.
IT departments are outsourcing the management of less critical and standardized business processes and applications to public cloud providers.
In the next video, we will look at the private cloud model, its features, benefits, and some
use cases.
PRIVATE CLOUD
The National Institute of Standards and Technology defines Private Cloud as cloud infrastructure provisioned for exclusive use by a single organization comprising multiple consumers, such as business units.
It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
When the platform is provisioned over an organization’s internal infrastructure, it runs on-
premises and is owned, managed, and operated by the organization.
This external private cloud offering that resides on a cloud service provider's infrastructure is called a Virtual Private Cloud (VPC).
A VPC is a public cloud offering that lets an organization establish its own private
and secure cloud-like computing environment in a logically isolated part of a shared public
cloud.
Using a VPC, organizations can leverage the dynamic scalability, high availability, and lower cost of ownership of a public cloud, while having the infrastructure and security tailored to their needs.
Virtual Private Clouds are offered by most Public Cloud providers such as IBM and Amazon.
The private cloud gives organizations the benefits of a cloud platform without the perceived disadvantages of an open and shared public platform.
Users of a private cloud, such as Developers and Business Units in an organization, still
get to leverage benefits such as economies of scale, granular scale, operational efficiencies,
and user self-service, while exercising full control over access, security, and compliance.
Private clouds provide you with:
-The ability to leverage the value of cloud computing using systems that are directly managed or under the perceived control of the organization's internal IT.
-The ability to better utilize internal computing resources, such as the organization's existing investments in hardware and software.
-The ability to burst into public cloud instances for a period of time, returning to the private cloud when the surge is met.
Controlled access and greater security measures customized to specific organizational needs.
The ability to expand and provision things in a relatively short amount of time, providing
greater agility.
Organizations may choose to opt for private cloud because of various reasons—because
their applications provide a unique competitive advantage, there are security and regulatory
concerns, or because the data is highly sensitive and subject to strict industry or governmental
regulations.
A private cloud is an opportunity for organizations to modernize and unify their in-house and
legacy applications.
Moving these applications from their dedicated hardware to the cloud also allows them to
leverage the power of the compute resources and multiple services available on the cloud.
Using the private cloud, organizations are integrating data and application services with their public cloud services.
This allows them to leverage their private cloud's compute capability for the larger jobs, while pulling data into an application on the private cloud from public cloud services.
Using the private cloud gives organizations the flexibility to build applications anywhere,
and move them anywhere, without having to compromise security and compliance in the
process.
Some of the key reasons that may prevent an organization from moving to a public cloud include security and regulatory concerns.
A private cloud offers these organizations the benefits of on-demand enterprise resources while exercising full control over critical security and compliance issues from within their dedicated environment.
HYBRID CLOUD
Hybrid cloud is a computing environment that connects an organization's on-premises private cloud and third-party public cloud into a single, flexible infrastructure for running the organization's applications and workloads.
The mix of public and private cloud resources gives organizations the flexibility to choose the optimal cloud for each application or workload.
Organizations can choose to run the sensitive, highly-regulated, and mission-critical applications
or workloads with reasonably constant performance and capacity requirements on private cloud
infrastructure while deploying the less-sensitive and more-dynamic workloads on the public cloud.
With proper integration and orchestration between the public and private clouds, you can move workloads between the two environments as needed.
For example, you can leverage additional public cloud capacity to accommodate a spike in
demand for a private cloud application (also known as “cloud bursting”).
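At its core, cloud bursting is a capacity check followed by a routing decision. A toy Python sketch (the threshold and names are hypothetical):

```python
# Toy cloud-bursting decision; numbers and names are illustrative only.
PRIVATE_CAPACITY = 100  # peak load the private cloud can absorb (assumed)

def route_workload(current_load: int) -> str:
    """Serve from the private cloud until it is full, then burst."""
    if current_load <= PRIVATE_CAPACITY:
        return "private-cloud"
    return "public-cloud"  # spill the spike onto rented public capacity

for load in (60, 95, 140):
    print(load, "->", route_workload(load))
# 60 -> private-cloud, 95 -> private-cloud, 140 -> public-cloud
```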
The key tenets of a hybrid cloud are interoperability, scalability, and portability.
Hybrid cloud is interoperable—which means that the public and private cloud services
can understand each other’s APIs, configuration, data formats, and forms of authentication
and authorization.
Hybrid cloud is also scalable: when there is a spike in demand, a workload running on the private cloud can leverage the additional public cloud capacity.
A hybrid cloud is also portable: since you're no longer locked in with a specific vendor, you can move applications and data not just between on-premise and cloud systems, but also between cloud service providers.
A hybrid monocloud is a hybrid cloud with one cloud provider, while a hybrid multicloud
is an open standards-based stack that can be deployed on any public cloud infrastructure.
The difference lies in the flexibility that the hybrid multicloud offers organizations in choosing among vendors and services.
There is also a variant of hybrid multicloud, called the composite multicloud, which makes
this flexibility even more granular as it distributes single applications across multiple
providers, allowing you to move application components across cloud services and vendors
as needed.
Hybrid cloud offers significant benefits in areas of security and compliance, scalability, and cost efficiency.
A hybrid cloud lets organizations deploy highly regulated or sensitive workloads in a private cloud while running less-sensitive workloads on the public cloud.
Using a hybrid cloud, you can scale up quickly, inexpensively, and even automatically using
the public cloud infrastructure, all without impacting the other workloads running on your
private cloud.
Because you're not locked in with a specific vendor and also don't have to make either-or decisions between the different cloud models, you can make the most cost-efficient use of your infrastructure resources.
You can maintain workloads where they are most efficient, spin-up environments using
pay-as-you-go in the public cloud, and rapidly adopt new tools as you need them.
A typical organization will have a range of applications and workloads spread across private,
public, and traditional IT environments.
This represents a range of opportunities for optimization via a hybrid cloud approach.
Software-as-a-Service integration.
Organizations are connecting SaaS applications available in the public cloud to their existing public cloud, private cloud, and traditional IT applications.
Organizations today are creating richer and more personal experiences by combining new
data sources on the public cloud—such as weather, social, the Internet of Things, CRM,
and ERP—with existing data and analytics, machine learning, and AI capabilities.
An increasing number of organizations are using public cloud services to upgrade the user experience of their on-premises applications and deploy them globally to new devices, while incrementally modernizing their core business systems.
VMware migration.
More and more organizations are "lifting and shifting" their on-premises virtualized workloads to the cloud, reducing their data center footprint and positioning themselves to scale without added capital expense.
Module 4
CLOUD INFRASTRUCTURE_OVERVIEW
After choosing the cloud service model and the cloud type offered by vendors, customers need to understand the infrastructure layer their services run on.
This layer consists of physical resources that are housed in Regions, Zones and Data
Centers.
A Cloud provider’s IT environment is typically distributed across many Regions around the
world.
The cloud Regions are isolated from each other so that if one Region was impacted by a natural
disaster like an Earthquake, the Cloud operations in other Regions would keep running.
Each Cloud Region can have multiple Zones (or Availability Zones or AZ for short), which
are typically distinct Data Centers with their own power, cooling and networking resources.
The isolation of zones improves the cloud's overall fault tolerance, decreases latency, and avoids creating a single shared point of failure.
The Availability Zones (and DataCenters within them) are connected to other AZs and regions,
private datacenters and the Internet using very high bandwidth network connectivity.
These data centers contain pods and racks or standardized containers of computing resources
such as servers, as well as storage, and networking equipment - virtually everything that a physical
IT environment has.
Computing Resources:
Cloud providers offer several compute options: Virtual Servers, Bare Metal Servers, and Serverless computing resources.
Most of the servers in a cloud datacenter run hypervisors to create virtual servers
or virtual machines (also called VMs for short), that are software-based computers, based on
virtualization technologies.
Other servers in the racks are bare metal servers that are physical servers that aren’t
virtualized.
Customers can provision VMs and Bare Metal servers as and when they need them and run their workloads on them.
Cloud users can also run their workloads on serverless computing resources, which are abstracted from the underlying infrastructure and managed entirely by the provider.
Storage:
->Information and data can consist of files, code, documents, images, videos, backups, snapshots, and databases, and can be stored on a range of storage types in the cloud.
->Bare Metal Servers and Virtual Servers are provisioned with default storage in local
drives.
->Since these cloud servers can be provisioned and decommissioned by customers on demand and freed up for use by other users, any information stored in a local drive can be lost when you de-provision the server.
->However there are other storage options available on the cloud to persist data that you can
choose depending on factors like how important your data is, how quickly you want to be able
to access it, how often you access it, and how secure you need it to be.
->These additional storage options include Block storage, File storage, and Object storage.
->Block and file storage modes are commonly used in traditional data centers, but they often struggle with the scale, performance, and distributed characteristics of the cloud.
->Object storage is the most common mode of storage in the cloud, as it's both highly distributed and resilient.
Networking:
includes traditional networking hardware like routers and switches, but more importantly
for users of the Cloud, the Cloud providers have Software Defined Networking (or SDN)
options where certain networking resources are virtualized or made available programmatically,
through APIs.
This allows for easier network provisioning, configuration, and management in the cloud.
When servers in the cloud are provisioned, you need to set up their public and private
network interfaces.
The public network interfaces, as the name suggests, connect the servers to the public
internet, whereas the private ones provide connectivity to your other cloud resources.
As in the physical IT world, network interfaces in the cloud need to have IP addresses and subnets assigned, and you need to control who can access your resources, which can be done by setting up Security Groups and Access Control Lists (ACLs).
For further security and isolation of your resources in the cloud, most Cloud providers
provide Virtual Local Area Networks (VLANs), Virtual Private Clouds (VPCs), and Virtual Private Networks (VPNs).
Some of the traditional hardware appliances such as firewalls, load balancers, gateways
and traffic analyzers can also be virtualized and made available as services in the cloud.
Another networking capability provided by the Cloud Providers is Content Delivery Networks
or CDNs, that distribute content to multiple points throughout the world so users accessing
the content can access it more quickly by getting it from a point nearest to them.
HYPERVISORS
A hypervisor is simply a piece of software that runs above the physical server, or host. It pools resources from the physical server and allocates them to your virtual environments.
2 types
1.Type 1/Bare metal hypervisor=>is installed directly on top of the physical server.
=>The most frequently used type of hypervisor; they're the most secure, they lower latency, and they're the ones you'll see most in the market. Examples include VMware ESXi, Microsoft Hyper-V, and open-source KVM.
2.Type 2=>the difference from Type 1 is a layer of host OS that sits between the physical server and the hypervisor. By that nature, they are also called Hosted.
=>These are a lot less frequent. They're mostly used for end-user virtualization.
Examples in the market include Oracle VirtualBox and VMware Workstation. They have a higher latency than a Type 1 hypervisor.
VMs are completely independent of one another, and you can run multiple of them on a hypervisor. The hypervisor manages the resources that are allocated to these virtual environments from the physical server.
Because they're independent, you can run different operating systems on different virtual machines: Windows here, or Linux here, or UNIX here, for example.
Because they're independent, they're also extremely portable: you can move a virtual machine from one hypervisor to another on a completely different machine almost instantaneously.
Benefits:
1) Cost savings.
When you think about the fact that you can run multiple virtual environments on one piece of infrastructure, this is server consolidation at its core. And the fact that you don't have to maintain nearly as many servers, run as much electricity, and you save on maintenance costs, means that you save on your bottom line.
2) Agility and speed.
Spinning up a virtual machine is relatively easy and quick - a lot simpler than provisioning an entire new environment for your developers if they say they want to spin up a new environment so that they can run a test scenario, whatever it might be.
3) Lower downtime.
Let's say that this host goes out unexpectedly. The fact that you can move VMs from one hypervisor to another on a different physical machine means that you have a great backup plan in place. So, if this host goes down, you can simply move your VMs very quickly to another hypervisor on a machine that is working.
VIRTUAL MACHINES
Virtualization and VMs are at the center of cloud computing. VMs are also known as virtual servers, virtual instances, or simply instances, depending on the cloud provider.
The various cloud providers make VMs available in a variety of configurations and deployment options.
When you create a virtual server in the cloud, you specify the Region and Zone or Data Center
you want the server to be provisioned in and the Operating System you want on it.
You can choose between shared (that is, a multi-tenant) VMs or dedicated (that is,
a single-tenant) VMs.
You can also choose between hourly or monthly billing, and select storage and networking options.
Shared or Public Cloud VMs are provider-managed, multi-tenant deployments that can be
provisioned on-demand with predefined sizes.
Being multi-tenant means that the underlying physical server is virtualized and is shared
To satisfy different workloads, cloud providers offer predefined sizes and configurations ranging from a single virtual core and a small amount of RAM to multiple virtual cores and large amounts of RAM.
For example, there can be configurations for Compute Intensive workloads, Memory Intensive workloads, and High Performance I/O. Providers also offer custom configurations that allow users to define the number of cores and the amount of RAM and storage.
Public VMs are usually priced by the hour (or in some cases even seconds) and configurations
start as low as pennies per hour.
Some providers also let you get monthly VMs, which can result in some cost savings if you know you will run the VM for at least a month; but if you decide to decommission the VM in the middle of the month, you will still be charged for the full month.
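Whether monthly billing actually saves money reduces to a simple breakeven calculation, sketched below with made-up rates (real prices differ by provider and configuration):

```python
# Made-up rates, for illustration only.
HOURLY_RATE = 0.05    # $/hour for an hourly-billed VM (assumed)
MONTHLY_RATE = 25.00  # flat $/month for the same size (assumed)

breakeven_hours = MONTHLY_RATE / HOURLY_RATE
print(f"Monthly billing wins past {breakeven_hours:.0f} hours per month")
# Monthly billing wins past 500 hours per month (about 21 days of continuous
# use here); and remember: decommission mid-month and you still pay in full.
```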
Transient or Spot VMs take advantage of unused capacity in a cloud data center.
Cloud providers make this unused capacity available to users at a much lower cost than
Although the Transient VMs are available at a huge discount, the Cloud provider can choose
to de-provision them at any time and reclaim the resources for provisioning regular, higher-priced,
VMs.
Because you run the risk of losing these VMs when capacity in the data center decreases,
these VMs are great for non-production workloads such as testing and developing applications.
They are also useful for running stateless workloads, testing scalability, or running
big data and high performance computing (HPC) workloads at a low cost.
Reserved virtual server instances allow you to reserve capacity and guarantee resources for future deployments.
You reserve the desired amount of virtual server capacity, provision instances from that capacity
when you need them, and choose a term, such as 1 year or 3 years, for your reserved capacity.
You're guaranteed this capacity within the data center of your choice for the life of the term.
By committing to a longer term, you can also lower your costs compared to hourly or monthly
instances.
This can be useful when you know you will require at least a certain level of cloud capacity over a longer term.
And if you exceed your reserved capacity, you can always choose to supplement it with hourly or monthly VMs to cover the unplanned usage.
Note however that not all predefined VM families or configurations may be available as reserved.
Dedicated Hosts
A dedicated host is a physical server dedicated to a single customer's use. When provisioning a dedicated host, you specify the data center and POD in which you want it provisioned.
A bare metal server is a single-tenant, dedicated physical server. In other words, it's dedicated
to a single customer.
The cloud provider actually takes the physical server and plugs it into a rack in a data
center for customers. The cloud provider manages the server up to the operating system (OS),
which means if anything goes wrong with the hardware or rack connection, they will fix
or replace it and then reboot the server. The customer is responsible for administering everything above the OS.
Bare metal servers are either preconfigured by the cloud provider to meet workload packages or they can be custom-configured per customer specifications, including the processors, RAM, and storage. Customers can bring their own OS and can install certain hypervisors that aren't available from the cloud provider,
and thus create their own virtual machines and farms. With bare metal servers you can
also add GPUs, which are designed for accelerating scientific computation, data analytics, and professional-grade visualization.
Because bare metal servers are physical machines, they take longer to provision than virtual
servers. Pre-configured builds of bare metal can take 20 to 40 minutes to provision and
custom-builds can take around three or four hours, but these provisioning times can vary
by Cloud provider.
As Bare Metal servers are dedicated for use by a single client at any
given time, they tend to be more expensive than similarly sized Virtual Machines.
Also
note that unlike virtual servers, not all cloud providers provide Bare Metal servers."
Since bare metal servers are fully customizable, they can do what a customer wants in the most
demanding environments.
Bare metal servers are dedicated and intended for long-term, high-intensity workloads.
Customers get greater processing power and control of bare metal servers because there's no hypervisor required. This means
that they can be scaled up and optimized for high availability as needed.
As there is no sharing of the underlying server hardware with other customers, bare metal servers fulfil the demanding needs of high-performance computing (HPC) and data-intense applications that require minimal latency-related delays. These servers also excel in big data analytics applications.
Some workload examples that bare metal servers satisfy are ERP, CRM, AI, Deep Learning, and
virtualization.
If you use any applications that require high degrees of security control or apps that you’ve
typically run in an on-premises environment, then a bare metal server is a good alternative
in the cloud.
When comparing bare metal servers to virtual servers, the most important considerations are found in customer need. Bare metal servers work best for CPU- and I/O-intensive workloads; they excel with the highest performance and security, satisfy strict compliance requirements, and offer complete flexibility, control, and transparency, but come with added management and operational overhead. Virtual servers, on the other hand, are rapidly provisioned, provide an elastic and scalable environment, and are low cost to use; however, since they share the underlying hardware with other customers, their performance can vary.
CLOUD NETWORKING
As cloud environments gain greater adoption and digital data grows rapidly, secure and well-designed cloud networks become essential.
As one might expect, the notion of building a cloud network is not much different from building an on-premises network.
The main difference stems from the fact that, in the cloud, we use logical instances of
networking elements as opposed to physical devices.
For example, Network Interface Controllers (NICs) would be represented by vNICs in cloud
environments.
In the cloud, networking functions are delivered as a service rather than in the form of rack-
mounted devices.
To create a network in the cloud, one starts by defining the size of the network, or its IP address range.
Cloud networks are deployed in networking spaces that are logically separated segments of the network, using options including Virtual Private Clouds (VPCs) that in turn can be divided into smaller segments called subnets.
Logically segmented cloud networks are a private carveout of the cloud that offers customers protection and isolation.
Cloud resources, such as VMs or Virtual Server Instances (VSIs), storage, and network connectivity, are deployed into subnets.
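Sizing the network and carving it into subnets is CIDR arithmetic, which Python's standard ipaddress module can illustrate (the 10.0.0.0/16 range below is an arbitrary private-address example):

```python
import ipaddress

# Choose the VPC's address prefix (CIDR block); /16 gives 65,536 addresses.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Divide it into /24 subnets (256 addresses each), e.g. one per application tier.
subnets = list(vpc.subnets(new_prefix=24))
web, app, db = subnets[0], subnets[1], subnets[2]
print(web, app, db)  # 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24
```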
Using subnets allows users to deploy enterprise applications using the same multi-tier concepts used in on-premises data centers.
Subnets are also the main area where security is implemented in the cloud.
Every subnet is protected by Access Control Lists (ACLs) that serve as a subnet-level firewall.
Within the subnet, one could create Security Groups that provide security at the instance level.
Once you build a subnet, it is time to add some VSIs and storage to it so that you can run your applications.
Let's say you have a 3-tier application that requires web-facing VSIs, application VSIs, and database VSIs.
In this case, we would place the web-facing VSIs into one Security Group, the application VSIs in a second Security Group, and the database VSIs in a third SG.
It goes without saying that the web-facing VSIs need Internet access.
A Public Gateway instance is added to the network to enable users' access to the application over the internet.
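The security-group layout for this 3-tier example can be expressed as data. The sketch below uses plain Python dictionaries with an invented rule format; it is illustrative only, not any provider's actual schema:

```python
# Invented rule format, for illustration; real providers define their own.
security_groups = {
    "web-sg": [  # internet-facing tier, reached through the public gateway
        {"allow": "0.0.0.0/0", "port": 443},  # HTTPS from anywhere
    ],
    "app-sg": [
        {"allow": "web-sg", "port": 8080},    # only the web tier may call in
    ],
    "db-sg": [
        {"allow": "app-sg", "port": 5432},    # only the app tier reaches the DB
    ],
}
# Each tier's VSIs join one group, so even a compromised web tier cannot
# talk to the database directly.
```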
While public gateways are great for Internet access to the cloud, enterprises are also interested in extending their on-premises resources to the cloud by securely connecting them using VPNs.
When building many subnets and deploying several workloads, it becomes necessary to ensure the performance and availability of the applications.
That is achieved with Load Balancers that ensure availability of bandwidth for the different
applications.
Enterprises with hybrid cloud environments find that using dedicated high-speed connections between clouds and on-premises resources is a more secure and more efficient way than sending traffic over the public internet.
Some cloud service providers offer such connectivity, such as IBM Cloud and its Direct Link solution.
In summary, building a cloud network entails creating a set of logical constructs that deliver networking
functionality that is akin to the data center networks that all IT professionals have come
to rely on for securing their environments and ensuring high performing business applications.
CONTAINERS
-->Containers are an executable unit of software in which application code is packaged, along with its libraries and dependencies, in common ways so that it can be run anywhere, whether it be on the desktop, traditional IT infrastructure, or the cloud.
Containers are small, fast, and portable, and unlike virtual machines, they do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS.
Containers have been around for quite some time. It was back in 2008 that the Linux kernel introduced cgroups, or control groups, which basically paved the way for all the different container technologies we see today. That includes Docker, but also things like Cloud Foundry, as well as Rocket (rkt) and others.
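Docker, mentioned above, can be driven from Python through its SDK (pip install docker), assuming a local Docker engine is running. A minimal run of a containerized command:

```python
import docker  # pip install docker; requires a running Docker engine

client = docker.from_env()

# Pull the small Alpine Linux image and run one command in a container.
# Note no guest OS boots here: the container shares the host's kernel.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode())  # -> hello from a container
```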
CLOUD STORAGE
Certain storage must be attached to a compute node before the storage can be accessed, whereas
other storage types can be directly accessed either through the public Internet or a dedicated private network connection.
Cloud providers host, secure, manage, and maintain the cloud storage and associated
infrastructure to ensure you have access to your data when you need it.
Cloud storage services allow you to scale your capacity as you need, so you only pay for what you use.
The cost of storage will vary by type, but in general, the faster the read/write speed, the higher the cost.
Cloud storage is available in four main types: Direct Attached, File Storage, Block Storage, and Object Storage.
Direct attached (or local) storage
-This storage is presented directly to a cloud-based server and is effectively either within the host server chassis or within the same rack.
-This storage is fast and normally only used to store a server's operating system, although it can have other use cases. The two main reasons why direct attached storage is not so great for other uses besides storing the operating system are that it's typically 'ephemeral', meaning that it only lasts as long as the compute resource it's attached to, and it cannot be shared with other nodes; and while you can use RAID techniques, it's not as resilient as other types of cloud storage.
File storage
-NFS stands for Network File System and means that the storage is connected to compute nodes over a standard ethernet network.
-File storage can be slower than direct attached storage or block storage because the data travels over an ethernet network.
It also tends to be lower cost than either direct attached or block storage.
One advantage of File Storage is that it can be mounted or used on multiple servers at
once.
File-based storage is a simple, straightforward approach to data storage and works well for
organizing data in a hierarchical folder structure, that desktop users are familiar with.
Block storage
-Block storage is usually attached to compute nodes over a dedicated, high-speed fibre network, which means that read and write speeds are typically much faster and more reliable than with file storage,
making block storage suitable for use with databases and other applications where disk
speed is important.
You typically provision block storage in 'volumes', which can then be mounted onto a compute node.
Volumes can normally only be mounted onto one compute node at a time.
With both File and Block storage, you may also hear the term ‘IOPS’.
IOPS stands for ‘Input/Output Operations Per Second’ and relates to the speed of
the storage or to put it another way, how quickly data can be read from or written to
the storage.
Persistence is a term that is used when provisioning File or Block storage and relates to what happens to the storage when the compute node it is attached to is deleted.
If the storage is set to ‘persist’ then it will not be deleted along with the compute
node, meaning that it and its data are preserved and available to mount onto another compute node.
You can also, in some cases, set the storage so that it is automatically deleted with the
compute node that it is mounted onto – in this case, as we know, it becomes Ephemeral
Storage.
Here, you will also stop paying for the storage but you will lose any data unless it is backed
up somewhere.
There are several ways to back up data in the cloud, but one way to back up both File and Block storage is to take snapshots.
Snapshots are usually fast to create (they don’t actually write any data, or rather
they create metadata), don’t require downtime and subsequent snapshots record only changes
to the data.
They are great for returning storage to the way it was at a particular snapshot, though
note, they cannot be used to recover individual files.
Object storage
-This is a different type of storage inasmuch as it’s not attached to a compute node; instead, you access the data in it through an API.
Of all the storage types, Object Storage is by far the cheapest and also the slowest in
terms of read and write speeds, but it is infinite in size to the end user.
Unlike File and Block storage, where you provision a certain storage capacity and it fills up
over time, with Object Storage you can keep adding files to it and it never fills up.
This makes Object Storage a fantastic repository for all sorts of unstructured data types,
large and small, including documents, videos, logs, backups, IoT data, application binaries, and virtual machine images.
FILE STORAGE
Like direct attached storage, file storage must be attached to a compute node before the storage can be accessed.
However, File Storage can be less expensive and more resilient to failure, and it involves less disk management and maintenance for you as the user, compared to direct attached storage.
You can also provision much larger amounts of File Storage and present it as a disk to
a server.
That is, the physical disks are contained in a separate, specialised piece of hardware
and they are then connected to the compute node via the underlying infrastructure in
the datacentre.
These storage appliances are not only extremely resilient to failure; the data is also far
more secure in them, as these storage appliances offer services such as encryption in transit and at rest.
File Storage is mounted to compute nodes via an ethernet network – the same kind of network
that you might receive email or browse the internet over, although this ethernet network
is normally dedicated to the task.
One of the issues with ethernet networks is that their speed can vary – the more loaded
an ethernet network is, the more likely it becomes that its speed or bandwidth will
be affected.
Of course, Cloud Providers build their storage networks to handle very high volumes of traffic, but some variance in speed is always possible.
Therefore, File storage tends to be used for workloads where consistently high network speeds are not a necessity.
In terms of workloads, File Storage can typically be mounted onto more than one compute node
at a time, where the mounted disk or volume looks just like another drive on the compute
node.
The ability for File Storage to be mounted to multiple compute nodes at a time makes it
an ideal solution where some sort of common storage is required – for example, a departmental
file share, or a ‘landing zone’ for incoming files that need to be processed by an application.
In these applications, the potential variance in the speed of the connecting network is not an issue.
Of course, where cost is an issue, you can use file storage for other applications as well.
When you provision file storage, one consideration you need to take into account is the IOPS value you require.
IOPS stands for Input/Output Operations Per Second and refers to the speed at which the
disks can write and read data (note this is not the speed of the network between the storage and the compute node).
The higher the IOPS value, the faster the speed of the underlying disk.
Understanding IOPS is important because if the IOPS value is too low for your application,
the storage can become a bottleneck and cause your application to run slowly.
Alternatively, if the IOPS is too high, you will probably be paying more than you need
to for your storage.
For example, a file share may be mounted on 30 different compute nodes, and an application
writes and requests data to and from that share 60 times per minute; across all 30 nodes, that is 1,800 operations per minute, or only 30 operations per second.
With this simple example, you can see that each application has different IOPS requirements.
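To make that concrete, here is a tiny back-of-the-envelope calculation in Python, using just the numbers from the example above:

# Back-of-the-envelope IOPS estimate for the file-share example above.
nodes = 30                 # compute nodes mounting the share
ops_per_node_per_min = 60  # read/write operations per node per minute

total_ops_per_min = nodes * ops_per_node_per_min
required_iops = total_ops_per_min / 60  # convert to operations per second

print(f"Total: {total_ops_per_min} ops/min -> {required_iops:.0f} IOPS")
# Total: 1800 ops/min -> 30 IOPS
# A modest IOPS tier is plenty here; paying for a high-IOPS tier
# would be wasted money for this workload.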
BLOCK STORAGE
Block storage breaks files into chunks (or blocks) of data and stores each block separately under a unique address.
Like direct attached storage and file storage, block storage also must be attached to a compute node before the storage can be accessed.
Block storage, like file storage, can be mounted from remote storage appliances, making it
extremely resilient to failure and keeping the data far more secure, on account of appliance services such as encryption in transit and at rest.
Block storage is mounted as a volume to compute nodes using a dedicated network of optical fibres.
These fibre optic networks are more expensive to build than the ethernet ones which deliver
File Storage, which is one reason why Block Storage tends to have a higher price-point.
However, since the traffic is moving faster and with speed consistency, these networks are perfect for workloads that need consistent and fast access to their disks.
In terms of workloads, it is important to note that unlike File Storage, which can be
mounted onto 80 compute nodes or more, Block storage is normally mounted onto only one compute node at a time.
Since these disks run at a consistent high speed, they are perfect for workloads that demand fast, reliable disk access, such as databases.
Block storage is not suitable for workloads where there needs to be some level of disk sharing between multiple compute nodes.
For block storage, as it is for file storage, you need to take the IOPS capacity of the storage into account.
Most cloud providers will allow you to specify IOPS characteristics when you provision storage
and, in some cases, adjust the IOPS of your storage as you need – so if the requirements of your application change, the storage can change with them.
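As a sketch of what this looks like in practice, here is a hypothetical example of provisioning a block volume with an explicit IOPS figure using the AWS boto3 SDK (the zone, size, and IOPS values are illustrative; other providers expose equivalent options in their own portals and SDKs):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision a 500 GB provisioned-IOPS volume. 'io1' is a volume type
# that lets you specify IOPS explicitly; sizing IOPS to the workload
# avoids both bottlenecks and overpaying.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,           # capacity in GB
    VolumeType="io1",   # provisioned-IOPS volume type
    Iops=3000,          # illustrative IOPS figure
)
print("Created volume:", volume["VolumeId"])

# You can later raise or lower the IOPS without re-provisioning.
ec2.modify_volume(VolumeId=volume["VolumeId"], Iops=5000)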
Both Block and File Storage are taken from appliances which are maintained by the service provider.
Both are normally highly available and resilient, and will often include data encryption at rest and in transit.
File storage is very reliable, but the speed of the connecting network can vary, based
on load.
Block storage is attached via a high-speed fibre network, which is very reliable and
consistent.
File storage is a good choice where file shares are required and where workloads do not require consistently high-speed access to disk.
Block storage is a good choice when supporting an application that needs consistent, fast access to its disks.
Remember to consider the IOPS requirements of the application when provisioning either type of storage.
OBJECT STORAGE-OVERVIEW
In this video, we’re going to start to understand what Object Storage is, how data is stored
in Object Storage, and how it differs from more traditional storage types such as File and Block storage.
The first thing to note about Object Storage is that you do not connect it to a particular compute node.
Instead, you provision an Object Storage service instance and use an API (or Application Programming Interface) to store and retrieve your data.
This means you can directly use Object Storage with anything that can call an API, and you do not need a compute node to access it.
The second thing to note about Object Storage is that it’s less expensive than other cloud
storage options.
Its per-gigabyte cost is typically a couple of US cents per month, and in some cases even less.
The third and possibly most important thing to note about Object Storage is that it’s
effectively infinite.
With file and block storage, you specify the size of the storage you want in gigabytes
or terabytes and then pay a fee based on the size you provisioned.
With Object Storage, you just consume the storage you need and pay a per-gigabyte cost for what you consume.
You can keep uploading files and the storage will never run out.
So what is Object Storage good for? Well, Object Storage is great for storing large amounts of unstructured data.
By unstructured, this means that the data is not stored in any kind of hierarchical folder
or directory structure – Object Storage uses ‘buckets’, and objects are stored inside those buckets.
A bucket is a bit like a folder, in the sense that you can give buckets meaningful names and,
of course, have different buckets for different object types – but you cannot place a bucket
within a bucket.
When an object is placed in a bucket, it also has some metadata (data about the data) added to it.
This metadata helps applications to both locate and access the object, as well as provide
information on the time that the data was stored or last accessed.
When you create a bucket, you don’t need to provide or define any sizing information—the
bucket will just hold the data that you place inside it, and the service provider ensures that it never runs out of space.
Buckets can hold as little as a few bytes of data, right up to multiple petabytes, and
you can build up the amount of data stored as slowly or quickly as you like—as well as delete data whenever you need to.
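As a sketch of how working with buckets looks in code, here is a minimal example using the widely supported S3-style API via the Python boto3 library (the bucket name and endpoint are placeholders; IBM Cloud Object Storage and many other providers accept the same style of calls through their S3-compatible endpoints):

import boto3

# Point the S3 client at your provider's S3-compatible endpoint.
cos = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # placeholder endpoint
)

# Create a bucket. Note: no size is specified anywhere --
# the bucket simply grows as you add objects to it.
cos.create_bucket(Bucket="my-department-archive")

# Store an object. Custom metadata travels with the object and
# helps applications locate and describe it later.
cos.put_object(
    Bucket="my-department-archive",
    Key="reports/2024/q1-summary.pdf",
    Body=open("q1-summary.pdf", "rb"),
    Metadata={"department": "finance", "retention": "7y"},
)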
The service provider also takes care of resilience and making sure that the Object Storage solution
is highly available.
Some cloud providers offer different types of buckets with different levels of resilience.
For example, they offer buckets which are resilient but where the data is only stored in a single location or region.
This is a good option where data needs to reside in a particular geographical location, for example for regulatory reasons.
They will then offer buckets which are highly available across regions, where the data is
stored multiple times in different datacentres (or zones) in the same region or even in multiple
regions.
These options usually cost more, but they provide both the highest level of resilience as well as the highest availability.
Object Storage has a very ‘flat’ storage structure, which we’ll explain in the next
lesson.
This data can be anything from text files to audio and video files, from IoT data to virtual machine images.
Pretty much any data which is static, and where fast read and write speeds are not necessary, is a good fit for Object Storage.
Object Storage would, however, not be suitable for running operating systems, nor applications
such as databases or anything else where the contents of the files change.
The data that you can store using Object Storage can be anything from text files to audio and
video files, from IOT data to virtual machine images, from backup files to data archives.
You cannot run operating systems or other applications such as databases using Object
Storage.
You can have multiple buckets, but you cannot place buckets within buckets.
You do not need to specify a size for a bucket, you can just use as little or as much space
as you need.
Many providers offer different types of buckets with different charges for each.
Some are based on resilience and availability, while others are based on the frequency at which the objects inside are accessed.
You access Object Storage not through a compute node, but through Object Storage APIs.
Object Storage buckets also have storage ‘tiers’ or ‘classes’ associated with them, and these relate to how frequently the data in them is accessed.
A standard tier bucket is where you would store objects that are frequently accessed.
This tier tends to have the highest per gigabyte cost associated with it.
A ‘vault’ or ‘archive’ tier is where you might store documents that are accessed
perhaps only once or twice a month or less, and this will be offered at a lower storage
cost; there may also be a ‘cold vault’ tier, where you would store data that is typically accessed only once or twice a year, if at all.
This storage often costs just a fraction of a US cent per gigabyte per month.
Often, you can also set up automatic archiving rules for your data, meaning that if an object
isn’t accessed for a period of time, it will automatically be moved to a cheaper storage
tier.
The rule uses some of the object’s metadata to determine when it should be archived.
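As an illustration, an automatic archiving rule might be defined through the S3-style lifecycle API like this (the bucket name, prefix, day threshold, and tier name are placeholders; each provider documents its own tier names):

import boto3

cos = boto3.client("s3", endpoint_url="https://s3.example-provider.com")

# Lifecycle rule: any object under 'logs/' that hasn't been needed
# day-to-day is automatically moved to a cheaper archive tier after
# 90 days, using the object's metadata to track its age.
cos.put_bucket_lifecycle_configuration(
    Bucket="my-department-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},  # placeholder tier
                ],
            }
        ]
    },
)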
Note that Object Storage does not come with IOPS options.
Object Storage tends to be very slow in comparison with file or block storage, so reads and writes can take noticeably longer.
Where providers offer ‘cold vault’ buckets, data retrieval from these tiers can sometimes take hours.
If your application needs fast access to files, then Object Storage may not be a good option.
We’ve mentioned that Object Storage is priced per gigabyte used, but there can also be other charges, such as charges for retrieving data.
These costs are similarly low, but access charges can be higher for data that is in vault or
cold vault tiers, so it is important to ensure that the data is in the correct tier, based on how frequently it needs to be accessed.
Object Storage does not need to be attached to a compute node for you to access it; rather, you read and write the data through an API.
The most common API for object storage is called the ‘S3’ API, which has become a de facto standard; this matters because it means developers can write code which is able to access multiple vendors’ Object Storage.
The API calls allow applications to manage object storage and buckets, as well as PUT, GET, and DELETE the objects themselves.
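Continuing the earlier sketch, retrieving and deleting an object through the same S3-style API might look like this (names and endpoint are placeholders; because the API is S3-compatible, the same code works against many vendors’ Object Storage):

import boto3

cos = boto3.client("s3", endpoint_url="https://s3.example-provider.com")

# GET the object back. The response streams the object's bytes and
# also returns the custom metadata stored alongside it.
response = cos.get_object(
    Bucket="my-department-archive",
    Key="reports/2024/q1-summary.pdf",
)
data = response["Body"].read()
print("Size:", len(data), "bytes")
print("Metadata:", response["Metadata"])

# DELETE works the same way once the object is no longer needed.
cos.delete_object(
    Bucket="my-department-archive",
    Key="reports/2024/q1-summary.pdf",
)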
Object storage is not just for new applications but can be used to meet requirements for existing
ones.
It can also be used as an effective solution for backup and disaster recovery, as a replacement for tape backups.
Many backup packages now include the ability to back data up into the cloud, using Object
Storage.
Object storage is more efficient than tape backup solutions, which require tapes that
need to be physically loaded into, and removed from, tape drives, and moved off-site for
geographic redundancy.
Object Storage has different tiers, with different charges for each.
Some are based on the frequency at which the objects inside are accessed.
Object Storage is priced per gigabyte of storage used per month, plus some charges for data
retrieval.
Object Storage is very slow in comparison with File and Block Storage.
You can often create rules which allow the automatic ‘archiving’ of objects to cheaper storage tiers.
Many Object Storage providers have an ‘S3 Compatible’ API, which means developers
can create code that will work against multiple vendors’ Object Storage solutions.
Object storage in the Cloud offers an effective Backup and Disaster Recovery solution.
CDN
A content delivery network, or CDN, is a distributed network of servers that caches and serves web content from locations close to where users are, reducing latency and taking load off the origin servers.
HYBRID MULTICLOUD
A hybrid multicloud approach combines public cloud, private cloud, and on-premises infrastructure, often from multiple vendors, and is a key enabler of application modernization, letting organizations mix and match the capabilities of each environment.
MICROSERVICES
A micro-services architecture is an approach in which a single application is composed of many smaller, loosely coupled, independently deployable services, each typically owned by a small team.
Because the services are independent, each one can be updated and scaled on its own – for example, an e-commerce site’s recommendations micro-service can be scaled up when traffic spikes, without touching the rest of the application.
SERVERLESS COMPUTING
Serverless computing offloads management tasks such as scaling, scheduling, patching, and provisioning of application stacks
to cloud providers, allowing developers to focus their time and effort on the code and the business logic specific to their applications.
Serverless doesn’t mean there are no servers; only that the management of the underlying infrastructure is abstracted away from the user.
The serverless computing environment allocates resources as needed for the applications.
Let’s look at some key attributes that distinguish serverless computing from other compute models.
Serverless computing runs code only on-demand, on a per-request basis, scaling transparently with the number of requests being served.
Serverless enables end users to pay only for resources being used, never paying for idle
capacity—unlike virtual servers on the cloud, where end users pay for VMs even when they sit idle.
Code is executed as individual functions where each function runs inside a stateless container.
No prior execution context is required to serve a request; with each new request, the function can run in a fresh, stateless container.
You could, for example, have a serverless platform sitting between the front-end of your website and a cloud-based storage service.
The serverless app could be translating text files and storing them in the cloud-based storage
service.
Using the front-end of your website, you send text files to the serverless app; the app creates
translations in different languages, and then stores these translated files in cloud storage, from where they can be served back to users.
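As a sketch of the programming model, here is what a minimal function might look like on an Apache OpenWhisk-based platform such as IBM Cloud Functions (the translation step is stubbed out as a placeholder; a real app would call an actual translation service):

# A minimal serverless function in the Apache OpenWhisk style (the
# open-source project behind IBM Cloud Functions). The platform calls
# main() once per request; no state is kept between invocations.

def translate(text, language):
    # Placeholder: a real implementation would call a translation
    # service API here.
    return f"[{language}] {text}"

def main(params):
    # 'params' carries the request payload as a dictionary.
    text = params.get("text", "")
    languages = params.get("languages", ["es", "fr", "de"])

    # One translation per requested language. In the website example
    # above, each result would then be written to cloud storage.
    return {"translations": {lang: translate(text, lang) for lang in languages}}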
Some of the key serverless computing services today include IBM Cloud Functions (which is based on Apache OpenWhisk), AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions.
That said, serverless computing is not a fit for all scenarios.
You need to evaluate application characteristics and ensure that the application is aligned with serverless architecture patterns.
Applications that qualify for a serverless architecture include some of the following characteristics: they run as short-lived, stateless tasks that complete quickly (in seconds or minutes).
Serverless architectures are well-suited for use cases around data and event processing, among others.
Given its inherent and automatic scaling, rapid provisioning, and a pricing model that
does not charge for idle time, supporting microservices architecture has become one of the most common use cases for serverless.
Serverless is well-suited to working with structured text, audio, image, and video data,
around tasks such as data enrichment, transformation, validation and cleansing, and PDF processing.
Parallel tasks such as data search and processing, and genome processing, are also well-suited to serverless computing.
Serverless is also well-suited for working with all sorts of data stream ingestions,
including business data streams, IoT sensor data, log data, and financial market data.
And finally, let’s look at some challenges worth considering about serverless.
Serverless workloads are designed to scale up and down in response to workload, but for steady, predictable workloads this can work out more expensive than traditional, dedicated capacity.
The serverless application architecture can be vendor dependent, and so there is a potential for lock-in to a particular provider.
Because serverless architectures scale up and down in response to workload, they also
sometimes need to start up from zero to serve a new request.
For certain applications, this delay isn’t much of an impact, but for something like a latency-sensitive, real-time application, this ‘cold start’ delay can be a problem.
Even so, by freeing teams from managing infrastructure, serverless computing plays a growing role in enabling innovation.
DEVOPS ON CLOUD
Development teams need to design, develop, deliver and run software as reliably and efficiently
as possible.
Operations teams need to identify and resolve problems as soon as possible by monitoring and managing the applications they run.
Combining development and operations with the ability to monitor, analyze, and optimize away
bottlenecks gives us DevOps—a collaborative approach in which business owners and the
development, operations, and quality assurance teams work together to deliver software continuously.
A DevOps approach applies agile and lean thinking principles to all stakeholders in an organization
who develop, operate, or benefit from the business’s software systems, including customers,
suppliers, partners.
By extending lean principles across the software supply chain, DevOps capabilities improve both the speed and the quality of software delivery.
Using the DevOps approach, developers can produce software in short iterations
on a continuous delivery schedule of new features and bug fixes in rapid cycles;
and businesses can seize market opportunities and reduce time to include customer feedback
in their products.
Key DevOps practices include:
Continuous Integration; creating packaged builds of the code changes released as immutable
images, where immutable implies that when modifications are needed, the entire component is rebuilt and redeployed rather than changed in place.
Continuous Deployment; which involves progressing each new packaged build through the deployment pipeline and into production.
Continuous Monitoring; with tools that help developers understand the performance and availability of their applications.
Delivery Pipeline; which is an automated sequence of steps that involves the stages of
Ideation, Coding, Building, Deploying, Managing, and Continuous Improvement, which loops back to Ideation.
While DevOps can apply to applications anywhere, there is an especially compelling case for applying DevOps to applications in the cloud.
With its near limitless compute power and its available data and application services, the cloud is where much of modern software development now happens.
DevOps’ tools, practices, and processes are helping tackle some of the complexities
and challenges posed by the cloud and allowing solutions to be delivered—quickly and reliably.
Let’s look at some core capabilities that DevOps provides to make building and running
applications in the cloud a lot more manageable: DevOps best practices make it possible to
fully automate the installation process in a way that is documented, repeatable, verifiable,
and traceable.
The DevOps’ practices of continuous integration and continuous deployment help create a fully
automated deployment pipeline, which is important all through the application development
lifecycle.
Cloud native applications form a complex distributed system with multiple moving parts and many independently updatable components.
DevOps principles are essential to define how people work together to build, deploy,
and manage applications in a cloud native approach.
With the DevOps best practices of automated provisioning and continuous deployment, developers,
quality professionals, and other stakeholders can test in low-cost, production-like test
environments that were previously not available—enhancing both productivity and quality.
When systems are compromised or struggling to recover from natural disasters, DevOps
best practices make it possible to rebuild these systems quickly and reliably.
DevOps provides a powerful set of principles, practices, and tools to realize the full potential of cloud benefits.
APPLICATION MODERNISATION
Application modernization is the process of taking an existing application – often a monolithic service – and updating it: breaking it into microservices, moving it onto cloud platforms, and building and delivering it in a DevOps fashion. So we’ve now seen, across this module, the building blocks that make application modernization possible.