Unit 3 Notes
ON
CLOUD COMPUTING
III B. TECH – II SEM
Prepared by
INTRODUCTION
As we know, cloud computing technology is used by both small and large organizations
to store information in the cloud and access it from anywhere, at any time, over an
internet connection.
There are several processes and components of cloud computing that need to be
discussed. One topic of prime importance is architecture.
Architecture is the hierarchical view of the components over which the existing technology
is built and of the components that depend on that technology. Another topic related
to architecture is anatomy. Anatomy describes the core structure of the cloud.
CLOUD ARCHITECTURE
• Every technological model is built on an architecture that determines how the model
functions and provides a hierarchical view for describing the technology.
• The cloud also has an architecture that describes its working mechanism. It includes the
dependencies on which it works and the components that work over it.
• The cloud is a recent technology that is completely dependent on the Internet for its
functioning.
• The architecture can be divided into four layers based on how users access the cloud.
They are as follows.
Layer 1 (User/Client Layer)
• This is the lowest layer in the cloud architecture; all users or clients belong to this
layer.
• This is the place where the client/user initiates the connection using a thick client or a
thin client, i.e., a mobile or any handheld device that supports the basic functionality
needed to access a web application.
• A thin client here refers to a device that is completely dependent on some other system
for its functionality; in simple terms, it has very low processing capability. Everyday
examples of thin clients include Yahoo Messenger, Office 365, and Microsoft Outlook.
• Similarly, thick clients are general computers that have adequate processing capability
and sufficient capacity for independent work. Everyday examples of thick clients
include desktop PCs or laptops running Windows or macOS.
• Usually, a cloud application can be accessed in the same way as a web application. But
internally, the properties of cloud applications are significantly different. Thus, this layer
consists of client devices.
Layer 2 (Network Layer)
• This layer provides the network connection on which the whole cloud infrastructure
depends and over which services are offered to customers.
• A public cloud usually exists in a specific location that remains abstract, i.e., unknown
to the user, yet it can be accessed from anywhere in the world. In the case of a private
cloud, connectivity may be provided by a local area network (LAN).
• In either case, the cloud depends completely on the network that is used. Usually, when
accessing a public or private cloud, users require a minimum bandwidth, which is
sometimes specified by the cloud provider.
• This layer does not come under the purview of service-level agreements (SLAs), that is,
SLAs do not take into account the Internet connection between the user and cloud for
quality of service (QoS).
Layer 3 (Cloud Management Layer)
• This layer consists of the software used to manage the cloud. This can be a cloud
operating system (OS), software that acts as an interface between the data center and
the user, or management software that allows resources to be managed.
SIET III – II
• Typical tasks in this layer include optimizations such as server consolidation and
storage workload consolidation, as well as internal cloud governance.
• This layer comes under the purview of SLAs and is also called the cloud infrastructure
layer; that is, the operations taking place in this layer affect the SLAs agreed between
the users and the service provider.
• Any delay in processing or any discrepancy in service provisioning may lead to an SLA
violation, and as per the rules, any SLA violation results in a penalty paid by the
service provider. These SLAs apply to both private and public clouds. Popular public
cloud service providers include Amazon Web Services (AWS) and Microsoft Azure;
similarly, OpenStack and Eucalyptus allow private cloud creation, deployment, and
management.
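As a sketch of how an SLA violation might translate into a penalty, the check below compares measured availability against an agreed threshold. The 99.9% threshold and the per-0.1% penalty rate are invented for illustration, not taken from any real provider's SLA.

```python
# Sketch: turning measured availability into an SLA penalty. The 99.9%
# threshold and the penalty rate are hypothetical, not from a real SLA.

def sla_penalty(total_minutes: int, downtime_minutes: int,
                sla_threshold: float = 99.9,
                penalty_per_tenth: float = 100.0) -> float:
    """Return the penalty owed if measured availability falls below the SLA."""
    availability = 100.0 * (total_minutes - downtime_minutes) / total_minutes
    if availability >= sla_threshold:
        return 0.0          # SLA met: no penalty
    # Penalty grows with every 0.1% of availability below the threshold.
    shortfall_tenths = (sla_threshold - availability) / 0.1
    return round(shortfall_tenths * penalty_per_tenth, 2)

# A 30-day month has 43,200 minutes; 100 minutes down is about 99.77% availability.
print(sla_penalty(43_200, 100))   # SLA violated: provider owes a penalty
print(sla_penalty(43_200, 10))    # 99.98% availability meets the SLA: 0.0
```

Real SLAs typically express the penalty as a service credit tied to tiers of availability, but the shape of the check is the same.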
Layer 4 (Hardware Resource Layer)
• Usually, in the case of a public cloud, a data center is used in the back end. Similarly,
in a private cloud, it can be a data center, that is, a huge collection of interconnected
hardware resources present in a specific location, or a single high-configuration
system.
• This layer comes under the purview of SLAs. This is the most important layer that
governs the SLAs. This layer affects the SLAs most in the case of data centers.
• Whenever a user accesses the cloud, it should be available to the users as quickly as
possible and should be within the time that is defined by the SLAs. As mentioned, if
there is any discrepancy in provisioning the resources or application, the service provider
has to pay the penalty.
• Hence, the data center requires a high-speed network connection and a highly efficient
algorithm to transfer data from the data center to the manager. There can be a number
of data centers for a cloud, and similarly, a number of clouds can share a data
center.
CLOUD ANATOMY
• Cloud anatomy can be simply defined as the structure of the cloud. Cloud anatomy is
not the same as cloud architecture: anatomy may not include any dependency on which,
or over which, the technology works, whereas architecture wholly defines and describes
the technology over which it is working.
• Architecture is a hierarchical structural view that defines the technology as well as the
technologies on which it depends and/or the technologies that depend on it. Thus,
anatomy can be considered a part of architecture.
There are basically five components of the cloud:
1. Application: The uppermost layer is the application layer, in which applications are
executed.
2. Platform: This component consists of platforms that are responsible for the execution of the
application. This platform is between the infrastructure and the application.
3. Infrastructure: The infrastructure consists of resources over which the other components
work. This provides computational capability to the user.
4. Virtualization: Virtualization is the process of making logical components of resources
over the existing physical resources. The logical components are isolated and independent,
which form the infrastructure.
5. Physical hardware: The physical hardware is provided by server and storage units.
• Performance is the most important aspect of the cloud, because everything in the cloud
depends on the SLAs, and the SLAs can be satisfied only if performance is good.
Fig: SLAs
• Cost is a very important criterion as far as the business prospects of the cloud are
concerned. If service providers incur a low cost in managing the cloud, they can reduce
the price of their services so as to build a strong user base.
• A large user base in turn improves the profit margin. Conversely, if the cost of
resource management is high, the cost of accessing resources will also be high; no
organization runs its business at a loss, so the service provider will not absorb that
cost, and users end up paying more.
• This in turn proves costly for the service provider, who then risks losing a wide user
base, leading to only marginal growth in the industry.
• Competing with industry rivals would also become a big issue. Hence, efficient
management at low cost is required.
• At a higher level, other than performance, functionality, and cost, there are a few more
issues that depend on resource management.
• These are power consumption and the optimization of multiple objectives to further
reduce cost. To accomplish these tasks, several approaches are followed, namely cloud
optimization and the consolidation of server and storage workloads.
• Cloud optimization is the process of correctly selecting and assigning the right
resources to a workload or application, that is, the process of eliminating cloud
resource waste by selecting, provisioning, and right-sizing the resources spent on
specific cloud features.
• Consolidation reduces energy consumption and in some cases increases the performance
of the cloud. According to Margaret Rouse [5], server consolidation is by definition an
approach to the efficient usage of computer server resources in order to reduce the
total number of servers or server locations that an organization requires.
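Server consolidation of this kind is often modeled as bin packing: place workloads on as few fixed-capacity servers as possible. A minimal first-fit-decreasing sketch, where the CPU demands and the capacity of 10 units per server are made-up illustrative values:

```python
# Sketch of server consolidation as bin packing: place workloads on as few
# fixed-capacity servers as possible (first-fit decreasing). CPU demands
# and the capacity of 10 units per server are made-up illustrative values.

def consolidate(demands: list[int], capacity: int = 10) -> list[list[int]]:
    """Assign each workload to the first server that still has room."""
    servers: list[list[int]] = []
    for d in sorted(demands, reverse=True):       # largest workloads first
        for s in servers:
            if sum(s) + d <= capacity:            # fits on an existing server
                s.append(d)
                break
        else:
            servers.append([d])                   # no fit: power on a new one
    return servers

# Eight workloads that would naively occupy eight servers fit on three.
placement = consolidate([5, 4, 3, 3, 2, 2, 1, 1])
print(len(placement))   # 3
```

First-fit decreasing is a heuristic, not optimal in general, but it captures why consolidation cuts the server count and therefore the energy bill.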
(b) Load Balancing:
• The previously discussed prospects are mostly suitable for IaaS; each service type has
its own way of management. Load fluctuation occurs when the workload of the system
changes continuously.
• This is one of the important criteria and issues that should be considered for cloud
applications.
• Load fluctuation can be divided into two types: predictable and unpredictable.
- Predictable load fluctuations are easy to handle; the cloud can be preconfigured to
deal with them.
- Unpredictable load fluctuations are difficult to handle; ironically, the ability to cope
with them is one of the reasons why the cloud is preferred by several users.
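A minimal sketch of how a cloud might react to unpredictable load: a reactive scaling rule that adds a server when utilization is high and removes one when it is low. The thresholds and the per-server capacity here are assumed values, not from any real autoscaler.

```python
# Sketch: a simple reactive scaling rule for unpredictable load. Real cloud
# autoscalers are more sophisticated; thresholds and the per-server
# capacity of 100 requests/s are invented for illustration.

def scale(servers: int, load: float, per_server: float = 100.0,
          high: float = 0.8, low: float = 0.3) -> int:
    """Return the new server count after one monitoring interval."""
    utilization = load / (servers * per_server)
    if utilization > high:                     # overload: add capacity
        return servers + 1
    if utilization < low and servers > 1:      # mostly idle: consolidate
        return servers - 1
    return servers

servers = 2
for load in [150, 190, 260, 300, 90, 40]:      # an unpredictable demand trace
    servers = scale(servers, load)
    print(servers)                             # prints 2, 3, 4, 4, 3, 2
```

The point of the sketch is that capacity follows demand automatically; no preconfiguration of peak load is needed, which is exactly why unpredictable fluctuations favor the cloud.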
• There are several aspects of cloud governance, of which SLAs are among the most
important.
• SLAs are the set of rules that are defined between the user and cloud service provider
that decide upon the QoS factor. If SLAs are not followed, then the defaulter has to pay
the penalty.
• But this shift of applications to the cloud environment brings new complexities.
• Applications become more composite and complex, which requires leveraging not only
capabilities like storage and database offered by the cloud providers but also third-party
SaaS capabilities like e-mail and messaging.
• The composite nature of cloud applications requires visibility into all the services to
determine the overall availability and uptime.
• Cloud application management addresses these issues and proposes solutions that
provide insight into the application running in the cloud, as well as governance and
auditing of the environment while the application is deployed there.
CLOUDFRONT
• Amazon CloudFront is a CDN service that speeds up distribution of your static and
dynamic web content, such as .html, .css, .js, and image files, to your users
• CloudFront delivers your content through a worldwide network of data centers called edge
locations
• When a user requests content that you're serving with CloudFront, the request is routed to
the edge location that provides the lowest latency (time delay), so that content is delivered
with the best possible performance
Content Delivery Network
• A content delivery network (CDN) is a network of interconnected servers that speeds up
webpage loading for data-heavy applications
• When a user visits a website, data from that website's server has to travel across the
internet to reach the user's computer
• If the user is located far from that server, it will take a long time to load a large file,
such as a video or website image
• Instead, the website content is stored on CDN servers geographically closer to the users
and reaches their computers much faster.
• Because of the global and complex nature of the internet, communication traffic between
websites (servers) and their users (clients) has to move over large physical distances
• The communication is also two-way, with requests going from the client to the server and
responses coming back.
• A CDN improves efficiency by introducing intermediary servers between the client and the
website server
• They decrease web traffic to the web server, reduce bandwidth consumption, and
improve the user experience of your applications
Benefits of CDN
• Reduce page load time
• If the content is already in the edge location with the lowest latency, CloudFront
delivers it immediately.
• If the content is not in that edge location, CloudFront retrieves it from an origin
that you've defined—such as an Amazon S3 bucket etc.
• For example, you might serve an image, sunsetphoto.png, using the URL
https://example.com/sunsetphoto.png
• CloudFront speeds up the distribution of your content by routing each user request
through the AWS backbone network to the edge location that can best serve your
content
• You also get increased reliability and availability because copies of your files
(also known as objects) are now held (or cached) in multiple edge locations
around the world.
• Serve video on demand or live streaming video- CloudFront can stream your
media to global viewers—both pre-recorded files and live events
• Encrypt specific fields throughout system processing- you can have additional
level of security (using field level encryption) in addition to HTTPS security
How does CloudFront deliver content?
• After you configure CloudFront to deliver your content, here’s what happens
when users request your objects
i. A user accesses your website or application
ii. DNS routes the request to the CloudFront POP (edge location) that can best
serve the request
iii. If the object is in the cache, CloudFront returns it to the user. If it is not in
the cache, CloudFront takes the following steps:
a. CloudFront forwards the request to your origin server for the corresponding
object—for example, to your Amazon S3 bucket or your HTTP server
b. The origin server sends the object back to the edge location
c. As soon as the data starts arriving, CloudFront begins to forward the object to
the user
d. CloudFront also adds the object to the cache
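The delivery steps above can be sketched as a toy simulation: pick the lowest-latency edge location, serve from its cache on a hit, and fetch from the origin on a miss. The edge names, latencies, and object here are hypothetical.

```python
# Toy simulation of the CloudFront request flow described above. Edge
# names, latencies, and the object are made-up illustrative values.

ORIGIN = {"/sunsetphoto.png": b"<image bytes>"}            # e.g., an S3 bucket
EDGES = {"mumbai": 20, "frankfurt": 110, "virginia": 180}  # latency in ms
cache: dict[str, dict[str, bytes]] = {e: {} for e in EDGES}

def get(path: str) -> tuple[str, str, bytes]:
    """Return (edge used, 'hit' or 'miss', object bytes)."""
    edge = min(EDGES, key=EDGES.get)          # step ii: lowest-latency POP
    if path in cache[edge]:                   # step iii: serve from cache
        return edge, "hit", cache[edge][path]
    obj = ORIGIN[path]                        # steps a-b: forward to origin
    cache[edge][path] = obj                   # step d: add to the edge cache
    return edge, "miss", obj                  # step c: stream to the user

print(get("/sunsetphoto.png"))   # first request: miss, fetched from origin
print(get("/sunsetphoto.png"))   # second request: served from the edge cache
```

The second request never touches the origin, which is the mechanism behind the reduced load time and lower origin bandwidth described earlier.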
Procedure to host a static website on an S3 bucket through the GUI:
1. Create S3 Bucket
2. Give Bucket Name
3. ACL should be disabled
4. Uncheck Block all public access
5. Check the box for I acknowledge that the current settings ….
6. Create Bucket
7. Go to Amazon S3 -> Buckets -> your bucket name
8. Choose Properties -> Edit -> Static website hosting
9. Enable static website hosting
10. Enter index.html for Name of Index Document
11. Save Changes
12. Upload the file index.html
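The console steps above can also be performed programmatically. A hedged boto3 sketch follows; boto3 and valid AWS credentials are assumed to be available, and the bucket name and region are hypothetical. The guarded call at the bottom is left commented out because it would create real resources.

```python
# Sketch of the console steps above using boto3 (assumed installed and
# configured). "my-demo-site-bucket" and the region are hypothetical.

website_config = {
    "IndexDocument": {"Suffix": "index.html"},   # step 10
}

def host_static_site(bucket: str, region: str = "ap-south-1") -> None:
    import boto3
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(                            # steps 1-6: create the bucket
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    s3.put_public_access_block(                  # step 4: unblock public access
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False, "IgnorePublicAcls": False,
            "BlockPublicPolicy": False, "RestrictPublicBuckets": False,
        },
    )
    s3.put_bucket_website(                       # steps 8-11: enable hosting
        Bucket=bucket, WebsiteConfiguration=website_config)
    s3.upload_file("index.html", bucket, "index.html")   # step 12

# host_static_site("my-demo-site-bucket")   # requires valid AWS credentials
```

Running the function is equivalent to clicking through the twelve steps, which makes the procedure repeatable across accounts.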
CLOUD MIGRATION
• Cloud migration encompasses moving one or more enterprise applications and their IT
environments from a traditional hosting model to a cloud environment, either public,
private, or hybrid.
• This activity comprises different strategies, such as the six R's, and phases such as
evaluation, migration strategy, prototyping, and provisioning.
a) Cloud Migration Strategies
- The type of data and applications the enterprise transfers, and the location are shifted to,
significantly impact the migration strategy designed and implemented. There are six
main cloud migration strategies—rehosting (lift and shift), re-platforming, repurchasing,
refactoring, retiring, and retaining(re-visiting).
1. Rehosting (lift-and-shift)
• The most common path is rehosting (or lift-and-shift), which works just as it sounds:
it lifts our application and drops it into the new hosting platform without modifying it.
• It is also a common route for enterprises unfamiliar with cloud computing, which
benefit from the deployment speed without having to spend money or time on planning
for expansion.
• However, by migrating our existing infrastructure as is, we are using the cloud just
like another data center. It pays to make good use of the various cloud services
available, for example, by adding scalable features to our application to improve the
experience for users.
2. Re-platforming:
• Replatforming is the second option. This is where we modify “lift and shift” into
something more complicated but better suited to the new cloud environment.
• Replatforming is a process that optimizes the application during the migration phase.
• You might move from your own database system to a managed DB hosted on a cloud
provider.
• In this type of migration, you stick with similar underlying technology but modify
parts of the application to take better advantage of the new cloud environment.
3. Re-factoring
It means rebuilding our applications from scratch to leverage cloud-native capabilities
that the existing application cannot provide, such as serverless computing or
auto-scaling.
A potential disadvantage is vendor lock-in, since we are re-creating the application on
the provider's cloud infrastructure. It is the most expensive and time-consuming route,
as we may expect, but it is also future-proof for enterprises that wish to benefit from
more advanced cloud features.
4. Re-purchasing:
Sometimes referred to as "drop and shop," this cloud migration strategy comprises a full
switch to another product: replacing our existing applications with a new SaaS-based,
cloud-native platform (for example, replacing a homegrown CRM with Salesforce).
The drawback is losing our team's familiarity with the existing code and having to train
them on a new platform; the benefit is avoiding the cost of development.
This strategy suits an application that does not have modern code or that cannot be
ported from one provider to the next. When transferring to a new product or adopting a
proprietary platform, the "repurchase" strategy is used.
5. Retiring
When we no longer find an application useful, we simply turn it off. The resulting
savings may strengthen the business case for migrating the applications we are ready to
move.
6. Re-visiting
Re-visiting means that all or some of our applications must continue to reside in house,
for example, applications that have unique sensitivity or that handle processes internal
to the enterprise. Don't be afraid to revisit cloud computing at a later date; we should
migrate only what makes sense for the business.
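As a toy illustration, the six strategies above can be framed as a decision helper. The questions and their order below are a simplification invented here, not an official decision tree.

```python
# Toy decision helper for the six R's described above. The attribute names
# and the question order are invented simplifications for illustration.

def choose_strategy(app: dict) -> str:
    if not app.get("still_needed", True):
        return "retire"                       # no longer useful: turn it off
    if app.get("must_stay_on_premises"):
        return "retain"                       # i.e., re-visit at a later date
    if app.get("saas_replacement_exists"):
        return "repurchase"                   # drop and shop
    if app.get("needs_cloud_native_features"):
        return "refactor"                     # rebuild for cloud-native abilities
    if app.get("minor_optimizations_worthwhile"):
        return "replatform"                   # optimize during migration
    return "rehost"                           # default: lift and shift

print(choose_strategy({"still_needed": False}))                # retire
print(choose_strategy({"needs_cloud_native_features": True}))  # refactor
print(choose_strategy({}))                                     # rehost
```

In practice an enterprise weighs cost, risk, and timelines per application rather than following a fixed order, but the sketch shows how the six options partition the portfolio.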
b) Cloud Migration Phases
3. Prototyping: Migration activity is preceded by a prototyping activity to validate and
ensure that a small portion of the applications is tested on the cloud environment with a
test data setup.
4. Provisioning: The pre-migration optimizations identified earlier are implemented.
Cloud servers are provisioned for all the identified environments, the necessary platform
software and applications are deployed, configurations are tuned to match the new
environment sizing, and databases and files are replicated. All internal and external
integration points are properly configured. Web services, batch jobs, and operation and
management software are set up in the new environments.
WHAT IS IAM?
• AWS Identity and Access Management (IAM) is a web service that enables AWS
customers (organizations) to manage their users (employees) and user permissions in
the AWS Management Console.
Why IAM?
• Without IAM, an organization with multiple users must either create multiple AWS
accounts, each with its own billing and subscriptions to AWS products, or share one
account with a single security credential.
• Without IAM, organizations have no control over the tasks that users can perform.
• With IAM, Organization can centrally manage users, security credentials such as access
keys, and permissions that control which AWS resources users can access.
• IAM enables the organization to create multiple users, each with their own security
credentials, all controlled and billed under a single AWS account.
• IAM allows each user to do only what they need to do as part of their job.
• Centralized control: The root user controls the creation and cancellation of each user's
security credentials, and also controls what data in the AWS system users can access
and how they can access it.
• Shared access: Users can share resources for collaborative projects.
• Granular permissions: Permissions can be set so that a user can use one particular
service but not others.
• Free to use: AWS IAM is a feature of an AWS account offered at no additional charge.
You are charged only when other AWS services are accessed by your IAM users.
• Set up a password rotation policy (e.g., passwords expire every 30 days).
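A granular permission of the kind described above is expressed as a JSON policy document. The sketch below builds a read-only S3 policy for a hypothetical bucket; the bucket name is an assumption, and the document follows the standard IAM policy grammar.

```python
# Sketch of a granular permission: an identity-based policy that grants
# read-only access to one hypothetical S3 bucket and nothing else.
import json

read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-demo-site-bucket",     # the bucket (ListBucket)
            "arn:aws:s3:::my-demo-site-bucket/*",   # its objects (GetObject)
        ],
    }],
}

print(json.dumps(read_only_s3_policy, indent=2))
```

Attaching this policy to a user or group lets them list and read that one bucket while every other AWS action remains implicitly denied, which is the "granular permissions" feature in practice.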
AWS Components
• USERS – end users (people)
• GROUPS – collections of users under one set of permissions
• ROLES – roles can be created and assigned to AWS resources
• POLICIES – sets of permissions
(a) USERS
• IAM users are identities created by Root User with
credentials and permissions attached.
• The IAM user represents the person or service who uses the
IAM user to interact with AWS
• The IAM service lets you create a user name for every
employee of your company so they can securely access
AWS services
• The root user can attach each IAM user to a group that has the
permissions needed to perform particular tasks.
(b) Groups
• IAM groups are collections of IAM users.
• IAM groups help specify permissions across multiple users so that
any permissions granted to the group will also be given to the
individual users in the group.
• Managing group users is relatively easy: the root user can create a group,
add users to it, remove them from it, or change its permissions in one
place.
(c) IAM Roles
• An IAM role is a set of permissions that defines what actions are
allowed and denied for an entity in the AWS console.
• Role permissions are granted through temporary credentials.
AWS Identity and Access Management (IAM) is a service that helps you securely control
access to AWS resources. You use IAM to control who is authorized (has permissions) to use
resources.
When you first create an AWS account, you begin with a single identity that has complete
access to all AWS services. This identity is called the AWS account root user.
(a) Procedure to create an IAM user login that accesses an S3 bucket in read-only mode
from an EC2 instance through the GUI:
Step 1: Initially login with Root User to Create IAM Users
Step 4: Under Attach Policy, search for "AdministratorAccess" and click Create user group
Step 7: Adding user to the group and download user credentials for further login
Step 10: Copy the Account ID of the root user and log out
Step 11: Log in as the IAM user, pasting the copied Account ID of the root user
Step 15: Test S3 bucket access through the GUI by typing "aws s3 ls" in the console;
until the IAM role is updated, the bucket cannot be accessed
Step 16: To add a role to the user, go to IAM, select the user, and select Roles in the
left panel of the IAM service
Step 17: Click Create Role and enter a role name
Step 21: Update the EC2 role from Actions -> Security -> Modify IAM role -> select the
S3 role name -> Update
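Steps 16 to 21 can also be scripted. A hedged boto3 sketch follows; boto3 and valid credentials are assumed, and the role name is hypothetical. It creates a role that EC2 instances can assume and attaches the AWS-managed AmazonS3ReadOnlyAccess policy, so that "aws s3 ls" succeeds from the instance without any stored keys.

```python
# Sketch of steps 16-21 with boto3 (assumed available): create a role that
# EC2 can assume, attach the AWS-managed S3 read-only policy, and wrap it
# in an instance profile. The role name is hypothetical.
import json

EC2_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},  # EC2 may assume the role
        "Action": "sts:AssumeRole",
    }],
}

def create_s3_readonly_role(role_name: str = "s3-readonly-role") -> None:
    import boto3
    iam = boto3.client("iam")
    iam.create_role(                        # step 17: create the role
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(EC2_TRUST_POLICY),
    )
    iam.attach_role_policy(                 # grant temporary S3 read access
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )
    iam.create_instance_profile(InstanceProfileName=role_name)
    iam.add_role_to_instance_profile(       # step 21: attachable to EC2
        InstanceProfileName=role_name, RoleName=role_name)

# create_s3_readonly_role()   # requires valid AWS credentials
```

Because the instance obtains temporary credentials through the role, no long-term access keys need to be copied onto the EC2 instance.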