Unit 3 Notes

The document provides an overview of cloud computing architecture, management, and the anatomy of the cloud, detailing the four layers of cloud architecture: User/Client, Network, Cloud Management, and Hardware Resource layers. It discusses the importance of managing cloud infrastructure and applications to ensure quality of service (QoS) and outlines key components such as resource management, load balancing, and cloud governance. Additionally, it introduces Amazon CloudFront as a content delivery network (CDN) service that enhances the distribution of web content globally.


DIGITAL NOTES

ON
CLOUD COMPUTING
III B. TECH – II SEM

Prepared by

Mr. M. Hari Prasad M.Tech


Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Sreyas Institute of Engineering and Technology
An Autonomous Institution
Approved by AICTE, Affiliated to JNTUH, Accredited by NAAC-A Grade, NBA (CSE, ECE & ME) & ISO 9001:2015
Certified
Bandlaguda, Nagole, Hyderabad, Telangana 500068
UNIT III
Cloud computing Architecture and Management: Cloud Architecture, Layer, Anatomy of the
cloud, Managing the cloud and managing the cloud infrastructure using AWS cloud Front,
Managing the cloud application, Managing Identity and Access (IAM), Migrating Application to
cloud, Phases of cloud migration, Approaches for Cloud Migration.

INTRODUCTION
 As we know, cloud computing technology is used by both small and large organizations
to store information in the cloud and access it from anywhere, at any time, using an
internet connection.
 There are several processes and components of cloud computing that need to be
discussed. One of the topics of such prime importance is architecture.
 Architecture is the hierarchical view of components over which the existing technology is
built and the components that are dependent on the technology. Another topic that is
related to architecture is anatomy. Anatomy describes the core structure of the cloud.

CLOUD ARCHITECTURE

• Any technological model consists of an architecture based on which the model functions,
and has a hierarchical view of describing the technology.

• The cloud also has an architecture that describes its working mechanism. It includes the
dependencies on which it works and the components that work over it.

• The cloud is a recent technology that is completely dependent on the Internet for its
functioning

• Architecture can be divided into four layers based on the access of the cloud by the user.
They are as follows.

Layer 1 (User/Client Layer)

• This layer is the lowest layer in the cloud architecture. All the users or clients belong to this
layer.

• This is the place where the client/user initiates the connection using a thick client or a thin
client, i.e., a mobile or any handheld device that supports the basic functionality needed to
access a web application.

• The thin client here refers to a device that is completely dependent on some other system
for its complete functionality. In simple terms, thin clients have very low processing
capability. Everyday examples of thin clients include web-based services such as Yahoo
Messenger, Office 365, and Microsoft Outlook.

• Similarly, thick clients are general computers that have adequate processing capability
and sufficient capability for independent work. Everyday examples of thick clients
include desktop PCs or laptops running Windows or macOS.

• Usually, a cloud application can be accessed in the same way as a web application. But
internally, the properties of cloud applications are significantly different. Thus, this layer
consists of client devices.
Layer 2 (Network Layer)

• This layer allows the users to connect to the cloud.

• The whole cloud infrastructure is dependent on this connection where the services are
offered to the customers.

• The public cloud usually exists in a specific location and the user would not know the
location as it is abstract. And, the public cloud can be accessed all over the world. In the
case of a private cloud, the connectivity may be provided by a local area network (LAN).

• Even in this case, the cloud completely depends on the network that is used. Usually, when
accessing the public or private cloud, the users require minimum bandwidth, which is
sometimes defined by the cloud providers.

• This layer does not come under the purview of service-level agreements (SLAs), that is,
SLAs do not take into account the Internet connection between the user and cloud for
quality of service (QoS).
Layer 3 (Cloud Management Layer)

• This layer consists of software that is used in managing the cloud. The software can be
a cloud operating system (OS), software that acts as an interface between the data
center and the user, or management software that allows managing resources.

• These software tools usually allow:

- Resource management (e.g., scheduling, provisioning)
- Optimization (e.g., server consolidation, storage workload consolidation)
- Internal cloud governance

• This layer comes under the purview of SLAs and is also called the cloud infrastructure;
that is, the operations taking place in this layer affect the SLAs being decided upon
between the users and the service providers.

• Any delay in processing or any discrepancy in service provisioning may lead to an SLA
violation. As per the rules, any SLA violation results in a penalty to be paid by the
service provider. These SLAs apply to both private and public clouds. Popular service
providers for the public cloud are Amazon Web Services (AWS) and Microsoft Azure.
Similarly, OpenStack and Eucalyptus allow private cloud creation, deployment, and
management.
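The penalty logic described above can be sketched as a small check. This is an illustrative toy only; the uptime tiers and service-credit percentages below are hypothetical, and real SLAs vary by provider:

```python
# Illustrative sketch (not from AWS/Azure documentation): compare measured
# availability against an SLA target and return the penalty the provider owes.

def sla_credit(measured_uptime_pct: float) -> float:
    """Return the service-credit percentage owed by the provider."""
    # Tiered penalties, loosely modeled on typical public-cloud SLAs.
    if measured_uptime_pct >= 99.9:
        return 0.0      # SLA met, no penalty
    if measured_uptime_pct >= 99.0:
        return 10.0     # minor violation
    return 25.0         # major violation

print(sla_credit(99.95))  # SLA met
print(sla_credit(98.5))   # violation: the provider pays a credit
```

The point of the sketch is only that the management layer must continuously measure service levels, because any discrepancy here translates directly into a financial penalty.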
Layer 4 (Hardware Resource Layer)

• Layer 4 consists of provisions for actual hardware resources.

• Usually, in the case of a public cloud, a data center is used in the back end. Similarly, in
a private cloud, it can be a data center, which is a huge collection of hardware resources
interconnected to each other and present at a specific location, or a high-configuration
system.

• This layer comes under the purview of SLAs. This is the most important layer that
governs the SLAs. This layer affects the SLAs most in the case of data centers.

• Whenever a user accesses the cloud, it should be available to the users as quickly as
possible and should be within the time that is defined by the SLAs. As mentioned, if
there is any discrepancy in provisioning the resources or application, the service provider
has to pay the penalty.

• Hence, the data center consists of a high-speed network connection and a highly efficient
algorithm to transfer the data from the data center to the manager. There can be a
number of data centers for a cloud, and similarly, a number of clouds can share a data
center.

ANATOMY OF THE CLOUD

• Cloud anatomy can be simply defined as the structure of the cloud. Cloud anatomy is not
the same as cloud architecture: it may not include any dependency on which or over which
the technology works, whereas architecture wholly defines and describes the technology
on which it works.

• Architecture is a hierarchical structural view that defines the technology as well as the
technologies on which it depends and/or the technologies that depend on it. Thus,
anatomy can be considered a part of the architecture.
There are basically five components of the cloud:

1. Application: The upper layer is the application layer. Applications are executed in this
layer.
2. Platform: This component consists of platforms that are responsible for the execution of the
application. This platform is between the infrastructure and the application.
3. Infrastructure: The infrastructure consists of resources over which the other components
work. This provides computational capability to the user.
4. Virtualization: Virtualization is the process of making logical components of resources
over the existing physical resources. The logical components are isolated and independent,
which form the infrastructure.
5. Physical hardware: The physical hardware is provided by server and storage units.

MANAGING THE CLOUD


Cloud management is aimed at efficiently managing the cloud so as to maintain the QoS. It
is one of the prime jobs to be considered. The whole cloud is dependent on the way it is
managed. Cloud management can be divided into two parts:
1. Managing the infrastructure of the cloud
2. Managing the cloud application

(1) Managing the Cloud Infrastructure


The infrastructure of the cloud is considered to be the backbone of the cloud.
Infrastructure management mainly consists of resource management, load balancing, and
internal cloud governance. This component is mainly responsible for the QoS factor. If the
infrastructure is not properly managed, the whole cloud can fail and QoS would be
adversely affected. The core of cloud management is resource management.
(a) Resource management:

• A cloud infrastructure is a very complex system that consists of a lot of resources. It
involves several internal tasks such as resource scheduling, provisioning, and load
balancing. These tasks are mainly managed by the cloud service provider's software, such
as the cloud OS, which is responsible for providing services to the cloud and internally
controls it.

• Poor resource management may lead to several inefficiencies in terms of performance,
functionality, and cost.

• Performance is the most important aspect of the cloud, because everything in the cloud is
dependent on the SLAs and the SLAs can be satisfied only if performance is good.

• Functionality of the cloud should always be provided and considered; even a small
discrepancy in providing the functionality defeats the whole purpose of maintaining the
cloud. A partially functional cloud would not satisfy the SLAs.

Fig: SLAs

• Cost is a very important criterion as far as the business prospects of the cloud are
concerned. On the part of the service providers, if they incur less cost for managing the
cloud, then they can reduce the price so as to build a strong user base.

• Hence, many users would use the services, improving the provider's profit margin.
Conversely, if the cost of resource management is high, then the cost of accessing the
resources would also be high; no organization runs a loss-making business, so the service
provider would not absorb the cost, and the users would have to pay more.

• This would also prove costly for service providers, as they would have a high chance of
losing a wide user base, leading to only marginal growth in the industry.

• And, competing with its industry rivals would become a big issue. Hence, efficient
management with less cost is required.

• At a higher level, besides performance, functionality, and cost issues, there are a few
more issues that depend on resource management.

• These are power consumption and optimization of multiple objectives to further reduce
the cost. To accomplish these tasks, there are several approaches followed, namely,
Cloud optimization and consolidation of server and storage workloads.

• Cloud optimization is the process of correctly selecting and assigning the right resources
to a workload or application, that is, the process of eliminating cloud resource waste by
selecting, provisioning, and right-sizing the resources spent on specific cloud
features.

• Consolidation would reduce the energy consumption and in some cases would increase
the performance of the cloud. According to Margaret Rouse [5], server consolidation by
definition is an approach to the efficient usage of computer server resources in order to
reduce the total number of servers or server locations that an organization requires.
(b) Load Balancing :

• The previously discussed prospects are mostly suitable for IaaS; each service type has its
own way of management. Load fluctuation refers to the workload of the system
changing continuously.

• This is one of the important criteria and issues that should be considered for cloud
applications.

• Load fluctuation can be divided into two types: predictable and unpredictable.

- Predictable load fluctuations are easy to handle. The cloud can be preconfigured for
handling such kind of fluctuations.

- Unpredictable load fluctuations are difficult to handle; ironically, the ability to handle
them is one of the reasons why the cloud is preferred by several users.
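The idea of spreading a fluctuating workload across servers can be sketched with a simple round-robin dispatcher. This is an illustrative toy with made-up server names, not how a cloud's management layer is actually implemented:

```python
# Minimal round-robin load distribution sketch: each incoming request is sent
# to the next server in rotation, spreading the load evenly.
from itertools import cycle

servers = ["server-1", "server-2", "server-3"]  # hypothetical server pool
rr = cycle(servers)

def route(request_id: int) -> str:
    """Return the server that should handle this request."""
    return next(rr)

assignments = [route(i) for i in range(6)]
print(assignments)
```

Real cloud load balancers add health checks, weighting, and autoscaling on top of this basic rotation, which is what makes unpredictable fluctuations manageable.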

(c) Cloud governance :

• It is another topic that is closely related to cloud management.

• Cloud governance is different from cloud management. Governance, in the corporate
world, generally involves the process of creating value for an organization by setting
strategic objectives that will lead to the growth of the company while maintaining a
certain level of control over it. Cloud governance plays a similar role for the cloud.

• There are several aspects of cloud governance out of which SLAs are one of the
important aspects.

• SLAs are the set of rules that are defined between the user and cloud service provider
that decide upon the QoS factor. If SLAs are not followed, then the defaulter has to pay
the penalty.

• The whole cloud is governed by keeping these SLAs in mind

(2) Managing the Cloud Application

• Businesses are increasingly looking to move or build their corporate applications on
cloud platforms to improve agility and to meet the dynamic requirements that come with
the globalization of business and responsiveness to market demands.

• But this shift of applications to the cloud environment brings new complexities.

• Applications become more composite and complex, which requires leveraging not only
capabilities like storage and database offered by the cloud providers but also third-party
SaaS capabilities like e-mail and messaging.

• So, understanding the availability of an application requires inspecting the
infrastructure, the services it consumes, and the upkeep of the application.

• The composite nature of cloud applications requires visibility into all the services to
determine the overall availability and uptime.

• Cloud application management addresses these issues and proposes solutions that make it
possible to have insight into the application that runs in the cloud, as well as governance
and auditing of the environment while the application is deployed in the cloud.

• These cloud-based monitoring and management services can collect a multitude of
events, analyze them, and identify critical information that requires additional remedial
actions, like adjusting capacity or provisioning new services.

• Additionally, application management has to be supported with the tools and processes
required for managing other environments that might coexist, enabling efficient
operations.

CLOUDFRONT
• Amazon CloudFront is a CDN service that speeds up distribution of your static and
dynamic web content, such as .html, .css, .js, and image files, to your users.
• CloudFront delivers your content through a worldwide network of data centers called edge
locations.
• When a user requests content that you're serving with CloudFront, the request is routed to
the edge location that provides the lowest latency (time delay), so that content is delivered
with the best possible performance.
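Routing to the lowest-latency edge can be illustrated with a toy selection function. The edge names and latency figures below are made up; real CDNs decide this via DNS and network measurements, not a lookup table:

```python
# Toy sketch: given measured latencies from a user to several edge locations,
# pick the edge with the minimum latency to serve the request.

edge_latency_ms = {          # hypothetical measurements in milliseconds
    "Mumbai": 18,
    "Singapore": 55,
    "Frankfurt": 130,
}

def best_edge(latencies: dict) -> str:
    """Return the edge location with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(best_edge(edge_latency_ms))  # the request would be routed here
```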
Content Delivery Network
• A content delivery network (CDN) is a network of interconnected servers that speeds up
webpage loading for data-heavy applications

• When a user visits a website, data from that website's server has to travel across the
internet to reach the user's computer

• If the user is located far from that server, it will take a long time to load a large file,
such as a video or website image

• Instead, the website content is stored on CDN servers geographically closer to the users
and reaches their computers much faster.

Why is CDN important?


• The primary purpose of a content delivery network (CDN) is to reduce latency, or reduce
the delay in communication created by a network's design

• Because of the global and complex nature of the internet, communication traffic between
websites (servers) and their users (clients) has to move over large physical distances

• The communication is also two-way, with requests going from the client to the server and
responses coming back.

• A CDN improves efficiency by introducing intermediary servers between the client and the
website server

• These CDN servers manage some of the client-server communications

• They decrease web traffic to the web server, reduce bandwidth consumption, and
improve the user experience of your applications

Benefits of CDN
• Reduce page load time

• Reduce bandwidth costs

• Increase content availability

• Improve website security

What is CloudFront?

• If the content is already in the edge location with the lowest latency, CloudFront
delivers it immediately.

• If the content is not in that edge location, CloudFront retrieves it from an origin
that you've defined—such as an Amazon S3 bucket etc.

• For example, you might serve an image, sunsetphoto.png, using the URL
https://example.com/sunsetphoto.png

• CloudFront speeds up the distribution of your content by routing each user request
through the AWS backbone network to the edge location that can best serve your
content

• You also get increased reliability and availability because copies of your files
(also known as objects) are now held (or cached) in multiple edge locations
around the world.

Cloud Front Use Cases


• Accelerate static website content delivery- CloudFront can speed up the delivery
of your static content (for example, images, style sheets, JavaScript, and so on) to
viewers across the globe

• Serve video on demand or live streaming video- CloudFront can stream your
media to global viewers—both pre-recorded files and live events

• Encrypt specific fields throughout system processing – you can add an additional
level of security (using field-level encryption) on top of HTTPS security

• Customize at the edge – error messages can be customized, e.g., when the server is
down

• Serve private content by using Lambda@Edge customizations – allows various
customizations and lets you serve your content privately

How does CloudFront deliver content?
• After you configure CloudFront to deliver your content, here’s what happens
when users request your objects
i. A user accesses your website or application

ii. DNS routes the request to the CloudFront POP (edge location) that can best
serve the request

iii. If the object is in the cache, CloudFront returns it to the user. If it is not available
in the cache, CloudFront takes the following steps:
a. CloudFront forwards the request to your origin server for the corresponding
object—for example, to your Amazon S3 bucket or your HTTP server
b. The origin server sends the object back to the edge location
c. As soon as the data starts arriving, CloudFront begins to forward the object to
the user
d. CloudFront also adds the object to the cache
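The cache-hit/cache-miss flow above can be simulated in a few lines. The object name and contents are illustrative, and real CloudFront behavior (TTLs, invalidation, partial delivery) is far richer than this sketch:

```python
# Simulation of the request flow above: an edge cache serves an object directly
# on a hit, and fetches it from the origin (e.g., an S3 bucket) on a miss.

origin = {"sunsetphoto.png": b"<image bytes>"}   # stands in for the S3 origin
edge_cache = {}                                  # stands in for one POP's cache

def get_object(key: str) -> bytes:
    if key in edge_cache:             # step iii: object in cache -> return it
        return edge_cache[key]
    obj = origin[key]                 # steps a-b: forward to origin, get object
    edge_cache[key] = obj             # step d: add the object to the cache
    return obj                        # step c: forward the object to the user

get_object("sunsetphoto.png")         # first request: miss, fetched from origin
get_object("sunsetphoto.png")         # second request: hit, served from the edge
print(len(edge_cache))
```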

CloudFront Distribution demo steps

1. Create Bucket with Website Enabled


2. Upload the sample files
3. Test Website is working
4. Create CloudFront Distribution for the buckets
5. Test the distribution
6. Disable and Delete the Distribution

Step by Step procedure:


1. Create S3 Bucket with Static Website Enabled

1. Create S3 Bucket
2. Give Bucket Name
3. ACL should be disabled
4. Uncheck Block all public access
5. Check the box for I acknowledge that the current settings ….

6. Create Bucket
7. Amazon S3 Buckets bucket name (give your bucket name here)
8. Choose Properties Edit Static website hosting
9. Enable static website hosting
10. Enter index.html for Name of Index Document
11. Save Changes
12. Upload the Files index.html

13. Copy the URL

14. Test the URL
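The console steps above can also be expressed as request payloads. This sketch only builds the dictionaries that boto3's `put_bucket_website` and `put_bucket_policy` calls would accept; it does not call AWS, and the bucket name is a placeholder:

```python
# Sketch of the static-website configuration built by the console steps above.
# BUCKET is a placeholder name; nothing here actually talks to AWS.
import json

BUCKET = "my-demo-bucket"  # placeholder bucket name

website_config = {
    "IndexDocument": {"Suffix": "index.html"},   # step 10: index document
}

public_read_policy = {                           # steps 4-5: allow public reads
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(public_read_policy, indent=2))
```

With credentials configured, these dictionaries would be passed as the `WebsiteConfiguration` and `Policy` arguments of the corresponding boto3 S3 client calls.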

Speed up the website with CloudFront

 Open the CloudFront console
 Create a CloudFront distribution
 Enter the bucket name
 Check that the endpoint is a website
 Create the distribution
 Once the distribution is created successfully, copy the distribution domain name and paste
it in the browser
 Verify the output

MIGRATING APPLICATION TO CLOUD

• Cloud migration encompasses moving one or more enterprise applications and their IT
environments from the traditional hosting type to the cloud environment, either public,
private, or hybrid.

• Cloud migration presents an opportunity to significantly reduce the costs incurred on
applications.

• This activity comprises different strategies (the 6 R's) and phases such as evaluation,
migration strategy, prototyping, and provisioning.

a) Cloud Migration Strategies

- The type of data and applications the enterprise transfers, and the location they are shifted
to, significantly impact the migration strategy designed and implemented. There are six
main cloud migration strategies: rehosting (lift and shift), re-platforming, repurchasing,
refactoring, retiring, and retaining (re-visiting).

1. Rehosting (lift-and-shift)

• The most common path is rehosting (or lift-and-shift), which works the way it sounds:

• It lifts our application and drops it into the new hosting platform without changing
the architecture or code of the app.

• It is also a common route for enterprises unfamiliar with cloud computing, who benefit
from the deployment speed without having to spend money or time on planning for
expansion.

• However, by migrating our existing infrastructure as-is, we are using the cloud just like
another data center, and we postpone making good use of the various cloud services
available.

• For example, adding scalable features to our application to improve the experience for
a growing segment of users.

2. Re-platforming:

• Re-platforming is the second option. This is where we modify "lift and shift" into
something more involved but better suited to the new cloud environment.

• Re-platforming optimizes the application during the migration phase.
This requires some programming knowledge and input.

• For example, you might move from your own database system to a managed database
hosted by a cloud provider.

• In this type of migration, you stick with similar underlying technology but modify the
business model, gaining cloud resilience as a huge bonus.



3. Re-factoring
 It means rebuilding our applications from scratch to leverage cloud-native capabilities,
such as serverless computing or auto-scaling, that the existing application could not use.
 A potential disadvantage is vendor lock-in, as we are re-creating the application on the
cloud infrastructure. As we may expect, it is the most expensive and time-consuming route.
 But it is also future-proof for enterprises that wish to take advantage of more advanced
cloud features.
4. Re-purchasing:
 Sometimes referred to as "drop and shop," this cloud migration strategy comprises a full
switch to another product. It means replacing our existing application with a new
SaaS-based, cloud-native platform (for example, replacing a homegrown CRM with Salesforce).
 The complexity is that our team loses its familiarity with the existing code and must be
trained on a new platform. However, the benefit is avoiding the cost of development.
 A candidate application may be one that does not have modern code or one that cannot be
transported from one provider to the next. When transferring to a new product or using a
proprietary platform, the "repurchase" strategy is used.

5. Retiring
 When we find an application no longer useful, we simply turn it off. The resulting
savings may boost the business case for migrating the applications we do keep.

6. Re-visiting
Re-visiting recognizes that all or some of our applications may have to remain in-house, for
example, applications that have unique sensitivity or handle processes internal to the
enterprise. Don't be afraid to revisit cloud computing at a later date; we should migrate only
what makes sense for the business.

(b) Process and Phases of Cloud Migration


There are various ways to go about a cloud migration based on the type of strategy you choose
or the size of your organization.
1. Evaluation: Evaluation is carried out for all the components: the current infrastructure
and application architecture; the environment in terms of compute, storage, monitoring, and
management; SLAs; operational processes; financial considerations; risk, security, and
compliance; and licensing needs. These are identified to build a business case for moving to
the cloud.
2. Migration strategy: Based on the evaluation, a migration strategy is drawn up. A hot plug
strategy is used where the applications and their data and interface dependencies are isolated
and these applications can be operationalized all at once. A fusion strategy is used where the
applications can be partially migrated, but for a portion of them there are dependencies based
on existing licenses, specialized server requirements like mainframes, or extensive
interconnections with other applications.

3. Prototyping: Migration activity is preceded by a prototyping activity to validate and
ensure that a small portion of the applications are tested on the cloud environment with test
data setup.
4. Provisioning: Premigration optimizations identified are implemented. Cloud servers are
provisioned for all the identified environments, necessary platform softwares and
applications are deployed, configurations are tuned to match the new environment sizing,
and databases and files are replicated. All internal and external integration points are
properly configured. Web services, batch jobs, and operation and management software are
set up in the new environments.

(3) Benefits of cloud migration

• Flexibility: No organization experiences the same level of demand from the same number
of users all the time. If our apps face fluctuations in traffic, cloud infrastructure permits us
to scale up and down to meet the demand; hence, we pay for only those resources we
require.
• Scalability: Workloads such as databases and analytics escalate as the organization grows.
The cloud provides the ability to enhance existing infrastructure, so applications have room
to grow without impacting performance.
• Agility: Part of development is remaining elastic enough to respond to rapid changes in
technology resources. Cloud adoption offers this by drastically decreasing the time it takes
to procure new storage and computing capacity.
• Productivity: Our cloud provider handles the complexities of our infrastructure so we can
concentrate on productivity. Furthermore, the remote accessibility and simplicity of most
cloud solutions mean that our team can concentrate on what matters, such as growing our
business.
• Security: The cloud provides better security than many traditional data centers by storing
data centrally. Also, most cloud providers offer built-in features including cross-enterprise
visibility, periodic updates, and security analytics.
• Profitability: The cloud follows a pay-per-use model. There is no need to pay extra charges
or to invest continually in buying, maintaining, updating, and training staff on physical
servers.

WHAT IS IAM?
• AWS Identity and Access Management (IAM) is a web service that enables AWS customers
(organizations) to manage their users (employees) and user permissions in the AWS
Management Console.

Why IAM?

• Without IAM, an organization with multiple users must either create multiple AWS
accounts, each with its own billing and subscriptions to AWS products, or share one
account with a single security credential.

• Without IAM, organizations have no control over the tasks that users can perform.

• With IAM, an organization can centrally manage users, security credentials such as access
keys, and permissions that control which AWS resources users can access.

• IAM enables the organization to create multiple users, each with their own security
credentials, all controlled and billed to a single AWS account.

• IAM allows each user to do only what they need to do as part of their job.

AWS IAM Features

• Centralized control: The root user can control the creation and cancellation of each user's
security credentials. The root user can also control what data in the AWS system users can
access and how they can access it.

• Shared access: Users can share resources for collaborative projects.
• Granular permissions: Permissions can be set so that a user can use a particular service
but not other services.

• Free to use: AWS IAM is a feature of an AWS account offered at no additional charge.
You are charged only when you access other AWS services as an IAM user.

• Multifactor authentication: AWS provides multifactor authentication; users enter a
username, password, and security code to log in to the AWS Management Console.
Advantages of IAM
• Centralized control of your AWS Account

• Shared access to your AWS Account

• Granular permissions

• Identity federation (users can log in using LinkedIn, Facebook, etc.)

• Multifactor Authentication ( password and OTP )

• Set up a password rotation policy (e.g., passwords expire every 30 days)

AWS Components
• USERS – end users (people)
• GROUPS – collections of users under one set of permissions
• ROLES – roles can be created and assigned to AWS resources
• POLICIES – sets of permissions
(a) USERS
• IAM users are identities created by the root user, with credentials and permissions
attached.
• An IAM user represents the person or service that uses the IAM user to interact with
AWS.
• The IAM service lets you create a user name for every employee of your company so
that they can securely access AWS services.
• The root user can attach each IAM user to a group that has the permissions needed to
perform particular tasks.

(b) Groups
• IAM groups are collections of IAM users.
• IAM groups help specify permissions across multiple users, so that any permissions
granted to the group are also given to the individual users in the group.
• Managing users via groups is relatively easy: the root user can create a group, add users
to it, remove them from it, or change permissions in one place.

(c) IAM ROLES
• An IAM role is a set of permissions that define what actions are allowed and denied for
an entity in the AWS console.
• Role permissions are temporary credentials.

(d) IAM Policies

• A policy is an object in AWS that, when associated with an identity or resource, defines
their permissions. The root user manages access in AWS by creating policies and
attaching them to IAM identities (users, groups of users, or roles) or AWS resources.

• Permissions in the policies determine whether a request is allowed or denied.

• Most policies are stored in AWS as JSON documents.
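As an illustration of the JSON document format, here is a hypothetical policy granting read-only access to S3. The action list and resource scope are examples written for this sketch, not an official AWS managed policy:

```python
# Hypothetical IAM policy document granting read-only S3 access, similar in
# intent to the read-only role attached to EC2 in the demo that follows.
import json

s3_read_only_policy = {
    "Version": "2012-10-17",         # standard IAM policy language version
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:Get*", "s3:List*"],   # read-only S3 actions
        "Resource": "*",
    }],
}

print(json.dumps(s3_read_only_policy, indent=2))
```

Because there is no statement allowing `s3:PutObject` or `s3:DeleteObject`, and IAM denies by default, any write request under this policy is refused.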

Use Case 1: Role-Based Access

It is related to administration.

AWS Identity and Access Management (IAM) is a service that helps you securely control
access to AWS resources. You use IAM to control who is authorized (has permissions) to use
resources.

When you first create an AWS account, you begin with a single identity that has complete
access to all AWS services. This identity is called the AWS account root user.

(a) Procedure to create an IAM user login and access an S3 bucket in read-only mode
from an EC2 instance through the GUI:
Step 1: Initially login with Root User to Create IAM Users

Step 2: Selecting IAM from AWS services

Step 3: Create a user group and enter the group name

Step 4: Attach a policy: search for "AdministratorAccess" and click Create user group

Step 5: Add users (left-side pane of IAM services)

Step 6: Set user details

Step 7: Add the user to the group and download the user credentials for later login

Step 8: Review & create user

Step 9: User created and added to the group

Step 10: Copy the Account ID of root user and log out

Step 11: Log in as the IAM user, pasting the copied Account ID of the root user

Step 12: Enter the credentials of the IAM user

Step 13: Launch an EC2 instance

Step 14: Connect EC2 instance

Step 15: Test S3 bucket access by typing "aws s3 ls" in the console; without an IAM role,
the instance is unable to access S3

Step 16: To add a role, go to IAM and select Roles in the left panel of IAM services

Step 17: Click on Create role and enter the role name

Step 18: Add permissions: S3 read-only

Step 19: S3 role successfully created with read-only permission

Step 20: EC2 S3 role-based access is done

Step 21: Update the EC2 role from Actions → Security → Modify IAM role → select the S3
role name → Update

Step 22: The S3 files are now shown read-only in the GUI console


Step 1: Initially login with Root User to Create IAM Users

Step 2: Selecting IAM from AWS services
